Smarter, more comprehensive visual computing tools that extend far beyond product creation




Jul 21, 2022

From: Alex Herrera



Herrera on Hardware: developing, maintaining and refining products, buildings and operations with digital twins.

Is it possible that after all these years of evolution in computer-aided design and simulation, we have only scratched the surface of visual computing’s potential? Stepping back and thinking about the typical projects that rely on CAD – buildings, physical infrastructure, and personal and industrial products of all kinds – our attention has been mostly or exclusively focused on the beginnings of these projects. We sketch, draw, simulate, visualize and refine iteratively until we have a virtual model ready for physical implementation. Ideally, though unfortunately not always, the virtual can then be converted to physical in an automated workflow with minimal manual rework. And then we’re almost done. Of course, the models will often be leveraged for later product enhancements or even debugging, but essentially the project is archived and we move on to the next design.

But while the work of the designer and creator is done, the real life of the product has only just begun. Chances are its operational lifespan will last far longer than its design and development did. So why not leverage all that design knowledge to optimize the physical life of the product as much as its virtual creation? That is the premise behind one of the emerging uses of the digital twin and the metaverse, a concept introduced in previous columns and now embraced by some of the biggest names in the automotive, architecture and construction industries.

NVIDIA promotes Omniverse as a key enabler for digital twin applications. Image source: NVIDIA.

From inception to the end of its physical life: the digital twin as an end-to-end computing environment

At first glance, we might think of the metaverse and a digital twin as simply a more physically accurate and complete environment for traditional CAD purposes: an environment that models the world around a virtual product in as much detail as the product itself. And, yes, the fuller scope of this environment can yield feedback and clues in physics simulation, operations and lighting that might otherwise be missed with a product-only model surrounded by a minimally realistic world.

Think about more conventional (though conventional should not be construed as trivial) lighting simulation, such as rendering complex outdoor scenes like the tree-lined street below. Factor in the many iterations of a dawn-to-dusk simulation, and the computational load becomes huge, yet well suited to the cloud-hosted metaverse.
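To get a feel for why such a sweep favors the cloud, a rough back-of-envelope estimate helps. Every figure below is an illustrative assumption, not a measured number:

```python
# Back-of-envelope sizing of a dawn-to-dusk lighting sweep.
# All numbers here are illustrative assumptions, not measured figures.
frames_per_sweep = 12 * 60      # one frame per minute, dawn to dusk
design_iterations = 50          # lighting/design variants to compare
minutes_per_frame = 5.0         # assumed path-traced render time on one GPU

total_gpu_minutes = frames_per_sweep * design_iterations * minutes_per_frame
print(f"One GPU: ~{total_gpu_minutes / 60 / 24:.0f} days of rendering")

# Each frame renders independently of the others, so the job
# parallelizes almost perfectly across a fleet of cloud GPUs.
cloud_gpus = 500
print(f"{cloud_gpus} cloud GPUs: ~{total_gpu_minutes / cloud_gpus / 60:.1f} hours")
```

Under these assumptions, a single workstation GPU faces months of rendering, while a fleet of cloud GPUs finishes overnight. That embarrassingly parallel shape is exactly the workload a cloud-hosted metaverse is built for.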


Physics-based simulation of outdoor lighting across complex landscapes. Image source: NVIDIA.

On an even larger scale, consider Siemens’ use of Omniverse in the design and layout of an offshore wind farm. Within Omniverse, Siemens ran thousands of CFD simulations to determine the optimal placement and orientation of each turbine, maximizing the farm’s energy output.


Siemens wind farm design showcases the power of design in the metaverse. Image source: Siemens.

The wind farm project highlights the unique power of a cloud-hosted, GPU-powered metaverse versus traditional client-side workstation computing. Where a single full-accuracy simulation iteration would bog down the fastest multi-core processor not just for hours but for days, simulations in Omniverse not only offer the accuracy of the full natural wind and weather environment, but can exploit Omniverse’s built-in machine learning components, GPU compute acceleration and the massively scalable performance available in the cloud.

Machine learning, for example, allows each simulation iteration to run at a coarse-grained scale and then intelligently upscale to fine-grained detail. And a sea of cloud GPUs can run orders of magnitude faster than a single client processor. The combination allowed Siemens to turn multi-day iterations into minutes, in turn enabling many more iterations and more optimal, precise placement of the turbine array. In fact, Siemens believes the Omniverse-based digital twin of the farm is so much better optimized and more accurate that the wind farm will supply power to 20,000 more homes than would otherwise be possible.
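As a rough sketch of how such a coarse-to-fine search might be structured, consider the toy loop below. The function names and the wake model are purely hypothetical stand-ins; in Siemens’ actual workflow, the surrogate would be a neural network trained against full-fidelity CFD results:

```python
# Minimal sketch of a coarse-to-fine layout search (hypothetical API names).
import random

def coarse_cfd(layout):
    """Cheap low-resolution wake estimate; returns farm output in MW.
    Placeholder model: turbines sitting close together lose output to wakes."""
    output = 0.0
    for i, (xi, yi) in enumerate(layout):
        wake_penalty = sum(1.0 / (1.0 + (xi - xj)**2 + (yi - yj)**2)
                           for j, (xj, yj) in enumerate(layout) if j != i)
        output += max(0.0, 8.0 - wake_penalty)   # ~8 MW turbine, derated by wakes
    return output

def surrogate_refine(layout, coarse_estimate):
    """Stand-in for an ML model that corrects a coarse result toward
    fine-grained CFD accuracy in milliseconds instead of days."""
    return coarse_estimate * 1.02   # placeholder correction factor

# Evaluate thousands of candidate layouts; only the cheap coarse pass runs,
# with the surrogate supplying fine-grained fidelity at negligible cost.
best_layout, best_output = None, float("-inf")
for _ in range(10_000):
    candidate = [(random.uniform(0, 50), random.uniform(0, 50)) for _ in range(16)]
    estimate = surrogate_refine(candidate, coarse_cfd(candidate))
    if estimate > best_output:
        best_layout, best_output = candidate, estimate

print(f"Best estimated farm output: {best_output:.1f} MW")
```

The design point is that the expensive fine-grained solver drops out of the inner loop entirely, which is what turns multi-day iterations into minutes.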

Another compelling example of using a digital twin in the metaverse ahead of production is in construction and business operations: the factory. This is an application we have covered before, and one with a huge potential impact on the cost and schedule of fabrication, construction and operation. By accurately simulating the entire environment in Omniverse, manufacturers like BMW can optimize the layout and flow of a future factory floor, avoiding potentially costly rework later in the physical realm. And while the shop floor will become increasingly autonomous, driven by ever more advanced intelligent robotics, human staff will still be present, making it essential that the interfaces between people and machines are as efficient and safe as possible. NVIDIA already addresses this link to robotics in its Omniverse environment with its Isaac Sim functionality.

Similarly, consider architectural design in medical facilities, such as scanning and operating rooms, where doctors, nurses and patients need the most efficient and comfortable control and access to a myriad of tools and equipment. Simulating procedures in the metaverse – most likely in combination with virtual and augmented reality – reveals any subtle operating inefficiencies before committing to physical implementation.

A digital twin that ages in parallel

But companies like NVIDIA and its customers are now looking to leverage the metaverse well beyond creation – CAD’s exclusive historical focus – and into subsequent physical operation, encompassing the entire lifecycle.


NVIDIA positions Omniverse as an end-to-end digital twin environment, from virtual creation to physical operation. Image source: NVIDIA.

Beyond Omniverse’s fundamental technology enablers – cloud hosting, real-time rendering and graphics, and improved network bandwidth and latency – more is needed to make post-production digital twin functionality applicable and valuable. A digital model that does not age, break down or show fatigue would offer limited insight into the health of its physical twin down the road. The availability of (mostly) cheap wireless sensors at the edge, in combination with machine learning both at the edge and in the cloud, closes the feedback loop, updating the digital twin so it remains an accurate reflection of its physical sibling not only on its first day of life but on its last as well. In addition, measured data collected during manual inspection can be fed back to maintain the digital twin.
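In code, that feedback loop can be as simple in principle as folding each telemetry reading into the twin’s state. The sketch below assumes hypothetical component names and message formats; a real deployment would ingest telemetry over something like MQTT or Kafka rather than an in-memory list:

```python
# Minimal sketch of the sensor-to-twin feedback loop (hypothetical names
# and message format, for illustration only).
from dataclasses import dataclass, field

@dataclass
class TwinComponent:
    name: str
    vibration_mm_s: float = 0.0   # running estimate of vibration level
    temperature_c: float = 20.0   # running estimate of temperature
    history: list = field(default_factory=list)

    def ingest(self, reading: dict) -> None:
        """Fold a raw edge-sensor reading into the twin's state.
        Exponential smoothing stands in for real sensor fusion."""
        alpha = 0.2
        self.vibration_mm_s += alpha * (reading["vibration_mm_s"] - self.vibration_mm_s)
        self.temperature_c += alpha * (reading["temperature_c"] - self.temperature_c)
        self.history.append(reading)

# Simulated telemetry stream from an edge gateway.
telemetry = [
    {"component": "bearing_3", "vibration_mm_s": 1.1, "temperature_c": 41.0},
    {"component": "bearing_3", "vibration_mm_s": 1.4, "temperature_c": 43.5},
    {"component": "bearing_3", "vibration_mm_s": 2.9, "temperature_c": 52.0},
]

twin = {"bearing_3": TwinComponent("bearing_3")}
for reading in telemetry:
    twin[reading["component"]].ingest(reading)

print(twin["bearing_3"].vibration_mm_s, twin["bearing_3"].temperature_c)
```

The twin’s state now drifts with the physical asset’s measured condition rather than remaining frozen at its as-designed values.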


The closed feedback loop of a digital twin makes it viable for computer-aided operations. Image source: Bentley Systems.

However, a wealth of sensor data is worthless without the intelligence to interpret it. And therein lies the keystone of Omniverse’s ability to manage digital twins: machine learning. Cloud-hosted GPUs (in theory complemented by inference at the edge) running machine-learned inference in parallel provide insight into how physical aging translates into current and future maintenance needs, as well as the potential for outright failure. Siemens used a digital twin of its steam turbine to optimize maintenance and predict possible breakdowns in advance, saving time and money. In another striking example, Singapore is creating a digital twin of almost all of the country’s infrastructure, including water, construction and transport. Early operations actually signaled a potential water main failure before it happened (although, due to logistics and timing, not in time to prevent it).
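As a minimal illustration of the idea, a degradation trend extracted from the twin’s sensor history can be projected forward to a failure threshold. The straight-line fit and every number below are illustrative assumptions; a production system would apply trained ML models to far richer data:

```python
# Minimal sketch of trend-based failure prediction (illustrative only).
import numpy as np

# Hours of operation and observed bearing vibration (mm/s), as might be
# accumulated in the twin's history by the feedback loop sketched above.
hours = np.array([0, 500, 1000, 1500, 2000, 2500], dtype=float)
vibration = np.array([1.0, 1.1, 1.3, 1.6, 2.0, 2.5])

FAILURE_THRESHOLD = 4.5  # assumed vibration level (mm/s) considered unsafe

# Fit a simple linear trend; a deployed system would use a learned
# degradation model instead of a straight line.
slope, intercept = np.polyfit(hours, vibration, 1)

if slope > 0:
    hours_to_threshold = (FAILURE_THRESHOLD - intercept) / slope
    print(f"Projected to reach failure threshold at ~{hours_to_threshold:.0f} h; "
          f"schedule maintenance before then.")
else:
    print("No degradation trend detected.")
```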

Tying everything together in the metaverse

No, we haven’t all been missing an obvious use of our CAD environments all these years. Simulating a virtual product’s interaction with a more complete and realistic environment, and then shepherding that product through its life cycle in the physical world, are applications that simply weren’t practical until recently.

A mature cloud and network infrastructure, along with real-time (or at least near real-time) physically based rendering of very complex environments, created the basic capabilities to make this a viable proposition. And the advent of machine learning has created the means to turn real-world lifecycle data into a digital twin that ages in lockstep with its physical sibling, one that can be used to optimize maintenance, repair, availability and functionality to a degree never before possible.

We’ve already had to complicate the acronym CAD over the years. Personally, for the sake of simplicity, I tend to use it to encompass CAM and CAE already, and now maybe it’s time to expand it further, beyond uses in design, engineering and manufacturing to operations, mirroring the physical realm over the full lifecycle of a product.

