
Computer-Aided Engineering, AI and the Bad News

Tags: model

Models are only models. Remember how many assumptions one must make to write a partial differential equation (PDE) describing the vibrations of a simple beam? The beam is long and slender, the constraints are perfect, the displacements are small, shear effects are neglected, rotational inertia is neglected, the material is homogeneous, the material is elastic, sections remain plane, loads are applied far from constraints, etc., etc. How much physics has been lost in the process? 5%? 10%? But that’s not all. The PDE must be discretized using finite difference or finite element (FE) schemes. Again, the process implies an inevitable loss of physics. If that were not enough, very often, because of high CPU consumption, large FE models are projected onto so-called response surfaces, i.e., surrogates. Needless to say, this too removes physics. At the end of the day, we are left with a numerical artifact which, if one is lucky (and has some grey hair), correctly captures 80-90% of the real thing. And never forget that
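For the record, all of those assumptions collapse the beam into the classical Euler-Bernoulli equation, a standard textbook result quoted here only to make the point concrete:

```latex
% Euler-Bernoulli beam: transverse displacement w(x,t) under a distributed load q(x,t)
\[
\rho A \,\frac{\partial^2 w}{\partial t^2}
  \;+\; E I \,\frac{\partial^4 w}{\partial x^4} \;=\; q(x,t)
\]
% E I    : bending stiffness (elastic, homogeneous material; plane sections; no shear)
% \rho A : mass per unit length (rotational inertia neglected)
```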

the most important things in a model are those it doesn’t contain

Many questions may arise at this point. For instance, one could ask how relevant an optimization exercise is which, by exposing such numerical constructs to a plethora of algorithms, delivers an improvement in performance of, say, 5%. Five percent is often a lot! This and other similar questions bring us to a fundamental, and probably the most neglected, aspect of digital simulation: model credibility and model validation. If your model misses 10% of the physics it is supposed to emulate, what is the meaning of a, say, 5% improvement in performance obtained on the basis of such a model? Can you trust this 5% to be real?
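As a back-of-the-envelope illustration of that doubt (the numbers below are invented, not taken from any real model), consider the following sketch:

```python
# Illustrative check: is a predicted improvement distinguishable from the
# model's own error? All numbers are made up for the sake of the argument.

baseline_prediction = 100.0     # model-predicted performance of the current design
optimized_prediction = 105.0    # model-predicted performance after optimization (+5%)
model_error_fraction = 0.10     # assume the model misses roughly 10% of the physics

# Crude error band around the prediction
band = model_error_fraction * baseline_prediction
gain = optimized_prediction - baseline_prediction

print(f"Predicted gain: {gain:.1f}  vs.  model error band: +/-{band:.1f}")
if gain <= band:
    print("The claimed improvement lies inside the model's own uncertainty.")
else:
    print("The claimed improvement exceeds the model's uncertainty.")
```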

Knowing how much one can trust a digital model is of paramount importance:

  • Models are supposed to be cheaper than the real thing; physical tests are expensive.
  • Some things just cannot be tested (e.g., spacecraft in orbit).
  • If a model is supposed to replace a physical test but one cannot quantify how credible the model is (80%, 90%, or maybe 50%), how can any claims or decisions based on that model be taken seriously?
  • You use a computer model to deliver an optimal design, but you don’t know the level of trust of the model. You then build the real thing. Are you sure it is really optimal?

But is it possible to actually measure the level of credibility of a computer model? The answer is affirmative. Based on our QCM technology, a single physical test and a single simulation are sufficient to quantify the level of trust of a given computer model. Over the past two decades, we have measured the degree of credibility of hundreds of computer models using OntoTest and have found that

the majority of CAE simulation models rarely exceed a degree of credibility of 80-85%

When it comes to crash simulations, this can get significantly lower. Ever since the advent of High Performance Computing, the focus has been on running huge models quickly, never on their validity. This fact is extraordinary, to say the least. It appears that Supercomputers have the ability to beatify math models and automatically make them credible. Using expensive computational resources and millions of Finite Elements guarantees nothing. Quite the opposite – highly complex mathematical constructs can quickly turn into extravagant video games that look so realistic as to never be questioned. Moreover, their immense complexity exerts a masking effect, discouraging anyone from engaging in painstaking and seemingly futile investigations of details that could ruin the fun. The Emperor is naked, and nobody has the courage to say otherwise.

Below is an example. The data comes from a real, physical crash test and a real, industrial FE crash model. The measured and simulated outputs are nineteen acceleration pulses. The OntoTest GUI below shows the corresponding Complexity Maps, which reflect the interdependencies between the 19 channels. There are 159 such interdependencies in the test data and 153 in the simulated data. The strong mathematical condition for model validity is that the distance between the two maps be small. If the maps are dissimilar, the model is wrong. The weak condition – the more popular one – is to get the basic statistics of the two sets of signals to come close. However, it is the structure of these maps that reflects the physics contained in the data; hence, it is of paramount importance to get this structure right.
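To make the idea of comparing the structure of such maps concrete, here is a minimal sketch using plain correlation maps on synthetic signals. The actual QCM/OntoTest technology is proprietary and works differently; this is only an illustration of the "strong condition" that the two maps should be close:

```python
import numpy as np

# Simplified stand-in for comparing interdependency structure.
# NOT the proprietary QCM/OntoTest algorithm -- just thresholded correlation
# maps, used to illustrate the idea of a distance between two maps.

def interdependency_map(signals: np.ndarray, threshold: float = 0.3) -> np.ndarray:
    """signals: array of shape (n_samples, n_channels), e.g. 19 acceleration pulses."""
    corr = np.corrcoef(signals, rowvar=False)        # channel-to-channel correlations
    return (np.abs(corr) >= threshold).astype(int)   # 1 = interdependency present

def map_distance(map_a: np.ndarray, map_b: np.ndarray) -> float:
    """Fraction of channel pairs on which the two maps disagree."""
    upper = np.triu_indices(map_a.shape[0], k=1)     # ignore diagonal, count each pair once
    return float(np.mean(map_a[upper] != map_b[upper]))

# Usage with synthetic data standing in for the 19 measured / simulated pulses
rng = np.random.default_rng(0)
latent = rng.standard_normal((2000, 4))
mixing = rng.standard_normal((4, 19))
test_signals = latent @ mixing + 0.5 * rng.standard_normal((2000, 19))
sim_signals = latent @ mixing + 1.5 * rng.standard_normal((2000, 19))   # noisier "model"

m_test, m_sim = interdependency_map(test_signals), interdependency_map(sim_signals)
upper = np.triu_indices(19, k=1)
print("links in test map:", int(m_test[upper].sum()))
print("links in sim  map:", int(m_sim[upper].sum()))
print("map distance:", map_distance(m_test, m_sim))
```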

Many acceleration pulses match quite well. Below is an example. The curve on the left is the measured pulse and the one on the right is the simulated version thereof.

[Figure: measured pulse (TEST, left) and simulated pulse (SIMULATION, right)]

Below, the situation is quite different, with the model providing a totally different picture.

The situation becomes evident when one considers the interdependencies between the pulses:

Pulse 3 versus 4 – this looks quite good:

In the case of pulse 6 versus pulse 12, the situation is dramatically different.

Globally speaking, the model has a very poor level of credibility: 66.5%.

What 66.5% means is that this crash model misses 33.5% of what the experiment produces. Basically, it is useless, even though the animated results look quite realistic.

OntoTest also indicates where and how to intervene to improve the model, but that is a totally different matter.

Imagine now that you use a math model of unknown degree of confidence in conjunction with Artificial Intelligence, i.e., you use it in a Machine Learning context to train a piece of software for a specific task. How reliable is the result? In the most favorable (and unlikely) scenario, the AI layer will not amplify the imperfections of the underlying model. But what if it does? And if it does, to what degree? How real, or how fake, would the result be? Would you trust such a piece of software with your life, in a car, on a train, or on a pilot-less plane? Most certainly not.
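A deliberately simple, fully synthetic sketch of that failure mode (nothing here comes from the crash model above): a surrogate trained only on the output of an imperfect simulation can agree beautifully with its training model and still miss the real physics.

```python
import numpy as np

# Toy illustration: train a surrogate on data produced by an imperfect
# simulation model, then check it against the "real" physics it never saw.

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)

def real_physics(x):
    return np.sin(2 * np.pi * x) + 0.4 * np.sin(6 * np.pi * x)   # "true" behaviour

def simulation_model(x):
    return np.sin(2 * np.pi * x)                                 # misses the higher-order term

# "AI" layer: fit a polynomial surrogate to the simulation output only
coeffs = np.polyfit(x, simulation_model(x), deg=9)
surrogate = np.polyval(coeffs, x)

error_vs_training_model = np.sqrt(np.mean((surrogate - simulation_model(x)) ** 2))
error_vs_reality = np.sqrt(np.mean((surrogate - real_physics(x)) ** 2))

print(f"surrogate error vs. the simulation it was trained on: {error_vs_training_model:.3f}")
print(f"surrogate error vs. the real physics:                 {error_vs_reality:.3f}")
# The surrogate can look excellent against its training model and still be far
# from reality -- at best, the AI layer simply inherits the model's missing physics.
```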

When you stack up layers of sophisticated technology – Finite Element modeling and analysis, Multi-body Dynamics, DOE, response surfaces, Monte Carlo Simulation, control systems and Machine Learning – the result is a hugely complex mathematical construct that, like everything in life, will obey the Principle of Incompatibility. Coined by the late L. Zadeh, the principle states:

High precision is incompatible with high complexity

In other words, highly complex systems behave in a fuzzy manner. The more complex you make them, the fuzzier they get, and there is nothing that can be done about it. An alternative, and perhaps easier to grasp, way of expressing the Principle of Incompatibility is this:

When facing high complexity, precise statements are irrelevant and relevant statements are imprecise.

No numerical alchemy will ever neutralize the Principle of Incompatibility. It is not a matter of flops or bytes. It is physics. Think about it when you engineer complex products. High complexity has one unpleasant property: it tends to blow up in your face.

Data science is not a science, physics is.


