Artificial Intelligence Could Optimize Your Next Design

Modern electronics design is increasingly revealing the inadequacies of simulation-based verification. But researchers believe machine learning holds the answer.

As modern electronic systems have grown more complex, simulation has become a less comprehensive design tool. But there's a solution on the horizon in the form of behavioral modeling based on machine learning. One of the leading centers behind this research is the Center for Advanced Electronics through Machine Learning (CAEML) at the University of Illinois at Urbana-Champaign. Funded by the National Science Foundation and formed with the aim of applying machine-learning techniques to microelectronics and micro-systems modeling, CAEML is already conducting research in several areas, including Design Optimization of High-Speed Links, Nonlinear Modeling of Power Delivery Networks, and Modeling for Design Reuse.

Elyse Rosenbaum

“The limitations in simulation that people experience have always been there. But people are trying to do more ambitious things. And we need more accurate models than we've had in the past,” Elyse Rosenbaum, director of CAEML, told Design News in an interview. “For example, we make everything smaller. The physical accuracy of the models hasn't changed, but we're entering regimes where there's increasing crosstalk between components simply because we're packing them together more closely.”

Rosenbaum, who will be delivering a keynote on machine learning and electronics modeling at DesignCon 2017, said new product demands, such as the push for greener technologies – which calls for ever-improving energy minimization – are creating an environment for design engineers in which simulation-based verification alone is simply not practical. “When you're designing a product, such as, say, a cellphone, you have maybe about a hundred or so components on the circuit board. That's a lot of design values. To completely explore that design space and try every possible combination of components is unfeasible. You'd never get your product out the door,” Rosenbaum said.
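To get a feel for the scale Rosenbaum is describing, a quick back-of-the-envelope calculation helps. The component and option counts below are purely illustrative, not taken from any real board:

```python
# Hypothetical illustration of exhaustive design-space search for a board
# with roughly 100 components, each with (say) 5 candidate part values.
num_components = 100        # assumed component count, as in the article's example
options_per_component = 5   # assumed number of candidate values per component

combinations = options_per_component ** num_components
print(f"Exhaustive search would require about {combinations:.2e} simulations")
# -> on the order of 1e69 simulations, far beyond any practical verification budget
```

Even with a simulator that finished each run in a microsecond, the search would never complete, which is why exhaustive sweeps of the full design space are off the table.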

The solution, then, for Rosenbaum and the researchers at CAEML is highly abstracted behavioral models that let engineers rapidly do a design-space exploration to find an optimal design, not just one that's good enough.

“When we want to do design optimization we can't be concerned with every single variable inside the system,” Rosenbaum said. “All we really care about is what's happening in the aggregate – the signals at the outside of the device where the humans are interacting with it. So we want these abstracted models and that's what machine learning gives you – models that you then use for simulation.”
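The sketch below shows, in broad strokes, how such an abstracted model might be used for design-space exploration. It follows a generic surrogate-modeling workflow, with an invented stand-in for a circuit simulator; it is not CAEML's actual toolchain or method:

```python
# A minimal sketch of surrogate-based design exploration: fit a fast behavioral
# model to a handful of slow simulations, then sweep the design space with the
# cheap model. All names and the "simulation" itself are illustrative only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def slow_circuit_simulation(params):
    # Stand-in for a detailed transistor-level run; just a made-up response surface.
    return np.sin(params[0]) * np.exp(-params[1]) + 0.1 * params[2]

rng = np.random.default_rng(0)
train_x = rng.uniform(0, 1, size=(50, 3))             # 50 sampled design points
train_y = np.array([slow_circuit_simulation(x) for x in train_x])

surrogate = GaussianProcessRegressor().fit(train_x, train_y)

# Rapid exploration: evaluate thousands of candidate designs with the surrogate.
candidates = rng.uniform(0, 1, size=(10_000, 3))
predicted = surrogate.predict(candidates)
best = candidates[np.argmax(predicted)]
print("Best candidate design (per surrogate):", best)
```

The expensive simulator is called only a few dozen times to generate training data; the learned model then answers the thousands of "what if" questions that exhaustive simulation could not.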

Accomplishing this is no small task, given that simulation requires engineers to model everything in a system so that all of its effects can be represented. What Rosenbaum and her team are seeking is completely data-driven modeling, not based on any prior knowledge of what's inside the system. To do this, they need machine learning algorithms that can predict a particular output and represent the behaviors of particular components.
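As a rough illustration of that black-box idea (again, not the team's actual algorithms), the following sketch fits a small neural network to invented input/output samples of a single nonlinear component, with no knowledge of what is inside it:

```python
# A minimal sketch, with made-up data, of learning a component's terminal
# behavior purely from observed input/output samples. Names are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Pretend these came from lab measurements or trusted transistor-level runs:
# input voltage samples and the (unknown, nonlinear) device response.
rng = np.random.default_rng(1)
v_in = np.linspace(-1.0, 1.0, 400).reshape(-1, 1)
v_out = np.tanh(3.0 * v_in).ravel() + 0.01 * rng.standard_normal(400)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
model.fit(v_in, v_out)

# The trained model now stands in for the device during system-level simulation.
print(model.predict([[0.25]]))   # predicted output for a 0.25 V input
```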

DesignCon: Machine Learning Overcomes Hurdles. In her keynote, NSF CAEML Director Elyse Rosenbaum will explore how machine learning can strengthen behavior modeling and provide a solution to the impracticalities of simulation-based verification. At DesignCon 2017, Jan. 31 to Feb. 2 in Santa Clara, CA. Register here for the event.

Comments

Recalling the claim of Bob Pease that "simulation can never be more accurate than the model used," the question becomes how accurate the model will be. Machine learning and artificial intelligence may come close a lot of the time, but is that good enough? When the purpose is to create a "box" that provides the same responses as the actual design, is it likely that the machine learning will cover enough? Many bugs seem to be sequence sensitive, and even sensitive to the random timing between multiple tasks being handled in a context-switching environment. It is not likely that AI and machine learning will discover that problem, or even try to look for it.

Some purely analog electronic systems working at low frequency will be easy to describe rather accurately in this way, but in most electronic systems there is no direct relation between input and output. These can be systems with internal memory, or with internal dynamics – hence most systems. To automate this process without drowning in data, we need AI tools that have the relevant laws of physics built in (Newton, Maxwell ++) – a challenge for the AI community.
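[Editor's note: one common way to address the memory issue the commenter raises is to give a static learner access to lagged inputs and outputs, a NARX-style structure. The sketch below is purely illustrative, with an invented toy system; it is not presented as a general solution.]

```python
# A minimal NARX-style sketch: predict the next output from past outputs and
# past inputs, so the model can represent a system with internal memory.
# The toy system and its data are invented for illustration only.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
u = rng.uniform(-1, 1, 500)                 # excitation signal
y = np.zeros(500)
for k in range(2, 500):                     # a toy system with internal dynamics
    y[k] = 0.6 * y[k - 1] - 0.2 * y[k - 2] + np.tanh(u[k - 1])

# Regressors: lagged outputs and lagged inputs; target: the current output.
X = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
target = y[2:]

narx = Ridge(alpha=1e-3).fit(X, target)
print("One-step-ahead fit R^2:", narx.score(X, target))
```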

M.H. is certainly correct! In addition, systems that have any sort of memory and multiple inputs will be quite a challenge to emulate. So while the approach described is interesting, it may not be very useful. In-house fabrication may be a more secure option, even if it is more expensive. Maintaining adequate security is never trivial and seldom cheap.

