Artificial Intelligence Could Optimize Your Next Design: Page 2 of 3

Modern electronics design is increasingly revealing the inadequacies of simulation-based verification. But researchers believe machine learning holds the answer.

Beyond the potential for more comprehensive simulation, Rosenbaum said, machine learning-based modeling offers several other benefits that should be attractive to companies, such as the ability to share models without revealing vital intellectual property (IP).

“Because behavioral models only describe, say, input/output characteristics, they don't tell you what's inside the black box. They preserve, or obscure, IP. With a behavioral model, a supplier can easily share that model with their customer without disclosing proprietary information,” Rosenbaum explained. “It allows for the free flow of critical information, and it allows the customer to design their system using that model from the supplier.”
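A minimal sketch of the black-box idea (an illustration, not Rosenbaum's actual method): a behavioral model is fit to terminal measurements alone, so the fitted coefficients can be shared without revealing the circuit that produced them. The I/O curve below is synthetic stand-in data.

```python
import numpy as np

# Synthetic "measurement" of a part's terminal I/O characteristic.
# (Illustrative only: this curve stands in for real lab data.)
v = np.linspace(0.0, 1.0, 50)                 # applied voltage, V
i = 2.0e-3 * v + 0.5e-3 * v**3                # measured current, A

# Behavioral model: a polynomial fit to the terminal data alone.
# Nothing about the internal circuit is needed, or recoverable.
coeffs = np.polyfit(v, i, deg=3)
model = np.poly1d(coeffs)

# The supplier can ship `coeffs`; the customer reproduces the I/O
# behavior without ever seeing the transistor-level design.
max_err = np.max(np.abs(model(v) - i))
print(f"worst-case fit error: {max_err:.2e} A")
```

A real flow would replace the polynomial with a learned model and the synthetic curve with bench measurements, but the IP property is the same: only terminal behavior is encoded.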

Most integrated circuit manufacturers, for example, use Input/Output Buffer Information Specification (IBIS) models to share information about input/output (I/O) signals with customers, while also protecting IP. The problem, Rosenbaum said, is that IBIS models tell you absolutely nothing about the circuit design details.

“Where machine learning can help is to make models such as IBIS better,” Rosenbaum said. “IBIS models don't represent interactions between the multiple I/O pins of an integrated circuit. There's a lot of unintended coupling that current models can't replicate. But with more powerful methods based on machine learning for obtaining models, next-gen models may be able to capture those important effects.”
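The pin-coupling gap Rosenbaum describes can be sketched with made-up data: a single-pin fit (the IBIS-style assumption) misses an interaction between two pins, while a model that includes the cross term reproduces the measurements. This illustrates the modeling gap only; it is not the IBIS format itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sweep data for two neighboring I/O pins of a hypothetical IC.
v1 = rng.uniform(0.0, 1.0, 200)               # pin-1 drive voltage
v2 = rng.uniform(0.0, 1.0, 200)               # pin-2 drive voltage
# The "measured" pin-1 current includes an unintended coupling term.
i1 = 3.0e-3 * v1 + 0.4e-3 * v1 * v2

# Pin-by-pin model (IBIS-style assumption): pin-1 current is a
# function of pin-1 voltage only.
A_single = v1[:, None]
c_single, *_ = np.linalg.lstsq(A_single, i1, rcond=None)
err_single = np.max(np.abs(A_single @ c_single - i1))

# Multi-pin model: include the interaction term the data actually shows.
A_coupled = np.column_stack([v1, v1 * v2])
c_coupled, *_ = np.linalg.lstsq(A_coupled, i1, rcond=None)
err_coupled = np.max(np.abs(A_coupled @ c_coupled - i1))

print(f"single-pin model error: {err_single:.1e} A")
print(f"coupled model error:    {err_coupled:.1e} A")
```

The residual of the single-pin fit is exactly the coupling a per-pin model cannot express; a learned multi-input model discovers such terms from the data rather than requiring them to be specified up front.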

The other great benefit would be reduced time to market. In the current state of circuit design there's almost a sense of planned failure that eats up a lot of development time. “Many chips don't pass qualification testing and need to undergo a re-spin,” Rosenbaum said. “With better models we can get designs right the first time.”

Rosenbaum comes from a background in system-level electrostatic discharge (ESD) protection, a world she said is built on trial and error and would benefit greatly from behavioral modeling. “[Design engineers] make a product, say a laptop, it undergoes testing, probably fails, then they start sticking additional components on the circuit board until it passes...and it wastes a lot of time,” she said. “They build in time to fix things, but it's often by the seat of one's pants. If we had accurate models for how these systems would respond to ESD, we could design them to pass qualification testing the first time.”

The willingness and interest in machine learning-based behavioral models are there, but the hurdles are in the details: how do you actually do this? Today, machine learning is largely applied to image recognition, natural language processing, and, perhaps most ignominiously, the sort of behavior prediction that lets Google guess which ads to serve you.

“There's only been a little bit of work in regards to electronics modeling,” Rosenbaum said. “We have to figure out all the details. We're working with real measurement data. How much do you need? Do you need to process or filter it before delivering

Comments

Recalling Bob Pease's claim that "simulation can never be more accurate than the model used," the question becomes how accurate the model will be. Machine learning and artificial intelligence may come close a lot of the time, but is that good enough? When the purpose is to create a "box" that provides the same responses as the actual design, is it likely that machine learning will cover enough? Many bugs seem to be sequence sensitive, and even sensitive to the random timing between multiple tasks being handled in a context-switching environment. It is not likely that AI and machine learning will discover that problem, or even try to look for it.

Some purely analog electronic systems working at low frequency will be easy to describe rather accurately in this way, but in most electronic systems there is no direct relation between input and output. These can be systems with internal memory or internal dynamics, which covers most systems. To automate this process without drowning in data, we need AI tools with the relevant laws of physics built in (Newton, Maxwell, and so on): a challenge for the AI community.
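The commenter's point about internal memory can be made concrete with a short sketch (an illustration added here, not part of the article): a simple first-order system produces different outputs for the same instantaneous input, depending on its history, so no static input-to-output map can describe it.

```python
import numpy as np

# A first-order low-pass (e.g., an RC stage): the output depends on the
# input's history, not on the instantaneous input value alone.
def rc_response(u, alpha=0.1):
    y = np.zeros_like(u)
    for k in range(1, len(u)):
        y[k] = y[k - 1] + alpha * (u[k] - y[k - 1])
    return y

# Drive the system high, then low, then high again briefly.
u = np.concatenate([np.ones(50), np.zeros(50), np.ones(5)])
y = rc_response(u)

# The input is 1.0 at both sample 49 and sample 104, yet the outputs
# differ substantially: a memoryless model mapping u[k] -> y[k] cannot
# fit this data. A model with internal state (e.g., a recurrent
# network) is required.
print(y[49], y[104])
```

This is the dynamics problem in miniature: behavioral models for such systems must carry state, which is why recurrent or physics-informed architectures come up in this context.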

M.H. is certainly correct! But in addition, systems that have any sort of memory and multiple inputs will also be quite a challenge to emulate. So while the approach described is interesting, it may not be very useful. In-house fabrication may be a more secure option, even if it is more expensive. Maintaining adequate security is never trivial and seldom cheap.
