Questions Re: System Validation, Neural-Network Hardware, and Deep Learning

When it comes to using AI, look for problems that have rules. If there are rules that enable people to solve the problem, that helps a lot.

EEWeb

By Lauro Rizzatti (Contributed Content) | Thursday, July 04, 2019

Jean-Marie Brunet, senior director of marketing at Mentor, a Siemens Business, served as moderator for a well-attended and lively DVCon U.S. panel discussion on the hot topics of artificial intelligence (AI) and machine learning (ML).

The hour-long session featured panelists Raymond Nijssen, vice president and chief technologist at Achronix; Rob Aitken, fellow and director of technology at Arm; Alex Starr, senior fellow at AMD; Ty Garibay, vice president of hardware engineering at Mythic; and Saad Godil, director of Applied Deep Learning Research at Nvidia.

In the last of this four-part mini-series based on the panel transcript, audience members questioned panelists about system validation, neural-network hardware, and deep-learning training and inference.

To recap, Part One and Part Two tracked panelists’ answers about how AI is reshaping the semiconductor industry and whether tool vendors are ready to deliver what is needed to verify chips in this domain. In Part Three (presented as Audience Question #1), an audience member asked the panelists how those chips are going to be made.

Audience Question #2: How can we improve system validation through hardware/software validation? Is there an AI and machine-learning approach that can help us? What would be the next step toward doing this more systematically, either from a model point of view or from an input-training-set point of view? Should we have a format for picking up events during simulations, dumping data periodically, and observing the system state to generate the input training set? When should we start: on the current project or the next one? What should we do now so that we can actually go ahead and do this?

Saad Godil: I spend a lot of time talking to different groups at Nvidia, brainstorming about what might be good applications. One of the first things we talk about is: don’t tell me about your data, tell me about your problem. What is it that you’re trying to solve?

There are certain things we look for that make something a good AI problem, the first being whether this is something you could solve at all. Don’t give me something where you say, “I looked at this and there’s no possible way for me to solve it, so how is an AI engine going to solve it?” A good example would be something where you can say, “I could solve it if I had infinite time, but I just don’t have the time to go through all of this.” Those are good problems to look for.

The other thing we talk about is that AI systems are rarely, if ever, 100% accurate, and even if you get there, that just means your data doesn’t contain the case where the system is going to fail. You can never guarantee you are 100% accurate. We always tell the team that any solution we deploy in the early phase has to augment existing code, so that it has a backstop. Basically, we always talk about what happens if the system is wrong. Predictive-type models are great in this respect: they give you early access to information that lets you make decisions ahead of time, but they don’t have to be perfect. That’s useful.

Coming back to practical steps, figure out the problem you want to solve. Once you do that, it’s just an engineering problem to figure out what data you need, then collect it and solve it. In all the projects we have worked on, we have never gotten the data collection right the first time, even when we knew what the problem was and came up with a plan to collect data. We would run it and find the model wasn’t training, then look more closely and say, “You know what, I think we need this other thing.” Don’t spend a year collecting data and then try to train the model. It has to be an iterative approach.
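To make the “backstop” idea concrete, here is a minimal sketch, not from the panel, in which the model and test runner are hypothetical stand-ins. The ML prediction only reorders verification work while every test still runs, so a wrong prediction costs efficiency rather than correctness.

```python
from typing import Callable, List

def run_regression(tests: List[str],
                   predict_fail_prob: Callable[[str], float],
                   run_test: Callable[[str], bool]) -> List[str]:
    """Run every test, ordered by predicted failure probability.

    The hypothetical predict_fail_prob() model only reorders work; if its
    predictions are wrong, the worst case is a less efficient ordering,
    never a missed failure, because every test still runs (the backstop).
    """
    ordered = sorted(tests, key=predict_fail_prob, reverse=True)
    return [t for t in ordered if not run_test(t)]

# Example with stand-in callables: pretend t_cache is the failing test.
fake_model = lambda t: {"t_cache": 0.9, "t_fpu": 0.1, "t_dma": 0.4}[t]
fake_runner = lambda t: t != "t_cache"
print(run_regression(["t_cache", "t_fpu", "t_dma"], fake_model, fake_runner))
```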

Rob Aitken: Look for problems that have rules. If there are rules that enable people to solve the problem, that helps a lot. AI systems can play Go, because Go has a defined set of rules.

AI systems cannot play a typical game that your four-year-old kids would invent because the rules change constantly while the game is being played.

Raymond Nijssen: I’ve been thinking about how you would come up with the training set. How do you get enough data to train your AI system? Maybe one way is to take your system and introduce a bug on purpose, to see what the AI system can learn from it. This is how Google’s AlphaGo was trained: it was set up to play against itself. Similarly, you would want a setup that takes a design and randomly introduces faults here and there to mess around with it.

Then you figure out what the system learns from that: this symptom correlates to that input. In some ways, that parallels what was done with AlphaGo, and it’s how you could build the training set. Before you know it, the trained system can outperform the human player.
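One way to read Nijssen’s suggestion is as deliberate fault injection to manufacture labeled data. The sketch below is a hypothetical illustration, not anything the panel described in detail: simulate() stands in for whatever simulator or emulator flow you have, and each run pairs the observed symptom with the fault that was injected.

```python
import random
from typing import Dict, List, Tuple

FAULTS = ["stuck_at_0", "stuck_at_1", "bit_flip", "delayed_valid"]

def simulate(design: str, fault: str) -> Dict:
    """Hypothetical stand-in: run the design with one injected fault and
    return a failure signature. A real flow would drive an RTL simulator
    or emulator and collect mismatching signals, assertion hits, etc."""
    return {"failing_signal": hash((design, fault)) % 8,
            "cycles_to_failure": random.randint(10, 1000)}

def build_training_set(design: str, n: int) -> List[Tuple[Dict, str]]:
    """Collect (symptom, injected_fault) pairs for supervised training,
    so a model can later map symptoms back to likely root causes."""
    examples = []
    for _ in range(n):
        fault = random.choice(FAULTS)
        examples.append((simulate(design, fault), fault))
    return examples

print(build_training_set("dma_ctrl", 3))
```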

Ty Garibay: I would caution a little bit on synthetic data sets. Just be careful that whenever you create a synthetic data set, it really augments your existing data set.

Neural networks are extremely powerful at picking up low-level details. When you synthesize data, be careful that the network is being trained on the real problem and not just on the data you are creating. That’s a common pitfall that a lot of people fall into.

Just be careful.
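One simple guard against this pitfall, sketched below under the assumption that you can separate real from generated examples, is to let synthetic data into the training set but evaluate only on held-out real data, so a model that has merely learned the quirks of the generator shows up as a training/evaluation gap.

```python
def split_for_training(real_examples, synthetic_examples, holdout_fraction=0.2):
    """Reserve a slice of the real data for evaluation; never evaluate on synthetic."""
    n_holdout = max(1, int(len(real_examples) * holdout_fraction))
    eval_set = real_examples[:n_holdout]                    # real data only
    train_set = real_examples[n_holdout:] + synthetic_examples
    return train_set, eval_set

# Hypothetical usage: ten real examples plus two synthetic ones.
train_set, eval_set = split_for_training(list(range(10)), ["syn_a", "syn_b"])
print(len(train_set), eval_set)
```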

Raymond Nijssen: I want to add to that. If you look at the computer chess world, Garry Kasparov or someone said, “I want to use the computer as a tool because sometimes I make mistakes.” The computer was not there to replace him; he used it to check moves he thought were particularly smart. The computer didn’t tell him what to do; basically, he used it to tell him what not to do. Maybe that’s the start.

Audience Question #3: Here is a segue on this topic: Why isn’t there any capability within the tools right now to integrate any of these deep-learning techniques? We’re hopeful a big wave is coming and that the vendors will implement those libraries. Should we be hopeful? We’ve had previous waves of productivity where the vendors have not come up to speed. One of the papers at this conference presented the automation of functional-coverage closure outside of the simulators. This is a no-brainer. If the tools haven’t provided something as simple as feeding functional coverage back to rerun constraints, why should we trust that you’re going to build the proper deep-learning tools, which are super-complex and non-deterministic?

Jean-Marie Brunet: For us, it’s not a problem of how to do it. The problem is what to extract. If you look at how CPUs and GPUs are done today and compare it with 10 or 15 years ago, a single engineer now handles the layout, custom implementation, and so on, and you can do those designs with eight or nine engineers.

We hear that in Silicon Valley people are able to do chips with eight engineers, which is incredible. The challenge is no longer back-end and custom implementation; it’s scalability, software verification, and so on. Although not every EDA vendor is the same, we can deterministically extract pretty much anything that is in an emulator.

The challenge we’re seeing with those frameworks is that we can spend time extracting a lot of stuff that is completely useless. I hope there will be some standard for frameworks that will allow us to extract performance metrics. Otherwise, we are going to customize every output all the time.

From a verification standpoint, it’s probably different from the approach of the past. Now we might need some input from you on what would be meaningful to extract at a particular time, and then deterministically explore what those metrics are. That we can do.

Q#3: Right. And it’s simple. It already exists, and it’s called functional coverage. Simulator vendors don’t act upon it, so there’s no feedback. That’s what is missing.

Saad Godil: I think people have looked into that. There was a tool called Echo from VCS that addressed that. Feeding data back is not a new idea. In my experience, whenever there is a real need, a new use case, or a feature that a lot of customers want, the EDA vendors are pretty good about addressing it. It all boils down to dollars and whether it makes sense. If there were more business opportunity here, I am sure the feature would be implemented.
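For readers who want to picture the feedback loop the questioner is asking about, here is a tool-agnostic sketch. It is not any vendor’s API: run_simulation() is a hypothetical stand-in for launching constrained-random simulations and reading hit counts back from a coverage database, and the loop simply reweights stimulus toward unhit bins and reruns.

```python
import random

def run_simulation(seed: int, weights: dict) -> dict:
    """Hypothetical stand-in for one constrained-random pass: a real flow
    would launch the simulator with these constraint weights and read hit
    counts from its coverage database. Here the result is faked so the
    sketch runs on its own."""
    rng = random.Random(seed)
    return {b: rng.randint(0, int(w)) for b, w in weights.items()}

def close_coverage(weights: dict, max_iters: int = 10) -> dict:
    """Rerun with reweighted constraints until every coverage bin is hit."""
    for _ in range(max_iters):
        hits = run_simulation(random.randrange(2**31), weights)
        holes = [b for b, n in hits.items() if n == 0]
        if not holes:
            break
        for b in holes:                 # bias stimulus toward the unhit bins
            weights[b] *= 2.0
    return weights

print(close_coverage({"burst_rd": 1.0, "burst_wr": 1.0, "err_resp": 1.0}))
```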

Audience Question #4: How many companies making chips containing special-purpose neural-network hardware do you think will exist in five years?

Ty Garibay: Markets are markets. This is no different from the networking market in the late 1990s or the mobile market over the past couple of decades, where everybody with an idea and VC funding jumped in.

Today, there are well over 100 offerings if you take China into account. How many will survive five years, maybe 10 years out? The AI chip market is a pretty standard market, split between China and not-China. You’ll have a number one, a number two, and maybe three others trying to sustain a business over the long term.

Rob Aitken: I think you will have more than that. If you look at your typical CPU, whether it’s from Arm or whomever, a lot of work has gone into optimizing the hardware to do better on a set of metrics: multiply-accumulate and other neural-network calculations.

At some level, anything with a CPU in it five years from now is going to have what is essentially special-purpose AI hardware. In addition, machine learning is a ubiquitous enough problem that there will be lots of other chips with dedicated tensor processors or neural processors or whatever you want to call them. This stuff is here to stay. Whether there will be a lot of vendors selling it, or everybody gets it from IP vendors, can be argued about, but its existence is not in question. It is guaranteed.

Raymond Nijssen: A few years ago, I attended a panel where somebody in the audience asked the same question, only at a different order of magnitude: if dozens of companies were doing this now, how many would be doing it in five years? Somebody on the panel answered over 100, and there was a lot of laughter in the room because everybody felt it was preposterous. That’s where we are right now.

I firmly believe we’re only at the start. You’re going to see people add more and more to these architectures, such as 3D or a notion of time, and there will be a lot of new initiatives around that. Excluding China, I think the count is going to stay at maybe around 100. More and more of this is going to find its way into standard products, into instruction sets, and so on.

Jean-Marie Brunet: What I believe will happen is that, in five years, the distribution among the top 10 semiconductor makers will change. Probably not because of startups, though hopefully one of them is here today. I think companies similar to today’s Google, Amazon, and Facebook will emerge within the top 10 semiconductor makers because of that shift.

Audience Question #5: Deep learning is divided into two problem scopes: training and inference. Training is aimed specifically at data centers and is addressed by the Nvidia DGX box and others. Those designs consist of multiple accelerators talking to each other, and power consumption is not a primary concern. On the inference side, designs are more like edge devices, and we are talking about different architectures for the AI chips. Do you think we need different versions of EDA tools to solve these two different problems?

Raymond Nijssen: If you are looking for one example that shows we are in the infancy of this whole thing, it is the dichotomy between training and inference. Right now, we’re at the vacuum-tube level of AI. We have a cat, and if I show my cat a new treat once, the cat will immediately learn about that treat, and it does that with maybe two watts in its brain. I’m sure nothing got uploaded to a data center and run overnight, with coefficients then downloaded into my cat’s brain. It did this instantaneously with a couple of watts.

I think the separation between training and inference is a temporary one. As far as I know, nobody has figured out how to combine them. Maybe that’s something for quantum computing. Basically, that’s it.

My answer to your question is that the separation will continue for now, but eventually, it is going to disappear.

Ty Garibay: I’d say it’s no different from what happens with mobile chips, edge chips of any kind, or chips in other markets: edge versus data-center servers. We all use the same EDA tools.

For the most part, we tend to focus more on power for some designs and more on performance for others, and we lean on the tools in different ways, in different combinations, with different attention to detail. In the end, what we’re trying to build is something that is manufacturable in silicon. The tools won’t change until we move to carbon nanotubes or quantum devices or whatever. Then we will need new tools.