Podcast Transcript: Artificial Intelligence Meets Verification Technology

Source: EEWeb

By: Max Maxfield

As you may recall, a couple of months ago my chum — verification expert Lauro Rizzatti — asked me if I’d be interested in joining him on a podcast. Since Lauro knows that I’m currently very interested in advanced technologies, including artificial intelligence (AI), artificial neural networks (ANNs), machine learning, deep learning, and cognitive (thinking, reasoning) systems, he proposed that we talk about the relationship between artificial intelligence and design verification.

Well, Lauro just suggested that we make the transcript of our podcast available for you to peruse and ponder at your leisure. The transcript appears below. Also, you can still listen to the whole 15-minute recording at rizzatti.com/podcasts. Either way, I’d love to hear whether or not you agree with my conclusions regarding the relationship between artificial intelligence and design verification.


Operator: Welcome to Lauro Rizzatti’s podcast: Verification Perspectives. Lauro’s guest today is Clive Maxfield, known as Max, editor in chief of eeweb.com.

Lauro: Welcome, Max. It is a great pleasure to have you here today to talk about artificial intelligence and its relationship to design verification. But first, can you tell us a little bit about eeweb.com?

Max: Hi Lauro, it’s wonderful to be here, and — yes — I would love to tell you about EEWeb. We have a bunch of free design and verification tools, all the way from simple ones like impedance calculators up to full-blown schematic capture, layout, and simulation engines. We also have forums where members can ask and answer questions. As part of this, we have a team of real experts standing by to answer anybody’s questions, all for free. And, last but not least, we have me writing columns about my hobby projects and all the weird and wacky things I run across as I wander through life.

Lauro: That sounds good, Max; I will look into it. I must admit that I found your article on the Hutzler 571 Banana Slicer to be, how can I say, unexpected. At any rate, you recently told me that the concept of artificial intelligence, or AI, can be traced back to the time of Charles Babbage, who conceived the first mechanical computer around the 1840s. Does this mean that Babbage was using the term AI back then?

Max: No, it doesn’t. And, in fact, Babbage wasn’t really the one I was thinking of. As you say, Babbage had come up with the idea for a mechanical computing engine called the Analytical Engine, which had a lot of the features we associate with a modern computer, although it was implemented using mechanical means. It had the ability to perform mathematical operations like addition, subtraction, multiplication, and division; it had a memory, with its instructions supplied on punched cards; and it had the ability to make decisions: “If the result from this operation is greater than this value, then do this, or else do that,” sort of thing. But Babbage only ever thought about this in terms of number crunching. He just wanted to use his engine to perform calculations.

It was his assistant, Lady Ada, or Ada Lovelace, the daughter of Lord Byron, who took things further. She was quite young at that time — I think in her 20s — and everyone said she was a brilliant mathematician. Ada had the idea that the numbers didn’t have to represent just numerical values; they could also represent symbols, and if you are using numbers to represent symbols, you can represent anything you please. In some notes that she wrote, she was thinking out loud, saying we could use numbers to represent abstract things like musical notes, and, as computing engines got more and more powerful, she could see a future where they could compose music “of any complexity.” I think it’s astounding that somebody back in the 1840s could imagine a time when computing engines could generate musical scores. This, to me, is the first time someone had the idea that computers could be used to perform something like artificial intelligence.

Lauro: So, when did people start to talk about AI the way we can think about it today?

Max: Well, there was a conference back in 1956 at Dartmouth, and the American computer scientist and cognitive scientist John McCarthy was the one who first used the term “artificial intelligence.” The dream of the people at that time was to construct complex machines that possess the same characteristics as human intelligence. This is a concept that today we would refer to as “General AI” — I think they were envisaging something like C-3PO in Star Wars or Arnold Schwarzenegger as the Terminator (although, of course, these weren’t around back then).

Lauro: Very interesting, Max. In addition to artificial intelligence, I keep hearing terms such as machine learning, deep learning, artificial neural networks, and cognitive systems; how are all these terms related to AI?

Max: Oh, this is a can of worms, really. So, AI, artificial intelligence, is the umbrella term. I think the general public tends to think of AI in the form of general AI, C-3PO-like robots that we could talk to and interact with, and so on. We are at a much lower level at the minute. We have things like machine vision where our computers can look at an image and say, “This is a human baby with a fluffy toy,” or “This is a tree,” or “This is a train,” or whatever. So artificial intelligence is the overall umbrella name. 

Then we have machine learning, which is like a subset, and which refers to the ability of computers to learn without being explicitly programmed. At its most basic, machine learning is the idea of using algorithms to process data, to learn from it, and then to make some sort of determination or prediction about something in the real world.
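To put a little flesh on those bones, here is a minimal sketch in Python; the library (scikit-learn) and the toy fruit data are illustrative choices on my part, not anything from a real application. The point is that the algorithm is never given explicit rules; it infers them from labeled examples and then makes a prediction:

```python
# Minimal machine-learning sketch: process data, learn from it, predict.
# (scikit-learn and the toy fruit data are illustrative assumptions.)
from sklearn.tree import DecisionTreeClassifier

# Labeled examples: [length_cm, weight_g] for two kinds of fruit.
X = [[7, 150], [8, 170], [12, 120], [13, 130]]
y = ["apple", "apple", "banana", "banana"]

model = DecisionTreeClassifier()
model.fit(X, y)  # "learning": no explicit if/else rules are programmed

print(model.predict([[11, 125]]))  # a prediction for an unseen fruit
```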

Then we come to artificial neural networks, which are inspired by our understanding of the biology of our brains. Artificial neural networks, or ANNs, have been around for decades in academia. However, it is only in the last couple of years that the technology has become more generally available, in the form of Google’s TensorFlow, Caffe, and other network implementations. These ANNs consist of networks with thousands of neurons and hundreds of layers, and they have moved out of academia into the commercial space.

Now we have artificial neural networks running on things like my iPad, where applications like MyScript Nebo perform sophisticated handwriting recognition using multiple layers of neural networks, all running on the tablet itself.

Deep learning refers to a branch of machine learning where we use a vast amount of data to train a network; for example, if we were training a machine vision system to recognize general-purpose images. An organization like Google has access to millions upon millions of images, and we might train our artificial neural network using these images. That would be deep learning.
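To give a feel for what this looks like in practice, here are a few lines using TensorFlow’s Keras API (TensorFlow being the framework I mentioned a moment ago); the MNIST handwritten-digit dataset and the tiny layer sizes are illustrative choices on my part. A small stack of layers is trained on images that humans have already tagged with the correct answers:

```python
# A tiny deep-learning sketch using TensorFlow's Keras API.
# (The MNIST dataset and the layer sizes are illustrative choices.)
import tensorflow as tf

# 60,000 handwritten-digit images, each tagged by a human with its digit.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train / 255.0  # scale pixel values into the range 0..1

# A small stack of layers; production vision networks use far more.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # image -> vector
    tf.keras.layers.Dense(128, activation="relu"),    # hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# "Training" adjusts the weights until the human-supplied tags are predicted.
model.fit(x_train, y_train, epochs=1)
```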

Last, but not least, cognitive systems are systems that are capable of learning from their interactions with data and with humans. Essentially, they are continuously reprogramming themselves. The interesting thing for me is that, in the recent embedded system studies performed by EE Times and EDN, 50% of the respondents said that they are going to embed some sort of cognitive capabilities in their next-generation embedded systems, which just astounded me, to be honest.

Lauro: Wow, this is very interesting. I really learned something today. OK, let’s move on to my field of interest; that is, design verification. How do you think AI and the other things we have been talking about will affect the way in which we will perform verification in the future?

Max: Well, I have some good news and I have some bad news.

Let’s do the bad news first. Again, when we talk about AI, most people think of something very different from what we actually have today. General AI would be something like having humanoid-looking robots with which we could converse back and forth, and to which we could show things and explain things. We are not there yet. The sort of AI we have today can, for instance, recognize handwriting. Or let’s take the vision application: you take an artificial neural network that contains thousands upon thousands of artificial neurons arranged as hundreds of layers, you create the network architecture, and you then feed millions of images through the network to train it.

These are called “tagged images.” A human being has previously looked at each image, identified what it shows, and tagged it with that information. For example, they might say “This is a mountain,” and maybe also assign a subset like “Himalayas.” Or “This is a bridge,” with a subset of “iron bridge”; “This is a ship,” “This is a spaceship,” “This is a dog,” “This is a cow,” and so on.

By comparison, when we come to verification — and we could be talking simulation or emulation or any other form of verification — how does a human perform that verification? If we wanted to get artificial intelligence to perform the verification, how would we do that? If we look at training, are we going to feed hundreds of thousands of designs into a neural network and somehow tag those designs in such a way that the network could learn how to perform the verification?

The answer is no, not at this time, although maybe in the future. Consider how a verification engineer approaches his job. He sits down, looks at a task, and uses a vast amount of experience to decide what sorts of verification will be applied to different areas of the design. Let’s take something as “simple” as constrained random: we hand the generation of all the random numbers over to the computer, but before we do that, a human has to look at the design and apply the intelligence that says, “Only apply random numbers between these boundaries,” and “Don’t use this set of numbers,” and “Don’t use that set of numbers.”
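To make that concrete, here is a toy sketch in Python; real flows typically express constraints like these in SystemVerilog’s constrained-random features, and the boundaries and excluded values below are invented purely for illustration:

```python
# Toy constrained-random stimulus generator.
# (The bounds and exclusions are invented for illustration; real flows
# usually express such constraints in SystemVerilog, not Python.)
import random

LOW, HIGH = 0x10, 0xFF         # boundaries chosen by the human
EXCLUDED = {0x20, 0x21, 0x7F}  # "don't use" values chosen by the human

def constrained_random() -> int:
    """Draw one random stimulus value that satisfies the constraints."""
    while True:
        value = random.randint(LOW, HIGH)
        if value not in EXCLUDED:
            return value

# The computer supplies the volume; the human supplied the intelligence.
stimulus = [constrained_random() for _ in range(10)]
print(stimulus)
```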

If we did have General AI — say we had a robot like C-3PO from Star Wars — I can imagine that robot sitting next to a human being and the human explaining things to it. This is the same way a young engineer learns to become a verification engineer today; as far as I know, there isn’t really a course on this at any university. What happens is the young engineer sits next to an old engineer, and the old engineer says something like, “Look at this design. We have this area that is an MP3 decoder; we have this area here; this is how we are going to test this portion of the design; this is how we are going to test that portion; these are the tools we are going to use; and this is how we are going to do it.”

I can imagine this sort of thing happening in the future — and it’s not the near future we’re talking about — but not at the moment.

If you think of today’s emulators compared to the way things were, say, 10 or 15 years ago, or if you think of today’s simulators compared to when I was a lad, then verification has made vast leaps and bounds in many areas. At the same time, however, it is probably the area most lagging behind in the design automation space. Consider the design part of the development process. Back at the end of the 1980s and the beginning of the 1990s, that is when we started using logic synthesis, where we used high-level languages like Verilog or VHDL to capture the design intent, and then we fed that information into a synthesis engine that generated the gate-level netlist for us to implement in an FPGA or on silicon.

We don’t have anything like this in verification. Today, verification still comes down to human beings looking at the design and deciding how they are going to test it. Yes, we use tools like constrained random numbers to generate vast amounts of stimulus, but it is still guided by the human being. Now, things are starting to come online, like the Accellera standard called Portable Stimulus. This is the most incredibly badly chosen name I have ever heard because (a) it is not a language that defines stimulus and (b) the stimulus that is created is not portable. Apart from that, I think they have nailed it.

Portable Stimulus is actually a set of semantics that defines a verification intent model. From that model, the hope is that we will be able to synthesize the testbench, and I can see this becoming part of some future artificial intelligence-type scenario.
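As a toy illustration of the idea (this is emphatically not actual Portable Stimulus syntax, just a Python caricature of declaring verification intent and mechanically expanding it into concrete tests), consider the following:

```python
# Caricature of a "verification intent model"; NOT real Portable
# Stimulus syntax. The actions and legal orderings are invented.
import itertools

ACTIONS = ("reset", "config", "write", "read")
LEGAL_NEXT = {                  # intent: which action may follow which
    "reset":  {"config"},
    "config": {"write", "read"},
    "write":  {"read"},
    "read":   {"write", "read"},
}

def synthesize_tests(length: int = 4):
    """Mechanically expand the intent model into legal test sequences."""
    for seq in itertools.product(ACTIONS, repeat=length):
        if seq[0] == "reset" and all(
                nxt in LEGAL_NEXT[cur] for cur, nxt in zip(seq, seq[1:])):
            yield seq

for test in synthesize_tests():
    print(" -> ".join(test))  # e.g., reset -> config -> write -> read
```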

So that is the bad news: if you look at the entire development process, verification is probably going to be the last thing that truly benefits from artificial intelligence.

The good news is that if you are looking for a career in electronic design, being a verification engineer is going to be one of the last holdouts where a human is absolutely required. 

Lauro: Well, too late for me, Max. I missed the boat.

Max: And me, I fear, and me. 

Lauro: Thank you, Max. It has been very informative and educational talking to you today; you have certainly given me a lot to think about. 

Operator: Thanks for listening to today’s podcast of Lauro Rizzatti’s Verification Perspectives. Join us next time for another interesting and in-depth conversation. For more episodes, please visit www.rizzatti.com.