
The Race to Build a Perfect Computer Chip


There are various estimates of how big a drain the digital economy is on the world's resources. One data point is that 1% of the world's carbon emissions are created by people just streaming video on their phones or PCs. Smartphones, computers, data centers, and all the digital activities we take for granted are estimated to use around 7% of the world's electricity.


And that's only going to get worse as we rely more and more on our devices. Probably the best way to look at that is what's going on in data centers. These are giant factories full of computers that process all of the information we generate, and they are on course to use much more electricity by perhaps the end of the decade.

Clearly, that kind of drain is just not sustainable. We need new solutions. We need novel solutions. Scientists and startups around the world are developing low-energy computer chips that could be better for the environment and upgrade the intelligence of our digital devices. Everything's tied to computers getting better.

It could be finance, it could be chemical engineering, name your field. So my goal since founding the company has been to develop a new computing paradigm, a new technology platform, that will allow computers to continue to make progress. It's not only about designing chips but also sending them for fabrication, testing them, and making system prototypes for various applications.

The focus of the application would be reducing energy consumption to the minimum level. It's a very high-risk, high-reward type of problem. So if we're successful, then we'll transform this industry. That'd be an enormous leap in this field. From space hardware to toasters, almost everything in modern life depends on silicon-based semiconductors called chips.

They're the things that make electronic items smart. At a very fundamental level, semiconductors are made up of what are called transistors, and these are absolutely microscopic digital switches, the on-and-offs. On or off means one or zero in the digital realm. A modern graphics chip will have tens of billions of transistors on it, and this is something the size of your thumbnail.
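To make the on-and-off idea concrete, here's a toy sketch in Python (not how chips are actually programmed, just the logic): treat each transistor as a switch, combine switches into a NAND gate, and build everything else from there.

```python
# Toy model: a transistor is just a voltage-controlled switch.
# Chaining switches gives logic gates; gates give arithmetic.

def nand(a: int, b: int) -> int:
    # Two "transistors" in series pull the output low only when
    # both inputs are on -- the classic CMOS NAND idea.
    return 0 if (a == 1 and b == 1) else 1

# NAND is universal: every other gate can be built from it.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

# A half-adder: a few gates that add two bits. Billions of
# switches like these, wired together, make a modern chip.
def half_adder(a, b):
    return xor(a, b), and_(a, b)  # (sum, carry)

print(half_adder(1, 1))  # (0, 1): one plus one is binary 10
```

That universality is why a chip can be built from billions of copies of essentially one tiny switch.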

So how do we get all of these tiny switches onto something relatively small? We do that by building up layers of material on a disk of silicon, etching patterns into those materials, and layering on other materials until we have these tiny circuits that give the chip its function. For half a century, the semiconductor industry has made chips exponentially faster and better by shrinking the size of transistors, turning room-sized computers into pocket-sized smartphones and propelling the digital revolution.

But traditional computing is reaching its limits. State-of-the-art mass-produced chips are made with what's called 5-nanometer technology, a dimension that's smaller than a virus.

And materials inside some devices are already one atom thick, meaning they can't be made any smaller. We're reaching the limits of physics. We're reaching the limits of energy density. The increasing consensus in the chip industry is that the advances we've gotten from silicon are beginning to come to an end.

So that if we really want to take advantage of the full potential of artificial intelligence and the absolute ocean of data that we're all creating today, we're going to need new materials, we're going to need new programming models, we're going to need new versions of computers.

My first time learning about and discovering what a carbon nanotube was came when I was just about finishing up college as an undergraduate. I saw a presentation by the vice president of research at IBM, and immediately I thought it was something very special. A carbon nanotube is a material made entirely of carbon atoms.


The carbon atoms are arranged in a tube. Imagine taking a sheet of carbon atoms and rolling it into a tube, and then making that tube smaller and smaller and smaller until its diameter is just a few atoms. It's extremely small: about 100,000 times smaller than the diameter of an eyelash and a hundred times smaller than a virus.

The really cool thing about nanotubes is that they conduct electricity better than just about any other material that's ever been discovered. Electrons will move along the length of a nanotube faster than they do in silicon. And that means you can ultimately get faster switching between on and off states. You can make faster computer chips.

You could turn them on and off with less voltage. And that means they use less electricity, less power, than silicon. In theory, nanotubes will be able to do a thousand times better than silicon: the same computational capabilities with 1,000 times less power. Another advantage is that carbon nanotubes can be processed at a low temperature, so layers of nanotubes can be built on top of one another.

Silicon has to be processed at an extremely high temperature, which makes 3D layers much harder. If you really think about a city, what happens when you run out of real estate in a city is you build up, you build skyscrapers, you build into the third dimension. And so if you can't make the transistors smaller, then you could improve your computer chip by making more transistors, by making multiple layers of transistors on top of other transistors.
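As for the power claim above: the dynamic switching energy of a digital node scales roughly as capacitance times voltage squared, so a lower-voltage device wins quadratically. Here's a back-of-envelope sketch; all the capacitance and voltage figures below are illustrative assumptions, not measured device values.

```python
# Back-of-envelope: dynamic switching power of a digital node is
# roughly P = a * C * V^2 * f (activity factor, capacitance,
# voltage squared, clock frequency). All numbers are illustrative
# assumptions, not data from any real silicon or nanotube device.

def switching_power(c_farads, v_volts, f_hertz, activity=0.1):
    """Approximate dynamic power: P = a * C * V^2 * f."""
    return activity * c_farads * v_volts**2 * f_hertz

silicon = switching_power(c_farads=1e-15, v_volts=0.7, f_hertz=3e9)

# Hypothetical nanotube device: lower capacitance and lower switching
# voltage -- since power goes as V^2, even a modest voltage reduction
# compounds quickly.
nanotube = switching_power(c_farads=1e-16, v_volts=0.2, f_hertz=3e9)

print(f"silicon-ish node : {silicon * 1e6:.3f} uW")
print(f"nanotube-ish node: {nanotube * 1e6:.3f} uW")
print(f"ratio: {silicon / nanotube:.0f}x less power")
```

With these made-up figures the ratio lands around 100x; the thousand-fold claim in the transcript is the theoretical best case.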

Since the early 1990s, when nanotubes were first discovered in Japan, different methods have been developed to mass-produce them. But every process creates two types of nanotubes: metallic ones and semiconducting ones. A metallic nanotube is like a copper wire; it's stuck in the metallic state. You can't switch it, and that kills a circuit.

So we need only semiconducting nanotubes to really make nanotube electronics work. There are molecules and polymers that can be mixed in with the carbon nanotube powder, which is this tangled mess of nanotubes. Those molecules and polymers will stick to just the semiconducting ones and not the metallic ones, or they'll stick differently.


And then you can sort the nanotubes and separate them based on these differences. Initially, in a powder, about 67% of the nanotubes are semiconducting. But using these chemical approaches, we can extract semiconducting nanotubes at over 99.99% purity. After the semiconducting nanotubes have been extracted, they float around in a solution, and so the next challenge is to line them up neatly on a silicon wafer, which can then be turned into a computer chip.
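The jump from 67% to over 99.99% looks dramatic, but the arithmetic of selective extraction explains it: if each pass keeps most of the semiconducting tubes and almost none of the metallic ones, purity compounds quickly. A rough sketch with made-up retention rates:

```python
# Sketch of why selective chemistry enriches purity so fast. Assume
# each extraction pass keeps semiconducting tubes with high probability
# and metallic tubes with low probability -- both retention rates below
# are made-up illustrative numbers, not measured chemistry.

semi, metal = 0.67, 0.33            # starting powder, ~67% semiconducting
keep_semi, keep_metal = 0.90, 0.01  # hypothetical per-pass retention

for p in range(1, 4):
    semi *= keep_semi
    metal *= keep_metal
    purity = semi / (semi + metal)
    print(f"pass {p}: purity = {purity:.6%}")
```

With these assumed rates, purity passes 99.99% by the second extraction; the cost is that each pass also throws away some of the good tubes.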

Carbon nanotubes really got me hooked and excited right from the beginning. I mean they really have the opportunity to potentially revolutionize electronics. This challenge of aligning them in these highly packed, highly aligned arrays has really been frustrating the field of carbon nanotube electronics since their discovery.

During my PhD, I was tasked with determining, you know, how we can better align carbon nanotubes. We found, kind of by accident, that when we layer carbon nanotube ink, inks where carbon nanotubes are dissolved in solutions like organic solvents, on water, then instantaneously those carbon nanotubes will collect and confine at that ink-water interface.

And that induced ordering by itself really helps to align the carbon nanotubes. What you're seeing here is an aligned array of hundreds of thousands of carbon nanotubes. You can see individual nanotubes as these light-colored regions, and the dark-colored region is the substrate. They're all really well aligned with respect to each other. Occasionally there's a misaligned nanotube, and then we need to improve our manufacturing process to eliminate those instances.

The biggest current challenge is being able to achieve that high alignment in extremely uniform arrays across, you know, 300-millimeter wafers, which is the industry standard for silicon wafers, before manufacturers will consider integrating carbon nanotubes in place of silicon. If we can overcome these challenges, if we can make these aligned nanotube wafers, the major players in industry will really jump into this field, and progress at that point could be very rapid.

The technique isn't perfect yet, but it's already an advance for carbon nanotube research. Michael and Katherine have founded a company with the aim of solving the remaining challenges. But many more breakthroughs are needed before nanotube transistors have even a chance of replacing silicon. It absolutely shows promise, but it's been showing promise for 20 years.

There are many issues, you know, how robust are these devices, but more importantly, can you manufacture them? The semiconductor industry is based on silicon transistors, and enormous sums have already been invested in the infrastructure to manufacture that technology. Plants have a price tag of $20 billion, take years to build, and need to run 24 hours a day to turn a profit.

Change will be difficult without a guarantee that carbon nanotubes will be cheaper. Silicon's been around a long time. People know how to use it, they know how to program it, they know how to mass manufacture it. It's still, in terms of the economics, the winner. And until those economics change, nothing is going to replace it.

I've always been obsessed with computers. You know, I had an opportunity to work in industry at a large semiconductor company for a number of years, and I got to see there some of the fundamental challenges associated with continuing to shrink transistors. I found that to be not a very exciting future. So my goal with Lightmatter since founding the company has been to develop a new computing paradigm, a new technology platform, that allows you to do calculations using photonic devices.

Electronics uses electrons; that's the medium of transfer, that's what represents the data. Photonics uses photons, which are the basic constituent of light. So for example, a fiber-optic cable that spans the Pacific Ocean is using light to transmit information. Why is it using light? Well, it's because the energy required to shove electrons along a copper cable would be absolutely enormous, and you would just have too much signal decay.

So if you can convert that information into photons, you can send it faster and with less energy overhead. So that's obviously a desirable thing. There's nothing faster than the speed of light. So when you think about the latency between when you make a query and when a photonic computer gives you an answer, it's really the fundamental limit for how fast you could do these things.

Electrical wires take a certain amount of time to turn on and off, and every time you do, it takes a lot of energy to turn that wire off and on. Now with a photonic computer, and photonics in general, you have a type of wire that doesn't take very long at all, maybe femtoseconds or even attoseconds, to turn on. It takes almost no energy, other than any light that's lost while the optical signal is propagating through that wire. You have another great property with photonics, which is that you can send multiple data streams through this photonic wire, encoded in different colors, or wavelengths, all at the same time.
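That last property is wavelength-division multiplexing. As a rough sketch (the channel counts and data rates here are illustrative assumptions, not Lightmatter specifications), each wavelength behaves like an independent channel sharing one waveguide:

```python
# Toy wavelength-division multiplexing: each color of light is an
# independent channel in the same physical waveguide, so aggregate
# bandwidth scales with the number of wavelengths carried.

channels = {              # wavelength (nm) -> payload bits (made up)
    1550.12: [1, 0, 1, 1],
    1550.92: [0, 1, 1, 0],
    1551.72: [1, 1, 0, 0],
}

# "Mux": all wavelengths travel through the same waveguide at once.
waveguide = list(channels.items())

# "Demux": a wavelength-selective filter at the far end recovers each
# stream independently -- no time-sharing, no extra wires.
for wavelength, bits in waveguide:
    print(f"{wavelength} nm -> {bits}")

per_channel_gbps = 100  # assumed line rate per wavelength channel
print(f"aggregate: {per_channel_gbps * len(channels)} Gb/s in one waveguide")
```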

So you have massively parallel computation and communication through these nearly perfect wires. The idea of silicon photonics has, again, been around for a long time. The problem comes with changing those electron-based signals into photons. The traditional way of doing that is to have all of these other pieces of equipment that are expensive and use lots of energy.

Silicon photonics says, well, maybe we can just design a chip that directly deals in both electrons and photons, so that we can have, you know, the benefits of both worlds. And that's what the field has supposedly been on the cusp of doing. Again, it's been promised for a long time; it's been about to change the world for a long time, and it hasn't yet.

But Lightmatter still believes silicon photonics is going to happen, and it's harnessing existing laser technologies used in the telecoms and semiconductor-processing industries. Its chips are specifically being made for AI applications like chatbots and self-driving cars. The startup plans to launch its first photonic computer later this year.

One of their current products is an interconnect, which enables chips to talk to each other. It could help solve energy consumption issues in data centers and potentially bring down costs. In these huge factories, processors are arranged close together so they can communicate at high speeds. But this also generates a massive amount of heat.

So not only do you have to put the power in to power all of these components, but then you have to use an enormous amount of power to actually cool them down to stop them literally melting. What this company is trying to do is to come up with interconnects and ways of connecting components that use light rather than electrons over wires.

So that means, in theory, that you could have a processor in one room, memory in another, storage in another, and that would help with the density and the power and the heat. And that's definitely a solution that's being tried by numerous companies, but we're still waiting for that to show, you know, practical everyday results and be widely adopted.

The human brain is very efficient. Our brain operates all the time, yet it doesn't dissipate more than 10 to 20 watts, which is a very small value. Take the problem of solving a Rubik's Cube: the amount of energy required to learn and then operate is very low compared to today's systems. Just solving the problem, putting learning aside, would itself need thousands of processors working in parallel and dissipating a lot of power, which can go beyond a megawatt.

These scientists in India have been trying for years to build low-energy chips that mimic the way the human brain works. The human brain is packed with neuron cells that communicate with each other through electrical pulses known as spikes. Each neuron releases molecules that act as messengers and control whether the electrical pulse is passed along the chain.

This transmission happens in the gaps between neurons, which are called synapses. Neurons and synapses are fundamentally different from the way computers compute. Computers use transistors, which are binary. They do zero, one, on, off. The brain sort of does a similar thing, except that it works in analog. It's also partly stochastic.

A neuron may spike or may not spike depending upon inherent internal randomness. The brain itself doesn't like to do repetitive things. It likes to do everything with slight variance. This sort of enables creativity. It sort of enables problem solving. And the youngsters' creative imagination is not neglected: "Once there was a bear that was twice the size of the earth and lived on the sun."

The connections are different too. If you look at a computer chip, every transistor talks to roughly 10 other transistors. Every neuron, on the other hand, talks to about 10,000 other neurons, and that's literally what makes it super parallel. Neurons use time as a token of information. Computers don't understand time per se; they just, you know, work with digital numbers and do mathematical operations.
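The standard textbook abstraction for this kind of hardware is the leaky integrate-and-fire neuron: charge leaks away over time, and the neuron fires only when enough input arrives close together, so time itself carries information. A minimal sketch with illustrative constants, not values from any real chip:

```python
# Minimal leaky integrate-and-fire neuron: incoming spikes charge a
# membrane potential that leaks over time; the neuron emits a spike
# only when the potential crosses a threshold. All constants below
# are illustrative assumptions.

def lif(input_spikes, leak=0.9, weight=0.4, threshold=1.0):
    v, out = 0.0, []
    for s in input_spikes:
        v = v * leak + weight * s   # leak, then integrate the input
        if v >= threshold:          # fire and reset
            out.append(1)
            v = 0.0
        else:
            out.append(0)
    return out

# Sparse input -> sparse output: the neuron does no work (and in
# hardware burns almost no energy) while no spikes arrive. Only a
# burst of closely spaced spikes pushes it over threshold.
print(lif([1, 0, 0, 1, 1, 1, 0, 0, 0, 1]))
```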

To mimic the architecture of our brain more closely, the team designed an artificial neural net based on the biological neural networks in our brain. Each artificial neuron is built with a few silicon transistors and connected to other neurons. We try to mimic the auditory cortex. The auditory cortex has these neurons which are connected at random.

And that sounds weird, but it turns out it serves a very important function in speech processing. What we are doing is making recurrent connections, which means that a neuron could connect to any neuron, including itself, creating some sort of loopy pathways which are fairly random.

And these loops are part of the architecture, which is called a liquid state machine. They are naturally able to process time series, like speech, which occurs in time. We are doing some analog compute, and there is some noise in the neurons, so there can be some stochasticity. In the traditional or conventional computation platform, everything is digital.

Everything is defined based on logic one and zero. Analog computers, in which signals can take any value, can achieve more complexity and at the same time more functionality in their operation. Here we have our neural network chip, which is on the test board. So what we'll do is speak to the computer, and that will convert the speech into spikes.

It's going to get processed in the chip. We run the code. "One." So it detects a one. This neuromorphic chip has been able to recognize speech using a fraction of the energy used by a traditional chip. Just like biological neurons, the artificial neurons wait for inputs and only spike when data comes in, and that means lower energy consumption.

Each neuron is also highly efficient because it uses a quantum-mechanical phenomenon called band-to-band tunneling, so electrons are still able to pass through transistors at very low electric currents. Quantum tunneling is nothing that humans have everyday experience of. If a human being has to cross over a hill, the human being has to walk up the hill, burning energy, and then walk down, which again burns energy.

And so, there is no way for you to cross this barrier without using up energy. But if you make this hill thinner and thinner, narrower and narrower, until it gets down to a size scale on the order of, say, a few electron wavelengths, then an electron doesn't need to go above the barrier to surmount it. It can, in principle, tunnel through it, which means it will sort of disappear on one side and magically appear on the other.

If you take a transistor, there's this same idea: when it turns on, we have a barrier that we reduce, and when we want to turn it off, we increase the barrier. That's when you expect no current to go through, but by quantum mechanics that is not true. Even through a barrier, when it's narrow enough, electrons can, quote unquote, "tunnel" through it.
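The textbook way to estimate this is the WKB approximation for a rectangular barrier, where the transmission probability falls off exponentially with barrier width. A rough sketch with illustrative numbers, not values from this chip:

```python
import math

# Textbook (WKB) estimate of how the chance an electron tunnels
# through a rectangular barrier grows as the barrier gets thinner:
#   T ~ exp(-2 * kappa * L),  kappa = sqrt(2*m*(V - E)) / hbar
# Barrier height, electron energy, and widths are illustrative.

HBAR = 1.054_571_8e-34   # reduced Planck constant, J*s
M_E  = 9.109_383_7e-31   # electron mass, kg
EV   = 1.602_176_6e-19   # joules per electron-volt

def transmission(barrier_ev=1.0, energy_ev=0.5, width_nm=1.0):
    kappa = math.sqrt(2 * M_E * (barrier_ev - energy_ev) * EV) / HBAR
    return math.exp(-2 * kappa * width_nm * 1e-9)

# Halving the barrier width raises the tunneling probability by
# orders of magnitude -- which is why only very thin barriers leak.
for w in (3.0, 2.0, 1.0, 0.5):
    print(f"width {w} nm -> T ~ {transmission(width_nm=w):.2e}")
```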

So far, there are 36 artificial neurons on the chip, and their energy consumption is remarkably close to that of biological neurons. If we want to approach the number of neurons in the human brain, we need billions of them, and therefore we need to reduce the energy per spike generated by a single neuron as much as possible.

Intel, one of the world's biggest chipmakers, has been working on brain-inspired chips for seven years. They've released two versions of their neuromorphic research chip Loihi. The latest one is only 32 square millimeters in size but contains up to a million neurons. The company said Loihi has demonstrated multiple applications, including recognizing hazardous odors and enabling robotic arms to feel.

In the near term, based on the most recent results we've had with Loihi, I would say that solving optimization problems is the most exciting application. Things like planning and navigating: you can think of it as solving for the shortest path on a map, for example, if you want to get from point A to point B. These kinds of optimization problems are conventionally really compute-heavy, but we found that on a neuromorphic architecture, we can reduce the computational demands by orders of magnitude.
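One way spiking hardware can attack shortest-path problems (a sketch of the general neuromorphic idea, not necessarily Intel's exact algorithm) is to inject a spike at the start node and let a wavefront ripple through the graph, with each edge adding a delay; the first spike to reach any node has, by construction, taken the shortest route. A toy event-driven simulation:

```python
import heapq

# Toy spike-wavefront shortest path: inject a spike at the source and
# let it ripple outward, each edge delaying the spike in proportion to
# its length. The first spike to reach a node found the shortest path.
# Graph topology and delays below are made up for illustration.

graph = {                      # node -> [(neighbor, delay), ...]
    "A": [("B", 2), ("C", 5)],
    "B": [("C", 1), ("D", 4)],
    "C": [("D", 1)],
    "D": [],
}

def first_arrival(graph, source):
    arrival = {}
    events = [(0, source)]             # pending (time, node) spikes
    while events:
        t, node = heapq.heappop(events)
        if node in arrival:
            continue                   # neuron already fired; ignore
        arrival[node] = t              # first spike wins
        for nbr, delay in graph[node]:
            heapq.heappush(events, (t + delay, nbr))
    return arrival

print(first_arrival(graph, "A"))  # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```

In software this is just Dijkstra's algorithm dressed up as spike events; the appeal of neuromorphic hardware is that all the spikes propagate physically in parallel instead of being processed one by one.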

So over 10 times speed-up, as well as over a thousand times energy reduction. Probably the majority of the groups that are working with Loihi see robotics as the long-term, best application domain for neuromorphic chips. After all, brains evolved in nature to control bodies and to deal with real-world stimuli, respond to them in real time, and adapt to the, you know, unpredictable aspects of the real world.

And if you think about robotics today, they perform very rigidly prescribed tasks. But if you want to deploy a robot into the real world, we really don't have the technology yet to make them as agile, as adaptive as we really would like. And so that's where we see neuromorphic chips really thriving.


Intel and its collaborators have used the Loihi chip to power iCub, a humanoid robot designed to learn the way children might. This iCub robot is intended to interact with the environment, as opposed to being pre-trained with static data sets beforehand. So, using saccadic motions, moving the eyeballs, it can trace through and detect changes or edges in the objects in front of it.

And using a fully neuromorphic paradigm, we're able to design a network that understands and learns new object classes, even at different angles and different poses. It adds to its understanding, its kind of dictionary of understood objects, over time. What is this called? That's a mouse. Oh, I see. This is a mouse. Let me have a look at that. Sure.

Can you show me the mouse? Here is the mouse. So far, all of this technology is at an early stage. But to fight climate change and power the progress of civilization, we need at least one of these technologies to be a breakthrough. Greener chips absolutely have to happen. People are concerned about the environmental impact, people are concerned about efficiency, people are concerned about the results and the ways that chips can impact modern life.

So there's been, you know, an absolute blossoming of investment in chip startups and in trying these things. This is definitely the time of great ideas. Will they all be successful? Absolutely not. But now, I would say more than ever, there is a chance that out there in some…


