Artificial Intelligence is about to change dramatically with the use of neuromorphic chips. Using Python programming to create new algorithms, researchers are on their way to creating context-aware AI.
Instead of implanting chips in our brains, as many futuristic musings suggest, scientists are developing ways to give AI chips brains of their own.
Despite how fast they may seem, today’s AI services like Siri and Alexa do not operate in real time. They rely on cloud computing because the devices they run on lack the processing power to handle your requests locally. To answer a question, these assistants send your query over the cloud to a data center, where a response is generated and sent back. Though Siri may be the voice of AI, her brain does not live on your device.
Developers are now interested in a new kind of chip built on neuromorphic computing, which has the potential to surpass current CPUs in both speed and power efficiency.
CPUs process information in regular “clocked time,” which means that information is transmitted in constant, measured moments, like seconds on a clock. The processing is linear and regular.
With neuromorphic chips, digital neurons process information much as the brain does, mimicking the sporadic and varied ways that biological neurons relay signals.
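To make the contrast concrete, here is a minimal sketch (an illustration, not any chip's actual design) of a leaky integrate-and-fire neuron, the kind of model neuromorphic hardware implements. Unlike a clocked pipeline that produces output on every cycle, the neuron stays silent until accumulated input crosses a threshold, then emits a sparse "spike":

```python
# Sketch of a leaky integrate-and-fire (LIF) neuron: output spikes occur
# only when integrated input crosses a threshold, not on every clock tick.

def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Return the time steps at which the neuron spikes."""
    potential = 0.0
    spikes = []
    for t, current in enumerate(inputs):
        potential = potential * leak + current  # integrate input, with decay
        if potential >= threshold:              # fire only on threshold crossing
            spikes.append(t)
            potential = 0.0                     # reset after a spike
    return spikes

# A weak steady input produces only occasional spikes...
print(lif_neuron([0.3] * 10))        # → [3, 7]
# ...while a strong burst fires immediately.
print(lif_neuron([1.2, 0.0, 0.0]))   # → [0]
```

The payoff is efficiency: most neurons are quiet most of the time, so the chip spends energy only when there is a signal worth relaying.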
Neuromorphic chips cut the power required by “packing in digital equivalents of neurons.” These brain chips are more efficient and capable of computing more information, and not by a small margin: a chip made by IBM consumes only 70 milliwatts of power, compared to the 35 to 140 watts drawn by an Intel processor, roughly 500 to 2,000 times less. The same chip would also hold five times as many transistors.
The concept of neuromorphic chip processing is not new; in fact, developers have been exploring it since the 1980s.
What is new, however, is the way that the chips are projected to be used.
Older versions of the chips were task-specific, only becoming useful when combined with a group of other specialized chips: one for audio, one for motion, one for word processing, and so on.
Though they tried, developers were unable to make a general-purpose chip. “This was partly because there hasn’t been any way for programmers to design algorithms that can do much with a general purpose chip. So even as these brain-like chips were being developed, building algorithms for them has remained a challenge.”
After 30 years, it seems as if the technology has caught up with researchers’ quest to build an algorithm for a general purpose chip.
Chris Eliasmith, a theoretical neuroscientist and co-CEO of Canadian AI startup Applied Brain Research, is confident that a new type of chip is about to change that. Eliasmith and his team are building algorithms with Nengo, a Python-based brain simulator.
Python is one of the most intuitive programming languages, and if neuromorphic chips become a reality, it may be one of the most useful languages for the future of tech. Anyone who knows Python could learn to program a neuromorphic chip. That would be great for AI customization and would lead to a far more diverse set of AIs, rather than a few bots that all have to phone the same data centers for help.
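To give a flavor of that workflow, here is a hypothetical, simplified sketch (this is not Nengo's real API) of what describing a spiking network in plain Python can look like: you define populations of neurons and the connections between them, then step the simulation. In principle, the same kind of description could be compiled down to neuromorphic hardware.

```python
# Hypothetical sketch of a Nengo-style workflow (not Nengo's actual API):
# describe populations of spiking neurons and their connections in Python,
# then simulate the network in software.

class Population:
    """A group of identical leaky integrate-and-fire neurons."""
    def __init__(self, n_neurons, threshold=1.0, leak=0.9):
        self.n = n_neurons
        self.threshold = threshold
        self.leak = leak
        self.potentials = [0.0] * n_neurons

    def step(self, input_current):
        """Advance one time step; return how many neurons spiked."""
        spikes = 0
        for i in range(self.n):
            self.potentials[i] = self.potentials[i] * self.leak + input_current
            if self.potentials[i] >= self.threshold:
                spikes += 1
                self.potentials[i] = 0.0  # reset after firing
        return spikes

# A "sensor" population drives a "motor" population through one weight.
sensor = Population(n_neurons=10)
motor = Population(n_neurons=10)
weight = 0.2  # connection strength from sensor to motor

motor_spike_counts = []
for t in range(20):
    sensor_spikes = sensor.step(input_current=0.5)
    motor_spikes = motor.step(input_current=weight * sensor_spikes)
    motor_spike_counts.append(motor_spikes)

print(sum(motor_spike_counts))  # total motor spikes over the run
```

In a real framework, the neurons in a population would have varied tuning so the group can encode a value, and connection weights would be solved for automatically; this toy keeps every neuron identical purely to show the structure of the description.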
The potential for personalization that neuromorphic computing offers is staggering. The intuitive nature of neuromorphic AI could completely change the way we interact with our devices. Right now, most AI is reasonably proficient at trivia, but can Siri remember what you talked about with your boss last week when you ran into him in the elevator?
“Imagine a Siri that listens and sees all of your conversations and interactions. You’ll be able to ask it for things like, ‘Who did I have that conversation about doing the launch for our new product in Tokyo?’ or ‘What was that idea for my wife’s birthday gift that Melissa suggested?’” Eliasmith says.
They See You When You’re Sleeping
This is terrifying.
For a person who writes for an innovative tech site, I’m kind of a Luddite when it comes to trusting new AI. I’ve heard one too many stories about Alexa spying on people, and about companies and data centers harvesting data about people’s lives deliberately and without their consent. Then, with the Vault 7 leaks from WikiLeaks, we learned that our devices can be used to spy on us. With neuromorphics, this reality doesn’t have to last much longer.
With neuromorphic AI, all of the processing happens on the chip inside your device, so the question of where your data goes becomes moot.
Peter Suma, a computer scientist and co-CEO of Applied Brain Research who has been an active builder of neuromorphic applications, says that while today’s AIs like Siri remain offline until explicitly called into action, we’ll soon have artificial agents that are “always on” and ever-present in our lives.
Perhaps a little freaky, but it will become useful. In fact, it may be too useful. Apple is deeply invested in Siri and the data it collects from users of its devices. The company, and others like it, may resist how much of that data collection this kind of hardware would take away. Conversely, it could open up new opportunities for tech companies willing to market neuromorphic AI as a private, more intuitive option.
Using Python, researchers will continue to advance neuromorphic computing and create AIs that are aware of context in ways that simply are not possible with linear computing. If you want to be ahead of the game, it’s time to start brushing up on your Python skills.
The post You Should Learn Python: Neuromorphic Chips are set to Replace CPUs appeared first on Edgy Labs.