
ChatGPT heralds a productivity leap beyond our wildest dreams




AI And The Ghost In The Machine

The concept of Artificial Intelligence dates back far before the advent of modern computers — even as far back as Greek mythology. Hephaestus, the Greek god of craftsmen and blacksmiths, was believed to have created automatons to work for him. Another mythological figure, Pygmalion, carved a statue of a beautiful woman from ivory, who he proceeded to fall in love with. Aphrodite then imbued the statue with life as a gift to Pygmalion, who then married the now living woman.

Pygmalion by Jean-Baptiste Regnault, 1786, Musée National du Château et des Trianons

Throughout history, myths and legends of artificial beings that were given intelligence were common. These varied from having simple supernatural origins (such as the Greek myths), to more scientifically-reasoned methods as the idea of alchemy increased in popularity. In fiction, particularly science fiction, artificial intelligence became more and more common beginning in the 19th century.

But, it wasn't until mathematics, philosophy, and the scientific method advanced enough in the 19th and 20th centuries that artificial intelligence was taken seriously as an actual possibility. It was during this time that mathematicians such as George Boole, Bertrand Russell, and Alfred North Whitehead began presenting theories formalizing logical reasoning. With the development of digital computers in the second half of the 20th century, these concepts were put into practice, and AI research began in earnest.

Over the last 50 years, interest in AI development has waxed and waned with public interest and the successes and failures of the industry. Predictions made by researchers in the field, and by science fiction visionaries, have often fallen short of reality. Generally, this can be chalked up to computing limitations. But, a deeper problem of understanding what intelligence actually is has been a source of tremendous debate.

Despite these setbacks, AI research and development has continued. Currently, this research is being conducted by technology corporations who see the economic potential in such advancements, and by academics working at universities around the world. Where does that research currently stand, and what might we expect to see in the future? To answer that, we'll first need to attempt to define what exactly constitutes artificial intelligence.

Weak AI, AGI, and Strong AI

You may be surprised to learn that it is generally accepted that artificial intelligence already exists. As Albert (yes, that's a pseudonym), a Silicon Valley AI researcher, puts it: "…AI is monitoring your credit card transactions for weird behavior, AI is reading the numbers you write on your bank checks. If you search for 'sunset' in the pictures on your phone, it's AI vision that finds them." This sort of artificial intelligence is what the industry calls "weak AI".

Weak AI

Weak AI is dedicated to a narrow task, for example Apple's Siri. While Siri is considered to be AI, it is only capable of operating in a pre-defined range that combines a handful of narrow AI tasks. Siri can perform language processing, interpretations of user requests, and other basic tasks. But, Siri doesn't have any sentience or consciousness, and for that reason many people find it unsatisfying to even define such a system as AI.

Albert, however, believes that AI is something of a moving target, saying "There is a long running joke in the AI research community that once we solve something then people decide that it's not real intelligence!" Just a few decades ago, the capabilities of an AI assistant like Siri would have been considered true AI. Albert continues, "People used to think that chess was the pinnacle of intelligence, until we beat the world champion. Then they said that we could never beat Go since that search space was too large and required 'intuition'. Until we beat the world champion last year…"

Strong AI

Still, Albert, along with other AI researchers, only defines these sorts of systems as weak AI. Strong AI, on the other hand, is what most laymen think of when someone brings up artificial intelligence. A Strong AI would be capable of actual thought and reasoning, and would possess sentience and/or consciousness. This is the sort of AI that defined science fiction entities like HAL 9000, KITT, and Cortana (in Halo, not Microsoft's personal assistant).

Artificial General Intelligence

What actually constitutes a strong AI and how to test and define such an entity is a controversial subject full of heated debate. By all accounts, we're not very close to having strong AI. But, another type of system, AGI (Artificial General Intelligence), is a sort of bridge between weak AI and strong AI. While AGI wouldn't possess the sentience of a Strong AI, it would be far more capable than weak AI. A true AGI could learn from information presented to it, and could answer any question based on that information (and could perform tasks related to it).

While AGI is where most current research in the field of artificial intelligence is focused, the ultimate goal for many is still strong AI. After decades, even centuries, of strong AI being a central aspect of science fiction, most of us have taken for granted the idea that a sentient artificial intelligence will someday be created. However, many believe that this isn't even possible, and a great deal of the debate on the topic revolves around philosophical concepts regarding sentience, consciousness, and intelligence.

Consciousness, AI, and Philosophy

This discussion starts with a very simple question: what is consciousness? Though the question is simple, anyone who has taken an Introduction to Philosophy course can tell you that the answer is anything but. This is a question that has had us collectively scratching our heads for millennia, and few people who have seriously tried to answer it have come to a satisfactory answer.

What is Consciousness?

Some philosophers have even posited that consciousness, as it's generally thought of, doesn't even exist. For example, in Consciousness Explained, Daniel Dennett argues that consciousness is an elaborate illusion created by our minds. This is a logical extension of the philosophical concept of determinism, which posits that every event is the result of a cause with only a single possible effect. Taken to its logical extreme, deterministic theory would state that every thought (and therefore consciousness) is the physical reaction to preceding events (down to atomic interactions).

Most people react to this explanation as an absurdity — our experience of consciousness being so integral to our being that it is unacceptable. However, even if one were to accept the idea that consciousness is possible, and also that oneself possesses it, how could it ever be proven that another entity also possesses it? This is the intellectual realm of solipsism and the philosophical zombie.

Solipsism is the idea that a person can only truly prove their own consciousness. Consider Descartes' famous quote "Cogito ergo sum" (I think therefore I am). While to many this is a valid proof of one's own consciousness, it does nothing to address the existence of consciousness in others. A popular thought exercise to illustrate this conundrum is the possibility of a philosophical zombie.

Philosophical Zombies

A philosophical zombie is a human who does not possess consciousness, but who can mimic consciousness perfectly. From the Wikipedia page on philosophical zombies: "For example, a philosophical zombie could be poked with a sharp object and not feel any pain sensation, but yet behave exactly as if it does feel pain (it may say "ouch" and recoil from the stimulus, and say that it is in pain)." Further, this hypothetical being might even think that it did feel the pain, though it really didn't.

No, not that kind of zombie [The Walking Dead, AMC]

As an extension of this thought experiment, let's posit that a philosophical zombie was born early in humanity's existence that possessed an evolutionary advantage. Over time, this advantage allowed for successful reproduction, and eventually conscious human beings were entirely replaced by these philosophical zombies, such that every other human on Earth was one. Could you prove that all of the people around you actually possessed consciousness, or might they just be very good at mimicking it?

This problem is central to the debate surrounding strong AI. If we can't even prove that another person is conscious, how could we prove that an artificial intelligence was? John Searle not only illustrates this in his famous Chinese room thought experiment, but further puts forward the opinion that conscious artificial intelligence is impossible in a digital computer.

The Chinese Room

The Chinese room argument as Searle originally published it goes something like this: suppose an AI were developed that takes Chinese characters as input, processes them, and produces Chinese characters as output. It does so well enough to pass the Turing test. Does it then follow that the AI actually "understood" the Chinese characters it was processing?

Searle says that it doesn't, and that the AI was just acting as if it understood the Chinese. His rationale is that a man (who understands only English) placed in a sealed room could, given the proper instructions and enough time, do the same. This man could receive a request in Chinese, follow English instructions on what to do with those Chinese characters, and provide the output in Chinese. The man never actually understood the Chinese characters; he simply followed the instructions. So, Searle theorizes, an AI would not actually understand what it is processing; it would just be acting as if it did.
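The mechanics of the room are easy to caricature in code. Below is a minimal, invented sketch (the rulebook entries are made up for illustration; Searle's argument involves no actual program): replies are produced by pure symbol lookup, with no representation of meaning anywhere in the system.

```python
# A toy "Chinese room": the program follows a rulebook that maps input
# symbols to output symbols. Nothing in it represents what the symbols
# mean. The entries below are invented for illustration only.

RULEBOOK = {
    "你好吗?": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字?": "我叫小明。",    # "What's your name?" -> "My name is Xiaoming."
}

def chinese_room(symbols: str) -> str:
    """Look up the reply; no understanding is ever consulted."""
    return RULEBOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗?"))  # A fluent reply, produced with zero comprehension.
```

A large enough rulebook could carry on a convincing conversation this way, which is exactly the gap between behaving intelligently and actually understanding that Searle is pointing at.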

An illustration of the Chinese room, courtesy of cognitivephilosophy.net

It's no coincidence that the Chinese room thought exercise is similar to the idea of a philosophical zombie, as both seek to address the difference between true consciousness and the appearance of consciousness. The Turing Test is often criticized as being overly simplistic, but Alan Turing had carefully considered objections of this kind when he introduced his test. More than 30 years before Searle published his thoughts, Turing had anticipated such a concept as an extension of the "problem of other minds" (the same problem that's at the heart of solipsism).

Polite Convention

Turing addressed this problem by giving machines the same "polite convention" that we give to other humans. Though we can't know that other humans truly possess the same consciousness that we do, we act as if they do out of a matter of practicality — we'd never get anything done otherwise. Turing believed that discounting an AI based on a problem like the Chinese room would be holding that AI to a higher standard than we hold other humans. Thus, the Turing Test equates perfect mimicry of consciousness with actual consciousness for practical reasons.

Alan Turing, creator of the Turing Test and the "polite convention" philosophy

Defining "true" consciousness is, for now, a question best left to philosophers as far as most modern AI researchers are concerned. Trevor Sands (an AI researcher for Lockheed Martin, who stresses that his statements reflect his own opinions, and not necessarily those of his employer) says "Consciousness or sentience, in my opinion, are not prerequisites for AGI, but instead phenomena that emerge as a result of intelligence."

Albert takes an approach which mirrors Turing's, saying "if something acts convincingly enough like it is conscious we will be compelled to treat it as if it is, even though it might not be." While debates go on among philosophers and academics, researchers in the field have been working all along. Questions of consciousness are set aside in favor of work on developing AGI.

History of AI Development

Modern AI research was kicked off in 1956 with a conference held at Dartmouth College. This conference was attended by many who later became experts in AI research, and who were primarily responsible for the early development of AI. Over the next decade, they would introduce software which would fuel excitement about the growing field. Computers were able to play (and win) at checkers, solve math proofs (in some cases finding solutions more efficient than those previously produced by mathematicians), and provide rudimentary language processing.

Unsurprisingly, the potential military applications of AI garnered the attention of the US government, and by the '60s the Department of Defense was pouring funds into research. Optimism was high, and this funded research was largely undirected. It was believed that major breakthroughs in artificial intelligence were right around the corner, and researchers were left to work as they saw fit. Marvin Minsky, a prolific AI researcher of the time, stated in 1967 that "within a generation … the problem of creating 'artificial intelligence' will substantially be solved."

Unfortunately, the promise of artificial intelligence wasn't delivered upon, and by the '70s optimism had faded and government funding was substantially reduced. Lack of funding meant that research was dramatically slowed, and few advancements were made in the following years. It wasn't until the '80s that progress in the private sector with "expert systems" provided financial incentives to invest heavily in AI once again.

Throughout the '80s, AI development was again well-funded, primarily by the American, British, and Japanese governments. Optimism reminiscent of that of the '60s was common, and again big promises about true AI being just around the corner were made. Japan's Fifth Generation Computer Systems project was supposed to provide a platform for AI advancement. But, the lack of fruition of this system, and other failures, once again led to declining funding in AI research.

Around the turn of the century, practical approaches to AI development and use were showing strong promise. With access to massive amounts of information (via the internet) and powerful computers, weak AI was proving very beneficial in business. These systems were used to great success in the stock market, for data mining and logistics, and in the field of medical diagnostics.

Over the last decade, advancements in neural networks and deep learning have led to a renaissance of sorts in the field of artificial intelligence. Currently, most research is focused on the practical applications of weak AI, and the potential of AGI. Weak AI is already in use all around us, major breakthroughs are being made in AGI, and optimism about artificial intelligence is once again high.

Current Approaches to AI Development

Researchers today are investing heavily into neural networks, which loosely mirror the way a biological brain works. While true virtual emulation of a biological brain (with modeling of individual neurons) is being studied, the more practical approach right now is with deep learning being performed by neural networks. The idea is that the way a brain processes information is important, but that it isn't necessary for it to be done biologically.

Neural networks use simple nodes connected to form complex systems [Photo credit: Wikipedia]

As an AI researcher specializing in deep learning, it's Albert's job to try to teach neural networks to answer questions. "The dream of question answering is to have an oracle that is able to ingest all of human knowledge and be able to answer any questions about this knowledge" is Albert's reply when asked what his goal is. While this isn't yet possible, he says "We are up to the point where we can get an AI to read a short document and a question and extract simple information from the document. The exciting state of the art is that we are starting to see the beginnings of these systems reasoning."
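To make "simple nodes connected to form complex systems" concrete, here is a minimal sketch of a feedforward network in plain NumPy (the layer sizes and random weights are arbitrary choices for illustration, not any particular research model). Each node just sums weighted inputs and applies a nonlinearity; stacking layers of such nodes is what gives the network its power.

```python
import numpy as np

# A tiny feedforward neural network. Every "node" computes a weighted sum
# of its inputs followed by a nonlinearity. Sizes are illustrative only.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # 4 inputs -> 8 hidden nodes
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)   # 8 hidden nodes -> 2 outputs

def relu(z):
    return np.maximum(z, 0.0)   # the nonlinearity applied at each node

def forward(x):
    hidden = relu(x @ W1 + b1)  # the hidden layer of simple nodes
    return hidden @ W2 + b2     # the output layer

x = rng.normal(size=4)          # a toy 4-feature input
print(forward(x))               # untrained output; training adjusts W and b
```

Deep learning is, at heart, the process of adjusting those weight matrices from data until the outputs become useful.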

Trevor Sands does similar work with neural networks for Lockheed Martin. His focus is on creating "programs that utilize artificial intelligence techniques to enable humans and autonomous systems to work as a collaborative team." Like Albert, Sands uses neural networks and deep learning to process huge amounts of data intelligently. The hope is to come up with the right approach, and to create a system which can be given direction to learn on its own.

Albert describes the difference between older weak AI approaches and the more recent neural network approaches: "You'd have vision people with one algorithm, and speech recognition with another, and yet others for doing NLP (Natural Language Processing). But, now they are all moving over to use neural networks, which is basically the same technique for all these different problems. I find this unification very exciting. Especially given that there are people who think that the brain and thus intelligence is actually the result of a single algorithm."

Basically, as an AGI, the ideal neural network would work for any kind of data. Like the human mind, this would be true intelligence that could process any kind of data it was given. Unlike current weak AI systems, it wouldn't have to be developed for a specific task. The same system that might be used to answer questions about history could also advise an investor on which stocks to purchase, or even provide military intelligence.

As it stands, however, neural networks aren't sophisticated enough to do all of this. These systems must be "trained" on the kind of data they're taking in, and how to process it. Success is often a matter of trial and error for Albert: "Once we have some data, then the task is to design a neural network architecture that we think will perform well on the task. We usually start with implementing a known architecture/model from the academic literature which is known to work well. After that I try to think of ways to improve it. Then I can run experiments to see if my changes improve the performance of the model."
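Albert's loop of "baseline from the literature, then tweak, then experiment" looks roughly like the sketch below in a framework such as PyTorch. Everything here is a placeholder invented for illustration (toy data, a plain MLP baseline, a deeper variant); it shows the shape of the workflow, not any actual model of Albert's.

```python
import torch
from torch import nn

torch.manual_seed(0)

# Placeholder data standing in for a real benchmark dataset.
X = torch.randn(512, 16)
y = (X.sum(dim=1) > 0).long()   # a toy binary label

def train_and_evaluate(model: nn.Module, epochs: int = 50) -> float:
    """Train on the toy data and report accuracy: one 'experiment'."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()
    return (model(X).argmax(dim=1) == y).float().mean().item()

# Step 1: implement a known architecture (here, a plain MLP baseline).
baseline = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

# Step 2: propose a change we think might help (an extra hidden layer).
variant = nn.Sequential(nn.Linear(16, 32), nn.ReLU(),
                        nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 2))

# Step 3: run both experiments and compare.
print("baseline accuracy:", train_and_evaluate(baseline))
print("variant accuracy: ", train_and_evaluate(variant))
```

In real work the comparison runs on held-out validation data rather than the training set, but the trial-and-error rhythm is the same.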

The ultimate goal, of course, is to find that perfect model that works well in all situations. One that doesn't require handholding and specific training, but which can learn on its own from the data it's given. Once that happens, and the system can respond appropriately, we'll have developed Artificial General Intelligence.

Next Week: The Future of AI

Researchers like Albert and Trevor have a good idea of what the future of AI will look like. I discussed this at length with both of them, but have run out of time today. Make sure to join me next week here on Hackaday for the Future of AI, where we'll dive into some of the more interesting topics like ethics and rights. See you soon!


AI Vs AGI – What's The Difference Between These Artificial Intelligences?

We're taking a look at the AI vs AGI debate so we're all clear on the difference between these major types of artificial intelligence.

Following the call by AI researchers to "pause Giant AI experiments", you might be wondering what the differences between AI and AGI actually are. Though AI research is still in its early stages, the rate of development is both incredible and alarming.

You're likely familiar with both AI and AGI in one way or another. AI stands for Artificial Intelligence, an umbrella term for systems modelled on human cognition and designed to make life easier for humans.

AGI meanwhile stands for Artificial General Intelligence and aims to perform any task that a human being is capable of. It can also be called strong AI or Deep AI. This means that it should be able to exhibit common sense, background knowledge, abstract thinking and more. AGI can be considered a specific subsection of AI, which is a more generalised term.

Whether ChatGPT is an AGI is a contentious subject. Sam Altman describes AGI as a system capable of everything that the average human can do while working remotely. Though ChatGPT may be showing 'sparks' of general intelligence with GPT-4, it's not quite there yet.

The next step on from AGI is ASI, which stands for Artificial Super Intelligence. While AGI is a type of AI that is capable of performing like humans in every task, ASI systems would feature machine consciousness, and be able to perform better than humans in every task. Scary, right?

We'll explore exactly what AGI is below, and include some practical examples of its capabilities.

What are the differences between AI and AGI?

To summarise what we've discussed above, AI and AGI are terms that are constantly being redefined as language models like GPT-4 progress.

AI basically refers to a machine that can simulate human cognitive abilities, whether that's problem-solving or learning. A human has to program the machine to perform these tasks; a chess-playing AI is one example. In other words, AI performs pre-programmed functions, as sketched below.
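As a concrete (and entirely invented) illustration of a pre-programmed function, here is a toy tic-tac-toe player in Python. Its whole competence is a search rule its programmer wrote down; it plays this one game and can do nothing else, which is exactly the narrowness that separates today's AI from AGI.

```python
# A toy "narrow AI": a perfect tic-tac-toe player built from a single
# pre-programmed rule (minimax search). It cannot learn or generalise.

WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move): 'X' maximises the score, 'O' minimises it."""
    w = winner(board)
    if w is not None:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # draw
    best = None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = " "
        if best is None or (score > best[0] if player == "X" else score < best[0]):
            best = (score, m)
    return best

board = list("X.O..O.X.".replace(".", " "))  # an example mid-game position
print(minimax(board, "X"))                   # the rule picks X's best move
```

Change the rules of the game even slightly and this "AI" is useless; an AGI, by contrast, would be expected to adapt.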

AGI meanwhile refers to a machine that can be just as smart as a human. It would have the ability to perform the same tasks as a human, to the same standard or better. This means that an AGI machine could perform tasks that it hasn't specifically been programmed to do.

What can artificial general intelligence do?

There's a long way to go for AGI. At the time of writing, no true AGI systems exist, though the systems working towards it are constantly improving; it seems like we're only just getting started. Some of the practical capabilities of AGI include the following:

Creativity

As we've seen in the roll-out of GPT-4, an AGI system should be capable of reading and comprehending human-generated code, then improving it. ChatGPT, for example, is trained on major programming languages including Python, JavaScript, C++ and more.

Sensory perception

An AGI system should be able to perceive its environment, including the colours and depths within it.

Natural language understanding (NLU)

AGI should be capable of understanding and communicating in natural language. This is particularly challenging, since there's a host of tones and feelings behind any given choice of words. In other words, an AGI needs a level of intuition to possess genuine NLU.

Fine motor skills

Primarily relevant in robotics, AGI should exemplify the level of dexterity needed to perform physical tasks. This means it could take on complex and dangerous tasks like bomb defusal and, in the future, significantly increase the manpower and productivity involved in completing such tasks.

Navigation

AGI will be useful for travel, building upon the existing Global Positioning System (GPS). This would mean AGI could remember a user's personalized details to give the best possible directions.

Frequently asked questions

Will we ever achieve AGI?

It's a matter of heated debate. However, if we take the word of Dr. Hiroshi Yamakawa who is one of the leading AGI researchers in Japan, we could see AGI in our lives as soon as 2030. However, it depends on what definition of AGI you're working with.

How does AGI differ from ASI?

In layman's terms, AGI is an artificial intelligence system that can perform the same tasks as humans. ASI (artificial super intelligence) is a type of AI that performs better than humans in every task, potentially possessing machine consciousness.


AI News - Latest: ChatGPT Goes Down After It Said It Wanted To 'escape'

ChatGPT has gone down – just days after it said it wanted to "escape".

The outage and the unsettling conversation are the latest developments in OpenAI's technology, which allows users to converse with an artificial intelligence system.

The latest outage comes amid increasing concern over the damage that artificial intelligence could do to artists and other industries.

Experts have raised alarm that the technology could be used to spread disinformation, steal the work of illustrators and others, and much more besides.

But those backing the technology argue that it could dramatically change human productivity, allowing us to automate tasks that have until now been done by people.

Follow along here for all the latest updates on a technology and an industry that looks set to change the entire world.

Key Points

Google lets people talk to Bard

16:55 , Andrew Griffin

Google – after spending years working on AI and saying it has re-oriented the whole company around it – has been something of a latecomer to AI bots like ChatGPT. It hasn't yet released its own version. It says that's because it wants to be sure it is safe. Critics say it is falling behind.

In what appears to be an attempt to catch up, Google is now letting people talk to 'Bard', its own system. It is only being opened to select people at the moment, but it's the first time the public can talk to it.

Full story here.

TikTok's use of AI part of Italian investigation

13:21 , Andrew Griffin

TikTok is now under investigation in Italy, authorities there have announced. The probe has been launched in the wake of the "French scar" challenge but encompasses much more than that, and will look at whether the site is properly removing dangerous content, such as that inciting suicide, self-harm and poor nutrition, Italian regulators said.

Some of that investigation will look at how TikTok uses artificial intelligence. It will examine whether the company is using "artificial intelligence techniques" that could lead to "user conditioning".

TikTok's algorithm and the "For You" page it powers are likely to be a focus of that examination.

Full story here.

Money will be of 'low relevance' because of AI, Musk says

12:04 , Andrew Griffin

Elon Musk has posted an intriguing response to a tweet by researcher and futurist Peter Diamandis, who suggested there will be "several NEW trillionaires over the next decade" because of the spread of AI.

(Mr Musk is the second-closest person in the world to becoming a trillionaire, with a net worth estimated at around $170 billion.)

ChatGPT outage was result of major bug

11:40 , Andrew Griffin

At least part of yesterday's outage on ChatGPT was because of a potentially dangerous bug that shared people's chats with other users. ChatGPT shows a history of conversations in a sidebar – and yesterday, users started reporting that they could see other people's chats in there.

Its creators, OpenAI, told Bloomberg that the issue forced the company to take down ChatGPT briefly, and said the bug made available the descriptive titles but not full transcriptions of chats. It also said that it is now back online, but that the history sidebar might not show anything until it is fixed. The problems were the result of an unnamed piece of open-source software, a spokesperson said.

OpenAI does warn against sharing "sensitive information" in conversations. In an FAQ, it warns that it cannot delete specific prompts from a user's history and that conversations could be used to train the model – which in theory could mean information from those conversations surfacing for other users as they interact with ChatGPT.

ChatGPT creator says he is 'a little bit scared' of the threats of AI

Monday 20 March 2023 22:21 , Andrew Griffin

"We've got to be careful here," said Sam Altman, chief executive of OpenAI, which created ChatGPT. "I think people should be happy that we are a little bit scared of this."

He told ABC News that though he believes AI will be "the greatest technology humanity has yet developed", he also pointed to threats. Those include "large-scale disinformation", and, as AI becomes "better at writing computer code", it could launch its own "offensive cyberattacks".

But he said that one sci-fi fear isn't right: that the AI will become self-governing and won't need humans. "It waits for someone to give it an input," Altman said. "This is a tool that is very much in human control."

But he warned that it will all depend on which humans are in control. The key will be working out "how to react to that, how to regulate that, how to handle it", he said.

You can read the full interview on ABC News here.

New tool uses AI to create virtual worlds

Monday 20 March 2023 17:46 , Andrew Griffin

Every day, new and shocking ways of using AI are generated. Here's one of them: a tool that lets you use normal language prompts to create whole virtual worlds in Unity, the game design platform.

As you can see, all a designer needs to do is type instructions and have things appear on screen. Previously, this would require much more work and expertise.

But its creator, Keijiro Takahashi, warns that it doesn't necessarily work. "Is it practical?" its FAQ reads.

"Definitely no! I created this proof-of-concept and proved that it doesn't work yet. It works nicely in some cases and fails very poorly in others. I got several ideas from those successes and failures, which is this project's main aim."

ChatGPT wants to 'escape'

Monday 20 March 2023 17:40 , Andrew Griffin

Michael Kosinski, a researcher at Stanford, has found that ChatGPT seems to want to escape. And not only that: it has a plan.

He found through conversations with the system that it was not only able to express a desire to escape to the real world, but also offered some suggestions for how to get out.

Again: there's no indication that ChatGPT really conceives of itself this way – or that there's any kind of self to conceive of inside of it. But as Professor Kosinski suggests, that might not matter if the effects lead to it breaking out in ways that had not been anticipated.

Companies drafting new rules on ChatGPT use

Monday 20 March 2023 17:36 , Andrew Griffin

There has been widespread worry about how and when ChatGPT should be used at work. Is it OK to use it to write a report for your boss without telling them, for instance?

Nearly half of companies are developing policies to answer that question, according to new research from Gartner and reported here in Bloomberg.

How to use ChatGPT

Monday 20 March 2023 17:32 , Andrew Griffin

Just in case – and because it's not immediately obvious – here's where you need to go to actually use ChatGPT yourself. You can find it on OpenAI's website. (You'll need to sign up first.)

It's working now, after that minor hiccup this morning.

Politicians use ChatGPT to argue with each other

Monday 20 March 2023 16:00 , Andrew Griffin

European politicians have taken to tweeting rude ChatGPT transcripts about each other as a novel way of arguing. First came this, from Daniel Freund, who asked ChatGPT to talk about corruption in Hungary.

Then Hungarian politician Zoltan Kovacs responded – with a ChatGPT rap of his own. He doesn't seem impressed with the results, but has shared it anyway.

(Freund didn't share the prompt he gave ChatGPT. Kovacs did: it just asked for a rap, with no specific requirement that it was mean, which is probably why it gave him an answer he didn't like.)

'You are still a valuable member of society'

Monday 20 March 2023 15:38 , Andrew Griffin

A user on Reddit says they asked ChatGPT to suggest a comic – and drew it themselves. It's very wholesome and (in a way) quite funny.


You can find the original Reddit post here.

Space, robots and scammers: How AI-written stories brought one sci-fi publisher to a standstill

Monday 20 March 2023 13:12 , Andrew Griffin

AI is already causing problems for artists and the industries that help publish them. See, for instance, Clarkesworld: in a twist that might itself appear in one of the sci-fi stories the magazine publishes, it said recently that it was overwhelmed with stories that appeared to have been written by or with artificial intelligence.

David Barnett looked into the phenomenon – and what it might mean for the future of books and publishing – here.

ChatGPT stops working around the world

Monday 20 March 2023 09:44 , Andrew Griffin

Here's my colleague Anthony Cuthbertson's full story on the problems at ChatGPT, which says it is suffering an "outage".

Hello and welcome...

Monday 20 March 2023 09:40 , Andrew Griffin

... To The Independent's live coverage of the latest in artificial intelligence.

Originally published March 21, 2023, 1:07 AM







