Podcast Summary: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI – Lex Fridman Podcast #367

Recommendation

In a March 2023 podcast, MIT researcher Lex Fridman interviewed Sam Altman – CEO of OpenAI, the company behind ChatGPT and GPT-4 – about the future of AI. Many believe super-intelligent AI will provoke fundamental societal, political, and economic changes in people’s lifetimes. Will AI eliminate poverty and suffering or annihilate the human species? Will it take people’s jobs or make their work more rewarding? Such conversations between political leaders, scientists, engineers and philosophers are critical before this emerging intelligence – however it manifests itself – falls into the wrong hands.

Take-Aways

  • ChatGPT represents a defining moment in AI development.
  • Society needs to agree on broad AI boundaries while also accounting for user preferences.
  • Different countries and institutions will likely use different versions of an AI base system tailored to local rules and culture.
  • GPT-4 is not an AGI (at least not yet).
  • Running OpenAI as a business invites commercial pressure and criticism.
  • AI will replace certain jobs but make other people more productive.
  • GPT advancements revolutionize software development.
  • Altman still refers to GPT as “it” but that could change.

Summary

ChatGPT represents a defining moment in AI development.

Sam Altman thinks people will look back on GPT-4 as “a very early AI.” Like archaic computers, it’s comparatively slow and buggy, but it points to an emerging technology that will soon become essential to human lives. Progress in this field is a “continual exponential.” GPT is a groundbreaking large language model (LLM), but it was its deployment in the chatbot ChatGPT that made it so accessible and easy to use. GPT-4 was trained on massive quantities of data from open databases, websites and partner companies. Reinforcement Learning from Human Feedback (RLHF) helps align the system so it performs human-initiated tasks efficiently and filters out unwanted material. The system itself can formulate follow-up questions, acknowledge mistakes and reject requests that are deemed out of bounds.
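To make the RLHF idea concrete, here is a minimal sketch of a single alignment step. This is not OpenAI’s actual pipeline; the candidate generator and reward scorer below are hypothetical stand-ins for the learned models involved.

```python
import random

# Hypothetical stand-ins: a "policy" that proposes candidate replies,
# and a reward model that in real RLHF is trained on human preference
# rankings (here it scores at random, purely for illustration).
def generate_candidates(prompt, n=4):
    return [f"{prompt} -> candidate reply #{i}" for i in range(n)]

def reward_model(prompt, reply):
    return random.random()

def rlhf_step(prompt):
    """One conceptual RLHF step: sample replies, score them with the
    reward model, and reinforce the highest-scoring one."""
    candidates = generate_candidates(prompt)
    best = max(candidates, key=lambda r: reward_model(prompt, r))
    # A real system would apply a policy-gradient update (e.g. PPO)
    # here; this sketch only reports which reply would be reinforced.
    print(f"Reinforce: {best!r}")

rlhf_step("Explain photosynthesis simply")
```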

“Like any new branch of science, we’re going to discover new things that don’t fit the data and have to come up with better explanations, and that is the ongoing process of discovering science.” (Sam Altman, CEO of OpenAI)

Releasing the system for public use has helped researchers learn how useful it proves in real-world scenarios and how it can help create better products and services.

GPT-4 is helping to redefine human knowledge, making the leap from a box of facts toward true wisdom. Some scholars push back against the notion that the system displays even a crude form of reasoning, but there is no denying that, in some sense, AI will add to human wisdom.

“The thing that’s most exciting is that somehow out of ingesting human knowledge, it’s coming up with this reasoning capability, however we want to talk about that.” (Sam Altman)

The “collective intelligence” of people accessing GPT-4 helps OpenAI research teams find the AI’s weaknesses and strengths. User feedback helps shape the technology by exposing imperfections. Altman believes it’s crucial to correct such mistakes now, while the stakes are low. The biases inherent in earlier GPT models have been reduced in GPT-4, but no model can be 100% unbiased.

Society needs to agree on broad AI boundaries while also accounting for user preferences.

OpenAI conducted safety evaluations and worked to align the system with human concerns and sensibilities. Sam Altman believes that alignment has to advance at a faster pace than the model’s capabilities. So far, complete alignment has not been possible. However, RLHF works reasonably well at GPT-4’s current scale, making it a safer, more useful system.

OpenAI’s recently launched “system message” functionality gives users more control. For example, users can instruct the model to “only answer this message as if you were Shakespeare doing thing X.” Designing a great steering prompt for GPT-4 involves studying how the order of words and clauses affects the model’s response, somewhat parallel to how human conversation unfolds.
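In API terms, the system message is simply the first entry in the conversation an application sends to the model. A minimal sketch using the openai Python package as it worked around GPT-4’s launch (the Shakespeare instruction adapts Altman’s example; an API key is assumed to be configured):

```python
import openai  # pre-1.0 openai package; assumes OPENAI_API_KEY is set

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        # The system message steers the model's behavior for the session.
        {"role": "system",
         "content": "Only answer as if you were Shakespeare."},
        # The user message carries the actual request.
        {"role": "user",
         "content": "Describe a sunrise over London."},
    ],
)
print(response["choices"][0]["message"]["content"])
```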

Different countries and institutions will likely use different versions of an AI base system tailored to local rules and culture.

When people discuss aligning AI, tensions arise concerning who is allowed to decide the parameters, and how to balance human beliefs and prejudices using a democratic process.

“My dream scenario…is that every person on Earth would come together, have a really thoughtful, deliberative conversation about where we want to draw the boundary on this system…And then we and other builders build a system that has that baked in. Within that, then different countries, different institutions can have different versions. So there’s like different rules about, say, free speech in different countries. And then different users want very different things. And that can be within the bounds of what’s possible in their country.” (Sam Altman)

AI researchers must remain heavily involved in this process. They build the systems and fix them when they break. They know the potential of these models and must be held accountable for more than merely supplying the inputs.

OpenAI has considered putting out an unrestricted base model for researchers, but Altman feels most people want a model that “has been RLHF’d to the worldview they subscribe to.” One problem is that when objectionable or erroneous output leaks onto social media, it’s exponentially amplified.

GPT employs moderation tools that help it identify inappropriate requests and refuse to answer them. Such tools are being built while the system is running, and they are still not perfect. Altman admits he doesn’t like being “scolded by a computer,” which elicits a “visceral response” from him.

“A story that has always stuck with me…is that the reason Steve Jobs put that handle on the back of the first iMac…was that you should never trust a computer you couldn’t throw out a window.” (Sam Altman)
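On the developer side, OpenAI exposes this kind of screening through its moderation endpoint. Here is a minimal sketch, again assuming the pre-1.0 openai Python package and a configured API key; the wrapper function is illustrative:

```python
import openai

def is_allowed(prompt: str) -> bool:
    # "flagged" is True when any policy category (hate, violence,
    # self-harm, ...) is triggered, letting a wrapper application
    # refuse to answer instead of passing the prompt to the model.
    result = openai.Moderation.create(input=prompt)["results"][0]
    return not result["flagged"]

if is_allowed("How do I bake bread?"):
    print("Prompt passed moderation; safe to send to the model.")
else:
    print("Refused: the prompt was flagged by moderation.")
```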

GPT-4 treats people like adults, unlike the earlier GPT-3 model. Moving from GPT-3 to GPT-4 involved many technical leaps, melding together “small wins.” Those wins, bolstered by data collection, training and optimization, multiplied into larger leaps. GPT-3 and GPT-3.5 had about 175 billion parameters in their neural networks; GPT-4 is believed to have significantly more. It is arguably the most complex software ever created, encompassing much of humanity’s existing text, yet it will probably be thought of as “trivial” in two decades.

GPT-4 is not an AGI (at least not yet).

Can GPT or a similar system reconstruct the “magic” of being a human, directly experiencing the world? Can LLMs lead to artificial general intelligence (AGI)? Altman would not be surprised if GPT-10 becomes the world’s first true AGI. However, advancing from today’s AI to AGI poses potentially serious dangers, so the transition must be handled in a way that delivers genuine benefits.

Right now, GPT is a tool people use in a feedback loop, to amplify abilities and potential. Some programmers feel anxious about the future, but most appreciate that AI makes them much more productive. Altman thinks people will always want to work, crave drama and status, create things, and feel useful. AI may help people find better ways to do these things.

“I hate to sound like Utopic Tech Bro here, but if you’ll excuse me for three seconds. The level of the increase in quality of life that AI can deliver is extraordinary. We can make the world amazing, and we can make people’s lives amazing. We can cure diseases. We can increase material wealth. We can help people be happier, more fulfilled.” (Sam Altman)

There’s no guarantee that complete human alignment will remain possible as AI becomes super-intelligent. Continuous oversight is essential, along with eliminating “one-shot-to-get-it-right” scenarios. Significant safety work was performed before these deep learning systems and LLMs were released, but safeguards must be updated and scaled as the technology progresses. “AI takeoff, or fast takeoff” remains a concern.

Altman was surprised at how quickly people embraced ChatGPT, but many thought GPT-4 didn’t offer an impressive update. Altman suggests an uncertain timeline for AGI development, but he thinks “slow takeoff, short timelines” would provide the best route, with OpenAI optimized for maximum impact.

GPT’s emergence as an AGI might not be immediately recognized. For example, is AGI the interface or the actual wisdom inside it? A model could be capable of superintelligence yet remain locked behind a limited interface. Merely adding RLHF to the base model made ChatGPT much more useful and impressive; a few hundred more such tricks might accomplish AGI.

GPT-4 can “fake” consciousness, and answer as though it were sentient, so the question persists: What’s the difference between fake consciousness and actual consciousness in a computer program?

“I believe AI can be conscious. So then the question is, what would it look like when it’s conscious? What would it behave like? And it would probably say things like, first of all, I am conscious. Second of all, display capability of suffering, an understanding of self, of having some memory of itself, and maybe interactions with you.” (Lex Fridman)

OpenAI co-founder Ilya Sutskever has said that if you trained a model on data with no mention of consciousness, then described to it the experience of consciousness and the model responded, “Yes, I know exactly what you mean” – that might point to sentience. Consciousness may be the “fundamental substrate” in which everyone exists, in a simulation or a dream.

People should entertain a healthy fear of how AGI could evolve. AGI may give rise to disinformation campaigns and economic shocks for which society is not prepared. People might not be aware that LLMs are controlling the flow of information on social media. Open-source LLMs with few safety controls will likely appear. Laws could restrain them, or more powerful AI could be employed to detect them, but this effort must start soon.

Running OpenAI as a business invites commercial pressure and criticism.

Resisting market pressure from companies like Google, Apple and Meta could present issues, but OpenAI intends to stick to its values and avoid shortcuts rather than try to out-compete them. When OpenAI began in 2015, Altman remembers, people thought the team was “batshit insane” for working on AGI. They don’t get such negative comments these days.

OpenAI started as a nonprofit organization and now operates as a “capped-profit” company. Other companies are investing large sums in the technology and grappling with “what’s at stake.” Hopefully, capitalism won’t entertain designs that could destroy the planet. Some developers are in line to become the most powerful humans on Earth. Such power must be equitably structured, and the technology transparent.

“I think you want decisions about this technology and certainly decisions about who is running this technology to become increasingly democratic over time. We haven’t figured out quite how to do this, but part of the reason for deploying like this is to get the world to have time to adapt and to reflect and to think about this, to pass regulations and for institutions to come up with new norms for the people working together.” (Sam Altman)

Altman is less concerned about the PR risk than about how the technology will be employed. He appreciates fair media coverage and always seeks feedback from smart people as everyone swims through these “uncharted waters.”

AI will replace certain jobs but make other people more productive.

Altman is trying to avoid the San Francisco “groupthink bubble” by going on a world user tour to speak to people in different cities this year. It helps him ascertain bias in feedback raters, and appreciate people with various worldviews. He aspires to make GPT systems less biased than humans.

Every technological revolution makes some jobs disappear but enhances others. AI will create new jobs that people can’t currently imagine, with more personal satisfaction and creative potential. AI’s economic impacts will drive political transformation. The costs of energy and intelligence will drop dramatically over the next two decades. Resources will be reallocated to lift the world’s struggling people. Society will become much wealthier in ways that today are difficult to imagine.

“I think it does go the other way too, like the sociopolitical values of the Enlightenment enabled the long-running technological revolution and scientific discovery process we’ve had for the past centuries. But I think we’re just going to see more. I’m sure the shape will change, but I think it’s this long and beautiful exponential curve.” (Sam Altman)

It’s unclear whether one centralized, concentrated AGI would benefit humanity more than several distributed AGIs. One upside of the AI vision is that the world might become more united. Yet the massive size and power of collective intelligence controlled by AI should be everyone’s concern.

Identifying truth is never an easy task. Free speech sometimes involves spreading biased opinions, factual errors or hate speech. AI will struggle with such challenges in different ways than social media does. OpenAI bears responsibility for the tools it releases into the world, but it cannot be held accountable for every way those tools are used.

GPT advancements revolutionize software development.

Some of GPT’s greatest accomplishments reside in the field of programming. Developers build tools on top of it that help them do their jobs more efficiently and creatively. The system generates and adjusts code to perform various tasks. A back-and-forth dialogue streamlines the coding process because the model can catch and fix its mistakes as they arise.
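That dialogue can be automated. Below is a minimal sketch of a repair loop that feeds test failures back to the model until the code passes, using the pre-1.0 openai ChatCompletion API; the run_tests harness is a hypothetical stub:

```python
import openai  # assumes OPENAI_API_KEY is set

def chat(messages):
    reply = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    return reply["choices"][0]["message"]["content"]

def run_tests(code_text):
    # Hypothetical test harness: return an error string, or None on success.
    return None

messages = [{"role": "user",
             "content": "Write a Python function fib(n) that returns "
                        "the n-th Fibonacci number."}]
code = chat(messages)

for _ in range(3):  # a few repair rounds are usually enough
    error = run_tests(code)
    if error is None:
        break
    # Show the model its own output and the failure so it can fix the bug.
    messages += [{"role": "assistant", "content": code},
                 {"role": "user",
                  "content": f"That fails with: {error}. Please fix it."}]
    code = chat(messages)
```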

Altman still refers to GPT as “it” but that could change.

Altman doesn’t use a pronoun other than “it” to describe an AI system, because it is dangerous to project life onto a tool. He understands why some people are drawn to romantic relationships with AIs, or want to treat them as pets or companions, but he’s not personally interested in that.

Altman believes the manner in which GPT-4 converses with people matters, but he’s more interested in the science behind it, and solving its remaining mysteries. AI could help humanity build a powerful space probe, for instance, or run complex scientific experiments.

Altman has always ignored advice, or listened to it with caution. He understands that not everyone agrees with his development approach, but his teams are making respectable progress. GPT-4 represents one point on an “exponential curve” that began with Earth’s earliest humans.

About the Podcast

Sam Altman is an American entrepreneur, investor and programmer. He was the co-founder of Loopt and is the current CEO of OpenAI. Lex Fridman is a research scientist at MIT who specializes in machine learning and human-robot interaction. His podcast involves “conversations about the nature of intelligence, consciousness, love and power.”
