
MIT Technology Review

MIT Technology Review's What's Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

Thanks to the boom in artificial intelligence, the world of chips is on the cusp of a huge tidal shift. There is heightened demand for chips that can train AI models faster and ping them from devices like smartphones and satellites, enabling us to use these models without disclosing private data. Governments, tech giants, and startups alike are racing to carve out their slices of the growing semiconductor pie. 

Here are four trends to look for in the year ahead that will define what the chips of the future will look like, who will make them, and which new technologies they'll unlock.

CHIPS Acts around the world

On the outskirts of Phoenix, two of the world's largest chip manufacturers, TSMC and Intel, are racing to construct campuses in the desert that they hope will become the seats of American chipmaking prowess. One thing the efforts have in common is their funding: in March, President Joe Biden announced $8.5 billion in direct federal funds and $11 billion in loans for Intel's expansions around the country. Weeks later, another $6.6 billion was announced for TSMC. 

The awards are just a portion of the US subsidies pouring into the chips industry via the $280 billion CHIPS and Science Act signed in 2022. The money means that any company with a foot in the semiconductor ecosystem is analyzing how to restructure its supply chains to benefit from the cash. While much of the money aims to boost American chip manufacturing, there's room for other players to apply, from equipment makers to niche materials startups.

But the US is not the only country trying to onshore some of the chipmaking supply chain. Japan is spending $13 billion on its own equivalent to the CHIPS Act, Europe will be spending more than $47 billion, and earlier this year India announced a $15 billion effort to build local chip plants. The roots of this trend go all the way back to 2014, says Chris Miller, a professor at Tufts University and author of Chip War: The Fight for the World's Most Critical Technology. That's when China started offering massive subsidies to its chipmakers. 


"This created a dynamic in which other governments concluded they had no choice but to offer incentives or see firms shift manufacturing to China," he says. That threat, coupled with the surge in AI, has led Western governments to fund alternatives. In the next year, this might have a snowball effect, with even more countries starting their own programs for fear of being left behind.

The money is unlikely to lead to brand-new chip competitors or fundamentally restructure who the biggest chip players are, Miller says. Instead, it will mostly incentivize dominant players like TSMC to establish roots in multiple countries. But funding alone won't be enough to do that quickly—TSMC's effort to build plants in Arizona has been mired in missed deadlines and labor disputes, and Intel has similarly failed to meet its promised deadlines. And it's unclear whether, when the plants do come online, their equipment and labor force will be capable of the same level of advanced chipmaking that the companies maintain abroad.

"The supply chain will only shift slowly, over years and decades," Miller says. "But it is shifting."

More AI on the edge

Currently, most of our interactions with AI models like ChatGPT are done via the cloud. That means that when you ask GPT to pick out an outfit (or to be your boyfriend), your request pings OpenAI's servers, prompting the model housed there to process it and draw conclusions (known as "inference") before a response is sent back to you. Relying on the cloud has some drawbacks: it requires internet access, for one, and it also means some of your data is shared with the model maker.  
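
To make that trade-off concrete, here is a minimal sketch of the cloud round trip using the openai Python client. The model name and prompt are illustrative choices, not from the article; the point is that everything in the request travels to the provider's servers before inference happens.

```python
# Minimal sketch of cloud-based inference: the request leaves your device,
# the provider's servers run the model (inference), and text comes back.
# Assumes the `openai` Python package and an API key in OPENAI_API_KEY;
# the model name below is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Pick out an outfit for a rainy day."}],
)

# Everything in `messages` traveled to the provider's servers --
# the privacy trade-off described above.
print(response.choices[0].message.content)
```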

That's why there's been a lot of interest and investment in edge computing for AI, where the process of pinging the AI model happens directly on your device, like a laptop or smartphone. With the industry increasingly working toward a future in which AI models know a lot about us (Sam Altman described his killer AI app to me as one that knows "absolutely everything about my whole life, every email, every conversation I've ever had"), there's a demand for faster "edge" chips that can run models without sharing private data. These chips face different constraints from the ones in data centers: they typically have to be smaller, cheaper, and more energy efficient. 
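
For contrast, here is a rough sketch of on-device inference using the Hugging Face transformers library and a deliberately tiny open model. The model choice is illustrative, not something the article names; after the one-time weight download, the prompt is processed entirely locally, which is the privacy property edge chips are designed to serve.

```python
# Sketch of "edge" inference: the model weights live on the device and the
# prompt never leaves it. Assumes the `transformers` package; the model
# name is illustrative -- any small open model that fits in local memory
# works. The first call downloads the weights; after that, generation runs
# entirely on the local device with no network call.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")  # small, CPU-friendly

out = generator("The key constraint for edge chips is", max_new_tokens=30)
print(out[0]["generated_text"])
```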

The US Department of Defense is funding a lot of research into fast, private edge computing. In March, its research wing, the Defense Advanced Research Projects Agency (DARPA), announced a partnership with chipmaker EnCharge AI to create an ultra-powerful edge computing chip used for AI inference. EnCharge AI is working to make a chip that enables enhanced privacy but can also operate on very little power. This will make it suitable for military applications like satellites and off-grid surveillance equipment. The company expects to ship the chips in 2025.

AI models will always rely on the cloud for some applications, but today they are mostly confined to data centers. New investment and interest in improving edge computing could bring faster chips, and therefore more AI, to our everyday devices. If edge chips get small and cheap enough, we're likely to see even more AI-driven "smart devices" in our homes and workplaces.

"A lot of the challenges that we see in the data center will be overcome," says EnCharge AI cofounder Naveen Verma. "I expect to see a big focus on the edge. I think it's going to be critical to getting AI at scale."

Big Tech enters the chipmaking fray

In industries ranging from fast fashion to lawn care, companies are paying exorbitant amounts in computing costs to create and train AI models for their businesses. Examples include models that employees can use to scan and summarize documents, as well as externally facing technologies like virtual agents that can walk you through how to repair your broken fridge. That means demand for cloud computing to train those models is through the roof. 

The companies providing the bulk of that computing power are Amazon, Microsoft, and Google. For years these tech giants have dreamed of increasing their profit margins by making chips for their data centers in-house rather than buying from companies like Nvidia, a giant with a near monopoly on the most advanced AI training chips and a value larger than the GDP of 183 countries. 

Amazon started its effort in 2015, acquiring startup Annapurna Labs. Google moved next in 2018 with its own chips called TPUs. Microsoft launched its first AI chips in November, and Meta unveiled a new version of its own AI training chips in April.


That trend could tilt the scales away from Nvidia. But Nvidia doesn't only play the role of rival in the eyes of Big Tech: regardless of their own in-house efforts, cloud giants still need its chips for their data centers. That's partly because their own chipmaking efforts can't fulfill all their needs, but it's also because their customers expect to be able to use top-of-the-line Nvidia chips.

"This is really about giving the customers the choice," says Rani Borkar, who leads hardware efforts at Microsoft Azure. She says she can't envision a future in which Microsoft supplies all chips for its cloud services: "We will continue our strong partnerships and deploy chips from all the silicon partners that we work with."

As cloud computing giants attempt to poach a bit of market share away from chipmakers, Nvidia is also attempting the converse. Last year the company started its own cloud service so customers can bypass Amazon, Google, or Microsoft and get computing time on Nvidia chips directly. As this dramatic struggle over market share unfolds, the coming year will be about whether customers see Big Tech's chips as akin to Nvidia's most advanced chips, or more like their little cousins. 

Nvidia battles the startups 

Despite Nvidia's dominance, there is a wave of investment flowing toward startups that aim to outcompete it in certain slices of the chip market of the future. Those startups all promise faster AI training, but they have different ideas about which flashy computing technology will get them there, from quantum to photonics to reversible computation. 

But Murat Onen, the 28-year-old founder of one such chip startup, Eva, which he spun out of his PhD work at MIT, is blunt about what it's like to start a chip company right now.

"The king of the hill is Nvidia, and that's the world that we live in," he says.

Many of these companies, like SambaNova, Cerebras, and Graphcore, are trying to change the underlying architecture of chips. Imagine an AI accelerator chip as constantly having to shuffle data back and forth between different areas: a piece of information is stored in the memory zone but must move to the processing zone, where a calculation is made, and then be stored back to the memory zone for safekeeping. All that takes time and energy. 
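
A toy cost model can make this concrete. The numbers below are made-up placeholders, not measurements of any real accelerator; the point is only that when every operand must cross the memory/processor boundary, data movement swamps the arithmetic.

```python
# Toy cost model (not a real simulator) of the shuttling described above.
# The relative costs are made-up placeholders; the takeaway is that in a
# conventional design, moving operands between the memory zone and the
# processing zone dominates the arithmetic itself.
MOVE = 10.0    # relative cost of moving one value across the memory/compute boundary
COMPUTE = 1.0  # relative cost of one multiply-accumulate

def shuttled_dot_product_cost(n: int) -> float:
    """n-element dot product with both operands fetched from memory."""
    fetch_operands = 2 * n * MOVE  # load each pair of values
    arithmetic = n * COMPUTE       # multiply-accumulate them
    write_back = MOVE              # store the single result
    return fetch_operands + arithmetic + write_back

print(shuttled_dot_product_cost(1024))  # movement dwarfs computation
```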

Making that process more efficient would deliver faster and cheaper AI training to customers, but only if the chipmaker has good enough software to allow the AI training company to seamlessly transition to the new chip. If the software transition is too clunky, model makers such as OpenAI, Anthropic, and Mistral are likely to stick with big-name chipmakers. That means companies taking this approach, like SambaNova, are spending a lot of their time not just on chip design but on software design too.

Onen is proposing changes one level deeper. Instead of traditional transistors, which have delivered greater efficiency over decades by getting smaller and smaller, he's using a new component called a proton-gated transistor that he says Eva designed specifically for the mathematical needs of AI training. It allows devices to store and process data in the same place, saving time and computing energy. The idea of using such a component for AI inference dates back to the 1960s, but researchers could never figure out how to use it for AI training, in part because of a materials roadblock—it requires a material that can, among other qualities, precisely control conductivity at room temperature. 
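
Continuing the toy arithmetic from above (restated here so the snippet stands alone, and again purely illustrative rather than a model of Eva's actual device): if the weights already live where the multiply happens, only the inputs stream in and one result comes out, which cuts the movement cost roughly in half in this sketch.

```python
# Toy contrast with made-up cost units: when weights are stored inside the
# compute element -- the "store and process data in the same place" idea --
# only the input vector crosses the boundary, instead of every operand.
MOVE = 10.0    # relative cost of one value crossing the boundary
COMPUTE = 1.0  # relative cost of one multiply-accumulate

def shuttled_cost(n: int) -> float:
    return 2 * n * MOVE + n * COMPUTE + MOVE  # both operands move

def in_memory_cost(n: int) -> float:
    return n * MOVE + n * COMPUTE + MOVE      # only inputs move

n = 1024
print(shuttled_cost(n) / in_memory_cost(n))   # ~1.9x cheaper in this toy
```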

One day in the lab, "through optimizing these numbers, and getting very lucky, we got the material that we wanted," Onen says. "All of a sudden, the device is not a science fair project." That raised the possibility of using such a component at scale. After months of working to confirm that the data was correct, he founded Eva, and the work was published in Science.

But in a sector where so many founders have promised—and failed—to topple the dominance of the leading chipmakers, Onen frankly admits that it will be years before he'll know if the design works as intended and if manufacturers will agree to produce it. Leading a company through that uncertainty, he says, requires flexibility and an appetite for skepticism from others.

"I think sometimes people feel too attached to their ideas, and then kind of feel insecure that if this goes away there won't be anything next," he says. "I don't think I feel that way. I'm still looking for people to challenge us and say this is wrong."


In Building A Better Future, Technology Will Only Take Us So Far


After attending last week's Milken Institute Global Conference, "Shaping a Shared Future," I had one main takeaway: There was a lot of good talk about shaping the future, but very little about how we will share that future.

It's always a privilege to be part of such a prestigious event, especially when it is held in haute Beverly Hills, at the Beverly Hilton. To be honest, as a Chicago guy, I needed some time to acclimate to the beautiful Hollywood hills. But when I did, I was struck by the intensity of the attendees' commitment to addressing big issues and figuring out how technology can help solve climate change and other challenges.

AI-ifying Everything

My first stop was at a panel discussion on a topic near and dear to my heart: New Realities in Asset Management. Throughout this talk, one "reality" crowded out other issues: artificial intelligence, and its potential to transform almost everything. The panelists' key point was that AI will and should eliminate work that doesn't add real value, freeing up individuals to focus their time on activities that directly advance an organization and its mission. For asset managers, that means delegating to AI applications that collect, analyze and summarize data—activities that now take up huge amounts of time—and reallocating those hours to the higher-level pursuit of interpreting results and applying them to investments and the business. Panelists talked about AI's ability to expand the reach of analysts and portfolio managers, to break down silos between functions, asset classes and business lines, and to help foster innovation.

The focus on AI was not limited to the asset management panel. In fact, AI was omnipresent in sessions and cocktail conversations. In the conference materials, organizers highlighted AI as a central theme, explaining that "AI will revolutionize not only how we work, live and play, but even what it means to think and be human." On panels specifically devoted to the topic, some of the world's most prominent AI experts—including executives from OpenAI and even Elon Musk—told the audience that the world's biggest companies are already applying generative AI (GenAI) to use cases ranging from automating basic customer service to developing complex new medicines.

Shaping The Future

Throughout the week, attendees discussed how the same technologies will help us address environmental and geopolitical risks, including climate change, the rising threat of authoritarianism, economic inequality and the retirement savings gap. I was genuinely impressed by how much the "who's who" of business, media (and social media) are committed to improving the world, and by their ability to explain the role technology will have to play in future solutions.

My one critique of the event is that, for all the talk about how people and technology will shape the future, there was much less emphasis on how that future will and should be shared. The conference organizers defined "shaping a shared future" as "finding common ground amid the complex issues that have arisen in the post-pandemic world." I think that's a good definition. To address problems as sweeping in scope as climate change and inequality, we will need a shared common ground that supports collaboration across individuals, companies and nations.

Marshaling The Power Of Business

From my vantage point, that common ground starts with education, and the conference devoted little attention to the task of improving educational systems across the globe and in the United States, specifically. Because global solutions will inevitably require the cooperation of companies around the world, a shared common ground will also have to include efforts to support and incentivize the private sector. That means supplementing the "sticks" regulators have employed to motivate companies to address climate change and other issues with "carrots" that encourage them to bring their brilliant workforces and deep wallets to the table.

Although the vilification of corporate profits was fairly muted at the Milken conference, I wish participants had spent more time talking about how to marshal the power of business to tackle our big challenges. With Brexit and the growing rifts between the United States and China, the world has made a decided pivot from globalism to nationalism. Going forward, we will need to revert to a more collaborative framework in which businesses and governments unify to improve the lives of many.

Finally, even in the era of AI and other transformative technologies, implementing global solutions will require a broader sense of unity that is lacking today. Perhaps due to the impact of social media, we share less and less common ground. Whether the topic is national borders, foreign policy or political leadership, our society seems more divided than ever.

As the presenters at the Milken conference so skillfully explained, artificial intelligence and other innovations are giving us the tools we need to shape a better future. However, technology won't help us find the shared common ground we'll need to actually achieve that better world. That's up to us.


The AI Classroom Hype Is All Wrong, Some Educators Say

Many educators who have used generative artificial intelligence tools in their work have called the emerging technology a "game changer."

Some say it's been especially helpful in reducing the time it takes to do planning or administrative work, such as creating schedules, crafting lesson plans, and writing letters of recommendation for students. Teachers say they work an average of 57 hours a week, but less than half of that time is spent teaching.

"I think the use of AI has streamlined many aspects of teaching and has saved much prep time for teachers," said a high school fine arts teacher in California in an open-ended response to an EdWeek Research Center survey conducted in March and April.

But amid all the encouragement to try the technology, there are plenty of educators who haven't tried AI tools and don't plan to start. These educators are more skeptical of the technology and don't believe it should be used in K-12.

In open-ended responses to the EdWeek Research Center survey, educators shared their reasoning:

It could degrade critical thinking skills

   AI is not as wonderful as you all make it out to be. How do we expect our next generation to learn to think if all we teach them is how to use AI?

— District-level administrator, Ohio

   AI is driving a wedge between critical thinking and imagination.

— High school foreign language teacher, New Jersey

   AI are machines. They have been trained using stolen data. Students should be learning, questioning, problem-solving, and doing their own work. Teachers should as well. I do not believe AI can ethically be used.

— High school English teacher, Louisiana

   Students should not use AI until they have demonstrated some level of mastery on a subject. Students should not even use a calculator until they can do arithmetic calculations without tools. Problem solving starts in the mind, not on a keypad.

— High school math teacher, Texas

   AI and use of computers in the classroom has diminished everyone's ability to think, learn and reason. It's too easy to punch in a subject and get an immediate answer, which may or may not be correct. How many times have we heard "the computer model says this or that," so therefore that's the end of the discussion. Now I hear AI says this or that. Machines do not and can never have the capabilities of the human mind and the human experience. They can never have the ability to reason. They can never have the ability to rely on "gut instinct," which is correct most of the time. They can never have the ability to say "something just isn't right here." All they can do is look at the data that is fed into them and go from there. And that data is totally dependent on the character of the human or humans feeding it into them.

— District-level administrator, Texas

   I feel AI is used less as a resource and more as a crutch. I was shaken when I found out how many yearbook groups have used AI to write their entire yearbook and make the theme and set the ladder and put it together. We don't like students using AI because it's considered "plagiarism" but yet some teachers use it for everything. I don't mind AI as a brainstorming tool but when you give AI the ability to do all your work is when I have issues with it.

— Middle school teacher, Missouri

The human touch is better

   I have never used AI for anything in my job. I would think we still have to follow through with the actual teaching. AI can't do what I do!!

— High school math teacher, Michigan

   While AI is the future, it's more important that teachers know their subject matter, and AI should only be used as a supplement to the teacher's scope of knowledge. To use it beyond that is ineffective as the presentation of the knowledge will be presented with less passion and clarity.

— Middle school physical education teacher, Virginia

   While I believe AI is here to stay, I do not believe that it should be used to simply replace the human aspect of the learning experience. If AI is used by instructors or teachers heavily, then the computer is essentially doing the teachers' jobs for them and the teacher is simply the middle person who repeats what the computer tells them.

— High school career-technical education teacher, Missouri

   AI concerns me in that educators need to know their "stuff" before blindly having AI create lessons, etc., to administer in class. I have tried AI and caught multiple errors in its creation. If I had used what AI created, I would have considered myself unethical in teaching students through that lesson because it contained many errors.

— District-level administrator, Alabama

   Utilizing AI to develop assessments is impersonal. If the general scientific community can acknowledge that generative AI utilizes biased information to create material, why would we rely on these tools to create unbiased assessments?

— High school social studies teacher, Montana

The K-12 system isn't prepared

   I think that AI is a very dangerous phenomenon for learning and education. It seems like it is thrust upon us and unleashed without adequate preparation to handle the consequences for learning and teaching. I think this should be the number one topic for governments and academic institutions to address immediately.

— High school foreign language teacher, Pennsylvania

   I fear AI is yet another trend that education professionals are running headlong into without sufficient forethought and planning.

— Elementary fine arts teacher, Virginia

   I have never used AI and never will. I think it gives fuel to a fire that we won't be able to control.

— Elementary teacher, North Carolina

Concerns about how it affects their jobs

   Last year, I spent a lot of time talking with English teaching colleagues about how to tackle the new problem of AI generated student work. We researched apps to check for plagiarism and AI produced writing and didn't find a good source to help us. This new issue is requiring teachers to rethink the types of assignments we give and the ways we ask students to produce writing in class so we can ensure they are producing original works. It's frustrating and time consuming.

— High school English teacher, Minnesota

   Artificial Intelligence will render my job unnecessary within five years. My students use Grammarly and ChatGPT to write their essays, and they even use it to email their teachers. Commercials show corporations praising their staff for using it to email each other. If humans no longer need to learn how to communicate well in writing—if AI does it for us—then what I have been teaching students for decades is no longer needed. What's more, my students already realize this and are showing it in their attitudes and efforts in writing class.

— Middle school English teacher, Massachusetts
