
Why the AI race may never have a clear winner

However you rearrange the acronyms of Big Tech, it is unwise to bet on any single company in the AI or generative AI race, simply because too many variables are in play. New alliances, takeovers, and investments; the rise of novel, disruptive technologies; and global regulation can all reshape the field and derail mega deals.

Consider these developments. On 25 September, Amazon.com Inc. announced that it would invest up to $4 billion in Anthropic, becoming a minority shareholder and propelling itself into a generative AI race currently dominated by OpenAI, Microsoft, Google, Meta, and Nvidia. Anthropic, founded in 2021 by Dario Amodei and other former OpenAI researchers who had worked on the GPT-3 language model, recently introduced its new AI chatbot, Claude 2.

Last year, Google invested a reported $300 million in Anthropic (the exact sum was never officially disclosed) for a roughly 10% stake. The deal let Anthropic expand its AI computing systems using Google Cloud and use Google's infrastructure to train and deploy its AI models.

Shortly after Amazon’s investment announcement, OpenAI – not to be outdone – announced that it was beginning to introduce new voice and image capabilities in ChatGPT.

Moreover, just a week ago, on 20 September, Amazon stated that its large language model (LLM) would enhance Alexa’s “conversational abilities with a new level of smart home intelligence”, one day after Google unveiled a series of updates to Bard that would grant the chatbot access to its suite of tools, including YouTube, Google Drive, and Google Flights.

Meanwhile, Meta is already working on a generative AI chatbot named ‘Gen AI Personas’ for younger users on Instagram and Facebook. According to The Wall Street Journal, it is expected to be revealed this week at Meta Connect, the company’s two-day annual event commencing on Wednesday. Microsoft, on the other hand, has announced its intentions to embed its generative AI assistant ‘Copilot’ in many of its products.

The race for a share of the generative AI market is crucial for major tech companies, and with good reason. In a McKinsey Global Survey published in August, one-third of respondents said their organizations already use generative AI in at least one business function, and 40% said their organizations would increase their overall investment in AI because of advances in generative AI.

Nigel Green, the CEO of deVere Group, a financial consultancy, advises investors to act now to gain an “early advantage.” According to him, “Getting in early allows investors to establish a competitive advantage over latecomers. They can secure favorable entry points and lower purchase prices, maximizing their potential profits. This technology has the ability to disrupt existing industries or create entirely new ones. Early investors are likely to benefit from the exponential growth that often accompanies the adoption of such technologies. As these innovations gain traction, their valuations could skyrocket, resulting in significant returns on investment.”

Green, however, cautions that while “AI is currently the big story,” investors should diversify across asset classes, sectors, and regions as always in order to maximize returns per unit of risk (volatility) incurred.

That said, change seems to be the only constant in AI, which makes betting on any single company futile.

For a while, Google seemed to have the upper hand in the AI race: its transformer architecture, which can predict the next word, sentence, or even paragraph, serves as the foundation for all large language models, or LLMs. But when Microsoft partnered with OpenAI, many began to write Google off, despite its "AI first" mission. OpenAI's generative pre-trained transformer (GPT) and the GPT-powered chatbot ChatGPT crossed 100 million users within two months of ChatGPT's launch on November 30, 2022. Bard's early flaws only added to Google's troubles and bolstered ChatGPT's position.

Just when it appeared that Google would lag behind in the AI competition, the company announced that it would merge its AI research units, Google Brain and DeepMind. Furthermore, Google revitalized Bard and made it accessible in 180 countries, including India. Bard employs the Language Model for Dialogue Applications (LaMDA), built on the transformer architecture that Google developed in 2017. It learns by "reading" trillions of words, enabling it to recognize the patterns that constitute human language. Gemini, currently undergoing training, is now being touted as Google's "next-generation foundation model."
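The self-attention step at the heart of transformer models like LaMDA can be sketched in a few lines. This is a toy illustration only: each query token scores every key, the scores are normalized with a softmax, and the output is a weighted mix of value vectors. All vectors here are made-up numbers chosen purely for demonstration.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector."""
    scale = math.sqrt(len(query))
    scores = [sum(q * k for q, k in zip(query, key)) / scale for key in keys]
    weights = softmax(scores)
    # Output is the attention-weighted sum of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
out = attention([1.0, 0.0], keys, values)
# The query aligns with the first key, so the output leans toward [10, 0].
```

Real models stack many such attention layers, with learned query, key, and value projections, and it is this repeated pattern-matching over vast text corpora that lets them predict the next word.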

Amazon, too, has returned to the spotlight with the Anthropic deal, which will make Amazon Web Services (AWS) the main cloud provider for Anthropic. According to Andy Jassy, Amazon’s CEO, “Customers are quite excited about Amazon Bedrock, AWS’s new managed service that enables companies to use various foundation models to build generative AI applications on top of, as well as AWS Trainium, AWS’s AI training chip, and our collaboration with Anthropic should help customers get even more value from these two capabilities.”

“We are excited to use AWS’s Trainium chips to develop future foundation models,” said Dario Amodei, co-founder and CEO of Anthropic. AWS provides its customers with these customized chips, Inferentia and Trainium, as an alternative to training their LLMs on Nvidia’s graphics processing units (GPUs), which are becoming more expensive and harder to acquire.

It should be noted that Amazon had already entered the generative AI competition alongside Microsoft and Google with Bedrock, AWS’s managed service that assists companies in utilizing various foundation models to develop generative AI applications on top of it. For instance, travel media company Lonely Planet is creating a generative AI solution on AWS “to help customers plan epic trips and create life-changing experiences with personalized travel itineraries,” according to Chris Whyde, Senior Vice President of Engineering and Data Science at Lonely Planet.

“By building with Claude 2 on Amazon Bedrock, we reduced itinerary generation costs by nearly 80% when we quickly created a scalable, secure AI platform that organizes our book content in minutes to deliver cohesive, highly accurate travel recommendations. Now we can re-package and personalize our content in various ways on our digital platforms, based on customer preference, all while highlighting trusted local voices—just like Lonely Planet has done for 50 years,” he added.

Similarly, Bridgewater Associates, an asset management firm for institutional investors, has joined forces with the AWS Generative AI Innovation Center to utilize Amazon Bedrock and Anthropic’s Claude model “to create a secure large language model-powered Investment Analyst Assistant that will be able to generate elaborate charts, compute financial indicators, and create summaries of the results, based on both minimal and complex instructions,” according to Greg Jensen, co-CIO at Bridgewater Associates.

Amazon SageMaker also enables developers to build, train, and deploy AI models, allowing customers to incorporate AI capabilities such as image recognition, forecasting, and intelligent search into applications via a simple application programming interface (API) call. Amazon Bedrock grants API access to LLMs from AI21 Labs, Anthropic, Stability AI, and Amazon itself. Additionally, Amazon has announced a preview of Amazon CodeWhisperer, its AI coding companion, which, like GitHub's Copilot, suggests complete code snippets based on context.
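To make the "access via an API call" point concrete, here is a hedged sketch of what invoking Claude 2 through Bedrock might look like. The model ID, request fields, and region follow Bedrock's documented conventions at the time of writing, but treat them as assumptions and check the current AWS documentation before relying on them; the network call itself is shown only in comments since it needs AWS credentials.

```python
import json

def build_claude_request(prompt: str, max_tokens: int = 256) -> str:
    """Serialize a Claude-style request body for Bedrock's invoke_model."""
    body = {
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
        "temperature": 0.5,
    }
    return json.dumps(body)

request_body = build_claude_request("Suggest a three-day itinerary for Kyoto.")

# With AWS credentials configured, the actual call would look roughly like:
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# response = client.invoke_model(modelId="anthropic.claude-v2",
#                                body=request_body)
# completion = json.loads(response["body"].read())["completion"]
```

The appeal of this design is that swapping foundation models (say, from Anthropic to AI21 Labs) is largely a matter of changing the model ID and request schema, not rebuilding the application.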

Microsoft, for its part, has already invested $10 billion in OpenAI. It is currently developing AI-powered 'Copilots' (its term for AI-powered assistants) to enhance coding efficiency with GitHub, increase work productivity with Microsoft 365, and improve search with Bing and Edge. Microsoft will extend its Copilots to the next Windows 11 update and to applications such as Paint, Photos, and Clipchamp.

Bing will incorporate support for the latest DALL-E 3 model from OpenAI, delivering more personalized answers based on users’ search history, an AI-powered shopping experience, and updates to Bing Chat Enterprise that optimize its mobile and visual aspects. Starting from November 1, Microsoft 365 Copilot will be accessible to enterprise customers along with a new AI assistant called Microsoft 365 Chat.

AI has made significant strides over the past five years, primarily due to three factors: improved algorithms, a larger volume of high-quality data, and the remarkable increase in computing power. Nvidia has benefited from the third factor by repurposing its GPUs, typically employed in gaming, to power AI models. For example, OpenAI used Nvidia A100 GPUs, the predecessors of the H100, to train and operate ChatGPT, and it relies on Nvidia GPUs in Microsoft's Azure AI supercomputer to support its ongoing AI research.

Meta, too, maintains a crucial partnership with Nvidia and developed the Grand Teton system, an AI supercomputer based on Hopper architecture, using Nvidia GPUs. Stability AI, a startup focusing on text-to-image generative AI, utilizes H100 to accelerate the development of its video, 3D, and multimodal models.

While central processing units (CPUs) are also used to train AI models, the parallel computing capability of GPUs allows many calculations to run simultaneously. Given that training AI models involves millions of such calculations, parallel computing dramatically speeds up the process. This shift has transformed Nvidia from a company associated chiefly with gaming into an emblem of the AI and generative AI boom. Investors now favor Nvidia, valuing the company at approximately $1.13 trillion as of September 8 and putting co-founder and CEO Jensen Huang's net worth at just over $40 billion.
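The property GPUs exploit can be illustrated in miniature. In a matrix-vector multiply, each row's dot product is independent of the others, so they can be computed concurrently; a GPU does exactly this at massive scale across thousands of cores. This sketch uses Python threads purely to show the independence, not as a model of GPU hardware.

```python
from concurrent.futures import ThreadPoolExecutor

def dot(row, vec):
    """Dot product of one matrix row with the input vector."""
    return sum(r * v for r, v in zip(row, vec))

matrix = [[1, 2], [3, 4], [5, 6]]
vector = [10, 20]

# Sequential: one row at a time.
sequential = [dot(row, vector) for row in matrix]

# Parallel: each row's dot product is an independent task, so the
# results are identical regardless of execution order.
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(lambda row: dot(row, vector), matrix))
```

Training a neural network is, at bottom, an enormous number of such independent multiply-accumulate operations, which is why hardware built for parallelism wins.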

Intel, on the other hand, does not want to be left behind in the AI race, as evidenced by the Intel Innovation 2023 event that commenced on September 19 in San Jose, California. Although Nvidia, co-founded by Jensen Huang, does not manufacture its own chips and relies on external foundries, Intel possesses its own chip fabrication facilities. Nevertheless, both companies intend to leave no stone unturned in their quest to capture a larger share of the AI market.

In the meantime, there are reports that Microsoft is developing AI chips that can be used for training LLMs, reducing the reliance on Nvidia. However, for now, Nvidia holds a significant advantage in this field. According to a report by JPMorgan published on May 27, the company could secure approximately 60% of the AI market this year thanks to its GPUs and networking products.

With these rapid developments, determining a clear winner becomes increasingly difficult.



This post first appeared on Recut URL Shortener.
