Contextual AI Selects Google Cloud as Preferred Cloud Provider

Contextual AI Inc., a startup specializing in large language models for enterprise use, has announced Google Cloud as its preferred cloud provider. Under the partnership, Contextual AI will train its language models on Google Cloud infrastructure, including its GPU and TPU instances.

Based in Palo Alto, California, Contextual AI emerged from stealth mode earlier this year with $20 million in funding. Led by co-founder and CEO Douwe Kiela, who is also an adjunct professor at Stanford University, the startup builds language models based on retrieval augmented generation (RAG), a technique Kiela helped pioneer.

Unlike conventional AI models, which rely solely on the data they were trained on, RAG enables a neural network to pull in information from external sources without retraining. This lowers infrastructure costs and helps the model produce more accurate, up-to-date answers. Contextual AI’s language models are also designed to cite their sources when answering user queries and are less prone to hallucinations than traditional neural networks.
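To make the idea concrete, the sketch below shows a bare-bones RAG loop in Python. It is purely illustrative and assumes nothing about Contextual AI’s actual system: the toy corpus, keyword-overlap retriever, and echo-style “generator” are stand-ins for a real document index and language model.

```python
# A minimal sketch of retrieval augmented generation (RAG).
# The corpus, scoring function, and "generator" are toy stand-ins.

CORPUS = [
    {"text": "Google Cloud A3 VMs offer eight Nvidia H100 GPUs.", "source": "doc-1"},
    {"text": "TPU v4 pods contain 4,096 chips linked by an optical interconnect.", "source": "doc-2"},
]

def retrieve(query: str, top_k: int = 2) -> list[dict]:
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(CORPUS, key=lambda d: -len(terms & set(d["text"].lower().split())))
    return scored[:top_k]

def generate(query: str) -> str:
    """Build a grounded prompt and return an answer with a source citation."""
    docs = retrieve(query)
    context = "\n".join(f"[{i+1}] {d['text']} ({d['source']})" for i, d in enumerate(docs))
    prompt = f"Use only the passages below and cite them.\n{context}\nQ: {query}\nA:"
    # Toy "model": echo the most relevant passage with its citation number.
    return f"{docs[0]['text']} [1]" if docs else "No supporting passage found."

print(generate("How many GPUs does an A3 VM have?"))
```

In a real system the retriever would query a search or vector index and the assembled prompt would be sent to a large language model, but the flow is the same: retrieve external context first, then generate an answer that cites it.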

According to Kiela, “Building a large language model to solve some of the most challenging enterprise use cases requires advanced performance and global infrastructure.” To meet these requirements, Contextual AI plans to train its neural networks on Google Cloud’s A3 and A2 instances. A3 instances provide eight Nvidia H100 graphics processing units each, while A2 instances can be configured with up to 16 Nvidia A100 GPUs. The startup also intends to use Google’s TPU v4 machine learning chips, which Google says offer higher speed and better power efficiency than earlier TPU generations.
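As a quick sanity check before launching a training run on one of these VMs, a script like the following (a minimal sketch, assuming PyTorch is installed on the instance) lists the GPUs the job will actually see:

```python
# Minimal sketch: confirm the GPUs exposed by a Google Cloud A3 or A2 VM.
import torch

def describe_gpus() -> None:
    if not torch.cuda.is_available():
        print("No CUDA devices visible.")
        return
    n = torch.cuda.device_count()  # 8 on an A3 VM, up to 16 on a2-megagpu-16g
    for i in range(n):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1e9:.0f} GB")

describe_gpus()
```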

Google deploys its TPU v4 chips in clusters within its data centers, each containing 4,096 chips linked by a custom optical interconnect. The interconnect can automatically reconfigure the cluster’s network topology to suit the neural network running on top, improving performance.
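From a workload’s point of view, a training job sees only the slice of the TPU v4 pod it has been allocated. The snippet below (a minimal sketch, assuming JAX with TPU support on a Cloud TPU VM) shows one way to inspect that slice:

```python
# Minimal sketch: inspect the TPU v4 slice assigned to this job.
import jax

devices = jax.devices()  # all TPU chips in the slice
print(f"Process {jax.process_index()} of {jax.process_count()}")
print(f"Local devices: {jax.local_device_count()}, global: {jax.device_count()}")
for d in devices[:4]:
    print(d.platform, d.device_kind, d.id)
```

Each host in the slice sees its own local chips, while jax.device_count() reports the total number of chips across all hosts in the slice.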

Contextual AI envisions various use cases for its language models, including customer support and applications in the financial sector. With Google Cloud as its preferred cloud provider, Contextual AI aims to enhance the performance and scalability of its models, delivering advanced natural language understanding capabilities to its enterprise customers.

Definitions:
– RAG (retrieval augmented generation): A technology that enables neural networks to incorporate information from external sources without retraining, reducing infrastructure costs and improving accuracy in generating responses.
– NVIDIA: An American technology company known for designing graphics processing units (GPUs).
– TPU (Tensor Processing Unit): A machine learning accelerator developed by Google specifically for neural network processing.
– Google Cloud: The cloud computing service provided by Google.

Source: SiliconANGLE (https://siliconangle.com/2021/09/01/google-cloud-named-preferred-cloud-provider-contextual-ai/)
