
Kolena, a startup building tools to test AI models, raises $15M

Kolena, a firm that creates tools to evaluate and validate the effectiveness of AI models, revealed today that it has raised $15 million in a funding round that was co-led by Lobby Capital and included SignalFire and Bloomberg Beta.

The additional funding increases Kolena’s total amount raised to $21 million. According to co-founder and CEO Mohamed Elgendy, the money will be used to expand Kolena’s sales and marketing activities, cooperate with regulatory organizations, and extend the company’s research staff.

Although AI has many applications, neither its creators nor the general public fully trust it, according to Elgendy. “This technology needs to be implemented in a way that improves rather than degrades digital experiences. Although the genie won’t return to the bottle, the industry can ensure that its desires are good ones.”

After working for almost six years at AI departments within firms like Amazon, Palantir, Rakuten, and Synapse, Elgendy founded Kolena in 2021 alongside Andrew Shi and Gordon Hart. The group’s goal with Kolena was to create a “model quality framework” that would provide models with both unit testing and end-to-end testing in an adaptable, enterprise-friendly package.

“From the start, we wanted to provide a new framework for model quality—not just a tool that simplifies current approaches,” Elgendy said.

Kolena makes it easy to run scenario-level and unit tests continuously. It also offers end-to-end testing of the full AI and machine learning solution, not just individual components.

Elgendy claims that Kolena can help fill gaps in the test data coverage for AI models. The platform also includes risk management features that help track the risks associated with deploying a particular AI system (or systems, as the case may be). By building test cases and comparing the results against those of other models, users of Kolena’s UI can assess a model’s performance and identify potential causes of underperformance.

Instead of relying on a coarse “aggregate” metric like an accuracy score, which can hide the specifics of a model’s performance, teams can use Kolena to manage and execute tests for the specific scenarios an AI product will have to handle, according to Elgendy. For instance, a model that recognizes cars with 95% accuracy isn’t necessarily preferable to one with 89% accuracy. Each has its own strengths and weaknesses, such as how well it identifies cars across different weather conditions or occlusion levels, or how reliably it determines a car’s orientation.

Image source: Techcrunch.com

If it performs as promised, Kolena could be a boon for data scientists, who spend much of their time building the models that power AI applications.

One survey found that AI developers spend the majority of their time locating and organizing the data needed to train models, leaving only about 20% of their time for analyzing and building the models themselves. According to another study, only about 54% of models make it from pilot to production, owing to the difficulty of building accurate, performant models.

Kolena isn’t alone, however: other players are developing tools to test, monitor, and validate models. In addition to market leaders like Amazon, Google, and Microsoft, startups are exploring new methods for evaluating the accuracy of models both before and after they are put into use.

Prolific has raised $32 million for its platform, which trains and stress-tests AI models through a crowdsourced network of testers. Deepchecks and Robust Intelligence, meanwhile, are building toolkits that help companies regularly validate AI models and keep them from failing. And Bobidi pays engineers to test businesses’ AI systems.

Elgendy counters that Kolena’s platform is one of the few that gives users “full control” over the sorts of data used, the reasoning used to evaluate the data, and other elements of an AI model test. He also highlights Kolena’s privacy strategy, which does away with the requirement for users to upload their data or models to the platform; instead, Kolena merely keeps track of model test results for benchmarking purposes, which are erasable upon request.

Kolena, a San Francisco-based company with 28 full-time staff, wouldn’t disclose how many clients it presently has. Elgendy, however, noted that the business is currently adopting a “selective approach” to its collaborations with “mission-critical” businesses and intends to introduce team bundles for mid-sized businesses and early-stage AI startups in Q2 2024.


