
Free AI Programs Prone To Security Risks, Researchers Say

Companies rushing to adopt hot new types of artificial intelligence should exercise caution when using open-source versions of the technology, some of which may not work as advertised or may include flaws that hackers can exploit, security researchers say. From a report: There are few ways to know in advance whether a particular AI model -- a program made up of algorithms that can do such things as generate text, images and predictions -- is safe, said Hyrum Anderson, distinguished engineer at Robust Intelligence, a machine learning security company that lists the US Defense Department as a client. Anderson said he found that half the publicly available models for classifying images failed 40% of his tests. The goal was to determine whether a malicious actor could alter the outputs of AI programs in a manner that could constitute a security risk or provide incorrect information. Often, models use file types that are particularly prone to security flaws, Anderson said. The problem is compounded because so many companies grab models from publicly available sources without fully understanding the underlying technology, rather than creating their own. Ninety percent of the companies Robust Intelligence works with download models from Hugging Face, a repository of AI models, he said.
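The report does not name the risky file types, but a widely known example in the Python ecosystem is pickle-based model serialization, which can execute arbitrary code when a file is loaded. The sketch below illustrates that risk and one defensive pattern using only the standard library; the `EvilPayload` and `SafeUnpickler` names are illustrative, not from the article.

```python
import io
import pickle

class EvilPayload:
    # __reduce__ tells pickle how to rebuild an object -- and lets a
    # malicious file run an arbitrary callable the moment it is loaded.
    # This is the core reason pickle-based model files are risky.
    def __reduce__(self):
        return (print, ("arbitrary code ran during load!",))

blob = pickle.dumps(EvilPayload())

# Naive load of an untrusted model file: the attacker's callable runs.
pickle.loads(blob)

class SafeUnpickler(pickle.Unpickler):
    """Refuse to resolve any global, so no callable can be smuggled in."""
    def find_class(self, module, name):
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

try:
    SafeUnpickler(io.BytesIO(blob)).load()
except pickle.UnpicklingError as exc:
    print("rejected:", exc)
```

In practice, teams downloading third-party models often avoid pickle entirely in favor of weight-only formats, which store tensors without any executable deserialization hooks.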

Read more of this story at Slashdot.



This post first appeared on Werbung Austria - Slashdot.
