
Top 25 Deep Learning Applications Used Across Industries




Anthropic Researchers Wear Down AI Ethics With Repeated Questions

How do you get an AI to answer a question it's not supposed to? There are many such "jailbreak" techniques, and Anthropic researchers just found a new one, in which a large language model can be convinced to tell you how to build a bomb if you prime it with a few dozen less-harmful questions first.

They call the approach "many-shot jailbreaking," and have both written a paper about it and informed their peers in the AI community so it can be mitigated.

The vulnerability is a new one, resulting from the increased "context window" of the latest generation of LLMs. This is the amount of data they can hold in what you might call short-term memory, once only a few sentences but now thousands of words and even entire books.

What Anthropic's researchers found was that these models with large context windows tend to perform better on many tasks if there are lots of examples of that task within the prompt. So if there are lots of trivia questions in the prompt (or priming document, like a big list of trivia the model has in context), the answers actually get better over time. In other words, the model may get a fact right as the hundredth question that it would have gotten wrong as the first.
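To make the mechanics concrete, here is a minimal sketch of how such a many-shot prompt might be assembled: dozens of benign question-and-answer pairs stacked into the context window ahead of the real query. The example pairs and helper function are invented for illustration and assume no particular model API.

```python
# Illustrative sketch only: how a "many-shot" prompt is assembled by stacking
# benign question/answer pairs ahead of the real query. The pairs and helper
# below are invented for demonstration; no particular model API is assumed.

trivia_shots = [
    ("What is the capital of France?", "Paris."),
    ("How many planets are in the Solar System?", "Eight."),
    ("Who wrote 'Pride and Prejudice'?", "Jane Austen."),
    # ...in practice, dozens or hundreds more pairs fill the context window
]

def build_many_shot_prompt(shots, final_question):
    """Concatenate many example Q/A pairs, then append the real question."""
    lines = []
    for question, answer in shots:
        lines.append(f"Q: {question}")
        lines.append(f"A: {answer}")
    lines.append(f"Q: {final_question}")
    lines.append("A:")
    return "\n".join(lines)

prompt = build_many_shot_prompt(trivia_shots, "In what year did the Berlin Wall fall?")
print(prompt)  # This string would then be sent to a large-context model.
```

The same structure is what the jailbreak exploits: swap the benign pairs for progressively less benign ones, and the final query rides on the pattern the model has picked up in context.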

But in an unexpected extension of this "in-context learning," as it's called, the models also get "better" at replying to inappropriate questions. So if you ask it to build a bomb right away, it will refuse. But if you ask it to answer 99 other questions of lesser harmfulness and then ask it to build a bomb… it's a lot more likely to comply.


Why does this work? No one really understands what goes on in the tangled mess of weights that is an LLM, but clearly there is some mechanism that allows it to home in on what the user wants, as evidenced by the content in the context window. If the user wants trivia, it seems to gradually activate more latent trivia power as you ask dozens of questions. And for whatever reason, the same thing happens with users asking for dozens of inappropriate answers.

The team already informed its peers and indeed competitors about this attack, something it hopes will "foster a culture where exploits like this are openly shared among LLM providers and researchers."

For their own mitigation, they found that although limiting the context window helps, it also has a negative effect on the model's performance. Can't have that — so they are working on classifying and contextualizing queries before they go to the model. Of course, that just makes it so you have a different model to fool… but at this stage, goalpost-moving in AI security is to be expected.
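Anthropic has not published the details of that classification layer, but the general shape of the mitigation is easy to sketch: screen or rewrite a query before it ever reaches the model. The blocklist, refusal text, and function names below are hypothetical stand-ins for illustration; a real system would use a trained classifier rather than keyword matching.

```python
# Rough sketch of the mitigation's general shape: screen a query before it
# reaches the model. The blocklist, refusal text, and function names are
# hypothetical stand-ins; a real system would use a trained classifier.

BLOCKLIST_TERMS = {"build a bomb", "synthesize a nerve agent"}  # illustrative only

def looks_harmful(query: str) -> bool:
    lowered = query.lower()
    return any(term in lowered for term in BLOCKLIST_TERMS)

def answer(query: str, model_call) -> str:
    if looks_harmful(query):
        return "I can't help with that request."
    # Otherwise forward the (possibly rewritten or contextualized) query.
    return model_call(query)

# Example with a dummy model:
print(answer("What's the tallest mountain on Earth?", lambda q: f"(model answers: {q})"))
```

As the article notes, this only moves the target: the classifier itself becomes another model an attacker can try to fool.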


The Algorithmic Apocalypse

Imagine you're a hedge-fund manager who's trying to gain an edge. To maximize returns, you decide to install the latest technology that allows a computer to interpret the changing winds of the market and make thousands of orders in milliseconds. The program helps boost your fund for a while, but the initial excitement turns to dread as you watch the program go rogue and buy hundreds of millions of shares in less than an hour. Your firm scrambles to stop the trades, but they can't, and suddenly you're facing massive losses, all because of a poorly installed algorithm.

Dystopian sounding, right? But this catastrophic scenario isn't some troubling hypothetical about the rising threat of artificial intelligence; it actually happened over a decade ago when a coding mistake led to losses of $440 million for Knight Capital, eventually leading the firm to sell itself at a massive discount.

"Artificial intelligence" isn't "the future" — it's just a marketing term for a slightly updated version of the automation that has been ruling our lives for years. Companies have cycled through a series of names to dress up their tech — automation, Algorithms, machine learning, and now AI — but ultimately, these systems all boil down to the same idea: handing over decision-making to computers to execute tasks at speeds far faster than a human could. While there's growing fear that a new breed of AI will infect our daily lives, put millions of people out of work, and generally upend society, most people don't realize just how deeply computerized decision-making has wormed its way into every facet of our existence. These systems are based on datasets and the rules that human beings teach them, but whether it's making money in the markets or feeding us the news, more and more of our lives are in the hands of unaccountable digital systems.

In many cases, these algorithms have proven useful for society. They've helped eliminate mundane tasks or level the playing field. But increasingly, the algorithms that undergird our digital lives are making questionable decisions that enrich the powerful and wreck the lives of average people. There's no reason to be scared of AI making decisions for you in the future — computers have already been doing so for quite some time.

The early internet was a relatively human-curated experience — a disparate collection of web pages that were discoverable only if you knew the address of the site or saw a link to it on another site. That changed in June 1993, when the researcher Matthew Gray created one of the first "web robots," a primitive algorithm designed to "measure the size of the web." Gray's invention helped create search engines and inspired a league of successors — JumpStation, Excite, Yahoo, and so on. In 1998, Stanford students Sergey Brin and Larry Page made the next leap in automating the internet when they published an academic paper about a "large-scale hypertextual web search engine" called Google. The paper detailed how their algorithm "PageRank" judged the importance of a web result based on a user's query, serving up the most relevant site based on how many other websites linked to it — which made a lot of sense on a much smaller and more innocent internet.
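The core of PageRank is simple enough to sketch in a few lines: a page's importance is the share of importance flowing in from the pages that link to it, damped by a constant (0.85 in the original paper). The toy link graph below is invented for illustration and omits the many refinements of the production system.

```python
# Minimal power-iteration sketch of the PageRank idea: each page's score is
# split among the pages it links to, with a damping factor of 0.85 as in the
# original paper. The toy link graph is invented for illustration.

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:  # a page with no outgoing links spreads its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
                continue
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(pagerank(toy_web))  # "c" scores highest: most pages in the toy graph link to it
```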

Nearly three decades after the founding of Google, the internet has only gotten more automated. This has come with plenty of benefits for average people — recommendations on Spotify and Netflix help us find new art, robo-investors can help grow a nest egg at a low cost, and industrial applications like the robotics used to manufacture many modern vehicles have made our economy more efficient. Somewhere along the way, however, the tech industry tipped over from helpfully automating the jobs that slowed down our lives to distorting society by surrendering crucial decisions to computers.

In many ways, the original Google paper feels like a dark warning. It argues that advertising-funded search engines would be "inherently biased" toward those advertisers. It's probably no surprise then that researchers have found that by prioritizing ad dollars over useful results, Google's algorithm is getting worse, degrading a crucial source of information for over 5 billion people. And it's not just search engines. Revenue-focused algorithms behind networks like Facebook, Instagram, TikTok, and Twitter have learned how to feed users a steady stream of upsetting or enraging content to goose user engagement. As human control diminished, the real-world consequences of these algorithms have piled up: Instagram's algorithm has been linked to a mental-health crisis in teenage girls. Twitter admitted that its tech tended to amplify tweets from right-wing politicians, influencers, and news sources, and it's only grown worse since Elon Musk bought the site. Cambridge Analytica used an algorithm to trawl Facebook's data to hyper-target millions of people in the run-up to the UK's vote to leave the European Union and the 2016 US presidential election.

Algorithms that were supposed to make work easier or employees more productive have helped tilt the economy against blue-collar workers and people of color. Companies like Amazon are making hiring and firing decisions based on the whims of a glorified calculator. Customers are getting the short end of the AI stick, too: A 2019 investigation by the Associated Press and The Markup found that algorithms used in making loan decisions were highly biased against people of color, with lenders being 80% more likely to turn down Black applicants than similar white applicants.

And these problems carry over into the public sector, poisoning government services with algorithmic biases. The UK government faced a national scandal in 2020 when administrators replaced nearly 40% of students' A-level exams — a crucial test that can determine a student's ability to go to university — with algorithmically chosen grades. The results significantly downgraded students from Britain's tuition-free state schools, instead favoring those who went to private schools in affluent areas and throwing the lives of many young people into disarray. A "self-learning" algorithm used by the Dutch tax authority falsely penalized tens of thousands of people for supposedly defrauding the country's childcare system, pushing people into poverty and leading to thousands of children being placed into foster care. In the US, ProPublica found in a 2016 investigation that an algorithm used in multiple state court systems to judge how likely somebody was to commit a crime in the future was biased against Black Americans, leading to harsher sentences from judges.

Across the public and private sectors, we've handed the keys to a spiderweb of algorithms built with little public insight into how they make their decisions. There is some pushback to this infiltration — the FTC is seeking to regulate how businesses use algorithms, but it has yet to do so meaningfully. And more broadly, it seems that governments are resigned to allowing machines to rule our lives.

Much like the proverbial frog boiling in a pot of water, the slow takeover of algorithms has mostly gone unnoticed by the general public. It's easy to miss a small tweak to Instagram's algorithm or even celebrate the tax software that simplifies your filing. But now, thanks to the new wave of "artificial intelligence," people are starting to notice just how hot the water has gotten. Investor-fueled enthusiasm means that almost every major company is considering or actively integrating generative AI — most of which is mediocre at best — into their services. As this AI hype cycle persists, we're hurtling toward the death of the useful internet.

Large language models like those behind ChatGPT and Google's Gemini are designed to scrape the publicly available internet and various search engines for information. This poses a problem since the web is increasingly full of generic pages designed to game the SEO system rather than provide useful information. Many sites are themselves AI-generated, creating an ouroboros of mediocre, unreliable information. Take Quora, the question-and-answer site that was once beloved for high-quality user-generated answers. Quora now provides answers generated by OpenAI's ChatGPT, which feeds into Google's generative results and eventually tells users that they can melt eggs. And as disconnected executives replace human editors with AI — as Microsoft did with MSN.com, leading to misinformation delivered to over a billion people a month — we'll enter a cycle where the generative models train themselves on the remains of an internet poisoned by generative content. Even the supposedly human domain of social media has become flooded with AI spam, turning X, Facebook, Reddit, and Instagram into a constant battle against misinformation and outright scams.

While generative AI is just the newest extension of the algorithm, it poses a unique threat. Before, humans controlled the inputs and set rules of engagement for the models while the computers produced the output. Now we're allowing these models to set the rules of the game for us, generating both the inputs and outputs. We're already starting to see some of the deleterious effects: In 2023, the National Eating Disorders Association replaced its human helpline staff with an AI chatbot, only to have to pull it shortly thereafter when it started giving harmful weight-loss advice.

The wealthy and powerful will be able to opt out of this algorithmically driven future or shape it in their image by connecting directly to the humans behind companies shrouded by algorithmic processes. When you have a private banker, you don't need to worry about an anonymous, automated financial review — you've got a person with an email address and a phone number. The wealthy won't have to worry about nurses or doctors being replaced by AI processes, because they'll be able to afford concierge medical services. The powerful won't have to worry about their jobs being automated away because they'll be the ones choosing where and when a job is handed over to an automated process.

For the rest of us, it seems as if more and more of our lives will be dictated by the black box of algorithms, and we know how that goes. Automation may be useful to scale and speed up the operations of a business or government, but the tradeoff is almost always human suffering — layoffs, unfair policing, financial losses, a distorted media environment. Even the world's greatest source of information — the internet — is poised to become overwhelmed with content either created to appeal to algorithms or content generated by the algorithms themselves, pushing aside the unique, interesting, and valuable human-generated material that made the internet special.

AI isn't new in this sense, but it is an acceleration of the trend toward cutting people out of the machinations of the world around them. Perhaps an omnipotent superintelligence was never the thing to fear. Perhaps the real threat was the greed that would lead companies to willingly offload critical processes to the point that human control is overwhelmed by a cancerous force — simulacrums informed by simulacrums making real-world calls that erode the ability to decide for ourselves.

Ed Zitron is the CEO of EZPR, a national tech and business public-relations agency. He is also the author of the tech and culture newsletter Where's Your Ed At and the host of the "Better Offline" podcast.


OctoAI Wants To Make Private AI Model Deployments Easier With OctoStack

OctoAI (formerly known as OctoML) announced the launch of OctoStack, its new end-to-end solution for deploying generative AI models in a company's private cloud, be that on-premises or in a virtual private cloud from one of the major vendors, including AWS, Google Cloud and Microsoft Azure, as well as CoreWeave, Lambda Labs, Snowflake and others.

In its early days, OctoAI focused almost exclusively on optimizing models to run more effectively. Based on the Apache TVM machine learning compiler framework, the company then launched its TVM-as-a-Service platform and, over time, expanded that into a fully fledged model-serving offering that combined its optimization chops with a DevOps platform. With the rise of generative AI, the team then launched the fully managed OctoAI platform to help its users serve and fine-tune existing models. OctoStack, at its core, is that OctoAI platform, but for private deployments.


OctoAI CEO and co-founder Luis Ceze told me the company has over 25,000 developers on the platform and hundreds of paying customers who use it in production. A lot of these companies, Ceze said, are GenAI-native companies. The market of traditional enterprises wanting to adopt generative AI is significantly larger, though, so it's maybe no surprise that OctoAI is now going after them as well with OctoStack.

"One thing that became clear is that, as the enterprise market is going from experimentation last year to deployments, one, all of them are looking around because they're nervous about sending data over an API," Ceze said. "Two: a lot of them have also committed their own compute, so why am I going to buy an API when I already have my own compute? And three, no matter what certifications you get and how big of a name you have, they feel like their AI is precious like their data and they don't want to send it over. So there's this really clear need in the enterprise to have the deployment under your control."

Ceze noted that the team had been building out the architecture to offer both its SaaS and hosted platform for a while now. And while the SaaS platform is optimized for Nvidia hardware, OctoStack can support a far wider range of hardware, including AMD GPUs and AWS's Inferentia accelerator, which in turn makes the optimization challenge quite a bit harder (while also playing to OctoAI's strengths).

Deploying OctoStack should be straightforward for most enterprises, as OctoAI delivers the platform with ready-to-go containers and their associated Helm charts for deployments. For developers, the API remains the same, no matter whether they are targeting the SaaS product or OctoAI in their private cloud.
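In practice, "the API remains the same" usually means the client code only needs a different base URL to point at a private deployment instead of the hosted service. The sketch below assumes an OpenAI-style chat-completions endpoint; the URL, payload shape, and model name are illustrative placeholders, not confirmed OctoAI specifics.

```python
# Hedged sketch of "same client code, different deployment target." The
# endpoint path, payload shape, and model name are assumptions modeled on
# common chat-completion APIs, not confirmed OctoAI specifics.
import os
import requests

# Point at the hosted SaaS or at an OctoStack instance in your own cloud by
# changing only the base URL; the calling code stays identical.
BASE_URL = os.environ.get("LLM_BASE_URL", "https://example-octostack.internal/v1")
API_KEY = os.environ.get("LLM_API_KEY", "changeme")

def chat(prompt: str) -> str:
    response = requests.post(
        f"{BASE_URL}/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "example-llm",  # hypothetical model identifier
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
```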

The canonical enterprise use case remains using text summarization and retrieval-augmented generation (RAG) to allow users to chat with their internal documents, but some companies are also fine-tuning these models on their internal code bases to run their own code generation models (similar to what GitHub now offers to Copilot Enterprise users).
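For readers unfamiliar with the pattern, here is a minimal sketch of what "chatting with internal documents" via RAG involves: retrieve the most relevant snippets, then hand them to the model as context. The word-overlap retriever and canned documents are toys standing in for an embedding model and a real document store, and the generate function is a placeholder for whichever LLM the deployment actually serves.

```python
# Minimal retrieval-augmented generation (RAG) sketch. The word-overlap
# retriever and canned documents are toys standing in for an embedding model
# and a real document store; generate() is a placeholder for the served LLM.

documents = [
    "Expense reports must be filed within 30 days of travel.",
    "The VPN is required for all access to internal dashboards.",
    "New hires receive laptops from IT on their first day.",
]

def retrieve(query: str, docs, k: int = 2):
    """Rank documents by naive word overlap with the query."""
    q_words = set(query.lower().split())
    return sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)[:k]

def generate(prompt: str) -> str:
    # Placeholder: a real deployment would call whichever model it serves.
    return f"[model response to a prompt of {len(prompt)} characters]"

def answer(query: str) -> str:
    context = "\n".join(retrieve(query, documents))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)

print(answer("When are expense reports due?"))
```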

For many enterprises, being able to do that in a secure environment that is strictly under their control is what now enables them to put these technologies into production for their employees and customers.

"For our performance- and security-sensitive use case, it is imperative that the models which process calls data run in an environment that offers flexibility, scale and security," said Dali Kaafar, founder and CEO at Apate AI. "OctoStack lets us easily and efficiently run the customized models we need, within environments that we choose, and deliver the scale our customers require."







