
The Week in AI: Generative AI Spams the Web


Keeping up with an industry that moves as fast as AI is a tall order. So until an AI can do it for you, here’s a helpful roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

This week, SpeedyBrand, a company that uses generative AI to create SEO-optimized content, emerged from stealth with backing from Y Combinator. It hasn’t attracted much funding yet ($2.5 million), and its customer base is relatively small (around 50 brands). But it got me thinking about how generative AI is starting to change the makeup of the web.

As The Verge’s James Vincent wrote in a recent piece, generative AI models are making it cheaper and easier to generate lower-quality content. NewsGuard, a company that provides tools for vetting news sources, has exposed hundreds of ad-supported sites with generic-sounding names featuring misinformation created with generative AI.

That’s creating a problem for advertisers. Many of the sites spotlighted by NewsGuard appear to have been built solely to abuse programmatic advertising, the automated systems that place ads on pages. In its report, NewsGuard found nearly 400 instances of ads from 141 major brands appearing on 55 of the junk news sites.

It’s not just advertisers who should be worried. As Gizmodo’s Kyle Barr points out, it might only take one AI-generated article to generate mountains of engagement. And even if each AI-generated article earns only a few dollars, that’s more than the copy cost to produce in the first place, and it’s potential ad money that isn’t going to legitimate sites.

So what’s the solution? Is there one? Those are a couple of questions that increasingly keep me up at night. Barr suggests it behooves search engines and ad platforms to exercise tighter control and punish bad actors who embrace generative AI. But given how fast the field is moving, and the infinitely scalable nature of generative AI, I’m not convinced they can keep up.

Of course, spammy content isn’t a new phenomenon, and there have been waves of it before. The web has adapted. What’s different this time is that the barrier to entry is dramatically lower, both in terms of cost and the time that has to be invested.

Vincent strikes an optimistic note, implying that if the web is eventually overrun by AI junk, it could spur the development of better-funded platforms. I’m not so sure. What’s not in doubt, though, is that we’re at an inflection point, and the decisions made now around generative AI and its output will shape the function of the web for some time to come.

Here are other standout AI stories from the past few days:

OpenAI officially launches GPT-4: OpenAI this week announced the general availability of GPT-4, its latest text-generating model, through its paid API. GPT-4 can generate text (including code) and accepts both image and text input, an improvement over GPT-3.5, its text-only predecessor, and it performs at a “human level” on various academic and professional benchmarks. But it’s not perfect, as we noted in our previous coverage. (Meanwhile, ChatGPT adoption is reported to be down, but we’ll see.)
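If you want to kick the tires, here’s a minimal sketch of a GPT-4 call using the OpenAI Python client as it existed at the time of writing (the pre-1.0 interface; newer versions of the library use a different API, and the key and prompt below are placeholders):

```python
# pip install openai  (pre-1.0 client)
import openai

openai.api_key = "sk-..."  # placeholder; use your own API key

# A minimal chat completion request against the newly public GPT-4 model.
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "Summarize this week's AI news in one sentence."},
    ],
)
print(response["choices"][0]["message"]["content"])
```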

Bringing ‘super-intelligent’ AI under control: In other OpenAI news, the company is forming a new team led by Ilya Sutskever, its chief scientist and one of OpenAI’s co-founders, to develop ways to command and control “super-intelligent” AI systems.

Anti-Bias Law for New York City: After months of delays, New York City this week began enforcing a law that requires employers who use algorithms to recruit, hire or promote employees to submit those algorithms for an independent audit and make the results public.

Valve tacitly gives the green light to AI-generated games: Valve issued a rare statement after claims that it was rejecting games with AI-generated assets from its Steam game store. The notoriously tight-lipped developer said its policy was evolving and wasn’t an anti-AI stance.

Humane introduces the Ai Pin: Humane, the startup launched by ex-Apple design and engineering duo Imran Chaudhri and Bethany Bongiorno, this week revealed details about its first product: the Ai Pin. As it turns out, Humane’s product is a wearable gadget with a projected display and AI-powered features, like a futuristic smartphone in an altogether different form factor.

Warnings about EU AI regulation: Top tech founders, CEOs, venture capitalists and industry giants across Europe signed an open letter to the EU Commission this week, warning that Europe could miss out on the generative AI revolution if the EU passes laws that stifle innovation.

Deepfake scam makes the rounds: Check out this clip of UK consumer finance champion Martin Lewis, apparently shilling an Elon Musk-backed investment opportunity. Seems normal, right? Not quite. It’s an AI-generated deepfake, and potentially a glimpse of the rapidly accelerating wave of AI-generated misery headed for our screens.

AI-powered sex toys: Lovense, perhaps best known for its remote-controllable sex toys, announced its ChatGPT Pleasure Companion this week. Launched in beta in the company’s remote control app, the “Advanced Lovense ChatGPT Pleasure Companion” invites you to indulge in juicy and erotic stories that the Companion creates based on your chosen topic.

Other machine learning

Our research brief begins with two very different projects at ETH Zurich. First is aiEndoscopic, a smart intubation spin-off. Intubation is necessary for a patient’s survival in many circumstances, but it’s a tricky manual procedure usually performed by specialists. The intuBot uses computer vision to recognize and respond to a live feed of the mouth and throat, guiding and correcting the position of the endoscope. This could let people intubate safely when needed rather than waiting for a specialist, potentially saving lives.


In an entirely different domain, ETH Zurich researchers also contributed secondhand to a Pixar film by pioneering the technology needed to animate smoke and fire without falling prey to the fractal complexity of fluid dynamics. Their approach was noticed and built on by Disney and Pixar for the movie Elemental. Interestingly, it’s not so much a simulation solution as a style transfer one, a clever and seemingly quite valuable shortcut.

AI applied to nature is always interesting, but AI applied to archaeology is even more so. Research led by Yamagata University aimed to identify new Nasca lines, the huge “geoglyphs” in Peru. You’d think that being visible from orbit would make them pretty obvious, but erosion and tree cover in the millennia since these mysterious formations were created mean an unknown number are hiding just out of sight. After training on aerial imagery of known and obscured geoglyphs, a deep learning model was set loose on other views and, surprisingly, it detected at least four new ones, as you can see below. Very exciting!

Four newly discovered Nasca geoglyphs, detected by a deep learning model.
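For a sense of how a detector like this might be wired together, here’s a purely illustrative sketch (in PyTorch; every model choice, name, and threshold below is my own assumption, not the Yamagata team’s pipeline): fine-tune a stock image classifier on labeled tiles, then sweep it across large aerial photos.

```python
import torch
import torch.nn as nn
from torchvision import models

# Fine-tune a stock CNN to flag tiles that may contain a geoglyph.
# (A guess at the general shape of such a system, not the published code.)
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # geoglyph / no geoglyph

def scan(image: torch.Tensor, tile: int = 224, stride: int = 112):
    """Slide over a large (3, H, W) aerial image and return scored hits."""
    model.eval()
    hits = []
    _, h, w = image.shape
    with torch.no_grad():
        for y in range(0, h - tile + 1, stride):
            for x in range(0, w - tile + 1, stride):
                patch = image[:, y:y + tile, x:x + tile].unsqueeze(0)
                prob = torch.softmax(model(patch), dim=1)[0, 1]
                if prob > 0.9:  # arbitrary confidence threshold
                    hits.append((x, y, prob.item()))
    return hits

# Toy usage on a random "aerial photo"; real inputs would be survey imagery.
candidates = scan(torch.rand(3, 448, 448))
```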

In a more immediately relevant vein, AI-adjacent technology is always finding new work in detecting and predicting natural disasters. Stanford engineers are gathering data to train future wildfire prediction models by running simulations of superheated air over a model forest canopy in a 30-foot water tank. If we’re going to model the physics of flames and embers traveling beyond the confines of a wildfire, we’ll need to understand them better, and this team is doing what it can to approximate that.

At UCLA, researchers are looking at how to predict landslides, which are becoming more common as fires and other environmental factors change. But while AI has already been used to predict them with some success, it doesn’t “show its work,” meaning a given prediction doesn’t explain whether it’s driven by erosion, a shifting water table, or tectonic activity. A new “superimposable neural network” approach has the network’s layers use different data while running in parallel rather than all together, which lets the output be a bit more specific about which variables contributed most to the risk. It’s also much more efficient.
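Here’s a minimal sketch of the general idea in PyTorch (the feature groups, layer sizes, and additive readout are all my own illustrative choices, not the published architecture): each factor gets its own small branch, and because branch outputs are simply summed, each one doubles as that factor’s contribution to the total risk.

```python
import torch
import torch.nn as nn

class SuperimposableNet(nn.Module):
    """Toy additive network: one small branch per feature group.

    Branch outputs are summed into a single risk score, so each branch's
    scalar output reads off as that factor's attribution. Illustrative only.
    """

    def __init__(self, feature_groups: dict, hidden: int = 16):
        super().__init__()
        self.branches = nn.ModuleDict({
            name: nn.Sequential(
                nn.Linear(dim, hidden),
                nn.ReLU(),
                nn.Linear(hidden, 1),
            )
            for name, dim in feature_groups.items()
        })

    def forward(self, inputs: dict):
        # Each branch sees only its own variables (erosion, water, etc.).
        parts = {name: branch(inputs[name])
                 for name, branch in self.branches.items()}
        risk = torch.stack(list(parts.values()), dim=0).sum(dim=0)
        return risk, parts  # total score plus per-factor breakdown

# Hypothetical feature groups for a landslide model.
model = SuperimposableNet({"erosion": 4, "water_table": 3, "tectonic": 5})
x = {"erosion": torch.randn(1, 4),
     "water_table": torch.randn(1, 3),
     "tectonic": torch.randn(1, 5)}
risk, parts = model(x)
print(risk.item(), {k: v.item() for k, v in parts.items()})
```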

Google is facing an interesting challenge: how do you get a machine learning system to learn from dangerous knowledge without spreading it? For example, if its training set includes the recipe for napalm, you don’t want the model to repeat it, but in order to know not to repeat it, it needs to know what it’s not repeating. A paradox! So the tech giant is looking for a method of “machine unlearning” that lets this kind of balancing act happen safely and reliably.
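To make the balancing act concrete, here’s a heavily simplified sketch of one recipe from the unlearning research literature (not Google’s method; everything here is illustrative): run gradient ascent on a “forget” set to push the model away from the unwanted content, while descending on a “retain” set so the rest of its knowledge survives.

```python
import torch
import torch.nn as nn

def unlearning_step(model, forget_batch, retain_batch, loss_fn, opt,
                    forget_weight=1.0):
    """One step of a naive ascent/descent unlearning scheme (illustrative
    only; real methods add safeguards such as constraints that keep the
    model close to its original behavior, plus audits of what it emits)."""
    opt.zero_grad()
    # Ascend on the forget set: push the model away from reproducing it...
    forget_loss = loss_fn(model(forget_batch[0]), forget_batch[1])
    # ...while descending on the retain set to preserve everything else.
    retain_loss = loss_fn(model(retain_batch[0]), retain_batch[1])
    (retain_loss - forget_weight * forget_loss).backward()
    opt.step()

# Toy usage with a linear model and random data.
model = nn.Linear(8, 2)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
forget = (torch.randn(4, 8), torch.randint(0, 2, (4,)))
retain = (torch.randn(16, 8), torch.randint(0, 2, (16,)))
unlearning_step(model, forget, retain, loss_fn, opt)
```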

If you’re looking for a deeper look at why people seem to trust AI models for no good reason, look no further than this Science editorial by Celeste Kidd (UC Berkeley) and Abeba Birhane (Mozilla). It goes into the psychological foundations of trust and authority and shows how today’s AI agents basically use them as springboards to increase their own value. It’s a really interesting article if you want to sound smart this weekend.

Although we often hear about the infamous Mechanical Turk, the fake chess-playing machine, that hoax inspired people to build what it purported to be. IEEE Spectrum has a fascinating story about the Spanish physicist and engineer Torres Quevedo, who created an actual mechanical chess player. Its capabilities were limited, but that’s how you know it was real. Some even propose that his chess machine was the first “computer game.” Food for thought.




