
Excel, Notion have inspired fans to turn into influencers




What A Corrupt Chatbot From The Past Can Teach Us About Today's Data Poisoning Threat

As Chief Availability Officer (CAO), Lou Senko leads Q2's hosting team delivering an enhanced customer experience.


Back in 2016, an innocent AI chatbot joined Twitter, and we all got a lesson in the dark side of generative artificial intelligence. Named Tay by her creators at Microsoft, the chatbot was designed to simulate a teenage girl, and her debut on Twitter was meant to help her develop conversational learning skills by interacting with humans.

Within 24 hours, Tay was spouting racist, misogynistic, Nazi-loving views she learned from Twitter users, and she was pulled off the platform.

Obviously, technology was less sophisticated back then. However, the short life of Tay illustrates a major challenge with generative AI, which is once again in the headlines. Generative AI is the technology behind ChatGPT, which can not only engage in human-like conversations but can answer complex questions and write content. It works using deep learning tools that identify and analyze patterns in data to create new content.

ChatGPT is just the tip of the iceberg for generative AI, whose applications across industries and professions grow daily. In fact, McKinsey estimates that by 2030, activities that account for about 30% of U.S. work hours could be automated, thanks to generative AI.

However, while the threat to jobs is stealing the headlines, a quieter threat is on the horizon: data poisoning. Generative AI works by analyzing training data; a model is trained by repeatedly adjusting it until it gives the correct answers on that data. Once the model is in use, its efficacy is measured on its answers, which are fed back into the model in a continual loop.
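The feedback loop described above can be sketched in a few lines. This is a deliberately tiny illustration, not a real generative model: a one-parameter "model" (a single decision threshold, a hypothetical stand-in for millions of weights) is nudged whenever its answer disagrees with the label it is given. In production, those labels come from the model's own fed-back outcomes.

```python
def train_step(threshold, labeled_batch, lr=0.1):
    """Nudge the threshold toward any example it misclassifies."""
    for score, label in labeled_batch:
        prediction = score >= threshold  # "approve" if the score clears the bar
        if prediction != label:
            threshold += lr * (score - threshold)
    return threshold

# Initial training on clean, labeled data: high scores approved, low denied.
threshold = 0.0
clean_data = [(0.9, True), (0.8, True), (0.2, False), (0.1, False)]
for _ in range(50):
    threshold = train_step(threshold, clean_data)

# The threshold settles between the approved and denied examples. After
# deployment, the model's own outcomes replace `clean_data`, so the same
# update rule keeps moving the threshold for as long as the system runs.
```

Because the same update rule runs on whatever labels arrive after deployment, anything that controls those labels controls where the threshold ends up.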

The danger is that the data used to train the model can be tainted.

Intentional And Unintentional Drift

Think about using AI to score creditworthiness. We start with a model that's trained to ignore race, economic class and other demographic information and only look at the math, which is obviously beneficial because we're eliminating the human element of unconscious bias.

Over time, however, the model is fed more information about the number of loans that were denied or approved as well as which ones were successful or failed and why. All the while, the model is self-adjusting to get better outcomes, which may lead it to introduce unintended bias around creditworthiness. Slowly, the model starts to drift as it "learns" unconscious bias.

While problematic and in need of a solution, this type of drift is unintentional. Much more troubling and potentially dangerous is the intentional poisoning of data in AI models, which is when a bad actor manipulates the training data of a deep-learning model to get a desired output or outcome. Data poisoning is a type of adversarial machine learning.

Returning to our creditworthiness example, imagine Bank A is humming along, granting loans and keeping default rates low. Its AI model is constantly learning to judge the risk ratio of applications and which loans should be approved as well as at what rates. Because the model is so successful, Bank A is beating its competition (Bank B) down the road.

Bank B could come along and start flooding Bank A's system with applications that cause the model to drift slowly over time. It could send 10,000 applications that meet the criteria for a loan but change the criteria ever so slightly, lowering the average age or income of the applicants, for example.

Step by step, half a percentage by half a percentage, the model drifts until it starts approving loans for 19-year-olds with low incomes, denying loans or giving higher interest rates on loans it approves because the applicants are deemed more risky. Bank A starts losing more deals because of what its AI model thinks it has learned. Meanwhile, Bank B is snapping up that business and making good loans.
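Under the same toy assumptions (a single income cutoff standing in for Bank A's real model, and invented numbers throughout), the half-a-step-at-a-time drift can be simulated directly:

```python
def retrain(min_income, batch, lr=0.05):
    """Online update of a one-rule model: approve if income >= min_income."""
    for income, approved in batch:
        if (income >= min_income) != approved:
            min_income += lr * (income - min_income)
    return min_income

min_income = 50_000  # clean model: approve applicants earning $50k or more

# The attacker floods the system with applications that look approvable
# but sit slightly below the current norm, lowering the norm wave by wave.
income = 49_000
for _ in range(200):
    min_income = retrain(min_income, [(income, True)] * 10)
    income -= 200  # each wave is a small step lower than the last

# The cutoff has drifted far below where the clean model started, so
# applicants the original model would have denied are now approved.
```

No single wave looks alarming; it is only the accumulation across hundreds of waves that moves the model somewhere its operators never intended.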

One of the great challenges with data poisoning is that it's subtle and happens slowly over time. A user may notice a change but not understand what caused the change and whether it's a natural drift or something more nefarious. This is troubling enough when it comes to determining creditworthiness, but what about when the stakes are even higher?

Imagine a bad actor shifting the model on an autonomous car so that it begins to speed up at red lights instead of stopping, or corrupting the data used to train a drug trial model or other healthcare application. Unfortunately, any industry or business that uses generative AI is susceptible to data poisoning.

Protecting The Supply Chain

In order to combat the drift, we must find ways to protect the integrity of the decisions made by the model and separate the bad outcomes from the learning that needs to happen. That's the tricky part because we can't simply say that "the model is the model" and use it over and over like an Excel formula.

AI models are designed to learn with usage, so the only way to protect them is to protect the data used to train them. Because training data comes from various sources, it can be poisoned at any or all points in the training process. A July 2023 report from Google on the various types of threats to AI systems noted that an attacker only needs to control 0.01% of a dataset to poison a model.
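To make that 0.01% figure concrete, a quick back-of-the-envelope calculation (the dataset size here is illustrative, not from the report):

```python
# Dataset size is an assumption for illustration; the 0.01% fraction
# is the figure cited from the July 2023 Google report.
dataset_size = 10_000_000
poisoned_examples_needed = int(dataset_size * 0.0001)  # 0.01% of the data
# 1,000 crafted examples out of 10 million can be enough to poison a model
```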

Organizations such as the Open Worldwide Application Security Project (OWASP), Google and Gartner have developed guides and frameworks to protect AI systems. Implementing such a framework is a good start, but systems need to be constantly tested, and models must be continually adjusted—by humans—to ensure the data is not being corrupted and that desired, trusted outcomes are achieved.

Remember, AI can safely land an airplane, but we still have a pilot in the cockpit.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.


Machine 'Social Network' To Update Air Force AI Robots On The Fly

Artificial intelligence is becoming increasingly capable, but early releases may not be fully prepared for the real world: Microsoft had to withdraw its Tay chatbot in 2016 after a group of users tricked the AI into making racist and sexist comments online. The same problem will apply to military systems such as smart drones, but with more devious adversaries and more serious consequences. The U.S. Air Force is looking for proactive solutions.

"Real-world examples are limited, and the number of non-benign conditions out there is large," Dr. Lisa Dolev, CEO of AI specialists Qylur Intelligent Systems, told Forbes. "That combination can cause problems."

The U.S. Air Force will need to update AI software and retrain it when it encounters adverse conditions, and AFWERX, the directorate tasked with accelerating technology adoption in the Air Force, has awarded a Phase I Small Business Innovation Research contract to Qylur. This will provide a means to update AI systems in the field rapidly and efficiently using the Social Network of Intelligent Machines, or SNIM AI®, managing updates for AI-based systems such as drones and ground robots.

SNIM AI monitors the performance of AIs in the field, and where problems are encountered it can retrain and update them on the fly. (Image: Qylur)

SNIM AI identifies problems within AI models and helps solve them with data collected by all the machines connected to the system.

No machine-learning system can be trained on every possible situation, and when it encounters something new it may not be able to deal with it. Dolev says this could be something as simple as an object recognition system seeing snow for the first time and finding that everything looks different. Dolev says such issues are inevitable in the type of systems the Air Force deploys because of the limited data available.

"It's not like learning to recognize cats, where you have endless data from the Internet. You are working with small, noisy data sets," says Dolev. "It's a messy environment."

The problem of trained systems running into difficulty is called AI model drift, and is well known in the business world. But while commercial operations can tolerate drift problems over the course of a few days, anything in the defence world needs to be rectified as soon as possible.

"In our system we have drift monitors embedded inside the loop so we can see if something is incorrect. If it's incorrect, we need to go back and retrain," says Dolev.
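A drift monitor of the kind Dolev describes can be sketched as a rolling accuracy check. The class name, window size and accuracy floor below are assumptions for illustration, not Qylur's design:

```python
from collections import deque

class DriftMonitor:
    """Rolling accuracy check that flags when field performance sags."""

    def __init__(self, window=50, floor=0.9):
        self.outcomes = deque(maxlen=window)  # most recent field results
        self.floor = floor                    # minimum acceptable accuracy

    def record(self, prediction_correct):
        """Log one field result; return True if retraining is needed."""
        self.outcomes.append(prediction_correct)
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough evidence to judge drift yet
        accuracy = sum(self.outcomes) / len(self.outcomes)
        return accuracy < self.floor

monitor = DriftMonitor(window=50, floor=0.9)

# Healthy period: the recognizer is right about 94% of the time.
healthy_flags = [monitor.record(i % 20 != 0) for i in range(50)]

# Snow arrives and accuracy collapses; the monitor trips the retrain flag.
flagged = False
for _ in range(20):
    flagged = monitor.record(False)
```

The monitor only says that something is wrong; deciding whether the cause is benign novelty or an adversary, and retraining accordingly, stays with humans.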

SNIM AI allows data from all of the connected machines to be pooled and used for retraining, so in the case of encountering snow, it would be able to draw on all the images of snow-covered objects captured in the field. The updated recognition system would then be tested and verified by a human operator before being pushed back out to some of the machines in the field.
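The pooled-retraining flow in the paragraph above might look like the following sketch. Every function and field name here is hypothetical, and the human sign-off is modeled as a simple hard gate:

```python
def pool_field_data(machines):
    """Merge the locally collected samples from every connected machine."""
    pooled = []
    for machine in machines:
        pooled.extend(machine["samples"])
    return pooled

def deploy(update, approved_by_human):
    """Push an update out only after a human has verified it."""
    if not approved_by_human:
        raise RuntimeError("update blocked: human verification required")
    return {"deployed": True, "update": update}

fleet = [
    {"id": "drone-1", "samples": ["snow_img_01", "snow_img_02"]},
    {"id": "ugv-1", "samples": ["snow_img_03"]},
]

# One machine's snow images alone are thin; pooled, there is more to learn from.
training_set = pool_field_data(fleet)
update = f"model retrained on {len(training_set)} pooled samples"
result = deploy(update, approved_by_human=True)
```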

Dolev says they do not deploy everything to everyone. The sparse computing resources on board drones and other mobile systems mean they are not burdened with all the data. SNIM AI is "mission adaptive," so, for example, the snow update would not be applied to machines in the tropics.
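The "mission adaptive" distribution step can be illustrated the same way; the environment tags and fleet records below are invented for the example:

```python
def select_targets(machines, update_tags):
    """Return machines whose operating environment overlaps the update's tags."""
    return [m for m in machines if m["environment"] & update_tags]

fleet = [
    {"id": "drone-1", "environment": {"arctic", "snow"}},
    {"id": "drone-2", "environment": {"tropics", "jungle"}},
    {"id": "ugv-1", "environment": {"snow", "urban"}},
]

snow_update_tags = {"snow"}  # a retrained recognizer for snow-covered scenes
targets = select_targets(fleet, snow_update_tags)
# Machines in the tropics are skipped; only snow-exposed machines update.
```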

Dolev has plenty of experience in this field, having started 17 years ago, at a time when the very idea of embedding artificial intelligence in drones and other systems seemed like magic to some people. Qylur's products are used in commercial security for the automated detection of explosives and other threats at public venues. The need for rapid, responsive updates led Dolev to develop SNIM AI.

While in theory SNIM AI could be completely automated, Dolev has a strict policy of keeping human oversight in the process.

"I insist not only on looking at the math and results, but also on having a forced physical 'sanity test' to verify that what we see in the lab matches up with what we see in the field," says Dolev.

Putting human common sense in the update loop reduces the risk that an AI will retrain itself in a way that makes the problem worse. AI may be smart, but it is notoriously brittle and prone to bizarre errors, so Dolev's approach gives a degree of security. This is going to be increasingly important as AI gains momentum. Businesses, and militaries, are scrambling to get systems deployed so they are not left behind. These autonomous machines need to be safe and reliable.

The Air Force's XQ-58 Valkyrie is an AI-powered "loyal wingman": systems like this will need to be updated at high speed. (Image: Department of Defense)

As the example of the Tay chatbot shows, any system may encounter adversaries who deliberately try to trip it up, feed it misleading data or confuse it with situations not included in its training data. This is especially true in defence, where there is already research into camouflage designed specifically to fool automated recognition algorithms. An entire specialist field of "counter-AI warfare" may emerge, with the goal of finding the brittle points in AI systems that cause them to fail in ways no human ever would.

Being able to exploit the weakness of AI could give a decisive advantage, but not if those exploits are immediately identified and corrected. Rapid update processes like SNIM AI will help keep the Air Force's autonomous machines one step ahead of its adversaries.


Is Lil Tay Still Alive? What To Know About All The Confusion

After a five-year hiatus, Lil Tay is back and allegedly involved in another controversy.

The Canadian teenager rose to fame when she supposedly was just 9 years old. (Her exact age is unclear.) She frequently posted profanity-filled clips on Instagram in 2018 showing off her money and multiple expensive cars.

Lil Tay also built a following on YouTube, labeling herself as the "youngest flexer of the century" in her videos. However, she became involved in multiple scandals after using the N-word and allegedly being the victim of child abuse.

Lil Tay (LIL TAY via YouTube)

After posting a tribute to late rapper XXXTentacion on June 18, 2018, the day of his death, Lil Tay disappeared from social media.

On Aug. 9, a statement was posted on her Instagram page, which has 3.3 million followers, claiming that Lil Tay and her brother had passed away.

The next day, Lil Tay and her family reportedly issued a statement to TMZ that said the teen is alive and her account was hacked.

Read on to learn more about the events surrounding Lil Tay this week.

Lil Tay's Instagram account announces her death

A message to fans was uploaded to the Lil Tay Instagram account on Aug. 9, more than five years after her last post.

"It is with a heavy heart that we share the devastating news of our beloved Claire's sudden and tragic passing," the now-deleted statement said. "We have no words to express the unbearable loss and indescribable pain."

The statement continued, "This outcome was entirely unexpected, and has left us all in shock. Her brother's passing adds an even more unimaginable depth to our grief."

The person who posted the message asked for privacy and claimed that the siblings' deaths were "still under investigation."

"Claire will forever remain in our hearts, her absence leaving an irreplaceable void that will be felt by all who knew and loved her," the statement concluded.

No other information was provided about the two alleged deaths. The message was not signed or attributed to Lil Tay's family or management team. TODAY.com reached out to Lil Tay's management and did not receive a response.

Her former manager, Harry Tsang, would not confirm or deny the validity of the post.

In a statement to "Entertainment Tonight" on Aug. 10, he said, "Given the complexities of the current circumstances, I am at a point where I cannot definitively confirm or dismiss the legitimacy of the statement issued by the family."

Tsang added, "This situation calls for cautious consideration and respect for the sensitivities involved."

Tay's father, Christopher Hope, on Aug. 9 would not confirm news of her alleged death and declined to comment on the record to TODAY.com's sister brand NBC News. Attempts by NBC News to reach her mother, Angela Tian, by phone were unsuccessful.

A statement is issued claiming Lil Tay is alive and her account was hacked

A day after the news of her alleged death was reported, the situation became more suspicious.

TMZ says it received a statement allegedly from her family saying the teenager is still alive.

"I want to make it clear that my brother and I are safe and alive, but I'm completely heartbroken, and struggling to even find the right words to say," the statement began. "It's been a very traumatizing 24 hours. All day yesterday, I was bombarded with endless heartbreaking and tearful phone calls from loved ones all while trying to sort out this mess."

The statement said the Lil Tay account was "compromised by a 3rd party and used to spread jarring misinformation and rumors regarding me, to the point that even my name was wrong."

Multiple reports about Lil Tay's death said her real name is Claire Hope. The Instagram statement also referred to her as "Claire." But the statement obtained by TMZ said the teenager's legal name is "Tay Tian." (TODAY.com has not confirmed that is her legal name.)

TODAY.com reached out to Lil Tay's management following the TMZ report. A spokesperson for Meta, which owns Instagram, didn't respond to NBC News' requests for comment about the alleged hack.

Lil Tay has been involved in other controversies

As a preteen, Lil Tay was at the center of multiple controversies.

After a video circulated on social media of Lil Tay — whose mother is Chinese and whose father is Canadian, according to a 2019 profile by New York Magazine's The Cut — saying the N-word, she apologized for using the racial slur in her three-episode docuseries, "Life With Lil Tay," which premiered in August 2018 on the Zeus Network.

"Help Me" was posted to her Instagram story in July 2018, causing fans to worry about her safety.

After child abuse allegations against her father surfaced, the Daily Beast published a report in October 2018 in which Tsang, Lil Tay's manager at the time, denied the claims.

The Cut's profile piece in 2019 detailed Lil Tay's rise to fame. In the article, the publication reported that her brother, Jason Tian, had created her social media persona.

This article was originally published on TODAY.com








