
114 Milestones In The History Of Artificial Intelligence (AI)





Artificial Intelligence Could Finally Let Us Talk With Animals

Underneath the thick forest canopy on a remote island in the South Pacific, a New Caledonian Crow peers from its perch, dark eyes glittering. The bird carefully removes a branch, strips off unwanted leaves with its bill and fashions a hook from the wood. The crow is a perfectionist: if it makes an error, it will scrap the whole thing and start over. When it's satisfied, the bird pokes the finished utensil into a crevice in the tree and fishes out a wriggling grub.

The New Caledonian Crow is one of the only birds known to manufacture tools, a skill once thought to be unique to humans. Christian Rutz, a behavioral ecologist at the University of St Andrews in Scotland, has spent much of his career studying the crow's capabilities. The remarkable ingenuity Rutz observed changed his understanding of what birds can do. He started wondering if there might be other overlooked animal capacities. The crows live in complex social groups and may pass toolmaking techniques on to their offspring. Experiments have also shown that different crow groups around the island have distinct vocalizations. Rutz wanted to know whether these dialects could help explain cultural differences in toolmaking among the groups.

New technology

Beyond creating chatbots that woo people and producing art that wins fine-arts competitions, machine learning may soon make it possible to decipher things like crow calls, says Aza Raskin, one of the founders of the nonprofit Earth Species Project. Its team of artificial-intelligence scientists, biologists and conservation experts is collecting a wide range of data from a variety of species and building machine-learning models to analyze them. Other groups, such as Project CETI (the Cetacean Translation Initiative), are focusing on trying to understand a particular species, in this case the sperm whale.

Decoding animal vocalizations could aid conservation and welfare efforts. It could also have a startling impact on us. Raskin compares the coming revolution to the invention of the telescope. "We looked out at the universe and discovered that Earth was not the center," he says. The power of AI to reshape our understanding of animals, he thinks, will have a similar effect. "These tools are going to change the way that we see ourselves in relation to everything."

When Shane Gero got off his research vessel in Dominica after a recent day of fieldwork, he was excited. The sperm whales that he studies have complex social groups, and on this day one familiar young male had returned to his family, providing Gero and his colleagues with an opportunity to record the group's vocalizations as they reunited.

For nearly 20 years Gero, a scientist in residence at Carleton University in Ottawa, kept detailed records of two clans of sperm whales in the turquoise waters of the Caribbean, capturing their clicking vocalizations and what the animals were doing when they made them. He found that the whales seemed to use specific patterns of sound, called codas, to identify one another. They learn these codas much the way toddlers learn words and names, by repeating sounds the adults around them make.

Having decoded a few of these codas manually, Gero and his colleagues began to wonder whether they could use AI to speed up the translation. As a proof of concept, the team fed some of Gero's recordings to a neural network, an algorithm that learns skills by analyzing data. It was able to correctly identify a small subset of individual whales from the codas 99 percent of the time. Next the team set an ambitious new goal: listen to large swathes of the ocean in the hopes of training a computer to learn to speak whale. Project CETI, for which Gero serves as lead biologist, plans to deploy an underwater microphone attached to a buoy to record the vocalizations of Dominica's resident whales around the clock.
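To make the proof of concept concrete, here is a minimal, purely illustrative sketch of the general idea: classify which whale produced a coda from the timing of its clicks. Project CETI's actual network, data and features are not shown here; the whale names, rhythms and the random-forest model below are assumptions for demonstration only.

```python
# Minimal sketch of identifying which whale produced a coda from its rhythm.
# Hypothetical: Project CETI's actual model and features are not shown here.
# A coda is represented by its inter-click intervals (seconds between clicks).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synthetic_coda(base_rhythm):
    """Fake a five-click coda: the whale's base rhythm plus a little timing jitter."""
    return base_rhythm + rng.normal(0.0, 0.01, size=len(base_rhythm))

# Pretend each whale has a characteristic rhythm (illustrative numbers only).
whale_rhythms = {"whale_A": [0.2, 0.2, 0.4, 0.2], "whale_B": [0.3, 0.1, 0.1, 0.3]}
X, y = [], []
for name, rhythm in whale_rhythms.items():
    for _ in range(200):
        X.append(synthetic_coda(np.array(rhythm)))
        y.append(name)

X_train, X_test, y_train, y_test = train_test_split(np.array(X), y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```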

As sensors have gotten cheaper and technologies such as hydrophones, biologgers and drones have improved, the amount of animal data has exploded. There's suddenly far too much for biologists to sift through efficiently by hand. AI thrives on vast quantities of information, though. Large language models such as ChatGPT must ingest massive amounts of text to learn how to respond to prompts: GPT-3 was trained on around 45 terabytes of text data, a good chunk of the entire Library of Congress. Early models required humans to classify much of those data with labels. In other words, people had to teach the machines what was important. But the next generation of models learned how to "self-supervise," automatically determining what's essential and independently learning to predict which words come next in a sequence.
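Here is a toy illustration of what "self-supervised" means: the raw text itself supplies the training signal, with no human-written labels. Real large language models use deep neural networks rather than the simple word counts sketched below, and the tiny corpus is made up.

```python
# Toy illustration of self-supervised next-word prediction: the "labels" are
# simply the words that follow in the text, so no human annotation is needed.
from collections import Counter, defaultdict

corpus = "the whale clicks and the whale dives and the crow calls".split()

# Count, for each word, which words follow it in the raw text.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the continuation seen most often in training, if any."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> 'whale' (the most frequent word after 'the')
```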

In 2017 two research groups discovered a way to translate between human languages without the need for a Rosetta stone. The discovery hinged on turning the semantic relations between words into geometric ones. Machine-learning models are now able to translate between unknown human languages by aligning their shapes—using the frequency with which words such as "mother" and "daughter" appear near each other, for example, to accurately predict what comes next. "There's this hidden underlying structure that seems to unite us all," Raskin says. "The door has been opened to using machine learning to decode languages that we don't already know how to decode."
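A minimal sketch of the geometric idea follows, using an orthogonal Procrustes fit to rotate one tiny, made-up embedding space onto another. The 2-D vectors and the anchor words are illustrative assumptions; the 2017 methods learn such mappings at scale and can do so without any known word pairs.

```python
# Minimal sketch of aligning two embedding "shapes" with an orthogonal rotation
# (Procrustes). The vectors and anchor pairs are invented for illustration; the
# published methods learn such mappings at scale, even without anchor pairs.
import numpy as np
from scipy.linalg import orthogonal_procrustes

# Tiny fake embedding spaces for two "languages" (same structure, rotated).
lang_a = {"mother": [1.0, 0.2], "daughter": [0.9, 0.4], "water": [-0.5, 0.8]}
angle = 0.7
R_true = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]])
lang_b = {w: (R_true @ np.array(v)).tolist() for w, v in lang_a.items()}

words = ["mother", "daughter", "water"]
A = np.array([lang_a[w] for w in words])
B = np.array([lang_b[w] for w in words])

# Find the rotation that best maps space A onto space B.
R, _ = orthogonal_procrustes(A, B)

# "Translate": rotate a word from language A, then find its nearest neighbor in B.
query = np.array(lang_a["mother"]) @ R
nearest = min(lang_b, key=lambda w: np.linalg.norm(query - np.array(lang_b[w])))
print(nearest)  # -> 'mother'
```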

The field hit another milestone in 2020, when natural-language processing began to be able to "treat everything as a language," Raskin explains. Take, for example, DALL-E 2, one of the AI systems that can generate realistic images based on verbal descriptions. It maps the shapes that represent text to the shapes that represent images with remarkable accuracy—exactly the kind of "multimodal" analysis the translation of animal communication will probably require.

Many animals use different modes of communication simultaneously, just as humans use body language and gestures while talking. Any actions made immediately before, during, or after uttering sounds could provide important context for understanding what an animal is trying to convey. Traditionally, researchers have cataloged these behaviors in a list known as an ethogram. With the right training, machine-learning models could help parse these behaviors and perhaps discover novel patterns in the data. Scientists writing in the journal Nature Communications last year, for example, reported that a model found previously unrecognized differences in Zebra Finch songs that females pay attention to when choosing mates. Females prefer partners that sing like the birds the females grew up with.

You can already use one kind of AI-powered analysis with Merlin, a free app from the Cornell Lab of Ornithology that identifies bird species. To identify a bird by sound, Merlin takes a user's recording and converts it into a spectrogram—a visualization of the volume, pitch and length of the bird's call. The model is trained on Cornell's audio library, against which it compares the user's recording to predict the species identification. It then compares this guess to eBird, Cornell's global database of observations, to make sure it's a species that one would expect to find in the user's location. Merlin can identify calls from more than 1,000 bird species with remarkable accuracy.
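The sketch below illustrates the spectrogram step and a nearest-neighbor comparison against a tiny stand-in library. It is not Merlin's actual code; the synthetic calls, species names and matching rule are assumptions for illustration.

```python
# Minimal sketch of the spectrogram idea: convert a recording into a
# time-frequency representation, then compare it against a small reference
# library. Not Merlin's code; the synthetic tones and matching are illustrative.
import numpy as np
from scipy.signal import spectrogram

FS = 22050  # sample rate in Hz

def avg_spectrum(freq_hz, noise=0.0, seed=0):
    """Average spectrum of a one-second synthetic 'call' at the given pitch."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0, 1.0, FS, endpoint=False)
    call = np.sin(2 * np.pi * freq_hz * t) + noise * rng.normal(size=FS)
    _, _, power = spectrogram(call, fs=FS, nperseg=1024)
    return power.mean(axis=1)

# Tiny stand-in for the audio library: one reference spectrum per "species".
library = {"species_A": avg_spectrum(3000), "species_B": avg_spectrum(5000)}

# A new, noisy field recording of a 3 kHz call should match species_A.
query = avg_spectrum(3000, noise=0.5, seed=42)
best = min(library, key=lambda name: np.linalg.norm(query - library[name]))
print(best)  # -> 'species_A'
```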

But the world is loud, and singling out the tune of one bird or whale from the cacophony is difficult. The challenge of isolating and recognizing individual speakers, known as the cocktail party problem, has long plagued efforts to process animal vocalizations. In 2021 the Earth Species Project built a neural network that can separate overlapping animal sounds into individual tracks and filter background noise, such as car honks—and it released the open-source code for free. It works by creating a visual representation of the sound, which the neural network uses to determine which pixel is produced by which speaker. In addition, the Earth Species Project recently developed a so-called foundational model that can automatically detect and classify patterns in datasets.
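The core idea behind that separation, assigning each time-frequency "pixel" of the spectrogram to one caller, can be sketched as follows. This is not the Earth Species Project's model: here the mask is computed from sources we already know, whereas a real neural network has to predict it from the mixture alone.

```python
# Minimal sketch of spectrogram masking, the idea behind source separation:
# decide, pixel by pixel, which caller dominates each time-frequency cell.
# Here the mask comes from known sources; a real model must predict it.
import numpy as np
from scipy.signal import spectrogram

FS = 8000
t = np.linspace(0, 1.0, FS, endpoint=False)
bird = np.sin(2 * np.pi * 2000 * t)   # high-pitched caller
whale = np.sin(2 * np.pi * 200 * t)   # low-pitched caller
mixture = bird + whale

_, _, S_bird = spectrogram(bird, fs=FS, nperseg=256)
_, _, S_whale = spectrogram(whale, fs=FS, nperseg=256)
_, _, S_mix = spectrogram(mixture, fs=FS, nperseg=256)

# Ideal binary mask: 1 wherever the bird is louder than the whale.
mask = (S_bird > S_whale).astype(float)
bird_only = mask * S_mix  # mixture spectrogram with only the bird's energy kept

print("fraction of pixels assigned to the bird:", mask.mean())
```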

New Caledonian Crows, which are famous for their toolmaking abilities, have regionally distinctive vocalizations that could one day be deciphered using AI. Credit: Jean-Paul Ferrero/Auscape International Pty Ltd/Alamy Stock Photo

Not only are these tools transforming research, but they also have practical value. If scientists can translate animal sounds, they may be able to help imperiled species. The Hawaiian Crow, known locally as the 'Alalā, went extinct in the wild in the early 2000s. The last birds were brought into captivity to start a conservation breeding program. Expanding on his work with the New Caledonian Crow, Rutz is now collaborating with the Earth Species Project to study the Hawaiian Crow's vocabulary. "This species has been removed from its natural environment for a very long time," he says. He is developing an inventory of all the calls the captive birds currently use. He'll compare that to historical recordings of the last wild Hawaiian Crows to determine whether their repertoire has changed in captivity. He wants to know whether they may have lost important calls, such as those pertaining to predators or courtship, which could help explain why reintroducing the crow to the wild has proved so difficult.

Machine-learning models could someday help us figure out our pets, too. For a long time animal behaviorists didn't pay much attention to domestic pets, says Con Slobodchikoff, author of Chasing Doctor Dolittle: Learning the Language of Animals. When he began his career studying prairie dogs, he quickly gained an appreciation for their sophisticated calls, which can describe the size and shape of predators. That experience helped to inform his later work as a behavioral consultant for misbehaving dogs. He found that many of his clients completely misunderstood what their dog was trying to convey. When our pets try to communicate with us, they often use multimodal signals, such as a bark combined with a body posture. Yet "we are so fixated on sound being the only valid element of communication, that we miss many of the other cues," he says.

Now Slobodchikoff is developing an AI model aimed at translating a dog's facial expressions and barks for its owner. He has no doubt that as researchers expand their studies to domestic animals, machine-learning advances will reveal surprising capabilities in pets. "Animals have thoughts, hopes, maybe dreams of their own," he says.

Farmed animals could also benefit from such depth of understanding. Elodie F. Briefer, an associate professor in animal behavior at the University of Copenhagen, has shown that it's possible to assess animals' emotional states based on their vocalizations. She recently created an algorithm trained on thousands of pig sounds that uses machine learning to predict whether the animals were experiencing a positive or negative emotion. Briefer says a better grasp of how animals experience feelings could spur efforts to improve their welfare.

But as good as language models are at finding patterns, they aren't actually deciphering meaning—and they definitely aren't always right. Even AI experts often don't understand how algorithms arrive at their conclusions, making them harder to validate. Benjamin Hoffman, who helped to develop the Merlin app before joining the Earth Species Project, says that one of the biggest challenges scientists now face is figuring out how to learn from what these models discover.

"The choices made on the machine-learning side affect what kinds of scientific questions we can ask," Hoffman says. Merlin Sound ID, he explains, can help detect which birds are present, which is useful for ecological research. It can't, however, help answer questions about behavior, such as what types of calls an individual bird makes when it interacts with a potential mate. In trying to interpret different kinds of animal communication, Hoffman says researchers must also "understand what the computer is doing when it's learning how to do that."

Daniela Rus, director of the Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory, leans back in an armchair in her office, surrounded by books and stacks of papers. She is eager to explore the new possibilities for studying animal communication that machine learning has opened up. Rus previously designed remote-controlled robots to collect data for whale-behavior research in collaboration with biologist Roger Payne, whose recordings of humpback whale songs in the 1970s helped to popularize the Save the Whales movement. Now Rus is bringing her programming experience to Project CETI. Sensors for underwater monitoring have rapidly advanced, providing the equipment necessary to capture animal sounds and behavior. And AI models capable of analyzing those data have improved dramatically. But until recently, the two disciplines hadn't been joined.

At Project CETI, Rus's first task was to isolate sperm whale clicks from the background noise of the ocean realm. Sperm whales' vocalizations were long compared to binary code in the way that they represent information. But they are more sophisticated than that. After she developed accurate acoustic measurements, Rus used machine learning to analyze how these clicks combine into codas, looking for patterns and sequences. "Once you have this basic ability," she says, "then we can start studying what are some of the foundational components of the language." The team will tackle that question directly, Rus says, "analyzing whether the [sperm whale] lexicon has the properties of language or not."

But grasping the structure of a language is not a prerequisite to speaking it—not anymore, anyway. It's now possible for AI to take three seconds of human speech and then hold forth at length in the same voice, mimicking its patterns and intonations exactly. In the next year or two, Raskin predicts, "we'll be able to build this for animal communication." The Earth Species Project is already developing AI models that emulate a variety of species, with the aim of having "conversations" with animals. He says two-way communication will make it that much easier for researchers to infer the meaning of animal vocalizations.

In collaboration with outside biologists, the Earth Species Project plans to test playback experiments, playing an artificially generated call to Zebra Finches in a laboratory setting and then observing how the birds respond. Soon "we'll be able to pass the finch, crow or whale Turing test," Raskin asserts, referring to the point at which the animals won't be able to tell they are conversing with a machine rather than one of their own. "The plot twist is that we will be able to communicate before we understand."

The prospect of this achievement raises ethical concerns. Karen Bakker, a digital innovations researcher and author of The Sounds of Life: How Digital Technology Is Bringing Us Closer to the Worlds of Animals and Plants, explains that there may be unintended ramifications. Commercial industries could use AI for precision fishing by listening for schools of target species or their predators; poachers could deploy these techniques to locate endangered animals and impersonate their calls to lure them closer. For animals such as humpback whales, whose mysterious songs can spread across oceans with remarkable speed, the creation of a synthetic song could, Bakker says, "inject a viral meme into the world's population" with unknown social consequences.

So far the organizations at the leading edge of this animal-communication work are nonprofits like the Earth Species Project that are committed to open-source sharing of data and models and staffed by enthusiastic scientists driven by their passion for the animals they study. But the field might not stay that way—profit-driven players could misuse this technology. In a recent article in Science, Rutz and his co-authors noted that "best-practice guidelines and appropriate legislative frameworks" are urgently needed. "It's not enough to make the technology," Raskin warns. "Every time you invent a technology, you also invent a responsibility."

Designing a "whale chatbot," as Project CETI aspires to do, isn't as simple as figuring out how to replicate sperm whales' clicks and whistles; it also demands that we imagine an animal's experience. Despite major physical differences, humans actually share many basic forms of communication with other animals. Consider the interactions between parents and offspring. The cries of mammalian infants, for example, can be incredibly similar, to the point that white-tailed deer will respond to whimpers whether they're made by marmots, humans or seals. Vocal expression in different species can develop similarly, too. Like human babies, harbor seal pups learn to change their pitch to target a parent's eardrums. And both baby songbirds and human toddlers engage in babbling—a "complex sequence of syllables learned from a tutor," explains Johnathan Fritz, a research scientist at the University of Maryland's Brain and Behavior Initiative.

Whether animal utterances are comparable to human language in terms of what they convey remains a matter of profound disagreement, however. "Some would assert that language is essentially defined in terms that make humans the only animal capable of language," Bakker says, with rules for grammar and syntax. Skeptics worry that treating animal communication as language, or attempting to translate it, may distort its meaning.

Raskin shrugs off these concerns. He doubts animals are saying "pass me the banana," but he suspects we will discover some basis for communication in common experiences. "It wouldn't surprise me if we discovered [expressions for] 'grief' or 'mother' or 'hungry' across species," he says. After all, the fossil record shows that creatures such as whales have been vocalizing for tens of millions of years. "For something to survive a long time, it has to encode something very deep and very true."

Ultimately real translation may require not just new tools but the ability to see past our own biases and expectations. Last year, as the crusts of snow retreated behind my house, a pair of Sandhill Cranes began to stalk the brambles. A courtship progressed, the male solicitous and preening. Soon every morning one bird flapped off alone to forage while the other stayed behind to tend their eggs. We fell into a routine, the birds and I: as the sun crested the hill, I kept one eye toward the windows, counting the days as I imagined cells dividing, new wings forming in the warm, amniotic dark.

Then one morning it ended. Somewhere behind the house the birds began to wail, twining their voices into a piercing cry until suddenly I saw them both running down the hill into the stutter start of flight. They circled once and then disappeared. I waited for days, but I never saw them again.

Wondering if they were mourning a failed nest or whether I was reading too much into their behavior, I reached out to George Happ and Christy Yuncker, retired scientists who for two decades shared their pond in Alaska with a pair of wild Sandhill Cranes they nicknamed Millie and Roy. They assured me that they, too, had seen the birds react to death. After one of Millie and Roy's colts died, Roy began picking up blades of grass and dropping them near his offspring's body. That evening, as the sun slipped toward the horizon, the family began to dance. The surviving colt joined its parents as they wheeled and jumped, throwing their long necks back to the sky.

Happ knows critics might disapprove of their explaining the birds' behaviors as grief, considering that "we cannot precisely specify the underlying physiological correlates." But based on the researchers' close observations of the crane couple over a decade, he writes, interpreting these striking reactions as devoid of emotion "flies in the face of the evidence."

Everyone can eventually relate to the pain of losing a loved one. It's a moment ripe for translation.

Perhaps the true value of any language is that it helps us relate to others and in so doing frees us from the confines of our own minds. Every spring, as the light swept back over Yuncker and Happ's home, they waited for Millie and Roy to return. In 2017 they waited in vain. Other cranes vied for the territory. The two scientists missed watching the colts hatch and grow. But last summer a new crane pair built a nest. Before long, their colts peeped through the tall grass, begging for food and learning to dance. Life began a new cycle. "We're always looking at nature," Yuncker says, "when really, we're part of it."


Artificial Intelligence

Although it constantly seems as though we're in the midst of a robotics- and artificial intelligence-driven revolution, a number of tasks continue to elude even the best machine learning algorithms and robots. The clothing industry is an excellent example, where flimsy materials can easily trip up robotic manipulators. But one task like this that might soon be solved is packing cargo into trucks, as FedEx is trying to do with one of its new robots.

Part of the reason this task is so difficult is that packing problems, similar to "traveling salesman" problems, are surprisingly complex. The packages are not presented to the robot in any particular order and need to be placed efficiently according to weight and size. This robot, called DexR, uses artificial intelligence paired with an array of sensors to gauge each package's dimensions, which allows it to plan stacking and ordering configurations and ensure a secure fit among all of the packages. The robot must also be capable of quickly adapting if any packages shift during stacking, re-ordering or re-stacking them as needed.
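To see why packing is hard even in a stripped-down, one-dimensional form, here is a minimal sketch of the classic first-fit-decreasing heuristic for bin packing. It is not DexR's algorithm; the package sizes and truck capacity are made-up numbers.

```python
# Minimal sketch of the first-fit-decreasing heuristic for 1-D bin packing.
# Illustrative only: not DexR's algorithm; sizes and capacity are invented.

def first_fit_decreasing(package_sizes, truck_capacity):
    """Greedily place packages (largest first) into the first section with room."""
    sections = []   # remaining capacity of each truck section opened so far
    placement = []  # (package_size, section_index)
    for size in sorted(package_sizes, reverse=True):
        for i, remaining in enumerate(sections):
            if size <= remaining:
                sections[i] -= size
                placement.append((size, i))
                break
        else:  # no existing section fits; open a new one
            sections.append(truck_capacity - size)
            placement.append((size, len(sections) - 1))
    return placement, len(sections)

if __name__ == "__main__":
    sizes = [4, 8, 1, 4, 2, 1, 7, 3]  # hypothetical package volumes
    plan, sections_used = first_fit_decreasing(sizes, truck_capacity=10)
    print(plan, sections_used)
```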

As robotics platforms and artificial intelligence continue to improve, it's likely we'll see a flurry of complex problems like these solved by machines instead of by humans. Real-world tasks are often more complex than they seem; as anyone with a printer and a PC LOAD LETTER error can attest, even handling single sheets of paper can be a difficult task for a robot. Interfacing with these types of robots can be a walk in the park, though, provided you read the documentation first.


Opinion: Here Are The Jobs AI Will Impact Most

Editor's Note: This is the second installment of a CNN Opinion project dedicated to examining the potential and the risks of artificial intelligence. "Our AI future: promise and peril" explores how AI will affect our lives, the way we work and how we understand ourselves.

CNN  — 

"AI is coming for your job."

That's just one variation of the many headlines you've probably seen ever since ChatGPT exploded in popularity and won the world's attention late last year. But is it true?

Back in April, Dropbox announced it was cutting 500 employees. In May, outplacement firm Challenger, Gray & Christmas reported nearly 4,000 job cuts tied to the technology. And in July, the founder of an e-commerce startup said he laid off 90% of his support team. The common reason cited? You guessed it: artificial intelligence.

Goldman Sachs economists have estimated that 300 million full-time jobs across the globe could be automated in some way by the newest wave of AI, with up to a quarter of all jobs being completely done by AI.

Indeed, large language models like ChatGPT have demonstrated a pretty remarkable ability to write code, offer detailed instructions for different tasks, pass a law school bar exam, and even express empathy when answering medical questions. And while this technology has the potential to cause widespread disruption, the effects may not be felt evenly across the workforce, with white-collar workers likely to be more affected than manual laborers.

AI isn't always better, faster or cheaper, though. In fact, current iterations are prone to making mistakes and spitting out false information. News outlet CNET had to issue several corrections after it used an AI tool to help write stories. And some workers, including members of the International Association of Machinists and Aerospace Workers union, have said that their workload actually increased since their companies implemented new AI tools.

In some industries, experts have suggested a future in which AI can assist humans rather than replace them entirely. In others, artificial intelligence may have little to no impact at all.

To get a better sense of the effect AI might have on different industries across the labor market, we reached out to experts in medicine, law, art, retail, film, tech, education and agriculture to address two questions: 1) How will AI change the nature of work? And 2) How will AI change the labor force in their specific industry?

Read on to see what they had to say. The views expressed in this commentary are their own.

Erich S. Huang is the head of Clinical Informatics at Verily and former chief data officer for quality at Duke Health and assistant dean for Biomedical Informatics at Duke University School of Medicine.

Imagine you are sitting in an exam room. You are a 47-year-old with a 16-year-old daughter and a 10-year-old son. Last week you had your annual screening mammogram and the radiologist identified a suspicious lesion. This week you have an ultrasound and a core needle biopsy. You've never needed a knowledgeable and compassionate doctor more than now.

How much of this experience do you feel comfortable outsourcing to artificial intelligence? How do we position algorithms in settings where we still need professionals to sit down, look you in the eye, understand who you are as a person, help you understand what is going to happen next and answer all of your questions?

"While AI might simulate compassion, beyond chatbot interactions, AI cannot truly empathize or be emotionally anticipatory the way human clinical professionals can."

Erich S. Huang

I've worked on AI in biomedicine for my entire career. I absolutely believe in its potential and know that AI will certainly be a part of medicine as we work to improve and further personalize care. AI can win back time and space for clinicians and help to reduce the administrative burden that is mostly tangential to direct patient care.

How often do you notice your doctor or nurse looking at a screen rather than looking at you? Several hours and thousands of mouse clicks a day are laboriously devoted to entering data into an electronic health record. Many health care person hours are spent on billing and reimbursement rather than patient care. These represent low-hanging opportunities for automation that will help your doctor be more present for you.

But AI should not take away clinical jobs. While AI might simulate compassion, beyond chatbot interactions, AI cannot truly empathize or be emotionally anticipatory the way human clinical professionals can.

Graduating from medical school, we raise our right hands and pledge the Hippocratic Oath. Machine learning algorithms and artificial intelligence do not. The best clinicians make data-driven decisions while helping you and your family understand your options with compassion. They are professionals. Algorithms are not.

Think of navigation aids like Apple or Google Maps: We entrust algorithms to evaluate factors such as traffic and road construction to find us the best route to our destination. We still drive the car. We still scan the road ahead for changing conditions and pump the brakes (even with self-driving cars) when we need to react quickly.

We must do the same for health care. Our task is to use algorithms to marshal data to efficiently assist human professionals to help other human beings to better health. How we care for patients should not be "artificial."

Regina Barzilay is a distinguished professor for AI and health in the Electrical Engineering and Computer Science Department at MIT. She is the AI lead of Jameel Clinic for Machine Learning and Health. Barzilay is also a MacArthur Fellow and a member of the National Academy of Engineering, and the American Academy of Arts and Sciences.

In 2016, AI pioneer Geoffrey Hinton made a bold prediction that within five to 10 years, AI models would outperform humans in reading medical images, saying, "People should stop training radiologists now."

He was right in some ways and wrong in others. Most clinicians refer to this quote as an example of AI hype, citing the significant shortage of radiologists, who are still very much in demand today. But he was right about AI's abilities — in some areas of clinical AI, primarily radiology, machines indeed match and even outperform human experts. Among AI practitioners, Hinton's comments often ignite feelings of frustration over the increasing gap between the performance capacity of these existing tools and the slow rate of their adoption in health care systems.

"Smart AI algorithms, which are trained on large-scale medical data and equipped with powerful computing, can go beyond what is humanly possible, eliminate care delays and reduce the cost of health care."

Regina Barzilay

Smart AI algorithms, which are trained on large-scale medical data and equipped with powerful computing, can go beyond what is humanly possible, eliminate care delays and reduce the cost of health care. We already see research models that can diagnose diseases years prior to symptom occurrence, predict an individual patient's response to intervention and personalize the treatment.

But real-world integration of AI in health care has been slow due to a number of reasons, from the initial cost of adoption to qualms about safety and regulation. Based on my experience collaborating with hospitals, health care systems are more likely to utilize AI to reduce the administrative burden in care management, fueled by advancements in natural language processing tools (such as ChatGPT) that can automate the transcription of doctors' notes, help with scheduling and streamline office support. Instead of replacing doctors, this technology can help address issues of burnout and allow health care providers to focus more on improving the patient experience.

But the more fundamental change to health care will come from the uptake of new AI-powered diagnostic and treatment tools which will shift late-disease treatment to prevention and early-stage interventions. In the same way that e-commerce provides recommendations tailored to a consumer, AI-empowered medicine will eventually be personalized.

To achieve this vision, advancements in AI are not sufficient on their own; clinicians, regulators and the general public have a role to play in determining the extent to which AI will be adopted in hospitals and doctors' offices.

Daniel W. Linna Jr. has a joint appointment at Northwestern's Pritzker School of Law and McCormick School of Engineering. Dan's research and teaching focus is on using AI for legal services and the regulation of AI in society. Previously, Dan was a litigator and equity partner at Honigman, a large law firm, and an IT manager, developer and consultant.

Artificial intelligence will better equip society to uphold the values and achieve the goals of the law. With AI assistance, lawyers can spend more time working on the challenges that attracted many of us to the law, such as eradicating inequality, ensuring access to justice, safeguarding democracy and strengthening and expanding the rule of law.

For instance, we can develop AI tools to help individuals understand their responsibilities and rights, and preserve and enforce those rights. At Northwestern University's CS+Law Innovation Lab, where my colleague and I oversee teams of law and computer science students who build prototype technologies, we have worked with the nonprofit Law Center for Better Housing to improve Rentervention, a chatbot that helps tenants in disputes with landlords. If a landlord does not return a security deposit, for example, Rentervention can help tenants determine if they are entitled to the security deposit and, if so, help draft a letter demanding its return.

People in businesses, large and small, are already using chatbots, AI assistants and other AI tools to help them comply with laws, regulations and internal policies. AI tools specifically developed for legal tasks can help them draft and negotiate contracts, make business decisions consistent with legal and ethical principles and proactively identify potential problems that they should discuss with a lawyer.

"…in a business dispute for nonpayment of goods, an AI system could predict the likelihood of success and create initial drafts of legal briefs … based on the AI system's analysis of the judge's past written decisions."

Daniel W. Linna Jr.

For lawyers, this means that AI can automate or augment many legal tasks that they perform. Most lawyers spend a lot of time finding applicable laws, organizing information, spotting common issues, performing basic analysis and drafting formulaic language in emails, memos, forms, contracts and briefs. AI systems will be able to do this faster, cheaper and better. Large language models, like those behind ChatGPT, have significantly increased the capabilities of these systems. Established legal information providers and many startups are rapidly developing and releasing AI systems that are "fine-tuned" or specialized for legal tasks.

Unsurprisingly, AI is changing the skills lawyers need. To responsibly use AI, they will need a functional understanding of the technology to evaluate the benefits and risks of using it, such as how it might fail and the ways in which it might be biased or unfair.

Lawyers will also need to exercise judgment to tailor an AI system's output for specific situations. For example, in a business dispute for nonpayment of goods, an AI system could predict the likelihood of success and create initial drafts of legal briefs, using the specific language and arguments that are most likely to persuade the assigned judge to rule in favor of the client, based on the AI system's analysis of the judge's past written decisions.

A lawyer will need to determine if the prediction and the proposed language and arguments are a good fit given the client's goals and interests. Perhaps what would be a winning argument, for example, would cause damage to the client's brand in the eye of the public, and the lawyer should revise it.

Lawyers will continue to play a significant role as governments update laws, regulations and policies for emerging technologies, including to address AI bias, discrimination, privacy, liability and intellectual property. Additionally, new roles are emerging in the legal industry, such as legal engineers who build systems, legal data scientists and legal operations professionals. And there is significant unmet demand for legal services from individuals and even businesses. Considering all of this, the best long-term prediction now is that there will continue to be a stable number of jobs for lawyers and other legal professionals, so long as the legal industry embraces technology and trains professionals to develop important complementary skills.

While there is uncertainty about the future, there have never been more opportunities for lawyers to make an impact on society.

Refik Anadol is a media artist and director who owns and operates Refik Anadol Studio and teaches at UCLA's Department of Design Media Arts. His work locates creativity at the intersection of art, science and technology, and has been featured at landmark institutions including The Museum of Modern Art, The Centre Pompidou and Walt Disney Concert Hall.

Artificial intelligence and automation will initially cause some shifts in the labor force in the arts, but I do think that in the long run, it will create more jobs than it will disrupt.

For example, we already need an army of ethicists, translators, linguists and humanities professionals to oversee chatbots and implement policies to make sure they make fewer mistakes. And because AI will continue to push human imagination — whether for the pursuit of meaningful human-AI collaborations or to prove that man-made art is better than AI-generated art — it will give rise to more areas for further professional training. We will encounter new art movements and new forms of digital aesthetics in the near future, and those will be created by humans, not AI.

For almost a decade, I have been using AI as a collaborator in my media art practice. I use publicly available data sets to train AI algorithms, ranging from cities' weather patterns to photographs of California's national parks. Since the pandemic, my focus has been to compile the largest nature-themed data set and contribute to its preservation by creating archives of images of disappearing natural places or through fundraising.

"Glacier Dreams" is a series of AI-generated art installations. Concerned about ethically sourced data, Refik Anadol collected his own images, sounds and climate data to create it. Courtesy of Refik Anadol Studio.

"We will encounter new art movements and new forms of digital aesthetics in the near future, and those will be created by humans, not AI."

Refik Anadol

Our work changes with every new AI-related invention, because we engage with deep research to first understand and then incorporate novel technologies into our works. Generative AI uses still-evolving projection algorithms that can learn from existing artifacts to create novel artifacts that accurately reflect the features of the initial data without repeating them. It provides us the possibility to train algorithms with any image, sound or even scent data. The current hype around generative AI models such as text-image generators and natural language chatbots made us put more emphasis on alternative data collection methods. We are committed to contributing to the practice of and dialogues around safeguarding against data bias, protecting data privacy and full transparency about how data is collected and used in training algorithms.

A big challenge of using generative AI in art is figuring out how to provide the models with original and authentic data for the kinds of artistic output that I imagine in the beginning. For example, for our most recent project, "Glacier Dreams" — a series of multisensory AI art installations — we decided not to use models already trained with existing glacier images. Making sure that the trained models use ethically sourced data in terms of consent, or even making sure that the data we collect from publicly available platforms fall under that category, is one of the major concerns in our field. So, in order to address these issues, we started to collect our own images, sounds and climate data. By traveling to our first destination, Iceland, we were able to capture the beginning of our own narrative of glaciers by taking our own images and videos.

I think that the increasing prevalence, accessibility and acceptance of AI-generated art will force not only artists but also writers, designers and other creatives to reconsider the meaning of creativity and push their imagination even further. This will require time, effort and, in some cases, a restructuring of methods and practices, but I am in favor of keeping an open mind while reviewing innovation through a respectful and critical lens.

Adam Elmachtoub is an associate professor in the Department of Industrial Engineering and Operations Research at Columbia University, specializing in machine learning, optimization and pricing algorithms for e-commerce and logistics.

Consider a grocery retailer or restaurant chain on the day that the NBA releases its playoff schedule. For cities that are hosting games, AI tools will one day be able to immediately realize that this news will adjust the demand forecast for foods like chicken wings and potato chips, which are associated with basketball viewing parties. AI tools will then quickly re-optimize decisions associated with inventory shipments, staffing and promotions.

Over the last decade, online and brick-and-mortar retail have leveraged many advances in AI, particularly in the fields of operations research and machine learning. Operations research methodologies are used for inventory management, price optimization and delivery logistics, while machine learning tools are used for forecasting demand, digesting product reviews and targeted advertising. In the next wave of AI, we will solve operations problems faster by learning from past data, while also predicting changes to demand at a more granular level (both in space and time).
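As a small, hypothetical illustration of how a demand forecast feeds an inventory decision, the sketch below applies the classic newsvendor rule: stock to the critical fractile of forecast demand. The numbers are invented, and real retail systems solve far richer versions of this problem.

```python
# Minimal illustration of a forecast-driven inventory decision: the classic
# newsvendor rule. All numbers are hypothetical; real systems are far richer.
from scipy.stats import norm

# Hypothetical forecast: chicken-wing demand on game night, in cases.
mean_demand, std_demand = 120, 25

unit_cost, sale_price, salvage = 3.0, 8.0, 1.0
underage = sale_price - unit_cost  # profit lost per case we're short
overage = unit_cost - salvage      # loss per unsold case

# Stock to the critical fractile of the forecast demand distribution.
critical_fractile = underage / (underage + overage)
order_quantity = norm.ppf(critical_fractile, loc=mean_demand, scale=std_demand)
print(f"stock about {order_quantity:.0f} cases")
```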

"For cities that are hosting games, AI tools will one day be able to immediately realize that this news will adjust the demand forecast for foods like chicken wings and potato chips, which are associated with basketball viewing parties."

Adam Elmachtoub

Suppose the "Wannabe" music video by the Spice Girls was released today, rather than in 1996, and went viral on social media. A clothing retailer with an AI system in place can pick up on this viral hit immediately and initiate designs for similar clothing styles as in the video. Machine learning can help predict the demand at a local level, while operations research tools can help to immediately start sourcing materials, optimize manufacturing and plan inventory shipments.

While it is possible that retailers might have data scientists that follow the NBA and social media closely, there are countless other events that AI systems will detect and react to in real time — with less human intervention and serendipity required. AI will help make data scientists and managers more efficient, but not necessarily take away jobs, as there will be more opportunities to leverage data and improve the customer experience. Of course, human workers will still be needed to manage AI systems, which can have trouble navigating through satire, false information and adversarial attacks. While some roles, such as operating a register or stocking shelves, might be replaced by AI-powered robots in the future, more jobs may open up in assisting customers with complex tasks such as returns or advice, as retailers compete more on (human) service quality.

Nisreen Ameen: Not sure what sunglasses suit you? AI can analyze your face and decide

Dr. Nisreen Ameen is a senior lecturer in digital marketing and co-director of the Digital Organisation and Society (DOS) Research Centre at Royal Holloway, University of London. Nisreen is also currently serving as vice president of the UK Academy of Information Systems (UKAIS). 

AI will be a game changer for retailers, and the value of this technology in the global retail market is expected to grow dramatically in the next few years.

AI will allow retailers to improve the online shopping experience and connect with their customers through personalization, whether it be through online advertisements or curated product pages. Instead of having customers scroll through hundreds of products to find one item that they like or need, selected products can be presented to meet the customer's tastes or demands, leading to higher engagement and increased sales.

AI can also transform the shopping experience in new and unique ways. Some retailers have already installed smart mirrors, which use augmented reality and artificial intelligence for virtual try-ons. These mirrors can suggest different sunglasses, for example, based on an analysis of the customer's face shape, or help customers visualize how certain beauty products will look. In some cases, customers can also create a digital avatar for online shopping, helping them to confidently select the best size and fit.

"These mirrors can suggest different sunglasses, for example, based on an analysis of the customer's face shape, or help customers visualize how certain beauty products will look."

Nisreen Ameen

AI will have an enormous impact on both the nature of work in retail and the labor force in this industry. Chatbots are already widely used in customer service, and AI-driven robots can assist with tasks like inventory monitoring and answering simple questions in retail stores, such as where to find certain items.

While AI can handle repetitive and time-consuming tasks and synthesize massive amounts of data, and some retail jobs could be in danger of being replaced, employees' input is still required for decision making and for tasks that demand empathy and emotional intelligence — particularly when it comes to branding, marketing and public relations.

For many employees, AI will redefine job descriptions, and the integration of this technology will require upskilling in order to work with AI and remain creative. Managers in retail should understand the potential and limitations of this new technology and focus on augmentation — utilizing AI in conjunction with human intelligence — instead of automation.

Theodore Kim is a professor of computer science at Yale, a former senior research scientist for Pixar and a two-time Scientific and Technical Academy Award winner.

If the Writers Guild of America and SAG-AFTRA, the union representing 160,000 actors, don't secure stricter guardrails against the use of AI during their negotiations with Hollywood studios, the film industry will end up relying less on the traditional writers, actors and directors who help bring movies to life.


