Opinion: An AI Moratorium Is Not The Answer To The Technology's Threats

Hundreds of technology leaders and researchers, including Steve Wozniak and Elon Musk, recently released a letter calling for a moratorium on artificial intelligence development, citing the risk to human society.

There are legitimate concerns about AI's impact on humanity, but a moratorium is unrealistic, especially while there is a reasonable alternative.

First, let's look at those concerns. There's worry that AI will dehumanize society by making us too soft or overly dependent on technology. AI could benefit a very few while eliminating jobs or replacing them with menial low-paying ones. AI applications could be biased, harming specific ethnic groups.

AI could be used in fraud and other crimes. AI could manipulate political discourse, causing social instability. AI could be used in autonomous weapons on and off the battlefield. And worst of all, AI might develop an intelligence or consciousness that would enable it to actively eliminate humans and other life forms.

A moratorium on AI development might be possible in a totalitarian society in which neighbors, friends and even family members would be compelled to report anyone involved in work on the technology — but not in a free country.

Scientists and engineers develop technology for a variety of reasons. Money is just one of the motivators. Scientists also yearn to explain and manipulate nature. For many technologists, being the first one to make a disruptive discovery is the ultimate motivator. In today's environment, you must always assume there will be competent competition.

Placing a moratorium on AI would slow its benefits, such as advances in health care, more efficient use of Earth's limited resources, progress against climate change, fewer traffic and transportation injuries, better language translation and greater human productivity. Stopping AI would ask our descendants to live lives similar to ours when much better lives may be possible.

The most obvious alternative to a moratorium is regulation. It is possible to create laws and regulations that would guide AI development, but that doesn't seem to be in the cards. The United States has not even created a privacy law, a necessary precursor to an AI regulatory law. The European Union has a privacy law, and so does California. But the United States has not been able to pass a privacy law because of disputes between and within the political parties, disputes between states and the federal government, and resistance from stakeholders.

But there are ways to erect guardrails that reduce AI risks while allowing beneficial development. The answer lies in the creation of a coalition or association that can bring stakeholders in industry, government and academia together to create standards and a legislative plan for AI. Standards should apply to the algorithms and data used to develop and train AI applications, making the applications more predictable and less biased. Standards could also be used to weed out products that are not up to par.

Stakeholders should agree to participate in and support this effort for the same reasons many technologists created the moratorium letter in the first place. The public is already concerned enough about what AI might do that a moratorium is being seriously discussed even though there haven't been any massive AI-caused layoffs or other catastrophes. If something bad does happen that can be blamed on AI, the public reaction could be severe, possibly resulting in draconian measures.

In their own self-interest, AI developers don't want bad actors, rogue players or incompetent developers to introduce applications that cause major backlash. They should be willing to come together and develop standards along with realistic legislation that can be the basis for federal regulations.

Jerry McNerney served in Congress representing part of the East Bay from 2007-23. For six years, he was chairperson of the House AI Caucus.


Opinion: AI Isn't Going Away. Instead Of Dystopian Worries, Use It To Boost Prosperity, Progress

With the invention of ChatGPT and similar tools, science fiction has become science fact. Despite dystopian novels that predict artificial intelligence will lead to human irrelevance and alarmists forecasting massive job cuts and widespread cheating in classrooms, this technology will lead to more prosperity and progress, not less.

Launched in November 2022, OpenAI's ChatGPT gained 100 million users worldwide in two months, faster than any other web application in history. Those who cheered it — and others who feared it — must accept its widespread impact. It suddenly made our world more complex, creating vast opportunities and challenges.

Bain, BuzzFeed, Google, Instacart, Microsoft and many other companies have been steadily reporting their applications of the technology. I argue that it will vastly improve the productivity and effectiveness of business and higher education.

Artificial intelligence has the potential to connect people across language barriers, increasing our productivity by quickly synthesizing information or generating new content at scale. Imagine having a dozen research assistants providing you with timely insights. For now, fact-checking must still be done manually after initial research and document drafting have been completed.

It also democratizes access to knowledge and helps with initial ideation and any type of content creation. The latest version, GPT-4, is capable of drafting lawsuits, passing standardized exams in the top 10% of scores, building websites from sketches and constructing software games in a matter of minutes. It's already making its way into third-party products.

To be fair, there are serious, legitimate concerns. The initial conversation in higher education focused on how ChatGPT robs students of the motivation to write and think for themselves and called for the return of pen-and-paper or oral exams to avoid plagiarism and cheating. Suddenly, students have access to tools that allow them to respond to assignments with machine-generated content in a matter of minutes.

The problem is not with students' use of the technology. It's with faculty's failure to adapt to the new reality. We should educate students about proper and improper technology use in our courses, motivate them to use it properly and revise assignments to raise the level of challenge, not try to police students through AI cheating-detection software, especially since evasion tools such as QuillBot already exist.

With the application of ChatGPT and Generative AI tools in professions everywhere, prominent university professors contend that the future will belong to those who can master working collaboratively with generative AI. We have entered a new era of human-machine collaboration that has been forecasted for years.

ChatGPT certainly has limitations. It sometimes provides incorrect information. It often incorporates biases in its responses because its training data includes historical human biases. As with every new technology, we also worry about its impact on jobs and job displacement. However, we once needed people to take care of horses. Then, we needed people to fix cars with internal combustion engines. Now, we need people to fix electric motors.

Despite the sensational headlines and distrust of the technology, downplaying its potential would be a huge mistake. We have already seen drastic improvements in the reliability of responses in just a few months. The industry is working on new tools to provide transparency to the reasoning behind the responses generated, and the first tools to help eliminate biases have already been previewed.

While the laws have not kept up with the technological developments, we will soon see courts address some of the pressing legal issues. And some regulation will follow, making the tools more acceptable.

Google and Microsoft are introducing generative AI to the office tools we use every day. It will be a part of our reality, whether we like it or not.

Beata M. Jones is a professor of business information systems practice at the TCU Neeley School of Business.


The AI Future Of Finance Is Now

Over the last three weeks I've received dozens of pitches about ChatGPT and generative AI, mainly from PR folks working for "experts" offering to comment on stories.

Far more interesting among those pitches has been the handful of messages from founders of new generative AI startups focused on financial services who had read something I'd written and thought I might like to hear more.

One of those came from David Plon, the co-founder and CEO of Portrait Analytics, a generative AI research platform for investment analysts, which was founded in 2022 but exited stealth on Thursday with the announcement of $3 million in pre-seed funding. The financing was led by .406 Ventures with participation from a few hedge funds.

"Ultimately, the vision I have is, essentially, to build an AI-powered junior analyst," said Plon.

In other words, Portrait is being built so it can answer any question or perform any task typically asked of a junior analyst at a hedge fund today: suggesting ideas, building financial models, creating pitch decks and authoring memos.

"The way I think about it, is that I know the analyst workflow really well," he said, having spent almost five years as an analyst at The Baupost Group in Boston. And that workflow and its processes and the required datasets is where his small team of developers and engineers have focused.

The human versions of these analysts spend untold hours each year poring over thousands of documents to research companies and stay current on their coverage areas.

Plon explained that Portrait's first product is a question-and-answer-based application that has both generative AI search and "summarization." Or, as the company states, "Portrait responds to users' tasks by extracting and synthesizing key information buried in company filings to produce crisp and factual responses that are fully auditable by users."
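
Portrait has not published its architecture, but the pattern it describes — extract relevant passages, synthesize an answer, keep it auditable — maps onto a now-familiar retrieval-augmented design. Below is a minimal sketch of that pattern; all names, the crude keyword scoring and the `llm` callable are assumptions for illustration, not Portrait's actual system.

```python
# Sketch of an extract-and-synthesize pipeline: retrieve the filing
# passages most relevant to a question, have a language model answer
# only from those passages, and return the passage IDs so every claim
# can be traced to its source. Names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Passage:
    doc_id: str  # e.g. "XYZ 10-K 2022, Item 7"
    text: str

def relevance(question: str, passage: Passage) -> int:
    """Crude keyword-overlap score, a stand-in for embedding search."""
    return len(set(question.lower().split()) & set(passage.text.lower().split()))

def answer(question: str, corpus: list[Passage], llm, k: int = 3) -> dict:
    top = sorted(corpus, key=lambda p: relevance(question, p), reverse=True)[:k]
    context = "\n\n".join(f"[{p.doc_id}] {p.text}" for p in top)
    prompt = (
        "Answer strictly from the excerpts below, citing the bracketed "
        f"source IDs you rely on.\n\n{context}\n\nQuestion: {question}"
    )
    # `llm` is any text-completion callable the caller supplies.
    return {"answer": llm(prompt), "sources": [p.doc_id for p in top]}
```

Returning the source IDs alongside the generated text is what would make a response "fully auditable" in the sense the company describes.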

"Down the road we hope and envision creating something that anyone involved in investing can use—I'd love, if in five years, that my mother's own financial advisor had access and used this on a daily basis," said Plon.

But it's not amassing and incorporating all the necessary data to power Portrait that is Plon's biggest expense.

"The biggest cost is the engineering time required to create a system that is both useful and reliable," said Plon.

Building the startup's ever-growing repository of data and its knowledge graph, while challenging, pales in comparison with the engineering challenge of creating its language model, he said.

As for data, sources will range from EDGAR (the SEC's Electronic Data Gathering, Analysis and Retrieval system), which is free and publicly available, to earnings call transcripts and other data that may not be publicly searchable but is available, and ultimately to piping in user data.
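
The EDGAR end of that pipeline is straightforward to illustrate: the SEC serves each company's filing index as JSON from a public endpoint. A minimal sketch follows; the User-Agent email is a placeholder the SEC asks callers to fill in with their own contact details.

```python
# Fetch a company's recent filings of a given form type from the SEC's
# public submissions endpoint. CIKs are zero-padded to 10 digits.
import json
import urllib.request

def recent_filings(cik: str, form_type: str = "10-K") -> list[dict]:
    url = f"https://data.sec.gov/submissions/CIK{cik.zfill(10)}.json"
    req = urllib.request.Request(
        url, headers={"User-Agent": "research you@example.com"}  # placeholder
    )
    with urllib.request.urlopen(req) as resp:
        recent = json.load(resp)["filings"]["recent"]
    # The index stores parallel arrays; zip them back into records.
    return [
        {"form": f, "filed": d, "accession": a}
        for f, d, a in zip(
            recent["form"], recent["filingDate"], recent["accessionNumber"]
        )
        if f == form_type
    ]

print(recent_filings("320193")[:3])  # three most recent 10-Ks for CIK 320193 (Apple)
```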

While the platform is in private beta, with plans to release access to analysts on its waiting list in the coming months, Portrait is just the latest in a string of generative AI products for advisors or with advisors on their roadmap.

Jan Szilagyi, CEO and co-founder of Toggle, created a cloud-based AI application with machine learning and natural language processing algorithms built in-house. Each day it looks at millions of pieces of data for its users and comes back with thousands of points of interest, called Toggle Insights.

I wrote about Toggle, which shares many similarities (at least from the outside looking in) with Portrait, back in August. It already has a far wider and more varied user base that includes hedge funds, banks and professional investors. It also has a wait list for its own generative experience, expected in the next few months.

Szilagyi's team is currently at work "teaching ChatGPT how to invest—not hallucinate … [giving it] a crash course in finance," as it states on Toggle's homepage.

"The exciting frontier we are at now—and it is unbelievably exciting—is that we are able to have two-way communication, you'll be able to ask follow-up questions," said Szilagyi, referring to the addition of generative AI with its language models to the already built and working ML and NLP technology under Toggle's hood.

He said advisors can ask the system, for example, about impacts on a client portfolio if the yield curve inverts and immediately get a response back along the lines of: "Here are the most vulnerable parts of your portfolio."

"You can then respond with another what if, you'll be able to control it simply by being able to articulate the question—it will provide an unparalleled ability to take the English language and convert it into computer code," said Szilagyi, adding that it was akin to having a digital Rosetta Stone and being able to not just read or translate hieroglyphics but in turn write them as well.

Adnan Masood, PhD, who heads the AI and Machine Learning group at global technology consultancy UST, said he has been struck by the massive public and media reaction to what appear to be instantaneous breakthroughs in generative AI, starting with ChatGPT.

"Those of us that are researchers in the field are not surprised, we have seen the painstaking evolution," he said, noting that while what ChatGPT does can seem almost to be like magic, it has taken prodigious research to get there.

And while the focus here is on financial services, Masood detailed other available or near-term developments where generative AI is already making, or will make, tremendous impacts, from health care to cybersecurity.

He said specific use cases include understanding customer sentiment at scale, by combining ML, NLP and generative AI to analyze customer service call logs, and reducing fraudulent insurance claims, by applying the same combination of technologies to claims data.
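
As a toy illustration of that first use case: cheap classical sentiment scoring can triage thousands of call logs, with the more expensive generative model reserved for summarizing only the calls that score worst. The `sentiment` and `llm` callables and the cutoff below are assumptions, not anything UST has described.

```python
# Triage call logs by sentiment score, then ask a generative model to
# summarize only the strongly negative ones.
def triage_calls(logs: list[str], sentiment, llm, cutoff: float = -0.5) -> str:
    flagged = [log for log in logs if sentiment(log) < cutoff]  # worst calls only
    if not flagged:
        return "No strongly negative calls today."
    return llm(
        "Summarize the recurring complaints in these call transcripts:\n\n"
        + "\n---\n".join(flagged)
    )
```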

Another use case in financial services is the lending industry, where tedious and traditionally human-driven loan decisioning occurs.

"Banks are looking at quantitative data but the textual unstructured qualitative data in terms of say, business plans, was not something that could previously be brought in in any scalable way," said Masood. "Now you can bring not only that sort of information but ingest local market conditions in the decisioning process."

In addition to the low-hanging fruit of content generation, advisors are likely to see some early useful advancements when it comes to compliance automation.

One example: applications that ingest trading notes, a task that has been a perfect use case for NLP over the last few years.

"Now with generative AI you can start to analyze good notes and bad notes in real time and ask the system to determine whether this note would pass an audit or not?" said Masood.

While the potential of AI, and generative AI in particular, is certainly awe-inspiring, Masood cautions that identifying risks and biases in the language models presents a challenge: not an insurmountable one, but one that many who are in awe of AI are not yet familiar enough with.

He also noted the recent exposure of ChatGPT users' personal information and chat titles due to an internal bug.

"Imagine that happening to a large financial services organization, there needs to be layered security and guardrails in place," he said.
