
Artificial Intelligence – Key Issues and Considerations – II

What’s been happening with AI?  This time nothing new technically, but a host of preparations for what’s coming.

While not really “The rise of the chatbots” (Rachel Metz, Bloomberg Tech Daily, May 25th), maybe we can say “It’s raining chatbots,” as “there are AI chatbots in drive-thrus.  They’ve been built into Snapchat.  They’re recommending recipes at BuzzFeed and, disturbingly, have replaced human assistance at the National Eating Disorders Association.”  But this piece was most interesting for the money being raised and assigned to them:  $450 million for Anthropic “in its last funding round,” bringing it to “more than $1 billion thus far”; $150 million for Character.AI; and $101 million for Stability AI – all hoping to change the current situation in which “none of these contenders has so far appeared to rival ChatGPT in terms of consumer popularity, name recognition, or funding” – not even the last, despite those sums.

Are we really looking on as “A Hiring Law Blazes a Path for A.I. Regulation” (Steve Lohr, The New York Times, May 25th)?  Well, although there have been AI regulations in place since at least 2021, every new one can be a meaningful precedent.  Now, in New York, “the city’s law requires companies using A.I. software in hiring to notify candidates that an automated system is being used.  It also requires companies to have independent auditors check the technology annually for bias.  Candidates can request and be told what data is being collected and analyzed.  Companies will be fined for violations.”  How this law is enforced, what firms and jobseekers say about it, and how often it will be broken will all get nationwide attention.

In a related area, “AI is here to stay; it’s time to update your HR policies” (Breck Dumas, Fox Business, May 27th).  Per the owner of a human resources firm, organizations will need to decide which products workers can use on the job, and for what, keeping in mind both AI’s great utility and its insidious data insecurity – and they should be starting to develop, document, and implement those rules now.

Among those voicing the greatest AI fears are the CEOs of major AI companies themselves.  While such views are controversial, it may still have been surprising to see that “A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn” (Kevin Roose, The New York Times, May 30th).  The statement, released that day by the nonprofit Center for A.I. Safety and “signed by more than 350 executives, researchers and engineers working in A.I.,” including the heads of OpenAI, Google DeepMind, and Anthropic, read, in full, “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”  That organization did not venture a view on how such a catastrophe would happen, but the danger of autonomous goal-seeking programs has long been understood.  There are now large groups on both sides of the fear/no fear divide, and arguments between them may not always be harmless and polite.

It’s not too early to look at “How AI will revolutionize politics in 2024, and why voters must be vigilant” (Brian Athey, Fox News, June 2nd).  Although much in AI will change over the next year-plus, we must already reckon with the difficulty we will have, given excellent-quality synthesized images and falsely attributed written statements, in “discerning reality.”  As well, “copywriting for fundraising emails, captions for social media posts, and scripts for campaign videos can now all be produced with an unprecedented level of speed, personalization, and diversity.”  All of these “are currently being navigated by people whose mandate is to win at all costs,” making ethical behavior sporadic at best.  The past two presidential elections told us a great deal about voters’ often tenuous perception of the truth, and the next may be vastly worse.

Finally, in an area adjoining AI, we see that “Robots could go full ‘Terminator’ after scientists create realistic, self-healing skin” (Emma Colton, Fox News, also on June 2nd).  Remember the line in one of those films warning that such automata could even have “bad breath”?  Researchers have now developed “layers of synthetic skin that can now self-recognize and align with each other when injured, simultaneously allowing the skin to continue functioning while healing.”  We may come to interact, unawares, with people who aren’t people in person as well as electronically.  From there… who knows?

Back for more with the next post.


