
Decoding the Black Box of AI: Regulation is the Future of Artificial Intelligence


What if, as with aircraft, we maintained a database that recorded every “flight” of an Artificial Intelligence (AI) system: in other words, a black box for AI? Picture an encoded token that lets us trace the specific context that led the AI to a particular answer.

Just as human neural networks can be studied – thanks to their predictable patterns, which allow for examination, diagnosis, and prediction – so could AI. Standardizing these logs, or metadata, and exposing them through an accessible external interface would enable continuous improvement.
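
To make this concrete, here is a minimal sketch in Python of what one standardized log entry and its encoded token might look like. The field names and the token scheme are hypothetical illustrations, not an existing standard:

import hashlib
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass
class InferenceRecord:
    # One hypothetical "black box" entry for a single AI response.
    model_id: str       # which model answered
    model_version: str  # exact build, weights, and configuration
    prompt: str         # the question as received
    response: str       # the answer as produced
    timestamp: str      # when the inference happened

    def token(self) -> str:
        # Encoded token identifying this exact inference context.
        payload = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(payload.encode("utf-8")).hexdigest()

record = InferenceRecord(
    model_id="example-llm",
    model_version="1.2.0",
    prompt="Is this headline satire?",
    response="Most likely satire.",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(record.token())  # reproducible identifier for auditors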

Just as open protocols on the Internet have enabled universal applications such as email, file transfer, and chat, the same principle could be applied to AI metadata. This would pave the way for third-party software capable of identifying anomalies or recommending the most suitable AI for a given type of question, creating an abstraction layer that leads to higher-quality responses.
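
As an illustration of the third-party tooling an open metadata format would permit, the following sketch routes a question to the most suitable AI based on published metadata tags. The registry, tags, and audit scores are invented for the example:

# Hypothetical registry built from openly published, standardized metadata.
REGISTRY = {
    "example-llm-science": {"tags": {"science", "citations"}, "audit_score": 0.97},
    "example-llm-casual": {"tags": {"humor", "chat"}, "audit_score": 0.88},
}

def recommend_model(question_tags):
    # Pick the registered model whose metadata best matches the question,
    # breaking ties by externally audited quality score.
    def score(item):
        _name, meta = item
        return (len(meta["tags"] & set(question_tags)), meta["audit_score"])
    best_name, _meta = max(REGISTRY.items(), key=score)
    return best_name

print(recommend_model({"science"}))  # -> example-llm-science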

This “black box” method has been used in the software industry for years. Engineers and programmers are familiar with terms such as “core dump”, “crash dump”, “system dump”, or “core file”. These terms refer to files that capture a process’s memory image at a specific moment, usually when the process has failed or ended abruptly.
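
By analogy, an AI service could write its own “dump” whenever a response fails, persisting the full inference context for later analysis. A minimal sketch, with all names hypothetical:

import json
import traceback
from datetime import datetime, timezone

def answer_with_dump(model, prompt, dump_path="ai_core_dump.json"):
    # Call the model; on failure, persist the full context, much like a core dump.
    try:
        return model(prompt)
    except Exception:
        dump = {
            "prompt": prompt,
            "model_repr": repr(model),
            "traceback": traceback.format_exc(),
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
        with open(dump_path, "w", encoding="utf-8") as f:
            json.dump(dump, f, indent=2)
        raise  # the failure still propagates; the dump is for auditors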

Humans understand the world through classifications (e.g., animals, vehicles, etc.). Similarly, AI is a new classification for a statistical and probabilistic model: one that automates the scientific method of accepting or rejecting a hypothesis, aided by advanced algorithms and computing power.

The Large Language Models (LLMs) that power OpenAI’s ChatGPT, Microsoft’s Bing, and Google’s Bard are examples. Parameters such as the learning rate, batch size, and number of layers are set and configured before the training phase begins. The same AI algorithm can therefore produce fundamentally different results if the parameters, weights, customizations, or training data are altered, implying an astronomical number of possible outcomes.
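
That divergence can be made concrete: two runs of the same algorithm are only comparable if their configuration is recorded, and the recorded configuration can itself serve as a version fingerprint. A sketch with invented values:

import hashlib
import json

# Two runs of the same algorithm; only the configuration differs.
config_a = {"learning_rate": 3e-4, "batch_size": 256, "num_layers": 48,
            "training_data": "scientific-articles-v1"}
config_b = {"learning_rate": 1e-4, "batch_size": 64, "num_layers": 48,
            "training_data": "social-media-v1"}

def version_fingerprint(config):
    # Stable identifier for a trained variant, derived from its configuration.
    blob = json.dumps(config, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()[:12]

print(version_fingerprint(config_a))  # two distinct fingerprints:
print(version_fingerprint(config_b))  # fundamentally different models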

For instance, a model could be calibrated with information from Nobel Prize-winning sources, promising greater accuracy, or it could be calibrated to better understand humor or children.

Imagine if the AI were trained on scientific articles, where credibility is gauged by citation count, rather than on information from social media. That choice would affect every user of that particular version, including applications that extend it.

We should treat AI as an ‘entity’ that can be cited. However, we would need unique identification numbers to verify the source that led to a particular response. Such a model would also help point out errors in an AI’s reasoning.
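
A citation could then combine the model’s version fingerprint with the token of the specific response, along the lines of the sketch below (the format is invented for illustration):

def cite(model_fingerprint, response_token):
    # Hypothetical citation format: the AI version plus the exact response.
    return f"AI:{model_fingerprint}/resp:{response_token[:16]}"

print(cite("a1b2c3d4e5f6", "9f8e7d6c5b4a39281716151413121110"))
# -> AI:a1b2c3d4e5f6/resp:9f8e7d6c5b4a3928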

External audits could be established to assess credibility and certify it with an ‘electronic seal’. Once the quality of an AI can be gauged from its logs and metadata, technologies controlled by terrorists or anti-democratic states could be identified and mitigated, reducing the risk of catastrophe.
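
Such an ‘electronic seal’ could rest on standard cryptographic signing. Here is a minimal sketch using an HMAC from Python’s standard library; a real deployment would likely use public-key signatures, and the auditor key here is hypothetical:

import hashlib
import hmac

AUDITOR_KEY = b"hypothetical-auditor-secret"  # held by the external auditor

def seal(metadata_blob):
    # The auditor's tamper-evident seal over a model's published metadata.
    return hmac.new(AUDITOR_KEY, metadata_blob, hashlib.sha256).hexdigest()

def verify(metadata_blob, claimed_seal):
    # Any holder of the key can check that the metadata was not altered.
    return hmac.compare_digest(seal(metadata_blob), claimed_seal)

blob = b'{"model_id": "example-llm", "audit_score": 0.97}'
print(verify(blob, seal(blob)))  # True; any edit to the blob breaks the seal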

Following this, articles citing a specific AI version could be objectively evaluated. If training or parameterization errors are later found, it would be possible to trace which studies, articles, and applications depend on the affected responses, without compromising the rest.

For example, consider the tweet from a politician who claimed that former President Bolsonaro had prevented World War III during his visit to Ukraine. The post was taken as fact rather than satire, given Putin’s coincidental troop movements at the time. Under a metadata scheme, the post could have been tagged as ‘sarcasm’.

Diagnosis and adjustment of this kind require a mechanism that exists independently of the AI companies. Today that mechanism is monopolized by the companies themselves, meaning they audit themselves: a conflict of interest built into the current system.

Society needs to be educated about the varying quality and inherent risks of AI so that it can be used safely in schools, businesses, governments, and academia. For this discernment to be exercised as clearly and democratically as possible, metadata transparency is vital. We could adopt something similar to movie rating tags: humor, reality, fiction, and so on.
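
A rating scheme of that sort could be as simple as an enumerated tag attached to each response’s metadata. A sketch with invented categories, including the ‘sarcasm’ tag mentioned earlier:

from enum import Enum

class ContentRating(Enum):
    # Hypothetical rating tags, analogous to movie ratings.
    HUMOR = "humor"
    REALITY = "reality"
    FICTION = "fiction"
    SARCASM = "sarcasm"

def label(response_metadata, rating):
    # Attach a rating tag so downstream readers can judge the content.
    return {**response_metadata, "rating": rating.value}

print(label({"response_token": "9f8e7d6c"}, ContentRating.SARCASM))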

Certainly, many mistakes involving machine capabilities and algorithms will surface. Still, an array of businesses and studies will open up to investigate continuous improvement and error reduction, driving the technology’s evolution.

In conclusion, it is crucial to establish international funds that incentivize innovation and research into AI quality, specifically the logs and metadata of different “black boxes”. Over time, we can achieve safety, develop solutions, and learn to discern reality from fiction. The world needs Artificial Intelligence, but it must be secure. In essence, this requirement is no different from what we demand of transportation, household appliances like microwaves, surgeons in hospitals, and the engineers who build our infrastructure. The future of AI must be transparent, accountable, and ultimately safer for all of us.
