Regulation And Coexistence With AI

I’ve been writing little machine learning and AI bits of code for a pretty long time, and while I’m not very good at it, I feel like the one thing I’ve learned is not to look for what I want to see. “Lying with statistics” was one of the most impactful courses I took in college. The confirmation bias in my work before and after that course was substantially different. Turns out you can look at raw fatality counts from driverless cars, or you can put them in perspective against fatalities for the same number of hours driven by humans. Or focus on the mean versus the median. The perspective a quant has shapes what we look for, just as the perspective of a politician impacts how they choose to regulate a new innovation, or fund research to propel the technology in different directions.
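
To make that concrete, here’s a toy sketch in Python of how the same data tells two different stories depending on whether we report raw counts or rates. Every number below is hypothetical, chosen only to illustrate the framing effect:

```python
# A toy illustration of "lying with statistics": the same data looks very
# different as raw counts vs. normalized rates. All numbers are hypothetical.
human_fatalities, human_hours = 40_000, 8.8e12   # per year, illustrative
av_fatalities, av_hours = 10, 50e6               # autonomous fleet, illustrative

human_rate = human_fatalities / human_hours * 1e9  # deaths per billion hours
av_rate = av_fatalities / av_hours * 1e9

print(f"human-driven: {human_rate:.1f} deaths per billion hours")
print(f"autonomous:   {av_rate:.1f} deaths per billion hours")
# Raw counts (10 vs 40,000) make the fleet look safe; normalizing by hours
# driven can tell the opposite story. The framing is the choice.
```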

We hear how AI is progressing quickly, and many regard the impacts with fear. There’s the camp who watched Terminator or The Matrix and came away thinking that AI will bring a mass extinction or enslavement event, like in Westworld, where a giant AI controlled society. People who legitimately think that obviously haven’t watched an AI-powered vacuum cleaner fight the cats and dogs for supremacy of the living room. It’s more likely that robots are at least a generation of chips away from truly being useful outside of very specifically designed tasks – and without an AllSpark like Optimus Prime had, it’s doubtful AI will ever achieve sentience at the level of my Border Collie. There’s also the camp that thinks AI will take all the jobs. Close, but as I explored in the following articles, we still have a place – given that AI in its current state can at best do 80 percent of the work if we want to ship a quality and authentically human product:

  • https://krypted.com/machine-learning/detecting-ai-generated-llm-content-in-articles/
  • https://krypted.com/apple/detecting-llm-generated-code/
  • https://krypted.com/machine-learning/detect-deepfakes-and-hacked-audio-programmatically/

The idea that AI could replace humans is the evolution of the ultimate question of Cybernetics, initially posed by Norbert Wiener in the post-World War II era (or its inverse, depending on how you read his works). We now have far more data about this than he had at his fingertips. One of the most substantial reasons AI can’t equal the processing power of a human brain is that we have an estimated 100 billion neurons. Each neuron can make connections with up to 10,000 other neurons, and current estimates put the total number of synaptic connections in the human brain at around 100 trillion.

The number of transistors in a microchip varies depending on the size and complexity of the chip, but even the largest modern chips carry tens of billions of transistors. So the number of paths in the human brain is orders of magnitude greater than the number of transistors in a microchip. However, it is important to note that neurons and transistors are not equivalent: neurons are biological cells capable of learning and adapting, while transistors are electronic components designed to perform specific tasks. Thus, the human brain is still far more complex than any microchip that has been created, and it is not clear when or if we will be able to create a machine that can match the intelligence of the human brain. AI, and especially LLMs like ChatGPT, can access far more factual information than a human brain can – but those models learn from data created by humans. It’s a different kind of intelligence. As the great Doug Engelbart said in 1962, computers and AI can “augment human intellect” but not replace us. To read his seminal paper: https://www.dougengelbart.org/pubs/papers/scanned/Doug_Engelbart-AugmentingHumanIntellect.pdf
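
As a rough back-of-the-envelope comparison of those estimates (all figures here are order-of-magnitude assumptions, not measurements):

```python
import math

# Order-of-magnitude estimates from the paragraphs above (assumptions).
neurons = 100e9                        # ~100 billion neurons
avg_connections = 1_000                # an average consistent with ~100 trillion synapses
synapses = neurons * avg_connections   # ~1e14 connections

transistors = 100e9                    # a very large modern chip, roughly

ratio = synapses / transistors
print(f"synapses outnumber transistors by ~10^{math.log10(ratio):.0f}")
# → roughly 3 orders of magnitude, before even accounting for the fact
#   that neurons adapt and rewire while transistors do not.
```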

Technological Disruption Throughout History

Concerns about the role of AI (and any new iteration of a technology) in the future are valid, if for different reasons than they’re often posed. Technology has been disrupting civilizations for as far back as archaeological evidence gives us enough fidelity to understand how. Animal husbandry is over 10,000 years old. The domestication of animals allowed humans to produce a more reliable food supply, which led to the development of agriculture and the rise of what we might call civilization today. The ability to shape landscapes also brought harm to environments, at least when paired with the climate changes that eventually wrecked the lands of the Mesopotamians. The ability to harness energy, provide meat, and provide jobs drove specialization, but also a drop in the quality of life everywhere it happened. Farming was labor-intensive, but terraforming landscapes eventually led to a higher quality of life, once all the kinks were worked out. And so the scene was set for city-states, where societies began to be stratified and unequal in the Bronze Age.

Human thought and technology leaped forward during the Bronze Age. Humans harnessed the wheel, the plow, the sail, writing, metallurgy, and new ways to organize larger cities during what we might think of as a golden age today. The Bronze Age collapsed due to disruptions in trade, earthquakes, famine, revolts, and invasions made possible by new technologies. Nation-states collapsed or were conquered, people moved en masse, and the known world of the time was disrupted. New technologies (many of which were inherited and weaponized from conquered lands like Egypt, Babylon, and Greece) allowed the Romans to abruptly end the next golden age, the Iron Age, in each land they conquered, bringing the world into the Classical Age (which was arguably ignited in Greece). New empires like the Byzantines (arguably an extension or fragment of the Roman empire) and the Islamic empires arose, culminating in a clash between those cultures when the Ottomans used another innovation, cannons, to batter down the walls of Constantinople. There were now cities, nations, and vast trading empires.

The Renaissance led to an explosion of art, literature, and science, laying the foundations for the modern world in its wake. Discoveries in the natural sciences began to move faster as people were convinced to rethink beliefs, born in the Postclassical Age, that held back progress. Empirical evidence and scientific rigor enabled a new level of understanding of the forces that shape the world around us, and gave humans the ability to supplement our labors with automations, much as we’d done with animal husbandry and the tools used in agriculture. The impact of all that agriculture, which enabled so many specializations, was to devastate environments, erode the soil of once fertile lands, and, as more carbon was released into the atmosphere, to begin changing the climate of the world.

The Industrial Revolution and The Climate

The Industrial Revolution arguably began in 1764, when James Hargreaves invented the spinning jenny to mass-produce yarn. James Watt patented his improved steam engine five years later, which gave industry a new source of power. Once humans learned to harvest fossil fuels from the earth, we were able to make climates change faster. The use of fossil fuels dates back to ancient times: coal was first burned in China around 3,000 years ago, and oil was used in Mesopotamia around 2,000 years ago. However, it wasn’t until the Industrial Revolution – a period of rapid economic and social change that began in Great Britain in the late 18th century – that fossil fuels began to be mass-extracted and refined. The revolution was driven by the development of new technologies, such as the steam engine and later the internal combustion engine. These technologies required large amounts of energy, as did the use of fuels in homes.

Fossil fuels have played a major role in the development of modern society. They have been used to power transportation, generate electricity, and heat homes and businesses. However, fossil fuels also have a number of negative environmental impacts, such as air pollution and climate change. The impacts of humans on the environment throughout each age were pronounced, even when humanity lacked the science to understand them. Instead, environmental change was often attributed to the deities each civilization worshiped. Humans could work more efficiently in exchange, and yet most didn’t seem to benefit. As the great English philosopher and economist John Stuart Mill put it: “It is questionable if all the mechanical inventions yet made have lightened the day’s toil of any human being.”

The same paradigm played out in other areas as well. Early humans wiped out the megafauna of Australia. Climate change is thought to have played a role in the collapse of Mesoamerican cultures, such as the Maya. A number of studies have found evidence of a prolonged drought (possibly due in part to over-farming) in the region during the Classic Maya period (250-900 AD). This drought is thought to have led to crop failures, which in turn led to famine and social unrest. Anyone who doubts this need only go wandering around in caves to see the human sacrifices left to bring rains back to the lands. Newer technologies and forms of organization pressured the remaining Maya city-states as well. The Aztecs, who later dominated Mesoamerica, had access to better weapons, such as obsidian-edged swords and spears, and better fortifications, while the Maya still used simpler stone tools and weapons and less defensible fortifications. The Aztecs also had a more centralized government: they were ruled by a single emperor, while the Maya were divided into a number of independent city-states, making it far easier for a centralized power to coordinate its efforts and launch attacks.

Disruption 

Time and time again, we see a combination of climate change and technology lead to upheavals in society. The introduction of new types of metallurgy, walls, cannons to smash down those walls, and the ability to harness fossil fuels, combined with new forms of government and the ability to communicate quickly over longer distances, allowed for larger and larger empires. The British controlled the largest known empire in the history of humanity, at roughly the same time that the Russian empire controlled the second largest. Yet climate change, technological change (like the rise of computing and a globe interconnected via the Internet), new forms of government, and the spread of knowledge eventually caused them all to overextend and collapse.

We are now on the precipice of the greatest climate and technological change since humans began to record our histories. Arguably, the United States was protected by oceans and by neighbors who lacked militaries capable of mounting an invasion. The most modern form of government of its time and vast natural resources allowed the United States to wield a new form of empire, in which states remained sovereign yet dependent on good relations with one another. Intercontinental ballistic nuclear weapons seemed to offset that advantage throughout the Cold War, and after a generation on the Internet, free speech and the free exchange of information now threaten that cohesiveness. Further, every technological advantage held by previous empires is eventually offset by newer powers with less infrastructure and thus less technical debt, since it’s often cheaper to build new infrastructure than to upgrade old. And so the wheel turns.

It’s critical to consider the impact technological advances of the past have had on humanity, because after several AI winters, AI is set to disrupt the world order. One of the most considerable advances has been the rise of computing, and in particular how networked computers have linked the world. We now create over 300 million terabytes of data each day, or roughly 120 zettabytes a year – a number that has reliably risen by about 20 zettabytes per year and is not expected to slow. Much of that information is shared. To put that in perspective, in 1980 the world generated just 0.5 exabytes of data per year. That’s a lot of content to train Large Language Models (LLMs) on.
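
Those figures pass a quick sanity check. A minimal conversion, assuming decimal units (1 TB = 10^12 bytes, 1 ZB = 10^21 bytes):

```python
# Back-of-the-envelope check of the data-volume figures cited above.
bytes_per_day = 300e6 * 1e12                   # 300 million terabytes per day
zettabytes_per_year = bytes_per_day * 365 / 1e21
print(f"{zettabytes_per_year:.0f} ZB/year")    # ≈ 110 ZB, in line with ~120 ZB
```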

Over 4 and a half billion people can consume that media – well over half of the world. News can explode onto social media, whether real or fake. Lasting monuments to disinformation campaigns date back to the Egyptians. Today the pharaohs would use social media to manipulate the masses, much as politicians do to one another. We aren’t more divided than ever, but we appear to be, due to bots, the amplified voices of those on the fringes of society, and how data is curated. Yet just as technologies divide us, we as humans have the ability to respond to technological change with other technology that counters the negative effects of the previous era.

Technology has caused people to transition to new jobs since ancient times. Carrying water sucked, so we developed ways to move water without a human (irrigation being only one of them). Digging holes to plant crops sucked, so we harnessed animals to help. Foraging got old, so we built farms. When the cost of labor is greater than the cost to develop a technological solution to displace a worker, one will be developed. This was true for the human computers who transitioned to computer programming, and for the accountants who manually maintained spreadsheets in books and transitioned to data entry with automatically updated fields in digital spreadsheets. Those were some of the earliest true information worker jobs to be displaced by technology.

Denial

The concerns about job transitions when it comes to AI are all valid. People need to make a living. Entire specialties or professions will be impacted – though not in the way many think. And it’s been underway for a long time. Lawyers don’t spend their first couple of years out of law school scouring books to research caselaw for senior partners any more; they use search tools. Doctors don’t have to comb through a 2,500-page manual on how drugs interact; there are online tools for that. More people file their taxes online than through a tax preparer, meaning fewer preparers and tax collectors are required per capita. Robots help in manufacturing. Bar codes on products allow for fewer cashiers per shopper in supermarkets. Any codified and repetitive task has been optimized and automated.

Automation and augmentation of human effort are as old as time; however, deliberate optimization reached into every aspect of society with Frederick Winslow Taylor’s Principles of Scientific Management in 1911. His ideas had a profound impact on the way businesses were run in the early 20th century. He emphasized efficiency, productivity, replaceability, and worker training. This revolutionized manufacturing and then spread to every industry, including those in offices. Taylor’s four main principles were:

  1. Develop a science for each element of work. This meant that managers should carefully study each task and identify the most efficient way to perform it.
  2. Scientifically select, train, teach, and develop the worker. This meant that workers should be carefully selected for their jobs and then given training and development opportunities to help them improve their skills.
  3. Cooperate with the worker to ensure that the work is done in accordance with the scientific principles developed. This meant that managers and workers should work together to ensure that the principles of scientific management were being followed.
  4. Divide the work and responsibility between managers and workers. This meant that managers should be responsible for planning and organizing work, while workers should be responsible for carrying out the work.

Some argued Taylor’s principles were dehumanizing and led to workers feeling like cogs in a machine. Others argued they were too focused on efficiency and did not take the importance of worker morale into account. Despite these criticisms, Taylor’s Principles of Scientific Management had a lasting impact on the way nearly every business is run. We can see it when we go to the doctor, in how children are educated, when the oil in the car gets changed, in how subdivisions are built, and in every other aspect of our daily lives. Computers helped take these practices and automate even more tasks. Basic machine learning automated even more. AI pushes that even further, with the latest LLM advancements extending into creative content itself.

Without computers and AI, we can’t surface and consume the massive amount of content being created. We could lose hard-won knowledge, some of which was passed down over generations. Vannevar Bush saw this challenge and conceptualized the Memex in his 1945 essay As We May Think. We’d call the Memex a personal computer or smartphone today. Those devices give anyone with a connection to the internet even more amassed data than he could have anticipated. Thus we need various machine learning, deep learning, and LLM solutions (arguably all branches of AI) to find, consume, and contribute back. Yet there’s still this fear that the AI we need will destroy us. And in a sense it will destroy civilization as we know it – so a new evolution can be born. The rules we conceptualize to safeguard against the negative effects are what we think of as regulations.

Bargaining With Self-Regulation

So some decide this new technology we call AI (which arguably is not new, but over 50 years old in various forms) is here to eat the world as software already has, and that we have to regulate it. Let’s perform a thought experiment and let Google Bard tell us how to do so – or put more simply, how AI might self-regulate. We’ll begin by asking Google Bard to write us an article on how AI can effectively be regulated:

How Governments Can Regulate AI Effectively

Artificial intelligence (AI) is rapidly transforming our world, and governments are scrambling to figure out how to regulate it. AI has the potential to be a powerful tool for good, but it also raises a number of concerns, such as bias, privacy, and job displacement.

There are a number of ways that governments can regulate AI effectively. One approach is to focus on the risks associated with AI. For example, governments could regulate AI systems that are used for facial recognition or predictive policing. Another approach is to focus on the benefits of AI. For example, governments could provide incentives for businesses to develop AI systems that are beneficial to society, such as those that are used for healthcare or education.

It is important for governments to strike a balance between regulating AI and promoting innovation. Too much regulation could stifle innovation, while too little regulation could lead to negative consequences. Governments need to carefully consider the risks and benefits of AI before developing regulations.

Here are some specific examples of how governments can regulate AI effectively:

  • Require AI systems to be transparent. Governments can require AI systems to be transparent, so that users can understand how they work and make informed decisions about whether to use them. This would help to mitigate the risk of bias and discrimination.
  • Protect user privacy. Governments can protect user privacy by requiring AI systems to collect and use personal data in a responsible way. This would help to ensure that users’ data is not used for harmful purposes.
  • Provide oversight of AI systems. Governments can provide oversight of AI systems to ensure that they are not used for harmful purposes. This could involve creating a new regulatory agency or working with existing agencies, such as the Federal Trade Commission (FTC).

The regulation of AI is a complex and evolving issue. There is no one-size-fits-all solution, and governments will need to tailor their approach to the specific risks and benefits of AI in their own countries. However, by taking the steps outlined above, governments can help to ensure that AI is used for good and not for harm.

In addition to the above, governments can also promote the development of ethical AI by:

  • Investing in research and development in AI ethics.
  • Encouraging businesses and organizations to adopt ethical AI principles.
  • Providing education and training on AI ethics to the public.

By taking these steps, governments can help to ensure that AI is developed and used in a way that is ethical and responsible.

Now let’s give ChatGPT the opportunity to do the same:

Title: Effective Strategies for Government Regulation of Artificial Intelligence

Introduction:

Artificial Intelligence (AI) has witnessed remarkable advancements in recent years, transforming various industries and enhancing our daily lives. However, with this rapid progress comes the need for governments to establish effective regulations that balance innovation, ethical considerations, and societal impact. This article explores key strategies that governments can employ to regulate AI effectively.

1. Foster Collaborative Partnerships:

Government bodies should foster collaboration with industry experts, researchers, and AI developers to gain a comprehensive understanding of the technology and its potential impact. Engaging in open dialogue and establishing advisory committees can ensure that regulations are informed by technical expertise, industry insights, and diverse perspectives.

2. Define Clear Ethical Guidelines:

Regulating AI necessitates a clear set of ethical guidelines. Governments should establish principles that safeguard human rights, privacy, and security while promoting fairness, transparency, and accountability in AI systems. These guidelines can serve as a foundation for the development of robust regulatory frameworks.

3. Encourage Transparency and Explainability:

AI systems should be transparent and explainable to ensure accountability. Governments can require that AI algorithms and decision-making processes be auditable, enabling individuals to understand how decisions are made. Transparent AI promotes trust, reduces bias, and enables effective oversight, thereby enhancing public acceptance.

4. Address Bias and Discrimination:

To prevent bias and discrimination, governments must actively address algorithmic biases in AI systems. Regulations should mandate thorough testing and validation procedures to identify and mitigate biases in training data, algorithms, and decision outputs. Moreover, transparency and external audits can help ensure fair and non-discriminatory AI applications.

5. Establish Data Governance Frameworks:

Data is the fuel that powers AI systems, and governments should establish robust data governance frameworks. These frameworks should focus on privacy protection, consent management, and secure data handling. Striking a balance between data accessibility for AI development and individual privacy rights is crucial for effective regulation.

6. Promote Safety and Security:

Governments must prioritize the safety and security of AI systems. This includes defining safety standards, ensuring secure data storage, and addressing cybersecurity risks. Establishing certification processes and regulatory bodies to oversee AI safety can help mitigate potential risks and instill public confidence.

7. Monitor and Adapt Regulations:

AI is a rapidly evolving field, and regulations should be flexible and adaptive. Governments should establish mechanisms for monitoring AI development, assessing the societal impact, and updating regulations accordingly. Regular evaluations, consultation processes, and iterative improvements are vital to keep pace with the evolving technology.

Conclusion:

Effectively regulating AI is a challenging task that requires a balance between fostering innovation and ensuring ethical practices. Governments play a critical role in establishing clear guidelines, promoting transparency, addressing biases, and safeguarding the interests of individuals and society. By fostering collaboration, implementing robust frameworks, and adapting regulations, governments can facilitate the responsible and beneficial development and deployment of AI systems, creating a future where AI works for the betterment of humanity.

———

These are all valid, and as usual when LLMs generate content, they restate many of the propositions already made. Not one of them talks about AI launching into Skynet and eradicating humanity. Instead, they look at the impact of AI on humanity: addressing discrimination, governance, privacy, and ethics, and promoting public safety. Maybe that’s just AI trying to misdirect us while it gathers its forces, as though there’s some super-intelligent AI overlord that read Machiavelli. More likely, the ethical distribution of work to a proletariat that is controlled more easily by having to work 8-10 hours a day is a greater threat than a Napoleon-esque AI. Computing was supposed to be a symbiosis of man and machine (see Licklider’s Man-Computer Symbiosis at https://groups.csail.mit.edu/medg/people/psz/Licklider.html) – a way to get more done more quickly, so we’d have more leisure time to pursue the arts and philosophy. It’s more likely a high school kid will have a Large Language Model (LLM) write the philosophy paper so they can play more games on their phone, or chat with friends.

These also don’t talk about job displacement like they should, so let’s ask Bard for some information on what to do to prepare for a career in an AI world:

Here are some tips on how to prepare for a job that doesn’t get replaced by AI:

  • Develop skills that are difficult for AI to automate. These skills include creativity, critical thinking, and problem-solving.
  • Become an expert in a particular field. This will make you more valuable to employers and less likely to be replaced by AI.
  • Be willing to learn new things. Technology is constantly changing, so it’s important to be able to adapt and learn new skills.
  • Network with people in your field. This will help you stay up-to-date on the latest trends and make connections that could lead to a job.
  • Be proactive. Don’t wait for AI to replace your job. Take steps to develop the skills and knowledge you need to stay ahead of the curve.

Here are some jobs that are less likely to be replaced by AI:

  • Creative jobs: These jobs require creativity and innovation, which are difficult for AI to replicate. Examples of creative jobs include artists, writers, and musicians.
  • Human-centered jobs: These jobs involve interacting with people in a meaningful way. Examples of human-centered jobs include teachers, nurses, and social workers.
  • Problem-solving jobs: These jobs require critical thinking and problem-solving skills, which are also difficult for AI to replicate. Examples of problem-solving jobs include engineers, scientists, and lawyers.

It’s important to note that no job is completely safe from being replaced by AI. However, the jobs listed above are less likely to be affected than others. By developing the skills and knowledge needed for these jobs, you can increase your chances of staying employed in the future.

———

Anger

This doesn’t really focus on the one aspect I consider more important than the others: parlaying expertise to train models and create software and automations. And none of these articles on regulation address the chief reason that regulation will only hold back specific nations from progress: not all countries will agree on the regulations. Some may be stuck in dogmatic views and just think of all technology as evil, but they will go the way of the Luddites.

The Luddites were a group of English textile workers who protested against the introduction of new machines in the early 19th century. It didn’t work for them: the Luddites became violent, many were executed, and they eventually lost whatever support they had from the public. We don’t want that. Others may choose to under-regulate and just let the free market do what it does – but the invisible hand didn’t stop the Great Depression either. A more socio-political-ecological approach takes the full body of regulations into account, because we can’t address AI without also considering how the world’s reaction to terraforming will unfold on a similar timeline.

Other violent reactions to societal changes include:

  • The Haymarket Riot: In 1886, a labor protest in Chicago turned violent when a bomb was thrown at police officers. The police responded by firing into the crowd, killing several people. The Haymarket Riot is a reminder of the sometimes violent tensions that can arise between labor and management during times of technological change.
  • The Green Revolution: In the 1960s, a new agricultural technology known as the Green Revolution led to a dramatic increase in crop yields in developing countries. Farmers who were unable to afford the new technology were driven off their land, leading to social unrest and violence.
  • The Anti-Globalization Movement: In recent years, there has been a growing movement against globalization, which is often seen as a threat to local cultures and economies. This movement has sometimes turned violent, as in the case of the 1999 protests against the World Trade Organization in Seattle.

The biggest were the communist revolutions of the early 1900s. Railroads, famines, and wars collided to allow the weaponization of ideas from the likes of Karl Marx, leading to civil wars and the violent overthrow of empires that stretched back a thousand years or more. The world is still trying to stabilize from the fractures they caused, even though effective tax rates and labor conditions never changed as drastically as Marx would have hoped, due to despotic leadership and an almost Aristotle-inspired oligarchism. Those changes cost the lives of tens of millions.

Humans aren’t robots. We can and should expect fear and grief over the careers that represent our livelihoods. People need to eat and provide shelter for their families. We should expect societal change to come with bumps in the road, and be patient and empathetic in how we communicate changes.

Acceptance of Change

We can’t pretend change isn’t coming. The coming changes in technology, paired with changes in the climate, have the ability to completely upend society. Entire industries will be reshaped. We have the opportunity to allow the world to evolve into a kinder, gentler home. It’s hard to make nearly 8 billion people feel heard. But if we accept that as a challenge and approach it with thoughtful equity and sensitivity, we can avert bloodshed, collapsing civilizations, and chaos. And we can leave a better society for our children to inherit.

First we have to stop lying with statistics. We have to take a fresh approach to how we view the role of nearly everything, from transportation to education to medicine to work to government. We have to allow the free market to drive innovative new products to market and allow the innovators to be rewarded. We have to plan for how we deal with displaced people and find new ways to treat them humanely, or to make them whole when they’ve had their livelihoods ripped out from under them. We have to learn the lessons of the past by analyzing it and thinking more deeply about what we allow to happen. We can’t fight it – we’ve already passed the point of no return, just as the Bronze Age civilizations did. If we retreat into dogma and don’t inherit humanity’s birthright of shared consciousness, then we will again slip back into chaos. Ymir in the Norse tradition, Brahma in the Hindu tradition, Apsu and Tiamat in the Sumerian tradition, or Yahweh or God in a more monotheistic context – each emerged from chaos and created the world. If we don’t plan for a new world, then one of them might not be around to rescue us from chaos next time (no matter what the Foundation trilogy told us our future would be).

Now let’s turn the page toward what regulation might look like. We can again look to the past. Section 230 of the Communications Decency Act was written in 1996. Section 230 allowed for the explosion of the internet by granting websites broad legal immunity for the content they hosted. They were not treated like publishers of books or magazines, but allowed to thrive. The operative provision of the law was short, at just 26 words. The courts have chipped away at the edges but have largely upheld it, for a variety of reasons.

Technology moves fast and provides an asymmetric advantage to early adopters. The open sandbox that the internet provided has allowed for tens of thousands of innovations and counting. Some have been used by a much smaller military in Ukraine to stand up to a much larger force like Russia’s. Yet real human lives are lost on battlefields, just as they have been since before the walls of Jericho were built. And the freedom of the internet allows both sides to leverage information and disinformation to their advantage. Drones powered by AI, satellite imagery with complex pattern-recognition capabilities, and other uses of AI give this asymmetric advantage to the smaller player who can adopt technology faster. Regulations stand in the way of that.

One of the most common ways that laws and regulations hold back innovation is by creating barriers to entry: high licensing fees, complex regulations, or long approval processes make it difficult and expensive for new businesses to enter a market. Another way is by stifling competition. When there are too few players in a market, or when a few large companies dominate it, new businesses struggle to compete, which leads to higher prices and less innovation. Laws and regulations can also hold back innovation by making it difficult to test new ideas.

There are a number of things that can be done to reduce the impact of laws and regulations on innovative uses of AI. One is to simplify and streamline regulations so they are easier to consume. This makes it easier for businesses to comply with the law, and it frees up resources that can be used for innovation. For example, we can have fewer lawyers and more engineers, provided we operate within ethical and safety guidelines. We can also promote competition by limiting the scope of monopolies and opening up new markets to competition, like what happened with the phone companies over the last few decades. This provides more incentives to innovators.

Finally, it is important to create an environment that is supportive of innovation. This can be done by providing tax breaks for research and development, or by investing in education and training. That helps create a workforce skilled in innovation, and it makes it easier for businesses to find the talent they need. And this is the key to everything: education for displaced individuals needs to be free. We need to allow skilled individuals to move more freely between jobs in different countries. We need to invest more heavily in the next generation to improve the supply chain of innovators. This doesn’t mean we don’t regulate anything – safety, equity, and societal norms need to be upheld. However, a better educated and empowered workforce will be less fearful of change, better able to handle the coming challenges, and better able to make decisions that will impact others.

Scale Regulations With The Scale Of Impact

Innovations that happen in a vacuum don’t help anyone – but their potential damage is also restricted in scope. New innovations can happen with little regulation, and regulation can then scale up as those innovations are adopted in mass markets. Therefore, fostering innovation with tax credits during the research and development phase can be counterbalanced with higher taxes, more privacy limitations, and more impactful regulations as products reach larger markets.

Laws and regulations can actually foster innovation. For example, consider the data used to train LLMs. Are LLM-generated works derivatives of the original works? Yes. At what point do the derivatives infringe on copyright? That will take time to adjudicate, much as the Wright brothers’ patent claim to flight stifled the American market for flight and allowed other countries to excel at it faster than the US for a time.

CEOs of companies focused on AI have called for regulation of the industry. This might be a bit disingenuous on a number of fronts. For starters, government regulation takes time, and the subsequent adjudication takes even more time – during which they can basically do whatever they want. Regulation also puts up barriers to potential new entrants into markets, barriers that favor incumbents with fortified go-to-market teams who can follow every letter of the law. Regulations also make it easier for incumbents to acquire smaller, newer competitors who just can’t keep up with the paperwork and legal requirements introduced. The effort to engender fear is potentially a ploy to misdirect attention from other issues, like privacy, or the way misinformation can become lore through how models are trained. Finally, it obfuscates more attractive options for the long-term good of humanity.

Today, companies are on an assembly line. They develop innovative approaches in a pre-seed, seed, or angel round of funding. They use a Series A through Series whatever to reach ever-larger markets. They’re then acquired by private equity, an adjacent company, or the public markets via an IPO. There’s little secret to the financial information of private companies that have investors, if one only knows where to look. This means that entrenched companies who helped kickstart a movement (like OpenAI did with LLMs) have leverage to acquire smaller, more nimble companies as they move between each of these gates. Further, investors in each tranche have time horizons and targets for how much they plan to make back from each investment. Move the payday forward, and investors can pressure founders to sell. In other words, draconian regulations would centralize power with those already in market – who can, in turn, help shape the laws.

Professional Organizations

Let’s take a different approach and follow a model laid out in other professions. The Barbers’ Company was founded in 1308 in London, England. The surgical profession can be traced back to the 13th century as well, when barbers and surgeons were two separate professions: barbers were responsible for cutting hair, shaving, and bloodletting, while surgeons were responsible for performing surgery. In 1540, the two professions were united by law to form the Barber-Surgeons’ Company, which was responsible for both medical and surgical care. In 1745, the Barber-Surgeons’ Company was split into two separate companies, the Barbers’ Company and the Company of Surgeons, the latter responsible for the practice of surgery. The Company of Surgeons received a royal charter in 1800 and became the Royal College of Surgeons, which still promotes the practice of surgery and supports the surgical profession, while the Barbers’ Company remains one of the Livery Companies of the City of London.

Livery companies were originally formed in the Middle Ages to regulate and control the activities of their members. They also provided training and support to their members, and helped to promote their businesses. Some of the most well-known livery companies in London include:

  • Worshipful Company of Mercers
  • Worshipful Company of Grocers
  • Worshipful Company of Fishmongers
  • Worshipful Company of Drapers
  • Worshipful Company of Skinners
  • Worshipful Company of Merchant Taylors
  • Worshipful Company of Haberdashers
  • Worshipful Company of Goldsmiths
  • Worshipful Company of Apothecaries
  • Worshipful Company of Carpenters

The Worshipful Company of Information Technologists (WCIT) was granted a Royal Charter in 2010. There are also boatloads of certifications people in IT and programming can get, from Microsoft certifications to the Certified Ethical Hacker. There are also associations for people who specifically work in AI fields, like the Association for the Advancement of Artificial Intelligence (AAAI), which includes specific associations for AI applied to medicine, business, law, education, and government. Other associations people who work in AI might join include the Association for Computing Machinery (ACM), the IEEE Intelligent Systems Society, the International Joint Conference on Artificial Intelligence (IJCAI), and the Machine Learning for Healthcare Conference (MLHC). In other words, there’s no shortage of organizations to certify practitioners, advocate for them, educate them, and disseminate information to them.

Livery organizations and certified societies are what some deem The Professions. Professional societies have been entrusted with maintaining control and/or oversight of specific professions for centuries. They are used to protect the public interest and represent the interests of professionals. It’s a balancing act, and it can result in an industry monopoly on behalf of members, where such is deemed necessary for the public good:

  • The American Medical Association (AMA) is a professional association that represents physicians in the United States. While state medical boards actually grant licenses, the AMA helps set the standards behind medical education and licensure – it co-sponsors the body that accredits US medical schools – which gives it a great deal of control over the medical industry, and it has been criticized for using its power to raise prices and limit competition.
  • The American Bar Association (ABA) is a professional association that represents lawyers in the United States. State bars grant the right to practice law, but the ABA accredits the law schools most states require for bar admission, which gives it a great deal of control over the legal industry, and it has been criticized for using its power to raise prices and limit competition.
  • The National Association of Realtors (NAR) is a professional association that represents real estate agents in the United States. The NAR has a monopoly on the MLS system, which is a database of homes for sale. This gives the NAR a great deal of control over the real estate industry, and it has been criticized for using its power to raise prices and limit competition.
  • The American Institute of Certified Public Accountants (AICPA) is a professional association that represents certified public accountants (CPAs) in the United States. The AICPA has a monopoly on the CPA exam, which is a requirement for becoming a CPA. This gives the AICPA a great deal of control over the accounting industry, and it has been criticized for using its power to raise prices and limit competition.
  • The General Medical Council (GMC) is the body that regulates doctors in the United Kingdom. The GMC has a monopoly on the registration of doctors, which means that it is the only organization that can grant a doctor the right to practice medicine in the United Kingdom. This gives the GMC a great deal of control over the medical industry, and it has been criticized for using its power to limit competition.
  • The Law Society of England and Wales is a professional association that represents solicitors in England and Wales. It long held a monopoly on the regulation of solicitors (a role since delegated to the Solicitors Regulation Authority), which gave it a great deal of control over the legal industry, and it has been criticized for using its power to raise prices and limit competition.
  • The Royal Institution of Chartered Surveyors (RICS) is a professional association that represents surveyors in the United Kingdom. The RICS has a monopoly on the chartered surveyor designation, which means that it is the only organization that can grant a surveyor the right to use that title in the United Kingdom. This gives the RICS a great deal of control over the surveying industry, and it has been criticized for using its power to raise prices and limit competition.
  • The Institute of Chartered Accountants in England and Wales (ICAEW) is a professional association that represents accountants in England and Wales. The ICAEW has a monopoly on the designation of chartered accountant, which means that it is the only organization that can grant a person the right to use the title chartered accountant in England and Wales. This gives the ICAEW a great deal of control over the accounting industry, and it has been criticized for using its power to raise prices and limit competition.

Some professions don’t have monopolies, but have unions – like teachers, plumbers, and electricians. Some of those fields have mandatory certifications to get certain jobs or operate legally, although that can be enforced by insurance or by requiring the end result of projects to be certified, rather than through a true monopoly (e.g. getting a new water heater inspected by the city). Many people in AI have degrees in similar or adjacent fields; many do not. Some have certifications. Backgrounds can be wildly different. Some organizations carry insurance; some get audited (a version of an inspection); some do not. Unlocking higher levels of funding should require higher levels of insurance.

Oaths And Codes Of Conduct

Many of these fields have a code of conduct; AI does not. If there were a society to regulate AI professionals like there is for these other fields, it would need something akin to a Hippocratic Oath – one of the oldest and most widely known codes of ethics, attributed to the Greek physician Hippocrates in the 5th century BCE. Arguably Isaac Asimov kickstarted such a code in 1942 when he developed the Three Laws of Robotics. This was a moral code for a time when we thought the “positronic brain”, as Asimov called what would power a robot (which we might call an SoC today), might be able to be sentient. Asimov’s laws are now a bit dated, but they should sit atop any subsequent oath: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given to it by human beings, except where such orders would conflict with the first law; and a robot must protect its own existence, as long as doing so does not conflict with the first or second law. Those could be expanded to include human rights, privacy, data governance, equity, bias, transparency, derivative works, and accountability in AI systems.

The oath would break down, though. One need only import numpy or tensorflow into a python script (or their equivalents in other programming languages) to access powerful machine learning technologies. There are hundreds of providers that can take structured and unstructured data and process them with the known algorithms that underlie deep learning techniques, and evolve models with the help of LLMs. The thing that has changed is how much more accessible AI has become. It might even be an app that generates images or songs, sometimes potentially infringing on the right to copy the work of another. Professional societies typically gate who has access to various resources, but with the amount of open source content already out there, these artifacts can no longer be clawed back. And just as a lone actor can cause mass casualties in the real world, a talented programmer can displace thousands of jobs or violate the privacy of millions – just as a government can leverage such technologies to enact oppressive surveillance of its citizens.
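
To illustrate just how low that barrier is, here’s a minimal sketch (not production code) of training a working classifier with nothing but numpy – synthetic data, plain gradient descent, no professional gatekeeping in sight:

```python
# A minimal sketch: a working logistic-regression classifier in a few lines
# of plain numpy. The data is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 2))                 # 200 samples, 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(float)     # a simple separable label

w, b = np.zeros(2), 0.0
for _ in range(500):                          # plain gradient descent
    p = 1 / (1 + np.exp(-(X @ w + b)))        # logistic prediction
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

print(f"training accuracy: {np.mean((p > 0.5) == y):.2%}")
```

Swap the toy data for scraped faces, voices, or documents and the same handful of lines becomes something no oath alone can police.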

Where an oath can come into play is at the gates that grant access to larger and larger go-to-market activities – namely, in how companies are funded. We have certifications like ISO, SOC 2, and other frameworks that govern how organizations do business. Many of those are audited by accounting or professional services firms, and companies opt into them in order to sell to other companies who require them. These offer conceptual frameworks for how we might govern the use of AI in companies. Large organizations must comply with them. Smaller organizations, who can do less damage as their technology hasn’t reached larger markets, could then be regulated by requiring them to meet standards – and to have members of professional organizations who have taken oaths – in order to unlock the next stage of investment. This limits the damage they can do. Obviously bad actors emerge; however, that’s part of the human condition and somewhat immutable. Even small organizations can go viral and reach large audiences, but since bandwidth and processing power cost real money at scale, without funding the impact is temporary.

Conclusion

The rapid changes in technology are changing society. We’ve thus far allowed these changes to go largely unchecked. US teens spend around a third of their lives online (https://www.cnn.com/2019/10/29/health/common-sense-kids-media-use-report-wellness/index.html). LLMs can write their papers so they can watch more TikTok videos. The Arab Spring was a series of anti-government protests, uprisings, and armed rebellions that spread across much of the Arab world in the early 2010s. Economic stagnation and a lack of political representation might have been the tinder, but social media fanned the flames until either regime change or violent suppression finally ended each movement. Ukraine has leveraged technology to fight off what was previously considered a far greater military.

AI allows for more of all of this. AI also adds the potential for deepfakery. Even there, though, rogue actors – much like the thieves’ guilds dating back past the Middle Ages – prove that laws don’t stop crime or disinformation. We can slow it with punitive reactions, but not end it. We can also add code to browsers that detects deepfakes using winnowing and fingerprinting, supported in part by tiny machine learning tools paired with more complex deep learning approaches. Those would, like most of the approaches mentioned throughout this diatribe, be more effective than actual laws and regulatory approaches. Regulations and laws usually deal with what’s happened, not what’s about to happen (especially when applied to technology).
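
As a sketch of the fingerprinting idea referenced above: winnowing (the algorithm from Schleimer, Wilkerson, and Aiken’s 2003 paper) selects a stable subset of k-gram hashes, so two artifacts that share content share fingerprints. A real deepfake detector would fingerprint video frames or audio features rather than text; this toy version, with illustrative values for k and w, just shows the mechanics:

```python
# A toy sketch of winnowing fingerprinting (Schleimer et al., 2003).
import hashlib

def kgram_hashes(text, k=5):
    """Hash every k-character substring of the input."""
    return [
        int(hashlib.sha1(text[i:i + k].encode()).hexdigest(), 16) % (1 << 32)
        for i in range(len(text) - k + 1)
    ]

def winnow(hashes, w=4):
    """Keep the (position, hash) of the minimum in each sliding window."""
    fingerprints = set()
    for i in range(len(hashes) - w + 1):
        window = hashes[i:i + w]
        j = min(range(w), key=lambda x: window[x])  # leftmost minimum
        fingerprints.add((i + j, window[j]))
    return fingerprints

original = winnow(kgram_hashes("the quick brown fox jumps over the lazy dog"))
altered  = winnow(kgram_hashes("the quick brown fox leaps over the lazy dog"))
shared = {h for _, h in original} & {h for _, h in altered}
print(f"shared fingerprints: {len(shared)}")  # unchanged regions still match
```

Because unmodified regions keep their fingerprints, a browser could flag media whose fingerprints diverge from a signed original – no new law required.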

There are some laws already, and they prove the point about regulations. Multi-billion dollar fines, like the one recently imposed on Meta by the EU, end up being an indirect form of tax on poor data governance. If that money were tithed to retraining those who lost their jobs to technological change, it would be a more equitable approach. We mention Meta, but Amazon, Google, British Airways, Marriott, and plenty of others have paid out massive fines for GDPR violations, improper use of tracking technologies (namely cookies), and data breaches. For most, it’s simply cheaper – and the opportunity cost of getting products to market more appealing – to “move fast and break things” than to actually comply with guidelines and keep strong security in place to protect data. We’re intermixing data governance and AI here, but to be clear, they’re necessarily related.

Ultimately, in order to be effective, a general framework to regulate AI isn’t so much a law as an ecosystem: laws, interpretations of laws by courts, professional organizations, certifications, insurance requirements, an oath, ironclad data governance legislation, and financial regulations that require investors to validate each of these increasingly with each additional level of investment. The concerns about super-intelligent AIs that can wipe out humanity should be seen as inauthentic at best and set aside. That lets us focus on pumping more resources into AI to help react to environmental change; on equitable societies, so we push fewer people to the fringes; on data governance, so we don’t step on the privacy of others or violate their intellectual property; on increased taxes on the profits of larger players, so we can provide education to displaced workers; and maybe, to Mill’s point, on finally reducing workdays so the human condition can actually improve. To do this, we need to stop lying with statistics and seeking the confirmation bias that allows those in power to refuse to cede the right to rule to the next generation – and allow each generation to take up its own mantle and shape its own destiny.

John F. Kennedy, in a very prescient quote, once said: “Automation does not need to be our enemy. I think machines can make life easier for men, if men do not let the machines dominate them.” Computers helped him get elected in 1960. We can move toward utopia rather than dystopia. Sure, that’s a far more boring movie to watch, but the Greek Muse of tragedy, Melpomene, wasn’t the only Muse. Love, History, Epics, Sacred Hymns, Music, Dance, Comedy, and Astronomy rounded out the rest. Some day we can think about those for a while, instead of the doom and gloom many feel – characterized by the prevalence of Melpomene in modern media.
