
Who serves whom?

The takeover of artificial intelligence seems to be a done deal. The open questions are: When will machines outperform us? Will they annihilate us? And: Should self-driving cars kill one pregnant woman or two Nobel prize winners? Artificial intelligence is a complex riddle for all sorts of experts. It’s full of magic, mystery, money, mind-boggling techno-ethical paradoxes and sci-fi dilemmas that may or may not affect us in some near or far future. Meanwhile, information technology already shapes our everyday life. Things already go wrong. And no one is responsible. What should we do? What may we hope? What can we know? What is human and who is machine?

Pocket calculators have been beating us at math for a couple of decades now. Bots programmed to influence human dialogue on social media are showing the middle finger to the everyday Turing tests called bot-or-not. Propagandists and marketers running fake accounts are feigning the voices of millions in a calculated attempt to influence political processes. Some machines may still take cauliflower for poodles, owls for apples, and cats for ice cream, but they beat us hands down at speech and face recognition, they beat world champions at chess, Go and poker, and now mostly compete with themselves. There is little doubt that machines will out-simulate and outperform human intelligence at any specific contest. Just like mechanical robots outperform us physically in every way, bots will be bigger, faster, stronger than any natural being. When the rules are set, machines kick our ass.

“It’s about setting the rules,” Kasparov said. “And setting the rules means that you have the perimeter. And as long as a machine can operate in the perimeter knowing what the final goal is, even if this is the only piece of information, that’s enough for machines to reach the level that is impossible for humans to compete.” 1

The point in time at which machines outperform the human brain in totality doesn’t matter. If they beat us at defined disciplines, networked machines will outdo us as individuals and as a species. We could just wait and see what happens. But leaving our future to the experts when we can already see how fast we are losing control is comically lazy.

Technology is made to make our life easier, to serve us. Bots, robots and ultimately the Wizards of Oz that hold their strings are flipping this around. Computers don’t need to become self-conscious, they feed on our cognition. We fuel Facebook with our experiences. We see, think, write, speak and live for Google. Amazon runs the biggest Mechanical Turk that the world has seen—and it calls it just that. The ultimate secret of who we are is not in our hearts, it is in our iPhones. The big five are robbing the data bank with fire trucks.

Some claim that we should treat machines like our children. Treat them well, so they treat us well in return. Since our machines will be smarter than us, we should make sure not to anger them or “inflict[…] existential trauma” on them. 2 Right, let’s not hurt their potential feelings. Is your head already spinning? Wait, it gets better.

What may we hope?

What matters foremost, what matters now, is not what is possible but what we get. Why do we build artificial intelligence? What are machines supposed to do? What are machines for?

  • Do machines serve us as much as we serve those who own them?
  • Should humans serve machines or should they serve us?
  • May we give machines the technical, legal and political power to make decisions in our place, subjecting us to their processes?

Technology can be described as an amplifier or an extension of the human body. The hammer is an extension of the fist, the knife is an extension of our teeth, TVs are exaggerated eyes and ears. Marshall McLuhan claimed that every extension results in an auto-amputation of another part.

“Every extension of mankind, especially technological extensions, has the effect of amputating or modifying some other extension[…] The extension of a technology like the automobile ‘amputates’ the need for a highly developed walking culture, which in turn causes cities and countries to develop in different ways. The telephone extends the voice, but also amputates the art of penmanship gained through regular correspondence. These are a few examples, and almost everything we can think of is subject to similar observations…We have become people who regularly praise all extensions, and minimize all amputations.” 3

The idea that machines might become an equal or a superior is discussed as exciting and scary at the same time. It’s exciting if machines stay extensions and help us; it becomes scary if they turn against us. If we imagine human existence as a biological entity in constant search of equilibrium, both scenarios are disconcerting. Every extension of a body part logically leads to a decreased sensitivity of another. The harder the hammer, the less we feel the nail. As long as this is intended, it’s cool.

“Medical researchers like Hans Selye and Adolphe Jonas hold that all extensions of ourselves, in sickness or in health, are attempts to maintain equilibrium. Any extension of ourselves they regard as ‘autoamputation,’ and they find that the self-amputation power or strategy is resorted to by the body when the perceptual power cannot locate or avoid the cause of irritation. Our language has many expressions that indicate this self-amputation that is imposed by various pressures. We speak of ‘wanting to jump out of my skin’ or of ‘going out of my mind,’ being ‘driven batty’ or ‘flipping my lid.’ And we often create artificial situations that rival the irritations and stresses of real life under controlled conditions of sport and play.” 4

What amputation do we risk with artificial intelligence? Cognition? Thought? Intelligence itself? Freedom?

What should we do?

The debate about when machines will be more intelligent than us and whether they can have feelings is fascinating on many levels. And so is the question of what kind of people a self-driving machine should kill in an either-or situation. Talking science fiction, debating logical paradoxes and ethical trick questions makes for great small talk, promising business plans, lucrative promises, cheap marketing, fantastic hoaxes, fun games, entertaining illustrations, catchy headlines, half knowledge, spectacular cock fights, world record bullshit, and great clickbait.

Without a thorough reflection on the ethical principles of human action, these discussions remain small talk. Now, ethics may irritate the predominantly scientific mind. Natural science looks at what is and tries to describe it in a way that can be reproduced at any point in time. Ethics looks at what should be and tries to make sure that everybody can take part in making it real. Humans are to a large extent defined by nature. But we also have the miraculous power to define reality through normative disciplines like ethics, art, literature, politics, law, and economy. And strangely enough, natural science itself wouldn’t be possible without the human sciences.

Now, acting ethically, you can’t just go and tell a machine whom to kill as if it were a mathematical problem. If you cannot say who will take responsibility for those who die—as a result of an algorithmic calculation, a bug or an unforeseen malfunction—you are in trouble, ethically speaking. Thinking ethically, we’d first need to find a way to decide how much power we want machines to have over our lives:

  • Should civil machines be given the power to calculate whom to kill?
  • Can moral values be measured, weighed, quantified and thus “processed” at all?
  • What are the moral core values these calculations are based on? The greatest good for the greatest number? Duty? Maximized happiness? Economic profitability?
  • Who decides which ethical principles are relevant for human-machine interaction?

Most of us would not feel comfortable following a machine’s calculation over whom to date. Ironically, Facebook might already calculate better whom we should date than our wet bodies high on mad hormones.5 But no matter how statistically well machines would fare against our faulty instincts, most humans feel uneasy putting their freedom, imagined or real, in the electric hands of machines.

“Human beings don’t want to be controlled by machines. And we are increasingly being controlled by machines. We are addicted to our phones, fed information by algorithms we don’t understand, at risk of losing our jobs to robots. This is likely to be the narrative of the next thirty years.” 6

But, hey, no one knows the future. Crazy things may and will happen. Imagine a future where Facebook decides whom we marry because Facebook marriages then have a 45% lower divorce rate than natural marriages. Who will be responsible for the failed marriages and the time lost? Those who trusted the machines? Those who make the machines? No one?

Let’s imagine that it will be scientifically and morally obvious that machines make better political decisions than humans. Who runs those machines that sit in parliament? Who monitors them? And aren’t we ultimately subjecting ourselves to those who build, manage, run and own the machines rather than to the machines themselves? Who decides that machines make better decisions? The people who voted the machines into power? The smarter machines? The market? The lobbyists? A group of programmers on Slack? The machines themselves, autonomously? Who would you like to make such decisions?

As crazy as this may sound, all of this is not Science Fiction. It is happening right now. Machines already filter, sort and choose the information we base our decisions upon. They count our votes. They sort the tasks we spend our time on, they choose the people we talk to and meet. More and more key aspects of our life are decided by information technology. And things go wrong. Machines are made by humans. As long as we make mistakes, machines make mistakes.

When things go wrong, both parties—those who use machines and those who build, manage and own information technology—disclaim responsibility. Users hide behind their lack of power, owners hide behind “the algorithm”. They sell artificial intelligence as a deus ex machina, and when it errs they blame the machine as a mere machine.

The question “Who serves whom?” is not a topic for experts in 2047. It is a key question for all of us, today, right here and now. Whether or not machines can be intelligent is not just technically or scientifically relevant, it is existential.

What can we know?

Intelligence comes from “intellegere”, to understand. Natural intelligence, as opposed to artificial intelligence, requires natural understanding. We “naturally understand” when someone else’s words make sense to us, when we physically recognize what we hear and see, when we feel through language what someone else felt, when we know what other people’s words mean. When we share our feelings, words, sentiments, and positions. Your brain is not a computer.

Currently, machines don’t understand. They don’t sense, feel, recognize, share or mean. They have no intentions, positions or perspectives. They follow fuzzy orders. They receive and match patterns, they process, calculate and simulate. They don’t feel, think, or comprehend, they do not understand, they don’t even know what they are doing. Your computer is not a brain.

How can a being or thing qualify as intelligent when it doesn’t know itself, doesn’t understand others or even realize what it is doing?

Machines are not naturally intelligent, but they already excel at simulating intelligence. They are really good at making us believe that they understand, that they know us, that they comprehend. Well, it is not hard to fool us. Without any manipulation, we readily believe that toasters have feelings, that cars have a personality, that ketchup bottles have intentions. It is in our very nature to project our inner life into the world outside.

And with all these projections, we still do not understand our own minds. Few engineers playing with AI even care to ask what human intelligence may be. Algorithms have become so complex that those who build them do not understand how they work. One could say it’s magic. Or one could say it’s trial and error. But it is a profound mistake to equate a bricolage formula with human intelligence just because both are not understood.

A human-made machine can produce similar results as a human brain without us knowing why. This doesn’t mean that we are equal, or that we have reproduced or surmounted our own intelligence. It means that we like tinkering.

Now, in order to get an idea of what is happening inside the machines, engineers write reverse algorithms that explain themselves. This is cool stuff, but, again, it doesn’t qualify as self-consciousness, and it does not put us in charge of what we built. On the contrary: it forces us to trust something we built without properly understanding it.

One might fantasize and claim that maybe, probably, surely in 30 years machines will understand so much more than we do that they can explain everything. A more passionate advocate of artificial intelligence may counter by trying to put human understanding in doubt claiming that “maybe ‘understanding’ is nothing but an illusion on top of bioelectric processing, calculations, simulations.” What if understanding is an illusion? Well, does it matter?

However you put it, human reality is a mix of fact and fiction, nature and norm. Natural science itself has discovered that there are limits to what can be measured. Literature, art, philosophy, poetry, economy, and history are not fully measurable, and they never will be. Human sciences define us as they describe us. And as long as natural science needs human language to express and discuss its findings, it depends on the human sciences. Those who try to defend an absolutely measurable, pure reality of natural science against the normative influence of the human sciences do not just dissent from modern science’s very own insights about measurability. Math itself is as much human as natural science. There is no pure measurement, there is no measurement without norms, there is no reality without perception, no language without interpretation, there is no ‘thing in itself’.

Whether humans should serve machines or not cannot be decided scientifically, technically or economically. It’s not a factual but a normative question. It doesn’t ask “What is?” but “What should be?” What can be done cannot be discussed independently of what should be done.

“Everything human not only means the generally human in the sense of the characteristics of the human species in contrast to other types of living beings, especially animals, but also comprises the broad view of the variety of the human essence. […] All practical or political decisions which determine the actions of people are normatively determined and exert in their turn a norm-determining effect.” 7

Should we cede power to anyone or anything that lacks a proper understanding of the realm it controls? Only a fool would say that people without understanding should run things. How can you possibly say that it is okay to let machines decide what we do?

With power comes responsibility. Without understanding the potential effects of our actions, without the ability to realize the actual results of our decisions, machines cannot take responsibility for what they do or make us do. Whoever cannot take responsibility should not be given power. Whoever, or whatever, cannot take responsibility for itself shouldn’t be responsible for others.8

Whether or not machines can be fully intelligent is interesting not just as a party topic. It becomes relevant when we debate how much power we should give them. Without any doubt, machines can simulate and amplify specific forms of human intelligence. But the ability to simulate does not suffice. Would you give a person that can merely simulate what you do the responsibility to run your life?

What is human and who is machine?

Simulation of human intelligence is not a replication of real intelligence, and it does not suddenly become real intelligence. Artificial intelligence is by definition not real; artificial can mean man-made as opposed to natural. Up to this day, it means simulated, pretended, fake, feigned intelligence. Artificial intelligence is intelligent like artificial leather is leather; it is self-aware like a pocket calculator knows math. To become self-conscious you need a reference point, you need to feel what you think, you need a body.

History teaches us that no one knows exactly what is possible or impossible. Neither the pessimists nor the optimists have the prophetic powers to make that call. No one knows the future; all we know is that it will be crazy and dirty and complicated. But we know the present.

Humans already feed machines with cognition, and often we speak to and like machines without being aware of it. We can keep things a bit simpler if we make sure that technology stays a discernible extension and amplifier and doesn’t continue to increase the entropy between human and artificial information. Let’s keep things as simple as possible. The information technology we have built to date, and the chaos of data it generates, is already hard enough to control.

It is fair to assume that human intelligence cannot be reproduced without reproducing the human body and the natural and cultural history it grew out of. Theoretically, one could imagine a Blade Runner future where machines make themselves produce and reproduce human intelligence to a point where human and machine become as good as indiscernible. Whether such a future is desirable or not is another question.

“If it would be possible to build artificial wet brains using human-like grown neurons, my prediction is that their thought will be more similar to ours. The benefits of such a wet brain are proportional to how similar we make the substrate. The costs of creating wetware is huge and the closer that tissue is to human brain tissue, the more cost-efficient it is to just make a human. After all, making a human is something we can do in nine months. Furthermore, as mentioned above, we think with our whole bodies, not just with our minds. We have plenty of data showing how our gut’s nervous system guides our ‘rational’ decision-making processes and can predict and learn. The more we model the entire human body system, the closer we get to replicating it.” 9

We need to make sure now that we do not grow into a future where we cannot discern human from artificial, fake from factual, where we have no basis to decide what existence we want to lead. Right now, we need to make sure that the distinction between human and machine stays clear. We need to make sure that we, and not those who own information technology, decide what future we want.

The first step we need to take is making sure that machines make themselves recognizable when they talk to us. Processed speech needs to be discernible from human speech, legally and visually. We need technology that protects the human condition and our right to choose rather than exploiting our cognition and privacy.

We need to know whether we talk to machines or humans, whether we devote our time to machines or to living beings. We need to know whether we try to re-feel what has been felt by another human being, or whether we die a little, offering our time and cognition to a robot that uses our mind and time as a resource. We need to know who runs these robots. And we need to know how they work. Bots have no right to anonymity. Algorithms that influence human existence on the deepest level shouldn’t be trade secrets.

We need to add verification mechanisms—iris scanning, fingerprints, blockchain verification for publishing processes—that offer us some security that the information we read has been created by humans with body and mind. It won’t be foolproof; like all technology, it can be abused and it will be abused. But these mechanisms will add expensive hurdles for crooks and send an important message: “If you abuse this, you are a criminal.”
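The “blockchain verification for publishing processes” mentioned above boils down to a simple hash chain. Here is a toy sketch in Python to make the idea concrete (the functions `record` and `verify` are invented for illustration, not any real publishing API): each published entry stores the hash of the previous one, so altering any past entry breaks every later link and becomes detectable.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def record(posts):
    """Build a chain of entries, each linked to the previous via SHA-256."""
    chain = []
    prev_hash = GENESIS
    for content in posts:
        body = {"content": content, "prev": prev_hash}
        # Canonical serialization so the hash is reproducible
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        body["hash"] = digest
        chain.append(body)
        prev_hash = digest
    return chain

def verify(chain):
    """Recompute every link; any tampered entry breaks the chain."""
    prev_hash = GENESIS
    for entry in chain:
        expected = hashlib.sha256(
            json.dumps({"content": entry["content"], "prev": prev_hash},
                       sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True
```

This is only the integrity half of the story; real publishing verification would also need signatures tying entries to identified authors, which is exactly where the iris scans and fingerprints above would come in.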

The imminent threat from technology today no longer lies just in its physical power, but in its power to distort reality, to shape our perception to the advantage of its owners, and to take away our ability to shape the world in our image. You may find that old-fashioned, sentimental, unscientific, or “just too bad but inevitable”. The laws of nature are given; human autonomy is a hard-fought achievement that transcends them. Our freedom to shape reality, to choose between right and wrong, the freedom to err, is as real as gravitation. Let’s not cede that superpower to those who run the machines.

  1. Garry Kasparov in what happens when machines ‘reach the level that is impossible for humans to compete’ ↩︎
  2. Raising good robots ↩︎
  3. Marshall McLuhan, Understanding Media ↩︎
  4. Ibid. ↩︎
  5. Facebook Knows You Better Than Anyone Else ↩︎
  6. Fred Wilson on What happened in 2017 ↩︎
  7. Hans-Georg Gadamer, The Enigma of Health ↩︎
  8. The madness of ceding power to those who lack understanding, and are thus unable to take responsibility for their actions, finds a perfect example in the current president of the United States. He incorporates artificial intelligence put in charge. ↩︎
  9. Kevin Kelly, the Myth of Super Human AI ↩︎

This post first appeared on Home | IA.
