
To Make Self-Driving Cars Safe, We Also Need Better Roads and Infrastructure


The big question around self-driving cars, for many people, is: When will the technology be ready? In other words, when will autonomous vehicles be safe enough to operate on their own? But there has been far less attention paid to two equally important questions: When will the driving environment be ready to accommodate self-driving cars? And where will this technology work best?

Self-driving cars are the most challenging automation project ever undertaken. Driving requires a great deal of processing and decision making, which must be automated. On top of that, there are many unpredictable external factors that must be accounted for, and therefore many ways in which the driving environment must change.

Cars are heavy, fast-moving objects, operating in public spaces. Safety is largely the responsibility of the driver, who must continuously observe, analyze, decide, and act. Not only do drivers have to follow the rules of the road, but they also have to communicate with each other and other road users to navigate ambiguous or contested situations; think about how you wave or nod to someone to signal “You go first.”

Self-driving systems have to execute all of these functions, and do so accurately, reliably, and safely, across a wide variety of situations and conditions. Currently, the technology is more capable in some situations than in others.

Through sensors and detailed mapping software, the systems build representations of their environments and update them many times a second. They classify the objects they see and predict their likely behavior before selecting appropriate responses. The speed and the accuracy of these systems already surpass human responses in many situations. Lasers can see in the dark. Reaction times can be nearly instantaneous.
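To make that cycle concrete, here is a minimal, hypothetical sketch of the sense-classify-predict-respond loop described above. It is not any manufacturer's actual software; the object types, the constant-velocity prediction, and the distance thresholds are all illustrative assumptions, and real systems run a far richer version of this loop many times per second.

```python
# Illustrative sketch of one tick of a sense -> classify -> predict -> respond
# cycle. All names and thresholds are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    kind: str          # e.g. "pedestrian", "cyclist", "vehicle"
    distance_m: float  # distance ahead of our car, in meters
    speed_mps: float   # closing speed toward our car, in meters/second

def predict_distance(obj: DetectedObject, horizon_s: float) -> float:
    """Constant-velocity prediction: how far away will the object be in horizon_s seconds?"""
    return obj.distance_m - obj.speed_mps * horizon_s

def select_response(objects: list[DetectedObject]) -> str:
    """Pick the most conservative action required by any predicted conflict."""
    for obj in objects:
        if predict_distance(obj, horizon_s=2.0) < 5.0:    # within 5 m in 2 s
            return "emergency_brake"
        if predict_distance(obj, horizon_s=4.0) < 15.0:   # within 15 m in 4 s
            return "slow_down"
    return "maintain_speed"

# One tick of the loop, with the sensor input stubbed out:
frame = [DetectedObject("pedestrian", distance_m=12.0, speed_mps=1.5),
         DetectedObject("vehicle", distance_m=40.0, speed_mps=0.0)]
print(select_response(frame))  # -> "slow_down"
```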

But some conditions still constrain them. Cameras are challenged by strong, low-angle sunlight (important for reading traffic lights), and lasers can be confused by fog and snowfall. Unusual, unfamiliar, and unstructured situations (so-called edge cases), such as accidents, road work, or a fast-approaching emergency response vehicle, can be hard to classify. And self-driving systems are not good at detecting and interpreting human cues, such as gestures and eye contact, that facilitate coordination between cars on the road.

Well-structured processes and environments are much easier to automate than poorly structured ones. Automated systems need to collect, classify, and respond to information, which is easier to do in a clean, unambiguous environment; many driving environments are anything but. The designers of self-driving systems simply cannot foresee every possible combination of conditions that will occur on the road. (Though companies are trying: Google’s Waymo team deliberately subjects its cars to “pathological situations” that are unlikely to happen, such as people hiding in bags and then jumping in front of the car.)

Over time, learning will take place and the number of situations that systems cannot recognize will decrease. In fact, learning is likely to be better in an automated system, because once an incident has occurred and is understood, the fix can be rolled out across all vehicles. Currently, learning is largely confined to individual drivers, and is not shared across the system as a whole. But novel combinations of conditions will never be eliminated, and sometimes these will have catastrophic consequences — a pattern seen even in the highly disciplined environment of commercial aviation.

The problem therefore lies in our period of transition. For the technology to improve, it must be exposed to real, on-road conditions. In the early stages of deployment, it sometimes won’t know the best way to respond and therefore will have to hand over control to a human driver. The issue here, however, is that humans zone out when their full attention is not needed. As self-driving cars improve and humans intervene less, driver inattention and the associated problem of quickly reengaging to respond become even bigger problems.

And as the technology becomes more sophisticated, the situations where it requires human assistance are likely to be more complex, ambiguous, and difficult to diagnose. In these cases, a startled human has much less chance of responding correctly. Even in the highly sterile environment of an aircraft cockpit, pilots can be caught by surprise and respond incorrectly when automation has ceded control.

Two fatal accidents involving Tesla vehicles operating with Autopilot engaged demonstrate how this space between semi-automated driving and intermittent human control may be the most dangerous place of all. In the 2016 crash in Florida, the driver had his hands on the steering wheel for only 25 seconds of the 37 minutes in which he operated the vehicle in automated control mode. In the 2018 crash in California, the driver’s hands were not detected on the steering wheel in the six seconds preceding the crash.

This problem has led companies such as Waymo and Ford to advocate for fully autonomous cars that get rid of the need for handovers. But this requires a leap: With no driver as backup, there is a risk that the technology will be catapulted into environments that are beyond its ability to handle.

Self-driving cars also have to navigate an environment that is shared — with pedestrians who sometimes cross the road without looking, cyclists, animals, debris, inanimate objects, and of course whatever elements the weather brings. Road infrastructure, regulations, and driving customs vary from country to country, even city to city, and are overseen by a multiplicity of bodies. So it’s not clear which institutions, if any, have the power and reach to regulate and standardize the driving environment. Roads are very different from airspace, which is governed by powerful global regulatory bodies, has far less traffic, and has very high licensing standards for pilots.

This means that we need to think not just about the onboard technology but also about the environment in which it is deployed. We’ll likely start to see a more standardized and active environment as more smart infrastructure is constructed. Think of radio transmitters replacing traffic lights, higher-capacity mobile and wireless data networks handling both vehicle-to-vehicle and vehicle-to-infrastructure communication, and roadside units providing real-time data on weather, traffic, and other conditions. Common protocols and communications standards will have to be devised and negotiated, as they were with internet communication protocols or the Global System for Mobile Communications (GSM) for mobile phones. This transition will take decades, and autonomous vehicles will have to share the roads with human drivers.
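As a thought experiment, the snippet below sketches the kind of structured broadcast a roadside unit might send under a shared protocol. The field names and JSON format are assumptions made up for illustration, not part of any existing V2X standard; the point is simply that every vehicle in range could parse the same message the same way once common standards are agreed.

```python
# Hypothetical roadside-unit broadcast under an assumed shared protocol.
# Field names and format are illustrative, not an existing V2X standard.
import json
from datetime import datetime, timezone

roadside_update = {
    "unit_id": "RSU-0417",  # hypothetical roadside unit identifier
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "signal_phase": {"intersection": "5th_and_Main",
                     "state": "red", "seconds_to_change": 12},
    "weather": {"visibility_m": 800, "surface": "wet"},
    "traffic": {"average_speed_kph": 23, "incident_ahead": False},
}

# Any vehicle within range parses the same broadcast the same way,
# which is the value of agreeing on common protocols in advance.
received = json.loads(json.dumps(roadside_update))
if received["signal_phase"]["state"] == "red":
    print("Plan to stop at", received["signal_phase"]["intersection"])
```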

If rapid, radical change to the driving environment is impractical, what is the alternative? The most likely near-term outcome is some form of spatial segregation: Self-driving cars will operate in some areas and not others. We’re already seeing this, as early trials of the technology are taking place in designated test areas or in relatively simple, fair-weather environments. But we may also see dedicated lanes or zones for self-driving vehicles, both to give them a more structured environment while the technology is refined and to protect other road users from their limitations.

We can also expect to see self-driving cars deployed first in relatively controlled environments (such as theme parks, private campuses, and retirement villages), where speeds are lower and the range of situations the vehicles have to deal with is limited. Economics, too, will play an important role in where and how self-driving cars begin to operate. The vehicles will likely appear in environments where it is cost-effective to develop and maintain highly detailed mapping, such as dense urban environments, although of course these also pose other challenges due to their complexity.

Although the cost of self-driving cars will fall once they enter mass production, it is currently very high, from $250,000 to $300,000 a vehicle, according to some estimates. So they will first appear in settings where vehicle utilization rates are high and where the cost of a driver’s time matters — imagine robotaxis or ride-hailing vehicles operating in defined, geo-fenced zones. Trials of these are already under way. Robotaxis also point to a way in which humans can support self-driving technology without running into the zone-out problem: remote operators in call centers. A self-driving vehicle that cannot get past an obstruction in the road without acting illegally (crossing a white line, for example) can stop and ask a human operator for advice; the operator can then authorize it to act in a nonstandard way.
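Here is a minimal sketch of that escalation flow, under stated assumptions: the function names, the approval stub, and the "cross the line only with authorization" rule are all hypothetical, standing in for whatever a real operations center and vehicle policy would look like.

```python
# Minimal sketch of the remote-assistance flow described above: the vehicle
# stops, asks a human operator, and proceeds only with explicit authorization.
# Names and rules are illustrative assumptions, not a real system.

def request_operator_authorization(situation: str) -> bool:
    """Stand-in for a call to a remote operations center, where a human
    reviews the situation (e.g. via the car's cameras) and approves or denies."""
    print(f"Operator reviewing: {situation}")
    return True  # in this sketch, the operator approves

def handle_obstruction(blocked: bool, requires_crossing_line: bool) -> str:
    if not blocked:
        return "continue"
    if not requires_crossing_line:
        return "wait_and_replan"  # the vehicle can resolve this on its own
    # The maneuver would normally be illegal, so the car stops and escalates.
    if request_operator_authorization("obstruction ahead; need to cross white line"):
        return "cross_line_with_authorization"
    return "remain_stopped"

print(handle_obstruction(blocked=True, requires_crossing_line=True))
```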

In the long run, driverless cars will help us reduce accidents, save time spent on commuting, and make more people mobile. The onboard technology is developing rapidly, but we’re entering a transition stage in which we need to think carefully about how it will interact with human drivers and the wider driving environment. During this period, the key question we should be asking is not when will self-driving cars be ready for the roads, but rather which roads will be ready for self-driving cars.


