
Learning to trust our AI colleagues

Computers were once seen as more or less infallible machines that simply processed discrete inputs into discrete outputs, whose calculations were never wrong. If a problem ever arose in a calculation or business process, it was definitionally caused by human error, not the computer. As machines encroach on ever-more humanlike tasks that go beyond basic number crunching and enter the realm of discernment and decision-making via artificial intelligence (AI), the business world is developing a new understanding of what it means to trust machines.

The degree to which businesses and workers learn to trust their AI “colleagues” could play an important role in their business success. Most organizations today say they are data-driven. Many even call themselves AI-fueled companies. There’s plenty of evidence suggesting businesses that use AI pervasively throughout their operations perform at a higher level than those that don’t: organizations that have an AI strategy are 1.7 times more likely to achieve their goals than those without one. Increasingly, the specific underlying AI tool implemented in a given workflow matters less. With cloud vendors offering prebuilt models, any business can access world-class AI functionality with a few clicks. The top-performing facial recognition vendors ranked by the National Institute of Standards and Technology deliver comparable performance, and all are easily accessed through cloud-based services. It’s what you do with the tool that’s important, and whether your people, customers, and business partners trust the results.

What may matter in the future is not who can craft the best algorithm, but rather who can use AI most effectively. As algorithms increasingly shoulder probabilistic tasks such as object detection, speech recognition, and image and text generation, the real impact of AI applications may depend on how much their human colleagues understand and agree with what they’re doing. People don’t embrace what they don’t understand. We spent the last 10 years trying to get machines to understand us better. Now it looks like the next 10 years might be more about innovations that help us understand machines. Applications that leverage AI in transparent and explainable ways will be key to spurring adoption.

“What we’re designing is an interface of trust between a human and a machine,” says Jason, identity management capability manager at the Transportation Security Administration. “Now you’re taking an input from a machine and feeding it into your decision-making. If humans don’t trust machines or don’t think they’re making the right call, it won’t be used.”

 

Think of adopting AI like onboarding a new team member. We know generally what makes for effective teams: openness, rapport, the ability to have honest discussions, and a willingness to accept feedback to improve performance. Implementing AI with this framework in mind may help the team view AI as a trusted copilot rather than a brilliant but taciturn critic. When applications are transparent, resilient, and dependable, they can become a natural part of the workflow.

But while many organizations are sold on AI’s capabilities, they’re less sold on its fit. Currently, enterprises have a hard time trusting AI with mission-critical tasks. One report found that 41% of technologists are concerned about the ethics of the AI tools their company uses, and 47% of business leaders have concerns about transparency: the ability for users to understand the data that went into a model. Businesses are also grappling with a related concept, explainability: the ability of a model to give an explicit justification for its decision or recommendation. Explainability is necessary when required by regulations, but it’s also becoming expected functionality in situations where it helps make clear to end users how to use a tool, improve the system generally, and assess fairness. Trust is one of the biggest differentiators between the successful use of AI at scale and failure to reap returns on AI investment, yet many businesses haven’t figured out how to achieve it.

How to make AI more trusted

Transparent data-collection methods enable the end user to understand why certain pieces of information are being collected and how they’re going to be used. When users have this control, they can make informed decisions about whether the AI tool represents a fair value exchange.
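One lightweight way to operationalize that transparency is to declare, in a manifest the application itself can render, what each field is for and how long it’s kept. The field names and policies below are hypothetical, offered only as a sketch of the pattern:

```python
# Hedged sketch: a declared data-collection manifest rendered as a
# plain-language disclosure, so users can judge the value exchange.
DATA_COLLECTION_MANIFEST = {
    "email": {
        "purpose": "account login and security notifications",
        "retention": "while the account is active",
    },
    "face_image": {
        "purpose": "identity verification at check-in",
        "retention": "deleted within 24 hours of verification",
    },
}

def describe_collection(manifest):
    """Render the manifest as a plain-language disclosure for end users."""
    for field, policy in manifest.items():
        print(f"We collect '{field}' for {policy['purpose']} "
              f"(retention: {policy['retention']}).")

describe_collection(DATA_COLLECTION_MANIFEST)
```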

One of the biggest clouds hanging over AI today is its black-box problem. Because of how certain algorithms train, it can be very difficult, if not impossible, to understand how they arrive at a recommendation. Asking workers to do something simply because the great and powerful algorithm behind the curtain says to is likely to lead to low levels of buy-in. 
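There are established techniques for prying the box open, at least partially. As a rough illustration (assuming scikit-learn, with a stand-in dataset and model that are not from the article), permutation importance shuffles each input feature and measures how much the model’s accuracy drops, revealing which inputs the model actually leans on:

```python
# Hedged sketch: post hoc explainability via permutation importance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Stand-in tabular dataset and model, purely for illustration.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn; a large accuracy drop means the model
# relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

# Report the five most influential features.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])[:5]
for name, score in ranked:
    print(f"{name}: {score:.3f}")
```

Even a crude ranking like this gives workers something concrete to interrogate, rather than an unexplained verdict.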

 

One automaker in the United Kingdom is tackling this problem by bringing frontline workers into the process of developing AI tools. The manufacturer wanted to bring more AI into the vehicle-assembly process by enabling machine learning to control assembly robots and identify potentially misaligned parts before the vehicle gets too far into the assembly process. At the start of the development process, engineers bring in frontline assembly workers to gauge their perception of problems and use that input to inform development. Rather than dropping AI into an arbitrary point in the production process, they use it where the assemblers say they most need help.

Workers have grown accustomed to a certain level of reliability from work applications. When you open an internet browser or word-processing application, it typically simply “behaves.” More specialized business applications such as customer relationship management platforms and enterprise resource planning tools may be a bit more finicky, but their challenges are fairly well established, and good developers know how to troubleshoot them.

With AI, the question isn’t whether it will work but rather how accurate the result will be or how precisely the model will assess a situation. AI is generally neither right nor wrong in the traditional sense. AI outputs are probabilistic, expressing the likelihood of certain outcomes or conditions as percentages (like a weather forecast predicting a 60% chance of rain), which can make assessing reliability a challenge. But workers need to know how accurate and precise AI is, particularly in critical scenarios such as health care applications.
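To make that concrete, here is a minimal sketch of how probabilistic output looks in practice and how a workflow might route low-confidence predictions to a human. The model scores, labels, and threshold are invented for illustration; they don’t come from any system described in the article.

```python
# Hedged illustration: a classifier emits probabilities, not verdicts,
# and the surrounding workflow decides what confidence is acceptable.
import math

def softmax(scores):
    """Convert raw model scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores from a part-alignment model on one vehicle part.
labels = ["aligned", "misaligned"]
scores = [1.2, 2.1]

probs = softmax(scores)
for label, p in zip(labels, probs):
    print(f"{label}: {p:.0%}")  # e.g., "misaligned: 71%", like a rain forecast

# In a critical workflow, low-confidence calls get flagged for human review.
CONFIDENCE_THRESHOLD = 0.90
if max(probs) < CONFIDENCE_THRESHOLD:
    print("Confidence below threshold: route to a human inspector")
```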

As enterprises deploy AI in traditional operational systems, a new trend is taking shape on the horizon: generative AI. We’re already seeing the emergence of tools such as OpenAI’s DALL·E 2 image generator and GPT-3 text generator. There’s a generative model for music called Jukebox that lets users automatically create songs that mimic specific artists’ styles, and AI is increasingly being used to automatically caption live audio and video. These types of content generators are getting more sophisticated by the day and are reaching the point where people have a hard time telling the difference between artificially rendered works and those created by humans.

Concern over automation’s impact on jobs is nothing new, but it is growing ever more pronounced as we head toward this automatically generated future. In many cases, generative AI is proving itself in areas that were once thought to be automation-proof: even poets, painters, and priests are finding no job will be untouched by machines.

That does not mean, however, that these jobs are going away. Even the most sophisticated AI applications today can’t match humans when it comes to purely creative tasks such as conceptualization, and we’re still a long way off from AI tools that can unseat humans in these areas. A smart approach to bringing in new AI tools is to position them as assistants, not competitors.

Companies still need designers to develop concepts and choose the best output, even if designers aren’t doing as much of the manipulating of images directly. They need writers to understand topics and connect them to readers’ interests. In these cases, content generators are just another tool. As OpenAI CEO Sam Altman wrote in a blog post on DALL·E 2, “It’s an example of a world in which good ideas are the limit for what we can do, not specific skills.”

Ultimately, companies that learn to team with AI and leverage the unique strengths of both AI and humans may find that we’re all better together. Think about the creative, connective capabilities of the human mind combined with AI’s talent for production work. We’re seeing this approach come to life in the emerging role of the prompt engineer. This teaming approach may lead to better job security for workers and a better employee experience for businesses.
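What that teaming looks like in code can be surprisingly mundane: the human supplies the creative intent and constraints, and a structured prompt steers the generative model. The sketch below is a hypothetical illustration; the template, field names, and send_to_model() stub are stand-ins for whichever generation API a team actually uses.

```python
# Hedged sketch of prompt engineering: human creative direction in,
# structured prompt out. No real generative-AI API is called here.
PROMPT_TEMPLATE = (
    "Create a {medium} of {subject}, in the style of {style}. "
    "Mood: {mood}. Avoid: {exclusions}."
)

def build_prompt(medium, subject, style, mood, exclusions):
    """Fill the template with human-chosen creative direction."""
    return PROMPT_TEMPLATE.format(
        medium=medium, subject=subject, style=style,
        mood=mood, exclusions=exclusions,
    )

def send_to_model(prompt):
    """Hypothetical stand-in for a real image- or text-generation call."""
    print(f"Prompt sent to model: {prompt}")

send_to_model(build_prompt(
    medium="product illustration",
    subject="an electric delivery van on a rainy city street",
    style="flat vector art",
    mood="optimistic",
    exclusions="text, logos",
))
```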

 

AI continues to push into new use cases through emerging capabilities that most people thought would remain the exclusive domain of humans. As enterprises consider adopting these capabilities, they could benefit from thinking about how users will interact with them and how that will impact trust. For some businesses, the functionality offered by emerging AI tools could be game-changing. But a lack of trust could ultimately derail these ambitions.

 


