
Summary: Human-Centered AI by Ben Shneiderman

Tags: human hcai book

Recommendation

Humans have made tools since prehistoric times. And tools or technologies – like the steam engine, the telegraph or electric lights – change how people live. Thanks to advances in algorithms, machine learning and deep learning, today’s artificial intelligence is a super tool that may improve human life on many levels, from health care to education to the environment. But criminals, terrorists and people seeking to undermine democratic governments can also use AI. Human-Centered AI (HCAI) provides strategies for grounding AI in human values and deploying this powerful new technology to improve, not diminish, people’s lives.

Take-Aways

  • Human-Centered AI (HCAI) seeks to empower people, not replace them.
  • HCAI draws on both rationalism and empiricism.
  • HCAI attempts to combine significant levels of human control with automation.
  • HCAI strives to be “reliable, safe and trustworthy.”
  • HCAI designs should be consistent and user-friendly, and should prioritize user control.
  • AI research should focus on science and innovation.
  • Software engineers who create HCAI systems should use ethically viable practices.
  • Businesses deploying HCAI systems must prepare for potential failures.
  • As AI develops, organizations should adopt HCAI principles.

Summary

Human-Centered AI (HCAI) seeks to empower people, not replace them.

Artificial intelligence – gaining fuel from increasingly sophisticated algorithms, machine learning and deep learning – promises immense advances in medicine, manufacturing, communications and transportation. Yet criminals, terrorists and people seeking to undermine democracies and their institutions can also exploit possibilities that AI advances reveal.

“The remarkable progress in algorithms for machine and deep learning during the past decade has opened the doors to new opportunities, and some dark possibilities.”

Human-Centered Artificial Intelligence (HCAI) provides strategies for moving away from AI that focuses exclusively on powerful algorithms, and toward AI that prioritizes the human perspective. In the recent past, AI researchers and developers targeted the power and performance of algorithms and technologies. But fear of bad actors and of AI’s impact on the economy made people increasingly concerned about AI’s effects on humans. HCAI doesn’t want AI and automation to replace human beings. It aspires to keep people in control of AI. It wants AI to benefit human beings and promote human aspirations, such as a more just society. HCAI prioritizes “rights, justice and dignity.”

HCAI draws on both rationalism and empiricism.

In the Western philosophical tradition, rationalism and empiricism contrast with one another. Rationalists such as René Descartes developed their worldview through introspection, reason, logic and mathematics. Without ever leaving their desks, and with their faith in the methodologies at their disposal, rationalists sought the absolute truth. Empiricists such as John Locke and David Hume, conversely, insisted that they had to leave their desks and observe the world in all its complexity to find the truth.

“The rationalist viewpoint is a strong pillar of the AI community, leading researchers and developers to emphasize data-driven programmed solutions based on logic. Fortunately, an increasing component of AI research bends to the empirical approach.”

Moving forward, technology design’s character will depend upon whether researchers and developers are rationalists or empiricists. AI rationalists lean toward “autonomous designs” that allow AI to operate without human intervention. But the prospect of, for example, military use of “lethal autonomous weapons systems” and safety issues surrounding self-driving cars led designers to increasingly adopt an empiricist approach. AI’s systematic biases that arise in the criminal justice and mortgage approval systems have led people to seek a more concrete – and empirical – understanding of how these systems work, and how HCAI might improve them.

HCAI’s investment in empiricism places users at the center of AI system design. HCAI designers partially fashion their systems by observing users in their homes and offices, and then base system revisions on documented user experience. AI rationalists believe that computers can ultimately mimic human brains and display thoughts, emotions and subjectivity. But HCAI empiricists regard humans and computers as occupying entirely different categories. For them, AI remains a tool humans created that should remain under human control.

HCAI attempts to combine significant levels of human control with automation.

HCAI distinguishes between the degree to which an AI system operates autonomously and the degree to which people control it. HCAI seeks systems that combine the two. This combination of human control and automation should promote applications that are “reliable, safe and trustworthy” and that enhance human performance in complex projects.

“There is growing awareness by leading artificial intelligence researchers and developers that human-centered designs are needed.”

The common perception of full computer autonomy is rife with misconceptions. The idea that full autonomy will obviate any need for human control is a myth. In the case of autonomous weapons, for example, the cost and the consequences of failure are prohibitive. Automation can prevent human errors when designers anticipate likely mistakes and build their avoidance into the system. Such an approach makes sense for jobs that are highly repetitive and risky for humans to perform. Jobs that involve reason, imagination and emotion remain better left in human hands. Complex, challenging problems demand a well-designed, well-thought-through relationship between humans and computers. In such cases, AI systems and computers don’t supplant humans; they spur human innovation.

HCAI strives to be reliable, safe and trustworthy.

Researchers and developers find machine autonomy appealing. But human autonomy is equally significant. As separate goals, machine and human autonomy can each be useful and valuable in certain contexts, but function improves when they work in combination. Apply AI-driven machine autonomy in stable, consistent contexts, and rely on human autonomy where machine autonomy proves inadequate.

“Software engineers, business managers, independent oversight committees and government regulators all care about the three goals.”

A reliable system generates predictable responses. Reliability results from software engineers’ practices, which include accountability and transparency when failures occur. Audits of data streams, documented workflows, valid and verifiable tests, assessments of biases built into the system, and user-friendly interfaces all promote accountability and transparency. A system is safe when company executives invest in safety, train employees with safety in mind, ensure that employees report failures and near-failures, and review problems and ways of dealing with them. Companies should also conform to industry safety standards.

As for a trustworthy system, the very concept of trust is complex. A system is trustworthy when it justifies people’s trust. Neutral, independent organizations – such as accounting and insurance entities – must assess the trustworthiness of a system.

HCAI designs should be consistent and user-friendly, and should prioritize user control.

The debate over whether to pursue artificial intelligence or “intelligence augmentation” is more or less irrelevant. Builders can design HCAI systems in ways that combine AI algorithms with interfaces that increase human capabilities.

“Successful designs are comprehensible, predictable and controllable, thereby increasing the users’ self-efficacy.”

The “Eight Golden Rules” for interface design are: 1) strive for consistency, 2) seek universal usability, 3) offer informative feedback, 4) design dialogs that yield closure, 5) prevent errors, 6) permit easy reversal of actions, 7) keep users in control and 8) reduce short-term memory load. The Eight Golden Rules help designers fashion intelligible, stable interfaces that humans control.
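The rule about reversal of actions is often realized in software as a command stack that records how to undo each action. The sketch below is my own minimal illustration of that pattern, not code from the book; the `Editor` class and its methods are hypothetical:

```python
# Minimal sketch of "easy reversal of actions": every action
# records how to undo itself, so users can always back out
# of a mistake.

class Editor:
    def __init__(self):
        self.text = ""
        self._undo_stack = []  # one undo function per action

    def insert(self, s):
        prev = self.text  # captured per call, so each undo restores the right state
        self.text += s
        self._undo_stack.append(lambda: setattr(self, "text", prev))

    def undo(self):
        if self._undo_stack:
            self._undo_stack.pop()()  # pop the latest undo function and run it

ed = Editor()
ed.insert("Hello")
ed.insert(", world")
ed.undo()
print(ed.text)  # -> Hello
```

The same stack-of-inverses idea scales to richer interfaces; the point of the rule is that users explore more confidently when no action is final.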

Designers can expand the Eight Golden Rules with an “HCAI pattern language,” which suggests ways of approaching design problems. The HCAI pattern language begins with a display of all relevant data that users can zoom in on; a display of the entire process so users can assess the data; control panels that let users guide the process; and sensors that preserve process history and audit information. Designs prove more effective when they make it easy to share content and ask for help, provide cautious, independent evaluations when the consequences may be harmful, thwart malicious attacks from outside, and remain open to feedback from users and stakeholders.

AI research should focus on science and innovation.

Early in AI’s history, more than 50 years ago, the principal goal of nascent AI research was to answer the question that legendary computer scientist Alan Turing posed: Can machines think? At that point, answering the question proved relatively straightforward. Machines can think if they satisfy the “Turing Test,” which asks whether a user can tell the difference between a machine’s answers to questions and a human’s answers to the same questions. Scientific research has come a long way since Turing’s day. Science has gained insights into perception, cognition and natural language, and into whether AI can make precise predictions or enable robots to perform as well as humans.

“Bold aspirations can be helpful, but another line of criticism is that the AI science methods have failed, giving way to more traditional solutions, which have succeeded.”

AI research should pursue two broad goals: scientific advances and innovation. The scientific goal would be to achieve a deeper knowledge of human cognitive capacities such as perception and reason, to create devices that match or even exceed human abilities, and ultimately to create “social robots” that have consciousness and emotion. The final goal would be to achieve “artificial general intelligence,” which could take up to 1,000 years. Researchers seek to apply HCAI to enhance human capacities with “active appliances” (those with, for example, automatic sensors or programs that users can set), prosthetics, and the like.

Software engineers who create HCAI systems should use ethically viable practices.

HCAI may provide benefits for humans in health care, education, criminal justice and the environment. HCAI could improve medical diagnoses and provide ways to protect species at risk of extinction. Some fear the misuse of AI capabilities to persecute minority groups, violate human rights and commit crimes. HCAI’s core aspiration is to empower and benefit humans while incorporating ways to mitigate its own misuse.

“HCAI research builds on these scientific foundations by using them to amplify, augment and enhance human performance in ways that make systems reliable, safe and trustworthy.”

HCAI needs to incorporate ethical principles and political governance into systems production. Core ethical principles for HCAI design will likely include accountability for the product, clarity as to how it works, and due consideration for users’ rights and well-being. HCAI must include tools for auditing a system’s performance, especially when things go wrong. For example, the aviation industry’s use of flight data recorders might serve as a model. Flight data recorders prove crucial for analyzing the causes of airplane crashes, and are useful in improving design and training.
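The flight-recorder analogy suggests that HCAI systems should keep tamper-evident logs of their decisions. Below is a minimal sketch of one way to do that, using hash chaining; this is my own illustration under assumed field names (`inputs`, `output`, `model_version`), not a design from the book:

```python
# Sketch of a "flight data recorder" for an AI system: each
# decision is appended to a hash-chained log, so any later
# alteration of an entry breaks the chain and is detectable.

import hashlib
import json
import time

class DecisionLog:
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis hash

    def record(self, inputs, output, model_version):
        entry = {
            "time": time.time(),
            "inputs": inputs,
            "output": output,
            "model_version": model_version,
            "prev_hash": self._last_hash,  # links this entry to the previous one
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self._last_hash = digest
        self.entries.append(entry)

    def verify(self):
        """Recompute the chain; returns False if any entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev:
                return False
            if hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = DecisionLog()
log.record({"loan_amount": 10000}, "approved", "v1.2")
log.record({"loan_amount": 50000}, "denied", "v1.2")
print(log.verify())   # -> True
log.entries[0]["output"] = "denied"   # simulate tampering
print(log.verify())   # -> False
```

A real audit trail would also need secure storage and independent custody of the log, but even this simple chain shows why recorded decisions support the accountability HCAI calls for: failures can be reconstructed after the fact.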

Businesses deploying HCAI systems must prepare for potential failures.

People, organizations and systems all fail sometimes. But users can mitigate failures and deal with them when they occur. In the past, the consequences of such failures stayed within fairly well-defined boundaries. In a globalized world dependent on digital technologies, failures can have catastrophic, widespread consequences.

“Preparing for failure by organizational design has become a major HCAI theme.”

Organizations can take responsibility for an accident or failure, which requires clear operating procedures and a well-defined chain of command. They can approach failure through the prism of organizational design and management, paying rigorous attention to failures and near-failures, and to how to avoid and mitigate them. They can also address potential failure through “resilience engineering,” which makes an organization agile enough to respond successfully even to unanticipated failures. Organizations should imbue their cultures with the credo that safety is the paramount concern of leaders and employees.

As AI develops, organizations should adopt HCAI principles.

AI is advancing rapidly, with ever-more-powerful systems and applications.

“This means gathering user requirements, adherence to design guidelines, iterative testing with users, and continuous evaluation once the product or service is released.”

Placing HCAI methods and principles at the center of AI’s future opens numerous possibilities. Technology-oriented researchers want to explore and improve deep learning algorithms. Socially oriented researchers seek ways to reduce bias in certain systems and applications. The three areas in which HCAI could play an especially positive role are “citizen science” (research in which volunteers help gather and review data for professionals), combating toxic misinformation campaigns, and health care.

People can and do pursue amateur scientific projects that produce vast amounts of data, benefiting professional projects in fields as diverse as biology and biochemistry. HCAI can produce systems designed to help curb the conspiracy theories, hate speech, deepfakes and misinformation that proliferate on social media. HCAI will also prove useful in analyzing DNA and amino acid chains to rapidly develop vaccines against dangerous viruses such as the one that causes COVID-19.

About the Author

Ben Shneiderman is an emeritus distinguished professor in the University of Maryland’s Department of Computer Science and the founding director of its Human-Computer Interaction Laboratory.

Review

The book is a visionary and practical guide for designing AI systems that put humans at the center of the process. The author, Ben Shneiderman, is a distinguished professor of computer science and human-computer interaction at the University of Maryland. He proposes a framework for Human-Centered AI (HCAI) that aims to support human self-efficacy, promote creativity, clarify responsibility, and facilitate social participation. He also advocates for human values, rights, justice, and dignity in AI design and use.

The book covers various topics, such as:

  • How to define and measure the goals and outcomes of AI systems
  • How to design and evaluate AI systems that are reliable, safe, and trustworthy
  • How to ensure that AI systems are transparent, explainable, and accountable
  • How to promote human autonomy, creativity, and collaboration with AI systems
  • How to address the ethical, social, and legal implications of AI systems
  • How to foster a culture of innovation and responsibility in AI development and use

The book is divided into four parts: Part One introduces the HCAI framework and its benefits, such as enhancing human performance, increasing user satisfaction, and ensuring system reliability. Part Two presents practical tools and techniques for implementing HCAI, such as design guidelines, interface patterns, and evaluation methods. Part Three explores specific applications and domains for HCAI, such as health care, education, business, and government. Part Four discusses the challenges and opportunities for HCAI, such as ethical issues, social impacts, and future directions.

I found the book informative and inspiring. The author writes in a clear and persuasive tone, drawing on his extensive experience and knowledge in the field of AI. The book is full of relevant, timely examples that illustrate the benefits and challenges of HCAI, along with useful tools and techniques for anyone who designs, develops, or uses AI systems.

I especially liked the chapters on how to define the goals and outcomes of AI systems, how to design and evaluate AI systems that are reliable, safe, and trustworthy, how to ensure that AI systems are transparent, explainable, and accountable, and how to promote human autonomy, creativity, and collaboration with AI systems. I learned a lot from the author’s advice on how to apply the principles of HCAI to various domains and applications, such as health care, education, business, government, entertainment, and social media. I also appreciated the exercises and resources that the author provided at the end of each chapter, which helped me reflect on my own situation and plan my next steps.

One of the strengths of the book is its focus on the importance of understanding human behavior, cognition, and emotions in the development of AI systems. Shneiderman argues that AI systems must be designed with a deep understanding of human psychology and sociology to be truly effective and beneficial. He provides numerous examples of how human-centered AI has been successfully applied in various domains, such as healthcare, education, and transportation.

Another strength of the book is its emphasis on ethical and social implications of AI. Shneiderman recognizes the potential risks and challenges associated with AI, such as privacy, bias, and accountability, and provides practical guidelines for addressing these issues. He stresses the importance of transparency, explainability, and accountability in AI decision-making and highlights the need for ongoing research and development in these areas.

One potential criticism of the book is that it may be overly optimistic in its assessment of the potential of human-centered AI to address the challenges of the digital age. While Shneiderman acknowledges the complexity of the issues involved, he argues that a human-centered approach to AI can help to mitigate some of the negative consequences of AI, such as bias and job displacement. However, the implementation of such an approach may be more difficult and time-consuming than Shneiderman suggests, and may require significant changes in the way that AI systems are designed and implemented.

Overall, “Human-Centered AI” is a must-read for anyone interested in the future of AI and its impact on society. Shneiderman’s comprehensive framework provides a solid foundation for designing, developing, and deploying AI systems that prioritize human needs and values, and his practical guidelines and examples offer valuable insights for readers.

I would recommend this book to anyone who is interested in AI or wants to learn more about it. It suits both experts and novices, as well as anyone who cares about AI’s impact on society, and it works equally well as a reference or a workbook. The book is not only educational but also motivational, showing that HCAI is possible and desirable for everyone, and it is an excellent resource for researchers, practitioners, and policymakers.

The post Summary: Human-Centered AI by Ben Shneiderman appeared first on Paminy - Information Resource for Marketing, Lifestyle, and Book Review.


