
Achieving Individual — and Organizational — Value With AI




How Artificial Intelligence Of Things (AIoT) Will Transform Homes

Picture a world where your surroundings respond to your needs seamlessly. Your room wakes you up with gentle lighting adjustments; your bathroom mirror offers personalized skincare advice; your fridge restocks ingredients before you even realize they're running low; and your car senses and adapts to your needs.

It may sound like science fiction, but this vision is rapidly becoming a reality. This remarkable phenomenon is known as Artificial Intelligence of Things, or AIoT for short, and it is poised to redefine our lifestyles, workplaces, and the way we interact with our environment.

Demystifying AIoT

To comprehend how we can elevate everyday objects like rooms, fridges, and bathrooms into intelligent entities capable of extraordinary feats, let's establish a parallel with how humans react to their daily surroundings.

Just as humans perceive their environment through sensory devices, transmit signals to the brain, and process this information to make decisions, we can replicate this process in the digital realm to transform ordinary objects into intelligent entities.

Sensory Devices and Digital Perception: Eyes and Ears

We mere mortals rely on sensory perception through our eyes, ears, and skin to comprehend our environment.

So too does the digital world, relying on sensors to understand its surroundings. These sensors, which range from simple temperature gauges to advanced cameras and motion detectors, supply the essential raw data required for objects to grasp and engage with their surroundings.

Internet of Things: The Digital Nervous System

Once humans receive sensory input, our nervous system takes over. Signals are transmitted to the brain, and this network of nerves ensures that information flows seamlessly. In the world of smart objects, we need a digital equivalent to this nervous system to transmit the data collected by sensors.

This is where the Internet of Things (IoT) comes into play. Think of IoT as the digital nervous system connecting these sensors to the digital "brain" of our objects, allowing efficient and instantaneous data transmission. IoT is essentially a vast global network interconnecting countless objects, granting them the power to perceive, compute, execute tasks, and establish connections with the internet.

Within this network, information flows seamlessly between objects, data centers, and users, enabling diverse intelligent services.

Artificial Intelligence: The Digital Brain

Now, let's talk about the "brain" of these intelligent objects. In the human decision-making process, the brain doesn't just receive signals; it also processes this information, learns from it, and makes decisions. This cognitive ability is mirrored in the digital world by artificial intelligence (AI).

AI is not merely about collecting data; it's about analyzing it, identifying patterns, and making autonomous decisions based on that analysis. The digital brain allows our smart objects to go beyond simple perception and take meaningful actions in response to their environment.

By integrating AI into our smart objects, we endow them with the capacity to process the data received from sensors, just as our brain processes sensory input. AI can make sense of this data, recognize patterns, and make intelligent decisions, which is pivotal for objects to adapt, optimize, and respond to our needs.

AIoT: Connecting Digital Nervous System with Digital Brain

However, having these essential technologies in isolation is insufficient; we need a seamless integration strategy to unite them. Such an integrated system can be constructed using one of three approaches (a minimal sketch follows the list):

1. Cloud Computing: Sensor and device data can be transmitted to the cloud for processing and storage. The cloud serves as a virtual powerhouse accessible via the internet, providing flexible and scalable computing resources for various AIoT applications. The standout features of this setup include flexibility, scalability, and cost-effectiveness.

2. Fog Computing: When you want computing closer to the sensors, fog computing is a viable solution. Fog nodes, such as routers and gateways, provide storage and processing right at the network's edge. Fog computing is useful for tasks that require low latency and for maintaining service stability during internet interruptions. It also enhances data privacy.

3. Edge Computing: Edge computing happens right on devices near sensors and actuators. It's excellent for reducing latency and conserving network bandwidth. However, it can only handle lightweight models due to limited computational capacity.
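
As a rough illustration of how these tiers fit together, here is a minimal sketch in Python that routes each sensor reading either to a lightweight on-device ("edge") check or to a cloud service. The functions, thresholds, and device names are illustrative assumptions, not part of any particular AIoT platform.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    device_id: str
    kind: str              # e.g. "temperature", "motion"
    value: float
    latency_sensitive: bool

def detect_anomaly_on_device(reading: SensorReading) -> bool:
    # Hypothetical lightweight edge model: a fixed threshold per sensor kind.
    thresholds = {"temperature": 45.0, "motion": 0.8}
    return reading.value > thresholds.get(reading.kind, float("inf"))

def send_to_cloud(reading: SensorReading) -> None:
    # Placeholder for an upload to a cloud analytics service.
    print(f"[cloud] queued {reading.kind} from {reading.device_id}: {reading.value}")

def route(reading: SensorReading) -> None:
    if reading.latency_sensitive:
        # Edge/fog path: act immediately, with no round trip over the internet.
        if detect_anomaly_on_device(reading):
            print(f"[edge] alert on {reading.device_id}: {reading.kind}={reading.value}")
    else:
        # Cloud path: flexible, scalable processing and storage.
        send_to_cloud(reading)

route(SensorReading("thermostat-1", "temperature", 48.2, latency_sensitive=True))
route(SensorReading("fridge-1", "temperature", 4.1, latency_sensitive=False))
```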

Applications of AIoT

AIoT is reshaping industries in remarkable ways. Here are some key applications:

1. Healthcare and Remote Monitoring: In the healthcare sector, wearable devices equipped with AIoT technology can monitor patients' vital signs and send real-time data to healthcare providers. This enables early detection of health issues and allows for timely interventions.

2. Predictive Maintenance in Manufacturing: AIoT predicts when machinery and equipment need maintenance. Sensors on machines collect data, and AI analyzes it to determine when parts are likely to fail, reducing downtime and maintenance costs (see the sketch after this list).

3. Precision Agriculture: Farmers use AIoT for precision agriculture. Sensors in the field collect data on soil conditions, weather, and crop health. AI analyzes this data to optimize irrigation, fertilization, and pest control, increasing crop yields and conserving resources.

4. Smart Home Automation: AIoT can make your home smarter by allowing devices like thermostats, lights, and security cameras to learn your preferences and adjust settings accordingly. For instance, your thermostat can optimize temperature based on your daily routine, while your security system can recognize familiar faces and alert you to potential intruders.

5. Smart Transportation: AIoT is transforming transportation with autonomous vehicles and intelligent traffic management applications. Self-driving cars use AIoT to process data from sensors and cameras, while traffic signals adjust based on real-time traffic data to ease congestion.
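
To make the predictive-maintenance example (item 2) concrete, here is a minimal sketch that assumes scikit-learn is installed and uses synthetic vibration and temperature readings in place of real machine telemetry; the features and thresholds are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic telemetry for a healthy machine: [vibration_rms, bearing_temp_C].
healthy = np.column_stack([
    rng.normal(1.0, 0.1, 500),   # vibration
    rng.normal(60.0, 2.0, 500),  # temperature
])

# Train an anomaly detector on "normal" operation only.
detector = IsolationForest(contamination=0.01, random_state=0).fit(healthy)

# New readings: one normal, one drifting toward failure.
new_readings = np.array([[1.02, 61.0], [1.9, 78.0]])
labels = detector.predict(new_readings)   # +1 = normal, -1 = anomalous

for reading, label in zip(new_readings, labels):
    status = "schedule maintenance" if label == -1 else "ok"
    print(reading, "->", status)
```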

Challenges of AIoT

While AIoT has vast potential to transform ordinary objects into smart entities, it is not without challenges. Some of the key challenges are outlined below:

1. Data Privacy and Security: With the proliferation of connected devices collecting massive amounts of data, data privacy and security are paramount concerns. Ensuring that sensitive information remains confidential and safeguarded against cyber threats is an ongoing challenge.

2. Data Quality and Reliability: AIoT's effectiveness hinges on the quality and reliability of the data it relies on. Inconsistent or inaccurate data can lead to flawed decisions and unreliable outcomes. Maintaining data integrity through data cleansing, validation, and redundancy checks is crucial to ensure the trustworthiness of AIoT systems.

3. Interoperability: The diverse range of IoT devices and platforms poses a significant interoperability challenge. Ensuring that devices from different manufacturers can seamlessly communicate and work together is essential for a cohesive AIoT ecosystem. Standardization efforts are underway, but achieving universal compatibility remains an ongoing endeavor.

4. Scalability: As AIoT deployments grow, scalability becomes a concern. Adapting AI models and infrastructure to accommodate a growing number of connected devices and data streams requires careful planning and investment in scalable architectures.

The Bottom Line

AIoT, a fusion of AI and IoT, promises a future where our surroundings adapt seamlessly to our needs. It combines sensory devices, a digital nervous system (IoT), and a digital brain (AI) to create a connected ecosystem.

This integration, whether achieved through cloud, fog, or edge computing, is already reshaping healthcare, manufacturing, agriculture, homes, and transportation.

Demystifying Artificial Intelligence

WRITTEN BY: David Schatsky, Craig Muraskin, &  Ragu Gurumurthy

In the last several years, interest in artificial intelligence (AI) has surged. Venture capital investments in companies developing and commercializing AI-related products and technology have exceeded $2 billion since 2011.1 Technology companies have invested billions more acquiring AI startups. Press coverage of the topic has been breathless, fueled by the huge investments and by pundits asserting that computers are starting to kill jobs, will soon be smarter than people, and could threaten the survival of humankind. Consider the following:

  • IBM has committed $1 billion to commercializing Watson, its cognitive computing platform.2
  • Google has made major investments in AI in recent years, including acquiring eight robotics companies and a machine-learning company.3
  • Facebook hired AI luminary Yann LeCun to create an AI laboratory with the goal of bringing major advances in the field.4
  • Researchers at the University of Oxford published a study estimating that 47 percent of total US employment is "at risk" due to the automation of cognitive tasks.5
  • The New York Times bestseller The Second Machine Age argued that digital technologies and AI are poised to bring enormous positive change, but also risk significant negative consequences as well, including mass unemployment.6
  • Silicon Valley entrepreneur Elon Musk is investing in AI "to keep an eye" on it.7 He has said it is potentially "more dangerous than nukes."8
  • Renowned theoretical physicist Stephen Hawking said that success in creating true AI could mean the end of human history, "unless we learn how to avoid the risks."9

    Amid all the hype, there is significant commercial activity underway in the area of AI that is affecting or will likely soon affect organizations in every sector. Business leaders should understand what AI really is and where it is heading.

    ARTIFICIAL INTELLIGENCE AND COGNITIVE TECHNOLOGIES

    The first steps in demystifying AI are defining the term, outlining its history, and describing some of the core technologies underlying it.

    Defining artificial intelligence 10

    The field of AI suffers from both too few and too many definitions. Nils Nilsson, one of the founding researchers in the field, has written that AI "may lack an agreed-upon definition. . . ."11 A well-respected AI textbook, now in its third edition, offers eight definitions, and declines to prefer one over the other.12 For us, a useful definition of AI is the theory and development of computer systems able to perform tasks that normally require human intelligence. Examples include tasks such as visual perception, speech recognition, decision making under uncertainty, learning, and translation between languages.13 Defining AI in terms of the tasks humans do, rather than how humans think, allows us to discuss its practical applications today, well before science arrives at a definitive understanding of the neurological mechanisms of intelligence.14 It is worth noting that the set of tasks that normally require human intelligence is subject to change as computer systems able to perform those tasks are invented and then widely diffused. Thus, the meaning of "AI" evolves over time, a phenomenon known as the "AI effect," concisely stated as "AI is whatever hasn't been done yet."15

    A useful definition of artificial intelligence is the theory and development of computer systems able to perform tasks that normally require human intelligence.

    The history of artificial intelligence

    AI is not a new idea. Indeed, the term itself dates from the 1950s. The history of the field is marked by "periods of hype and high expectations alternating with periods of setback and disappointment," as a recent apt summation puts it.16 After articulating the bold goal of simulating human intelligence in the 1950s, researchers developed a range of demonstration programs through the 1960s and into the '70s that showed computers able to accomplish a number of tasks once thought to be solely the domain of human endeavor, such as proving theorems, solving calculus problems, responding to commands by planning and performing physical actions—even impersonating a psychotherapist and composing music. But simplistic algorithms, poor methods for handling uncertainty (a surprisingly ubiquitous fact of life), and limitations on computing power stymied attempts to tackle harder or more diverse problems. Amid disappointment with a lack of continued progress, AI fell out of fashion by the mid-1970s.

    In the early 1980s, Japan launched a program to develop an advanced computer architecture that could advance the field of AI. Western anxiety about losing ground to Japan contributed to decisions to invest anew in AI. The 1980s saw the launch of commercial vendors of AI technology products, some of which had initial public offerings, such as Intellicorp, Symbolics,17 and Teknowledge.18 By the end of the 1980s, perhaps half of the Fortune 500 were developing or maintaining "expert systems," an AI technology that models human expertise with a knowledge base of facts and rules.19 High hopes for the potential of expert systems were eventually tempered as their limitations, including a glaring lack of common sense, the difficulty of capturing experts' tacit knowledge, and the cost and complexity of building and maintaining large systems, became widely recognized. AI ran out of steam again.

    In the 1990s, technical work on AI continued with a lower profile. Techniques such as neural networks and genetic algorithms received fresh attention, in part because they avoided some of the limitations of expert systems and partly because new algorithms made them more effective. The design of neural networks is inspired by the structure of the brain. Genetic algorithms aim to "evolve" solutions to problems by iteratively generating candidate solutions, culling the weakest, and introducing new solution variants by introducing random mutations.
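
    As a toy illustration of the loop described above (generate candidate solutions, cull the weakest, and mutate the survivors), here is a minimal genetic algorithm in Python; the objective function and parameters are arbitrary choices for this sketch.

```python
import random

random.seed(1)

def fitness(x: float) -> float:
    # Toy objective with a single peak at x = 7.3.
    return -(x - 7.3) ** 2

# Start with random candidate solutions.
population = [random.uniform(0, 10) for _ in range(20)]

for generation in range(50):
    # Cull the weakest: keep the top half by fitness.
    survivors = sorted(population, key=fitness, reverse=True)[:10]
    # Introduce new variants via random mutation of the survivors.
    children = [x + random.gauss(0, 0.3) for x in survivors]
    population = survivors + children

best = max(population, key=fitness)
print(f"best candidate after 50 generations: {best:.2f}")
```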

    Catalysts of progress

    By the late 2000s, a number of factors helped renew progress in AI, particularly in a few key technologies. We explain the factors most responsible for the recent progress below and then describe those technologies in more detail.

    Moore's Law. The relentless increase in computing power available at a given price and size, sometimes known as Moore's Law after Intel cofounder Gordon Moore, has benefited all forms of computing, including the types AI researchers use. Advanced system designs that might have worked in principle were in practice off limits just a few years ago because they required computer power that was cost-prohibitive or just didn't exist. Today, the power necessary to implement these designs is readily available. A dramatic illustration: The current generation of microprocessors delivers 4 million times the performance of the first single-chip microprocessor introduced in 1971.20

    Big data. Thanks in part to the Internet, social media, mobile devices, and low-cost sensors, the volume of data in the world is increasing rapidly.21 Growing understanding of the potential value of this data22 has led to the development of new techniques for managing and analyzing very large data sets.23 Big data has been a boon to the development of AI. The reason is that some AI techniques use statistical models for reasoning probabilistically about data such as images, text, or speech. These models can be improved, or "trained," by exposing them to large sets of data, which are now more readily available than ever.24

    The Internet and the cloud. Closely related to the big data phenomenon, the Internet and cloud computing can be credited with advances in AI for two reasons. First, they make available vast amounts of data and information to any Internet-connected computing device. This has helped propel work on AI approaches that require large data sets.25 Second, they have provided a way for humans to collaborate—sometimes explicitly and at other times implicitly—in helping to train AI systems. For example, some researchers have used cloud-based crowdsourcing services like Mechanical Turk to enlist thousands of humans to describe digital images, enabling image classification algorithms to learn from these descriptions.26 Google's language translation project analyzes feedback and freely offered contributions from its users to improve the quality of automated translation.27

    New algorithms. An algorithm is a routine process for solving a problem or performing a task. In recent years, new algorithms have been developed that dramatically improve the performance of machine learning, an important technology in its own right and an enabler of other technologies such as computer vision.28 (These technologies are described below.) The fact that machine learning algorithms are now available on an open-source basis is likely to foster further improvements as developers contribute enhancements to each other's work.29

    Cognitive technologies

    We distinguish between the field of AI and the technologies that emanate from the field. The popular press portrays AI as the advent of computers as smart as—or smarter than—humans. The individual technologies, by contrast, are getting better at performing specific tasks that only humans used to be able to do. We call these cognitive technologies (figure 1), and it is these that business and public sector leaders should focus their attention on. Below we describe some of the most important cognitive technologies—those that are seeing wide adoption, making rapid progress, or receiving significant investment.

    Computer vision refers to the ability of computers to identify objects, scenes, and activities in images. Computer vision technology uses sequences of imaging-processing operations and other techniques to decompose the task of analyzing images into manageable pieces. There are techniques for detecting the edges and textures of objects in an image, for instance. Classification techniques may be used to determine if the features identified in an image are likely to represent a kind of object already known to the system.30
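
    As a small illustration of the edge-detection step mentioned above, the sketch below builds a synthetic image containing a bright square and extracts its edges; it assumes the opencv-python package is installed, and a real pipeline would go on to feed such features into a classifier.

```python
import numpy as np
import cv2  # requires the opencv-python package

# Synthetic grayscale image: a bright square on a dark background.
image = np.zeros((100, 100), dtype=np.uint8)
image[30:70, 30:70] = 255

# Edge detection is one of the low-level operations computer vision
# pipelines use to decompose an image into manageable pieces.
edges = cv2.Canny(image, 50, 150)  # lower and upper hysteresis thresholds

print("edge pixels found:", int(np.count_nonzero(edges)))
```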

    Computer vision has diverse applications, including analyzing medical imaging to improve prediction, diagnosis, and treatment of diseases;31 face recognition, used by Facebook to automatically identify people in photographs32 and in security and surveillance to spot suspects;33 and in shopping—consumers can now use smartphones to photograph products and be presented with options for purchasing them.34

    Cognitive technologies are products of the field of artificial intelligence. They are able to perform tasks that only humans used to be able to do.

    Machine vision, a related discipline, generally refers to vision applications in industrial automation, where computers recognize objects such as manufactured parts in a highly constrained factory environment—rather simpler than the goals of computer vision, which seeks to operate in unconstrained environments. While computer vision is an area of ongoing computer science research, machine vision is a "solved problem"—the subject not of research but of systems engineering.35 Because the range of applications for computer vision is expanding, startup companies working in this area have attracted hundreds of millions of dollars in venture capital investment since 2011.36

    Machine learning refers to the ability of computer systems to improve their performance by exposure to data without the need to follow explicitly programmed instructions. At its core, machine learning is the process of automatically discovering patterns in data. Once discovered, the pattern can be used to make predictions. For instance, presented with a database of information about credit card transactions, such as date, time, merchant, merchant location, price, and whether the transaction was legitimate or fraudulent, a machine learning system learns patterns that are predictive of fraud. The more transaction data it processes, the better its predictions are expected to become.
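
    A minimal sketch of the fraud example, assuming scikit-learn and using synthetic transactions in place of a real payment database; the features and the toy labeling rule are fabricated for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000

# Synthetic transactions: [hour_of_day, amount, merchant_risk_score]
X = np.column_stack([
    rng.integers(0, 24, n),
    rng.exponential(50.0, n),
    rng.random(n),
])
# Toy ground truth: late-night, high-amount, risky-merchant purchases
# are more likely to be labeled fraudulent.
fraud_score = (X[:, 0] >= 22) * 1.0 + (X[:, 1] > 150) * 1.0 + (X[:, 2] > 0.8) * 1.0
y = (fraud_score >= 2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# The more labeled transactions the model sees, the better its predictions tend to get.
print("held-out accuracy:", round(model.score(X_test, y_test), 3))
```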

    Applications of machine learning are very broad, with the potential to improve performance in nearly any activity that generates large amounts of data. Besides fraud screening, these include sales forecasting, inventory management, oil and gas exploration, and public health. Machine learning techniques often play a role in other cognitive technologies such as computer vision, which can train vision models on a large database of images to improve their ability to recognize classes of objects.37 Machine learning is one of the hottest areas in cognitive technologies today, having attracted around a billion dollars in venture capital investment between 2011 and mid-2014.38 Google is said to have invested some $400 million to acquire DeepMind, a machine learning company, in 2014.39

    Natural language processing refers to the ability of computers to work with text the way humans do, for instance, extracting meaning from text or even generating text that is readable, stylistically natural, and grammatically correct. A Natural Language Processing system doesn't understand text the way humans do, but it can manipulate text in sophisticated ways, such as automatically identifying all of the people and places mentioned in a document; identifying the main topic of a document; or extracting and tabulating the terms and conditions in a stack of human-readable contracts. None of these tasks is possible with traditional text processing software that operates on simple text matches and patterns. Consider a single hackneyed example that illustrates one of the challenges of natural language processing. The meaning of each word in the sentence "Time flies like an arrow" seems clear, until you encounter the sentence "Fruit flies like a banana." Substituting "fruit" for "time" and "banana" for "arrow" changes the meaning of the words "flies" and "like."40

    Natural language processing, like computer vision, comprises multiple techniques that may be used together to achieve its goals. Language models are used to predict the probability distribution of language expressions—the likelihood that a given string of characters or words is a valid part of a language, for instance. Feature selection may be used to identify the elements of a piece of text that may distinguish one kind of text from another—say a spam email versus a legitimate one. Classification techniques can then use those features to assign a new piece of text to one category or the other, flagging an incoming email as spam or legitimate, for instance.
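
    Here is a minimal sketch of that spam-filtering pipeline, assuming scikit-learn; the tiny hand-written corpus stands in for real labeled email.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus (1 = spam, 0 = legitimate).
emails = [
    "win a free prize now, click here",
    "limited offer, claim your free reward",
    "meeting moved to 3pm, see agenda attached",
    "please review the attached contract draft",
]
labels = [1, 1, 0, 0]

# CountVectorizer turns text into word-count features (a simple language
# representation); the classifier learns which features distinguish spam.
model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(emails, labels)

print(model.predict(["claim your free prize today"]))      # likely spam
print(model.predict(["agenda for tomorrow's meeting"]))    # likely legitimate
```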

    Because context is so important for understanding why "time flies" and "fruit flies" are so different, practical applications of natural language processing often address relatively narrow domains such as analyzing customer feedback about a particular product or service,42 automating discovery in civil litigation or government investigations (e-discovery),43 and automating the writing of formulaic stories on topics such as corporate earnings or sports.44

    Robotics, by integrating cognitive technologies such as computer vision and automated planning with tiny, high-performance sensors, actuators, and cleverly designed hardware, has given rise to a new generation of robots that can work alongside people and flexibly perform many different tasks in unpredictable environments.45 Examples include unmanned aerial vehicles,46 "cobots" that share jobs with humans on the factory floor,47 robotic vacuum cleaners,48 and a slew of consumer products, from toys to home helpers.49

    Speech recognition focuses on automatically and accurately transcribing human speech. The technology has to contend with some of the same challenges as natural language processing, as well as difficulties of its own: coping with diverse accents and background noise, distinguishing between homophones ("buy" and "by" sound the same), and working at the speed of natural speech. Speech recognition systems use some of the same techniques as natural language processing systems, plus others such as acoustic models that describe sounds and their probability of occurring in a given sequence in a given language.50 Applications include medical dictation, hands-free writing, voice control of computer systems, and telephone customer service applications. Domino's Pizza, for instance, recently introduced a mobile app that allows customers to use natural speech to place an order.51

    As noted, the cognitive technologies above are making rapid progress and attracting significant investment. Other cognitive technologies are relatively mature and can still be important components of enterprise software systems. These more mature cognitive technologies include optimization, which automates complex decisions and trade-offs about limited resources;52 planning and scheduling, which entails devising a sequence of actions to meet goals and observe constraints;53 and rules-based systems, the technology underlying expert systems, which use databases of knowledge and rules to automate the process of making inferences about information.54
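
    To illustrate the rules-based idea in the simplest possible terms, here is a toy forward-chaining engine in plain Python; the facts and rules are invented for illustration and are far simpler than a production expert system.

```python
# Each rule: if all premises are known facts, add the conclusion.
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "high_risk_patient"}, "recommend_clinic_visit"),
]

def forward_chain(facts: set[str]) -> set[str]:
    derived = set(facts)
    changed = True
    while changed:                      # keep applying rules until nothing new is inferred
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain({"fever", "cough", "high_risk_patient"}))
```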

    COGNITIVE TECHNOLOGIES ARE ALREADY IN WIDE USE

    Organizations in every sector of the economy are already using cognitive technologies in diverse business functions.

    In banking, institutions use machine learning in automated fraud detection systems to identify behavior patterns that could indicate fraudulent payment activity, speech recognition technology to automate customer service telephone interactions, and voice recognition technology to verify the identity of callers.55

    In health care, automatic speech recognition for transcribing notes dictated by physicians is used in around half of US hospitals, and its use is growing rapidly.56 Computer vision systems automate the analysis of mammograms and other medical images.57 IBM's Watson uses natural language processing to read and understand a vast medical literature, hypothesis generation techniques to automate diagnosis, and machine learning to improve its accuracy.58

    In life sciences, machine learning systems are being used to predict cause-and-effect relationships from biological data59 and the activities of compounds,60 helping pharmaceutical companies identify promising drugs.61

    In media and entertainment, a number of companies are using data analytics and natural language generation technology to automatically draft articles and other narrative material about data-focused topics such as corporate earnings or sports game summaries.62

    Oil and gas producers use machine learning in a wide range of applications, from locating mineral deposits63 to diagnosing mechanical problems with drilling equipment.64

    The public sector is adopting cognitive technologies for a variety of purposes including surveillance, compliance and fraud detection, and automation. The state of Georgia, for instance, employs a system combining automated handwriting recognition with crowdsourced human assistance to digitize financial disclosure and campaign contribution forms.65

    Retailers use machine learning to automatically discover attractive cross-sell offers and effective promotions.66

    Technology companies are using cognitive technologies such as computer vision and machine learning to enhance products or create entirely new product categories, such as the Roomba robotic vacuum cleaner67 or the Nest intelligent thermostat.68

    As the examples above show, the potential business benefits of cognitive technologies are much broader than cost savings that may be implied by the term "automation." They include:

  • Faster actions and decisions (for example, automated fraud detection, planning and scheduling)
  • Better outcomes (for example, medical diagnosis, oil exploration, demand forecasting)
  • Greater efficiency (that is, better use of high-skilled people or expensive equipment)
  • Lower costs (for example, reducing labor costs with automated telephone customer service)
  • Greater scale (that is, performing large-scale tasks impractical to perform manually)
  • Product and service innovation (from adding new features to creating entirely new products)

    WHY THE IMPACT OF COGNITIVE TECHNOLOGIES IS GROWING

    The impact of cognitive technologies on business should grow significantly over the next five years. This is due to two factors. First, the performance of these technologies has improved substantially in recent years, and we can expect continuing R&D efforts to extend this progress. Second, billions of dollars have been invested to commercialize these technologies. Many companies are working to tailor and package cognitive technologies for a range of sectors and business functions, making them easier to buy and easier to deploy. While not all of these vendors will thrive, their activities should collectively drive the market forward. Together, improvements in performance and commercialization are expanding the range of applications for cognitive technologies and will likely continue to do so over the next several years (figure 2).

    Improving performance expands applications

    Examples of the strides made by cognitive technologies are easy to find. The accuracy of Google's voice recognition technology, for instance, improved from 84 percent in 2012 to 98 percent less than two years later, according to one assessment.69 Computer vision has progressed rapidly as well. A standard benchmark used by computer vision researchers has shown a fourfold improvement in image classification accuracy from 2010 to 2014.70 Facebook reported in a peer-reviewed paper that its DeepFace technology can now recognize faces with 97 percent accuracy.71 IBM was able to double the precision of Watson's answers in the few years leading up to its famous Jeopardy! victory in 2011.72 The company now reports its technology is 2,400 percent "smarter" today than on the day of that triumph.73

    Many companies are working to tailor and package cognitive technologies for a range of sectors and business functions, making them easier to buy and easier to deploy.

    As performance improves, the applicability of a technology broadens. For instance, when voice recognition systems required painstaking training and could only work well with controlled vocabularies, they found application in specialized areas such as medical dictation but did not gain wide adoption. Today, tens of millions of Web searches are performed by voice every month.74 Computer vision systems used to be confined to industrial automation applications but now, as we've seen, are used in surveillance, security, and numerous consumer applications. IBM is now seeking to apply Watson to a broad range of domains outside of game-playing, from medical diagnostics to research to financial advice to call center automation.75

    Not all cognitive technologies are seeing such rapid improvement. Machine translation has progressed, but at a slower pace. One benchmark found a 13 percent improvement in the accuracy of Arabic to English translations between 2009 and 2012, for instance.76 Even if these technologies are imperfect, they can be good enough to have a big impact on the work organizations do. Professional translators regularly rely on machine translation, for instance, to improve their efficiency, automating routine translation tasks so they can focus on the challenging ones.77

    Major investments in commercialization

    From 2011 through May 2014, over $2 billion in venture capital flowed to companies building products and services based on cognitive technologies.78 During this same period, over 100 companies merged or were acquired, some by technology giants such as Amazon, Apple, IBM, Facebook, and Google.79 All of this investment has nurtured a diverse landscape of companies that are commercializing cognitive technologies.

    This is not the place for providing a detailed analysis of the vendor landscape. Rather, we want to illustrate the diversity of offerings, since this is an indicator of dynamism that may help propel and develop the market. The following list of cognitive technology vendor categories, while neither exhaustive nor mutually exclusive, gives a sense of this.

    Data management and analytical tools that employ cognitive technologies such as natural language processing and machine learning. These tools use natural language processing technology to help extract insights from unstructured text or machine learning to help analysts uncover insights from large datasets. Examples in this category include Context Relevant, Palantir Technologies, and Skytree.

    Cognitive technology components that can be embedded into applications or business processes to add features or improve effectiveness. Wise.io, for instance, offers a set of modules that aim to improve processes such as customer support, marketing, and sales with machine-learning models that predict which customers are most likely to churn or which sales leads are most likely to convert to customers.80 Nuance provides speech recognition technology that developers can use to speech-enable mobile applications.81

    Point solutions. A sign of the maturation of some cognitive technologies is that they are increasingly embedded in solutions to specific business problems. These solutions are designed to work better than solutions in their existing categories and require little expertise in cognitive technologies. Popular application areas include advertising,82 marketing and sales automation,83 and forecasting and planning.84

    Platforms. Platforms are intended to provide a foundation for building highly customized business solutions. They may offer a suite of capabilities including data management, tools for machine learning, natural language processing, knowledge representation and reasoning, and a framework for integrating these pieces with custom software. Some of the vendors mentioned above can serve as platforms of sorts. IBM is offering Watson as a cloud-based platform.85

    Emerging applications

    If current trends in performance and commercialization continue, we can expect the applications of cognitive technologies to broaden and adoption to grow. The billions of investment dollars that have flowed to hundreds of companies building products based on machine learning, natural language processing, computer vision, or robotics suggest that many new applications are on their way to market. We also see ample opportunity for organizations to take advantage of cognitive technologies to automate business processes and enhance their products and services.86

    HOW CAN YOUR ORGANIZATION APPLY COGNITIVE TECHNOLOGIES?

    Cognitive technologies will likely become pervasive in the years ahead. Technological progress and commercialization should expand the impact of cognitive technologies on organizations over the next three to five years and beyond. A growing number of organizations will likely find compelling uses for these technologies; leading organizations may find innovative applications that dramatically improve their performance or create new capabilities, enhancing their competitive position. IT organizations can start today, developing awareness of these technologies, evaluating opportunities to pilot them, and presenting leaders in their organizations with options for creating value with them. Senior business and public sector leaders should reflect on how cognitive technologies will affect their sector and their own organization and how these technologies can foster innovation and improve operating performance.

    Read more on cognitive technologies in "Cognitive technologies: The real opportunities for business" on Deloitte University Press.

    Deloitte Consulting LLP's Enterprise Science offering employs data science, cognitive technologies such as machine learning, and advanced algorithms to create high-value solutions for clients. Services include cognitive automation, which uses cognitive technologies such as natural language processing to automate knowledge-intensive processes; cognitive engagement, which applies machine learning and advanced analytics to make customer interactions dramatically more personalized, relevant, and profitable; and cognitive insight, which employs data science and machine learning to detect critical patterns, make high-quality predictions, and support business performance. For more information about the Enterprise Science offering, contact Plamen Petrov ([email protected]) or Rajeev Ronanki ([email protected]).

    ABOUT THE AUTHORS

    David Schatsky

    David Schatsky is a senior manager at Deloitte LLP. He tracks and analyzes emerging technology and business trends, including the growing impact of cognitive technologies, for the firm's leaders and its clients.

    Craig Muraskin

    Craig Muraskin is managing director of the innovation group in Deloitte LLP. He works with leadership to set the group's agenda and overall innovation strategy, and counsels Deloitte's businesses on their innovation efforts.

    Ragu Gurumurthy

    Ragu Gurumurthy is national managing principal of the innovation group in Deloitte LLP, guiding overall innovation efforts across all Deloitte's business units. He advises clients in the technology and telecommunications sectors on a wide range of topics including innovation, growth, and new business models.

    Originally published by Deloitte University Press on dupress.com. Copyright 2015 Deloitte Development LLC.


    Introduction To Applications Of Artificial Intelligence In Medicine With Dr. Ryan Godwin And Dr. Sandeep Bodduluri

    Ryan Godwin, Ph.D., instructor in the Department of Anesthesiology and Perioperative Medicine and the Department of Radiology, and Sandeep Bodduluri, Ph.D., assistant professor in the Department of Medicine and Instructor & Advisor for AI Programming for the Marnix E. Heersink Institute for Biomedical Innovation, have partnered to instruct the institute's new graduate certificate course, Applications of Artificial Intelligence in Medicine.

    The institute's AI in Medicine Graduate Certificate provides current and future health care leaders with important foundations in understanding and applying artificial intelligence (AI), as well as the safety, security, and ethics of using AI to improve the health and lives of patients. The Applications of Artificial Intelligence in Medicine course introduces students to the applications of AI in medicine through machine learning, deep learning, and natural language processing.

    The Heersink communications team met with Dr. Godwin and Dr. Bodduluri to discuss the ethics of AI and what students can expect from this new program at UAB.

    Q: How did you become interested in AI?

    Godwin: As a part of the last generation to grow up without regular access to computers, I was at a transformative age when my parents brought home our first PC. Learning computational skills early on ultimately supported delving into analytical problem solving, and, as I was actively involved in algorithm development while working on my Ph.D. in physics during the initial AI wave, it was a natural progression to begin exploring applications of AI in my work.

    Bodduluri: I got my first introduction to medical AI systems during graduate school at the University of Iowa Biomedical Imaging Laboratories. During our graduate studies, we focused on developing advanced image processing and machine learning workflows for lung disease diagnosis and prognosis. This led me to pursue AI and deep neural networks as a specialization and a career path.

    Q: How did you get connected with UAB?

    Godwin: I heard about some of the incredible work in AI being done at UAB from my good friend and colleague, Dr. Ryan Melvin. We worked in the same lab during our Ph.D., and it was inspiring to reunite in the Department of Anesthesiology and Perioperative Medicine to continue our fruitful scientific collaboration as part of the Perioperative Data Science team.

    Bodduluri: I joined UAB Pulmonary Division in 2016 as a postdoctoral fellow in Dr. Surya Bhatt's newly established UAB Lung Imaging Lab. Dr. Bhatt was also my graduate advisor at the University of Iowa before he joined UAB Pulmonary. Dr. Bhatt introduced me to UAB and provided an opportunity to conduct advanced AI research to diagnose pulmonary disease, specifically chronic obstructive pulmonary disease (COPD).

    Q: Why did you choose to teach the Applications of Artificial Intelligence in Medicine course?

    Godwin: The topic of applications of AI in Medicine is something I am very passionate about, and given my experience as an educator and success in building AI models for health care applications, I jumped at the opportunity to teach the course. It is rewarding to help students improve their AI literacy and enable them to better evaluate and interpret the many AI models available, and even learn to build their own.

    Bodduluri: I have always been interested in medical image processing and acquired expertise in developing medical AI systems through my educational background in biomedical engineering, medical imaging, and software development. I chose to teach Applications of AI in Medicine because the course was designed to introduce several advanced AI concepts toward practical medical applications. This is very close to my current research work as a faculty member in the lung imaging lab.

    Q: In your opinion, what is the greatest opportunity for AI application in Medicine?

    Godwin: The most significant opportunity ultimately rests in AI's ability to transition care from a population-based approach to a more patient-specific approach based on the comprehensive, detailed patient information provided in modern health care settings. In other words, the most significant opportunity for AI in medicine is borne from its ability to synthesize and interpret immense amounts of complex data quickly. This will help transition care decisions from what is best for an individual based on populations of similar patients (i.e., demographics) to what is best based on precise physiological measurements and an in-depth understanding of what is happening to the patient given their entire physiological makeup (e.g., multi-omics).

    Bodduluri: There are several critical applications that we could develop to advance our understanding of disease pathology and save lives. It is challenging to limit it to one single greatest opportunity, in my view. For instance, AI-based diagnostics has seen great advances in recent times and is at a point where such systems can accurately diagnose and predict disease progression across different subspecialties. With the latest advances in large language models, there will be significant progress and opportunity in applying these systems toward safer and broader clinical benefit.

    Q: Would you please explain the three core elements of this course: machine learning, deep learning, and natural language processing?

    Machine Learning- Applications of AI to EHR data

    Bodduluri: This implies the application of neural networks toward a greater understanding of clinical electronic health record (EHR) data. This would allow us to develop disease screening tools to identify patients early and administer precise and personalized therapeutic strategies.

    Godwin: With machine learning, computers can be trained to analyze EHR data, learn from it, and make predictions or decisions. In the context of healthcare, this could mean predicting which patients are at risk of developing certain diseases based on their health history, helping doctors diagnose diseases more accurately, suggesting personalized treatment plans, or even identifying trends in larger populations that can help with public health planning. In essence, machine learning in the context of EHR data is about harnessing the power of computers to improve health care by learning from large amounts of data that would be too complex and time-consuming for humans to analyze independently.
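
    A hedged sketch of the kind of EHR-based risk model described above, assuming scikit-learn and wholly synthetic patient records; the feature names and the toy risk rule are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 1000

# Synthetic EHR-style features: [age, BMI, smoker (0/1), systolic BP]
X = np.column_stack([
    rng.integers(20, 85, n),
    rng.normal(27, 5, n),
    rng.integers(0, 2, n),
    rng.normal(125, 15, n),
])
# Toy outcome: older, higher-BP smokers are labeled "at risk" more often.
risk = 0.03 * X[:, 0] + 0.02 * X[:, 3] + 1.0 * X[:, 2] - 6.0
y = (risk + rng.normal(0, 1, n) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Estimate risk for a new (hypothetical) patient record.
new_patient = np.array([[68, 31.0, 1, 150]])
print("estimated risk:", round(model.predict_proba(new_patient)[0, 1], 2))
```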

    Deep Learning- Applications of AI to Medical Imaging data

    Bodduluri: Images are everywhere around us, and they are the largest data medium we interact with every day. In this context, there has been a significant advancement in the amount of information we can get from one single photograph. Just as natural images contain information about our surroundings, medical images contain critical information about specific human anatomy and how it changes with disease. The application of advanced AI systems allows us to infer this critical information in greater detail and with higher accuracy, leading to a better understanding of underlying disease processes.

    Godwin: Deep learning is typically a supervised learning technique that uses neural networks so the computer algorithm can automatically identify which parts of an image are essential for differentiating images (i.e., classification) or pixels (i.e., segmentation). These neural networks have many layers, and each layer learns about more complex features in an image, allowing the network to find patterns that might not be obvious to humans. AI has many applications in medical imaging. For example, it can help doctors provide diagnoses, which is particularly useful for incidental findings, and automate time-consuming tasks like segmentation, which can streamline treatment planning for radiotherapy.
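
    As a minimal sketch of the layered feature learning Dr. Godwin describes, the model below (assuming PyTorch is installed) stacks two convolutional layers and runs a single forward pass on a fake single-channel scan; real medical-imaging models are far larger and are trained on labeled studies.

```python
import torch
from torch import nn

# Two convolutional layers: early layers pick up simple features (edges),
# later layers combine them into more complex patterns.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 2),   # two classes, e.g. "finding" vs. "no finding"
)

fake_scan = torch.randn(1, 1, 64, 64)   # batch of one synthetic 64x64 image
logits = model(fake_scan)
print("class scores:", logits.detach().numpy().round(2))
```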

    Natural Language Processing- Applications of AI to Clinical Documentation

    Godwin: Natural Language Processing, or NLP, is the sub-specialty of AI focused on building models that can interface with natural human language. This includes teaching AI models how to extract meaning from text and even how to create text on their own. The latter (creating text) is often referred to as generative AI because the models make new, never-before-seen text. There has been a recent leap in the capabilities of generative AI with large language models (e.g., ChatGPT), and this leap is accelerating the changes brought about by AI, particularly in health care.

    Q: What can students expect to learn from the course?

    Bodduluri: This course is designed to educate learners about artificial intelligence and machine learning applications in medicine and will cover the basic concepts required to understand, implement, and assess AI applications in medicine. 

    Godwin: Students can expect to learn about different application types (e.g., generative AI in medicine, AI applied to medical images, and AI applied to physiological waveforms), the critical factors to consider when evaluating and building models, and some of the ways these technologies are already reshaping clinical research and work. The instruction incorporates games, like "2 Human 1 AI," where students try to guess which text was human-generated, to promote engagement and hopefully have a little fun in the process.

    Learn more about this program. Applications are due by Dec. 1 for the Spring 2024 semester.







