
5 Things CEOs Need To Know About ChatGPT And Generative AI

If you’ve attended any industry conferences this year, you’re likely aware that ChatGPT, Generative AI, and artificial intelligence in general have been dominating the discussions. However, a significant portion of the content is preachy and short on substance, with statements like “AI will disrupt industries” or “AI is a game changer.”

CEOs and other senior executives are seeking more specific perspectives on the actual impact of these emerging technologies and actionable steps to navigate them effectively.

Here are five essential insights that CEOs should know about ChatGPT and Generative AI:

1) Generative AI Does Not Primarily Aim for Cost Reduction

The initial priority in deploying Generative AI tools and technologies should revolve around enhancing productivity, particularly by expediting processes.

Estimates regarding workforce reductions vary based on job roles and positions, ranging from 20% to as high as 80%. However, instances of companies completely or nearly replacing their employees with Generative AI are rare and have yielded less than stellar results.

The true impact of Generative AI on businesses doesn’t lie in replacing staff but rather in accelerating human productivity and fostering creativity. Charles Morris, Microsoft’s Chief Data Scientist for Financial Services, encapsulates this perspective: “Rather than viewing Gen AI as an automation tool, consider it a co-pilot, aiding humans in performing tasks more expeditiously.”

From executing marketing campaigns to building websites to writing code and creating new data models, the benefit of these use cases isn’t cost reduction; it’s reduced time to market.

What every CEO should know about Generative AI. Image Credit: Digital Rosh

2) Assessing the Risks of Large Language Models is Essential

While ChatGPT may currently enjoy the highest level of recognition among large language models (LLMs), other contenders like UC Berkeley’s Gorilla and Meta’s Llama are gaining momentum. Almost every major technology provider is either developing or has recently introduced its own LLM.

By the end of this decade, businesses can anticipate relying on anywhere from 10 to 100 LLMs, contingent upon their industry and company size. Two certainties are evident: 1) technology vendors may claim to integrate Generative AI technology into their offerings even when they do not, and 2) tech vendors are unlikely to disclose whatever weaknesses and limitations their LLMs possess.

Therefore, organizations will need to independently evaluate each model’s strengths, weaknesses, and associated risks. As stated by Chris Nichols, Director of Capital Markets at South State Bank:

“There are specific criteria that companies should apply to assess each model. Risk teams should monitor these models and assess them based on their accuracy, potential biases, security, transparency, data privacy, audit frequency, and ethical considerations (such as the risk of intellectual property infringement and deep fake generation).”
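To make that checklist operational, a risk team could track each candidate model in a simple scorecard. The Python sketch below is a hypothetical illustration only: the criteria mirror the quote above, but the 1-to-5 scale, the unweighted average, and the class and field names are assumptions, not an established framework or any vendor’s API.

```python
from dataclasses import dataclass, field

# Hypothetical scorecard: the criteria mirror the quote above; the 1-5 scale
# and the unweighted average are assumptions, not an established framework.
CRITERIA = [
    "accuracy",
    "bias",
    "security",
    "transparency",
    "data_privacy",
    "audit_frequency",
    "ethics",  # e.g. intellectual property infringement, deep fakes
]

@dataclass
class ModelAssessment:
    model_name: str
    vendor: str
    scores: dict = field(default_factory=dict)  # criterion -> 1 (weak) .. 5 (strong)

    def overall(self) -> float:
        """Unweighted average of rated criteria; a real risk team would weight them."""
        rated = [v for v in self.scores.values() if v is not None]
        return sum(rated) / len(rated) if rated else 0.0

    def gaps(self, threshold: int = 3) -> list:
        """Criteria scored below the threshold, or never rated at all."""
        return [c for c in CRITERIA if self.scores.get(c, 0) < threshold]

# Usage: rate every candidate LLM the same way so comparisons stay consistent.
candidate = ModelAssessment("example-llm-v1", "ExampleVendor",
                            scores={"accuracy": 4, "bias": 2, "security": 3})
print(candidate.overall())  # 3.0
print(candidate.gaps())     # ['bias', 'transparency', 'data_privacy', ...]
```

Even a lightweight structure like this forces every model, whichever vendor supplies it, to be judged against the same criteria.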

3) In 2023, ChatGPT Resembles What Lotus 1-2-3 Was in 1983

Think back to the era of Lotus 1-2-3, the spreadsheet software. While it wasn’t the initial PC-based spreadsheet on the market, its launch in early 1983 ignited a surge in personal computer adoption, earning it the title of the ‘killer app’ for PCs.

Lotus 1-2-3 also revolutionized employee productivity, enabling individuals to manage numerical data in ways previously unimaginable. Many today may not recall the reliance on HP calculators for calculations and manual record-keeping in the workplace.

Despite the substantial productivity gains, several challenges emerged: 1) users introduced calculation errors that caused significant issues for some companies; 2) documentation of the underlying assumptions within spreadsheets was often lacking, reducing transparency; and 3) consistency and standardization in spreadsheet design and use were frequently absent.

Remarkably, these same issues faced by companies four decades ago with Lotus 1-2-3 are still relevant today in the context of ChatGPT and other Generative AI tools. There is an overreliance on ChatGPT’s occasionally inaccurate outputs, a lack of documentation or a ‘paper trail’ concerning tool usage, and inconsistency in tool utilization among employees, even within the same department or organization.

Similar to the way Lotus 1-2-3 gave rise to numerous plugins that enhanced its functionality, ChatGPT has already spawned hundreds of plugins. In fact, much of its capability to generate outputs such as audio, video, programming code, and other non-text forms is derived from these plugins rather than ChatGPT itself.

4) The Success of Generative AI Hinges on Data Quality

Consultants have long been emphasizing the importance of organizing your internal data infrastructure, and when you begin utilizing Generative AI tools, you’ll witness just how effective your efforts have been. The age-old saying ‘garbage in, garbage out’ couldn’t be more aptly suited for Generative AI.

In the case of open-source Large Language Models (LLMs) that rely on public Internet data, a high degree of caution must be exercised regarding data quality. While the Internet is a treasure trove of data, it often resembles a treasure buried within a data wasteland. Attempting to extract data from it can leave you uncertain whether you’ve acquired a valuable nugget or a handful of useless information.

Companies have grappled with the challenge of granting employees access to the data required for informed decision-making and job performance for decades. Part of this challenge involves deploying tools to access the data and providing training to ensure employees are proficient in using them.

Generative AI tools mitigate some of the complexities associated with data access and reporting software applications, offering a significant advantage that contributes to enhanced human performance.

However, what remains a critical concern is the quality of the data.

Ironically, it’s essential to shift the conversation away from discussing ‘data’ in a generic sense. Instead, the focus should be on evaluating the quality, availability, and accessibility of specific data types, such as customer data, customer interaction data, transaction data, financial performance data, operational performance data, and so on.

Each one of these types of data is fodder for Generative AI tools.
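As a rough illustration of what evaluating a specific data type can mean in practice, the Python sketch below scores the completeness of a feed before it is handed to a Generative AI tool. The data types, field names, and the completeness metric are hypothetical assumptions, not a reference implementation.

```python
# Hypothetical data-quality check: the data types, required fields, and the
# completeness metric are assumptions used for illustration only.
REQUIRED_FIELDS = {
    "customer": ["customer_id", "name", "email", "segment"],
    "transaction": ["transaction_id", "customer_id", "amount", "timestamp"],
}

def completeness(records: list, data_type: str) -> dict:
    """Share of records (0.0-1.0) with each required field present and non-empty."""
    fields = REQUIRED_FIELDS[data_type]
    if not records:
        return {f: 0.0 for f in fields}
    return {
        f: sum(1 for r in records if r.get(f) not in (None, "")) / len(records)
        for f in fields
    }

# Usage: a transaction feed missing half its timestamps is 'garbage in'.
sample = [
    {"transaction_id": 1, "customer_id": "C1", "amount": 120.0, "timestamp": "2023-09-01"},
    {"transaction_id": 2, "customer_id": "C2", "amount": 75.5, "timestamp": None},
]
print(completeness(sample, "transaction"))
# {'transaction_id': 1.0, 'customer_id': 1.0, 'amount': 1.0, 'timestamp': 0.5}
```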

5) Generative AI Requires New Behaviors

Prohibiting the use of Generative AI tools is not a practical approach. What is both feasible and advisable is the establishment of clear guidelines for their utilization.

For instance, employees should be required to: 1) document the prompts they employ to generate results; 2) thoroughly review the output generated by Generative AI (and provide evidence of their review); and 3) comply with internal document standards encompassing the use of keywords, well-defined headings, graphics with alt tags, concise sentences, and formatting criteria.

Admittedly, these expectations are demanding, but as Chris Nichols of South State Bank notes, “poorly structured documents are a primary source of Generative AI inaccuracies.”
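A concrete way to meet the first two requirements is a lightweight audit log that captures the prompt, an excerpt of the output, and the human sign-off. The sketch below is a minimal, hypothetical example: the function name, the JSON-lines schema, and the file path are assumptions and are not tied to any particular Generative AI vendor’s API.

```python
import json
from datetime import datetime, timezone

# Hypothetical audit-trail helper: the schema and file name are assumptions, not
# a standard. The point is that every generated artifact carries its prompt,
# its reviewer, and a sign-off, creating the 'paper trail' described above.
def log_generation(prompt: str, output: str, reviewer: str, approved: bool,
                   log_path: str = "genai_audit_log.jsonl") -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "output_excerpt": output[:500],  # keep the log compact
        "reviewed_by": reviewer,
        "approved": approved,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Usage: called only after a human has actually read and approved the draft.
log_generation("Summarize Q3 churn drivers for the board deck",
               "Churn rose 2.1 points, driven by ...",
               reviewer="j.doe", approved=True)
```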

Furthermore, the focus of management will evolve over the remainder of the decade. Over the past decade, businesses have started on a ‘digital transformation’ journey, primarily centered around digitizing high-volume transactional processes, such as account opening and customer support.

This focus is now evolving, or expanding to be precise, toward enhancing the productivity of knowledge workers within the organization, including those in IT, legal, marketing, and other roles.

In the short term, entrusting Generative AI tools to run the company without human intervention and oversight would be imprudent, given the prevalence of erroneous outputs and outright ‘hallucinations.’

However, in the long run, Generative AI has the potential to be genuinely ‘disruptive’ and serve as a ‘game changer.’ CEOs must proactively take significant measures to ensure that these disruptions and changes yield positive outcomes for their organizations.

Conclusion

In this age of AI, CEOs need to be practical. Focus on making work more efficient, be cautious about AI’s risks, remember past lessons, and ensure good data. Follow guidelines for AI use, adapt leadership, and prepare for big changes. It’s all about being smart in the AI era.
