
Alexa, Write my OPORD: Promise and Pitfalls of Machine Learning ...




How AI Will Help Marketers Harness Psychology To Drive Consumer Purchase Decisions

Jessica Hawthorne-Castro is CEO of Hawthorne, an analytics and technology-driven accountable advertising agency.


A marketer's role is to inspire consumers to embrace a product or service, which involves shaping their decision-making process. Equally important is an understanding of psychology, particularly of the systematic mental shortcuts known as cognitive biases.

Cognitive biases are consistent patterns of deviation from logical reasoning. While we may aspire to make decisions based on objective analysis and unbiased judgment, the reality is that our choices are often influenced by unconscious biases.

These cognitive biases shape consumer behavior, and whether marketers realize it or not, they are likely already leveraging cognitive bias psychology as a marketing tool. Now that forward-thinking marketers are experimenting with machine learning and generative artificial intelligence (AI) tools, it's a great time to explore how AI could be used to leverage cognitive biases.

Members of our agency team have been experimenting with AI and machine learning for professional applications and have found their capabilities remarkable. We're excited about the possibilities as the future of advertising evolves before our eyes. While our current usage has primarily focused on programming and copywriting, we're seeing more use cases, including data analysis, image enhancement and creative inspiration. Our trials have also made us more acutely aware of the technology's powerful potential when combined with existing marketing techniques, such as the use of cognitive biases.

The marriage of cognitive bias-based marketing techniques and AI has the potential to revolutionize marketing. By integrating AI, marketers and advertisers can apply cognitive biases more effectively, opening new possibilities for impactful, targeted campaigns and potentially unlocking new levels of customer engagement in the ever-evolving landscape of marketing.

Here is a closer look at several cognitive biases currently used in marketing and potential ways marketers can leverage AI to harness the power of these cognitive biases to drive better sales outcomes.

Scarcity Effect: Marketers commonly use scarcity effect-based tactics with phrases like "Act now—supplies are limited!" on the understanding that people tend to assign higher value to items that are scarce compared to those that are available in abundance. Marketers can use generative AI to create time-limited offers or a strategy for limited-edition products to drive higher sales.

Bandwagon Effect: This cognitive bias creates a sense of confidence in consumers when they see others endorsing a product. AI can help marketers conduct research to identify influencers who would maximize this effect with target groups. It can also analyze data to create personalized ads that are more likely to resonate with individual consumers.

Empathy Gap: People like to think they make rational choices, but the reality is that emotions often drive consumer behavior. The empathy gap is the tendency to underestimate the role emotions play. This is why successful ad campaigns often use both cognitive and emotional appeals to establish a connection with consumers and drive sales.

Marketers can effectively tap into this bias and influence consumer decision-making, with AI allowing for more responsible and responsive advertising. AI tools such as Google's DeepDream program generate dream-like images, which can help designers create unique designs that appeal to consumers' emotions.

Humor Effect: People find it easier to remember and recall humorous events, which has led marketers to incorporate humor into their campaigns. In the insurance industry, ads featuring popular recurring characters like Mayhem, Flo, and the Emu and Doug have successfully used humor to engage audiences and leave a lasting impression. Although AI doesn't have a sense of humor, marketers can use AI to help generate ideas and identify themes that are relatable and grounded, allowing consumers to connect with the message on a personal level.

Loss Aversion: Human beings tend to perceive losses as more significant than gains, so marketers, particularly in the home improvement service industry, often employ a strategy that emphasizes the potential negative consequences of not taking action. AI can help identify what sense of loss a consumer will feel if they do not purchase the product so marketers can utilize that emotion in ads.

Distinction Bias: When individuals compare two similar products side by side, the differences between them become more pronounced than when they are evaluated individually. By using AI to identify differences and emphasize them, marketers can effectively communicate the distinct advantages of their offerings and influence consumer perceptions and purchasing decisions.

Anchoring: Anchoring is a cognitive bias in which individuals place disproportionate emphasis on a specific piece of information when making decisions. Marketers can use AI to leverage this bias by identifying a higher price point or reference point (the anchor), which makes discounted prices appear more attractive and appealing to consumers.
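Anchoring in pricing can be made concrete with a tiny sketch. Everything below (the function name, the label format, the figures) is illustrative, not anything described in the article:

```python
def anchored_price_label(anchor: float, sale_price: float) -> str:
    """Lead with the higher reference point (the anchor) so the
    discounted price is framed as a gain. Hypothetical helper."""
    pct_off = round((anchor - sale_price) / anchor * 100)
    return f"Was ${anchor:.2f}, Now ${sale_price:.2f} ({pct_off}% off)"

# A $75 price shown alone reads neutrally; anchored against $100,
# it reads as a 25% win.
label = anchored_price_label(100, 75)
```

The same $75 item feels like a better deal purely because the $100 anchor is presented first.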

Generative AI is becoming a powerful tool for marketers who are developing strategies, implementing campaigns and designing ads. The integration of AI with cognitive biases can help marketers analyze creative content and optimize marketing performance, ultimately driving increased sales.

That said, marketers must commit to using AI ethically and transparently. It can be used to create more inclusive experiences, but humans must remain in the loop to fact-check claims and ensure that ads aren't deceptive.

By gaining a deeper understanding of these cognitive biases and AI's potential to leverage them, marketers can lay the groundwork for stronger strategies and measurable, impactful results. This is a time of radical change in advertising, and AI will further empower marketers to harness cognitive biases and elicit stronger, more compelling responses from target audiences. We are on the cusp of the most rapid advances advertising has yet seen.

Forbes Agency Council is an invitation-only community for executives in successful public relations, media strategy, creative and advertising agencies.


Intellectual Humility In The Age Of AI

Many moons ago, I wrote an "infamous" paper, "Modest Systems Psychology: A Neutral Complement to Positive Psychological Thinking." I call it infamous because, at the time, it was a tough "take-down" of the positive psychology movement. I focused on the bias and simplicity of positive psychology thinking: we needed to do better in our efforts to understand and influence happiness and well-being. A modest approach, I argued, recognizes and works with the complexity of human systems. As the field of positive psychology matured, I gravitated toward a more substantive intellectual humility: I accepted that we have to start somewhere, that bias is something we work against collectively over time, and that we generally move from simple to more complex, fit-for-purpose thinking and practice as the science of a field progresses.

I recognized a similar sentiment within myself recently in my response to ChatGPT. In my early testing of ChatGPT (what does it generate when I ask questions about social science?), I recognized its bias, simplicity, and many outlandish errors (e.g., fabricating study citations that don't exist). I also studied the underlying language models and how they work and decided that ChatGPT was fundamentally flawed as an aid to intellectual activity. However, flawed as it is, I have decided I need to engage with it and continue to explore its functionality as it develops. This aligns with my purpose as a teacher, researcher, and collective intelligence facilitator. Simply put, if we can find ways to put AI to good use, part of my job is to explore these possibilities.

When it comes to intellectual humility, I came across an interesting study recently focused directly on ChatGPT. The study by Li (2023) explores the relationship between intellectual humility and acceptance of ChatGPT. Intellectual humility is a measure of how much individuals recognize the fallibility of their beliefs, opinions, and knowledge. Intellectual humility is related to the personality disposition of openness to experience. Li (2023) suggests that people who recognize their own intellectual limitations may be more likely to accept AI as potentially useful. This is consistent with previous findings indicating that people with higher intellectual humility are less likely to feel threatened by developments in computer science and more likely to adopt new technologies.

Li (2023) conducted four studies. In Study 1, 309 students from southwest China completed a survey measuring intellectual humility along with a series of questions focused on acceptance and fear of ChatGPT. Analysis of this survey data indicated that higher intellectual humility was associated with higher acceptance of ChatGPT and lower fear.

Study 2 moved beyond a simple survey to evaluate a behavioral indicator of acceptance. A total of 144 students were contacted about an upcoming university event, and they were asked to help with a decision regarding invitation letters that would be sent to guests of honor. They were informed that two versions of the invitation letter were available: one written by ChatGPT and the other written by a professional writer. The students were also informed that both ChatGPT and the professional writer could write engaging and high-quality content and that a pilot survey indicated that the two versions do not differ in terms of many key criteria, such as credibility, effectiveness, readability, and valence.

Students were simply asked to indicate their preference and were not asked to read the invitations themselves. When presented with this scenario, the majority of students (82.64 percent) proposed using the invitation letter they thought was written by a human, while only 17.36 percent proposed using the version they thought was generated by ChatGPT. Interestingly, however, the odds of selecting the ChatGPT version were significantly higher among students higher in intellectual humility. In simple terms, they were more open to using the ChatGPT invites.

Study 3 worked with a sample of 172 Chinese non-student adults. Participants were randomly assigned to an experimental condition that prompted either intellectual humility or intellectual certainty. Specifically, one group of participants read an article on the benefits of admitting one's own intellectual limitations (thus prompting intellectual humility), while the second group read an article on the benefits of demonstrating what you know and not being bashful in doing so (prompting intellectual certainty). Participants were next shown a photo of a beautiful mountain lake scene and were told that the government was going to run an advertisement to promote the scenic spot. Two text advertisements were available: one produced by ChatGPT and another produced by a professional writer. (In reality, both texts were written by the same person.)

Again, participants were asked to select one text or the other. And again, we see the same subtle and interesting effect: 85.7 percent of participants in the intellectual certainty condition selected the text they believed was written by a human, but a significantly smaller percentage of participants in the intellectual humility condition, 68.9 percent, did the same. These findings suggest that intellectual humility, even when it is temporarily prompted, may result in more favorable attitudes toward ChatGPT.
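The gap between conditions can be expressed as an odds ratio, a standard effect-size measure for this kind of binary choice. A minimal sketch using the rounded percentages above (the paper's exact cell counts are not given in this summary, so the result is approximate):

```python
def odds(p: float) -> float:
    """Odds of choosing the human-attributed text, given proportion p."""
    return p / (1 - p)

p_certainty = 0.857  # chose the "human" text in the certainty condition
p_humility = 0.689   # chose the "human" text in the humility condition

# An odds ratio above 1 means the certainty condition favored the
# human-attributed text more strongly than the humility condition did.
odds_ratio = odds(p_certainty) / odds(p_humility)  # roughly 2.7
```

Even a "subtle" percentage gap like this one corresponds to a sizeable odds ratio, which is why the effect reaches statistical significance.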

Study 4 adopted a similar design to Study 3, but it compared the intellectual humility prompt with a more neutral control reading prompt. Participants were asked to select an invitation letter as in Study 2; only this time, participants were able to read the invitation letters. They were told one letter was generated by ChatGPT, and the other was drafted by a professional writer. (Both were written by the same person and were of similar quality.)

Again, prompting intellectual humility resulted in more favorable attitudes toward ChatGPT: 77.11 percent of participants in the control condition selected the letter they thought was written by a human, whereas 63.75 percent of participants in the intellectual humility condition did so. Further statistical analysis revealed that the personality trait of openness to experience mediated the relationship between intellectual humility and letter selection (i.e., the tendency for those prompted with intellectual humility to select ChatGPT letters was explained in part by higher openness).

Overall, while the statistical effects reported across each of the four studies are subtle (i.e., statistically significant but somewhat weak), collectively, the studies do point to an effect of intellectual humility on acceptance of ChatGPT. Li (2023) notes that many other factors are likely to operate when it comes to understanding the responses that people have to ChatGPT, and more research is needed. I'm doing my best to maintain intellectual humility, and I think the best approach for now is a case-by-case analysis of different potential applications of ChatGPT. As a teacher, there is no point in hiding my head in the sand and hoping that traditional models of teaching and learning will be sustained in the new world of AI. But if I identify problems with ChatGPT and the way it is being used, I'll be sure to let my students know. We'll figure it out together!


Has AI Surpassed Human Creativity?

The stunning capabilities of artificial intelligence (AI) large language models (LLMs) challenge the long-held belief that creativity differentiates humans from machine learning algorithms. Has AI technology exceeded humans in the creative realm? A new study compares the abilities of AI versus humans in creative divergent thinking with potential insights on the future of work in creative domains.

The Future of Jobs Report 2023, by the World Economic Forum (WEF), states that the most important skills for workers in 2023 are the cognitive skills of analytical and creative thinking. According to the report, creative thinking is growing in importance faster than analytical thinking.

Increasingly, AI technology is being used for creative purposes. According to a 2023 Statista survey of 4,500 American professionals, 37 percent of respondents working in advertising or marketing had used AI to assist with work tasks.

"With AI systems becoming increasingly capable of performing tasks that were once solely within the purview of humans, concerns have been raised about the potential displacement of jobs and its implications for future employment prospects," wrote the study co-authors Simone Grassini and Mika Koivisto, Ph.D.

Grassini is an associate professor in the Department of Psychosocial Science at the University of Bergen and in the cognitive and behavioral neuroscience laboratory at the University of Stavanger, Norway. Koivisto is a university lecturer in psychology at the University of Turku in Finland.

"The development and widespread availability of generative artificial intelligence (AI) tools, such as ChatGPT or MidJourney, has sparked a lively debate about numerous aspects of their integration into society, as well as about the nature of creativity in humans and AI," the authors wrote.

Large language models are AI deep learning algorithms trained with unsupervised learning on massive data sets, often scraped from the Internet, in order to "understand" existing content and generate new content. Examples include OpenAI Codex; the OpenAI models behind its chatbot ChatGPT (GPT-3.5 and GPT-4); GPT-4, which also powers Microsoft's AI chatbot Bing Chat; BLOOM by Hugging Face; the Megatron-Turing Natural Language Generation 530B model by NVIDIA and Microsoft; Anthropic's Claude (behind the Claude 2 chatbot); Meta's LLaMA; Salesforce Einstein GPT (which uses OpenAI models); PaLM 2, which powers Google's AI chatbot Bard; and Amazon's Titan.

To measure the creativity of humans versus AI, the researchers used the Alternative Uses Test (AUT), a test designed by American psychologist J.P. Guilford, one of the eminent psychologists of the 20th century according to the American Psychological Association (APA). The AI chatbots evaluated were ChatGPT (versions 3.5 and 4) and Copy.ai, which is based on GPT-3 technology.

Guilford views intelligence as an aggregate of many mental factors or abilities, rather than one dominating general ability. Guilford's theory of human intelligence consists of the three dimensions of operations (cognition, memory, divergent production, convergent production, evaluation), products (units, classes, relations, systems, transformations, and implications), and contents (visual, auditory, symbolic, semantic, behavioral).

Guilford considered creativity as a form of problem-solving and a part of intelligence. Problem-solving abilities could be further defined as sensitivity to problems, fluency (ideational, associational, and expressional), and flexibility (spontaneous and adaptive).

Guilford is credited for introducing the terms "divergent and convergent thinking" in the 1956 theory of human intelligence called the Structure of Intellect Model (SI). Brainstorming is an example of divergent thinking, where many ideas are generated in response to an open-ended task or question. In contrast, the output of convergent thinking is a single correct answer to a well-defined problem.

In this study, the tasks included generating creative and original uses of everyday objects, such as a rope, box, pencil, and candle. The researchers found that, unlike the responses generated by the AI chatbots, the 256 human study participants produced a relatively high proportion of what could be considered sub-par ideas, or common responses.
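One common way AUT responses are scored is by statistical infrequency: a use counts as more original the fewer participants propose it. The study itself relied on semantic distance and human ratings, so the frequency-based sketch below is a simplified, assumed convention with invented data:

```python
from collections import Counter

def originality_scores(responses):
    """Score each distinct AUT response by statistical infrequency:
    rarer proposed uses score closer to 1, common ones closer to 0.
    (A classic scoring convention, not the study's exact method.)"""
    counts = Counter(responses)
    total = len(responses)
    return {use: 1 - counts[use] / total for use in counts}

# Invented pool of proposed uses for "a rope" across participants.
pool = ["clothesline", "clothesline", "clothesline", "jump rope", "art mobile"]
scores = originality_scores(pool)  # "art mobile" outscores "clothesline"
```

Under this convention, a pool dominated by common responses, as the human participants produced, drags down the average originality score.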


"The results suggest that AI has reached at least the same level, or even surpassed, the average human's ability to generate ideas in the most typical test of creative thinking (AUT)," the researchers concluded.

However, the AI chatbots lacked consistency and the top human performers achieved better results than AI, the study results showed. The research has provided a snapshot of AI's creativity versus humans. Grassini and Koivisto caution that this may change six months from now as AI technology continues to rapidly advance in the future.

Copyright © 2023 Cami Rosso All rights reserved.

This post first appeared on Autonomous AI.
