
Redefining Conversational AI with Large Language Models | by Janna Lipenkova | Sep, 2023

The charm of conversational interfaces lies in their simplicity and uniformity across different applications. If the future of user interfaces is that all apps look more or less the same, is the job of the UX designer doomed? Definitely not: conversation is an art to be taught to your LLM so it can conduct conversations that are helpful, natural, and comfortable for your users. Good conversational design emerges when we combine our knowledge of human psychology, linguistics, and UX design. In the following, we will first consider two basic choices when building a conversational system, namely whether you will use voice and/or chat, as well as the larger context of your system. Then, we will look at the conversations themselves and see how you can design the personality of your assistant while teaching it to engage in helpful and cooperative conversations.

Conversational interfaces can be implemented using chat or voice. In a nutshell, voice is faster, while chat allows users to stay private and to benefit from enriched UI functionality. Let's dive a bit deeper into the two options, since this is one of the first and most important decisions you will face when building a conversational app.

To pick between the two alternatives, start by considering the physical setting in which your app will be used. For example, why are almost all conversational systems in cars, such as those offered by Nuance Communications, based on voice? Because the driver's hands are already busy and cannot constantly switch between the steering wheel and a keyboard. The same applies to other activities like cooking, where users want to stay in the flow of their activity while using your app. Cars and kitchens are mostly private settings, so users can experience the joy of voice interaction without worrying about privacy or about bothering others. By contrast, if your app is to be used in a public setting like an office, a library, or a train station, voice might not be your first choice.

After understanding the physical setting, consider the emotional side. Voice can be used intentionally to transmit tone, mood, and personality: does this add value in your context? If you are building your app for leisure, voice might increase the fun factor, while an assistant for mental health could accommodate more empathy and allow a potentially troubled user a wider range of expression. By contrast, if your app will assist users in a professional setting like trading or customer service, a more anonymous, text-based interaction might contribute to more objective decisions and spare you the hassle of designing an overly emotional experience.

As a next step, think about the functionality. A text-based interface allows you to enrich the conversation with other media like images, as well as graphical UI elements such as buttons. For example, in an e-commerce assistant, an app that suggests products by posting their pictures and structured descriptions will be far more user-friendly than one that describes products via voice and can only provide their identifiers.

Finally, a voice UI comes with additional design and development challenges of its own. If you go for the voice solution, make sure that you not only clearly understand its advantages as compared to chat, but also have the skills and resources to address these additional challenges.
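To make these extra moving parts concrete, here is a minimal sketch of a single voice turn wrapped around the same chat core you would use for a text interface. It is only an illustration under assumed interfaces: transcribe, generate_reply, and synthesize are hypothetical callables standing in for whichever speech-to-text, LLM, and text-to-speech services you actually use, not any specific library's API.

from typing import Callable

def handle_voice_turn(
    audio_chunk: bytes,
    history: list[dict],
    transcribe: Callable[[bytes], str],           # ASR: audio -> user text (placeholder)
    generate_reply: Callable[[list[dict]], str],  # LLM call on the chat history (placeholder)
    synthesize: Callable[[str], bytes],           # TTS: reply text -> audio (placeholder)
) -> bytes:
    # One voice turn: audio in -> text -> LLM reply -> audio out.
    user_text = transcribe(audio_chunk)
    if not user_text.strip():
        # Recognition failures are a voice-only failure mode that a chat UI never has.
        return synthesize("Sorry, I didn't catch that. Could you say that again?")
    history.append({"role": "user", "content": user_text})
    reply = generate_reply(history)
    history.append({"role": "assistant", "content": reply})
    return synthesize(reply)

Each step in this loop, from recognition errors and turn-taking to the combined latency of three chained services, is a design and engineering concern that a pure chat interface simply does not have.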
Now, let's consider the larger context in which you can integrate conversational AI.

All of us are familiar with chatbots on company websites: those widgets on the right of the screen that pop up when we open the website of a business. Personally, more often than not, my intuitive reaction is to look for the Close button. Why is that? Through initial attempts to "converse" with these bots, I have learned that they cannot satisfy more specific information requirements, and in the end I still need to comb through the website. The moral of the story? Don't build a chatbot because it's cool and trendy; build it because you are sure it can create additional value for your users. Beyond the controversial widget on a company website, there are several exciting contexts for integrating the more general chatbots that have become possible with LLMs.

As humans, we are wired to anthropomorphize, i.e. to attribute human traits to anything that vaguely resembles a human. Language is one of the most unique and fascinating characteristics of humankind, and conversational products will automatically be associated with humans. People will imagine a person behind their screen or device, and it is good practice not to leave this person to the chance of your users' imaginations, but rather to lend it a consistent personality that matches your product and brand. This process is called "persona design".

The first step of persona design is understanding the character traits you would like your persona to display. Ideally, this is already done at the level of the training data: for example, when using RLHF, you can ask your annotators to rank the data according to traits like helpfulness, politeness, and fun, in order to bias the model towards the desired characteristics. These characteristics can be matched with your brand attributes to create a consistent image that continuously promotes your branding via the product experience.

Beyond general characteristics, you should also think about how your virtual assistant will deal with specific situations beyond the "happy path". For example, how will it respond to user requests that are beyond its scope, reply to questions about itself, and deal with abusive or vulgar language?

It is important to develop explicit internal guidelines on your persona that can be used by data annotators and conversation designers. This will allow you to design your persona in a purposeful way and keep it consistent across your team and over time, as your application undergoes multiple iterations and refinements.
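As a rough illustration of how such guidelines can be operationalized, the sketch below collects persona attributes in one place and turns them into a system prompt that is prepended to every LLM call. The persona name, traits, and wording are invented for the example, and the prompt format is just one possible convention, not a prescription.

PERSONA = {
    "name": "Stella",  # hypothetical assistant name
    "role": "virtual shopping assistant for a fashion store",
    "traits": ["helpful", "polite", "lightly humorous"],
    "out_of_scope_reply": "I'm afraid that's outside what I can help with, "
                          "but I'm happy to assist with your shopping.",
}

def build_system_prompt(persona: dict) -> str:
    # One shared instruction block, usable by conversation designers,
    # data annotators, and the production prompt alike.
    return (
        f"You are {persona['name']}, a {persona['role']}. "
        f"Always be {', '.join(persona['traits'])}. "
        f"If a request is outside your scope, reply: \"{persona['out_of_scope_reply']}\" "
        "If a user becomes abusive or vulgar, stay calm, do not mirror the tone, "
        "and politely steer the conversation back on topic."
    )

Keeping the persona in a single artifact like this makes it much easier to keep behaviour consistent across annotators, prompt iterations, and product releases.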
Have you ever had the impression of talking to a brick wall when you were actually speaking with a human? Sometimes we find that our conversation partners are just not interested in leading the conversation to success. Fortunately, in most cases things are smoother, and humans intuitively follow the "principle of cooperation" introduced by the language philosopher Paul Grice. According to this principle, humans who successfully communicate with each other follow four maxims, namely quantity, quality, relevance, and manner.

Maxim of quantity

The maxim of quantity asks the speaker to be informative and make their contribution as informative as required. For a virtual assistant, this also means actively moving the conversation forward. For example, consider this snippet from an e-commerce fashion app:

Assistant: What kind of clothing items are you looking for?
User: I am looking for a dress in orange.
Assistant (don't): Sorry, we don't have orange dresses at the moment.
Assistant (do): Sorry, we don't have dresses in orange, but we have this great and very comfortable dress in yellow: …

The user hopes to leave your app with a suitable item. By stopping the conversation because you don't have items that fit the exact description, you kill off the possibility of success. If your app instead suggests alternative items, it appears more helpful and keeps the option of a successful interaction open. Especially in voice interactions, it is important to find the right balance between providing all the information the user needs for success and not overwhelming them with unnecessary information that might cloud the interaction.
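As a minimal sketch of what this could look like in code, the fallback below never lets a "no exact match" end the conversation; it offers the closest alternative instead. The catalogue structure and field names are invented for illustration and do not refer to any particular system.

def answer_product_query(item_type: str, color: str, catalogue: list[dict]) -> str:
    # Exact matches first.
    exact = [p for p in catalogue if p["type"] == item_type and p["color"] == color]
    if exact:
        return f"We have these {color} {item_type}s for you: " + ", ".join(p["name"] for p in exact)
    # Maxim of quantity: don't stop at "we don't have it"; keep the conversation moving.
    alternatives = [p for p in catalogue if p["type"] == item_type]
    if alternatives:
        alt = alternatives[0]
        return (f"Sorry, we don't have {item_type}s in {color} at the moment, "
                f"but we have this great {alt['color']} {item_type}: {alt['name']}")
    return f"Sorry, we currently don't have any {item_type}s. Can I show you something else?"

In a real assistant, the same pattern would typically sit behind the LLM as a retrieval or tool-calling step, with the model phrasing the final answer.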
Maxim of quality

The maxim of quality asks speakers to be truthful and not say things they believe to be false or for which they lack adequate evidence. There is a lot of subjectivity and personal belief involved here, so this maxim cannot be directly projected onto LLMs. As we have seen, LLMs tend to hallucinate, and this risk is especially high when the conversation drifts off into less familiar territory. Additional guardrails can be used to avoid misleading statements and bring the conversation back on track.

Maxim of relevance

The maxim of relevance asks us to be, well, relevant. As a precondition, the virtual assistant has to understand the true intent of the user. Here is an example of an airline assistant that violates this rule:

User: I need a flight from Berlin to Zurich tomorrow morning.
Assistant: OK, I found the following flights from Berlin to Zurich for tomorrow morning. Flight BZ36294 leaves Berlin at 6.30am and arrives in Zurich at 8am. Flight BZ96294 leaves Berlin at 8am and arrives in Zurich at 9.30am. Flight BZ95614 leaves Berlin at 11am and arrives in Zurich at 12.30pm. There is also the possibility of a connecting flight, in which case you would leave Berlin at 11.45am and arrive in Zurich at 3pm.
User: Sorry… could you repeat this again?

The assistant is overwhelming the user with detailed information that might not be relevant. By breaking this up and guiding the user to narrow their search space, the dialogue becomes much easier to digest:

User: I need a flight from Berlin to Zurich tomorrow morning.
Assistant: OK, I found flights leaving at 6.30, 8, and 11am. Which time would you like to leave?
User: I need to be in Zurich before 9am.
Assistant: OK, so you can take flight BZ36294. It leaves at 6.30am and arrives at 8am. Should I buy the ticket for you?
User: Yes, thanks.

Maxim of manner

Finally, the maxim of manner states that our speech acts should be clear, concise, and orderly, avoiding ambiguity and obscurity of expression. Your virtual assistant should avoid technical or internal jargon and favour simple, universally understandable formulations.

While Grice's principles are valid for all conversations independently of a specific domain, LLMs that were not trained specifically for conversation often fail to fulfill them. Thus, when compiling your training data, it is important to include enough dialogue samples that allow your model to learn these principles.

The domain of conversational design is developing rather quickly. Whether you are already building AI products or thinking about your career path in AI, I encourage you to dig deeper into this topic (cf. the excellent introductions in [5] and [6]). As AI is turning into a commodity, good design together with a defensible data strategy will become two important differentiators for AI products.

Let's summarize the key takeaways from the article. Additionally, figure 6 shows a "cheatsheet" with the main points that you can download as a reference.


