Learn how to choose the right AI for your digital product and user experience.
AI has arguably become the buzzword of 2024—and with good reason. As platforms like ChatGPT revolutionize workflows and integrate with existing software, the technology is becoming so ubiquitous that it seems like it’s everywhere and in everything.
However, that doesn’t mean AI is a monolith, or that jumping on the “just add an AI integration” train is a winning strategy for pairing these models’ user-empowering capabilities with great UX. In this article, we’ll explain the different types of AI models, break down their use cases, and show how to integrate them as a value-add in your product’s UX without adding unnecessary bloat.
Understanding SLMs and LLMs
Language models are artificial intelligence models designed to understand and generate human language. Built using machine learning and vast amounts of text data from various sources, they use complex parameters to predict likely text sequences.
For most users, Large Language Models (LLMs) and the products built on them (like ChatGPT) are nearly synonymous with AI. However, not every job requires the capabilities of an LLM, and that’s where Small Language Models (SLMs) come into the picture.
What are SLMs?
SLMs are like the nimble, more efficient cousins of their larger and more robust AI counterparts. However, they don’t have the power or wider breadth of abilities that you’d expect to see in LLMs.
The best part of SLMs is that they can perform specific language tasks quite well without requiring the massive computational resources that larger models do. They’re particularly advantageous for their fast response times and for running on devices with less processing power, like smartphones. Perhaps most importantly, once installed, they don’t usually need an internet connection.
Use cases and real examples
SLMs are often used for services that require basic queries, like customer service bots that answer FAQs or provide order status updates. While there are many SLMs currently on the market, Mistral’s 7B, Google’s Gemma, and Microsoft’s Phi-2 are some of the most popular. In most cases, these products also give users the opportunity to run models on their own private servers and networks without relying on someone else to provide the computational resources (e.g., what OpenAI does for ChatGPT).
What are LLMs?
LLMs function a lot like SLMs, but they are trained on much larger datasets and produce more nuanced and powerful text modeling. One of the biggest benefits of LLMs is their ability to perform a wide range of language tasks without previous task-specific training. This capability, known as "zero-shot learning," enables them to translate languages, summarize texts, answer questions, and even engage in conversation, all by leveraging the patterns and information they've learned during their extensive training.
Use cases and real examples
LLMs are used in more complex applications where deeper language understanding is required. The most notable LLM, OpenAI’s GPT-4, is commonly employed to write articles, draft emails, and troubleshoot technical issues. It’s become a staple in most of our lives at this point.
In software development, tools like GitHub Copilot use LLMs to suggest and debug code, while in healthcare, they help summarize medical reports or assist doctors with diagnostic suggestions. LLMs also excel at translating languages, analyzing research, and generating creative content or visual concepts.
How to decide if and where you should integrate AI
Assessing user needs
We’ve written extensively about using research and data to produce better products; at Neuron, it’s a huge part of our work. When it comes to accurately assessing user needs with regard to integrating AI, the TLDR is really this:
Start by collecting real data on pain points and challenges that users face.
These can be qualitative, through interviews and focus groups, or quantitative, by combing through data gleaned from surveys, A/B testing, heat maps, or any combination of sources.
Then, get as specific as possible about use cases where AI could enhance the user experience, including:
Customer support
Content generation
Personalized recommendations
It might sound overly reductive, but that’s how it starts. Once you understand pain points and use cases where AI integration could assuage users' difficulties, you can start evaluating whether or not it’s worth doing.
Evaluating ROI and feasibility of AI integration
Given the increasing presence of AI integrations in competing products, it can be tempting to consider these features “necessary.” In reality, this is a more complicated decision based on ROI and feasibility.
AI will obviously cost money to integrate, but if doing so sufficiently boosts user satisfaction and efficiency in a way that could attract and retain more users, it can certainly be worth it. Conversely, if your integration doesn’t actually make your product appreciably better, it might not be.
A great place to start is to look at technical data on model performance to gain insight into both ROI and technical feasibility. For example, Google’s Gemma 7B is among the cheapest of SLMs, at around $0.15 per 1 million tokens. It also has a higher speed, at 153 output tokens per second, compared to ChatGPT’s 83. Although its output quality is lower than bigger LLMs, Gemma’s efficiency and low latency allow it to run on a wider range of back-end systems, especially less powerful ones.
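Per-token pricing differences look small until you multiply them by real traffic. The back-of-envelope math below uses illustrative figures (the token volume and the $0.15 vs. $30.00 per-million prices are assumptions for the sake of the sketch, not current vendor pricing):

```python
# Back-of-envelope cost comparison between a small and a large model.
# Prices and volume are illustrative assumptions, not vendor quotes.
def monthly_cost(tokens_per_month: int, price_per_million: float) -> float:
    """Dollar cost for a given monthly token volume."""
    return tokens_per_month / 1_000_000 * price_per_million

MONTHLY_TOKENS = 500_000_000  # e.g., 500M output tokens per month

slm_cost = monthly_cost(MONTHLY_TOKENS, 0.15)   # small model, $0.15/1M tokens
llm_cost = monthly_cost(MONTHLY_TOKENS, 30.00)  # large model, $30.00/1M tokens

print(f"SLM: ${slm_cost:,.2f}/mo  vs  LLM: ${llm_cost:,.2f}/mo")
```

At scale, the gap compounds quickly, which is why per-token price belongs in any ROI estimate alongside output quality and latency.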
Strategic planning
While it’s natural to be excited about the prospect of adding AI to an already stellar product, you need to start by setting clear goals for your end product. Whatever you want to accomplish, being specific about your objective can help avoid implementing it in a way that could compromise ROI or the integrity of your UX.
Although many of us have used ChatGPT to summarize an article or turn an email chain into a suggested meeting agenda, that doesn’t make us experts. So whether you’re leveraging the wisdom of an internal team or consulting with external experts, it’s a good idea to tap the knowledge of people who understand these models from the inside out. If you have a very clear sense of your user needs and your goals for integrating LLMs, experts with more intimate knowledge of model capabilities and limitations can help guide you to a system that will pair perfectly.
Incorporating SLMs into your product's UX
When to use SLMs
When the time comes to move from research and planning into actual integration, SLMs are a great place to begin. In general, tasks that are repetitive and don’t require deep language comprehension are ideal for SLMs to tackle, including answering FAQs, basic troubleshooting, and routine data entry.
In practice, a chatbot could use an SLM to handle common customer inquiries like return policies, product availability, or checking the status of an online order. Of course, not every request would fall into simple categories like those. But for those that do, SLMs perfectly balance meeting customer needs with conserving computational resources.
Best UX practices for integrating SLMs
Following best practices for integrating SLMs will ensure that you get closer to your goals while improving your user experience along the way:
Text input and output: Ensure that interfaces let users input queries or commands efficiently—and that reading model-generated output is just as easy. Place text input fields, such as search bars or chat interfaces, prominently within the UI so they’re easily accessible and identifiable.
Guide the user: Include brief instructions or tooltips to walk users through using the feature effectively. Consider offering examples of common commands or queries that users can try.
Error handling and recovery: While the kinds of tasks you might give SLMs would generally be simpler, they’ll still occasionally run into problems. To combat this, any AI integration needs to include mechanisms for gracefully handling misunderstandings or unclear inputs. For example, tooltips that offer suggestions or alternative wordings could help users refine queries.
Make it accessible: Provide alternative input methods for users with disabilities. To avoid rolling a feature out that’s not sufficiently accessible, be sure to test any integration with diverse user groups and demographics before release.
Keep it clean: In general, you want to make sure that any text input area doesn’t obstruct other UI elements. When in doubt, try to strike a balance between adding a sufficient number of AI controls and keeping the design clean and clutter-free.
Incorporating LLMs into your product's UX
When to use LLMs
As efficient and low-cost as SLMs are, tasks will sometimes require the power and breadth of knowledge offered by LLMs. For example, customized content recommendations or more complex customer support requests may require a deeper understanding of the end-user or the ability to synthesize discrete (but related) elements to sufficiently answer questions.
For example, imagine that a healthcare app was trying to recommend personalized medical advice to a consumer. With access to certain medical records, chatbot history, and health data linked via personal devices like a Fitbit or Apple Watch, an LLM could conceivably help a user predisposed to heart disease by recommending activities that lower their resting heart rate or systolic blood pressure (or both).
Best UX practices for integrating LLMs
While the majority of the best practices for SLMs also apply to LLMs, there are a few additional UX considerations to remember when incorporating LLMs into your digital product:
Adaptive interfaces: LLMs can handle a diverse range of inputs, including text, voice, and even images. Having an interface that anticipates likely inputs and provides guidance for each interaction type will go a long way toward ensuring a smooth user experience.
Personalization: Interfaces and input preferences are great places to introduce personalization. For example, certain users may frequently use voice input methods, or use them only for very specific tasks. Integrations that can detect these patterns and prioritize input methods accordingly can help make experiences more efficient and satisfying.
Load time: LLMs are powerful, but that power comes with some costs—and a big one is latency. Including a UI element that indicates processing time or progress can provide context for users waiting for models to generate output.
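One common way to handle the latency point above is to stream partial output and surface elapsed time while the model is still generating. In this sketch, `fake_llm_stream` is a hypothetical stand-in for a real streaming model API, and the console print stands in for a spinner or progress element in a real UI:

```python
# Sketch of surfacing LLM latency: stream partial output and show
# elapsed time. fake_llm_stream is a stand-in for a real streaming API.
import time
from typing import Iterator

def fake_llm_stream(prompt: str) -> Iterator[str]:
    """Simulate a model streaming its answer in chunks."""
    for chunk in ["Here ", "is ", "a ", "long ", "answer."]:
        time.sleep(0.01)  # simulated generation latency per chunk
        yield chunk

def answer_with_progress(prompt: str) -> str:
    """Collect streamed chunks, updating a progress line as they arrive."""
    start = time.monotonic()
    parts = []
    for chunk in fake_llm_stream(prompt):
        parts.append(chunk)
        elapsed = time.monotonic() - start
        # In a real UI, this would update a progress indicator instead.
        print(f"[{elapsed:4.2f}s] {''.join(parts)}")
    return "".join(parts)

answer_with_progress("Explain our refund policy in detail")
```

Streaming the first tokens early makes perceived latency far lower than waiting for the complete response, even when total generation time is unchanged.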
Combining LLMs and SLMs
If you’re torn between proceeding with an SLM or an LLM, the good news is that you can use both. By utilizing SLMs for more basic, routine tasks and reserving LLMs for more complex interactions, you can optimize resources and still meet all the needs of end users.
In practical terms, a great strategy for implementing this is to devise a system where SLMs act as gatekeepers, handling initial interactions with users and then passing off more complex work to LLMs as needed. By doing so, you can keep initial interactions quick by providing low-latency answers, only tapping LLMs if necessary.
For example, in the case of a hypothetical piece of marketing automation software, you could have an SLM write a subject line or create email templates. But if you need help tweaking the message to improve campaign results, LLMs might be better suited to change copywriting or design elements with nuance.
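The gatekeeper pattern described above boils down to a small router: a cheap triage step (here, a keyword heuristic standing in for the SLM’s own classification) decides whether the SLM can answer or the request should escalate to an LLM. Both model calls below are hypothetical placeholders:

```python
# Minimal sketch of SLM-first routing with LLM escalation.
# call_slm / call_llm are hypothetical stand-ins for real model calls.
SIMPLE_INTENTS = ("return policy", "order status", "store hours")

def call_slm(query: str) -> str:
    return f"[SLM] Quick answer for: {query}"

def call_llm(query: str) -> str:
    return f"[LLM] Detailed answer for: {query}"

def route(query: str) -> str:
    """Send routine queries to the SLM; escalate everything else."""
    q = query.lower()
    if any(intent in q for intent in SIMPLE_INTENTS):
        return call_slm(query)
    return call_llm(query)

print(route("What is your return policy?"))                      # stays on the SLM
print(route("Help me rewrite this campaign email with nuance"))  # escalates
```

Because the SLM handles the bulk of routine traffic, users get low-latency answers by default, and the LLM’s cost and latency are paid only when the extra capability is actually needed.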
Future-proofing your product through SLM and LLM integration
Small Language Models (SLMs) and Large Language Models (LLMs) offer distinct advantages and challenges for integrating AI into your product's user experience. Understanding these differences is crucial for making informed decisions about which model to incorporate based on your users' needs, technical feasibility, and desired ROI.
AI is still a relatively new technology, and the AI landscape will continue to evolve rapidly and offer new integration possibilities. Continually assessing and adapting your integration strategies as these technologies develop will be key to ensuring they meet user expectations and drive product success.
At Neuron, we specialize in helping companies navigate this dynamic field. Our expertise in UX/UI design, combined with our deep understanding of AI technologies, allows us to create solutions that are both innovative and user-centric. If you're ready to enhance your product with AI or need guidance on optimizing your current integration, reach out to our team of experts.
About us
Neuron is a San Francisco-based UX/UI design agency that creates best-in-class digital experiences to help your business succeed. Whether you want to develop high-quality sales software or refine an existing product, we strive to create collaborative client partnerships that bring your vision to life.
Want to learn more about what we do or how we approach optimizing sales applications? Get in touch with our team today, or browse our knowledge base of UX/UI tips.