
How to design AI-Native Conversational Interfaces: From Templates to Generative UI

Parikshit Deshmukh

September 3rd, 2025 · 12 min read
This article was co-written by Michelle Parayil and Parikshit Deshmukh. Michelle Parayil is the Head of Academy at the Conversation Design Institute in Amsterdam and an independent AI design consultant helping enterprises establish and scale their Conversational AI product strategy.

Since 2017, she has designed over 100 virtual assistants, founded the Conversation Design team at Jio Haptik, authored multiple playbooks on virtual assistant design, and enjoys evangelizing the growing AI space in the UAE. You can find her online on LinkedIn or Twitter.
[Figure: 6 reasons why Generative UI is better for conversational experiences]


If there was ever an award for the "Most Frequently Declared Dead Job," Conversation Design would win by a landslide. And honestly, we don’t blame anybody.

Veterans who've been in the space for over five years miss the accuracy and control that declarative journeys used to provide. Sure, those rule-based flows were painful to build, but they were predictable and made for solid, justifiable products. You could defend every decision in a stakeholder meeting and sleep soundly knowing your bot wouldn't randomly start talking about its feelings.

Meanwhile, newcomers to the field see LLMs and prompts as the be-all and end-all of conversational UX design. LLMs can make contextual decisions and generate fitting copy for every situation and customer emotion, and then, just as easily, completely derail with an unpredictable gaffe or hallucination.

So here we are, stuck between the old and the new.

But what if we didn't have to choose sides?

In this article, we hope to present a hybrid solution that combines the best of both worlds.

We start by acknowledging that building an AI-native conversational interface is not as simple as dropping an AI into a chat template. It requires a fundamentally different approach focused on adaptability, context, and user outcomes. To create a great user experience, designers must rethink how interfaces work when both the user and the AI are "unknowns", each capable of surprising the other.

P.S. No jobs were harmed in the making of this piece.

The Limitations of Template-Based Conversation Design

Historically, conversational designers have relied on manually crafted scripts, flows, and templates for chatbots. These templates define how a dialogue should proceed, for example, which prompts to present and what responses to expect. While this approach works for deterministic systems (where the AI only says or does what it was explicitly programmed to), it breaks down with generative AI. Manually designing and developing every possible prompt-response template does not scale for modern conversational AI products and doesn't meet customer expectations either.
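
To make concrete what "template" means here, a minimal sketch of such a hand-authored flow in TypeScript (the node structure and names are illustrative, not any particular platform's format):

```typescript
// A deterministic, template-based dialogue: every prompt, every
// expected intent, and every transition is authored by hand.
type DialogueNode = {
  prompt: string;                   // exactly what the bot says
  expected: Record<string, string>; // recognized intent -> next node id
  fallback: string;                 // node id when nothing matches
};

const bookingFlow: Record<string, DialogueNode> = {
  start: {
    prompt: "Would you like to book or cancel a flight?",
    expected: { book: "askDate", cancel: "askBookingRef" },
    fallback: "didNotUnderstand",
  },
  askDate: {
    prompt: "What date would you like to fly?",
    expected: { dateGiven: "askDestination" },
    fallback: "didNotUnderstand",
  },
  // ...and so on: every node, including every error case, must be
  // scripted ahead of time by the design team.
};
```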

Here are some ways this affected the product:

  1. You could never achieve 100% coverage, especially not under the time constraints enterprise customers impose. Designers get a few weeks, or sometimes even days or hours, to script every possible edge case on top of the happy path.
  2. Designing every single edge case wasn't fun, and it often came at the cost of poorly designed error cases (a personal pet peeve). Templatization was the default anyway, but in large customer support assistants you often see API errors that lead nowhere.
  3. Standard vocabulary libraries and acknowledgements never worked for every scenario. You'd often end up sending awkward greetings to frustrated customers.
[Figure: Limitations of workflow- or template-based conversational experiences]

And adding generative AI only worsens the problem.

First, LLMs don't follow scripts. They can handle an open-ended range of user inputs and produce countless variations of outputs. They're like that improv actor who takes "yes, and..." way too seriously. You give them a simple prompt, and suddenly they're spinning off into 47 different directions you never saw coming. No design team on earth has the bandwidth to script every possible rabbit hole an LLM might dive down.

Second, today's users expect personalized and context-aware interactions. They want answers that actually relate to their specific situation, not some generic response you wrote three years ago. To deliver a tailored experience for each user (for instance, customizing answers or UI elements based on a user's history and preferences), a static template approach would require creating myriad variants for different scenarios. That might be possible for a handful of cases, but not when you have to serve thousands or millions of unique users.

Take an airline chatbot, for example. Sure, you could manually craft the perfect experience for Bob from accounting who flies to Denver every Tuesday. But now imagine doing that for every single one of your million customers, with their unique preferences, travel patterns, and that one guy who always asks about bringing his emotional support peacock.

The old template approach isn't just inefficient at this scale; it simply won't work. You'd need an army of writers working around the clock just to keep up with the personalization possibilities.

The Solution? Designing for Non-Deterministic Systems

And so, the problems we’ve discussed lead to the need for a change in approach: a move from rigid, structured journeys to something more fluid.

Earlier, the designer controlled almost everything: the dialogue structure, the exact wording of prompts and replies, and the available user options were largely predetermined. The system’s behavior was deterministic, meaning that, given a certain input, it would always respond the same way.

By contrast, an AI powered by an LLM is non-deterministic: it can respond in creative or unanticipated ways. This means the interface must be able to handle responses or actions that weren’t explicitly designed in advance.

[Figure: AI-powered apps are probabilistic in nature and need an interface that can scale]

For designers, this shift means thinking of the AI as an active participant in the interface rather than a static backend. The AI isn’t just generating text; it can potentially decide how to present information to the user. This is a big departure from the past. Instead of designing a fixed sequence of interactions, designers now must ensure the UI is flexible and “loosely coupled” to accommodate whatever the AI comes up with. 

The goal is to create an interface that can gracefully support a conversation that evolves on the fly. In practice, this might involve designing placeholder areas or dynamic containers in the UI that the AI can fill with varying content, be it text, images, a set of buttons, or a formatted table, depending on the context. The better the UI can adjust to the AI’s unpredictable output, the more seamless the user experience will be. This new paradigm turns the conversation interface into something of a living, breathing space that the AI can help shape, rather than a rigid path laid out entirely by the designer.
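
As a rough sketch of such a dynamic container in TypeScript, assuming a small, hypothetical set of block types (a real component library would be far richer):

```typescript
// Content blocks the AI may place into a message container.
type Block =
  | { kind: "text"; body: string }
  | { kind: "image"; url: string; alt: string }
  | { kind: "buttons"; options: string[] }
  | { kind: "table"; headers: string[]; rows: string[][] };

// The container doesn't know in advance what it will hold; it only
// knows how to render each block kind the design team supports.
function renderBlock(block: Block): string {
  switch (block.kind) {
    case "text":
      return `<p>${block.body}</p>`;
    case "image":
      return `<img src="${block.url}" alt="${block.alt}">`;
    case "buttons":
      return block.options.map((o) => `<button>${o}</button>`).join(" ");
    case "table": {
      const head = block.headers.map((h) => `<th>${h}</th>`).join("");
      const rows = block.rows
        .map((r) => `<tr>${r.map((c) => `<td>${c}</td>`).join("")}</tr>`)
        .join("");
      return `<table><tr>${head}</tr>${rows}</table>`;
    }
  }
}
```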

Enter AI-native conversational applications (not the most creative of terms, but hey, we’ve seen these called Journey Intelligence Agents led by Adaptive Experience Orchestrators before, so this isn’t too bad!)

What Is an AI-Native Conversational Application?

AI-native conversational applications are designed from the ground up to leverage an AI’s capabilities, rather than treating the AI as just an add-on to a conventional app. In an AI-native approach, the interface and the AI logic are deeply integrated and co-evolve during an interaction. A key concept in building such applications is the use of generative UI – user interfaces that are dynamically created by AI in response to the current context and user needs, instead of being pre-built entirely by developers. In simple terms, the UI can partially build or adjust itself in real time based on what the AI decides to show. This is a radical break from the traditional model where every pixel of the interface is predetermined by a designer or coded by a developer ahead of time.

[Video: Generative UI helps you render charts, forms, and cards as needed in real time.]

Picture this: You're chatting with an assistant for a technology marketplace and ask to compare two laptops. Instead of getting a wall of text that makes your eyes glaze over, the AI whips up a neat little comparison table right there in the chat. Then you ask to schedule a demo, and boom, the assistant shares an interactive calendar picker inside the chat for you to pick a date, instead of going back and forth in text about availability. No page refreshes, no "please hold while we transfer you to our scheduling system." Just the exact interface you need, exactly when you need it.

Under the hood, AI-native apps often rely on LLM UI components to achieve this flexibility. LLM UI components are essentially a library of pre-designed UI widgets, such as charts, buttons, forms, tables, images, and other interactive elements, that the AI can invoke when appropriate. The idea is that the designers and developers define what components are available and what they look like, but they don’t dictate exactly when each appears. The AI (the LLM) chooses if and when to use these components based on the user’s request. Instead of outputting plain text alone, the AI can output a sort of “instruction” for the interface, for instance: display a bar chart here with these data points, or present these three options as buttons. The application then renders the actual chart or buttons accordingly.
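
For illustration, such an instruction and the allow-list that guards it might look like the sketch below; the JSON format, component names, and fields are hypothetical, not Thesys's actual wire format:

```typescript
// The shape of an "instruction" the model emits instead of plain text.
// Component names and fields here are hypothetical, for illustration.
type UIInstruction =
  | { component: "text"; body: string }
  | { component: "button_group"; options: string[] }
  | {
      component: "bar_chart";
      title: string;
      points: { label: string; value: number }[];
    };

// Parse the model's raw output and refuse anything that isn't a
// component we explicitly offer. This allow-list is what keeps a
// non-deterministic model inside a deterministic component library.
function parseInstruction(raw: string): UIInstruction | null {
  let value: unknown;
  try {
    value = JSON.parse(raw);
  } catch {
    return null; // not JSON: render the raw output as plain text instead
  }
  const allowed = ["text", "button_group", "bar_chart"];
  if (
    typeof value === "object" &&
    value !== null &&
    "component" in value &&
    allowed.includes((value as { component: string }).component)
  ) {
    return value as UIInstruction;
  }
  return null;
}
```

However creative the model gets, only components the team has actually designed and tested can ever reach the screen.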

It's basically like having a conversation where the other person can instantly draw you a diagram, show you a photo, or hand you exactly the right form to fill out. No more "let me describe this thing poorly with words when a picture would make it obvious."

An AI-native conversational app thus feels much more interactive and context-aware than a basic text-based LLM-driven chatbot, because it’s capable of showing the right interface at the right time, not just the same chat transcript for every situation.

The Designer’s Evolving Role

[Figure: How the role of designers will change in the AI era]

Conversation Designers used to spend a lot of time designing specific screens, chat bubbles, and decision-tree flows, essentially creating the exact templates of how an interaction should look and function. Going forward, designers will shift from designing these fixed artifacts to designing for outcomes and frameworks.

So what does this mean practically?

Designers will evolve from people who make specific responses to people who design the system that makes the responses. It means that designers will focus more on what the user needs to achieve and how to enable the AI to help the user achieve it, rather than dictating every step of the interaction.

Welcome to outcome-oriented design, where instead of obsessing over every step of the interaction, you focus on what the user actually needs to accomplish. You define the success criteria and let the AI figure out the best path to get there.

Designers will also become framework builders. Instead of creating a fixed number of finished dialogues or journeys, they create the rules, parameters, and components that the AI will use to generate dialogues or interfaces on the fly. Take flight booking, for example. Instead of writing twelve different conversation scripts for different scenarios (like before), you identify the key factors that matter: dates, budget, seat preferences, whether someone's the type who needs an aisle seat or they'll have a meltdown. The actual wording and presentation might be left to the AI within those guardrails. Essentially, designers provide the guardrails and high-level blueprint, and the AI fills in the details in real time for each user.
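
A sketch of what such a designer-authored framework could look like, with every field name invented purely for illustration:

```typescript
// A designer-authored framework for flight booking: the factors the AI
// must collect and the guardrails it must respect. The point is that
// the designer specifies what matters, not the wording of each turn.
const flightBookingFramework = {
  requiredSlots: ["departureDate", "budget", "seatPreference"],
  tone: "warm, concise, no airline jargon",
  guardrails: {
    alwaysShowTotalPriceBeforeConfirming: true,
    maxClarifyingQuestionsPerTurn: 1, // don't interrogate the user
    escalateToHumanOn: ["refund", "complaint"],
  },
  allowedComponents: ["date_picker", "seat_map", "fare_table", "text"],
};
```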

One concrete example of this shift is how personalization is handled.

Traditionally, a designer might create a one-size-fits-all interface and perhaps add a few preferences or toggles for users. In a generative UI world, the designer thinks about personalization at a deeper level – what if the interface could change completely for different users? This is exciting but also daunting: if every user's interface is slightly different (generated for their context), how do you ensure all those experiences are good? The designer's job thus expands to defining quality standards and testing procedures for a myriad of possible outcomes rather than a handful of static screens. Skills like user research and usability testing become even more critical. Designers will need to test not just a fixed flow, but the behavior of the generative system to ensure it meets user needs across many scenarios.
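
One way to do that at scale is to run automated checks over a large sample of generated interfaces rather than reviewing a handful of screens by hand. Reusing the hypothetical UIInstruction shape sketched earlier:

```typescript
// Designer-defined quality standards applied to every generated
// instruction across a large sample of conversations, instead of
// eyeballing a handful of fixed screens.
function checkInstruction(inst: UIInstruction): string[] {
  const problems: string[] = [];
  if (inst.component === "bar_chart" && inst.points.length < 3) {
    problems.push("chart with fewer than 3 points: plain text is clearer");
  }
  if (inst.component === "button_group" && inst.options.length > 5) {
    problems.push("more than 5 buttons: hard for users to scan");
  }
  if (inst.component === "text" && inst.body.length > 600) {
    problems.push("wall of text: consider a table or tighter copy");
  }
  return problems; // an empty array means this output meets the bar
}
```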

Moreover, designers will work more closely with data and AI specialists. Because a lot of the interface is created by the AI, understanding how the AI makes decisions (and possibly influencing it through prompt engineering or model tuning) becomes part of the design process. For instance, if users are getting irrelevant visuals from the AI, a designer might collaborate with a developer to adjust the AI's prompting or fine-tune the model's responses. In effect, designers become curators and coaches for the AI, guiding it to produce outputs that align with good UX principles.
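
If testing shows users getting irrelevant visuals, for example, the fix might start in the prompt. A sketch of what that guidance could look like (the rules below are invented examples, not a recommended set):

```typescript
// Designer-authored interface rules folded into the system prompt so
// that every generation inherits them.
const uiGuidelines = `
When responding, follow these interface rules:
- Use a table only when comparing two or more items across shared attributes.
- Use a chart only for numeric trends, and never for fewer than 3 data points.
- Prefer plain text for confirmations and error explanations.
- Offer buttons only when there are 2 to 5 mutually exclusive next steps.
`.trim();

const systemPrompt = `You are a helpful shopping assistant.\n\n${uiGuidelines}`;
```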

But while the tools and methods are changing dramatically, the heart of Conversation Design remains the same. Empathy for users, clear communication, and iterative problem-solving are still your superpowers.

In fact, these human-centered design skills become even more critical when part of your interface is generated by an algorithm. You're the one ensuring the technology bends toward user needs, not the other way around.

Many of those micro-level design tasks (like obsessing over button styles or crafting the perfect prompt) might fade from daily work, but your higher-level responsibilities grow. Designers will be defining strategy, setting the vision for user experience outcomes, and then orchestrating AI and technology to realize that vision.

In summary, the designer's role is shifting from being the one who designs the exact conversation (the template) to the one who designs the system that designs the conversation. It's a move to a meta-level of design – designing the design process, so to speak. This can be challenging, as it requires letting go of some control over the minute details, but it also allows designers to impact a much broader range of scenarios and users than ever before. Done right, this approach can scale a designer's influence: instead of crafting one interaction at a time, they set up a framework that can generate countless meaningful interactions. In other words, generative AI won't replace designers; it will augment them, enabling designers to deliver value at a scale and level of personalization that was previously unattainable.

Conclusion

AI-native conversational interfaces (like the ones Thesys lets you design) represent a fundamental change in how we build user experiences. For Conversational AI teams and Conversation Designers, this is both an exciting opportunity and a call to adapt. The old playbook of static templates and predetermined chat flows is giving way to a new approach where interfaces are adaptive, collaborative (with AI), and outcome-focused. By embracing generative UI concepts and leveraging LLM UI components, teams can create applications that feel less like rigid tools and more like responsive, intelligent partners to the user.

Designers will play a crucial role in this evolution. They will need to harness their creativity and user advocacy in new ways, defining the guardrails for AI behavior, ensuring consistency and usability across dynamic interfaces, and continually learning from real user interactions to refine these AI-driven systems. While the AI can generate content and even design elements, it’s the designer who imbues the system with purpose, empathy, and human-centric structure.

Ultimately, the goal remains what it has always been: to provide users with an interface that helps them achieve their goals in an intuitive and engaging way. The difference now is that the interface is no longer a static artifact, but a living experience co-created with artificial intelligence. Designers who understand and embrace this paradigm will be at the forefront of crafting the next generation of conversational products, products that can deliver rich, personalized experiences at scale, without sacrificing the clarity and intentionality of good design.


