How to build Generative UI applications with C1 by Thesys
In our last posts we covered "What is Generative UI" and the architectural decisions behind building the first Generative UI API.
How Thesys Works using C1 API and GenUI React SDK
As covered previously, Thesys is a developer platform for building generative UI applications. It supports multiple LLM providers, including OpenAI and Anthropic.
There are 2 major components involved:
1) C1 API (LLM Backend)
When you send a prompt to the C1 API, it generates a Domain Specific Language (DSL) based UI specification instead of plain text.
It is fully OpenAI-compatible (same endpoints and parameters), so you can call it with any OpenAI client (JS or Python SDK) just by pointing your baseURL to:
https://api.thesys.dev/v1/embed
You can then call client.chat.completions.create({...}) with your messages. Using a C1 model name (such as c1/anthropic/claude-sonnet-4/v-20250617), the Thesys API will invoke the underlying LLM and return a UI specification.
const client = new OpenAI({
  baseURL: "https://api.thesys.dev/v1/embed",
  apiKey: process.env.THESYS_API_KEY,
});
const completion = await client.chat.completions.create({
  model: "c1/anthropic/claude-sonnet-4/v-20250617",
  messages: [...],
});
Depending on your use case, you can use C1 in two ways:
a) Replacing your existing LLM with C1 for UI-first prompts
If you are building a new app and want minimal latency, you send prompts directly to c1/anthropic/claude-sonnet-4/v-20250617
and it returns a UI spec to be rendered immediately.
b) Layering C1 on top of your current LLM stack
If you already have an LLM pipeline (chatbot/agent), you can take its output and pass it to C1 as a second step just to generate a visual layout. This adds richer UI on top of your existing flows without changing your core logic.
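As a rough sketch, this layering approach is just sequential composition: your existing model answers, and C1 turns the answer into a UI spec. The function names below (callBaseLLM, callC1) are hypothetical placeholders, and real calls would be async; this only shows the wiring:

```typescript
// Hypothetical sketch of the two-step "layering" pattern.
// callBaseLLM and callC1 are placeholders, not real SDK calls.
type LLMCall = (prompt: string) => string;

// Step 1: your existing pipeline produces plain text.
// Step 2: C1 is asked to turn that text into a UI spec.
// Core logic stays untouched; C1 acts purely as a presentation layer.
function layerC1(callBaseLLM: LLMCall, callC1: LLMCall, userPrompt: string): string {
  const answer = callBaseLLM(userPrompt);
  return callC1(`Generate a UI for the following answer:\n${answer}`);
}
```

Because C1 only sees the final text, you can adopt it incrementally without touching your agent or chatbot internals.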
2) GenUI SDK (frontend)
Once you receive the UI spec from C1, the GenUI SDK handles the rendering on the frontend.
This SDK is a React framework (built on Crayon’s component library) that reads the DSL-based UI specification and maps it to visual components.
Its core is the <C1Component> component (and <C1Chat> for chat interfaces), which accepts the C1 API response and turns it into live UI.
Here's sample code (note that both fetch and response.text() must be awaited before the result is passed to the component):
import { C1Component, ThemeProvider } from "@thesysai/genui-sdk";

const App = async () => {
  const response = await fetch("<your-backend-url>");
  const c1Response = await response.text();
  return (
    <ThemeProvider>
      <C1Component c1Response={c1Response} />
    </ThemeProvider>
  );
};
You can read more on the docs and try it live on the playground.
This is how C1 & the React SDK work together to turn an LLM response into a live UI in real time.
A step-by-step guide to building Generative UI using Thesys
In this section, we will walk through the complete process of using Thesys, with Next.js as the framework.
To keep things practical, we will build a working form-based assistant using C1 and the Thesys GenUI SDK.
What this lets you build:
- A prompt-driven UI where users can request actions like:
- “I want to order a scarf”
- “Show me hats in inventory”
- The C1 API then generates the UI specification (inputs, selects, buttons, etc. - all at runtime).
- Form data is submitted back to the assistant through defined tools.
Step 1: Create a Next.js application and get API key
We will be integrating the Thesys C1 API and GenUI SDK into a Next.js application from scratch. If you don't already have a Next.js application, you can create one with this command:
npx create-next-app@latest thesys-demo
Here is the project structure that we are going to follow.
You will need to create a new API key from the C1 Console and set it as an environment variable. Add it to the .env file using the following convention:
THESYS_API_KEY=<your-api-key>
Step 2: Render the Chat UI on the frontend
We will need a basic chat interface where the UI is generated from prompts. Here's how to use <C1Chat> for that in src/app/page.tsx.
<C1Component /> just needs the raw JSON/text from the C1 API, but for chat apps it's recommended to use the <C1Chat> component, as it supports message streams for a complete chat experience.
'use client'

import '@crayonai/react-ui/styles/index.css'
import { C1Chat } from '@thesysai/genui-sdk'

export default function Home() {
  return <C1Chat theme={{ mode: 'dark' }} apiUrl="/api/chat" />
}
Here’s what the code is doing:
- Uses the <C1Chat /> component to render the full chat UI.
- apiUrl points to the backend route that will handle prompt submission.
As you can see, this is the entire frontend, with no manual HTML or form code.
Step 3: Tool for In-Memory Message Store
Form data will be submitted back to the assistant through defined tools. We will be building custom tools like createOrder and getInventory, plus an in-memory message store.
To process form data and maintain conversation context, we will start by creating the message store. It keeps the assistant’s messages per thread.
Install the openai package. It's the official Node.js SDK for interacting with OpenAI's APIs (like GPT-4 or Thesys C1, which is OpenAI-compatible).
npm install openai
Create a new file at src/app/api/chat/messageStore.ts with the following code.
import OpenAI from 'openai'

export type DBMessage = OpenAI.Chat.ChatCompletionMessageParam & {
  id?: string
}

const messagesStore: {
  [threadId: string]: DBMessage[]
} = {}

export const getMessageStore = (id: string) => {
  if (!messagesStore[id]) {
    messagesStore[id] = []
  }
  const messageList = messagesStore[id]
  return {
    addMessage: (message: DBMessage) => {
      messageList.push(message)
    },
    messageList,
    getOpenAICompatibleMessageList: () => {
      return messageList.map((m) => {
        const message = {
          ...m,
        }
        delete message.id
        return message
      })
    },
  }
}
Here’s what each part of the code is doing:
- Maintains an in-memory chat message store, organized by threadId. messagesStore is a plain JS object that maps each threadId to an array of messages.
- Exposes a utility getMessageStore(threadId) that returns helpers:
  - addMessage(message) → adds a new message to the thread.
  - messageList → all stored messages for the thread.
  - getOpenAICompatibleMessageList() → returns the message list without the id field, making it compatible with OpenAI’s ChatCompletion API.
The store is not persistent, as everything is lost on server restart. You can replace this with a database (Redis, PostgreSQL) for production use.
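To see the store's behavior in isolation, here is a minimal, dependency-free sketch of the same pattern (a local Msg type stands in for OpenAI's ChatCompletionMessageParam, and the helper names are shortened):

```typescript
// Minimal stand-in for the OpenAI message param type used above.
type Msg = { role: string; content: string; id?: string };

// Plain object keyed by thread ID, same shape as messagesStore.
const store: { [threadId: string]: Msg[] } = {};

const getStore = (id: string) => {
  if (!store[id]) store[id] = [];
  const list = store[id];
  return {
    addMessage: (m: Msg) => list.push(m),
    // Strip the local `id` so messages match the shape OpenAI expects.
    toOpenAI: () => list.map(({ id: _id, ...rest }) => rest),
  };
};
```

The key design point is the same as in messageStore.ts: the store keeps its own id per message for tracking, but removes it before handing the list to the OpenAI-compatible API.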
Step 4: Tool for Order Management
This tool helps the assistant create and list customer orders for different products like gloves, hats, and scarves. It uses Zod for runtime validation and schema safety, and stores orders in memory.
You don't need to install zod separately because it's a transitive dependency of the GenUI SDK (though declaring it as a direct dependency is safer for production). If you check the SDK's package.json (in node_modules/@thesysai/genui-sdk/package.json), you will likely find something like:
"dependencies": {
"zod": "^3.24.1"
}
Create a new file at src/app/api/chat/orderManagement.ts with the following code.
import { z } from 'zod'

export const gloveOrderSchema = z.object({
  kind: z.literal('gloves'),
  quantity: z.number(),
  unit: z.enum(['boxes', 'pairs']),
  deliveryDate: z.string(),
  shipping: z.enum(['normal', 'express']),
})

export const hatOrderSchema = z.object({
  kind: z.literal('hats'),
  quantity: z.number(),
  variants: z.enum(['top', 'beanie', 'cap']),
  deliveryDate: z.string(),
  shipping: z.enum(['normal', 'express']),
})

export const scarfOrderSchema = z.object({
  kind: z.literal('scarves'),
  quantity: z.number(),
  colors: z.enum(['red', 'blue', 'green', 'yellow', 'purple', 'orange']),
  deliveryDate: z.string(),
  shipping: z.enum(['normal', 'express']),
})

export const orderSchema = z.object({
  order: z.discriminatedUnion('kind', [
    gloveOrderSchema,
    hatOrderSchema,
    scarfOrderSchema,
  ]),
})

type Order = z.infer<typeof orderSchema>

const orders: Order['order'][] = []

export const createOrder = async (orderJson: unknown) => {
  const order = orderSchema.safeParse(orderJson)
  if (!order.success) {
    console.error('Invalid order', order.error)
    return {
      success: false,
      error: order.error.message,
    }
  }
  const deliveryDate = new Date(order.data.order.deliveryDate)
  console.log('Creating order', { ...order.data.order, deliveryDate })
  orders.push(order.data.order)
  return {
    success: true,
  }
}

export const getOrderSchema = z.object({
  number: z.number().optional().default(10),
})

export const getOrders = (params: unknown) => {
  const parsedParams = getOrderSchema.safeParse(params)
  if (!parsedParams.success) {
    console.error('Invalid params', parsedParams.error)
    return {
      success: false,
      error: parsedParams.error.message,
    }
  }
  return {
    success: true,
    orders: orders.slice(0, parsedParams.data.number),
  }
}
Here’s what the code is doing:
- Supports dynamic order creation for gloves, hats, and scarves, each with unique schema fields.
- Uses z.discriminatedUnion('kind', [...]) to discriminate orders based on their kind, so multiple product forms can be reasoned over in one unified schema.
- Defines a top-level orderSchema that wraps the individual schemas and ensures input matches one of the valid order types.
- Creates an in-memory array orders[] to temporarily store incoming orders.
- Exposes a createOrder() function that:
  - Accepts raw input (orderJson)
  - Parses and validates it using Zod
  - Logs and stores the order if valid
  - Returns a success/failure response
- Exposes a getOrders() function that:
  - Accepts an optional number parameter (default: 10)
  - Returns up to n stored orders
  - Is useful for listing recent user actions in chat or UI
- Uses safeParse() for both creation and retrieval to prevent runtime crashes from bad inputs.
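If the Zod API is unfamiliar, the discriminated-union idea can be sketched in plain TypeScript without any library. This is a simplified illustration, not the schemas above: the field checks are reduced to kind and quantity for clarity.

```typescript
type OrderInput = { kind?: unknown; quantity?: unknown };

// The `kind` field is the discriminator: it selects which product
// branch the rest of the input is validated against.
const validKinds = ["gloves", "hats", "scarves"] as const;
type Kind = (typeof validKinds)[number];

function parseOrder(input: OrderInput):
  | { success: true; kind: Kind; quantity: number }
  | { success: false; error: string } {
  if (!validKinds.includes(input.kind as Kind)) {
    return { success: false, error: `unknown kind: ${String(input.kind)}` };
  }
  if (typeof input.quantity !== "number" || input.quantity <= 0) {
    return { success: false, error: "quantity must be a positive number" };
  }
  return { success: true, kind: input.kind as Kind, quantity: input.quantity };
}
```

Zod's safeParse gives you exactly this success/error result shape, plus per-branch field validation, which is why the tutorial leans on it rather than hand-rolled checks.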
Step 5: Tool for Inventory Lookup
Let's create a tool that helps the assistant fetch available product inventory by type: gloves, hats, scarves, or all of them.
Create a new file at src/app/api/chat/inventory.ts with the following code. I'm using mock data since we are just running it locally.
import { z } from 'zod'

export const inventoryQuerySchema = z.object({
  productType: z
    .enum(['gloves', 'hats', 'scarves', 'all'])
    .optional()
    .default('all'),
})

const allInventory = [
  {
    productType: 'gloves',
    quantity: 100,
    priceInUSD: 10.0,
    urgentDeliveryDate: '2025-04-15',
    normalDeliveryDate: '2025-04-20',
    imageSrc: 'https://images.unsplash.com/photo-1617118602199-d3c05ae37ed8',
  },
  {
    productType: 'hats',
    quantity: 200,
    priceInUSD: 15.0,
    urgentDeliveryDate: '2025-04-15',
    normalDeliveryDate: '2025-04-20',
    imageSrc: 'https://images.unsplash.com/photo-1556306535-0f09a537f0a3',
  },
  {
    productType: 'scarves',
    quantity: 300,
    priceInUSD: 5.0,
    urgentDeliveryDate: '2025-04-15',
    normalDeliveryDate: '2025-04-20',
    imageSrc: 'https://images.unsplash.com/photo-1457545195570-67f207084966',
  },
]

export const getInventory = (params: unknown) => {
  const parsedParams = inventoryQuerySchema.safeParse(params)
  if (!parsedParams.success) {
    console.error('Invalid params', parsedParams.error)
    return { success: false, error: parsedParams.error.message }
  }
  return {
    success: true,
    inventory: allInventory.filter(
      (item) =>
        parsedParams.data.productType === 'all' ||
        item.productType === parsedParams.data.productType
    ),
  }
}
Here's what the above code is doing:
- Defines an input schema inventoryQuerySchema using Zod:
  - Accepts an optional productType field ("gloves", "hats", "scarves", or "all")
  - Defaults to "all" if not provided
- Declares a static allInventory array with mock data. Each item includes quantity, price, delivery dates, and an image (in production this would come from a DB or API).
- Exports a getInventory(params) function that:
  - Validates input using safeParse()
  - Filters inventory by type if specified
  - Returns a success: true response with the filtered items
  - Returns an error message if the input is invalid
In response to queries like “What hats are in stock?” or “Show me all products”, this will show available items with pricing and shipping info.
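The filtering behavior in getInventory() reduces to a small pure function. Here's an isolated sketch of that logic, with the Item type trimmed to the fields that matter for filtering:

```typescript
type Item = { productType: string; quantity: number };

// "all" passes everything through; any other value filters by exact match.
// This mirrors the filter expression inside getInventory().
function filterInventory(items: Item[], productType: string = "all"): Item[] {
  return items.filter(
    (item) => productType === "all" || item.productType === productType
  );
}
```

Defaulting the parameter to "all" matches the Zod schema's .default('all'), so callers who omit productType get the full inventory.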
Step 6: Backend using Chat API Route
Now we need to build a backend route (/api/chat) to handle AI interactions using OpenAI-compatible API calls, streaming responses with support for custom tools.
Before we build the backend, let's look at system prompts, a key part of controlling C1's behavior.
Although C1 is powerful on its own, guiding it with well-crafted prompts is important for getting consistent, tailored UIs. Thesys supports system prompts to:
- Define the assistant’s role (like “you are a helpful inventory manager”) or a specific tone/language
- Specify rendering behavior (like always including images in a list)
- Enforce formatting rules using <ui_rules> tags
Read the official docs for a step-by-step guide on how to add a system prompt to your AI application.
Here's an example from the official docs. In practice, you would prepend this prompt to every API call:
const systemPrompt = `
You are a data assistant. Use tables for related info,
charts for comparisons, and carousels for lists of items.
`;

const resp = await client.chat.completions.create({
  model: 'c1-nightly',
  messages: [
    { role: 'system', content: systemPrompt },
    { role: 'user', content: userQuery }
  ],
  stream: true
});
Here's the code for src/app/api/chat/route.ts.
import { NextRequest, NextResponse } from 'next/server'
import OpenAI from 'openai'
import { transformStream } from '@crayonai/stream'
import { DBMessage, getMessageStore } from './messageStore'
import {
  createOrder,
  getOrders,
  getOrderSchema,
  orderSchema,
} from './orderManagement'
import { zodToJsonSchema } from 'zod-to-json-schema'
import { JSONSchema } from 'openai/lib/jsonschema.mjs'
import { getInventory, inventoryQuerySchema } from './inventory'

const SYSTEM_MESSAGE = `
You are a helpful assistant who can help with placing orders and checking inventory.
<ui_rules>
- When showing inventory, use the list component to show the inventory along with its image.
  Always add the imageSrc to the list component.
</ui_rules>
`

export async function POST(req: NextRequest) {
  const { prompt, threadId, responseId } = (await req.json()) as {
    prompt: DBMessage
    threadId: string
    responseId: string
  }

  const client = new OpenAI({
    baseURL: 'https://api.thesys.dev/v1/embed/',
    apiKey: process.env.THESYS_API_KEY,
  })

  const messageStore = getMessageStore(threadId)
  if (messageStore.getOpenAICompatibleMessageList().length === 0) {
    messageStore.addMessage({
      role: 'system',
      content: SYSTEM_MESSAGE,
    })
  }
  messageStore.addMessage(prompt)

  const llmStream = client.chat.completions.runTools({ // OpenAI SDK v5+
    model: 'c1-nightly',
    messages: messageStore.getOpenAICompatibleMessageList(),
    stream: true,
    tools: [
      {
        type: 'function',
        function: {
          name: 'createOrder',
          description: 'Create an order',
          parameters: zodToJsonSchema(orderSchema) as JSONSchema,
          function: createOrder,
          parse: JSON.parse,
        },
      },
      {
        type: 'function',
        function: {
          name: 'getOrders',
          description: 'Get all orders',
          parameters: zodToJsonSchema(getOrderSchema) as JSONSchema,
          function: getOrders,
          parse: JSON.parse,
        },
      },
      {
        type: 'function',
        function: {
          name: 'getInventory',
          description: 'Get the current inventory',
          parameters: zodToJsonSchema(inventoryQuerySchema) as JSONSchema,
          function: getInventory,
          parse: JSON.parse,
        },
      },
    ],
  })

  const responseStream = transformStream(
    llmStream,
    (chunk) => {
      return chunk.choices[0].delta.content
    },
    {
      onEnd: ({ accumulated }) => {
        const message = accumulated.filter((message) => message).join('')
        messageStore.addMessage({
          role: 'assistant',
          content: message,
          id: responseId,
        })
      },
    }
  ) as ReadableStream<string>

  return new NextResponse(responseStream, {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache, no-transform',
      Connection: 'keep-alive',
    },
  })
}
Here's what the above code is doing:
- Handles incoming POST requests to /api/chat with a user prompt, thread ID, and response ID.
- Initializes a system prompt that tells the assistant how to behave and how to render UI (always include imageSrc when showing inventory).
- Maintains in-memory chat history using getMessageStore() to keep messages thread-specific and OpenAI-compatible.
- Calls Thesys C1's Chat Completion API (/v1/embed/) using the runTools() method, which:
  - Streams LLM responses in real time.
  - Supports multiple custom tools:
    - createOrder: for placing an order
    - getOrders: for listing previous orders
    - getInventory: for querying stock info
- Registers each tool with a zod schema converted to JSON Schema via zodToJsonSchema() so the LLM knows the input/output structure.
- Streams the assistant's output back to the client using transformStream():
  - Processes the stream to extract LLM content.
  - Accumulates the final response and stores it in the message history (tagged with responseId).
Finally, it returns a text/event-stream response, allowing the frontend to render content progressively as it’s generated.
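The accumulation step in onEnd boils down to extracting each chunk's delta.content, dropping empty pieces, and joining them into one message. A dependency-free sketch of that reduction (with a simplified Chunk type standing in for the OpenAI stream chunk):

```typescript
// Simplified shape of the streamed chunks the route reads content from.
type Chunk = { choices: { delta: { content?: string | null } }[] };

// Mirrors transformStream's extract-then-accumulate behavior:
// pull delta.content from each chunk, drop empties, join into one string.
function accumulate(chunks: Chunk[]): string {
  return chunks
    .map((chunk) => chunk.choices[0]?.delta.content ?? "")
    .filter((part) => part)
    .join("");
}
```

This joined string is what gets stored back into the message store as the assistant's final message, so the next turn's context contains the complete response rather than raw chunks.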
Step 7: Output
To start the local development server, run npm run dev in your terminal.
Now that we’ve wired up the frontend, backend and all supporting tools, let’s look at the final result.
Here's what the user interface looks like, styled using crayonai/react-ui.
You can interact with the assistant naturally, with something like hi or I want to place an order. The LLM understands the prompt and dynamically renders a proper UI based on your query.
Each tool is schema-bound using Zod and responds in real-time with valid UI blocks.
As you can see, it lists the options and even provides buttons: Place New Order, Check Inventory, and View Orders.
Let’s check what’s in stock. The assistant fetches inventory using the getInventory() tool and renders cards with product info, prices, and delivery options (hardcoded values).
Clicking on a product like “Hat” generates a complete order form, built in real-time by the C1 API based on the schema we provided.
The assistant knows what inputs are needed and what types they should be, and even adds UX elements like dropdowns, date pickers, and radio groups, which are then rendered by the GenUI SDK.
Once submitted, the form data is passed back to the assistant and handled by the appropriate backend function (createOrder()), which stores the result and confirms the action.
It will also list out other options to place another order and check existing orders.
You didn’t write a single HTML form, yet your users get fully working input fields and dynamic flows. This is the power of Thesys generative UI: it uses modular atomic elements to construct complete interfaces tailored to each use case, all at run time and without manual coding. The generated interfaces are responsive, interactive, consistent in their design, and fulfil user intent across multiple contexts.
All major AI stacks you can use with Thesys
Thesys is designed to plug into modern AI stacks and workflows, such as:
- LLM → Anthropic, OpenAI (preview), custom models (via structured JSON)
- Client SDKs → standard OpenAI-compatible clients (Python, JS)
- Chaining tools → LangChain and LlamaIndex pipelines, with C1 rendering the UI
- Agent frameworks → AutoGPT/BabyAGI agents can call tools and render UI
- Backend → any REST framework in Python or Node.js (FastAPI, Django, Next.js API routes)
- Frontend → any React framework, such as Next.js
- Deployment → Vercel, Netlify, AWS, GCP, or your own infrastructure
Ready to Build the Future of UI?
Thesys unlocks a new paradigm where interfaces adapt, respond, and build themselves from context. Instead of static layouts that look the same for every user, your applications can now generate personalised experiences in real-time.
Get Started in Minutes
Ready to transform how your users interact with AI? Start building with Thesys today:
- Try the Interactive Playground - See generative UI in action with live examples
- Read the Documentation - Complete guides and API references to get you up and running
- Get Your API Key - Start with free $10 credits
Join developers already building the next generation of adaptive interfaces. Your users deserve experiences that understand them - not just respond to them.