Building a Web Application with the agent#

In the previous parts of this series (here and here), you saw how to define tools and create an LLM-based agent capable of answering queries by calling those tools. This part demonstrates how to build an interactive web interface so end-users can interact with this functionality in a browser. Different frameworks can be used, as detailed below.

Creating a webapp#

First, you’ll set up the webapp framework, alongside creating the necessary infrastructure within Dataiku. Choose your preferred framework and follow the necessary steps.

Dash applications can be created as Code webapps with the following steps:

  • Create a new webapp by clicking on </> > Webapps

  • Click + New webapp, choose Code webapp, then click the Dash button, select the An empty Dash app option, and give the webapp a meaningful name

  • In the Code env option of the Settings tab, select the Python code environment with the packages defined in the prerequisites for this tutorial series

  • You’ll also need to add the following Dash-specific packages to the code env:

      dash # tested with 2.18.2
      dash-bootstrap-components # tested with 1.6.0
    

Note

The predefined tools need to be present in a location accessible from code. You can place the file (available here for download) in </> > Libraries. The previous tutorial gives detailed instructions and explains why this approach is useful.

For the same modularity reasons, helper functions shared among the application scripts are also placed in a separate file. Specifically, the functions create_chat_session, get_customer_details, search_company_info and process_tool_calls are included in utils.py (also available for download). Place it in the same location as tools.json, following the same steps.
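To illustrate the shape of these helpers, here is a minimal, hypothetical sketch of a process_tool_calls dispatcher with stubbed tool bodies. The stub return values and the dispatch table are illustrative assumptions; the real functions in utils.py query actual data sources and may differ:

```python
import json

# Stub tool implementations standing in for the real helpers in utils.py
# (hypothetical bodies -- the real functions look up actual customer data).
def get_customer_details(customer_id):
    return {"id": customer_id, "name": "ACME Corp"}

def search_company_info(company_name):
    return {"company": company_name, "sector": "Manufacturing"}

# Map tool names (as the LLM emits them) to local Python functions
TOOLS = {
    "get_customer_details": get_customer_details,
    "search_company_info": search_company_info,
}

def process_tool_calls(tool_calls):
    """Dispatch the first tool call to the matching local function."""
    call = tool_calls[0]
    name = call["function"]["name"]
    args = json.loads(call["function"]["arguments"])
    return json.dumps(TOOLS[name](**args))
```

The key idea is that the LLM only produces a tool name and a JSON argument string; the application remains responsible for executing the corresponding Python function and serializing the result back to the chat session.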

Passing on the task to the agent#

After choosing your webapp framework, the crucial step is implementing the LLM agent functionality. This follows a consistent pattern: whichever framework you use, the chat session is defined the same way.

Similar to the agent in Part 2, the chat session is created by calling the create_chat_session() function. It sets up an LLM via the LLM Mesh with the system prompt.

The application sends the information obtained about the customer to the agent; you’ll see below how each framework collects this information. A loop processes tool calls and responses until no more tool calls are needed, at which point the agent returns its final response.
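The loop described above can be sketched with a scripted stand-in for the chat session. FakeChat and its canned responses are illustrative assumptions made so the pattern runs stand-alone; in the webapp, the session comes from create_chat_session() and the responses from the LLM Mesh:

```python
# Minimal sketch of the tool-call loop, using a scripted fake session
# (hypothetical stand-in -- the real chat object is created via the LLM Mesh).
class FakeResponse:
    def __init__(self, text=None, tool_calls=None):
        self.text = text
        self.tool_calls = tool_calls

class FakeChat:
    """Replays a scripted conversation: one tool call, then a final answer."""
    def __init__(self):
        self._responses = [
            FakeResponse(tool_calls=[{"id": "1", "function": {
                "name": "get_customer_details",
                "arguments": '{"customer_id": "C42"}'}}]),
            FakeResponse(text="Customer C42 is ACME Corp."),
        ]
    def execute(self):
        return self._responses.pop(0)
    def with_tool_calls(self, calls, role): pass
    def with_tool_output(self, out, tool_call_id): pass

def run_agent(chat, process_tool_calls):
    """Loop until the model stops requesting tools, logging each step."""
    log = []
    while True:
        response = chat.execute()
        if not response.tool_calls:
            log.append(f"Final Answer: {response.text}")
            break
        # Echo the tool request back, run the tool, and return its output
        chat.with_tool_calls(response.tool_calls, role="assistant")
        result = process_tool_calls(response.tool_calls)
        chat.with_tool_output(result, tool_call_id=response.tool_calls[0]["id"])
        log.append(f"Tool: {response.tool_calls[0]['function']['name']}")
    return log
```

The same loop body appears, with the real session object, in the full script at the end of this article.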

Calling the agent#

The next step is connecting the user interface to the agent’s functionality. Here’s how each framework runs the agent.

Dash wires everything up with callbacks to process user queries. Using the @app.callback decorator, connect the Search button to a callback function that invokes the agent. When the user enters a customer ID and clicks the button, the update_output() function is triggered; the agent then processes the input via a chat session and returns the final response.

Calling the agent#
@app.callback(
    [Output("result", "value"), Output("chat-state", "data")],
    Input("search", "n_clicks"),
    [State("customer_id", "value"), State("chat-state", "data")],
    prevent_initial_call=True,
    running=[(Output("auto-toast", "is_open"), True, False),
             (Output("search", "disabled"), True, False)]
)
def update_output(n_clicks, customer_id, chat_state):
    """Callback function that handles agent interactions"""
    if not customer_id:
        return no_update, no_update
    
    # Create new chat session
    chat = create_chat_session(llm, project)

Creating the layout#

Finally, to provide a UI for this agent functionality, you’ll build an interface with components that allow users to interact with it. Each framework offers its own approach.

In each framework, the layout gathers the user input (e.g. a message with the customer ID) and passes it to the agent functions. The agent then returns the result to be displayed in the UI.

Create a Dash layout that builds an application like Figure 1, consisting of an Input field for entering a customer ID and an output Textarea.

The callback function described above reads the entered customer ID from the Input field, passes it to the agent via a chat session created with create_chat_session(), and renders the final agent response in the output Textarea.

Dash layout#
# Dash app layout
app.layout = html.Div([
    dbc.Row([html.H2("Using LLM Mesh with an agent in Dash")]),
    dbc.Row(dbc.Label("Please enter the ID of the customer:")),
    dbc.Row([
        dbc.Col(dbc.Input(id="customer_id", placeholder="Customer Id"), width=10),
        dbc.Col(dbc.Button("Search", id="search", color="primary"), width="auto")
    ], justify="between"),
    dbc.Row([dbc.Col(dbc.Textarea(id="result", style={"min-height": "500px"}), width=12)]),
    dbc.Toast(
        [html.P("Searching for information about the customer", className="mb-0"),
         dbc.Spinner(color="primary")],
        id="auto-toast",
        header="Agent working",
        icon="primary",
        is_open=False,
        style={"position": "fixed", "top": "50%", "left": "50%", "transform": "translate(-50%, -50%)"},
    ),
    dcc.Store(id="chat-state"),
    dcc.Store(id="step", data={"current_step": 0}),
], className="container-fluid mt-3")

Figure 1: LLM Agentic – webapp.#

Conclusion#

You now have an application that:

  1. Uses an LLM-based agent to process queries

  2. Imports predefined tools to complement LLM capabilities

  3. Provides a user-friendly web interface

You could enhance this interface by adding a history of previous searches or creating a more detailed and cleaner results display. This example provides a foundation for building more complex LLM-based browser applications, leveraging tool calls and webapp interfaces.
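As an illustration of the first enhancement, a search history could be kept in an extra dcc.Store and updated on each callback run. The helper below is a hypothetical, framework-agnostic sketch of that bookkeeping (the function name and cap are assumptions, not part of the tutorial code):

```python
def append_history(history, customer_id, answer, max_items=10):
    """Append a search record to the history, keeping only the most
    recent max_items entries. `history` may be None on the first run,
    as a freshly created dcc.Store holds no data yet."""
    history = (history or []) + [{"customer_id": customer_id, "answer": answer}]
    return history[-max_items:]
```

In a Dash callback, the current history would arrive as a State on the store and the returned list would be written back as an Output, alongside the agent's answer.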

Dash application code
Longer code block with full script#
import dataiku
from dash import html, dcc, no_update
import dash_bootstrap_components as dbc
from dash.dependencies import Input, Output, State
from utils import get_customer_details, search_company_info, process_tool_calls, create_chat_session

# In Dataiku Code webapps, the `app` object is pre-instantiated by the platform
dbc_css = "https://cdn.jsdelivr.net/gh/AnnMarieW/dash-bootstrap-templates/dbc.min.css"
app.config.external_stylesheets = [dbc.themes.SUPERHERO, dbc_css]

# LLM setup
LLM_ID = ""  # LLM ID for the LLM Mesh connection + model goes here
client = dataiku.api_client()
project = client.get_default_project()
llm = project.get_llm(LLM_ID)

# Dash app layout
app.layout = html.Div([
    dbc.Row([html.H2("Using LLM Mesh with an agent in Dash")]),
    dbc.Row(dbc.Label("Please enter the ID of the customer:")),
    dbc.Row([
        dbc.Col(dbc.Input(id="customer_id", placeholder="Customer Id"), width=10),
        dbc.Col(dbc.Button("Search", id="search", color="primary"), width="auto")
    ], justify="between"),
    dbc.Row([dbc.Col(dbc.Textarea(id="result", style={"min-height": "500px"}), width=12)]),
    dbc.Toast(
        [html.P("Searching for information about the customer", className="mb-0"),
         dbc.Spinner(color="primary")],
        id="auto-toast",
        header="Agent working",
        icon="primary",
        is_open=False,
        style={"position": "fixed", "top": "50%", "left": "50%", "transform": "translate(-50%, -50%)"},
    ),
    dcc.Store(id="chat-state"),
    dcc.Store(id="step", data={"current_step": 0}),
], className="container-fluid mt-3")

@app.callback(
    [Output("result", "value"), Output("chat-state", "data")],
    Input("search", "n_clicks"),
    [State("customer_id", "value"), State("chat-state", "data")],
    prevent_initial_call=True,
    running=[(Output("auto-toast", "is_open"), True, False),
             (Output("search", "disabled"), True, False)]
)
def update_output(n_clicks, customer_id, chat_state):
    """Callback function that handles agent interactions"""
    if not customer_id:
        return no_update, no_update
    
    # Create new chat session
    chat = create_chat_session(llm, project)
    
    # Start conversation about customer
    content = f"Tell me about the customer with ID {customer_id}"
    chat.with_message(content, role="user")
    
    conversation_log = []
    while True:
        response = chat.execute()
        
        if not response.tool_calls:
            # Final answer received
            chat.with_message(response.text, role="assistant")
            conversation_log.append(f"Final Answer: {response.text}")
            break
            
        # Handle tool calls
        chat.with_tool_calls(response.tool_calls, role="assistant")
        tool_call_result = process_tool_calls(response.tool_calls)
        chat.with_tool_output(tool_call_result, tool_call_id=response.tool_calls[0]["id"])
        
        # Log the step
        tool_name = response.tool_calls[0]["function"]["name"]
        tool_args = response.tool_calls[0]["function"]["arguments"]
        conversation_log.append(f"Tool: {tool_name}\nInput: {tool_args}\nResult: {tool_call_result}\n{'-'*50}")
    
    return "\n".join(conversation_log), {"messages": chat.cq["messages"]}