Creating a Gradio application using an LLM-based agent#
In this tutorial, you will learn how to build an agent application using Gradio. The application retrieves customer and company information based on a login. It relies on two tools: one tool retrieves a user's name, position, and company from a dataset, given that user's login; a second tool searches the Internet for information about the company.
This tutorial is based on the Building and using an agent with Dataiku's LLM Mesh and Langchain tutorial. It uses the same tools and agents in a similar context. If you have already followed that tutorial, you can jump to the Creating the Gradio application section.
Prerequisites#
Administrator permission to build the template
An LLM connection configured
A Dataiku version > 12.6.2
A code environment (named gradio-and-agents) with the following packages:
gradio <4
langchain==0.2.0
duckduckgo_search==6.1.0
Building the Code Studio template#
If you already know how to build a Code Studio template using Gradio and a dedicated code environment, simply create one named gradio-and-agent.
If you don't know how to do it, follow these instructions:
Go to the Code Studios tab in the Administration menu, click the +Create Code Studio template button, and choose a meaningful label (gradio_template, for example).
Click on the Definition tab.
Add a new Visual Studio Code block. This block will allow you to edit your Gradio application in a dedicated Code Studio.
Add the Add Code environment block, and choose the code environment previously created (gradio-and-agents).
Add the Gradio block and select the code environment previously imported, as shown in Figure 1.
Click the Save button.
Click the Build button to build the template.
Your Code Studio template is ready to be used in a project.
Creating the Agent application#
Preparing the data#
The application relies on a dataset that stores a user's name, position, and company, keyed by the user's ID, so you first need to create it.
| id | name | job | company |
|---|---|---|---|
| tcook | Tim Cook | CEO | Apple |
| snadella | Satya Nadella | CEO | Microsoft |
| jbezos | Jeff Bezos | CEO | Amazon |
| fdouetteau | Florian Douetteau | CEO | Dataiku |
| wcoyote | Wile E. Coyote | Business Developer | ACME |

Table 1, which can be downloaded here, represents such data.
Create an SQL dataset named pro_customers_sql by uploading the CSV file and then using a Sync recipe to store the data in an SQL connection.
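If you prefer to prepare the data with code rather than uploading a CSV, the sketch below writes the rows of Table 1 into the dataset. It assumes a dataset named pro_customers_sql has already been created on an SQL connection in the Flow:

```python
import dataiku
import pandas as pd

# Rows of Table 1
customers = pd.DataFrame(
    [
        ("tcook", "Tim Cook", "CEO", "Apple"),
        ("snadella", "Satya Nadella", "CEO", "Microsoft"),
        ("jbezos", "Jeff Bezos", "CEO", "Amazon"),
        ("fdouetteau", "Florian Douetteau", "CEO", "Dataiku"),
        ("wcoyote", "Wile E. Coyote", "Business Developer", "ACME"),
    ],
    columns=["id", "name", "job", "company"],
)

# Assumes the pro_customers_sql dataset already exists on an SQL connection
dataiku.Dataset("pro_customers_sql").write_with_schema(customers)
```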
Creating utility functions#
Be sure to have a valid LLM ID before creating your Gradio application. The documentation provides instructions on obtaining an LLM ID.
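A quick way to find a valid LLM ID is to list the LLMs available in your project from a notebook. A minimal sketch, assuming the public API client is available and at least one LLM connection is configured:

```python
import dataiku

# List the LLMs reachable from the current project and print their IDs
project = dataiku.api_client().get_default_project()
for llm in project.list_llms():
    print(f"{llm.description} (id: {llm.id})")
```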
Create a new project, and click on </> > Code Studios.
Click the +New Code Studio button, choose the previously created template, choose a meaningful name, click the Create button, and then click the Start Code Studio button.
To edit the code of your Gradio application, click the highlighted tabs (VS Code), as shown in Figure 2.
Select the gradio subdirectory in the code_studio-versioned directory. Dataiku provides a sample application in the file app.py. You will modify this code to build the application.
The first thing to do is to define the different tools the application needs. There are various ways of defining a tool. The most precise one is based on defining classes that encapsulate the tool. Alternatively, you can use the @tool decorator or the StructuredTool.from_function function, but those approaches may require more work when the tools are later used in a chain; a minimal @tool sketch is shown below.
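For illustration only, here is a hedged sketch of what the customer-lookup tool could look like with the @tool decorator; the tutorial itself uses the class-based approach shown in Code 1, and the query logic is omitted here.

```python
from langchain.tools import tool

@tool
def get_customer_info(id: str) -> str:
    """Provide the name, job title and company of a customer, given the customer's ID."""
    # The docstring above becomes the tool description the agent reads.
    # A real implementation would query the pro_customers_sql dataset, as in Code 1.
    return f"No information can be found about the customer {id}"
```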
To define a tool using classes, there are two steps to follow:
Define the interface: which parameters your tool takes.
Define the code: what the tool does when it is executed.
Code 1 shows how to describe a tool using classes. The highlighted lines define the tool’s interface. This simple tool takes a customer ID as an input parameter and runs a query on the SQL Dataset.
class CustomerInfo(BaseModel):
    """Parameter for GetCustomerInfo"""
    id: str = Field(description="customer ID")


class GetCustomerInfo(BaseTool):
    """Gathering customer information"""

    name = "GetCustomerInfo"
    description = "Provide a name, job title and company of a customer, given the customer's ID"
    args_schema: Type[BaseModel] = CustomerInfo

    def _run(self, id: str):
        dataset = dataiku.Dataset(DATASET_NAME)
        table_name = dataset.get_location_info().get('info', {}).get('table')
        executor = SQLExecutor2(dataset=dataset)
        eid = id.replace("'", "\\'")
        query_reader = executor.query_to_iter(
            f"""SELECT name, job, company FROM "{table_name}" WHERE id = '{eid}'""")
        for (name, job, company) in query_reader.iter_tuples():
            return f"The customer's name is \"{name}\", holding the position \"{job}\" at the company named {company}"
        return f"No information can be found about the customer {id}"

    def _arun(self, id: str):
        raise NotImplementedError("This tool does not support async")
Note
The SQL query might be written differently depending on your SQL Engine.
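If identifier quoting gets in the way on your engine, one engine-agnostic workaround is to skip hand-written SQL and filter the dataset with pandas instead. This is only a sketch (reasonable here because the dataset is tiny), reusing the DATASET_NAME constant defined in app.py:

```python
import dataiku

def lookup_customer(customer_id: str) -> str:
    # Load the (small) dataset as a pandas DataFrame and filter it in memory,
    # avoiding engine-specific SQL quoting altogether.
    df = dataiku.Dataset(DATASET_NAME).get_dataframe()
    match = df[df["id"] == customer_id]
    if match.empty:
        return f"No information can be found about the customer {customer_id}"
    row = match.iloc[0]
    return (f"The customer's name is \"{row['name']}\", holding the position "
            f"\"{row['job']}\" at the company named {row['company']}")
```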
Similarly, Code 2 shows how to create a tool that searches the Internet for information on a company.
class CompanyInfo(BaseModel):
    """Parameter for the GetCompanyInfo"""
    name: str = Field(description="Company's name")


class GetCompanyInfo(BaseTool):
    """Class for gathering the company information"""

    name = "GetCompanyInfo"
    description = "Provide general information about a company, given the company's name."
    args_schema: Type[BaseModel] = CompanyInfo

    def _run(self, name: str):
        results = DDGS().answers(name + " (company)")
        result = "Information found about " + name + ": " + results[0]["text"] + "\n" \
            if len(results) > 0 and "text" in results[0] \
            else None
        if not result:
            results = DDGS().answers(name)
            result = "Information found about " + name + ": " + results[0]["text"] + "\n" \
                if len(results) > 0 and "text" in results[0] \
                else "No information can be found about the company " + name
        return result
Code 3 shows how to declare and use these tools.
tools = [GetCustomerInfo(), GetCompanyInfo()]
tool_names = [tool.name for tool in tools]
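Before wiring these tools into an agent, it can be useful to sanity-check them individually, for example from a notebook or the Code Studio terminal. A minimal check, using an ID from Table 1, might look like this:

```python
# Call each tool directly, outside of any agent
print(GetCustomerInfo().invoke({"id": "tcook"}))
print(GetCompanyInfo().invoke({"name": "Dataiku"}))
```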
Once all the tools are defined, you are ready to create your agent. An agent is based on a prompt and uses some tools and an LLM. Code 4 creates an agent and the associated agent executor.
# Initializes the agent
prompt = ChatPromptTemplate.from_template(
"""Answer the following questions as best you can. You have only access to the following tools:
{tools}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: {input}
Thought:{agent_scratchpad}""")
agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools,
                               verbose=True, return_intermediate_steps=True, handle_parsing_errors=True)
Creating the Gradio application#
You now have a working agent; let’s build the Gradio application. This first version has an input Textbox for entering a customer ID and displays the result in an output Textbox. Thus, the code is straightforward. You need to connect your agent to the Gradio framework, as shown in Code 5.
def search_V1(customer_id):
    """
    Search information about a customer
    Args:
        customer_id: customer ID
    Returns:
        the agent result
    """
    return agent_executor.invoke({
        "input": f"""Give all the professional information you can about the customer with ID: {customer_id}.
        Also include information about the company if you can.""",
        "tools": tools,
        "tool_names": tool_names
    })['output']
demo = gr.Interface(
    fn=search_V1,
    inputs=gr.Textbox(label="Enter a customer ID to get more information", placeholder="ID Here..."),
    outputs="text"
)

browser_path = os.getenv("DKU_CODE_STUDIO_BROWSER_PATH_7860")
# replacing env var keys in browser_path with their values
env_var_pattern = re.compile(r'(\${(.*)})')
env_vars = env_var_pattern.findall(browser_path)
for env_var in env_vars:
    browser_path = browser_path.replace(env_var[0], os.getenv(env_var[1], ''))

# WARNING: make sure to use the same params as the ones defined below when calling the launch method,
# otherwise your app might not respond!
demo.queue().launch(server_port=7860, root_path=browser_path)
This will lead to an application like the one shown in Figure 3.
Going further#
You now have an application that takes a customer ID as input and displays the result. However, the result appears only once the agent has finished, and you don't see the actions the agent took to obtain it. The second version of the application (Code 6) displays the process as it goes along.
async def search_V2(customer_id):
    """
    Search information about a customer
    Args:
        customer_id: customer ID
    Returns:
        the agent result
    """
    iterator = agent_executor.stream({
        "input": f"""Give all the professional information you can about the customer with ID: {customer_id.strip()}.
        Also include information about the company if you can.""",
        "tools": tools,
        "tool_names": tool_names
    })
    for i in iterator:
        if "output" in i:
            yield i['output']
        else:
            yield i
demo = gr.Interface(
    fn=search_V2,
    inputs=gr.Textbox(label="Enter a customer ID to get more information", placeholder="ID Here..."),
    outputs="text"
)

browser_path = os.getenv("DKU_CODE_STUDIO_BROWSER_PATH_7860")
# replacing env var keys in browser_path with their values
env_var_pattern = re.compile(r'(\${(.*)})')
env_vars = env_var_pattern.findall(browser_path)
for env_var in env_vars:
    browser_path = browser_path.replace(env_var[0], os.getenv(env_var[1], ''))

# WARNING: make sure to use the same params as the ones defined below when calling the launch method,
# otherwise your app might not respond!
demo.queue().launch(server_port=7860, root_path=browser_path)
Code 7 shows how to display what the agent is doing in a more comprehensive way. Figure 5 shows the result of this code.
async def search_V3(customer_id):
    """
    Search information about a customer
    Args:
        customer_id: customer ID
    Returns:
        the agent result
    """
    actions = ""
    steps = ""
    output = ""
    iterator = agent_executor.stream({
        "input": f"""Give all the professional information you can about the customer with ID: {customer_id.strip()}.
        Also include information about the company if you can.""",
        "tools": tools,
        "tool_names": tool_names
    })
    for i in iterator:
        if "output" in i:
            output = i['output']
        elif "actions" in i:
            for action in i["actions"]:
                actions = action.log
        elif "steps" in i:
            for step in i['steps']:
                steps = step.observation
        yield [actions, steps, output]
demo = gr.Interface(
    fn=search_V3,
    inputs=gr.Textbox(label="Enter a customer ID to get more information", placeholder="ID Here..."),
    outputs=[
        gr.Textbox(label="Agent thought"),
        gr.Textbox(label="Tool Result"),
        gr.Textbox(label="Final result")]
)

browser_path = os.getenv("DKU_CODE_STUDIO_BROWSER_PATH_7860")
# replacing env var keys in browser_path with their values
env_var_pattern = re.compile(r'(\${(.*)})')
env_vars = env_var_pattern.findall(browser_path)
for env_var in env_vars:
    browser_path = browser_path.replace(env_var[0], os.getenv(env_var[1], ''))

# WARNING: make sure to use the same params as the ones defined below when calling the launch method,
# otherwise your app might not respond!
demo.queue().launch(server_port=7860, root_path=browser_path)
If you want to test different LLMs with this application, follow these steps (a sketch is shown below):
Use the list_llms() method (as shown here) and store the result in a list.
Use this list to populate a dropdown.
Create a new agent each time the user changes the selection.
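Here is a minimal sketch of that idea. The helper names build_agent_executor and search_with_llm are hypothetical, list_llms() is assumed to be reachable through the public API client, and the tools, tool_names, and prompt variables are those defined earlier in app.py:

```python
import dataiku
import gradio as gr
from dataiku.langchain.dku_llm import DKUChatLLM
from langchain.agents import AgentExecutor, create_react_agent

# Populate the dropdown with the LLM IDs available in the project
project = dataiku.api_client().get_default_project()
llm_ids = [llm.id for llm in project.list_llms()]

def build_agent_executor(llm_id):
    # Hypothetical helper: rebuild the agent for the selected LLM,
    # reusing the tools and prompt defined earlier in app.py
    llm = DKUChatLLM(llm_id=llm_id, temperature=0)
    agent = create_react_agent(llm, tools, prompt)
    return AgentExecutor(agent=agent, tools=tools, verbose=True,
                         return_intermediate_steps=True, handle_parsing_errors=True)

def search_with_llm(llm_id, customer_id):
    executor = build_agent_executor(llm_id)
    return executor.invoke({
        "input": f"Give all the professional information you can about the customer with ID: {customer_id}.",
        "tools": tools,
        "tool_names": tool_names
    })['output']

demo = gr.Interface(
    fn=search_with_llm,
    inputs=[
        gr.Dropdown(choices=llm_ids, label="LLM to use"),
        gr.Textbox(label="Enter a customer ID to get more information", placeholder="ID Here..."),
    ],
    outputs="text"
)
```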
There are many other ways to improve this application, but you now have enough knowledge to adapt it to your needs.
Here are the complete versions of the code presented in this tutorial:
app.py
import gradio as gr
import os
import re
import dataiku
from dataiku.langchain.dku_llm import DKUChatLLM
from dataiku import SQLExecutor2
from langchain.agents import AgentExecutor
from langchain.agents import create_react_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain.tools import BaseTool, StructuredTool
from langchain.pydantic_v1 import BaseModel, Field
from typing import Optional, Type
from duckduckgo_search import DDGS
LLM_ID = "" # Fill in with a valid LLM ID
DATASET_NAME = "pro_customers_sql"
VERSION = "V3"
llm = DKUChatLLM(llm_id=LLM_ID, temperature=0)
class CustomerInfo(BaseModel):
    """Parameter for GetCustomerInfo"""
    id: str = Field(description="customer ID")


class GetCustomerInfo(BaseTool):
    """Gathering customer information"""

    name = "GetCustomerInfo"
    description = "Provide a name, job title and company of a customer, given the customer's ID"
    args_schema: Type[BaseModel] = CustomerInfo

    def _run(self, id: str):
        dataset = dataiku.Dataset(DATASET_NAME)
        table_name = dataset.get_location_info().get('info', {}).get('table')
        executor = SQLExecutor2(dataset=dataset)
        eid = id.replace("'", "\\'")
        query_reader = executor.query_to_iter(
            f"""SELECT name, job, company FROM "{table_name}" WHERE id = '{eid}'""")
        for (name, job, company) in query_reader.iter_tuples():
            return f"The customer's name is \"{name}\", holding the position \"{job}\" at the company named {company}"
        return f"No information can be found about the customer {id}"

    def _arun(self, id: str):
        raise NotImplementedError("This tool does not support async")
class CompanyInfo(BaseModel):
    """Parameter for the GetCompanyInfo"""
    name: str = Field(description="Company's name")


class GetCompanyInfo(BaseTool):
    """Class for gathering the company information"""

    name = "GetCompanyInfo"
    description = "Provide general information about a company, given the company's name."
    args_schema: Type[BaseModel] = CompanyInfo

    def _run(self, name: str):
        results = DDGS().answers(name + " (company)")
        result = "Information found about " + name + ": " + results[0]["text"] + "\n" \
            if len(results) > 0 and "text" in results[0] \
            else None
        if not result:
            results = DDGS().answers(name)
            result = "Information found about " + name + ": " + results[0]["text"] + "\n" \
                if len(results) > 0 and "text" in results[0] \
                else "No information can be found about the company " + name
        return result

    def _arun(self, name: str):
        raise NotImplementedError("This tool does not support async")
tools = [GetCustomerInfo(), GetCompanyInfo()]
tool_names = [tool.name for tool in tools]
# Initializes the agent
prompt = ChatPromptTemplate.from_template(
"""Answer the following questions as best you can. You have only access to the following tools:
{tools}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: {input}
Thought:{agent_scratchpad}""")
agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools,
                               verbose=True, return_intermediate_steps=True, handle_parsing_errors=True)
def search_V1(customer_id):
    """
    Search information about a customer
    Args:
        customer_id: customer ID
    Returns:
        the agent result
    """
    return agent_executor.invoke({
        "input": f"""Give all the professional information you can about the customer with ID: {customer_id}.
        Also include information about the company if you can.""",
        "tools": tools,
        "tool_names": tool_names
    })['output']
async def search_V2(customer_id):
    """
    Search information about a customer
    Args:
        customer_id: customer ID
    Returns:
        the agent result
    """
    iterator = agent_executor.stream({
        "input": f"""Give all the professional information you can about the customer with ID: {customer_id.strip()}.
        Also include information about the company if you can.""",
        "tools": tools,
        "tool_names": tool_names
    })
    for i in iterator:
        if "output" in i:
            yield i['output']
        else:
            yield i
async def search_V3(customer_id):
    """
    Search information about a customer
    Args:
        customer_id: customer ID
    Returns:
        the agent result
    """
    actions = ""
    steps = ""
    output = ""
    iterator = agent_executor.stream({
        "input": f"""Give all the professional information you can about the customer with ID: {customer_id.strip()}.
        Also include information about the company if you can.""",
        "tools": tools,
        "tool_names": tool_names
    })
    for i in iterator:
        if "output" in i:
            output = i['output']
        elif "actions" in i:
            for action in i["actions"]:
                actions = action.log
        elif "steps" in i:
            for step in i['steps']:
                steps = step.observation
        yield [actions, steps, output]
if VERSION == "V1":
    demo = gr.Interface(
        fn=search_V1,
        inputs=gr.Textbox(label="Enter a customer ID to get more information", placeholder="ID Here..."),
        outputs="text"
    )
if VERSION == "V2":
    demo = gr.Interface(
        fn=search_V2,
        inputs=gr.Textbox(label="Enter a customer ID to get more information", placeholder="ID Here..."),
        outputs="text"
    )
if VERSION == "V3":
    demo = gr.Interface(
        fn=search_V3,
        inputs=gr.Textbox(label="Enter a customer ID to get more information", placeholder="ID Here..."),
        outputs=[
            gr.Textbox(label="Agent thought"),
            gr.Textbox(label="Tool Result"),
            gr.Textbox(label="Final result")]
    )
browser_path = os.getenv("DKU_CODE_STUDIO_BROWSER_PATH_7860")
# replacing env var keys in browser_path with their values
env_var_pattern = re.compile(r'(\${(.*)})')
env_vars = env_var_pattern.findall(browser_path)
for env_var in env_vars:
    browser_path = browser_path.replace(env_var[0], os.getenv(env_var[1], ''))

# WARNING: make sure to use the same params as the ones defined below when calling the launch method,
# otherwise your app might not respond!
demo.queue().launch(server_port=7860, root_path=browser_path)