Assistant Configuration Fields
Deepdesk Assistants require proper configuration to effectively support your customer service operations. This chapter details each configuration field, providing guidance on best practices and implementation strategies.
Note: To understand how Deepdesk uses your configuration to construct prompts behind the scenes, see section How Deepdesk Constructs Assistant Prompts.
Assistant Name and Code
Assistant Name
The name you choose for your assistant should be descriptive and clearly indicate its purpose. A well-named assistant makes it easier for agents and administrators to understand its function at a glance.
Best Practices:
- Use concise, descriptive names (e.g., "Customer Verification Assistant" rather than "Assistant #1")
- Include the function in the name (e.g., "Sales NBA Assistant" for Next Best Action recommendations)
Assistant Code
The code is a unique identifier for your assistant that will be used across different tools and systems within Deepdesk. Unlike the name, the code cannot be changed after creation.
Requirements:
- Must be unique across all assistants in your Deepdesk environment
- Should use lowercase letters, numbers, and hyphens
- Avoid spaces and special characters
- Examples: customer-verification, sales-assistant, conversation-summarizer
Important: The assistant code is used as a reference when calling assistants from other assistants, in API calls, and in assistant routes. Choose a consistent naming convention that will scale with your growing collection of assistants.
Assistant Instructions
The instructions field is where you define what your assistant should do and how it should behave. This is essentially the "prompt" that guides the AI model's responses.
Writing Effective Prompts
When crafting instructions, follow this structure for the best results:
- Role and Purpose: Define who the assistant is and what its primary goal is
You are a customer service assistant helping agents verify customer identities. Your goal is to...
- Task Instructions: Provide clear guidelines on what the assistant should do
Analyze the conversation to detect when a customer needs to be verified.
Look for specific identity information such as...
- Tool Calling Instructions: If your assistant uses tools, be explicit about when and how to use them
Use the 'customer-verification-api' tool when you need to validate a customer's identity.
Call the 'knowledge-search' tool if you need to find specific product information.
When the customer asks about their account status, use the 'account-status' assistant.
- Reasoning Steps: Include step-by-step reasoning processes
First, check if the customer has provided their account number.
Next, determine if additional verification is needed by...
- Response Format: Specify how responses should be structured
Respond with a JSON object containing:
- verification_status: "complete" or "needed"
- missing_information: array of required items still needed
- Examples: Include examples of ideal responses
Example:
When a customer says "I want to check my balance", you should respond with...
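Putting the pieces together, a complete instructions field might look like the following sketch. It simply assembles the fragments above; the tool and assistant codes ('customer-verification-api', 'account-status') are the illustrative ones used throughout this section:
You are a customer service assistant helping agents verify customer identities. Your goal is to confirm that the person in the conversation matches your company's records before any account details are shared.
Analyze the conversation to detect when a customer needs to be verified. Look for specific identity information such as an account number, name, or date of birth.
Use the 'customer-verification-api' tool when you need to validate a customer's identity. When the customer asks about their account status, use the 'account-status' assistant.
First, check if the customer has provided their account number. Next, determine if additional verification is needed.
Respond with a JSON object containing:
- verification_status: "complete" or "needed"
- missing_information: array of required items still needed
Example: When a customer says "I want to check my balance", respond with {"verification_status": "needed", "missing_information": ["account_number"]}.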
Adding Tools to Your Assistant
See also: Available Tools and How to Use Them
In the instructions section, you'll also select which tools your assistant can use. Tools extend your assistant's capabilities beyond simply responding with text:
- Knowledge bases: Allow assistants to search for information
- API tools: Enable integration with external systems
- Other assistants: Let your assistant call more specialized assistants
Choose tools that align with your assistant's purpose, and include specific instructions on when and how to use each tool.
Important: Every assistant you create automatically becomes available as a tool that other assistants can use. This enables you to build modular, specialized assistants that can work together to accomplish complex tasks. When an assistant is used as a tool, its description and parameters fields become critical for proper integration.
Assistant Settings
Assistant settings control how your assistant processes inputs and formats responses.
Response Format
Choose how your assistant will structure its output:
- Plain Text: Simple text responses with no specific structure
  - Best for direct answers and simple interactions
  - Use when the assistant's output doesn't need parsing
- JSON Object: Structured data in a simple JSON format
  - Useful when the response needs to be processed by other systems
  - Provides consistent structure but with flexibility
- JSON Schema: Highly structured responses following predefined formats
  - Enforces specific response structures
  - Cues: Required format when your assistant will appear as a cue within the Deepdesk widget
  - Knowledge Assist: Required format when using the call_knowledge_assist tool
LLM Instructions for JSON Schema Responses
When configuring an assistant to respond with a specific schema, include these instructions in your prompt:
For Assistant Cue Responses:
You must respond with a valid JSON array that follows this exact schema:
[
{
"code": "assistant-code", // The code of this assistant
"name": "Assistant Name", // The name of this assistant as it should appear to users
"response": "Your detailed response text goes here"
}
]
Do not include any explanations or text outside of this JSON object.
For Knowledge Assist Responses:
You must respond with a valid JSON array that follows this exact schema:
[
{
"answer": "Your detailed answer to the question",
"question": "The original question being asked",
"sources": [
{
"name": "Name of the source document",
"url": "URL or reference to the source"
}
]
}
]
The sources array should include all references used to create your answer.
Do not include any explanations or text outside of this JSON object.
Including these explicit instructions in your assistant's prompt ensures the model will format responses correctly for system integration.
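For reference, the cue structure above can also be expressed as a standard JSON Schema. The sketch below assumes your deployment accepts ordinary JSON Schema syntax for the JSON Schema response format; verify the exact expected shape in your Deepdesk environment before relying on it:
{
  "type": "array",
  "items": {
    "type": "object",
    "required": ["code", "name", "response"],
    "properties": {
      "code": {
        "type": "string",
        "description": "The code of this assistant."
      },
      "name": {
        "type": "string",
        "description": "The assistant name as it should appear to users."
      },
      "response": {
        "type": "string",
        "description": "The detailed response text."
      }
    }
  }
}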
Toggle Options
- Include conversation transcript: When enabled, the assistant automatically retrieves the conversation transcript and includes it in the prompt, giving the assistant context for its responses.
  - Enable for assistants that need to analyze conversation history.
  - Particularly important for summarizers and assistants that detect patterns in conversations.
- Silent Mode: When enabled, the assistant performs its tasks without any visible output to the chat.
  - Useful for analytical assistants that don't need to communicate directly with the human agent.
  - Ideal for background processes like sentiment analysis or data collection.
- Reuse Assistant Thread: Enabled by default. See Assistant Threads vs Threadless for further details.
  - Enabling means the assistant retains persistent context across multiple calls. This is useful when:
    - The assistant needs to build on prior responses, refinements, or evaluations, as in multi-step problem solving where each turn depends on the last.
    - The assistant benefits from remembering previous tool outputs, conversations, or user preferences.
  - Disabling means the assistant does not keep history between calls; it always starts fresh. This is useful when:
    - Each call is independent and should not be influenced by past state, such as running the same tool command multiple times (repeated API calls).
    - You don't want earlier interactions to bias or interfere with the current task.
    - Avoiding thread reuse can also reduce memory usage and unnecessary context passing for high-volume, single-turn workloads.
Model Selection
Select the Large Language Model (LLM) that powers your assistant from the available options in your deployment. Model selection affects:
- Response quality and capabilities
- Processing speed
- Cost
Choose models with capabilities that match your assistant's requirements:
- More capable models for complex reasoning tasks
- Faster, more economical models for simple, high-volume tasks
Tip: For most assistant tasks, GPT-4o-mini provides a good balance of capability and performance, making it sufficient for the majority of use cases.
Condition (JSON Logic)
Conditions determine if your assistant should execute. Using JSONLogic format, you can create rules based on conversation metadata:
{"==":[{"var":"source_id"},"abc123"]}
This example runs the assistant only when the conversation's source_id is abc123: if the condition evaluates to true, the assistant is evaluated; if false, it is skipped.
Note: Assistant conditions are different from assistant routes. Conditions block evaluation of an already called assistant, while routes determine whether to call an assistant in the first place.
Common condition patterns:
- Match specific channels or sources
- Check customer attributes
- Verify conversation states
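These patterns can be combined with JSONLogic's and operator. For example, the following condition runs the assistant only for a specific source on the chat channel. Note that channel is an illustrative variable name; the metadata fields actually available depend on your deployment:
{"and": [
  {"==": [{"var": "source_id"}, "abc123"]},
  {"==": [{"var": "channel"}, "chat"]}
]}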
Authorized Groups
Authorized Groups control which users can access and use your assistant. By assigning user groups to assistants, you can:
- Limit access to specialized assistants to specific teams
- Roll out new assistants to test groups before full deployment
- Restrict sensitive assistants to authorized personnel
When configuring Authorized Groups:
- If no groups are assigned, the assistant is available to all users
- If one or more groups are assigned, only members of those groups can access the assistant
- On-demand assistants will only be visible to members of the authorized groups
- Automatically evaluated assistants will not run for users outside of the authorized groups
Tip: Create groups based on functional roles (e.g., "Pilot Team", "Tier 2 Support") rather than individual users to simplify management as your team grows.
Fallback Assistant
A Fallback Assistant can be assigned to handle situations where an assistant cannot return a response due to external errors (e.g., OpenAI content policy restrictions).
Fallback Assistants are useful for providing a standardized response to the agent or customer, ensuring conversations remain consistent and uninterrupted even when the primary assistant fails.
Typical use cases include:
- Displaying a clear explanation of why the assistant could not provide an answer.
- Offering templated responses that comply with business or compliance requirements.
Error Handling Mode
Error Handling Mode controls how your assistant responds when errors occur during evaluation or when tool calls fail. You can configure this setting to match your use case.
Available Options:
-
Full Stop (Default): This is the traditional behavior where any error during assistant evaluation or tool call execution causes the entire evaluation to halt and fail immediately.
- Use when you need strict error handling and want to be notified of any issues
-
Let LLM Decide: When enabled, the assistant evaluation continues even when errors occur, allowing the LLM to decide how to proceed based on the error context.
- The LLM receives information about the error and can respond appropriately
- Useful for customer-facing assistants where graceful responses are preferred
- Enables the assistant to provide helpful responses even when some tools fail
Use Cases for "Let LLM Decide":
When an API call fails or a tool encounters an error, the LLM can:
- Provide graceful error messages: "I seem to be having issues connecting with the order system right now. Perhaps I can assist you in another way?"
- Retry with different approaches: You can instruct the LLM in your prompt to attempt the tool call again with different parameters
- Use alternative strategies: The LLM might try a different tool or approach to accomplish the task
Prompting for Retry Behavior:
To influence how the LLM handles errors, you can add instructions to your assistant prompt. For example:
If a tool call fails:
1. Review the error message to understand what went wrong
2. Attempt the call again with corrected parameters if the issue seems resolvable
3. If the tool continues to fail after 2 attempts, inform the user about the issue and offer alternative assistance
This allows the assistant to be more resilient and user-friendly when dealing with temporary issues or recoverable errors.
Note: The Error Handling Mode setting is located in the backend under Assistant settings, within the Output section. It appears as a checkbox labeled "Continue evaluation on error".
Assistant Description and Parameters
These fields are especially important when your assistant will be called by other assistants as a tool.
Description
The description helps the calling LLM understand when to use this assistant. It should clearly explain:
- What the assistant does
- When it should be used
- What inputs it expects
- What outputs it provides
Example:
This assistant validates customer information against our CRM system.
Use it when you need to check if customer details match our records.
It requires a customer ID and at least one piece of information to verify.
It returns verification status and confidence level.
Parameters
Parameters define the inputs your assistant requires when called by another assistant. Using JSON Schema format, you can specify:
- Required parameters that must be provided
- Optional parameters with default values
- Data types and validation rules
Example:
{"type":"object","required":["source_id"],"properties":{"source_id":{"type":"string","description":"The source_id of the interaction."},"customer_id":{"type":"string","description":"Customer identifier if available."}}}
This example requires source_id as a mandatory parameter and makes customer_id optional.
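To illustrate the default values and validation rules mentioned above, a schema might also constrain its inputs with standard JSON Schema keywords such as enum, default, minimum, and maximum. In the following sketch, every parameter except source_id is hypothetical:
{
  "type": "object",
  "required": ["source_id"],
  "properties": {
    "source_id": {
      "type": "string",
      "description": "The source_id of the interaction."
    },
    "language": {
      "type": "string",
      "enum": ["en", "nl", "de"],
      "default": "en",
      "description": "Response language; defaults to en when omitted."
    },
    "max_results": {
      "type": "integer",
      "minimum": 1,
      "maximum": 10,
      "description": "Maximum number of records to check."
    }
  }
}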
Best Practice: Only include parameters that your assistant actually needs. Too many parameters make it harder for calling assistants to use your assistant effectively.