Tool Manager
Overview¶
This document explains how the Tool Manager organizes, exposes, and executes tools for an LLM-based orchestrator. Code for each capability is embedded in the relevant sections, so you can see how it's implemented where it matters.
Key capabilities:

- Load and filter tools from the configuration and the user's choices
- Respect tool exclusivity, whether a tool is enabled by the admin, and whether the user chose a tool as a must (forced tools)
- Expose tool definitions and prompt enhancements for the LLM
- Detect when a tool requests a handoff so it "takes control" (e.g., deep research)
- Execute selected tools in parallel with tool-call deduplication and max-call limits
- Return the ToolCallResponses of all tools to the orchestrator for further rounds with the LLM and additional processing, such as preparation for referencing or collection of debug info
Initialization and Tool Loading¶
The Tool Manager is responsible for initializing and managing the tools available to the agent. It supports internal tools as well as MCP tools and A2A sub-agents, treating the latter two as tools that can be called directly. Here's a breakdown of its functionality:
Internal Tools¶
Internal tools are implemented directly in the Python code, such as the web-search tool or the internal-search tool.
Agent-to-Agent Protocol (A2A)¶
The A2A protocol enables communication between agents. During initialization, the Tool Manager:

1. Loads all sub-agents defined for the A2A (Agent-to-Agent) protocol.
2. Treats these sub-agents as tools, making them callable by the LLM.
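As an illustrative sketch of this idea (the class, method, and transport names below are assumptions for the example, not the actual unique_toolkit API), a sub-agent can be wrapped so that it presents the same call surface as any other tool:

```python
from typing import Callable


class SubAgentTool:
    """Sketch: wrap an A2A sub-agent so the LLM can call it like any other tool."""

    def __init__(self, agent_name: str, send_message: Callable[[str, str], str]):
        self.name = agent_name
        # A2A transport is assumed to be injected; here it is any callable
        # that sends a task to a named agent and returns its reply.
        self._send = send_message

    def run(self, task: str) -> str:
        # Delegating to the sub-agent looks like an ordinary tool call.
        return self._send(self.name, task)


# Stand-in transport for the example.
tool = SubAgentTool("research_agent", lambda agent, task: f"{agent} handled: {task}")
print(tool.run("summarize Q3"))  # research_agent handled: summarize Q3
```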
MCP Tools¶
The Tool Manager also integrates MCP tools, which are added to the pool of available tools. These tools can be invoked directly, just like sub-agents, and are managed by the MCP Manager.
Tool Discovery and Filtering¶
The Tool Manager combines tools from three sources: 1. Internal tools: Built from the configuration provided by the admin. 2. MCP tools: Retrieved from the MCP Manager. 3. A2A sub-agents: Loaded via the A2A Manager.
After combining these tools, the manager applies several filters:

- Exclusivity: If a tool is marked as exclusive, only that tool is loaded, and no other tool can be executed (e.g., deep research).
- Enablement: Disabled tools are excluded; the admin of the Space decides which tools are available.
- User Preferences: If the end user selects tools in the frontend, those tools are treated as exclusive for the first iteration with the model, so only they can be chosen.
Configuration¶
The available tools (MCP, sub-agents, and internal tools) are derived directly from the front-end configuration, which is set up by the admin of the space.
Code Implementation¶
Constructor and Initialization¶
The constructor initializes the Tool Manager with the necessary runtime context and managers:
Bases: _ToolManager[Literal['completions']]
Source code in unique_toolkit/unique_toolkit/agentic/tools/tool_manager.py
Tool Initialization¶
The _init__tools method discovers and filters tools:
Source code in unique_toolkit/unique_toolkit/agentic/tools/tool_manager.py
Exposing Tools to the Orchestrator and LLM¶
The orchestrator that works with the Tool Manager needs three kinds of information:

- The actual tool objects (for runtime operations)
- Tool "definitions" or schemas consumable by the LLM
- Additional tool-specific prompt enhancements/guidance that help the LLM call the correct tool and format the tool output correctly
Get loaded tools and log them:
Expose tool definitions and prompts (prompt enhancements):
Evaluation metrics aggregation:
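As a hedged sketch of what exposing definitions might look like, the example below emits schemas in the OpenAI-style function-calling format; the helper name `get_tool_definitions` matches the method mentioned later in this document, but the exact wire format used by the real implementation is an assumption:

```python
def get_tool_definitions(tools: list[dict]) -> list[dict]:
    """Render each tool as an OpenAI-style function-calling schema (format assumed)."""
    return [
        {
            "type": "function",
            "function": {
                "name": t["name"],
                "description": t["description"],
                "parameters": t["parameters"],  # JSON Schema for the arguments
            },
        }
        for t in tools
    ]


tools = [{
    "name": "web_search",
    "description": "Search the public web.",
    "parameters": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}]
definitions = get_tool_definitions(tools)
print(definitions[0]["function"]["name"])  # web_search
```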
Forced Tools and Admin/User Constraints¶
Users can force a subset of tools via the UI. Forced tools are surfaced in an LLM-API-compatible structure so that the orchestrator can hand this information over to the LLM call in the correct format.
Retrieve forced tools and add a forced tool programmatically:
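A minimal sketch of surfacing forced tools in an LLM-API-compatible shape; the OpenAI-style `{"type": "function", ...}` entries are an assumption about the format, not a statement of what unique_toolkit actually emits:

```python
def build_forced_tool_entries(forced_tools: list[str]) -> list[dict]:
    """Render user-forced tool names as OpenAI-style tool-choice entries (format assumed)."""
    return [
        {"type": "function", "function": {"name": name}}
        for name in forced_tools
    ]


entries = build_forced_tool_entries(["web_search"])
print(entries)  # [{'type': 'function', 'function': {'name': 'web_search'}}]
```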
Control-Taking Tools (e.g., Deep Research)¶
Some tools request a handover from the main orchestrator so they can "take control" of the session. The orchestrator can check this before deciding whether to yield control or continue its flow.
Check if any selected call belongs to a control-taking tool:
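The check itself reduces to a simple predicate over the selected calls. The function and flag names below are illustrative assumptions:

```python
def takes_control(tool_calls: list[dict], tools_by_name: dict[str, dict]) -> bool:
    """Return True if any selected call belongs to a control-taking tool."""
    return any(
        tools_by_name.get(call["name"], {}).get("takes_control", False)
        for call in tool_calls
    )


tools_by_name = {
    "deep_research": {"takes_control": True},
    "web_search": {},
}
print(takes_control([{"name": "web_search"}], tools_by_name))     # False
print(takes_control([{"name": "deep_research"}], tools_by_name))  # True
```

If the predicate is true, the orchestrator yields the session to that tool instead of continuing its normal loop.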
Tool Execution Workflow¶
The orchestrator receives information from the LLM about which tools to execute so that the LLM can receive the requested information. The Tool Manager handles the execution of the selected tools with the following steps:

- Deduplication: It removes duplicate tool calls from the LLM, ensuring identical calls (e.g., same tool with identical parameters) are executed only once. This prevents redundant processing caused by occasional LLM errors.
- Call Limit Enforcement: A maximum of two tool calls is allowed per execution round. This prevents overloading the system with excessive requests.
- Parallel Execution: Tools are executed concurrently to save time, as individual tool calls can be time-intensive.
- Result Handling: Once the tools return their responses, the Tool Manager:
    - Sends the results back to the orchestrator.
    - Updates the call history.
    - Extracts references and debug information for further use.

This streamlined process ensures efficient, accurate, and manageable tool execution.
Parallel execution strategy:
Execute a single tool call:
Normalize outcomes from the task executor:
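The call limit and parallel execution steps can be sketched with `asyncio` as follows; this assumes duplicates have already been removed, and the function names and call shape are illustrative, not the actual unique_toolkit API:

```python
import asyncio

MAX_TOOL_CALLS = 2  # assumed per-round limit; the document states a maximum of two


async def execute_round(calls: list[dict], tools: dict) -> list:
    """Run up to MAX_TOOL_CALLS tool calls concurrently (duplicates assumed removed)."""
    calls = calls[:MAX_TOOL_CALLS]  # call limit enforcement

    async def run(call: dict):
        return await tools[call["name"]](**call["arguments"])

    # Parallel execution: individual tool calls can be time-intensive,
    # so they are awaited concurrently rather than one after another.
    return await asyncio.gather(*(run(c) for c in calls))


async def web_search(query: str) -> str:  # stand-in tool for the example
    return f"results for {query!r}"


calls = [
    {"name": "web_search", "arguments": {"query": "alpha"}},
    {"name": "web_search", "arguments": {"query": "beta"}},
    {"name": "web_search", "arguments": {"query": "gamma"}},  # dropped by the limit
]
results = asyncio.run(execute_round(calls, {"web_search": web_search}))
print(results)  # ["results for 'alpha'", "results for 'beta'"]
```

`asyncio.gather` preserves the order of the submitted calls, which keeps results aligned with the call history the orchestrator updates afterwards.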
Deduplication and Safety¶
Before executing, the Tool Manager removes duplicate calls with identical names and arguments to prevent repeated work in the same round.
Deduplicate calls and warn when filtered:
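A minimal sketch of the deduplication step, keyed on the tool name plus a canonical serialization of the arguments; the function name and logger are illustrative assumptions:

```python
import json
import logging

logger = logging.getLogger("tool_manager")


def dedupe_tool_calls(calls: list[dict]) -> list[dict]:
    """Drop calls with identical name and arguments; warn when duplicates are filtered."""
    seen, unique = set(), []
    for call in calls:
        # JSON-serialize with sorted keys so dicts with the same content compare equal.
        key = (call["name"], json.dumps(call.get("arguments", {}), sort_keys=True))
        if key in seen:
            logger.warning("Filtered duplicate tool call: %s", call["name"])
        else:
            seen.add(key)
            unique.append(call)
    return unique


calls = [
    {"name": "web_search", "arguments": {"query": "x"}},
    {"name": "web_search", "arguments": {"query": "x"}},  # exact duplicate, dropped
    {"name": "web_search", "arguments": {"query": "y"}},
]
print(len(dedupe_tool_calls(calls)))  # 2
```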
Enhanced Prompting Guidance for the LLM¶
To optimize tool selection and minimize formatting errors, the orchestrator should:

- Incorporate Tool Definitions: Use get_tool_definitions() to retrieve the function/tool schema and provide it to the LLM. This ensures the LLM understands the available tools and their parameters.
- Enhance System Prompts with Tool-Specific Guidance: Inject get_tool_prompts() content into the system prompt to:
    - Clearly define when each tool should be used.
    - Specify the expected inputs and outputs.
    - Include argument formatting examples for clarity.
- Iterative Feedback for Improved Formatting: In subsequent interactions, provide explicit formatting guidance based on the tools previously selected. This iterative refinement ensures consistent and accurate tool usage.
Key Mechanism:¶
The orchestrator retrieves both tool definitions and tool prompts. Tool definitions describe the functionality and parameters of each tool, while tool prompts act as enhancements to the system message. These prompts guide the LLM in selecting the correct tool and formatting its arguments effectively. This process improves robustness, ensures accurate tool selection, and enhances the overall response quality.
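Combining the two kinds of information can be sketched as below; the helper name, section layout, and example guidance text are assumptions for illustration, not the actual unique_toolkit behavior:

```python
def build_system_prompt(base_prompt: str, tool_prompts: dict[str, str]) -> str:
    """Append per-tool prompt enhancements to the base system prompt (layout assumed)."""
    sections = [base_prompt]
    for name, guidance in tool_prompts.items():
        # Each tool contributes a section describing when and how to use it.
        sections.append(f"Tool: {name}\n{guidance}")
    return "\n\n".join(sections)


prompt = build_system_prompt(
    "You are a helpful assistant.",
    {"web_search": "Use for up-to-date public information; pass a concise 'query' string."},
)
print(prompt)
```

The tool definitions travel in the API's structured tools field, while this assembled text goes into the system message, so the two channels of guidance described above stay separate but consistent.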