# Example Workflow Steps
Flow Framework can be used to set up a common use case: conversational chat using a Chain-of-Thought (CoT) agent. This setup requires the following sequence of API requests, with provisioned resources used in subsequent requests:
1. Deploy a model on the cluster:
   - Create a connector to a remote model
   - Register a model using the connector just created
   - Deploy that model
2. Use the deployed model for inference:
   - Set up several tools which perform specific tasks
   - Set up one or more agents which use some combination of the tools
   - Set up tools representing these agents
   - Set up a root agent which may delegate the task to either tools or another agent
A full template to perform the first three steps is documented in the Create Workflow API documentation.
## Deploy a model on the cluster
The first step in the workflow is to create a connector to a remote model. The content in the `user_inputs` field exactly matches the ML Commons Create Connector API.
```yaml
nodes:
- id: create_connector_1
  type: create_connector
  user_inputs:
    name: OpenAI Chat Connector
    description: The connector to public OpenAI model service for GPT 3.5
    version: '1'
    protocol: http
    parameters:
      endpoint: api.openai.com
      model: gpt-3.5-turbo
    credential:
      openAI_key: '12345'
    actions:
    - action_type: predict
      method: POST
      url: https://${parameters.endpoint}/v1/chat/completions
```
This API results in a `connector_id`, which is needed to register this remote model. The `previous_node_inputs` field indicates that the required `connector_id` will be provided from the output of the `create_connector_1` step. Other inputs required by the Register Model API are included in the `user_inputs` field.
```yaml
- id: register_model_2
  type: register_remote_model
  previous_node_inputs:
    create_connector_1: connector_id
  user_inputs:
    name: openAI-gpt-3.5-turbo
    function_name: remote
    description: test model
```
The output of this step is a `model_id`. The registered model must then be deployed to the cluster. The Deploy Model API requires this `model_id`, which is included in the `previous_node_inputs` field of the next step.
```yaml
- id: deploy_model_3
  type: deploy_model
  # This step needs the model_id produced as an output of the previous step
  previous_node_inputs:
    register_model_2: model_id
```
When using the Deploy Model API directly, a task ID is returned, requiring use of the Tasks API to determine when the deployment is complete. Flow Framework automates the process of checking this progress and returns the final `model_id` directly.
To connect these steps in sequence, they must be linked by edges in the graph. A step's `previous_node_inputs` field requires the node providing that input to be the `source` of an edge and the step requiring that input to be the `dest`. The `register_model_2` step requires the `connector_id` from the `create_connector_1` step, and the `deploy_model_3` step requires the `model_id` from the `register_model_2` step, so the first two edges in the graph enforce this:
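```yaml
edges:
- source: create_connector_1
  dest: register_model_2
- source: register_model_2
  dest: deploy_model_3
```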
## Use the deployed model for inference

A CoT agent can leverage this model in a tool using the ML Commons Agent Framework [Link TBD]. This step doesn't correspond exactly to an API but represents a component of the body required by the Register Agent API. Configuring the tool separately simplifies the register request and allows the same tool to be reused in multiple agents.
A Math Tool can also be configured. It does not depend on any previous steps.
```yaml
- id: math_tool
  type: create_tool
  user_inputs:
    name: MathTool
    type: MathTool
    description: A general tool to calculate any math problem. The action input
      must be a valid math expression, like 2+3
    parameters:
      max_iteration: 5
```
Other tools can be configured in the same way. The CoT agent can then be configured to use these tools. The `previous_node_inputs` field identifies `math_tool` as this agent's tool; additional tools could be added here.

The agent also needs an LLM in order to reason with the tools; this is defined by the `llm.model_id` field. This example assumes the `model_id` from the `deploy_model_3` step will be used. However, if another model is already deployed, the `model_id` of that previously deployed model could be included in the `user_inputs` field instead.
```yaml
- id: sub_agent
  type: register_agent
  previous_node_inputs:
    deploy_model_3: llm.model_id
    math_tool: tools
  user_inputs:
    name: Sub Agent
    type: cot
    description: this is a test agent
    parameters:
      hello: world
    llm.parameters:
      max_iteration: '5'
      stop_when_no_tool_found: 'true'
    memory:
      type: conversation_index
    app_type: chatbot
```
Edges must be defined to permit the agent to retrieve these fields from the previous nodes:
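```yaml
- source: math_tool
  dest: sub_agent
- source: deploy_model_3
  dest: sub_agent
```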
Agents can be used as tools for another agent. Registering an agent produces an `agent_id` in its output. This step defines a tool which uses the `agent_id` from the previous step:
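```yaml
- id: agent_tool
  type: create_tool
  previous_node_inputs:
    sub_agent: agent_id
  user_inputs:
    name: AgentTool
    type: AgentTool
    description: Agent Tool
    parameters:
      max_iteration: 5
```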
An edge connection is required to enable this `previous_node_inputs` entry:
```yaml
- source: sub_agent
  dest: agent_tool
```
A tool may also reference an ML model. This example gets the required `model_id` from the model deployed in a previous step.
```yaml
- id: ml_model_tool
  type: create_tool
  previous_node_inputs:
    deploy_model_3: model_id
  user_inputs:
    name: MLModelTool
    type: MLModelTool
    alias: language_model_tool
    description: A general tool to answer any question.
    parameters:
      prompt: Answer the question as best you can.
      response_filter: choices[0].message.content
```
An edge is required to use this `previous_node_inputs` entry:
```yaml
- source: deploy_model_3
  dest: ml_model_tool
```
A conversational chat application will communicate with a single root agent, which includes the ML Model Tool and the Agent Tool in its `tools` field. It will also obtain the `llm.model_id` from the deployed model. Some agents require tools to be in a specific order, which can be enforced with the `tools_order` field.
```yaml
- id: root_agent
  type: register_agent
  previous_node_inputs:
    deploy_model_3: llm.model_id
    ml_model_tool: tools
    agent_tool: tools
  user_inputs:
    name: DEMO-Test_Agent_For_CoT
    type: cot
    description: this is a test agent
    parameters:
      prompt: Answer the question as best you can.
    llm.parameters:
      max_iteration: '5'
      stop_when_no_tool_found: 'true'
    tools_order: ['agent_tool', 'ml_model_tool']
    memory:
      type: conversation_index
    app_type: chatbot
```
Edges are required for the `previous_node_inputs` sources:
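```yaml
- source: deploy_model_3
  dest: root_agent
- source: ml_model_tool
  dest: root_agent
- source: agent_tool
  dest: root_agent
```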
A final template including all of these steps in the `provision` workflow is shown below in YAML format.
```yaml
# This template demonstrates provisioning the resources for a
# Chain-of-Thought chat bot
name: tool-register-agent
description: test case
use_case: REGISTER_AGENT
version:
  template: 1.0.0
  compatibility:
  - 2.12.0
  - 3.0.0
workflows:
  # This workflow defines the actions to be taken when the Provision Workflow API is used
  provision:
    nodes:
    # The first three nodes create a connector to a remote model, then register and deploy that model
    - id: create_connector_1
      type: create_connector
      user_inputs:
        name: OpenAI Chat Connector
        description: The connector to public OpenAI model service for GPT 3.5
        version: '1'
        protocol: http
        parameters:
          endpoint: api.openai.com
          model: gpt-3.5-turbo
        credential:
          openAI_key: '12345'
        actions:
        - action_type: predict
          method: POST
          url: https://${parameters.endpoint}/v1/chat/completions
    - id: register_model_2
      type: register_remote_model
      previous_node_inputs:
        create_connector_1: connector_id
      user_inputs:
        name: openAI-gpt-3.5-turbo
        function_name: remote
        description: test model
    - id: deploy_model_3
      type: deploy_model
      previous_node_inputs:
        register_model_2: model_id
    # For example purposes, the model_id obtained as the output of the deploy_model_3 step will be used
    # for several below steps. However, any other deployed model_id can be used for those steps.
    # This is one example tool from the Agent Framework.
    - id: math_tool
      type: create_tool
      user_inputs:
        name: MathTool
        type: MathTool
        description: A general tool to calculate any math problem. The action input
          must be a valid math expression, like 2+3
        parameters:
          max_iteration: 5
    # This simple agent only has one tool, but could be configured with many tools
    - id: sub_agent
      type: register_agent
      previous_node_inputs:
        deploy_model_3: llm.model_id
        math_tool: tools
      user_inputs:
        name: Sub Agent
        type: cot
        description: this is a test agent
        parameters:
          hello: world
        llm.parameters:
          max_iteration: '5'
          stop_when_no_tool_found: 'true'
        memory:
          type: conversation_index
        app_type: chatbot
    # An agent can be used itself as a tool in a nested relationship
    - id: agent_tool
      type: create_tool
      previous_node_inputs:
        sub_agent: agent_id
      user_inputs:
        name: AgentTool
        type: AgentTool
        description: Agent Tool
        parameters:
          max_iteration: 5
    # An ML Model can be used as a tool
    - id: ml_model_tool
      type: create_tool
      previous_node_inputs:
        deploy_model_3: model_id
      user_inputs:
        name: MLModelTool
        type: MLModelTool
        alias: language_model_tool
        description: A general tool to answer any question.
        parameters:
          prompt: Answer the question as best you can.
          response_filter: choices[0].message.content
    # This final agent will be the interface for the CoT chat user
    - id: root_agent
      type: register_agent
      previous_node_inputs:
        deploy_model_3: llm.model_id
        ml_model_tool: tools
        agent_tool: tools
      user_inputs:
        name: DEMO-Test_Agent_For_CoT
        type: cot
        description: this is a test agent
        parameters:
          prompt: Answer the question as best you can.
        llm.parameters:
          max_iteration: '5'
          stop_when_no_tool_found: 'true'
        tools_order: ['agent_tool', 'ml_model_tool']
        memory:
          type: conversation_index
        app_type: chatbot
    # These edges define nodes which must provide output as input for later nodes in the workflow
    edges:
    - source: create_connector_1
      dest: register_model_2
    - source: register_model_2
      dest: deploy_model_3
    - source: math_tool
      dest: sub_agent
    - source: deploy_model_3
      dest: sub_agent
    - source: sub_agent
      dest: agent_tool
    - source: deploy_model_3
      dest: ml_model_tool
    - source: deploy_model_3
      dest: root_agent
    - source: ml_model_tool
      dest: root_agent
    - source: agent_tool
      dest: root_agent
```
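For reference, a template like this is submitted with the Create Workflow API and the resources are then created with the Provision Workflow API. The following is a minimal sketch assuming the Flow Framework plugin's default routes; see the API documentation referenced above for the authoritative request formats:

```
# Create a workflow from the template; the response includes a workflow_id
POST /_plugins/_flow_framework/workflow
{ ...template content... }

# Provision the resources defined in the workflow
POST /_plugins/_flow_framework/workflow/<workflow_id>/_provision
```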