Update Examples & Documentation, Add Support for Temporary Queue and UUID Invoke Function, Added assembly component #36

Merged 8 commits on Sep 5, 2024
12 changes: 6 additions & 6 deletions README.md
@@ -6,27 +6,27 @@ This project provides a standalone, Python-based application to allow Solace event brokers to connect to
a wide range of AI models and services. The application is designed to be easily extensible to
support new AI models and services.

## Getting started quickly

Please see the [getting started guide](docs/getting_started.md) for instructions on how to get started quickly.

## Documentation

Please see the [documentation](docs/index.md) for more information.

## Support

This is not an officially supported Solace product.

For more information try these resources:

- Ask the [Solace Community](https://solace.community)
- The Solace Developer Portal website at: https://solace.dev


## Contributing

Contributions are encouraged! Please read [CONTRIBUTING](CONTRIBUTING.md) for details on our code of conduct, and the process for submitting pull requests to us.


## License

See the LICENSE file for details.
6 changes: 2 additions & 4 deletions config.yaml
@@ -41,16 +41,14 @@ flows:
- type: copy
source_expression: input.payload
dest_expression: user_data.temp:text
component_input:
input_selection:
source_expression: user_data.temp:text

- component_name: solace_sw_broker
component_module: broker_output
component_config:
<<: *broker_connection
payload_format: json
component_input:
source_expression: user_data.output
input_transforms:
- type: copy
source_expression: input.payload
@@ -67,5 +65,5 @@
- type: copy
source_expression: user_data.temp
dest_expression: user_data.output:user_properties
component_input:
input_selection:
source_expression: user_data.output
19 changes: 19 additions & 0 deletions docs/components/aggregate.md
@@ -1,6 +1,7 @@
# Aggregate

Take multiple messages and aggregate them into one. The output of this component is a list of the exact structure of the input data.
This is useful for batch processing: the component takes a sequence of events, combines them into a single event, and enqueues the result to the next component in the flow.

## Configuration Parameters

@@ -37,3 +38,21 @@ component_config:
...
]
```


## Example Configuration


```yaml
- component_name: aggregator_example
component_module: aggregate
component_config:
# The maximum number of items to aggregate before sending the data to the next component
max_items: 3
    # The maximum time (in milliseconds) to wait before sending the data to the next component
max_time_ms: 1000
input_selection:
# Take the text field from the message and use it as the input to the aggregator
source_expression: input.payload:text
```

50 changes: 50 additions & 0 deletions docs/components/assembly.md
@@ -0,0 +1,50 @@
# Assembly

Assembles messages until a flush criterion (max_items or max_time_ms) is met; the output is the assembled list of messages.

## Configuration Parameters

```yaml
component_name: <user-supplied-name>
component_module: assembly
component_config:
assemble_key: <string>
max_items: <string>
max_time_ms: <string>
```

| Parameter | Required | Default | Description |
| --- | --- | --- | --- |
| assemble_key | True | | The key in the input message used to group related messages together |
| max_items | False | 10 | Maximum number of messages to assemble. Once this value is reached, the assembled messages are flushed to the output |
| max_time_ms | False | 10000 | The timeout in milliseconds to wait for messages to assemble. If the timeout is reached before max_items is reached, the messages are flushed to the output |


## Component Input Schema

```
{
<freeform-object>
}
```


## Component Output Schema

```
[
{
payload: <string>,
topic: <string>,
user_properties: {
<freeform-object>
}
},
...
]
```
| Field | Required | Description |
| --- | --- | --- |
| [].payload | False | |
| [].topic | False | |
| [].user_properties | False | |
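

## Example Configuration

By analogy with the aggregate component's example, a configuration might look like the following sketch; the component name, assemble key, and input expression are illustrative:

```yaml
- component_name: assembly_example
  component_module: assembly
  component_config:
    # Messages sharing the same correlation_id value are assembled together
    assemble_key: correlation_id
    # Flush after 10 messages or after 10 seconds, whichever comes first
    max_items: 10
    max_time_ms: 10000
  input_selection:
    source_expression: input.payload
```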
4 changes: 2 additions & 2 deletions docs/components/broker_input.md
@@ -27,8 +27,8 @@ component_config:
| broker_username | True | | Client username for broker |
| broker_password | True | | Client password for broker |
| broker_vpn | True | | Client VPN for broker |
| broker_queue_name | True | | Queue name for broker |
| temporary_queue | False | False | Whether to create a temporary queue that will be deleted after disconnection |
| broker_queue_name | False | | Queue name for broker; if not provided, a temporary queue is used |
| temporary_queue | False | False | Whether to create a temporary queue that is deleted after disconnection; defaults to True if broker_queue_name is not provided |
| broker_subscriptions | True | | Subscriptions for broker |
| payload_encoding | False | utf-8 | Encoding for the payload (utf-8, base64, gzip, none) |
| payload_format | False | json | Format for the payload (json, yaml, text) |
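As a sketch, a broker_input that relies on a temporary queue rather than a named one might be configured as follows; the subscription topic is illustrative, and the connection details are assumed to be shared through a YAML anchor as in this repository's config.yaml:

```yaml
- component_name: solace_input
  component_module: broker_input
  component_config:
    # Assumed anchor carrying broker_url, broker_username, broker_password, broker_vpn
    <<: *broker_connection
    # No broker_queue_name, so a temporary queue is created and deleted on disconnect
    temporary_queue: true
    broker_subscriptions:
      - topic: demo/events/>
    payload_encoding: utf-8
    payload_format: json
```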
2 changes: 1 addition & 1 deletion docs/components/error_input.md
@@ -1,6 +1,6 @@
# ErrorInput

Receive processing errors from the Solace AI Event Connector. Note that the component_input configuration is ignored. This component should be used to create a flow that handles errors from other flows.
Receive processing errors from the Solace AI Event Connector. Note that the input_selection configuration is ignored. This component should be used to create a flow that handles errors from other flows.

## Configuration Parameters

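## Example Configuration

A sketch of a flow that catches errors and prints them. The flow scaffolding (`name`, `components`) and the `previous` source expression follow patterns used elsewhere in this repository's config.yaml; treat the whole block as illustrative:

```yaml
flows:
  - name: error_handler
    components:
      # Receives errors raised by components in other flows
      - component_name: catch_errors
        component_module: error_input
      # Prints each error event to STDOUT
      - component_name: print_errors
        component_module: stdout_output
        input_selection:
          source_expression: previous
```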
5 changes: 4 additions & 1 deletion docs/components/index.md
@@ -3,11 +3,12 @@
| Component | Description |
| --- | --- |
| [aggregate](aggregate.md) | Aggregate messages into one message. |
| [assembly](assembly.md) | Assembles messages until a flush criterion (max_items or max_time_ms) is met; the output is the assembled list of messages |
| [broker_input](broker_input.md) | Connect to a messaging broker and receive messages from it. The component will output the payload, topic, and user properties of the message. |
| [broker_output](broker_output.md) | Connect to a messaging broker and send messages to it. Note that this component requires that the data is transformed into the input schema. |
| [broker_request_response](broker_request_response.md) | Connect to a messaging broker, send request messages, and receive responses. This component combines the functionality of broker_input and broker_output with additional request-response handling. |
| [delay](delay.md) | A simple component that passes the input to the output with a configurable delay. |
| [error_input](error_input.md) | Receive processing errors from the Solace AI Event Connector. Note that the component_input configuration is ignored. This component should be used to create a flow that handles errors from other flows. |
| [error_input](error_input.md) | Receive processing errors from the Solace AI Event Connector. Note that the input_selection configuration is ignored. This component should be used to create a flow that handles errors from other flows. |
| [file_output](file_output.md) | File output component |
| [iterate](iterate.md) | Take a single message that is a list and output each item in that list as a separate message |
| [langchain_chat_model](langchain_chat_model.md) | Provide access to all the LangChain chat models via configuration |
@@ -17,6 +18,8 @@
| [langchain_vector_store_embedding_index](langchain_vector_store_embedding_index.md) | Use LangChain Vector Stores to index text for later semantic searches. This will take text, run it through an embedding model and then store it in a vector database. |
| [langchain_vector_store_embedding_search](langchain_vector_store_embedding_search.md) | Use LangChain Vector Stores to search a vector store with a semantic search. This will take text, run it through an embedding model with a query embedding and then find the closest matches in the store. |
| [message_filter](message_filter.md) | A filtering component. This will apply a user configurable expression. If the expression evaluates to True, the message will be passed on. If the expression evaluates to False, the message will be discarded. If the message is discarded, any previous components that require an acknowledgement will be acknowledged. |
| [openai_chat_model](openai_chat_model.md) | OpenAI chat model component |
| [openai_chat_model_with_history](openai_chat_model_with_history.md) | OpenAI chat model component with conversation history |
| [pass_through](pass_through.md) | What goes in comes out |
| [stdin_input](stdin_input.md) | STDIN input component. The component will prompt for input, which will then be placed in the message payload using the output schema below. |
| [stdout_output](stdout_output.md) | STDOUT output component |
13 changes: 13 additions & 0 deletions docs/components/iterate.md
@@ -32,3 +32,16 @@ No configuration parameters
<freeform-object>
}
```


## Example Configuration


```yaml
- component_name: iterate_example
component_module: iterate
component_config:
input_selection:
# Take the list field from the message and use it as the input to the iterator
source_expression: input.payload:embeddings
```
2 changes: 2 additions & 0 deletions docs/components/langchain_chat_model_with_history.md
@@ -15,6 +15,7 @@ component_config:
history_max_turns: <string>
history_max_message_size: <string>
history_max_tokens: <string>
history_max_time: <string>
history_module: <string>
history_class: <string>
history_config: <object>
@@ -33,6 +34,7 @@ component_config:
| history_max_turns | False | 20 | The maximum number of turns to keep in the history. If not set, the history will be limited to 20 turns. |
| history_max_message_size | False | 1000 | The maximum amount of characters to keep in a single message in the history. |
| history_max_tokens | False | 8000 | The maximum number of tokens to keep in the history. If not set, the history will be limited to 8000 tokens. |
| history_max_time | False | None | The maximum time (in seconds) to keep messages in the history. If not set, messages will not expire based on time. |
| history_module | False | langchain_community.chat_message_histories | The module that contains the history class. Default: 'langchain_community.chat_message_histories' |
| history_class | False | ChatMessageHistory | The class to use for the history. Default: 'ChatMessageHistory' |
| history_config | False | | The configuration for the history class. |
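For example, to keep at most 20 turns while also expiring anything older than one hour, the history-related settings would look like the following sketch (the other required model parameters are omitted):

```yaml
component_config:
  # ...model and history class parameters omitted...
  history_max_turns: 20
  history_max_time: 3600
```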
9 changes: 8 additions & 1 deletion docs/components/langchain_vector_store_embedding_index.md
@@ -41,13 +41,20 @@ component_config:
<freeform-object>
},
...
]
],
ids: [
<string>,
...
],
action: <string>
}
```
| Field | Required | Description |
| --- | --- | --- |
| texts | True | |
| metadatas | False | |
| ids | False | The IDs of the texts in the index; required for the 'delete' action |
| action | False | The action to perform on the index: one of 'add' or 'delete' |
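
For illustration, an input message that adds a single text with an explicit ID might look like this in YAML; all values are hypothetical:

```yaml
texts:
  - "Solace event brokers move events between services."
metadatas:
  - source: docs/overview.md
ids:
  - doc-0001
action: add
```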


## Component Output Schema
2 changes: 1 addition & 1 deletion docs/components/langchain_vector_store_embedding_search.md
@@ -28,7 +28,7 @@ component_config:
| embedding_component_path | True | | The embedding library path - e.g. 'langchain_community.embeddings' |
| embedding_component_name | True | | The embedding model to use - e.g. BedrockEmbeddings |
| embedding_component_config | True | | Model specific configuration for the embedding model. See documentation for valid parameter names. |
| max_results | True | | The maximum number of results to return |
| max_results | True | 3 | The maximum number of results to return |
| combine_context_from_same_source | False | True | Set to False if you don't want to combine all the context from the same source. Default is True |


62 changes: 62 additions & 0 deletions docs/components/openai_chat_model.md
@@ -0,0 +1,62 @@
# OpenAIChatModel

OpenAI chat model component

## Configuration Parameters

```yaml
component_name: <user-supplied-name>
component_module: openai_chat_model
component_config:
api_key: <string>
model: <string>
temperature: <string>
base_url: <string>
stream_to_flow: <string>
llm_mode: <string>
stream_batch_size: <string>
set_response_uuid_in_user_properties: <boolean>
```

| Parameter | Required | Default | Description |
| --- | --- | --- | --- |
| api_key | True | | OpenAI API key |
| model | True | | OpenAI model to use (e.g., 'gpt-3.5-turbo') |
| temperature | False | 0.7 | Sampling temperature to use |
| base_url | False | None | Base URL for OpenAI API |
| stream_to_flow | False | | Name of the flow to stream the output to; required when llm_mode='stream'. |
| llm_mode | False | none | The mode for streaming results: 'none', 'sync', or 'stream'. 'stream' streams the results to the named flow; 'none' waits for the full response. |
| stream_batch_size | False | 15 | The minimum number of words in a single streaming result. Default: 15. |
| set_response_uuid_in_user_properties | False | False | Whether to set the response_uuid in the user_properties of the input_message. This will allow other components to correlate streaming chunks with the full response. |


## Component Input Schema

```
{
messages: [
{
role: <string>,
content: <string>
},
...
]
}
```
| Field | Required | Description |
| --- | --- | --- |
| messages | True | |
| messages[].role | True | |
| messages[].content | True | |


## Component Output Schema

```
{
content: <string>
}
```
| Field | Required | Description |
| --- | --- | --- |
| content | True | The generated response from the model |
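

## Example Configuration

An illustrative configuration sketch; the component name and input_selection expression are assumptions, and the messages array is assumed to be assembled by upstream transforms:

```yaml
- component_name: openai_example
  component_module: openai_chat_model
  component_config:
    api_key: <your-openai-api-key>
    model: gpt-3.5-turbo
    temperature: 0.7
  input_selection:
    # Assumes an upstream transform built {messages: [{role, content}, ...]} here
    source_expression: user_data.llm_input
```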
68 changes: 68 additions & 0 deletions docs/components/openai_chat_model_with_history.md
@@ -0,0 +1,68 @@
# OpenAIChatModelWithHistory

OpenAI chat model component with conversation history

## Configuration Parameters

```yaml
component_name: <user-supplied-name>
component_module: openai_chat_model_with_history
component_config:
api_key: <string>
model: <string>
temperature: <string>
base_url: <string>
stream_to_flow: <string>
llm_mode: <string>
stream_batch_size: <string>
set_response_uuid_in_user_properties: <boolean>
history_max_turns: <string>
history_max_time: <string>
```

| Parameter | Required | Default | Description |
| --- | --- | --- | --- |
| api_key | True | | OpenAI API key |
| model | True | | OpenAI model to use (e.g., 'gpt-3.5-turbo') |
| temperature | False | 0.7 | Sampling temperature to use |
| base_url | False | None | Base URL for OpenAI API |
| stream_to_flow | False | | Name of the flow to stream the output to; required when llm_mode='stream'. |
| llm_mode | False | none | The mode for streaming results: 'none', 'sync', or 'stream'. 'stream' streams the results to the named flow; 'none' waits for the full response. |
| stream_batch_size | False | 15 | The minimum number of words in a single streaming result. Default: 15. |
| set_response_uuid_in_user_properties | False | False | Whether to set the response_uuid in the user_properties of the input_message. This will allow other components to correlate streaming chunks with the full response. |
| history_max_turns | False | 10 | Maximum number of conversation turns to keep in history |
| history_max_time | False | 3600 | Maximum time to keep conversation history (in seconds) |


## Component Input Schema

```
{
messages: [
{
role: <string>,
content: <string>
},
...
],
clear_history_but_keep_depth: <integer>
}
```
| Field | Required | Description |
| --- | --- | --- |
| messages | True | |
| messages[].role | True | |
| messages[].content | True | |
| clear_history_but_keep_depth | False | Clear history but keep the last N messages. If 0, clear all history. If not set, do not clear history. |


## Component Output Schema

```
{
content: <string>
}
```
| Field | Required | Description |
| --- | --- | --- |
| content | True | The generated response from the model |
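

## Example Configuration

An illustrative sketch along the same lines as openai_chat_model; the names and expressions are assumptions:

```yaml
- component_name: openai_history_example
  component_module: openai_chat_model_with_history
  component_config:
    api_key: <your-openai-api-key>
    model: gpt-3.5-turbo
    # Keep at most 10 conversation turns and drop history older than one hour
    history_max_turns: 10
    history_max_time: 3600
  input_selection:
    # Assumes an upstream transform built {messages: [{role, content}, ...]} here
    source_expression: user_data.llm_input
```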