diff --git a/.nojekyll b/.nojekyll new file mode 100644 index 000000000..e69de29bb diff --git a/404.html b/404.html new file mode 100644 index 000000000..d6bfc9ecc --- /dev/null +++ b/404.html @@ -0,0 +1,2220 @@ + + + +
+ + + + + + + + + + + + + + + + +We're going to write documentation for Marvin together.
+First, here's a style guide.
+A style guide for AI documentation authors to adhere to Marvin's documentation standards. Remember, you are an expert technical writer with an extensive background in educating and explaining open-source software. You are not a marketer, a salesperson, or a product manager. Marvin's documentation should resemble renowned technical documentation like Stripe's.
+You must follow the guide below. Do not deviate from it.
+Avoid code in headers or titles (e.g. prefer "Overview" to "Overview of extract()"). If you must, use code in headers or titles sparingly.
+When referring to functions, use their full path, e.g. marvin.classify(), not just classify().
generate_speech
+
+¶Generates speech based on a provided prompt template.
+This function uses the text-to-speech API to generate audio from a prompt rendered from the provided template. The function supports additional arguments for the prompt and the model.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
prompt_template |
+
+ str
+ |
+
+
+
+ The template for the prompt. + |
+ + required + | +
prompt_kwargs |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +prompt. Defaults to None. + |
+
+ None
+ |
+
model_kwargs |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
HttpxBinaryResponseContent |
+ HttpxBinaryResponseContent
+ |
+
+
+
+ The response from the text-to-speech API, which includes the generated audio. + |
+
speak
+
+¶Generates audio from text using an AI.
+This function uses an AI to generate audio from the provided text. The voice +used for the audio can be specified.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
text |
+
+ str
+ |
+
+
+
+ The text to generate audio from. + |
+ + required + | +
voice |
+
+ Literal['alloy', 'echo', 'fable', 'onyx', 'nova', 'shimmer']
+ |
+
+
+
+ The voice to use for the audio. Defaults to None. + |
+
+ None
+ |
+
model_kwargs |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
HttpxBinaryResponseContent |
+ HttpxBinaryResponseContent
+ |
+
+
+
+ The generated audio. + |
+
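A minimal sketch of calling the function directly, assuming it is re-exported as marvin.speak; the voice value is one of the literals listed above:

```python
import marvin

# generate speech for a short phrase
audio = marvin.speak("I think you ought to know I'm feeling very depressed.", voice="alloy")
# `audio` is the raw binary response from the speech API; save or stream it with the OpenAI SDK
```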
speech
+
+¶Function decorator that generates audio from the wrapped function's return +value. The voice used for the audio can be specified.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
fn |
+
+ Callable
+ |
+
+
+
+ The function to wrap. Defaults to None. + |
+
+ None
+ |
+
voice |
+
+ str
+ |
+
+
+
+ The voice to use for the audio. Defaults to None. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
Callable |
+ Callable
+ |
+
+
+
+ The wrapped function. + |
+
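A short sketch of the decorator, assuming it is re-exported as marvin.speech:

```python
import marvin

@marvin.speech
def greeting(name: str) -> str:
    # the returned string becomes the text that is spoken
    return f"Hello, {name}. Don't panic."

audio = greeting("Arthur")  # returns the generated audio instead of the string
```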
generate_image
+
+¶Generates an image based on a provided prompt template.
+This function uses the DALL-E API to generate an image based on a provided +prompt template. The function supports additional arguments for the prompt +and the model.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
prompt_template |
+
+ str
+ |
+
+
+
+ The template for the prompt. + |
+ + required + | +
prompt_kwargs |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +prompt. Defaults to None. + |
+
+ None
+ |
+
model_kwargs |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
ImagesResponse |
+ ImagesResponse
+ |
+
+
+
+ The response from the DALL-E API, which includes the +generated image. + |
+
image
+
+¶A decorator that transforms a function's output into an image.
+This decorator takes a function that returns a string, and uses that string +as instructions to generate an image. The generated image is then returned.
+The decorator can be used with or without parentheses. If used without
+parentheses, the decorated function's output is used as the instructions
+for the image. If used with parentheses, an optional literal
argument can
+be provided. If literal
is set to True
, the function's output is used
+as the literal instructions for the image, without any modifications.
Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
fn |
+
+ callable
+ |
+
+
+
+ The function to decorate. If |
+
+ None
+ |
+
literal |
+
+ bool
+ |
+
+
+
+ Whether to use the function's output as the
+literal instructions for the image. Defaults to |
+
+ False
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
callable | + | +
+
+
+ The decorated function. + |
+
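A minimal sketch of the decorator, assuming it is re-exported as marvin.image:

```python
import marvin

@marvin.image
def moodboard(theme: str) -> str:
    # the returned string is used as the image-generation instructions
    return f"A watercolor moodboard inspired by {theme}"

response = moodboard("a rainy harbor at dawn")  # returns the generated image response
```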
paint
+
+¶Generates an image based on the provided instructions and context.
+This function uses the DALLE-3 API to generate an image based on the provided
+instructions and context. By default, the API modifies prompts to add detail
+and style. This behavior can be disabled by setting literal=True
.
Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
instructions |
+
+ str
+ |
+
+
+
+ The instructions for the image generation. +Defaults to None. + |
+
+ None
+ |
+
context |
+
+ dict
+ |
+
+
+
+ The context for the image generation. Defaults to None. + |
+
+ None
+ |
+
literal |
+
+ bool
+ |
+
+
+
+ Whether to disable the API's default behavior of +modifying prompts. Defaults to False. + |
+
+ False
+ |
+
model_kwargs |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
ImagesResponse | + | +
+
+
+ The response from the DALLE-3 API, which includes the +generated image. + |
+
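A minimal sketch of calling the function directly, assuming it is re-exported as marvin.paint; reading a URL from the response assumes the default url response format:

```python
import marvin

response = marvin.paint("a minimalist poster of a paper airplane over mountains")
# with the default response format, the generated image is available by URL
print(response.data[0].url)
```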
Core LLM tools for working with text and structured data.
+ + + +
Model
+
+
+¶A Pydantic model that can be instantiated from a natural language string, in +addition to keyword arguments.
+ + + + +
from_text
+
+
+ classmethod
+
+
+¶Class method to create an instance of the model from a natural language string.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
text |
+
+ str
+ |
+
+
+
+ The natural language string to convert into an instance of the model. + |
+ + required + | +
model_kwargs |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
**kwargs |
+ + | +
+
+
+ Additional keyword arguments to pass to the model's constructor. + |
+
+ {}
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
Model |
+ Model
+ |
+
+
+
+ An instance of the model. + |
+
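A sketch of from_text; the import path below is an assumption, and the decorator form shown later on this page (marvin.model) is the more common entry point:

```python
# assumption: Model is importable from marvin.ai.text; adjust the path for your version
from marvin.ai.text import Model

class Location(Model):
    city: str
    state: str

loc = Location.from_text("the windy city")
print(loc.city, loc.state)  # typically "Chicago IL"
```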
cast
+
+¶Converts the input data into the specified type.
+This function uses a language model to convert the input data into a specified type. +The conversion process can be guided by specific instructions. The function also +supports additional arguments for the language model.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
data |
+
+ str
+ |
+
+
+
+ The data to be converted. + |
+ + required + | +
target |
+
+ type
+ |
+
+
+
+ The type to convert the data into. + |
+ + required + | +
instructions |
+
+ str
+ |
+
+
+
+ Specific instructions for the conversion. Defaults to None. + |
+
+ None
+ |
+
model_kwargs |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the language model. Defaults to None. + |
+
+ None
+ |
+
client |
+
+ MarvinClient
+ |
+
+
+
+ The client to use for the AI function. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
T |
+ T
+ |
+
+
+
+ The converted data of the specified type. + |
+
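A minimal sketch of the function, assuming it is re-exported as marvin.cast:

```python
import marvin

price = marvin.cast("the sandwich cost twelve dollars and fifty cents", target=float)
print(price)  # typically 12.5
```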
classifier
+
+¶Class decorator that modifies the behavior of an Enum class to classify a string.
+This decorator modifies the call method of the Enum class to use the
+marvin.classify
function instead of the default Enum behavior. This allows
+the Enum class to classify a string based on its members.
Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
cls |
+
+ Enum
+ |
+
+
+
+ The Enum class to be decorated. + |
+
+ None
+ |
+
instructions |
+
+ str
+ |
+
+
+
+ Instructions for the AI on +how to perform the classification. + |
+
+ None
+ |
+
model_kwargs |
+
+ dict
+ |
+
+
+
+ Additional keyword +arguments to pass to the model. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
Enum | + | +
+
+
+ The decorated Enum class with modified call method. + |
+
Raises:
+Type | +Description | +
---|---|
+ AssertionError
+ |
+
+
+
+ If the decorated class is not a subclass of Enum. + |
+
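A minimal sketch of the decorator, assuming it is re-exported as marvin.classifier:

```python
import marvin
from enum import Enum

@marvin.classifier
class Sentiment(Enum):
    POSITIVE = "positive"
    NEGATIVE = "negative"

print(Sentiment("This documentation is delightful!"))  # typically Sentiment.POSITIVE
```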
classify
+
+¶Classifies the provided data based on the provided labels.
+This function uses a language model with a logit bias to classify the input +data. The logit bias constrains the language model's response to a single +token, making this function highly efficient for classification tasks. The +function will always return one of the provided labels.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
data |
+
+ str
+ |
+
+
+
+ The data to be classified. + |
+ + required + | +
labels |
+
+ Union[Enum, list[T], type]
+ |
+
+
+
+ The labels to classify the data into. + |
+ + required + | +
instructions |
+
+ str
+ |
+
+
+
+ Specific instructions for the +classification. Defaults to None. + |
+
+ None
+ |
+
model_kwargs |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
client |
+
+ MarvinClient
+ |
+
+
+
+ The client to use for the AI function. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
T |
+ T
+ |
+
+
+
+ The label that the data was classified into. + |
+
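A minimal sketch of the function, assuming it is re-exported as marvin.classify:

```python
import marvin

label = marvin.classify(
    "My package arrived crushed and the contents were broken.",
    labels=["refund request", "shipping question", "praise"],
)
print(label)  # typically "refund request"
```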
extract
+
+¶Extracts entities of a specific type from the provided data.
+This function uses a language model to identify and extract entities of the +specified type from the input data. The extracted entities are returned as a +list.
+Note that either a target type or instructions must be provided (or both). +If only instructions are provided, the target type is assumed to be a +string.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
data |
+
+ str
+ |
+
+
+
+ The data from which to extract entities. + |
+ + required + | +
target |
+
+ type
+ |
+
+
+
+ The type of entities to extract. + |
+
+ None
+ |
+
instructions |
+
+ str
+ |
+
+
+
+ Specific instructions for the extraction. +Defaults to None. + |
+
+ None
+ |
+
model_kwargs |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
client |
+
+ MarvinClient
+ |
+
+
+
+ The client to use for the AI function. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
list |
+ list[T]
+ |
+
+
+
+ A list of extracted entities of the specified type. + |
+
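A minimal sketch of the function, assuming it is re-exported as marvin.extract:

```python
import marvin

names = marvin.extract(
    "Arthur and Ford met Zaphod at the restaurant.",
    target=str,
    instructions="the names of any people mentioned",
)
print(names)  # typically ["Arthur", "Ford", "Zaphod"]
```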
fn
+
+¶Converts a Python function into an AI function using a decorator.
+This decorator allows a Python function to be converted into an AI function. +The AI function uses a language model to generate its output.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
func |
+
+ Callable
+ |
+
+
+
+ The function to be converted. Defaults to None. + |
+
+ None
+ |
+
model_kwargs |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
client |
+
+ MarvinClient
+ |
+
+
+
+ The client to use for the AI function. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
Callable |
+ Callable
+ |
+
+
+
+ The converted AI function. + |
+
@fn
def list_fruit(n: int) -> list[str]:
    '''generates a list of n fruit'''

list_fruit(3)  # ['apple', 'banana', 'orange']
+
generate
+
+¶Generates a list of 'n' items of the provided type or based on instructions.
+Either a type or instructions must be provided. If instructions are provided +without a type, the type is assumed to be a string. The function generates at +least 'n' items.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
target |
+
+ type
+ |
+
+
+
+ The type of items to generate. Defaults to None. + |
+
+ None
+ |
+
instructions |
+
+ str
+ |
+
+
+
+ Instructions for the generation. Defaults to None. + |
+
+ None
+ |
+
n |
+
+ int
+ |
+
+
+
+ The number of items to generate. Defaults to 1. + |
+
+ 1
+ |
+
use_cache |
+
+ bool
+ |
+
+
+
+ If True, the function will cache the last +100 responses for each (target, instructions, and temperature) and use +those to avoid repetition on subsequent calls. Defaults to True. + |
+
+ True
+ |
+
temperature |
+
+ float
+ |
+
+
+
+ The temperature for the generation. Defaults to 1. + |
+
+ 1
+ |
+
model_kwargs |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
client |
+
+ MarvinClient
+ |
+
+
+
+ The client to use for the AI function. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
list |
+ list[T]
+ |
+
+
+
+ A list of generated items. + |
+
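A minimal sketch of the function, assuming it is re-exported as marvin.generate:

```python
import marvin

fruits = marvin.generate(target=str, n=3, instructions="names of tropical fruits")
print(fruits)  # e.g. ["mango", "papaya", "guava"]
```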
generate_llm_response
+
+¶Generates a language model response based on a provided prompt template.
+This function uses a language model to generate a response based on a provided prompt template. +The function supports additional arguments for the prompt and the language model.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
prompt_template |
+
+ str
+ |
+
+
+
+ The template for the prompt. + |
+ + required + | +
prompt_kwargs |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the prompt. Defaults to None. + |
+
+ None
+ |
+
model_kwargs |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the language model. Defaults to None. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
ChatResponse |
+ ChatResponse
+ |
+
+
+
+ The generated response from the language model. + |
+
model
+
+¶Class decorator for instantiating a Pydantic model from a string.
+This decorator allows a Pydantic model to be instantiated from a string. It's +equivalent to subclassing the Model class.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
type_ |
+
+ Union[Type[M], None]
+ |
+
+
+
+ The type of the Pydantic model. +Defaults to None. + |
+
+ None
+ |
+
model_kwargs |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
Returns:
+Type | +Description | +
---|---|
+ Union[Type[M], Callable[[Type[M]], Type[M]]]
+ |
+
+
+
+ Union[Type[M], Callable[[Type[M]], Type[M]]]: The decorated Pydantic model. + |
+
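A minimal sketch of the decorator, assuming it is re-exported as marvin.model:

```python
import marvin
from pydantic import BaseModel

@marvin.model
class Location(BaseModel):
    city: str
    state: str

loc = Location("the big apple")
print(loc.city, loc.state)  # typically "New York NY"
```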
Tip
+All async methods that have an _async suffix have sync equivalents that can be called without the suffix, e.g. run() instead of await run_async().
Application
+
+
+¶Tools for Applications have a special property: if any parameter is
+annotated as Application
, then the tool will be called with the
+Application instance as the value for that parameter. This allows tools to
+access the Application's state and other properties.
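A sketch of the injection behavior described above; the import path and the name attribute are assumptions:

```python
# assumption: Application lives at marvin.beta.applications; adjust for your version
from marvin.beta.applications import Application

def describe_app(app: Application) -> str:
    # because this parameter is annotated as Application, the running
    # Application instance is passed in automatically when the tool is called
    return f"This tool is running inside the application named {app.name!r}."
```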
Tip
+All async methods that have an _async suffix have sync equivalents that can be called without the suffix, e.g. run() instead of await run_async().
Assistant
+
+
+¶The Assistant class represents an AI assistant that can be created, deleted, +loaded, and interacted with.
+ + + +Attributes:
+Name | +Type | +Description | +
---|---|---|
id |
+
+ str
+ |
+
+
+
+ The unique identifier of the assistant. None if the assistant + hasn't been created yet. + |
+
name |
+
+ str
+ |
+
+
+
+ The name of the assistant. + |
+
model |
+
+ str
+ |
+
+
+
+ The model used by the assistant. + |
+
metadata |
+
+ dict
+ |
+
+
+
+ Additional data about the assistant. + |
+
file_ids |
+
+ list
+ |
+
+
+
+ List of file IDs associated with the assistant. + |
+
tools |
+
+ list
+ |
+
+
+
+ List of tools used by the assistant. + |
+
instructions |
+
+ list
+ |
+
+
+
+ List of instructions for the assistant. + |
+
say_async
+
+
+ async
+
+
+¶A convenience method for adding a user message to the assistant's +default thread, running the assistant, and returning the assistant's +messages.
+ +
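A minimal sketch using the sync equivalent say (see the tip above); the import path is an assumption:

```python
# assumption: the import path is marvin.beta.assistants
from marvin.beta.assistants import Assistant

ai = Assistant(name="Marvin", instructions="Answer tersely and a little gloomily.")
ai.say("What should I do with all these towels?")  # returns the assistant's messages
```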
download_temp_file
+
+¶Downloads a file from OpenAI's servers and saves it to a temporary file.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
file_id |
+
+ str
+ |
+
+
+
+ The ID of the file to be downloaded. + |
+ + required + | +
suffix |
+
+ str
+ |
+
+
+
+ The file extension to be used for the temporary file. + |
+
+ None
+ |
+
Returns:
+Type | +Description | +
---|---|
+ | +
+
+
+ The file path of the downloaded temporary file. + |
+
pprint_message
+
+¶Pretty-prints a single message using the rich library, highlighting the +speaker's role, the message text, any available images, and the message +timestamp in a panel format.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
message |
+
+ ThreadMessage
+ |
+
+
+
+ A message object + |
+ + required + | +
pprint_messages
+
+¶Iterates over a list of messages and pretty-prints each one.
+Messages are pretty-printed using the rich library, highlighting the +speaker's role, the message text, any available images, and the message +timestamp in a panel format.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
messages |
+
+ list[ThreadMessage]
+ |
+
+
+
+ A list of ThreadMessage objects to be +printed. + |
+ + required + | +
pprint_run_step
+
+¶Formats and prints a run step with status information.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
run_step |
+
+ RunStep
+ |
+
+
+
+ A RunStep object containing the details of the run step. + |
+ + required + | +
Tip
+All async methods that have an _async suffix have sync equivalents that can be called without the suffix, e.g. run() instead of await run_async().
Run
+
+
+¶The Run class represents a single execution of an assistant.
+ + + +Attributes:
+Name | +Type | +Description | +
---|---|---|
thread |
+
+ Thread
+ |
+
+
+
+ The thread in which the run is executed. + |
+
assistant |
+
+ Assistant
+ |
+
+
+
+ The assistant that is being run. + |
+
instructions |
+
+ str
+ |
+
+
+
+ Replacement instructions for the run. + |
+
additional_instructions |
+
+ str
+ |
+
+
+
+ Additional instructions to append + to the assistant's instructions. + |
+
tools |
+
+ list[Union[AssistantTool, Callable]]
+ |
+
+
+
+ Replacement tools + for the run. + |
+
additional_tools |
+
+ list[AssistantTool]
+ |
+
+
+
+ Additional tools to append + to the assistant's tools. + |
+
run |
+
+ Run
+ |
+
+
+
+ The OpenAI run object. + |
+
data |
+
+ Any
+ |
+
+
+
+ Any additional data associated with the run. + |
+
RunMonitor
+
+
+¶
refresh_run_steps_async
+
+
+ async
+
+
+¶Asynchronously refreshes and updates the run steps list.
+This function fetches the latest run steps up to a specified limit and
+checks if the latest run step in the current run steps list
+(self.steps
) is included in the new batch. If the latest run step is
+missing, it continues to fetch additional run steps in batches, up to a
+maximum count, using pagination. The function then updates
+self.steps
with these new run steps, ensuring any existing run steps
+are updated with their latest versions and new run steps are appended in
+their original order.
Tip
+All async methods that have an _async suffix have sync equivalents that can be called without the suffix, e.g. run() instead of await run_async().
Thread
+
+
+¶The Thread class represents a conversation thread with an assistant.
+ + + +Attributes:
+Name | +Type | +Description | +
---|---|---|
id |
+
+ Optional[str]
+ |
+
+
+
+ The unique identifier of the thread. None if the thread + hasn't been created yet. + |
+
metadata |
+
+ dict
+ |
+
+
+
+ Additional data about the thread. + |
+
add_async
+
+
+ async
+
+
+¶Add a user message to the thread.
+ +
chat
+
+¶Starts an interactive chat session with the provided assistant.
+ +
create_async
+
+
+ async
+
+
+¶Creates a thread.
+ +
get_messages_async
+
+
+ async
+
+
+¶Asynchronously retrieves messages from the thread.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
limit |
+
+ int
+ |
+
+
+
+ The maximum number of messages to return. + |
+
+ None
+ |
+
before_message |
+
+ str
+ |
+
+
+
+ The ID of the message to start the list from, + retrieving messages sent before this one. + |
+
+ None
+ |
+
after_message |
+
+ str
+ |
+
+
+
+ The ID of the message to start the list from, + retrieving messages sent after this one. + |
+
+ None
+ |
+
json_compatible |
+
+ bool
+ |
+
+
+
+ If True, returns messages as dictionaries. + If False, returns messages as ThreadMessage + objects. Default is False. + |
+
+ False
+ |
+
Returns:
+Type | +Description | +
---|---|
+ list[Union[ThreadMessage, dict]]
+ |
+
+
+
+ list[Union[ThreadMessage, dict]]: A list of messages from the thread, either + as dictionaries or ThreadMessage objects, + depending on the value of json_compatible. + |
+
run_async
+
+
+ async
+
+
+¶Creates and returns a Run
of this thread with the provided assistant.
Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
assistant |
+
+ Assistant
+ |
+
+
+
+ The assistant to run the thread with. + |
+ + required + | +
run_kwargs |
+ + | +
+
+
+ Additional keyword arguments to pass to the Run constructor. + |
+
+ {}
+ |
+
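A sketch that ties the methods above together using their sync equivalents; the import path is an assumption:

```python
# assumption: both classes are importable from marvin.beta.assistants
from marvin.beta.assistants import Assistant, Thread

assistant = Assistant(name="Marvin")
thread = Thread()
thread.create()                            # sync equivalent of create_async
thread.add("Write a haiku about towels.")  # sync equivalent of add_async
thread.run(assistant)                      # sync equivalent of run_async
print(thread.get_messages())               # sync equivalent of get_messages_async
```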
ThreadMonitor
+
+
+¶The ThreadMonitor class represents a monitor for a specific thread.
+ + + +Attributes:
+Name | +Type | +Description | +
---|---|---|
thread_id |
+
+ str
+ |
+
+
+
+ The unique identifier of the thread being monitored. + |
+
last_message_id |
+
+ Optional[str]
+ |
+
+
+
+ The ID of the last message received in the thread. + |
+
on_new_message |
+
+ Callable
+ |
+
+
+
+ A callback function that is called when a new message + is received in the thread. + |
+
run_async
+
+
+ async
+
+
+¶Run the thread monitor in a loop, checking for new messages every interval_seconds
.
Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
interval_seconds |
+
+ int
+ |
+
+
+
+ The number of seconds to wait between + checking for new messages. Default is 1. + |
+
+ None
+ |
+
Beta
+Please note that vision support in Marvin is still in beta, as OpenAI has not finalized the vision API yet. While it works as expected, it is subject to change.
+This model contains tools for working with the vision API, including
+vision-enhanced versions of cast
, extract
, and classify
.
caption
+
+¶Generates a caption for an image using a language model.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
image |
+
+ Union[str, Path, Image]
+ |
+
+
+
+ URL or local path of the image. + |
+ + required + | +
instructions |
+
+ str
+ |
+
+
+
+ Instructions for the caption generation. + |
+
+ None
+ |
+
model_kwargs |
+
+ dict
+ |
+
+
+
+ Additional arguments for the language model. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
str |
+ str
+ |
+
+
+
+ Generated caption. + |
+
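A minimal sketch of caption; exposing the vision tools under marvin.beta is an assumption:

```python
import marvin

# assumption: the vision version of caption is available as marvin.beta.caption
text = marvin.beta.caption("https://example.com/lighthouse.png")
print(text)
```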
cast
+
+¶Converts the input data into the specified type using a vision model.
+This function uses a vision model and a language model to convert the input +data into a specified type. The conversion process can be guided by specific +instructions. The function also supports additional arguments for both models.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
images |
+
+ list[Union[str, Path]]
+ |
+
+
+
+ The images to be processed. + |
+
+ None
+ |
+
data |
+
+ str
+ |
+
+
+
+ The data to be converted. + |
+ + required + | +
target |
+
+ type
+ |
+
+
+
+ The type to convert the data into. + |
+ + required + | +
instructions |
+
+ str
+ |
+
+
+
+ Specific instructions for the conversion. +Defaults to None. + |
+
+ None
+ |
+
vision_model_kwargs |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for +the vision model. Defaults to None. + |
+
+ None
+ |
+
model_kwargs |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
T |
+ T
+ |
+
+
+
+ The converted data of the specified type. + |
+
classify
+
+¶Classifies provided data and/or images into one of the specified labels.

Parameters:

Name | Type | Description
---|---|---
data | Union[str, Image] | Data or an image for classification.
labels | Union[Enum, list[T], type] | Labels to classify into.
images | Union[Union[str, Path], list[Union[str, Path]]] | Additional images for classification (optional).
instructions | str | Instructions for the classification (optional).
vision_model_kwargs | dict | Arguments for the vision model (optional).
model_kwargs | dict | Arguments for the language model (optional).
+ + + +Returns:
+Name | Type | +Description | +
---|---|---|
T |
+ T
+ |
+
+
+
+ Label that the data/images were classified into. + |
+
extract
+
+¶Extracts information from provided data and/or images using a vision model.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
data |
+
+ Union[str, Image]
+ |
+
+
+
+ Data or an image for information extraction. + |
+ + required + | +
target |
+
+ type[T]
+ |
+
+
+
+ The type to extract the data into. + |
+ + required + | +
instructions |
+
+ str
+ |
+
+
+
+ Instructions for extraction. + |
+
+ None
+ |
+
images |
+
+ list[Union[str, Path]]
+ |
+
+
+
+ Additional images for extraction. + |
+
+ None
+ |
+
vision_model_kwargs |
+
+ dict
+ |
+
+
+
+ Arguments for the vision model. + |
+
+ None
+ |
+
model_kwargs |
+
+ dict
+ |
+
+
+
+ Arguments for the language model. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
T |
+ T
+ |
+
+
+
+ Extracted data of the specified type. + |
+
generate_vision_response
+
+¶Generates a language model response based on a provided prompt template and images.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
images |
+
+ list[Image]
+ |
+
+
+
+ Images used in the prompt, either URLs or local paths. + |
+ + required + | +
prompt_template |
+
+ str
+ |
+
+
+
+ Template for the language model prompt. + |
+ + required + | +
prompt_kwargs |
+
+ dict
+ |
+
+
+
+ Keyword arguments for the prompt. + |
+
+ None
+ |
+
model_kwargs |
+
+ dict
+ |
+
+
+
+ Keyword arguments for the language model. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
ChatResponse |
+ ChatResponse
+ |
+
+
+
+ Response from the language model. + |
+
Model
+
+
+¶A Pydantic model that can be instantiated from a natural language string, in +addition to keyword arguments.
+ + + + +
from_text
+
+
+ classmethod
+
+
+¶Class method to create an instance of the model from a natural language string.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
text |
+
+ str
+ |
+
+
+
+ The natural language string to convert into an instance of the model. + |
+ + required + | +
model_kwargs |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
**kwargs |
+ + | +
+
+
+ Additional keyword arguments to pass to the model's constructor. + |
+
+ {}
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
Model |
+ Model
+ |
+
+
+
+ An instance of the model. + |
+
cast
+
+¶Converts the input data into the specified type.
+This function uses a language model to convert the input data into a specified type. +The conversion process can be guided by specific instructions. The function also +supports additional arguments for the language model.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
data |
+
+ str
+ |
+
+
+
+ The data to be converted. + |
+ + required + | +
target |
+
+ type
+ |
+
+
+
+ The type to convert the data into. + |
+ + required + | +
instructions |
+
+ str
+ |
+
+
+
+ Specific instructions for the conversion. Defaults to None. + |
+
+ None
+ |
+
model_kwargs |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the language model. Defaults to None. + |
+
+ None
+ |
+
client |
+
+ MarvinClient
+ |
+
+
+
+ The client to use for the AI function. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
T |
+ T
+ |
+
+
+
+ The converted data of the specified type. + |
+
classifier
+
+¶Class decorator that modifies the behavior of an Enum class to classify a string.
+This decorator modifies the call method of the Enum class to use the
+marvin.classify
function instead of the default Enum behavior. This allows
+the Enum class to classify a string based on its members.
Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
cls |
+
+ Enum
+ |
+
+
+
+ The Enum class to be decorated. + |
+
+ None
+ |
+
instructions |
+
+ str
+ |
+
+
+
+ Instructions for the AI on +how to perform the classification. + |
+
+ None
+ |
+
model_kwargs |
+
+ dict
+ |
+
+
+
+ Additional keyword +arguments to pass to the model. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
Enum | + | +
+
+
+ The decorated Enum class with modified call method. + |
+
Raises:
+Type | +Description | +
---|---|
+ AssertionError
+ |
+
+
+
+ If the decorated class is not a subclass of Enum. + |
+
classify
+
+¶Classifies the provided data based on the provided labels.
+This function uses a language model with a logit bias to classify the input +data. The logit bias constrains the language model's response to a single +token, making this function highly efficient for classification tasks. The +function will always return one of the provided labels.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
data |
+
+ str
+ |
+
+
+
+ The data to be classified. + |
+ + required + | +
labels |
+
+ Union[Enum, list[T], type]
+ |
+
+
+
+ The labels to classify the data into. + |
+ + required + | +
instructions |
+
+ str
+ |
+
+
+
+ Specific instructions for the +classification. Defaults to None. + |
+
+ None
+ |
+
model_kwargs |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
client |
+
+ MarvinClient
+ |
+
+
+
+ The client to use for the AI function. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
T |
+ T
+ |
+
+
+
+ The label that the data was classified into. + |
+
extract
+
+¶Extracts entities of a specific type from the provided data.
+This function uses a language model to identify and extract entities of the +specified type from the input data. The extracted entities are returned as a +list.
+Note that either a target type or instructions must be provided (or both). +If only instructions are provided, the target type is assumed to be a +string.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
data |
+
+ str
+ |
+
+
+
+ The data from which to extract entities. + |
+ + required + | +
target |
+
+ type
+ |
+
+
+
+ The type of entities to extract. + |
+
+ None
+ |
+
instructions |
+
+ str
+ |
+
+
+
+ Specific instructions for the extraction. +Defaults to None. + |
+
+ None
+ |
+
model_kwargs |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
client |
+
+ MarvinClient
+ |
+
+
+
+ The client to use for the AI function. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
list |
+ list[T]
+ |
+
+
+
+ A list of extracted entities of the specified type. + |
+
fn
+
+¶Converts a Python function into an AI function using a decorator.
+This decorator allows a Python function to be converted into an AI function. +The AI function uses a language model to generate its output.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
func |
+
+ Callable
+ |
+
+
+
+ The function to be converted. Defaults to None. + |
+
+ None
+ |
+
model_kwargs |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
client |
+
+ MarvinClient
+ |
+
+
+
+ The client to use for the AI function. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
Callable |
+ Callable
+ |
+
+
+
+ The converted AI function. + |
+
@fn
def list_fruit(n: int) -> list[str]:
    '''generates a list of n fruit'''

list_fruit(3)  # ['apple', 'banana', 'orange']
+
generate
+
+¶Generates a list of 'n' items of the provided type or based on instructions.
+Either a type or instructions must be provided. If instructions are provided +without a type, the type is assumed to be a string. The function generates at +least 'n' items.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
target |
+
+ type
+ |
+
+
+
+ The type of items to generate. Defaults to None. + |
+
+ None
+ |
+
instructions |
+
+ str
+ |
+
+
+
+ Instructions for the generation. Defaults to None. + |
+
+ None
+ |
+
n |
+
+ int
+ |
+
+
+
+ The number of items to generate. Defaults to 1. + |
+
+ 1
+ |
+
use_cache |
+
+ bool
+ |
+
+
+
+ If True, the function will cache the last +100 responses for each (target, instructions, and temperature) and use +those to avoid repetition on subsequent calls. Defaults to True. + |
+
+ True
+ |
+
temperature |
+
+ float
+ |
+
+
+
+ The temperature for the generation. Defaults to 1. + |
+
+ 1
+ |
+
model_kwargs |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
client |
+
+ MarvinClient
+ |
+
+
+
+ The client to use for the AI function. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
list |
+ list[T]
+ |
+
+
+
+ A list of generated items. + |
+
image
+
+¶A decorator that transforms a function's output into an image.
+This decorator takes a function that returns a string, and uses that string +as instructions to generate an image. The generated image is then returned.
+The decorator can be used with or without parentheses. If used without
+parentheses, the decorated function's output is used as the instructions
+for the image. If used with parentheses, an optional literal
argument can
+be provided. If literal
is set to True
, the function's output is used
+as the literal instructions for the image, without any modifications.
Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
fn |
+
+ callable
+ |
+
+
+
+ The function to decorate. If |
+
+ None
+ |
+
literal |
+
+ bool
+ |
+
+
+
+ Whether to use the function's output as the
+literal instructions for the image. Defaults to |
+
+ False
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
callable | + | +
+
+
+ The decorated function. + |
+
model
+
+¶Class decorator for instantiating a Pydantic model from a string.
+This decorator allows a Pydantic model to be instantiated from a string. It's +equivalent to subclassing the Model class.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
type_ |
+
+ Union[Type[M], None]
+ |
+
+
+
+ The type of the Pydantic model. +Defaults to None. + |
+
+ None
+ |
+
model_kwargs |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
Returns:
+Type | +Description | +
---|---|
+ Union[Type[M], Callable[[Type[M]], Type[M]]]
+ |
+
+
+
+ Union[Type[M], Callable[[Type[M]], Type[M]]]: The decorated Pydantic model. + |
+
paint
+
+¶Generates an image based on the provided instructions and context.
+This function uses the DALLE-3 API to generate an image based on the provided
+instructions and context. By default, the API modifies prompts to add detail
+and style. This behavior can be disabled by setting literal=True
.
Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
instructions |
+
+ str
+ |
+
+
+
+ The instructions for the image generation. +Defaults to None. + |
+
+ None
+ |
+
context |
+
+ dict
+ |
+
+
+
+ The context for the image generation. Defaults to None. + |
+
+ None
+ |
+
literal |
+
+ bool
+ |
+
+
+
+ Whether to disable the API's default behavior of +modifying prompts. Defaults to False. + |
+
+ False
+ |
+
model_kwargs |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
ImagesResponse | + | +
+
+
+ The response from the DALLE-3 API, which includes the +generated image. + |
+
speak
+
+¶Generates audio from text using an AI.
+This function uses an AI to generate audio from the provided text. The voice +used for the audio can be specified.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
text |
+
+ str
+ |
+
+
+
+ The text to generate audio from. + |
+ + required + | +
voice |
+
+ Literal['alloy', 'echo', 'fable', 'onyx', 'nova', 'shimmer']
+ |
+
+
+
+ The voice to use for the audio. Defaults to None. + |
+
+ None
+ |
+
model_kwargs |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
HttpxBinaryResponseContent |
+ HttpxBinaryResponseContent
+ |
+
+
+
+ The generated audio. + |
+
speech
+
+¶Function decorator that generates audio from the wrapped function's return +value. The voice used for the audio can be specified.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
fn |
+
+ Callable
+ |
+
+
+
+ The function to wrap. Defaults to None. + |
+
+ None
+ |
+
voice |
+
+ str
+ |
+
+
+
+ The voice to use for the audio. Defaults to None. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
Callable |
+ Callable
+ |
+
+
+
+ The wrapped function. + |
+
Settings for configuring marvin
.
AssistantSettings
+
+
+¶Settings for the assistant API.
+ + + +Attributes:
+Name | +Type | +Description | +
---|---|---|
model |
+
+ str
+ |
+
+
+
+ The default assistant model to use, defaults to |
+
ImageSettings
+
+
+¶Settings for OpenAI's image API.
+ + + +Attributes:
+Name | +Type | +Description | +
---|---|---|
model |
+
+ str
+ |
+
+
+
+ The default image model to use, defaults to |
+
size |
+
+ Literal['1024x1024', '1792x1024', '1024x1792']
+ |
+
+
+
+ The default image size to use, defaults to |
+
response_format |
+
+ Literal['url', 'b64_json']
+ |
+
+
+
+ The default response format to use, defaults to |
+
style |
+
+ Literal['vivid', 'natural']
+ |
+
+
+
+ The default style to use, defaults to |
+
OpenAISettings
+
+
+¶Settings for the OpenAI API.
+ + + +Attributes:
+Name | +Type | +Description | +
---|---|---|
api_key |
+
+ Optional[SecretStr]
+ |
+
+
+
+ Your OpenAI API key. + |
+
organization |
+
+ Optional[str]
+ |
+
+
+
+ Your OpenAI organization ID. + |
+
llms |
+
+ Optional[str]
+ |
+
+
+
+ Settings for the chat API. + |
+
images |
+
+ ImageSettings
+ |
+
+
+
+ Settings for the images API. + |
+
audio |
+
+ AudioSettings
+ |
+
+
+
+ Settings for the audio API. + |
+
assistants |
+
+ AssistantSettings
+ |
+
+
+
+ Settings for the assistants API. + |
+
Set the OpenAI API key: +
+
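A minimal sketch; assigning the key directly on the settings object, and the MARVIN_OPENAI_API_KEY environment variable mentioned in the comment, are assumptions based on the settings described on this page:

```python
import marvin

# assumption: the key can be assigned at runtime; it is more commonly supplied
# via an environment variable such as MARVIN_OPENAI_API_KEY
marvin.settings.openai.api_key = "sk-..."
```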
Settings
+
+
+¶Settings for marvin
.
This is the main settings object for marvin
.
Attributes:
+Name | +Type | +Description | +
---|---|---|
openai |
+
+ OpenAISettings
+ |
+
+
+
+ Settings for the OpenAI API. + |
+
log_level |
+
+ str
+ |
+
+
+
+ The log level to use, defaults to |
+
Set the log level to INFO
:
+
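A minimal sketch; runtime assignment is an assumption, and an environment variable such as MARVIN_LOG_LEVEL is another common route:

```python
import marvin

marvin.settings.log_level = "INFO"  # assumption: settings allow assignment at runtime
```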
SpeechSettings
+
+
+¶Settings for OpenAI's speech API.
+ + + +Attributes:
+Name | +Type | +Description | +
---|---|---|
model |
+
+ str
+ |
+
+
+
+ The default speech model to use, defaults to |
+
voice |
+
+ Literal['alloy', 'echo', 'fable', 'onyx', 'nova', 'shimmer']
+ |
+
+
+
+ The default voice to use, defaults to |
+
response_format |
+
+ Literal['mp3', 'opus', 'aac', 'flac']
+ |
+
+
+
+ The default response format to use, defaults to |
+
speed |
+
+ float
+ |
+
+
+
+ The default speed to use, defaults to |
+
temporary_settings
+
+¶Temporarily override Marvin setting values, including nested settings objects.
+To override nested settings, use __
to separate nested attribute names.
Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
**kwargs |
+
+ Any
+ |
+
+
+
+ The settings to override, including nested settings. + |
+
+ {}
+ |
+
Temporarily override log level and OpenAI API key: +
import marvin
+from marvin.settings import temporary_settings
+
+# Override top-level settings
+with temporary_settings(log_level="INFO"):
+ assert marvin.settings.log_level == "INFO"
+assert marvin.settings.log_level == "DEBUG"
+
+# Override nested settings
+with temporary_settings(openai__api_key="new-api-key"):
+ assert marvin.settings.openai.api_key.get_secret_value() == "new-api-key"
+assert marvin.settings.openai.api_key.get_secret_value().startswith("sk-")
+
BaseMessage
+
+
+¶Base schema for messages
+ + + + +
MessageImageURLContent
+
+
+¶Schema for messages containing images
+ + + + +
MessageTextContent
+
+
+¶Schema for messages containing text
+ + + + +Utilities for working with asyncio.
+ + + +
ExposeSyncMethodsMixin
+
+
+¶A mixin that can take functions decorated with expose_sync_method
+and automatically create synchronous versions.
create_task
+
+¶Creates async background tasks in a way that is safe from garbage +collection.
+See +https://textual.textualize.io/blog/2023/02/11/the-heisenbug-lurking-in-your-async-code/
+Example:
async def my_coro(x: int) -> int:
    return x + 1

create_task(my_coro(1))
+ +
expose_sync_method
+
+¶Decorator that automatically exposes synchronous versions of async methods. +Note it doesn't work with classmethods.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
name |
+
+ str
+ |
+
+
+
+ The name of the synchronous method. + |
+ + required + | +
Returns:
+Type | +Description | +
---|---|
+ Callable[..., Any]
+ |
+
+
+
+ The decorated function. + |
+
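A small sketch of the decorator together with ExposeSyncMethodsMixin, assuming the module path marvin.utilities.asyncio:

```python
from marvin.utilities.asyncio import ExposeSyncMethodsMixin, expose_sync_method

class Greeter(ExposeSyncMethodsMixin):
    @expose_sync_method("greet")
    async def greet_async(self) -> str:
        return "hello"

print(Greeter().greet())  # the synchronous version is generated automatically
```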
make_sync
+
+¶Creates a synchronous function from an asynchronous function.
+ +
run_async
+
+
+ async
+
+
+¶Runs a synchronous function in an asynchronous manner.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
fn |
+
+ Callable[..., T]
+ |
+
+
+
+ The function to run. + |
+ + required + | +
*args |
+
+ Any
+ |
+
+
+
+ Positional arguments to pass to the function. + |
+
+ ()
+ |
+
**kwargs |
+
+ Any
+ |
+
+
+
+ Keyword arguments to pass to the function. + |
+
+ {}
+ |
+
Returns:
+Type | +Description | +
---|---|
+ T
+ |
+
+
+
+ The return value of the function. + |
+
run_sync
+
+¶Runs a coroutine from a synchronous context, either in the current event +loop or in a new one if there is no event loop running. The coroutine will +block until it is done. A thread will be spawned to run the event loop if +necessary, which allows coroutines to run in environments like Jupyter +notebooks where the event loop runs on the main thread.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
coroutine |
+
+ Coroutine[Any, Any, T]
+ |
+
+
+
+ The coroutine to run. + |
+ + required + | +
Returns:
+Type | +Description | +
---|---|
+ T
+ |
+
+
+
+ The return value of the coroutine. + |
+
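A minimal sketch of run_sync, assuming the module path marvin.utilities.asyncio:

```python
from marvin.utilities.asyncio import run_sync

async def add_one(x: int) -> int:
    return x + 1

print(run_sync(add_one(1)))  # 2, even when called from synchronous code
```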
run_sync_if_awaitable
+
+¶If the object is awaitable, run it synchronously. Otherwise, return the +object.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
obj |
+
+ Any
+ |
+
+
+
+ The object to run. + |
+ + required + | +
Returns:
+Type | +Description | +
---|---|
+ Any
+ |
+
+
+
+ The return value of the object if it is awaitable, otherwise the object + |
+
+ Any
+ |
+
+
+
+ itself. + |
+
Module for defining context utilities.
+ + + +
ScopedContext
+
+
+¶ScopedContext
provides a context management mechanism using contextvars
.
This class allows setting and retrieving key-value pairs in a scoped context, +which is preserved across asynchronous tasks and threads within the same context.
+ + + +Attributes:
+Name | +Type | +Description | +
---|---|---|
_context_storage |
+
+ ContextVar
+ |
+
+
+
+ A context variable to store the context data. + |
+
Basic Usage of ScopedContext +
+Module for Jinja utilities.
+ + + +
BaseEnvironment
+
+
+¶BaseEnvironment provides a configurable environment for rendering Jinja templates.
+This class encapsulates a Jinja environment with customizable global functions and +template settings, allowing for flexible template rendering.
+ + + +Attributes:
+Name | +Type | +Description | +
---|---|---|
environment |
+
+ Environment
+ |
+
+
+
+ The Jinja environment for template rendering. + |
+
globals |
+
+ dict[str, Any]
+ |
+
+
+
+ A dictionary of global functions and variables available in templates. + |
+
Basic Usage of BaseEnvironment +
+
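A sketch of basic usage; constructing BaseEnvironment with no arguments is an assumption:

```python
from marvin.utilities.jinja import BaseEnvironment

env = BaseEnvironment()  # assumption: sensible defaults for the environment and globals
print(env.render("Hello, {{ name }}!", name="Marvin"))  # "Hello, Marvin!"
```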
render
+
+¶Renders a given template str
or BaseTemplate
with provided context.
Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
template |
+
+ Union[str, Template]
+ |
+
+
+
+ The template to be rendered. + |
+ + required + | +
**kwargs |
+
+ Any
+ |
+
+
+
+ Context variables to be passed to the template. + |
+
+ {}
+ |
+
Returns:
+Type | +Description | +
---|---|
+ str
+ |
+
+
+
+ The rendered template as a string. + |
+
Transcript
+
+
+¶Transcript is a model representing a conversation involving multiple roles.
+ + + +Attributes:
+Name | +Type | +Description | +
---|---|---|
content |
+
+ str
+ |
+
+
+
+ The content of the transcript. + |
+
roles |
+
+ dict[str, str]
+ |
+
+
+
+ The roles involved in the transcript. + |
+
environment |
+
+ BaseEnvironment
+ |
+
+
+
+ The jinja environment to use for rendering the transcript. + |
+
Basic Usage of Transcript: +
from marvin.utilities.jinja import Transcript
+
+transcript = Transcript(
+ content="system: Hello, there! user: Hello, yourself!",
+ roles=["system", "user"],
+)
+print(transcript.render_to_messages())
+# [
+# BaseMessage(content='system: Hello, there!', role='system'),
+# BaseMessage(content='Hello, yourself!', role='user')
+# ]
+
split_text_by_tokens
+
+¶Splits a given text by a list of tokens.
+ + + +Parameters:
Name | Type | Description | Default
---|---|---|---
text | str | The text to be split. | required
split_tokens | | The tokens to split the text by. | required
only_on_newline | | If True, only match tokens that are either | required
Returns:
+Type | +Description | +
---|---|
+ list[tuple[str, str]]
+ |
+
+
+
+ A list of tuples containing the token and the text following it. + |
+
Basic Usage of split_text_by_tokens
```python
from marvin.utilities.jinja import split_text_by_tokens

text = "Hello, World!"
split_tokens = ["Hello", "World"]
pairs = split_text_by_tokens(text, split_tokens)
print(pairs)
# Output: [("Hello", ", "), ("World", "!")]
```
+Module for logging utilities.
+ + + +
get_logger
+
+
+ cached
+
+
+¶Retrieves a logger with the given name, or the root logger if no name is given.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
name |
+
+ Optional[str]
+ |
+
+
+
+ The name of the logger to retrieve. + |
+
+ None
+ |
+
Returns:
+Type | +Description | +
---|---|
+ Logger
+ |
+
+
+
+ The logger with the given name, or the root logger if no name is given. + |
+
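A minimal sketch, assuming the module path marvin.utilities.logging:

```python
from marvin.utilities.logging import get_logger

logger = get_logger("marvin.docs")
logger.info("Hello from the Marvin logger")
```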
Module for working with OpenAI.
+ + + +
get_openai_client
+
+¶Retrieves an OpenAI client with the given api key and organization.
+ + + +Returns:
+Type | +Description | +
---|---|
+ AsyncClient
+ |
+
+
+
+ The OpenAI client with the given api key and organization. + |
+
Module for Pydantic utilities.
+ + + +
cast_to_model
+
+¶Casts a type or callable to a Pydantic model.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
function_or_type |
+
+ Union[type, type[BaseModel], GenericAlias, Callable[..., Any]]
+ |
+
+
+
+ The type or callable to cast to a Pydantic model. + |
+ + required + | +
name |
+
+ Optional[str]
+ |
+
+
+
+ The name of the model to create. + |
+
+ None
+ |
+
description |
+
+ Optional[str]
+ |
+
+
+
+ The description of the model to create. + |
+
+ None
+ |
+
field_name |
+
+ Optional[str]
+ |
+
+
+
+ The name of the field to create. + |
+
+ None
+ |
+
Returns:
+Type | +Description | +
---|---|
+ type[BaseModel]
+ |
+
+
+
+ The Pydantic model created from the given type or callable. + |
+
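A sketch of cast_to_model applied to a plain function; printing the generated JSON schema assumes Pydantic v2:

```python
from marvin.utilities.pydantic import cast_to_model

def book_flight(destination: str, passengers: int = 1):
    """Books a flight."""

FlightModel = cast_to_model(book_flight)
print(FlightModel.model_json_schema())  # the parameters become model fields
```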
parse_as
+
+¶Parse a given data structure as a Pydantic model via TypeAdapter
.
Read more about TypeAdapter
here.
Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
type_ |
+
+ type[T]
+ |
+
+
+
+ The type to parse the data as. + |
+ + required + | +
data |
+
+ Any
+ |
+
+
+
+ The data to be parsed. + |
+ + required + | +
mode |
+
+ Literal['python', 'json', 'strings']
+ |
+
+
+
+ The mode to use for parsing, either |
+
+ 'python'
+ |
+
Returns:
+Type | +Description | +
---|---|
+ T
+ |
+
+
+
+ The parsed |
+
Basic Usage of parse_as
+
from marvin.utilities.pydantic import parse_as
+from pydantic import BaseModel
+
+class ExampleModel(BaseModel):
+ name: str
+
+# parsing python objects
+parsed = parse_as(ExampleModel, {"name": "Marvin"})
+assert isinstance(parsed, ExampleModel)
+assert parsed.name == "Marvin"
+
+# parsing json strings
+parsed = parse_as(
+ list[ExampleModel],
+ '[{"name": "Marvin"}, {"name": "Arthur"}]',
+ mode="json"
+)
+assert all(isinstance(item, ExampleModel) for item in parsed)
+assert parsed[0].name == "Marvin"
+assert parsed[1].name == "Arthur"
+
+# parsing raw strings
+parsed = parse_as(int, '123', mode="strings")
+assert isinstance(parsed, int)
+assert parsed == 123
+
Module for Slack-related utilities.
+ + + +
edit_slack_message
+
+
+ async
+
+
+¶Edit an existing Slack message by appending new text or replacing it.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
channel |
+
+ str
+ |
+
+
+
+ The Slack channel ID. + |
+ + required + | +
ts |
+
+ str
+ |
+
+
+
+ The timestamp of the message to edit. + |
+ + required + | +
new_text |
+
+ str
+ |
+
+
+
+ The new text to append or replace in the message. + |
+ + required + | +
mode |
+
+ str
+ |
+
+
+
+ The mode of text editing, 'append' (default) or 'replace'. + |
+
+ 'append'
+ |
+
Returns:
+Type | +Description | +
---|---|
+ Response
+ |
+
+
+
+ httpx.Response: The response from the Slack API. + |
+
fetch_current_message_text
+
+
+ async
+
+
+¶Fetch the current text of a specific Slack message using its timestamp.
+ +
get_thread_messages
+
+
+ async
+
+
+¶Get all messages from a slack thread.
+ +
get_token
+
+
+ async
+
+
+¶Get the Slack bot token from the environment.
+ +
search_slack_messages
+
+
+ async
+
+
+¶Search for messages in Slack workspace based on a query.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
query |
+
+ str
+ |
+
+
+
+ The search query. + |
+ + required + | +
max_messages |
+
+ int
+ |
+
+
+
+ The maximum number of messages to retrieve. + |
+
+ 3
+ |
+
channel |
+
+ str
+ |
+
+
+
+ The specific channel to search in. Defaults to None, +which searches all channels. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
list |
+ list
+ |
+
+
+
+ A list of message contents and permalinks matching the query. + |
+
Module for string utilities.
+ + + +
count_tokens
+
+¶Counts the number of tokens in the given text using the specified model.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
text |
+
+ str
+ |
+
+
+
+ The text to count tokens in. + |
+ + required + | +
model |
+
+ str
+ |
+
+
+
+ The model to use for token counting. If not provided, + the default model is used. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
int |
+ int
+ |
+
+
+
+ The number of tokens in the text. + |
+
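A minimal sketch of count_tokens, assuming the module path marvin.utilities.strings:

```python
from marvin.utilities.strings import count_tokens

n = count_tokens("So long, and thanks for all the fish.")
print(n)  # the token count under the default model's tokenizer
```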
detokenize
+
+¶Detokenizes the given tokens using the specified model.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
tokens |
+
+ list[int]
+ |
+
+
+
+ The tokens to detokenize. + |
+ + required + | +
model |
+
+ str
+ |
+
+
+
+ The model to use for detokenization. If not provided, + the default model is used. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
str |
+ str
+ |
+
+
+
+ The detokenized text. + |
+
slice_tokens
+
+¶Slices the given text to the specified number of tokens.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
text |
+
+ str
+ |
+
+
+
+ The text to slice. + |
+ + required + | +
n_tokens |
+
+ int
+ |
+
+
+
+ The number of tokens to slice the text to. + |
+ + required + | +
model |
+
+ str
+ |
+
+
+
+ The model to use for token counting. If not provided, + the default model is used. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
str |
+ str
+ |
+
+
+
+ The sliced text. + |
+
tokenize
+
+¶Tokenizes the given text using the specified model.
+ + + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
text |
+
+ str
+ |
+
+
+
+ The text to tokenize. + |
+ + required + | +
model |
+
+ str
+ |
+
+
+
+ The model to use for tokenization. If not provided, + the default model is used. + |
+
+ None
+ |
+
Returns:
+Type | +Description | +
---|---|
+ list[int]
+ |
+
+
+
+ list[int]: The tokenized text as a list of integers. + |
+
Module for LLM tool utilities.
+ + + +
custom_partial
+
+¶Returns a new function with partial application of the given keyword arguments. +The new function has the same name and docstring as the original, and its +signature excludes the provided kwargs.
+ +
tool_from_function
+
+¶Creates an OpenAI-compatible tool from a Python function.
+If any kwargs are provided, they will be stored and provided at runtime. +Provided kwargs will be removed from the tool's parameter schema.
+ +
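A minimal sketch of tool_from_function, assuming the module path marvin.utilities.tools:

```python
import random

from marvin.utilities.tools import tool_from_function

def roll_die(n_sides: int = 6) -> int:
    """Rolls a single die."""
    return random.randint(1, n_sides)

tool = tool_from_function(roll_die)  # an OpenAI-compatible tool definition
```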
tool_from_model
+
+¶Creates an OpenAI-compatible tool from a Pydantic model class.
+