
Commit

chore(*-genai): bump version (#371)
pr-Mais committed Feb 15, 2024
1 parent 9a7f976 commit ead4864
Showing 13 changed files with 71 additions and 48 deletions.
6 changes: 6 additions & 0 deletions firestore-genai-chatbot/CHANGELOG.md
@@ -1,3 +1,9 @@
## Version 0.0.7

- Update documentation

- Update extensions display name

## Version 0.0.6

- Make model a string param, to allow for future changes to model names.
28 changes: 11 additions & 17 deletions firestore-genai-chatbot/README.md
@@ -1,20 +1,22 @@
# Chatbot with Gemini
# Build Chatbot with the Gemini API

**Author**: Google Cloud (**[https://cloud.google.com/](https://cloud.google.com/)**)

**Description**: Deploys customizable chatbots using Google AI and Firestore.
**Description**: Deploys customizable chatbots using Gemini models and Firestore.



**Details**: Use this extension to easily deploy a chatbot using Gemini large language models, stored and managed by Cloud Firestore.
**Details**: Use this extension to easily deploy a chatbot using Gemini models, stored and managed by Cloud Firestore.

On install you will be asked to provide:

- **Generative AI Provider** This extension makes use of the Gemini family of large language models. Currently the extension supports the Google AI Gemini API (for developers) and the Vertex AI Gemini API.
- **Gemini API Provider** This extension makes use of the Gemini family of models. Currently the extension supports the Google AI Gemini API and the Vertex AI Gemini API. Learn more about the differences between the Google AI and Vertex AI Gemini APIs here.

- **Language model**: Which language model do you want to use? Please ensure you pick a model supported by your selected provider.
- **Gemini Model**: Input the name of the Gemini model you would like to use. To view available models for each provider, see:
- [Vertex AI Gemini models](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/models)
- [Google AI Gemini models](https://ai.google.dev/models/gemini)

- **Firestore collection path**: Used to store conversation history represented as documents. This extension will listen to the specified collection(s) for new message documents.
- **Firestore Collection Path**: Used to store conversation history represented as documents. This extension will listen to the specified collection(s) for new message documents.

The collection path also supports wildcards, so you can trigger the extension on multiple collections, each with their own private conversation history. This is useful if you want to create separate conversations for different users, or support multiple chat sessions.
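For example, here is a minimal sketch (not part of this extension's code) of how a client using the Firebase Admin SDK might add a message document to a per-user collection matched by a wildcard path. The path shape `users/{uid}/discussions/{discussionId}/messages` and the `prompt` field name are illustrative assumptions, not values defined by this extension:

```ts
import * as admin from 'firebase-admin';

admin.initializeApp();

// Write a new message into a per-user conversation collection.
// The collection path and the "prompt" field name are assumed for
// illustration; use whatever you configured on install.
async function sendMessage(uid: string, discussionId: string, prompt: string) {
  const ref = await admin
    .firestore()
    .collection(`users/${uid}/discussions/${discussionId}/messages`)
    .add({
      prompt,
      createTime: admin.firestore.FieldValue.serverTimestamp(),
    });
  return ref.id; // the extension picks up this document and responds in the conversation
}
```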

@@ -42,17 +44,9 @@ I want you to act as a travel guide. I will ask you questions about various trav

You can also configure the model to return different results by tweaking model parameters (temperature, candidate count, etc.), which are exposed as configuration during install as well.
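As a rough sketch of the kind of generation settings involved, the tunable parameters correspond to `generationConfig` fields like the following; the field names below follow the camelCase variant that appears in this extension's tests, and the values are placeholders only:

```ts
// Placeholder values only — tune per use case.
const generationConfig = {
  temperature: 0.4,      // higher = more varied output
  topK: 40,              // sample from the K most likely tokens
  topP: 0.95,            // nucleus sampling probability mass
  candidateCount: 1,     // number of candidate responses
  maxOutputTokens: 1024, // cap on response length
};
```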

## About the models

The extension gives you a choice of 2 models:

- Gemini Pro chat model

- Gemini Pro Vision multimodal chat model.

## Additional Setup

Ensure you have a [Cloud Firestore database](https://firebase.google.com/docs/firestore/quickstart) set up in your Firebase project, and have obtained an API key for Google AI's Gemini API.
Ensure you have a [Cloud Firestore database](https://firebase.google.com/docs/firestore/quickstart) set up in your Firebase project, and have obtained an API key for the Gemini API.

### Regenerating a response

@@ -79,9 +73,9 @@ This extension uses other Firebase and Google Cloud Platform services, which hav

* Google AI API Key: If you have selected Google AI as your provider, then this parameter is required. If you have instead selected Vertex AI, then this parameter is not required, and application default credentials will be used.

* Generative model: Which genai model do you want to use? For Google AI the list of supported models is [here](https://ai.google.dev/models/gemini), and this parameter should be set to the model name, the second segment of the model code (for example models/gemini-pro should be chosen as gemini-pro). For Vertex AI, there is a list of models [here](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/models), currently only the Gemini family of models listed there is supported.
* Gemini model: Input the name of the Gemini model you would like to use. To view available models for each provider, see: [Vertex AI Gemini models](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/models), [Google AI Gemini models](https://ai.google.dev/models/gemini)

* Collection Path: Path to the Firestore collection which will represent a chat with the generative model.
* Firestore Collection Path: Used to store conversation history represented as documents. This extension will listen to the specified collection(s) for new message documents.

* Prompt Field: The field in the message document that contains the prompt.

2 changes: 1 addition & 1 deletion firestore-genai-chatbot/extension.yaml
@@ -1,5 +1,5 @@
name: firestore-genai-chatbot
version: 0.0.6
version: 0.0.7
specVersion: v1beta

icon: icon.png
@@ -203,12 +203,19 @@ describe('generateMessage', () => {
expect(mockGetClient).toHaveBeenCalledWith(config.googleAi.apiKey);

expect(mockGetModel).toHaveBeenCalledTimes(1);
expect(mockGetModel).toBeCalledWith({model: config.googleAi.model});
expect(mockGetModel).toHaveBeenCalledWith({model: config.googleAi.model});

expect(mockStartChat).toHaveBeenCalledTimes(1);
expect(mockStartChat).toHaveBeenCalledWith({
history: [],
generationConfig: {},
generationConfig: {
candidate_count: undefined,
max_output_tokens: undefined,
temperature: undefined,
top_k: undefined,
top_p: undefined,
},
safetySettings: [],
});
expect(mockSendMessage).toHaveBeenCalledTimes(1);
expect(mockSendMessage).toHaveBeenCalledWith(message.prompt);
@@ -246,12 +253,19 @@ describe('generateMessage', () => {
expect(mockGetClient).toHaveBeenCalledWith(config.googleAi.apiKey);

expect(mockGetModel).toHaveBeenCalledTimes(1);
expect(mockGetModel).toBeCalledWith({model: config.googleAi.model});
expect(mockGetModel).toHaveBeenCalledWith({model: config.googleAi.model});

expect(mockStartChat).toHaveBeenCalledTimes(1);
expect(mockStartChat).toHaveBeenCalledWith({
history: [],
generationConfig: {},
generationConfig: {
candidateCount: undefined,
maxOutputTokens: undefined,
temperature: undefined,
topK: undefined,
topP: undefined,
},
safetySettings: [],
});
expect(mockSendMessage).toHaveBeenCalledTimes(1);
expect(mockSendMessage).toHaveBeenCalledWith(message.prompt);
@@ -56,9 +56,6 @@ describe('extractOverrides function', () => {
mockDocSnap['context'] = 123; // Invalid context
mockDocSnap['topK'] = 'not-a-number'; // Invalid topK

const overrides = extractOverrides(mockDocSnap);

// Expect the function to skip invalid fields or handle them as per your error handling logic
expect(overrides).toEqual({});
expect(() => extractOverrides(mockDocSnap)).toThrow();
});
});
@@ -205,7 +205,7 @@ describe('generateMessage', () => {
expect(mockGetClient).toHaveBeenCalledTimes(1);

expect(mockGetModel).toHaveBeenCalledTimes(1);
expect(mockGetModel).toBeCalledWith({model: config.googleAi.model});
expect(mockGetModel).toHaveBeenCalledWith({model: config.googleAi.model});
expect(mockGenerateContentStream).toHaveBeenCalledTimes(1);
expect(mockGenerateContentStream).toHaveBeenCalledWith({
contents: [{parts: [{text: 'hello chat bison'}], role: 'user'}],
@@ -216,6 +216,7 @@ describe('generateMessage', () => {
top_k: undefined,
top_p: undefined,
},
safety_settings: [],
});
});

@@ -261,6 +262,7 @@ describe('generateMessage', () => {
top_k: undefined,
top_p: undefined,
},
safety_settings: [],
});
});
});
2 changes: 1 addition & 1 deletion firestore-genai-chatbot/functions/tsconfig.build.json
@@ -8,4 +8,4 @@
"__mocks__",
"__tests__"
]
}
}
5 changes: 3 additions & 2 deletions firestore-genai-chatbot/functions/tsconfig.json
@@ -7,8 +7,9 @@
"skipLibCheck": true,
"sourceMap": true,
"strict": true,
"target": "es2017"
"target": "es2017",
"types": ["node", "jest"]
},
"compileOnSave": true,
"include": ["src"]
"include": ["."]
}
6 changes: 6 additions & 0 deletions firestore-multimodal-genai/CHANGELOG.md
@@ -1,3 +1,9 @@
## Version 0.0.5

- Update documentation

- Update extensions display name

## Version 0.0.4

- Make model a string param, to allow for future changes to model names.
32 changes: 16 additions & 16 deletions firestore-multimodal-genai/README.md
@@ -1,25 +1,26 @@
# Multimodal Tasks with Gemini
# Multimodal Tasks with the Gemini API

**Author**: Google Cloud (**[https://cloud.google.com](https://cloud.google.com)**)

**Description**: Performs AI/ML tasks on text and images, customizable with prompt engineering, using Gemini AI and Firestore.
**Description**: Performs multimodal generative tasks on text and images, customizable with prompt engineering, using Gemini models and Firestore.



**Details**: This extension allows you to perform generative tasks using Google AI, a custom prompt, and Firestore.
**Details**: This extension allows you to perform generative tasks using the Gemini API, a custom prompt, and Firestore.

On installation, you will be asked to provide the following information:

- **Generative AI Provider** This extension makes use of the Gemini family of generative models. The extension supports both the Google AI and Vertex AI Gemini APIs
- **Generative model**: Which genai model do you want to use?
- **Prompt:** This is the text that you want Gemini to generate a response for. It can be free-form text or it can use handlebars variables to substitute values from the Firestore document.
- **Firestore collection path:** This is the path to the Firestore collection that contains the documents that you want to perform the generative task on.
- **Response field:** This is the name of the field in the Firestore document where you want the extension to store the response from the Model API.
- **Gemini API Provider**: This extension makes use of the Gemini family of large language models. Currently the extension supports the Google AI Gemini API (for developers) and the Vertex AI Gemini API. Learn more about the differences between the Google AI and Vertex AI Gemini APIs here.
- **Gemini Model**: Input the name of the Gemini model you would like to use. To view available models for each provider, see:
  - [Vertex AI Gemini models](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/models)
  - [Google AI Gemini models](https://ai.google.dev/models/gemini)
- **Firestore Collection Path**: Used to store conversation history represented as documents. This extension will listen to the specified collection(s) for new message documents.
- **Prompt**: This is the text that you want the Gemini API to generate a response for. It can be free-form text or it can use handlebars variables to substitute values from the Firestore document.

This extension will listen to the specified collection for new documents. When such a document is added, the extension will:

1. Substitute any variables from the document into the prompt.
2. Query Gemini to generate a response based on the prompt.
2. Query the Gemini API to generate a response based on the prompt.
3. Write the response from the Model API back to the triggering document in the response field.

Each instance of the extension should be configured to perform one particular task. If you have multiple tasks, you can install multiple instances.
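As a minimal sketch of this flow (all configuration values below are assumptions for illustration, not defaults defined by this extension), suppose the prompt is "What is the capital of {{ country }}?", the collection path is `generate`, the variable field is `country`, and the response field is `output`:

```ts
import * as admin from 'firebase-admin';

admin.initializeApp();

// Assumed configuration (illustrative only):
//   Prompt:          "What is the capital of {{ country }}?"
//   Collection Path: "generate"
//   Variable fields: "country"
//   Response field:  "output"
async function askCapital(country: string): Promise<string> {
  const doc = await admin.firestore().collection('generate').add({country});

  // The extension substitutes {{ country }}, calls the Gemini API, and
  // writes the result back onto the same document in the response field.
  return new Promise<string>(resolve => {
    const unsubscribe = doc.onSnapshot(snap => {
      const output = snap.get('output');
      if (typeof output === 'string') {
        unsubscribe();
        resolve(output);
      }
    });
  });
}
```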
@@ -54,13 +55,12 @@ In this case, `review_text` is a field of the Firestore document and will be su

### Choosing a generative model

When installing this extension you will be prompted to pick a genai model.
When installing this extension you will be prompted to pick a Gemini model.

For Google AI the list of supported models is [here](https://ai.google.dev/models/gemini), and this parameter should be set to the model name, the second segment of the model code (for
example models/gemini-pro should be chosen as gemini-pro).

For Vertex AI,
there is a list of models [here](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/models).
For Vertex AI, the list of models is [here](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/models).
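To make the Google AI naming convention above concrete, here is a hypothetical helper (not part of the extension) that strips the `models/` prefix from a full model code:

```ts
// Hypothetical helper: "models/gemini-pro" -> "gemini-pro".
function toModelName(modelCode: string): string {
  return modelCode.startsWith('models/')
    ? modelCode.slice('models/'.length)
    : modelCode;
}

toModelName('models/gemini-pro'); // => "gemini-pro"
```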

#### Multimodal Prompts

@@ -95,15 +95,15 @@ This extension uses other Firebase and Google Cloud Platform services, which hav

**Configuration Parameters:**

* Gemini API Provider: This extension makes use of the Gemini family of generative models. For Google AI you will require an API key, whereas Vertex AI will authenticate using application default credentials. For more information see the [docs](https://firebase.google.com/docs/admin/setup#initialize-sdk).
* Gemini API Provider: This extension makes use of the Gemini family of large language models. Currently the extension supports the Google AI Gemini API (for developers) and the Vertex AI Gemini API. Learn more about the differences between the Google AI and Vertex AI Gemini APIs here.

* Generative model: Which genai model do you want to use? For Google AI the list of supported models is [here](https://ai.google.dev/models/gemini), and this parameter should be set to the model name, the second segment of the model code (for example models/gemini-pro should be chosen as gemini-pro). For Vertex AI, there is a list of models [here](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/models), currently only the Gemini family of models listed there is supported.
* Gemini model: Input the name of the Gemini model you would like to use. To view available models for each provider, see: [Vertex AI Gemini models](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/models), [Google AI Gemini models](https://ai.google.dev/models/gemini)

* Google AI API Key: If you have selected Google AI as your provider, then this parameter is required. If you have instead selected Vertex AI, then this parameter is not required, and application default credentials will be used.

* Collection Path: Path to the Firestore collection where text will be generated.
* Firestore Collection Path: Used to store conversation history represented as documents. This extension will listen to the specified collection(s) for new message documents.

* Prompt: Prompt. Use {{ handlebars }} for variable substitution from the created or updated doc. For example if you set this parameter as "What is the capital of {{ country }}?"
* Prompt: This is the text that you want the Gemini API to generate a response for. It can be free-form text or it can use handlebars variables to substitute values from the Firestore document. For example, if you set this parameter to "What is the capital of {{ country }}?", the extension will substitute the value of the document's `country` field into the prompt.

* Variable fields: A comma separated list of fields to substitute as variables in the prompt.

2 changes: 1 addition & 1 deletion firestore-multimodal-genai/extension.yaml
@@ -1,5 +1,5 @@
name: firestore-multimodal-genai
version: 0.0.4
version: 0.0.5
specVersion: v1beta

icon: icon.png
@@ -306,6 +306,7 @@ describe('generateMessage', () => {
candidate_count: undefined,
max_output_tokens: undefined,
},
safety_settings: undefined,
});
});
});
4 changes: 3 additions & 1 deletion lerna.json
@@ -5,6 +5,8 @@
"firestore-palm-chatbot/functions",
"firestore-palm-gen-text/functions",
"palm-secure-backend/functions",
"firestore-multimodal-genai/functions",
"firestore-genai-chatbot/functions",
"firestore-palm-summarize-text/functions",
"speech-to-text/functions",
"storage-extract-image-text/functions",
@@ -16,4 +18,4 @@
],
"version": "0.0.0",
"npmClient": "npm"
}
}
