A CRUD application built on the MERN stack, enhanced with LLM interfaces and ML capabilities.
Explore the docs »

View Demo · Report Bug · Request Feature
## Table of Contents

- About The Project
- Getting Started
- Usage
- Features
- Voice Command UI Demonstration
- NLU Implementation
- Roadmap
- Contributing
- License
- Contact
## About The Project

Mellifera is designed to be a beekeeper's companion, enabling data-driven decision-making for beekeepers worldwide. It leverages natural language processing to update an extensive, extensible database with data about honeybee hives.
## Getting Started

To get a local copy up and running, follow these simple steps.

### Prerequisites
- Node.js (latest LTS version)
- npm (comes with Node.js)
- MongoDB
- Docker and Docker Compose (for Docker installation)
- Kubernetes cluster (for Kubernetes deployment)
### Installation

- Clone the repo
  ```sh
  git clone https://github.com/jstiltner/Mellifera-app.git
  ```
- Navigate to the project directory
  ```sh
  cd Mellifera-app
  ```
- Install NPM packages
  ```sh
  npm install
  ```
- Create a `.env` file in the root directory and add your environment variables (e.g., MongoDB connection string, JWT secret); a sketch follows this list.
- Start the development server
  ```sh
  npm run dev
  ```
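For reference, a minimal `.env` might look like the following. The variable names here (`MONGODB_URI`, `JWT_SECRET`, `PORT`) are illustrative assumptions; check the server code for the names it actually reads.

```sh
# Example values only — these variable names are assumptions, not confirmed by the project.
MONGODB_URI=mongodb://localhost:27017/mellifera
JWT_SECRET=change-me-to-a-long-random-string
PORT=5000
```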
### Docker Installation

If you prefer to use Docker, follow these steps:
- Ensure you have Docker and Docker Compose installed on your system.
- Clone the repo (if you haven't already)
  ```sh
  git clone https://github.com/jstiltner/Mellifera-app.git
  ```
- Navigate to the project directory
  ```sh
  cd Mellifera-app
  ```
- Create a `secrets` directory in the project root and add the following files with appropriate values (a sketch follows this list):
  - `db_password.txt`: MongoDB database password
  - `mongo_root_username.txt`: MongoDB root username
  - `mongo_root_password.txt`: MongoDB root password
- Build and run the Docker containers for development:
  ```sh
  docker-compose -f docker-compose.dev.yml up --build
  ```
  Or for production:
  ```sh
  docker-compose -f docker-compose.prod.yml up --build
  ```
- The application will be available at `http://localhost:5000`
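As a quick sketch, the secret files can be created like this (replace the placeholder values with your own credentials):

```sh
mkdir -p secrets
# printf avoids a trailing newline, which some tooling treats as part of the secret.
printf '%s' 'my-db-password'   > secrets/db_password.txt
printf '%s' 'root'             > secrets/mongo_root_username.txt
printf '%s' 'my-root-password' > secrets/mongo_root_password.txt
```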
### Troubleshooting bcrypt Errors

We've recently updated the Dockerfile to resolve issues related to bcrypt. The changes include:
- Adding necessary build tools (python3, make, g++) to compile native modules.
- Using a specific Alpine version for consistency.
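As an illustration of what such a change looks like (assuming an Alpine-based Node image; the exact base image tag and Dockerfile lines in this repo may differ):

```dockerfile
# Pin a specific Alpine-based Node image for consistency (tag is illustrative).
FROM node:18-alpine3.19

# Build tools required to compile native modules such as bcrypt.
RUN apk add --no-cache python3 make g++

WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
```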
If you're experiencing any bcrypt-related errors, please follow these steps:
- Rebuild your Docker images. For development:
  ```sh
  docker-compose -f docker-compose.dev.yml build --no-cache
  docker-compose -f docker-compose.dev.yml up
  ```
  Or for production:
  ```sh
  docker-compose -f docker-compose.prod.yml build --no-cache
  docker-compose -f docker-compose.prod.yml up
  ```
- If using Kubernetes, update your deployment:
  ```sh
  # Build and push the updated Docker image
  docker build -t your-registry/mellifera-app:v1.0.1 -f Dockerfile.prod .
  docker push your-registry/mellifera-app:v1.0.1
  ```
- Update the `kubernetes/deployment.yaml` file to use the new image tag:
  ```yaml
  spec:
    containers:
      - name: mellifera-app
        image: your-registry/mellifera-app:v1.0.1
  ```
- Apply the Kubernetes configurations:
  ```sh
  kubectl apply -f kubernetes/configmap.yaml
  kubectl apply -f kubernetes/deployment.yaml
  kubectl apply -f kubernetes/service.yaml
  ```
- To check the status of your deployment:
  ```sh
  kubectl get deployments
  kubectl get pods
  kubectl get services
  ```
- To access the application, use the external IP provided by the LoadBalancer service:
  ```sh
  kubectl get services mellifera-app-service
  ```
  Use the EXTERNAL-IP to access the application in your browser.
- To update the deployment after making changes:
  ```sh
  kubectl apply -f kubernetes/deployment.yaml
  ```
- To scale the deployment:
  ```sh
  kubectl scale deployment mellifera-app --replicas=5
  ```
- To view logs:
  ```sh
  kubectl logs deployment/mellifera-app
  ```
- To delete the deployment:
  ```sh
  kubectl delete -f kubernetes/
  ```
Remember to update the ConfigMap (`kubernetes/configmap.yaml`) with the appropriate environment variables for your deployment; a sketch follows below.
Note: After making changes to the Dockerfile or application code, always rebuild the Docker image, push it to your registry, update the image tag in the `deployment.yaml` file, and then apply the changes to your Kubernetes cluster.
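For illustration, a minimal `kubernetes/configmap.yaml` might look like the following. The key names mirror the `.env` sketch above and are assumptions, not confirmed names from the repo:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mellifera-app-config   # name is hypothetical
data:
  MONGODB_URI: "mongodb://mongo:27017/mellifera"
  NODE_ENV: "production"
  # Sensitive values such as JWT_SECRET belong in a Kubernetes Secret, not a ConfigMap.
```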
## Usage

Mellifera allows beekeepers to easily record their observations and manage their hives hands-free using voice commands. The app is designed to be used in the field, where traditional input methods are impractical.
## Features

Data management:

- Optimized data fetching with automatic caching and background updates
- Seamless integration with our REST API
- Improved performance and user experience with instant UI updates

Voice interface:

- Speech recognition for voice commands
- Natural Language Processing (NLP) for understanding complex instructions
- Text-to-speech feedback for a fully hands-free experience
## Voice Command UI Demonstration

Mellifera's Voice Command UI allows users to interact with the application using natural language commands. Here's a step-by-step demonstration of how to use voice commands for common tasks (a sketch of how such phrases might map to intents follows the list):
- Start Listening:
  - Click the "Start Listening" button or say "Start listening" to activate voice commands.
  - The app will respond with a prompt like "I'm listening" or "Voice commands activated."
- Select an Apiary:
  - Command: "Select apiary [name]"
  - Example: "Select apiary Sunflower Fields"
  - The app will confirm the selection: "Apiary Sunflower Fields selected."
- Create a New Hive:
  - Command: "Add hive" or "Create new hive"
  - The app will navigate to the hive creation form and confirm: "Navigating to create a new hive."
- Start an Inspection:
  - Command: "Start inspection" or "Begin hive check"
  - The app will navigate to the inspection form for the selected apiary and confirm: "Starting a new inspection for Sunflower Fields apiary."
- View Apiaries:
  - Command: "Show apiaries" or "List bee yards"
  - The app will navigate to the apiaries list and confirm: "Displaying the list of apiaries."
- Create a New Apiary:
  - Command: "Create apiary" or "Add new bee yard"
  - The app will navigate to the apiary creation form and confirm: "Navigating to create a new apiary."
- Return to Dashboard:
  - Command: "Go to dashboard" or "Show main screen"
  - The app will navigate to the main dashboard and confirm: "Returning to the main dashboard."
- Get Help:
  - Command: "Show help" or "What can you do?"
  - The app will display or speak a list of available commands.
- Stop Listening:
  - Click the "Stop Listening" button or say "Stop listening" to deactivate voice commands.
  - The app will confirm: "Voice commands paused. Click Start Listening when you need me again."
## NLU Implementation

Mellifera uses Natural Language Understanding (NLU) to process voice commands and convert them into actionable instructions. Here's an overview of how the NLU is implemented:
- Speech Recognition: The app uses the Web Speech API (via the `useSpeechRecognition` hook) to convert spoken words into text.
- Command Processing: The transcribed text is sent to a backend service (accessed through the `useVoiceCommand` hook) that uses advanced NLP techniques to understand the user's intent.
- Context-Aware Processing: The NLU system takes the current context (e.g., selected apiary, current page) into account to improve command interpretation accuracy.
- Action Determination: Based on the understood intent and context, the system determines the appropriate action (e.g., navigate to a page, create a new entity, or provide information).
- Feedback Loop: The app provides audio feedback using text-to-speech, confirming the understood command and the action taken.
- Error Handling: If a command is not understood or cannot be executed, the system provides helpful error messages and suggestions.
- Continuous Learning: The NLU system is designed to improve over time, learning from user interactions to better understand various phrasings and dialects.
This implementation allows for a flexible and user-friendly voice interface, enabling beekeepers to interact with the app naturally, even with hands full or while wearing protective gear.
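To make the flow concrete, here is a minimal sketch of how these pieces might be wired together in a React component. Only the hook names `useSpeechRecognition` and `useVoiceCommand` come from the project; the hook signatures, import paths, and response shape are assumptions:

```jsx
import { useEffect } from 'react';
// Import paths are assumptions for illustration.
import { useSpeechRecognition } from '../hooks/useSpeechRecognition';
import { useVoiceCommand } from '../hooks/useVoiceCommand';

function VoiceControl({ context }) {
  // Hypothetical hook shapes — the real signatures may differ.
  const { transcript, isListening, startListening, stopListening } = useSpeechRecognition();
  const { processCommand } = useVoiceCommand();

  useEffect(() => {
    if (!transcript) return;
    // Send the transcript plus the current context (e.g., selected apiary)
    // to the backend NLU service, then speak the returned feedback aloud.
    processCommand(transcript, context).then(({ feedback }) => {
      window.speechSynthesis.speak(new SpeechSynthesisUtterance(feedback));
    });
  }, [transcript, context, processCommand]);

  return (
    <button onClick={isListening ? stopListening : startListening}>
      {isListening ? 'Stop Listening' : 'Start Listening'}
    </button>
  );
}

export default VoiceControl;
```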
## Roadmap

- MVP
  - Web UI
  - DB
  - Basic reporting functionality
- "Hive Inspection Companion"
  - LLM Integration
  - NLP for inputs
  - Audio output
- ML
  - Build a self-improving model with data collected over time by the userbase
- Mobile app development
- Offline mode with data synchronization
See the open issues for a full list of proposed features (and known issues).
## Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
## License

This project is proprietary software. All rights reserved. See `LICENSE.txt` for more information.
## Contact

Jason L Stiltner - @jasonlstiltner - [email protected]

Project Link: https://github.com/jstiltner/Mellifera-app