Problem Statement:
At present, Wordflow lacks the capability to use self-hosted Large Language Models (LLMs), limiting users' flexibility in choosing or swapping models according to their specific needs or preferences.
Proposed Solution:
Integrating support for self-hosted LLMs would greatly enhance Wordflow's functionality and utility. It would let users draw on a diverse range of models, adapt to evolving requirements, and potentially improve results by selecting models tailored to specific prompt tasks.
Benefits:
Flexibility: Users can seamlessly switch between different models or select a specific model based on task requirements.
Customization: Allows users to fine-tune their experience by integrating custom or specialized models.
Scalability: Accommodates the rapid influx of new models, ensuring users can readily integrate them into their workflow.
Performance Optimization: Users can choose models optimized for specific tasks, potentially improving overall performance and accuracy.
An easy win would be to integrate with something like ollama, which exposes a standard API for a whole range of different models: https://github.com/ollama/ollama
Good idea!! I will add this feature when I have time. PRs are welcome! We can add a new option in the model drop-down menu and allow users to enter a URL (e.g., localhost:8080) to call a local/self-hosted ollama instance.
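For anyone picking this up, here is a minimal sketch of what the call from Wordflow to a user-supplied ollama URL could look like. It uses ollama's documented POST /api/generate endpoint with stream: false. The base URL and model name are assumptions (ollama listens on http://localhost:11434 by default, and "llama2" stands in for whatever model the user has pulled); the function name generateWithOllama is hypothetical and not part of Wordflow's codebase.

```ts
// Shape of ollama's non-streaming /api/generate response (relevant fields only).
interface OllamaGenerateResponse {
  model: string;
  response: string;
  done: boolean;
}

// Hypothetical helper: send a prompt to a self-hosted ollama server.
// baseUrl would come from the new URL field in the model drop-down menu.
async function generateWithOllama(
  baseUrl: string,
  model: string,
  prompt: string
): Promise<string> {
  // With stream: false, ollama returns the whole completion as one JSON object
  // instead of a stream of newline-delimited chunks.
  const res = await fetch(`${baseUrl}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt, stream: false }),
  });
  if (!res.ok) {
    throw new Error(`ollama request failed: ${res.status} ${res.statusText}`);
  }
  const data = (await res.json()) as OllamaGenerateResponse;
  return data.response;
}

// Example usage (assumes the ollama default port and a locally pulled model):
// const text = await generateWithOllama(
//   "http://localhost:11434",
//   "llama2",
//   "Improve this sentence: ..."
// );
```

Since the endpoint takes the model name in the request body, the same URL field could also back a model picker populated from whatever the user's ollama instance has installed.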