
(feat): Add small model lid.176.ftz to library resources, for offline use #5

Merged: 14 commits, Sep 29, 2024

Conversation

sudoskys
Member

No description provided.

- Refactor `get_model_loaded` to `load_model` with enhanced error handling
- Rename constants for better clarity and consistency
- Simplify language detection functions and improve docstrings
- Use `Optional` typing and rename parameters for clarity
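The refactor described above can be sketched roughly as follows. This is a minimal illustration, not the library's actual API: the `ModelLoadError` exception, the `loader` parameter, and the exact signature are assumptions for the sake of the example.

```python
import logging
from pathlib import Path
from typing import Any, Callable, Optional

logger = logging.getLogger(__name__)


class ModelLoadError(Exception):
    """Raised when no model could be loaded (hypothetical error type)."""


def load_model(
    model_path: Path,
    fallback_path: Optional[Path] = None,
    loader: Optional[Callable[[str], Any]] = None,
) -> Any:
    """Try the primary model path; on failure, fall back to a local copy.

    `loader` is injectable here purely for testability; in the real
    library the loader would be `fasttext.load_model`.
    """
    if loader is None:
        import fasttext  # assumed to be installed
        loader = fasttext.load_model
    try:
        return loader(str(model_path))
    except Exception as exc:
        logger.warning("Failed to load %s: %s", model_path, exc)
        if fallback_path is not None:
            # Fall back to the bundled small model.
            return loader(str(fallback_path))
        raise ModelLoadError(f"could not load {model_path}") from exc
```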
sudoskys changed the title from "Add small model lid.176.ftz to library resources" to "(feat): Add small model lid.176.ftz to library resources, for offline use" on Sep 29, 2024
- Refactor model loading to use an enum and a cache class
- Improve error handling and logging
- Add fallback to the local small model if the download fails
- Add `USE_STRICT_MODE` setting to enable/disable strict mode in detection
- Updated README.md to reflect the new setting
- Removed redundant `USE_STRICT_MODE` declaration in `infer.py`
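An enum plus a cache class along the lines described above might look like this sketch. The member names and values are illustrative assumptions; only the model filenames (`lid.176.ftz`, `lid.176.bin`) come from the PR itself.

```python
from enum import Enum


class ModelType(Enum):
    """Which fastText language-identification model to use."""
    LOW_MEMORY = "lid.176.ftz"   # small model bundled with the library
    HIGH_MEMORY = "lid.176.bin"  # large model, downloaded on demand


class ModelCache:
    """Keep at most one loaded model per ModelType to avoid reloading."""

    def __init__(self) -> None:
        self._cache: dict[ModelType, object] = {}

    def get(self, key: ModelType):
        """Return the cached model, or None if not loaded yet."""
        return self._cache.get(key)

    def put(self, key: ModelType, model) -> None:
        self._cache[key] = model

    def clear(self) -> None:
        """Drop all cached models so they can be garbage-collected."""
        self._cache.clear()
```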
- Updated README with new `use_strict_mode` parameter examples
- Removed unused imports and settings in `infer.py`
- Implemented `use_strict_mode` in model loading functions
- Added `NOTICE.MD` with license information
- Adjusted parameter alignment in `load_model` definition
- Refactored `load_large_model` logic for clarity
- Enhanced fallback logic for strict and non-strict modes
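The strict/non-strict fallback behavior those commits describe can be condensed into one small function. This is a sketch under stated assumptions: the helper name `load_with_fallback` and the injectable loader callables are hypothetical, introduced only to show the control flow behind `use_strict_mode`.

```python
from typing import Any, Callable


def load_with_fallback(
    load_large: Callable[[], Any],
    load_small: Callable[[], Any],
    use_strict_mode: bool = False,
) -> Any:
    """Load the large model, falling back to the small one unless strict.

    Strict mode: any failure loading the large model is re-raised.
    Non-strict mode: failures trigger a fallback to the offline small model.
    """
    try:
        return load_large()
    except Exception:
        if use_strict_mode:
            raise  # surface the original download/load error
        return load_small()
```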
sudoskys merged commit 5728ba9 into main on Sep 29, 2024
4 checks passed
sudoskys (Member, Author) commented on Sep 29, 2024

Sorry, I forgot to provide a manual gc function for model_cache. The next update will come from the dev branch.

Successfully merging this pull request may close these issues:

- Error loading model /tmp/fasttext-langdetect/lid.176.ftz: vector::_M_default_append