Releases: BerriAI/litellm

v1.56.9

04 Jan 02:46

What's Changed

  • (fix) GCS bucket logger - apply truncate_standard_logging_payload_content to standard_logging_payload and ensure GCS flushes queue on fails by @ishaan-jaff in #7519
  • (Fix) - Hashicorp secret manager - don't print hcorp secrets in debug logs by @ishaan-jaff in #7529
  • [Bug-Fix]: None metadata not handled for _PROXY_VirtualKeyModelMaxBudgetLimiter hook by @ishaan-jaff in #7523
  • Bump anthropic.claude-3-5-haiku-20241022-v1:0 to new limits by @Manouchehri in #7118
  • Fix langfuse prompt management on proxy by @krrishdholakia in #7535
  • (Feat) - Hashicorp secret manager, use TLS cert authentication by @ishaan-jaff in #7532
  • Fix OTEL message redaction + Langfuse key leak in logs by @krrishdholakia in #7516
  • feat: implement support for limit, order, before, and after parameters in get_assistants by @jeansouzak in #7537
  • Add missing prefix for deepseek by @SmartManoj in #7508
  • (fix) aiohttp_openai/ route - get to 1K RPS on single instance by @ishaan-jaff in #7539
  • Revert "feat: implement support for limit, order, before, and after parameters in get_assistants" by @krrishdholakia in #7542
  • [Feature]: - allow print alert log to console by @ishaan-jaff in #7534
  • (fix proxy perf) use _read_request_body instead of ast.literal_eval to get better performance by @ishaan-jaff in #7545
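
The last fix above swaps `ast.literal_eval` for proper request-body parsing. A minimal illustration of why `ast.literal_eval` is the wrong tool for JSON request bodies (a generic sketch, not LiteLLM's actual code):

```python
import ast
import json

# A typical JSON request body: note the JSON literals true/null.
payload = '{"model": "gpt-3.5-turbo", "stream": true, "stop": null}'

# json.loads understands JSON directly.
parsed = json.loads(payload)
assert parsed["stream"] is True

# ast.literal_eval parses *Python* literals, so JSON's true/null
# are rejected (ValueError), on top of being slower for this job.
try:
    ast.literal_eval(payload)
    raised = False
except (ValueError, SyntaxError):
    raised = True
assert raised
```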

Full Changelog: v1.56.8...v1.56.9

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.56.9
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|---|---|---|---|---|---|---|---|---|---|
| /chat/completions | Passed ✅ | 240.0 | 269.3983699320639 | 6.149252570882109 | 0.0 | 1840 | 0 | 211.95807399999467 | 2571.210135000001 |
| Aggregated | Passed ✅ | 240.0 | 269.3983699320639 | 6.149252570882109 | 0.0 | 1840 | 0 | 211.95807399999467 | 2571.210135000001 |

v1.56.8-dev2

04 Jan 00:36

What's Changed

  • (fix) GCS bucket logger - apply truncate_standard_logging_payload_content to standard_logging_payload and ensure GCS flushes queue on fails by @ishaan-jaff in #7519
  • (Fix) - Hashicorp secret manager - don't print hcorp secrets in debug logs by @ishaan-jaff in #7529
  • [Bug-Fix]: None metadata not handled for _PROXY_VirtualKeyModelMaxBudgetLimiter hook by @ishaan-jaff in #7523
  • Bump anthropic.claude-3-5-haiku-20241022-v1:0 to new limits by @Manouchehri in #7118
  • Fix langfuse prompt management on proxy by @krrishdholakia in #7535
  • (Feat) - Hashicorp secret manager, use TLS cert authentication by @ishaan-jaff in #7532
  • Fix OTEL message redaction + Langfuse key leak in logs by @krrishdholakia in #7516
  • feat: implement support for limit, order, before, and after parameters in get_assistants by @jeansouzak in #7537
  • Add missing prefix for deepseek by @SmartManoj in #7508
  • (fix) aiohttp_openai/ route - get to 1K RPS on single instance by @ishaan-jaff in #7539
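
The `get_assistants` change above adds standard cursor-pagination parameters (`limit`, `order`, `before`, `after`). A generic sketch of how such parameters slice a listing, using hypothetical data rather than LiteLLM's implementation:

```python
# Hypothetical assistant records with ids and creation timestamps.
assistants = [
    {"id": "asst_1", "created_at": 100},
    {"id": "asst_2", "created_at": 200},
    {"id": "asst_3", "created_at": 300},
]

def list_assistants(data, limit=20, order="desc", after=None, before=None):
    """Cursor pagination: sort by created_at, then cut at the cursors."""
    items = sorted(data, key=lambda a: a["created_at"],
                   reverse=(order == "desc"))
    ids = [a["id"] for a in items]
    if after is not None:                 # items strictly after the cursor
        items = items[ids.index(after) + 1:]
        ids = [a["id"] for a in items]
    if before is not None:                # items strictly before the cursor
        items = items[:ids.index(before)]
    return items[:limit]

page = list_assistants(assistants, limit=2, order="desc")
assert [a["id"] for a in page] == ["asst_3", "asst_2"]

nxt = list_assistants(assistants, limit=2, order="desc", after="asst_2")
assert [a["id"] for a in nxt] == ["asst_1"]
```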

Full Changelog: v1.56.8...v1.56.8-dev2

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.56.8-dev2
```

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|---|---|---|---|---|---|---|---|---|---|
| /chat/completions | Failed ❌ | 260.0 | 302.69986428167584 | 6.1480113905567375 | 0.0 | 1839 | 0 | 230.89517400001114 | 2985.9468520000405 |
| Aggregated | Failed ❌ | 260.0 | 302.69986428167584 | 6.1480113905567375 | 0.0 | 1839 | 0 | 230.89517400001114 | 2985.9468520000405 |

v1.56.3-stable

04 Jan 20:47

Full Changelog: v1.55.8-stable...v1.56.3-stable

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:litellm_stable_release_branch-v1.56.3-stable
```

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|---|---|---|---|---|---|---|---|---|---|
| /chat/completions | Passed ✅ | 250.0 | 285.39144223780414 | 6.0307890213828905 | 0.0033430094353563695 | 1804 | 1 | 125.146089999987 | 3186.0641239999836 |
| Aggregated | Passed ✅ | 250.0 | 285.39144223780414 | 6.0307890213828905 | 0.0033430094353563695 | 1804 | 1 | 125.146089999987 | 3186.0641239999836 |

v1.56.8-dev1

03 Jan 20:58

Full Changelog: v1.56.8...v1.56.8-dev1

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.56.8-dev1
```

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|---|---|---|---|---|---|---|---|---|---|
| /chat/completions | Passed ✅ | 250.0 | 284.69056873304487 | 6.157751312397796 | 0.0 | 1843 | 0 | 211.56842700003153 | 2410.6343400000014 |
| Aggregated | Passed ✅ | 250.0 | 284.69056873304487 | 6.157751312397796 | 0.0 | 1843 | 0 | 211.56842700003153 | 2410.6343400000014 |

v1.56.8

03 Jan 07:07

What's Changed

  • Prometheus - custom metrics support + other improvements by @krrishdholakia in #7489
  • (feat) POST /fine_tuning/jobs support passing vertex specific hyper params by @ishaan-jaff in #7490
  • (Feat) - LiteLLM Use UsernamePasswordCredential for Azure OpenAI by @ishaan-jaff in #7496
  • (docs) Add docs on load testing benchmarks by @ishaan-jaff in #7499
  • (Feat) Add support for reading secrets from Hashicorp vault by @ishaan-jaff in #7497
  • Litellm dev 12 30 2024 p2 by @krrishdholakia in #7495
  • Refactor Custom Metrics on Prometheus - allow setting k,v pairs on all metrics via config.yaml by @krrishdholakia in #7498
  • (fix) GCS bucket logger - apply truncate_standard_logging_payload_content to standard_logging_payload and ensure GCS flushes queue on fails by @ishaan-jaff in #7500
  • Litellm dev 01 01 2025 p3 by @krrishdholakia in #7503
  • Litellm dev 01 02 2025 p2 by @krrishdholakia in #7512
  • Revert "(fix) GCS bucket logger - apply truncate_standard_logging_payload_content to standard_logging_payload and ensure GCS flushes queue on fails" by @ishaan-jaff in #7515
  • (perf) use aiohttp for custom_openai by @ishaan-jaff in #7514
  • (perf) use threadpool executor - for sync logging integrations by @ishaan-jaff in #7509
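
The threadpool-executor change above applies a standard pattern: move blocking logging calls off the request path into worker threads. A generic sketch of the pattern (`log_event` and the payloads are hypothetical stand-ins, not LiteLLM internals):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def log_event(payload: dict) -> str:
    """Stand-in for a slow, blocking logging call (e.g. an HTTP POST to a log sink)."""
    time.sleep(0.05)  # simulate I/O latency
    return f"logged {payload['id']}"

payloads = [{"id": i} for i in range(8)]

# Offload the blocking calls to worker threads; map() preserves input order,
# and the eight 50 ms calls overlap instead of running back-to-back.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(log_event, payloads))

assert results == [f"logged {i}" for i in range(8)]
```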

Full Changelog: v1.56.6...v1.56.8

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.56.8
```

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|---|---|---|---|---|---|---|---|---|---|
| /chat/completions | Passed ✅ | 230.0 | 247.81903455189286 | 6.181081075067931 | 0.0 | 1850 | 0 | 191.81740900000932 | 2126.8676100000903 |
| Aggregated | Passed ✅ | 230.0 | 247.81903455189286 | 6.181081075067931 | 0.0 | 1850 | 0 | 191.81740900000932 | 2126.8676100000903 |

v1.56.6.dev1

03 Jan 02:27

What's Changed

  • Prometheus - custom metrics support + other improvements by @krrishdholakia in #7489
  • (feat) POST /fine_tuning/jobs support passing vertex specific hyper params by @ishaan-jaff in #7490
  • (Feat) - LiteLLM Use UsernamePasswordCredential for Azure OpenAI by @ishaan-jaff in #7496
  • (docs) Add docs on load testing benchmarks by @ishaan-jaff in #7499
  • (Feat) Add support for reading secrets from Hashicorp vault by @ishaan-jaff in #7497
  • Litellm dev 12 30 2024 p2 by @krrishdholakia in #7495
  • Refactor Custom Metrics on Prometheus - allow setting k,v pairs on all metrics via config.yaml by @krrishdholakia in #7498
  • (fix) GCS bucket logger - apply truncate_standard_logging_payload_content to standard_logging_payload and ensure GCS flushes queue on fails by @ishaan-jaff in #7500
  • Litellm dev 01 01 2025 p3 by @krrishdholakia in #7503

Full Changelog: v1.56.6...v1.56.6.dev1

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.56.6.dev1
```

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|---|---|---|---|---|---|---|---|---|---|
| /chat/completions | Passed ✅ | 230.0 | 255.89973974836954 | 6.151774848433542 | 0.003343355895887794 | 1840 | 1 | 94.9865199999067 | 1259.9916519999965 |
| Aggregated | Passed ✅ | 230.0 | 255.89973974836954 | 6.151774848433542 | 0.003343355895887794 | 1840 | 1 | 94.9865199999067 | 1259.9916519999965 |

v1.56.6

02 Jan 06:40

Full Changelog: v1.56.5...v1.56.6

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.56.6
```

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|---|---|---|---|---|---|---|---|---|---|
| /chat/completions | Passed ✅ | 250.0 | 287.411814751915 | 6.114731230663012 | 0.0 | 1830 | 0 | 228.32058200003758 | 3272.637599999939 |
| Aggregated | Passed ✅ | 250.0 | 287.411814751915 | 6.114731230663012 | 0.0 | 1830 | 0 | 228.32058200003758 | 3272.637599999939 |

v1.56.5

31 Dec 21:14

Full Changelog: v1.56.4...v1.56.5

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.56.5
```

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|---|---|---|---|---|---|---|---|---|---|
| /chat/completions | Passed ✅ | 230.0 | 268.0630784626629 | 6.174316845767241 | 0.0 | 1848 | 0 | 212.08500100010497 | 3189.481879000027 |
| Aggregated | Passed ✅ | 230.0 | 268.0630784626629 | 6.174316845767241 | 0.0 | 1848 | 0 | 212.08500100010497 | 3189.481879000027 |

v1.56.4

29 Dec 04:56

Full Changelog: v1.56.3...v1.56.4

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.56.4
```

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|---|---|---|---|---|---|---|---|---|---|
| /chat/completions | Passed ✅ | 240.0 | 268.74238744669225 | 6.116896356155644 | 0.0 | 1829 | 0 | 214.29422199992132 | 1969.7571099999323 |
| Aggregated | Passed ✅ | 240.0 | 268.74238744669225 | 6.116896356155644 | 0.0 | 1829 | 0 | 214.29422199992132 | 1969.7571099999323 |

v1.56.3

28 Dec 04:49

Full Changelog: v1.56.2...v1.56.3

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.56.3
```

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|---|---|---|---|---|---|---|---|---|---|
| /chat/completions | Passed ✅ | 250.0 | 276.9724297749999 | 6.148940938190872 | 0.003341815727277648 | 1840 | 1 | 112.37049800001842 | 1700.1428350000083 |
| Aggregated | Passed ✅ | 250.0 | 276.9724297749999 | 6.148940938190872 | 0.003341815727277648 | 1840 | 1 | 112.37049800001842 | 1700.1428350000083 |