Comparing changes
base repository: appl-team/appl, base: v0.1.2
head repository: appl-team/appl, compare: v0.2.0
- 6 commits
- 121 files changed
- 1 contributor
Commits on Oct 20, 2024
[v0.1.3] Better support for structured output and image prompts; add more usage examples.
- Add support for `response_format` (structured output) from OpenAI's API, with type annotation support.
- `import appl` is no longer needed explicitly to use APPL functions.
- The `Tagged` compositor now defaults to no inner indent (was 4 spaces).
- Support Pillow's `Image` as part of a prompt.
- Add a Streamlit example; improve and add more "chat with codes" examples.
- Log token usage by default.
- Server names that are not configured fall back to litellm's interface.
- Support explicitly appending prompts with `grow`.
Commit 2ec04e9
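The structured-output flow in this release (constrain the model to emit JSON, then decode it into a typed object) can be sketched with the stdlib alone. This is a minimal illustration of the idea, not APPL's actual API; `Answer`, `raw_reply`, and `parse_structured` are hypothetical names.

```python
import json
from dataclasses import dataclass

@dataclass
class Answer:
    title: str
    score: int

# A hypothetical JSON reply, as a model constrained by a
# structured-output schema would produce it.
raw_reply = '{"title": "APPL", "score": 9}'

def parse_structured(raw: str) -> Answer:
    # Decode the JSON text and coerce fields to the annotated types.
    data = json.loads(raw)
    return Answer(title=str(data["title"]), score=int(data["score"]))

answer = parse_structured(raw_reply)
print(answer.title, answer.score)
```

In APPL this machinery is driven by the type annotation on the generation result rather than a hand-written parser.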
Commits on Nov 22, 2024
[v0.1.4] Use Lunary to visualize traces, introduce `@Traceable`, auto-continue cut-off messages.
- Support using [Lunary](https://lunary.ai/) to display both function call trees (APPL functions and `@Traceable` functions) and LLM calls.
- Support auto continuation of incomplete LLM generations (thanks @noahshinn): continue the generation by repeating the last line and concatenating by overlap.
- Add a global executor pool, which can be used to limit the number of parallel LLM calls.
- Streaming display is configurable: `rich.live` or plain print.
- Make `instructor` an optional dependency.
- Some code refactoring.
Commit a205095
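The continuation strategy above — re-generate starting from the repeated last line, then join on the overlap — reduces to a string merge. A minimal sketch (not APPL's implementation):

```python
def merge_continuation(prefix: str, continuation: str) -> str:
    """Join a cut-off generation with its continuation by finding the
    longest suffix of `prefix` that the continuation repeats, so the
    repeated last line is not duplicated in the merged output."""
    for size in range(min(len(prefix), len(continuation)), 0, -1):
        if prefix.endswith(continuation[:size]):
            return prefix + continuation[size:]
    # No overlap found: fall back to plain concatenation.
    return prefix + continuation

# The first call was cut off mid-line; the second call was prompted
# to restart from that incomplete last line.
first = "line one\nline two\nline thr"
second = "line three\nline four"
print(merge_continuation(first, second))
```

The merged result contains "line three" exactly once, even though both pieces include the fragment "line thr".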
Commits on Dec 5, 2024
[v0.1.5] Add LLM caching, audio support, tree-of-thoughts and virtual tool examples, ...
- Add a persistent database for caching LLM calls.
- Support `Audio` as part of the prompt.
- Support using `gen` outside a `ppl` function, with usage similar to litellm's `completion`.
- Add an example reimplementing tree-of-thoughts with parallelization (6x speedup).
- Add an example that emulates (simplified) tools using LLMs and function docstrings.
- Allow using a schema dict to specify the tools available to LLMs.
- Allow specifying the docstring as a `SystemMessage` in the `ppl`-decorated function.
- Simplify the example for defining concepts in prompts.
- Add tests for caching and for `gen` outside a `ppl` function.
- Some reorganization of imports.
Commit c9cb6ba
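A persistent LLM-call cache of the kind described above can be sketched as a small key-value store keyed by a hash of the request arguments. This is an illustrative stand-in using stdlib `sqlite3`, not APPL's actual cache; the class and its methods are hypothetical.

```python
import hashlib
import json
import sqlite3

class LLMCache:
    """Minimal persistent cache: key = hash of the call arguments."""

    def __init__(self, path: str = ":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, value TEXT)"
        )

    def _key(self, **kwargs) -> str:
        # Canonicalize the arguments so equivalent calls hash identically.
        return hashlib.sha256(
            json.dumps(kwargs, sort_keys=True).encode()
        ).hexdigest()

    def get(self, **kwargs):
        row = self.conn.execute(
            "SELECT value FROM cache WHERE key = ?", (self._key(**kwargs),)
        ).fetchone()
        return row[0] if row else None

    def put(self, value: str, **kwargs):
        self.conn.execute(
            "INSERT OR REPLACE INTO cache VALUES (?, ?)",
            (self._key(**kwargs), value),
        )
        self.conn.commit()

cache = LLMCache()
cache.put("Paris", model="gpt-4o", prompt="Capital of France?")
print(cache.get(model="gpt-4o", prompt="Capital of France?"))
```

With a file path instead of `:memory:`, cached responses survive across runs, which is what makes the cache useful for repeated LLM calls.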
Commits on Dec 13, 2024
-
Integrates langfuse for observability, store source code and git info…
… to display in metadata
Commit 6b42395
Commits on Dec 16, 2024
[v0.2.0 alpha] Better initialization, better configuration, better tracing; supports Langfuse.
- Auto initialization: no longer need to call `appl.init()`; configurations can be further updated via `appl.init(kwargs)` and the command line (see the next point).
- Better configuration system:
  - Use a pydantic model to constrain both configs and global_vars, for better type checking.
  - Use `jsonargparse` to support the command line; see the [cmd args example](examples/usage/cmd_args.py).
- Support using Langfuse to visualize the trace; see the [tracing example](examples/usage/tracing.py).
  - Store metadata in the trace to be observed in Langfuse, including git info, command line, etc.
  - Store the code of functions marked with `ppl` and `traceable` in the trace, viewable in Langfuse (a native code view to be supported).
  - Add a `print_trace` function that can be called at the end to send the trace to Langfuse.
  - Set up the `appltrace` command to print the trace in supported formats, including Langfuse (recommended), Lunary, plain HTML, Chrome tracing, etc.
- Misc:
  - Change the default server to None; you have to set up your own default server via APPL config files (like `appl.yaml`), the command line, or `appl.init(servers=...)`.
  - Change default settings: enable logging to file; disable logging LLM call args and usage (logging responses remains True).
  - Rename the `ppl` argument `comp` to `compositor` for better readability.
  - Some code refactoring and minor bug fixes.
Commit 537e4db
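The layered configuration described above (defaults, then config files, then the command line, then `appl.init(...)` kwargs) amounts to a precedence merge over nested settings. A minimal sketch of that merge with hypothetical keys, not APPL's actual schema:

```python
from typing import Any

def merge_configs(*layers: dict[str, Any]) -> dict[str, Any]:
    """Merge config layers; later layers override earlier ones,
    recursing into nested dicts instead of replacing them wholesale."""
    merged: dict[str, Any] = {}
    for layer in layers:
        for key, value in layer.items():
            if isinstance(value, dict) and isinstance(merged.get(key), dict):
                merged[key] = merge_configs(merged[key], value)
            else:
                merged[key] = value
    return merged

# Hypothetical layers, lowest precedence first.
defaults = {"logging": {"to_file": True, "llm_usage": False}}
file_cfg = {"servers": {"default": "gpt-4o-mini"}}
cli_args = {"logging": {"llm_usage": True}}

config = merge_configs(defaults, file_cfg, cli_args)
print(config)
```

The command-line layer flips `logging.llm_usage` without clobbering `logging.to_file` from the defaults, which is the behavior a nested-merge configuration system needs.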
Commits on Dec 17, 2024
[v0.2.0] Update README and docs, add cursor rules.
- Update README and docs; add images illustrating Langfuse usage.
- Change the default streaming display from `live` to `print` with grey color.
- Add special handling for Claude models when specifying `response_format` as a Pydantic model.
Commit 4fbdab8