
Comparing changes

base repository: appl-team/appl
base: v0.1.2
head repository: appl-team/appl
compare: v0.2.0
  • 6 commits
  • 121 files changed
  • 1 contributor

Commits on Oct 20, 2024

  1. [v0.1.3] Better support for structured output and image prompts, add more usage examples.
    
    - Add support for `response_format` (structured output) from OpenAI's API; add type annotation support (see the sketch after this commit entry)
    - No longer need to `import appl` explicitly to use APPL functions
    - `Tagged` compositor defaults to no indent inside (was 4 spaces)
    - Support Pillow's `Image` as part of the prompt
    - Add a Streamlit example; improve and add more "chat with codes" examples
    - Log token usage by default
    - Server names that are not configured fall back to litellm's interface
    - Support explicitly appending to prompts with `grow`
    dhh1995 committed Oct 20, 2024 · 2ec04e9
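
A minimal sketch of the new structured-output usage, based on the `response_format` example added in `.cursorrules` later in this diff (the model and function names here are illustrative):

```python
from pydantic import BaseModel

from appl import gen, grow, ppl


class Answer(BaseModel):
    value: int


@ppl
def solve(question: str):
    grow(question)  # append the question to the captured prompt
    # with response_format set to a pydantic model, the parsed object is
    # available via `response_obj`
    return gen(response_format=Answer).response_obj


print(solve("1 + 2 = ?"))  # expected: value=3
```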

Commits on Nov 22, 2024

  1. [v0.1.4] Use Lunary to visualize traces, introduce @Traceable, auto-continuation for cut-off messages
    
    - Support using [lunary](https://lunary.ai/) to display both function call trees (APPL functions and @Traceable functions) and LLM calls
    - Support auto-continuation of incomplete LLM generations (thanks @noahshinn)
      - Continue the generation by repeating the last line and concatenating via the overlap (see the sketch after this commit entry)
    - Add a global executor pool that can be used to limit the number of parallel LLM calls
    - Configurable streaming display via rich.live or plain print
    - Make instructor an optional dependency
    - Some code refactoring
    dhh1995 committed Nov 22, 2024 · a205095
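
The overlap-based concatenation for auto-continuation is roughly the following idea; a standalone sketch, not APPL's actual implementation:

```python
def merge_continuation(first: str, continuation: str) -> str:
    """Merge a cut-off generation with its continuation.

    Assumes the continuation was asked to repeat the last line of the first
    part, so the two pieces are joined by overlapping that repeated line.
    """
    last_line = first.rstrip("\n").splitlines()[-1]
    idx = continuation.find(last_line)
    if idx == -1:
        return first + continuation  # no overlap found; plain concatenation
    return first[: first.rfind(last_line)] + continuation[idx:]


print(merge_continuation("line 1\nline 2 is cut o", "line 2 is cut off\nline 3"))
# line 1
# line 2 is cut off
# line 3
```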

Commits on Dec 5, 2024

  1. [v0.1.5] Add LLM caching, audio support, tree-of-thoughts and virtual tool examples, ...
    
    - Add a persistent database for caching LLM calls
    - Support audio as part of the prompt
    - Support using `gen` outside a `ppl` function, with usage similar to litellm's `completion` (see the sketch after this commit entry)
    - Add an example reimplementing tree-of-thoughts with parallelization (6x speedup)
    - Add a (simplified) example of emulating tools using LLMs and function docstrings
    - Allow using a schema dict to specify the available tools for LLMs
    - Allow specifying the docstring as a `SystemMessage` in the `ppl` decorator
    - Simplify the example for defining concepts in prompts
    - Add tests for caching and for `gen` outside a `ppl` function
    - Some import reorganization
    dhh1995 committed Dec 5, 2024 · c9cb6ba
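
A hedged sketch of calling `gen` outside a `ppl` function; the argument names below mirror litellm's `completion` per the note above and are assumptions rather than APPL's confirmed signature:

```python
from appl import gen

# outside a `ppl` function there is no captured prompt, so the messages are
# passed explicitly (assumed litellm-style arguments)
response = gen(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello in one word."}],
)
print(str(response))  # force the future and print the text result
```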

Commits on Dec 13, 2024

  1. Integrate Langfuse for observability; store source code and git info to display in metadata
    dhh1995 committed Dec 13, 2024 · 6b42395
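
Collecting the git info mentioned above can be done with plain `git` subprocess calls; a generic illustration of the idea, not APPL's actual implementation:

```python
import subprocess


def git_info() -> dict:
    """Collect basic git metadata, e.g. for attaching to a trace."""

    def run(*args: str) -> str:
        return subprocess.check_output(["git", *args], text=True).strip()

    return {
        "commit": run("rev-parse", "HEAD"),
        "branch": run("rev-parse", "--abbrev-ref", "HEAD"),
        "dirty": bool(run("status", "--porcelain")),  # uncommitted changes present?
    }


print(git_info())
```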

Commits on Dec 16, 2024

  1. [v0.2.0 alpha] Better initialization, better configuration, better tracing, and Langfuse support
    
    - Auto initialization: no longer need to call `appl.init()`
      - Configurations can be further updated via `appl.init(kwargs)` and the command line (see the next point).
    - Better configuration system
      - Use pydantic models to constrain both configs and global_vars for better type checking (see the sketch after this commit entry)
      - Use `jsonargparse` to support command-line arguments, see the [cmd args example](examples/usage/cmd_args.py)
    - Support using Langfuse to visualize the trace, see the [tracing example](examples/usage/tracing.py)
      - Store metadata in the trace to be observed in Langfuse, including git info, command line, etc.
      - Store the code of functions marked with `ppl` and `traceable` in the trace; it can be viewed in Langfuse (a native code view is to be supported).
      - Add a `print_trace` function that can be called at the end to send the trace to Langfuse
      - Set up the `appltrace` command to export the trace to supported formats, including Langfuse (recommended), Lunary, plain HTML, Chrome tracing, etc.
    - Misc
      - Change the default server to None; you have to set up your own default server via APPL config files (like `appl.yaml`), the command line, or `appl.init(servers=...)`
      - Change default settings: enable logging to file, disable logging of LLM call args and usage (logging the response remains True)
      - Rename the `ppl` argument `comp` to `compositor` for better readability
      - Some code refactoring and minor bug fixes
    dhh1995 committed Dec 16, 2024 · 537e4db
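
The pydantic-based configuration mentioned above follows the usual validate-then-use pattern; a generic sketch of that pattern (the class and field names are illustrative, not APPL's actual config schema):

```python
from pydantic import BaseModel, ValidationError


class ServerConfig(BaseModel):
    model: str
    temperature: float = 0.0


class AppConfig(BaseModel):
    default_server: ServerConfig
    log_to_file: bool = True


try:
    cfg = AppConfig.model_validate(
        {"default_server": {"model": "gpt-4o-mini", "temperature": "0.2"}}
    )
    print(cfg.default_server.temperature)  # "0.2" coerced to float 0.2
except ValidationError as err:
    print(err)  # typos and wrong types are caught at load time
```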

Commits on Dec 17, 2024

  1. [v0.2.0] Update README and docs, add cursor rules.

    - Update README and docs, add images to illustrate Langfuse usage.
    - Change the default streaming display from `live` to `print` with grey color.
    - Add special handling for Claude models when specifying `response_format` as a Pydantic model.
    dhh1995 committed Dec 17, 2024 · 4fbdab8
Showing with 6,966 additions and 1,332 deletions.
  1. +237 −0 .cursorrules
  2. +2 −0 .gitignore
  3. +88 −14 README.md
  4. BIN docs/_assets/tracing/langfuse_convo.png
  5. BIN docs/_assets/tracing/langfuse_timeline.png
  6. BIN docs/_assets/tracing/lunary.png
  7. +4 −0 docs/blogs/index.md
  8. +2 −2 docs/setup.md
  9. +0 −3 docs/tutorials/1_get_started.md
  10. +1 −7 docs/tutorials/2_qa_example.md
  11. +27 −11 docs/tutorials/3_appl_function.md
  12. +1 −1 docs/tutorials/4_concurrent.md
  13. +0 −3 docs/tutorials/5_tool_calls.md
  14. +12 −26 docs/tutorials/6_prompt_coding.md
  15. +112 −17 docs/tutorials/7_tracing.md
  16. +0 −9 docs/tutorials/appendix/prompt_capture.md
  17. +0 −32 docs/tutorials/usage/instructor.md
  18. +11 −11 docs/tutorials/usage/servers.md
  19. +71 −0 docs/tutorials/usage/structured.md
  20. +61 −25 examples/advanced/chat_with_codes.py
  21. +76 −0 examples/advanced/chat_with_references.py
  22. +12 −7 examples/advanced/long_prompt.py
  23. +19 −14 examples/advanced/multi_agent_chat/option1_resume.py
  24. +1 −4 examples/advanced/multi_agent_chat/option2_history.py
  25. +1 −4 examples/advanced/multi_agent_chat/option3_generator.py
  26. +1 −4 examples/advanced/multi_agent_chat/option4_samectx.py
  27. +2 −4 examples/advanced/react_hanoi.py
  28. +10 −3 examples/advanced/repeat_until_valid.py
  29. +1,363 −0 examples/advanced/tree_of_thoughts/24.csv
  30. +17 −0 examples/advanced/tree_of_thoughts/README.md
  31. +5 −0 examples/advanced/tree_of_thoughts/appl.yaml
  32. BIN examples/advanced/tree_of_thoughts/time_comparison.png
  33. +372 −0 examples/advanced/tree_of_thoughts/tot.py
  34. +85 −0 examples/advanced/virtual_tool.py
  35. +4 −4 examples/appl.yaml
  36. +1 −4 examples/basic/answer_questions.py
  37. +4 −7 examples/basic/chatbot.py
  38. +0 −3 examples/basic/cot_sc.py
  39. +10 −0 examples/basic/explicit_grow_prompt.py
  40. +0 −3 examples/basic/hello_world.py
  41. +33 −0 examples/basic/image_prompt.py
  42. +0 −3 examples/basic/manage_context.py
  43. BIN examples/basic/pillow-logo-dark-text.webp
  44. +12 −26 examples/basic/prompt_coding.py
  45. +0 −3 examples/basic/tool_call.py
  46. +23 −0 examples/usage/cmd_args.py
  47. +0 −4 examples/usage/multiple_servers.py
  48. +0 −3 examples/usage/process_convo.py
  49. +3 −5 examples/usage/retrieve.py
  50. +17 −9 examples/usage/streaming.py
  51. +56 −0 examples/usage/streamlit_app.py
  52. +45 −0 examples/usage/structured_output.py
  53. +41 −0 examples/usage/structured_output_streaming.py
  54. +26 −0 examples/usage/structured_output_thoughts.py
  55. +35 −0 examples/usage/tracing.py
  56. +0 −27 examples/usage/use_instructor.py
  57. +0 −34 examples/usage/use_instructor_streaming.py
  58. +7 −3 mkdocs.yml
  59. +309 −51 pdm.lock
  60. +13 −2 pyproject.toml
  61. +3 −2 scripts/gen_ref_nav.py
  62. +46 −142 src/appl/__init__.py
  63. +2 −0 src/appl/caching/__init__.py
  64. +232 −0 src/appl/caching/db.py
  65. +37 −0 src/appl/caching/utils.py
  66. 0 src/appl/cli/__init__.py
  67. +65 −9 src/appl/cli/vis_trace.py
  68. +10 −10 src/appl/compositor.py
  69. +6 −5 src/appl/core/__init__.py
  70. +5 −1 src/appl/core/compile.py
  71. +236 −43 src/appl/core/config.py
  72. +8 −7 src/appl/core/context.py
  73. +10 −10 src/appl/core/function.py
  74. +243 −40 src/appl/core/generation.py
  75. +112 −31 src/appl/core/globals.py
  76. +1 −2 src/appl/core/io.py
  77. +105 −19 src/appl/core/message.py
  78. +5 −2 src/appl/core/modifiers.py
  79. +30 −0 src/appl/core/patch.py
  80. +10 −8 src/appl/core/printer.py
  81. +4 −1 src/appl/core/promptable/base.py
  82. +4 −2 src/appl/core/promptable/definition.py
  83. +2 −2 src/appl/core/promptable/formatter.py
  84. +184 −32 src/appl/core/response.py
  85. +27 −12 src/appl/core/runtime.py
  86. +7 −54 src/appl/core/server.py
  87. +104 −40 src/appl/core/tool.py
  88. +110 −145 src/appl/core/trace.py
  89. +2 −1 src/appl/core/types/__init__.py
  90. +28 −0 src/appl/core/types/caching.py
  91. +171 −25 src/appl/core/types/content.py
  92. +29 −0 src/appl/core/types/custom.py
  93. +0 −14 src/appl/core/types/deps.py
  94. +90 −0 src/appl/core/types/executor.py
  95. +38 −20 src/appl/core/types/futures.py
  96. +27 −15 src/appl/core/types/role.py
  97. +154 −0 src/appl/core/types/trace.py
  98. +134 −0 src/appl/core/utils.py
  99. +45 −13 src/appl/default_configs.yaml
  100. +175 −90 src/appl/func.py
  101. +10 −7 src/appl/role_changer.py
  102. +67 −58 src/appl/servers/api.py
  103. +31 −10 src/appl/servers/manager.py
  104. +299 −0 src/appl/settings.py
  105. +8 −1 src/appl/tracing/__init__.py
  106. +47 −17 src/appl/tracing/engine.py
  107. +322 −10 src/appl/tracing/printer.py
  108. +4 −2 src/appl/types.py
  109. +59 −4 src/appl/utils.py
  110. +6 −0 src/appl/version.py
  111. +165 −0 tests/test_caching.py
  112. +13 −0 tests/test_config.py
  113. +0 −1 tests/test_func.py
  114. +109 −5 tests/test_gen.py
  115. +0 −1 tests/test_message.py
  116. +17 −0 tests/test_misc.py
  117. +53 −7 tests/test_openai.py
  118. +2 −2 tests/test_parallel.py
  119. +17 −8 tests/test_prompt.py
  120. +3 −6 tests/test_prompt_helpers.py
  121. +1 −4 tests/test_tool.py
237 changes: 237 additions & 0 deletions .cursorrules
@@ -0,0 +1,237 @@
<general_rules>
Best Practices for writing code:
* DRY and SOLID Principles: Keep the code modular, maintainable, and efficient.
* Readability and Maintainability: Keep the code concise and readable, add comments to explain complex logic.
* Declarative and Functional: Prefer declarative configurations, functional paradigms, and immutability to ensure predictable behavior.
* Error Handling and Security: Ensure the code is secure with proper error handling mechanisms.
</general_rules>

<working_with_llms>
When calling Large Language Models (LLMs), you should use the package `appl` to build the application.

When working with LLMs, you should give clear format instructions to guide the LLMs to generate the desired output, and use an appropriate parser to parse the output into the desired format. The format should be designed to be clear and easy to parse; consider using formats like markdown code blocks.

You should add `stream=True` for tasks that require LLMs to generate a large amount of text.
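
A minimal sketch of this pattern using APPL's `grow`/`gen` (the fenced-block format and the parsing below are illustrative):

```python
import re

from appl import gen, grow, ppl


@ppl
def summarize(article: str) -> str:
    grow(article)
    grow("Summarize the article above in 3 bullet points.")
    grow("Wrap the summary in a markdown code block (```text ... ```).")
    response = str(gen(stream=True))  # stream since the output may be long
    match = re.search(r"```text\n(.*?)```", response, re.DOTALL)
    return match.group(1).strip() if match else response
```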
</working_with_llms>

<explain_appl>
APPL is a package that integrates LLM prompts into the code.
- Grow your prompt by calling `grow()`; an implicit newline is added between components. When asked to be implicit, you can remove the `grow()` call and leave the content that was inside `grow` as it is; APPL will automatically add the `grow()` call for you at runtime.
- The docstring of the `@ppl` function is not counted as part of the prompt by default. If it is meant to be the system prompt, you can specify that using `@ppl(docstring_as="system")`.
- The `gen` function is a wrapper around `litellm.completion`; it returns a future object and automatically takes the prompt captured so far as the prompt for the LLM. See the example below for more details. Note that you do not need to wrap `gen` in an `AIRole()` scope to call it for generation.
- You can use `with role:` to specify the role of the message, for example `with AIRole():` to mark the prompt grown in that scope as the assistant message. The default scope is `user`.
- To get the result of `gen` immediately, use `str()` to convert it to a string. Otherwise, it is a `Generation` object whose `result` attribute holds the result.
- Delay retrieving the result of `gen` as long as possible, so that the code can be better parallelized.
- Avoid multi-line strings in `@ppl` functions when possible. When needed, write them with indentation aligned with the code; they will be dedented like docstrings before being used.

<example>

```python
from appl import AIRole, gen, grow, ppl
from appl.const import NEWLINE


@ppl(ctx="copy")  # copy the context (prompt) from the caller, so the prompts in different runs are independent
def get_answer(question: str):
    grow(question)  # grow the prompt by appending the question
    # no `with AIRole()` scope needed here; `gen` is not bound to any role
    return gen()  # run LLM generation with the current prompt, returned as a future object


@ppl  # marks an APPL function
def answer_questions(quotation: str, questions: list[str]):
    grow("Extract the name of the author from the quotation below and answer questions.")
    grow(quotation)  # append to the prompt
    with AIRole():  # the prompt inside this scope is used as the assistant message
        grow("The name of the author is")  # specify the prefix
        response = gen(stop=".")  # each stop sequence must contain non-whitespace; it cannot be '\n' only
        grow(response)  # append the response to the prompt
    return [get_answer(q) for q in questions]  # parallelize calls; the result is a list of futures


quotation = '"Simplicity is the ultimate sophistication." -- Leonardo da Vinci'
questions = [
    "In what era did the author live?",
    "What is the most famous painting of the author?",
]
for ans in answer_questions(quotation, questions):
    print(ans)  # print the result of the future
```

The prompt and output for the three `gen` calls will look like:

Prompt:
```yaml
- User:
    Extract the name of the author from the quotation below and answer questions.
    "Simplicity is the ultimate sophistication." -- Leonardo da Vinci
- Assistant:
    The name of the author is
```
Output: Leonardo da Vinci.

Prompt:
```yaml
- User:
    Extract the name of the author from the quotation below and answer questions.
    "Simplicity is the ultimate sophistication." -- Leonardo da Vinci
- Assistant:
    The name of the author is Leonardo da Vinci.
- User:
    In what era did the author live?
```
Output: Renaissance era.

Prompt:
```yaml
- User:
    Extract the name of the author from the quotation below and answer questions.
    "Simplicity is the ultimate sophistication." -- Leonardo da Vinci
- Assistant:
    The name of the author is Leonardo da Vinci.
- User:
    What is the most famous painting of the author?
```
Output: Mona Lisa.

</example>

<example>
You are encouraged to use `response_format` to specify the format of the response as a pydantic model.

```python
from pydantic import BaseModel

from appl import gen, grow, ppl


class Response(BaseModel):
    answer: int
    # Note: dict-typed fields are not supported yet by openai, but can be used for anthropic models.


@ppl
def get_answer(question: str):
    grow(question)
    # use `response_obj` to get the result when response_format is a pydantic model
    return gen(response_format=Response).response_obj


print(get_answer("1+1=?"))
```

The result will be: `answer=2`
</example>
<example>
You can use `records()` to return the prompt captured so far in this function. This can be useful to modularize the prompts.
For the system prompt, the example illustrates two ways to add it.

```python
from appl import SystemMessage, gen, grow, ppl, records


@ppl
def subprompt(name: str):
    grow(f"Hello, {name}!")
    return records()  # return the prompt grown in the current function so far


@ppl
def hello1(name: str):
    grow(SystemMessage("You are a helpful assistant."))  # one way to add a system prompt
    grow(subprompt(name))
    return gen()


@ppl(docstring_as="system")
def hello2(name: str):
    """You are a helpful assistant."""
    grow(subprompt(name))
    return gen()


print(hello1("APPL"))
print(hello2("APPL"))
```
The prompt for both `gen` calls in `hello1` and `hello2` will look like:
```yaml
- System:
    You are a helpful assistant.
- User:
    Hello, APPL!
```

</example>

<example>
You can use compositors to build the prompt; they specify the indexing, indentation, and separation between the different parts of the prompt (grown by `grow()`) inside their scope. Some useful compositors include: Tagged, NumberedList.

```python
from appl import ppl, gen, grow
from appl.compositor import NumberedList, Tagged


@ppl
def guess_output(hints: list[str], inputs: str):
    grow("Guess the output of the input.")
    with Tagged("hints"):
        with NumberedList():
            grow(hints)  # the list will be captured one by one

    with Tagged("inputs"):
        grow(inputs)

    grow("What's the output of the input?")

    return gen()


print(guess_output(["The output is the sum of the numbers"], "1, 2, 3"))
```

The prompt will look like:
```yaml
- User:
    Guess the output of the input.
    <hints>
    1. The output is the sum of the numbers
    </hints>
    <inputs>
    1, 2, 3
    </inputs>
    What's the output of the input?
```

</example>

<best_practices>
Though you can make multiple LLM calls for simple tasks that share the same context, you are encouraged to combine them into a single call with proper formatting and parsing (or using a pydantic model) to reduce cost. For example, when asked to generate a person's name and age:
```python
from pydantic import BaseModel

from appl import gen, grow, ppl


class Person(BaseModel):
    name: str
    age: int


# you should NOT do this
@ppl
def wrong_way_to_get_name_and_age():
    grow("Generate a person's name and age.")
    grow("name:")
    name = gen()
    grow(name)

    grow("age:")
    age = gen()
    grow(age)
    return Person(name=name, age=age)  # could generate in the wrong format


# you could do this
@ppl
def parse_to_get_name_and_age() -> Person:
    grow("Generate a person's name and age.")
    grow("Response in JSON format wrapped in ```json and ```, with name and age fields.")
    response = gen()
    # omit the code that uses regex and `json.loads` to parse the response into a dict
    # (see the sketch after this block)
    person: Person = parse_response(response)
    return person


# or this (generally more recommended)
@ppl
def pydantic_to_get_name_and_age() -> Person:
    grow("Generate a person's name and age.")
    return gen(response_format=Person).response_obj
```
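
A possible `parse_response` for the middle variant above (purely illustrative; it reuses the `Person` model defined in the block):

```python
import json
import re


def parse_response(response) -> Person:
    """Parse the ```json ... ``` block requested in the prompt into a Person."""
    text = str(response)
    match = re.search(r"```json\n(.*?)```", text, re.DOTALL)
    data = json.loads(match.group(1) if match else text)
    return Person(**data)
```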
</best_practices>

</explain_appl>
2 changes: 2 additions & 0 deletions .gitignore
@@ -9,11 +9,13 @@ temp*
*/*/temp*
*/*/*/temp*

logs/
*.tmp
dumps

# docs generated using scripts
docs/reference
docs/docs

# appl
appl.yml