5 Aug 2024 (v5.1.3)
- Removed dependencies for the TaskGen Memory module. Now there is only one dependency in requirements.txt: asyncio
- Removed the openai dependency, as this is LLM-specific
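With openai no longer a dependency, the LLM is supplied by the user as a function that takes a system prompt and a user prompt and returns the response text, passed in via the llm parameter. A minimal sketch of such a function (the OpenAI client, model name and parameters here are illustrative, not part of this repo):
```
from openai import OpenAI

def llm(system_prompt: str, user_prompt: str) -> str:
    '''User-supplied LLM call: takes system and user prompts, returns the response text'''
    client = OpenAI()
    response = client.chat.completions.create(
        model = 'gpt-4o-mini',
        temperature = 0,
        messages = [
            {'role': 'system', 'content': system_prompt},
            {'role': 'user', 'content': user_prompt}])
    return response.choices[0].message.content
```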
25 Jul 2024 (v5.1.1)
- Fixed a missing filter in strict_json_async to get only the text within {} for the json (see the async sketch below)
- For strict_json, changed the output to be f'''\nOutput in the following json template: ```{new_output_format}```
Update values enclosed in <> and remove the <>.
Your response must only be the updated json template beginning with {{ and ending with }}
Ensure the following output keys are present in the json: {' '.join(list(new_output_format.keys()))}'''
This changes ['###key###'] into ###key###, which helps Llama 3 8B better understand how to format the json
- Changed all default model parameters in Tutorial Notebooks and code to be 'gpt-4o-mini' instead of 'gpt-3.5-turbo' for cheaper and better performance
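A minimal async usage sketch relating to the strict_json_async fix above, assuming strict_json_async takes the same arguments as strict_json with an async llm function (the OpenAI async client, prompts and output_format are illustrative):
```
import asyncio
from openai import AsyncOpenAI
from strictjson import strict_json_async

async def llm_async(system_prompt: str, user_prompt: str) -> str:
    '''Async user-supplied LLM call'''
    client = AsyncOpenAI()
    response = await client.chat.completions.create(
        model = 'gpt-4o-mini',
        temperature = 0,
        messages = [
            {'role': 'system', 'content': system_prompt},
            {'role': 'user', 'content': user_prompt}])
    return response.choices[0].message.content

async def main():
    result = await strict_json_async(
        system_prompt = 'You are a classifier',
        user_prompt = 'It is a beautiful and sunny day',
        output_format = {'Sentiment': 'Type of Sentiment'},
        llm = llm_async)
    print(result)  # e.g. {'Sentiment': 'Positive'}

asyncio.run(main())
```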
10 Jul 2024 (v5.1.0)
- Updated strict_json prompt to be f'''\nOutput in the following json template: ```{new_output_format}```
Update values enclosed in <> and remove the <>.
Your response must only be the updated json template beginning with {{ and ending with }}
Ensure the following output keys are present in the json: {list(new_output_format.keys())}'''
This will help smaller models like Llama 3 8B generate more reliably.
- Amended the Tutorial to fix some issues where the llm variable was not called
5 Jul 2024 (v5.0.0)
- HUGE: Async support added. Added async_chat, async_strict_json, FunctionAsync
- Changed prompt of strict_json to make it compatible with more LLMs:
'''\nOutput using the following json template: {new_output_format}
Update values enclosed in <> and remove the <>.
Output only a valid json beginning with {{ and ending with }} and ensure that the following keys are present: {list(new_output_format.keys())}'''
- `Function()` can now automatically infer `fn_description` and `output_format` from an external function, using `Function(external_fn = fn)` (see the sketch below)
- By default, strictjson will not decode all unicode escapes, just \\t, \\n, \\' and \\". Using type: code will decode all unicode escapes
- type: code now converts python```code``` -> code, which makes it more versatile for LLM code generation
- Added dependencies: langchain, PyPDF2, python-docx, pandas, xlrd. These are for in-memory document processing for TaskGen (added here to also provide features of the TaskGen repo)
- Edited the `convert_to_dict` function in `base.py` to make it more robust to variations of incorrect keys, and improved the error message for incorrect keys
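A sketch of the external function inference above (the example function and call pattern are illustrative assumptions, not the repo's own example):
```
from strictjson import Function

def add_numbers(x: int, y: int) -> int:
    '''Adds x and y together'''
    return x + y

# fn_description and output_format are inferred from the external function
add_fn = Function(external_fn = add_numbers)
print(add_fn(3, 4))  # expected to return a dict containing 7 as the output value
```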
5 May 2024 (v4.1.0)
- Changed prompt of strict_json to make it compatible with more LLMs and improve performance on ChatGPT:
'''\nOutput in the following json string format: {new_output_format}
Update text enclosed in <>. Output only a valid json string beginning with {{ and ending with }}'''
- Added more external LLM examples in Tutorial
22 Apr 2024 (v4.0.1)
- Strengthened the StrictJSON prompt: added "You must output valid json with all keys present." to the system prompt of `strict_json` to make it more robust
- Changed requirements.txt to openai>=1.3.6 so that it is compatible with newer versions of LangChain
- Added dill>=0.3.7 to requirements.txt so that the agentic part can run on Colab
- Added return_as_json = True as an input variable to strict_json (default: False, which returns a python dictionary), so that you can get the json string as output if needed. Also works for OpenAI JSON Mode
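A sketch of return_as_json, assuming llm is the user-supplied LLM function sketched under the 5 Aug 2024 entry (prompts and output_format are illustrative):
```
from strictjson import strict_json

# return_as_json = True returns the raw json string instead of a python dictionary
res = strict_json(
    system_prompt = 'You are a classifier',
    user_prompt = 'It is a beautiful and sunny day',
    output_format = {'Sentiment': 'Type of Sentiment'},
    return_as_json = True,
    llm = llm)  # llm: user-supplied LLM function (see the 5 Aug 2024 entry)
print(type(res))  # <class 'str'>
```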
4 Mar 2024 (v4.0.0)
- Better parsing that caters for newlines and extra spaces for nested datatypes
- Changed the dictionary parsing of StrictJSON to extract the leftmost and rightmost { and } respectively before doing ast.literal_eval. Previously the quotation marks at the left and right might be included, which affected parsing (see the sketch below)
- Changed the bool parsing of StrictJSON to convert true/false to True/False first, so as to make it parseable by ast.literal_eval
- Made list processing in StrictJSON more robust
- Moved Agentic Framework entirely to TaskGen
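A rough sketch of the parsing approach described above, as an illustration of the idea rather than the library's actual code:
```
import ast

def parse_llm_json(text: str) -> dict:
    '''Illustrative parser: take the outermost braces, then evaluate the result'''
    # take everything from the leftmost { to the rightmost } to drop stray
    # quotation marks or text surrounding the json
    start, end = text.find('{'), text.rfind('}')
    cleaned = text[start:end + 1]
    # convert json-style booleans to python booleans so ast.literal_eval can parse them
    # (naive replacement for illustration; it would also affect true/false inside strings)
    cleaned = cleaned.replace('true', 'True').replace('false', 'False')
    return ast.literal_eval(cleaned)

print(parse_llm_json('Here is the json: {"happy": true, "count": 3}'))  # {'happy': True, 'count': 3}
```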
16 Feb 2024 (v3.0.2)
- Added select_function and use_function for manual function usage by agents
15 Feb 2024 (v3.0.1)
- Added Agent.ipynb, a tutorial for agents
- Added StrictJSON Agents which are initialisable with name and description
- Agents can take in functions of class `Function`
- Agents can run(task) and also assign(task) and step() through them
- Modified `Function` to cater for external functions as well
- Added fn_name to `Function`, which will be the fn_name parameter (if provided), or the name of the external function (if provided); otherwise it will be inferred by the LLM from fn_description
- Added `bool` type forcing to `strict_json`
9 Feb 2024 (v2.2.2)
- Exposed the LLM variable so users can use any LLM they want
- Changed `strict_function` naming to `Function` to match the convention of class names being capitalised (`strict_function` still works for legacy support)
- Added legacy support for `strict_text` and `strict_output`, which were the earlier function names of the current `strict_json`
- Variables for `Function` now enclosed in <> in `fn_description` for better performance
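A sketch of an LLM-based Function with variables enclosed in <> in fn_description, assuming llm is the user-supplied LLM function sketched under the 5 Aug 2024 entry (the prompt and output_format are illustrative):
```
from strictjson import Function

# variables in fn_description are enclosed in <>
sentence_fn = Function(
    fn_description = 'Output a sentence with <obj> and <entity> in the style of <emotion>',
    output_format = {'output': 'sentence'},
    llm = llm)  # llm: user-supplied LLM function (see the 5 Aug 2024 entry)

print(sentence_fn('ball', 'dog', 'happy'))  # e.g. {'output': 'The happy dog chased the ball.'}
```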
8 Feb 2024 (v2.2.1)
- Added LLM-based checks by using type: ensure <requirement> in the output field
- Added custom_checks and check_data in strict_json for users to implement their own custom check functions
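A sketch of an LLM-based check using type: ensure, assuming llm is the user-supplied LLM function sketched under the 5 Aug 2024 entry (the requirement wording is illustrative):
```
from strictjson import strict_json

res = strict_json(
    system_prompt = 'You are a helpful assistant',
    user_prompt = 'Describe the weather in a tropical country',
    output_format = {'Reply': 'Weather description, type: ensure the reply is under 15 words'},
    llm = llm)  # llm: user-supplied LLM function (see the 5 Aug 2024 entry)
print(res)
```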
5 Feb 2024 (v2.2.0)
- HUGE: Nested output formats of multiple lists and dictionaries are now supported!
- Now supports int, float, str, dict, list, Dict[], List[], Enum[] type forcing with LLM-based error correction (see the sketch below)
- Better handling of variable naming in strict_function by passing a list of variable names using variable_names
- Removed input_type and output_type from strict_function. Reason: input_type is not needed as LLMs can flexibly perceive inputs, and output_type is now handled with type forcing
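A sketch of type forcing in output_format, assuming llm is the user-supplied LLM function sketched under the 5 Aug 2024 entry (the fields and types are illustrative):
```
from strictjson import strict_json

res = strict_json(
    system_prompt = 'You are a classifier',
    user_prompt = 'It is a beautiful and sunny day',
    output_format = {
        'Sentiment': 'Type of Sentiment, type: Enum["Positive", "Negative"]',
        'Adjectives': 'Array of adjectives, type: List[str]',
        'Words': 'Number of words, type: int'},
    llm = llm)  # llm: user-supplied LLM function (see the 5 Aug 2024 entry)
print(res)  # e.g. {'Sentiment': 'Positive', 'Adjectives': ['beautiful', 'sunny'], 'Words': 7}
```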
8 Jan 2024 (v2.0.2)
- Installable by pip
- Support for OpenAI JSON Mode
- StrictJSON Functions