Teaching the AI assistant to call tools #586
Comments
Nice summary of the issues. I hope to play around with some of these ideas soon. Might even be a research topic!
A very nice use of an LLM would be to process a Note (such as a transcription of an obituary) for proper names and offer a linking list of people within two degrees of separation, then push the remaining unmatched names through Doug Blank's Data Entry gramplet (which allows quickly adding new parents, siblings, and children).
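If one wanted to prototype the name-extraction step, here is a minimal sketch assuming spaCy for the NER part; the two-degree matching and the gramplet hand-off are only indicated in comments, not implemented:

```python
# Sketch: extract proper names from a note and propose matches.
# Assumes spaCy and its small English model are installed
# (pip install spacy && python -m spacy download en_core_web_sm).
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_person_names(note_text: str) -> list[str]:
    """Return the distinct PERSON entities found in a note."""
    doc = nlp(note_text)
    return sorted({ent.text for ent in doc.ents if ent.label_ == "PERSON"})

# Hypothetical next steps: for each extracted name, look up candidate
# people within two degrees of separation of the note's subject and
# offer them as link targets; names with no match would be handed to
# the Data Entry gramplet for quick creation.
```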
Something else I've been using manually a lot is taking the output of OCR text recognition and having the LLM fix all the typos. In combination, this leads to almost perfect transcriptions.
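A minimal sketch of that correction step, assuming the OpenAI Python client; the model name is just an example, and any chat-capable model would do:

```python
# Sketch: post-correct OCR output with an LLM.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def fix_ocr_typos(ocr_text: str) -> str:
    """Ask the model to correct OCR errors without changing content."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model, not a requirement
        messages=[
            {"role": "system",
             "content": "Correct OCR errors in the user's text. "
                        "Fix typos only; do not rephrase or add anything."},
            {"role": "user", "content": ocr_text},
        ],
    )
    return response.choices[0].message.content
```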
Another way to make the assistant smarter might be to give it access to all the tree data, along with descriptions of how to use it, like this tool: https://github.com/mindsdb/mindsdb
(NB: this is a feature request, but also the start of a discussion - I think we need some good ideas first.)
Currently, the AI assistant is not very smart as it can only retrieve individual Gramps objects and doesn't know anything about relationships, so you can't even ask it for your grandfather.
To solve that, we need to teach it how to call tools/functions.
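For context, this is roughly what tool calling looks like with an OpenAI-style chat API; the `get_relatives` tool and its parameters below are invented for illustration, not an existing Gramps Web endpoint:

```python
# Sketch: declare a tool the model may call. The tool name and
# parameter schema here are hypothetical.
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_relatives",
        "description": "Find relatives of a person in the family tree.",
        "parameters": {
            "type": "object",
            "properties": {
                "person": {"type": "string",
                           "description": "Name of the person"},
                "relationship": {
                    "type": "string",
                    "enum": ["parent", "grandparent", "sibling", "child"],
                },
            },
            "required": ["person", "relationship"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Who was my grandfather?"}],
    tools=tools,
)

# If the model decides to call the tool, execute it and send the
# result back to the model in a follow-up message.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```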
In approaching that, there are several questions to answer.
One challenge I see is that the number of possible functions is quite large.
Although I haven't tried it myself yet, common lore is that an LLM can only reliably identify the right function to call if the number of functions is small, probably below 10.
What I find quite promising is leveraging query languages like GQL or @dsblank's Object QL, of which I suspect the latter is the better choice.
What could be done is the following: prepare a set of example questions paired with the queries that answer them, and compute embeddings of those example questions.
Now, with these embeddings at hand, when the assistant gets a question, it could embed it, look up the most similar example queries in the vector index, and use them to construct and run a query on the tree.
Funnily enough, this would even be less resource intensive than the retrieval-based answers, since it only needs a vector index of queries that can be computed in advance once and for all.
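A minimal sketch of such a precomputed query index, assuming sentence-transformers for the embeddings; the example questions and the `QUERY_*` placeholders are invented, not real GQL or Object QL:

```python
# Sketch: nearest-neighbour lookup over precomputed example queries.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Example questions paired with the queries that answer them,
# prepared once in advance. The query strings are placeholders.
examples = [
    ("Who are the parents of a person?", "QUERY_PARENTS"),
    ("When was a person born?", "QUERY_BIRTH_DATE"),
    ("Where did a person get married?", "QUERY_MARRIAGE_PLACE"),
]
index = model.encode([q for q, _ in examples], convert_to_tensor=True)

def best_query(question: str) -> str:
    """Return the stored query whose example question is most similar."""
    embedding = model.encode(question, convert_to_tensor=True)
    scores = util.cos_sim(embedding, index)[0]
    return examples[int(scores.argmax())][1]

print(best_query("Who were my great-grandmother's parents?"))
```

The index is built once and reused for every question, which is what makes this cheaper than embedding tree content at answer time.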
I don't think I'll have time to work on this myself in the next 2 months or so, but if anyone experiments with this or has other ideas, please share here!
🤖