Enhance Inline Query Functionality with Action Button and Improved Caching #101
Conversation
- Add 'private' to supported chat types for inline queries
- Update `inline_query()` to:
  * Include an action button to fetch the answer
  * Use `callback_data` to store short prompts
  * Implement built-in cache for longer prompts
- Add `handle_callback_inline_query()` as callback query handler to:
  * Read the prompt from `callback_data` or the built-in cache
  * Fetch the answer from OpenAI
  * Update the current message content with the status and answer
- Add `validate_answering_possibility` method to verify user access and budget
- Add `process_used_tokens` method to apply token usage
- Use `validate_answering_possibility` in `image`, `transcribe`, `prompt`, and `inline_query` handlers
- Add inline query support to `is_allowed` and `is_within_budget` methods
- Convert `error_handler`, `split_into_chunks`, `is_user_in_group`, and `is_group_chat` to static methods
- Fix lint warnings and convert unused method parameters to private
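The callback flow listed in this changelog (show a status first, then replace it with the answer) can be sketched roughly as below. `FakeMessage`, `handle_callback`, and `fetch_answer` are illustrative stand-ins, not the PR's actual identifiers; the real handler edits a Telegram message via the Bot API.

```python
# Hypothetical sketch of the two-phase message edit described above:
# 1) edit the message to show a "generating" status,
# 2) fetch the answer from OpenAI,
# 3) edit the message again with the final answer.
class FakeMessage:
    """Stand-in for a Telegram message that supports editing its text."""

    def __init__(self):
        self.text = ""

    def edit_text(self, text):
        self.text = text


def handle_callback(message, prompt, fetch_answer):
    # Phase 1: immediately show the prompt plus a status line.
    message.edit_text(f"{prompt}\n\nGenerating answer...")
    # Phase 2: the (potentially slow) OpenAI call.
    answer = fetch_answer(prompt)
    # Phase 3: replace the status with the final answer.
    message.edit_text(f"{prompt}\n\n{answer}")
```

The same message object is edited twice, so the user sees progress without the bot posting a second message.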
I've tried this and it worked quite well. There are certainly some constraints in Telegram's inline mode that you had to work around. There was a prompt that, for some reason, did not generate an icon until I added some random extra characters to it; possibly some sort of edge bug. But overall I like this, and hopefully it will get merged.
Hi @bugfloyd, awesome work and thanks for the writeup! Tested it and I love it. It works well in private chats. The only issue I found is that when it is used in groups with the bot in them, you get the answer twice (once inline, and then again because you sent a query via the bot). What do you think the best solution would be? Enable this new inline button in private chats only?
I am glad to hear that :) I believe having this feature enabled for group chats would be very handy, especially when the bot is not (or cannot be) added to the group.
Hello, bugfloyd!
This was a nicely done pull request, but it is becoming a bit stale. Any chance to whip it into shape for merging, @bugfloyd?
My apologies for the delay in responding to this PR. I have had a very busy couple of weeks, but I will do my best to get this PR ready for merge by this weekend. However, I have some concerns regarding disabling inline queries for group chats, as this feature could be very handy when the bot is not joined to the group. Therefore, I want to explore other possible solutions to prevent duplicate messages while still keeping the inline query feature enabled for group chats. If either of you have any ideas or suggestions, please feel free to share them with me.
@bugfloyd I agree - definitely keep the inline mode for group chats, since it's not practical to add the bot to every group where it might be useful occasionally. If a duplicate message cannot be easily avoided, I think it's a small "side effect" that we can live with for now and fix at a later time. Most users would probably realize immediately after receiving a duplicate reply that there is no real reason to use the inline mode if the bot is already a direct member.
# Conflicts:
#	bot/telegram_bot.py
Handle disallowed_message & budget_limit_message for inline queries
This PR is now ready for review and merging. I've addressed the known bug by disabling the bot cache for inline queries, ensuring that the internal inline caching system works correctly for all cases. Consequently, even short prompts no longer use the callback button data, and the internal cache is employed for all inline queries. Additionally, I've handled disallowed and budget_limit messages for inline queries. In the near future, I plan to submit several more PRs to address:
Thank you @bugfloyd, great work!
Done in #230:
Background
The current inline queries feature functions more like a decorative element, as it does not operate as a true inline query. The shortcomings include:
Existing Challenges and Possible Solutions
If we want to generate a response from OpenAI APIs and provide it within the inline result or the final output, there are some clear challenges:
The Solution
I retained the initial inline results as previews with just the prompt and no answers included, but added an action button (`InlineKeyboardButton`) in the final output to generate the actual answer to the prompt. This button triggers a callback, in which we read the prompt, get the answer for it, and edit the original message with the answer. While generating the answer, it also adds a status to the message.

I faced challenges in resolving the following issue:
How can we get the original query (prompt) in the action callback function? I found `callback_data` to be a suitable place to store the prompt and read it in the callback, but unfortunately, Telegram imposes a 64-byte limit on this property. To overcome this limit, I implemented a built-in caching system for each user's inline queries. I ended up using `callback_data` for short prompts because it is a more reliable solution, and it partially fixes the known bug below.

Known Issues
When a user types a long inline prompt, the query gets cached and, upon clicking the action button and retrieving the answer, that cache entry is removed. If the user then sends the exact same long prompt, the initial response with the action button is served from the Telegram/bot cache. The action button does not work, since we no longer have a cache entry for the unique ID included in the `callback_data` (which came from the Telegram cache and includes the original unique ID). This bug does not exist for short prompts, as the code uses `callback_data` to transport the prompt itself. Considering this bug a real edge case, I think we are safe to use the current flow.

Note:
For inline queries, I am using the user ID as the chat ID when getting answers from the OpenAI class. Doing so enables users to have a consistent inline query experience across different chats (e.g., asking something in a group and referring to it while having a private conversation at the same time).
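The short-prompt-vs-cache decision described above can be sketched as follows. Telegram limits `callback_data` to 64 bytes, so short prompts can travel inside the button itself while longer ones go into a per-user cache keyed by a generated ID. The names here (`inline_cache`, `make_callback_data`, `resolve_prompt`, the `gpt:` prefix) are illustrative assumptions, not the PR's actual identifiers.

```python
import uuid

CALLBACK_DATA_LIMIT = 64  # bytes, imposed by the Telegram Bot API
PREFIX = "gpt:"           # marks payloads belonging to this bot (illustrative)

# Hypothetical built-in cache: user_id -> {query_id: prompt}
inline_cache = {}


def make_callback_data(user_id, prompt):
    payload = PREFIX + prompt
    if len(payload.encode("utf-8")) <= CALLBACK_DATA_LIMIT:
        # Short prompt: ship it inside the button's callback_data directly.
        return payload
    # Long prompt: store it in the per-user cache and reference it by ID.
    query_id = uuid.uuid4().hex[:16]
    inline_cache.setdefault(user_id, {})[query_id] = prompt
    return PREFIX + "id:" + query_id


def resolve_prompt(user_id, callback_data):
    body = callback_data[len(PREFIX):]
    if body.startswith("id:"):
        # Long prompt: pop it from the cache (it is removed after one use,
        # which is the source of the edge-case bug described above).
        return inline_cache.get(user_id, {}).pop(body[3:], None)
    # Short prompt: it travelled inside callback_data itself.
    return body
```

Because long prompts are popped from the cache on first use, a second click on a stale button yields `None`, matching the known issue with Telegram-cached inline results.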
Detailed Changes

- Add 'private' to supported chat types for inline queries
- Update `inline_query` method to:
  - Include an action button to fetch the answer
  - Use `callback_data` to store short prompts
  - Implement a built-in cache for longer prompts
- Add `handle_callback_inline_query` as a callback query handler to:
  - Read the prompt from `callback_data` or the built-in cache
  - Fetch the answer from OpenAI
  - Update the current message content with the status and answer
- Add `validate_answering_possibility` method to verify user access and budget
- Add `process_used_tokens` method to apply token usage
- Use `validate_answering_possibility` in `image`, `transcribe`, `prompt`, and `inline_query` handlers
- Add inline query support to `is_allowed` and `is_within_budget` methods
- Convert `error_handler`, `split_into_chunks`, `is_user_in_group`, and `is_group_chat` to static methods
- Fix lint warnings and convert unused method parameters to private

Screenshots
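The access-and-budget gate mentioned in the changes above might look roughly like this minimal sketch. The real `validate_answering_possibility` lives in `bot/telegram_bot.py` and reads the bot's configuration; the parameters `allowed_user_ids` and `budgets` here are illustrative assumptions.

```python
# Hedged sketch of a combined access + budget check, as a free function.
# Returns an error message for the user, or None if answering is possible.
def validate_answering_possibility(user_id, name, allowed_user_ids, budgets):
    # Access check: a wildcard entry admits everyone, otherwise the
    # user must be explicitly listed.
    if "*" not in allowed_user_ids and user_id not in allowed_user_ids:
        return f"User {name} is not allowed to use this bot"
    # Budget check: a non-positive remaining budget blocks the request.
    if budgets.get(user_id, 0.0) <= 0.0:
        return f"User {name} has exceeded their usage budget"
    return None
```

Returning a message rather than a boolean lets the same gate serve the `disallowed_message` and `budget_limit_message` paths for regular and inline queries alike.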
Future Plans and Enhancements
The current caching system for prompts serves as an initial MVP. We may consider refining or expanding it based on our requirements.
At present, only GPT models are supported in inline queries. We can also explore incorporating image generation using DALL-E, following a similar workflow.
Resolves #41