Replies: 5 comments 1 reply
-
Hi @FreBon, I'm not sure which engine is interpreting the prompt you included above, but from the prompt I noticed a couple of problems:
-
Hi @dluc, thanks for the response :) I don't know which template engine I'm using, but to test this I created a console app and updated my prompt template to send the user input as a parameter to ask. The ask function then does seem to be triggered through the plug-in, but the result was really strange.

This is how the code looks in the console application:

And this was the result I got in the logs:

I don't know how to pass parameters to the template when using the Chat Completion service from Semantic Kernel. I haven't found any way to pass them to GetStreamingChatMessageContentsAsync; it only takes the PromptExecutionSettings object, not the KernelArguments (which has PromptExecutionSettings as a property) the way InvokePromptAsync on the Kernel does. How do I pass the parameters to the chat completion service?

As for the Markdown format, that should not be sent to Kernel Memory; it's for Semantic Kernel. I want the chat solution to produce nice-looking responses, but I guess I could remove it from the template and add it as a system message.
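A sketch of one way around this, assuming Semantic Kernel 1.x (the deployment name, endpoint, key, and template below are illustrative placeholders, not code from this thread): instead of calling GetStreamingChatMessageContentsAsync directly, let the kernel render the template and carry the execution settings inside the KernelArguments via InvokePromptStreamingAsync.

```csharp
// Sketch only: streaming a templated prompt with KernelArguments.
// All names, keys, and endpoints are placeholders.
using Microsoft.KernelMemory;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion("my-deployment", "https://my-resource.openai.azure.com", "api-key")
    .Build();

// A serverless Kernel Memory instance, exposed to the kernel so that
// {{memory.ask}} resolves inside the template.
IKernelMemory memory = new KernelMemoryBuilder()
    .WithOpenAIDefaults("api-key")
    .Build<MemoryServerless>();
kernel.ImportPluginFromObject(new MemoryPlugin(memory), "memory");

const string promptTemplate =
    "{{memory.ask $input}}\nAnswer using the facts above. Question: {{$input}}";

// KernelArguments carries both the template variables and the execution
// settings; GetStreamingChatMessageContentsAsync takes neither a template
// nor KernelArguments, so rendering has to happen before that call.
var arguments = new KernelArguments(new OpenAIPromptExecutionSettings { Temperature = 0.2 })
{
    ["input"] = "What does the imported page say about pricing?"
};

// InvokePromptStreamingAsync renders the template (running {{memory.ask $input}})
// and streams the completion in one step.
await foreach (var chunk in kernel.InvokePromptStreamingAsync(promptTemplate, arguments))
{
    Console.Write(chunk);
}
```

If you do need GetStreamingChatMessageContentsAsync itself, the alternative is to render the template to a string first (for example with KernelPromptTemplateFactory) and put the result into the ChatHistory you pass in.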
-
@FreBon Hey, were you able to figure this out? I am trying to solve this problem right now.
-
Hello, we need to be able to constrain the Kernel Memory to a specific index/document/tag. Please advise how this can be accomplished through SK, as demonstrated by the OP above. Thanks
-
It's recommended to use KM from a backend service, without direct calls coming from users, so you should have complete control of the requests sent to KM, including which indexes and tags to use.
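For example, a backend call scoped to a single index, document, or tag could look like this sketch (assuming the Kernel Memory .NET client; the index name, document ID, and tag values are placeholders):

```csharp
// Sketch only: constraining a Kernel Memory query from the backend.
using Microsoft.KernelMemory;

IKernelMemory memory = new KernelMemoryBuilder()
    .WithOpenAIDefaults("api-key")
    .Build<MemoryServerless>();

var answer = await memory.AskAsync(
    "What does the imported page say about pricing?",
    index: "user-42",                             // restrict to one index
    filter: MemoryFilters.ByTag("user", "42")     // restrict to matching tags
                         .ByDocument("doc-001")); // and/or a single document

Console.WriteLine(answer.Result);
```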
-
Context / Scenario
Hi,
I'm using Semantic Kernel with Azure OpenAI services to build a chat solution. This works great, but now I want to add memory to handle when users paste URLs in their prompts: I extract the URLs and add them to Kernel Memory. I'm using it in serverless mode, so I use the MemoryServerless instance that I get from the KernelMemoryBuilder to import the URLs.
This is how the memory builder looks:
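(The builder snippet was attached as an image and is missing here; a minimal serverless setup might look like the sketch below, assuming Azure OpenAI for both text generation and embeddings — all configuration values are placeholders.)

```csharp
// Sketch only: a serverless Kernel Memory builder. Placeholder config values.
using Microsoft.KernelMemory;

var azureConfig = new AzureOpenAIConfig
{
    Auth = AzureOpenAIConfig.AuthTypes.APIKey,
    APIKey = "api-key",
    Endpoint = "https://my-resource.openai.azure.com",
    Deployment = "my-deployment" // embeddings usually use a separate deployment
};

var memory = new KernelMemoryBuilder()
    .WithAzureOpenAITextGeneration(azureConfig)
    .WithAzureOpenAITextEmbeddingGeneration(azureConfig)
    .Build<MemoryServerless>();

// Importing a URL extracted from the user's prompt:
await memory.ImportWebPageAsync("https://example.com/page", documentId: "doc-001");
```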
I also add the memory plug-in:
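(This snippet is missing as well; registering the plug-in typically looks something like this, where the plug-in name "memory" is an assumption chosen to match the template below.)

```csharp
// Sketch only: exposing the MemoryServerless instance to Semantic Kernel.
// `kernel` and `memory` are the instances built above.
using Microsoft.KernelMemory;
using Microsoft.SemanticKernel;

kernel.ImportPluginFromObject(
    new MemoryPlugin(memory, waitForIngestionToComplete: true),
    "memory");
```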
When the user sends a prompt I'm using the Memory Plugin with a prompt template that looks like this:
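(The template is missing too; a typical template in the default SK syntax, querying the plug-in and asking for Markdown output, might read as follows — not the OP's exact template.)

```
{{memory.ask $input}}
---
Answer the user's question using only the facts above, formatted as Markdown.
Question: {{$input}}
```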
Question
Grateful for any guidance :)