
Copilot Extensions specific configuration support #526

Open
biosugar0 opened this issue Nov 19, 2024 · 6 comments · May be fixed by #531
Labels
enhancement New feature or request help wanted Extra attention is needed question Further information is requested

Comments

@biosugar0

Thank you for adding such a convenient feature!
#490

It would be even more useful if we could configure settings for each agent individually. For example, Perplexity AI allows you to use models like those described here:
https://docs.perplexity.ai/guides/model-cards

It might be helpful to have a configuration like the example below:

```lua
local opts = {
  debug = false,
  model = 'claude-3.5-sonnet', -- default model
  agents = { -- agent-specific configuration
    perplexityai = {
      model = 'llama-3.1-sonar-huge-128k-online', -- agent-specific model
    },
  },
  prompts = prompts,
}

local chat = require('CopilotChat')
chat.setup(opts)
```
@biosugar0
Author

I haven’t investigated this in detail yet, but since these Agents can be custom-built, it might be better to allow their settings to be flexible, depending on the case. For example, while the "model" parameter mentioned above is relatively general, there could also be parameters specific to each Agent.

@deathbeam deathbeam added the enhancement New feature or request label Nov 19, 2024
@deathbeam
Collaborator

Hmm, so if I understand this correctly, this is sent to the /completions endpoint, right? Then I can just accept any values in the agent config, merge them into the request we are building, and use that.
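The merge described above could be sketched roughly like this (a hypothetical snippet: `build_request`, `config.agents`, and `base_request` are illustrative names, not the plugin's actual internals):

```lua
-- Hypothetical sketch of merging per-agent config into the request body.
-- The function and table names here are assumptions for illustration.
local function build_request(config, agent_name, base_request)
  local agent_config = config.agents and config.agents[agent_name]
  if agent_config then
    -- Accept arbitrary keys and deep-merge them over the base request,
    -- so agent-specific values (e.g. `model`) override the defaults.
    return vim.tbl_deep_extend('force', base_request, agent_config)
  end
  return base_request
end
```

With this approach the plugin does not need to know which parameters a given agent supports; anything the user puts under `agents.<name>` is passed through to the request as-is.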

@deathbeam
Collaborator

deathbeam commented Nov 19, 2024

So I implemented it, but I don't know if there's any way to verify that it works (as the response does not contain the model).


deathbeam added a commit to deathbeam/CopilotChat.nvim that referenced this issue Nov 19, 2024
Allow users to define extra per-agent configuration by introducing a new
`agents` config option. This enables customizing request parameters for
specific agents, such as setting different models for different AI
providers.

Closes CopilotC-Nvim#526

Signed-off-by: Tomas Slusny <[email protected]>
@deathbeam deathbeam linked a pull request Nov 19, 2024 that will close this issue
@biosugar0
Author

It’s difficult to verify the behavior without a response. I noticed this issue because the behavior of websites referenced by PerplexityAI had changed.

When I used GPT-4o to make a search request in Japanese through PerplexityAI, it referenced Chinese websites and returned the answer in Japanese. However, when I specified llama-3.1-sonar-huge-128k-online, it correctly referenced Japanese websites and provided an appropriate response.

Currently, GPT-4o also seems to provide accurate responses, so there's no longer a straightforward way to confirm the behavior.

@deathbeam deathbeam added the question Further information is requested label Nov 19, 2024
@deathbeam
Collaborator

Hmm, yeah, I don't really want to add it if we don't know it works. Maybe it can be verified with some other extension? If anyone knows how to verify it, please do tell.

@deathbeam deathbeam added the help wanted Extra attention is needed label Nov 21, 2024
@biosugar0
Author

@deathbeam Looking at the code in the preview SDK, it seems that parameters can be configured for each agent in the CopilotRequestPayload.

https://github.com/copilot-extensions/preview-sdk.js/blob/f6756190b2ec70c6aea4eaa0d3caafd1d3f06ba5/index.d.ts#L93-L104

When I tried running and debugging the example at
https://github.com/copilot-extensions/blackbeard-extension locally, it seems that the "model" parameter cannot be configured. The behavior used to change before, but at least for now, modifying "model" appears to have no effect.
