
Some results are exactly the same as pFedPrompt paper Table 1 #2

Open
liuxingzhang opened this issue May 1, 2024 · 1 comment

@liuxingzhang

Thank you for your amazing work and for open-sourcing the implementation.

I wonder why, in Table 1 of your paper, the results of

  • pFedPrompt
  • PromptFL+FT
  • PromptFL+FedPer
  • PromptFL+FedAMP

are exactly the same (same mean and same std) as the corresponding results in Table 1 of the pFedPrompt paper [1]. If the federated dataset partition and the seeds used to run these experiments are the same as in pFedPrompt's experiments, why do the results of PromptFL and PromptFL+FedProx in your experiments differ from theirs?

[1] Guo, Tao, Song Guo, and Junxiao Wang. "pFedPrompt: Learning Personalized Prompt for Vision-Language Models in Federated Learning." In Proceedings of the ACM Web Conference 2023 (WWW '23), pp. 1364–1374, 2023.

@snow-zhai

I have the same question. Could you release the source code for all the baselines? Thank you.
