Thank you for this valuable implementation!
Our lab is also interested in this work and we have tried your implementation. However, we observe a significant accuracy drop when LazyLLM is applied (NarrativeQA: 17.89 -> 13.24). How about your results? Do you reproduce the numbers reported in the paper? We would be glad to hear your response.
Best Regards
Hello,
I'm glad my code came in useful. Unfortunately, I haven't done any extensive testing of those models, since this is only a hobby project. However, if you are seeing significant accuracy drops, it's likely due to a suboptimal pruning-rate configuration. If I recall correctly, the authors of the original paper mentioned that they achieved the best results with pruning rates decreasing from the first layer to the last layer. In the example code I provided, the pruning rate is fixed at 0.1 for every layer, which could explain the accuracy drop.
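As a rough illustration, a schedule like the one below could replace the fixed 0.1 rate. This is only a minimal sketch: the `decreasing_pruning_rates` helper and the commented `apply_lazyllm(..., pruning_rates=...)` call are hypothetical names, not the actual API of this repo or of the paper's code.

```python
# Hypothetical sketch: build a per-layer pruning-rate schedule that decreases
# from the first transformer layer to the last, instead of a fixed 0.1 everywhere.

def decreasing_pruning_rates(num_layers: int, start: float = 0.5, end: float = 0.0):
    """Linearly interpolate pruning rates from `start` (first layer) down to `end` (last layer)."""
    if num_layers == 1:
        return [start]
    step = (start - end) / (num_layers - 1)
    return [start - i * step for i in range(num_layers)]

# Example: a 32-layer model prunes aggressively in early layers and not at all in the last one.
rates = decreasing_pruning_rates(num_layers=32, start=0.5, end=0.0)
print(rates[:3], rates[-3:])  # roughly [0.5, 0.484, 0.468] ... [0.032, 0.016, 0.0]

# Then pass `rates` wherever the per-layer pruning rates are configured, e.g.:
# model = apply_lazyllm(model, pruning_rates=rates)  # hypothetical call, adapt to the repo's interface
```

The start/end values above are placeholders; you would need to tune them (or follow the schedule described in the paper) to see whether the NarrativeQA score recovers.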