
Redundancy in the log files #22

Closed · rayrayraykk opened this issue Apr 18, 2022 · 6 comments
Labels: enhancement (New feature or request)

Comments
rayrayraykk (Collaborator) commented Apr 18, 2022

A FedAvg trial on 5% of FEMNIST produces a ~500 KB log each round. About 80% of it is eval logs like 2022-04-13 16:33:24,901 (client:264) INFO: Client #1: (Evaluation (test set) at Round #26) test_loss is 79.352451, another 10% is server results, and the remaining 10% is training information.

If training runs for 500, 1000, or many more rounds, the log files will take up too much space and contain a lot of redundancy. @yxdyc

rayrayraykk added the "enhancement" (New feature or request) label Apr 18, 2022
yxdyc (Collaborator) commented Apr 18, 2022

The output log files are currently redundant, wasting disk space and making it hard to find key results. We could save the raw_metrics and the training_metrics in two separate log files with a compact format.
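A minimal sketch of what "separate log files with a compact format" could look like, using the standard Python logging module and JSON lines; the logger and file names here are illustrative, not the actual FederatedScope implementation:

```python
import json
import logging

def make_metrics_logger(name, path):
    """Create a logger that appends one compact JSON record per line."""
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    logger.propagate = False  # keep these records out of the main verbose log
    handler = logging.FileHandler(path)
    handler.setFormatter(logging.Formatter("%(message)s"))  # message only, no prefix
    logger.addHandler(handler)
    return logger

def log_metrics(logger, record):
    """Serialize one metrics dict as a single JSON line."""
    logger.info(json.dumps(record, sort_keys=True))

# Hypothetical usage: raw eval metrics go to their own compact file.
raw_logger = make_metrics_logger("raw_metrics", "raw_metrics.log")
log_metrics(raw_logger, {"round": 26, "client": 1, "test_loss": 79.352451})
```

One JSON record per round per client is much smaller than a free-text INFO line per metric, and the file stays machine-parsable for later analysis.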

rayrayraykk (Collaborator, Author) commented

> The output log files are currently redundant, wasting disk space and making it hard to find key results. We could save the raw_metrics and the training_metrics in two separate log files with a compact format.

And we should delete the logs below, which have already been reported on the server side.

for key in eval_metrics:

rayrayraykk (Collaborator, Author) commented

Another issue: the eval results of the global evaluation mode are empty; we should fix this as well.

yxdyc (Collaborator) commented Apr 20, 2022

> And we should delete the logs below, which have already been reported on the server side.
>
> for key in eval_metrics:

We still need this log when we are in distributed mode. Do you agree that we should mask these logs only in the "simulation" mode? @rayrayraykk

rayrayraykk (Collaborator, Author) commented

> We still need this log when we are in distributed mode. Do you agree that we should mask these logs only in the "simulation" mode? @rayrayraykk

Agree.
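The agreed-upon masking could be guarded along these lines; the mode string, function name, and eval_metrics dict are hypothetical stand-ins for the actual FederatedScope client code, not its real API:

```python
import logging

logger = logging.getLogger(__name__)

def report_eval(eval_metrics, mode, round_num, client_id):
    """Log per-key eval results on the client side only outside
    simulation mode, where the server does not already report them."""
    lines = []
    if mode != "simulation":  # in distributed mode the client log is still needed
        for key in eval_metrics:
            line = (f"Client #{client_id}: (Evaluation at Round "
                    f"#{round_num}) {key} is {eval_metrics[key]:.6f}")
            logger.info(line)
            lines.append(line)
    return lines
```

In simulation mode the loop is skipped entirely, so the redundant per-client eval lines never reach the log file, while distributed clients keep their local record.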

yxdyc (Collaborator) commented Apr 24, 2022

Solved in #29

yxdyc closed this as completed Apr 24, 2022