feat(bench): memory usage depending on k_table_size #272
Ran it on the server along with dhat, all with the same parameters. If this information is not enough for you, tell me what else you need and I will itemize it per your request.

| k | repeat_count | total_bytes | total_blocks | t_gmax | t_gmax_blocks | t_end | t_end_blocks |
| --- | -------------- | ------------- | -------------- | -------- | --------------- | ------- | --------------- |
| 4.58 | 1 | 21,103,360,561 | 8,379,587 | 1,294,194,541 | 30,297 | 958,609 | 1,563 |
| 14.55 | 1000 | 21,149,395,860 | 8,607,137 | 1,294,194,541 | 30,297 | 958,609 | 1,563 |
| 15.55 | 2000 | 21,193,222,448 | 8,806,406 | 1,294,194,541 | 30,297 | 958,609 | 1,563 |
| 16.14 | 3000 | 21,242,455,990 | 9,068,519 | 1,294,194,541 | 30,297 | 958,609 | 1,563 |
| 16.55 | 4000 | 21,290,778,894 | 9,315,603 | 1,294,194,541 | 30,297 | 958,609 | 1,563 |
| 16.87 | 5000 | 21,335,068,150 | 9,518,476 | 1,294,194,541 | 30,297 | 958,609 | 1,563 |
Attachment: mem_profiling.tar.gz
If you want, I can double-check the correctness of the result some other way. Still, the utility I used is not production-ready.
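For context, a minimal sketch of the standard `dhat` crate heap-profiling setup that produces numbers like those above (`Total`, `At t-gmax`, `At t-end`); `run_workload` here is a hypothetical stand-in for the actual folding workload, not the repo's real harness:

```rust
// Route all allocations through dhat's instrumented allocator.
#[global_allocator]
static ALLOC: dhat::Alloc = dhat::Alloc;

fn main() {
    // Heap profiling runs while `_profiler` is alive; when it is dropped
    // it writes `dhat-heap.json` with the Total / t-gmax / t-end stats.
    let _profiler = dhat::Profiler::new_heap();

    run_workload();
}

// Hypothetical stand-in for the IVC folding steps being measured.
fn run_workload() {
    let v: Vec<u64> = (0..1_000_000).collect();
    std::hint::black_box(v);
}
```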
I observed 21 GB usage for all settings. Does that mean the same memory usage for different circuits, step sizes, etc.? I was expecting that a larger circuit would need more memory.
Also ran a benchmark where primary & secondary …
I ran it with different sizes of filled step-circuit rows, as I did earlier with the benchmark. Now I have also added a test with different step-circuit table sizes. As soon as the result is ready, I will post it here.
In this table I used … For k = 20, 21 the runs are still in progress (with a custom allocator, it's going to take a long time).

| k | total_bytes | total_blocks | t_gmax | t_gmax_blocks | t_end | t_end_blocks |
| --- | ------------- | -------------- | -------- | --------------- | ------- | --------------- |
| 17 | 46,470,334,145 | 8,384,081 | 26,415,304,923 | 15,860 | 958,633 | 1,566 |
| 18 | 66,329,954,655 | 13,391,175 | 27,059,130,587 | 15,860 | 958,633 | 1,566 |
| 19 | 103,941,909,225 | 22,422,684 | 28,346,781,915 | 15,860 | 958,633 | 1,566 |
I am a little confused about these two tables, (1) and (2): when k = 17, the total bytes in table (2) are much larger than in table (1), yet table (1) also corresponds to table size 17. Is that correct?
I did a rough memory estimate: when k = 17, the memory usage should be around 2 GB. From your data above, it seems the memory usage of the rest of the program was not subtracted? @cyphersnake
You didn't notice that I said: to measure different circuit sizes, I deliberately chose COMMITMENT_KEY=27, which means about 16 GB for the commitment key alone. Also, these are different `k`s: in the first case it is the filled size, in the second case the absolute table size. I'll correct the names now so it's not confusing.
k-table-size = 17: Total: 19,449,044,966 bytes in 6,417,946 blocks
If the commitment key were scaled together with the circuit size, its influence on the result would be too strong to estimate the difference in performance, so I made the measurements with COMMITMENT_KEY fixed at the maximum of the necessary sizes, and noted that.
I will now summarize the results and describe them separately, describing beforehand the design of each experiment and the motivation for that particular design. If you want measurements with any specific parameters, just describe them. The key dimensions are:

- `k_table_size`
- `commitment_key_size`
- `repeat_count`
- `r_f`
- `r_p`

(the last two don't affect the result much)
Common terms

- `total_bytes` / `total_blocks`: cumulative bytes/blocks allocated over the whole run (dhat's "Total").
- `t_gmax` / `t_gmax_blocks`: live bytes/blocks at the moment of global peak heap usage (dhat's "At t-gmax").
- `t_end` / `t_end_blocks`: bytes/blocks still live when the program exits (dhat's "At t-end").
Test 1

Fixed table size, varying the amount of filled rows via repeat_count; filled_k is the base-2 logarithm of the number of filled rows.
| filled_k | repeat_count | total_bytes | total_blocks | t_gmax | t_gmax_blocks | t_end | t_end_blocks |
| -------- | -------------- | ------------- | -------------- | -------- | --------------- | ------- | --------------- |
| 4.58 | 1 | 21,103,360,561 | 8,379,587 | 1,294,194,541 | 30,297 | 958,609 | 1,563 |
| 14.55 | 1000 | 21,149,395,860 | 8,607,137 | 1,294,194,541 | 30,297 | 958,609 | 1,563 |
| 15.55 | 2000 | 21,193,222,448 | 8,806,406 | 1,294,194,541 | 30,297 | 958,609 | 1,563 |
| 16.14 | 3000 | 21,242,455,990 | 9,068,519 | 1,294,194,541 | 30,297 | 958,609 | 1,563 |
| 16.55 | 4000 | 21,290,778,894 | 9,315,603 | 1,294,194,541 | 30,297 | 958,609 | 1,563 |
| 16.87 | 5000 | 21,335,068,150 | 9,518,476 | 1,294,194,541 | 30,297 | 958,609 | 1,563 |
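One sanity check on the `filled_k` column: it is consistent with `filled_k = log2(repeat_count × r)` for roughly r ≈ 24 filled rows per repetition (r is inferred here by fitting the first table row, not taken from the code):

```rust
fn filled_k(repeat_count: u64, rows_per_repeat: f64) -> f64 {
    (repeat_count as f64 * rows_per_repeat).log2()
}

fn main() {
    // ~24 rows per repetition, inferred from the repeat_count = 1 row.
    let r = 24.0;
    for n in [1u64, 1000, 2000, 3000, 4000, 5000] {
        println!("repeat_count = {n:>4} -> filled_k ≈ {:.2}", filled_k(n, r));
    }
    // Prints ≈ 4.58, 14.55, 15.55, 16.14, 16.55, 16.87 — matching the table.
}
```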
Test 2

Varying the absolute table size k, with COMMITMENT_KEY fixed at 27 (the maximum needed) for every run.
| k | total_bytes | total_blocks | t_gmax | t_gmax_blocks | t_end | t_end_blocks |
| --- | ------------- | -------------- | -------- | --------------- | ------- | --------------- |
| 17 | 46,470,334,145 | 8,384,081 | 26,415,304,923 | 15,860 | 958,633 | 1,566 |
| 18 | 66,329,954,655 | 13,391,175 | 27,059,130,587 | 15,860 | 958,633 | 1,566 |
| 19 | 103,941,909,225 | 22,422,684 | 28,346,781,915 | 15,860 | 958,633 | 1,566 |
| 20 | 177,592,778,430 | 39,557,793 | 30,922,084,571 | 15,860 | 958,633 | 1,566 |
| 21 | 323,176,881,971 | 71,974,399 | 36,072,689,883 | 15,860 | 958,633 | 1,566 |

NOTE: In this test it is not the absolute values that are important, as they are skewed by the large COMMITMENT_KEY, but the nature of the growth in memory usage.
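To make the growth pattern concrete: the successive `t_gmax` differences roughly double with each increment of k, i.e. the circuit-dependent part of the peak scales as O(2^k) on top of a fixed ~25.8 GB baseline (commitment keys and other k-independent allocations). A quick check over the table values:

```rust
fn main() {
    // t_gmax values (bytes) from the table above, k = 17..=21.
    let t_gmax: [u64; 5] = [
        26_415_304_923,
        27_059_130_587,
        28_346_781_915,
        30_922_084_571,
        36_072_689_883,
    ];
    // Each difference is roughly 2x the previous one:
    // ~0.64 GB, ~1.29 GB, ~2.58 GB, ~5.15 GB.
    for w in t_gmax.windows(2) {
        println!("delta = {:.2} GB", (w[1] - w[0]) as f64 / 1e9);
    }
}
```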
Test 3

Table size k = 17 again, but with the commitment key sized down (21) rather than the maximal COMMITMENT_KEY = 27 used in Test 2.

| k | total_bytes | total_blocks | t_gmax | t_gmax_blocks | t_end | t_end_blocks |
| --- | ------------- | -------------- | -------- | --------------- | ------- | --------------- |
| 17 | 19,449,044,966 | 6,417,946 | 1,293,397,125 | 28,956 | 161,193 | 222 |
When table size k = 17 and commitment key size = 21, how do you get …
Let me draw your attention to the difference. The …
Test 1: … Test 2: … In both tests we have the same circuit size but different commitment keys. After subtracting the commitment key size from the memory, I would expect them to be of similar size, but they are not. I don't understand why. Any idea about it? @cyphersnake
Don't forget about the serialization of the commitment key in …
The bn256 and grumpkin keys total 8 GB + 8 GB = 16 GB. Why is the memory increase 8 GB, and not 0 GB or 16 GB? How does the serialization of the commitment keys work internally?
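For reference, the 16 GB figure is consistent with a back-of-envelope estimate, assuming each affine point is stored uncompressed as two 32-byte base-field coordinates (the layout is an assumption here, not taken from the code):

```rust
fn main() {
    // Assumption: one affine point = two 32-byte field coordinates.
    const POINT_SIZE: u64 = 2 * 32;
    let commitment_key_k = 27u32;

    let per_curve = (1u64 << commitment_key_k) * POINT_SIZE;
    println!("per curve:        {per_curve} bytes (~8.6 GB)");
    // bn256 key + grumpkin key:
    println!("bn256 + grumpkin: {} bytes (~17.2 GB)", 2 * per_curve);
}
```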
**Motivation** Now we have a default path for time-profiling & mem-profiling, but it is not very convenient to vary the runtime parameters by hand. That's why I put all runtime parameters into a separate cli example. This allows us to do time-profiling & mem-profiling with different parameters directly from the command line. Part of #272

**Overview** Fairly simple code that parses parameters with clap and runs the circuit. Covers all our examples. Once mem-profiling is in main, I'll add its description to the README along with this example.

```console
Usage: cli [OPTIONS] [PRIMARY_CIRCUIT] [SECONDARY_CIRCUIT]

Arguments:
  [PRIMARY_CIRCUIT]    [default: poseidon] [possible values: poseidon, trivial]
  [SECONDARY_CIRCUIT]  [default: trivial] [possible values: poseidon, trivial]

Options:
      --primary-circuit-k-table-size <PRIMARY_CIRCUIT_K_TABLE_SIZE>      [default: 17]
      --primary-commitment-key-size <PRIMARY_COMMITMENT_KEY_SIZE>        [default: 21]
      --primary-repeat-count <PRIMARY_REPEAT_COUNT>                      [default: 1]
      --primary-r-f <PRIMARY_R_F>                                        [default: 10]
      --primary-r-p <PRIMARY_R_P>                                        [default: 10]
      --secondary-circuit-k-table-size <SECONDARY_CIRCUIT_K_TABLE_SIZE>  [default: 17]
      --secondary-commitment-key-size <SECONDARY_COMMITMENT_KEY_SIZE>    [default: 21]
      --secondary-repeat-count <SECONDARY_REPEAT_COUNT>                  [default: 1]
      --secondary-r-f <SECONDARY_R_F>                                    [default: 10]
      --secondary-r-p <SECONDARY_R_P>                                    [default: 10]
      --limb-width <LIMB_WIDTH>                                          [default: 32]
      --limbs-count <LIMBS_COUNT>                                        [default: 10]
      --debug-mode
      --fold-step-count <FOLD_STEP_COUNT>                                [default: 1]
      --json-logs
  -h, --help                                                             Print help
  -V, --version                                                          Print version
```
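For example, a Test-2-style run could be launched like this, using the flags from the help text above (the `cargo run --example cli` invocation path is an assumption about how the example is built, not confirmed by this thread):

```console
$ cargo run --release --example cli -- poseidon trivial \
    --primary-circuit-k-table-size 17 \
    --primary-commitment-key-size 27 \
    --primary-repeat-count 1 \
    --json-logs
```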
**Motivation** Generalized Implementation #272 **Overview** Thanks to cli + dhat we can now run any test case and measure any memory usage.
You need to do the same thing we did in #249, but with memory