add configurable max blob size #675
```diff
@@ -4,6 +4,7 @@ import (
 	"context"
 	"fmt"
 	"log"
+	"math"
 	"os"
 	"time"
```
```diff
@@ -114,6 +115,14 @@ func RunDisperserServer(ctx *cli.Context) error {
 		ratelimiter = ratelimit.NewRateLimiter(reg, globalParams, bucketStore, logger)
 	}

+	if config.MaxBlobSize < 0 || config.MaxBlobSize > 64*1024*1024 {
+		return fmt.Errorf("configured max blob size is invalid %v", config.MaxBlobSize)
+	}
```

**Review comment:** Should it fail if `config.MaxBlobSize == 0`?

**Author:** Yeah, I will exclude 0.
```diff
+
+	if int64(math.Log2(float64(config.MaxBlobSize))) == int64(math.Log2(float64(config.MaxBlobSize-1))) {
+		return fmt.Errorf("configured max blob size must be power of 2 %v", config.MaxBlobSize)
+	}
```

**Review comment:** It's faster to use a bit operation.

**Author:** This only runs once; I think it is more readable this way. But your solution works.

**Review comment:** I think there are utility functions to check if a number is a power of 2 (in the encoding or common package). Reusing that has no impact on readability.

**Author:** That function is in `github.com/Layr-Labs/eigenda/encoding/fft`. I don't want to add a new strange dependency; what do you think?

**Review comment:** Ideally we can pull it out to …
```diff
+
 	metrics := disperser.NewMetrics(reg, config.MetricsConfig.HTTPPort, logger)
 	server := apiserver.NewDispersalServer(
 		config.ServerConfig,
@@ -123,6 +132,7 @@ func RunDisperserServer(ctx *cli.Context) error {
 		metrics,
 		ratelimiter,
 		config.RateConfig,
+		config.MaxBlobSize,
 	)

 	// Enable Metrics Block
```
**Review comment:** It looks like there is no issue if different disperser replicas use different `MaxBlobSize` values. Should we add validation to the Node to check that the blob size is under a limit?

**Author:** I don't think there is an issue. But if we wanted that, it would have to be a separate deployment, which the current devops repo does not yet offer. I don't think we need to artificially constrain blob size on the DA node side for now, as long as the disperser isn't decentralized; but yes, once we get to that step, we will need some rate limit or blob constraint. For now, I think we are good. Empirically, blob size has no impact on validation speed.