Provide a capability to increase the data corpus size for a workload #254
Comments
Is this a dup/subset of #253?

It is a child issue, referenced in that one. Additional issues will be added to that parent issue as work progresses on these items.

This capability is now available for the Closing this one.
The data corpora supplied with the included workloads are generally small, under roughly 75 GB. They are not sufficient for performance testing of larger clusters, scale testing, or longevity testing.

Since acquiring larger data sets is not straightforward, it would be helpful to provide a mechanism for increasing the corpus size of a workload. This could be done by duplicating (and appropriately modifying) the existing documents in the corpus, or by synthesizing new documents.
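The duplication approach could be realized as a small preprocessing step over a corpus file: each source document is re-emitted several times with its unique fields rewritten so the copies remain distinct. A minimal sketch, assuming an NDJSON corpus layout and a hypothetical `id` field (the `expand_corpus` helper is illustrative, not part of any existing tool):

```python
import json

def expand_corpus(lines, factor, id_field="id"):
    """Duplicate each source document `factor` times.

    Copies after the first get the (assumed) unique `id_field`
    rewritten with a suffix so the expanded corpus has no
    duplicate identifiers.
    """
    out = []
    for copy in range(factor):
        for line in lines:
            doc = json.loads(line)
            if copy > 0 and id_field in doc:
                doc[id_field] = f"{doc[id_field]}-dup{copy}"
            out.append(json.dumps(doc))
    return out

# Two source documents expanded by a factor of 3 yield six documents.
corpus = ['{"id": "a", "body": "x"}', '{"id": "b", "body": "y"}']
expanded = expand_corpus(corpus, factor=3)
```

A synthesis-based variant would instead generate new field values (for example from sampled vocabularies) to avoid the skewed term-frequency statistics that pure duplication introduces.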