Merge CV's list-objects prefix optimization + Add key length limit [JIRA: RCS-275] #1233
Conversation
This allows for an optimization for single-object queries, causing the fold to retrieve only the item of interest.
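For intuition, a minimal sketch of the idea (illustrative names, not the exact riak_cs code): with a prefix, the fold can start at the prefix itself and end at the prefix padded with 0xFF bytes up to the maximum key length, so a single-object query walks only the keys of interest.

-define(MAX_S3_KEY_LENGTH, 1024).  % S3's cap; also a macro in riak_cs.hrl per this PR

%% Inclusive {StartKey, EndKey} fold range for a prefix. Assumes
%% byte_size(Prefix) =< ?MAX_S3_KEY_LENGTH; the over-long case is
%% exactly what the rest of this thread is about.
fold_range(undefined) ->
    fold_range(<<>>);
fold_range(Prefix) when is_binary(Prefix) ->
    Pad = binary:copy(<<255>>, ?MAX_S3_KEY_LENGTH - byte_size(Prefix)),
    {Prefix, <<Prefix/binary, Pad/binary>>}.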
Just a note to self: AWS S3 responds to a too-long key with a KeyTooLongError.
I've got two errors when the prefix is longer than 1024. One is expected (which does not mean it's good), but the other is unexpected.
Looks like I'd better add some guards, both there and …
Include the max key length check in this PR? If so, there has to be an option …
No, it's already defined in this pull request as a macro in riak_cs.hrl. Have we supported key lengths longer than 1024? If yes, I'll update some things in this pull request, as well as add safeguards. At least we need this optimization, don't we?
I was just caremad because previous versions of Riak CS supported such keys. So, if you are going to add validation of key length, I agree with you.
P.S. For GET Object, AWS S3 responds with a 400.
Will add some …
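(A sketch of what the riak_cs_s3_response.erl side of those guards could look like — the error atom here is my guess, not taken from the diff; riak_cs_s3_response.erl is named in the merge conflicts at the bottom of this thread:)

%% Hypothetical new clauses in the existing error_code/1 and
%% status_code/1 tables, mirroring the S3 response quoted below:
error_code(key_too_long) -> "KeyTooLongError";
status_code(key_too_long) -> 400;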
PR for lager: https://github.com/basho/lager/pull/292 (on top of 2.2.0)
It seems that list objects with a prefix longer than 1024 bytes is still failing as of 7c4aa3b.
Memo: AWS S3 GET Bucket responds OK for a prefix longer than 1024 bytes.
I think it's ready again.
-spec big_end_key(Prefix::binary() | undefined) -> binary().
big_end_key(undefined) ->
    big_end_key(<<>>);
big_end_key(Prefix) when byte_size(Prefix) > ?MAX_S3_KEY_LENGTH ->
Why isn't the value in application env used here?
It's difficult to guess the "right" (not too bad) behavior in this situation... The same start key and end key results in at most a single entry. Then, is it better to use <<Prefix/binary, 255>> here?
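(A quick erl shell check of my own, for context: Erlang binaries compare bytewise, so a single trailing 255 is not an upper bound for every key sharing the prefix, while padding with 255s up to the key-length cap is, as long as keys themselves never exceed the cap.)

1> <<"abc", 255>> < <<"abc", 255, 0>>.
true
2> <<"abc", 255, 255>> > <<"abc", 255, 0>>.
true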
Ah, just forgot about app env. Will push an update.
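(For the record, a sketch of the app-env direction — the parameter name and the unlimited value are inferred from later comments in this thread, not necessarily what the pushed update looks like:)

max_key_length() ->
    case application:get_env(riak_cs, max_key_length) of
        {ok, unlimited} -> unlimited;
        {ok, MaxLen} when is_integer(MaxLen) -> MaxLen;
        undefined -> ?MAX_S3_KEY_LENGTH
    end.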
LocalCtx = LocalCtx0#key_context{bucket=Bucket, key=Key},
Ctx#context{bucket=Bucket,
            local_context=LocalCtx}.
case byte_size(unicode:characters_to_binary(Key)) of
I think Key is just a sequence of bytes rather than a sequence of Unicode codepoints.
Counterexample: 600 0x81 bytes, less than 1024:
% for i in {1..600}; do echo -n %81; done | read LT1k
% S3CURL=.s3curl.15018.alice s3curl.pl --put rebar.config --id cs -- -x 127.0.0.1:15018 -s http://test.s3.amazonaws.com/${LT1k}
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>KeyTooLongError</Code><Message>Your key is too long</Message>
<Size>1200</Size><MaxSizeAllowed>1024</MaxSizeAllowed><RequestId></RequestId></Error>
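The <Size>1200</Size> for a 600-byte key shows the double counting: unicode:characters_to_binary/1 treats each list element as a codepoint, and codepoints >= 128 re-encode as two bytes in UTF-8. A quick erl check:

1> Key = lists:duplicate(600, 16#81), ok.
ok
2> byte_size(list_to_binary(Key)).
600
3> byte_size(unicode:characters_to_binary(Key)).
1200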
I thought I'd fixed it 😢
Updated.
MaxLen when byte_size(Prefix) > MaxLen ->
    <<>>;
MaxLen ->
    binary:copy(<<255>>, MaxLen - byte_size(Prefix))
This fails if MaxLen is unlimited.
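(Indeed: an integer always compares less than an atom in Erlang term order, so byte_size(Prefix) > unlimited is simply false, and the second clause then crashes with badarith on unlimited - byte_size(Prefix). A sketch of one way around it — illustrative function name, not the actual fix:)

big_end_key_suffix(_Prefix, unlimited) ->
    %% No finite pad exists when key length is unbounded; a single 255
    %% byte is a pragmatic end key, though in principle it can sort
    %% below keys such as <<Prefix/binary, 255, 0>>.
    <<255>>;
big_end_key_suffix(Prefix, MaxLen) when byte_size(Prefix) > MaxLen ->
    <<>>;
big_end_key_suffix(Prefix, MaxLen) ->
    binary:copy(<<255>>, MaxLen - byte_size(Prefix)).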
@@ -105,6 +105,13 @@ test(UserConfig) ->
    ?assert(lists:member([{prefix, "0/"}], CommonPrefixes3)),
    verify_object_list(ObjList4, 30),

    %% Don't fail even if Prefix is longer than 1024, even if keys are
    %% restricted to be shorter than it. That's S3.
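(A guess at the shape of the added assertion — erlcloud is what these riak_tests use elsewhere, and the bucket macro and bindings here are illustrative:)

LongPrefix = lists:duplicate(1025, $a),
ObjList5 = erlcloud_s3:list_objects(?TEST_BUCKET, [{prefix, LongPrefix}], UserConfig),
?assertEqual([], proplists:get_value(contents, ObjList5)),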
LoL
Merge CV's list-objects prefix optimization + Add key length limit [JIRA: RCS-275] Reviewed-by: shino
@borshop merge
Release note: _[posted via JIRA by Kota Uenishi]_
@borshop: retry
Merge CV's list-objects prefix optimization + Add key length limit [JIRA: RCS-275] Reviewed-by: shino

Conflicts:
    src/riak_cs_s3_response.erl
    src/riak_cs_wm_object.erl
    src/riak_cs_wm_object_upload_part.erl
@angrycub's awesome branch is for 1.5; this is a rebased version (although I updated some flavor for 2.1). Note that the max key length in S3 is 1024.