
Fix ORC writer output corruption with string columns #7565

Merged: 4 commits, Mar 12, 2021
Changes from 1 commit
26 changes: 15 additions & 11 deletions cpp/src/io/orc/dict_enc.cu
```diff
@@ -73,21 +73,25 @@ static __device__ void LoadNonNullIndices(volatile dictinit_state_s *s,
   for (uint32_t i = 0; i < s->chunk.num_rows; i += 512) {
     const uint32_t *valid_map = s->chunk.valid_map_base;
     uint32_t is_valid, nz_pos;
-    if (t < 16) {
+    if (t < 512 / 32) {
      if (!valid_map) {
        s->scratch_red[t] = 0xffffffffu;
      } else {
-        uint32_t row = s->chunk.start_row + i + t * 32;
-        uint32_t v = (row < s->chunk.start_row + s->chunk.num_rows)
-                       ? valid_map[(row + s->chunk.column_offset) / 32]
-                       : 0;
-        if (row & 0x1f) {
-          uint32_t v1 = (row + 32 < s->chunk.start_row + s->chunk.num_rows)
-                          ? valid_map[((row + s->chunk.column_offset) / 32) + 1]
-                          : 0;
-          v = __funnelshift_r(v, v1, row + s->chunk.column_offset);
+        uint32_t const row   = s->chunk.start_row + i + t * 32;
+        auto const chunk_end = s->chunk.start_row + s->chunk.num_rows;
+
+        auto const valid_map_idx = (row + s->chunk.column_offset) / 32;
+        uint32_t valid = (row < chunk_end) ? valid_map[valid_map_idx] : 0;
+
+        auto const rows_in_next_word = (row + s->chunk.column_offset) & 0x1f;
+        if (rows_in_next_word != 0) {
+          auto const rows_in_current_word = 32 - rows_in_next_word;
+          // Read next word if any rows are within the chunk
+          uint32_t const valid_next =
+            (row + rows_in_current_word < chunk_end) ? valid_map[valid_map_idx + 1] : 0;
+          valid = __funnelshift_r(valid, valid_next, rows_in_next_word);
         }
-        s->scratch_red[t] = v;
+        s->scratch_red[t] = valid;
```
Contributor

This whole block looks over-optimized: it pre-loads an offset mask word into scratch_red only to have individual threads treat this shared memory as the new mask, and then does an exclusive sum over the validity values.

This could be achieved using an iterator for validity that directly accesses the mask itself, and that iterator could be given an appropriate offset, which I believe would be the column offset + this dictionary chunk's start row.

Contributor Author

Agreed. I also had a few ideas on how the code around this could be simplified, but wanted to keep such changes separate from the actual fix.

```diff
       }
     }
     __syncthreads();
```