
Simplify / speed up implementation of character_length to unicode points #3049

Closed
Dandandan opened this issue Aug 5, 2022 · 2 comments · Fixed by #3054
Labels
enhancement New feature or request

Comments

@Dandandan
Contributor

Dandandan commented Aug 5, 2022

Is your feature request related to a problem or challenge? Please describe what you are trying to do.
It looks like PostgreSQL and Spark have a different (and probably much faster) implementation of calculating string length: it counts UTF-8 code points rather than grapheme clusters.

As an example: `select length('ä')`

| PostgreSQL | Spark | DataFusion |
|------------|-------|------------|
| 2          | 2     | 1          |
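To make the discrepancy concrete, here is a minimal Rust sketch. The grapheme count uses the `unicode-segmentation` crate, on the assumption that this matches DataFusion's current grapheme-cluster behavior:

```rust
use unicode_segmentation::UnicodeSegmentation;

fn main() {
    // Decomposed "ä": 'a' (U+0061) followed by a combining diaeresis (U+0308).
    let s = "a\u{0308}";

    // Counting code points, as PostgreSQL and Spark do:
    assert_eq!(s.chars().count(), 2);

    // Counting extended grapheme clusters, as DataFusion currently does:
    assert_eq!(s.graphemes(true).count(), 1);
}
```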

Describe the solution you'd like
`.chars().count()` is probably a faster solution and matches the behavior of PostgreSQL (and Spark).
The same approach can be applied to other string functions as well.
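For illustration, a minimal sketch of what a code-point-based kernel over an Arrow string array could look like. The function name and signature here are hypothetical, not DataFusion's actual ones:

```rust
use arrow::array::{Int32Array, StringArray};

/// Counts UTF-8 code points per string value; nulls propagate.
/// (Hypothetical kernel shape for illustration only.)
fn character_length(array: &StringArray) -> Int32Array {
    array
        .iter()
        .map(|value| value.map(|s| s.chars().count() as i32))
        .collect()
}

fn main() {
    let input = StringArray::from(vec![Some("a\u{0308}"), None, Some("abc")]);
    let lengths = character_length(&input);
    assert_eq!(lengths.value(0), 2); // decomposed "ä" = 2 code points
    assert!(lengths.is_null(1));     // null in, null out
    assert_eq!(lengths.value(2), 3);
}
```

Since `str::chars()` walks UTF-8 byte boundaries without any table lookups, this avoids the per-character segmentation work that grapheme-cluster counting requires.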

Describe alternatives you've considered
Accepting that our own implementation is slower but "superior" to the alternatives.

Additional context
This came up as being quite slow when profiling this benchmark: https://github.com/DataPsycho/data-pipelines-in-rust/tree/main/amazon_review_pipeline
We're using the grapheme-based approach in other places as well; maybe here we could use the faster option.

@Dandandan Dandandan added the enhancement New feature or request label Aug 5, 2022
@andygrove
Member

This is something that we could potentially add a configuration option for as well, defaulting to the fast version but allowing the user to select a slower and more accurate version.

@Dandandan
Contributor Author

> This is something that we could potentially add a configuration option for as well, defaulting to the fast version but allowing the user to select a slower and more accurate version.

In general, I think it would be better to choose one version of the algorithm rather than have the result depend on the configuration.

As we aim to be mostly compatible with PostgreSQL, it makes sense to me to choose counting the code points instead of grapheme clusters.
