How to use clickhouse-jdbc to query millions of rows? And how to read a large result set in a streaming way? #929
By default, clickhouse-jdbc is synchronous and resource-efficient. It simply reads data one field at a time and then deserializes it in the same thread. There's a tiny buffer (8,192 bytes), and objects like ClickHouseRecord and ClickHouseValue are reused to keep CPU and memory consumption low. As long as you don't cache lots of data in your application, there's not much to worry about. Having said that, you may still run into an OOM error when dealing with a large field (e.g. a movie stored in a column) under limited memory.
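To illustrate the point above, here is a minimal sketch of reading a large result set through plain JDBC. The connection URL, table, and column names are assumptions for illustration; the key idea is to process each row as it arrives instead of collecting rows into a list.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class StreamingQuery {
    public static void main(String[] args) throws Exception {
        // Assumed endpoint and database; adjust to your deployment.
        String url = "jdbc:clickhouse://localhost:8123/default";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             // The driver streams rows from the server; memory stays flat
             // as long as the application does not buffer them itself.
             ResultSet rs = stmt.executeQuery(
                     "SELECT id, value FROM big_table")) { // assumed table
            long count = 0;
            while (rs.next()) {
                // Handle each row immediately; do NOT add it to a List.
                long id = rs.getLong(1);
                String value = rs.getString(2);
                count++;
            }
            System.out.println("rows processed: " + count);
        }
    }
}
```

Because the driver already streams, there is no special "streaming mode" to enable; the memory footprint is governed by how the application consumes the `ResultSet`.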
Both the JDBC driver and the Java client use streaming for both query and insert. There's nothing special to configure, but you should avoid using a large SQL statement with many VALUES expressions.
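For the insert side, a parameterized batch keeps the SQL text small and lets the driver stream the data, instead of concatenating thousands of VALUES tuples into one statement. The URL and table are assumptions for illustration:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class StreamingInsert {
    public static void main(String[] args) throws Exception {
        // Assumed endpoint and table; adjust to your deployment.
        String url = "jdbc:clickhouse://localhost:8123/default";
        try (Connection conn = DriverManager.getConnection(url);
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO big_table (id, value) VALUES (?, ?)")) {
            for (long i = 0; i < 1_000_000L; i++) {
                ps.setLong(1, i);
                ps.setString(2, "row-" + i);
                ps.addBatch(); // queue the row; the driver streams batches
            }
            // One round trip per executeBatch(); you can also call it
            // periodically (e.g. every 100k rows) to bound memory.
            ps.executeBatch();
        }
    }
}
```

The same pattern scales to tens of millions of rows: the statement text stays constant, and only the parameter data flows to the server.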
The JDBC standard does not provide a convenient way for loading/dumping data. However, the Java client offers a one-liner for this.
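A hedged sketch of what such a one-liner can look like with the ClickHouse Java client is below. The exact class and method signatures (`ClickHouseNode.of`, `ClickHouseClient.dump`) vary between driver versions, and the host, port, query, and file name are all assumptions, so treat this as an illustration rather than the definitive API:

```java
import com.clickhouse.client.ClickHouseClient;
import com.clickhouse.client.ClickHouseCompression;
import com.clickhouse.client.ClickHouseFormat;
import com.clickhouse.client.ClickHouseNode;
import com.clickhouse.client.ClickHouseProtocol;

public class DumpExample {
    public static void main(String[] args) throws Exception {
        // Assumed server; adjust host/port/database to your deployment.
        ClickHouseNode server = ClickHouseNode.of(
                "localhost", ClickHouseProtocol.HTTP, 8123, "default");
        // Dump a query result straight to a compressed CSV file,
        // bypassing row-by-row JDBC handling entirely.
        ClickHouseClient.dump(server,
                "SELECT * FROM big_table",        // assumed query
                ClickHouseFormat.CSV,
                ClickHouseCompression.GZIP,
                "dump.csv.gz")                    // assumed output file
                .get(); // wait for the async dump to finish
    }
}
```

Letting the server serialize directly into an output format like CSV is usually much faster for bulk export than iterating a `ResultSet` in the application.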
Is there a corresponding example?
I'll add more details into #928 and maybe examples over the weekend, but for starters:
Question: How to use clickhouse-jdbc to query tens of millions of rows? How to read a very large SELECT result set in a streaming way?
We need to export millions of rows from ClickHouse. How does clickhouse-jdbc support this? Which API should we use to solve this problem?