
[Subtask] [spark-connector] support JDBC catalog #1572

Open
Tracked by #1227
FANNG1 opened this issue Jan 17, 2024 · 2 comments · May be fixed by #6212

Comments

@FANNG1
Contributor

FANNG1 commented Jan 17, 2024

Describe the subtask

Support DML & DDL operations for the JDBC catalog.
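
For context, a minimal sketch of the kind of DDL and DML this subtask would enable through Spark SQL. The catalog name `jdbc_catalog` and the schema/table names are hypothetical placeholders, assuming the catalog has already been registered via the Gravitino spark-connector:

```sql
-- Hypothetical JDBC catalog registered through the Gravitino spark-connector.
USE jdbc_catalog;

-- DDL: create a schema and a table in the backing JDBC database.
CREATE DATABASE IF NOT EXISTS demo_db;
CREATE TABLE demo_db.employees (
  id   INT,
  name STRING
);
ALTER TABLE demo_db.employees ADD COLUMNS (dept STRING);

-- DML: write to and read from the table through the catalog.
INSERT INTO demo_db.employees VALUES (1, 'alice', 'eng'), (2, 'bob', 'sales');
SELECT * FROM demo_db.employees;

-- DDL: clean up.
DROP TABLE demo_db.employees;
DROP DATABASE demo_db;
```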

Parent issue

#1227

@caican00
Collaborator

caican00 commented Jun 6, 2024

JDBC catalog proposal: please help review it when you have time, thanks! cc @FANNG1
https://docs.google.com/document/d/1XWzvqV38YWh4ajudcxlFk9IkF414n5veE81_6chCqEQ/edit#heading=h.j5nl0d9xr4sd

@dataageek

Hi @FANNG1 @caican00,
This would be a fantastic feature. If the Spark connector supported reading data through the registered Gravitino JDBC catalog, Spark could read that data directly, without needing temporary views to load it first. This capability could also improve JOIN operations.
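
To illustrate (the catalog, schema, and table names below are hypothetical): without catalog support, reading a JDBC table from Spark typically goes through the generic JDBC data source and a temporary view, whereas with the Gravitino JDBC catalog the table could be queried, and joined with tables from other catalogs, directly:

```sql
-- Today: stage the JDBC table behind a temporary view first.
CREATE TEMPORARY VIEW orders_view
USING jdbc
OPTIONS (
  url 'jdbc:mysql://host:3306/shop',
  dbtable 'orders',
  user '...',
  password '...'
);
SELECT * FROM orders_view;

-- With JDBC catalog support: read directly through the registered catalog
-- (hypothetical names), including cross-catalog joins.
SELECT o.id, c.name
FROM jdbc_catalog.shop.orders o
JOIN hive_catalog.crm.customers c ON o.customer_id = c.id;
```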
