Feat: Generate per-dialect options on init project #3733
Conversation
I tested this out locally and realized it's not helpful unless you explain in plain terms what each config means, paired with an example. I recommend you update the template with this look and feel. You can leverage our existing docs to fill out the config descriptions. This mainly needs to be done for: snowflake, databricks, bigquery, postgres, duckdb, redshift.

What will also help: most users, if they're serious, will want to read our docs to verify which connection config settings matter to them. Let's save them the extra step and print the link to the terminal: https://sqlmesh.readthedocs.io/en/stable/integrations/overview/#execution-engines

Nice to have: dynamically print a link in the terminal to the exact engine connection options based on the database.

Actually, as I was writing this, the "nice to have" path may be the best of both worlds: add the commented-out configs and then dynamically print the link to the relevant engine. It's much easier to onboard in that flow vs. cluttering the yaml with elongated comments. Let me know what you think about the "nice to have" path being the need-to-have one. Example for bigquery:

```yaml
gateways:
  local:
    connection:
      type: bigquery
      # concurrent_tasks: 1
      # register_comments: True
      # pre_ping:
      # pretty_sql:
      # method: BigQueryConnectionMethod.OAUTH
      # project:
      # execution_project: # The name of the GCP project to bill for the execution of the models. If not set, the project associated with the model will be used. type: string, ex: my-project
      # quota_project:
      # location:
      keyfile: # Path to the keyfile to be used with the service-account method. type: string, ex: '/source/keyfile.json'
      # keyfile_json:
      # token:
      # refresh_token:
      # client_id:
      # client_secret:
      # token_uri:
      # scopes:
      # job_creation_timeout_seconds:
      # job_execution_timeout_seconds:
      # job_retries: 1
      # job_retry_deadline_seconds:
      # priority:
      # maximum_bytes_billed:
```
This was what I was about to recommend too. I think the dynamic generation is needed in order to stay up to date with the latest options, and outputting the documentation link as in "check here for more details" should do the trick. So I will simply keep the fields empty and/or commented out for now to maintain the existing implementation, if everyone's fine with this.
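The "nice to have" path discussed above could be sketched roughly as follows. This is a minimal illustration rather than SQLMesh's actual implementation: the `connection_docs_link` helper and the per-engine URL pattern are assumptions, with the overview page linked in the comment above used as the fallback.

```python
# Sketch of the "nice to have" path: after generating the config template,
# print a docs link scoped to the chosen engine. The per-engine URL pattern
# below is an assumption and should be verified against the docs layout.
DOCS_BASE = "https://sqlmesh.readthedocs.io/en/stable/integrations"

# Engines called out in the review comment above.
KNOWN_ENGINES = {"snowflake", "databricks", "bigquery", "postgres", "duckdb", "redshift"}


def connection_docs_link(dialect: str) -> str:
    """Return a docs URL for the given dialect's connection options."""
    if dialect in KNOWN_ENGINES:
        # Assumed per-engine page layout.
        return f"{DOCS_BASE}/engines/{dialect}/"
    # Fall back to the general execution-engines overview.
    return f"{DOCS_BASE}/overview/#execution-engines"


print(f"See {connection_docs_link('bigquery')} for all connection options.")
```

Printing this one line at `sqlmesh init` time keeps the YAML template short while still pointing users at the full option reference for their engine.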
LGTM. I think we may need to update a few existing doc pages if this is merged, to avoid showing the outdated config template.
Fixes #3459

Dynamically generate the engine-appropriate connection options through Pydantic's `model_fields`. The following conventions are applied:
- If a field is not required, it's commented out
- If a field has a (literal) default value, it's appended to the RHS
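The conventions above might be implemented along these lines using Pydantic v2's `model_fields`. This is a hedged sketch, not the PR's actual code: the `BigQueryConnectionConfig` stand-in and the `render_connection_yaml` helper are hypothetical, and the real SQLMesh connection config classes differ.

```python
from typing import Optional

from pydantic import BaseModel
from pydantic_core import PydanticUndefined


# Hypothetical stand-in for an engine connection config; the real SQLMesh
# classes have many more fields, but the per-field logic is the same.
class BigQueryConnectionConfig(BaseModel):
    type: str  # required -> emitted uncommented
    project: Optional[str] = None
    concurrent_tasks: int = 1
    register_comments: bool = True


def render_connection_yaml(model: type[BaseModel], indent: str = "      ") -> str:
    """Render one YAML line per field, following the PR's conventions:
    optional fields are commented out; literal defaults go on the RHS."""
    lines = []
    for name, field in model.model_fields.items():
        default = field.default
        # Only append a literal default; skip None and "no default" markers.
        rhs = "" if default in (None, PydanticUndefined) else f" {default}"
        prefix = "" if field.is_required() else "# "
        lines.append(f"{indent}{prefix}{name}:{rhs}")
    return "\n".join(lines)


print(render_connection_yaml(BigQueryConnectionConfig))
```

Because the lines are derived from `model_fields` at runtime, the generated template stays in sync with the connection config classes as options are added or removed, which is exactly the staleness concern raised in the review discussion.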