[SPARK-48585][SQL] Make built-in JdbcDialect's method classifyException throw out the original exception #46937

Conversation
 * @return `AnalysisException` or its sub-class.
 */
def classifyException(
    e: Throwable,
    errorClass: String,
    messageParameters: Map[String, String],
    description: String): AnalysisException = {
From the code logic, it seems the field description is unnecessary. Let's remove it. Although JdbcDialect is marked as DeveloperApi, this method was only added in Spark 4.0, and 4.0 has not been released yet. Can we remove it directly?
I think there was a reason to call the legacy classifyException method here. Can we dig into it?
I am investigating the history.
This guarantees that legacy classifyException implementations already shipped by third-party dialects are still called correctly.
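To make that compatibility argument concrete, here is a hedged sketch (not the actual Spark source; AnalysisException and JdbcDialect are minimal stubs, and ThirdPartyDialect is hypothetical) of how the newer overload can delegate to the legacy method so that a third-party override of the legacy method still runs, while the original exception is kept as the cause:

```scala
// Stand-in for Spark's AnalysisException, for illustration only.
class AnalysisException(message: String, cause: Option[Throwable] = None)
  extends Exception(message, cause.orNull)

abstract class JdbcDialect {
  // Legacy single-message form that third-party dialects may have overridden.
  def classifyException(message: String, e: Throwable): AnalysisException =
    new AnalysisException(message, Some(e))

  // Newer overload: delegates to the legacy method, so any pre-existing
  // third-party override is still invoked.
  def classifyException(
      e: Throwable,
      errorClass: String,
      messageParameters: Map[String, String],
      description: String): AnalysisException =
    classifyException(description, e)
}

// A hypothetical third-party dialect that only overrode the legacy method.
class ThirdPartyDialect extends JdbcDialect {
  override def classifyException(message: String, e: Throwable): AnalysisException =
    new AnalysisException(s"[third-party] $message", Some(e))
}

object Demo extends App {
  val original = new RuntimeException("duplicate key")
  val classified = new ThirdPartyDialect().classifyException(
    original, "FAILED_JDBC.CREATE_TABLE", Map.empty, "create table failed")
  println(classified.getMessage)           // the legacy override ran
  println(classified.getCause eq original) // original cause preserved
}
```

If instead the overload built the AnalysisException directly, a dialect that only overrode the legacy method would silently lose its custom classification.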
To reduce confusion among reviewers, this PR
    description: String): AnalysisException = {
  classifyException(description, e)

    messageParameters: Map[String, String]): AnalysisException = {
  new AnalysisException(
AnalysisException seems unreasonable here.
Yeah, this should already be in the execution phase, so it's not reasonable.
Let me go through the history first.
e.g., and then have the various built-in JDBC dialects use it. WDYT @cloud-fan @yaooqinn @beliefer?
I have already used this
/**
 * Make the `classifyException` method throw out the original exception
 */
trait JdbcDialectHelper extends JdbcDialect {
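A hedged sketch of the helper-trait idea (the trait was later renamed NoLegacyJDBCError, as discussed below; AnalysisException, JdbcDialect, and PostgresDialectSketch are stubs, not the real Spark classes): built-in dialects mix the trait in so they bypass the legacy classification path and keep the original exception as the cause.

```scala
// Minimal stub carrying an error class, parameters, and a cause.
class AnalysisException(
    val errorClass: String,
    val messageParameters: Map[String, String],
    cause: Option[Throwable])
  extends Exception(errorClass, cause.orNull)

abstract class JdbcDialect {
  def classifyException(
      e: Throwable,
      errorClass: String,
      messageParameters: Map[String, String],
      description: String): AnalysisException
}

// Mixed into built-in dialects: build the error-class exception directly,
// preserving the original throwable as the cause.
trait JdbcDialectHelper extends JdbcDialect {
  override def classifyException(
      e: Throwable,
      errorClass: String,
      messageParameters: Map[String, String],
      description: String): AnalysisException =
    new AnalysisException(errorClass, messageParameters, Some(e))
}

// Hypothetical built-in dialect opting in via the mixin.
class PostgresDialectSketch extends JdbcDialect with JdbcDialectHelper
```

Third-party dialects that do not mix the trait in keep the legacy behavior, so the change is opt-in per dialect.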
Can we give it a clearer name? Like NoLegacyJDBCError.
Okay, let me update it.
Done.
Merged into master for Spark 4.0. Thanks @panbingkun @cloud-fan @yaooqinn @beliefer

Thanks all.
What changes were proposed in this pull request?
The PR aims to make the built-in JdbcDialect's method classifyException throw out the original exception.

Why are the changes needed?
As discussed in #46912 (comment), the following code:
spark/sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala
Lines 746 to 751 in df4156a
has lost the original cause of the error; let's correct it.
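The bug can be illustrated with a small hedged sketch (AnalysisException is a stub, not the real Spark class): constructing a new exception without passing the original throwable as the cause drops the root cause and its stack trace, while threading it through as cause = Some(e) preserves it.

```scala
// Stub exception type with an optional cause, for illustration only.
class AnalysisException(message: String, cause: Option[Throwable] = None)
  extends Exception(message, cause.orNull)

object CauseDemo extends App {
  val original = new java.sql.SQLException("relation \"t\" already exists")

  // Before the fix: the original exception is not attached.
  val lossy = new AnalysisException("FAILED_JDBC.CREATE_TABLE")
  // After the fix: the original exception travels along as the cause.
  val fixed = new AnalysisException("FAILED_JDBC.CREATE_TABLE", Some(original))

  println(lossy.getCause)            // null: nothing left to debug with
  println(fixed.getCause.getMessage) // the original SQLException is retained
}
```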
Does this PR introduce any user-facing change?
Yes, more accurate error conditions for end users.
How was this patch tested?
Was this patch authored or co-authored using generative AI tooling?
No.