Fix compilation errors with Eclipse compiler #13532

Merged
merged 1 commit into trinodb:master from compilation-errors on Sep 19, 2022

Conversation

@gzsombor (Member) commented Aug 7, 2022

Description

The Eclipse compiler seems to be pickier about things like package declarations matching the file's location.
This patch addresses these issues.

@cla-bot bot commented Aug 7, 2022

Thank you for your pull request and welcome to the Trino community. We require contributors to sign our Contributor License Agreement, and we don't seem to have you on file. Continue to work with us on the review and improvements in this PR, and submit the signed CLA to [email protected]. Processing may take a few days. The CLA needs to be on file before we merge your changes. For more information, see https://github.com/trinodb/cla

@@ -296,10 +296,10 @@ public static <T> TupleDomain<T> intersect(List<? extends TupleDomain<? extends
     }

     @SuppressWarnings("unchecked")
-    private static <U, T extends U> TupleDomain<U> upcast(TupleDomain<T> domain)
+    private static <T> TupleDomain<T> upcast(TupleDomain<? extends T> domain)
Member:
This is a correct simplification.
Was the original code incorrect Java, according to the JLS?

Member Author (@gzsombor):

The compiler complained about the usage:

The method upcast(TupleDomain<T>) in the type TupleDomain<T> is not applicable for the arguments (capture#20-of ? extends TupleDomain<? extends T>)

so it couldn't infer the U type. I'm not sure if ECJ is too limited or if javac is more lenient in this case.
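
For illustration, here is a minimal, self-contained sketch of the inference pattern (Domain is a hypothetical stand-in; this is not the actual TupleDomain code):

import java.util.List;

final class Domain<T>
{
    // The old shape - <U, T extends U> Domain<U> upcast(Domain<T> domain) - asks the
    // compiler to solve two type variables from a wildcard capture, which ECJ gave up on.
    // Here the wildcard is absorbed into the parameter type, so only T needs inferring.
    @SuppressWarnings("unchecked")
    private static <T> Domain<T> upcast(Domain<? extends T> domain)
    {
        return (Domain<T>) domain; // safe as long as Domain only produces values of T
    }

    static <T> void intersect(List<? extends Domain<? extends T>> domains)
    {
        for (Domain<? extends T> domain : domains) {
            Domain<T> upcasted = upcast(domain); // the call ECJ rejected under the old signature
        }
    }
}

Both signatures are equivalent for callers; the new one simply leaves less inference work to the compiler.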

Member:

I am fine with this particular change. In the past, I've seen ECJ having problems where it shouldn't. I don't have any recent experience with this compiler.
Is there any reason you want to use ECJ?

Member Author (@gzsombor):

Yes, because this is what Eclipse uses, and it's a bit annoying that I can't start the app without fixing the errors 😉

Member:

Isn't it possible to configure Eclipse to use javac as a compiler?

@findepi (Member) commented Aug 8, 2022

Re package names -- airlift/airbase#321

(It looks like it's valid Java to have a package that doesn't match the directory name. Of course, that's something we don't want to allow in the project.)
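
A small illustration (hypothetical file; compiled by handing the file to javac directly rather than via a sourcepath lookup):

// File on disk: src/com/example/actual/Foo.java
// Legal Java: the JLS leaves the package-to-directory mapping to the host system,
// so javac accepts this file as-is. Eclipse/ECJ reports a "declared package does
// not match the expected package" error, which is the stricter behavior hit here.
package com.other.declared;

public class Foo {}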

@@ -402,7 +402,7 @@ public static boolean dynamicFilter(@SqlType("T") double input, @SqlType(VARCHAR
 {
     private NullableFunction() {}

-    private static final String NAME = "$internal$dynamic_filter_nullable_function";
+    protected static final String NAME = "$internal$dynamic_filter_nullable_function";
Member:

Why?

The class has no subclasses. Did you mean to make it package private?

Member Author (@gzsombor):

Yes, package private would work as well.
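
For reference, a minimal sketch of the package-private variant (the enclosing class name is hypothetical; only the constant's modifier changes relative to the diff above):

public final class TestDynamicFilters
{
    private static final class NullableFunction
    {
        private NullableFunction() {}

        // Package-private: visible to annotation use sites elsewhere in this file
        // and package, without the subclassing implication that protected carries.
        static final String NAME = "$internal$dynamic_filter_nullable_function";
    }
}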

@findepi findepi changed the title Compilation errors Fix compilation errors with Eclipse compiler Aug 8, 2022
@gzsombor gzsombor force-pushed the compilation-errors branch from 154e062 to d3d771b Compare August 8, 2022 20:29

@findepi (Member) commented Aug 9, 2022

Let's structure this in the following commits:

  • Fix package declaration for TestingRedirectHandlerInjector
  • Fix directory name for Phoenix TestDummy
  • Simplify generics in TupleDomain.upcast, with the commit message body:

    This is equivalent and shorter. This also fixes a compilation problem
    when compiling with ECJ.

I am not sure what to do about NullableFunction.NAME visibility.
It looks like a trivial problem, but that means ECJ users will continue to run into such problems in the future.
Did you try reporting this to ECJ?

Also, when using Eclipse, do you have a choice of which compiler to build with? E.g. can you delegate the build to Maven / javac? (BTW, we recommend IntelliJ for development.)

@gzsombor (Member Author):

I think there are a couple of issues around the visibility rules in ECJ and javac, and in this case ECJ is the lesser offender.
For example, this is a compilation error in both of them:

@Deprecated(since = X.MSG)
public class X {
    private final static String MSG = "msg";
}

But this is a compilation error only in ECJ - javac is inconsistent:

public class X {
    @Deprecated(since = Y.MSG2)
    static class Y {
        private final static String MSG2 = "msg";
    }
}

Interestingly, this works in both:

public class X {
    private final static String MSG = "msg";
    @Deprecated(since = MSG)
    static class Y {
    }
}

However, if we write this:

public class X {
    private final static String MSG = "msg";
    @Deprecated(since = X.MSG)
    static class Y {
    }
}

this fails in ECJ - which definitely looks bad.
So there is at least one bug in each of javac and ECJ - and I can't find much detail in the JLS around annotation visibility.

@gzsombor gzsombor force-pushed the compilation-errors branch from d3d771b to a4bc00a Compare August 11, 2022 07:21

@gzsombor gzsombor force-pushed the compilation-errors branch from a4bc00a to 6d02453 Compare August 18, 2022 23:01
@cla-bot cla-bot bot added the cla-signed label Aug 18, 2022
@gzsombor gzsombor force-pushed the compilation-errors branch from 6d02453 to 6c5e83f Compare August 21, 2022 14:15
@gzsombor (Member Author):

It's unclear to me why these tests are failing; I suspect this is independent of the changes.
Can you help me understand the issues?

@hashhar (Member) commented Aug 22, 2022

It indeed looks unrelated and is a flaky test.

2022-08-21 20:35:08 INFO: FAILURE     /    io.trino.tests.product.iceberg.TestIcebergSparkCompatibility.testTrinoReadsSparkRowLevelDeletes [PARQUET, PARQUET] (Groups: profile_specific_tests, iceberg) took 2.7 seconds
2022-08-21 20:35:08 SEVERE: Failure cause:
io.trino.tempto.query.QueryExecutionException: java.sql.SQLException: org.apache.hive.service.cli.HiveSQLException: Error running query: [WRITING_JOB_ABORTED] org.apache.spark.SparkException: Writing job aborted
	at org.apache.spark.sql.hive.thriftserver.HiveThriftServerErrors$.runningQueryError(HiveThriftServerErrors.scala:43)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:325)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.$anonfun$run$2(SparkExecuteStatementOperation.scala:230)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.apache.spark.sql.hive.thriftserver.SparkOperation.withLocalProperties(SparkOperation.scala:79)
	at org.apache.spark.sql.hive.thriftserver.SparkOperation.withLocalProperties$(SparkOperation.scala:63)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.withLocalProperties(SparkExecuteStatementOperation.scala:43)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:230)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:225)
	at java.base/java.security.AccessController.doPrivileged(Native Method)
	at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2.run(SparkExecuteStatementOperation.scala:239)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.spark.SparkException: Writing job aborted
	at org.apache.spark.sql.errors.QueryExecutionErrors$.writingJobAbortedError(QueryExecutionErrors.scala:613)
	at org.apache.spark.sql.execution.datasources.v2.ExtendedV2ExistingTableWriteExec.writeWithV2(WriteDeltaExec.scala:129)
	at org.apache.spark.sql.execution.datasources.v2.ExtendedV2ExistingTableWriteExec.writeWithV2$(WriteDeltaExec.scala:72)
	at org.apache.spark.sql.execution.datasources.v2.WriteDeltaExec.writeWithV2(WriteDeltaExec.scala:50)
	at org.apache.spark.sql.execution.datasources.v2.V2ExistingTableWriteExec.run(WriteToDataSourceV2Exec.scala:309)
	at org.apache.spark.sql.execution.datasources.v2.V2ExistingTableWriteExec.run$(WriteToDataSourceV2Exec.scala:308)
	at org.apache.spark.sql.execution.datasources.v2.WriteDeltaExec.run(WriteDeltaExec.scala:50)
	at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:43)
	at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:43)
	at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:49)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:110)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:110)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:106)
	at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:481)
	at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:82)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:481)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:457)
	at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:106)
	at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:93)
	at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:91)
	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:219)
	at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:96)
	at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:618)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:613)
	at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:651)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:291)
	... 16 more
Caused by: java.lang.IllegalArgumentException: Self-suppression not permitted
	at java.base/java.lang.Throwable.addSuppressed(Throwable.java:1054)
	at org.apache.iceberg.TableMetadataParser.$closeResource(TableMetadataParser.java:129)
	at org.apache.iceberg.TableMetadataParser.internalWrite(TableMetadataParser.java:129)
	at org.apache.iceberg.TableMetadataParser.overwrite(TableMetadataParser.java:112)
	at org.apache.iceberg.BaseMetastoreTableOperations.writeNewMetadata(BaseMetastoreTableOperations.java:161)
	at org.apache.iceberg.hive.HiveTableOperations.doCommit(HiveTableOperations.java:219)
	at org.apache.iceberg.BaseMetastoreTableOperations.commit(BaseMetastoreTableOperations.java:133)
	at org.apache.iceberg.SnapshotProducer.lambda$commit$2(SnapshotProducer.java:317)
	at org.apache.iceberg.util.Tasks$Builder.runTaskWithRetry(Tasks.java:404)
	at org.apache.iceberg.util.Tasks$Builder.runSingleThreaded(Tasks.java:214)
	at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:198)
	at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:190)
	at org.apache.iceberg.SnapshotProducer.commit(SnapshotProducer.java:295)
	at org.apache.iceberg.spark.source.SparkPositionDeltaWrite$PositionDeltaBatchWrite.commitOperation(SparkPositionDeltaWrite.java:265)
	at org.apache.iceberg.spark.source.SparkPositionDeltaWrite$PositionDeltaBatchWrite.commit(SparkPositionDeltaWrite.java:209)
	at org.apache.spark.sql.execution.datasources.v2.ExtendedV2ExistingTableWriteExec.writeWithV2(WriteDeltaExec.scala:112)
	... 53 more
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hive/warehouse/test_trino_reads_spark_row_level_deletes_PARQUET_PARQUET_118fjzn95i9c/metadata/00002-ccc5a200-6bc3-42f0-9f90-d93e835fc10d.metadata.json could only be written to 0 of the 1 minReplication nodes. There are 1 datanode(s) running and no node(s) are excluded in this operation.
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2121)
	at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:286)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2706)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:875)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:561)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)

	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1573)
	at org.apache.hadoop.ipc.Client.call(Client.java:1519)
	at org.apache.hadoop.ipc.Client.call(Client.java:1416)
	at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)
	at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:129)
	at com.sun.proxy.$Proxy19.addBlock(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:530)
	at jdk.internal.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
	at com.sun.proxy.$Proxy20.addBlock(Unknown Source)
	at org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:1084)
	at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1898)
	at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1700)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:707)

	at io.trino.tempto.query.JdbcQueryExecutor.execute(JdbcQueryExecutor.java:119)
	at io.trino.tempto.query.JdbcQueryExecutor.executeQuery(JdbcQueryExecutor.java:84)
	at io.trino.tests.product.iceberg.TestIcebergSparkCompatibility.testTrinoReadsSparkRowLevelDeletes(TestIcebergSparkCompatibility.java:1499)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:568)
	at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:104)
	at org.testng.internal.Invoker.invokeMethod(Invoker.java:645)
	at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:851)
	at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1177)
	at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:129)
	at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:112)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.sql.SQLException: org.apache.hive.service.cli.HiveSQLException: Error running query: [WRITING_JOB_ABORTED] org.apache.spark.SparkException: Writing job aborted
	at org.apache.spark.sql.hive.thriftserver.HiveThriftServerErrors$.runningQueryError(HiveThriftServerErrors.scala:43)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:325)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.$anonfun$run$2(SparkExecuteStatementOperation.scala:230)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.apache.spark.sql.hive.thriftserver.SparkOperation.withLocalProperties(SparkOperation.scala:79)
	at org.apache.spark.sql.hive.thriftserver.SparkOperation.withLocalProperties$(SparkOperation.scala:63)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.withLocalProperties(SparkExecuteStatementOperation.scala:43)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:230)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:225)
	at java.base/java.security.AccessController.doPrivileged(Native Method)
	at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2.run(SparkExecuteStatementOperation.scala:239)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.spark.SparkException: Writing job aborted
	at org.apache.spark.sql.errors.QueryExecutionErrors$.writingJobAbortedError(QueryExecutionErrors.scala:613)
	at org.apache.spark.sql.execution.datasources.v2.ExtendedV2ExistingTableWriteExec.writeWithV2(WriteDeltaExec.scala:129)
	at org.apache.spark.sql.execution.datasources.v2.ExtendedV2ExistingTableWriteExec.writeWithV2$(WriteDeltaExec.scala:72)
	at org.apache.spark.sql.execution.datasources.v2.WriteDeltaExec.writeWithV2(WriteDeltaExec.scala:50)
	at org.apache.spark.sql.execution.datasources.v2.V2ExistingTableWriteExec.run(WriteToDataSourceV2Exec.scala:309)
	at org.apache.spark.sql.execution.datasources.v2.V2ExistingTableWriteExec.run$(WriteToDataSourceV2Exec.scala:308)
	at org.apache.spark.sql.execution.datasources.v2.WriteDeltaExec.run(WriteDeltaExec.scala:50)
	at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:43)
	at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:43)
	at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:49)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:110)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:110)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:106)
	at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:481)
	at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:82)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:481)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:457)
	at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:106)
	at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:93)
	at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:91)
	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:219)
	at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:96)
	at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:618)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:613)
	at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:651)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:291)
	... 16 more
Caused by: java.lang.IllegalArgumentException: Self-suppression not permitted
	at java.base/java.lang.Throwable.addSuppressed(Throwable.java:1054)
	at org.apache.iceberg.TableMetadataParser.$closeResource(TableMetadataParser.java:129)
	at org.apache.iceberg.TableMetadataParser.internalWrite(TableMetadataParser.java:129)
	at org.apache.iceberg.TableMetadataParser.overwrite(TableMetadataParser.java:112)
	at org.apache.iceberg.BaseMetastoreTableOperations.writeNewMetadata(BaseMetastoreTableOperations.java:161)
	at org.apache.iceberg.hive.HiveTableOperations.doCommit(HiveTableOperations.java:219)
	at org.apache.iceberg.BaseMetastoreTableOperations.commit(BaseMetastoreTableOperations.java:133)
	at org.apache.iceberg.SnapshotProducer.lambda$commit$2(SnapshotProducer.java:317)
	at org.apache.iceberg.util.Tasks$Builder.runTaskWithRetry(Tasks.java:404)
	at org.apache.iceberg.util.Tasks$Builder.runSingleThreaded(Tasks.java:214)
	at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:198)
	at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:190)
	at org.apache.iceberg.SnapshotProducer.commit(SnapshotProducer.java:295)
	at org.apache.iceberg.spark.source.SparkPositionDeltaWrite$PositionDeltaBatchWrite.commitOperation(SparkPositionDeltaWrite.java:265)
	at org.apache.iceberg.spark.source.SparkPositionDeltaWrite$PositionDeltaBatchWrite.commit(SparkPositionDeltaWrite.java:209)
	at org.apache.spark.sql.execution.datasources.v2.ExtendedV2ExistingTableWriteExec.writeWithV2(WriteDeltaExec.scala:112)
	... 53 more
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hive/warehouse/test_trino_reads_spark_row_level_deletes_PARQUET_PARQUET_118fjzn95i9c/metadata/00002-ccc5a200-6bc3-42f0-9f90-d93e835fc10d.metadata.json could only be written to 0 of the 1 minReplication nodes. There are 1 datanode(s) running and no node(s) are excluded in this operation.
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2121)
	at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:286)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2706)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:875)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:561)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)

	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1573)
	at org.apache.hadoop.ipc.Client.call(Client.java:1519)
	at org.apache.hadoop.ipc.Client.call(Client.java:1416)
	at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)
	at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:129)
	at com.sun.proxy.$Proxy19.addBlock(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:530)
	at jdk.internal.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
	at com.sun.proxy.$Proxy20.addBlock(Unknown Source)
	at org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:1084)
	at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1898)
	at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1700)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:707)

	at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:275)
	at io.trino.tempto.query.JdbcQueryExecutor.executeQueryNoParams(JdbcQueryExecutor.java:128)
	at io.trino.tempto.query.JdbcQueryExecutor.execute(JdbcQueryExecutor.java:112)
	... 15 more
	Suppressed: java.lang.Exception: Query: DELETE FROM iceberg_test.default.test_trino_reads_spark_row_level_deletes_PARQUET_PARQUET_118fjzn95i9c WHERE a = 13
		at io.trino.tempto.query.JdbcQueryExecutor.executeQueryNoParams(JdbcQueryExecutor.java:136)
		... 16 more

22/08/21 14:50:08 ERROR TThreadPoolServer: Thrift error occurred during processing of message.
org.apache.thrift.transport.TTransportException
	at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
	at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
	at org.apache.thrift.transport.TSaslTransport.readLength(TSaslTransport.java:374)
	at org.apache.thrift.transport.TSaslTransport.readFrame(TSaslTransport.java:451)
	at org.apache.thrift.transport.TSaslTransport.read(TSaslTransport.java:433)
	at org.apache.thrift.transport.TSaslServerTransport.read(TSaslServerTransport.java:43)
	at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
	at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:425)
	at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:321)
	at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:225)
	at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:27)
	at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:52)
	at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:310)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)

@gzsombor (Member Author):

Thanks!
I think I've implemented all the suggestions from @findepi. Could you please take a look and advise if there's anything left to do?

@hashhar (Member) left a review comment:

In "Fix package declaration for TestingRedirectHandlerInjector":

TestingRedirectHandlerInjector is placed in the correct folder - just the package declaration is wrong. So instead of changing the path to the file, the first commit should just change package io.trino.jdbc to package io.trino.tests.product.jdbc (see the sketch below).

Looks good other than that.
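
Concretely, the suggested first commit then reduces to a one-line change in that file (sketch):

// Before:
// package io.trino.jdbc;

// After - matching the file's on-disk location under tests/product:
package io.trino.tests.product.jdbc;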

@gzsombor gzsombor force-pushed the compilation-errors branch 2 times, most recently from 26e866e to 48b2f43 Compare August 23, 2022 17:20
@github-actions github-actions bot added the jdbc Relates to Trino JDBC driver label Aug 23, 2022
@hashhar (Member) commented Aug 23, 2022

@gzsombor In case you're still pushing, can you also do the following:

  • The first commit has a long message - reword it to something like Fix package declaration and adjust visibility of TrinoDriverUri
  • Expand ECJ in the commit message of Simplify generics in TupleDomain.upcast to ECJ (Eclipse Compiler)
  • Reword the last commit to fit GitHub's length limits, as below:

Fix compilation error reported only by ECJ (Eclipse Compiler)

Javac seems to be more lenient.

@martint (Member) left a review comment:

Some of the changes in this PR are ok and stand on their own, but I don't think we should make changes solely for the purpose of appeasing the Eclipse compiler. It is not a goal of Trino to support compilers that don't follow the JLS closely, and we can't guarantee that certain changes won't inadvertently be rolled back or new incompatibilities introduced in the future.

@@ -402,7 +402,7 @@ public static boolean dynamicFilter(@SqlType("T") double input, @SqlType(VARCHAR
 {
     private NullableFunction() {}

-    private static final String NAME = "$internal$dynamic_filter_nullable_function";
+    protected static final String NAME = "$internal$dynamic_filter_nullable_function";
Member:

If I'm reading the JLS correctly, this seems to be a bug in the Eclipse compiler:

A member (class, interface, field, or method) of a class, interface, type parameter, or reference type, or a constructor of a class, is accessible only if (i) the class, interface, type parameter, or reference type is accessible, and (ii) the member or constructor is declared to permit access:

[...]

– Otherwise, the member or constructor is declared private. Access is permitted only when the access occurs from within the body of the top level class or interface that encloses the declaration of the member or constructor.

and

The body of a class declares members (fields, methods, classes, and interfaces),
instance and static initializers, and constructors (§8.1.7).
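
Read against the examples above, the rule covers the failing case: the annotation occurs within the body of the enclosing top-level class, so access to the private constant should be permitted (a sketch restating the fourth example from the earlier comment):

public class X
{
    private static final String MSG = "msg";

    // The annotation sits inside the body of top-level class X, so the quoted JLS
    // rule permits access to private MSG. javac accepts the qualified form X.MSG
    // here; ECJ rejects it, which is why this reads as an Eclipse compiler bug.
    @Deprecated(since = X.MSG)
    static class Y {}
}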


@gzsombor (Member Author):

Yes, you're right, the constant visibility problem is more of an Eclipse bug - it seems that in the Java 6 era javac had a similar bug, and that's when this logic was introduced, to stay compatible with that old javac.
I've removed that commit from the pull request. I hope you can merge it, thanks!

This is equivalent and shorter. This also fixes a compilation problem
when compiling with ECJ.
@hashhar hashhar requested a review from martint September 19, 2022 06:34
@martint martint merged commit 30d642d into trinodb:master Sep 19, 2022
@github-actions github-actions bot added this to the 397 milestone Sep 19, 2022
Labels: cla-signed, jdbc (Relates to Trino JDBC driver)