From 63768098fec4f7fdc5ae2d934901a32fbfc0d2cd Mon Sep 17 00:00:00 2001
From: yuqi
Date: Wed, 15 Jan 2025 16:26:52 +0800
Subject: [PATCH] fix

---
 docs/hive-catalog-with-cloud-storage.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/hive-catalog-with-cloud-storage.md b/docs/hive-catalog-with-cloud-storage.md
index e8756c5113b..3a16f831e6c 100644
--- a/docs/hive-catalog-with-cloud-storage.md
+++ b/docs/hive-catalog-with-cloud-storage.md
@@ -271,7 +271,7 @@ To access S3-stored tables using Spark, you need to configure the SparkSession a
   sparkSession.sql("...");
 ```
 
-:::Note
-Please download [Hadoop AWS jar](https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-aws), [aws java sdk jar](https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-bundle) and place them in the classpath of the Spark. If the JARs are missing, Spark will not be able to access the S3 storage.
-Azure Blob Storage(ADLS) requires the [Hadoop Azure jar](https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-azure), [Azure cloud sdk jar](https://mvnrepository.com/artifact/com.azure/azure-storage-blob) to be placed in the classpath of the Spark.
-for Google Cloud Storage(GCS), you need to download the [Hadoop GCS jar](https://github.com/GoogleCloudDataproc/hadoop-connectors/releases) and place it in the classpath of the Spark.
+:::note
+Please download the [Hadoop AWS jar](https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-aws) and the [AWS Java SDK bundle jar](https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-bundle) and place them in the classpath of Spark. If these JARs are missing, Spark will not be able to access S3 storage.
+Azure Blob Storage (ADLS) requires the [Hadoop Azure jar](https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-azure) and the [Azure Storage SDK jar](https://mvnrepository.com/artifact/com.azure/azure-storage-blob) to be placed in the classpath of Spark.
+For Google Cloud Storage (GCS), you need to download the [Hadoop GCS jar](https://github.com/GoogleCloudDataproc/hadoop-connectors/releases) and place it in the classpath of Spark.
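As a hedged sketch of the Spark-side setup the corrected note refers to (not a definitive implementation, and not part of the patch itself): the example below assumes the hadoop-aws and aws-java-sdk-bundle jars are already on Spark's classpath, and the app name, credentials, and table name are hypothetical placeholders.

```java
import org.apache.spark.sql.SparkSession;

public class S3TableExample {
    public static void main(String[] args) {
        // hadoop-aws and aws-java-sdk-bundle must already be on Spark's
        // classpath, e.g. copied into $SPARK_HOME/jars or passed with
        // `spark-submit --jars`; otherwise S3AFileSystem cannot be loaded.
        SparkSession sparkSession = SparkSession.builder()
                .appName("s3-hive-example") // hypothetical app name
                // Placeholder credentials; substitute real keys or an IAM role.
                .config("spark.hadoop.fs.s3a.access.key", "<access-key>")
                .config("spark.hadoop.fs.s3a.secret.key", "<secret-key>")
                .config("spark.hadoop.fs.s3a.impl",
                        "org.apache.hadoop.fs.s3a.S3AFileSystem")
                .enableHiveSupport()
                .getOrCreate();

        // A query against an S3-backed table fails with a ClassNotFoundException
        // for S3AFileSystem when the jars above are missing.
        sparkSession.sql("SELECT * FROM s3_backed_table LIMIT 10").show();

        sparkSession.stop();
    }
}
```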