NOTE: This functionality has been inlined in Apache Spark 2.x. This package is in maintenance mode and we only accept critical bug fixes.
A library for parsing and querying CSV data with Apache Spark, for Spark SQL and DataFrames.
This library requires Spark 1.3+
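As the note above says, CSV support is built into Apache Spark 2.x itself, so no external package is needed there. A minimal Scala sketch of the built-in reader, assuming a Spark 2.x SparkSession named spark:

// Spark 2.x only: CSV is a built-in data source; no --packages or extra dependency required.
// Assumes an existing SparkSession named `spark`.
val df = spark.read
  .option("header", "true")       // use the first line as column names
  .option("inferSchema", "true")  // infer column types with an extra pass over the data
  .csv("cars.csv")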
You can link against this library in your program at the following coordinates:

Scala 2.10:
groupId: com.databricks
artifactId: spark-csv_2.10
version: 1.5.0

Scala 2.11:
groupId: com.databricks
artifactId: spark-csv_2.11
version: 1.5.0
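For SBT builds, the same coordinates can be declared as a library dependency; a minimal sketch, where the %% operator picks the _2.10 or _2.11 artifact to match your project's scalaVersion:

// build.sbt
libraryDependencies += "com.databricks" %% "spark-csv" % "1.5.0"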
This package can be added to Spark using the --packages
command line option. For example, to include it when starting the spark shell:
Spark compiled with Scala 2.11:
$SPARK_HOME/bin/spark-shell --packages com.databricks:spark-csv_2.11:1.5.0

Spark compiled with Scala 2.10:
$SPARK_HOME/bin/spark-shell --packages com.databricks:spark-csv_2.10:1.5.0
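The same --packages option also works with spark-submit, for example (app.jar here stands in for your application jar):

$SPARK_HOME/bin/spark-submit --packages com.databricks:spark-csv_2.11:1.5.0 app.jar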
This package allows reading CSV files in a local or distributed filesystem as Spark DataFrames. When reading files the API accepts several options (a combined example follows this list):

path: location of files. As with Spark, this can accept standard Hadoop globbing expressions.
header: when set to true, the first line of files is used to name columns and is not included in the data. All types are assumed to be string. Default value is false.
delimiter: by default columns are delimited using ,, but the delimiter can be set to any character.
quote: by default the quote character is ", but it can be set to any character. Delimiters inside quotes are ignored.
escape: by default the escape character is \, but it can be set to any character. Escaped quote characters are ignored.
parserLib: by default it is "commons"; it can be set to "univocity" to use that library for CSV parsing.
mode: determines the parsing mode. By default it is PERMISSIVE. Possible values are:
  PERMISSIVE: tries to parse all lines; nulls are inserted for missing tokens and extra tokens are ignored.
  DROPMALFORMED: drops lines that have fewer or more tokens than expected, or tokens that do not match the schema.
  FAILFAST: aborts with a RuntimeException if it encounters any malformed line.
charset: defaults to 'UTF-8' but can be set to other valid charset names.
inferSchema: automatically infers column types. It requires one extra pass over the data and is false by default.
comment: skip lines beginning with this character. Default is "#". Disable comments by setting this to null.
nullValue: specifies a string that indicates a null value; any fields matching this string will be set to null in the DataFrame.
dateFormat: specifies a string that indicates the date format to use when reading dates or timestamps. Custom date formats follow the formats at java.text.SimpleDateFormat. This applies to both DateType and TimestampType. By default it is null, which means dates and timestamps are parsed by java.sql.Timestamp.valueOf() and java.sql.Date.valueOf().
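As a combined illustration of the read options above, here is a minimal Scala sketch using the Spark 1.4+ read API; the delimiter, nullValue, dateFormat values and the input path are illustrative choices, not taken from the cars.csv example used later:

import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)
// Illustrative read combining several of the options described above
val df = sqlContext.read
  .format("com.databricks.spark.csv")
  .option("header", "true")            // first line names the columns
  .option("delimiter", ";")            // semicolon-separated input (illustrative)
  .option("nullValue", "NA")           // treat "NA" fields as null (illustrative)
  .option("dateFormat", "yyyy-MM-dd")  // pattern for DateType/TimestampType columns
  .option("mode", "DROPMALFORMED")     // drop malformed lines instead of failing
  .option("inferSchema", "true")       // infer column types with an extra pass
  .load("path/to/data.csv")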
The package also supports saving simple (non-nested) DataFrames. When writing files the API accepts several options (a combined example follows this list):

path: location of files.
header: when set to true, the header (from the schema in the DataFrame) is written as the first line.
delimiter: by default columns are delimited using ,, but the delimiter can be set to any character.
quote: by default the quote character is ", but it can be set to any character. This is written according to quoteMode.
escape: by default the escape character is \, but it can be set to any character. Escaped quote characters are written.
nullValue: specifies a string that indicates a null value; nulls in the DataFrame are written as this string.
dateFormat: specifies a string that indicates the date format to use when writing dates or timestamps. Custom date formats follow the formats at java.text.SimpleDateFormat. This applies to both DateType and TimestampType. If no dateFormat is specified, "yyyy-MM-dd HH:mm:ss.S" is used.
codec: compression codec to use when saving to file. Should be the fully qualified name of a class implementing org.apache.hadoop.io.compress.CompressionCodec, or one of the case-insensitive shortened names (bzip2, gzip, lz4, and snappy). Defaults to no compression when a codec is not specified.
quoteMode: when to quote fields (ALL, MINIMAL (default), NON_NUMERIC, NONE); see Quote Modes.
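And a minimal Scala sketch combining several of the write options (Spark 1.4+ write API); the output path and the "NA" null marker are illustrative, and df is assumed to be an existing DataFrame:

// Illustrative write combining several of the options described above
df.write
  .format("com.databricks.spark.csv")
  .option("header", "true")                    // write column names as the first line
  .option("quoteMode", "NON_NUMERIC")          // quote only non-numeric fields
  .option("nullValue", "NA")                   // write nulls as "NA" (illustrative)
  .option("codec", "org.apache.hadoop.io.compress.GzipCodec") // gzip-compress the output
  .save("path/to/output.csv")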
These examples use a CSV file available for download here:
$ wget https://github.com/databricks/spark-csv/raw/master/src/test/resources/cars.csv
SQL API

CSV data source for Spark can infer data types:
CREATE TABLE cars
USING com.databricks.spark.csv
OPTIONS (path "cars.csv", header "true", inferSchema "true")
You can also specify column names and types in DDL.
CREATE TABLE cars (yearMade double, carMake string, carModel string, comments string, blank string)
USING com.databricks.spark.csv
OPTIONS (path "cars.csv", header "true")
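Once registered, the table can be queried like any other Spark SQL table. A hedged Scala sketch, assuming the CREATE TABLE above was executed through the same sqlContext and using the column names from that DDL:

// Query the `cars` table defined above; column names follow the DDL example
val oldCars = sqlContext.sql("SELECT yearMade, carMake, carModel FROM cars WHERE yearMade < 1990")
oldCars.show()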
Scala API

Spark 1.4+:
Automatically infer schema (data types), otherwise everything is assumed string:
import org.apache.spark.sql.SQLContext
val sqlContext = new SQLContext(sc)
val df = sqlContext.read
.format("com.databricks.spark.csv")
.option("header", "true") // Use first line of all files as header
.option("inferSchema", "true") // Automatically infer data types
.load("cars.csv")
val selectedData = df.select("year", "model")
selectedData.write
.format("com.databricks.spark.csv")
.option("header", "true")
.save("newcars.csv")
You can manually specify the schema when reading data:
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType}
val sqlContext = new SQLContext(sc)
val customSchema = StructType(Array(
StructField("year", IntegerType, true),
StructField("make", StringType, true),
StructField("model", StringType, true),
StructField("comment", StringType, true),
StructField("blank", StringType, true)))
val df = sqlContext.read
.format("com.databricks.spark.csv")
.option("header", "true") // Use first line of all files as header
.schema(customSchema)
.load("cars.csv")
val selectedData = df.select("year", "model")
selectedData.write
.format("com.databricks.spark.csv")
.option("header", "true")
.save("newcars.csv")
You can save with compressed output:
import org.apache.spark.sql.SQLContext
val sqlContext = new SQLContext(sc)
val df = sqlContext.read
.format("com.databricks.spark.csv")
.option("header", "true") // Use first line of all files as header
.option("inferSchema", "true") // Automatically infer data types
.load("cars.csv")
val selectedData = df.select("year", "model")
selectedData.write
.format("com.databricks.spark.csv")
.option("header", "true")
.option("codec", "org.apache.hadoop.io.compress.GzipCodec")
.save("newcars.csv.gz")
Spark 1.3:
Automatically infer schema (data types), otherwise everything is assumed string:
import org.apache.spark.sql.SQLContext
val sqlContext = new SQLContext(sc)
val df = sqlContext.load(
"com.databricks.spark.csv",
Map("path" -> "cars.csv", "header" -> "true", "inferSchema" -> "true"))
val selectedData = df.select("year", "model")
selectedData.save("newcars.csv", "com.databricks.spark.csv")
You can manually specify the schema when reading data:
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType};
val sqlContext = new SQLContext(sc)
val customSchema = StructType(Array(
StructField("year", IntegerType, true),
StructField("make", StringType, true),
StructField("model", StringType, true),
StructField("comment", StringType, true),
StructField("blank", StringType, true)))
val df = sqlContext.load(
  "com.databricks.spark.csv",
  customSchema,
  Map("path" -> "cars.csv", "header" -> "true"))
val selectedData = df.select("year", "model")
selectedData.save("newcars.csv", "com.databricks.spark.csv")
Java API

Spark 1.4+:
Automatically infer schema (data types), otherwise everything is assumed string:
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;
SQLContext sqlContext = new SQLContext(sc);
DataFrame df = sqlContext.read()
.format("com.databricks.spark.csv")
.option("inferSchema", "true")
.option("header", "true")
.load("cars.csv");
df.select("year", "model").write()
.format("com.databricks.spark.csv")
.option("header", "true")
.save("newcars.csv");
You can manually specify schema:
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.types.*;
SQLContext sqlContext = new SQLContext(sc);
StructType customSchema = new StructType(new StructField[] {
new StructField("year", DataTypes.IntegerType, true, Metadata.empty()),
new StructField("make", DataTypes.StringType, true, Metadata.empty()),
new StructField("model", DataTypes.StringType, true, Metadata.empty()),
new StructField("comment", DataTypes.StringType, true, Metadata.empty()),
new StructField("blank", DataTypes.StringType, true, Metadata.empty())
});
DataFrame df = sqlContext.read()
.format("com.databricks.spark.csv")
.schema(customSchema)
.option("header", "true")
.load("cars.csv");
df.select("year", "model").write()
.format("com.databricks.spark.csv")
.option("header", "true")
.save("newcars.csv");
You can save with compressed output:
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;
SQLContext sqlContext = new SQLContext(sc);
DataFrame df = sqlContext.read()
.format("com.databricks.spark.csv")
.option("inferSchema", "true")
.option("header", "true")
.load("cars.csv");
df.select("year", "model").write()
.format("com.databricks.spark.csv")
.option("header", "true")
.option("codec", "org.apache.hadoop.io.compress.GzipCodec")
.save("newcars.csv");
Spark 1.3:
Automatically infer schema (data types), otherwise everything is assumed string:
import java.util.HashMap;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;
SQLContext sqlContext = new SQLContext(sc);
HashMap<String, String> options = new HashMap<String, String>();
options.put("header", "true");
options.put("path", "cars.csv");
options.put("inferSchema", "true");
DataFrame df = sqlContext.load("com.databricks.spark.csv", options);
df.select("year", "model").save("newcars.csv", "com.databricks.spark.csv");
You can manually specify schema:
import java.util.HashMap;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.types.*;
SQLContext sqlContext = new SQLContext(sc);
StructType customSchema = new StructType(new StructField[] {
new StructField("year", DataTypes.IntegerType, true, Metadata.empty()),
new StructField("make", DataTypes.StringType, true, Metadata.empty()),
new StructField("model", DataTypes.StringType, true, Metadata.empty()),
new StructField("comment", DataTypes.StringType, true, Metadata.empty()),
new StructField("blank", DataTypes.StringType, true, Metadata.empty())
});
HashMap<String, String> options = new HashMap<String, String>();
options.put("header", "true");
options.put("path", "cars.csv");
DataFrame df = sqlContext.load("com.databricks.spark.csv", customSchema, options);
df.select("year", "model").save("newcars.csv", "com.databricks.spark.csv");
You can save with compressed output:
import java.util.HashMap;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.SaveMode;
SQLContext sqlContext = new SQLContext(sc);
HashMap<String, String> options = new HashMap<String, String>();
options.put("header", "true");
options.put("path", "cars.csv");
options.put("inferSchema", "true");
DataFrame df = sqlContext.load("com.databricks.spark.csv", options);
HashMap<String, String> saveOptions = new HashMap<String, String>();
saveOptions.put("header", "true");
saveOptions.put("path", "newcars.csv");
saveOptions.put("codec", "org.apache.hadoop.io.compress.GzipCodec");
df.select("year", "model").save("com.databricks.spark.csv", SaveMode.Overwrite,
saveOptions);
Python API

Spark 1.4+:
Automatically infer schema (data types), otherwise everything is assumed string:
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
df = sqlContext.read.format('com.databricks.spark.csv').options(header='true', inferschema='true').load('cars.csv')
df.select('year', 'model').write.format('com.databricks.spark.csv').save('newcars.csv')
You can manually specify schema:
from pyspark.sql import SQLContext
from pyspark.sql.types import *
sqlContext = SQLContext(sc)
customSchema = StructType([ \
StructField("year", IntegerType(), True), \
StructField("make", StringType(), True), \
StructField("model", StringType(), True), \
StructField("comment", StringType(), True), \
StructField("blank", StringType(), True)])
df = sqlContext.read \
.format('com.databricks.spark.csv') \
.options(header='true') \
.load('cars.csv', schema = customSchema)
df.select('year', 'model').write \
.format('com.databricks.spark.csv') \
.save('newcars.csv')
You can save with compressed output:
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
df = sqlContext.read.format('com.databricks.spark.csv').options(header='true', inferschema='true').load('cars.csv')
df.select('year', 'model').write.format('com.databricks.spark.csv').options(codec="org.apache.hadoop.io.compress.GzipCodec").save('newcars.csv')
Spark 1.3:
Automatically infer schema (data types), otherwise everything is assumed string:
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
df = sqlContext.load(source="com.databricks.spark.csv", header = 'true', inferSchema = 'true', path = 'cars.csv')
df.select('year', 'model').save('newcars.csv', 'com.databricks.spark.csv')
You can manually specify schema:
from pyspark.sql import SQLContext
from pyspark.sql.types import *
sqlContext = SQLContext(sc)
customSchema = StructType([ \
StructField("year", IntegerType(), True), \
StructField("make", StringType(), True), \
StructField("model", StringType(), True), \
StructField("comment", StringType(), True), \
StructField("blank", StringType(), True)])
df = sqlContext.load(source="com.databricks.spark.csv", header = 'true', schema = customSchema, path = 'cars.csv')
df.select('year', 'model').save('newcars.csv', 'com.databricks.spark.csv')
You can save with compressed output:
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
df = sqlContext.load(source="com.databricks.spark.csv", header = 'true', inferSchema = 'true', path = 'cars.csv')
df.select('year', 'model').save('newcars.csv', 'com.databricks.spark.csv', codec="org.apache.hadoop.io.compress.GzipCodec")
R API

Spark 1.4+:
Automatically infer schema (data types), otherwise everything is assumed string:
library(SparkR)
Sys.setenv('SPARKR_SUBMIT_ARGS'='"--packages" "com.databricks:spark-csv_2.10:1.5.0" "sparkr-shell"')
sqlContext <- sparkRSQL.init(sc)
df <- read.df(sqlContext, "cars.csv", source = "com.databricks.spark.csv", inferSchema = "true")
write.df(df, "newcars.csv", "com.databricks.spark.csv", "overwrite")
You can manually specify schema:
library(SparkR)
Sys.setenv('SPARKR_SUBMIT_ARGS'='"--packages" "com.databricks:spark-csv_2.10:1.5.0" "sparkr-shell"')
sqlContext <- sparkRSQL.init(sc)
customSchema <- structType(
structField("year", "integer"),
structField("make", "string"),
structField("model", "string"),
structField("comment", "string"),
structField("blank", "string"))
df <- read.df(sqlContext, "cars.csv", source = "com.databricks.spark.csv", schema = customSchema)
write.df(df, "newcars.csv", "com.databricks.spark.csv", "overwrite")
You can save with compressed output:
library(SparkR)
Sys.setenv('SPARKR_SUBMIT_ARGS'='"--packages" "com.databricks:spark-csv_2.10:1.5.0" "sparkr-shell"')
sqlContext <- sparkRSQL.init(sc)
df <- read.df(sqlContext, "cars.csv", source = "com.databricks.spark.csv", inferSchema = "true")
write.df(df, "newcars.csv", "com.databricks.spark.csv", "overwrite", codec="org.apache.hadoop.io.compress.GzipCodec")
Building From Source

This library is built with SBT, which is automatically downloaded by the included shell script. To build a JAR file, simply run sbt/sbt package from the project root. The build configuration includes support for both Scala 2.10 and 2.11.