[write] add Transaction with commit info and commit implementation #370
@@ -46,6 +46,10 @@ static LOG_SCHEMA: LazyLock<SchemaRef> = LazyLock::new(|| {
         .into()
 });

+static LOG_COMMIT_INFO_SCHEMA: LazyLock<SchemaRef> = LazyLock::new(|| {
+    StructType::new([Option::<CommitInfo>::get_struct_field(COMMIT_INFO_NAME)]).into()
+});
+
 #[cfg_attr(feature = "developer-visibility", visibility::make(pub))]
 #[cfg_attr(not(feature = "developer-visibility"), visibility::make(pub(crate)))]
 fn get_log_schema() -> &'static SchemaRef {
@@ -58,6 +62,10 @@ fn get_log_add_schema() -> &'static SchemaRef {
     &LOG_ADD_SCHEMA
 }

+pub(crate) fn get_log_commit_info_schema() -> &'static SchemaRef {
+    &LOG_COMMIT_INFO_SCHEMA
+}
+
 #[derive(Debug, Clone, PartialEq, Eq, Schema)]
 pub struct Format {
     /// Name of the encoding for files in this table
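For readers unfamiliar with the pattern in these two hunks: the commit-info schema is built once, on first access, via a `LazyLock` static, and exposed through a `&'static SchemaRef` accessor. A minimal self-contained sketch of that pattern follows; the `StructType`/`SchemaRef` types here are simplified stand-ins for illustration, not the kernel's actual schema types.

```rust
use std::sync::{Arc, LazyLock};

// Simplified stand-ins for the kernel's schema types, only to illustrate the
// LazyLock pattern used for LOG_COMMIT_INFO_SCHEMA above.
#[derive(Debug)]
struct StructType {
    fields: Vec<String>,
}

type SchemaRef = Arc<StructType>;

// Built exactly once, on first access, then shared for the process lifetime.
static LOG_COMMIT_INFO_SCHEMA: LazyLock<SchemaRef> = LazyLock::new(|| {
    Arc::new(StructType {
        fields: vec!["commitInfo".to_string()],
    })
});

fn get_log_commit_info_schema() -> &'static SchemaRef {
    &LOG_COMMIT_INFO_SCHEMA
}

fn main() {
    // Repeated calls return the same shared Arc.
    println!("{:?}", get_log_commit_info_schema());
}
```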
@@ -149,22 +157,21 @@ impl Protocol {
 #[derive(Debug, Clone, PartialEq, Eq, Schema)]
 #[cfg_attr(feature = "developer-visibility", visibility::make(pub))]
 #[cfg_attr(not(feature = "developer-visibility"), visibility::make(pub(crate)))]
+// TODO need to have a way to always write some fields but not always read them
 struct CommitInfo {
     /// The time this logical file was created, as milliseconds since the epoch.
     /// TODO should this be a Timestamp?
-    pub(crate) timestamp: i64,
+    pub(crate) timestamp: Option<i64>,
     /// An arbitrary string that identifies the operation associated with this commit. This is
     /// specified by the engine.
-    pub(crate) operation: String,
+    pub(crate) operation: Option<String>,
Review thread:
- Is this actually optional? Would a commit ever not have an associated action?
- It is indeed optional according to the protocol; that is, we should never require it to be present when reading commit info.
- Technically, the protocol has nothing to say about any of this. It just so happens that Delta-spark requires this field, along with our favorite (put another way: it's optional from the kernel-as-reader perspective, but if we want compat with Delta-spark then it's required from the kernel-as-writer perspective).
- Right, to be clear I mean that it is optional because the protocol says commit info can be anything. Since we don't really have a way of communicating reader/writer optionality, I can just add a comment saying that this is optional since it is actually optional from a read/enforcement perspective, but kernel always writes it.
     /// Map of arbitrary string key-value pairs that provide additional information about the
-    /// operation. This is specified by the engine. For now this is always empty.
-    pub(crate) operation_parameters: HashMap<String, String>,
-    /// The version of the delta_kernel crate used to write this commit.
+    /// operation. This is specified by the engine. For now this is always empty on write.
+    pub(crate) operation_parameters: Option<HashMap<String, String>>,
+    /// The version of the delta_kernel crate used to write this commit. The kernel will always
+    /// write this field, but it is optional since many tables will not have this field (i.e. any
+    /// tables not written by kernel).
     pub(crate) kernel_version: Option<String>,
     /// A place for the engine to store additional metadata associated with this commit encoded as
     /// a map of strings.
-    /// TODO need to have a way to always write this but not always read it
     pub(crate) engine_commit_info: Option<HashMap<String, String>>,
 }
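To make the read/write asymmetry from the thread above concrete, here is a small sketch of a writer-side constructor that always populates the kernel-controlled fields even though every field stays `Option` for readers. The struct and the `new_commit_info` helper are simplified stand-ins, not the kernel's actual API.

```rust
use std::collections::HashMap;

// Simplified stand-in for the CommitInfo action above: all fields are
// optional from a reader/enforcement perspective.
#[derive(Debug, Default)]
struct CommitInfo {
    timestamp: Option<i64>,
    operation: Option<String>,
    operation_parameters: Option<HashMap<String, String>>,
    kernel_version: Option<String>,
    engine_commit_info: Option<HashMap<String, String>>,
}

// Hypothetical writer-side constructor: the kernel always fills these fields
// when committing, while readers tolerate their absence in existing tables.
fn new_commit_info(timestamp_ms: i64, operation: &str) -> CommitInfo {
    CommitInfo {
        timestamp: Some(timestamp_ms),
        operation: Some(operation.to_string()),
        operation_parameters: Some(HashMap::new()), // always empty on write, for now
        kernel_version: Some(format!("v{}", env!("CARGO_PKG_VERSION"))),
        engine_commit_info: None, // filled from engine-provided metadata, if any
    }
}

fn main() {
    println!("{:?}", new_commit_info(1_700_000_000_000, "UNKNOWN"));
}
```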
@@ -454,12 +461,12 @@ mod tests {
         let expected = Arc::new(StructType::new(vec![StructField::new(
             "commitInfo",
             StructType::new(vec![
-                StructField::new("timestamp", DataType::LONG, false),
-                StructField::new("operation", DataType::STRING, false),
+                StructField::new("timestamp", DataType::LONG, true),
Review comment: So to my questions above, are you using these options as nullable fields rather than truly optional ones?
+                StructField::new("operation", DataType::STRING, true),
                 StructField::new(
                     "operationParameters",
                     MapType::new(DataType::STRING, DataType::STRING, false),
-                    false,
+                    true,
                 ),
                 StructField::new("kernelVersion", DataType::STRING, true),
                 StructField::new(
@@ -664,9 +664,9 @@ fn parse_json_impl(json_strings: &StringArray, schema: ArrowSchemaRef) -> DeltaR

 /// write an arrow RecordBatch to a JSON string by appending to a buffer.
 ///
-/// TODO: this should stream data to the JSON writer and output an iterator.
-pub(crate) fn write_json(
-    data: Box<dyn Iterator<Item = Box<dyn EngineData>> + Send>,
+/// TODO (zach): this should stream data to the JSON writer and output an iterator.
+pub(crate) fn to_json_bytes(
Review thread:
- I think there was a comment thread before, recommending to rename this from (tho I guess as
- Yup, probably advocate to just keep this
+    data: impl Iterator<Item = Box<dyn EngineData>> + Send,
 ) -> DeltaResult<Vec<u8>> {
     let mut writer = LineDelimitedWriter::new(Vec::new());
     for chunk in data.into_iter() {
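For context on the renamed function: it collects an iterator of data chunks into a single buffer of newline-delimited JSON bytes. Below is a rough sketch of that shape, using `serde_json` on a plain `Row` struct as a stand-in for the Arrow `RecordBatch`/`LineDelimitedWriter` machinery; it assumes `serde` (with the derive feature) and `serde_json` as dependencies and is not the kernel's implementation.

```rust
use serde::Serialize;

// Stand-in for a chunk of engine data; the real function takes Box<dyn EngineData>.
#[derive(Serialize)]
struct Row {
    string: String,
}

// Illustrative version of to_json_bytes: serialize each chunk as one JSON
// line and append it to a single growable byte buffer.
fn to_json_bytes(data: impl Iterator<Item = Row>) -> Result<Vec<u8>, serde_json::Error> {
    let mut out = Vec::new();
    for row in data {
        serde_json::to_writer(&mut out, &row)?;
        out.push(b'\n');
    }
    Ok(out)
}

fn main() -> Result<(), serde_json::Error> {
    let rows = vec![
        Row { string: "string1".into() },
        Row { string: "string2".into() },
    ];
    let bytes = to_json_bytes(rows.into_iter())?;
    assert_eq!(
        String::from_utf8(bytes).unwrap(),
        "{\"string\":\"string1\"}\n{\"string\":\"string2\"}\n"
    );
    Ok(())
}
```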
@@ -1425,63 +1425,6 @@ mod tests {
         assert_eq!(reorder_indices, expect_reorder);
     }

-    #[test]
-    fn accepts_safe_decimal_casts() {
-        use super::can_upcast_to_decimal;
-        use ArrowDataType::*;
-
-        assert!(can_upcast_to_decimal(&Decimal128(1, 0), 2u8, 0i8));
-        assert!(can_upcast_to_decimal(&Decimal128(1, 0), 2u8, 1i8));
-        assert!(can_upcast_to_decimal(&Decimal128(5, -2), 6u8, -2i8));
-        assert!(can_upcast_to_decimal(&Decimal128(5, -2), 6u8, -1i8));
-        assert!(can_upcast_to_decimal(&Decimal128(5, 1), 6u8, 1i8));
-        assert!(can_upcast_to_decimal(&Decimal128(5, 1), 6u8, 2i8));
-        assert!(can_upcast_to_decimal(
-            &Decimal128(10, 5),
-            arrow_schema::DECIMAL128_MAX_PRECISION,
-            arrow_schema::DECIMAL128_MAX_SCALE - 5
-        ));
-
-        assert!(can_upcast_to_decimal(&Int8, 3u8, 0i8));
-        assert!(can_upcast_to_decimal(&Int8, 4u8, 0i8));
-        assert!(can_upcast_to_decimal(&Int8, 4u8, 1i8));
-        assert!(can_upcast_to_decimal(&Int8, 7u8, 2i8));
-
-        assert!(can_upcast_to_decimal(&Int16, 5u8, 0i8));
-        assert!(can_upcast_to_decimal(&Int16, 6u8, 0i8));
-        assert!(can_upcast_to_decimal(&Int16, 6u8, 1i8));
-        assert!(can_upcast_to_decimal(&Int16, 9u8, 2i8));
-
-        assert!(can_upcast_to_decimal(&Int32, 10u8, 0i8));
-        assert!(can_upcast_to_decimal(&Int32, 11u8, 0i8));
-        assert!(can_upcast_to_decimal(&Int32, 11u8, 1i8));
-        assert!(can_upcast_to_decimal(&Int32, 14u8, 2i8));
-
-        assert!(can_upcast_to_decimal(&Int64, 20u8, 0i8));
-        assert!(can_upcast_to_decimal(&Int64, 21u8, 0i8));
-        assert!(can_upcast_to_decimal(&Int64, 21u8, 1i8));
-        assert!(can_upcast_to_decimal(&Int64, 24u8, 2i8));
-    }
-
-    #[test]
-    fn rejects_unsafe_decimal_casts() {
-        use super::can_upcast_to_decimal;
-        use ArrowDataType::*;
-
-        assert!(!can_upcast_to_decimal(&Decimal128(2, 0), 2u8, 1i8));
-        assert!(!can_upcast_to_decimal(&Decimal128(2, 0), 2u8, -1i8));
-        assert!(!can_upcast_to_decimal(&Decimal128(5, 2), 6u8, 4i8));
-
-        assert!(!can_upcast_to_decimal(&Int8, 2u8, 0i8));
-        assert!(!can_upcast_to_decimal(&Int8, 3u8, 1i8));
-        assert!(!can_upcast_to_decimal(&Int16, 4u8, 0i8));
-        assert!(!can_upcast_to_decimal(&Int16, 5u8, 1i8));
-        assert!(!can_upcast_to_decimal(&Int32, 9u8, 0i8));
-        assert!(!can_upcast_to_decimal(&Int32, 10u8, 1i8));
-        assert!(!can_upcast_to_decimal(&Int64, 19u8, 0i8));
-        assert!(!can_upcast_to_decimal(&Int64, 20u8, 1i8));
-    }
-
     #[test]
     fn test_write_json() -> DeltaResult<()> {
         let schema = Arc::new(ArrowSchema::new(vec![ArrowField::new(
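As an aside on the tests removed above: the rule they encode appears to be that a cast to `Decimal128(precision, scale)` is safe when the target does not lose scale and keeps at least as many integer digits as the source can produce. Here is a sketch of a predicate consistent with those asserts; this is my reading of the tests, not necessarily the kernel's actual `can_upcast_to_decimal`.

```rust
// Hypothetical subset of ArrowDataType, just enough for the rule below.
enum ArrowDataType {
    Int8,
    Int16,
    Int32,
    Int64,
    Decimal128(u8, i8),
}

// A cast to Decimal128(precision, scale) is treated as safe when the target
// does not lose scale and keeps at least as many integer digits as the source.
fn can_upcast_to_decimal(from: &ArrowDataType, precision: u8, scale: i8) -> bool {
    use ArrowDataType::*;
    let (needed_integer_digits, from_scale): (i16, i16) = match from {
        Int8 => (3, 0),   // i8 needs up to 3 integer digits
        Int16 => (5, 0),  // i16 needs up to 5 integer digits
        Int32 => (10, 0), // i32 needs up to 10 integer digits
        Int64 => (20, 0), // the tests above require 20 (one more than i64's 19 max digits)
        Decimal128(p, s) => (*p as i16 - *s as i16, *s as i16),
    };
    scale as i16 >= from_scale && (precision as i16 - scale as i16) >= needed_integer_digits
}

fn main() {
    use ArrowDataType::*;
    // A few of the removed asserts, reproduced against this sketch.
    assert!(can_upcast_to_decimal(&Decimal128(5, 1), 6, 2));
    assert!(can_upcast_to_decimal(&Int8, 3, 0));
    assert!(can_upcast_to_decimal(&Int32, 10, 0));
    assert!(!can_upcast_to_decimal(&Int16, 4, 0));
    assert!(!can_upcast_to_decimal(&Int64, 19, 0));
    assert!(!can_upcast_to_decimal(&Decimal128(2, 0), 2, -1));
}
```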
@@ -1494,7 +1437,7 @@ mod tests {
             vec![Arc::new(StringArray::from(vec!["string1", "string2"]))],
         )?;
         let data: Box<dyn EngineData> = Box::new(ArrowEngineData::new(data));
-        let json = write_json(Box::new(std::iter::once(data)))?;
+        let json = to_json_bytes(Box::new(std::iter::once(data)))?;
         assert_eq!(
             json,
             "{\"string\":\"string1\"}\n{\"string\":\"string2\"}\n".as_bytes()
Review thread:
- Should this be a u64? There is no chance of this being negative?
- We don't impl Schema on u64; the LONG type is i64.
- Yeah, this is a long-time issue in that Java doesn't have unsigned types. Somewhere back in our issue list is something about figuring this out :p
- We may or may not need u64 support in general, but parquet timestamps are signed by definition:
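Following up on the signedness thread: since the schema's LONG maps to `i64`, a writer needs a signed millisecond timestamp either way. A hypothetical helper showing one way to produce one (not the kernel's actual code):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Hypothetical helper: commitInfo.timestamp is a signed LONG (i64) holding
// milliseconds since the unix epoch, so narrow the u128 millis into i64.
fn current_timestamp_ms() -> i64 {
    let millis = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock is set before the unix epoch")
        .as_millis();
    i64::try_from(millis).expect("timestamp in milliseconds overflows i64")
}

fn main() {
    println!("commit timestamp: {}", current_timestamp_ms());
}
```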