Commit d971e40: Improved ODBC docs

jorgecarleitao committed Mar 5, 2022
Parent: 76eb0f8

Showing 6 changed files with 31 additions and 10 deletions.
5 changes: 3 additions & 2 deletions examples/io_odbc.rs
@@ -1,4 +1,4 @@
-//! Example showing how to write to, and read from, an ODBC connector
+//! Demo of how to write to, and read from, an ODBC connector
 //!
 //! On Ubuntu, you need to run the following (to install the driver):
 //! ```bash
@@ -28,13 +28,14 @@ fn main() -> Result<()> {
     let query = "INSERT INTO example (c1, c2) VALUES (?, ?)";
     let prepared = connection.prepare(query).unwrap();
 
-    // first, initialize buffers from odbc-api
+    // second, we initialize buffers from odbc-api
     let fields = vec![
         // (for now) the types here must match the table's schema
         Field::new("unused", DataType::Int32, true),
         Field::new("unused", DataType::LargeUtf8, true),
     ];
 
+    // third, we initialize the writer
     let mut writer = write::Writer::try_new(prepared, fields)?;
 
     // say we have (or receive from a channel) a chunk:
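The hunk above ends just before the chunk is written. A minimal sketch of how the example continues, assuming the `writer` created above and purely illustrative values (this code is not part of the commit's diff):

```rust
use arrow2::array::{Array, Int32Array, Utf8Array};
use arrow2::chunk::Chunk;

// the two columns must line up with the fields declared above:
// Int32 (nullable) and LargeUtf8, i.e. `Utf8Array<i64>` (nullable)
let chunk = Chunk::new(vec![
    Box::new(Int32Array::from([Some(1), None])) as Box<dyn Array>,
    Box::new(Utf8Array::<i64>::from([Some("a"), Some("b")])),
]);

// serialize the chunk into the ODBC buffers and execute the prepared statement
writer.write(&chunk)?;
```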
1 change: 1 addition & 0 deletions guide/src/io/README.md
@@ -7,5 +7,6 @@ This crate offers optional features that enable interoperability with different
 * Parquet (`io_parquet`)
 * JSON and NDJSON (`io_json`)
 * Avro (`io_avro` and `io_avro_async`)
+* ODBC-compliant databases (`io_odbc`)
 
 In this section you can find a guide and examples for each one of them.
8 changes: 8 additions & 0 deletions guide/src/io/odbc.md
@@ -0,0 +1,8 @@
+# ODBC
+
+When compiled with the feature `io_odbc`, this crate can be used to read from, and write to,
+any [ODBC](https://en.wikipedia.org/wiki/Open_Database_Connectivity) interface:
+
+```rust
+{{#include ../../../examples/io_odbc.rs}}
+```
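The included example assumes an open `connection`. A sketch of how one might obtain it with the `odbc-api` crate; the driver name and database in the connection string are placeholders:

```rust
use odbc_api::Environment;

// the driver and database below are illustrative; use whatever your
// system's ODBC driver manager exposes
let env = Environment::new()?;
let connection = env.connect_with_connection_string("Driver={SQLite3};Database=example.db")?;
```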
15 changes: 12 additions & 3 deletions src/io/odbc/write/mod.rs
@@ -9,6 +9,8 @@ pub use schema::infer_descriptions;
 pub use serialize::serialize;
 
 /// Creates a [`api::buffers::ColumnarBuffer`] from [`api::ColumnDescription`]s.
+///
+/// This is useful when separating the serialization (CPU-bound) from the writing to the DB (IO-bound).
 pub fn buffer_from_description(
     descriptions: Vec<api::ColumnDescription>,
     capacity: usize,
@@ -23,15 +25,20 @@ pub fn buffer_from_description(
     api::buffers::buffer_from_description(capacity, descs)
 }
 
-/// A writer of [`Chunk`] to an ODBC prepared statement.
+/// A writer of [`Chunk`]s to an ODBC [`api::Prepared`] statement.
+/// # Implementation
+/// This struct mixes CPU-bound and IO-bound tasks and is not ideal
+/// for an `async` context.
 pub struct Writer<'a> {
     fields: Vec<Field>,
     buffer: api::buffers::ColumnarBuffer<api::buffers::AnyColumnBuffer>,
     prepared: api::Prepared<'a>,
 }
 
 impl<'a> Writer<'a> {
-    /// Creates a new [`Writer`]
+    /// Creates a new [`Writer`].
+    /// # Errors
+    /// Errors iff any of the types from [`Field`] is not supported.
     pub fn try_new(prepared: api::Prepared<'a>, fields: Vec<Field>) -> Result<Self> {
         let buffer = buffer_from_description(infer_descriptions(&fields)?, 0);
         Ok(Self {
@@ -41,7 +48,9 @@ impl<'a> Writer<'a> {
         })
     }
 
-    /// Writes a chunk to the writter.
+    /// Writes a chunk to the writer.
+    /// # Errors
+    /// Errors iff the execution of the statement fails.
     pub fn write<A: AsRef<dyn Array>>(&mut self, chunk: &Chunk<A>) -> Result<()> {
         if chunk.len() > self.buffer.num_rows() {
             // if the chunk is larger, we re-allocate new buffers to hold it
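The new doc comment on `buffer_from_description` motivates separating CPU-bound serialization from IO-bound execution. A sketch of calling it directly, using only the signatures visible in this diff; the field names, the capacity of 1024, and the `arrow2::io::odbc::write` import path (assumed from the file's location) are illustrative:

```rust
use arrow2::datatypes::{DataType, Field};
use arrow2::io::odbc::write::{buffer_from_description, infer_descriptions};

let fields = vec![
    Field::new("c1", DataType::Int32, true),
    Field::new("c2", DataType::LargeUtf8, true),
];
// map Arrow fields to ODBC column descriptions...
let descriptions = infer_descriptions(&fields)?;
// ...and pre-allocate a columnar buffer for 1024 rows; filling it
// (CPU-bound) can then happen on a different thread from executing
// the statement (IO-bound)
let buffer = buffer_from_description(descriptions, 1024);
```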
2 changes: 1 addition & 1 deletion src/io/odbc/write/schema.rs
@@ -3,7 +3,7 @@ use super::super::api;
 use crate::datatypes::{DataType, Field};
 use crate::error::{ArrowError, Result};
 
-/// Infers the [`ColumnDescription`] from the fields
+/// Infers the [`api::ColumnDescription`] from the fields
 pub fn infer_descriptions(fields: &[Field]) -> Result<Vec<api::ColumnDescription>> {
     fields
         .iter()
10 changes: 6 additions & 4 deletions src/io/odbc/write/serialize.rs
@@ -121,12 +121,13 @@ fn bool(array: &BooleanArray, values: &mut [api::Bit]) {
 }
 
 fn bool_optional(array: &BooleanArray, values: &mut NullableSliceMut<api::Bit>) {
+    let (values, indicators) = values.raw_values();
     array
         .values()
         .iter()
-        .zip(values.values().iter_mut())
+        .zip(values.iter_mut())
         .for_each(|(from, to)| *to = api::Bit(from as u8));
-    write_validity(array.validity(), values.indicators());
+    write_validity(array.validity(), indicators);
 }
 
 fn primitive<T: NativeType>(array: &PrimitiveArray<T>, values: &mut [T]) {
@@ -145,8 +146,9 @@ fn write_validity(validity: Option<&Bitmap>, indicators: &mut [isize]) {
 }
 
 fn primitive_optional<T: NativeType>(array: &PrimitiveArray<T>, values: &mut NullableSliceMut<T>) {
-    values.values().copy_from_slice(array.values());
-    write_validity(array.validity(), values.indicators());
+    let (values, indicators) = values.raw_values();
+    values.copy_from_slice(array.values());
+    write_validity(array.validity(), indicators);
 }
 
 fn fixed_binary(array: &FixedSizeBinaryArray, writer: &mut BinColumnWriter) {
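For context on the `indicators` slice threaded through these hunks: ODBC reports NULLs via per-row indicator values (0 for a present value, SQL_NULL_DATA, i.e. -1, for NULL). A hypothetical reconstruction of `write_validity` from its signature alone, since its body is not shown in this diff:

```rust
use arrow2::bitmap::Bitmap;

// hypothetical sketch; the real implementation is not part of this diff
fn write_validity_sketch(validity: Option<&Bitmap>, indicators: &mut [isize]) {
    if let Some(validity) = validity {
        // one indicator per row: 0 = value present, -1 = NULL (SQL_NULL_DATA)
        indicators
            .iter_mut()
            .zip(validity.iter())
            .for_each(|(indicator, is_valid)| *indicator = if is_valid { 0 } else { -1 });
    } else {
        // no validity bitmap means every value is present
        indicators.iter_mut().for_each(|indicator| *indicator = 0);
    }
}
```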