DOC: Document existing functionality of pandas.DataFrame.to_sql() #11886 #26795

Merged 20 commits on Aug 30, 2019
Changes from 17 commits
39 changes: 26 additions & 13 deletions pandas/core/generic.py
@@ -2594,18 +2594,19 @@ def to_sql(
`index` is True, then the index names are used.
A sequence should be given if the DataFrame uses MultiIndex.
chunksize : int, optional
Rows will be written in batches of this size at a time. By default,
all rows will be written at once.
dtype : dict, optional
Specifying the datatype for columns. The keys should be the column
names and the values should be the SQLAlchemy types or strings for
the sqlite3 legacy mode.
method : {None, 'multi', callable}, default None
Specify the number of rows in each batch to be written at a time.
By default, all rows will be written at once.
dtype : dict or scalar, optional
Specifying the datatype for columns. If a dictionary is used, the
keys should be the column names and the values should be the
SQLAlchemy types or strings for the sqlite3 legacy mode. If a
scalar is provided, it will be applied to all columns.
method : {None, 'multi', callable}, optional
Controls the SQL insertion clause used:

* None : Uses standard SQL ``INSERT`` clause (one per row).
* 'multi': Pass multiple values in a single ``INSERT`` clause.
* callable with signature ``(pd_table, conn, keys, data_iter)``.
* callable with signature ``(pd_table, con, keys, data_iter)``.

Details and a sample callable implementation can be found in the
section :ref:`insert method <io.sql.method>`.
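As a sketch of how the documented `chunksize` and `dtype` parameters combine (the table and column names here are hypothetical, using an in-memory sqlite3 connection in fallback mode):

```python
import sqlite3

import pandas as pd

# Hypothetical example data; any DataFrame works the same way.
df = pd.DataFrame({"name": ["a", "b", "c"], "value": [1, 2, 3]})

con = sqlite3.connect(":memory:")

# chunksize=2 writes the rows in batches of two at a time; dtype is a
# dict mapping column names to type strings in sqlite3 fallback mode.
df.to_sql(
    "items",
    con,
    index=False,
    chunksize=2,
    dtype={"name": "TEXT", "value": "INTEGER"},
)

print(con.execute("SELECT COUNT(*) FROM items").fetchone()[0])  # 3
```

Passing a scalar such as `dtype="TEXT"` instead of a dict would apply that type to every column, per the rewritten docstring above.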
@@ -5270,23 +5271,32 @@ def _consolidate(self, inplace=False):
if inplace:
self._consolidate_inplace()
else:
f = lambda: self._data.consolidate()
def f():
return self._data.consolidate()

cons_data = self._protect_consolidate(f)
return self._constructor(cons_data).__finalize__(self)

@property
def _is_mixed_type(self):
f = lambda: self._data.is_mixed_type
def f():
return self._data.is_mixed_type

return self._protect_consolidate(f)

@property
def _is_numeric_mixed_type(self):
f = lambda: self._data.is_numeric_mixed_type
def f():
return self._data.is_numeric_mixed_type

return self._protect_consolidate(f)

@property
def _is_datelike_mixed_type(self):
f = lambda: self._data.is_datelike_mixed_type
def f():
return self._data.is_datelike_mixed_type

return self._protect_consolidate(f)

def _check_inplace_setting(self, value):
@@ -10415,7 +10425,10 @@ def _agg_by_level(self, name, axis=0, level=0, skipna=True, **kwargs):
return getattr(grouped, name)(**kwargs)
axis = self._get_axis_number(axis)
method = getattr(type(self), name)
applyf = lambda x: method(x, axis=axis, skipna=skipna, **kwargs)

def applyf(x):
return method(x, axis=axis, skipna=skipna, **kwargs)

return grouped.aggregate(applyf)

@classmethod
23 changes: 12 additions & 11 deletions pandas/io/sql.py
@@ -456,14 +456,14 @@ def to_sql(
Parameters
----------
frame : DataFrame, Series
name : string
name : str
Name of SQL table.
con : SQLAlchemy connectable(engine/connection) or database string URI
or sqlite3 DBAPI2 connection
Using SQLAlchemy makes it possible to use any DB supported by that
library.
If a DBAPI2 object, only sqlite3 is supported.
schema : string, default None
schema : str, optional
Name of SQL schema in database to write to (if database flavor
supports this). If None, use default schema (default).
if_exists : {'fail', 'replace', 'append'}, default 'fail'
@@ -472,18 +472,19 @@
- append: If table exists, insert data. Create if does not exist.
index : boolean, default True
Write DataFrame index as a column.
index_label : string or sequence, default None
index_label : str or sequence, optional
Column label for index column(s). If None is given (default) and
`index` is True, then the index names are used.
A sequence should be given if the DataFrame uses MultiIndex.
chunksize : int, default None
If not None, then rows will be written in batches of this size at a
time. If None, all rows will be written at once.
dtype : single SQLtype or dict of column name to SQL type, default None
Optional specifying the datatype for columns. The SQL type should
be a SQLAlchemy type, or a string for sqlite3 fallback connection.
If all columns are of the same type, one single value can be used.
method : {None, 'multi', callable}, default None
chunksize : int, optional
Specify the number of rows in each batch to be written at a time.
By default, all rows will be written at once.
dtype : dict or scalar, optional
Specifying the datatype for columns. If a dictionary is used, the
keys should be the column names and the values should be the
SQLAlchemy types or strings for the sqlite3 fallback mode. If a
scalar is provided, it will be applied to all columns.
method : {None, 'multi', callable}, optional
Controls the SQL insertion clause used:

- None : Uses standard SQL ``INSERT`` clause (one per row).
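The callable form of `method` accepts a function with the signature shown in the docstring. A minimal illustration (the function name `insert_all` and table name are hypothetical; with the sqlite3 fallback, the second argument is a DBAPI cursor):

```python
import sqlite3

import pandas as pd


def insert_all(pd_table, conn, keys, data_iter):
    # Hypothetical insertion callable: build one parameterized INSERT
    # and pass every row to executemany in a single call.  keys is the
    # list of column names; data_iter yields one tuple per row.
    placeholders = ", ".join("?" for _ in keys)
    columns = ", ".join(keys)
    sql = f"INSERT INTO {pd_table.name} ({columns}) VALUES ({placeholders})"
    conn.executemany(sql, list(data_iter))


df = pd.DataFrame({"name": ["a", "b"], "value": [1, 2]})
con = sqlite3.connect(":memory:")
df.to_sql("items", con, index=False, method=insert_all)

print(con.execute("SELECT SUM(value) FROM items").fetchone()[0])  # 3
```

This mirrors the pattern described in the `insert method` section of the pandas I/O docs that the docstring links to; a real implementation would typically use a backend-specific bulk path (e.g. PostgreSQL `COPY`) rather than plain `executemany`.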