
pyexcel - Let you focus on data, instead of file formats


Support the project

If your company has embedded pyexcel and its components into a revenue-generating product, please support me on GitHub, Patreon or Bounty Source to maintain the project and develop it further.

If you are an individual, you are welcome to support me too, for however long you feel like it. As my backer, you will receive early access to pyexcel related content.

Your issues will get prioritized if you become my patron as a pyexcel pro user.

With your financial support, I will be able to invest a little bit more time in coding, documentation and writing interesting posts.

Known constraints

Fonts, colors and charts are not supported.

Password protected xls, xlsx and ods files cannot be read either.

Introduction

Feature Highlights

A list of supported file formats:

file format   definition
csv           comma separated values
tsv           tab separated values
csvz          a zip file that contains one or many csv files
tsvz          a zip file that contains one or many tsv files
xls           a spreadsheet file format created by MS-Excel 97-2003
xlsx          MS-Excel extensions to the Office Open XML SpreadsheetML file format
xlsm          an MS-Excel macro-enabled workbook file
ods           open document spreadsheet
fods          flat open document spreadsheet
json          JavaScript Object Notation
html          html table of the data structure
simple        simple presentation
rst           reStructuredText presentation of the data
mediawiki     media wiki table

  1. One application programming interface (API) to handle multiple data sources (see the sketch after this list):
    • physical file
    • memory file
    • SQLAlchemy table
    • Django Model
    • Python data structures: dictionary, records and array
  2. One API to read and write data in various excel file formats.
  3. For large data sets, data streaming is supported. A generator can be returned to you. Check out iget_records, iget_array, isave_as and isave_book_as.
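
For instance, reading from an in-memory source looks the same as reading from a physical file. Below is a minimal sketch, assuming pyexcel's file_content/file_type keywords and its default type detection; the csv_text value is made up for illustration:

>>> import pyexcel as p
>>> csv_text = "name,age\nAdam,28\nBeatrice,29"
>>> p.get_array(file_content=csv_text, file_type="csv")
[['name', 'age'], ['Adam', 28], ['Beatrice', 29]]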

Installation

You can install pyexcel via pip:

$ pip install pyexcel

or clone it and install it:

$ git clone https://github.com/pyexcel/pyexcel.git
$ cd pyexcel
$ python setup.py install

One liners

This section shows you how to get data out of your excel files and how to export data to excel files in one line.

Read from the excel files

Get a list of dictionaries

Suppose you want to process History of Classical Music:

History of Classical Music:

Name            Period          Representative Composers
Medieval        c.1150-c.1400   Machaut, Landini
Renaissance     c.1400-c.1600   Gibbons, Frescobaldi
Baroque         c.1600-c.1750   JS Bach, Vivaldi
Classical       c.1750-c.1830   Joseph Haydn, Wolfgan Amadeus Mozart
Early Romantic  c.1830-c.1860   Chopin, Mendelssohn, Schumann, Liszt
Late Romantic   c.1860-c.1920   Wagner,Verdi
Modernist       20th century    Sergei Rachmaninoff,Calude Debussy

Let's get a list of dictionaries out of the xls file:

>>> import pyexcel as p
>>> records = p.get_records(file_name="your_file.xls")

And let's check what we have:

>>> for row in records:
...     print(f"{row['Representative Composers']} are from {row['Name']} period ({row['Period']})")
Machaut, Landini are from Medieval period (c.1150-c.1400)
Gibbons, Frescobaldi are from Renaissance period (c.1400-c.1600)
JS Bach, Vivaldi are from Baroque period (c.1600-c.1750)
Joseph Haydn, Wolfgan Amadeus Mozart are from Classical period (c.1750-c.1830)
Chopin, Mendelssohn, Schumann, Liszt are from Early Romantic period (c.1830-c.1860)
Wagner,Verdi are from Late Romantic period (c.1860-c.1920)
Sergei Rachmaninoff,Calude Debussy are from Modernist period (20th century)

Get two dimensional array

Alternatively, you can use pyexcel.get_array to do the same:

>>> for row in p.get_array(file_name="your_file.xls", start_row=1):
...     print(f"{row[2]} are from {row[0]} period ({row[1]})")
Machaut, Landini are from Medieval period (c.1150-c.1400)
Gibbons, Frescobaldi are from Renaissance period (c.1400-c.1600)
JS Bach, Vivaldi are from Baroque period (c.1600-c.1750)
Joseph Haydn, Wolfgan Amadeus Mozart are from Classical period (c.1750-c.1830)
Chopin, Mendelssohn, Schumann, Liszt are from Early Romantic period (c.1830-c.1860)
Wagner,Verdi are from Late Romantic period (c.1860-c.1920)
Sergei Rachmaninoff,Calude Debussy are from Modernist period (20th century)

where start_row skips the header row.

Get a dictionary

You can get a dictionary too:

>>> my_dict = p.get_dict(file_name="your_file.xls", name_columns_by_row=0)

And let's have a look inside:

>>> from pyexcel._compact import OrderedDict
>>> isinstance(my_dict, OrderedDict)
True
>>> for key, values in my_dict.items():
...     print(key + " : " + ','.join([str(item) for item in values]))
Name : Medieval,Renaissance,Baroque,Classical,Early Romantic,Late Romantic,Modernist
Period : c.1150-c.1400,c.1400-c.1600,c.1600-c.1750,c.1750-c.1830,c.1830-c.1860,c.1860-c.1920,20th century
Representative Composers : Machaut, Landini,Gibbons, Frescobaldi,JS Bach, Vivaldi,Joseph Haydn, Wolfgan Amadeus Mozart,Chopin, Mendelssohn, Schumann, Liszt,Wagner,Verdi,Sergei Rachmaninoff,Calude Debussy

Please note that my_dict is an OrderedDict.
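
Because it is just a dictionary keyed by the column headers, you can also pull out a single column directly; the values below follow from the sheet shown above:

>>> my_dict["Name"][0]
'Medieval'
>>> my_dict["Period"][-1]
'20th century'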

Get a dictionary of two dimensional array

Suppose you have a multi-sheet book like the following:

Top Violinist:

Name               Period     Nationality
Antonio Vivaldi    1678-1741  Italian
Niccolo Paganini   1782-1840  Italian
Pablo de Sarasate  1852-1904  Spainish
Eugene Ysaye       1858-1931  Belgian
Fritz Kreisler     1875-1962  Astria-American
Jascha Heifetz     1901-1987  Russian-American
David Oistrakh     1908-1974  Russian
Yehundi Menuhin    1916-1999  American
Itzhak Perlman     1945-      Israeli-American
Hilary Hahn        1979-      American

Noteable Violin Makers:

Maker                   Period     Country
Antonio Stradivari      1644-1737  Cremona, Italy
Giovanni Paolo Maggini  1580-1630  Botticino, Italy
Amati Family            1500-1740  Cremona, Italy
Guarneri Family         1626-1744  Cremona, Italy
Rugeri Family           1628-1719  Cremona, Italy
Carlo Bergonzi          1683-1747  Cremona, Italy
Jacob Stainer           1617-1683  Austria

Most Expensive Violins:

Name                   Estimated Value  Location
Messiah Stradivarious  $ 20,000,000     Ashmolean Museum in Oxford, England
Vieuxtemps Guarneri    $ 16,000,000     On loan to Anne Akiko Meyers
Lady Blunt             $ 15,900,000     Anonymous bidder

Here is the code to obtain those sheets as a single dictionary:

>>> book_dict = p.get_book_dict(file_name="book.xls")

And check:

>>> isinstance(book_dict, OrderedDict)
True
>>> import json
>>> for key, item in book_dict.items():
...     print(json.dumps({key: item}))
{"Most Expensive Violins": [["Name", "Estimated Value", "Location"], ["Messiah Stradivarious", "$ 20,000,000", "Ashmolean Museum in Oxford, England"], ["Vieuxtemps Guarneri", "$ 16,000,000", "On loan to Anne Akiko Meyers"], ["Lady Blunt", "$ 15,900,000", "Anonymous bidder"]]}
{"Noteable Violin Makers": [["Maker", "Period", "Country"], ["Antonio Stradivari", "1644-1737", "Cremona, Italy"], ["Giovanni Paolo Maggini", "1580-1630", "Botticino, Italy"], ["Amati Family", "1500-1740", "Cremona, Italy"], ["Guarneri Family", "1626-1744", "Cremona, Italy"], ["Rugeri Family", "1628-1719", "Cremona, Italy"], ["Carlo Bergonzi", "1683-1747", "Cremona, Italy"], ["Jacob Stainer", "1617-1683", "Austria"]]}
{"Top Violinist": [["Name", "Period", "Nationality"], ["Antonio Vivaldi", "1678-1741", "Italian"], ["Niccolo Paganini", "1782-1840", "Italian"], ["Pablo de Sarasate", "1852-1904", "Spainish"], ["Eugene Ysaye", "1858-1931", "Belgian"], ["Fritz Kreisler", "1875-1962", "Astria-American"], ["Jascha Heifetz", "1901-1987", "Russian-American"], ["David Oistrakh", "1908-1974", "Russian"], ["Yehundi Menuhin", "1916-1999", "American"], ["Itzhak Perlman", "1945-", "Israeli-American"], ["Hilary Hahn", "1979-", "American"]]}

Write data

Export an array

Suppose you have the following array:

>>> data = [['G', 'D', 'A', 'E'], ['Thomastik-Infield Domaints', 'Thomastik-Infield Domaints', 'Thomastik-Infield Domaints', 'Pirastro'], ['Silver wound', '', 'Aluminum wound', 'Gold Label Steel']]

And here is the code to save it as an excel file:

>>> p.save_as(array=data, dest_file_name="example.xls")

Let's verify it:

>>> p.get_sheet(file_name="example.xls")
pyexcel_sheet1:
+----------------------------+----------------------------+----------------------------+------------------+
| G                          | D                          | A                          | E                |
+----------------------------+----------------------------+----------------------------+------------------+
| Thomastik-Infield Domaints | Thomastik-Infield Domaints | Thomastik-Infield Domaints | Pirastro         |
+----------------------------+----------------------------+----------------------------+------------------+
| Silver wound               |                            | Aluminum wound             | Gold Label Steel |
+----------------------------+----------------------------+----------------------------+------------------+

And here is the code to save it as a csv file:

>>> p.save_as(array=data,
...           dest_file_name="example.csv",
...           dest_delimiter=':')

Let's verify it:

>>> with open("example.csv") as f:
...     for line in f.readlines():
...         print(line.rstrip())
...
G:D:A:E
Thomastik-Infield Domaints:Thomastik-Infield Domaints:Thomastik-Infield Domaints:Pirastro
Silver wound::Aluminum wound:Gold Label Steel

Export a list of dictionaries

>>> records = [
...     {"year": 1903, "country": "Germany", "speed": "206.7km/h"},
...     {"year": 1964, "country": "Japan", "speed": "210km/h"},
...     {"year": 2008, "country": "China", "speed": "350km/h"}
... ]
>>> p.save_as(records=records, dest_file_name='high_speed_rail.xls')

Export a dictionary of single key-value pairs

>>> henley_on_thames_facts = {
...     "area": "5.58 square meters",
...     "population": "11,619",
...     "civial parish": "Henley-on-Thames",
...     "latitude": "51.536",
...     "longitude": "-0.898"
... }
>>> p.save_as(adict=henley_on_thames_facts, dest_file_name='henley.xlsx')

Export a dictionary of single dimensional arrays

>>> ccs_insights = {
...     "year": ["2017", "2018", "2019", "2020", "2021"],
...     "smart phones": [1.53, 1.64, 1.74, 1.82, 1.90],
...     "feature phones": [0.46, 0.38, 0.30, 0.23, 0.17]
... }
>>> p.save_as(adict=ccs_insights, dest_file_name='ccs.csv')

Export a dictionary of two dimensional array as a book

Suppose you want to save the below dictionary to an excel file:

>>> a_dictionary_of_two_dimensional_arrays = {
...      'Sheet 1':
...          [
...              [1.0, 2.0, 3.0],
...              [4.0, 5.0, 6.0],
...              [7.0, 8.0, 9.0]
...          ],
...      'Sheet 2':
...          [
...              ['X', 'Y', 'Z'],
...              [1.0, 2.0, 3.0],
...              [4.0, 5.0, 6.0]
...          ],
...      'Sheet 3':
...          [
...              ['O', 'P', 'Q'],
...              [3.0, 2.0, 1.0],
...              [4.0, 3.0, 2.0]
...          ]
...  }

Here is the code:

>>> p.save_book_as(
...    bookdict=a_dictionary_of_two_dimensional_arrays,
...    dest_file_name="book.xls"
... )

If you want to preserve the order of sheets in your dictionary, you have to pass an ordered dictionary to the function. For example:

>>> data = OrderedDict()
>>> data.update({"Sheet 2": a_dictionary_of_two_dimensional_arrays['Sheet 2']})
>>> data.update({"Sheet 1": a_dictionary_of_two_dimensional_arrays['Sheet 1']})
>>> data.update({"Sheet 3": a_dictionary_of_two_dimensional_arrays['Sheet 3']})
>>> p.save_book_as(bookdict=data, dest_file_name="book.xls")

Let's verify its order:

>>> book_dict = p.get_book_dict(file_name="book.xls")
>>> for key, item in book_dict.items():
...     print(json.dumps({key: item}))
{"Sheet 2": [["X", "Y", "Z"], [1, 2, 3], [4, 5, 6]]}
{"Sheet 1": [[1, 2, 3], [4, 5, 6], [7, 8, 9]]}
{"Sheet 3": [["O", "P", "Q"], [3, 2, 1], [4, 3, 2]]}

Please notice that "Sheet 2" is the first item in the book_dict, meaning the order of sheets is preserved.

Transcoding

Note

Please note that pyexcel-cli can perform file transcoding at the command line. No need to open your editor, save the file, then run python.

The following code does a simple file format transcoding from xls to csv:

>>> p.save_as(file_name="birth.xls", dest_file_name="birth.csv")

Again it is really simple. Let's verify what we have gotten:

>>> sheet = p.get_sheet(file_name="birth.csv")
>>> sheet
birth.csv:
+-------+--------+----------+
| name  | weight | birth    |
+-------+--------+----------+
| Adam  | 3.4    | 03/02/15 |
+-------+--------+----------+
| Smith | 4.2    | 12/11/14 |
+-------+--------+----------+

Note

Please note that a csv (comma separated values) file is a plain text file. Formulas, charts, images and formatting in an xls file will disappear no matter which transcoding tool you use. Hence, pyexcel is a quick alternative for this transcoding job.

Let's use the previous example and save it as xlsx instead:

>>> p.save_as(file_name="birth.xls",
...           dest_file_name="birth.xlsx") # change the file extension

Again let's verify what we have gotten:

>>> sheet = p.get_sheet(file_name="birth.xlsx")
>>> sheet
pyexcel_sheet1:
+-------+--------+----------+
| name  | weight | birth    |
+-------+--------+----------+
| Adam  | 3.4    | 03/02/15 |
+-------+--------+----------+
| Smith | 4.2    | 12/11/14 |
+-------+--------+----------+

Excel book merge and split operation in one line

Merge all excel files in a directory into a book where each file becomes a sheet

The following code will merge every excel file in a directory into one file, say "output.xls":

from pyexcel.cookbook import merge_all_to_a_book
import glob


merge_all_to_a_book(glob.glob("your_csv_directory\*.csv"), "output.xls")

You can mix and match with other excel formats: xls, xlsm and ods. For example, if you are sure you have only xls, xlsm, xlsx, ods and csv files in your_excel_file_directory, you can do the following:

from pyexcel.cookbook import merge_all_to_a_book
import glob


merge_all_to_a_book(glob.glob("your_excel_file_directory\*.*"), "output.xls")

Split a book into single sheet files

Suppose you have many sheets in a workbook and you would like to separate each into a single-sheet excel file. You can easily do this:

>>> from pyexcel.cookbook import split_a_book
>>> split_a_book("megabook.xls", "output.xls")
>>> import glob
>>> outputfiles = glob.glob("*_output.xls")
>>> for file in sorted(outputfiles):
...     print(file)
...
Sheet 1_output.xls
Sheet 2_output.xls
Sheet 3_output.xls

For the output file, you can specify any of the supported formats.
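
For example, the same split can write each sheet out as csv instead. A minimal sketch, assuming the naming convention shown above carries over to the new extension:

>>> split_a_book("megabook.xls", "output.csv")
>>> for file in sorted(glob.glob("*_output.csv")):
...     print(file)
...
Sheet 1_output.csv
Sheet 2_output.csv
Sheet 3_output.csv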

Extract just one sheet from a book

Suppose you just want to extract one sheet from the many sheets that exist in a workbook and separate it into a single-sheet excel file. You can easily do this:

>>> import os
>>> from pyexcel.cookbook import extract_a_sheet_from_a_book
>>> extract_a_sheet_from_a_book("megabook.xls", "Sheet 1", "output.xls")
>>> if os.path.exists("Sheet 1_output.xls"):
...     print("Sheet 1_output.xls exists")
...
Sheet 1_output.xls exists

For the output file, you can specify any of the supported formats.

Hidden feature: partial read

Most pyexcel users do not know about it, but users of other libraries have been requesting partial read.

When you are dealing with a huge amount of data, e.g. 64GB, obviously you would not like to fill up your memory with it. What you may want to do is start reading from the Nth line, take M records and stop. And you only want to use your memory for those M records, not for the beginning part nor for the tail part.

Hence the partial read feature was developed to read partial data into memory for processing.

You can paginate by row, by column, or by both, so you dictate what portion of the data to read back. But remember that only the row limit features help you save memory. Say you use this feature to read data from the Nth column, take M columns and skip the rest: you are not going to reduce your memory footprint.

Why didn't I see the above benefit?

This feature depends heavily on the implementation details.

pyexcel-xls (xlrd), pyexcel-xlsx (openpyxl), pyexcel-ods (odfpy) and pyexcel-ods3 (pyexcel-ezodf) will read all data into memory. Because xls, xlsx and ods files are effectively a zipped folder, all four will unzip the folder and read the content in xml format in full, so as to make sense of all the details.

Hence, while the partial data is being returned, the memory consumption won't differ from reading the whole data back. Only after the partial data is returned does the memory consumption curve drop off the cliff. So the pagination code here only limits the data returned to your program.

With that said, pyexcel-xlsxr, pyexcel-odsr and pyexcel-htmlr DO read partial data into memory. Those three are implemented in such a way that they consume the xml (html) only when needed. When they have read the designated portion of the data, they stop, even if they are half way through.

In addition, pyexcel's csv readers can read partial data into memory too.

Let's assume the following file is a huge csv file:

>>> import datetime
>>> import pyexcel as pe
>>> data = [
...     [1, 21, 31],
...     [2, 22, 32],
...     [3, 23, 33],
...     [4, 24, 34],
...     [5, 25, 35],
...     [6, 26, 36]
... ]
>>> pe.save_as(array=data, dest_file_name="your_file.csv")

And let's pretend to read partial data:

>>> pe.get_sheet(file_name="your_file.csv", start_row=2, row_limit=3)
your_file.csv:
+---+----+----+
| 3 | 23 | 33 |
+---+----+----+
| 4 | 24 | 34 |
+---+----+----+
| 5 | 25 | 35 |
+---+----+----+

And you could as well do the same for columns:

>>> pe.get_sheet(file_name="your_file.csv", start_column=1, column_limit=2)
your_file.csv:
+----+----+
| 21 | 31 |
+----+----+
| 22 | 32 |
+----+----+
| 23 | 33 |
+----+----+
| 24 | 34 |
+----+----+
| 25 | 35 |
+----+----+
| 26 | 36 |
+----+----+

Obviously, you can do both at the same time:

>>> pe.get_sheet(file_name="your_file.csv",
...     start_row=2, row_limit=3,
...     start_column=1, column_limit=2)
your_file.csv:
+----+----+
| 23 | 33 |
+----+----+
| 24 | 34 |
+----+----+
| 25 | 35 |
+----+----+

The pagination support is available across all pyexcel plugins.

Note

No column pagination support for query sets as data source.
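
If you would rather keep memory usage flat regardless of the reader, you can also combine the streaming API with itertools.islice. Below is a minimal sketch against the example csv above; the integer values assume pyexcel's default type detection:

>>> from itertools import islice
>>> rows = pe.iget_array(file_name="your_file.csv")
>>> list(islice(rows, 2, 5))
[[3, 23, 33], [4, 24, 34], [5, 25, 35]]
>>> pe.free_resources()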

Formatting while transcoding a big data file

If you are transcoding a big data set, the conventional formatting method would not help unless plenty of free RAM is available on demand. However, there is a way to minimize the memory footprint of pyexcel while the formatting is performed.

Let's continue from the previous example. Suppose we want to transcode "your_file.csv" to "your_file.xlsx" but increase each element by 1.

What we can do is define a row renderer function like the following:

>>> def increment_by_one(row):
...     for element in row:
...         yield element + 1

Then pass it on to the isave_as function using row_renderer:

>>> pe.isave_as(file_name="your_file.csv",
...             row_renderer=increment_by_one,
...             dest_file_name="your_file.xlsx")

Note

If the data content is from a generator, isave_as has to be used.

We can verify if it was done correctly:

>>> pe.get_sheet(file_name="your_file.xlsx")
your_file.csv:
+---+----+----+
| 2 | 22 | 32 |
+---+----+----+
| 3 | 23 | 33 |
+---+----+----+
| 4 | 24 | 34 |
+---+----+----+
| 5 | 25 | 35 |
+---+----+----+
| 6 | 26 | 36 |
+---+----+----+
| 7 | 27 | 37 |
+---+----+----+
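
The same streaming save also works when your rows come from a generator you wrote yourself. A minimal sketch with a hypothetical squares generator, assuming isave_as accepts a generator for array= (as the earlier note implies):

>>> def squares():
...     for i in range(1, 4):
...         yield [i, i * i]
>>> pe.isave_as(array=squares(), dest_file_name="squares.csv")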

Stream APIs for big files: a set of two liners

When you are dealing with BIG excel files, you will want pyexcel to use constant memory.

This section shows you how to get data from your BIG excel files and how to export data to excel files in two lines at most, without eating all your computer memory.

Two liners to get data from big excel files

Get a list of dictionaries

Suppose you want to process the following coffee data:

Top 5 caffeine drinks:

Coffees                                Serving Size     Caffeine (mg)
Starbucks Coffee Blonde Roast          venti(20 oz)     475
Dunkin' Donuts Coffee with Turbo Shot  large(20 oz.)    398
Starbucks Coffee Pike Place Roast      grande(16 oz.)   310
Panera Coffee Light Roast              regular(16 oz.)  300

Let's get a list of dictionaries out of the xls file:

>>> records = p.iget_records(file_name="your_file.xls")

And let's check what we have:

>>> for r in records:
...     print(f"{r['Serving Size']} of {r['Coffees']} has {r['Caffeine (mg)']} mg")
venti(20 oz) of Starbucks Coffee Blonde Roast has 475 mg
large(20 oz.) of Dunkin' Donuts Coffee with Turbo Shot has 398 mg
grande(16 oz.) of Starbucks Coffee Pike Place Roast has 310 mg
regular(16 oz.) of Panera Coffee Light Roast has 300 mg

Please do not forget the second line, which closes the opened file handle:

>>> p.free_resources()
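
If your processing code can raise an exception, a try/finally block makes sure the file handle is released either way. A minimal sketch against the four-row coffee sheet above:

>>> records = p.iget_records(file_name="your_file.xls")
>>> try:
...     count = sum(1 for _ in records)
... finally:
...     p.free_resources()
>>> count
4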

Get two dimensional array

Alternatively, you can use pyexcel.iget_array to do the same:

>>> for row in p.iget_array(file_name="your_file.xls", start_row=1):
...     print(f"{row[1]} of {row[0]} has {row[2]} mg")
venti(20 oz) of Starbucks Coffee Blonde Roast has 475 mg
large(20 oz.) of Dunkin' Donuts Coffee with Turbo Shot has 398 mg
grande(16 oz.) of Starbucks Coffee Pike Place Roast has 310 mg
regular(16 oz.) of Panera Coffee Light Roast has 300 mg

Again, do not forget the second line:

>>> p.free_resources()

where start_row skips the header row.

Data export in one liners

Export an array

Suppose you have the following array:

>>> data = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

And here is the code to save it as an excel file:

>>> p.isave_as(array=data, dest_file_name="example.xls")

But the following line is not required because the data source is not a file source:

>>> # p.free_resources()

Let's verify it:

>>> p.get_sheet(file_name="example.xls")
pyexcel_sheet1:
+---+---+---+
| 1 | 2 | 3 |
+---+---+---+
| 4 | 5 | 6 |
+---+---+---+
| 7 | 8 | 9 |
+---+---+---+

And here is the code to save it as a csv file:

>>> p.isave_as(array=data,
...            dest_file_name="example.csv",
...            dest_delimiter=':')

Let's verify it:

>>> with open("example.csv") as f:
...     for line in f.readlines():
...         print(line.rstrip())
...
1:2:3
4:5:6
7:8:9

Export a list of dictionaries

>>> records = [
...     {"year": 1903, "country": "Germany", "speed": "206.7km/h"},
...     {"year": 1964, "country": "Japan", "speed": "210km/h"},
...     {"year": 2008, "country": "China", "speed": "350km/h"}
... ]
>>> p.isave_as(records=records, dest_file_name='high_speed_rail.xls')

Export a dictionary of single key-value pairs

>>> henley_on_thames_facts = {
...     "area": "5.58 square meters",
...     "population": "11,619",
...     "civial parish": "Henley-on-Thames",
...     "latitude": "51.536",
...     "longitude": "-0.898"
... }
>>> p.isave_as(adict=henley_on_thames_facts, dest_file_name='henley.xlsx')

Export a dictionary of single dimensional arrays

>>> ccs_insights = {
...     "year": ["2017", "2018", "2019", "2020", "2021"],
...     "smart phones": [1.53, 1.64, 1.74, 1.82, 1.90],
...     "feature phones": [0.46, 0.38, 0.30, 0.23, 0.17]
... }
>>> p.isave_as(adict=ccs_insights, dest_file_name='ccs.csv')
>>> p.free_resources()

Export a dictionary of two dimensional array as a book

Suppose you want to save the below dictionary to an excel file :

>>> a_dictionary_of_two_dimensional_arrays = {
...      'Sheet 1':
...          [
...              [1.0, 2.0, 3.0],
...              [4.0, 5.0, 6.0],
...              [7.0, 8.0, 9.0]
...          ],
...      'Sheet 2':
...          [
...              ['X', 'Y', 'Z'],
...              [1.0, 2.0, 3.0],
...              [4.0, 5.0, 6.0]
...          ],
...      'Sheet 3':
...          [
...              ['O', 'P', 'Q'],
...              [3.0, 2.0, 1.0],
...              [4.0, 3.0, 2.0]
...          ]
...  }

Here is the code:

>>> p.isave_book_as(
...    bookdict=a_dictionary_of_two_dimensional_arrays,
...    dest_file_name="book.xls"
... )

If you want to preserve the order of sheets in your dictionary, you have to pass an ordered dictionary to the function. For example:

>>> from pyexcel._compact import OrderedDict
>>> data = OrderedDict()
>>> data.update({"Sheet 2": a_dictionary_of_two_dimensional_arrays['Sheet 2']})
>>> data.update({"Sheet 1": a_dictionary_of_two_dimensional_arrays['Sheet 1']})
>>> data.update({"Sheet 3": a_dictionary_of_two_dimensional_arrays['Sheet 3']})
>>> p.isave_book_as(bookdict=data, dest_file_name="book.xls")
>>> p.free_resources()

Let's verify its order:

>>> import json
>>> book_dict = p.get_book_dict(file_name="book.xls")
>>> for key, item in book_dict.items():
...     print(json.dumps({key: item}))
{"Sheet 2": [["X", "Y", "Z"], [1, 2, 3], [4, 5, 6]]}
{"Sheet 1": [[1, 2, 3], [4, 5, 6], [7, 8, 9]]}
{"Sheet 3": [["O", "P", "Q"], [3, 2, 1], [4, 3, 2]]}

Please notice that "Sheet 2" is the first item in the book_dict, meaning the order of sheets is preserved.

File format transcoding on one line

Note

Please note that the following file transcoding could be done with zero lines of code. Install pyexcel-cli and you can do the transcoding in one command. No need to open your editor, save the file, then run python.

The following code does a simple file format transcoding from xls to csv:

>>> import pyexcel as p
>>> p.save_as(file_name="birth.xls", dest_file_name="birth.csv")

Again it is really simple. Let's verify what we have gotten:

>>> sheet = p.get_sheet(file_name="birth.csv")
>>> sheet
birth.csv:
+-------+--------+----------+
| name  | weight | birth    |
+-------+--------+----------+
| Adam  | 3.4    | 03/02/15 |
+-------+--------+----------+
| Smith | 4.2    | 12/11/14 |
+-------+--------+----------+

Note

Please note that a csv (comma separated values) file is a plain text file. Formulas, charts, images and formatting in an xls file will disappear no matter which transcoding tool you use. Hence, pyexcel is a quick alternative for this transcoding job.

Let's use the previous example and save it as xlsx instead:

>>> import pyexcel as p
>>> p.isave_as(file_name="birth.xls",
...            dest_file_name="birth.xlsx") # change the file extension

Again let's verify what we have gotten:

>>> sheet = p.get_sheet(file_name="birth.xlsx")
>>> sheet
pyexcel_sheet1:
+-------+--------+----------+
| name  | weight | birth    |
+-------+--------+----------+
| Adam  | 3.4    | 03/02/15 |
+-------+--------+----------+
| Smith | 4.2    | 12/11/14 |
+-------+--------+----------+

Available Plugins

A list of file formats supported by external plugins:

Package name      Supported file formats                  Dependencies
pyexcel-io        csv, csvz [1], tsv, tsvz [2]
pyexcel-xls       xls, xlsx(read only), xlsm(read only)   xlrd, xlwt
pyexcel-xlsx      xlsx                                    openpyxl
pyexcel-ods3      ods                                     pyexcel-ezodf, lxml
pyexcel-ods       ods                                     odfpy

Dedicated file readers and writers:

Package name      Supported file formats                  Dependencies
pyexcel-xlsxw     xlsx(write only)                        XlsxWriter
pyexcel-libxlsxw  xlsx(write only)                        libxlsxwriter
pyexcel-xlsxr     xlsx(read only)                         lxml
pyexcel-xlsbr     xlsb(read only)                         pyxlsb
pyexcel-odsr      read only for ods, fods                 lxml
pyexcel-odsw      write only for ods                      loxun
pyexcel-htmlr     html(read only)                         lxml, html5lib
pyexcel-pdfr      pdf(read only)                          camelot

Plugin shopping guide

Since 2020, all pyexcel-io plugins have dropped support for Python versions lower than 3.6. If you want to use any of those Python versions, please use pyexcel-io and its plugins with versions lower than 0.6.0.

Except for csv files, xls, xlsx and ods files are a zip of a folder containing a lot of xml files.

The dedicated readers for excel files can stream read

In order to manage the list of installed plugins, you need to use pip to add or remove a plugin. When you use virtualenv, you can have different plugins per virtual environment. In the situation where you have multiple plugins that do the same thing in your environment, you need to tell pyexcel which plugin to use per function call. For example, if both pyexcel-ods and pyexcel-odsr are installed and you want get_array to use pyexcel-odsr, you need to append get_array(..., library='pyexcel-odsr').
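
A minimal sketch of that per-call selection; example.ods is a hypothetical file, and it assumes both pyexcel-ods and pyexcel-odsr are installed:

import pyexcel as p

# explicitly pick the pyexcel-odsr plugin for this read
data = p.get_array(file_name="example.ods", library="pyexcel-odsr")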

Other data renderers

Package name          Supported file formats                                                       Dependencies  Python versions
pyexcel-text          write only: rst, mediawiki, html, latex, grid, pipe, orgtbl, plain, simple;
                      read only: ndjson; r/w: json                                                 tabulate      2.6, 2.7, 3.3, 3.4, 3.5, 3.6, pypy
pyexcel-handsontable  handsontable in html                                                         handsontable  same as above
pyexcel-pygal         svg chart                                                                    pygal         2.7, 3.3, 3.4, 3.5, 3.6, pypy
pyexcel-sortable      sortable table in html                                                       csvtotable    same as above
pyexcel-gantt         gantt chart in html                                                          frappe-gantt  except pypy, same as above

Footnotes

[1] zipped csv file
[2] zipped tsv file

Acknowledgement

All the great work has been done by odf, ezodf, xlrd, xlwt, tabulate and other individual developers. This library only unites the data access code.

License

New BSD License
