.. _readme:
onETL
=====
|Repo Status| |PyPI Latest Release| |PyPI License| |PyPI Python Version| |PyPI Downloads|
|Documentation| |CI Status| |Test Coverage| |pre-commit.ci Status|
.. |Repo Status| image:: https://www.repostatus.org/badges/latest/active.svg
:alt: Repo status - Active
:target: https://github.com/MobileTeleSystems/onetl
.. |PyPI Latest Release| image:: https://img.shields.io/pypi/v/onetl
:alt: PyPI - Latest Release
:target: https://pypi.org/project/onetl/
.. |PyPI License| image:: https://img.shields.io/pypi/l/onetl.svg
:alt: PyPI - License
:target: https://github.com/MobileTeleSystems/onetl/blob/develop/LICENSE.txt
.. |PyPI Python Version| image:: https://img.shields.io/pypi/pyversions/onetl.svg
:alt: PyPI - Python Version
:target: https://pypi.org/project/onetl/
.. |PyPI Downloads| image:: https://img.shields.io/pypi/dm/onetl
:alt: PyPI - Downloads
:target: https://pypi.org/project/onetl/
.. |Documentation| image:: https://readthedocs.org/projects/onetl/badge/?version=stable
:alt: Documentation - ReadTheDocs
:target: https://onetl.readthedocs.io/
.. |CI Status| image:: https://github.com/MobileTeleSystems/onetl/workflows/Tests/badge.svg
:alt: Github Actions - latest CI build status
:target: https://github.com/MobileTeleSystems/onetl/actions
.. |Test Coverage| image:: https://codecov.io/gh/MobileTeleSystems/onetl/branch/develop/graph/badge.svg?token=RIO8URKNZJ
:alt: Test coverage - percent
:target: https://codecov.io/gh/MobileTeleSystems/onetl
.. |pre-commit.ci Status| image:: https://results.pre-commit.ci/badge/github/MobileTeleSystems/onetl/develop.svg
:alt: pre-commit.ci - status
:target: https://results.pre-commit.ci/latest/github/MobileTeleSystems/onetl/develop
|Logo|
.. |Logo| image:: docs/_static/logo_wide.svg
:alt: onETL logo
:target: https://github.com/MobileTeleSystems/onetl
What is onETL?
--------------
Python ETL/ELT library powered by `Apache Spark <https://spark.apache.org/>`_ & other open-source tools.
Goals
-----
* Provide unified classes to extract data from (**E**) & load data to (**L**) various stores.
* Provide `Spark DataFrame API <https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.DataFrame.html>`_ for performing transformations (**T**) in terms of *ETL*.
* Provide direct access to databases, allowing you to execute SQL queries as well as DDL and DML statements, and to call functions/procedures. This can be used for building *ELT* pipelines.
* Support different `read strategies <https://onetl.readthedocs.io/en/stable/strategy/index.html>`_ for incremental and batch data fetching (see the sketch below).
* Provide `hooks <https://onetl.readthedocs.io/en/stable/hooks/index.html>`_ & `plugins <https://onetl.readthedocs.io/en/stable/plugins.html>`_ mechanisms for altering the behavior of internal classes.
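For example, incremental loads are expressed as a context manager wrapped around a regular ``DBReader.run()`` call.
Below is a minimal sketch: the connection, table and HWM names are placeholders,
and the exact HWM configuration depends on the onETL version (see the strategies documentation).

.. code:: python

    from onetl.connection import Postgres
    from onetl.db import DBReader
    from onetl.strategy import IncrementalStrategy

    # `spark` is an existing SparkSession, see the Quick start section below
    postgres = Postgres(
        host="postgres.demo.com",
        user="onetl",
        password="somepassword",
        database="mydb",
        spark=spark,
    )

    reader = DBReader(
        connection=postgres,
        source="public.my_table",
        # remember the highest seen value of this column between runs
        hwm=DBReader.AutoDetectHWM(name="my_table_hwm", expression="updated_at"),
    )

    with IncrementalStrategy():
        # only rows with "updated_at" above the previously saved HWM value are read
        df = reader.run()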
Non-goals
---------
* onETL is not a Spark replacement. It just provides additional functionality that Spark does not have, and improves the UX for end users.
* onETL is not a framework, as it imposes no requirements on project structure, naming, the way of running ETL/ELT processes, configuration, etc. All of that should be implemented in some other tool.
* onETL is deliberately developed without any integration with scheduling software like Apache Airflow. All such integrations should be implemented as separate tools.
* Only batch operations, no streaming. For streaming, prefer `Apache Flink <https://flink.apache.org/>`_.
Requirements
------------
* **Python 3.7 - 3.13**
* PySpark 2.3.x - 3.5.x (depending on the connector used)
* Java 8+ (required by Spark, see below)
* Kerberos libs & GCC (required by ``Hive``, ``HDFS`` and ``SparkHDFS`` connectors)
Supported storages
------------------
Database
~~~~~~~~
+--------------------+--------------+-------------------------------------------------------------------------------------------------------------------------+
| Type | Storage | Powered by |
+====================+==============+=========================================================================================================================+
| Database | Clickhouse | Apache Spark `JDBC Data Source <https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html>`_ |
+ +--------------+ +
| | MSSQL | |
+ +--------------+ +
| | MySQL | |
+ +--------------+ +
| | Postgres | |
+ +--------------+ +
| | Oracle | |
+ +--------------+ +
| | Teradata | |
+ +--------------+-------------------------------------------------------------------------------------------------------------------------+
| | Hive | Apache Spark `Hive integration <https://spark.apache.org/docs/latest/sql-data-sources-hive-tables.html>`_ |
+ +--------------+-------------------------------------------------------------------------------------------------------------------------+
| | Kafka | Apache Spark `Kafka integration <https://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html>`_ |
+ +--------------+-------------------------------------------------------------------------------------------------------------------------+
| | Greenplum | VMware `Greenplum Spark connector <https://docs.vmware.com/en/VMware-Greenplum-Connector-for-Apache-Spark/index.html>`_ |
+ +--------------+-------------------------------------------------------------------------------------------------------------------------+
| | MongoDB | `MongoDB Spark connector <https://www.mongodb.com/docs/spark-connector/current>`_ |
+--------------------+--------------+-------------------------------------------------------------------------------------------------------------------------+
| File | HDFS | `HDFS Python client <https://pypi.org/project/hdfs/>`_ |
+ +--------------+-------------------------------------------------------------------------------------------------------------------------+
| | S3 | `minio-py client <https://pypi.org/project/minio/>`_ |
+ +--------------+-------------------------------------------------------------------------------------------------------------------------+
| | SFTP | `Paramiko library <https://pypi.org/project/paramiko/>`_ |
+ +--------------+-------------------------------------------------------------------------------------------------------------------------+
| | FTP | `FTPUtil library <https://pypi.org/project/ftputil/>`_ |
+ +--------------+ +
| | FTPS | |
+ +--------------+-------------------------------------------------------------------------------------------------------------------------+
| | WebDAV | `WebdavClient3 library <https://pypi.org/project/webdavclient3/>`_ |
+ +--------------+-------------------------------------------------------------------------------------------------------------------------+
| | Samba | `pysmb library <https://pypi.org/project/pysmb/>`_ |
+--------------------+--------------+-------------------------------------------------------------------------------------------------------------------------+
| Files as DataFrame | SparkLocalFS | Apache Spark `File Data Source <https://spark.apache.org/docs/latest/sql-data-sources-generic-options.html>`_ |
| +--------------+ +
| | SparkHDFS | |
| +--------------+-------------------------------------------------------------------------------------------------------------------------+
| | SparkS3 | `Hadoop AWS <https://hadoop.apache.org/docs/current3/hadoop-aws/tools/hadoop-aws/index.html>`_ library |
+--------------------+--------------+-------------------------------------------------------------------------------------------------------------------------+
.. documentation
Documentation
-------------
See https://onetl.readthedocs.io/
How to install
---------------
.. _install:
Minimal installation
~~~~~~~~~~~~~~~~~~~~
.. _minimal-install:
The base ``onetl`` package contains:
* ``DBReader``, ``DBWriter`` and related classes
* ``FileDownloader``, ``FileUploader``, ``FileMover`` and related classes, like file filters & limits
* ``FileDFReader``, ``FileDFWriter`` and related classes, like file formats
* Read Strategies & HWM classes
* Plugins support
It can be installed via:
.. code:: bash
pip install onetl
.. warning::
This method does NOT include any connections.
This method is recommended for third-party libraries which require ``onetl`` to be installed,
but do not use its connection classes.
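For example, the high-level classes can be imported with just the base package installed;
only creating a connection requires Spark or a specific file client:

.. code:: python

    # these imports work with the base package, no Spark or file clients needed
    from onetl.db import DBReader, DBWriter
    from onetl.file import FileDownloader, FileMover, FileUploader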
With DB and FileDF connections
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. _spark-install:
All DB connection classes (``Clickhouse``, ``Greenplum``, ``Hive`` and others)
and all FileDF connection classes (``SparkHDFS``, ``SparkLocalFS``, ``SparkS3``)
require Spark to be installed.
.. _java-install:
First, install a JDK. The exact installation instructions depend on your OS; here are some examples:
.. code:: bash
yum install java-1.8.0-openjdk-devel # CentOS 7 + Spark 2
dnf install java-11-openjdk-devel # CentOS 8 + Spark 3
apt-get install openjdk-11-jdk # Debian-based + Spark 3
.. _spark-compatibility-matrix:
Compatibility matrix
^^^^^^^^^^^^^^^^^^^^
+--------------------------------------------------------------+-------------+-------------+-------+
| Spark | Python | Java | Scala |
+==============================================================+=============+=============+=======+
| `2.3.x <https://spark.apache.org/docs/2.3.1/#downloading>`_ | 3.7 only | 8 only | 2.11 |
+--------------------------------------------------------------+-------------+-------------+-------+
| `2.4.x <https://spark.apache.org/docs/2.4.8/#downloading>`_ | 3.7 only | 8 only | 2.11 |
+--------------------------------------------------------------+-------------+-------------+-------+
| `3.2.x <https://spark.apache.org/docs/3.2.4/#downloading>`_ | 3.7 - 3.10 | 8u201 - 11 | 2.12 |
+--------------------------------------------------------------+-------------+-------------+-------+
| `3.3.x <https://spark.apache.org/docs/3.3.4/#downloading>`_ | 3.7 - 3.10 | 8u201 - 17 | 2.12 |
+--------------------------------------------------------------+-------------+-------------+-------+
| `3.4.x <https://spark.apache.org/docs/3.4.3/#downloading>`_ | 3.7 - 3.12 | 8u362 - 20 | 2.12 |
+--------------------------------------------------------------+-------------+-------------+-------+
| `3.5.x <https://spark.apache.org/docs/3.5.4/#downloading>`_ | 3.8 - 3.13 | 8u371 - 20 | 2.12 |
+--------------------------------------------------------------+-------------+-------------+-------+
.. _pyspark-install:
Then install PySpark by passing ``spark`` to ``extras``:
.. code:: bash
pip install onetl[spark] # install latest PySpark
or install PySpark explicitly:
.. code:: bash
pip install onetl pyspark==3.5.4 # install a specific PySpark version
or inject PySpark into ``sys.path`` in some other way BEFORE creating a class instance, as shown in the sketch below.
**Otherwise the connection object cannot be created.**
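One common way to do that is the third-party ``findspark`` package.
This is just a sketch, not a requirement of onETL; any approach that makes ``pyspark`` importable works:

.. code:: python

    # a minimal sketch using the third-party `findspark` package (not part of onETL):
    # it prepends $SPARK_HOME/python to sys.path so that `import pyspark` succeeds
    import findspark

    findspark.init()

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()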
With File connections
~~~~~~~~~~~~~~~~~~~~~
.. _files-install:
All File (but not *FileDF*) connection classes (``FTP``, ``SFTP``, ``HDFS`` and so on) require specific Python clients to be installed.
Each client can be installed explicitly by passing the connector name (in lowercase) to ``extras``:
.. code:: bash
pip install onetl[ftp] # specific connector
pip install onetl[ftp,ftps,sftp,hdfs,s3,webdav,samba] # multiple connectors
To install all file connectors at once, pass ``files`` to ``extras``:
.. code:: bash
pip install onetl[files]
**Otherwise the class import will fail.**
With Kerberos support
~~~~~~~~~~~~~~~~~~~~~
.. _kerberos-install:
Most Hadoop instances are set up with Kerberos support,
so some connections require additional setup to work properly.
* ``HDFS``
  Uses `requests-kerberos <https://pypi.org/project/requests-kerberos/>`_ and
  `GSSApi <https://pypi.org/project/gssapi/>`_ for authentication.
  It also uses the ``kinit`` executable to generate a Kerberos ticket.

* ``Hive`` and ``SparkHDFS``

  require a Kerberos ticket to exist before creating the Spark session.
So you need to install OS packages with:
* ``krb5`` libs
* Headers for ``krb5``
* ``gcc`` or other compiler for C sources
The exact installation instructions depend on your OS; here are some examples:
.. code:: bash
apt install libkrb5-dev krb5-user gcc # Debian-based
dnf install krb5-devel krb5-libs krb5-workstation gcc # CentOS, OracleLinux
You should also pass ``kerberos`` to ``extras`` to install the required Python packages:
.. code:: bash
pip install onetl[kerberos]
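After that, Kerberos-aware connections can authenticate either with a password or with a keytab.
Below is a minimal sketch for ``HDFS``; the host, user and keytab path are placeholders,
see the connection documentation for the exact parameters:

.. code:: python

    from onetl.connection import HDFS

    # authenticate via a keytab instead of a password (requires the `kerberos` extra)
    hdfs = HDFS(
        host="my.name.node",
        user="someuser",
        keytab="/path/to/someuser.keytab",
    ).check()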
Full bundle
~~~~~~~~~~~
.. _full-bundle:
To install all connectors and dependencies, you can pass ``all`` into ``extras``:
.. code:: bash
pip install onetl[all]
# this is just the same as
pip install onetl[spark,files,kerberos]
.. warning::
This method consumes a lot of disk space, and requires Java & Kerberos libraries to be installed on your OS.
.. _quick-start:
Quick start
------------
MSSQL → Hive
~~~~~~~~~~~~
Read data from MSSQL, transform & write to Hive.
.. code:: bash
# install onETL and PySpark
pip install onetl[spark]
.. code:: python
# Import pyspark to initialize the SparkSession
from pyspark.sql import SparkSession
# import function to setup onETL logging
from onetl.log import setup_logging
# Import required connections
from onetl.connection import MSSQL, Hive
# Import onETL classes to read & write data
from onetl.db import DBReader, DBWriter
# change logging level to INFO, and set up default logging format and handler
setup_logging()
# Initialize new SparkSession with MSSQL driver loaded
maven_packages = MSSQL.get_packages()
spark = (
SparkSession.builder.appName("spark_app_onetl_demo")
.config("spark.jars.packages", ",".join(maven_packages))
.enableHiveSupport() # for Hive
.getOrCreate()
)
# Initialize MSSQL connection and check if database is accessible
mssql = MSSQL(
host="mssqldb.demo.com",
user="onetl",
password="onetl",
database="Telecom",
spark=spark,
# These options are passed to MSSQL JDBC Driver:
extra={"applicationIntent": "ReadOnly"},
).check()
# >>> INFO:|MSSQL| Connection is available
# Initialize DBReader
reader = DBReader(
connection=mssql,
source="dbo.demo_table",
columns=["on", "etl"],
# Set some MSSQL read options:
options=MSSQL.ReadOptions(fetchsize=10000),
)
# checks that there is data in the table, otherwise raises exception
reader.raise_if_no_data()
# Read data to DataFrame
df = reader.run()
df.printSchema()
# root
# |-- id: integer (nullable = true)
# |-- phone_number: string (nullable = true)
# |-- region: string (nullable = true)
# |-- birth_date: date (nullable = true)
# |-- registered_at: timestamp (nullable = true)
# |-- account_balance: double (nullable = true)
# Apply any PySpark transformations
from pyspark.sql.functions import lit
df_to_write = df.withColumn("engine", lit("onetl"))
df_to_write.printSchema()
# root
# |-- id: integer (nullable = true)
# |-- phone_number: string (nullable = true)
# |-- region: string (nullable = true)
# |-- birth_date: date (nullable = true)
# |-- registered_at: timestamp (nullable = true)
# |-- account_balance: double (nullable = true)
# |-- engine: string (nullable = false)
# Initialize Hive connection
hive = Hive(cluster="rnd-dwh", spark=spark)
# Initialize DBWriter
db_writer = DBWriter(
connection=hive,
target="dl_sb.demo_table",
# Set some Hive write options:
options=Hive.WriteOptions(if_exists="replace_entire_table"),
)
# Write data from DataFrame to Hive
db_writer.run(df_to_write)
# Success!
SFTP → HDFS
~~~~~~~~~~~
Download files from SFTP & upload them to HDFS.
.. code:: bash
# install onETL with SFTP and HDFS clients, and Kerberos support
pip install onetl[hdfs,sftp,kerberos]
.. code:: python
# import function to setup onETL logging
from onetl.log import setup_logging
# Import required connections
from onetl.connection import SFTP, HDFS
# Import onETL classes to download & upload files
from onetl.file import FileDownloader, FileUploader
# import filter & limit classes
from onetl.file.filter import Glob, ExcludeDir
from onetl.file.limit import MaxFilesCount
# change logging level to INFO, and set up default logging format and handler
setup_logging()
# Initialize SFTP connection and check it
sftp = SFTP(
host="sftp.test.com",
user="someuser",
password="somepassword",
).check()
# >>> INFO:|SFTP| Connection is available
# Initialize downloader
file_downloader = FileDownloader(
connection=sftp,
source_path="/remote/tests/Report", # path on SFTP
local_path="/local/onetl/Report", # local fs path
filters=[
# download only files matching the glob
Glob("*.csv"),
# exclude files from this directory
ExcludeDir("/remote/tests/Report/exclude_dir/"),
],
limits=[
# download max 1000 files per run
MaxFilesCount(1000),
],
options=FileDownloader.Options(
# delete files from SFTP after successful download
delete_source=True,
# mark file as failed if it already exists in local_path
if_exists="error",
),
)
# Download files to local filesystem
download_result = file_downloader.run()
# Method run returns a DownloadResult object,
# which contains the collection of downloaded files, divided into 4 categories
download_result
# DownloadResult(
# successful=[
# LocalPath('/local/onetl/Report/file_1.json'),
# LocalPath('/local/onetl/Report/file_2.json'),
# ],
# failed=[FailedRemoteFile('/remote/onetl/Report/file_3.json')],
# ignored=[RemoteFile('/remote/onetl/Report/file_4.json')],
# missing=[],
# )
# Raise exception if there are failed files, or there were no files in the remote filesystem
download_result.raise_if_failed() or download_result.raise_if_empty()
# Do any kind of magic with files: rename files, remove header for csv files, ...
renamed_files = my_rename_function(download_result.successful)
# function removed "_" from file names
# [
# LocalPath('/home/onetl/Report/file1.json'),
# LocalPath('/home/onetl/Report/file2.json'),
# ]
# Initialize HDFS connection
hdfs = HDFS(
host="my.name.node",
user="someuser",
password="somepassword", # or keytab
)
# Initialize uploader
file_uploader = FileUploader(
connection=hdfs,
target_path="/user/onetl/Report/", # hdfs path
)
# Upload files from local fs to HDFS
upload_result = file_uploader.run(renamed_files)
# Method run returns an UploadResult object,
# which contains the collection of uploaded files, divided into 4 categories
upload_result
# UploadResult(
# successful=[RemoteFile('/user/onetl/Report/file1.json')],
# failed=[FailedLocalFile('/local/onetl/Report/file2.json')],
# ignored=[],
# missing=[],
# )
# Raise exception if there are failed files, or there were no files in the local filesystem, or some input file is missing
upload_result.raise_if_failed() or upload_result.raise_if_empty() or upload_result.raise_if_missing()
# Success!
S3 → Postgres
~~~~~~~~~~~~~~~~
Read files directly from an S3 path, convert them to a DataFrame, transform it, and then write it to a database.
.. code:: bash
# install onETL and PySpark
pip install onetl[spark]
.. code:: python
# Import pyspark to initialize the SparkSession
from pyspark.sql import SparkSession
# import function to setup onETL logging
from onetl.log import setup_logging
# Import required connections
from onetl.connection import Postgres, SparkS3
# Import onETL classes to read files
from onetl.file import FileDFReader
from onetl.file.format import CSV
# Import onETL classes to write data
from onetl.db import DBWriter
# change logging level to INFO, and set up default logging format and handler
setup_logging()
# Initialize new SparkSession with Hadoop AWS libraries and Postgres driver loaded
maven_packages = SparkS3.get_packages(spark_version="3.5.4") + Postgres.get_packages()
spark = (
SparkSession.builder.appName("spark_app_onetl_demo")
.config("spark.jars.packages", ",".join(maven_packages))
.getOrCreate()
)
# Initialize S3 connection and check it
spark_s3 = SparkS3(
host="s3.test.com",
protocol="https",
bucket="my-bucket",
access_key="somekey",
secret_key="somesecret",
# Access bucket as s3.test.com/my-bucket
extra={"path.style.access": True},
spark=spark,
).check()
# >>> INFO:|SparkS3| Connection is available
# Describe file format and parsing options
csv = CSV(
delimiter=";",
header=True,
encoding="utf-8",
)
# Describe DataFrame schema of files
from pyspark.sql.types import (
DateType,
DoubleType,
IntegerType,
StringType,
StructField,
StructType,
TimestampType,
)
df_schema = StructType(
[
StructField("id", IntegerType()),
StructField("phone_number", StringType()),
StructField("region", StringType()),
StructField("birth_date", DateType()),
StructField("registered_at", TimestampType()),
StructField("account_balance", DoubleType()),
],
)
# Initialize file df reader
reader = FileDFReader(
connection=spark_s3,
source_path="/remote/tests/Report", # path on S3 there *.csv files are located
format=csv, # file format with specific parsing options
df_schema=df_schema, # columns & types
)
# Read files directly from S3 as Spark DataFrame
df = reader.run()
# Check that DataFrame schema is same as expected
df.printSchema()
# root
# |-- id: integer (nullable = true)
# |-- phone_number: string (nullable = true)
# |-- region: string (nullable = true)
# |-- birth_date: date (nullable = true)
# |-- registered_at: timestamp (nullable = true)
# |-- account_balance: double (nullable = true)
# Apply any PySpark transformations
from pyspark.sql.functions import lit
df_to_write = df.withColumn("engine", lit("onetl"))
df_to_write.printSchema()
# root
# |-- id: integer (nullable = true)
# |-- phone_number: string (nullable = true)
# |-- region: string (nullable = true)
# |-- birth_date: date (nullable = true)
# |-- registered_at: timestamp (nullable = true)
# |-- account_balance: double (nullable = true)
# |-- engine: string (nullable = false)
# Initialize Postgres connection
postgres = Postgres(
host="192.169.11.23",
user="onetl",
password="somepassword",
database="mydb",
spark=spark,
)
# Initialize DBWriter
db_writer = DBWriter(
connection=postgres,
# write to specific table
target="public.my_table",
# with some writing options
options=Postgres.WriteOptions(if_exists="append"),
)
# Write DataFrame to Postgres table
db_writer.run(df_to_write)
# Success!