
[AIRFLOW-2524] Add Amazon SageMaker Training #3658

Merged
merged 8 commits into apache:master on Aug 12, 2018

Conversation

troychen728 (Contributor)

Make sure you have checked all steps below.

JIRA

Description

  • Here are some details about my PR, including screenshots of any UI changes:
    • This PR allows users to start an Amazon SageMaker training job using the SageMakerCreateTrainingJobOperator.
    • Users can also check the progress (state) of the training job through the SageMakerTrainingSensor.

Tests

  • My PR adds the following unit tests OR does not need testing for this extremely good reason:
    • tests/contrib/hooks/test_sagemaker_hook.py
    • tests/contrib/operators/test_sagemaker_create_training_job_operator.py
    • tests/contrib/sensors/test_sagemaker_base_sensor.py
    • tests/contrib/sensors/test_sagemaker_training_sensor.py

The Moto library does not support SageMaker, so the tests in this PR use the unittest.mock library instead. Since most AWS services in Airflow use Moto for unit tests, follow-up PRs to update these tests may be sent if Moto adds SageMaker support in the future.
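As an illustration of that approach, a minimal sketch of such a test might look like the following (assuming a `get_conn()` method on the hook that returns the boto3 SageMaker client, and the `create_training_job(config, wait_for_completion=...)` signature discussed later in this thread; the exact names in the merged code may differ):

```python
import unittest
from unittest import mock

from airflow.contrib.hooks.sagemaker_hook import SageMakerHook


class TestSageMakerTrainingJob(unittest.TestCase):

    @mock.patch.object(SageMakerHook, 'get_conn')
    def test_create_training_job(self, mock_get_conn):
        # Stand in for the boto3 SageMaker client, since Moto cannot provide one.
        mock_client = mock.MagicMock()
        mock_client.create_training_job.return_value = {
            'TrainingJobArn': 'arn:aws:sagemaker:us-east-1:123456789012:training-job/test-job'
        }
        mock_get_conn.return_value = mock_client

        hook = SageMakerHook(aws_conn_id='aws_default')
        response = hook.create_training_job(
            {'TrainingJobName': 'test-job'}, wait_for_completion=False)

        # We expect the hook to have submitted the job and returned the client response.
        self.assertTrue(mock_client.create_training_job.called)
        self.assertIn('TrainingJobArn', response)


if __name__ == '__main__':
    unittest.main()
```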

Commits

  • My commits all reference JIRA issues in their subject lines, and I have squashed multiple commits if they address the same issue. In addition, my commits follow the guidelines from "How to write a good git commit message":
    1. Subject is separated from body by a blank line
    2. Subject is limited to 50 characters
    3. Subject does not end with a period
    4. Subject uses the imperative mood ("add", not "adding")
    5. Body wraps at 72 characters
    6. Body explains "what" and "why", not "how"

Documentation

  • In case of new functionality, my PR adds documentation that describes how to use it.
    • When adding new operators/hooks/sensors, the autoclass documentation generation needs to be added.

Code Quality

  • Passes git diff upstream/master -u -- "*.py" | flake8 --diff

Add SageMaker Hook, Training Operator & Sensor

Co-authored-by: srrajeev-aws <[email protected]>

codecov-io commented Jul 27, 2018

Codecov Report

Merging #3658 into master will increase coverage by 0.52%.
The diff coverage is n/a.

Impacted file tree graph

@@            Coverage Diff             @@
##           master    #3658      +/-   ##
==========================================
+ Coverage   77.11%   77.63%   +0.52%     
==========================================
  Files         206      204       -2     
  Lines       15772    15801      +29     
==========================================
+ Hits        12162    12267     +105     
+ Misses       3610     3534      -76
Impacted Files Coverage Δ
airflow/api/common/experimental/mark_tasks.py 66.92% <0%> (-1.08%) ⬇️
airflow/hooks/druid_hook.py 87.67% <0%> (-1.07%) ⬇️
airflow/www/app.py 99.01% <0%> (-0.99%) ⬇️
airflow/__init__.py 80.43% <0%> (ø) ⬆️
airflow/utils/log/gcs_task_handler.py 0% <0%> (ø) ⬆️
airflow/plugins_manager.py 92.59% <0%> (ø) ⬆️
airflow/operators/python_operator.py 95.03% <0%> (ø) ⬆️
airflow/bin/cli.py 64.35% <0%> (ø) ⬆️
airflow/hooks/presto_hook.py 39.13% <0%> (ø) ⬆️
airflow/sensors/hdfs_sensor.py 100% <0%> (ø) ⬆️
... and 15 more

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 096ba9e...2ef4f6f. Read the comment docs.

self.sagemaker_conn_id = sagemaker_conn_id
self.use_db_config = use_db_config
self.region_name = region_name
super(SageMakerHook, self).__init__(*args, **kwargs)
Contributor

I would move this to the top of the init (which is more conventional).

Contributor Author

You are right, fixed.

class SageMakerHook(AwsHook):
"""
Interact with Amazon SageMaker.
sagemaker_conn_is is required for using
Contributor

Typo?

Contributor Author

Fixed

"""

def __init__(self,
sagemaker_conn_id=None,
Contributor

Does this replace aws_conn_id?

Contributor Author

No, it doesn't. It's only used if the user wants to use a config stored in the database. The SageMaker hook still uses aws_conn_id to get credentials.
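For readers following along, a hypothetical usage sketch of that split (the connection IDs are made up, and the exact constructor may differ from the merged code):

```python
from airflow.contrib.hooks.sagemaker_hook import SageMakerHook

# aws_conn_id supplies the AWS credentials as usual; sagemaker_conn_id is only
# consulted when use_db_config=True, to pull job configuration stored in that
# Airflow connection.
hook = SageMakerHook(
    aws_conn_id='aws_default',
    sagemaker_conn_id='sagemaker_default',
    use_db_config=True)
```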

"""
Contains general sensor behavior for SageMaker.
Subclasses should implement get_emr_response() and state_from_response() methods.
Subclasses should also implement NON_TERMINAL_STATES and FAILED_STATE constants.
@jrderuiter (Contributor) commented Jul 30, 2018

Personally, I think it would be better to implement these constants as properties, which raise a NotImplementedError in the base class. That way, you make it clearer for users (and for IDEs) that these should be overridden in a subclass.

Contributor Author

I replaced the constant with a method that raises an error if not implemented.
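For context, a rough sketch of the pattern being discussed (class, method, and import names are illustrative of the Airflow 1.10-era layout, not necessarily the exact code in this PR):

```python
from airflow.exceptions import AirflowException
from airflow.sensors.base_sensor_operator import BaseSensorOperator


class SageMakerBaseSensor(BaseSensorOperator):
    """Base sensor; subclasses must supply the state sets and the response."""

    def non_terminal_states(self):
        raise NotImplementedError('Define non_terminal_states() in a subclass')

    def failed_states(self):
        raise NotImplementedError('Define failed_states() in a subclass')

    def get_sagemaker_response(self):
        raise NotImplementedError('Define get_sagemaker_response() in a subclass')

    def state_from_response(self, response):
        raise NotImplementedError('Define state_from_response() in a subclass')

    def poke(self, context):
        response = self.get_sagemaker_response()
        state = self.state_from_response(response)
        if state in self.non_terminal_states():
            return False  # keep poking
        if state in self.failed_states():
            raise AirflowException('SageMaker job failed')
        return True  # terminal, successful state
```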

@Fokko (Contributor) left a comment

Thanks @troychen728 for taking the time to contribute to Apache Airflow.

I have a general comment about how the operator is designed. Until now, an operator would also check whether the submitted job runs properly and ends correctly. In your case this is done in separate sensors, which is not my preference. I would expect the operator to track the job and make sure it finishes successfully. What are your thoughts on this?

:param region_name: The AWS region_name
:type region_name: string
:param sagemaker_conn_id: The SageMaker connection ID to use.
:type aws_conn_id: string
Contributor

Should be sagemaker_conn_id

Contributor Author

Hi Fokko,
Thank you so much for your review. I really appreciate your feedback. I didn't figure out how to reply to your request, so I'll just reply to you here. The main reason I separated this into an operator and a sensor is that the success of the training job has two stages: successfully kicking off the training job, and the training job finishing successfully. The operator reports the first status, and the sensor reports the second. Also, since a training job runs on an AWS instance rather than the instance hosting Airflow, other operators that don't depend on the model actually being created can set their upstream dependency on the operator rather than the sensor. Finally, with a sensor, users can set parameters like poke_interval, which makes more sense on a sensor than on an operator.

Contributor

Hi Keliang, thanks for explaining the SageMaker process. I think it is very similar to, for example, the Druid hook that we have: https://github.com/apache/incubator-airflow/blob/master/airflow/hooks/druid_hook.py#L93

This hook kicks off a job using an HTTP POST of a JSON document to the Druid cluster and makes sure that it receives an HTTP 200. It then continues to poll the job by invoking the API periodically.

@srrajeev-aws (co-author)

@Fokko - To further add to Keliang's explanation: separating the operator (which kicks off the job) from the sensor (which monitors it) gives users flexibility. Depending on their use case(s), they may need to kick off multiple jobs/tasks in parallel and then monitor the completion of all the Amazon SageMaker jobs downstream. Some of these jobs may take hours, and we don't want to hold up the pipeline from initiating other downstream jobs, which would keep users from meeting their required SLA. Since there are many other known and unknown scenarios, we are careful not to couple the initialization and the monitoring of the job.

The design is similar to the Amazon EMR - https://github.com/apache/incubator-airflow/blob/master/airflow/contrib/example_dags/example_emr_job_flow_manual_steps.py
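To make the decoupled design concrete, a minimal DAG sketch follows (module paths are inferred from the test file names in this PR, and the constructor arguments and job config are hypothetical):

```python
from datetime import datetime

from airflow import DAG
from airflow.contrib.operators.sagemaker_create_training_job_operator import (
    SageMakerCreateTrainingJobOperator)
from airflow.contrib.sensors.sagemaker_training_sensor import SageMakerTrainingSensor

# Hypothetical training configuration; in practice this would be the full
# CreateTrainingJob request dict.
training_config = {'TrainingJobName': 'my-training-job'}

with DAG('sagemaker_training_example',
         start_date=datetime(2018, 8, 1),
         schedule_interval=None) as dag:

    start_training = SageMakerCreateTrainingJobOperator(
        task_id='start_training',
        training_job_config=training_config,
        wait_for_completion=False)  # return as soon as the job is submitted

    wait_for_training = SageMakerTrainingSensor(
        task_id='wait_for_training',
        job_name='my-training-job')

    # Other tasks can depend on start_training (job submitted) or on
    # wait_for_training (job finished), whichever they actually need.
    start_training >> wait_for_training
```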

Contributor

Hi @Fokko

I totally agree with what @srrajeev-aws mentioned here. And as a developer and maintainer of the SageMaker SDK, I can confirm that a lot of users run SageMaker on very large datasets, which means:

  1. Training is time-consuming.
  2. Users may run parallel training jobs.
  3. Users may want to manage ongoing training jobs.

So an asynchronous way of training would be preferred over a blocking one.

Contributor

@srrajeev-aws In this case you would just kick off multiple operators in parallel. This is inherent to the concept of a DAG: if the training jobs don't have any dependencies on each other, they will just run in parallel. The only flexibility that decoupling the kick-off of the job from monitoring it provides is the case where you don't care about the outcome of the job. This is also analogous to Druid, where an indexing job can take up to a couple of hours.
Having a separate operator and sensor would make the DAGs unnecessarily complicated, since in practice you will always use them as a pair.

Contributor Author

@Fokko
Thank you very much for your reply. I agree that multiple jobs can still run in parallel, but to my understanding there's also an option to run them sequentially. More importantly, what if there's already a training job running (whether initiated from Airflow or by other means) before a DAG is scheduled to run? Since a training job can take days to finish, I think this is a potential use case for Airflow users. In that case, a DAG could start with a sensor and have other nodes depend on it. That would not be possible if the operator and sensor were coupled together.

Contributor

@troychen728 The use case you mention justifies adding the sensor, but not removing the functionality from the operator.

Running sequentially is not really suited for production workflows, so it should not be a concern.

Having a sensor per operator is problematic though (for normal usage), hence keeping the functionality in the operator sounds the most sensible to me.

Contributor Author

@Fokko @gglanzani
I agree that giving users an easier way is always helpful, so I updated the code and added a wait option to the operator. The default is True, and unless the user sets it to False, the operator will only signal success after the training job has finished. The control logic should be very similar to that of the Druid hook you mentioned before.
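For readers skimming the thread, a minimal sketch of that waiting logic (boto3's create_training_job/describe_training_job calls and the SageMaker status values are real; the function shape and names are illustrative, not the exact code in this PR):

```python
import time


def create_training_job(client, training_job_config, wait_for_completion=True,
                        check_interval=30):
    """Submit a SageMaker training job and optionally block until it finishes."""
    non_terminal_states = {'InProgress', 'Stopping'}
    failed_states = {'Failed'}

    response = client.create_training_job(**training_job_config)
    if wait_for_completion:
        while True:
            desc = client.describe_training_job(
                TrainingJobName=training_job_config['TrainingJobName'])
            status = desc['TrainingJobStatus']
            if status in failed_states:
                raise RuntimeError(
                    'SageMaker job failed: %s' % desc.get('FailureReason'))
            if status not in non_terminal_states:
                break  # Completed or Stopped
            time.sleep(check_interval)
    return response
```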

:param sagemaker_conn_id: The SageMaker connection ID to use.
:type aws_conn_id: string
:param use_db_config: Whether or not to use db config
associated with sagemaker_conn_id.
Contributor

Missing :type use_db_config: bool

Contributor Author

Added
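With both review comments applied, the docstring would presumably read along these lines (a sketch, not the exact merged text):

```python
"""
:param sagemaker_conn_id: The SageMaker connection ID to use.
:type sagemaker_conn_id: string
:param use_db_config: Whether or not to use the db config
    associated with sagemaker_conn_id.
:type use_db_config: bool
"""
```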


@apply_defaults
def __init__(self,
sagemaker_conn_id=None,
Contributor

Please make the order of the arguments congruent with the docstring, or the other way around

Contributor Author

Changed the order

'AlgorithmSpecification':
create_training_params['AlgorithmSpecification'],
'RoleArn': 'string',
'InputDataConfig':
Contributor

Please do some autoformatting here.

Contributor Author

Fixed

if status in non_terminal_states:
running = True
elif status in failed_state:
raise AirflowException("SageMaker job failed")
Contributor

Would it be possible to add a reason why it failed?

Contributor Author

Thank you for pointing that out. I added the failure reason to the error message.

Contributor Author

Hi @Fokko,
I hope all is well. How do the changes look to you? Is there anything else you would like me to change?

return self.conn.list_hyper_parameter_tuning_job(
NameContains=name_contains, StatusEquals=status_equals)

def create_training_job(self, training_job_config, wait=True):
Contributor

Could we rename wait to wait_for_completion?

Contributor Author

Changed to wait_for_completion

@Fokko (Contributor) left a comment

Thanks @troychen728 for changing the logic. Having the ability to switch off the waiting seems fair to me. I still have a few minor comments; after that it LGTM.

Create a training job
:param training_job_config: the config for training
:type training_job_config: dict
:param wait: if the program should keep running until job finishes
Contributor

if the program should keep running until job finishes -> if the operator should block until job finishes

Contributor Author

Changed

Fokko (Contributor) commented on Aug 12, 2018

Thanks @troychen728. Looks good, thanks for the PR. Merging to master!

Fokko merged commit 4d2f83b into apache:master on Aug 12, 2018
lxneng pushed a commit to lxneng/incubator-airflow that referenced this pull request Aug 20, 2018
Add SageMaker Hook, Training Operator & Sensor
Co-authored-by: srrajeev-aws <[email protected]>
Fokko pushed a commit that referenced this pull request Sep 12, 2018
Add SageMaker Hook, Training Operator & Sensor
Co-authored-by: srrajeev-aws <[email protected]>
ashb pushed a commit to ashb/airflow that referenced this pull request Oct 22, 2018
Add SageMaker Hook, Training Operator & Sensor
Co-authored-by: srrajeev-aws <[email protected]>
galak75 pushed a commit to VilledeMontreal/incubator-airflow that referenced this pull request Nov 23, 2018
Add SageMaker Hook, Training Operator & Sensor
Co-authored-by: srrajeev-aws <[email protected]>
aliceabe pushed a commit to aliceabe/incubator-airflow that referenced this pull request Jan 3, 2019
Add SageMaker Hook, Training Operator & Sensor
Co-authored-by: srrajeev-aws <[email protected]>
cfei18 pushed a commit to cfei18/incubator-airflow that referenced this pull request Jan 23, 2019
Add SageMaker Hook, Training Operator & Sensor
Co-authored-by: srrajeev-aws <[email protected]>
wmorris75 pushed a commit to modmed/incubator-airflow that referenced this pull request Jul 31, 2019