Airflow Task/DAG fails if connection to OTEL collector fails (when otel integration is enabled) #34405
Comments
Thanks for opening your first issue here! Be sure to follow the issue template! If you are willing to raise a PR to address this issue, please do so; there is no need to wait for approval.
Yes, I think we should add some configuration for how to handle OTEL connection failures, but I don't know whether we should treat it as a bug fix or a new feature.
Interesting, thanks for the issue. Personally, I feel we can treat this as a bugfix, especially if we are wrapping it in a config option. I can see the argument for making "log and move on" the default behavior, though; let's discuss it a bit and see what folks think, and I can sort out the solution once we have some idea how to proceed. I'll cross-post this to the mailing list and try to get some conversation going.
I too think we should treat it as a bug, mainly because Airflow can still function and process tasks/DAGs without exporting telemetry data; any hard dependency on the collector is artificial and should be avoided. I would be in favor of making it a configurable option with the default behavior of "log and move on".
@potiuk @kaxil @eladkal @ferruzzi I think this is a show-stopping issue for the OpenTelemetry integration in Airflow. @potiuk said that he thinks this is expected behavior (see: #40286 (comment)), but I strongly disagree.
At the very least, we need to make this a config option, and I honestly think the default should be "warn and continue" rather than "fail the task", as the current state is so dangerous.
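A minimal sketch of what such a config-gated policy could look like; the option name `otel_fail_on_export_error` and the `export_with_policy` helper are hypothetical, not part of Airflow's current configuration or API:

```python
import logging

from airflow.configuration import conf

log = logging.getLogger(__name__)


def export_with_policy(export_fn, *args, **kwargs):
    """Run an OTEL export call, honoring a (hypothetical) fail-fast switch."""
    # Hypothetical option; not in Airflow's current config reference.
    fail_on_error = conf.getboolean(
        "metrics", "otel_fail_on_export_error", fallback=False
    )
    try:
        return export_fn(*args, **kwargs)
    except Exception:
        if fail_on_error:
            raise
        # Default: "log and move on", so a telemetry outage never fails a task.
        log.warning("OTEL export failed; continuing without telemetry", exc_info=True)
        return None
```

With `fallback=False`, the default is the "warn and continue" behavior proposed above, while operators who want fail-fast telemetry can still opt in.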
Apache Airflow version
2.7.1
What happened
I enabled the experimental OTEL integration, and sometimes the connection to the OTEL collector fails. Such connection failures are expected and common. However, right now the task fails, which adds an extra point of failure to every task and DAG. Sometimes the failure happens before the DAG has even started, so task-level retries can't help.
The only error message I see in this case is the connection failure, and it appears only in the worker logs, not in the Airflow UI, so it's not obvious why a task/DAG failed.
What you think should happen instead
In this situation, Airflow should print a warning and continue with the task.
When any other Python application is auto-instrumented with OTEL, the automatic instrumentation behaves the desired way: it ignores connection failures and only prints a warning message.
Maybe this could be configurable, but the default behaviour should be to ignore the exception.
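For illustration, here is a minimal sketch of that fault-tolerant pattern using the OpenTelemetry Python SDK's exporter interface; `SafeSpanExporter` is a hypothetical wrapper, not an existing SDK class:

```python
import logging
from typing import Sequence

from opentelemetry.sdk.trace import ReadableSpan
from opentelemetry.sdk.trace.export import SpanExporter, SpanExportResult

log = logging.getLogger(__name__)


class SafeSpanExporter(SpanExporter):
    """Delegate to a real exporter, but never let a connection failure propagate."""

    def __init__(self, delegate: SpanExporter) -> None:
        self._delegate = delegate

    def export(self, spans: Sequence[ReadableSpan]) -> SpanExportResult:
        try:
            return self._delegate.export(spans)
        except Exception:
            # Warn and drop the batch instead of failing the surrounding work.
            log.warning(
                "OTEL export failed; dropping %d spans", len(spans), exc_info=True
            )
            return SpanExportResult.FAILURE

    def shutdown(self) -> None:
        self._delegate.shutdown()
```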
How to reproduce
Enable the OTEL integration and turn off the collector. Run any DAG/task and it will fail.
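For reference, a minimal airflow.cfg sketch for enabling the experimental integration, assuming the `[metrics]` option names from the Airflow 2.7 configuration reference (verify against your version):

```ini
[metrics]
# Experimental OpenTelemetry metrics export.
otel_on = True
# Point these at a collector that is stopped/unreachable to reproduce.
otel_host = localhost
otel_port = 8889
otel_interval_milliseconds = 30000
```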
Operating System
Ubuntu 22.04.3 LTS
Versions of Apache Airflow Providers
apache-airflow-providers-amazon==8.6.0
apache-airflow-providers-celery==3.3.3
apache-airflow-providers-common-sql==1.7.1
apache-airflow-providers-ftp==3.5.1
apache-airflow-providers-http==4.5.1
apache-airflow-providers-imap==3.3.1
apache-airflow-providers-openlineage==1.0.2
apache-airflow-providers-postgres==5.6.0
apache-airflow-providers-redis==3.3.1
apache-airflow-providers-slack==8.0.0
apache-airflow-providers-snowflake==5.0.0
apache-airflow-providers-sqlite==3.4.3
apache-airflow-providers-ssh==3.7.2
Deployment
Other Docker-based deployment
Deployment details
Docker-based custom deployment on ECS Fargate.
Separate Fargate tasks for webserver, worker, scheduler and triggerer.
An OTEL collector is running as an agent in each task.
Anything else
The task fails every time the connection to the OTEL collector fails. Why the collector connection sometimes fails is the subject of a separate investigation; it may have to do with the size of the data/metrics being sent to the collector. But I believe those reasons are not very relevant to this bug.
Are you willing to submit PR?
Code of Conduct