
Add check for PEM exists for Hive and Livy DAG. #621

Merged — 2 commits merged into main on Sep 21, 2022
Conversation

@rajaths010494 (Contributor) commented Sep 5, 2022

While running the Hive or Livy DAG, where we use SSH to log in to the machine, the PEM file was missing from the /tmp folder. The PEM is now downloaded into /tmp within the SSH task, so it is available on the same worker node. This PR checks whether the file exists while copying the PEM into /tmp and downloading it before SSHing into the machine.
If the file does not exist, an exception is raised.

closes: #620
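A minimal sketch of the check described above, assuming a PEM path under /tmp (the path and function name here are illustrative, not the PR's actual code):

```python
import os

# Hypothetical helper illustrating the described behavior: verify the
# PEM exists at the expected /tmp path before SSHing into the machine,
# and fail fast with an exception if it is missing.
def check_pem_exists(pem_path: str = "/tmp/cluster.pem") -> str:
    if not os.path.exists(pem_path):
        raise FileNotFoundError(
            f"PEM file not found at {pem_path}; "
            "it must be downloaded before SSHing into the machine."
        )
    return pem_path
```

Raising early here surfaces the missing-key problem at the copy/check step instead of as an opaque SSH authentication failure later in the task.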


codecov bot commented Sep 5, 2022

Codecov Report

Base: 98.30% // Head: 98.30% // No change to project coverage 👍

Coverage data is based on head (513c50d) compared to base (8e11ca6).
Patch has no changes to coverable lines.

Additional details and impacted files
@@           Coverage Diff           @@
##             main     #621   +/-   ##
=======================================
  Coverage   98.30%   98.30%           
=======================================
  Files          79       79           
  Lines        4131     4131           
=======================================
  Hits         4061     4061           
  Misses         70       70           

☔ View full report at Codecov.

@pankajkoti (Collaborator) left a comment

I am thinking that the tasks in the DAG could be running on different workers and hence there could be failure as the file system would not be common between them. Can we think if this could be the reason and if yes, how we can resolve this?

@rajaths010494 (Contributor, Author) commented:

> I am thinking that the tasks in the DAG could be running on different workers and hence there could be failure as the file system would not be common between them. Can we think if this could be the reason and if yes, how we can resolve this?


When I checked on the worker node, the PEM file wasn't present, so I added the check for whether the file exists. As you mentioned, tasks could run on different workers, but I see only one worker node.


If multiple workers are present, then files created on one worker should also be made available on the other workers.

@pankajkoti (Collaborator) commented:


Yes, currently we have one worker, so that may not be the problem. However, I believe there is no sync mechanism that makes files created on one worker node available on other worker nodes. Even if such a sync existed, there would be delays, and we should not rely on a sync with lag.

Although this change is needed and helps us raise errors earlier, it still won't solve the problem of the DAG failing when the copy fails. I believe we need something more to solve these intermittent failures.

@rajaths010494 (Contributor, Author) commented:


Yes, we can merge the copy task into the task that uses the PEM, so whichever worker the task runs on, it copies the PEM at that time and then SSHes into the machine.

@rajaths010494 (Contributor, Author) commented:


@pankajkoti I have modified the code to download the PEM as part of the SSH task, so the PEM now gets downloaded onto the same worker where the SSH task runs.
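The fix described above can be sketched as a single step that downloads and verifies the PEM on the worker it runs on, so the key is always local to the task that uses it. This is an illustrative sketch, not the PR's actual code: `prepare_pem_for_ssh` and `download_pem` are hypothetical names, and `download_pem` stands in for whatever fetch mechanism the DAG uses.

```python
import os

def prepare_pem_for_ssh(download_pem, pem_path: str = "/tmp/cluster.pem") -> str:
    """Run inside the SSH task itself, so the download happens on the
    same worker that will use the key (no cross-worker filesystem
    assumptions). Returns the verified key path for the SSH step."""
    # Fetch the PEM onto this worker right before it is needed.
    download_pem(pem_path)
    if not os.path.exists(pem_path):
        raise FileNotFoundError(
            f"PEM not found at {pem_path} after download; cannot SSH."
        )
    # ssh refuses private keys with loose permissions.
    os.chmod(pem_path, 0o600)
    return pem_path
```

Because the download, existence check, and SSH all happen in one task, it no longer matters which worker the scheduler picks, which removes the intermittent failures discussed above.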

@rajaths010494 rajaths010494 merged commit 8cef32f into main Sep 21, 2022
@rajaths010494 rajaths010494 deleted the fix_ssh_error branch September 21, 2022 08:58