Catch errors raised when we try to delete objects that don't exist.
This will happen from time to time as we're calling #delete_container_objects
on each worker instance.

The calls to OpenShift are asynchronous, so the next worker we tell
to #delete_container_objects may try to delete objects that have
already been removed.

Additionally, this makes it difficult to track when objects have
been deleted. Ideally we would scale a deployment down and have the
last worker delete the deployment itself, but a previous worker may
not have deleted the worker record by the time we tell the next worker
to exit.
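
A minimal sketch of the race described above (the ContainerOrchestrator
receiver and the "worker-svc" name are hypothetical; #delete_service and
KubeException match the diff below):

# Worker A's cleanup has already removed the service by the time worker
# B's cleanup runs, so the second call raises KubeException ("not found").
orchestrator = ContainerOrchestrator.new
orchestrator.delete_service("worker-svc") # removes the service
orchestrator.delete_service("worker-svc") # object already gone; the
                                          # KubeException is rescued and
                                          # swallowed after this change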
carbonin committed Feb 23, 2018
1 parent 37f03a8 commit 9f6efae
Showing 1 changed file with 8 additions and 0 deletions.

lib/container_orchestrator.rb
@@ -47,18 +47,26 @@ def delete_deployment_config(name)
     scale(name, 0)
     connection.delete_deployment_config(name, my_namespace)
     delete_replication_controller(rc.metadata.name) if rc
+  rescue KubeException => e
+    raise unless e.message =~ /not found/
   end
 
   def delete_replication_controller(name)
     kube_connection.delete_replication_controller(name, my_namespace)
+  rescue KubeException => e
+    raise unless e.message =~ /not found/
   end
 
   def delete_service(name)
     kube_connection.delete_service(name, my_namespace)
+  rescue KubeException => e
+    raise unless e.message =~ /not found/
   end
 
   def delete_secret(name)
     kube_connection.delete_secret(name, my_namespace)
+  rescue KubeException => e
+    raise unless e.message =~ /not found/
   end
 
   private
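
A note on the design choice: the rescue matches on the exception message
rather than the HTTP status. A sketch of a stricter alternative, assuming
the kubeclient gem's KubeException exposes the status via #error_code
(this is not what the commit does):

def delete_secret(name)
  kube_connection.delete_secret(name, my_namespace)
rescue KubeException => e
  # 404 means the object was already deleted; re-raise anything else.
  raise unless e.error_code == 404
end

Matching the message keeps the change minimal; matching the status code
would avoid false positives on unrelated errors that happen to contain
"not found", but depends on the client surfacing the code.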
