This repository has been archived by the owner on Oct 10, 2020. It is now read-only.

Atomic/install.py: Record installs for later use #950

Closed
wants to merge 1 commit

Conversation

baude
Member

@baude baude commented Mar 23, 2017

When installing an image, we now write a small bit of json
to /var/lib/atomic/install.json. The json format is:

{
    <image_name>: {
        id: <image_id>,
        install_date: <install_date_in_utc>
    }
}

This will be used in update, run, etc to ensure that any image
with an INSTALL label is first installed.
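A minimal sketch of building one such record (the field names follow the description above; the exact timestamp format is an assumption, since the PR only says the install date is recorded in UTC):

```python
import json
from datetime import datetime, timezone

def make_install_record(image_name, image_id):
    # One entry of the install.json map described above.
    # The ISO-8601 UTC timestamp is an assumption.
    return {
        image_name: {
            "id": image_id,
            "install_date": datetime.now(timezone.utc).isoformat(),
        }
    }

record = make_install_record("docker.io/example/app:latest", "sha256:abcd1234")
print(json.dumps(record, indent=4))
```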

Atomic/util.py Outdated
@staticmethod
def write_install_data(install_data):
with open(ATOMIC_INSTALL_JSON, 'w') as write_data:
json.dump(install_data, write_data)
Collaborator

This can be risky: if something goes wrong in json.dump() we lose the entire db of the images. We should probably write to a temporary file in the same directory first, and then move it to ATOMIC_INSTALL_JSON.
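The temp-file-and-rename approach suggested here could look like this (a sketch, not the code that landed; `os.replace` is the Python 3 spelling of an atomic rename on POSIX):

```python
import json
import os
import tempfile

def write_install_data_atomic(install_data, path):
    # Write to a temp file in the same directory, then atomically
    # rename it over the target, so a failure in json.dump() cannot
    # truncate the existing database.
    dirname = os.path.dirname(path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "w") as tmp:
            json.dump(install_data, tmp)
            tmp.flush()
            os.fsync(tmp.fileno())
        os.replace(tmp_path, path)  # atomic on POSIX
    except Exception:
        os.unlink(tmp_path)
        raise
```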

Member

Yes, I agree: create a tmpfile and rename.

Member

Should we use file locks, to make sure that two processes accessing the file at the same time don't overwrite each other? Or should we create a different file for each installed image?

Member Author

I'll implement a file locking mechanism on write.


atomic Outdated
@@ -124,6 +124,8 @@ def create_parser(help_text):
help=_("show atomic version and exit"))
parser.add_argument('--debug', default=False, action='store_true',
help=_("show debug messages"))
parser.add_argument('-i', '--ignore', default=False, action='store_true',
help=_("allows you to ignore certain errors"))
Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This is pretty vague. Either specify what you are ignoring, or don't give the user the option.

@baude
Member Author

baude commented Mar 24, 2017

updated with a file locking mechanism

Atomic/util.py Outdated
def write_install_data(install_data):
with open(ATOMIC_INSTALL_JSON, 'w') as write_data:
time_out = 0
f_lock = False
Collaborator

Should the lock protect the entire (read + modify + write) operation? Otherwise, with two processes trying to write at the same time, we can lose what was written by the process that terminated first if they read the same initial file content.
An alternative would probably be to keep a separate file for each container, like:

ATOMIC_VAR_LIB/install-data/${NAME}.json

so we don't require any sort of synchronization. Would it work?

Prune can clean up the files that are left there without a related container still active (if the user manually uninstalled it, or the previous rm failed for any reason).
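A sketch of the per-container layout proposed above (the `ATOMIC_VAR_LIB` path and helper names are hypothetical, following the `ATOMIC_VAR_LIB/install-data/${NAME}.json` scheme from the comment):

```python
import json
import os

# Hypothetical layout: ATOMIC_VAR_LIB/install-data/${NAME}.json
ATOMIC_VAR_LIB = "/var/lib/atomic"
INSTALL_DATA_DIR = os.path.join(ATOMIC_VAR_LIB, "install-data")

def write_container_record(name, record, base_dir=INSTALL_DATA_DIR):
    # One file per container, so concurrent installs of different
    # images never touch the same file and need no locking.
    os.makedirs(base_dir, exist_ok=True)
    with open(os.path.join(base_dir, "%s.json" % name), "w") as f:
        json.dump(record, f)

def read_container_record(name, base_dir=INSTALL_DATA_DIR):
    with open(os.path.join(base_dir, "%s.json" % name)) as f:
        return json.load(f)
```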

Member

I kind of like the separate file per container.

Member Author

The problem with separate files is that it makes it significantly more complicated to look up images by hash, etc., and eventually we come down to the same problem of reading or writing the same file. I'll extend the locking.
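Extending the lock to cover the whole read + modify + write could look roughly like this (a sketch using `fcntl.flock` on a sidecar `.lock` file; the actual implementation in the PR may differ):

```python
import fcntl
import json
import os

def update_install_data(path, image_name, record):
    # Hold an exclusive lock across the entire read+modify+write,
    # so a concurrent writer cannot clobber this update.
    with open(path + ".lock", "w") as lockf:
        fcntl.flock(lockf, fcntl.LOCK_EX)
        try:
            if os.path.exists(path):
                with open(path) as f:
                    data = json.load(f)
            else:
                data = {}
            data[image_name] = record
            with open(path, "w") as f:
                json.dump(data, f)
        finally:
            fcntl.flock(lockf, fcntl.LOCK_UN)
```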

@baude baude force-pushed the install branch 3 times, most recently from a99ad0f to 66878b4 Compare March 28, 2017 15:08
When installing an image, we now write a small bit of json
to /var/lib/atomic/install.json.  The json format is:

{
    <image_name>: {
        id: <image_id>,
        install_date: <install_date_in_utc>
    }
}

This will be used in update, run, etc to ensure that any image
with an INSTALL label is first installed.
@baude
Member Author

baude commented Mar 28, 2017

@rhatdan implemented locking with @giuseppe and also addressed the ignore comment you made. Ready for merging.

@baude
Member Author

baude commented Mar 28, 2017

def wrapper(*args, **kwargs):
get_lock()
ret = func(*args, **kwargs)
release_lock()
Member

Should we do a try/finally block in case func raises an exception?

@rhatdan
Member

rhatdan commented Mar 28, 2017

LGTM Make sure @giuseppe Likes it.

@giuseppe
Collaborator

the decorator looks very neat!

@rh-atomic-bot r+ 831a9a0

@rh-atomic-bot

⌛ Testing commit 831a9a0 with merge 1b216a4...

@rh-atomic-bot

☀️ Test successful - status-redhatci
Approved by: giuseppe
Pushing 1b216a4 to master...

cgwalters added a commit to cgwalters/atom that referenced this pull request Apr 13, 2017
This reverts: projectatomic#950
See: projectatomic#966

The idea here originally was good, but we have to be a *lot* more
careful in introducing data stores than this.  They need "schema design",
documentation, and most importantly IMO: lifecycle coupling.

If I e.g. `rm -rf /var/lib/docker` (as a lot of documentation tells
people to do), this install data would stick around.  Similarly
for deleting content directly in `/ostree/repo`.  You could say
that's a bug, but it'll happen.

The best way to do metadata is to use the underlying store.  OSTree
in particular already has an API for storing metadata.  Any containers
we write there should use that.

I'm not sure offhand if we can do something similar with Docker.
But basically I don't think it's worth carrying another data store
*just* to show a warning.
cgwalters pushed a commit to cgwalters/atom that referenced this pull request Apr 13, 2017
jlebon added a commit to jlebon/atomic-devmode that referenced this pull request May 23, 2017
The atomic CLI changed behaviour recently (in
projectatomic/atomic#950) to require containers
with INSTALL labels to be explicitly installed before being run. This is
probably something we should have done to begin with.

Downstream RHBZ: #1454439
jlebon added a commit to projectatomic/atomic-devmode that referenced this pull request May 23, 2017