requires root privileges and touches /var during build #966
Conversation
bot, retest this please
Tests still failing?
LGTM if you fix the failing tests.
bot, retest this please
Atomic/util.py
Outdated
os.makedirs(ATOMIC_VAR_LIB)

if not os.path.exists(ATOMIC_INSTALL_JSON):
    open(ATOMIC_INSTALL_JSON, 'a').close()
I don't understand why we create an empty file explicitly. It's not valid JSON. Why not simply have "absence of file" = empty? Basically we don't explicitly write it when we're just opening for read.
Atomic/util.py
Outdated
if not os.path.exists(lock_file_name):
    open(lock_file_name, 'a').close()

return open(lock_file_name, "r")
Won't this leak though? Don't we want to close() in release_lock()?
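(For illustration, a minimal sketch of that idea with hypothetical acquire/release helpers, not the names used in the patch: the handle stays open for the lifetime of the lock and is only closed on release.)

import fcntl

def acquire_lock(lock_file_name):
    # 'a' creates the lock file if it doesn't exist; the flock is tied to this handle.
    f = open(lock_file_name, "a")
    fcntl.flock(f, fcntl.LOCK_EX)
    return f

def release_lock(f):
    # Explicitly drop the lock and close the handle so it doesn't leak.
    fcntl.flock(f, fcntl.LOCK_UN)
    f.close()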
Atomic/util.py
Outdated
if not os.path.exists(ATOMIC_INSTALL_JSON):
    open(ATOMIC_INSTALL_JSON, 'a').close()

return open(ATOMIC_INSTALL_JSON, 'r')
Here too.
Atomic/util.py
Outdated
install_data = cls._read_install_data(cls.install_file_handle)
try:
    install_data = cls._read_install_data()
except IOError:
This is already caught by _read_install_data, no? So you should already be getting {} back in that case.
Atomic/util.py
Outdated
@staticmethod
def _read_install_data(file_handle):
def get_install_file_handle():
Does this need to be a function? Seems like we can just merge it into _read_install_data() so that you can just use context managers there to open and close, e.g.:
try:
    with open(ATOMIC_INSTALL_JSON, 'r') as f:
        return json.load(f)
except IOError as e:
    if e.errno == errno.ENOENT:
        return {}
    raise e
?
Agree with this, except I think for backcompat with the previous "empty file" model we should make it something like:
try:
    with open(ATOMIC_INSTALL_JSON, 'r') as f:
        if os.fstat(f.fileno()).st_size == 0:
            return {}
        return json.load(f)
except IOError as e:
    if e.errno == errno.ENOENT:
        return {}
    raise e
We shouldn't do anything at module import time, as that happens during builds, where we don't want to touch the host system (and may not have privileges to do so). This required some refactoring of the locking code too, as simply instantiating the decorator was creating files too. Closes: projectatomic#963
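(A rough sketch of the intent, with assumed constants and a hypothetical helper name: any filesystem setup is deferred to the code paths that actually write data, so nothing happens at import time.)

import os

# Assumed values for illustration; the real constants live in Atomic/util.py.
ATOMIC_VAR_LIB = "/var/lib/atomic"
ATOMIC_INSTALL_JSON = os.path.join(ATOMIC_VAR_LIB, "install.json")

def _ensure_install_dir():
    # Called only from write paths, never at module import time
    # (which also runs during package builds, possibly as non-root).
    if not os.path.isdir(ATOMIC_VAR_LIB):
        os.makedirs(ATOMIC_VAR_LIB)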
I went ahead and patched things up as above - there were more fixes required in the decorator, and I didn't run any tests other than simply doing as non-root:
We clearly need to tweak the redhat-ci default build process to be more mock-like.
Atomic/util.py
Outdated
with open(lock_file_name, "r") as f:
    while time_out < 10.5:  # Ten second attempt to get a lock
        try:
            fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
Won't this lock be released when f is closed though?
Just tried this out:
with open("lockfile", "r") as f:
fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
with open("lockfile", "r") as f:
fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
which works without issue. Whereas
f = open("lockfile", "r")
fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
g = open("lockfile", "r")
fcntl.flock(g, fcntl.LOCK_EX | fcntl.LOCK_NB)
throws a BlockingIOError as expected.
OK, let's try making it a context manager. There is also https://pypi.python.org/pypi/flockcontext ...dunno about adding the dep.
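(A minimal sketch of such a context manager using only the stdlib rather than the flockcontext dependency; the retry/timeout logic from the patch is omitted. The handle stays open for the whole with block, so the flock isn't dropped early.)

import fcntl
from contextlib import contextmanager

@contextmanager
def file_lock(lock_file_name):
    # 'a' creates the lock file if needed without truncating it.
    with open(lock_file_name, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        try:
            yield f
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)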
OK, this passes the tests now.
# Call the user code
yield
Can we yield the file handle here, so that... (continued in next comment)
def write_install_data(cls, new_data):
    cls._write_install_data(new_data)
    install_data = cls.read_install_data()
    with file_lock(ATOMIC_INSTALL_JSON):
here, we can do the whole read, update, write sequence under the lock?
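(Roughly what that could look like, as a hypothetical module-level sketch; the real code hangs these off InstallData as classmethods. The point is that the read, the merge, and the write all happen while the same lock is held.)

import json

def write_install_data(new_data):
    with file_lock(ATOMIC_INSTALL_JSON):
        try:
            with open(ATOMIC_INSTALL_JSON, "r") as f:
                install_data = json.load(f)
        except (IOError, ValueError):
            # Missing or empty file means no install data yet.
            install_data = {}
        install_data.update(new_data)
        with open(ATOMIC_INSTALL_JSON, "w") as f:
            json.dump(install_data, f)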
def delete_by_id(cls, iid, ignore=False):
    install_data = cls._read_install_data(cls.install_file_handle)
    install_data = cls.read_install_data()
    try:
        id_key = InstallData.get_install_name_by_id(iid, install_data=install_data)
    except ValueError as e:
        if not ignore:
            raise ValueError(str(e))
        return
    del install_data[id_key]
Wait, this won't take effect, right? The write methods (both old and new) are merging entries, so can deletes actually occur?
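(For deletes to actually stick, the write path would have to replace the stored dict rather than merge into it; a hypothetical sketch reusing names from the diff:)

import json

def delete_by_id(iid, ignore=False):
    with file_lock(ATOMIC_INSTALL_JSON):
        install_data = InstallData.read_install_data()
        try:
            id_key = InstallData.get_install_name_by_id(iid, install_data=install_data)
        except ValueError:
            if not ignore:
                raise
            return
        del install_data[id_key]
        # Write the dict back wholesale; a merging write would re-add the deleted key.
        with open(ATOMIC_INSTALL_JSON, "w") as f:
            json.dump(install_data, f)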
Probably, but that's not a regression from this patch.
I could think of many more things to do to improve this, not least of which is moving it to its own file with documentation, and such...but can we do that kind of thing in a separate PR?
I'd really like to get FAHC going again.
(And do we even really need this InstallData thing? Can't we (for system containers) use the ostree commit metadata?)
Atomic/util.py
Outdated
with file_lock(ATOMIC_INSTALL_JSON):
    for x in new_data:
        install_data[x] = new_data[x]
        temp_file = tempfile.NamedTemporaryFile(mode='w', delete=False)
Seems like this line and the rest below are supposed to be unindented by one level?
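(i.e. something like the following, assuming the surrounding code from the diff, with the temp-file write dedented so it runs once after the merge loop but still under the lock:)

with file_lock(ATOMIC_INSTALL_JSON):
    for x in new_data:
        install_data[x] = new_data[x]
    # One level out: create the temp file once, after merging, still holding the lock.
    temp_file = tempfile.NamedTemporaryFile(mode='w', delete=False)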
Hah, yes.
Pushed a fixup for that ⬇️
As far as I can tell, the only place that reads (not writes) InstallData right now is:
And that's just for a warning. How about we just delete this code? I'd like to deprecate the INSTALL label.
I am fine with moving that way.
@rhatdan you are in favor of not honoring the INSTALL label? What's going to happen to all the images already out there?
This reverts: projectatomic#950 See: projectatomic#966

The idea here originally was good, but we have to be a *lot* more careful in introducing data stores than this. They need "schema design", documentation, and most importantly IMO: lifecycle coupling. If I e.g. `rm -rf /var/lib/docker` (as a lot of documentation tells people to do), this install data would stick around. Similarly for deleting content directly in `/ostree/repo`. You could say that's a bug, but it'll happen.

The best way to do metadata is to use the underlying store. OSTree in particular already has an API for storing metadata. Any containers we write there should use that. I'm not sure offhand if we can do something similar with Docker. But basically I don't think it's worth carrying another data store *just* to show a warning.
I am fine with de-emphasizing it, not removing it.
A good chunk of the
So the current RHEL rsyslog container is not a use case? Same with the openscap image? We have spent years trying to get people to use labels. Maybe I am missing something.
Yeah, we need to make the rsyslog container a system container.
I think we will need to continue to support INSTALL labels, but we really want to move most cases where an app is dropping content on the host to use something like system containers. INSTALL labels begin to fall apart when you don't require docker.
Basically
There were things blocking this before, like rpm-ostree not supporting local RPM installs, etc., which we've fixed now. Though to really complete that picture we need livefs, which I'm working on.
Anyway, I think this PR is in an OK state; it could be a lot better, but I'd really like to just get FAHC going and revisit this in later PRs.
LGTM
🙀
Note CI is currently down: projectatomic/papr#33
☀️ Test successful - status-redhatci
Fixes issue #963