
Cannot set mtime on a symbolic link #412

Closed

jimmyH opened this issue Sep 25, 2011 · 1 comment
jimmyH commented Sep 25, 2011

Hi,

If you try to do a utimensat(AT_SYMLINK_NOFOLLOW) on a symlink, the mtime of the symlink is set to the current time and not to the requested time.

To work around this I have made the change below; however, I don't know whether there are any nasty side effects from this change.

diff --git a/module/zfs/zpl_inode.c b/module/zfs/zpl_inode.c
index dbfe61a..17acf37 100644
--- a/module/zfs/zpl_inode.c
+++ b/module/zfs/zpl_inode.c
@@ -354,6 +354,8 @@ const struct inode_operations zpl_symlink_inode_operations = {
 	.readlink	= generic_readlink,
 	.follow_link	= zpl_follow_link,
 	.put_link	= zpl_put_link,
+	.setattr	= zpl_setattr,
+	.getattr	= zpl_getattr,
 };
 
 const struct inode_operations zpl_special_inode_operations = {

regards,

James.

behlendorf commented

Thanks for the bug report and potential fix. What you propose may in fact be exactly the right fix, but I need to give it some more thought and do some testing first.

mmaybee pushed a commit to mmaybee/openzfs that referenced this issue Aug 27, 2021
Ingest written blocks to the zettacache.  Note that this is an on-disk format
change due to some code cleanup.  Several synergistic improvements as well:

don't allow overwrites for sync-to-convergence; clean up now-unused code in
Agent

limit disk i/o queue depth.  This prevents starvation of the 512 Tokio
blocking threads.  Otherwise we could use up all of these threads with the
write_raw() blocking task that does the pwrite() syscall.

separate disk i/o queue depth limits for reads vs writes.  This prevents
starvation of reads (for cache hits) when the writes (for inserts) are at the
max depth.

limit buffering of cache insertions.  When the buffer is full, the insertion
is dropped (ignored).  When the cache is slower than the network (which it is
when ingesting writes, but not when inserting from cache misses), we don't
want to wait for the cache, slowing down overall write performance; instead
we buffer a small amount (about a second's worth) and once that limit is
reached we don't insert the next block to the cache.  Victim blocks are
selected randomly (based on arrival order).  In the future we might want to
have the kernel tell us which writes contain data that's more likely to be
read in the future (e.g. ZFS metadata).

block allocator: continue looking from last allocation location.  This vastly
reduces CPU usage, once evictions have happened and the free space is
fragmented.

TARGET_CACHE_SIZE_PCT default to 80

more destructuring code cleanup in range_tree.rs

zettacache: remove unnecessary chunk_summary

fix DOSE-602 cross-PendingFreesLog object consolidation breaks pool
sdimitro pushed a commit to sdimitro/zfs that referenced this issue May 23, 2022