
use Tar.jl to create and extract tarballs #29

Merged: 3 commits into master from sk/Tar.jl, Apr 28, 2020
Conversation

@StefanKarpinski
Collaborator

No description provided.

git-tree-sha1 = "5b08ed6036d9d3f0ee6369410b830f8873d4024c"
uuid = "b99e7846-7c00-51b0-8f62-c81ae34c0232"
version = "0.5.8"

Member

:)

@staticfloat
Member

Did we change something in the internals? These tests are failing.

@staticfloat
Member

I figured out the test errors, pushed a fix to #30. I'll merge that, then merge this, then deploy it somewhere tomorrow.

@staticfloat staticfloat reopened this Apr 28, 2020
@staticfloat staticfloat merged commit dab0c1d into master Apr 28, 2020
@staticfloat staticfloat deleted the sk/Tar.jl branch April 28, 2020 07:11
johnnychen94 added a commit to johnnychen94/StorageMirrorServer.jl that referenced this pull request Apr 29, 2020
push!(paths, joinpath(path, file))
open(tarball, write=true) do io
    open(pipeline(compress, io), write=true) do io
        Tar.create(tree_path, io)
    end
end
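
(For reference, here is the pattern from this diff in self-contained form; the directory name, output path, and the choice of gzip as the external compressor are illustrative assumptions, not taken from the PR.)

using Tar

tree_path = "path/to/tree"    # hypothetical directory to archive
tarball   = "output.tar.gz"   # hypothetical destination file
compress  = `gzip -9`         # stand-in for any external compressor command

open(tarball, write=true) do file_io
    # pipeline(compress, file_io) runs the compressor with its stdout
    # redirected into the tarball file; opening that with write=true
    # yields a stream connected to the compressor's stdin.
    open(pipeline(compress, file_io), write=true) do proc_io
        Tar.create(tree_path, proc_io)
    end
end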
@johnnychen94 (Contributor) commented on Apr 29, 2020

I've observed a performance regression after pulling in this PR and #32, measured by building tarballs for the General registry and [email protected]+5, which together require 15 tarballs (1 registry, 1 package source tree, and 13 artifacts).

  • before: 24.134260 seconds (7.35 M allocations: 472.674 MiB, 1.60% gc time)
  • after: 32.119186 seconds (5.74 M allocations: 438.242 MiB, 0.39% gc time)

Juno.@profiler tells me that Tar.create is the bottleneck.

The overall changes can be found at johnnychen94/StorageMirrorServer.jl@f77fb30


bash-3.2$ gtar --version
tar (GNU tar) 1.30
Copyright (C) 2017 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Written by John Gilmore and Jay Fenlason.
julia> versioninfo()
Julia Version 1.4.1
Commit 381693d3df* (2020-04-14 17:20 UTC)
Platform Info:
  OS: macOS (x86_64-apple-darwin18.7.0)
  CPU: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-8.0.1 (ORCJIT, skylake)
Environment:
  JULIA_NUM_THREADS = 8

(StorageServer) pkg> st Tar
Project StorageServer v0.1.0
Status `~/Documents/Julia/StorageServer/Project.toml`
  [a4e569a6] Tar v1.3.0
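
(As a rough way to reproduce this kind of A/B timing — the tree path below is a placeholder, and the real benchmark built compressed tarballs, so treat the comparison as indicative only:)

julia> using Tar

julia> tree = "path/to/artifact/tree";  # placeholder input directory

julia> @time Tar.create(tree, "julia.tar");       # pure-Julia path

julia> @time run(`gtar -cf gnu.tar -C $tree .`);  # GNU tar on the same tree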

@StefanKarpinski (Collaborator, Author) replied

Being 33% slower than the hyper-optimized GNU tar is actually quite good. I've opened an issue about using sendfile to speed up tarball creation and extraction: JuliaIO/Tar.jl#33. That said, matching GNU tar's performance isn't really a high priority. Since we're not writing to a real file descriptor here but to a TranscodingStream, the sendfile optimization may not actually help in this case; instead, we should perhaps use a buffer size that matches what TranscodingStreams uses, and avoid copying through multiple buffers. In general, what's needed is an API for saying "send this much data from here to there" that does whatever is most efficient for the given source and destination.
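
(A sketch of the kind of "send this much data from here to there" helper described above; send_data is a hypothetical name, not an existing Tar.jl API, and the 16 KiB figure is an assumption about TranscodingStreams' default buffer size.)

# Hypothetical helper: copy exactly n bytes from src to dst through one
# reused buffer, instead of letting each layer allocate its own.
const BUFSIZE = 16 * 1024  # assumed TranscodingStreams default buffer size

function send_data(src::IO, dst::IO, n::Integer)
    buf = Vector{UInt8}(undef, BUFSIZE)
    remaining = n
    while remaining > 0
        nread = readbytes!(src, buf, min(remaining, BUFSIZE))
        nread == 0 && throw(EOFError())   # ran out of input early
        write(dst, view(buf, 1:nread))    # write only the bytes actually read
        remaining -= nread
    end
    return n
end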
