
hang at Ionic.Zlib.ParallelDeflateOutputStream #74

Closed · AartBluestoke opened this issue Jun 8, 2016 · 8 comments · Fixed by #76

Comments

AartBluestoke (Contributor) commented Jun 8, 2016

Using Ionic.Zip 1.9.9.0 (and 1.9.3; I refreshed to the latest library to test), the following code hangs at zip.Save():

```csharp
using (var zip = new ZipFile(m_Args.TmpFile))
{
    zip.UseZip64WhenSaving = Zip64Option.Always;
    zip.ParallelDeflateThreshold = 0;
    zip.Password = m_Args.ClearTextPassword;
    zip.Encryption = EncryptionAlgorithm.WinZipAes256;

    foreach (var f in m_Args.Files)
    {
        // multiple files here; this code is slightly simplified from the
        // production code it was taken from
        zip.AddEntry(f, (e, s) => WriteFileEntry(s, ...));
    }

    // this is where it all hangs, on completion of a file: the stack trace
    // below shows that 100% of runtime is spent in WaitOne after the run
    // completes (logging before this point shows the expected output has
    // been created; it just hasn't cleaned up)
    zip.Save();
}

void WriteFileEntry(System.IO.Stream a_To, ...)
{
    a_To.Write(data, 0, data.Length); // and so on
    a_To.Flush();
}
```

Attaching a profiler after it has hung shows that the main thread is waiting inside Save(), on the stream close. I believe this is at the end of the first file: the app's status bar never reaches "zipping file 2/5", and the last log row shown is the end of the first file.
[screenshot: profiler stack trace of the hung main thread]

AartBluestoke (Contributor, Author) commented Jun 8, 2016

The non-parallel write completes with no problems.

May be a dupe of another issue.

Edit: passing a single file seems to work; multiple files fail. (I don't have many multiple-file tests; will update.)
Edit: manually creating, using, and disposing ParallelDeflateOutputStream instances doesn't seem to cause hung threads.
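
For illustration, a minimal sketch of the manual test described in the second edit, with an invented file name and payload: Ionic.Zlib's ParallelDeflateOutputStream is created, written to, and disposed directly, which (per the report) does not hang.

```csharp
// A minimal sketch of the manual test described in the second edit above.
// The output path and payload are invented for illustration.
using System;
using System.IO;
using Ionic.Zlib;

class ManualParallelDeflateTest
{
    static void Main()
    {
        var data = new byte[1024 * 128];
        new Random(42).NextBytes(data);

        using (var target = File.Create("test.deflate"))
        using (var pdos = new ParallelDeflateOutputStream(target))
        {
            pdos.Write(data, 0, data.Length);
        } // Dispose flushes and closes; this path did not hang for the reporter
    }
}
```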

AartBluestoke (Contributor, Author) commented Jun 8, 2016

Executing other multiple-file tests, another one hung on completion of the second file (the first was added without any issues), and again the stack traces of the hung process show no worker threads present.

AartBluestoke (Contributor, Author) commented

Possible cause:

```csharp
// Zip.ZipEntry._WriteEntryData():
// ...
this._WriteDelegate(this.FileName, (Stream) output);
// no lines of code here
this.FinishOutputStream(s, entryCounter, stream1, stream2, output);

internal void FinishOutputStream(Stream s, CountingStream entryCounter,
    Stream encryptor, Stream compressor, CrcCalculatorStream output)
{
    if (output == null)
        return;
    output.Close(); // hangs on the final flush of the underlying stream
}
```

The output stream is flushed successfully at the end of the load, but for some reason the extra flush on Close() hangs...

[screenshot: decompiled flush/close path]

The first flush calls EmitPendingBuffers(false,false).
The second flush comes in via Close and calls EmitPendingBuffers(true,false).

...

_newlyCompressedBlob is an AutoResetEvent.
_newlyCompressedBlob is only signaled at the end of every _DeflateOne.

Therefore, if you call EmitPendingBuffers(true) when there are no pending buffers, the wait at the beginning of the function is never released:

```csharp
if (doAll || mustWait) _newlyCompressedBlob.WaitOne();
```
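
To make the failure mode concrete, here is a stripped-down illustration of the pattern just described. It is not the library's actual code; the field and method names are borrowed from the discussion above, and the bookkeeping is invented.

```csharp
// Stripped-down illustration (NOT the library's actual code) of the deadlock:
// an AutoResetEvent that is only ever signaled by a worker finishing a block.
// If the "drain everything" path waits unconditionally while no block is in
// flight, WaitOne() blocks forever.
using System.Threading;

class DeadlockSketch
{
    static readonly AutoResetEvent _newlyCompressedBlob = new AutoResetEvent(false);
    static int _pending; // blocks handed to workers but not yet finished

    static void _DeflateOne()
    {
        // ... compress one block ...
        Interlocked.Decrement(ref _pending);
        _newlyCompressedBlob.Set(); // the ONLY place the event is signaled
    }

    static void EmitPendingBuffers(bool doAll, bool mustWait)
    {
        // BUG: when doAll is true but _pending == 0, no worker will ever call
        // Set() again, so this wait never returns and Close() hangs.
        if (doAll || mustWait)
            _newlyCompressedBlob.WaitOne();

        // ... write out whatever has been compressed ...
    }
}
```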

AartBluestoke added a commit to AartBluestoke/DotNetZip.Semverd that referenced this issue on Jun 9, 2016:

Should resolve haf#74.

_lastFilled is incremented when work is created.
_latestCompressed is set by _DeflateOne.
_lastWritten is set by EmitPendingBuffers.

All are monotonically incremented.

Line 883: if _latestCompressed is equal to _lastFilled, then all work is complete and waiting can hang.
Line 971: if _lastWritten == _latestCompressed but there is still a worker thread running (i.e. _lastFilled > _latestCompressed), then the last chunk may not be written by the doAll flush.

The change on line 971 may slightly change the multi-threaded behavior of the procedure if Flush(true) is called from a different thread than Write; but in that case the entire current implementation is flaky anyway.
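
A sketch of what those two guards amount to, assuming the simplified loop below captures the intent of the patch; it is NOT the verbatim diff merged in #76.

```csharp
// Simplified view of the corrected wait/drain logic; not the verbatim patch
// from #76. Field names follow the commit message above.
using System.Threading;

class ParallelDeflateFixSketch
{
    readonly AutoResetEvent _newlyCompressedBlob = new AutoResetEvent(false);
    int _lastFilled = -1;        // incremented when work is created
    int _latestCompressed = -1;  // set by _DeflateOne when a block finishes
    int _lastWritten = -1;       // advanced here after a block is written out

    void EmitPendingBuffers(bool doAll, bool mustWait)
    {
        do
        {
            // Guard 1 ("line 883"): only wait while a worker can still signal;
            // when _latestCompressed == _lastFilled all work is complete and
            // nothing will ever Set() the event again.
            if ((doAll && _latestCompressed < _lastFilled) || mustWait)
                _newlyCompressedBlob.WaitOne();

            // ... write blocks in (_lastWritten .. _latestCompressed],
            //     advancing _lastWritten ...

            // Guard 2 ("line 971"): when draining, loop until everything
            // *filled* has been written, not just everything compressed so
            // far; a worker may still be running (_lastFilled > _latestCompressed).
        } while (doAll && _lastWritten < _lastFilled);
    }
}
```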
haf closed this as completed in #76 on Jun 9, 2016.
AndrewBennet commented

This still seems to be a problem. On the latest NuGet release (1.10.1), zipping a file consisting of 1024 * 128 * 9 random bytes still leads to a corrupt archive; unzipping gives the error "bad read of entry {filename} from compressed archive." Decompiling the NuGet package shows me that the NuGet release includes this fix...
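
For anyone trying to reproduce this, a sketch of the scenario described above; file names and the use of ParallelDeflateThreshold = 0 (as in the original report) are illustrative assumptions.

```csharp
// Sketch of the reported scenario: an entry of exactly 1024 * 128 * 9 random
// bytes (a multiple of 64 kB), forced down the parallel-deflate path.
// File names are invented for illustration.
using System;
using System.IO;
using Ionic.Zip;

class CorruptArchiveRepro
{
    static void Main()
    {
        var data = new byte[1024 * 128 * 9];
        new Random().NextBytes(data);
        File.WriteAllBytes("payload.bin", data);

        using (var zip = new ZipFile("repro.zip"))
        {
            zip.ParallelDeflateThreshold = 0; // always use the parallel deflater
            zip.AddFile("payload.bin");
            zip.Save();
        }
        // On affected builds, extracting repro.zip reportedly fails with
        // "bad read of entry payload.bin from compressed archive."
    }
}
```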

haf (Owner) commented Oct 26, 2016

@AndrewBennet Thanks for reporting. Please file a new bug with a repro and/or a PR for its fix.

AndrewBennet commented

@haf Actually, on further testing, it seems that this has been fixed. Sorry about that; I guess my Visual Studio was aggressively caching the original Ionic DLL.


mjkkirschner commented Jul 9, 2018

Hi @AndrewBennet, we believe a user is encountering this old bug you referenced. Is there more information available about it? I am looking for guidance on what file sizes should be avoided and for potential workarounds that don't require upgrading.

AndrewBennet commented

As mentioned by the raiser of the issue:

> the non-parallel write completes with no problems

So if you disable the parallelism, you should be fine. It also seemed to happen only for files whose size was a multiple of 64 kB, so you could potentially avoid files of those exact sizes.
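
A sketch of that workaround for older releases, with placeholder archive and file names: setting ParallelDeflateThreshold to -1 tells DotNetZip never to use the parallel deflater, so the serial code path is taken and the hang cannot occur.

```csharp
// Workaround sketch: disable parallel deflate entirely so the serial code
// path is used. Archive and file names are placeholders.
using Ionic.Zip;

class DisableParallelDeflate
{
    static void Main()
    {
        using (var zip = new ZipFile("out.zip"))
        {
            zip.ParallelDeflateThreshold = -1; // -1 = never use parallel deflate
            zip.AddFile("large-file.bin");
            zip.Save();
        }
    }
}
```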
