HADOOP-13327 spec out output stream and syncable, some minor cleanup of a few interfaces in the process

steveloughran committed May 1, 2017
1 parent 93c4f2c commit 8177448
Showing 5 changed files with 402 additions and 216 deletions.
@@ -36,6 +36,6 @@ public interface CanSetDropBehind {
* UnsupportedOperationException If this stream doesn't support
* setting the drop-behind.
*/
-  public void setDropBehind(Boolean dropCache)
+  void setDropBehind(Boolean dropCache)
throws IOException, UnsupportedOperationException;
}
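Both `FSDataInputStream` and `FSDataOutputStream` expose this interface. A minimal sketch of how a caller might issue the drop-behind hint; the class name and file path below are illustrative only, not part of this change:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class DropBehindExample {
      public static void main(String[] args) throws IOException {
        FileSystem fs = FileSystem.get(new Configuration());
        // Hypothetical input path, used only for illustration.
        try (FSDataInputStream in = fs.open(new Path("/data/large-scan.bin"))) {
          try {
            // Hint that pages read from this stream need not be retained in the OS cache.
            in.setDropBehind(true);
          } catch (UnsupportedOperationException e) {
            // The wrapped stream does not support drop-behind; the hint is optional.
          }
          // ... read the stream as usual ...
        }
      }
    }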
@@ -17,7 +17,6 @@
*/
package org.apache.hadoop.fs;

-import java.io.*;
import java.io.DataOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
@@ -31,17 +31,17 @@ public interface Syncable {
* @deprecated As of HADOOP 0.21.0, replaced by hflush
* @see #hflush()
*/
-  @Deprecated public void sync() throws IOException;
+  @Deprecated void sync() throws IOException;

/** Flush out the data in client's user buffer. After the return of
* this call, new readers will see the data.
* @throws IOException if any error occurs
*/
-  public void hflush() throws IOException;
+  void hflush() throws IOException;

/** Similar to posix fsync, flush out the data in client's user buffer
* all the way to the disk device (but the disk may have it in its cache).
* @throws IOException if error occurs
*/
-  public void hsync() throws IOException;
+  void hsync() throws IOException;
}
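A minimal sketch of the visibility/durability contract these methods offer, using `FSDataOutputStream` (which implements `Syncable`); the destination path is illustrative only:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SyncableExample {
      public static void main(String[] args) throws IOException {
        FileSystem fs = FileSystem.get(new Configuration());
        // Hypothetical destination path, for illustration only.
        try (FSDataOutputStream out = fs.create(new Path("/logs/events.log"), true)) {
          out.writeBytes("event-1\n");
          // hflush(): flush the client buffer so that new readers can see the data.
          out.hflush();
          out.writeBytes("event-2\n");
          // hsync(): additionally ask the remote store to sync the data to its disks.
          out.hsync();
        }
      }
    }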
@@ -202,21 +202,21 @@ directory contains many thousands of files.

Consider a directory `"/d"` with the contents:

    a
    part-0000001
    part-0000002
    ...
    part-9999999


If the number of files is such that HDFS returns a partial listing in each
response, and a listing `listStatus("/d")` takes place concurrently with the operation
`rename("/d/a","/d/z")`, the result may be one of:

    [a, part-0000001, ... , part-9999999]
    [part-0000001, ... , part-9999999, z]
    [a, part-0000001, ... , part-9999999, z]
    [part-0000001, ... , part-9999999]

While this situation is likely to be a rare occurrence, it MAY happen. In HDFS
these inconsistent views are only likely when listing a directory with many children.
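The practical consequence for callers is that a listing is a snapshot assembled from one or more responses, so an entry may vanish or change name before it is used. A minimal defensive-listing sketch (the paths and the `process` helper are illustrative only):

    import java.io.FileNotFoundException;
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListingExample {
      public static void main(String[] args) throws IOException {
        FileSystem fs = FileSystem.get(new Configuration());
        FileStatus[] entries = fs.listStatus(new Path("/d"));
        for (FileStatus status : entries) {
          try {
            // A file listed a moment ago may already have been renamed or deleted,
            // so treat a missing entry as expected rather than fatal.
            process(fs, status.getPath());
          } catch (FileNotFoundException e) {
            // Entry vanished between the listing and the open: skip it.
          }
        }
      }

      private static void process(FileSystem fs, Path path) throws IOException {
        // Placeholder for real per-file work, e.g. fs.open(path) and reading.
      }
    }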
@@ -604,7 +604,7 @@ The result is `FSDataOutputStream`, which through its operations may generate ne…
until the output stream `close()` operation is completed.
This is a significant difference between the behavior of object stores
and that of filesystems, as it allows >1 client to create a file with `overwrite==false`,
-and potentially confuse file/directory logic. In particular, using create() to acquire
+and potentially confuse file/directory logic. In particular, using `create()` to acquire
an exclusive lock on a file (whoever creates the file without an error is considered
the holder of the lock) is not a valid algorithm when working with object stores.
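To make the pitfall concrete, a sketch of the lock-by-create pattern that is safe on HDFS but unreliable on object stores; the lock path and class name are hypothetical:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileAlreadyExistsException;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CreateAsLockExample {
      public static void main(String[] args) throws IOException {
        FileSystem fs = FileSystem.get(new Configuration());
        Path lock = new Path("/locks/job-0001.lock");  // hypothetical lock file
        try (FSDataOutputStream out = fs.create(lock, false /* overwrite */)) {
          // On HDFS the existence check and the creation are atomic, so only one
          // caller reaches this point. On many object stores the check is made by
          // the client and the object only appears on close(), so two clients can
          // both believe they "acquired" the lock.
          out.writeBytes("owner=" + System.getProperty("user.name") + "\n");
        } catch (FileAlreadyExistsException e) {
          // Another process holds the lock (reliable on HDFS, not on object stores).
        }
      }
    }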
