HDFS-17565. EC: dfs.datanode.ec.reconstruction.threads should support dynamic reconfigured. #6928
base: trunk
Conversation
💔 -1 overall
...-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
There is a checkstyle warning as well which needs to be fixed.
throws ReconfigurationException {
  String result = null;
  try {
    if (property.equals(DFS_DN_EC_RECONSTRUCTION_THREADS_KEY)) {
Do we need this if condition? It would always be true?
For now, this if condition is not necessary. I wanted to use this method to handle everything related to the striped block configuration refresh. Since there is only one configuration, I will remove this if condition.
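For context, a minimal sketch (not the exact patch) of how such a property-specific reconfiguration helper in DataNode.java could look once the if-check is dropped; the helper name and the commented-out setter on the EC worker are hypothetical, only the key/default constants come from the quoted diff:

private String reconfStripedReconstructionThreads(String property, String newVal)
    throws ReconfigurationException {
  try {
    // Fall back to the default when newVal is null (property being unset),
    // otherwise parse the requested thread count.
    int size = (newVal == null ? DFS_DN_EC_RECONSTRUCTION_THREADS_DEFAULT
        : Integer.parseInt(newVal));
    // Hypothetical setter: push the new pool size into the EC reconstruction worker.
    // getErasureCodingWorker().setStripedReconstructionPoolSize(size);
    return Integer.toString(size);
  } catch (NumberFormatException e) {
    throw new ReconfigurationException(property, newVal, getConf().get(property), e);
  }
}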
    Integer.parseInt(newVal));
result = Long.toString(size);
What are we doing here? Converting newVal to an Integer and then converting it back into a string? Shouldn't we just check for null and pass the string directly, rather than doing this conversion?
There is a special case: if the string passed in is null, the default value will be used. So there's an extra conversion from int back to string.
💔 -1 overall
try {
  int size = (newVal == null ? DFS_DN_EC_RECONSTRUCTION_THREADS_DEFAULT :
      Integer.parseInt(newVal));
  result = Long.toString(size);
result = Integer.toString(size);
?
@@ -37,6 +38,9 @@
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_SLOW_IO_WARNING_THRESHOLD_KEY;
remove unused import.
💔 -1 overall
Minor comment, rest LGTM
Preconditions.checkArgument(poolSize == this.stripedReconstructionPool.getMaximumPoolSize(),
    "The maximum pool size should be equal to core pool size");
I think this isn't required; this looks like it's validating the ThreadPoolExecutor itself, which isn't a Hadoop thing and isn't required within the scope here either.
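To illustrate the point, a small self-contained sketch (plain JDK code, not from the patch) showing that ThreadPoolExecutor already supports resizing a fixed-size pool at runtime; ordering the two setters keeps core <= max at every step, so no extra Preconditions check is needed for that invariant:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolResizeSketch {
  // Resize a fixed-size pool (core == max) to newSize, growing max first or
  // shrinking core first so that core <= max holds throughout.
  static void resize(ThreadPoolExecutor pool, int newSize) {
    if (newSize > pool.getCorePoolSize()) {
      pool.setMaximumPoolSize(newSize);
      pool.setCorePoolSize(newSize);
    } else {
      pool.setCorePoolSize(newSize);
      pool.setMaximumPoolSize(newSize);
    }
  }

  public static void main(String[] args) {
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
        8, 8, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
    resize(pool, 16);
    System.out.println(pool.getCorePoolSize() + "/" + pool.getMaximumPoolSize()); // 16/16
    pool.shutdown();
  }
}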
💔 -1 overall
🎊 +1 overall
LGTM
Description of PR
dfs.datanode.ec.reconstruction.threads should support dynamic reconfiguration, so that we can adjust the speed of EC block copying. In particular, HDFS-17550 wants to decommission DataNodes via EC block reconstruction.
How was this patch tested?
unit test
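As a rough idea of what such a unit test typically looks like (a hedged fragment, not the actual test from the patch; the MiniDFSCluster variable `cluster` and the chosen value "16" are assumed):

// Pick the first DataNode from an already-running MiniDFSCluster (assumed).
DataNode dn = cluster.getDataNodes().get(0);
// Trigger dynamic reconfiguration of the EC reconstruction thread count.
dn.reconfigureProperty(
    DFSConfigKeys.DFS_DN_EC_RECONSTRUCTION_THREADS_KEY, "16");
// The effective configuration should now reflect the new value.
assertEquals("16",
    dn.getConf().get(DFSConfigKeys.DFS_DN_EC_RECONSTRUCTION_THREADS_KEY));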