Elasticsearch json logging #36833

Merged Jan 29, 2019 (75 commits; changes shown from 54 commits)

Commits:
3ad2a57 NodeId available on all log lines (pgomulka, Dec 19, 2018)
3ea4695 revert back changes due to use of json layout (pgomulka, Dec 19, 2018)
76b9a45 Merge branch 'master' into feature/logging-structured (pgomulka, Dec 19, 2018)
5593c46 remove unused improts (pgomulka, Dec 19, 2018)
aeb686e cleanup in plugin (pgomulka, Dec 19, 2018)
fe87cc5 adding new option with clustestatelistener, pattern converter and thr… (pgomulka, Dec 20, 2018)
73ce302 exception handling (pgomulka, Dec 27, 2018)
3f838c7 json message and exception works (pgomulka, Dec 27, 2018)
e699d27 common wrapper layout class (pgomulka, Dec 27, 2018)
af54281 removed debugging code (pgomulka, Dec 31, 2018)
3bb69eb passing dignity test (pgomulka, Dec 31, 2018)
5e1f125 fixing build (pgomulka, Jan 2, 2019)
18908ed Merge branch 'master' into feature/logging-structured (pgomulka, Jan 2, 2019)
2c2653c fix failing test (pgomulka, Jan 2, 2019)
97d2cec fix failing test (pgomulka, Jan 2, 2019)
fb2f531 fix import (pgomulka, Jan 2, 2019)
20ee653 extending logs test (pgomulka, Jan 2, 2019)
55dc6eb fix parsing and exception formatting (pgomulka, Jan 3, 2019)
b7ad650 fix failing test (pgomulka, Jan 3, 2019)
39a1ef7 fix checkstyle (pgomulka, Jan 3, 2019)
372207f Merge branch 'master' into feature/logging-structured (pgomulka, Jan 3, 2019)
dec2024 small cleanup (pgomulka, Jan 4, 2019)
1da7c97 json logs cleanup (pgomulka, Jan 4, 2019)
4acd89d Merge branch 'master' into feature/logging-structured (pgomulka, Jan 4, 2019)
28c20c1 test cleanup (pgomulka, Jan 4, 2019)
0686fee sometimes HttpServerTransport is logging first, and then the server d… (pgomulka, Jan 4, 2019)
669e9ec additional json tests (pgomulka, Jan 9, 2019)
de17fc1 docker log4j config cleanup (pgomulka, Jan 10, 2019)
147ca9c incorrect docker appender ref (pgomulka, Jan 10, 2019)
7a2b537 the right order of reading values from clusterListener (pgomulka, Jan 10, 2019)
6a01097 add missing marker in a pattern (pgomulka, Jan 11, 2019)
0af53c0 empty lines cleanup (pgomulka, Jan 11, 2019)
4f8cdae Merge branch 'master' into feature/logging-structured (pgomulka, Jan 14, 2019)
4aa84d7 addressing Nik's comments (pgomulka, Jan 14, 2019)
1d0d66a follow up after Daniel's comments (pgomulka, Jan 14, 2019)
0e84d02 failing test (pgomulka, Jan 14, 2019)
490b56d unused imports (pgomulka, Jan 14, 2019)
1119f5e failing tests (pgomulka, Jan 14, 2019)
66b1420 rename test log name (pgomulka, Jan 14, 2019)
1f91bad method rename (pgomulka, Jan 15, 2019)
7bde657 Merge branch 'master' into feature/logging-structured (pgomulka, Jan 15, 2019)
bcf5f85 rename name to server (pgomulka, Jan 15, 2019)
c7bd58a rename revert and level corrected (pgomulka, Jan 15, 2019)
66c1942 wrong assertion (pgomulka, Jan 15, 2019)
b84cf9a rename log name files in package tests (pgomulka, Jan 15, 2019)
12677cf addressing Daniels' second round of comments (pgomulka, Jan 15, 2019)
5d78edf javadocs (pgomulka, Jan 16, 2019)
4cbea2b additional test verifing old config (pgomulka, Jan 17, 2019)
f780f74 unused import (pgomulka, Jan 17, 2019)
8c3c766 empty unused test (pgomulka, Jan 17, 2019)
c3ebfc0 small fixes after review (pgomulka, Jan 18, 2019)
18aca44 comment cleanup after review (pgomulka, Jan 18, 2019)
a4d9336 documentation and licence fix (pgomulka, Jan 18, 2019)
6bc7d1c typo printted -> printed (pgomulka, Jan 18, 2019)
7c208c8 setOnce argument ordering (pgomulka, Jan 21, 2019)
a6e81fa javadoc typo (pgomulka, Jan 22, 2019)
f01a4ff refactor cluster state listeners to use setOnce (pgomulka, Jan 23, 2019)
53ead59 removed empty line (pgomulka, Jan 23, 2019)
88d1368 methods rename and cleanup (pgomulka, Jan 23, 2019)
72bd776 javadoc typo (pgomulka, Jan 23, 2019)
ee3322c Merge pull request #7 from pgomulka/fix/observer-logging (pgomulka, Jan 23, 2019)
c1a4206 keep the old appenders and let the nodeIDlistener start earlier (pgomulka, Jan 24, 2019)
cbe83bc Merge branch 'master' into feature/logging-structured (pgomulka, Jan 24, 2019)
b12f8ee improved documentation and more robust test (pgomulka, Jan 24, 2019)
254d32b Merge branch 'master' into feature/logging-structured (pgomulka, Jan 24, 2019)
1951e2a split logging config in 2 for docs (pgomulka, Jan 24, 2019)
1502105 enable log print out for this test (pgomulka, Jan 25, 2019)
7eaaada rename logs to .json (pgomulka, Jan 25, 2019)
1291ead migration logging (pgomulka, Jan 25, 2019)
30d9675 old log rename and documentation update (pgomulka, Jan 28, 2019)
faa81fd making test more stable (pgomulka, Jan 28, 2019)
5503ad5 doc changes after review (pgomulka, Jan 28, 2019)
b563423 fix doc (pgomulka, Jan 28, 2019)
42481b3 Merge branch 'master' into feature/logging-structured (pgomulka, Jan 28, 2019)
8aa18ec Merge branch 'master' into feature/logging-structured (pgomulka, Jan 28, 2019)
2 changes: 1 addition & 1 deletion distribution/archives/integ-test-zip/build.gradle
@@ -27,7 +27,7 @@ integTestRunner {
*/
if (System.getProperty("tests.rest.cluster") == null) {
systemProperty 'tests.logfile',
"${ -> integTest.nodes[0].homeDir}/logs/${ -> integTest.nodes[0].clusterName }.log"
"${ -> integTest.nodes[0].homeDir}/logs/${ -> integTest.nodes[0].clusterName }_server.log"
} else {
systemProperty 'tests.logfile', '--external--'
}
@@ -19,11 +19,11 @@

package org.elasticsearch.test.rest;

import org.elasticsearch.common.logging.NodeNameInLogsIntegTestCase;
import org.elasticsearch.common.logging.JsonLogsIntegTestCase;
import org.hamcrest.Matcher;

import java.io.IOException;
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
@@ -32,7 +32,7 @@

import static org.hamcrest.Matchers.is;

public class NodeNameInLogsIT extends NodeNameInLogsIntegTestCase {
public class JsonLogsFormatAndParseIT extends JsonLogsIntegTestCase {
@Override
protected Matcher<String> nodeNameMatcher() {
return is("node-0");
@@ -41,7 +41,7 @@ protected Matcher<String> nodeNameMatcher() {
@Override
protected BufferedReader openReader(Path logFile) {
assumeFalse("Skipping test because it is being run against an external cluster.",
logFile.getFileName().toString().equals("--external--"));
logFile.getFileName().toString().equals("--external--"));
return AccessController.doPrivileged((PrivilegedAction<BufferedReader>) () -> {
try {
return Files.newBufferedReader(logFile, StandardCharsets.UTF_8);
45 changes: 40 additions & 5 deletions distribution/docker/src/docker/config/log4j2.properties
@@ -1,9 +1,44 @@
status = error

appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %m%n
# log action execution errors for easier debugging
logger.action.name = org.elasticsearch.action
logger.action.level = debug

appender.rolling.type = Console
appender.rolling.name = rolling
appender.rolling.layout.type = ESJsonLayout
appender.rolling.layout.type_name = server

rootLogger.level = info
rootLogger.appenderRef.console.ref = console
rootLogger.appenderRef.rolling.ref = rolling

appender.deprecation_rolling.type = Console
appender.deprecation_rolling.name = deprecation_rolling
appender.deprecation_rolling.layout.type = ESJsonLayout
appender.deprecation_rolling.layout.type_name = deprecation

logger.deprecation.name = org.elasticsearch.deprecation
logger.deprecation.level = warn
logger.deprecation.appenderRef.deprecation_rolling.ref = deprecation_rolling
logger.deprecation.additivity = false

appender.index_search_slowlog_rolling.type = Console
appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling
appender.index_search_slowlog_rolling.layout.type = ESJsonLayout
appender.index_search_slowlog_rolling.layout.type_name = index_search_slowlog

logger.index_search_slowlog_rolling.name = index.search.slowlog
logger.index_search_slowlog_rolling.level = trace
logger.index_search_slowlog_rolling.appenderRef.index_search_slowlog_rolling.ref = index_search_slowlog_rolling
logger.index_search_slowlog_rolling.additivity = false

appender.index_indexing_slowlog_rolling.type = Console
appender.index_indexing_slowlog_rolling.name = index_indexing_slowlog_rolling
appender.index_indexing_slowlog_rolling.layout.type = ESJsonLayout
appender.index_indexing_slowlog_rolling.layout.type_name = index_indexing_slowlog


Review comment (Contributor): nit: not need for an extra newline here

logger.index_indexing_slowlog.name = index.indexing.slowlog.index
logger.index_indexing_slowlog.level = trace
logger.index_indexing_slowlog.appenderRef.index_indexing_slowlog_rolling.ref = index_indexing_slowlog_rolling
logger.index_indexing_slowlog.additivity = false
26 changes: 15 additions & 11 deletions distribution/src/config/log4j2.properties
@@ -6,14 +6,15 @@ logger.action.level = debug

appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %m%n
appender.console.layout.type = ESJsonLayout
appender.console.layout.type_name = console

appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %.-10000m%n
appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_server.log
appender.rolling.layout.type = ESJsonLayout
appender.rolling.layout.type_name = server

appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
@@ -37,8 +38,9 @@ rootLogger.appenderRef.rolling.ref = rolling
appender.deprecation_rolling.type = RollingFile
appender.deprecation_rolling.name = deprecation_rolling
appender.deprecation_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation.log
appender.deprecation_rolling.layout.type = PatternLayout
appender.deprecation_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %.-10000m%n
appender.deprecation_rolling.layout.type = ESJsonLayout
appender.deprecation_rolling.layout.type_name = deprecation

appender.deprecation_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation-%i.log.gz
appender.deprecation_rolling.policies.type = Policies
appender.deprecation_rolling.policies.size.type = SizeBasedTriggeringPolicy
@@ -54,8 +56,9 @@ logger.deprecation.additivity = false
appender.index_search_slowlog_rolling.type = RollingFile
appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling
appender.index_search_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog.log
appender.index_search_slowlog_rolling.layout.type = PatternLayout
appender.index_search_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] [%node_name]%marker %.-10000m%n
appender.index_search_slowlog_rolling.layout.type = ESJsonLayout
appender.index_search_slowlog_rolling.layout.type_name = index_search_slowlog

appender.index_search_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog-%i.log.gz
appender.index_search_slowlog_rolling.policies.type = Policies
appender.index_search_slowlog_rolling.policies.size.type = SizeBasedTriggeringPolicy
@@ -71,8 +74,9 @@ logger.index_search_slowlog_rolling.additivity = false
appender.index_indexing_slowlog_rolling.type = RollingFile
appender.index_indexing_slowlog_rolling.name = index_indexing_slowlog_rolling
appender.index_indexing_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_indexing_slowlog.log
appender.index_indexing_slowlog_rolling.layout.type = PatternLayout
appender.index_indexing_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] [%node_name]%marker %.-10000m%n
appender.index_indexing_slowlog_rolling.layout.type = ESJsonLayout
appender.index_indexing_slowlog_rolling.layout.type_name = index_indexing_slowlog

appender.index_indexing_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_indexing_slowlog-%i.log.gz
appender.index_indexing_slowlog_rolling.policies.type = Policies
appender.index_indexing_slowlog_rolling.policies.size.type = SizeBasedTriggeringPolicy
88 changes: 63 additions & 25 deletions docs/reference/setup/logging-config.asciidoc
@@ -22,41 +22,44 @@ will resolve to `/var/log/elasticsearch/production.log`.
--------------------------------------------------
appender.rolling.type = RollingFile <1>
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log <2>
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %.-10000m%n
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.log.gz <3>
appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_server.log <2>
appender.rolling.layout.type = ESJsonLayout <3>
appender.rolling.layout.type_name = server <4>
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.log.gz <5>
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy <4>
appender.rolling.policies.time.interval = 1 <5>
appender.rolling.policies.time.modulate = true <6>
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy <7>
appender.rolling.policies.size.size = 256MB <8>
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy <6>
appender.rolling.policies.time.interval = 1 <7>
appender.rolling.policies.time.modulate = true <8>
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy <9>
appender.rolling.policies.size.size = 256MB <10>
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.fileIndex = nomax
appender.rolling.strategy.action.type = Delete <9>
appender.rolling.strategy.action.type = Delete <11>
appender.rolling.strategy.action.basepath = ${sys:es.logs.base_path}
appender.rolling.strategy.action.condition.type = IfFileName <10>
appender.rolling.strategy.action.condition.glob = ${sys:es.logs.cluster_name}-* <11>
appender.rolling.strategy.action.condition.nested_condition.type = IfAccumulatedFileSize <12>
appender.rolling.strategy.action.condition.nested_condition.exceeds = 2GB <13>
appender.rolling.strategy.action.condition.type = IfFileName <12>
appender.rolling.strategy.action.condition.glob = ${sys:es.logs.cluster_name}-* <13>
appender.rolling.strategy.action.condition.nested_condition.type = IfAccumulatedFileSize <14>
appender.rolling.strategy.action.condition.nested_condition.exceeds = 2GB <15>
--------------------------------------------------

<1> Configure the `RollingFile` appender
<2> Log to `/var/log/elasticsearch/production.log`
<3> Roll logs to `/var/log/elasticsearch/production-yyyy-MM-dd-i.log`; logs
<3> Use JSON layout.
<4> `type_name` is a flag populating the `type` field in an `ESJsonLayout`.
It can be used to distinguish different types of logs more easily when parsing them.
<5> Roll logs to `/var/log/elasticsearch/production-yyyy-MM-dd-i.log`; logs
will be compressed on each roll and `i` will be incremented
<4> Use a time-based roll policy
<5> Roll logs on a daily basis
<6> Align rolls on the day boundary (as opposed to rolling every twenty-four
<6> Use a time-based roll policy
<7> Roll logs on a daily basis
<8> Align rolls on the day boundary (as opposed to rolling every twenty-four
hours)
<7> Using a size-based roll policy
<8> Roll logs after 256 MB
<9> Use a delete action when rolling logs
<10> Only delete logs matching a file pattern
<11> The pattern is to only delete the main logs
<12> Only delete if we have accumulated too many compressed logs
<13> The size condition on the compressed logs is 2 GB
<9> Using a size-based roll policy
<10> Roll logs after 256 MB
<11> Use a delete action when rolling logs
<12> Only delete logs matching a file pattern
<13> The pattern is to only delete the main logs
<14> Only delete if we have accumulated too many compressed logs
<15> The size condition on the compressed logs is 2 GB

NOTE: Log4j's configuration parsing gets confused by any extraneous whitespace;
if you copy and paste any Log4j settings on this page, or enter any Log4j
@@ -194,3 +197,38 @@ files (four rolled logs, and the active log).

You can disable it in the `config/log4j2.properties` file by setting the deprecation
log level to `error`.


[float]
[[json-logging]]
=== JSON log format

To make parsing Elasticsearch logs easier, logs are now printed in a JSON format.
This is configured by the Log4j layout property `appender.rolling.layout.type = ESJsonLayout`.
This layout requires a `type_name` attribute to be set, which is used to distinguish
log streams when parsing.
[source,properties]
--------------------------------------------------
appender.rolling.layout.type = ESJsonLayout
appender.rolling.layout.type_name = server
--------------------------------------------------
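For illustration, a single log line produced by such a layout might look like the following. The exact field names here are an assumption for the sake of the example, not a verbatim reproduction of `ESJsonLayout` output:

```json
{"type": "server", "timestamp": "2019-01-29T10:01:22,123+0100", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "production", "node.name": "node-0", "message": "started"}
```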
:es-json-layout-java-doc: {elasticsearch-javadoc}/org/elasticsearch/common/logging/ESJsonLayout.html

Each line contains a single JSON document with the properties configured in `ESJsonLayout`.
See this class's {es-json-layout-java-doc}[javadoc] for more details.
However, if a JSON document contains an exception, it is printed over multiple lines:
the first line contains the regular properties and subsequent lines contain the
stacktrace formatted as a JSON array.
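A minimal sketch of a consumer for this format: accumulating lines until they form a complete JSON document handles the multi-line exception case described above. The sample lines and their field names are hypothetical, assumed only for illustration:

```python
import json

def stream_docs(lines):
    """Yield one dict per JSON log document, joining lines when a
    document (e.g. one carrying a stacktrace array) spans several lines."""
    buf = []
    for line in lines:
        buf.append(line)
        try:
            yield json.loads("\n".join(buf))
            buf = []
        except json.JSONDecodeError:
            continue  # incomplete document, keep accumulating

# Hypothetical log lines: one plain document, one spanning three lines.
sample = [
    '{"type": "server", "level": "INFO", "message": "started"}',
    '{"type": "server", "level": "ERROR", "message": "fatal error",',
    '"stacktrace": ["java.lang.OutOfMemoryError: die with dignity",',
    '"at org.example.Foo.bar(Foo.java:1)"]}',
]
docs = list(stream_docs(sample))
print(len(docs))  # 2
```

This mirrors, in spirit, what the `JsonLogsStream` helper added by this PR does on the Java side.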
Review comment (Member): Nit: "the stacktrace"

Review comment (Member): I think the docs need to take into account that we now log as JSON and in plain text.

NOTE: You can still use your own custom layout. To do that, replace the
`appender.rolling.layout.type` line with a different layout. See the sample below:
[source,properties]
--------------------------------------------------
appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_server.log
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %.-10000m%n
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.log.gz
--------------------------------------------------
2 changes: 1 addition & 1 deletion qa/die-with-dignity/build.gradle
@@ -28,7 +28,7 @@ integTestRunner {
systemProperty 'tests.security.manager', 'false'
systemProperty 'tests.system_call_filter', 'false'
systemProperty 'pidfile', "${-> integTest.getNodes().get(0).pidFile}"
systemProperty 'log', "${-> integTest.getNodes().get(0).homeDir}/logs/${-> integTest.getNodes().get(0).clusterName}.log"
systemProperty 'log', "${-> integTest.getNodes().get(0).homeDir}/logs/${-> integTest.getNodes().get(0).clusterName}_server.log"
systemProperty 'runtime.java.home', "${project.runtimeJavaHome}"
}

@@ -21,10 +21,14 @@

import org.apache.http.ConnectionClosedException;
import org.apache.lucene.util.Constants;
import org.elasticsearch.cli.Terminal;
import org.elasticsearch.client.Request;
import org.elasticsearch.common.io.PathUtils;
import org.elasticsearch.common.logging.JsonLogLine;
import org.elasticsearch.common.logging.JsonLogsStream;
import org.elasticsearch.test.rest.ESRestTestCase;
import org.hamcrest.Matcher;
import org.hamcrest.Matchers;

import java.io.BufferedReader;
import java.io.IOException;
@@ -34,10 +38,12 @@
import java.nio.file.Path;
import java.util.Iterator;
import java.util.List;
import java.util.stream.Stream;

import static org.hamcrest.Matchers.containsString;
import static org.hamcrest.Matchers.either;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.hasItem;
import static org.hamcrest.Matchers.hasSize;
import static org.hamcrest.Matchers.hasToString;
import static org.hamcrest.Matchers.instanceOf;
@@ -53,7 +59,7 @@ public void testDieWithDignity() throws Exception {
final int pid = Integer.parseInt(pidFileLines.get(0));
Files.delete(pidFile);
IOException e = expectThrows(IOException.class,
() -> client().performRequest(new Request("GET", "/_die_with_dignity")));
() -> client().performRequest(new Request("GET", "/_die_with_dignity")));
Matcher<IOException> failureMatcher = instanceOf(ConnectionClosedException.class);
if (Constants.WINDOWS) {
/*
@@ -64,9 +70,9 @@
* https://issues.apache.org/jira/browse/HTTPASYNC-134
*
* So we catch it here and consider it "ok".
*/
*/
failureMatcher = either(failureMatcher)
.or(hasToString(containsString("An existing connection was forcibly closed by the remote host")));
.or(hasToString(containsString("An existing connection was forcibly closed by the remote host")));
}
assertThat(e, failureMatcher);

@@ -85,28 +91,62 @@
}
});

// parse the logs and ensure that Elasticsearch died with the expected cause
final List<String> lines = Files.readAllLines(PathUtils.get(System.getProperty("log")));
try {
// parse the logs and ensure that Elasticsearch died with the expected cause
Path path = PathUtils.get(System.getProperty("log"));
try (Stream<JsonLogLine> stream = JsonLogsStream.from(path)) {
final Iterator<JsonLogLine> it = stream.iterator();

final Iterator<String> it = lines.iterator();
boolean fatalError = false;
boolean fatalErrorInThreadExiting = false;

boolean fatalError = false;
boolean fatalErrorInThreadExiting = false;
while (it.hasNext() && (fatalError == false || fatalErrorInThreadExiting == false)) {
final JsonLogLine line = it.next();
if (isFatalError(line)) {
fatalError = true;
} else if (isFatalErrorInThreadExiting(line) || isWarnExceptionReceived(line)) {
fatalErrorInThreadExiting = true;
assertThat(line.stacktrace(),
hasItem(Matchers.containsString("java.lang.OutOfMemoryError: die with dignity")));
}
}

while (it.hasNext() && (fatalError == false || fatalErrorInThreadExiting == false)) {
final String line = it.next();
if (line.matches(".*\\[ERROR\\]\\[o\\.e\\.ExceptionsHelper\\s*\\] \\[node-0\\] fatal error")) {
fatalError = true;
} else if (line.matches(".*\\[ERROR\\]\\[o\\.e\\.b\\.ElasticsearchUncaughtExceptionHandler\\] \\[node-0\\]"
+ " fatal error in thread \\[Thread-\\d+\\], exiting$")) {
fatalErrorInThreadExiting = true;
assertTrue(it.hasNext());
assertThat(it.next(), equalTo("java.lang.OutOfMemoryError: die with dignity"));
assertTrue(fatalError);
assertTrue(fatalErrorInThreadExiting);
}
} catch (AssertionError ae) {
Path path = PathUtils.get(System.getProperty("log"));
debugLogs(path);
throw ae;
}
}

private boolean isWarnExceptionReceived(JsonLogLine line) {
return line.level().equals("WARN")
&& line.component().equals("o.e.h.AbstractHttpServerTransport")
&& line.nodeName().equals("node-0")
&& line.message().contains("caught exception while handling client http traffic");
}

private void debugLogs(Path path) throws IOException {
try (BufferedReader reader = Files.newBufferedReader(path)) {
Terminal terminal = Terminal.DEFAULT;
reader.lines().forEach(line -> terminal.println(line));
}
}

private boolean isFatalErrorInThreadExiting(JsonLogLine line) {
return line.level().equals("ERROR")
&& line.component().equals("o.e.b.ElasticsearchUncaughtExceptionHandler")
&& line.nodeName().equals("node-0")
&& line.message().matches("fatal error in thread \\[Thread-\\d+\\], exiting$");
}

assertTrue(fatalError);
assertTrue(fatalErrorInThreadExiting);
private boolean isFatalError(JsonLogLine line) {
return line.level().equals("ERROR")
&& line.component().equals("o.e.ExceptionsHelper")
&& line.nodeName().equals("node-0")
&& line.message().contains("fatal error");
}

@Override
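The Java predicates in the test above can be sketched in Python for ad-hoc inspection of parsed log documents. The dictionary keys used here (`level`, `component`, `node.name`, `message`) mirror the `JsonLogLine` accessors in the diff and are assumptions about the emitted field names:

```python
import re

def is_fatal_error(doc):
    """True for the ExceptionsHelper 'fatal error' document (cf. isFatalError)."""
    return (doc.get("level") == "ERROR"
            and doc.get("component") == "o.e.ExceptionsHelper"
            and doc.get("node.name") == "node-0"
            and "fatal error" in doc.get("message", ""))

def is_fatal_error_in_thread_exiting(doc):
    """True for the uncaught-exception-handler 'exiting' document
    (cf. isFatalErrorInThreadExiting)."""
    return (doc.get("level") == "ERROR"
            and doc.get("component") == "o.e.b.ElasticsearchUncaughtExceptionHandler"
            and doc.get("node.name") == "node-0"
            and re.search(r"fatal error in thread \[Thread-\d+\], exiting$",
                          doc.get("message", "")) is not None)

doc = {"level": "ERROR", "component": "o.e.ExceptionsHelper",
       "node.name": "node-0", "message": "fatal error"}
print(is_fatal_error(doc))  # True
```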