
Elasticsearch json logging #36833

Merged
merged 75 commits on Jan 29, 2019
3ad2a57
NodeId available on all log lines
pgomulka Dec 19, 2018
3ea4695
revert back changes due to use of json layout
pgomulka Dec 19, 2018
76b9a45
Merge branch 'master' into feature/logging-structured
pgomulka Dec 19, 2018
5593c46
remove unused improts
pgomulka Dec 19, 2018
aeb686e
cleanup in plugin
pgomulka Dec 19, 2018
fe87cc5
adding new option with clustestatelistener, pattern converter and thr…
pgomulka Dec 20, 2018
73ce302
exception handling
pgomulka Dec 27, 2018
3f838c7
json message and exception works
pgomulka Dec 27, 2018
e699d27
common wrapper layout class
pgomulka Dec 27, 2018
af54281
removed debugging code
pgomulka Dec 31, 2018
3bb69eb
passing dignity test
pgomulka Dec 31, 2018
5e1f125
fixing build
pgomulka Jan 2, 2019
18908ed
Merge branch 'master' into feature/logging-structured
pgomulka Jan 2, 2019
2c2653c
fix failing test
pgomulka Jan 2, 2019
97d2cec
fix failing test
pgomulka Jan 2, 2019
fb2f531
fix import
pgomulka Jan 2, 2019
20ee653
extending logs test
pgomulka Jan 2, 2019
55dc6eb
fix parsing and exception formatting
pgomulka Jan 3, 2019
b7ad650
fix failing test
pgomulka Jan 3, 2019
39a1ef7
fix checkstyle
pgomulka Jan 3, 2019
372207f
Merge branch 'master' into feature/logging-structured
pgomulka Jan 3, 2019
dec2024
small cleanup
pgomulka Jan 4, 2019
1da7c97
json logs cleanup
pgomulka Jan 4, 2019
4acd89d
Merge branch 'master' into feature/logging-structured
pgomulka Jan 4, 2019
28c20c1
test cleanup
pgomulka Jan 4, 2019
0686fee
sometimes HttpServerTransport is logging first, and then the server d…
pgomulka Jan 4, 2019
669e9ec
additional json tests
pgomulka Jan 9, 2019
de17fc1
docker log4j config cleanup
pgomulka Jan 10, 2019
147ca9c
incorrect docker appender ref
pgomulka Jan 10, 2019
7a2b537
the right order of reading values from clusterListener
pgomulka Jan 10, 2019
6a01097
add missing marker in a pattern
pgomulka Jan 11, 2019
0af53c0
empty lines cleanup
pgomulka Jan 11, 2019
4f8cdae
Merge branch 'master' into feature/logging-structured
pgomulka Jan 14, 2019
4aa84d7
addressing Nik's comments
pgomulka Jan 14, 2019
1d0d66a
follow up after Daniel's comments
pgomulka Jan 14, 2019
0e84d02
failing test
pgomulka Jan 14, 2019
490b56d
unused imports
pgomulka Jan 14, 2019
1119f5e
failing tests
pgomulka Jan 14, 2019
66b1420
rename test log name
pgomulka Jan 14, 2019
1f91bad
method rename
pgomulka Jan 15, 2019
7bde657
Merge branch 'master' into feature/logging-structured
pgomulka Jan 15, 2019
bcf5f85
rename name to server
pgomulka Jan 15, 2019
c7bd58a
rename revert and level corrected
pgomulka Jan 15, 2019
66c1942
wrong assertion
pgomulka Jan 15, 2019
b84cf9a
rename log name files in package tests
pgomulka Jan 15, 2019
12677cf
addressing Daniels' second round of comments
pgomulka Jan 15, 2019
5d78edf
javadocs
pgomulka Jan 16, 2019
4cbea2b
additional test verifing old config
pgomulka Jan 17, 2019
f780f74
unused import
pgomulka Jan 17, 2019
8c3c766
empty unused test
pgomulka Jan 17, 2019
c3ebfc0
small fixes after review
pgomulka Jan 18, 2019
18aca44
comment cleanup after review
pgomulka Jan 18, 2019
a4d9336
documentation and licence fix
pgomulka Jan 18, 2019
6bc7d1c
typo printted -> printed
pgomulka Jan 18, 2019
7c208c8
setOnce argument ordering
pgomulka Jan 21, 2019
a6e81fa
javadoc typo
pgomulka Jan 22, 2019
f01a4ff
refactor cluster state listeners to use setOnce
pgomulka Jan 23, 2019
53ead59
removed empty line
pgomulka Jan 23, 2019
88d1368
methods rename and cleanup
pgomulka Jan 23, 2019
72bd776
javadoc typo
pgomulka Jan 23, 2019
ee3322c
Merge pull request #7 from pgomulka/fix/observer-logging
pgomulka Jan 23, 2019
c1a4206
keep the old appenders and let the nodeIDlistener start earlier
pgomulka Jan 24, 2019
cbe83bc
Merge branch 'master' into feature/logging-structured
pgomulka Jan 24, 2019
b12f8ee
improved documentation and more robust test
pgomulka Jan 24, 2019
254d32b
Merge branch 'master' into feature/logging-structured
pgomulka Jan 24, 2019
1951e2a
split logging config in 2 for docs
pgomulka Jan 24, 2019
1502105
enable log print out for this test
pgomulka Jan 25, 2019
7eaaada
rename logs to .json
pgomulka Jan 25, 2019
1291ead
migration logging
pgomulka Jan 25, 2019
30d9675
old log rename and documentation update
pgomulka Jan 28, 2019
faa81fd
making test more stable
pgomulka Jan 28, 2019
5503ad5
doc changes after review
pgomulka Jan 28, 2019
b563423
fix doc
pgomulka Jan 28, 2019
42481b3
Merge branch 'master' into feature/logging-structured
pgomulka Jan 28, 2019
8aa18ec
Merge branch 'master' into feature/logging-structured
pgomulka Jan 28, 2019
2 changes: 2 additions & 0 deletions distribution/src/config/jvm.options
@@ -120,3 +120,5 @@ ${error.file}

# temporary workaround for C2 bug with JDK 10 on hardware with AVX-512
10-:-XX:UseAVX=2

-Dlog4j2.loggerContextFactory=org.elasticsearch.common.logging.MarkerLoggerContextFactory
Contributor Author:
That will be very problematic in integration tests. Another argument against option 3.

64 changes: 59 additions & 5 deletions distribution/src/config/log4j2.properties
@@ -7,14 +7,37 @@ logger.action.level = debug
appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %m%n
appender.console.layout.pattern = {\
"type": "console", \
"timestamp": "%d{ISO8601}", \
"level": "%-5p", \
"class": "%c{1.}", \
"cluster_name": "${sys:es.logs.cluster_name}", \
"node_name": "%node_name", \
"node_id": "%node_id_raw", \
%marker\
"message": "%.-10000m"\
}%n
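With the layout above, each event is rendered as a single JSON object per line, which log shippers can parse line by line. A minimal sketch (plain Java, no log4j dependency; field names taken from the pattern, sample values hypothetical) of what the layout assembles:

```java
// Sketch: assemble a log line with the same fields the PatternLayout above emits.
// The real layout substitutes %d, %p, %c, %node_name, %node_id_raw and %m;
// the values used here are hypothetical.
public class JsonLogLineSketch {
    static String formatLine(String type, String timestamp, String level,
                             String clazz, String clusterName, String nodeName,
                             String nodeId, String message) {
        return "{"
            + "\"type\": \"" + type + "\", "
            + "\"timestamp\": \"" + timestamp + "\", "
            + "\"level\": \"" + level + "\", "
            + "\"class\": \"" + clazz + "\", "
            + "\"cluster_name\": \"" + clusterName + "\", "
            + "\"node_name\": \"" + nodeName + "\", "
            + "\"node_id\": \"" + nodeId + "\", "
            + "\"message\": \"" + message + "\""
            + "}";
    }

    public static void main(String[] args) {
        String line = formatLine("console", "2019-01-24T11:27:24,148", "INFO",
                "o.e.n.Node", "my-cluster", "node-0", "abc123", "started");
        // One JSON object per line: shippers such as Filebeat can parse line by line.
        assert line.startsWith("{") && line.endsWith("}");
        System.out.println(line);
    }
}
```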

appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log
Member:
Shouldn't the log file name change according to the first subtask in #32850? As this is a breaking change, we'd also need to add it to the breaking changes docs + also change the logging docs (this comment applies to all log file names)

Contributor Author:

Almost forgot about this one - thanks for spotting. Would this be better if we used the ${cluster_name}_server pattern? We already have this pattern for deprecation and other types of logs.

Member:

I guess a prefix would be easier to match than a suffix but as the Beats team needs to handle this, can you please check with them?

Contributor:

From the Filebeat perspective, ${cluster_name}_${log_type} works. It's something we are using today.

appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %.-10000m%n
appender.rolling.layout.pattern = {\
"type": "rolling", \
"timestamp": "%d{ISO8601}", \
"level": "%-5p", \
"class": "%c{1.}", \
"cluster_name": "${sys:es.logs.cluster_name}", \
"node_name": "%node_name", \
"node_id": "%node_id_raw", \
%marker\
"message": "%.-10000m"\
}%n
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.log.gz



appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
@@ -38,7 +61,18 @@ appender.deprecation_rolling.type = RollingFile
appender.deprecation_rolling.name = deprecation_rolling
appender.deprecation_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation.log
appender.deprecation_rolling.layout.type = PatternLayout
appender.deprecation_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %.-10000m%n
appender.deprecation_rolling.layout.pattern = {\
"type": "rolling", \
"timestamp": "%d{ISO8601}", \
"level": "%-5p", \
"class": "%c{1.}", \
"cluster_name": "${sys:es.logs.cluster_name}", \
"node_name": "%node_name", \
"node_id": "%node_id_raw", \
%marker\
"message": "%.-10000m"\
}%n

appender.deprecation_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation-%i.log.gz
appender.deprecation_rolling.policies.type = Policies
appender.deprecation_rolling.policies.size.type = SizeBasedTriggeringPolicy
@@ -55,7 +89,17 @@ appender.index_search_slowlog_rolling.type = RollingFile
appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling
appender.index_search_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog.log
appender.index_search_slowlog_rolling.layout.type = PatternLayout
appender.index_search_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] [%node_name]%marker %.-10000m%n
appender.index_search_slowlog_rolling.layout.pattern = {\
"type": "rolling", \
"timestamp": "%d{ISO8601}", \
"level": "%-5p", \
"class": "%c{1.}", \
"cluster_name": "${sys:es.logs.cluster_name}", \
"node_name": "%node_name", \
"node_id": "%node_id_raw", \
%marker\
"message": "%.-10000m"\
}%n
appender.index_search_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog-%i.log.gz
appender.index_search_slowlog_rolling.policies.type = Policies
appender.index_search_slowlog_rolling.policies.size.type = SizeBasedTriggeringPolicy
@@ -72,7 +116,17 @@ appender.index_indexing_slowlog_rolling.type = RollingFile
appender.index_indexing_slowlog_rolling.name = index_indexing_slowlog_rolling
appender.index_indexing_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_indexing_slowlog.log
appender.index_indexing_slowlog_rolling.layout.type = PatternLayout
appender.index_indexing_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] [%node_name]%marker %.-10000m%n
appender.index_indexing_slowlog_rolling.layout.pattern = {\
"type": "rolling", \
"timestamp": "%d{ISO8601}", \
"level": "%-5p", \
"class": "%c{1.}", \
"cluster_name": "${sys:es.logs.cluster_name}", \
"node_name": "%node_name", \
"node_id": "%node_id_raw", \
%marker\
"message": "%.-10000m"\
}%n
appender.index_indexing_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_indexing_slowlog-%i.log.gz
appender.index_indexing_slowlog_rolling.policies.type = Policies
appender.index_indexing_slowlog_rolling.policies.size.type = SizeBasedTriggeringPolicy
4 changes: 2 additions & 2 deletions plugins/discovery-ec2/build.gradle
@@ -33,8 +33,8 @@ dependencies {
compile "org.apache.httpcomponents:httpcore:${versions.httpcore}"
compile "commons-logging:commons-logging:${versions.commonslogging}"
compile "commons-codec:commons-codec:${versions.commonscodec}"
compile 'com.fasterxml.jackson.core:jackson-databind:2.6.7.1'
compile 'com.fasterxml.jackson.core:jackson-annotations:2.6.0'
compile "com.fasterxml.jackson.core:jackson-databind:${versions.jackson}"
compile "com.fasterxml.jackson.core:jackson-annotations:${versions.jackson}"
}

dependencyLicenses {
4 changes: 2 additions & 2 deletions plugins/repository-s3/build.gradle
@@ -46,8 +46,8 @@ dependencies {
compile "commons-logging:commons-logging:${versions.commonslogging}"
compile "commons-codec:commons-codec:${versions.commonscodec}"
compile "com.fasterxml.jackson.core:jackson-core:${versions.jackson}"
compile 'com.fasterxml.jackson.core:jackson-databind:2.6.7.1'
compile 'com.fasterxml.jackson.core:jackson-annotations:2.6.0'
compile "com.fasterxml.jackson.core:jackson-databind:${versions.jackson}"
compile "com.fasterxml.jackson.core:jackson-annotations:${versions.jackson}"
compile "com.fasterxml.jackson.dataformat:jackson-dataformat-cbor:${versions.jackson}"
compile "joda-time:joda-time:${versions.joda}"

4 changes: 4 additions & 0 deletions server/build.gradle
@@ -123,6 +123,10 @@ dependencies {
// repackaged jna with native bits linked against all elastic supported platforms
compile "org.elasticsearch:jna:${versions.jna}"

compile "com.fasterxml.jackson.core:jackson-databind:${versions.jackson}"
compile("com.fasterxml.jackson.core:jackson-annotations:${versions.jackson}")


if (!isEclipse && !isIdea) {
java9Compile sourceSets.main.output
}
@@ -0,0 +1,181 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/

package org.elasticsearch.common.logging;

import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.Marker;
import org.apache.logging.log4j.MarkerManager;
import org.apache.logging.log4j.core.Logger;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.message.Message;
import org.apache.logging.log4j.message.MessageFactory;
import org.apache.logging.log4j.spi.ExtendedLogger;
import org.apache.logging.log4j.util.StringBuilderFormattable;
import org.apache.logging.log4j.util.Strings;

import java.util.WeakHashMap;
import java.util.concurrent.atomic.AtomicReference;

public class MarkerLogger extends Logger {


/*
* We can not use the built-in Marker tracking (MarkerManager) because the MarkerManager holds a permanent reference to the marker;
* however, we have transient markers from index-level and shard-level components so this would effectively be a memory leak. Since we
* can not tie into the lifecycle of these components, we have to use a mechanism that enables garbage collection of such markers when
* they are no longer in use.
*/
private static final WeakHashMap<String, Marker> markers = new WeakHashMap<>();

/**
* Return the size of the cached markers. This size can vary as markers are cached but collected during GC activity when a given prefix
* is no longer in use.
*
* @return the size of the cached markers
*/
static int markersSize() {
return markers.size();
}

/**
* The marker for this prefix logger.
*/
private final Marker marker;

/**
* Obtain the prefix for this prefix logger. This can be used to create a logger with the same prefix as this one.
*
* @return the prefix
*/
public String prefix() {
return marker.getName();
}

/**
* The constructor.
* // TODO consider what happens when UNKOWN_NODE_ID is no longer in use
*/
protected MarkerLogger(Logger logger, AtomicReference<NodeIdListener> nodeIdListener) {
super(logger.getContext(), logger.getName(), logger.getMessageFactory());
String prefix = getPrefix(nodeIdListener);

final Marker actualMarker;
// markers is not thread-safe, so we synchronize access
synchronized (markers) {
final Marker maybeMarker = markers.get(prefix);
if (maybeMarker == null) {
if (nodeIdListener.get() != null && Strings.isNotEmpty(nodeIdListener.get().getNodeId().get())) {
actualMarker = new MarkerManager.Log4jMarker(prefix);
} else {
actualMarker = new AtomicRefMarker(nodeIdListener);
}

/*
* We must create a new instance here as otherwise the marker will hold a reference to the key in the weak hash map; as
* those references are held strongly, this would give a strong reference back to the key preventing them from ever being
* collected. This also guarantees that no other strong reference can be held to the prefix anywhere.
*/
// noinspection RedundantStringConstructorCall
markers.put(new String(prefix), actualMarker);
} else {
actualMarker = maybeMarker;
}
}
this.marker = actualMarker;
}

private String getPrefix(AtomicReference<NodeIdListener> nodeIdListenerRef) {
NodeIdListener nodeIdListener = nodeIdListenerRef.get();
if (nodeIdListener != null) {
AtomicReference<String> nodeId = nodeIdListener.getNodeId();
return nodeId.get();
}
return NodeIdListener.UNKOWN_NODE_ID;
}


@Override
public void logMessage(final String fqcn, final Level level, final Marker marker, final Message message, final Throwable t) {
super.logMessage(fqcn, level, this.marker, message, t);
}

static class AtomicRefMarker implements Marker, StringBuilderFormattable {

private AtomicReference<NodeIdListener> nodeId;

/**
* Constructs a new Marker.
*/
public AtomicRefMarker(AtomicReference<NodeIdListener> nodeId) {
this.nodeId = nodeId;
}

@Override
public String getName() {
if (nodeId.get() != null && Strings.isNotEmpty(nodeId.get().getNodeId().get())) {
return nodeId.get().getNodeId().get();
}
return NodeIdListener.UNKOWN_NODE_ID;
}

@Override
public void formatTo(StringBuilder buffer) {
buffer.append(getName());
}
// just using the marker for its name in logs since the other methods are unimplemented

@Override
public Marker addParents(Marker... markers) {
return null;
}

@Override
public Marker[] getParents() {
return new Marker[0];
}

@Override
public boolean hasParents() {
return false;
}

@Override
public boolean isInstanceOf(Marker m) {
return false;
}

@Override
public boolean isInstanceOf(String name) {
return false;
}

@Override
public boolean remove(Marker marker) {
return false;
}

@Override
public Marker setParents(Marker... markers) {
return null;
}


}
}
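The `new String(prefix)` trick in the constructor above relies on WeakHashMap comparing keys by equals() while holding them only weakly. A small self-contained sketch of that behavior (illustrative only, not Elasticsearch code):

```java
import java.util.Map;
import java.util.WeakHashMap;

// Why the constructor stores `new String(prefix)` as the WeakHashMap key:
// a weak key is only collectible when no strong reference to that exact
// object remains. If the map key were the same String instance the marker
// (or a caller) holds, the entry could never be evicted. A defensive copy
// keeps the key a distinct object while equals()-based lookup still works.
public class WeakKeySketch {

    // Returns true if an equal-but-distinct key instance still resolves the entry.
    static boolean lookupWorksWithCopiedKey(String prefix) {
        Map<String, Object> markers = new WeakHashMap<>();
        markers.put(new String(prefix), new Object()); // distinct key instance
        return markers.containsKey(prefix);            // matched via equals()
    }

    public static void main(String[] args) {
        assert lookupWorksWithCopiedKey("nodeId-1");
    }
}
```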
@@ -0,0 +1,42 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/

package org.elasticsearch.common.logging;

import org.apache.logging.log4j.core.Logger;
import org.apache.logging.log4j.core.LoggerContext;

import java.util.concurrent.atomic.AtomicReference;

public class MarkerLoggerContext extends LoggerContext {

private final AtomicReference<NodeIdListener> nodeIdListener;

public MarkerLoggerContext(String name, AtomicReference<NodeIdListener> nodeIdListener) {
super(name);
this.nodeIdListener = nodeIdListener;
}

@Override
public Logger getLogger(final String name) {
Logger logger = getLogger(name, null);
return new MarkerLogger(logger, nodeIdListener);
}
}
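The getLogger override above decorates every logger the context hands out so each call carries the node-id marker. A minimal sketch of the same decorator idea, with a plain prefix standing in for the marker (all names here are hypothetical):

```java
// Decorator sketch: the context wraps each logger so every message is
// stamped with node-scoped context, here a plain "[nodeId]" prefix.
public class WrappingSketch {

    interface SimpleLogger {
        void log(String msg);
    }

    static SimpleLogger wrap(SimpleLogger delegate, String nodeId) {
        return msg -> delegate.log("[" + nodeId + "] " + msg);
    }

    public static void main(String[] args) {
        StringBuilder out = new StringBuilder();
        SimpleLogger wrapped = wrap(out::append, "abc123");
        wrapped.log("started");
        assert out.toString().equals("[abc123] started");
    }
}
```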
@@ -0,0 +1,37 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/

package org.elasticsearch.common.logging;

import org.apache.logging.log4j.core.impl.Log4jContextFactory;

import java.util.concurrent.atomic.AtomicReference;

public class MarkerLoggerContextFactory extends Log4jContextFactory {

private static final AtomicReference<NodeIdListener> nodeIdListener = new AtomicReference<>();

public MarkerLoggerContextFactory() {
super(new MarkerLoggerContextSelector(nodeIdListener));
}

public void setNodeIdListener(NodeIdListener nodeIdListener) {
MarkerLoggerContextFactory.nodeIdListener.set(nodeIdListener);
}
}
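This factory is wired in through the -Dlog4j2.loggerContextFactory line added to jvm.options above: log4j2 resolves that system property to a class name and instantiates it reflectively at startup. A hedged sketch of that lookup mechanism (using a stand-in class so it runs without log4j on the classpath):

```java
// How log4j2 resolves the log4j2.loggerContextFactory system property:
// read the class name, load the class, and instantiate it reflectively.
public class ContextFactoryLookupSketch {

    static Object instantiateFromProperty(String property) throws Exception {
        String className = System.getProperty(property);
        return Class.forName(className).getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        // Stand-in class; in Elasticsearch the jvm.options line names
        // org.elasticsearch.common.logging.MarkerLoggerContextFactory.
        System.setProperty("log4j2.loggerContextFactory", "java.util.ArrayList");
        Object factory = instantiateFromProperty("log4j2.loggerContextFactory");
        assert factory instanceof java.util.ArrayList;
    }
}
```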