
Logs appear on the console without control or suppression, bypassing ILogger abstraction #2386

Open
8 tasks done
ArturWincenciak opened this issue Dec 18, 2024 · 0 comments

ArturWincenciak commented Dec 18, 2024

Description

We have encountered a critical issue with logging. Despite configuring logging through the ASP.NET Core ILogger abstraction, which we have integrated with NLog to produce JSON-formatted logs, redundant log messages appear directly on the console in plain-text format. This behavior is problematic for our observability setup and creates significant operational challenges.

Observed Behavior:

Occasionally, log messages like the following appear directly on the console:

%3|1733736806.005|FAIL|rdkafka#consumer-1| [thrd:localhost:19092/bootstrap]: localhost:19092/0: Connect to ipv4#127.0.0.1:19092 failed: Unknown error (after 2032ms in state CONNECT)
%4|1733736807.517|SESSTMOUT|rdkafka#consumer-1| [thrd:main]: Consumer group session timed out (in join-state steady) after 45092 ms without a successful response from the group coordinator (broker -1, last error was Success): revoking assignment and rejoining group
%3|1733736814.215|FAIL|rdkafka#consumer-1| [thrd:localhost:19092/bootstrap]: localhost:19092/0: Connect to ipv6#[::1]:19092 failed: Unknown error (after 2037ms in state CONNECT)

We expect all logs to be routed through our ILogger implementation, where we have strict control over formatting. However, the library redundantly emits these logs directly to the console, bypassing our logging configuration.

We have configured the producer builder using the following approach:

return new ProducerBuilder<string, TMessage>(producerConfig)
    .SetValueSerializer(new ProtobufSerializer<TMessage>(schemaRegistryClient))
    .SetLogHandler(HandleProducerLog)
    .SetErrorHandler(HandleProducerError)
    .Build();

While SetLogHandler and SetErrorHandler correctly capture and format the logs, redundant plain-text logs still appear on the console, and there seems to be no way to suppress them.
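
For context, the handlers themselves are thin adapters onto ILogger. The sketch below shows what HandleProducerLog and HandleProducerError can look like; the _logger field and the exact SyslogLevel-to-LogLevel mapping are illustrative assumptions, not the verbatim production code:

    // Illustrative only: forwards librdkafka log/error callbacks to an injected
    // ILogger (_logger). Requires Confluent.Kafka and Microsoft.Extensions.Logging.
    private void HandleProducerLog(IProducer<string, TMessage> producer, LogMessage logMessage)
    {
        // Map librdkafka syslog levels to Microsoft.Extensions.Logging levels.
        var level = logMessage.Level switch
        {
            SyslogLevel.Emergency or SyslogLevel.Alert or SyslogLevel.Critical => LogLevel.Critical,
            SyslogLevel.Error => LogLevel.Error,
            SyslogLevel.Warning => LogLevel.Warning,
            SyslogLevel.Notice or SyslogLevel.Info => LogLevel.Information,
            _ => LogLevel.Debug
        };

        _logger.Log(level, "Kafka client {ClientName} [{Facility}]: {Message}",
            logMessage.Name, logMessage.Facility, logMessage.Message);
    }

    private void HandleProducerError(IProducer<string, TMessage> producer, Error error)
    {
        _logger.LogError("Kafka producer error {ErrorCode} (fatal: {IsFatal}): {Reason}",
            error.Code, error.IsFatal, error.Reason);
    }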

Why This Is Critical:

  1. Impact on Observability:
  • We use a sidecar container to capture console logs and forward them to OpenSearch. This sidecar expects logs to be in valid JSON format. When plain-text logs like the above appear, the sidecar crashes due to parsing errors.
  2. Loss of Critical Logs:
  • If the sidecar crashes, subsequent critical JSON logs are missed, potentially leading to undetected failures in production.

How to reproduce

  1. Set up a local Redpanda instance in Docker.
  2. Run a local application with a Kafka producer configured to connect to the Redpanda instance.
  3. Ensure the producer successfully connects and starts sending messages to Redpanda.
  4. Kill the Redpanda Docker container.
  5. Observe the application logs (a minimal program sketch is shown below):
    • You will see critical error logs captured by SetLogHandler and SetErrorHandler and routed through ILogger, as expected.
    • Additionally, redundant plain-text error logs appear directly on the console, bypassing ILogger, which is the core issue.
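
A minimal console program along these lines reproduces the behavior. This is a sketch under stated assumptions: a local broker at localhost:19092, a plain string producer, and the JSON console logger from Microsoft.Extensions.Logging.Console standing in for the NLog setup.

    using System;
    using System.Threading;
    using Confluent.Kafka;
    using Microsoft.Extensions.Logging;

    // Minimal reproduction sketch (not the production code): produce to a local
    // Redpanda/Kafka at localhost:19092, route client logs through ILogger via
    // SetLogHandler/SetErrorHandler, then kill the broker and watch the console.
    internal static class Program
    {
        private static void Main()
        {
            using var loggerFactory = LoggerFactory.Create(b => b.AddJsonConsole());
            var logger = loggerFactory.CreateLogger("Kafka");

            var config = new ProducerConfig { BootstrapServers = "localhost:19092" };

            using var producer = new ProducerBuilder<string, string>(config)
                .SetLogHandler((_, m) => logger.LogWarning("librdkafka {Facility}: {Message}", m.Facility, m.Message))
                .SetErrorHandler((_, e) => logger.LogError("Kafka error {Code}: {Reason}", e.Code, e.Reason))
                .Build();

            while (true)
            {
                // Once the broker is killed, connection failures are reported both via
                // the handlers above (JSON) and, redundantly, as plain text on the console.
                producer.Produce("demo-topic", new Message<string, string> { Key = "k", Value = "v" });
                Thread.Sleep(1000);
            }
        }
    }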

Client configuration

private ISchemaRegistryClient CreateSchemaRegistry()
{
	var schemaRegistryConfig = new SchemaRegistryConfig
	{
		Url = _options.Value.NewsPublisher.SchemaRegistry,
	};
	return new CachedSchemaRegistryClient(schemaRegistryConfig);
}

private IProducer<string, TMessage> CreateProducer()
{
	var producerConfig = new ProducerConfig
	{
		BootstrapServers = _options.Value.NewsPublisher.BootstrapServer,
		EnableIdempotence = false,
		Acks = Acks.All,
		CompressionType = CompressionType.Gzip,
		Partitioner = Partitioner.Murmur2
	};

	if (_options.Value.NewsPublisher.UseSasl)
	{
		producerConfig.SaslUsername = _options.Value.NewsPublisher.SaslUsername;
		producerConfig.SaslPassword = _options.Value.NewsPublisher.SaslPassword;
		producerConfig.SecurityProtocol = SecurityProtocol.SaslSsl;
		producerConfig.SaslMechanism = SaslMechanism.ScramSha256;
	}

	var schemaRegistryClient = _schemaRegistry.Value;
	return new ProducerBuilder<string, TMessage>(producerConfig)
		.SetValueSerializer(new ProtobufSerializer<TMessage>(schemaRegistryClient))
		.SetLogHandler(HandleProducerLog)
		.SetErrorHandler(HandleProducerError)
		.Build();
}
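
For completeness, the surrounding class is wired up roughly as follows. The class and options names and the Lazy<> fields are assumptions added to give the snippet context, not copied from the actual code base:

    // Assumed wiring around the methods above (illustrative): options bound via
    // IOptions<>, schema registry and producer created lazily, ILogger injected.
    internal sealed class NewsPublisher<TMessage>
        where TMessage : class, IMessage<TMessage>, new()
    {
        private readonly IOptions<PublisherOptions> _options;
        private readonly ILogger<NewsPublisher<TMessage>> _logger;
        private readonly Lazy<ISchemaRegistryClient> _schemaRegistry;
        private readonly Lazy<IProducer<string, TMessage>> _producer;

        public NewsPublisher(IOptions<PublisherOptions> options, ILogger<NewsPublisher<TMessage>> logger)
        {
            _options = options;
            _logger = logger;
            _schemaRegistry = new Lazy<ISchemaRegistryClient>(CreateSchemaRegistry);
            _producer = new Lazy<IProducer<string, TMessage>>(CreateProducer);
        }

        // CreateSchemaRegistry, CreateProducer, HandleProducerLog and
        // HandleProducerError are the methods shown above.
    }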

Checklist

Please provide the following information:

  • A complete (i.e. we can run it), minimal program demonstrating the problem. No need to supply a project file.
  • Confluent.Kafka nuget version.
    • Confluent.Kafka --version 2.6.1
  • Apache Kafka version.
    • docker.redpanda.com/redpandadata/redpanda:v24.1.12
  • Client configuration.
  • Operating system.
    • The issue can be easily reproduced on Windows 11 using Rancher Desktop.
    • In production, it occurs in a Kubernetes (K8s) environment.
  • Provide logs (with "debug" : "..." as necessary in configuration).
  • Provide broker log excerpts.
    • Broker logs are not needed in this case, as they are not relevant. The broker is simply shut down to reproduce the issue on the client.
  • Critical issue.