Implemented Sentry #237
Conversation
How about implementing it in the application-level logging tool (goglog)?
Hi @tsaikd, Sentry can work alongside goglog, complementing the existing logging mechanism without introducing significant overhead. I feel this is needed in the project, and it stays optional of course, depending on whether GS_SENTRY_DSN is set or not.
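For illustration, a minimal sketch of that optional initialisation, assuming only that GS_SENTRY_DSN is read from the environment; the helper name and package are hypothetical, not the PR's actual code:

package sentryinit

import (
	"os"

	"github.com/getsentry/sentry-go"
)

// initSentry is a hypothetical helper: Sentry is only initialised
// when GS_SENTRY_DSN is set, so the integration remains optional.
func initSentry() (enabled bool, err error) {
	dsn := os.Getenv("GS_SENTRY_DSN")
	if dsn == "" {
		return false, nil // Sentry disabled, gogstash runs as before
	}
	err = sentry.Init(sentry.ClientOptions{
		Dsn:              dsn,
		TracesSampleRate: 1.0,
	})
	return err == nil, err
}

// On shutdown, buffered events should be flushed, e.g.
// defer sentry.Flush(2 * time.Second).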
In the PR, you only send the error logs to Sentry in the output/elastic module. I mean, you could send the application-level error logs to Sentry in the goglog package, which would cover a much wider range of error logs.
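As a sketch of that idea, assuming goglog is backed by a logrus logger, a hook along these lines could forward error-level entries to Sentry; the type name and wiring are illustrative, not the PR's actual code:

package sentryhook

import (
	"github.com/getsentry/sentry-go"
	"github.com/sirupsen/logrus"
)

// SentryHook is an illustrative logrus hook that forwards
// error-and-above log entries to Sentry as events.
type SentryHook struct{}

// Levels restricts the hook to the severities worth reporting.
func (h *SentryHook) Levels() []logrus.Level {
	return []logrus.Level{logrus.ErrorLevel, logrus.FatalLevel, logrus.PanicLevel}
}

// Fire builds a Sentry event with an explicit level, avoiding
// mutation of the shared scope.
func (h *SentryHook) Fire(entry *logrus.Entry) error {
	event := sentry.NewEvent()
	event.Message = entry.Message
	if entry.Level == logrus.FatalLevel || entry.Level == logrus.PanicLevel {
		event.Level = sentry.LevelFatal
	} else {
		event.Level = sentry.LevelError
	}
	sentry.CaptureEvent(event)
	return nil
}

// Wiring (assuming goglog exposes its underlying logrus logger):
//   goglog.Logger.AddHook(&SentryHook{})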
Hi @tsaikd,
@tsaikd, can you please review and approve this?
Simple tests to check Sentry SDK usage. ConfigureScope and WithScope do not work atomically; some messages end up with the wrong log level.

package sentrytest // package name is illustrative

import (
	"fmt"
	"testing"
	"time"

	"github.com/getsentry/sentry-go"
	"github.com/stretchr/testify/require"
)

func TestSentryParallel(t *testing.T) {
	err := sentry.Init(sentry.ClientOptions{
		Dsn:              "",
		TracesSampleRate: 1.0,
	})
	require.NoError(t, err)

	// Cloned hubs carry their own scope, so the level set here is stable.
	hubInfo := sentry.CurrentHub().Clone()
	hubInfo.ConfigureScope(func(scope *sentry.Scope) {
		scope.SetLevel(sentry.LevelInfo)
	})
	hubWarn := sentry.CurrentHub().Clone()
	hubWarn.ConfigureScope(func(scope *sentry.Scope) {
		scope.SetLevel(sentry.LevelWarning)
	})

	for i := 0; i < 60; i++ {
		i := i
		t.Run(fmt.Sprintf("%d", i), func(t *testing.T) {
			t.Parallel()
			switch {
			case i%6 == 0:
				sentry.WithScope(func(scope *sentry.Scope) {
					scope.SetLevel(sentry.LevelInfo)
					sentry.CaptureMessage(fmt.Sprintf("Scope Info %d", i))
				})
			case i%6 == 1:
				sentry.WithScope(func(scope *sentry.Scope) {
					scope.SetLevel(sentry.LevelWarning)
					sentry.CaptureMessage(fmt.Sprintf("Scope Warn %d", i))
				})
			case i%6 == 2:
				sentry.ConfigureScope(func(scope *sentry.Scope) {
					scope.SetLevel(sentry.LevelInfo)
				})
				sentry.CaptureMessage(fmt.Sprintf("Configure Info %d", i))
			case i%6 == 3:
				sentry.ConfigureScope(func(scope *sentry.Scope) {
					scope.SetLevel(sentry.LevelWarning)
				})
				sentry.CaptureMessage(fmt.Sprintf("Configure Warn %d", i))
			case i%6 == 4:
				hubInfo.CaptureMessage(fmt.Sprintf("Hub Info %d", i))
			case i%6 == 5:
				hubWarn.CaptureMessage(fmt.Sprintf("Hub Warn %d", i))
			}
		})
	}

	t.Cleanup(func() {
		sentry.Flush(5 * time.Second)
		hubInfo.Flush(5 * time.Second)
		hubWarn.Flush(5 * time.Second)
	})
}
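What the test demonstrates (as noted at the top of the snippet): the package-level WithScope and ConfigureScope helpers all share the global hub, so under t.Parallel() one subtest's level change can leak into another's CaptureMessage, producing events with the wrong level. The cloned hubInfo and hubWarn hubs each own their scope and report consistently, which motivates the per-level hub approach discussed later in this thread.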
@tsaikd, tests updated. Can you please review now?
@tsaikd, fixes have been done. Can you please review now?
@tsaikd, can you please review once?
Hi @tsaikd, we've made the changes you suggested by removing flush after each call. Regarding the data race issue previously raised, we couldn't find a better solution. We tried assigning hubs for Error, Warn, and Fatal directly in the struct, but faced issues because goglog initializes before Sentry. Our current approach assigns these hubs once to the struct for reuse, or alternatively, we can create new functions for each hub. If you have any other suggestions, please let us know. We're open to exploring different approaches.
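For reference, a minimal sketch of the "assign the hubs once to the struct" idea; the type and function names here are hypothetical, not the PR's code, and it assumes it runs after sentry.Init:

package sentrylog

import "github.com/getsentry/sentry-go"

// levelHubs is an illustrative holder for per-level Sentry hubs.
// Each hub is cloned once, so its scope (and level) never races
// with other goroutines.
type levelHubs struct {
	errorHub *sentry.Hub
	warnHub  *sentry.Hub
	fatalHub *sentry.Hub
}

// newLevelHubs must run after sentry.Init, which is why the thread
// mentions reordering goglog/Sentry initialisation.
func newLevelHubs() *levelHubs {
	clone := func(level sentry.Level) *sentry.Hub {
		h := sentry.CurrentHub().Clone()
		h.ConfigureScope(func(scope *sentry.Scope) {
			scope.SetLevel(level)
		})
		return h
	}
	return &levelHubs{
		errorHub: clone(sentry.LevelError),
		warnHub:  clone(sentry.LevelWarning),
		fatalHub: clone(sentry.LevelFatal),
	}
}

// Error reports a message with the error level via the dedicated hub.
func (l *levelHubs) Error(msg string) {
	l.errorHub.CaptureMessage(msg)
}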
What's your use case? Do you want to log into Sentry from some inputs (e.g. files/docker), or log the gogstash app execution messages into Sentry?
Often, we encounter failures or issues with ingestions to OpenSearch/Elasticsearch without even noticing. In such cases Sentry will keep us informed of failures, especially where we use Docker and the gogstash container recovers from failures through Docker restart policies. This will help us stay aware of any issues and take appropriate actions, such as resolving field mapping or index conflicts.
Do you mean you are running gogstash in a Docker container to collect other application logs (e.g. nginx) and save them to Elasticsearch?
Yes, we use a Docker container for gogstash to collect logs from various sources like Node.js applications and Nginx. These logs are initially ingested into Redis, and then fed into gogstash using Redis as an input. Gogstash handles the flushing of logs to OpenSearch/Elasticsearch. However, we've observed frequent failures on the OpenSearch/Elasticsearch side due to incorrect data types sent from the applications. These failures often go unnoticed.
In your use case, the sentry should not be an output module, and that's why you have the issue. You can put the sentry config at
@tsaikd As suggested, we tried initialising Sentry in the config, but the actual problem was with goglog, so we have changed how goglog is loaded initially. The variables for the different Sentry hubs now work fine, but to fix this we had to move the Sentry setup into the gogstash function. Could you please review now?
This PR integrates Sentry into gogstash to enhance error monitoring and logging capabilities.
With this integration, we can now capture and report errors more effectively, including those that occur during requests to Elastic/OpenSearch.
Sentry can be enabled by setting the GS_SENTRY_DSN environment variable.
This can be done with export in Linux environments or via environment variables in Docker environments.