
Increase custom_insights_events.max_samples_stored #1541

Merged · 1 commit · Oct 13, 2022
36 changes: 19 additions & 17 deletions CHANGELOG.md
@@ -2,15 +2,17 @@

## v8.12.0

Version 8.12.0 of the agent delivers some valuable code cleanup.
Version 8.12.0 of the agent delivers some valuable code cleanup, and increases the default number of recorded Custom Events.

* **Cleanup: Remove orphaned code from unit tests**

As outlined by [newrelic/newrelic-ruby-agent#1181](https://github.com/newrelic/newrelic-ruby-agent/issues/1181), the project's unit tests have ended up with orphaned content that has become vestigial. Some good related cleanup was performed for this release. [PR#1537](https://github.com/newrelic/newrelic-ruby-agent/pull/1537)

Thank you to [@ohbarye](https://github.com/ohbarye) for contributing this helpful cleanup!

* **Increase default for `custom_insights_events.max_samples_stored`**

New Relic has discovered that a large number of [Custom Events](https://docs.newrelic.com/docs/data-apis/custom-data/custom-events/report-custom-event-data/) are dropped due to the configured value for `custom_insights_events.max_samples_stored`. To help customers receive more of their custom events, we're raising the default maximum number of custom events stored per minute from 1,000 to 3,000. The highest possible number of events that can be sent per minute is 100,000.
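
  Applications that need a different limit can override the default in their own configuration. A minimal sketch, following the newrelic.yml style used elsewhere in this changelog (the 5000 shown is purely illustrative):

  ```
  # newrelic.yml
  # Agent default is now 3000; the per-minute ceiling is 100,000 events.
  custom_insights_events.max_samples_stored: 5000
  ```
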
## v8.11.0

Version 8.11.0 of the agent updates the `newrelic deployments` command to work with API keys issued to newer accounts, fixes a memory leak in the instrumentation of Curb error handling, further preps for Ruby 3.2.0 support, and includes several community member driven cleanup and improvement efforts. Thank you to everyone involved!
@@ -60,15 +62,15 @@


* **Bugfix: Missing unscoped metrics when instrumentation.thread.tracing is enabled**

Previously, when `instrumentation.thread.tracing` was set to `true`, some Puma applications encountered a bug where a varying number of unscoped metrics would be missing. The agent will now correctly store and send all unscoped metrics (see the configuration sketch below).

Thank you to @texpert for providing details of their situation to help resolve the issue.
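
  A minimal newrelic.yml sketch of the flag referenced above (the flag name comes from this entry; the value simply shows how it is toggled):

  ```
  # newrelic.yml
  # Enables the thread tracing discussed in the entry above.
  instrumentation.thread.tracing: true
  ```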


* **Bugfix: gRPC instrumentation causes ArgumentError when other Google gems are present**

Previously, when the agent had gRPC instrumentation enabled in an application using other gems (such as google-ads-googleads), the instrumentation could cause the error `ArgumentError: wrong number of arguments (given 3, expected 2)`. The gRPC instrumentation has been updated to prevent this issue from occurring in the future.

Thank you to @FeminismIsAwesome for bringing this issue to our attention.

@@ -105,26 +107,26 @@


* **Bugfix: Error when setting the yaml configuration with `transaction_tracer.transaction_threshold: apdex_f`**

  Originally, the agent only read `transaction_tracer.transaction_threshold` from newrelic.yml correctly when it was written across two lines, as nested keys.

Example:

```
# newrelic.yml
transaction_tracer:
  transaction_threshold: apdex_f
```

When this was instead changed to be on one line, the agent was not able to correctly identify the value of apdex_f.

Example:
```
# newrelic.yml
transaction_tracer.transaction_threshold: apdex_f
```
This would prevent transactions from finishing due to the error `ArgumentError: comparison of Float with String failed`. This has been corrected, and the agent now processes a newrelic.yml with a one-line `transaction_tracer.transaction_threshold: apdex_f` correctly.

Thank you to @oboxodo for bringing this to our attention.


@@ -134,8 +136,8 @@


## v8.9.0


* **Add support for Dalli 3.1.0 to Dalli 3.2.2**

Dalli versions 3.1.0 and above include breaking changes where the agent previously hooked into the gem. We have updated our instrumentation to correctly hook into Dalli 3.1.0 and above. At this time, 3.2.2 is the latest Dalli version and is confirmed to be supported.
@@ -147,9 +149,9 @@

* **Bugfix: Use read_nonblock instead of read on pipe**

Previously, our PipeChannelManager was using `read`, which could cause Resque jobs to get stuck in some versions. This change updates the PipeChannelManager to use `read_nonblock` instead. This method can leverage error handling to allow the instrumentation to gracefully log a message and exit the stuck Resque job.
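
  A rough sketch of the non-blocking read pattern described above, assuming a plain Ruby pipe rather than the agent's actual PipeChannelManager: `read_nonblock` raises instead of blocking, so the caller can rescue, log, and move on.

  ```
  # Illustration only: read from a pipe without blocking the calling job.
  reader, writer = IO.pipe
  writer.write('marshalled agent data')
  writer.close

  begin
    data = reader.read_nonblock(65_536)
    puts "read #{data.bytesize} bytes from the pipe"
  rescue IO::WaitReadable
    # Nothing to read yet; wait briefly and retry instead of hanging forever.
    IO.select([reader], nil, nil, 1)
    retry
  rescue EOFError
    # Writer closed the pipe with no data; log and exit gracefully.
    puts 'pipe closed before any data arrived'
  end
  ```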



## v8.8.0

* **Support Makara database adapters with ActiveRecord**
@@ -246,7 +248,7 @@
* **Bugfix: Error events missing attributes when created outside of a transaction**

Previously, the agent was not assigning a priority to error events created by calling `notice_error` outside the scope of a transaction. This caused issues with sampling when the error event buffer was full, resulting in a `NoMethodError: undefined method '<' for nil:NilClass` in the newrelic_agent.log. This bugfix ensures that a priority is always assigned to error events so that the agent can sample them correctly. Thank you to @olleolleolle for bringing this issue to our attention.
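
  For reference, this is the kind of out-of-transaction call the fix covers; a minimal sketch using the public `notice_error` API (the error and custom parameters are made up for illustration):

  ```
  require 'newrelic_rpm'

  # Report an error outside of any web or background transaction,
  # for example from a one-off maintenance script.
  begin
    raise StandardError, 'nightly cleanup failed'
  rescue StandardError => e
    # With this fix, the resulting error event always receives a priority,
    # so sampling works even when the error event buffer is full.
    NewRelic::Agent.notice_error(e, custom_params: { job: 'nightly_cleanup' })
  end
  ```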



## v8.6.0
2 changes: 1 addition & 1 deletion lib/new_relic/agent/configuration/default_source.rb
@@ -2004,7 +2004,7 @@ def self.enforce_fallback(allowed_values: nil, fallback: nil)
:description => 'If `true`, the agent captures [custom events](/docs/insights/new-relic-insights/adding-querying-data/inserting-custom-events-new-relic-apm-agents).'
},
:'custom_insights_events.max_samples_stored' => {
:default => 1000,
:default => 3000,
Contributor:

Is there any impact to increasing this number? I'm wondering how we came up with 1000 in the first place, and why 3000 is the new number.

Contributor Author:

Hi Hannah! Please see this slack thread (internal)

1000 is actually lower than what all the other agents have at this time. The spec default is 10K per limit, so we're getting closer to the spec with a smaller performance impact by changing the value to 3K.

:public => true,
:type => Integer,
:allowed_from_server => true,
2 changes: 1 addition & 1 deletion newrelic.yml
@@ -145,7 +145,7 @@ common: &default_settings

# Specify a maximum number of custom Insights events to buffer in memory at a
# time.
# custom_insights_events.max_samples_stored: 1000
# custom_insights_events.max_samples_stored: 3000

# If false, the agent will not add database_name parameter to transaction or
# slow sql traces.
@@ -72,7 +72,7 @@ def test_post_includes_metadata
NewRelic::Agent.agent.send(:harvest_and_send_custom_event_data)
post = last_custom_event_post

assert_equal({"reservoir_size" => 1000, "events_seen" => 10}, post.reservoir_metadata)
assert_equal({"reservoir_size" => 3000, "events_seen" => 10}, post.reservoir_metadata)
end

def last_custom_event_post
@@ -11,7 +11,7 @@ def test_sends_all_event_capacities_on_connect
expected = {
'harvest_limits' => {
"analytic_event_data" => 1200,
"custom_event_data" => 1000,
"custom_event_data" => 3000,
"error_event_data" => 100,
"span_event_data" => 2000,
"log_event_data" => 10000
@@ -30,7 +30,7 @@ def test_sets_event_report_period_on_connect_response
"report_period_ms" => 5000,
"harvest_limits" => {
"analytic_event_data" => 1200,
"custom_event_data" => 1000,
"custom_event_data" => 3000,
"error_event_data" => 100,
"log_event_data" => 10000
}
@@ -51,7 +51,7 @@ def test_resets_event_report_period_on_reconnect
"report_period_ms" => 5000,
"harvest_limits" => {
"analytic_event_data" => 1200,
"custom_event_data" => 1000,
"custom_event_data" => 3000,
"error_event_data" => 100,
"log_event_data" => 10000
}