Remove all references to removed output.X.flush_interval settings (elastic#4880)

* Remove all references to flush_interval

The output setting flush_interval was removed previously. Remove
this setting from the reference configurations and docs as well.

* Fail if removed flush_interval is configured
Steffen Siering authored and tsg committed Aug 14, 2017
1 parent a8b2192 commit a1b64cd
Showing 13 changed files with 6 additions and 70 deletions.
1 change: 1 addition & 0 deletions CHANGELOG.asciidoc
@@ -15,6 +15,7 @@ https://github.com/elastic/beats/compare/v6.0.0-beta1...master[Check the HEAD di
*Affecting all Beats*

- The log directory (`path.log`) for Windows services is now set to `C:\ProgramData\[beatname]\logs`. {issue}4764[4764]
- Fail if removed setting output.X.flush_interval is explicitly configured.

*Auditbeat*

8 changes: 0 additions & 8 deletions auditbeat/auditbeat.reference.yml
@@ -236,11 +236,6 @@ output.elasticsearch:
# Configure http request timeout before failing a request to Elasticsearch.
#timeout: 90

# The number of seconds to wait for new events between two bulk API index requests.
# If `bulk_max_size` is reached before this interval expires, additional bulk index
# requests are made.
#flush_interval: 1s

# Use SSL settings for HTTPS. Default is true.
#ssl.enabled: true

@@ -440,9 +435,6 @@ output.elasticsearch:
# on error.
#required_acks: 1

# The number of seconds to wait for new events between two producer API calls.
#flush_interval: 1s

# The configurable ClientID used for logging, debugging, and auditing
# purposes. The default is "beats".
#client_id: beats
8 changes: 0 additions & 8 deletions filebeat/filebeat.reference.yml
@@ -641,11 +641,6 @@ output.elasticsearch:
# Configure http request timeout before failing a request to Elasticsearch.
#timeout: 90

# The number of seconds to wait for new events between two bulk API index requests.
# If `bulk_max_size` is reached before this interval expires, additional bulk index
# requests are made.
#flush_interval: 1s

# Use SSL settings for HTTPS. Default is true.
#ssl.enabled: true

@@ -845,9 +840,6 @@ output.elasticsearch:
# on error.
#required_acks: 1

# The number of seconds to wait for new events between two producer API calls.
#flush_interval: 1s

# The configurable ClientID used for logging, debugging, and auditing
# purposes. The default is "beats".
#client_id: beats
8 changes: 0 additions & 8 deletions heartbeat/heartbeat.reference.yml
@@ -385,11 +385,6 @@ output.elasticsearch:
# Configure http request timeout before failing a request to Elasticsearch.
#timeout: 90

# The number of seconds to wait for new events between two bulk API index requests.
# If `bulk_max_size` is reached before this interval expires, additional bulk index
# requests are made.
#flush_interval: 1s

# Use SSL settings for HTTPS. Default is true.
#ssl.enabled: true

@@ -589,9 +584,6 @@ output.elasticsearch:
# on error.
#required_acks: 1

# The number of seconds to wait for new events between two producer API calls.
#flush_interval: 1s

# The configurable ClientID used for logging, debugging, and auditing
# purposes. The default is "beats".
#client_id: beats
8 changes: 0 additions & 8 deletions libbeat/_meta/config.reference.yml
@@ -171,11 +171,6 @@ output.elasticsearch:
# Configure http request timeout before failing a request to Elasticsearch.
#timeout: 90

# The number of seconds to wait for new events between two bulk API index requests.
# If `bulk_max_size` is reached before this interval expires, additional bulk index
# requests are made.
#flush_interval: 1s

# Use SSL settings for HTTPS. Default is true.
#ssl.enabled: true

@@ -375,9 +370,6 @@ output.elasticsearch:
# on error.
#required_acks: 1

# The number of seconds to wait for new events between two producer API calls.
#flush_interval: 1s

# The configurable ClientID used for logging, debugging, and auditing
# purposes. The default is "beats".
#client_id: beats
10 changes: 0 additions & 10 deletions libbeat/docs/outputconfig.asciidoc
@@ -292,12 +292,6 @@ spooler size.

The http request timeout in seconds for the Elasticsearch request. The default is 90.

===== `flush_interval`

The number of seconds to wait for new events between two bulk API index requests.
If `bulk_max_size` is reached before this interval expires, additional bulk index
requests are made.

===== `ssl`

Configuration options for SSL parameters like the certificate authority to use
@@ -731,10 +725,6 @@ The ACK reliability level required from broker. 0=no response, 1=wait for local

Note: If set to 0, no ACKs are returned by Kafka. Messages might be lost silently on error.

===== `flush_interval`

The number of seconds to wait for new events between two producer API calls.

===== `ssl`

Configuration options for SSL parameters like the root CA for Kafka connections. See
1 change: 0 additions & 1 deletion libbeat/outputs/fileout/file.go
@@ -32,7 +32,6 @@ func makeFileout(
}

// disable bulk support in publisher pipeline
cfg.SetInt("flush_interval", -1, -1)
cfg.SetInt("bulk_max_size", -1, -1)

fo := &fileOutput{beat: beat, stats: stats}
2 changes: 0 additions & 2 deletions libbeat/outputs/logstash/logstash_integration_test.go
@@ -148,12 +148,10 @@ func newTestElasticsearchOutput(t *testing.T, test string) *testOutputer {
index := testElasticsearchIndex(test)
connection := esConnect(t, index)

flushInterval := 0
bulkSize := 0
config, _ := common.NewConfigFrom(map[string]interface{}{
"hosts": []string{getElasticsearchHost()},
"index": connection.index,
"flush_interval": &flushInterval,
"bulk_max_size": &bulkSize,
"username": os.Getenv("ES_USER"),
"password": os.Getenv("ES_PASS"),
5 changes: 5 additions & 0 deletions libbeat/outputs/output_reg.go
@@ -5,6 +5,7 @@ import (

"github.com/elastic/beats/libbeat/beat"
"github.com/elastic/beats/libbeat/common"
"github.com/elastic/beats/libbeat/common/cfgwarn"
)

var outputReg = map[string]Factory{}
@@ -43,5 +44,9 @@ func Load(info beat.Info, stats *Stats, name string, config *common.Config) (Gro
return Group{}, fmt.Errorf("output type %v undefined", name)
}

if err := cfgwarn.CheckRemoved5xSetting(config, "flush_interval"); err != nil {
return Fail(err)
}

return factory(info, stats, config)
}
1 change: 0 additions & 1 deletion libbeat/tests/system/test_base.py
@@ -154,7 +154,6 @@ def test_console_output_size_flush(self):
console={
"pretty": "false",
"bulk_max_size": 1,
"flush_interval": "1h"
}
)

8 changes: 0 additions & 8 deletions metricbeat/metricbeat.reference.yml
@@ -605,11 +605,6 @@ output.elasticsearch:
# Configure http request timeout before failing a request to Elasticsearch.
#timeout: 90

# The number of seconds to wait for new events between two bulk API index requests.
# If `bulk_max_size` is reached before this interval expires, additional bulk index
# requests are made.
#flush_interval: 1s

# Use SSL settings for HTTPS. Default is true.
#ssl.enabled: true

@@ -809,9 +804,6 @@ output.elasticsearch:
# on error.
#required_acks: 1

# The number of seconds to wait for new events between two producer API calls.
#flush_interval: 1s

# The configurable ClientID used for logging, debugging, and auditing
# purposes. The default is "beats".
#client_id: beats
8 changes: 0 additions & 8 deletions packetbeat/packetbeat.reference.yml
@@ -623,11 +623,6 @@ output.elasticsearch:
# Configure http request timeout before failing a request to Elasticsearch.
#timeout: 90

# The number of seconds to wait for new events between two bulk API index requests.
# If `bulk_max_size` is reached before this interval expires, additional bulk index
# requests are made.
#flush_interval: 1s

# Use SSL settings for HTTPS. Default is true.
#ssl.enabled: true

@@ -827,9 +822,6 @@ output.elasticsearch:
# on error.
#required_acks: 1

# The number of seconds to wait for new events between two producer API calls.
#flush_interval: 1s

# The configurable ClientID used for logging, debugging, and auditing
# purposes. The default is "beats".
#client_id: beats
8 changes: 0 additions & 8 deletions winlogbeat/winlogbeat.reference.yml
@@ -200,11 +200,6 @@ output.elasticsearch:
# Configure http request timeout before failing a request to Elasticsearch.
#timeout: 90

# The number of seconds to wait for new events between two bulk API index requests.
# If `bulk_max_size` is reached before this interval expires, additional bulk index
# requests are made.
#flush_interval: 1s

# Use SSL settings for HTTPS. Default is true.
#ssl.enabled: true

@@ -404,9 +399,6 @@ output.elasticsearch:
# on error.
#required_acks: 1

# The number of seconds to wait for new events between two producer API calls.
#flush_interval: 1s

# The configurable ClientID used for logging, debugging, and auditing
# purposes. The default is "beats".
#client_id: beats