Cherry-pick #4880 to 6.0: Remove all references to removed output.X.flush_interval settings #4971

Merged 1 commit on Aug 24, 2017
1 change: 1 addition & 0 deletions CHANGELOG.asciidoc
@@ -19,6 +19,7 @@ https://github.com/elastic/beats/compare/v6.0.0-beta1...master[Check the HEAD diff]
- Fail if removed setting output.X.flush_interval is explicitly configured.
- Rename the `/usr/bin/beatname.sh` script (e.g. `metricbeat.sh`) to `/usr/bin/beatname`. {pull}4933[4933]
- Beat does not start if elasticsearch index pattern was modified but not the template name and pattern. {issue}4769[4769]
- Fail if removed setting output.X.flush_interval is explicitly configured.

*Auditbeat*

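The changelog entry is a behaviour change, not just a documentation cleanup: a configuration that still sets output.X.flush_interval now stops the Beat at startup instead of being silently ignored. The following is a minimal, self-contained sketch of that behaviour, not the Beats implementation; the names checkRemoved and removedSettings are illustrative, and gopkg.in/yaml.v2 stands in for libbeat's real config loading.

package main

import (
	"fmt"
	"log"

	yaml "gopkg.in/yaml.v2"
)

// removedSettings lists output options dropped in 6.0; only flush_interval
// comes from this PR, the slice just leaves room for further settings.
var removedSettings = []string{"flush_interval"}

// checkRemoved reports an error if any removed setting is still explicitly
// configured in the given output section.
func checkRemoved(output map[interface{}]interface{}) error {
	for _, key := range removedSettings {
		if _, found := output[key]; found {
			return fmt.Errorf("setting '%v' has been removed, please remove it from your configuration", key)
		}
	}
	return nil
}

func main() {
	// A 5.x-style configuration that still carries the removed option.
	raw := []byte(`
output.elasticsearch:
  hosts: ["localhost:9200"]
  bulk_max_size: 2048
  flush_interval: 1s
`)

	var cfg map[string]map[interface{}]interface{}
	if err := yaml.Unmarshal(raw, &cfg); err != nil {
		log.Fatal(err)
	}
	if err := checkRemoved(cfg["output.elasticsearch"]); err != nil {
		log.Fatal(err) // a Beat would refuse to start at this point
	}
}

Deleting the flush_interval line makes the check pass; that is the only migration step the changelog entry asks of users.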
8 changes: 0 additions & 8 deletions auditbeat/auditbeat.reference.yml
@@ -237,11 +237,6 @@ output.elasticsearch:
# Configure http request timeout before failing an request to Elasticsearch.
#timeout: 90

# The number of seconds to wait for new events between two bulk API index requests.
# If `bulk_max_size` is reached before this interval expires, addition bulk index
# requests are made.
#flush_interval: 1s

# Use SSL settings for HTTPS. Default is true.
#ssl.enabled: true

@@ -441,9 +436,6 @@ output.elasticsearch:
# on error.
#required_acks: 1

# The number of seconds to wait for new events between two producer API calls.
#flush_interval: 1s

# The configurable ClientID used for logging, debugging, and auditing
# purposes. The default is "beats".
#client_id: beats
8 changes: 0 additions & 8 deletions filebeat/filebeat.reference.yml
@@ -628,11 +628,6 @@ output.elasticsearch:
# Configure http request timeout before failing an request to Elasticsearch.
#timeout: 90

# The number of seconds to wait for new events between two bulk API index requests.
# If `bulk_max_size` is reached before this interval expires, addition bulk index
# requests are made.
#flush_interval: 1s

# Use SSL settings for HTTPS. Default is true.
#ssl.enabled: true

@@ -832,9 +827,6 @@ output.elasticsearch:
# on error.
#required_acks: 1

# The number of seconds to wait for new events between two producer API calls.
#flush_interval: 1s

# The configurable ClientID used for logging, debugging, and auditing
# purposes. The default is "beats".
#client_id: beats
8 changes: 0 additions & 8 deletions heartbeat/heartbeat.reference.yml
@@ -386,11 +386,6 @@ output.elasticsearch:
# Configure http request timeout before failing an request to Elasticsearch.
#timeout: 90

# The number of seconds to wait for new events between two bulk API index requests.
# If `bulk_max_size` is reached before this interval expires, addition bulk index
# requests are made.
#flush_interval: 1s

# Use SSL settings for HTTPS. Default is true.
#ssl.enabled: true

@@ -590,9 +585,6 @@ output.elasticsearch:
# on error.
#required_acks: 1

# The number of seconds to wait for new events between two producer API calls.
#flush_interval: 1s

# The configurable ClientID used for logging, debugging, and auditing
# purposes. The default is "beats".
#client_id: beats
8 changes: 0 additions & 8 deletions libbeat/_meta/config.reference.yml
@@ -172,11 +172,6 @@ output.elasticsearch:
# Configure http request timeout before failing an request to Elasticsearch.
#timeout: 90

# The number of seconds to wait for new events between two bulk API index requests.
# If `bulk_max_size` is reached before this interval expires, addition bulk index
# requests are made.
#flush_interval: 1s

# Use SSL settings for HTTPS. Default is true.
#ssl.enabled: true

@@ -376,9 +371,6 @@ output.elasticsearch:
# on error.
#required_acks: 1

# The number of seconds to wait for new events between two producer API calls.
#flush_interval: 1s

# The configurable ClientID used for logging, debugging, and auditing
# purposes. The default is "beats".
#client_id: beats
10 changes: 0 additions & 10 deletions libbeat/docs/outputconfig.asciidoc
@@ -292,12 +292,6 @@ spooler size.

The http request timeout in seconds for the Elasticsearch request. The default is 90.

===== `flush_interval`

The number of seconds to wait for new events between two bulk API index requests.
If `bulk_max_size` is reached before this interval expires, additional bulk index
requests are made.

===== `ssl`

Configuration options for SSL parameters like the certificate authority to use
@@ -731,10 +725,6 @@ The ACK reliability level required from broker. 0=no response, 1=wait for local

Note: If set to 0, no ACKs are returned by Kafka. Messages might be lost silently on error.

===== `flush_interval`

The number of seconds to wait for new events between two producer API calls.

===== `ssl`

Configuration options for SSL parameters like the root CA for Kafka connections. See
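The two asciidoc sections removed above were the reference documentation for the old semantics: flush_interval was how long an output waited for new events before sending a partial batch, and a batch went out early whenever bulk_max_size filled up first. In 6.0 the publisher pipeline handles that trade-off internally. The sketch below only illustrates the size-or-timeout pattern those paragraphs described; every name in it is invented for the example and none of it is a Beats API.

package main

import (
	"fmt"
	"time"
)

// batcher collects events and flushes them either when the batch reaches
// bulkMaxSize or when flushInterval elapses without the batch filling up.
func batcher(events <-chan string, bulkMaxSize int, flushInterval time.Duration, flush func([]string)) {
	batch := make([]string, 0, bulkMaxSize)
	timer := time.NewTimer(flushInterval)
	defer timer.Stop()

	for {
		select {
		case ev, ok := <-events:
			if !ok {
				if len(batch) > 0 {
					flush(batch) // drain what is left before shutting down
				}
				return
			}
			batch = append(batch, ev)
			if len(batch) == bulkMaxSize {
				// bulk_max_size reached before the interval expired:
				// an additional request is made right away.
				flush(batch)
				batch = batch[:0]
				timer.Reset(flushInterval)
			}
		case <-timer.C:
			// flush_interval expired: send whatever accumulated, even a
			// partial batch.
			if len(batch) > 0 {
				flush(batch)
				batch = batch[:0]
			}
			timer.Reset(flushInterval)
		}
	}
}

func main() {
	events := make(chan string)
	go func() {
		for i := 0; i < 5; i++ {
			events <- fmt.Sprintf("event-%d", i)
		}
		close(events)
	}()
	batcher(events, 2, time.Second, func(b []string) { fmt.Println("flush:", b) })
}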
1 change: 0 additions & 1 deletion libbeat/outputs/fileout/file.go
@@ -31,7 +31,6 @@ func makeFileout(
}

// disable bulk support in publisher pipeline
cfg.SetInt("flush_interval", -1, -1)
cfg.SetInt("bulk_max_size", -1, -1)

fo := &fileOutput{beat: beat, stats: stats}
2 changes: 0 additions & 2 deletions libbeat/outputs/logstash/logstash_integration_test.go
@@ -148,12 +148,10 @@ func newTestElasticsearchOutput(t *testing.T, test string) *testOutputer {
index := testElasticsearchIndex(test)
connection := esConnect(t, index)

flushInterval := 0
bulkSize := 0
config, _ := common.NewConfigFrom(map[string]interface{}{
"hosts": []string{getElasticsearchHost()},
"index": connection.index,
"flush_interval": &flushInterval,
"bulk_max_size": &bulkSize,
"username": os.Getenv("ES_USER"),
"password": os.Getenv("ES_PASS"),
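The integration test above simply drops flush_interval from the map it feeds to common.NewConfigFrom, since the setting no longer exists. For completeness, a table-driven test along the lines below could pin down the new rejection behaviour; it is a sketch only, reusing the illustrative checkRemoved helper from the example after the changelog hunk, and is not part of the Beats test suite.

package main

import "testing"

// TestRemovedFlushInterval exercises the illustrative checkRemoved helper:
// a config without flush_interval passes, one that still sets it is rejected.
func TestRemovedFlushInterval(t *testing.T) {
	cases := []struct {
		name    string
		output  map[interface{}]interface{}
		wantErr bool
	}{
		{
			name: "clean 6.0 config",
			output: map[interface{}]interface{}{
				"hosts":         []string{"localhost:9200"},
				"bulk_max_size": 0,
			},
			wantErr: false,
		},
		{
			name: "leftover 5.x flush_interval",
			output: map[interface{}]interface{}{
				"hosts":          []string{"localhost:9200"},
				"flush_interval": "1s",
			},
			wantErr: true,
		},
	}

	for _, tc := range cases {
		err := checkRemoved(tc.output)
		if (err != nil) != tc.wantErr {
			t.Errorf("%s: got err=%v, wantErr=%v", tc.name, err, tc.wantErr)
		}
	}
}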
5 changes: 5 additions & 0 deletions libbeat/outputs/output_reg.go
@@ -4,6 +4,7 @@ import (
"fmt"

"github.com/elastic/beats/libbeat/common"
"github.com/elastic/beats/libbeat/common/cfgwarn"
)

var outputReg = map[string]Factory{}
@@ -42,5 +43,9 @@ func Load(info common.BeatInfo, stats *Stats, name string, config *common.Config
return Group{}, fmt.Errorf("output type %v undefined", name)
}

if err := cfgwarn.CheckRemoved5xSetting(config, "flush_interval"); err != nil {
return Fail(err)
}

return factory(info, stats, config)
}
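The hunk above is the functional core of the backport: Load now calls cfgwarn.CheckRemoved5xSetting(config, "flush_interval") before handing the config to the output factory, and returns Fail(err) if the removed setting is still present. As a rough idea of how such a helper can be structured when more than one setting needs checking, here is a sketch; it assumes common.Config exposes a HasField accessor, and it is not the actual cfgwarn implementation, which may differ in wording and detail.

package cfgwarnsketch

import (
	"fmt"

	"github.com/elastic/beats/libbeat/common"
)

// CheckRemovedSettings returns an error naming every removed setting that is
// still explicitly configured, so the caller can refuse to build the output.
func CheckRemovedSettings(cfg *common.Config, settings ...string) error {
	var stale []string
	for _, name := range settings {
		if cfg.HasField(name) {
			stale = append(stale, name)
		}
	}
	if len(stale) > 0 {
		return fmt.Errorf("settings %v have been removed in 6.0, please adjust your configuration", stale)
	}
	return nil
}

With a helper along these lines, each output type would need only a single call listing its removed settings, of which flush_interval is the one handled by this PR.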
1 change: 0 additions & 1 deletion libbeat/tests/system/test_base.py
@@ -154,7 +154,6 @@ def test_console_output_size_flush(self):
console={
"pretty": "false",
"bulk_max_size": 1,
"flush_interval": "1h"
}
)

8 changes: 0 additions & 8 deletions metricbeat/metricbeat.reference.yml
@@ -592,11 +592,6 @@ output.elasticsearch:
# Configure http request timeout before failing an request to Elasticsearch.
#timeout: 90

# The number of seconds to wait for new events between two bulk API index requests.
# If `bulk_max_size` is reached before this interval expires, addition bulk index
# requests are made.
#flush_interval: 1s

# Use SSL settings for HTTPS. Default is true.
#ssl.enabled: true

@@ -796,9 +791,6 @@ output.elasticsearch:
# on error.
#required_acks: 1

# The number of seconds to wait for new events between two producer API calls.
#flush_interval: 1s

# The configurable ClientID used for logging, debugging, and auditing
# purposes. The default is "beats".
#client_id: beats
8 changes: 0 additions & 8 deletions packetbeat/packetbeat.reference.yml
@@ -624,11 +624,6 @@ output.elasticsearch:
# Configure http request timeout before failing an request to Elasticsearch.
#timeout: 90

# The number of seconds to wait for new events between two bulk API index requests.
# If `bulk_max_size` is reached before this interval expires, addition bulk index
# requests are made.
#flush_interval: 1s

# Use SSL settings for HTTPS. Default is true.
#ssl.enabled: true

@@ -828,9 +823,6 @@ output.elasticsearch:
# on error.
#required_acks: 1

# The number of seconds to wait for new events between two producer API calls.
#flush_interval: 1s

# The configurable ClientID used for logging, debugging, and auditing
# purposes. The default is "beats".
#client_id: beats
8 changes: 0 additions & 8 deletions winlogbeat/winlogbeat.reference.yml
@@ -201,11 +201,6 @@ output.elasticsearch:
# Configure http request timeout before failing an request to Elasticsearch.
#timeout: 90

# The number of seconds to wait for new events between two bulk API index requests.
# If `bulk_max_size` is reached before this interval expires, addition bulk index
# requests are made.
#flush_interval: 1s

# Use SSL settings for HTTPS. Default is true.
#ssl.enabled: true

@@ -405,9 +400,6 @@ output.elasticsearch:
# on error.
#required_acks: 1

# The number of seconds to wait for new events between two producer API calls.
#flush_interval: 1s

# The configurable ClientID used for logging, debugging, and auditing
# purposes. The default is "beats".
#client_id: beats