Promote Burrow 1.0 RC branch to master (linkedin#283)
* Merge burrow-1.0 RC branch

* Burrow 1.0 Release Candidate (linkedin#258)

* Replace burrow with the proposed 1.0 framework
Look, it's essentially a complete rewrite. There's almost nothing left of the original code here, and none of the modules have been fleshed out yet.

The overall changes:
* Make burrow itself a library wrapped by a thin main, so it can be embedded in other applications
* Move to a modular framework with well-defined interfaces between components
* Switch logging to uber/zap and lumberjack (see the logging sketch after this list)
* Start with support for parallel operation (notifier active everywhere) so load can be shared between instances
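
A minimal sketch of how uber/zap can be wired to lumberjack for rotated JSON logs. The file name, rotation limits, and log level below are assumptions for illustration, not Burrow's actual configuration:

```go
package main

import (
	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
	lumberjack "gopkg.in/natefinch/lumberjack.v2"
)

// newRotatingLogger is a hypothetical helper: zap writes JSON log lines
// through a lumberjack writer that handles size-based rotation.
func newRotatingLogger(path string) *zap.Logger {
	writer := zapcore.AddSync(&lumberjack.Logger{
		Filename:   path, // e.g. "logs/burrow.log"
		MaxSize:    100,  // megabytes before rotation (assumed value)
		MaxBackups: 10,   // rotated files to keep (assumed value)
		MaxAge:     30,   // days to keep rotated files (assumed value)
	})
	core := zapcore.NewCore(
		zapcore.NewJSONEncoder(zap.NewProductionEncoderConfig()),
		writer,
		zap.InfoLevel,
	)
	return zap.New(core)
}

func main() {
	logger := newRotatingLogger("logs/burrow.log")
	defer logger.Sync()
	logger.Info("burrow starting", zap.String("component", "main"))
}
```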

* Restructure a bit to resolve import cycles

* Make sure to gitignore the built binary

* Move modules to internal packages

* Tweak logging to work on windows

* Clean up coordinators a little more

* Fix syscalls for unix vs windows

* First pass at inmemory storage module

* Tests for inmemory, and fixes found during testing

* Additional tests to make sure channels are closed after replies

* Actually start the mainLoop

* Ensure only one storage module is allowed, and add coordinator tests

* Fix storage code and tests for problems found while testing evaluators

* Add a storage fixture that creates a coordinator with a storage module, for testing code outside the storage package

* Fixes to evaluator code based on testing

* Tests for the evaluator coordinator and caching module

* Add a fixture for the evaluator that other testing can use

* Add start/stop and multiple request tests for the evaluator coordinator

* Remove extra parens

* Fix config name

* Add group whitelists to storage module, along with tests
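
Roughly what a group whitelist check might look like if the whitelist is configured as a regular expression; the function and pattern here are illustrative, not Burrow's exact storage-module code:

```go
package main

import (
	"fmt"
	"regexp"
)

// acceptGroup returns true when no whitelist is set, or when the group name
// matches the configured whitelist pattern (assumed behaviour, sketched here
// outside the real storage module).
func acceptGroup(whitelist *regexp.Regexp, group string) bool {
	if whitelist == nil {
		return true
	}
	return whitelist.MatchString(group)
}

func main() {
	whitelist := regexp.MustCompile(`^(payments-.*|orders-.*)$`)
	for _, group := range []string{"payments-consumer", "console-consumer-41234"} {
		fmt.Printf("%s accepted: %v\n", group, acceptGroup(whitelist, group))
	}
}
```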

* Fix a potential bug in min-distance where we would never create a new offset

* More logging

* Add a group delete request for storage modules

* Added expiration of group data via lazy deletion on request

* First pass at cluster module for kafka with limited tests

* Add a shim interface for sarama.Client and sarama.Broker
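
A shim of this kind might look roughly like the sketch below: a narrow interface mirroring only the sarama.Client calls a module needs, so tests can substitute a mock. The interface name and method set are assumptions:

```go
package kafka

import "github.com/Shopify/sarama"

// Client is a hypothetical shim over sarama.Client covering just the calls a
// cluster module needs; a mock implementing this interface replaces sarama in
// unit tests.
type Client interface {
	Topics() ([]string, error)
	Partitions(topic string) ([]int32, error)
	GetOffset(topic string, partition int32, time int64) (int64, error)
	RefreshMetadata(topics ...string) error
	Close() error
}

// realClient delegates every call to the underlying sarama.Client.
type realClient struct {
	client sarama.Client
}

func (c *realClient) Topics() ([]string, error)            { return c.client.Topics() }
func (c *realClient) Partitions(t string) ([]int32, error) { return c.client.Partitions(t) }
func (c *realClient) GetOffset(t string, p int32, ts int64) (int64, error) {
	return c.client.GetOffset(t, p, ts)
}
func (c *realClient) RefreshMetadata(topics ...string) error { return c.client.RefreshMetadata(topics...) }
func (c *realClient) Close() error                           { return c.client.Close() }
```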

* Switch kafka cluster module to use the shim interface for sarama

* Add tests for the rest of the kafka cluster module

* Add a storage request for setting partition owner for a group

* Add kafka_client consumer module and tests

* Add consumer coordinator tests

* Move the storage request send helper to a new file

* Refactor names for the sarama shims

* Add a shim for go-zookeeper so we'll be able to test

* Implement the kafkazk consumer module and tests

* Add tests for validation routines

* Comment fix

* Add tests for helpers

* Add whitelist support to consumers

* Have the PID creator also check if the process exists before exiting

* Restructure main ZK as a coordinator to use the common interface

* Start notifiers, clean up some testing

* Add tests for HTTP notifier module

* Refactor notifier coordinator to move common logic out of the modules

* Refactor notifier whitelist and threshold accept logic to coordinator

* Move template execution up to a coordinator method for consistency

* Email notifier

* Slack notifier and tests

* Use asserts instead of panics for the HTTP tests

* Fix a case in the storage fixture where it won't get all the commits

* Check http notifier profile configs

* Make maxlag template helper use the CurrentLag field

* Rename NotifierModule to just Module

* Rename StorageModule to just Module

* Rename EvaluatorModule to just Module

* Add support for ZK locks, as well as tests

* Add a ticker that can be stopped and restarted

* Make the notifier coordinator use a ZK lock with the restartable ticker
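
Roughly how a ZooKeeper lock from samuel/go-zookeeper can ensure only one Burrow instance drives the notifier at a time; the addresses, lock path, and timeout below are placeholders:

```go
package main

import (
	"time"

	"github.com/samuel/go-zookeeper/zk"
)

func main() {
	// Connect to ZooKeeper (addresses and session timeout are placeholders).
	conn, _, err := zk.Connect([]string{"zk1.example.com:2181"}, 10*time.Second)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Take a cluster-wide lock so only one Burrow instance runs the notifier
	// ticker at a time; the lock path is illustrative.
	lock := zk.NewLock(conn, "/burrow/notifier/lock", zk.WorldACL(zk.PermAll))
	if err := lock.Lock(); err != nil {
		panic(err)
	}
	defer lock.Unlock()

	// ... start the (restartable) evaluation ticker while the lock is held ...
}
```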

* Add HTTP server and tests

* Update dependencies

* Clean up HTTP tests so we test the router configuration code

* Few more HTTP server tests, and flesh out log level set/get

* Reorder imports

* Fix copyright comments

* Formatting cleanup

* Pin httprouter to master, since it hasn't had a release in 2 years

* Touch up logging

* Remember to set the config as valid

* Use master branch of testify

* Updates found in testing

* Check for null fields in member metadata

* Fixes to metadata handling

* Add a worker pool for inmemory to consistently process groups
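
A sketch of the worker-pool idea under assumptions: hash each group name (xxhash is already a dependency) to pick a fixed worker, so a group's offsets are always processed in order by one goroutine. The worker count and routing are illustrative:

```go
package main

import (
	"fmt"
	"sync"

	"github.com/OneOfOne/xxhash"
)

func main() {
	const numWorkers = 4
	var wg sync.WaitGroup
	workers := make([]chan string, numWorkers)

	// Each worker drains its own channel, so requests for a given group are
	// always handled sequentially by the same goroutine.
	for i := 0; i < numWorkers; i++ {
		workers[i] = make(chan string, 16)
		wg.Add(1)
		go func(id int, requests <-chan string) {
			defer wg.Done()
			for group := range requests {
				fmt.Printf("worker %d processing offsets for %s\n", id, group)
			}
		}(i, workers[i])
	}

	// Hash the group name to choose a worker (a sketch of the routing idea,
	// not Burrow's exact implementation).
	for _, group := range []string{"payments", "orders", "audit", "payments"} {
		idx := xxhash.ChecksumString64(group) % numWorkers
		workers[idx] <- group
	}

	for _, ch := range workers {
		close(ch)
	}
	wg.Wait()
}
```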

* Remove the kafka_client mainLoop, as it's not useful

* Fix formatting and a duplicate logging field

* Add support for CORS headers on the HTTP server
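
One way CORS support can be added to a net/http server, as a small middleware; the allowed origin and route are placeholders, and Burrow's actual handler wiring may differ:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// withCORS wraps a handler and adds a configurable Access-Control-Allow-Origin
// header to every response; preflight OPTIONS requests get an empty 204.
func withCORS(allowedOrigin string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Access-Control-Allow-Origin", allowedOrigin)
		if r.Method == http.MethodOptions {
			w.WriteHeader(http.StatusNoContent)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/healthcheck", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "OK")
	})
	log.Fatal(http.ListenAndServe(":8000", withCORS("*", mux)))
}
```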

* Add a template helper for formatting timestamps using normal Time format strings
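
Such a helper could be a text/template function along these lines; the helper name and the assumption that timestamps are in milliseconds are illustrative:

```go
package main

import (
	"os"
	"text/template"
	"time"
)

func main() {
	funcs := template.FuncMap{
		// "formattimestamp" is a hypothetical name: it turns a millisecond
		// Unix timestamp into any standard Go time layout.
		"formattimestamp": func(millis int64, layout string) string {
			return time.Unix(0, millis*int64(time.Millisecond)).UTC().Format(layout)
		},
	}

	tmpl := template.Must(template.New("notify").Funcs(funcs).Parse(
		`Group stalled since {{ formattimestamp .Timestamp "2006-01-02 15:04:05 MST" }}` + "\n"))

	data := struct{ Timestamp int64 }{Timestamp: 1512086400000}
	_ = tmpl.Execute(os.Stdout, data)
}
```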

* Add support for basic auth in the HTTP notifier
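
For basic auth on outgoing notifier requests, the standard library call is enough; the credentials and URL below are placeholders:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	payload := []byte(`{"status":"WARN"}`) // placeholder notification body

	req, err := http.NewRequest(http.MethodPost, "https://alerts.example.com/hook", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	// Credentials would come from the notifier profile configuration.
	req.SetBasicAuth("burrow", "notifier-password")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("notifier POST status:", resp.Status)
}
```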

* Refactor config to use viper instead of gcfg
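
With viper, configuration is read from a file and looked up by key; a minimal sketch with illustrative key names (not necessarily Burrow's real schema):

```go
package main

import (
	"fmt"

	"github.com/spf13/viper"
)

func main() {
	// Read burrow.* (toml/yaml/json) from the configured directory; the key
	// names below are examples only.
	viper.SetConfigName("burrow")
	viper.AddConfigPath("/etc/burrow/")
	viper.SetDefault("httpserver.default.address", ":8000")

	if err := viper.ReadInConfig(); err != nil {
		panic(err)
	}

	fmt.Println("listen address:", viper.GetString("httpserver.default.address"))
}
```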

* Add more logging in Kafka clients, and fix config loading

* Fix typo in client-id config string

* Catch errors when starting coordinators

* Log the http listener info

* Clean up some of the logging

* Fix logging and notifiers from testing (linkedin#259)

* Fix notifier logic in 1.0 (linkedin#261)

* Fix how the extras field is pulled into the HTTP response structs

* Make sure the module's group accept check is always called

* Pause before testing stop on the storage coordinator

* Fix conditions where notifications are sent, and add a much more robust test

* 1.0 - Add jitter to notifier evaluations (linkedin#263)

* Change the evaluation loop so that evaluations for each consumer group are started on a jittered timer
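
The jitter could work roughly like this sketch: each consumer group's periodic evaluation starts at a random offset within the interval, so groups are not all evaluated at the same instant. The interval and group names are placeholders:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	interval := 10 * time.Second // evaluation interval (placeholder)
	groups := []string{"payments", "orders", "audit"}

	for _, group := range groups {
		// Delay each group's first evaluation by a random fraction of the
		// interval so evaluations are spread out instead of firing together.
		jitter := time.Duration(rand.Int63n(int64(interval)))
		go func(group string, delay time.Duration) {
			time.Sleep(delay)
			ticker := time.NewTicker(interval)
			defer ticker.Stop()
			for range ticker.C {
				fmt.Println("evaluating consumer group", group)
			}
		}(group, jitter)
	}

	// Let a few evaluations run; a real coordinator would manage shutdown.
	time.Sleep(3 * interval)
}
```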

* Reorder imports

* Burrow 1.0 config defaults (linkedin#264)

* Add owners to consumer group status response

* If no storage module is configured, use a default

* If no evaluator module is configured, use a default

* Fix default http server

* ConfigurationValid gets set by Start, not before

* Clean up methods that don't need to be exported

* Burrow 1.0 group blacklist (linkedin#266)

* Add group blacklists

* Reduce logging level for storage purging expired groups

* Start evaluator and httpserver before clusters/consumers

* Remove the requirement that you must have a cluster and consumer module defined

* Explicitly update metadata for topics that had errors fetching offsets (linkedin#267)

* Refresh metadata on leader failures as well (linkedin#268)

* Make sure that whenever we are reading the cluster map in the notifier, we have a lock (linkedin#269)

* Burrow 1.0 - No negative lag (linkedin#271)

* Lag values should always be unsigned ints
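
The fix amounts to clamping at zero before converting to an unsigned value; a small sketch of the idea (names are illustrative):

```go
package main

import "fmt"

// currentLag clamps at zero: if the committed offset is ahead of the broker's
// head offset (e.g. the head offset is slightly stale), report zero lag rather
// than letting an unsigned subtraction wrap to a huge value.
func currentLag(headOffset, committedOffset int64) uint64 {
	if committedOffset >= headOffset {
		return 0
	}
	return uint64(headOffset - committedOffset)
}

func main() {
	fmt.Println(currentLag(1000, 990))  // 10
	fmt.Println(currentLag(1000, 1003)) // 0 — never negative
}
```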

* Remove an unnecessary cast

* Update deps

* Start notifier before clusters and consumers (linkedin#272)

* Remove slack notifier (linkedin#278)

* Remove slack notifier
* Add example slack templates

* Burrow 1.0 - Godocs for everything (linkedin#281)

* Godoc docs for everything, and resolve all golint issues

* Burrow 1.0 - Doc cleanup (linkedin#282)

* Update example configuration files

* Fix example email template

* Update docs
Todd Palino authored Dec 1, 2017
1 parent 51d6ebc commit b5fb124
Showing 84 changed files with 12,993 additions and 3,962 deletions.
18 changes: 10 additions & 8 deletions .gitignore
@@ -1,8 +1,10 @@
-burrow-src
-Burrow
-.*.swp
-!config
-log
-.idea
-Burrow.iml
-tmp
+burrow-src
+.*.swp
+!config
+log
+.idea
+Burrow
+Burrow.exe
+Burrow.iml
+tmp
+vendor

9 changes: 6 additions & 3 deletions CHANGELOG.md
@@ -1,10 +1,13 @@
-## X.Y.Z (TBD)
+## 1.0.0 (TBD)
 
 Features:
-- Added request info to HTTP responses (#64 and #45)
+- Code overhaul - more modular and now with tests
+- Actual documentation (godoc)
+- Support for topic deletion in Kafka clusters
+- Removed Slack notifier in favor of just using the HTTP notifier
 
 Bugfixes:
-- Fix an issue where maxlag partition is selected badly
+- Too many to count
 
 ## 0.1.1 (2016-05-01)
 
6 changes: 0 additions & 6 deletions Godeps

This file was deleted.

213 changes: 213 additions & 0 deletions Gopkg.lock

Some generated files are not rendered by default.

43 changes: 43 additions & 0 deletions Gopkg.toml
@@ -0,0 +1,43 @@
[[constraint]]
name = "github.com/Shopify/sarama"
version = "1.12.0"

[[constraint]]
name = "go.uber.org/zap"
version = "1.7.1"

[[constraint]]
name = "gopkg.in/natefinch/lumberjack.v2"
version = "2.1"

[[constraint]]
branch = "master"
name = "github.com/linkedin/Burrow"

[[constraint]]
name = "github.com/pborman/uuid"
version = "1.1.0"

[[constraint]]
branch = "master"
name = "github.com/samuel/go-zookeeper"

[[constraint]]
name = "github.com/julienschmidt/httprouter"
branch = "master"

[[constraint]]
name = "github.com/stretchr/testify"
branch = "master"

[[constraint]]
name = "github.com/karrick/goswarm"
version = "1.4.7"

[[constraint]]
name = "github.com/OneOfOne/xxhash"
version = "1.2.1"

[[constraint]]
name = "github.com/spf13/viper"
version = "1.0.0"
8 changes: 4 additions & 4 deletions README.md
@@ -16,19 +16,19 @@ Burrow is a monitoring companion for [Apache Kafka](http://kafka.apache.org) tha
 ### Prerequisites
 Burrow is written in Go, so before you get started, you should [install and set up Go](https://golang.org/doc/install).
 
-If you have not yet installed the [Go Package Manager](https://github.com/pote/gpm), please go over there and follow their short installation instructions. GPM is used to automatically pull in the dependencies for Burrow so you don't have to chase them all down.
+If you have not yet installed the [Go Dependency Management Tool](https://github.com/golang/dep), please go over there and follow their short installation instructions. dep is used to automatically pull in the dependencies for Burrow so you don't have to chase them all down.
 
 ### Build and Install
 ```
 $ go get github.com/linkedin/Burrow
 $ cd $GOPATH/src/github.com/linkedin/Burrow
-$ gpm install
+$ dep ensure
 $ go install
 ```
 
 ### Running Burrow
 ```
-$ $GOPATH/bin/Burrow --config path/to/burrow.cfg
+$ $GOPATH/bin/Burrow --config-dir /path/containing/config
 ```
 
 ### Using Docker
@@ -55,7 +55,7 @@ Install Docker Compose and then:
 For information on how to write your configuration file, check out the [detailed wiki](https://github.com/linkedin/Burrow/wiki)
 
 ## License
-Copyright 2015 LinkedIn Corp. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License.
+Copyright 2017 LinkedIn Corp. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License.
 You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
 
 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR