This repository has been archived by the owner on Jun 25, 2020. It is now read-only.

Overhaul the telemetry collection for Stackdriver Nozzle #152

Merged
johnsonj merged 16 commits into cloudfoundry-community:develop on Nov 20, 2017

Conversation

johnsonj
Contributor

@johnsonj johnsonj commented Nov 8, 2017

This design replaces the heartbeat.Heartbeater with a simplified implementation that matches its functionality. This reduces complexity and provides a simple interface for keeping track of events within the nozzle.

This work was done to fix #150 and as pre-work to allow the nozzle to write cumulative/delta metrics for its own telemetry instead of gauges.

  • Remove the re-defined heartbeat.Heartbeater interface in the stackdriver package by moving the MetricHandler into the stackdriver package.
  • Reduce the functionality of heartbeat.Handler to recording values (instead of storing and flushing) and rename it to telemetry.Sink
  • Replace heartbeat.Heartbeater with telemetry.Counter


@johnsonj
Contributor Author

johnsonj commented Nov 8, 2017

/cc @knyar @fluffle

@fluffle
Collaborator

fluffle commented Nov 9, 2017

I've not reviewed in depth here because I have a question for you: why invent your own implementation of metric counters in package telemetry when Go has them in the standard library (package expvar) and there are many other implementations on GitHub?

I can't find any that have deep Stackdriver API integration, but I think that's an orthogonal problem to the one the Stackdriver nozzle is trying to solve.

I can see that package telemetry needs to exist, because it's also got the responsibility of writing the set of counters to the firehose and recording them to any configured Sink, but I think that modelling all the metrics inside a map[string]int is going to cause you problems. I also find it a bit confusing that you create multiple Counters when each Counter can Count multiple things -- why not just have one that counts everything?


Review status: 0 of 21 files reviewed at latest revision, all discussions resolved, some commit checks failed.



@johnsonj
Contributor Author

johnsonj commented Nov 9, 2017

expvar is news to me! It looks promising; I will spend more time looking into it replacing this (or being a part of this). I didn't explore external packages because this felt simple and the cost of integration seemed to exceed the effort. This is code I expect to evolve significantly as we build out various dashboards, so having full control will be useful.

The internal map is super simplified to match what it's replacing. I'd like to see it evolve like this:

map[string]Counter

type Counter struct {
    FirstSeen       time.Time
    LastSeen        time.Time
    LastValue       int
    CumulativeValue int
}

That will allow us to emit DELTA/CUMULATIVE metrics to Stackdriver instead of the gauge values we emit today. I don't want this to be a part of this PR because it's getting too big to review.

I do not like the multiple counters thing. Here's why it exists:

adapter := stackdriver.NewMetricAdapter(stackdriver.NewMetricClient(), telemetry.NewCounter(nil))
telemetrySink := stackdriver.NewTelemetrySink(adapter)
appCounter := telemetry.NewCounter(telemetrySink)

The TelemetrySink requires an adapter and the adapter requires a counter. I was considering breaking this by having the TelemetrySink write to a stackdriver.MetricClient instead. Interested to hear your thoughts there.

@johnsonj
Contributor Author

johnsonj commented Nov 9, 2017

expvar is pretty awesome. I believe I can satisfy the telemetry coverage we need (and break the dependency cycle) by using package-level metrics (either an expvar.Map in each package or a value per interesting metric) and implementing something in the telemetry package to extract values of interest and publish them to Stackdriver.

This implementation guarantees that Increment is non-blocking. This is
achieved by having the emit call grab the mutex on the data, take a
handle on it, then reset the data and release the mutex. No matter how
slow emit() is, it will not block Increment() for longer than it takes
to create a new map.

This design is also considerably simpler. No log handler is needed and
the running goroutine is no longer responsible for flushing, stopping,
and keeping track of incoming Increment calls.
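
For illustration, a minimal sketch of the swap-under-mutex pattern this commit message describes (the names counter, Increment, and emit are illustrative, not the actual nozzle code):

package telemetry

import "sync"

// counter accumulates event counts; a sink periodically drains them.
type counter struct {
    mu     sync.Mutex
    counts map[string]int
}

func newCounter() *counter {
    return &counter{counts: make(map[string]int)}
}

func (c *counter) Increment(name string) {
    c.mu.Lock()
    c.counts[name]++
    c.mu.Unlock()
}

// emit swaps the map out while holding the mutex, so however long the sink
// takes to process the snapshot, Increment is only blocked for the time it
// takes to allocate a fresh map.
func (c *counter) emit(sink func(map[string]int)) {
    c.mu.Lock()
    snapshot := c.counts
    c.counts = make(map[string]int)
    c.mu.Unlock()

    sink(snapshot)
}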
Move the MetricHandler into the stackdriver package. This removes the
need for redefining the Heartbeat interface in the stackdriver package.

The MetricHandler has much in common with the MetricAdapter and is
more suited to live nearby.
- rename class/associated variables
- extract "heartbeater" action string into a constant.
- fix up mocks/ names to match refactoring
- replace uint with int. it makes for awkward conversions and is
  unnecessary.
- reduce telemetry.Sink to Record() instead of expecting the sinks to
  keep track of metrics then flush.
- Remove telemetry.Counter and use the expvar package to perform all
  telemetry collection.
- Remove reporting to Stackdriver Monitoring. This will be reintroduced
  in a future change.
- Introduce telemetry.Reporter/telemetry.Sink to periodically report on
  all registered metrics.
@johnsonj
Contributor Author

johnsonj commented Nov 10, 2017

@fluffle new PR description & ready for review.

Overhaul the telemetry collection for Stackdriver Nozzle.

This change removes the heartbeater from the nozzle. The existing implementation had performance and blocking issues (#150 for example) in addition to relying on lazy creation of metric descriptors and incorrectly reporting the values as a gauge to Stackdriver Monitoring. It also created a strange dependency cycle as the heartbeater was a dependency of the stackdriver.MetricAdapter and the heartbeat.Heartbeater required a stackdriver.MetricAdapter.

The new implementation relies on the expvar stdlib package for metrics collection. This provides a unified way for all packages to write metrics. The reporting code is decoupled from how metrics are recorded which breaks the dependency cycle. It also provides a debug endpoint via http://localhost:6060/debug/vars.
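
As a rough illustration of the approach (the metric name is only an example), registering an expvar counter and serving the debug endpoint looks like this:

package main

import (
    "expvar"
    "log"
    "net/http"
)

// Package-level counters register themselves globally at init time, so any
// package can record metrics without depending on the reporting code.
var eventsReceived = expvar.NewInt("nozzle.events.received")

func main() {
    eventsReceived.Add(1)

    // Importing expvar installs a handler on http.DefaultServeMux that serves
    // every registered variable as JSON at /debug/vars.
    log.Fatal(http.ListenAndServe("localhost:6060", nil))
}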

The metric descriptors are now created on boot instead of when they are first encountered. This is important for metrics that are rarely seen, such as firehose.errors.unknown. Users need to be able to build a graph of this data before it ever occurs.

The metrics are now reported as cumulative values captured between {NozzleBootTime, ReportingTime}. This allows Stackdriver to detect zero values, detect missing data due to shutdown (instead of extrapolating), and accurately add values across nozzle instances.

  • Replace heartbeat.Heartbeater and associated telemetry collection with the expvar stdlib package.
  • Introduce the telemetry.Reporter to periodically extract the registered metrics
  • Introduce the telemetry.Sink with telemetry.logSink and stackdriver.telemetrySink implementations for reporting data
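
For reference, a sketch of the cumulative reporting described above using the Stackdriver Monitoring protos (the package name, metric name, and helper are assumptions for illustration; this is not the nozzle's actual sink code):

package stackdriver

import (
    "time"

    "github.com/golang/protobuf/ptypes"
    metricpb "google.golang.org/genproto/googleapis/api/metric"
    monitoringpb "google.golang.org/genproto/googleapis/monitoring/v3"
)

// cumulativePoint builds a single CUMULATIVE time series point covering
// {bootTime, now}.
func cumulativePoint(bootTime time.Time, value int64) *monitoringpb.TimeSeries {
    start, _ := ptypes.TimestampProto(bootTime)
    end, _ := ptypes.TimestampProto(time.Now())
    return &monitoringpb.TimeSeries{
        Metric:     &metricpb.Metric{Type: "custom.googleapis.com/stackdriver-nozzle/example.count"},
        MetricKind: metricpb.MetricDescriptor_CUMULATIVE,
        Points: []*monitoringpb.Point{{
            // The interval always starts at nozzle boot, so Stackdriver sees a
            // monotonically growing cumulative value rather than a gauge.
            Interval: &monitoringpb.TimeInterval{StartTime: start, EndTime: end},
            Value:    &monitoringpb.TypedValue{Value: &monitoringpb.TypedValue_Int64Value{Int64Value: value}},
        }},
    }
}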

@johnsonj johnsonj changed the title from "Replace heartbeater with simplified telemetry code" to "Overhaul the telemetry collection for Stackdriver Nozzle" Nov 10, 2017
@fluffle
Collaborator

fluffle commented Nov 10, 2017

Code-wise this looks great, and using expvar gives me the /varz page I've been wanting too, so thanks for that!

I've left a number of very opinionated comments on how to name and organize metrics, because getting this right from the start makes building consoles that display meaningful information much easier. I'm sorry it's so prescriptive :-/


Reviewed 11 of 25 files at r1, 19 of 21 files at r2.
Review status: all files reviewed at latest revision, 6 unresolved discussions.


src/stackdriver-nozzle/nozzle/nozzle.go, line 53 at r2 (raw file):

func init() {
	firehoseErrsEmptyCount = expvar.NewInt("firehose.errors.empty")

My other comment applies for all these errors too. Even though you don't have a "requests" counter, these errors are all related because they come from the same source, and we may well want to aggregate the total number of firehose errors together while disregarding the exact error type.


src/stackdriver-nozzle/nozzle/nozzle.go, line 60 at r2 (raw file):

	firehoseErrsClosePolicyViolation = expvar.NewInt("firehose.errors.close.policy_violation")

	nozzleEvents = expvar.NewInt("nozzle.events")

Naming nit here: "nozzle.events.received", because otherwise it is not immediately clear whether nozzle.events is a total that includes the dropped count and the received count. For extra credit, it'd be good to export a "nozzle.events.total" so it's easy to calculate the percentage of total events that were dropped by the nozzle.


src/stackdriver-nozzle/stackdriver/metric_adapter.go, line 51 at r2 (raw file):

func init() {
	requestCount = expvar.NewInt("metrics.requests")

Now we're doing things like this I have what amounts to very specific instructions on metric naming and organization. Sorry this is so prescriptive, but doing things in certain ways makes it much, much easier to build good monitoring dashboards.

  • If you are making a request and counting the error responses, you want two exported variables: one Int for counting requests and one Map (basically an enum) for counting the errors by a small number of concrete error types.
  • Name them consistently: have a common prefix for the pair of vars with "requests" and "errors" as the final, differing element.
  • Export the map keys as the "error" label value for the stackdriver metric: don't break related errors for the same request out into metrics with different names because it makes aggregating them together again (so you can divide by the total number of requests to produce an error ratio) hard.

So here I would recommend you have:

    timeSeriesCount = expvar.NewInt("nozzle.metrics.timeseries.count")
    timeSeriesReqs = expvar.NewInt("nozzle.metrics.timeseries.requests")
    timeSeriesErrs = expvar.NewMap("nozzle.metrics.timeseries.errors")  // has two map values "unknown" and "out_of_order"
    descriptorReqs = expvar.NewInt("nozzle.metrics.descriptor.requests")
    descriptorErrs = expvar.NewMap("nozzle.metrics.descriptor.errors")

You should increment descriptorReqs/Errs in CreateMetricDescriptor rather than PostMetricEvents, because that's where you're actually making a request to stackdriver. In general metric increments should be close to the thing they are intended to measure.


src/stackdriver-nozzle/stackdriver/telemetry_sink.go, line 81 at r2 (raw file):

		client:       client,
		projectPath:  fmt.Sprintf("projects/%s", projectID),
		metricPrefix: metricPrefix,

I think that keeping nozzle internal metrics logically separate from those derived from the firehose might be a good idea. So I'm not sure you need to use the path prefix here. I'd argue that two separate hierarchies are better:

custom.googleapis.com/firehose/gorouter.total_requests etc. and
custom.googleapis.com/stackdriver-nozzle/metrics.requests

Otherwise you have stackdriver-nozzle as the lone "subdirectory" of custom.googleapis.com/firehose, because PCF doesn't do path-style metric names.

Alternatively, if you are going to prefix all metrics with a "nozzle" origin for filtering purposes (see other comment) then that may be enough to distinguish the nozzle metrics from other firehose metrics, thus you could put them under the same path prefix without the "subdirectory".


src/stackdriver-nozzle/stackdriver/telemetry_sink.go, line 82 at r2 (raw file):

		projectPath:  fmt.Sprintf("projects/%s", projectID),
		metricPrefix: metricPrefix,
		labels:       map[string]string{"subscription_id": subscriptionId, "director": director},

Since we expect to be monitoring more than one stackdriver nozzle, we need the "index" label to contain the GCE VM instance name or ID for all the metrics the nozzle exports.

I can't give 100% concrete advice on how best to do this right now because I am working with [email protected] to figure out why the current "index" from the firehose envelope is a UUID that appears to be completely disconnected from anything in GCE, but the desire is that we have "index" be something that we can directly relate back to the GCE VM instance this binary is running on. Prefer instance name for now, because that's the route I currently expect we will go down?


src/stackdriver-nozzle/telemetry/reporter.go, line 75 at r2 (raw file):

		// Filter out known golang data series so only stackdriver-nozzle specific metrics are recorded.
		// This may not be comprehensive in the long term but it is simple and fast.
		if point.Key != "cmdline" && point.Key != "memstats" {

It might be easier to enforce that all nozzle metrics begin "nozzle.", a bit like the way we prepend the event origin to firehose metrics to end up with "gorouter.total_requests", then explicitly whitelist those.
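
A minimal sketch of the prefix-based whitelist being suggested here (sink.Record is a placeholder for whatever the reporter does with each variable, not an actual API):

// Report only nozzle metrics, skipping expvar's built-in "cmdline" and
// "memstats" (plus anything else other packages may have published).
expvar.Do(func(kv expvar.KeyValue) {
    if strings.HasPrefix(kv.Key, "nozzle.") {
        sink.Record(kv.Key, kv.Value) // placeholder for the reporter's sink call
    }
})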



@johnsonj
Contributor Author

Awesome feedback! That's exactly what I want. I left the names alone because I figured we'd want an all-up discussion. Now to implement.


Review status: all files reviewed at latest revision, 6 unresolved discussions.


src/stackdriver-nozzle/stackdriver/telemetry_sink.go, line 81 at r2 (raw file):

Previously, fluffle (Alex Bee) wrote…

I think that keeping nozzle internal metrics logically separate from those derived from the firehose might be a good idea. So I'm not sure you need to use the path prefix here. I'd argue that two separate hierarchies are better:

custom.googleapis.com/firehose/gorouter.total_requests etc. and
custom.googleapis.com/stackdriver-nozzle/metrics.requests

Otherwise you have stackdriver-nozzle as the lone "subdirectory" of custom.googleapis.com/firehose, because PCF doesn't do path-style metric names.

Alternatively, if you are going to prefix all metrics with a "nozzle" origin for filtering purposes (see other comment) then that may be enough to distinguish the nozzle metrics from other firehose metrics, thus you could put them under the same path prefix without the "subdirectory".

custom.googleapis.com/stackdriver-nozzle makes total sense.


src/stackdriver-nozzle/stackdriver/telemetry_sink.go, line 82 at r2 (raw file):

Previously, fluffle (Alex Bee) wrote…

Since we expect to be monitoring more than one stackdriver nozzle, we need the "index" label to contain the GCE VM instance name or ID for all the metrics the nozzle exports.

I can't give 100% concrete advice on how best to do this right now because I am working with [email protected] to figure out why the current "index" from the firehose envelope is a UUID that appears to be completely disconnected from anything in GCE, but the desire is that we have "index" be something that we can directly relate back to the GCE VM instance this binary is running on. Prefer instance name for now, because that's the route I currently expect we will go down?

I was hoping to accomplish this by associating the TimeSeries with the gce_instance Monitored Resource (see detectMonitoredResource above). I know this gives us the sharding (and avoids out-of-order errors). I believe it also gives us the ability to slice metrics per writer VM.

If we're going to add an additional label to the firehose metrics as well then we should probably add it here too.
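
For reference, a sketch of attaching the gce_instance monitored resource to a time series (monitoredrespb here is google.golang.org/genproto/googleapis/api/monitoredres, series is the monitoringpb.TimeSeries being built, and the label values are placeholders assumed to come from the GCE metadata service or nozzle config):

// Attaching the resource lets Stackdriver slice metrics per project, zone,
// and writer VM without an extra "index" metric label.
series.Resource = &monitoredrespb.MonitoredResource{
    Type: "gce_instance",
    Labels: map[string]string{
        "project_id":  projectID,
        "instance_id": instanceID,
        "zone":        zone,
    },
}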



old: custom.googleapis.com/metricPrefix/stackdriver-nozzle/..
new: custom.googleapis.com/stackdriver-nozzle/..
- All naming follows stackdriver-nozzle/nozzle.
- More consistent naming for .requests, .count
- Added nozzle.events.total for easier calculations
- Errors now roll up as a single map. Added map support to
  stackdriver.telemetrySink. The key is reported as the "kind" value in
  the metric label.

TODO: Fix batching in stackdriver.telemetrySink

Snapshot of reported metric:
  "nozzle.events.dropped": 0,
  "nozzle.events.received": 5303,
  "nozzle.events.total": 5303,
  "nozzle.firehose.errors": {"close_normal_closure": 0, "close_policy_violation": 0, "close_unknown": 0, "empty": 0, "unknown": 0},
  "nozzle.logs.count": 8,
  "nozzle.metrics.descriptor.errors": 0,
  "nozzle.metrics.descriptor.requests": 0,
  "nozzle.metrics.firehose_events.count": 0,
  "nozzle.metrics.firehose_events.sampled": 3599,
  "nozzle.metrics.timeseries.count": 0,
  "nozzle.metrics.timeseries.errors": {"out_of_order": 0, "unknown": 0},
  "nozzle.metrics.timeseries.requests": 0
@johnsonj
Contributor Author

Lots of naming/data type fixes. Let me know what you think!

I've got a small TODO before it can be merged and I want to make sure the way I'm doing map reporting to stackdriver is useful.


Review status: 12 of 25 files reviewed at latest revision, 6 unresolved discussions.


src/stackdriver-nozzle/nozzle/nozzle.go, line 53 at r2 (raw file):

Previously, fluffle (Alex Bee) wrote…

My other comment applies for all these errors too. Even though you don't have a "requests" counter, these errors are all related because they come from the same source, and we may well want to aggregate the total number of firehose errors together while disregarding the exact error type.

Done.


src/stackdriver-nozzle/nozzle/nozzle.go, line 60 at r2 (raw file):

Previously, fluffle (Alex Bee) wrote…

Naming nit here: "nozzle.events.received", because otherwise it is not immediately clear whether nozzle.events is a total that includes the dropped count and the received count. For extra credit, it'd be good to export a "nozzle.events.total" so it's easy to calculate the percentage of total events that were dropped by the nozzle.

Done.


src/stackdriver-nozzle/stackdriver/metric_adapter.go, line 51 at r2 (raw file):

Previously, fluffle (Alex Bee) wrote…

Now we're doing things like this I have what amounts to very specific instructions on metric naming and organization. Sorry this is so prescriptive, but doing things in certain ways makes it much, much easier to build good monitoring dashboards.

  • If you are making a request and counting the error responses, you want two exported variables: one Int for counting requests and one Map (basically an enum) for counting the errors by a small number of concrete error types.
  • Name them consistently: have a common prefix for the pair of vars with "requests" and "errors" as the final, differing element.
  • Export the map keys as the "error" label value for the stackdriver metric: don't break related errors for the same request out into metrics with different names because it makes aggregating them together again (so you can divide by the total number of requests to produce an error ratio) hard.

So here I would recommend you have:

timeSeriesCount = expvar.NewInt("nozzle.metrics.timeseries.count")
timeSeriesReqs = expvar.NewInt("nozzle.metrics.timeseries.requests")
timeSeriesErrs = expvar.NewMap("nozzle.metrics.timeseries.errors") // has two map values "unknown" and "out_of_order"
descriptorReqs = expvar.NewInt("nozzle.metrics.descriptor.requests")
descriptorErrs = expvar.NewMap("nozzle.metrics.descriptor.errors")

You should increment descriptorReqs/Errs in CreateMetricDescriptor rather than PostMetricEvents, because that's where you're actually making a request to stackdriver. In general metric increments should be close to the thing they are intended to measure.

Done.


src/stackdriver-nozzle/telemetry/reporter.go, line 75 at r2 (raw file):

Previously, fluffle (Alex Bee) wrote…

It might be easier to enforce that all nozzle metrics begin "nozzle.", a bit like the way we prepend the event origin to firehose metrics to end up with "gorouter.total_requests", then explicitly whitelist those.

Done.



@fluffle
Collaborator

fluffle commented Nov 14, 2017

:lgtm: Looking good! Minor nits / commentary but nothing that needs another RT.


Reviewed 1 of 25 files at r1, 13 of 13 files at r3.
Review status: all files reviewed at latest revision, 5 unresolved discussions.


src/stackdriver-nozzle/stackdriver/metric_adapter.go, line 60 at r3 (raw file):

	timeSeriesErrs = expvar.NewMap("nozzle.metrics.timeseries.errors")

	timeSeriesErrOutOfOrder = &expvar.Int{}

expvar will create Ints for you if you Add("key", 1) to a nonexistent key, so what you're doing here is not strictly necessary. But I like this approach, because it explicitly exports zero values for the map data when no errors have been seen.
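
To make the trade-off concrete, a small sketch of the two registration styles for an expvar.Map:

// Implicit: Add creates the map entry on first use, so "out_of_order" does not
// appear in /debug/vars until the first error actually happens.
timeSeriesErrs.Add("out_of_order", 1)

// Explicit (the approach taken here): pre-register the entry so a zero value
// is exported even before any error has been seen.
timeSeriesErrOutOfOrder := &expvar.Int{}
timeSeriesErrs.Set("out_of_order", timeSeriesErrOutOfOrder)
timeSeriesErrOutOfOrder.Add(1) // later, when an out-of-order error occurs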


src/stackdriver-nozzle/stackdriver/telemetry_sink.go, line 82 at r2 (raw file):

Previously, johnsonj (Jeff Johnson) wrote…

I was hoping to accomplish this by associating the TimeSeries with the gce_instance Monitored Resource (see: detectMonitoredResource above). I know this gives us the sharding (and avoiding out of order errors). I believe it also gives us the ability to slice metrics per writer VM.

If we're going to add an additional label to the firehose metrics as well then we should probably add it here oo.

That looks useful (and #TIL, so thanks!) We should definitely use the gce_instance monitored resource instead of an index label, you made the right call here. It looks like that provides 3 levels of granularity: project, zone and instance. This is good \o/

I guess the big problem is that we can't directly attribute firehose metrics to a gce_instance yet. I'm not sure that matters, or whether Stackdriver requires the instance_id field to be set to an actual instance ID. Maybe we can get away with creating monitored resource protos with the index uuid BOSH provides as the instance_id and dropping the index label. I don't know how we would figure out what zone a given uuid is in without asking the BOSH director, though.

I wonder if it's safe for us to conflate diego containers with GKE ones and use the gke_container resource instead of instanceIndex. Probably not :-/


src/stackdriver-nozzle/stackdriver/telemetry_sink.go, line 115 at r3 (raw file):

		if _, ok := series.Value.(*expvar.Map); ok {
			labels = append(labels, &labelpb.LabelDescriptor{Key: mapName, ValueType: labelpb.LabelDescriptor_INT64})

This should be LabelDescriptor_STRING, because the value for this label is the (string) map key, e.g. "out_of_order", and the value for the metric will be the expvar's value.

Also I think "kind" is a bit generic as a label name, though I can understand why you don't want to use "error", just in case you want to export maps of Other Things. Naming is hard, and expvar doesn't provide any way of associating metadata with a metric :-(


src/stackdriver-nozzle/stackdriver/telemetry_sink_test.go, line 145 at r3 (raw file):

			labels := req.MetricDescriptor.Labels
			Expect(labels).To(HaveLen(3))
			Expect(labels).To(ContainElement(&labelpb.LabelDescriptor{Key: "kind", ValueType: labelpb.LabelDescriptor_INT64}))

STRING here too.


src/stackdriver-nozzle/telemetry/reporter.go, line 42 at r3 (raw file):

}

const Prefix = "nozzle"

Now you've got this constant here, is it possible to use it wherever metrics are created? Otherwise, if you want to change it you have a fun multi-file search-and-replace to do. Or does this re-introduce the circular dependency problem?



@johnsonj
Contributor Author

Review status: all files reviewed at latest revision, 3 unresolved discussions.


src/stackdriver-nozzle/stackdriver/metric_adapter.go, line 60 at r3 (raw file):

Previously, fluffle (Alex Bee) wrote…

expvar will create Ints for you if you Add("key", 1) to a nonexistent key, so what you're doing here is not strictly necessary. But I like this approach, because it explicitly exports zero values for the map data when no errors have been seen.

I went that route first but settled on this one for that reason. Got to have zeros!


src/stackdriver-nozzle/stackdriver/telemetry_sink.go, line 82 at r2 (raw file):

Previously, fluffle (Alex Bee) wrote…

That looks useful (and #TIL, so thanks!) We should definitely use the gce_instance monitored resource instead of an index label, you made the right call here. It looks like that provides 3 levels of granularity: project, zone and instance. This is good \o/

I guess the big problem is that we can't directly attribute firehose metrics to a gce_instance yet. I'm not sure that matters, or whether Stackdriver requires the instance_id field to be set to an actual instance ID. Maybe we can get away with creating monitored resource protos with the index uuid BOSH provides as the instance_id and dropping the index label. I don't know how we would figure out what zone a given uuid is in without asking the BOSH director, though.

I wonder if it's safe for us to conflate diego containers with GKE ones and use the gke_container resource instead of instanceIndex. Probably not :-/

Probably not


src/stackdriver-nozzle/stackdriver/telemetry_sink.go, line 115 at r3 (raw file):

Previously, fluffle (Alex Bee) wrote…

This should be LabelDescriptor_STRING, because the value for this label is the (string) map key, e.g. "out_of_order", and the value for the metric will be the expvar's value.

Also I think "kind" is a bit generic as a label name, though I can understand why you don't want to use "error", just in case you want to export maps of Other Things. Naming is hard, and expvar doesn't provide any way of associating metadata with a metric :-(

Thanks! Fixing _STRING

I was thinking the best way to do this would be to implement something like this in telemetry:

type Group struct {
    expvar.Map
    Category string
}

But it felt like overkill for now. kind is a bit lame but it's discoverable.


src/stackdriver-nozzle/stackdriver/telemetry_sink_test.go, line 145 at r3 (raw file):

Previously, fluffle (Alex Bee) wrote…

STRING here too.

Done.



Kind is a categorical value, not an integer.
This change makes this part of the pipeline consistent with
nozzle.metrics.firehose_events to take the guesswork out of tracking
events through the nozzle.
Be more explicit about values we want to track.

This hack is dropped in telemetry/reporter.go:
	expvar.Do(func(val expvar.KeyValue) {
	  if strings.HasPrefix(val.Key, "nozzle") {
	    ..

We can now specify the category of map values instead of using the generic "kind"

/debug/vars after this change:
	"stackdriver-nozzle/firehose.errors": {"close_normal_closure": 0, "close_policy_violation": 0, "close_unknown": 0, "empty": 0, "unknown": 0},
	"stackdriver-nozzle/firehose_events.dropped": 0,
	"stackdriver-nozzle/firehose_events.received": 0,
	"stackdriver-nozzle/firehose_events.total": 0,
	"stackdriver-nozzle/logs.count": 0,
	"stackdriver-nozzle/metrics.descriptor.errors": 0,
	"stackdriver-nozzle/metrics.descriptor.requests": 0,
	"stackdriver-nozzle/metrics.firehose_events.emitted.count": 0,
	"stackdriver-nozzle/metrics.firehose_events.sampled.count": 0,
	"stackdriver-nozzle/metrics.timeseries.count": 0,
	"stackdriver-nozzle/metrics.timeseries.errors": {"out_of_order": 0, "unknown": 0},
	"stackdriver-nozzle/metrics.timeseries.requests": 0
@johnsonj
Contributor Author

Thanks @fluffle! I've made a big code change, but with only minor functional changes, by introducing telemetry.Counter and telemetry.CounterMap. These wrap the native expvar types and allow us to be explicit about which metrics we gather. I really like this approach now. I wonder if the telemetry/ package would make a good Go library.
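
A rough sketch of what wrapping expvar in the telemetry package can look like (assumed shape, not the actual implementation in this PR):

package telemetry

import "expvar"

// Counter wraps expvar.Int so the telemetry package stays in control of which
// metrics exist and how they are registered for reporting.
type Counter struct {
    expvar.Int
}

// NewCounter publishes the counter under the given name; the Reporter later
// walks the published variables to send them to each Sink.
func NewCounter(name string) *Counter {
    c := &Counter{}
    expvar.Publish(name, c)
    return c
}

// CounterMap wraps expvar.Map and carries the label category (e.g. "kind")
// used when the map is reported to Stackdriver.
type CounterMap struct {
    expvar.Map
    Category string
}

func NewCounterMap(name, category string) *CounterMap {
    cm := &CounterMap{Category: category}
    cm.Init()
    expvar.Publish(name, cm)
    return cm
}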


Review status: all files reviewed at latest revision, 3 unresolved discussions.


src/stackdriver-nozzle/telemetry/reporter.go, line 42 at r3 (raw file):

Previously, fluffle (Alex Bee) wrote…

Now you've got this constant here, is it possible to use it wherever metrics are created? Otherwise, if you want to change it you have a fun multi-file search-and-replace to do. Or does this re-introduce the circular dependency problem?

The circular dependency was resolved because the telemetry package doesn't rely on stackdriver. Instead it defines the interface it needs (telemetry.Sink) and lets the stackdriver package satisfy it. Way more Go-like.



@johnsonj johnsonj merged commit ebbfab0 into cloudfoundry-community:develop Nov 20, 2017