Merge pull request #808 from weaveworks/717-k8s-service-account-auth
k8s: Use service account token by default and improve error logging
Alfonso Acosta committed Jan 14, 2016
2 parents 3f6e9a3 + e0dfeb1 commit ada38a3
Showing 3 changed files with 73 additions and 41 deletions.
78 changes: 41 additions & 37 deletions README.md
@@ -150,44 +150,48 @@ sudo scope launch --service-token=<token>

## <a name="using-weave-scope-with-kubernetes"></a>Using Weave Scope with Kubernetes

To use Scope's Kubernetes integration, you need to start Scope with the
`--probe.kubernetes true` flag. Scope needs to be installed on all
nodes (master and minions), but this flag should only be enabled on the
Kubernetes master node.

As per the normal requirements, you will need to run Scope on every
machine you want to monitor, as shown in [Getting
Started](#getting-started). However, when launching Scope you need to
pass different arguments to the Kubernetes master and minion nodes.

On the master node, launch Scope with Kubernetes support:

```
sudo scope launch --probe.kubernetes true
```

Depending on your setup, you may find that Kubernetes has renamed your
Docker bridge interface. In that case you need to tell Scope the new
name when launching it. For example, if your Docker bridge is named
`cbr0`:

```
sudo DOCKER_BRIDGE=cbr0 scope launch --probe.docker.bridge cbr0 --probe.kubernetes true
```

On each minion node, launch Scope and tell it to connect to the master node:

```
sudo scope launch --no-app kubernetes-master.my.network
```

Again, if your Docker bridge interface is named differently, you will
need to pass that to the probe when launching it.

Scope comes with built-in Kubernetes support. We recommend running Scope
natively in your Kubernetes cluster using
[these resource definitions](https://github.com/TheNewNormal/kube-charts/tree/master/weavescope/manifests).

1. If you are running a Kubernetes version lower than 1.1, make sure your
   cluster allows running pods in privileged mode (required by the Scope
   probes). To allow privileged pods, your API Server and all your Kubelets
   must be provided with the `--allow_privileged` flag at launch time.

2. Make sure your cluster supports
   [DaemonSets](https://github.com/kubernetes/kubernetes/blob/master/docs/design/daemon.md).
   DaemonSets are needed to ensure that each Kubernetes node runs a Scope probe:

   * To enable them in an existing cluster, add a
     `--runtime-config=extensions/v1beta1/daemonsets=true` argument to the
     [apiserver](https://github.com/kubernetes/kubernetes/blob/master/docs/admin/kube-apiserver.md)'s configuration
     (normally found at `/etc/kubernetes/manifest/kube-apiserver.manifest`), followed by a
     [restart of the apiserver and controller manager](https://github.com/kubernetes/kubernetes/issues/18656).

   * If you are creating a new cluster, set `KUBE_ENABLE_DAEMONSETS=true` in
     your cluster configuration.

3. Download the resource definitions:

   ```
   for I in app-rc app-svc probe-ds; do curl -s -L https://raw.githubusercontent.com/TheNewNormal/kube-charts/master/weavescope/manifests/scope-$I.yaml -o scope-$I.yaml; done
   ```

4. Tweak the Scope probe configuration in `scope-probe-ds.yaml`, namely:
   * If you have an account at http://scope.weave.works and want to use Scope in
     Cloud Service Mode, uncomment the `--probe.token=foo` argument, replace `foo`
     with the token found on your account page, and comment out the
     `$(WEAVE_SCOPE_APP_SERVICE_HOST):$(WEAVE_SCOPE_APP_SERVICE_PORT)` argument.

5. Install Scope in your cluster (order is important):

   ```
   kubectl create -f scope-app-rc.yaml   # Only if you want to run Scope in Standalone Mode
   kubectl create -f scope-app-svc.yaml  # Only if you want to run Scope in Standalone Mode
   kubectl create -f scope-probe-ds.yaml
   ```

Once the first few reports come in, the UI should begin displaying two
Kubernetes-specific views: "Pods" and "Pods by Service".


## <a name="developing"></a>Developing
33 changes: 30 additions & 3 deletions probe/kubernetes/client.go
@@ -1,13 +1,15 @@
package kubernetes

import (
"log"
"time"

"k8s.io/kubernetes/pkg/api"
"k8s.io/kubernetes/pkg/client/cache"
"k8s.io/kubernetes/pkg/client/unversioned"
"k8s.io/kubernetes/pkg/fields"
"k8s.io/kubernetes/pkg/labels"
"k8s.io/kubernetes/pkg/util"
)

// These constants are keys used in node metadata
@@ -31,9 +33,34 @@ type client struct {
	serviceStore *cache.StoreToServiceLister
}

// runReflectorUntil is equivalent to cache.Reflector.RunUntil, but it also logs
// errors, which cache.Reflector.RunUntil simply ignores
func runReflectorUntil(r *cache.Reflector, resyncPeriod time.Duration, stopCh <-chan struct{}) {
	loggingListAndWatch := func() {
		if err := r.ListAndWatch(stopCh); err != nil {
			log.Printf("Kubernetes reflector error: %v", err)
		}
	}
	go util.Until(loggingListAndWatch, resyncPeriod, stopCh)
}

// NewClient returns a usable Client. Don't forget to Stop it.
func NewClient(addr string, resyncPeriod time.Duration) (Client, error) {
	c, err := unversioned.New(&unversioned.Config{Host: addr})
	var config *unversioned.Config
	if addr != "" {
		config = &unversioned.Config{Host: addr}
	} else {
		// If no API server address was provided, assume we are running
		// inside a pod. Try to connect to the API server through its
		// Service environment variables, using the default Service
		// Account Token.
		var err error
		if config, err = unversioned.InClusterConfig(); err != nil {
			return nil, err
		}
	}

	c, err := unversioned.New(config)
	if err != nil {
		return nil, err
	}
@@ -47,8 +74,8 @@ func NewClient(addr string, resyncPeriod time.Duration) (Client, error) {
	serviceReflector := cache.NewReflector(serviceListWatch, &api.Service{}, serviceStore, resyncPeriod)

	quit := make(chan struct{})
	podReflector.RunUntil(quit)
	serviceReflector.RunUntil(quit)
	runReflectorUntil(podReflector, resyncPeriod, quit)
	runReflectorUntil(serviceReflector, resyncPeriod, quit)

	return &client{
		quit: quit,
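
With this change, an empty address selects the in-cluster configuration, while a non-empty one still targets an explicit API server. Below is a minimal caller-side sketch; the import path and the resync period are illustrative assumptions not taken from this commit, and it assumes the `Client` interface exposes `Stop()`, as `NewClient`'s doc comment suggests.

```go
package main

import (
	"log"
	"time"

	// Assumed import path for the probe's kubernetes package.
	"github.com/weaveworks/scope/probe/kubernetes"
)

func main() {
	// Empty address: NewClient falls back to unversioned.InClusterConfig(),
	// i.e. the pod's Service environment variables and service-account token.
	client, err := kubernetes.NewClient("", 10*time.Second)
	if err != nil {
		log.Fatalf("Kubernetes: failed to start client: %v", err)
	}
	defer client.Stop() // per NewClient's doc comment: "Don't forget to Stop it."

	// Alternatively, point at an explicit API server, e.g.:
	// client, err = kubernetes.NewClient("http://localhost:8080", 10*time.Second)
}
```
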
3 changes: 2 additions & 1 deletion prog/probe.go
@@ -43,7 +43,7 @@ func probeMain() {
		dockerInterval = flag.Duration("docker.interval", 10*time.Second, "how often to update Docker attributes")
		dockerBridge = flag.String("docker.bridge", "docker0", "the docker bridge name")
		kubernetesEnabled = flag.Bool("kubernetes", false, "collect kubernetes-related attributes for containers, should only be enabled on the master node")
		kubernetesAPI = flag.String("kubernetes.api", "http://localhost:8080", "Address of kubernetes master api")
		kubernetesAPI = flag.String("kubernetes.api", "", "Address of kubernetes master api")
		kubernetesInterval = flag.Duration("kubernetes.interval", 10*time.Second, "how often to do a full resync of the kubernetes data")
		weaveRouterAddr = flag.String("weave.router.addr", "", "IP address or FQDN of the Weave router")
		procRoot = flag.String("proc.root", "/proc", "location of the proc filesystem")
@@ -144,6 +144,7 @@ func probeMain() {
			p.AddReporter(kubernetes.NewReporter(client))
		} else {
			log.Printf("Kubernetes: failed to start client: %v", err)
			log.Printf("Kubernetes: make sure to run Scope inside a POD with a service account or provide a valid kubernetes.api url")
		}
	}
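
The new log line tells users to run Scope "inside a POD with a service account or provide a valid kubernetes.api url"; mechanically, that is what `unversioned.InClusterConfig()` relies on. The standard-library-only sketch below illustrates the underlying Kubernetes convention (it is not the actual client code): the API server address comes from the Service environment variables injected into every pod, and the bearer token from the default service-account secret mounted at a well-known path.

```go
package main

import (
	"fmt"
	"io/ioutil"
	"os"
)

// inClusterTarget sketches what "running inside a pod with a service account"
// provides: an API server address from injected environment variables and a
// bearer token from the mounted service-account secret.
func inClusterTarget() (apiServer, token string, err error) {
	host, port := os.Getenv("KUBERNETES_SERVICE_HOST"), os.Getenv("KUBERNETES_SERVICE_PORT")
	if host == "" || port == "" {
		return "", "", fmt.Errorf("not running inside a Kubernetes cluster")
	}
	b, err := ioutil.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/token")
	if err != nil {
		return "", "", fmt.Errorf("no service account token mounted: %v", err)
	}
	return "https://" + host + ":" + port, string(b), nil
}

func main() {
	apiServer, _, err := inClusterTarget()
	if err != nil {
		fmt.Println("falling back to -kubernetes.api:", err)
		return
	}
	fmt.Println("in-cluster API server:", apiServer)
}
```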

