Migrate to Istio v1alpha3 Gateway / VirtualService #1228

Merged · 11 commits · Jun 29, 2018
2 changes: 1 addition & 1 deletion Gopkg.lock


3 changes: 0 additions & 3 deletions cmd/controller/main.go
@@ -145,7 +145,6 @@ func main() {
deploymentInformer := kubeInformerFactory.Apps().V1().Deployments()
endpointsInformer := kubeInformerFactory.Core().V1().Endpoints()
coreServiceInformer := kubeInformerFactory.Core().V1().Services()
ingressInformer := kubeInformerFactory.Extensions().V1beta1().Ingresses()
vpaInformer := vpaInformerFactory.Poc().V1alpha1().VerticalPodAutoscalers()

// Build all of our controllers, with the clients constructed above.
@@ -171,7 +170,6 @@
opt,
routeInformer,
configurationInformer,
ingressInformer,
autoscaleEnableScaleToZero,
),
service.NewController(
@@ -203,7 +201,6 @@
deploymentInformer.Informer().HasSynced,
coreServiceInformer.Informer().HasSynced,
endpointsInformer.Informer().HasSynced,
ingressInformer.Informer().HasSynced,
} {
if ok := cache.WaitForCacheSync(stopCh, synced); !ok {
logger.Fatalf("failed to wait for cache at index %v to sync", i)
18 changes: 9 additions & 9 deletions config/200-clusterrole.yaml
@@ -23,11 +23,11 @@ rules:
verbs: ["get", "list", "create", "update", "delete", "patch", "watch"]
- apiGroups: ["build.dev"]
resources: ["builds"]
  verbs: ["get", "list", "create", "update", "delete", "patch", "watch"]
-- apiGroups: ["config.istio.io"]
-  resources: ["routerules"]
-  verbs: ["get", "list", "create", "update", "delete", "patch", "watch"]
+- apiGroups: ["networking.istio.io"]
+  resources: ["virtualservices"]
+  verbs: ["get", "list", "create", "update", "delete", "patch", "watch"]
 ---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
@@ -53,7 +53,7 @@ rules:
  verbs: ["get", "list", "update", "patch", "watch"]
 - apiGroups: ["build.dev"]
   resources: ["builds"]
   verbs: ["get", "list", "update", "patch", "watch"]
-- apiGroups: ["config.istio.io"]
-  resources: ["routerules"]
-  verbs: ["get", "list", "update", "patch", "watch"]
+- apiGroups: ["networking.istio.io"]
+  resources: ["virtualservices"]
+  verbs: ["get", "list", "create", "update", "delete", "patch", "watch"]
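For illustration, a VirtualService of the kind these new RBAC rules let the controller manage might look like the sketch below. This is a hedged example: the route name, hosts, and destination service are hypothetical and not taken from this PR; only the API group, kind, and the shared Gateway name come from the changes in this change set.

```yaml
# Hypothetical v1alpha3 VirtualService; names and hosts are illustrative.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-route
  namespace: default
spec:
  # Attach to the shared Gateway introduced by this PR
  # (config/202-gateway.yaml). The FQDN-style reference is an
  # assumption about how a cross-namespace Gateway is named here.
  gateways:
  - knative-shared-gateway.knative-serving.svc.cluster.local
  hosts:
  - my-route.default.example.com
  http:
  - route:
    - destination:
        host: my-route-service.default.svc.cluster.local
        port:
          number: 80
      weight: 100
```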
221 changes: 221 additions & 0 deletions config/202-gateway.yaml
@@ -0,0 +1,221 @@
# We stand up a new Gateway service to receive all external traffic
# for Knative pods. These pods are essentially standalone Envoy proxy
# pods that convert all external traffic into cluster traffic.
#
# We stand up dedicated pods because an Istio Gateway cannot share
# its ingress pods. Istio provides a default, but we don't want to
# use it and cause unwanted sharing with users' own Gateways, if
# they have any.
#
# The YAML is cloned from Istio's. However, in the future we may want
# to incorporate more of our logic to tailor it to our users' specific
# needs.

# This is the shared Gateway for all Knative routes to use.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: knative-shared-gateway
namespace: knative-serving
Member: The rest of the stuff in this file is in istio-system. I'm wondering if/what/how these pieces are associated with each other.

Member: Is the selector below applied over the istio-system namespace? Do we have to run stuff in istio-system for this to work?

Contributor (author): I think so -- their recommendation was to copy this over from istio.yaml -- I just changed things minimally. I could move the Gateway to istio-system as well.
spec:
selector:
knative: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- "*"
---
# This is the Service definition for the ingress pods serving
# Knative's shared Gateway.
#
# Source: istio/charts/ingressgateway/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: knative-ingressgateway
namespace: istio-system
labels:
chart: ingressgateway-0.8.0
release: RELEASE-NAME
heritage: Tiller
knative: ingressgateway
spec:
type: LoadBalancer
selector:
knative: ingressgateway
ports:
-
name: http
nodePort: 32380
port: 80
-
name: https
nodePort: 32390
port: 443
-
name: tcp
nodePort: 32400
port: 32400
---
# This is the corresponding Deployment to back the aforementioned Service.
#
# Source: istio/charts/ingressgateway/templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: knative-ingressgateway
namespace: istio-system
labels:
app: knative-ingressgateway
chart: ingressgateway-0.8.0
release: RELEASE-NAME
heritage: Tiller
knative: ingressgateway
spec:
replicas:
template:
metadata:
labels:
knative: ingressgateway
annotations:
sidecar.istio.io/inject: "false"
spec:
serviceAccountName: istio-ingressgateway-service-account
containers:
- name: ingressgateway
image: "docker.io/istio/proxyv2:0.8.0"
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
- containerPort: 443
- containerPort: 32400
args:
- proxy
- router
- -v
- "2"
- --discoveryRefreshDelay
- '1s' #discoveryRefreshDelay
- --drainDuration
- '45s' #drainDuration
- --parentShutdownDuration
- '1m0s' #parentShutdownDuration
- --connectTimeout
- '10s' #connectTimeout
- --serviceCluster
- knative-ingressgateway
- --zipkinAddress
- zipkin:9411
- --statsdUdpAddress
- istio-statsd-prom-bridge:9125
- --proxyAdminPort
- "15000"
- --controlPlaneAuthPolicy
- NONE
- --discoveryAddress
- istio-pilot:8080
resources:
{}

env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: INSTANCE_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: ISTIO_META_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
volumeMounts:
- name: istio-certs
mountPath: /etc/certs
readOnly: true
- name: ingressgateway-certs
mountPath: "/etc/istio/ingressgateway-certs"
readOnly: true
volumes:
- name: istio-certs
secret:
secretName: "istio.default"
optional: true
- name: ingressgateway-certs
secret:
secretName: "istio-ingressgateway-certs"
optional: true
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- ppc64le
- s390x
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 2
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- weight: 2
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- ppc64le
- weight: 2
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- s390x
---
# This is the horizontal pod autoscaler to make sure the ingress Pods
# scale up to meet traffic demand.
#
# Source: istio/charts/ingressgateway/templates/autoscale.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: knative-ingressgateway
namespace: istio-system
spec:
minReplicas: 1
# TODO(1411): Document/fix this. We are choosing an arbitrary 10 here.
maxReplicas: 10
scaleTargetRef:
apiVersion: apps/v1beta1
kind: Deployment
name: knative-ingressgateway
metrics:
- type: Resource
resource:
name: cpu
targetAverageUtilization: 60
25 changes: 0 additions & 25 deletions docs/spec/errors.md
@@ -424,31 +424,6 @@ status:
message: "Configuration 'abc' referenced in traffic not found"
```

### Unable to create Ingress

If the Route is unable to create an Ingress resource to route its
traffic to Revisions, the `IngressReady` condition will be marked
as `False` with a reason of `NoIngress`.

```http
GET /apis/serving.knative.dev/v1alpha1/namespaces/default/routes/my-service
```

```yaml
...
status:
traffic: []
conditions:
- type: Ready
status: False
reason: NoIngress
message: "Unable to create Ingress 'my-service-ingress'"
- type: IngressReady
status: False
reason: NoIngress
message: "Unable to create Ingress 'my-service-ingress'"
```

### Latest Revision of a Configuration deleted

If the most recent Revision is deleted, the Configuration will set
13 changes: 6 additions & 7 deletions docs/spec/motivation.md
@@ -10,13 +10,12 @@ We define serverless workloads as computing workloads that are:
* Primarily driven by application level (L7 -- HTTP, for example)
request traffic

-While Kubernetes provides basic primitives like Deployment, Service,
-and Ingress in support of this model, our experience suggests that a
-more compact and richer opinionated model has substantial benefit for
-developers. In particular, by standardizing on higher-level primitives
-which perform substantial amounts of automation of common
-infrastructure, it should be possible to build consistent toolkits
-that provide a richer experience than updating yaml files with
+While Kubernetes provides basic primitives like Deployment and Service in
+support of this model, our experience suggests that a more compact and richer
+opinionated model has substantial benefit for developers. In particular, by
+standardizing on higher-level primitives which perform substantial amounts of
+automation of common infrastructure, it should be possible to build consistent
+toolkits that provide a richer experience than updating yaml files with
 `kubectl`.

The Knative Serving APIs consist of Compute API (these documents),
Expand Down
5 changes: 4 additions & 1 deletion hack/update-codegen.sh
@@ -27,8 +27,11 @@ CODEGEN_PKG=${CODEGEN_PKG:-$(cd ${SERVING_ROOT}; ls -d -1 ./vendor/k8s.io/code-g
# instead of the $GOPATH directly. For normal projects this can be dropped.
${CODEGEN_PKG}/generate-groups.sh "deepcopy,client,informer,lister" \
github.com/knative/serving/pkg/client github.com/knative/serving/pkg/apis \
-  "serving:v1alpha1 istio:v1alpha2" \
+  "serving:v1alpha1 istio:v1alpha3" \
--go-header-file ${SERVING_ROOT}/hack/boilerplate/boilerplate.go.txt

# Update code to change Gatewaies -> Gateways to workaround cleverness of codegen pluralizer.
find -name '*.go' -exec grep -l atewaies {} \; | xargs sed 's/atewaies/ateways/g' -i
Contributor: lol - I love this :P

Contributor (author): I've asked them to fix it upstream. I sent them a PR to add an exception but they want me to change my PR to handle more general cases. For now I think this may suffice.

# Make sure our dependencies are up-to-date
${SERVING_ROOT}/hack/update-deps.sh
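The `Gatewaies` workaround in the script above can be exercised on a scratch directory. This is a sketch, not part of the PR: the scratch path and file contents are illustrative, and GNU sed is assumed for the in-place `-i` flag.

```shell
# Create a scratch tree holding a "generated" file that contains the
# mispluralized identifier codegen emits.
tmpdir=$(mktemp -d)
printf 'func (c *Clientset) Gatewaies() GatewayInterface {\n' > "$tmpdir/gatewaies.go"

# Same idea as the update-codegen.sh workaround: find the affected .go
# files, then rewrite 'atewaies' to 'ateways' in place.
(cd "$tmpdir" && find . -name '*.go' -exec grep -l atewaies {} \; \
  | xargs sed -i 's/atewaies/ateways/g')

# The identifier is now correctly pluralized.
grep -c 'Gateways()' "$tmpdir/gatewaies.go"  # prints 1
```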
2 changes: 1 addition & 1 deletion pkg/apis/istio/register.go
@@ -17,5 +17,5 @@ limitations under the License.
package istio

const (
-GroupName = "config.istio.io"
+GroupName = "networking.istio.io"
)