Rebase 1.6.1 #13653
Conversation
@sttts upstreams of note:
hack/godep-save.sh
Outdated
@@ -31,4 +31,4 @@ REQUIRED_BINS=(
   "./..."
 )
 
-"${GODEP}" save -t "${REQUIRED_BINS[@]}"
+GOPATH=$GOPATH:$GOPATH/src/k8s.io/kubernetes/staging "${GODEP}" save -t "${REQUIRED_BINS[@]}"
Are other tools going to need this $GOPATH addition?
Only godep. We create symlinks in vendor/ to the staging dirs. So other tools like go-build have no problems.
This symlink business might go away when we extend our tooling to handle upstream commits over multiple repos. But as that is not trivial, we try to postpone that.
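For context, a minimal sketch of the symlink setup described above (the exact paths and loop are assumptions, not the actual hack/ tooling):

```bash
# Hypothetical sketch: expose each staging repo under vendor/ so tools
# like `go build` resolve staged packages like normal vendored code.
staging="$GOPATH/src/k8s.io/kubernetes/staging/src/k8s.io"
for repo in "$staging"/*; do
  ln -sfn "$repo" "vendor/k8s.io/$(basename "$repo")"
done
```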
hack/godep-save.sh
Outdated
# godep fails to copy all package in staging because it gets confused with the symlinks.
# Hence, we copy over manually until we have proper staging repo tooling.
rsync -avx --include='*.go' --include='*/' --exclude='*' $GOPATH/src/k8s.io/kubernetes/staging/src/* vendor/k8s.io/kubernetes/staging/src/
Does this need a --delete in case changes later remove a file here?
We remove vendor/ before godep-save. So this is fine.
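In other words, the save path looks roughly like this (a sketch based on the diff above; script details may differ):

```bash
# The ordering that makes --delete unnecessary: vendor/ is rebuilt from
# scratch on every run, so stale files cannot survive.
rm -rf vendor/
GOPATH=$GOPATH:$GOPATH/src/k8s.io/kubernetes/staging "${GODEP}" save -t "${REQUIRED_BINS[@]}"
rsync -avx --include='*.go' --include='*/' --exclude='*' \
  "$GOPATH"/src/k8s.io/kubernetes/staging/src/* vendor/k8s.io/kubernetes/staging/src/
```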
hack/godep-save.sh
Outdated
echo "s/$NEWREV/$OLDREV/" >> "$TMPGOPATH/undo.sed" | ||
} | ||
|
||
undo::forks::in::godep::json () { |
We use :: to denote namespace-y things ... just use _ or - here.
Namespaces in bash... :')
lol
Was about to write the same comment a few lines above. I totally agree with Steve here. I'd suggest the prefix to be godep::save, which would give you godep::save::undo_forks_in_godep_json here.
Let's be consistent with that and not make our Bash guru sad 😉
Done, using -.
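So the function ends up named with dashes, roughly like this (a hypothetical sketch following the convention settled on above; the body is assumed from the undo.sed construction shown earlier):

```bash
# Hypothetical rename: dashes instead of ::-style namespacing, which bash
# only supports syntactically. Applies the undo.sed built earlier to
# restore the original fork revisions in Godeps/Godeps.json.
undo-forks-in-godep-json () {
  sed -i.bak -f "$TMPGOPATH/undo.sed" Godeps/Godeps.json
}
```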
[test]
[test]
hack/build-go.sh
Outdated
@@ -7,6 +7,10 @@ source "$(dirname "${BASH_SOURCE}")/lib/init.sh"
 build_targets=("$@")
 platform="$(os::build::host_platform)"
+
+# Set build tags for these binaries
+readonly OS_GOFLAGS_TAGS="include_gcs include_oss"
+readonly OS_GOFLAGS_TAGS_$(os::build::platform_arch)="gssapi"
@smarterclayton why did these get put into build-cross only in current master?
The second line adds the build requirement that gssapi devel packages (used with CGo) are installed. Not sure we want that in general, maybe only for releases?
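If we only wanted it for release builds, the gating could look something like this (a sketch; OS_RELEASE_BUILD is an assumed variable, not an existing flag):

```bash
# Hypothetical gating: only require the gssapi devel headers (needed by
# CGo) when cutting a release, so everyday dev builds work without them.
readonly OS_GOFLAGS_TAGS="include_gcs include_oss"
if [[ -n "${OS_RELEASE_BUILD:-}" ]]; then
  readonly OS_GOFLAGS_TAGS_$(os::build::platform_arch)="gssapi"
fi
```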
@@ -102,6 +100,7 @@ func TestKubeletDefaults(t *testing.T) {
 RegistryPullQPS: 5.0,
 ResolverConfig: kubetypes.ResolvConfDefault,
 KubeletCgroups: "",
+CgroupsPerQOS: true,
@derekwaynecarr @sjenning can you please review the new node config in the 1.6 rebase here?
We are seeing this on the CI AWS machines when running hack/end-to-end-docker.sh (in contrast, it's fine on a Fedora 25 machine):
I0411 12:32:33.862520 1647 kubelet.go:1757] skipping pod synchronization - [Failed to start ContainerManager failed to initialise top level QOS containers: root container /kubepods doesn't exist]
Are we missing any other settings?
@sttts What OS type/version and Docker version are the CI machines running?
# uname -a
Linux ip-172-18-13-151.ec2.internal 3.10.0-327.22.2.el7.x86_64 #1 SMP Thu Jun 9 10:09:10 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
# rpm -q docker
docker-1.12.6-14.el7.x86_64
# systemctl cat docker
# /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target
Wants=docker-storage-setup.service
Requires=rhel-push-plugin.socket
Requires=docker-cleanup.timer
[Service]
Type=notify
NotifyAccess=all
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
Environment=GOTRACEBACK=crash
Environment=DOCKER_HTTP_HOST_COMPAT=1
Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
ExecStart=/usr/bin/dockerd-current \
--add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
--default-runtime=docker-runc \
--authorization-plugin=rhel-push-plugin \
--exec-opt native.cgroupdriver=systemd \
--userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
$OPTIONS \
$DOCKER_STORAGE_OPTIONS \
$DOCKER_NETWORK_OPTIONS \
$ADD_REGISTRY \
$BLOCK_REGISTRY \
$INSECURE_REGISTRY
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=10min
Restart=on-abnormal
MountFlags=slave
[Install]
WantedBy=multi-user.target
# docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 1.12.6
Storage Driver: devicemapper
Pool Name: docker-docker--pool
Pool Blocksize: 524.3 kB
Base Device Size: 10.74 GB
Backing Filesystem: xfs
Data file:
Metadata file:
Data Space Used: 19.92 MB
Data Space Total: 7.457 GB
Data Space Available: 7.438 GB
Metadata Space Used: 40.96 kB
Metadata Space Total: 29.36 MB
Metadata Space Available: 29.32 MB
Thin Pool Minimum Free Space: 745.5 MB
Udev Sync Supported: true
Deferred Removal Enabled: true
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Library Version: 1.02.135-RHEL7 (2016-11-16)
Logging Driver: journald
Cgroup Driver: systemd
Plugins:
Volume: local
Network: bridge null host overlay
Authorization: rhel-push-plugin
Swarm: inactive
Runtimes: docker-runc runc
Default Runtime: docker-runc
Security Options: seccomp selinux
Kernel Version: 3.10.0-327.22.2.el7.x86_64
Operating System: Red Hat Enterprise Linux Server 7.3 (Maipo)
OSType: linux
Architecture: x86_64
Number of Docker Hooks: 2
CPUs: 4
Total Memory: 15.26 GiB
Name: ip-172-18-13-151.ec2.internal
ID: 4GNM:G4KZ:FPMN:DQSK:ZWXO:WPQQ:TXXT:S432:5H3B:Z7PH:3WCO:NAUM
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Insecure Registries:
ci.dev.openshift.redhat.com:5000
172.30.0.0/16
127.0.0.0/8
Registries: docker.io (secure)
test/end-to-end/core.sh
Outdated
@@ -129,7 +129,7 @@ os::cmd::expect_success 'oc env -n default dc/docker-registry REGISTRY_MIDDLEWAR
 os::cmd::expect_success 'oc rollout status dc/docker-registry'
 os::log::info "Restore configured to enable mirroring"
 
-registry_pod="$(oc get pod -n default -l deploymentconfig=docker-registry --show-all=false --template='{{(index .items 0).metadata.name}}')"
+registry_pod="$(oc get pod -n default -l deploymentconfig=docker-registry --template '{{range .items}}{{if not .metadata.deletionTimestamp}}{{.metadata.name}}{{end}}{{end}}')"
If there's more than one pod here it's going to smash the names together without spaces -- also, what has changed to lead to this happening?
Don't know yet. It's just taking ~10 seconds on my laptop for the old pod to go from Terminating to 100% deleted.
The rollout does not synchronize with the deletion. I think excluding pods which are in deletion state is correct.
We shouldn't have more than one active pod if our instance=1 deployment works. If not, the test breaks. We could add a space and select the first pod, but that would hide a problem.
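For reference, the rejected alternative would look roughly like this (a hypothetical variant, not what the PR does):

```bash
# Hypothetical: emit a space between surviving pod names and pick the
# first one. Works, but would silently mask a broken instance=1
# deployment instead of failing the test.
registry_pod="$(oc get pod -n default -l deploymentconfig=docker-registry \
  --template '{{range .items}}{{if not .metadata.deletionTimestamp}}{{.metadata.name}} {{end}}{{end}}' \
  | awk '{print $1}')"
```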
So the behavior change here is the terminated --> deleted transition is much slower and this causes the get here to fail... agree that excluding terminated pods here is reasonable, as long as we're not plastering over a performance/behavior regression with this.
for _, fqKind := range fqKinds {
  if unversioned && fqKind.Group == "" {
Why are you particular on the value of the group? Why isn't meta.k8s.io valid?
metav1.Status is only registered as unversioned into the core v1 group. Our build apigroup webhook storage returns Status in the New() method (origin/pkg/util/rest/webhook.go, line 56 in 34fd627: func (h *WebHook) New() runtime.Object).
Hence, we have two choices: 1) we can register Status under the build apigroup (sttts@adc5280) or 2) make the api installer happy with unversioned kinds as well. We chose (2) here to avoid side effects from Status being registered under another apigroup.
Hence, we have two choice: 1) we can register Status under the build apigroup (sttts/origin@adc5280) or 2) make the api installer happy with unversioned kinds as well.
Option 1 seems more correct. I haven't fully considered the repercussions, but perhaps a followup issue is in order.
Does the serialized return include an apiVersion? Do we know what it is? meta.k8s.io or build.openshift.io seem good, but "" seems less good.
We haven't checked the returned version, but we should. Taking a look.
added a todo in the description above.
I'm starting to think that the Status type is actually wrong. Compare func (w *WebHook) ServeHTTP(writer http.ResponseWriter, req *http.Request, ctx apirequest.Context, name, subpath string) error {, which returns a Build if I read that code correctly. This also matches the integration tests (origin/test/integration/webhook_test.go, line 116 in 8262531: err = json.Unmarshal(body, returnedBuild)). @smarterclayton you wrote this ^^
Here is a PR: sttts#75
Merged sttts#75. It reverts this patch in APIInstaller.getResourceKind and replaces the Status type by Build in the webhook storage.
@@ -141,7 +141,7 @@ func resolveContainerSecurityContext(provider kscc.SecurityContextConstraintsPro
 // since that is how the sc provider will eventually apply settings in the runtime.
 // This results in an SC that is based on the Pod's PSC with the set fields from the container
 // overriding pod level settings.
-container.SecurityContext = sc.DetermineEffectiveSecurityContext(pod, container)
+container.SecurityContext = sc.InternalDetermineEffectiveSecurityContext(pod, container)
I think this is worth a followup issue. Upstream is actively trying to remove this method, right?
added to #13732
Yeah I took the quick way out and remained on internal for now
We can get rid of InternalDetermineEffectiveSecurityContext using some conversion. I would leave it as it is until upstream really removes it. No need to rush here IMO.
var outputVersion schema.GroupVersion
outputVersionString := kcmdutil.GetFlagString(cmd, "output-version")
if len(outputVersionString) == 0 {
  outputVersion = *clientConfig.GroupVersion
Isn't this going to be incorrect every time you're getting resources from multiple groups, e.g. oc export builds,deployments? I really think we want to move away from choosing the --output-version like upstream and use oc convert instead. We can make that one spot correct with a preferred version chain. I have no suggestion that I think could make this work properly for the scenario I described.
I'm not sure what you're suggesting we do here?
deprecate output version, select the version you want via your input (oc export jobs.v1.batch,deployments.v1beta1.apps, etc.)
can we make export output whatever version you requested?
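For illustration, the fully-qualified form suggested above would be used like this (a sketch of the proposed behavior, not current oc semantics):

```bash
# Proposed: each resource names its own group/version, so the output
# version is implied by the request rather than a global flag.
oc export jobs.v1.batch,deployments.v1beta1.apps -o yaml

# The problematic form this thread describes: one --output-version
# cannot be right for resources from multiple groups at once.
oc export builds,deployments --output-version=v1
```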
@@ -3301,64 +3506,60 @@
 },
 {
 "ImportPath": "google.golang.org/grpc",
-"Comment": "v1.0.4",
-"Rev": "777daa17ff9b5daef1cfdf915088a2ada3332bf0"
+"Comment": "v1.0.2",
Why are we taking an older version of grpc?
Oh, I see there's another commit which vendors grpc inside coreos/etcd. Why would we bother with those multilevel Godeps instead of having one that is verified? I think none of our tools that check Godeps looks into the nested ones, and with the proliferation of those (I know of at least one other project doing this) we should be.
The generated code is not compatible. Upstream kube uses 1.0.2. etcd uses 1.0.4.
The options are vendor inside coreos/etcd or downgrade etcd to the same version kube is using.
Actually, looking at the current commitchecker implementation, I think it does check all of the Godeps.json files.
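A quick way to see which nested manifests such tooling has to consider (a sketch, assuming a standard vendor/ layout):

```bash
# List every Godeps.json below the top level, i.e. the nested manifests
# (such as the one inside coreos/etcd) that dependency checkers may miss.
find vendor -mindepth 2 -name Godeps.json -print
```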
 }
 
 // We don't show the deployment history when running `oc rollback --dry-run`.
 if d.config == nil && !isNotDeployed {
-sorted := deploymentsHistory
+var sorted []*v1.ReplicationController
+// TODO(rebase-1.6): we should really convert the describer to use a versioned clientset
We should have another post-rebase issue wrt that.
Added to #13732
@@ -151,15 +151,15 @@ func TestValidatePodSecurityPolicyReview(t *testing.T) {
 },
 },
 },
-`spec.serviceAccountNames[0]: Invalid value: "my bad sa": must match the regex [a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)* (e.g. 'example.com')`: {
+`spec.serviceAccountNames[0]: Invalid value: "my bad sa": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')`: {
You could squash all the changes fixing this test string into a single commit. At least fix-test: pkg/image/api/validation/validation_test.go and fix-test: pkg/api. I'd also squash fix-test: pkg/deploy/api/validation/validation_test.go error output with it as well.
Will do in the next rebase to origin master.
done
@@ -155,7 +159,7 @@ func GetBootstrapClusterRoles() []authorizationapi.ClusterRole {
 "replicationcontrollers/status", "resourcequotas", "resourcequotas/status", "securitycontextconstraints", "serviceaccounts", "services",
 "services/status").RuleOrDie(),
 
-authorizationapi.NewRule(read...).Groups(appsGroup).Resources("statefulsets", "statefulsets/status").RuleOrDie(),
+authorizationapi.NewRule(read...).Groups(appsGroup).Resources("statefulsets", "statefulsets/status", "deployments", "deployments/scale", "deployments/status").RuleOrDie(),
- You need similar changes in AdminRoleName, EditRoleName, ViewRoleName wrt adding deployments to apps.
- We should drop jobs and horizontalpodautoscalers from extensions, everywhere.
@ncdc ^^
You need similar changes in AdminRoleName, EditRoleName, ViewRoleName wrt adding deployments to apps.
Don't give /status access to the namespaced roles.
I'll fix jobs and hpas. I'm going to make cleaning up /status a post-rebase follow-up, as it's currently there for the namespaced roles.
Commits that lgtm (I'll keep updating)
Commits that I don't like, but can't think of a small fix for
Commits with substantive comments
pkg/cmd/server/kubernetes/master.go
Outdated
 return groupVersionResources, nil
 }
-namespaceController := namespacecontroller.NewNamespaceController(kubeClient, clientPool, gvrFn, c.ControllerManager.NamespaceSyncPeriod.Duration, kapi.FinalizerKubernetes)
+namespaceController := namespacecontroller.NewNamespaceController(kubeClient, clientPool, gvrFn, namespaceInformer, c.ControllerManager.NamespaceSyncPeriod.Duration, kapiv1.FinalizerKubernetes)
Why not c.Informers.KubernetesInformers().Core().V1().Namespaces()?
@ncdc ^^
Brain fart?
@@ -676,7 +681,7 @@ func newAuthenticator(config configapi.MasterConfig, restOptionsGetter restoptio
 authenticators = append(authenticators, certauth)
 }
 
-resultingAuthenticator := union.NewFailOnError(authenticators...)
+resultingAuthenticator := union.New(authenticators...)
This looks incorrect. We should still be NewFailOnError, right?
This is fixed in a follow-up from yesterday.
done
 topLevelAuthenticators = append(topLevelAuthenticators, anonymous.NewAuthenticator())
 
-return group.NewAuthenticatedGroupAdder(union.NewFailOnError(topLevelAuthenticators...)), nil
+return group.NewAuthenticatedGroupAdder(union.New(topLevelAuthenticators...)), nil
NewFailOnError
Fixed as well: fb66c26
done
etcdConfig.CertFile = masterConfig.EtcdClientInfo.ClientCert.CertFile
etcdConfig.CAFile = masterConfig.EtcdClientInfo.CA

storageFactory, err := kubeapiserver.NewStorageFactory(
As I recall it, this is the one that does the funny parsing of the options. I don't think that we want to respect the kube-arg for dealing with this. Seems like we should respect our config instead and use NewDefaultStorageFactory
Does this mean you want to drop everything server.APIEnablement.RuntimeConfig does?
Then we end up with this: sttts@f57955d
done
if err != nil {
  return nil, fmt.Errorf("error determining service IP ranges: %v", err)
}
if err := s.SecureServing.MaybeDefaultWithSelfSignedCerts(s.GenericServerRunOptions.AdvertiseAddress.String(), apiServerServiceIP); err != nil {
We shouldn't have to do this. By the time we're called, certs should be available.
done
Informers: informers,
}

return kmaster, nil
}

func ClientCARegistrationHook(options *configapi.MasterConfig) (*master.ClientCARegistrationHook, error) {
We don't get this for free?
No, hardcoded in the master. As long as we do not decompose or just plainly use master.BuildMasterConfig, we have to reimplement it: sttts@c12bd72
done
-ReconcilerSyncLoopPeriod: metav1.Duration{Duration: 5 * time.Second},
+ReconcilerSyncLoopPeriod: metav1.Duration{Duration: 60 * time.Second},
+Controllers: []string{"*"},
+EnableTaintManager: true,
@derekwaynecarr taints on or off?
pkg/cmd/server/api/types.go
Outdated
-// Location is the path to a configuration file that contains the plugin's
-// configuration
+// Location is the path to a legacy configuration file that contains the plugin's
+// configuration. DEPRECATED.
deprecated? Because kube now lets you split bits apart?
At least legacy in kube: origin/vendor/k8s.io/kubernetes/staging/src/k8s.io/apiserver/pkg/admission/config.go, line 108 in 9949be4: // convert the legacy format to the new admission control format
Done: sttts@4d3c856
[test]
[test]
[test]
[test]
Evaluated for origin testextended up to bd5ed9c
continuous-integration/openshift-jenkins/testextended Running (https://ci.openshift.redhat.com/jenkins/job/test_pull_request_origin_extended/256/) (Base Commit: decce00) (Extended Tests: all)
Evaluated for origin test up to bd5ed9c
continuous-integration/openshift-jenkins/test Running (https://ci.openshift.redhat.com/jenkins/job/test_pull_request_origin/988/) (Base Commit: decce00)
lgtm. I plan to merge on green.
LGTM, 👍 on merge when green
green on merge
Two e2e flakes, known upstream. We should skip those for now. One OpenShift flake, we should watch prevalence. One bad unit test check. This needs to be fixed asap.
Do we have issues for those three items?
@stevekuznetsov the unit test fix follows in a minute. The e2e flakes are listed in the follow-up task issue #13732. |
I will collect all open comments from this PR tomorrow and add them to #13732 or create individual issues. |
Bump kubernetes to 1.6.1 and master from Apr 4.
Replaces #13483
Fixes #13419
Fixes #13722
Fixes #13571
xref #12458
1.6 post-rebase tasks: #13732
TODOs:
- Status apiVersion in build webhook endpoint, compare Rebase 1.6.1 #13653 (comment); PR: Build instead of Status in build webhook sttts/origin#75
- W0413 17:32:29.303436 41108 authentication.go:362] AnonymousAuth is not allowed with the AllowAll authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
- 10f939a#r112429397
TODOs for the next rebase to origin master:
- Broken extended tests (run on local Fedora 25 as AWS CI is down): https://docs.google.com/a/redhat.com/spreadsheets/d/1OXZtuzmU86jxkN2qe7YuOCFXNDPtg1oEVD-CHIZI6u0/edit?usp=sharing