Steering Committee Meeting Summary
The Kubernetes Steering Committee held its first public meeting on February 13th. In prior meetings, the decisions were public, but only SC members could attend. Going forward, the SC will alternate biweekly between closed and open meetings, and LWKD will cover the open ones.
The SC has introduced User Groups, an entity for contributors to coordinate around a specific concern or activity, but which (unlike SIGs) owns no code and (unlike Working Groups) has no specific goals to accomplish. The Big Data SIG was mentioned as a possible convert to a UG. Speaking of Working Groups, the SC is currently documenting how WGs should form and how they should disband, such as the Resource Management WG, which has accomplished its original goals. Finally, they’re working on a list of approved subprojects, as well as owner automation and process around subprojects. This will become a big cleanup effort.
There was discussion about the newly introduced CNCF SIGs, and how this affects the Kubernetes project (TL;DR: we don’t know yet). Dims reported on the Licensing subproject, managed by SIG-Contribex, where we’re scanning Kubernetes code for contributions under noncompliant licenses using the FOSSA tool. And Brian has located a vendor to handle a Kubernetes Bug Bounty program for us.
Finally, Paris briefed the SC on the recent Slack vandalism attack.
Next Deadline: 1.14 Beta, branch on Feb. 19th. Exceptions close Feb. 25th
You have until next week to file your last-minute Exception Requests. Burndown meetings also begin next week (look for your invitation if you have failing tests), leading up to Code Freeze on March 7th.
After a brief removal to work out some issues, Kustomize support is back in `kubectl`. Direct integration with commands like `kubectl apply` is still pending, but this is a step toward making sure that vanilla Kubernetes provides some basic workflow tools and guidance. If you already use Kustomize via its own command, you can keep doing that, but it will also be available via `kubectl kustomize` and included with future releases.
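If you haven’t tried Kustomize yet, what it consumes is a `kustomization.yaml` that lists plain manifests plus transformations to apply to them. A minimal sketch (the file names and label values here are hypothetical):

```yaml
# kustomization.yaml — minimal sketch; deployment.yaml and service.yaml
# are ordinary Kubernetes manifests sitting in the same directory
resources:
  - deployment.yaml
  - service.yaml
namePrefix: staging-    # prepended to every resource name
commonLabels:
  env: staging          # added to every resource's labels and selectors
```

With the new built-in support, `kubectl kustomize <dir>` renders the combined YAML, which you can pipe to `kubectl apply -f -` until direct integration lands.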
More of a reminder than an important PR itself, but 1.10 has officially set sail for the seas beyond. If you are still targeting it for any local development, make sure you lock down your environment very carefully.
And as one sets, so another shall rise. The forthcoming 1.14 release has been branched and CI will be keeping you honest on all future PRs. Please do check the CI signal report for 1.14 and help out with stabilizing flaky tests!
- Limit the size of a single JSON patch to 10,000 operations, which should be enough for anyone, and limit apiserver request size overall
- While we’re at it, limit the number of process IDs per node, and discard excess events if the health check message channel is full
- Want to know what permissions you have in a namespace? Use `kubectl auth can-i --list`
- Make messages for container create, start, and stop events consistent, which will make automation easier, but in the short term might break some of your parsing code
- The PodSandbox class now has a
- `kubeadm init` now has `--config` flags, and kubeadm doesn’t bork diffs with characters like ‘%’ anymore
- Make sure containers are stopped before attempting a restart or remove
- SMB remount on Windows is fixed
- kubectl’s discovery requests to the API server are much faster now, and API aggregation is 2x faster too
- Per-zone volumes work in vSphere now
- Discovery clients are being split out into packages by functionality area
- Promote the kubelet OS and arch labels to GA, and mark the beta versions of the labels as deprecated in 1.18
- `kubectl get --export`, which never actually worked, is now deprecated and will be removed in an unspecified future version
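For context on the JSON patch limit above: a JSON patch is an array in which each object is one operation, and the new cap rejects patches containing more than 10,000 of them. A sketch of a three-operation patch (the resource fields and paths are hypothetical):

```json
[
  { "op": "replace", "path": "/spec/replicas", "value": 3 },
  { "op": "add",     "path": "/metadata/labels/team", "value": "infra" },
  { "op": "remove",  "path": "/metadata/annotations/old-note" }
]
```

You would send a patch like this with `kubectl patch <resource> <name> --type=json -p '<patch>'`; three operations is obviously well under the limit, which is aimed at abusive or runaway clients.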