Kubernetes Weekly Community Meeting

videos

#1

We’ll use this topic to post community meeting notes and videos. The community meeting happens every Thursday at 6pm UTC (1pm EST / 10am PST). It is open to the public and streamed to the YouTube channel.

See this page for more information:


#2

April 19, 2018 - (recording)

  • Moderators: Paris Pittman [SIG ContribEx]

  • Note Taker: Jaice Singer DuMars (Google)

  • [0:01] Demo - Skaffold [Matt Rickard - Google (mrick@google.com)]

    • https://github.com/GoogleContainerTools/skaffold
    • Tool for developing applications on Kubernetes
    • Allows you to step into CI/CD
    • `skaffold dev` / `skaffold run` are the two primary commands
    • Q: What is the plan around integration for new Kubernetes releases?
      • Pinned to 1.10, have integration testing but not version skew
      • Want to follow the Kubernetes support process of ~2 releases
    • Q: Why would this not be in CNCF/part of k8s?
      • Trying to keep it unopinionated
      • If a community project makes sense, we will examine that
      • MFarina: Ecosystem projects are the preference to avoid contention
    • Q: So what are the non-Docker image formats this tool supports?
      • Only supports Bazel
      • Working on Java support
      • Jib (https://github.com/google/jib) is the next build tool we’re working on integrating with Skaffold
      • Minimal arbitrary support, but requires a file to query and parse to determine source dependencies; currently in-tree but might move to a plugin model
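
The watch/build/deploy loop that `skaffold dev` runs on every source change can be sketched in Python. This is an illustrative sketch only (the function and registry names are hypothetical): the real tool watches files on disk and drives Docker/Bazel builds and Kubernetes deploys.

```python
import hashlib

def digest(source_files):
    """Hash source contents to detect changes (stand-in for a file watcher)."""
    h = hashlib.sha256()
    for name in sorted(source_files):
        h.update(name.encode())
        h.update(source_files[name].encode())
    return h.hexdigest()[:12]

def dev_loop_iteration(source_files, last_digest, build, deploy):
    """One iteration of a skaffold-dev-style loop: rebuild and redeploy
    only when the source digest changed. `build` and `deploy` are
    injected callables standing in for the real builder/deployer."""
    current = digest(source_files)
    if current == last_digest:
        return last_digest, None   # nothing changed, nothing to do
    image = build(current)         # e.g. tag the image with the digest
    deploy(image)
    return current, image

# Usage: three iterations, with a source edit before the last one.
events = []
build = lambda tag: f"registry.example/app:{tag}"   # hypothetical registry
deploy = events.append

files = {"main.go": "package main"}
d, img1 = dev_loop_iteration(files, None, build, deploy)   # first build
d, img2 = dev_loop_iteration(files, d, build, deploy)      # unchanged: skipped
files["main.go"] = "package main // edited"
d, img3 = dev_loop_iteration(files, d, build, deploy)      # rebuilt
```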
  • [0:14] Release Updates

  • 1.11 [Josh Berkus ~ Release Lead] (confirmed)

  • Patch Release Updates

    • None
  • Graph o’ the Week

    • YouTube Channel Stats!
    • ~7000 subscribers and growing
    • Old videos and high engagement videos get the most attention
    • SIG recordings are typically used as a sleep aid
    • 4:45 average view time on videos
    • If you are posting videos, use descriptive titles, tags
    • Desktops primary viewing device, also TVs
    • You can turn the video speed up to 1.5 if you want to get through the material faster

[0:23] SIG Updates

  • CLI (Maciej Szulik - confirmed)

    • Printing of objects is being moved to the server - currently in beta; in 1.10 users were able to opt in
    • It is on by default in 1.11, but you can opt out via a flag
    • There should be no user-facing impact, but if there is, contact sig-cli
    • Different patterns exist across the repo; trying to unify them by providing identical flags and output
    • Unified flag handling will make the UX consistent and simplify the code base
  • AWS (Justin SB - Confirmed)

    • Our first sig repository: aws-encryption-provider ~ encryption at rest in etcd
    • Justin SB is now a Googler
    • CP breakout is blocked by non-technical issues
    • From Micah Hausler (EKS) to Everyone: (10:29 AM): Small correction: We are actively working on the CP breakout here at AWS (we’ve had a ad-hoc community-based meeting to get it going) - meeting notes
    • Need help working on this
  • GCP (Adam Worrell - confirmed) (bit.ly/k8s-sig-gcp)

    • Not thriving: three meetings total, but a lack of topics
    • There’s only one lead, but someone has expressed interest
    • Organizationally important, but there don’t seem to be externally-interested parties
    • There are lurkers, but not a specific community
    • Community, please use this opportunity

Announcements

  • SIG UI is looking for more active contributors to revitalize the dashboard. Please join their communication channels and attend the next meeting to announce your interest.

  • KubeCon EU Update

  • Current contributor track session voting will be emailed to attendees today!

  • RSVP for Contributor Summit [here]

  • SIG Leads, please do your updates for the 5 minute updates

  • CNCF meet the maintainers group is organizing ~ please sign up for attending the CNCF booth

Shoutouts!

  • Join #shoutouts to add yours to the weekly announcements

  • @maciekpytel for providing some nuance and clarity around node autoscaler

  • @cblecker for fielding so many issues and PRs.


#3

May 10, 2018

  • Moderators: Tim Pepper [SIG Contributor Experience, SIG Release]

  • Note Taker: Jorge Castro / Christian Roy

  • Demo: Ambassador API Gateway built on Envoy/K8S ( https://www.getambassador.io ) (richard@datawire.io)

    • https://github.com/datawire/ambassador

    • Link to slides

    • Kubernetes only, simple architecture

      • Apache licensed

      • Declarative configuration via Kubernetes annotations

      • Built on Envoy - designed for machine configuration

      • Operates as a sidecar to Envoy: asynchronously notified of config changes, it configures Envoy accordingly

      • Concept of shadowing traffic: copies all incoming requests to another service but filters out that service’s responses - good for debugging in production.
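
The shadowing behavior described above (copy each request to a second service, but only ever return the primary’s response) can be sketched in Python. The handler shape is illustrative, not Ambassador’s or Envoy’s actual API:

```python
def handle_with_shadow(request, primary, shadow, shadow_log):
    """Route a request to the primary service and also copy it to a
    shadow service; the shadow's response is recorded (for debugging)
    but never returned to the client."""
    primary_resp = primary(request)
    try:
        shadow_resp = shadow(request)    # same request, second service
        shadow_log.append(shadow_resp)   # filtered out of the reply path
    except Exception:
        pass                             # shadow failures never hurt clients
    return primary_resp

# Usage: the shadow (a hypothetical v2 service) errors, the client never sees it.
log = []
primary = lambda r: {"status": 200, "body": "v1:" + r}
shadow = lambda r: {"status": 500, "body": "v2-crash"}
resp = handle_with_shadow("GET /users", primary, shadow, log)
```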

  • Release Updates:

  • SIG Updates:

    • Architecture [Brian Grant]

      • Working on our charter

        • Improving conformance tests

        • Provide technical expertise/advice/overview across SIGs

        • Formalizing proposal processes into KEPs, more structure, make it more obvious

        • API review process. Used to be informal, we want to formalize that.

      • Weekly meeting with alternating full meeting (decisions) and office hours (discussions)

        • Office hours are available for people who want to ask questions on how to best implement incoming ideas (API review, etc.)

      • Meeting and note information

    • Contributor Experience [Paris Pittman]

  • Announcements:

    • Shoutouts!

      • See someone doing something great in the community? Mention them in #shoutouts on slack and we’ll mention them during the community meeting:

      • Ihor Dvoretskyi thanks @justaugustus, who did a GREAT job as a Kubernetes 1.11 release features shadow

      • Josh Berkus to Aish Sundar for doing a truly phenomenal job as CI signal lead on the 1.11 release team

      • Tim Pepper to Aaron Crickenberger for being such a great leader on the project during recent months

      • Chuck Ha shouts out to the doc team - “Working on the website is such a good experience now that it’s on hugo. Page rebuild time went from ~20 seconds to 60ms” :heart emoji:

      • Jason de Tiber would like to thank Leigh Capili (@stealthybox) for the hard work and long hours helping to fix kubeadm upgrade issues. (2nd shoutout in a row for Leigh! -ed)

      • Jorge Castro and Paris Pittman would like to thank Vanessa Heric and the rest of the CNCF/Linux Foundation personnel that helped us pull off another great Contributor Summit and Kubecon

      • Top Stackoverflow Users in the Kubernetes Tag for the month

        • Anton Kostenko, Nicola Ben, Maruf Tuhin, Jonah Benton, Const

    • Message from the docs team re: hugo transition:

      • We've successfully migrated kubernetes.io from a Jekyll site framework to Hugo. Any open pull requests for k/website need to be revised to incorporate the repo's new content structure. (Changes in `docs/` must now change `content/en/docs/`.)

      • More about the framework change: https://kubernetes.io/blog/2018/05/05/hugo-migration/

    • KEP Section for the Community Meeting? [Jorge Castro]

      • Lots of KEPs coming in via PR, should we have current KEPs in flight as a standing agenda item in the community meeting?

      • When starting a KEP, send an email FYI to the appropriate SIGs and Arch as github notifications are noisy and missed.

      • Visibility would help us bootstrap the KEP process for people, but we still need a site listing current KEPs

    • Kubernetes Application Survey results [Matt Farina] WG


Help Wanted?

      • SIG UI is looking for additional contributors (with javascript and/or go knowledge) and maintainers

        • Piotr and Konrad from Google have offered to bring folks up to speed.

        • Take a look at open issues to get started or reach out to their slack channel, mailing list, or next meeting.

SIG UI mailing list:

https://groups.google.com/forum/#!forum/kubernetes-sig-ui

#4

Here are the notes from this week’s meeting:

May 17, 2018 - (recording)

  • Moderators: Paris Pittman [SIG Contributor Experience]

  • Note Taker: Solly Ross

  • Demo: Gardener Demo ( vasu.chandrasekhara@sap.com and rafael.franzke@sap.com)

    • [occurred towards end of video instead]

    • Open Source: https://gardener.cloud/

    • Mission

      • Manage, maintain, and operate multiple k8s clusters

      • Work across public and private clouds

    • Architecture

      • Self-hosted

      • Kube-centric

      • Steps

        • Boot initial "garden" cluster using kubify ( https://github.com/gardener/kubify , open source)

        • Deploy Gardener to "garden" cluster + dashboard (Gardener is extension API server)

        • Run/use a "seed" cluster to host the control plane components and a terraformer for each "shoot" cluster (1 seed per hosting platform, region, etc.)

        • Each set of control plane components corresponds to a "shoot" cluster with actual nodes (machine controller + machine API objects control this)

        • VPN between "seed" cluster and "shoot" clusters so that API server, monitoring can talk to node

    • Secrets are created for each shoot to easily download kubeconfigs, etc

    • Declarative config for each cluster ("shoot") with status info as well

    • Uses cluster API machine resources, working with Cluster API WG

    • Q: is it stable, or in development?

      • A: used internally, but still in development

    • Q: baremetal support?

      • If there's an infra API that can be used to control baremetal, then that can be used

    • Detailed Blog describing Gardener's architecture: https://kubernetes.io/blog/2018/05/17/gardener/
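
The garden/seed/shoot relationship described in the demo (one seed per hosting platform/region, each shoot’s control plane hosted in a matching seed) can be sketched as a toy Python data model. Names and fields here are illustrative only, not Gardener’s API types:

```python
from dataclasses import dataclass, field

@dataclass
class Seed:
    """A cluster that hosts the control planes of shoot clusters
    (one seed per hosting platform/region, per the notes)."""
    name: str
    platform: str
    region: str
    shoots: list = field(default_factory=list)

@dataclass
class Shoot:
    """An end-user cluster: its nodes run on the target platform, but
    its control plane runs as pods inside a seed."""
    name: str
    platform: str
    region: str

class Garden:
    """The initial 'garden' cluster where Gardener itself runs; it
    places each shoot's control plane on a matching seed."""
    def __init__(self):
        self.seeds = []

    def register_seed(self, seed):
        self.seeds.append(seed)

    def create_shoot(self, shoot):
        for seed in self.seeds:
            if (seed.platform, seed.region) == (shoot.platform, shoot.region):
                seed.shoots.append(shoot)
                return seed
        raise LookupError(f"no seed for {shoot.platform}/{shoot.region}")

# Usage: two seeds; the shoot lands on the platform/region-matching one.
garden = Garden()
aws = Seed("seed-aws-eu", "aws", "eu-west-1")
gcp = Seed("seed-gcp-us", "gcp", "us-central1")
garden.register_seed(aws)
garden.register_seed(gcp)
placed = garden.create_shoot(Shoot("team-a", "gcp", "us-central1"))
```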

  • Release Updates:

    • 1.11 [Josh Berkus, RT Lead / Aish Sundar CI Signal Lead] (Week 7)

      • Next Deadline: Docs, Open Placeholder PRs Required, May 25th

      • 1.11.0 Beta0 released yesterday.

      • We are delaying/shortening Code Freeze as discussed. See new calendar for current deadlines.

        • Stable passing tests, low bug count → small code freeze periods → more development time

        • Code slush: May 29th

        • Code freeze: June 5th

      • Many thanks to dims, liggit, timothysc, krousey, kow3ns, yliaog, k82cn, mrhohn, msau42, shyamvs, directxman12 for debugging fails and closing issues, and AishSundar, Cole Mickens, Mohammed Ahmed, and Zach Arnold for working with the SIGs to get attention on issues and test failures.

      • Help wanted on scalability and performance

    • 1.10 [Maciek Pytel, PRM]

  • SIG Updates:

    • Scheduling [Bobby Salamat]

      • Priority and Preemption

        • Have gotten good feedback over the past quarter

        • Moving to beta/enabled by default in 1.11

      • Equivalence Cache Scheduling

        • Caching predicate results for given inputs as long as conditions don't change in cluster

      • Gang Scheduling

        • Schedule a bunch of pods together, don't schedule only a subset

        • Kube-arbitrator has a prototype that seems to work well

        • Need to collect more requirements

        • Q: Can we use batch scheduling to improve throughput?

          • A: Maybe use a Firmament-like approach?

          • Q: is this a step along the way for perf optimization on the current schedule?

          • A: Engineers from Huawei are working on this, but ran into issues with things like pod anti-affinity and actually binding the pods

      • Taint based eviction to beta

      • Scheduling framework

        • Still in the design phase

      • Pod scheduling policy

        • Lots of opinions, progress has been slow

        • Existing design proposal with lots of opinions
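
The equivalence-cache idea mentioned above (cache predicate results per pod equivalence class and node, valid only as long as cluster conditions don’t change) can be sketched minimally in Python. This illustrates the concept; it is not the scheduler’s actual code, and the predicate name is just an example:

```python
class EquivalenceCache:
    """Cache scheduler predicate results keyed by (predicate, node,
    pod equivalence class); invalidate a node's entries when that node
    changes, since cached results only hold while conditions do."""
    def __init__(self):
        self.cache = {}
        self.computations = 0  # counts real predicate evaluations

    def predicate(self, name, node, pod_class, evaluate):
        key = (name, node, pod_class)
        if key not in self.cache:
            self.computations += 1
            self.cache[key] = evaluate()
        return self.cache[key]

    def invalidate_node(self, node):
        self.cache = {k: v for k, v in self.cache.items() if k[1] != node}

# Usage: two pods in the same equivalence class share one evaluation;
# a node change forces re-evaluation.
ec = EquivalenceCache()
fits = lambda: True
a = ec.predicate("PodFitsResources", "node-1", "web", fits)
b = ec.predicate("PodFitsResources", "node-1", "web", fits)  # cache hit
ec.invalidate_node("node-1")   # e.g. node resources changed
c = ec.predicate("PodFitsResources", "node-1", "web", fits)  # recomputed
```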

    • Scalability [Bob Wise]

      • Slides

      • Schedule for large runs of perf is even-odd day

      • Different perf axes (there's not just one axis, e.g. "number of nodes")

        • Nodes, Pod Churn, Pod density, Networking, Secrets, Active Namespaces

      • Pro Tips

        • Lock your etcd version

        • Test your cluster with Kubemark

      • Recommended reading in slides

        • Perf regression study

        • Scalability good practices

      • WIP Items

        • Better testing of real workloads (cluster-loader)

        • More scalability testing in presubmit tests

          • Concerns around run time issues

        • Sonobuoy perf testing

      • Q: Limitations on scalability come down to etcd perf, do we work with etcd engineers?

        • A: Perf is generally not an etcd issue wrt bottlenecks

        • A: Problems tend to be regressions across etcd versions, not etcd as the bottleneck

        • A: Range locking issues being improved in 3.3,3.4

        • A: talk to Shyam about this for more info

    • API Machinery [Daniel Smith, confirmed]

      • New Dynamic client with better interface!

        • Old is under "deprecated" directory

        • Client-side QPS rate-limit behavior changed

      • CRD Versioning

        • Design issue with versioning priorities found, but no-op conversion will still land in 1.11

      • Apply WG

        • Feature branch for apply, trying to put things in master when possible

        • Won't reintegrate before 1.11 (feature branch work will continue through code freeze)

  • Announcements:

    • Shoutouts!

      • Warm welcome to @liz and @cha for their journeys in joining the k8s org! Both of you have been having a big impact in #sig-cluster-lifecycle - stealthybox

      • @chancez and @danderson for a great conversation on bare metal options and concerns! - mauilion

      • shoutout to @liggitt, master wrangler of e2e test bugs. Jordan has fixed many "fun" bugs . Thanks for helping keep things green! :smile: - bentheelder

      • As a new contributor, I can 100% endorse @carolynvs for being REALLY GOOD at bringing in new contributors, and dedicating a lot of time and effort to make sure they are successful. -teague_cole

    • Help Wanted!

      • SIG UI looking for new contributors to go up the ladder to maintainers. Start with an open issue and reach out to the mailing list and slack channel.

      • SIG Scalability is looking for contributors!

      • We need more contributor mentors! Fill this out.

        • The next Meet Our Contributors (mentors on demand!) will be on June 6th. Check out kubernetes.io/community for time slots and to copy to your calendar.

    • Kubecon Follow Ups

    • Other

      • Don't forget to check out discuss.kubernetes.io !

      • DockerCon Kubernetes Contributor AMA during Community Day - June 13th. 3 hour window; specific time TBA


#5

Sorry I am late from last week, but here we go:

May 24, 2018 - (recording)

  • Moderators: Josh Berkus [SIG-Release]

  • Note Taker: Tim Pepper [VMware/SIGs Release & ContribX]

  • [ 0:00 ] Demo -- Workflows as CRD [ Jesse Suen (Jesse_Suen@intuit.com)]

    • Link to slides: https://drive.google.com/file/d/1Z5TMIr6r4hC7N5KeVqajC3c3NcYqK4_z/view?usp=sharing

    • Link to repositories: https://github.com/argoproj/argo

    • Argo: a fancy job controller for workflows, DAGs implemented as CRD. Originally intended for CI/CD pipelines, but is seeing usage for other workflows like machine learning.

    • Used with kubeflow

    • Component architecture interfacing with the k8s API server and leveraging sidecars in pods for workload artifact management

    • Argo command line gives validation of commands, but is effectively a kubectl wrapper

    • Workflows can be defined as a top down iterative list of steps, or as a DAG of dependencies
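
Both workflow shapes described above (a top-down list of steps, or a DAG of dependencies) reduce to executing tasks in dependency order. A minimal Python sketch of that execution (a Kahn-style topological sort, not Argo’s implementation; task names are made up):

```python
def execute_dag(tasks, deps):
    """Run workflow tasks in dependency order. `tasks` maps name ->
    callable; `deps` maps name -> list of names that must finish first.
    Returns the execution order; raises on a dependency cycle."""
    remaining = dict(deps)
    order = []
    while remaining:
        ready = [t for t, d in remaining.items()
                 if all(x not in remaining for x in d)]
        if not ready:
            raise ValueError("cycle in workflow DAG")
        for t in sorted(ready):   # deterministic among ready tasks
            tasks[t]()
            order.append(t)
            del remaining[t]
    return order

# Usage: a diamond-shaped workflow (build -> test-a/test-b -> release).
# A plain step list is just the degenerate case where each step depends
# on the previous one.
ran = []
tasks = {n: (lambda n=n: ran.append(n))
         for n in ["build", "test-a", "test-b", "release"]}
order = execute_dag(tasks, {
    "build": [],
    "test-a": ["build"],
    "test-b": ["build"],
    "release": ["test-a", "test-b"],
})
```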

  • [ 0:16 ] Release Updates

    • 1.11 Update [Josh Berkus, Release Lead]

      • Next Deadline: Docs Placeholder PRs Due Tomorrow for feature list !!!

      • Code Slush on Tuesday

      • Current CI status and schedule

        • CI Signal: tracking 3 open issues, all test issues being actively worked on.

        • Code freeze coming June 5, make sure your issues/PRs are up to date with labels and priorities and status

      • Changing the burndown meeting schedule - please comment; looking for less-conflicted times that are friendlier to more timezones

    • Patch Release Updates?

      • 1.10.3 released monday

  • [ 0:21 ] SIG Updates

    • SIG Service Catalog [Doug Davis] (confirmed)

      • Beta as of Oct 2017

      • Key development activities

        • New svcat command-line tool (similar to Cloud Foundry’s way of doing things)

        • NS-scoped brokers - still under dev

        • Considering moving to CRDs instead of dedicated apiserver

      • Finalizing our v1.0 wish-list

        • NS-scoped brokers

        • Async-bindings

        • Resolve CRD decision

        • Generic Broker & Instance Actions

        • GUIDs as Kube “name” is problematic

      • SIG has recently been actively mentoring and onboarding newcomers

    • SIG Auth [Tim Allclair](confirmed)

      • Pod TokenRequest API and ServiceAccountTokenProjection improving for 1.11

      • Client-go gaining support for x509 credentials and externalizing currently in-tree credential providers

      • Scheduling policy design thinking happening ahead of 1.12

      • Audit Logging: improved annotation metadata coming around auth and admission for logs

      • Node Isolation: nodes no longer able to update their own taints (eg: exploit to attract sensitive pod/data to a compromised node)

      • Conformance: KEP PR open on security related conformance testing to give better assurance that best practices are in use or validate a hardened profile is active. Likely not 1.11 rather 1.12.

      • Bug bounty is WIP

    • SIG Storage [Brad Childs](confirmed) Slides

      • Had SIG face-to-face meeting last week. ~40 people and 19 companies present

      • Storage functionality is moving out of tree by way of the CSI interface

        • CSI spec moving from 0.2 to 0.3 soon

        • Lots of CSI related features coming in k8s 1.11

        • Aiming for out-of-tree feature parity relative to existing in-tree

      • Feature areas: Snapshots, topology aware (scheduling relative to location of PV) and local PV, local disk pools/health/capacity, volume expansion and online resize

      • Testing: multi-phased plan to inventory and improve test coverage and CI/CD, including test coverage on other cloud providers beyond GCE. VMware committed resources, looking for commit from others.

      • Operators: external provisioners, snapshot and other operator frameworks underway. Currently not looking to do a shared operator library to span SIG-Storage repos.

      • Metrics: there are a lot. Some are cloud provider specific. Goal is to assist SREs in problem determination and corrective action.

      • API Throttling: api quota exhaustion at cloud provider and api server are frequently causing storage issues. Looking at ways to streamline.

      • External projects: SIG has something like 20 projects and is breaking them apart, looking for owners and out of tree locations for them to better live. Projects should move to CSI, a kubernetes-sigs/* repo, a utility library, or EOL

  • [ 0:00 ] Announcements

    • Shoutouts this week (Check in #shoutouts on slack)

      • Big shoutout to @carolynvs for being welcoming and encouraging to newcomers, to @paris for all the community energy and dedication, and to all the panelists from the recent Kubecon diversity lunch for sharing their experiences.

      • Big shoutout to @mike.splain for running the Boston Kubernetes meetup (9 so far!)

      • everyone at svcat is awesome and patient especially @carolynvs, @Jeremy Rickard & @jpeeler who all took time to help me when I hit some bumps on my first PR.

    • Help Wanted

      • SIG UI is looking for new contributors. Check out their issue log to jump in; also listen to their SIG UI call today where they explained more and answered questions. #sig-ui in slack for on-ramp help. Notes from the call

Looking for more mentors as we kick off our contributor mentoring programs. Fill out this form (works if you’re looking for mentorship, too). Pardon the dust as we do a mentor recruiting drive.


#6

May 31, 2018

  • Moderators: Jorge Castro [SIG Contributor Experience]
  • Note Taker: First Last [Company/SIG]
  • [ 0:00 ] Demo -- Aptomi - application delivery engine for K8S [Roman Alekseenkov]
  • [ 0:13 ] Release Updates
    • 1.11 [Josh Berkus - Release Lead]
      • Next Deadline: Draft doc PRs due June 4th.
      • Currently in Code Slush. Requiring milestones, sorry for lack of warning on that.
        • Were not able to move to Prow milestone maintainer or Tide for this release.
      • Code Freeze Starts Tuesday, June 5th
      • If your feature won’t be ready, now is the time to update your issue in the Features repo.
      • CI Signal -
        • Almost green, last few fixes merged.
        • 1 open tracking issue - Scale Density test for 30 pods
        • Conformance tests results (GCE and OpenStack) now in Release blocking dashboard
        • @misty on slack for release docs issues
    • Patch Release Updates
      • x.x
      • y.x
  • [ 0:00 ] Introduction to KEPs [Kubernetes Enhancement Proposals] [Caleb Miles]
    • We’ll be highlighting KEPs in community meetings
    • Tracking how decisions are made: identify the problem, find a SIG that agrees on the motivation, and document it for everyone
    • Slides
  • [ 0:00 ] SIG Updates
    • SIG OpenStack [David Lyle and Chris Hoge]
    • SIG Node [Dawn Chen]
      • Made steady progress on all 5 areas in Q2: 1) node management including Windows, 2) application / workload management, 3) security, 4) resource management and 5) monitoring, logging and debuggability.
      • On node management
        • Promoted dynamic kubelet config to beta
        • Refactored the system to use a node-level checkpointing manager
        • Proposed a probe-based mechanism for kubelet plugins: device, CSI, etc.
        • Proposed a design to address the scalability issue caused by large node objects, approved by the community. Had a short-term workaround in v1.11, and plan to work on the long-term solution in v1.12.
      • Together with sig-windows, we made a lot of progress on Windows support, including stats and node e2e for the Windows container image. More work on SecurityContext, storage and networking in the next release.
      • Both CRI-O and containerd are GA in this release.
        • More enhancements on CRI for container logs
        • Many enhancements to crictl, the tool for all CRI-compliant runtimes. Expecting to be GA in v1.12
        • Announced CRI testing policy to the community, and introduced node exclusive tags to e2e.
      • On security
        • For 1.11, making all addons use default seccomp profile. Expecting to promote it to beta and enable it by default.
        • Proposed a design and alpha-level Kubernetes API for sandbox. Working closely with Kata community and gVisor community on integration of CRI-compliant runtime.
        • WIP for user namespace support
        • Made progress on node TLS bootstrap via TPM.
      • On the resource management side, we made progress on promoting sysctl to beta and proposed ResourceClass to make resource support extensible.
      • Made steady progress on debug pod, but due to back-and-forth reviews from different reviewers on API changes, we couldn’t land alpha support in v1.11. Escalated to sig-architecture.
      • On the logistics side
        • Sig-node holds weekly meeting on Tuesday, 10am (Pacific Time)
        • Please join the kubernetes-sig-node Google Group to have access to all design docs, roadmap and emails.
        • Derek and I are working on sig-node charter, which is still under review and discussion.
  • [ 0:00 ] Announcements
    • Deprecation Policy Update (Important!)
    • SIG Leads - check the top of this document for a link to the SIG Update schedule.
    • Shoutouts - Someone going above and beyond? Mention them in #shoutouts on slack to thank them.
      • Aish Sundar - Shoutout to @dims and OpenStack team for quickly getting their 1.11 Conformance results piped to CI runs and contributing results to Conformance dashboard!
      • Aish Sundar - Shoutout to Benjamin Elder for adding Conformance test results to all Sig-release dashboards - master-blocking and all release branches.
      • Josh Berkus and Stephen Augustus - To Misty Stanley-Jones for aggressively and doggedly pursuing 1.11 documentation deadlines, which both gives folks earlier warning about docs needs and lets us bounce incomplete features earlier
    • Help Wanted
    • Meet Our Contributors (mentors on demand)
    • Stackoverflow Top Users
    • Thread o’ the week: How has Kubernetes failed for you?

#7

June 7, 2018

  • Moderators: Jaice Singer DuMars [SIG Release/Architecture]
  • Note Taker: Austin Adams [Ygrene Energy Fund]
  • [ 0:00 ] Demo – YugaByte ~ Karthik Ranganathan [karthik@yugabyte.com] (confirmed)
    • Karthik Ranganathan
    • Answers from Q&A:
      • @jberkus - For q1 - YB is optimized for small reads and writes, but can also perform batch reads and writes efficiently - mostly oriented towards modern OLTP/user-facing applications. Example is using spark or presto on top for use-cases like iot, fraud detection, alerting, user-personalization, etc.
      • q2: operator in the works. We are just wrapping up our helm charts https://github.com/YugaByte/yugabyte-db/tree/master/cloud/kubernetes/helm
      • q3: the enterprise edition does have net new DB features like async replication and enforcing geographic affinity for reads/writes, etc. Here is a comparison: https://www.yugabyte.com/product/compare/
      • q4: You cannot write data using Redis and read it using another API. It’s often tough to model across APIs. The aim is to use a single database to build the app, so common APIs are supported
      • The storage layer is common
      • So all APIs are modeled on top of the common document storage layer
      • The API layer (called YQL) is pluggable
      • Currently we model Redis “objects” and Cassandra “tables” on top of this document core, taking care to optimize the access patterns from the various APIs
      • We are working on postgres as the next API
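
The layered design described in the Q&A above (one common document storage layer, with pluggable API layers such as Redis “objects” and Cassandra “tables” modeled on top) can be sketched in Python. The class names and key scheme are illustrative only, not YugaByte’s code:

```python
class DocumentStore:
    """Common storage core: every API layer models its data as
    documents here."""
    def __init__(self):
        self.docs = {}

    def put(self, key, doc):
        self.docs[key] = doc

    def get(self, key):
        return self.docs.get(key)

class RedisAPI:
    """Redis-style key/value layer over the shared document core."""
    def __init__(self, store):
        self.store = store

    def set(self, key, value):
        self.store.put(("redis", key), {"value": value})

    def get(self, key):
        doc = self.store.get(("redis", key))
        return doc["value"] if doc else None

class CassandraAPI:
    """Cassandra-style table/row layer over the same core. The key
    spaces are separate, mirroring the note that you can't write data
    with one API and read it with another."""
    def __init__(self, store):
        self.store = store

    def insert(self, table, row_key, columns):
        self.store.put(("cql", table, row_key), columns)

    def select(self, table, row_key):
        return self.store.get(("cql", table, row_key))

# Usage: both APIs share one store, but the data is not cross-readable.
store = DocumentStore()
r, c = RedisAPI(store), CassandraAPI(store)
r.set("user:1", "alice")
c.insert("users", "1", {"name": "alice"})
```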
  • [ 0:00 ] Release Updates
    • 1.11 [Josh Berkus - Release Lead]
      • Next Deadline: Docs Complete, June 11
        • All listed features have docs in draft – Thanks!
        • However: non-listed (minor) changes, please make sure you have docs!
      • Currently in Code Freeze
        • Only 1.11 patches, must be approved and critical-urgent
        • Down to 11 PRs
        • Still using the old Milestone Munger, so expect the same annoying behavior, sorry.
          • Particularly: can’t take back-branch PRs.
        • No New Features/Cleanups Now, please
          • All new features have draft documentation, however, there are lots of small patches not big enough to be a feature but we don’t know if we have documentation for those.
          • Please make sure your 1.11 small patches have documentation.
        • Code freeze ends June 19th.
        • Docs need to be complete by June 11th
      • CI Signal looking good
        • Recent GKE breakage fixed.
          • Only upgrade/downgrade tests failing, PR in progress.
        • Thanks everyone for responding to test fails quickly!
      • Scalability/Performance
        • Currently passing all performance tests
          • Thanks to everyone who worked on this early in the cycle!
        • New performance presubmit test
          • Kudos to SIG-scalability for getting this done.
    • 1.12
  • [ 0:00 ] KEP Highlight - Kustomize [ Jeff Regan ]
  • [ 0:00 ] SIG Updates
    • Multicluster - Quinton Hoole (confirmed)
      • Sig Intro
        • Focused on solving challenges with running multiple clusters and applications therein.
        • Working on Cluster Federation, Cluster Registry (a cluster registry for k8s, enabling cluster reuse) and multi-cluster ingress.
      • FederationStatus
        • Development has split between federation v1 and v2.
        • Federation v1 is a POC with no further development planned; users showed they needed something different.
        • Moving forward Federation v2 will focus on reusable components, federation specific apis and implementations of higher level apis and federation controllers.
        • v2 Alpha is planned for June.
        • Behind the effort is RedHat and Huawei.
      • Cluster Registry Status
        • Grew out of Federation v1. Allows reusable clusters and discovery. Google Cloud is supported for now, but more coming. Implementation is based on CRDs.
        • APIs/CRDs are in beta.
      • Link to slides
    • Network - Tim Hockin - (confirmed) (or dc
      • Sig Intro
      • In-progress Network Plumbing CRD Spec doc:
      • Network Service Mesh proposal slides
      • DevicePlugins (from Resource Management WG) have some intersection with networking, there have been many demos/PoCs but so far no consensus on how DPs should interact with existing CRI networking APIs
      • CoreDNS is now GA in 1.11
      • IPVS Proxy mode is now GA in 1.11 (anyone have a link?) but not default
      • Looking at breaking out ingress into a bunch of individual route resources instead of one monolithic list.
      • IPv6 discussions around how to support dual-stack are ongoing
      • We are working on test flakes, we don’t have a fix yet but HELP WANTED
    • VMware - Steve Wong (confirmed)
      • Vmware Cloud Provider
        • The target is 1.12.
        • Working through some process level things. This project is retained as a SubProject.
        • Creating a working group to handle testing
      • Link to deck, 4 slides, estimated 5 min:
  • [ 0:00 ] Announcements
    • Happy birthday, Kubernetes!
    • Shoutouts - _powered by slack #shoutouts _- if you see someone doing great work give them a shoutout in the slack channel so we mention those here!
      • @jrondeau for working on the weekend to get 1.11 doc builds working again!!” -mistyhacks
      • @andrewsykim for all the effort in getting SIG Cloud Provider off the ground!” -fabio
      • @neolit123 for really stepping up lately to help with user facing issues for the kubeadm 1.11 release. we really appreciate your contributions to the sig” -stealthybox
      • @cblecker who is everywhere keeping tabs on things and people on track.” -gsaenger
    • Help Wanted
      • [Stephen Augustus] 1.12 release team is forming, see #sig-release for more info. Roles & Responsibilities info here. Volunteers needed!
      • Help wanted on Sig Network Test Flakes reach out to #sig-network on slack
      • Anyone interested in learning Prow and helping with the transition from Munger to Prow would be helpful. See @jberkus

#8

June 14, 2018

  • Moderators: Zach Arnold [Ygrene Energy Fund/SIG Docs]
  • Note Taker: Jorge Castro [Heptio/SIG Contribex] and Solly Ross [Red Hat/SIG Autoscaling]
  • [ 0:00 ] Demo -- Building Images in Kubernetes [Priya Wadhwa, priyawadhwa@google.com] (confirmed)
    • https://github.com/GoogleContainerTools/kaniko
    • https://docs.google.com/presentation/d/1ZoiQ3cuQNJJciKq_JvqTty_tcoaRKNyYRzgCBbTumsE/edit?usp=sharing
    • Tool for building container images without needing to mount in Docker socket
      • Extracts base image to file system
      • Downloads build context tarball from storage (e.g. S3, more on the way)
      • Executes commands listed in Dockerfile
      • Snapshots in userspace after each step
      • Ignores mounted directories during snapshots
    • Can be run in gVisor as well
    • Questions:
      • do you have to use dockerfiles, or can you use other instruction sets
        • Only dockerfiles right now, but file issues if you want other things
      • Which dockerfile verbs are supported?
        • All of them
      • Can the bucket be S3 or DO Space?
        • Working on a PR right now to support other solutions
      • Feature parity with docker build?
        • Yes
    • Link to slides.
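The snapshot-and-diff flow described above can be pictured with a toy model (all names here are hypothetical; kaniko itself is a Go tool operating on the real filesystem): after each Dockerfile step, diff the current file tree against the previous snapshot and emit only the changed files as a layer, skipping paths under mounted directories.

```python
# Toy model of kaniko-style userspace snapshotting (illustrative only):
# the "filesystem" is a dict of path -> contents, and each build step
# produces a layer containing only the files it changed or added.

def take_snapshot(fs):
    """Record the current state of the file tree."""
    return dict(fs)

def diff_layer(before, after, ignored_prefixes=()):
    """Return the files changed since the last snapshot, ignoring mounts."""
    layer = {}
    for path, contents in after.items():
        if any(path.startswith(p) for p in ignored_prefixes):
            continue  # mounted directories are excluded from snapshots
        if before.get(path) != contents:
            layer[path] = contents
    return layer

def run_build(base_image, steps, ignored_prefixes=("/proc", "/sys")):
    """Execute each step against the extracted base image, snapshotting after each."""
    fs = dict(base_image)          # "extracts base image to file system"
    prev = take_snapshot(fs)
    layers = []
    for step in steps:
        step(fs)                   # "executes commands listed in Dockerfile"
        layers.append(diff_layer(prev, fs, ignored_prefixes))
        prev = take_snapshot(fs)
    return layers

base = {"/bin/sh": "shell"}
layers = run_build(base, [
    lambda fs: fs.update({"/app/main.py": "print('hi')"}),
    lambda fs: fs.update({"/proc/stat": "ignored", "/app/cfg": "x"}),
])
print(layers)  # first layer has only /app/main.py, second only /app/cfg
```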
  • [ 0:00 ] Release Updates
    • 1.11 [Josh Berkus - Release Lead]
      • Next Deadline: RC1 and branch on June 20th
      • Less than a week of code freeze left!
      • Docs are due and overdue; if you have a feature in 1.11, you should have already submitted final docs. Contact the docs team.
      • CI signal is good, a few tests being flaky, especially alpha-features.
      • Only 2 issues and 6 PRs open; currently more stable than we’ve ever been! Thanks so much to everyone for working to get stuff in the release early.
    • 1.12 [Tim Pepper - 1.12 Release Lead]
      • Tim Pepper as Lead
      • Almost finished building 1.12 team, contact @tpepper on Slack to join.
        • Needed:
          • PR triage (tentatively adding role separate from issue triage)
          • Branch manager
    • Patch Release Updates
      • 1.10.4?
  • [ 0:00 ] KEP o’ the Week
    • SIG Cloud Provider KEP: Reporting Conformance Test Results to Testgrid [Andrew Sy Kim]
      • Formerly WG-Cloud-Provider
        • Standards and common requirements for Kubernetes cloud provider integrations
        • Improving docs around cloud providers (how to work with different integration features)
        • Improving testing of cloud providers
      • KEP is basically “Why we want conformance tests reported by the cloud providers”
        • We didn’t have a formal way to do this without KEP
        • SIG Testing infra wasn’t available back then, so now we have testgrid and a way to report tests, etc. Gives providers instructions to follow to contribute results.
        • SIG Openstack has been pioneering this work
        • We want all providers to do this eventually, we’ll be reaching out to all the cloud providers to give them visibility that this KEP exists.
        • Still missing some details, will address those as more experience is developed in how to do better test
      • Q:
        • Coverage is listed as out of scope, but is a benefit, will coverage improvements be a follow-on KEP?
          • Eventually, but currently not necessarily an immediate priority
    • https://github.com/kubernetes/community/pull/2224
  • [ 0:00 ] Open KEPs [Kubernetes Enhancement Proposals]
  • [ 0:00 ] SIG Updates
    • SIG Windows [Patrick Lang]
      • Trello board - maps K8s features to Windows release needed
      • Releasing twice a year in the Windows Server Semi Annual Channel
        • Like 18.03, 17.09, etc.
        • We’ve had to make changes to Windows Server to make Kubernetes work well. For example, symbolic links in Windows v. Unix.
        • Board is tagged with the right version of Windows to use to get a particular Kubernetes feature working, but in general, use the latest release if possible
      • Kube 1.11
        • Lots of features with Windows, e.g. Kubelet stats
      • For the future
        • Currently using dockershim, trying to figure out how to support other CRI implementations
        • Working with other CNI plugins (Flannel, OVN, Calico)
        • Trying to get support for showing test results via Prow, Kubetest, TestGrid
        • Want to move to GA eventually, with 2019 Windows release (extended support cycle)
      • Questions:
        • Q: Unix-style symlinks in Windows?
          • Have something similar to unix-style symlinks and hardlinks, needed to make some symlink changes to make sure you can’t traverse in an insecure way. Code either in Kubelet or Go winio library
          • Hardlinks not recommended, stick to the symlinks.
        • Q: is windows currently dockershim+embedded EE
          • Currently uses Docker EE Basic for Windows (as published by Docker), used for testing
          • Potentially switching to crio eventually, or containerd
    • SIG Apps [Kenneth Owens]
      • Helm
        • Helm moved to a separate CNCF project (see last meeting)
        • Helm 2 stability release
        • Helm 3 proposal merged, work continuing
      • Application Resource
        • Seeks to describe application as running
        • Controller soon
        • WIP
      • AppDef WG
        • Winding down
        • Proposal for common labels and annotations will be merged in partial form
      • Ksonnet
        • New release
      • Decentralized charts repo coming
      • Skaffold: kustomize support
      • Workloads API
      • Questions:
        • Why didn’t charts go with Helm to a separate CNCF project
          • Current Status: Charts are listed as a subproject of SIG Apps.
          • Chart maintainers aren’t necessarily Helm maintainers
          • Trying to figure out the right model for maintainership
          • Is the charts tooling part of the charts subproject, or Helm?
            • Unsure, currently part of the charts subproject, but points to the kubernetes/helm repo
    • SIG Docs [Jennifer Rondeau]
      • Zach is out sick today, full update August-ish; feel better, Zach!
      • 1-minute update from Jennifer
        • We’re making great progress on fixes for the Hugo migration; we’ve plowed through a bunch, thanks to all the new contributors who have been diving in.
        • Thanks to all of you who have submitted 1.11 docs
        • If you’re behind on 1.11 docs, please submit them ASAP!
  • [ 0:00 ] Announcements
    • K8s Office Hours Next Week, Wednesday 6/20
      • Volunteers always sought, ping @jorge or @mrbobbytables on slack
      • Users who participate will be entered in a raffle to win a k8s shirt!
    • SIG Leads, if you haven’t uploaded your meeting videos to the youtube channel recently, please try to catch up. Ping @jorge if you need help.
    • SIG Architecture has a new meeting time at 11PST every other Thursday after this meeting. Also, there is a new Zoom link you can get from joining the mailing list. Check out the SIG Arch readme for more information.
    • Shoutouts this week (Check in #shoutouts on slack)
      • (Josh Berkus) @liggitt and @dims for pitching in and doing a ton of work on PRs for 1.11, across all of Kubernetes.
      • (Jennifer Rondeau) @misty for stepping in to help with ALL things docs no matter how crazy they get or how much else she has on her plate :tada:
      • (Aish Sundar) @justaugustus for giving us a huge head start and herding all the cats to get a stellar 1.12 release team already in place. Thanks a lot!
      • (Misty Stanley-Jones + Aish Sundar) @jberkus for herding 1.11 release cats! :cat:
      • To echo what @misty said, HUGE shoutout to @jberkus for being an awesome patient leader throughout 1.11 cycle. It was such a learning experience seeing him work through issues calmly, all the while encouraging the RT team to lead in our own little way.
      • Jason DeTiberus
        • @neolit123 (Lubomir Ivanov) for all of the docs contributions for kubeadm v1.11
        • @jrondeau (Jennifer Rondeau) for the relentless work on improving our docs and helping bring some more structure to the docs process for sig-cluster-lifecycle

#9

(Sorry it took me so long to post this)

June 21, 2018

  • Moderators: Arun Gupta [Amazon / SIG-AWS]
  • Note Taker: Chris Short and Jorge Castro [SIG Contrib Ex]
  • [ 0:00 ] Demo - Agones - Dedicated Game Server Hosting and Scaling for Multiplayer Games on Kubernetes [Mark Mandel, markmandel@google.com] (confirmed)
  • [ 0:00 ] Release Updates
    • 1.11 [Josh Berkus - Release Lead]
      • Code Thaw on Tuesday, held changes from Code Freeze have now cleared the queue.
        • All 1.11 changes now need to be cherrypicked.
      • RC1 was released yesterday, please test!
      • Status is currently uncertain. Probability of a release delay is 50%; will make the call at the burndown meeting at 10am tomorrow.
      • CI Signal Issues:
      • Release notes collector is still broken; please check the release notes to make sure all of your changes are represented! An estimated 20-30 release notes are missing. Contact (@nickchase / nchase@mirantis.com) if you find something missing.
    • 1.12 [Tim Pepper - Release Lead]
    • Patch Release Updates
      • 1.8.14
      • 1.9.9 release schedule
      • 1.10.5
  • [ 0:00 ] KEP o’ the Week (Yisui)
  • [ 0:00 ] Open KEPs [Kubernetes Enhancement Proposals]
  • [ 0:00 ] SIG Updates
  • [ 0:00 ] Announcements
    • Please pin your SIG meeting info and agenda doc in your SIG slack channel. Now that the main calendar is not on https://kubernetes.io/community/ meeting info is less discoverable without these links.
      • SIG Chairs/TLs - please check your email (sent to k-sig-leads@). New Zoom settings and moderation controls. Let’s keep our meetings safe and transparent.
    • All SIGs - please take time to look at the “help wanted” and “good first issue” labels, available across all Kubernetes repositories. They’re meant to highlight opportunities for new contributors. Please ensure that they’re being used appropriately (the “good-first-issue” especially has fairly specific requirements for the issue author): https://github.com/kubernetes/community/blob/master/contributors/devel/help-wanted.md
    • Shoutouts this week (Check in #shoutouts on slack)
      • Jason DeTiberus: @neolit123 (Lubomir Ivanov) for all of the docs contributions for kubeadm v1.11
      • Jason DeTiberus: @jrondeau (Jennifer Rondeau) for the relentless work on improving our docs and helping bring some more structure to the docs process for sig-cluster-lifecycle
      • @neolit123 (Lubomir Ivanov): @jdetiber (Jason DeTiberus), @liz (Liz Frost), @cha (Chuck Ha), @timothysc (Timothy St. Clair) and @luxas (Lucas Kladstrom) for the relentless grind through the kubeadm 1.11 backlog, potentially making it the best release thus far.
      • @austbot (Austin Adams): To @lukaszgryglicki (Lukasz Gryglicki) for DevStats, which is Awesome!!
      • Stealthybox (Leigh Capili): shoutout to @oikiki (Kirsten) for being very welcoming to new contributors
      • Nikhita: shoutout to the whole test-infra community for actively using emojis in issues, PRs and slack. It’s pretty subtle but it goes a LONG way in making the project and community more friendly and welcoming to new contributors!! cc @fejta (Erick Fejta) @bentheelder (Benjamin Elder) @cblecker (Christoph Blecker) @stevekuznetsov (Steve Kuznetsov)
      • @misty (Misty Stanely-Jones): @Jesse (Jesse Stuart) for fixing CSS relating to tab sets in docs! :raised_hands:
      • @fejta (Erick Fejta): @krzyzacy (Sen Lu) and @bentheelder (Benjamin Elder) for being ever diligent about reviewing PRs in a timely manner
      • JoshBerkus: to @kjackal (Konstantinos) for actually beta-testing 1.11 and spotting a bug before RC1
      • @oikiki (Kirsten): shoutout to @gsaenger for always generously helping new folks get started contributing to k8s! (and also for completing her first major technical PR!) WOOP WOOP!
      • @gsaenger (Guinevere Senger) Um… no, really, I couldn’t have done it without so much help from @cblecker (Christoph Blecker) and @cjwagner (Cole Wagner) and @fejta (Erick Fejta) and @bentheelder (Benjamin Elder). Everyone was super nice and patient and helped me learn. :heart: So, shoutouts to them. I’m so grateful.

#10

June 28, 2018 - 1.11 Release Retrospective

  • Moderators: Jaice Singer DuMars [SIG PM/Release]
  • Note Taker: First Last [Company/SIG]
  • [ 0:00 ] Demo - containerd - Phil Estes - estesp@gmail.com
  • [ 0:00 ] Announcements
    • SIG IBMCloud, Autoscaling, and GCP will be updating in August
    • Github Groups [Jorge Castro]
    • Shoutouts this week (Check in #shoutouts on slack)
      • jberkus: To Jordan Liggitt for diagnosing & fixing the controller performance issue that has haunted us since last August, and to Julia Evans for reporting the original issue.
        • Maulion: And another to @liggitt for always helping anyone with an auth question in all the channels with kindness
      • jdumars: @paris - thank you for all of your work helping to keep our community safe and inclusive! I know that you’ve spent countless hours refining our Zoom usage, documenting, testing, and generally being super proactive on this.
      • Nikhita: shoutout to @cblecker for excellent meme skills!
      • Mrbobbytales: Just want to give a big shout out to the whole release team. Thanks for all your effort in getting 1.11 out the door :slightly_smiling_face: Seriously, great job!
      • Misty: @chenopis for last-minute 1.11 docs-related heroics!
      • Misty: @nickchase for amazing release notes!
      • Misty: @jberkus for being a very patient and available release lead as I was on the release team for the first time
      • Jberkus: @liggitt for last-minute Cherrypick shepherding, and @nickchase for marathon release notes slog
      • Jberkus: and @misty @AishSundar @tpepper @calebamiles @idvoretskyi @bentheelder @cjwagner @zparnold @justaugustus @Kaitlyn for best release team yet
      • Tpepper: shoutout to @jberkus for his leadership of our team!
  • [ 0:00 ] Release Retrospective for 1.11
    • Retro doc
      • SIG Release will do deep dive on retrospective details, but today this meeting focused on the high level cross-project topics like:
        • Release timeline evolution and deadlines
        • How to better track major features and changes that are in need of docs, test cases, release noting
        • How do we get user/distributor/vendor testing of betas and rc’s. Consumption is harder when docs and kubeadm upgrade path aren’t there yet.
    • Retro Part II (detail retro): Tuesday, July 3rd, 10am, https://zoom.us/j/405366973

#11

July 12, 2018

  • Moderators: Paris Pittman [ContribEx, Google]
  • Note Taker: Josh Berkus [Release]
  • Demo: No demo today - see you next week!
  • [ 0:01 ] Release Updates
  • [ 0:07 ] KEP o’ the Week (Janet Kuo)
    • https://github.com/kubernetes/community/pull/2287
    • For cleanup of frequently created & dropped objects
      • We don’t have a good way to garbage collect items which no longer have an owner.
      • Often people update-and-replace instead of modifying
    • Proposal for new GC for these objects.
      • Will be discussed in next API-machinery meeting next week (Wednesday) if you care about the KEP
      • Will give detailed presentation there.
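The garbage-collection problem sketched above (objects that no longer have a live owner) can be illustrated with a toy sweep over owner references. This is not the KEP's actual design, just a minimal model of the idea; all names are made up.

```python
# Minimal sketch of owner-based garbage collection (illustrative only).
# Each object may list owner names; a sweep deletes any object all of
# whose owners are gone, cascading until nothing else is collectable.

def sweep(objects):
    """objects: dict of name -> set of owner names (empty set = root object)."""
    live = dict(objects)
    changed = True
    while changed:
        changed = False
        for name, owners in list(live.items()):
            # An owned object survives only while at least one owner is live.
            if owners and not (owners & live.keys()):
                del live[name]
                changed = True
    return set(live)

objs = {
    "deployment": set(),
    "replicaset": {"deployment"},
    "pod": {"replicaset"},
    "orphan-pod": {"deleted-rs"},   # its owner no longer exists
}
print(sweep(objs))  # orphan-pod is collected; the chain under deployment survives
```

The loop runs to a fixed point so that deleting one object can cascade to the objects it owned, matching the update-and-replace pattern mentioned above where whole chains of objects are dropped at once.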
  • [ 0:00 ] SIG Updates
    • SIG API Machinery David Eads
      • Link to slides
      • Delivered in 1.11:
        • Improved dynamic client, easier to use for CRD developers. Everyone should switch to this because the old client will eventually go away.
        • “Null CRD conversion”: you can promote a CRD from one version to another, even though there’s no API changes. No data transformation, no changes to schema. So very limited for now.
        • Work on feature-branch for Server-side apply.
        • Prep work for making controller-manager start from a config
      • 1.12 work
        • Server-side apply dry run being merged into Master
        • Path to more advanced CRD conversion, field defaults, advanced versioning (design phase).
        • Controller-manager moving to running from config
        • Generic initializers as alpha. May be superseded by admission webhooks. If you need something in Generic Init that isn’t satisfied by webhooks, speak up in their meeting to save it.
    • SIG Testing [Steve Kuznetsov] (confirmed)
      • Link to slides
      • Implemented caches for test runs, which is a big performance boost
        • Reduced GH API hits by 1500/hr
        • Bazel build cache lowered test times
      • UX improvements for the k8s bot
        • Can now LGTM and approve in a review comment
        • Robots now validate OWNERS files
      • Easier administration
        • Using Peribolos for GH API management
      • Automated branch protection now on all repos
        • Only bots can merge to branches
      • Simpler test Job management: now just needs a container with an entrypoint
      • Merge workflows using Tide are implemented
        • Plan to rollout for 1.12
        • Will include PR status page, yay! Makes it easier to see why your PR is stuck.
      • Testgrid dashboard for conformance tests
        • Including openstack
      • Prow is now being adopted by other orgs
        • Google, Red Hat, Istio, JetStack …
      • Future work:
        • Better onboarding docs
        • Fix tech debt that makes getting started hard
        • Better log viewer, esp now that we have scalability presubmits
        • Clean up config repo
        • Framework for writing bot interactions
        • API for cluster provisioning
      • Questions:
        • What about archival stats on PR status dashboard?
          • Will discuss at sig-testing meeting
        • What about doc on how to write a test?
          • Also really critical, needs help
    • SIG ContribEx Paris
      • Link to slides
      • Contributor Guide
        • Umbrella issue is now closed
        • non-code guide in development - meets on Weds
      • Developer Guide
        • Tim Pepper now taking point on this
        • Reach out to him @tpepper if you can help
      • Contributor.kubernetes.io web site is under early design
        • Different from general community, this one will be just for contributors
        • More modern calendar
        • Prototype up, check it out (link from slides)
        • goal to launch in 90 days
      • Community Management
        • All talking all the time, it’s time consuming
        • Contributor summits, first one (run by contribex) in Copenhagen
          • Rolling out new contributor workshop + playground
          • Will have smaller summit in Shanghai (contact @jberkus)
          • Started planning for Seattle, will have an extra ½ day.
            • Registration will be going through kubecon site
          • Manage alacarte events at other people’s conferences
        • Communication pipelines & moderation
          • Clean up spam
          • Reduce number of pipelines
          • Some draft moderation guides
          • Also run the Community Meeting
          • Zoom has a bad-actor problem, but we’re not locking down Zoom permissions; we’re trying not to take away public meetings, and looking at new security measures together with Zoom execs.
          • Moderating k-dev and k-users MLs now
          • If you need to reach moderators quickly, use slack-admins slack channel
          • Slack: 40K users, a lot less moderation required
          • Discuss.kubernetes.io
            • Been successful for tips & tricks and user advice
            • Will be “official” RSN
      • Mentoring
        • Meet Our Contributors is doing well
          • Yesterday’s special edition had Steering Committee members
        • Outreachy, only participating twice a year
          • September deadline for winter intern, planning on 1
          • Participating companies can pay for more
        • Group mentoring: the 1:1 hour
          • If your SIG needs to move people up to approver, please contact @paris
        • GSoC, being done by API-machinery
      • DevStats
      • Github Management proposed subproject
  • Announcements
    • Shoutouts - enter yours in #shoutouts slack channel!
      • (jberkus) - @jdumars for inventing, then running, really effective retros for releases.
      • (paris) - shouts to @liggitt @stevekuznetsov and @munnerz for a jam packed, informative, #meet-our-contributors session yesterday! (watch the recording; good info!)
      • (paris) another shout to @arschles @janetkuo for being mentors on the second great episode of #meet-our-contributors yesterday. (also to bdburns, pwittroc, and philips but I will spare their notifications for doing our first AMA with steering committee members)
      • James - shout out to @stevekuznetsov for immediately jumping to spend time debugging and fixing issues with our Prow deployment and tide (not) merging our PRs! looking forward to finally rolling the fix out :slightly_smiling_face: it has caused us issues for 1-2 months now :smile:
    • Office Hours is next Wednesday - volunteers to help answer user questions are always appreciated, ping @jeefy or @mrbobbytables if you want to help, otherwise help us spread the word!
    • Next week’s meeting won’t be streamed, so expect a slight delay on publishing it to YouTube

#12

(Sorry for the late posts, I’ve been travelling!)

July 19, 2018

  • Moderators: Tim Pepper [ContribEx, Release, VMware]
  • Note Taker: Solly Ross
  • Demo: Microk8s - Marco Ceppi (confirmed)
    • https://microk8s.io / #microk8s on Slack / https://github.com/juju-solutions/microk8s
      • Lightweight kubernetes cluster install
      • Installed, uninstalled with snaps
        • works across different Linux distros; other OSes coming eventually
      • Still a bit in beta
        • Different releases installed with different channels (beta channel is 1.11.0, edge is 1.11.1)
    • Commands installed namespaced by microk8s.
      • kubectl is microk8s.kubectl
      • Can enable different addons like dns and dashboard with microk8s.enable
        • Cert generation, ingress, storage also available
      • kubeconfig is scoped just to microk8s.kubectl, doesn’t interfere with normal kubectl
      • microk8s.reset resets to blank state
    • Kubernetes run as systemd services
    • Service Cluster IP addresses available as normal on host system
  • [ 0:09 ] Release Updates
  • [ 0:13 ] KEP o’ the Week - none this week
    • If you want to get a broader audience for an up-and-coming KEP, you can get it discussed here!
  • [ 0:14 ] SIG Updates
    • SIG Big Data (Anirudh Ramanathan, Yinan Li, confirmed)
      • Deal with big data workloads on Kube
        • Specifically: Spark, Spark Operator, Apache Airflow, HDFS
      • Code freeze for Spark coming up, so lots of work there
        • Python support, client mode support for things like Jupyter notebooks talking to Spark on Kubernetes
        • Stability fixes - better controller logic
          • Making sure to be level triggered and not edge triggered
        • Removing some hacks with init containers
      • Spark (link)
        • Working towards 2.4 release.
        • 2.4 code freeze and branch cut on 8/1
        • Major features
          • PySpark support
          • Client mode - support for notebooks
          • Lots of testing, merged integration tests
          • Removal of things like init-containers (getting us closer to GA)
          • Stability fixes - controller logic
          • Improvements on client side
        • Future work
          • Customize pod templates
          • Dynamic allocation/elasticity
          • HA driver - might need help from sig-apps to make it work
          • SparkR and Kerberized HDFS support
      • Spark Operator new features (link)
        • Mutating admission webhook to replace initializer used before
        • Python support
      • HDFS support (link)
        • Assessing demand, making progress
        • Chart exists in link above
      • Airflow (link)
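The "level triggered and not edge triggered" fix noted in the SIG Big Data update is the standard controller pattern: rather than reacting only to individual change events (which can be missed), a level-triggered controller repeatedly compares desired state against observed state. A toy reconciler, with hypothetical names:

```python
# Toy level-triggered reconciler (illustrative only): each pass compares
# desired state to actual state and corrects the difference, so a missed
# event is repaired on the next pass instead of being lost forever.

def reconcile(desired, actual):
    """Return the actions needed to drive `actual` toward `desired`."""
    actions = []
    for name in desired - actual:
        actions.append(("create", name))
    for name in actual - desired:
        actions.append(("delete", name))
    return sorted(actions)

desired = {"driver", "executor-1", "executor-2"}
actual = {"driver", "executor-3"}          # suppose a delete event was missed
print(reconcile(desired, actual))
# [('create', 'executor-1'), ('create', 'executor-2'), ('delete', 'executor-3')]
```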
    • SIG Multicluster (Quinton Hoole, confirmed)
      • Slides
      • Goals
        • Solving common challenges related to managing multiple clusters
        • Applications that run across multiple clusters
      • Subprojects
        • Cluster Federation (v2) [https://github.com/kubernetes-sigs/federation-v2 ]
          • Work across different clusters, same or different cloud provider
          • V1 was a POC, won’t be developed further
          • V2 focuses on decoupled, reusable components
          • V2 has feature parity with v1, is alpha
          • Highlights
            • CRDs for control planes, installed in existing cluster
            • Generic impl for all kube types (including CRDs) for propagating any types into all clusters, with basic per-cluster customization
            • Several higher-level controllers, for example:
              • migration of RS and deployments between clusters
              • Managing federated DNS
              • Management of Jobs
              • Management of HPA to manage global limits
            • Uses cluster registry
          • Next steps
            • Federated status
            • Federated read access (e.g. view all pods across all clusters)
            • affinity/anti-affinity for bunches of objects or namespaces to a particular cluster
            • RBAC enforcement
          • Please comment if you have suggestions for the API before it moves to beta
            • Contributions to code also welcome
        • Cluster Registry [https://github.com/kubernetes/cluster-registry ]
          • Fairly stable and complete
        • Multicluster Ingress [https://github.com/GoogleCloudPlatform/k8s-multicluster-ingress ]
          • Look at repo for more information
        • Questions
          • Cluster registry vs cluster API?
            • Cluster API is to create clusters, cluster registry is for using already-existing clusters
            • Maybe could disambiguate the terms better, manage overlap
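The "generic propagation with basic per-cluster customization" highlighted above can be pictured as a template stamped into each target cluster with optional per-cluster overrides merged on top. The field names below are hypothetical (Federation v2's real mechanism uses Template/Placement/Override CRDs):

```python
# Toy model of federated propagation (illustrative only): one template is
# propagated to every placed cluster, with per-cluster overrides applied.

def propagate(template, placement, overrides=None):
    """Return a dict of cluster -> resource, applying per-cluster overrides."""
    overrides = overrides or {}
    result = {}
    for cluster in placement:
        resource = dict(template)              # start from the shared template
        resource.update(overrides.get(cluster, {}))  # per-cluster customization
        result[cluster] = resource
    return result

template = {"kind": "Deployment", "replicas": 3}
out = propagate(template,
                placement=["us-east", "eu-west"],
                overrides={"eu-west": {"replicas": 5}})
print(out["us-east"]["replicas"], out["eu-west"]["replicas"])  # 3 5
```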
    • SIG Scheduling (Bobby Salamat, confirmed)
      • 1.11 Update
        • Pod Priority and Preemption to beta, available by default
          • Improved the feature; restricted it a bit to avoid allowing untrusted users to create high-priority pods, and to only allow super-high-priority pods in the kube-system namespace
        • DaemonSet scheduling in default scheduler (alpha)
      • 1.12 Update
        • Focus on performance
          • Improved equivalence cache (a pod with a similar spec probably fits on the same node unless the node has changed)
            • 3x performance improvement now
            • Helps with scheduling large replica sets, etc
        • Working on proposal for gang scheduling [link here]
        • Proposal for scheduling framework, direction might change a bit [link here]
        • Moving to beta
          • Taint by condition, taint-based eviction
          • Equivalence cache
          • DaemonSet scheduling in default scheduler
        • Want to graduate descheduler out of incubator
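The equivalence-cache idea above (equivalent pod specs get the same predicate answer on an unchanged node) can be sketched as a memo keyed by the pod's equivalence class plus a node version. All names here are hypothetical, not the scheduler's real API:

```python
# Toy equivalence cache for scheduling predicates (illustrative only):
# the expensive predicate runs once per (pod equivalence class, node
# version); identical pods reuse the cached answer until the node changes.

calls = 0

def fits(pod_spec, node):
    """An 'expensive' predicate: does the pod fit on the node?"""
    global calls
    calls += 1
    return pod_spec["cpu"] <= node["free_cpu"]

cache = {}

def cached_fits(pod_spec, node):
    key = (frozenset(pod_spec.items()), node["name"], node["generation"])
    if key not in cache:
        cache[key] = fits(pod_spec, node)
    return cache[key]

node = {"name": "n1", "free_cpu": 2, "generation": 1}
spec = {"cpu": 1}
assert cached_fits(spec, node) and cached_fits(dict(spec), node)
print(calls)  # 1: the second, equivalent pod hit the cache

node["generation"] = 2        # node changed: old cache entries no longer match
cached_fits(spec, node)
print(calls)  # 2
```

This is why the cache helps most with large replica sets, as the update notes: every pod in the set shares one equivalence class.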
  • Announcements
    • Shoutouts (mention people on #shoutouts on Slack)
      • Jeremy Rickard: Shout out to @mbauer for really pushing to make service catalog use Prow and to improve our PR reviewing and testing process
      • Christoph Blecker: Two shoutouts I wanted to get out this week:
        • First, shoutout to @matthyx who has been very active in k/test-infra recently and has been making a number of different contributions from fixing bugs, to adding new features to our automation. He’s been eager to help and has stuck with some of the more complex changes that require many comments and interactions (sig-bikeshed ftw :bikeshed:)
        • Second, shoutout to @nikhita! I could easily stop right there, as her many contributions to the project really speak for themselves. I want to call out though the little chopping wood and carrying water tasks she does that may not be as obvious… like ensuring that stale issues are reviewed and either closed or marked as still relevant, or welcoming new contributors with an emoji or two. It’s these kinds of things that exemplify what the Kubernetes community is all about.
      • Benjamin Elder: shoutout to @Quang Huynh for continuing to send k/test-infra fixes and push through to flesh out the PR status page (especially https://github.com/kubernetes/test-infra/pull/8612) long after his internship! :simple_smile: Hopefully we can start using the Prow PR status page more widely now thanks to all the hard work there :tada:
      • Aaron Crickenberger: shoutout to @bentheelder for helping push kubernetes v1.11.1 images out (fixing the symptom), and getting the appropriate folks within google involved to ensure there is now a team owning a better solution to the problem (fixing the problem); this is continued progress toward decoupling google.com as a requirement for releases
    • Kubernetes wins most impact award at OSCON!!! (-paris; Tim to read)

#13

July 26, 2018

  • Moderators: Chris Short [ContribEx]
  • Note Taker: Solly Ross, Josh Berkus
  • Demo: EKS - Bryce Carman - [Amazon EKS] (confirmed)
    • Managed Kubernetes on https://aws.amazon.com/eks/
    • Provisioning
      • Control plane is hosted/managed by EKS, worker nodes are under control of users
        • No outside communication with the control plane besides via the load balancer in front of the API server
        • Can use security groups to limit control-plane-worker-node interaction
      • Can set role used to create various AWS resources (like loadbalancers) so that you don’t have to give EKS full permissions in your account
      • Can just use VPC and subnets already present in account
    • Networking
      • CNI plugin
      • Uses IP addresses from the VPC that the nodes are already part of (integrated with AWS networking)
      • No overlay network
      • Can integrate with Calico network policy as well
      • Designed to isolate control planes from nodes as well
    • Interaction
      • Using Heptio authenticator and 1.10 for external authentication for kubectl in order to authenticate against AWS IAM
      • Just uses the same creds as the AWS CLI – no separate auth to manage
    • Demo’d using Helm to create a wordpress site
    • Questions
      • Can users scale control plane?
        • No
  • Release Updates
    • 1.12 - Tim Pepper - Confirmed
      • Feature Freeze Tuesday July 31 - next week
        • see email on k-dev for more info
        • After Tuesday features not captured by the release team must go through the exception process.
        • SIGs should be thinking about their release themes (major work focuses) for the 1.12 release, ensuring those are represented in feature issues and have plans for documentation and test coverage.
        • Not code freeze (that comes later)
    • 1.11.x - Anirudh Ramanathan - Confirmed
      • Nothing to report
  • **KEP o’ the Week **- KEP 17 - Jordan Liggitt - Confirmed
    • KEP 17 - Moving ComponentConfig API types to staging repos
    • Taking config for core kube components from loose flags to structured config
      • Kubelet currently has a config file format that’s in beta
      • Makes it easier to look at exactly how a particular component is configured, warn about deprecated config, missing config, etc
    • Want to put configuration types in separate repo
      • Tools like kubeadm should be able to import config to manipulate and generate, without pulling in all of Kubernetes
    • Want to make sure common configuration aspects can be shared, referenced, and reused
      • client connection info
      • Leader election
      • etc
    • Look over if you are involved in developing the Kube components, or have tooling that sets up the various components
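Structured config of the kind KEP 17 describes might look like the sketch below. The field names are hypothetical (not the real ComponentConfig API); the point is that flags become typed fields, unknown keys are rejected, and deprecated keys produce a warning instead of silently working forever.

```python
# Sketch of structured component configuration (illustrative only):
# loose flags become typed fields, and loading can flag deprecated or
# unknown keys, as the notes above describe for the kubelet config file.
from dataclasses import dataclass, fields

DEPRECATED = {"pod-manifest-path": "staticPodPath"}  # old key -> new key

@dataclass
class KubeletishConfig:
    address: str = "0.0.0.0"
    port: int = 10250
    staticPodPath: str = ""

def load_config(data):
    """Build a config object from a dict, reporting deprecated/unknown keys."""
    warnings = []
    known = {f.name for f in fields(KubeletishConfig)}
    clean = {}
    for key, value in data.items():
        if key in DEPRECATED:
            warnings.append(f"{key} is deprecated, use {DEPRECATED[key]}")
            key = DEPRECATED[key]
        if key not in known:
            raise ValueError(f"unknown config key: {key}")
        clean[key] = value
    return KubeletishConfig(**clean), warnings

cfg, warns = load_config({"port": 10255, "pod-manifest-path": "/etc/pods"})
print(cfg.port, cfg.staticPodPath, warns)
```

Because the types live in their own package, a tool like kubeadm could import just this module to generate or manipulate config, which is the KEP's motivation for moving the types to staging repos.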
  • SIG Updates
    • Auth - Jordan Liggitt - Confirmed
      • https://docs.google.com/presentation/d/1MAIypro-bcLC7wNEnIazYqmCL6ILBN69uUWIBw7QBIY/edit?usp=sharing
      • Usability
        • Multiple Authorizers (e.g. GKE)
          • Now honor superuser permissions from other authorizers, so if you’re a superuser, you can create policy without first explicitly granting yourself those permissions
          • Now show the error message from all authorizers, instead of just the error from the first authorizer
          • Show a much cleaner, more succinct and readable error message for failures due to escalations
      • Features
        • Kubelet Certs
          • Better support for delegating to an external credentials providers (e.g. AWS IAM)
          • Requesting and rotating certs with the CSR API (still requires external approval process for the CSRs)
        • Scoped service account tokens
          • Moving towards beta for time-limited and audience-scoped tokens
        • Audit improvements
          • Heading towards v1 audit event API
          • Work ongoing on dynamic audit webhook registration
    • Instrumentation - Frederic Brancyzk - Confirmed
      • Heapster deprecation (https://github.com/kubernetes/heapster/blob/master/docs/deprecation.md)
        • Setting up removal in 1.12, completely removed as of 1.13
      • Node metrics work still ongoing, in collaboration with SIG Node
        • Improve monitoring story around node monitoring
        • Chime in if you maintain a device plugin or node component
      • Metrics-server rework (https://github.com/kubernetes-incubator/metrics-server/pull/65)
        • Call for testing on non-production clusters; should make things more stable, has several fixes to communication with nodes
      • k8s-prometheus-adapter advance configuration merged
        • Allows more precisely controlling how metrics in the custom metrics API map to Prometheus queries, and how metrics show up in the custom metrics API
      • A number of e2e tests involving third-party services have been put behind a feature flag in the test infrastructure
        • Should reduce flaky tests from sig-instrumentation, especially around components that we can’t control
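For context on the k8s-prometheus-adapter item above: the advanced configuration is a set of discovery/naming/query rules mapping Prometheus series to custom metrics API names. A sketch of one rule, assuming the adapter's rule-based config format (metric names are illustrative):

```yaml
# One adapter rule: discover counters, rename them, and expose them
# as per-second rates in the custom metrics API.
rules:
- seriesQuery: 'http_requests_total{namespace!="",pod!=""}'
  resources:
    overrides:
      namespace: {resource: "namespace"}
      pod: {resource: "pod"}
  name:
    matches: "^(.*)_total$"
    as: "${1}_per_second"
  metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'
```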
  • Announcements
    • **Shoutouts** (mention people in #shoutouts on Slack)
      • Manjunath Kumatagi for patiently working through issues that will help us run conformance tests on other architectures (say arm64). It’s taken a really long time to get this far and the end is in sight. Thanks for your hard work across multiple repos and sigs.
      • Jordan Liggitt for always knowing the answer to … everything … and being so available to answer questions. You’re an incredible resource and I’m always grateful to lean on you when I need to!
      • Quinton Hoole for including a “how you can contribute” slide in the SIG Multicluster update in today’s community! Way to model SIG leadership in growing the k8s team by facilitating new/increased participation!

#14

August 2, 2018 -

  • Moderators: Solly Ross
  • Note Taker: Bob Killen [Company/SIG]
  • [ 0:00 ] **Demo** -- Kritis Overview [aprindle@google.com]
    • Slides
    • Built on top of Grafeas
    • Test assertions before deploying containers
    • Can validate / do vulnerability scanning
    • Cron schedule constantly monitors to ensure images never fall out of sync
    • CRD based configuration
      • Supports whitelisting images
      • Can define things such as maximum CVE severity
      • Can deny images that are using tags such as ‘latest’
    • Helm Chart available for deployment
    • When attempting to deploy an image with a vulnerability, user will be given a denied error
    • Blog post incoming on August 13th
    • Initial v0.1.0 release coming soon
    • Custom attestation policies in the future
    • Questions:
      • Is it like Portieris? Unknown, will look into it / follow up
      • Does it support Notary? Auth piece has similar goal
        • Notary support should be possible, both designed for build provenance
    • [slides]
    • https://github.com/grafeas/kritis
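As a sketch of the CRD-based configuration described above — whitelisted images plus a maximum CVE severity — here is roughly what a Kritis ImageSecurityPolicy could look like (field names based on the kritis repo at the time; treat as illustrative):

```yaml
apiVersion: kritis.grafeas.io/v1beta1
kind: ImageSecurityPolicy
metadata:
  name: example-policy
  namespace: default
spec:
  # Images on the whitelist bypass vulnerability checks entirely
  imageWhitelist:
  - gcr.io/example-project/trusted-image   # hypothetical image
  packageVulnerabilityRequirements:
    maximumSeverity: MEDIUM      # deny deploys with anything worse than this
    whitelistCVEs:               # individually tolerated CVEs (illustrative)
    - providers/goog-vulnz/notes/CVE-2017-1000082
```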
  • [ 0:00 ] **Release Updates**
    • Current Release Development Cycle [Tim Pepper ~ 1.12 Release lead]
      • v1.12 Feature Freeze: was Tuesday July 31
        • This was feature definition (not implementation) deadline.
        • ~50 features captured
        • Implementation (code, test cases, docs drafts) deadline is “Code Freeze” on Sept. 4
      • v1.12.0-alpha.1 release cut yesterday, Aug. 1. It is a major milestone: along with 1.11.1, we are transitioning from Google employees running the build/release mechanism to community members. The transition has had a few issues, but is rapidly improving. Expecting the first beta to be smooth.
      • More details and links at http://bit.ly/k8s112-release-info
    • Patch Release Updates
      • 1.11.2 scheduled for Aug 11th (cherry picks should be up by Friday, August 3rd)
  • [ 0:00 ] Open KEPs
    • Dynamic Audit Configuration https://github.com/kubernetes/community/blob/master/keps/sig-auth/0014-dynamic-audit-configuration.md
      • Advanced auditing is still difficult to configure
      • Working on making it similar to dynamic admission control
      • Support both static runtime configuration via flag and new dynamic method
      • Moving into alpha in 1.12, beta in 1.13
      • Will be Feature Gated
      • Can be used to compute API coverage on a running cluster. Previously it was not possible to alter the audit config of a running cluster. Dynamic audit config allows you to turn on API coverage calculator and compute the API usage for a period of time.
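The dynamic method in this KEP registers audit webhooks as API objects, analogous to dynamic admission webhook registration. A hedged sketch of what such a registration could look like (API group, kind, and fields per the KEP's alpha proposal; details may differ):

```yaml
apiVersion: auditregistration.k8s.io/v1alpha1
kind: AuditSink
metadata:
  name: example-sink
spec:
  policy:
    level: Metadata        # per-sink policy, unlike the single static flag config
    stages:
    - ResponseComplete
  webhook:
    throttle:              # protect the backend from event floods
      qps: 10
      burst: 15
    clientConfig:
      url: https://audit.example.com/events   # hypothetical webhook endpoint
```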
  • [ 0:00 ] SIG Updates
    • SIG UI [Jeffrey Sica] (confirmed)
      https://docs.google.com/presentation/d/1f6dI2mP_5SZeuJd9i3e6y6jx44i6ouFZGvFAfYT1BsA/edit?usp=sharing
      • New release coming soon (2-3 weeks)
        • Many bug fixes
        • Will use 1.8.10 client-go
      • Angular Migration in progress
        • Migrating from version 1 to version 6
        • Requires a complete rewrite
      • Upcoming features
        • oauth2 integration
        • multi-arch manifests
        • security enhancements
          • inform users when running as admin or with other insecure configuration
        • Will support multiple themes
        • Customized CSS (branding etc)
      • Looking for more contributors
        • angular js migration
        • bug triage
        • feature discovery
    • SIG AWS [Nishi Davidson] (confirmed) (had to move to later week)
    • SIG Service Catalog [Jeremy Rickard] (confirmed)
      • SIG Charter recently approved
      • SIG Chairs have changed recently (insert names later)
      • Working actively on improving contributor experience
        • active in labeling issues
        • improving contributor guide
      • Moving to prow
      • Service Catalog now supports namespaces
      • Catalog restrictions on a per namespace basis
      • Working towards providing default types for services
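Namespace support means brokers and catalog resources can be scoped to a single namespace instead of being cluster-wide. A hedged sketch (kind and fields per the servicecatalog.k8s.io API group; details may differ from what actually shipped):

```yaml
# Namespaced counterpart of ClusterServiceBroker: only visible to,
# and usable from, its own namespace.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBroker
metadata:
  name: team-broker
  namespace: team-a
spec:
  url: https://broker.team-a.example.com   # hypothetical broker endpoint
```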
  • [ 0:00 ] Announcements
    • Kubecon CFP deadline
    • Save the Date: Kubernetes Contributor Summit, 10 December, right before Kubecon.
      • Sunday, 9 December will likely
    • Shoutouts this week (Check in #shoutouts on slack)
      • thanks to @mhb for his efforts in working with #sig-testing to get service-catalog all hooked up to prow :prow: and tide
      • thanks to @tpepper @jeefy @bentheelder @rdodev for great responses and their time on #meet-our-contributors yesterday! :tada: solid examples of good mentors
      • Shout out to @neolit123 for quick responses and status updates to failing ci tests in cluster lifecycle
      • Many thanks to @ahmet for quick reviews of changes to kubernetes/examples repo!
    • Stackoverflow Top Users (Once a month at the end of the month)
    • Turning off bot for 1.12 release, last artifact of munge github (missed 1st part of this)
    • Contributor Experience is looking for new contributors
    • SIG leads have an email regarding zoom

See Q1-2 Archive here


#15

Here are the notes from today, waiting on the video to render but I’m leaving for a long weekend so I’ll have to fill it in later, cheers!

Aug 9, 2018 - (recording)

  • Moderators: Arun Gupta [Amazon]
  • Note Takers: Tim Pepper [VMware/SIG Release], Jorge Castro [Heptio/SIG ContribEx], and Josh Berkus [Red Hat/SIG Release etc.]
  • [ 0:00 ] **Demo** -- No demo this week
  • [ 0:00 ] **Release Updates**
    • Current Release Development Cycle [Tim Pepper ~ 1.12 Release lead]
      • We are roughly halfway through the ~12-13 week release cycle for 1.12, but almost ⅔ of the way through our open development phase:
        • It’s been ~50 days since master branch reopened from 1.11’s freeze
        • It is only 26 days to 1.12’s code freeze!
      • 1.12.0-beta0 is Aug. 14: We are validating a new build/publish mechanism and its documentation. Beta should be cut from a newly created 1.12 release branch next week, CI will be enabled on the branch, and the branch will fast-forward regularly pulling master branch’s content for the next weeks.
      • Looking for high SIG attention toward keeping CI signal green for release master blocking and release master upgrade
      • Code Freeze: September 4 (26 days from today)
      • Release Target: September 25 (47 days from today)
    • Patch Release Updates
      • 1.9.10 (5 days ago)
      • 1.10.6 (12 days ago)
      • 1.11.2 (1 day ago)
  • [ 0:00 ] SIG Updates
    • SIG Scalability [Shyam Jeedigunta] (confirmed)

      • Recent work toward improving tools for scale testing:
      • For 1.12 the kubelet watches secrets instead of polling, a big perf win; can currently scale to 100k namespaces.
      • Kubelet heartbeat changes to reduce etcd interactions (see KEP 0009 node heartbeat)
        • Moving node heartbeat to another API
        • Current node heartbeat produces a LOT of etcd version history, bloating the etcd database
      • CI Testing
        • Deflaking our jobs
        • Solving 1.12 regression
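KEP 0009 moves the heartbeat out of the Node object into a small, frequently updated object, so etcd stops accumulating large Node version history. A hedged sketch of such a lease-style heartbeat object (the exact API was still being settled at the time; group, kind, and values are illustrative):

```yaml
# One small heartbeat object per node; updating it churns far fewer
# bytes in etcd than updating the full Node status.
apiVersion: coordination.k8s.io/v1beta1
kind: Lease
metadata:
  name: node-1                    # hypothetical node name
  namespace: kube-node-lease
spec:
  holderIdentity: node-1
  leaseDurationSeconds: 40        # heartbeat window
  renewTime: "2018-08-09T17:00:00.000000Z"   # illustrative timestamp
```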
    • SIG Architecture [Brian Grant] (confirmed)

      GitHub notifications don’t work for most people, and Slack is also lossy. Use the mailing list.

      • Tracking boards - if you want to get on the SIG Arch radar, please get onto the project board so you can get on the agenda. Feel free to use the sig-architecture mailing list to reach out to us. (Slack is too ephemeral, please use the list as the primary point of contact.)
      • Pushing back on newly compiled-in APIs, reviewing those more closely.
      • Will post to k-dev on the engagement model for interacting with API changes ← important
    • SIG CLI [Sean Sullivan] (confirmed)

    • SIG AWS [Nishi Davidson] (confirmed)

      • Slides link
      • Looking to upstream more, especially documentation and testing
      • Repos now in kubernetes-sigs namespace
      • Giving an overview of subprojects:
        • aws-iam-authenticator allows authentication against IAM credentials for Kubernetes running on AWS. Renamed from heptio-authenticator.
        • aws-alb-ingress-controller, created by CoreOS and Ticketmaster & donated, watches for ingress events on Kubernetes and creates AWS ALBs. It’s in production at Ticketmaster (also used by Bluejeans & Freshworks). At some point will be added to Amazon EKS.
        • aws-encryption-provider provides envelope encryption for etcd; still an alpha project where they are debating design elements.
        • aws-csi-driver-ebs allows the CSI driver to work with EBS for PVs. Collab with Red Hat. Hope to make stable in 1.13/1.14 and replace the current EBS driver.
        • pod-identity-access: just a proposal right now. Would like to have identity injection inside the pod for IAM credentials. Target for 1.13/1.14 work.
        • cloud-provider-aws: project to move the AWS cloud provider to the cloud provider API (as per KEP 0019). Added a documentation KEP for it.
    • Cluster API [Kris Nova]

  • [ 0:00 ] **Steering Committee Updates** [Aaron @spiffxp]
    • Steering Committee Elections 2018
    • Walked through how a meeting works:
      • kubernetes/steering project board
      • They start with a kanban board and look at all of the things they were supposed to have done
      • Right now they’re supposed to be having elections, but there are pending tasks that weren’t done a year ago, like deciding who is a “member of standing”.
      • Went over criteria for member of standing. Right now they’re planning to use Devstats criteria for contributions by contributor (rolling window 1year), requiring 60 contributions.
      • Need to codify SIG liaisons from SC. This is partly for the charter process. Have at least 2 people assigned to each SIG.
    • Code of Conduct Committee (CoCC): open candidates, closed voting -> set of members added in community repo. See committee readme for more info.
    • Charters: lots of activity but also slow progress. WIP, lots to do, tracked in meta issue.
    • Meet Our Contributors - Steering Committee edition
    • Non-SC participation: would like to allow non-SC members to join the meetings by invitation (meetings are recorded and posted to the YouTube channel for community review). For example, Jaice has been auditing the meetings and asking questions; another example is cblecker querying the SC about GitHub permissions management and making a proposal for it. Not suggesting making the meetings open; joining would be by invitation, usually based on a proposal to the SC.
  • [ 0:00 ] Announcements
    • Kubernetes Office Hours is next week! [Jorge]
    • SIG Update Schedule for this meeting is updated through October [Jorge]
      • It is always linked to from the top of this document
      • SIGs, it is your responsibility to ensure that you can make this update, if not, let someone in SIG Contrib-Ex know so we can schedule you.
    • Demo section is finally caught up! If you want to demo something during this meeting see the top of this document. [Jorge]
      • If you’ve demo’ed over a year ago consider submitting again so we can check out your progress!
    • GitHub Management subproject [Aaron @spiffxp]
    • Subprojects [Aaron @spiffxp]
    • Sunsetting Kubernetes SIG service accounts [Ihor]
    • Shoutouts this week (Check in #shoutouts on slack)
      • paris: thanks to @tpepper (Tim Pepper) @jeefy (Jeffrey Sica) @bentheelder (Benjamin Elder) @rdodev (Ruben Orduz) for great responses and their time on #meet-our-contributors yesterday! :tada: solid examples of good mentors
      • spiffxp: thanks to @mhb (Morgan Bauer) for his efforts in working with #sig-testing to get service-catalog all hooked up to prow :prow: and tide
      • Jerickar (Jeremy Rickard): what @spiffxp said! tide and prow are dope and we love using them now
      • tpepper: shoutout to @jorge , @paris , zoom, and any others who’ve been working for months to improve our meeting moderation abilities and best practices to better ensure our collaborations are constructive and resilient in the face of potential abuse
      • spiffxp: shoutout to @matthyx (Matthias Bertschy) for adding per-repo label support to our label_sync bot, so you can add labels to your repo by PR’ing a file instead of making the change manually with admin access
      • jorge: shoutout to @chenopis (Andrew Chen) for sorting out netlify for the contributor site!
      • spiffxp: shoutout to @mkumatag (Manjunath Kumatagi) and @dims (Davanum Srinivas) for their push on multi-arch e2e test images, ppc64le is now passing node conformance (https://k8s-testgrid.appspot.com/sig-node-ppc64le#conformance)

#16

Aug 16, 2018

  • Moderators: Aaron Crickenberger (@spiffxp, Google, SIG Beard)
  • Note Taker: Solly Ross (@directxman12, Red Hat, SIG Autoscaling)
  • [ 0:00 ] **Demo** -- Kubernetes Ingress Controller for Kong [Harry Bagdi, harry@konghq.com] (confirmed)
    • Links/contact
    • Kong is an open source API gateway built on nginx
      • Performance and features from nginx
      • flexible routing
        • Hash-based
        • Cookie-based
        • client-based
      • dynamic configuration
      • plugins for custom logic common to your microservices
    • Ingress Deployment
      • Dataplane mode does the proxying, pulling config from the database
      • Controlplane mode configures things, writing them to a database
      • Runs in a single namespace, but serves ingresses for all namespaces
      • Data is proxied directly to pods, skipping kube-proxy
        • Enables things like sticky sessions in Kong
      • Custom resource for extending normal Ingress with additional Kong functionality (KongIngress)
        • Proxy configuration
        • Routing methods, regex priority, etc
        • Active and passive health checks
      • Plugins for custom logic
        • Use CRDs to set up different plugin configurations
        • For example, rate-limiting
        • Apply configured plugins to ingresses with annotations specifying the name of an instance of the custom resource
        • Have many plugins, all opensource
      • Supports multiple services
      • Supports TLS upstream and termination
    • Inspection
      • Can inject headers for info
        • Via
        • Latency
        • Rate-limiting information
      • can also be inspected using an HTTP API to check underlying Kong configuration
    • Questions
      • Q: How are websockets handled?
        • Kong can forward websocket traffic directly (you can upgrade connections to websockets as normal)
        • Can’t actively manipulate traffic on websockets
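A hedged sketch of the KongIngress/plugin pattern described above, with CRD and annotation names as documented for the controller at the time (treat all values as illustrative):

```yaml
# Extend a normal Ingress with Kong-specific behavior
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: sticky-demo
upstream:
  hash_on: cookie          # cookie-based sticky sessions
route:
  regex_priority: 0
---
# A reusable plugin configuration (rate limiting as the example)
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rl-5-per-minute
plugin: rate-limiting
config:
  minute: 5
---
# Attach both to an Ingress via annotations naming the custom resources
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo
  annotations:
    configuration.konghq.com: sticky-demo   # assumed annotation key
    plugins.konghq.com: rl-5-per-minute
spec:
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: demo-svc    # hypothetical backend service
          servicePort: 80
```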
  • [ 0:00 ] **Release Updates**
    • Current Release Development Cycle [Tim Pepper ~ 1.12 Release lead]
      • ~2.5 weeks to code freeze!!! Yes already!!
      • 40 days to release
      • release-1.12 branch created Tuesday; fast forwarding daily to master
        • Fast-forward for next couple of weeks
      • Branch CI on track to arrive this week
      • CI signal mostly OK for release master blocking and release master upgrade, but a number of issues being worked
    • Patch Release Updates
      • 1.9.10 (14 days ago) - Mehdy Bohlool (@mbohlool)
      • 1.10.6 (21 days ago) - Maciek Pytel (@MaciekPytel)
      • 1.11.2 (9 days ago) - Anirudh Ramanathan (@foxish)
  • [ 0:00 ] **Graph o’ the Week** [spiffxp]
    • Let’s talk about flaky and failing tests
    • Testgrid - presubmits-kubernetes-blocking#Summary
      • Shows blocking tests
      • Also a dashboard for non-blocking tests
      • Can click to see history of job runs in a grid, where they succeeded and failed
      • Tests are considered failing until it sees a pass in some particular window
    • Velodrome - BigQuery Metrics - Presubmit Failure Rate
      • Grafana instance looking at test failures
      • Can see which suites are failing over time
        • E.g. kops spiked, integration built over time, but has been fixed (thanks @janetkuo!)
    • GitHub Query - is:open label:kind/flake org:kubernetes
      • Can use this query to find flaky tests (intermittently failing and succeeded)
    • GitHub Query - is:open label:kind/failing-test org:kubernetes
      • Can use this query to find tests that are failing all the time (as opposed to “just” being flaky)
    • Who should be helping fix these?
        1. Who owns the test?
        • [sig-foo] thing should not explode
        2. Who owns the job?
        • test-infra/config/jobs/kubernetes/sig-foo/OWNERS
        3. Who owns the infra?
        • #test-infra
        • If you skip steps 1 & 2 and go directly to 3, you will be sent to the back of the line
  • [ 0:00 ] KEP o’ the Week [Chris Hoge, @hogepodge, on behalf of Nishi Davidson, @d-nishi]
    • Part of SIG Cloud Provider
      • Coordinates stuff among all cloud providers
    • https://github.com/kubernetes/community/blob/master/keps/sig-cloud-provider/0019-cloud-provider-documentation.md - Accepted
      • Transfer responsibility of maintaining docs to cloud providers
      • Provide documentation on how to activate any out-of-tree cloud provider
      • Set minimum standards for cloud provider documentation
      • Maintain docs for how to write a new out-of-tree cloud provider
    • Follow up discussion in SIG-Cloud-Provider and SIG-AWS
    • Questions
      • Q: Working with Cluster Lifecycle to improve workflow in kubeadm?
        • Yes, working on docs to start out with
  • [ 0:00 ] SIG Updates
    • SIG Docs [Andrew Chen]
      • [slide link]
      • Ongoing/upcoming work
        • 1.12 docs work is underway (@zparnold is docs lead)
        • Docs contributor guide has been refactored (@mistyhacks)
        • Considering alternative search engines for China PR#9845
        • Figuring out generated docs (working group) – e.g. for kubelet PR#66034
        • Proposal for fundamental concepts of Kubernetes (modeling, architecture) [slides]
          • Need more/helpful diagrams
      • PR bash and docs sprint at Write the Docs in Cincinnati
      • Search outage postmortem [doc]
        • Kubernetes.io dropping off of search results
        • Versioned docs aren’t indexed (via X-Robots-Tag: noindex)
        • Noindex header got added to main site as well by accident, causing no search engine results
        • What to do going forward
          • Hand off infra to CNCF, document mechanisms and processes
          • Adding testing and monitoring, notify on abnormalities
          • Have better failsafe default state
            • master was the exception before, default state was “nothing gets indexed”
            • default state should have been “everything got indexed”
    • SIG IBMCloud [Sahdev Zala]
      • Slide deck
      • Relatively new SIG for building/maintaining/using Kubernetes with IBM public and private clouds
      • Meets every other week (Wednesdays at 14:00 EST)
        • Start with presentations about IBM Cloud Kubernetes Service, IBM Cloud Private (recorded)
          • IKS supports 3 concurrent releases, multi-az clusters
          • IBM Cloud Private 2.1.0.3 released in May, certified for up to 1000 nodes, scalability work ongoing
      • Ongoing discussions/work
        • SIG cloud provider integration
        • Public repo for IBM cloud provider code
        • SIG Charter
      • Future discussions (see SIG agenda)
        • Hybrid clouds (IKS <-> ICP)
        • Performance
      • Community Collaboration
        • Networking
          • Working with Red Hat & Tigera
            • Move Egress/IPBlock network policy to GA in 1.12
        • Scalability
          • Etcd changes to improve cluster creation, improve monitoring overhead
        • Storage
          • Flex volume resize and metrics
          • IBM Cloud object store plugins
    • SIG Autoscaling [Solly Ross]
      • SIG is in charge of anything related to automatic scaling: pods, cluster components, and the cluster (VMs) itself
      • Horizontal Pod Autoscaler
        • Removing scale limits in favor of more sophisticated behavior (looking at metric data point timestamps and pod launch timestamps)
        • Brainstorming further algorithmic improvements (looking at more than one data point, etc) for flexibility around additional use cases and custom metrics
        • HPA v2beta2 landing in 1.12 release
          • Specify labels to further scope metrics
          • Target average values on object metrics (divide value by number of pods)
          • API consistency improvements
      • Cluster Autoscaler
        • Focusing on some large known issues (scaling around GPUs, local persistent volume scaling)
        • Investigating steps to integrate cluster autoscaler with cluster API (may require some changes to the cluster API instead of custom logic in the autoscaler)
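The HPA v2beta2 changes above — metric label selectors and AverageValue targets on object metrics — can be sketched like this (assuming the v2beta2 schema as it landed in 1.12; metric names and values are illustrative):

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: webapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Object
    object:
      describedObject:
        apiVersion: extensions/v1beta1
        kind: Ingress
        name: webapp
      metric:
        name: requests_per_second    # hypothetical custom metric
        selector:                    # new in v2beta2: labels scope the metric
          matchLabels:
            verb: GET
      target:
        type: AverageValue           # new: divide the object value by pod count
        averageValue: 500m
```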
  • [ 0:00 ] Announcements
    • Shoutouts this week
      • shoutout to Di Xu (@dixudx) for being such an active reviewer and reviewing LOTS of incoming PRs so quickly!!!
      • shoutout to Arnaud Meukam (@ameukam) and Jeremy Rickard (@jerickar) for being awesome bug triage shadows and handling the job wonderfully while I was out last week!
      • Mistyhacks: Shoutout to @ianychoi, who has just become a k8s org member in order to work on Korean localization, and is already providing great feedback, as evidenced in this PR: https://github.com/kubernetes/website/pull/9643/comment#issuecomment-411886340
      • @jdumars for creating :testgrid: (Slack emoji)
    • Steering Committee Elections are coming! Announcements will go out next week on multiple platforms but k-dev@ will be the main communication channel.
      • Elections are coming!
      • Next week, email will go out with eligibility, etc information on kubernetes-dev ML
      • There will be a voters guide checked into GitHub as a single source of truth
    • Changing how we do GitHub membership - file an issue instead of send an e-mail?
    • Brace yourselves, automation is coming [spiffxp]
    • Heapster deprecation reminder [directxman12]
      • Bug-fix only mode on 1.12, completely deprecated & retired in 1.13
      • Please start the process of migrating away from Heapster if you haven’t already (look at metrics-server and/or third-party monitoring solutions, such as Prometheus)

#17

Video will come later as we couldn’t livestream due to technical issues, in the meantime, here are the notes:

Aug 23, 2018

  • Moderators: Paris Pittman (SIG Contributor Experience)
  • Note Taker: Josh Berkus and Danny Rosen
  • [ 0:00 ] **Demo** -- KeyCloak - bdawidow@redhat.com, stian@redhat.com (confirmed)
    • Keycloak is an open source IAM (Identity Access Management) solution
    • Demo involving Ingress
      • Set up “realm” for credentials
      • Then set up security for Ingress endpoints
      • Supports bearer tokens
      • Only keycloak sees the credentials, applications only know what’s authenticated by access token
      • Handles managing multiple roles per user, with different levels of permissions by role
      • Support for multiple identity providers (Github example)
      • Libraries for auth for JavaScript, Java. Supports general SAML libraries for other languages; also working on a Go-based proxy provider.
      • Support for external user stores (LDAP, Kerberos, Custom)
      • Multiple identity providers per Realm, can also have database-backed identity database locally.
      • Keycloak can be used for authentication for Kubernetes itself
      • Used at U Michigan
      • Similar to OpenAM but has more features
  • [ 0:11 ] **Release Updates**
    • Current Release Development Cycle [Tim Pepper ~ 1.12 Release lead]
      • Proposal in flight to drop “status/approved-for-milestone” from list of merge required labels during code freeze, with lazy consensus target Aug 27
      • Code Slush: Aug. 28
      • Code Freeze: Sept. 4
      • Release Target: Sept.25
      • …one month to go. Your feature work should be wrapping up ahead of code freeze. Docs PRs are due. Test cases should be in place.
      • Continuous Integration:
    • Patch Release Updates
      • 1.9.10 (20 days ago) - Mehdy Bohlool (@mbohlool)
      • 1.10.7 (3 days ago) - Maciek Pytel (@MaciekPytel)
      • 1.11.2 (15 days ago) - Anirudh Ramanathan (@foxish)
  • [ 0:15 ] **Graph o’ the Week** [spiffxp]
    • Let’s talk about our automation’s GitHub API Token usage
    • We get: 5,000 requests per hour
    • We used to work around this in mungegithub by:
      • keeping an in-memory cache
      • tuning munger polling frequency
      • separating into SQ/misc-mungers instances
    • Switching to prow to do things on demand vs. a polling loop helped, for a bit
    • Now, we’re using ghproxy (thanks @cjwagner!)
      • Implemented by our own Cole Wagner
    • Hero charts: last 6 months of cache and github token usage
      • See population of the cache, how many api tokens we didn’t have to use over time
      • Turned it on in mid-May
      • Prior to turning the cache on, we often hit max tokens, esp. at the end of code freeze
      • Now usage is much more stable/lower, can go through the backlog faster
      • We’re moving away from mungegithub so you won’t see this much more, moving to Tide for merging.
  • [ 0:22 ] KEP o’ the Week powered by SIG PM
    • tallclair@ - KEP 0014-runtime-class
    • RuntimeClass - defines a generic way to describe a container runtime, where in the past it was opaque to the control plane beyond the kubelet
    • Motivation is to support new runtimes, like katacontainers, GVisor and maybe future stuff like serverless runtimes or GPUs
    • There’s a pod spec field referencing the RuntimeClass, to decouple the configuration and node-level implementation from the name users need to use
      • We could end up with more than one class spec for the same runtime
    • See the list of Non-Goals; we’re trying to keep the mechanism simple. They do have a list of future extensions, though, such as:
      • PodOverhead, so that you can account for resources outside those used for the container, like for Kata.
      • Policies for abstract runtimeclasses in podspec, such as a requirement for a “sandbox” runtime or “unix” (pod doesn’t care which specifically they get)
    • Want to make it consistent to express supported/unsupported features (including mutually exclusive ones on a node, like SELinux vs. AppArmor).
    • Leave Comments:
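A hedged sketch of the decoupling described in this KEP — a named RuntimeClass object on one side, a pod referencing it by name on the other (API group and field names per the KEP's alpha shape; details may change):

```yaml
apiVersion: node.k8s.io/v1alpha1
kind: RuntimeClass
metadata:
  name: sandboxed
spec:
  runtimeHandler: runsc        # e.g. gVisor's runsc, configured per node
---
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-workload
spec:
  runtimeClassName: sandboxed  # user-facing name, decoupled from the handler
  containers:
  - name: app
    image: example.com/app:latest   # hypothetical image
```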
  • [ 0:00 ] SIG Updates
    • OpenStack (Chris Hoge, confirmed)
      • https://docs.google.com/presentation/d/1fdq0X-UPN-8xc_3bpvvrwIic_UGTTDyKRt-Cjtgp9io/edit?usp=sharing
      • Completed in the last cycle:
        • cloud-provider-openstack: added conformance testing, lots of bug fixes, synced with in-tree provider
        • Planned to remove the in-tree provider in 1.12, but has been delayed to 1.13 to give users time to move to external provider.
        • Added Manila Storage Provisioner for shared storage (NFS)
        • Added keystone authenticator for mapping multiple projects to accounts
        • Added extensive documentation, including general docs for Cloud Providers
        • Began work for transitioning to WG Openstack of SIG Cloud Provider
      • Upcoming Work
        • Magnum (OpenStack’s service for container orchestrators) conformance & cert testing toward getting it certified as a k8s installer
        • Driver work: autoscaling drivers, barbican driver for key management
    • **Storage** (Saad Ali, confirmed)
      • https://docs.google.com/presentation/d/1TFX6BDCod6E0PJRusQ1zntOX36kDyuO5iycpSfH8pL4/edit?usp=sharing
      • For 1.12:
        • Topology-aware volume scheduling: not all volumes work on all nodes, and the old version was based only on cloud providers. Moved to a generic interface, both in Kubernetes and in CSI.
        • This quarter moving in-tree storage to topology, and for all CSI plugins.
        • We can have volumes provisioned in a smarter way.
        • First Kubernetes storage features that could not be part of core.
        • Snapshots / restore functionality (CSI, Kubernetes internal & external)
        • Drive CSI to GA/Stable
      • Preparing for CSI (Out of tree volume extension mechanism) for GA / Stable Q4
      • This Quarter: Support of ephemeral volumes (eg: secret volume, configmap volume).
      • Moving Kubelet Device Registration to beta
      • Adding conformance testing for storage to kubernetes storage suite
      • Block volume support moving to Beta
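Topology-aware provisioning is driven from the StorageClass; a hedged sketch with the binding mode and topology keys as they existed around 1.12 (provisioner and zone values are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware-ssd
provisioner: kubernetes.io/gce-pd        # example in-tree provisioner
parameters:
  type: pd-ssd
volumeBindingMode: WaitForFirstConsumer  # delay binding until a pod schedules,
                                         # so the volume lands in a usable zone
allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone
    values:
    - us-central1-a
    - us-central1-b
```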
    • Apps (Matt Farina, confirmed)
      • https://docs.google.com/presentation/d/1jbEDX4GDeCssT4D42Q1iajDSLU3sz_RQgPwDCkR2J1c/edit?usp=sharing
      • Active projects:
        • Application CRD & Controller
        • Workload API
        • Kompose
        • Examples
      • SIG Apps Charter: WIP, should be ready for review soon
      • Recently merged: Recommended labels merged into Helm documentation as well.
      • Application CRD & Controller: Cross tool way to describe an application.
      • Workloads API: Looking at Lifecycle Hooks, Pod disruption budget & Deployments, Jobs with deterministic pod names.
      • Time split between Workloads API & Developer tooling week by week.
      • Kompose: Converts Docker Compose to Kubernetes objects, actively being worked on
      • Helm moved to CNCF - Everything from kubernetes-helm has moved to the Helm org. Charts is still using prow/tide automation
  • [ 0:00 ] Announcements
    • Shoutouts this week (pulled from #shoutouts in slack weekly)
    • kubernetes-client/typescript has been moved to kubernetes-retired [spiffxp]
    • Automating all the things update [spiffxp]
    • Seattle Contributor Summit is now a part of the KubeCon registration process. Add as a co-located event. Dec 9th and 10th.
    • Steering Committee Election Announcement went out to k-dev on Aug 21 (or 22nd depending on where you are in the world!)
      • Next deadline: Nominations and exception eligible voter forms due on Sept 14th
    • Contributor Role Board [castrojo] (will show you next time due to time constraints, in the mean time check it out!)
      • A place for volunteers to declare interest
      • A place for SIGs/WGs/others to post roles for volunteers.
      • Pairs volunteers with mentors.
      • SIGs, we’d love to get some postings from you!
    • We will have a Contributor Discussion Social at Kubecon Shanghai, on the evening of November 13th. This will include drinks, snacks, and a panel Q&A on contributing to Kubernetes from China /Asia. Anyone who contributes to Kubernetes and is at Kubecon Shanghai is invited. Venue/schedule details TBA.
      • If you are a Chinese contributor to Kubernetes, we are still looking for panelists.
      • This is in addition to the New Contributor Workshop and the Doc Sprints during the day, which you can register for with your Shanghai registration.

#18

August 30, 2018


#19

September 6, 2018


#20

Sorry this one is a tad late, I was on the road:

Sep 13, 2018

  • Moderators: Arun Gupta [SIG AWS/Amazon]
  • Note Taker: Solly Ross [SIG Autoscaling]
  • [ 0:00 ] Demo -- Answering questions on k8s Slack w/ Foqal [Vlad Shlosberg, vlad@foqal.io] (confirmed)
    • https://docs.google.com/presentation/d/19RNjayF59WanE8Q9ug4sftFXniGQP4PRRXsRC4X7dd4
    • https://foqal.io/oss
    • Goals
      • Improve UX
      • Focus Contributor Times
    • Core Idea
      • Automatically respond to common questions without any special interaction
    • Functionality
      • Upon asking a question (without special syntax), Foqal sends answer, marked as just to you
      • Asker can rate the answer; if marked as helpful, the answer is sent to the entire channel
    • Sources
      • StackOverflow
      • Docs (divided into small sections)
      • Slack conversations
        • Upon detecting question, looks for answers sent afterwards
          • Sends message to answerer, asking if it’s appropriate to store
          • Can edit answers before storing them
    • Results
      • 3 months, 2 active channels, 37 helpful autoresponses in past 2 weeks
      • Slack conversations and Kubernetes docs provide most useful answers
    • Currently talking to docs folks to use Foqal responses to improve docs content, searchability, and examples
    • Invite Foqal bot to your channel in Kube slack
      • /invite @Foqal
      • Both SIG channels and more user-facing channels
      • Add context before storing
      • Can manually add to Foqal using the ellipsis menu on any slack message
    • Talk to Foqal about…
      • importing other docs sources
      • Partitioning (SIG meeting times might not be useful to kubernetes user channels)
      • Ask Vlad if you have questions
    • Can also run on private Slack instances
  • [ 0:00 ] Release Updates
    • Current Release Development Cycle [Tim Pepper ~ 1.12 Release lead]
      • Still in Code Freeze. See here for a TL;DR on how to get a PR merged during the freeze.
      • Beta 2 - Sept. 11
      • RC - Sept. 18
      • Release target - Sept. 25: AT RISK
        • We are making progress on CI Signal, but slowly.
        • Depending on today/tomorrow improvements merging and test results showing up by Monday Sept. 17
        • potential to delay release toward Sept. 27
      • Tide: we moved k/k to it on Monday. Worked through a few minor issues. Seems to be working reasonably now.
  • [ 0:00 ] SIG Updates
    • SIG Windows [Michael Michael] (confirmed)
      • Finished a bunch of functionality required for moving to stable
        • Not moving to stable until 1.13, due to conformance, perf, stability hiccups
      • Want to finalize docs, how-to guides, etc for GA
      • Stopping feature development to focus on stabilization
    • SIG Node [Dawn Chen] (confirmed)
      • Slide: https://docs.google.com/presentation/d/1G034FTqXeXO5Gf1H-ufTkAMgJKOcx6HCIzB6krkO6zY/edit?usp=sharing
      • Finished charter
        • Meetings weekly Tuesday at 10AM PT, Resource Management WG Wednesday 11AM PT, on-demand meetings for Asia times
        • Revised/categorized SIG scope (see slide 3, large list)
      • Recent work
        • Sandbox Pods
          • RuntimeClass proposal, alpha feature, CRD
          • Working to integrate with Kata and other sandbox solutions, containerd shim
        • Windows Container Support (with SIG Windows)
          • GA in 1.13
          • Kubelet stats for Windows system containers
          • Fixes for network, eviction manager bugs
          • In-review PRs for
            • DNS capabilities for Windows CNI (with sig-network)
            • Windows CNI support (with sig-network)
            • Testing frameworks (with sig-testing)
        • Testing
          • Changes in Node E2E (see slide 6 for link)
            • Reorganized tests to more easily track results
            • New tests need to be tagged to run in normal test suites
          • CRI Testing dashboard (see slide 6 for link)
            • One place to view node conformance test results and features for CRI implementations
        • Misc
          • User NS support in progress
          • ResourceClass API under discussion (beyond just GPU support)
          • Efficient heartbeat for scalability in progress
          • PID NS sharing in beta
          • Updated debug container API, accepted proposal, implementation in progress
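
The RuntimeClass alpha mentioned under Sandbox Pods can be sketched as a pair of manifests. This is an approximation of the 1.12-era alpha shape (`node.k8s.io/v1alpha1`, installed as a CRD, behind the RuntimeClass feature gate); the names `sandboxed` and `kata` are hypothetical, and `kata` stands in for a containerd shim handler configured on the node:

```yaml
# A cluster-scoped RuntimeClass naming a runtime handler on the node.
apiVersion: node.k8s.io/v1alpha1
kind: RuntimeClass
metadata:
  name: sandboxed
spec:
  runtimeHandler: kata   # hypothetical containerd shim for a sandbox runtime
---
# A pod opts into the sandboxed runtime by referencing the class by name
# (alpha PodSpec field in 1.12).
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod
spec:
  runtimeClassName: sandboxed
  containers:
    - name: app
      image: nginx
```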
  • [ 0:00 ] Announcements
    • Steering Committee Election update: [paris/ihor/jorge]
      • Tomorrow is the deadline for all nominations (the entire process, including uploaded bios) and voter eligibility forms (if you are not on voters.md and want to vote).
        • Voter eligibility is normally based on contributions in the past 12 months, but you can make a request to be added if you’ve made non-GitHub contributions and you think you should be eligible
      • Next? CIVS polling ballots go out on Wednesday, September 19th to emails we have on file. If you do not receive an email by Thursday (please check spam/bulk), contact community@kubernetes.io. We will remind everyone on this call next week as well as our regular channels (k-dev ML, discuss.k8s.io, slack, etc.)
    • #Shoutouts! (want to say thanks? Use the #shoutouts channel in slack)
      • @Mzee1000: Shout-out to @AishSundar and @gsaenger for incredible help with CI signal
      • @AishSundar: Huge shoutout to @gsaenger for lighting up the right fires when and where needed for 1.12 !! Way to go
      • @Justaugustus: Shoutout to @dougm, @dims, @bentheelder, @sttts, and anyone I might’ve missed for working the weekend to test our Release Engineering tooling ahead of the next beta cut!
      • @misty: @lucperkins for adding per-heading anchor links to the docs so people can share an in-page section at any level, without having to go back to the TOC to find the link!
      • @neolit123: thanks to @timothysc and @fabrizio.pandini who helped with debugging a release blocking e2e test for sig-cluster-lifecycle!
      • @mkumatag: Now we have v1.12.0-beta.2 release images are all fat manifest… This made all other architectures first class citizens… Thanks @dims @dougm @ixdy @luxas @calebamiles @tpepper @bentheelder
      • @paris: shout to @ameukam for helping contribex with our communication platform discovery and doing the hard work. perfect example of chopping wood and carrying water.
      • @tpepper: huge shout out to @bentheelder for working late late last night and right back to it this morning on diagnosing/resolving build pipeline issues in support of 1.12 release