It’s release time! Kubernetes 1.13 is out, which you can read about elsewhere. Along with the 1.13 release, we also released an all-versions security patch, so even if you’re not upgrading to 1.13, it’s time to update your clusters today.
Also note that we didn’t have an LWKD last week due to Code Freeze, and we may not have one next week due to KubeCon Seattle.
The community meeting kicked off with Andrew Chen and Dominik Tornow explaining the Docs Modeling Working Group, which is an effort to improve the project’s “big picture” documentation through the use of models and diagrams.
Matt Farina, newly elected chair, briefed us on SIG-Architecture, which now has a charter. They are working on API review, conformance testing, and KEPs (which are moving to the kubernetes/enhancements repo). They are hoping to handle Windows conformance testing soon.
SIG-Release was represented by Tim Pepper, also a new chair, who went over the changes from the last few releases. In 1.12, we enabled non-Google branch managers and moved to Tide for merge queue. 1.13 saw a cleanup of the testgrid, moving out unmaintained tests (to be repeated in the future), and creating a Branch Manager team (instead of one person). For 1.14, they’re shooting for better RPMs and Debs, and improvements to build tools and automation. A big change is the launch of the LTS Working Group, who will be figuring out if Kubernetes can and should have Long Term Support releases and what those would look like.
By the time you read this, Kubernetes 1.13.0 will be out. 1.12.3, 1.11.5, and 1.10.11 are also out, and patch a super-critical security hole (CVE-2018-1002105). Update your servers now.
The 1.14 Release Team is being selected and 1.14 development has already started, with the lifting of Code Freeze last week. Here are some 1.14 changes for you to anticipate:
Chaos monkey for e2e tests! NodeKiller allows randomly shutting down nodes during e2e tests, subject to parameters such as the time between failures and the percentage of nodes to target overall. While not every test will benefit from this kind of chaos testing, it should improve the overall reliability of Kubernetes as well as help detect flaky e2e tests. And while we’re on the topic of e2es, we had two new suites added for file exec and pod preemption.
While there have been systems in the past for extending and customizing the behavior of the scheduler, doing so has been a difficult proposition until now. The new plugins framework makes it simple to register code to run during different phases of the scheduling process. For now only the pre-bind steps have been exposed, but more are expected to land before 1.14 ships.
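To make the "register code for a phase" idea concrete, here is a minimal Go sketch of a phase-based plugin registry in the spirit of the framework; the interface and registry names are illustrative, not the actual scheduler API.

```go
package main

import "fmt"

// PreBindPlugin runs just before a pod is bound to a node.
// (Illustrative interface; not the real scheduler plugin API.)
type PreBindPlugin interface {
	Name() string
	PreBind(pod, node string) error
}

// registry holds the plugins registered for each exposed phase;
// only pre-bind is shown here, matching what has landed so far.
type registry struct {
	preBind []PreBindPlugin
}

func (r *registry) RegisterPreBind(p PreBindPlugin) {
	r.preBind = append(r.preBind, p)
}

// runPreBind invokes every registered pre-bind plugin in order;
// any error aborts the binding.
func (r *registry) runPreBind(pod, node string) error {
	for _, p := range r.preBind {
		if err := p.PreBind(pod, node); err != nil {
			return fmt.Errorf("plugin %s rejected binding: %v", p.Name(), err)
		}
	}
	return nil
}

// logger is a trivial plugin that just announces upcoming bindings.
type logger struct{}

func (logger) Name() string { return "logging" }
func (logger) PreBind(pod, node string) error {
	fmt.Printf("about to bind %s to %s\n", pod, node)
	return nil
}

func main() {
	r := &registry{}
	r.RegisterPreBind(logger{})
	if err := r.runPreBind("nginx-1234", "node-a"); err != nil {
		fmt.Println(err)
	}
}
```

The point of the design is that extension code plugs into named phases instead of forking or wrapping the scheduler itself.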
This PR divides up the APIs to access feature gates data into read and write halves. This prevents code from unexpectedly modifying feature gate settings, and makes it easier to track which places are using the mutable API. It’s unlikely this should affect much code in the wild, but it’s possible it may require tweaks to testing systems.
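The shape of that read/write split can be sketched in Go. The interface names below are illustrative stand-ins, but the idea matches the PR: most code takes the read-only half, and only flag parsing and tests get the mutable half.

```go
package main

import "fmt"

// FeatureGate is the read-only half: most code should depend on this,
// so it cannot accidentally modify gate settings.
// (Illustrative names, not the exact Kubernetes interfaces.)
type FeatureGate interface {
	Enabled(name string) bool
}

// MutableFeatureGate adds the write half, for flag parsing and tests only.
type MutableFeatureGate interface {
	FeatureGate
	Set(name string, enabled bool)
}

// gates is a trivial backing implementation of both halves.
type gates map[string]bool

func (g gates) Enabled(name string) bool { return g[name] }
func (g gates) Set(name string, on bool) { g[name] = on }

func main() {
	var mutable MutableFeatureGate = gates{}
	mutable.Set("SomeAlphaFeature", true)

	// Hand out only the read-only view; callers can't flip gates by accident,
	// and uses of the mutable API are easy to grep for.
	var readOnly FeatureGate = mutable
	fmt.Println(readOnly.Enabled("SomeAlphaFeature")) // true
}
```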
And finally a shorter entry: a tweak to the scheduler loop to prevent scheduler starvation. This could occur when a cluster has a large number of unschedulable pods and a small subset of them continually jump to the front of the queue, so the rest are never re-evaluated for scheduling.
--experimental-encryption-provider-config has been replaced with --encryption-provider-config; the old flag will be dropped in 1.14.
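For operators, the rename looks like this on the kube-apiserver command line (the config file path here is illustrative; the file format is unchanged):

```shell
# Old flag (deprecated; to be removed in 1.14):
kube-apiserver --experimental-encryption-provider-config=/etc/kubernetes/encryption.yaml

# New flag, same config file:
kube-apiserver --encryption-provider-config=/etc/kubernetes/encryption.yaml
```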
Last Week In Kubernetes Development (LWKD) is a product of some members of the Kubernetes project, but is not an official publication of the Kubernetes project or the CNCF. All original content is licensed Creative Commons Share-Alike, although linked content and images may be differently licensed. LWKD does collect some information on readers, see our privacy notice for details.
You may contribute to LWKD by submitting pull requests or issues on the LWKD GitHub repo.