
Last Week in Kubernetes Development

Stay up-to-date on Kubernetes development in 15 minutes a week.


Week Ending August 7, 2022

Developer News

Starting sometime in October, k8s.gcr.io will begin returning 302 redirects to registry.k8s.io, in order to ensure that old releases pull from the new registry, which has lower hosting costs and higher performance.

With 1.24 stable on Go 1.18, Kubernetes contributors are officially allowed to use Go Generics, although you should avoid them in any backportable bug fixes until 1.24 is EOL in late 2023.
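As a minimal sketch of what's now allowed, here's the kind of type-parameterized helper contributors can write under Go 1.18. The `Ordered` constraint and `Min` function are illustrative, not from any Kubernetes PR:

```go
package main

import "fmt"

// Ordered is a local stand-in for the constraint in golang.org/x/exp/constraints,
// covering the comparable-by-< types we care about here.
type Ordered interface {
	~int | ~int64 | ~float64 | ~string
}

// Min works for any Ordered type; before generics this needed one
// copy per type or a reflection-based workaround.
func Min[T Ordered](a, b T) T {
	if a < b {
		return a
	}
	return b
}

func main() {
	fmt.Println(Min(3, 5))           // int
	fmt.Println(Min(2.5, 1.5))       // float64
	fmt.Println(Min("kube", "etcd")) // string
}
```

Since generics are a Go 1.18 feature, any fix using them cannot be cherry-picked to release branches built with older Go, which is why they're off-limits in backportable bug fixes for now.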

Release Schedule

Next Deadline: Docs ready Aug 9th; Test Freeze Aug 10th

We are in Code Freeze and trying to wrap up 1.25. Tests are frozen (no changes except to resolve CI bugs) starting this Wednesday, and you should have your docs ready for review by Tuesday. The first beta has been released.

This Friday (8/12) is the cherry pick deadline for the next batch of patch updates to 1.22, 1.23 and 1.24.

#111090: Add support for user namespaces phase 1 (KEP 127)

Containers are great in many ways, but the major runtimes for Kubernetes (Docker, containerd, CRI-O, etc.) have all had the issue that user ID 0 inside the container was the same as user ID 0 outside the container. In a perfect world this should never matter, as the kernel should be checking everything against all the other dozen security subsystems we collectively call “containers”, but we don’t live in that world, and so it has forever been a best practice to avoid the use of UID 0 (or other meaningful UIDs) in our Pod specs. With this PR we have the first phase of our fix: user namespace remapping. For this phase, only Pods with either no volumes or “stateless” volumes (configmap, secret, downwardAPI, emptyDir, or projected) are supported. If you enable the UserNamespacesStatelessPodsSupport feature gate and set hostUsers: false in the PodSpec, the containers will be run in a mapped user namespace. In practical terms this means user 65535 in one container will not be able to read files or kill processes owned by user 65535 in another container, and you can provide “root-ish” access in more limited contexts, such as a VPN daemon which needs to make privileged calls but shouldn’t be able to mess with other containers.
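As a rough sketch (the Pod name and image are placeholders), opting in looks something like this, assuming the feature gate is enabled on the cluster:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo          # hypothetical name
spec:
  hostUsers: false           # opt out of the host user namespace
  containers:
  - name: app
    image: registry.k8s.io/pause:3.8
  # Phase 1 restriction: either no volumes, or only the "stateless"
  # kinds: configmap, secret, downwardAPI, emptyDir, projected.
```

With this set, UIDs inside the container are remapped to an unprivileged range on the host, so even in-container “root” is not host root.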

#111113: Support handling of pod failures with respect to the configured rules

Job objects have slowly grown from an infrequently used feature to the core runtime for CI tools, machine learning pipelines, and lots more. With that has come a growth from “please run this container once or something” to wanting more detailed control over the whole lifecycle. This PR adds a new podFailurePolicy substruct in JobSpec to configure the behavior both with respect to exit codes and Kubernetes-specific events like resource evictions. Combined with the new deletion reason tracking in Pods, this gives the Job system very fine-grained control over failure behaviors for both advanced use cases and old-fashioned simple use cases in high-churn environments.
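A hedged sketch of what the new field might look like in practice (the Job name, image, exit code, and rule choices here are illustrative, not from the PR):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: failure-policy-demo        # hypothetical name
spec:
  backoffLimit: 6
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: registry.k8s.io/example-worker   # hypothetical image
  podFailurePolicy:
    rules:
    # Exit code 42 means the work itself is invalid: fail the whole
    # Job immediately instead of burning through retries.
    - action: FailJob
      onExitCodes:
        containerName: worker
        operator: In
        values: [42]
    # Pods lost to evictions/disruptions shouldn't count against
    # backoffLimit; this relies on the new Pod disruption conditions.
    - action: Ignore
      onPodConditions:
      - type: DisruptionTarget
```

The rules are evaluated in order, so you can distinguish “this will never succeed” exit codes from infrastructure churn that deserves a free retry.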

Other Merges

Test Reliability: APIService lifecycle, scheduler tests


Last Week In Kubernetes Development (LWKD) is a product of multiple contributors participating in Kubernetes SIG Contributor Experience. All original content is licensed Creative Commons Share-Alike, although linked content and images may be differently licensed. LWKD does collect some information on readers, see our privacy notice for details.

You may contribute to LWKD by submitting pull requests or issues on the LWKD GitHub repo.