Image Registry Migration: the old signing key, hosted on packages.cloud.google.com, has stopped working; you need to switch to the new key on dl.k8s.io. The community-images krew plugin will help you identify any remaining references to the old registry in your cluster. It’s not just users who need to update: 60 of our project’s own repositories still refer to the old registry. Please check those resources and update as soon as you can.
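The krew plugin scans a live cluster; as a rough local stand-in, here is a sketch of the same idea run against a directory of manifests. It assumes the old registry is k8s.gcr.io (the registry isn’t named above), and the directory path and file are placeholders for illustration.

```shell
# Minimal local check (not the community-images plugin itself):
# scan a manifests directory for images still pulled from the
# legacy registry, assumed here to be k8s.gcr.io.
mkdir -p /tmp/registry-check
printf 'image: k8s.gcr.io/pause:3.9\n' > /tmp/registry-check/deploy.yaml

# list every file that still needs updating
grep -rn 'k8s.gcr.io' /tmp/registry-check
```

Each match is a manifest to update to the new registry before the old one goes away.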
Next Deadline: Enhancement Exceptions Due, March 6th
It’s still feature-work time until Code Freeze on March 15th, so keep working on finishing up those 1.27 enhancements. It’s also time to think about your tests passing, so please check the new SIG Release gating-test policy.
Patch releases 1.26.2, 1.25.7, 1.24.11, and 1.23.17 are out, together with some changes to the patch release process. First, we’re no longer building an rc0 for patch releases. Second, we are now signing patch releases using Cosign, although for technical reasons we were unable to sign all objects in this round of patch releases. The problem will get fixed next month.
After 4 years of planning and at least two of development, we finally have a first pass at in-place resource scaling for Pods. This PR implements the core resizing features and some related API handling. The main feature is pretty straightforward: you can now edit the resources: map on an existing container and, rather than an API error, Something™ will happen. Upon seeing a resize request, the Kubelet will spring into action and check if the node has sufficient resources to meet the new request. If so, it will set the Status.Resize field to InProgress and get on with the CRI calls and other internal bookkeeping updates. If the new sizes don’t fit, other status flows are started, all with the same overall goal: to provide the requested change if possible. There is also a new per-resource resize policy field, which can be set to RestartNotRequired (the default) to say that a restart isn’t needed (though some runtimes may still have to do one anyway), or Restart to force the container down and back up (for example with a Java app where the -Xmx heap size flag needs to be recalculated).
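As a sketch of how such a per-resource policy might be expressed in a Pod spec: the resizePolicy, resourceName, and restartPolicy field names here are assumptions for illustration (only the RestartNotRequired and Restart values appear above), and the exact alpha API shape may differ.

```yaml
# Hypothetical Pod spec fragment: resize CPU in place, but restart
# the container when memory changes (e.g. a JVM whose -Xmx flag
# must be recalculated).
apiVersion: v1
kind: Pod
metadata:
  name: resize-demo
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    resizePolicy:
    - resourceName: cpu
      restartPolicy: RestartNotRequired   # the default: no restart needed
    - resourceName: memory
      restartPolicy: Restart              # force the container down and back up
    resources:
      requests:
        cpu: "500m"
        memory: "512Mi"
```

Editing the resources: map on such a running Pod would then trigger the Kubelet flow described above rather than an API error.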
One feature notably absent for now is resize-driven eviction. The Kubelet won’t handle it automatically for now, but if an external system does the Pod removals it will react appropriately. This is planned for future discussion along with more workflow improvements and integration with Pod-scoped resources as that feature develops.
--prune mode was added to kubectl way back in 1.15. Back in 2019 it was hoped that the feature could be implemented in a simple and minimalist fashion. Sadly, as with so many things, the simple approach turned out to be full of difficult edge cases and stumbling blocks for users. To help get things back on track, a new tracking mechanism called ApplySets has been added to power it. An ApplySet tracks which objects are part of which named set so that a future kubectl apply of that set can efficiently and correctly handle pruning. Each ApplySet is tracked as an API object (a Secret by default) with all the metadata needed for prune-tracking without killing performance. This is similar in essence to Helm manifest objects, though with a very different, substantially streamlined implementation. The ApplySet format is also laid out as a community standard so that other tools such as Helm or kpt can interoperate with it.
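To make the parent-object idea concrete, here is a sketch of what such a tracking Secret might look like. The applyset.kubernetes.io/* label and annotation names are recalled from the ApplySet specification rather than taken from the text above, and the name and values are placeholders; treat all of them as assumptions.

```yaml
# Hypothetical ApplySet parent object (a Secret by default).
apiVersion: v1
kind: Secret
metadata:
  name: my-app            # the unique ApplySet name
  labels:
    # marks this object as an ApplySet parent
    applyset.kubernetes.io/id: applyset-demo-id-v1
  annotations:
    # which tool manages this set
    applyset.kubernetes.io/tooling: kubectl/v1.27
    # group-resources of member objects, so pruning can look up
    # candidates without listing every resource type
    applyset.kubernetes.io/contains-group-resources: deployments.apps,services
```

This metadata is what lets a later apply find and prune members of the set without killing performance.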
For now this feature is behind an alpha flag; export KUBECTL_APPLYSET=true to play with it. You’ll need to provide a unique ApplySet name, and enable pruning mode if desired.
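Putting those pieces together, usage might look like the following sketch. The --applyset flag spelling, the my-app name, and the ./manifests/ directory are assumptions based on the alpha as described above.

```shell
# opt in to the alpha ApplySet-based pruning
export KUBECTL_APPLYSET=true

# apply a directory of manifests, tracking them under ApplySet "my-app"
kubectl apply -f ./manifests/ --prune --applyset=my-app

# after deleting a manifest from ./manifests/, re-running the same
# command prunes the corresponding live object from the cluster
kubectl apply -f ./manifests/ --prune --applyset=my-app
```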
--config flag is optional for kubeadm
resource.claims in PVC specs as N/A
Tons of Test Cleanup: implement private registry test images, fix Int types, fix and lint gomega usage, remove deprecated device test, remove vSphere variable, kubectl debug unit tests, drop TestListDeprecated, use OpenApi fake client, revise import restrictions, OOM-killed test
Last Week In Kubernetes Development (LWKD) is a product of multiple contributors participating in Kubernetes SIG Contributor Experience. All original content is licensed Creative Commons Share-Alike, although linked content and images may be differently licensed. LWKD does collect some information on readers, see our privacy notice for details.
You may contribute to LWKD by submitting pull requests or issues on the LWKD GitHub repo.