Stay up-to-date on Kubernetes development in 15 minutes a week.
Code Freeze is coming, as is a mega-long list of merged changes and graduations.
The python-client developers are considering making major breaking changes, mainly for OpenAPI code. Add your feedback.
Next Deadline: Code Freeze, Tuesday November 8th
Code Freeze is coming! This Tuesday at 5pm PST (November 9th 0100 UTC), we put a hold on all 1.26 merges that aren’t fixing failing tests. So get your last tweaks in! From here the deadlines are:
Patch releases are due out November 9th.
Pod objects have a fairly simple life cycle for their first few milliseconds: the API object is created, a scheduler sees it and tries to find a place to put it, and if a node is found, the kubelet on that node wakes up and does Complicated Things. But what if you want to do some preparatory steps before all of this? Previously the only option was to write a validating webhook that rejected the create in the first place, and arrange for the client to try again later when the condition might be met. This PR adds a better way,
schedulingGates. This Spec field works kind of like finalizers, but in reverse: when the scheduler sees a new unscheduled Pod, it checks whether the
schedulingGates array is empty. If it’s not empty, no scheduling happens, and the check repeats any time the array changes. This means you can use a mutating webhook on the create to inject a gate, wait until your condition to proceed is met, and then remove the gate to let things continue as before. This opens up a lot of interesting possibilities for things like autoscaling, scheduler QoS, or fancy quota enforcement. If any of those sound up your alley, be sure to check out this alpha feature!
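As a sketch, a gated Pod might look like the following; the gate name is hypothetical, and whatever controller injected it would be responsible for clearing the entry once its condition is satisfied:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gated-pod
spec:
  # Hypothetical gate name; the scheduler skips this Pod
  # until the schedulingGates array is empty
  schedulingGates:
  - name: example.com/wait-for-quota
  containers:
  - name: app
    image: registry.k8s.io/pause:3.8
```

Once the controller removes the gate (for example, with a patch that empties the array), scheduling proceeds as usual.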
kubectl wait is one of the mainstays of “I just need to shim this one thing” shell scripts. Previously the
--for=condition mode would behave as expected if the condition didn’t yet exist: it would keep waiting until the condition both existed and matched the requested state. But with
--for=jsonpath, often used for sequencing load balancer and ingress setup, it would exit with an error if the path didn’t already exist. Now this behavior is unified so
wait will stick around until the condition is met or the timeout expires, in both cases. This may fix some silent bugs in your scripts, or it might be a good excuse to clean up any extra retry logic you built as a workaround.
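For example, a script can now rely on a single wait in both modes (resource names and timeouts here are illustrative):

```shell
# Wait on a condition that may not exist yet (this always worked)
kubectl wait --for=condition=Ready --timeout=60s pod/my-pod

# Wait on a jsonpath match; as of 1.26 this also waits when the
# path does not exist yet, instead of exiting with an error
kubectl wait --for=jsonpath='{.status.phase}'=Running --timeout=60s pod/my-pod
```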
A potentially-breaking change for some folks: if multiple HPAs are configured against the same Pod, both will disable themselves and require you to fix the configuration. Currently they would both apply, meaning whichever ran last would “win”. This does mean that when upgrading to 1.26, you should check for any such errors, as your HPAs might have silently disabled themselves. You can also get a head start on this now by checking for overly-broad selectors on your HPAs.
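One illustrative way to spot candidates ahead of time is to list each HPA’s scale target and look for duplicates (column layout is a sketch, not an official recipe):

```shell
# List every HPA and the workload it targets; two rows with the same
# namespace/kind/target indicate overlapping HPAs
kubectl get hpa --all-namespaces -o custom-columns=\
'NAMESPACE:.metadata.namespace,HPA:.metadata.name,KIND:.spec.scaleTargetRef.kind,TARGET:.spec.scaleTargetRef.name'
```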
Test Overhaul: reusing/caching tests POC, naming of storage tests, code of storage tests, APIserver validation, NodeInclusionPolicy benchmarking, scheduler performance tests, large indexed job test, kubeadm reset, readWriteOncePod scheduling, APIserver tracing, disable cloud provider for tests, enabling NodeInclusionPolicy, podContainerManager, and improve formatting of e2e test output
Last Week In Kubernetes Development (LWKD) is a product of multiple contributors participating in Kubernetes SIG Contributor Experience. All original content is licensed Creative Commons Share-Alike, although linked content and images may be differently licensed. LWKD does collect some information on readers, see our privacy notice for details.
You may contribute to LWKD by submitting pull requests or issues on the LWKD GitHub repo.