Stay up-to-date on Kubernetes development in 15 minutes a week.
Kubernetes version 1.26 “Electrifying” is now available. The biggest things in this release are the number of features reaching maturity/GA and the number of obsolete APIs and deprecated features being removed. Make sure you read the release notes, and big congrats to everyone who worked on it!
The names of the presubmit tests have been changed to better reflect what they actually test.
David Tesar is proposing to add workflow labelling for Kubernetes PRs, with the idea of allowing people to check status less often. Please provide feedback so that we can build something that actually works. ContribEx would also like to discuss retaining LGTM through squashes in order to reduce repeat-LGTMs.
Remember to switch your image source to registry.k8s.io.
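For manifests that still pin images to the old registry, a plain find-and-replace covers the common case. This is a minimal sketch; the directory layout and sample manifest below are fabricated for the example, so check your own charts and configs too:

```shell
# Fabricated sample manifest pinned to the old registry.
mkdir -p manifests
cat > manifests/pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pause
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.9
EOF

# Rewrite every reference to the old registry in place, keeping .bak backups.
grep -rl 'k8s.gcr.io' manifests | while read -r f; do
  sed -i.bak 's#k8s\.gcr\.io#registry.k8s.io#g' "$f"
done

# The image line now points at registry.k8s.io instead of k8s.gcr.io.
grep 'image:' manifests/pod.yaml
```

The `.bak` backups make it easy to diff and revert; remember that mirrored or air-gapped registries and images baked into node AMIs need their own migration path.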
Next Deadline: 1.27 Release Cycle begins, January 9, 2023
1.26.0 is released (see above). One of the last changes to come out of it modifies the fast-forward process for the release branch. Now we’re in the “Free Space” between releases, so if you feel like hacking over the holidays, mess around as you please.
Patch releases 1.25.5, 1.24.9, 1.23.15, and 1.22.17 are out and include Golang updates that fix known security holes. Update as soon as you can.
This is the very last patch release for 1.22, so if you are running on 1.22 or earlier you need to be working on upgrading.
As we’ve discussed previously, the old leader-election implementations backed by ConfigMaps and Endpoints are saying their last goodbyes. Today this extends to the control plane services as well. While 1.26 and earlier allow transitioning via the hybrid configmapsleases and endpointsleases lock modes, 1.27 and up will only allow the Lease-only backend. If you’re planning out your 1.26 upgrade in the near future and haven’t already gotten on the Leases train, now is definitely the time to add that to your checklist.
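If you run your own control plane, one quick checklist item is to confirm that kube-controller-manager and kube-scheduler aren’t pinned to a soon-to-be-removed lock type via --leader-elect-resource-lock. A minimal sketch, using a fabricated manifest excerpt (on kubeadm clusters the real static pod manifests live under /etc/kubernetes/manifests):

```shell
# Fabricated excerpt of a kube-controller-manager static pod manifest.
cat > kube-controller-manager.yaml <<'EOF'
spec:
  containers:
  - command:
    - kube-controller-manager
    - --leader-elect=true
    - --leader-elect-resource-lock=leases
EOF

# Anything other than "leases" here needs attention before a 1.27 upgrade.
# Prints: leader-elect-resource-lock=leases
grep -o 'leader-elect-resource-lock=[a-z]*' kube-controller-manager.yaml
```

If the flag is absent entirely, you’re on the component’s default, which is already Leases in recent releases.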
The pprof system is an invaluable source of data when looking at the performance of just about any Kubernetes component. Unfortunately, as a network service, exposing it brings certain risks, especially for denial-of-service attacks, since gathering a sample is a relatively intensive process. Of course there are many ways to lock down network services, from firewalls to loopback adapters to authenticating proxies, but one of the oldest and safest is the humble Unix domain socket. These can be secured using all the tools available for filesystem access control, which is generally faster and more comprehensive than network tooling. This PR adds a --debug-socket-path option to kube-apiserver which launches the same pprof server, but on a domain socket at the provided path. Unfortunately go tool pprof doesn’t (yet?) support domain sockets itself, so if you want to try out this feature, the best approach for now is to use curl --unix-socket to download a dataset and then analyze it elsewhere with go tool pprof.
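Putting that workflow together, a sketch of the two-step fetch-then-analyze dance. The socket path here is hypothetical: use whatever you passed to --debug-socket-path, and note that the hostname in the URL is just a placeholder, since curl routes the request over the socket:

```shell
# Hypothetical socket path from kube-apiserver --debug-socket-path.
SOCKET="/var/run/kubernetes/debug.sock"

if [ -S "$SOCKET" ]; then
  # Fetch a heap profile over the domain socket; the URL host is ignored.
  curl --unix-socket "$SOCKET" -o heap.pb.gz "http://localhost/debug/pprof/heap"
  # Analyze the downloaded profile locally, non-interactively.
  go tool pprof -top heap.pb.gz
else
  echo "no debug socket at $SOCKET; is kube-apiserver running with --debug-socket-path?"
fi
```

The same pattern works for the other pprof endpoints, such as /debug/pprof/profile for CPU samples.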
Open the Floodgates! All of the below are PRs against 1.27 which have been waiting on Code Freeze, unless otherwise noted.
--container-runtime, system-node-critical pods, better retries
kubectl scale, which also fixes dry-run
kubectl-convert is statically linked
Testing Overhaul: deflake a preemption test, better cache & heap coverage, refactor FieldManager and make it generic, add DaemonSet rolling update test, fix test for dual-stack, test coverage for controller-manager, test coverage for apiserver, add CRD validating tests, SCTP e2e tests
nodes/spec is unused by RBAC, so it was removed
Last Week In Kubernetes Development (LWKD) is a product of multiple contributors participating in Kubernetes SIG Contributor Experience. All original content is licensed Creative Commons Share-Alike, although linked content and images may be differently licensed. LWKD does collect some information on readers, see our privacy notice for details.
You may contribute to LWKD by submitting pull requests or issues on the LWKD GitHub repo.