Stay up-to-date on Kubernetes development in 15 minutes a week.
Some program details are up for the Amsterdam Contributor Summit; register soon because it’ll probably fill up.
SIG-Storage has chosen two leads and two tech leads, giving the group broader, more resilient leadership.
The LTS Working Group is proposing to extend the patch period to a full year starting with release 1.19. The KEP is open, so now’s your time to comment for, against, or with “don’t forgets.”
Next Deadline: Beta and Branch Creation, Feb 18th
On Tuesday 1.18 becomes its own versioned branch for you to target. Code Freeze is March 5.
1.17.3, 1.16.7, and 1.15.10 were all updated on February 11th. The next patch releases have not yet been scheduled.
Starting off with a change that is both very small and very large. The client code generators have been tweaked to support passing an options struct through to `Create()`, `Update()`, and `Patch()` calls. This will allow passing in things like dry-run flags on create, or force-apply for server-side apply patches. In practical terms, this mostly means adding a lot of `metav1.CreateOptions{}` and similar to existing code once you update to 1.18. If you don't need to pass in any options, an empty struct is equivalent to the current behavior.
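To illustrate the new call shape, here is a minimal, self-contained sketch. The `CreateOptions` struct and the `podClient` type below are toy stand-ins (the real types live in `k8s.io/apimachinery` and the generated clientsets); only the pattern of threading an options struct through `Create()` is what the change introduces.

```go
package main

import "fmt"

// Toy stand-in for metav1.CreateOptions; the real struct has more fields.
type CreateOptions struct {
	DryRun []string // e.g. []string{"All"} requests a server-side dry run
}

type Pod struct{ Name string }

// Toy stand-in for a generated client. In 1.18, generated clients gain an
// options parameter on Create/Update/Patch.
type podClient struct{}

func (c podClient) Create(pod *Pod, opts CreateOptions) (*Pod, error) {
	if len(opts.DryRun) > 0 {
		fmt.Printf("dry-run create of %s (not persisted)\n", pod.Name)
		return pod, nil
	}
	fmt.Printf("created %s\n", pod.Name)
	return pod, nil
}

func main() {
	c := podClient{}
	// Passing an empty struct keeps the pre-1.18 behavior.
	c.Create(&Pod{Name: "web"}, CreateOptions{})
	// Or opt in to new behavior, such as a server-side dry run.
	c.Create(&Pod{Name: "web"}, CreateOptions{DryRun: []string{"All"}})
}
```

The mechanical upgrade work is just the first form: appending an empty options struct to every existing call.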
Currently, the state of the art for authenticating service-to-service communication with plain Kubernetes is to have one service send an unprivileged service account token, which the receiving service then checks using the TokenReview API. This can result in a lot of TokenReview calls, making kube-apiserver a bottleneck for the whole process. Those tokens are signed JWTs, though, and the signature can be checked by anyone who knows the right public key. This new API provides that information and, in the spirit of compatibility, is structured like an OIDC provider. It's not a complete OIDC implementation, but it matches the standard for the few endpoints needed for this key-verification check. This will allow a much higher volume of service-to-service checks without impacting the API server.
It took a few attempts, but this PR adds some new metrics to the scheduler to track internal performance. `scheduler_e2e_scheduling_duration_seconds` gives an overall view of how long scheduling new pods is taking, while the per-segment metrics are there to track down hotspots if performance begins to drop. If you are running up against scheduler slowness, definitely check these out.
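As a quick sketch of how you might watch the new end-to-end metric, a standard Prometheus quantile query over its histogram buckets would look something like this (the 5-minute window and 99th percentile are arbitrary choices for illustration):

```promql
histogram_quantile(0.99,
  sum(rate(scheduler_e2e_scheduling_duration_seconds_bucket[5m])) by (le))
```

If that number starts climbing, the per-segment metrics from the same PR are where to look next.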
And finally, a well-deserved feature graduation. The `topologySpreadConstraints` system allows much more fine-grained control over how pods are distributed across the cluster's topology domains, such as nodes or zones. This is functionally similar to the old `PodAffinity`/`PodAntiAffinity`, but with more detailed options. Check out https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/ and look forward to trying it out as a Beta API in 1.18!
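As a small illustration of the spirit of the feature, a pod spec along these lines asks the scheduler to keep matching pods evenly spread across zones (the labels and image here are placeholders; see the linked docs for the authoritative field reference):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-1
  labels:
    app: web
spec:
  topologySpreadConstraints:
  - maxSkew: 1                                  # zones may differ by at most one matching pod
    topologyKey: topology.kubernetes.io/zone    # spread across zones
    whenUnsatisfiable: DoNotSchedule            # hard constraint; ScheduleAnyway makes it soft
    labelSelector:
      matchLabels:
        app: web
  containers:
  - name: web
    image: nginx
```

The `maxSkew`/`whenUnsatisfiable` knobs are where the extra expressiveness over plain anti-affinity comes from.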
- `--dry-run=server` everywhere
- `kubeadm config images list` has structured output in multiple formats
- `kubectl describe` returns "no resources found" just like `get` does (congrats to Brian Pursley on his first merged PR)
- `FilteredNodeStatuses`
- `HardPodAffinitySymmetricWeight` removed from configs in favor of `PluginConfig`
- The `kubeadm upgrade node config` command will be removed in v1.18

Last Week In Kubernetes Development (LWKD) is a product of multiple contributors participating in Kubernetes SIG Contributor Experience. All original content is licensed Creative Commons Share-Alike, although linked content and images may be differently licensed. LWKD does collect some information on readers, see our privacy notice for details.
You may contribute to LWKD by submitting pull requests or issues on the LWKD github repo.