
Last Week in Kubernetes Development

Stay up-to-date on Kubernetes development in 15 minutes a week.


Week Ending February 16, 2020

Developer News

Some program details are up for the Amsterdam Contributor Summit; register soon because it’ll probably fill up.

SIG-Storage has chosen two leads and two tech leads, giving the SIG broader and more resilient leadership.

The LTS Working Group is proposing to extend the patch support period to a full year, starting with release 1.19. The KEP is open, so now’s the time to comment for, against, or with any “don’t forgets.”

Release Schedule

Next Deadline: Beta and Branch Creation, Feb 18th

On Tuesday 1.18 becomes its own versioned branch for you to target. Code Freeze is March 5.

1.17.3, 1.16.7, and 1.15.10 were all updated on February 11th. The next patch releases have not yet been scheduled.

#87952: add *Options to Create, Update, and Patch in generated clientsets

Starting off with a change that is both very small and very large. The client code generators have been tweaked to support passing through an options struct to Create(), Update(), and Patch() calls. This will allow for passing in things like dry-run create flags or force-apply for server-side apply patches. In practical terms, this mostly means adding a lot of metav1.CreateOptions{} and similar to existing code once you update to 1.18. If you don’t need to pass in any options, an empty struct will be equivalent to the current behavior.
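As a sketch of what the migration looks like at a call site (clientset construction omitted; `clientset` is assumed to be a typed client-go clientset):

```go
// Hypothetical call sites, sketched against the 1.18 generated clientsets.
// Before this change, Create took only the object:
//
//     pod, err := clientset.CoreV1().Pods("default").Create(pod)
//
// After it, you pass an options struct; an empty one keeps the old behavior:
pod, err := clientset.CoreV1().Pods("default").Create(pod, metav1.CreateOptions{})

// The point of the plumbing: options like dry-run become expressible per call.
pod, err = clientset.CoreV1().Pods("default").Create(pod, metav1.CreateOptions{
	DryRun: []string{metav1.DryRunAll},
})
```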

#80724: Provide OIDC discovery for service account token issuer

Currently the state of the art for authenticating service-to-service communication with plain Kubernetes is to have one service send an unprivileged service account token, which the receiving service then checks using the TokenReview API. This can result in a lot of TokenReview calls, making kube-apiserver a bottleneck for the whole process. But those tokens are signed JWTs, and the signature can be checked by anyone who knows the right public key. This new API publishes that information and, in the spirit of compatibility, is structured to look like an OIDC provider. It’s not a complete OIDC implementation, but it matches the standard for the few endpoints needed for this key-verification check. This will allow a much higher volume of service-to-service checks without impacting the API server.

#87923: Collect some of scheduling metrics and scheduling throughput (vol. 2)

It took a few attempts, but this PR adds some new metrics to the scheduler to track internal performance. scheduler_e2e_scheduling_duration_seconds can give an overall view of how long scheduling new pods is taking, while the segment metrics are there to track down hotspots if performance begins to drop. If you are running up against scheduler slowness, definitely check these out.
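If Prometheus is scraping the scheduler’s metrics endpoint, a query along these lines (assuming the new metric is exposed as a histogram, so a `_bucket` series exists) surfaces tail scheduling latency:

```promql
# 99th-percentile end-to-end pod scheduling latency over the last 5 minutes
histogram_quantile(0.99,
  sum(rate(scheduler_e2e_scheduling_duration_seconds_bucket[5m])) by (le))
```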

#88105: Graduate PodTopologySpread to Beta

And finally a well-deserved feature graduation. The topologySpreadConstraints system allows much more fine-grained control over how pods are distributed across failure domains such as zones and nodes. This is functionally similar to the old PodAffinity/PodAntiAffinity, but with more detailed options. Check out https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/ and look forward to trying it out as a Beta API in 1.18!
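As a sketch of what the Beta API looks like in a pod spec (the names, labels, and image below are placeholders, and topology.kubernetes.io/zone assumes your nodes carry the standard zone label):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spread-demo
  labels:
    app: spread-demo
spec:
  topologySpreadConstraints:
  - maxSkew: 1                               # zones may differ by at most one matching pod
    topologyKey: topology.kubernetes.io/zone # spread across zones
    whenUnsatisfiable: DoNotSchedule         # hard rule; ScheduleAnyway makes it a soft preference
    labelSelector:
      matchLabels:
        app: spread-demo
  containers:
  - name: app
    image: nginx   # placeholder container
```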

Other Merges

Deprecated

Version Updates

Last Week In Kubernetes Development (LWKD) is a product of multiple contributors participating in Kubernetes SIG Contributor Experience. All original content is licensed Creative Commons Share-Alike, although linked content and images may be differently licensed. LWKD does collect some information on readers, see our privacy notice for details.

You may contribute to LWKD by submitting pull requests or issues on the LWKD GitHub repo.