Stay up-to-date on Kubernetes development in 15 minutes a week.
Kubernetes Contributor Summit EU is happening next Tuesday, March 19, 2024. Make sure to register by March 15. If you want to bring a family member to the social, send an email to summit-team@kubernetes.io. We're eagerly looking forward to receiving your contributions to the unconference topics.
Also, don't forget to help your SIG staff its table at the Kubernetes Meet and Greet on KubeCon Friday.
Take a peek at the upcoming Kubernetes v1.30 release in this blog post.
Next Deadline: Draft Doc PRs, Mar 12th
Kubernetes v1.30.0-beta.0 is live!
Your SIG should be working on any feature blogs and discussing which "themes" to feature in the Release Notes.
DRA, or Dynamic Resource Allocation, is a way to bridge new types of schedulable resources into Kubernetes. A common example is GPU accelerator cards, but the system is built as generically as possible. Maybe you want to schedule based on cooling capacity, or cash register hardware, or nearby interns; it's up to you. DRA launched as an alpha feature back in 1.26, but it came with some hard limitations. Notably, the bespoke logic for simulating scale-ups and scale-downs in cluster-autoscaler had no way to understand how those would interact with these opaque resources. This PR pulls back the veil a tiny bit, keeping things generic while allowing more forms of structured interaction so that core tools like the scheduler and autoscalers can understand dynamic resources.
This happens from a few directions. First, on the node itself a DRA driver plugin provides information about what is available locally, which the kubelet publishes as a `NodeResourceSlice` object. In parallel, an operator component from the DRA implementation creates `ResourceClaimParameters` as needed to describe a particular resource claim. The claim parameters include CEL selector expressions for each piece of the claim, allowing anything which can evaluate CEL to check them independently of the DRA plugin. These two new objects combine with the existing `ResourceClaim` object to allow bidirectional communication between Kubernetes components and the DRA plugin without either side needing to wait for the other in most operations.
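To make that concrete, here is a minimal Go sketch of the kind of check those CEL selectors enable, using the cel-go library (the CEL implementation Kubernetes itself builds on). The `attributes` variable and the `gpu.*` attribute names are hypothetical stand-ins for whatever a DRA driver actually publishes; the point is only that any component with a CEL evaluator can test a claim's selector against a resource's attributes.

```go
package main

import (
	"fmt"
	"log"

	"github.com/google/cel-go/cel"
)

func main() {
	// Hypothetical: expose a resource's published attributes to CEL as a
	// plain string map. The real DRA API types are richer than this.
	env, err := cel.NewEnv(
		cel.Variable("attributes", cel.MapType(cel.StringType, cel.StringType)),
	)
	if err != nil {
		log.Fatal(err)
	}

	// An illustrative selector expression, like one a claim's parameters
	// might carry (not copied from a real ResourceClaimParameters object).
	ast, iss := env.Compile(`attributes["gpu.vendor"] == "example.com" && attributes["gpu.memory"] == "80Gi"`)
	if iss != nil && iss.Err() != nil {
		log.Fatal(iss.Err())
	}
	prg, err := env.Program(ast)
	if err != nil {
		log.Fatal(err)
	}

	// Evaluate the selector the way a scheduler or autoscaler could,
	// without calling back into the DRA plugin.
	out, _, err := prg.Eval(map[string]any{
		"attributes": map[string]string{"gpu.vendor": "example.com", "gpu.memory": "80Gi"},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("selector matches:", out.Value()) // true
}
```

Because the check is just expression evaluation over published data, components can simulate placement decisions independently, which is exactly what cluster-autoscaler's scale-up and scale-down logic was missing.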
While this does increase the implementation complexity of a new DRA provider, it also dramatically expands its capabilities. New resources can be managed with effectively zero overhead and without the even greater complexity of custom schedulers or a plugin-driven autoscaler.
This KEP proposes updating the kube-apiserver to allow tracing requests. This would be done with the OpenTelemetry libraries, with the data exported in the OpenTelemetry format. The kube-apiserver currently uses kubernetes/utils/trace for tracing, but distributed tracing would improve ease of use and make the data easier to analyze. The proposed implementation involves wrapping the API server's HTTP server and HTTP clients with otelhttp.
This KEP is tracked to graduate to stable in the upcoming v1.30 release.
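The otelhttp wrapping pattern the KEP describes looks roughly like the following sketch. This is not the apiserver's actual wiring; the mux, endpoint, and operation name are invented for illustration, and a real deployment would also register an exporting TracerProvider (the default global provider makes these spans no-ops).

```go
package main

import (
	"log"
	"net/http"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)

func main() {
	// Server side: the wrapper starts a span for each incoming request,
	// extracting any trace context from the request headers.
	mux := http.NewServeMux()
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	handler := otelhttp.NewHandler(mux, "example-server")

	// Client side: the wrapper creates a span per outgoing request and
	// injects trace context into its headers, linking client and server spans.
	client := &http.Client{Transport: otelhttp.NewTransport(http.DefaultTransport)}
	_ = client

	// A real deployment would configure an OTLP exporter before serving.
	log.Fatal(http.ListenAndServe(":8080", handler))
}
```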
- Added a `custom` flag to `kubectl debug` for adding custom debug profiles.
- `kubectl get jobs` now prints the status of the listed jobs.
- Fixed a bug with restart policy `Always` where the kubelet couldn't update a Pod's state from terminated to non-terminated.
- The `StorageVersionMigration` API, which was previously available as a CRD, is now a built-in API.

Last Week In Kubernetes Development (LWKD) is a product of multiple contributors participating in Kubernetes SIG Contributor Experience. All original content is licensed Creative Commons Share-Alike, although linked content and images may be differently licensed. LWKD does collect some information on readers; see our privacy notice for details.
You may contribute to LWKD by submitting pull requests or issues on the LWKD GitHub repo.