No meeting this week or next week due to the holidays. We look forward to seeing you all again on January 3rd for the first meeting of 2019!
Next Deadline: 1.14 cycle starts, January 2nd
We’re still in 1.14 team formation and setup. The release team is looking solid, and they are working through any needed policy changes before the official 1.14 kick-off; watch for updates on the mailing list.
Support for the Kustomize YAML manipulation tool has been added to kubectl apply. This will only activate if the folder you are in has a kustomization.yaml file, but if so, you’ll automatically get the YAML patches applied to your files before upload. Check out the Kustomize docs for more information on the tool.
For those who have not used Kustomize, it’s an alternative to (or sometimes a complement to) templating tools like Helm and Ksonnet, which allows for editing YAML documents with patches and overlays rather than direct changes. This enables things like per-team and per-app customizations of shared base objects.
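As a rough sketch of that workflow (file names, prefixes, and label values below are illustrative, not from the PR), a minimal kustomization.yaml might layer per-team naming and labels onto a shared base object and patch it with an overlay:

```yaml
# kustomization.yaml -- picked up automatically by kubectl apply
# when present in the target directory
namePrefix: team-a-            # prefix every generated object name
commonLabels:
  team: team-a                 # stamp a per-team label on all objects
resources:
- deployment.yaml              # the shared base object, unmodified
patchesStrategicMerge:
- replica_count.yaml           # overlay patch, e.g. bump replicas for this team
```

The base deployment.yaml stays untouched, so many teams can share it while each keeps its own small patch files.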
This PR creates a new cloud/node_lifecycle controller to share logic between in-tree and out-of-tree cloud plugins. This will help avoid mismatched behavior as more clusters switch to out-of-tree plugins, and ensure that critical deletion logic remains in-tree where we can all keep an eye on it.
A follow-up to last week’s default enabling of the new node heartbeat subsystem: the Lease API which powers it is now GA. While this is immediately useful for node heartbeats, it can also be used by other code. The hope is that eventually all users of the LeaderElection module in client-go can switch over to the Lease API instead.
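For context, a Lease is a small object recording who holds it and when it was last renewed; a node heartbeat is just a periodic update of renewTime. A sketch of what one looks like (the node name and timestamp below are made up):

```yaml
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: node-1                       # node leases are named after the node
  namespace: kube-node-lease         # dedicated namespace for node heartbeats
spec:
  holderIdentity: node-1             # who currently holds the lease
  leaseDurationSeconds: 40           # how long the lease is valid after renewal
  renewTime: "2019-01-03T12:00:00.000000Z"  # updated on every heartbeat
```

Because updating one tiny object is far cheaper than updating the full Node status, this scales much better on large clusters.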
A short patch, but potentially very useful for large sites that make heavy use of service mesh tools like Istio. If you add a service.kubernetes.io/service-proxy-name label to a Service (the value doesn’t matter as long as the label is present), kube-proxy will ignore the service and its endpoints. The API for the service will be unaffected, but no underlying proxy rules will be configured. If those rules were already redundant with the proxy provided by your service mesh, and your clusters are very large or under heavy churn (or both), this could save quite a few CPU cycles.
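A sketch of what that looks like on a Service (the service name, label value, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ratings                # hypothetical mesh-managed service
  labels:
    # Any value works with the default kube-proxy, which only programs
    # rules for services that have no proxy name set.
    service.kubernetes.io/service-proxy-name: istio
spec:
  selector:
    app: ratings
  ports:
  - port: 80
    targetPort: 8080
```

The Service still exists in the API and gets endpoints as usual; kube-proxy simply skips writing iptables/IPVS rules for it, leaving traffic handling to the mesh.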
And finally, an awesome new visualization from @ibzib: a display of all the Prow jobs related to a given PR. You can check out the view for this PR itself as an example.
Last Week In Kubernetes Development (LWKD) is a product of some members of the Kubernetes project, but is not an official publication of the Kubernetes project or the CNCF. All original content is licensed Creative Commons Share-Alike, although linked content and images may be differently licensed. LWKD does collect some information on readers, see our privacy notice for details.
You may contribute to LWKD by submitting pull requests or issues on the LWKD GitHub repo.