Happy Independence Day, Americans! Short issue today because of the American holiday.
Next Deadline: Code Freeze, July 8th
Code Freeze is this Thursday. We’ll also release 1.22 Beta 1 on the same day, and placeholder Doc PRs will be due Friday. So, maybe do the easy thing and open your placeholder PRs now?
For the hard thing: here’s the CI Report, which is not looking good for Code Freeze week. We have a failing job in Master-Blocking, plus 5 flaking jobs, and 12 failing or flaking jobs in Master-Informing. All of those will need attention in the next week, so if you can help with a failing or flaky test, please do so ASAP.
Cherry-picks for the next round of patch releases are due this Friday as well.
A common issue with internal Services in Kubernetes is that they use a cluster-wide routing mesh via kube-proxy. This is great for dealing with node-level reliability issues, but not so fun for network performance or network failure isolation. In simple terms, there are cases where picking endpoints at random is unhelpful, and we want our kube-proxy routing to understand the overall system topology and use that to make smarter decisions. In recent years we’ve seen several iterations on this idea. The first was the now-deprecated ServiceTopology feature gate, which allowed specific topology labels in a Service to guide the routing. This ended up being difficult to maintain and, as mentioned, is pending removal in 1.22. Learning from that attempt, 1.21 added two alpha features related to service routing: internalTrafficPolicy and TopologyAwareHints. The first of those is now being promoted to beta for wider testing.
The new feature can be enabled by setting internalTrafficPolicy: Local on a Service. This sets up kube-proxy on each node to only consider Endpoints from that node; on a node which has no matching Pods, the Service will behave as if there are none. This only helps with some of the topology-routing use cases, but it is very much worth looking at for those. The most common example is running something as a DaemonSet and making each request from a node end up in the DaemonSet Pod on that node. It can also be combined with scheduling affinity settings for slightly looser coupling of related services (compared to using multi-container Pods).
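As a minimal sketch of the DaemonSet use case (the Service name, labels, and port here are placeholder assumptions, not from the release notes):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: node-local-cache    # hypothetical example name
spec:
  selector:
    app: cache              # assumes a DaemonSet whose Pods carry this label
  ports:
    - port: 6379
      targetPort: 6379
  # Only route to Endpoints on the same node as the client;
  # if the node has no matching Pod, the Service behaves as if empty.
  internalTrafficPolicy: Local
```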
And for all the other use cases, definitely check out the topology aware hints system. It may have a little longer to spend in alpha, but it should hopefully cover the more complex behaviors in the future.
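For comparison, topology aware hints are (as of the 1.21 alpha) opted into per Service via an annotation rather than a spec field. A rough sketch, with placeholder names, assuming the feature gate is enabled on the cluster:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-backend         # hypothetical example name
  annotations:
    # Ask the EndpointSlice controller to populate zone hints that
    # kube-proxy can use to prefer endpoints in the client's zone.
    service.kubernetes.io/topology-aware-hints: "auto"
spec:
  selector:
    app: web                # placeholder label
  ports:
    - port: 80
      targetPort: 8080
```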
net.ipv4.ip_unprivileged_port_start was added to the safe sysctls list so that Pods can bind low ports
kubectl debug is backwards-compatible with older Kubernetes versions
kubectl top pod lets you select a field
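To illustrate the first of those, here’s a sketch of a Pod using the newly-safe sysctl so an unprivileged process can bind port 80 (the Pod name and image are placeholder assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: low-port-web        # hypothetical example name
spec:
  securityContext:
    sysctls:
      # Allow unprivileged processes in this Pod to bind any port,
      # including 80/443, without CAP_NET_BIND_SERVICE.
      - name: net.ipv4.ip_unprivileged_port_start
        value: "0"
  containers:
    - name: web
      image: nginx          # placeholder image
      ports:
        - containerPort: 80
```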
Last Week In Kubernetes Development (LWKD) is a product of some members of the Kubernetes project, but is not an official publication of the Kubernetes project or the CNCF. All original content is licensed Creative Commons Share-Alike, although linked content and images may be differently licensed. LWKD does collect some information on readers, see our privacy notice for details.
You may contribute to LWKD by submitting pull requests or issues on the LWKD GitHub repo.