pouet.chapril.org is one of the many independent Mastodon servers you can use to participate in the fediverse.
Chapril https://www.chapril.org is a project of April https://www.april.org

Administered by:

Server statistics:

1.1K active accounts

#k8s

4 posts · 4 participants · 2 posts today
www.linkedin.com · #kubernetes #devops #dns #observability #performance #networking #kubedns… | Adam Danko | 24 comments

The Kubernetes default that cost us performance:

In Kubernetes, 'ndots:5' is set by default in '/etc/resolv.conf'. This means every hostname with fewer than 5 dots goes through every search domain before even trying the actual FQDN. So when your app tries to resolve 'example.com', it might actually generate multiple DNS queries like:
'example.com.svc.cluster.local'
'example.com.google.internal'
'example.com.cluster.local'
'example.com'
Each failed lookup = latency, DNS noise, and pointless retries.

🔍 As Tim Hockin (Kubernetes co-founder) explained back in 2016: “This is a tradeoff between automagic and performance.” The default 'ndots:5' isn’t about optimization - it’s about making things “just work” across SRV records, multi-namespace service lookups, and what were then called PetSets (now StatefulSets), even if it means triggering multiple DNS lookups before hitting the actual domain. So yes - it comes at a performance cost.

✅ What are the possible workarounds?
- Use FQDNs ("my-service.my-namespace.svc.cluster.local.") - don't forget the trailing dot to skip search paths
- Lower the 'ndots' value with dnsConfig at the pod level, or at a wider level using policy engines (Gatekeeper, Kyverno)
- Reduce unnecessary search domains in your cluster setup

🔎 Real-world impact: after lowering ndots, we saw a clear drop in both conntrack table usage and application latency, confirming the reduction in DNS query volume and retries. (Image attached - the green, yellow, and blue lines are the nodes with kube-dns on them.) The impact is most noticeable if your workloads involve:
- Low-latency demands
- Constant DNS resolution

👉 Have you tuned your DNS settings, or just lived with the default? What other Kubernetes defaults have surprised you in production? (Source of Tim's comment: https://lnkd.in/dBVDeCCD)

#kubernetes #devops #dns #observability #performance #networking #kubedns #coredns #openshift | 24 comments on LinkedIn
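A minimal sketch of the pod-level dnsConfig workaround described above; the pod name, image, and the ndots value of "2" are illustrative assumptions rather than details from the post:

```yaml
# Sketch: override the default ndots:5 for one pod via dnsConfig.
# Pod name, image, and the value "2" are placeholders, not from the post.
apiVersion: v1
kind: Pod
metadata:
  name: dns-tuned-app
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest
  dnsConfig:
    options:
      - name: ndots
        value: "2"   # names with 2+ dots are tried as absolute queries before search-domain expansion
```

With this in place, a name like 'api.example.com' (two dots) is resolved as an absolute query first, while single-label and one-dot names still go through the search list.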

Thank y'all for the first day of #Rejekts2025 with great talks and inspiring conversations!

I am excited that I got a spot for the #LightningTalks.
Looking forward to presenting #Kubenix, a tool that leverages #NixOS modules to declare #K8s workloads fully declaratively.
I will also show how its #Helm integration effectively bridges the #CloudNative and #Nix ecosystems, while additionally offering type safety.

See you at 18:15 in the #TheNash hall!

I don't use containerization (#docker, #k8s, or whatever) on my servers; I only use distro packages or the sources of the apps I want to install... the old way, basically.
Do dockerized applications need more resources, or is the overhead insignificant?
Usually, I set up small servers.

The new k8s bug has a lame name: IngressNightmare. *sigh* Where's the clever wordplay?

Too many people are simply appending "Nightmare" to the name of the attack surface these days... might as well get an LLM to name them if we're just going to keep copying the last bug's name.

Anyway, you can read about it here:

wiz.io/blog/ingress-nginx-kube

#threatintel, #k8s

wiz.io · Remote Code Execution Vulnerabilities in Ingress NGINX | Wiz Blog
Wiz Research uncovered RCE vulnerabilities (CVE-2025-1097, 1098, 24514, 1974) in Ingress NGINX for Kubernetes allowing cluster-wide secret access.

Made my first #Go contribution. It's very basic, but it's on the Helm project.
It will help people easily see how long they have been waiting for their #k8s resources to become ready.
It will be useful when "helm upgrade --wait" is taking too long.
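For context, a hedged sketch of the kind of invocation where that wait can drag on; the release and chart names are placeholders, while --wait and --timeout are existing helm upgrade flags:

```sh
# --wait blocks until the upgraded resources report ready;
# --timeout caps how long helm will wait (names below are placeholders).
helm upgrade my-release ./my-chart --wait --timeout 10m
```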

More to come.

Incredibly stupid question potentially, but does Kubernetes have some sort of "look at the current Pods and potentially reschedule them on other nodes" trigger? I just had a situation where, after a sequential node restart, two nodes were completely full, to the point of having some Pods pending, while the last node to be rebooted still had CPU free. But to make use of that capacity, some Pods on the full nodes would need to be rescheduled.

1/2