pouet.chapril.org is one of the many independent Mastodon servers you can use to participate in the fediverse.
Chapril https://www.chapril.org is a project of April https://www.april.org

Server statistics: 1.1K active accounts

#k8s

5 posts · 4 participants · 0 posts today
www.linkedin.com · Adam Danko · 24 comments on LinkedIn

The Kubernetes default that cost us performance: ***

In Kubernetes, 'ndots:5' is set by default in '/etc/resolv.conf'. This means every hostname with fewer than 5 dots goes through every search domain before the actual FQDN is even tried. So when your app tries to resolve 'example.com', it might actually generate multiple DNS queries like:

'example.com.svc.cluster.local'
'example.com.google.internal'
'example.com.cluster.local'
'example.com'

Each failed lookup = latency, DNS noise, and pointless retries.

🔍 As Tim Hockin (Kubernetes co-founder) explained back in 2016: “This is a tradeoff between automagic and performance.” The default 'ndots:5' isn’t about optimization - it’s about making things “just work” across SRV records, multi-namespace service lookups, and what were then called PetSets (now StatefulSets), even if that means triggering multiple DNS lookups before hitting the actual domain. So yes - it comes at a performance cost.

✅ What are the possible workarounds?
- Use FQDNs ("my-service.my-namespace.svc.cluster.local.") - don’t forget the trailing dot to skip the search paths
- Lower the 'ndots' value with dnsConfig at the pod level (see the sketch after this post), or at a wider level using policy engines (Gatekeeper, Kyverno)
- Reduce unnecessary search domains in your cluster setup

🔎 Real-world impact: after lowering ndots, we saw a clear drop in both conntrack table usage and application latency - confirming the reduction in DNS query volume and retries. (Image attached - the green, yellow, and blue lines are the nodes with kube-dns on them.)

The impact is most noticeable if your workloads involve:
- Low-latency demands
- Constant DNS resolution

👉 Have you tuned your DNS settings - or just lived with the default? What other Kubernetes defaults have surprised you in production?

(Source of Tim's comment: https://lnkd.in/dBVDeCCD)

#kubernetes #devops #dns #observability #performance #networking #kubedns #coredns #openshift
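A minimal sketch of the pod-level dnsConfig workaround described in the post above. The pod name, container name, and image are illustrative placeholders; the dnsConfig fields themselves are standard Kubernetes API:

    apiVersion: v1
    kind: Pod
    metadata:
      name: dns-tuned              # hypothetical pod name
    spec:
      containers:
        - name: app                # placeholder container
          image: nginx
      dnsConfig:
        options:
          # Override the default ndots:5 that kubelet writes into the
          # pod's /etc/resolv.conf. With ndots:1, any name containing at
          # least one dot (e.g. example.com) is tried as an absolute name
          # first; the search list is only walked if that fails.
          - name: ndots
            value: "1"

Inside the container, 'cat /etc/resolv.conf' should then show 'options ndots:1' instead of 'ndots:5'. Cluster-internal shorthand like 'my-service.my-namespace' should still resolve with a glibc resolver: it is just tried as an absolute name first, then expanded through the search domains on failure.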

I don't use containerization (#docker, #k8s or whatever) on my servers; I only use distro packages or the sources of the app I want to install... the old way, I guess.
Do dockerized applications need more resources, or is the overhead insignificant?
I usually set up small servers.

The new k8s bug has a lame name: IngressNightmare. *sigh* Where's the clever word play?

Too many people are just appending "Nightmare" to the name of the attack surface these days... might as well get an LLM to name these bugs if we're going to keep copying the last one's name over and over.

Anyway, you can read about it here:

wiz.io/blog/ingress-nginx-kube

#threatintel, #k8s

wiz.io · Remote Code Execution Vulnerabilities in Ingress NGINX | Wiz Blog: Wiz Research uncovered RCE vulnerabilities (CVE-2025-1097, CVE-2025-1098, CVE-2025-24514, CVE-2025-1974) in Ingress NGINX for Kubernetes allowing cluster-wide secret access.