Kubernetes 1.34 stabilizes Dynamic Resource Allocation
Dynamic resource allocation is now a stable feature of the new Kubernetes release. The Kubernetes project Metal3.io has also announced a new development.

Oh snap! My most recent blog post made it onto @thisweekinrust! Very exciting!
https://blog.appliedcomputing.io/p/make-the-easy-change-hard
https://this-week-in-rust.org/blog/2025/08/27/this-week-in-rust-614/
Once you have migrated your "people" EPFL profile on people.epfl.ch, your profile will be served by #Ruby instead of #Perl, and from #Kubernetes. See the announcement here: https://actu.epfl.ch/news/web-switch-to-the-new-version-of-your-people-profi/
#EPFL #EPFLPeople
That's today at 11:00, in less than 20 minutes.
If you care at all about #digitalesouveränität and want to run a #rechenzentrum with #Kubernetes or #Openstack, this is your meeting.
Kubernetes v1.34: Of Wind & Will (O' WaW) - https://kubernetes.io/blog/2025/08/27/kubernetes-v1-34-release/ #Kubernetes
#Etymology fun: the "gover" in #government and the "cyber" in #cybernetics and the "kuber" in #kubernetes are the same
AI-powered Root Cause Analysis is now available for #Coroot Community users! https://t.ly/Nt0HQ
Instantly understand what caused a system incident and how to fix it, with 10 free investigations every month.
heise+ | KYAML: Kubernetes 1.34 gets a new data format
A new YAML dialect called KYAML is meant to prevent Kubernetes configuration errors while remaining fully backwards compatible. It ships with Kubernetes 1.34.
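For context on the class of mistakes a stricter dialect like KYAML targets, plain YAML's implicit typing is a classic trap. An illustrative snippet (ordinary YAML, not KYAML syntax):

```yaml
# Classic YAML 1.1 pitfalls that strict quoting avoids:
country: NO        # parsed as boolean false, not the string "NO" (the "Norway problem")
version: 1.20     # parsed as the float 1.2, not the string "1.20"
port: "8080"      # quoting forces a string; unquoted, 8080 would be an integer
```

Always-quoted strings make values like these unambiguous, which is the kind of configuration error the new dialect is meant to prevent.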
@bsi regarding the dual strategy, is there anything more concrete on securing software from e.g. US vendors? Between Netzpolitik, Heise, and the open answer I'm still not entirely clear on it.
My thinking would be that VM+container solutions under Linux and their security features could be further hardened/promoted (@proxmox, #incus, #LXC, #docker, #kubernetes, #Wine, #systemd, #apparmor, #selinux, #cgroups, namespaces...). But maybe that's just wishful thinking on my part?
Autoscaling stuff depending on cluster size - that's a new one to me.
Supposedly good for cluster components like DNS
https://github.com/kubernetes-sigs/cluster-proportional-autoscaler
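Per the project's README, the autoscaler reads its scaling parameters from a ConfigMap. A sketch of the "linear" mode, where the replica count grows with node/core count (the ConfigMap name is an assumption and must match the autoscaler's `--configmap` flag):

```yaml
# Linear mode: replicas = max(ceil(cores / coresPerReplica),
#                             ceil(nodes / nodesPerReplica)), floored at "min".
apiVersion: v1
kind: ConfigMap
metadata:
  name: dns-autoscaler        # illustrative name
  namespace: kube-system
data:
  linear: |-
    {
      "coresPerReplica": 256,
      "nodesPerReplica": 16,
      "min": 1,
      "preventSinglePointFailure": true
    }
```

The target (e.g. the CoreDNS Deployment) then scales automatically as nodes join or leave the cluster.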
What makes AWS STS OIDC Driver stand out?
AWS STS OIDC Driver is a tool that enables Kubernetes workloads to authenticate with AWS services directly using OpenID Connect (OIDC). It eliminates the need for long-term IAM credentials by leveraging temporary AWS Security Token Service (STS) tokens tied to your OIDC identity.
Project link on #GitHub
https://github.com/awslabs/StsOidcDriver
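Without speaking to this driver's exact configuration, the underlying Kubernetes mechanism is service account token projection: the kubelet mints a short-lived OIDC token with an AWS audience, which can be exchanged for temporary credentials via `sts:AssumeRoleWithWebIdentity`. A hedged sketch (names and audience are illustrative):

```yaml
# Pod with a projected ServiceAccount token scoped to AWS STS.
# The ServiceAccount is assumed to be referenced in an IAM role's trust policy.
apiVersion: v1
kind: Pod
metadata:
  name: aws-client
spec:
  serviceAccountName: aws-workload   # hypothetical ServiceAccount
  containers:
  - name: app
    image: amazon/aws-cli:latest
    volumeMounts:
    - name: aws-token
      mountPath: /var/run/secrets/aws
      readOnly: true
  volumes:
  - name: aws-token
    projected:
      sources:
      - serviceAccountToken:
          audience: sts.amazonaws.com
          expirationSeconds: 3600
          path: token
```

Because the token expires and is rotated by the kubelet, no long-lived IAM access keys ever land in the cluster.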
#Infosec #Cybersecurity #Software #Technology #News #CTF #Cybersecuritycareer #hacking #redteam #blueteam #purpleteam #tips #opensource #cloudsecurity
— P.S. Found this helpful? Tap Follow for more cybersecurity tips and insights! I share weekly content for professionals and people who want to get into cyber. Happy hacking
CoreDNS plugin that lets you serve records for Kubernetes Gateway/Ingress resources.
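Assuming this refers to an external plugin along the lines of k8s_gateway, a hypothetical Corefile sketch (zone and plugin name are assumptions):

```
# Answer queries for example.com from Gateway/Ingress resources;
# forward everything else upstream.
example.com:53 {
    k8s_gateway example.com
    log
}
.:53 {
    forward . /etc/resolv.conf
}
```

This lets cluster-external clients resolve hostnames straight from the routing resources, without maintaining DNS records by hand.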
I wonder if I really could flatten Kubernetes IPs and have it all on layer 2 native...
Looks like there is a typo squatting attack going on to harvest #container #registry login #credentials of #ghcr:
https://bmitch.net/blog/2025-08-22-ghrc-appears-malicious/
Be safe out there!
kubernetes-event-exporter seems to be abandoned?
Apple acquires developer of the Open Policy Agent
The inventors of the Open Policy Agent are moving to Cupertino: Apple is buying expertise in open source software that remains under the control of the CNCF.
To SSH is human, but that doesn’t mean we should.
SSH is like popping the hood of your car while driving 70mph. It works just fine. Until it doesn’t, and then you have a problem.
Here's why Talos Linux removes SSH entirely, and how that shift leads to consistent, secure, and boringly reliable infrastructure. No drift. No late-night fixes. No hidden state.
Read the full post: https://www.siderolabs.com/blog/to-ssh-is-human/
Update: things settled down.
The initial symptom yesterday was hundreds of duplicate pods with `OutOfpods` status, which seemed like it might be flooding the control plane. Later it became apparent that Multus had too small a memory limit, which caused it to get OOM-killed as pods tried to shuffle around, so pods never got scheduled.
I also accidentally ran `talosctl reset` on the wrong node at some point yesterday afternoon, which was a big source of my frustration.
I think a few things are going on:
1. I don't have memory limits on any of my deployments so when I take the big node out of circulation everything tries to shuffle around, but the scheduler doesn't have enough information to spread it evenly and stuff starts getting OOM killed. Also the aforementioned Multus limit.
2. I think Longhorn is just not for me. The backups didn't work so I lost a few months of VMSave data (luckily I had a db dump on my laptop from April, otherwise I would have lost years).
In the cold light of day, the solution here isn't switching off of #Kubernetes. It's still the thing that fits my #homelab the best, despite the additional complexity.
I'm going to be implementing three things. First, Longhorn is done. I'm going to move pods to either local-path-provisioner or democratic-csi iSCSI mounts on my NAS.
Second, I'm going to put memory limits and VPA recommenders on every pod and monitor with Goldilocks.
Third, I'm going to get serious about backups. Data living on TrueNAS makes that a little easier because I can just use scheduled volume snapshots, but beyond that I'm going to figure out how to add automatic Cloud Native PG backup testing and alerting.
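The second item (limits plus a recommend-only VPA for Goldilocks to read) might look something like this sketch, with the values as placeholders until the recommender produces real numbers:

```yaml
# Illustrative container resources inside a Deployment's pod template:
containers:
- name: app
  image: ghcr.io/example/app:latest   # hypothetical image
  resources:
    requests:
      memory: 128Mi
      cpu: 100m
    limits:
      memory: 256Mi
---
# VPA in "Off" (recommend-only) mode: surfaces suggestions without evicting pods.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app
  updatePolicy:
    updateMode: "Off"
```

With requests set everywhere, the scheduler finally has the information it needs to spread pods sensibly when a node drops out.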
Ultimately I need to think long and hard about how to reduce complexity.
Major platform migration ahead? Learn from Duolingo's Kubernetes Leap at InfoQ Dev Summit Munich (Oct 15-16). Franka Passing will share challenges, lessons, and practical advice on moving to EKS.
Get the story: https://bit.ly/45YF4l7