#kubernetes

18 posts · 16 participants · 0 posts today
Replied in thread

@bsi Regarding the dual strategy, is there anything more concrete about securing software from, e.g., US vendors? Between the Netzpolitik coverage, Heise, and the open reply, I still haven't quite figured it out.

My thinking would be that VM+container solutions on Linux and their security features would be further hardened/promoted (@proxmox, #incus, #LXC, #docker, #kubernetes, #Wine, #systemd, #apparmor, #selinux, #cgroups, namespaces...). But maybe that's just wishful thinking on my part?

What makes AWS STS OIDC Driver stand out? 🤔🔐

AWS STS OIDC Driver is a tool that enables Kubernetes workloads to authenticate with AWS services directly using OpenID Connect (OIDC). It eliminates the need for long-term IAM credentials by leveraging temporary AWS Security Token Service (STS) tokens tied to your OIDC identity.
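For illustration, the general mechanism this kind of tool builds on is Kubernetes' projected service-account tokens plus the AWS SDK's web-identity environment variables. The fragment below is a generic sketch of that pattern, not this driver's actual configuration; the role ARN, image, and audience are all assumptions.

```yaml
# Hypothetical pod spec showing the standard projected-token OIDC flow
# (all names and values are illustrative, not from the StsOidcDriver docs).
apiVersion: v1
kind: Pod
metadata:
  name: aws-client
spec:
  serviceAccountName: aws-workload
  containers:
    - name: app
      image: my-app:latest            # assumption
      env:
        - name: AWS_ROLE_ARN          # standard AWS SDK env var
          value: arn:aws:iam::123456789012:role/my-workload-role  # assumption
        - name: AWS_WEB_IDENTITY_TOKEN_FILE
          value: /var/run/secrets/aws/token
      volumeMounts:
        - name: aws-token
          mountPath: /var/run/secrets/aws
  volumes:
    - name: aws-token
      projected:
        sources:
          - serviceAccountToken:
              audience: sts.amazonaws.com   # must match the IAM trust policy
              path: token
              expirationSeconds: 3600
```

With this in place, AWS SDKs pick up the token file automatically and exchange it for short-lived STS credentials, so no long-term IAM keys ever land in the cluster.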

#AWS #Kubernetes #OIDC

🔗 Project link on #GitHub 👉 github.com/awslabs/StsOidcDriv

#Infosec #Cybersecurity #Software #Technology #News #CTF #Cybersecuritycareer #hacking #redteam #blueteam #purpleteam #tips #opensource #cloudsecurity

✨
🔐 P.S. Found this helpful? Tap Follow for more cybersecurity tips and insights! I share weekly content for professionals and people who want to get into cyber. Happy hacking 💻🏴‍☠️

Apple acquires the developers of Open Policy Agent

The creators of Open Policy Agent are moving to Cupertino: Apple is buying expertise in the open-source software, which remains under the control of the CNCF.

heise.de/news/Apple-uebernimmt

heise online · Apple übernimmt Entwickler des Open Policy Agents · By Jan Mahn

To SSH is human, but that doesn’t mean we should.

SSH is like popping the hood of your car while driving 70mph. It works just fine. Until it doesn’t, and then you have a problem.

Here's why Talos Linux removes SSH entirely, and how that shift leads to consistent, secure, and boringly reliable infrastructure. No drift. No late-night fixes. No hidden state.

👉 Read the full post: siderolabs.com/blog/to-ssh-is-

Sidero Labs · To SSH is human, but that doesn't mean we should
Continued thread

Update: things settled down.

The initial symptom yesterday was hundreds of duplicate pods stuck in `OutOfpods` status, which seemed like it might be flooding the control plane. Later it became apparent that Multus had too small a memory limit, which caused it to get OOM-killed as pods tried to shuffle around, so pods never got scheduled.
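The fix for a case like this is raising the limit on the DaemonSet's container. A hypothetical fragment (the image and values are illustrative, not the stock Multus manifest):

```yaml
# Fragment of a DaemonSet pod spec; names and numbers are assumptions.
containers:
  - name: kube-multus
    image: ghcr.io/k8snetworkplumbingwg/multus-cni:stable  # assumption
    resources:
      requests:
        memory: 128Mi
      limits:
        memory: 256Mi   # raise this if the pod keeps getting OOM-killed
```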

I also accidentally ran `talosctl reset` on the wrong node at some point yesterday afternoon, which was a big source of my frustration.

I think a few things are going on:

1. I don't have memory limits on any of my deployments, so when I take the big node out of circulation everything tries to shuffle around, but the scheduler doesn't have enough information to spread the load evenly and things start getting OOM-killed. Plus the aforementioned Multus limit.
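A toy sketch (not the real kube-scheduler, which considers much more) of why missing requests mislead bin-packing: the scheduler packs pods by their *declared* requests, not actual usage, so pods with no requests all "fit" everywhere until the kernel OOM-kills them.

```python
def fits(node_allocatable_mib: int,
         scheduled_requests_mib: list[int],
         pod_request_mib: int) -> bool:
    """Simplified scheduler fit check: sums declared requests only."""
    return sum(scheduled_requests_mib) + pod_request_mib <= node_allocatable_mib

# No requests declared (request = 0): every pod "fits" on a 4 GiB node,
# so the scheduler happily overcommits and real usage triggers OOM kills.
assert fits(4096, [0, 0, 0], 0)

# Honest requests: the node is reported full, so the scheduler
# places the pod elsewhere instead of piling everything on one node.
assert not fits(4096, [1536, 1536], 1536)
```

This is why setting requests (even without limits) already improves spreading: the scheduler finally has numbers to pack by.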

2. I think Longhorn is just not for me. The backups didn't work so I lost a few months of VMSave data (luckily I had a db dump on my laptop from April, otherwise I would have lost years).

In the cold light of day, the solution here isn't switching off of #Kubernetes. It's still the thing that fits my #homelab the best, despite the additional complexity.

I'm going to be implementing three things. First, Longhorn is done. I'm going to move pods to either local-path-provisioner or democratic-csi iSCSI mounts on my NAS.

Second, I'm going to put memory limits and VPA recommenders on every pod and monitor with Goldilocks.
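For reference, a recommend-only VPA object (names are placeholders) looks roughly like this; with `updateMode: "Off"` the VPA only publishes recommendations, which Goldilocks then surfaces in its dashboard:

```yaml
# Hypothetical VPA in recommendation-only mode; target names are assumptions.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Off"   # recommend only; don't evict or mutate pods
```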

Third, I'm going to get serious about backups. Data living on TrueNAS makes that a little easier because I can just use scheduled volume snapshots, but beyond that I'm going to figure out how to add automatic CloudNativePG backup testing and alerting.
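For the CloudNativePG side, scheduled backups are a CRD; a minimal sketch (cluster name and schedule are placeholders, and note CNPG uses a six-field cron expression with a leading seconds field):

```yaml
# Hypothetical ScheduledBackup; names are assumptions.
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: nightly-backup
spec:
  schedule: "0 0 2 * * *"   # six fields: at 02:00:00 every day
  cluster:
    name: my-pg-cluster
```

The testing-and-alerting part still needs separate tooling; this only covers taking the backups on schedule.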

Ultimately I need to think long and hard about how to reduce complexity.