
#DisasterRecovery


"ZFS snapshots as poor man's ransomware recovery"

It holds up. Better than you'd think.

Ransomware hits a server? I roll back to a snapshot taken 10 minutes ago. Immutable, local, instant.

No restore wizard. No cloud latency. No vendor lock-in.

Just:

zfs rollback pool/dataset@safe

Gone. Like it never happened.

You want real ransomware defense?

🧊 Immutable local snapshots

📦 Offsite ZFS send/mirror

🔐 Key-based SSH, no password logins

🎯 Restore script you actually test

ZFS isn’t "enterprise." It’s survival-grade.
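The four points above can be sketched as one small cron-able script. This is a hedged sketch, not a drop-in tool: the pool `tank`, dataset `tank/data`, backup host `backuphost`, and state-file path are all hypothetical placeholders.

```shell
#!/bin/sh
# Sketch: hourly immutable snapshot + offsite incremental replication.
# All names (tank/data, backup@backuphost, /var/db/last-snap) are assumptions.
set -eu

DATASET="tank/data"
REMOTE="backup@backuphost"          # reached via key-based SSH only
STAMP="$(date +%Y%m%d-%H%M)"

# Take a local snapshot; ZFS snapshots are read-only by design,
# so ransomware running in the live filesystem cannot alter them.
zfs snapshot "${DATASET}@auto-${STAMP}"

# Send only the delta since the previous snapshot, if we have one recorded.
PREV="$(cat /var/db/last-snap 2>/dev/null || true)"
if [ -n "${PREV}" ]; then
    zfs send -i "${DATASET}@${PREV}" "${DATASET}@auto-${STAMP}" \
        | ssh "${REMOTE}" zfs receive -F backup/data
else
    zfs send "${DATASET}@auto-${STAMP}" \
        | ssh "${REMOTE}" zfs receive backup/data
fi
echo "auto-${STAMP}" > /var/db/last-snap

# Recovery after an incident: discard everything since the last good snapshot.
# Note: -r also destroys any snapshots taken after it.
# zfs rollback -r "${DATASET}@auto-${STAMP}"
```

The push direction shown here is the simplest form; a pull model, where the backup host fetches snapshots over SSH, keeps the production host's credentials away from attackers entirely.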

alojapan.com/1327115/historic- Historic Crested Ibis Release Set for June 2026 in Ishikawa #birds #CrestedIbis #culture #DisasterRecovery #Hakui #IshikawaPrefecture #Japan #JapanNews #JapaneseCrestedIbis #Minamigata #NaturalDisaster #news #noto #NotoEarthquake #NotoPeninsula #Reconstruction The city of Hakui, located on the Noto Peninsula in Ishikawa Prefecture, has been selected as a release site for the crested ibis (toki). The toki is a Special Natural Mo

It feels bizarre seeing news articles about the work I perform. I led this process this time around, my first time as a Coordinator rather than a Team Lead. It was one of the most complex and political damage assessments I've been part of, probably due to the current climate.

The article does get one thing wrong. Harney County had a lot of flood damage to homes, but Douglas County had more infrastructure impacts: roughly $8–12 million from the events versus Harney's $1.5–2 million.

Now we wait and see if the Feds will fund this recovery for an area that voted 90% for this Administration. My money is on a denial.

salemreporter.com/2025/05/30/o

Salem Reporter · Oregon governor requests federal funding to support aftermath of spring floods
Oregon Gov. Tina Kotek on Friday asked President Donald Trump to declare a disaster, the first step to obtaining federal funding to help Coos, Curry, Douglas and Harney counties recover from the aftermath of intense spring flooding and landslides. In March and April, parts of southern Oregon experienced flooding from rapid snowmelt, record-level rainfall and overflowing […]

🎧 #InfoQ #podcast: Julia Furst Morgado (Veeam) & Olimpiu Pop dive into #Kubernetes edge resilience in the face of ransomware attacks.

They explore the real challenges – limited resources, network issues, and security threats – and uncover the vital solutions: strong backups, secure write-protected storage, and automated & tested recovery.

🔗 Listen to the full discussion here: bit.ly/3HcIOWs

📄 #transcript included
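The "automated & tested recovery" point deserves emphasis: a backup you have never restored is a hope, not a plan. A minimal sketch of a scheduled restore test, assuming a ZFS backup dataset `backup/data` and a sentinel file with a recorded checksum (all names hypothetical):

```shell
#!/bin/sh
# Sketch: periodically clone the newest backup snapshot and verify a known
# file, so the restore path itself stays tested. Names are placeholders.
set -eu

# Newest snapshot of the backup dataset.
LATEST="$(zfs list -t snapshot -o name -s creation -H backup/data | tail -1)"

# A clone is copy-on-write: cheap, instant, and leaves the backup untouched.
zfs clone "${LATEST}" backup/restore-test
trap 'zfs destroy backup/restore-test' EXIT

# Compare a sentinel file inside the clone against its recorded checksum.
MNT="$(zfs get -H -o value mountpoint backup/restore-test)"
NEW="$(sha256sum "${MNT}/sentinel.txt" | awk '{print $1}')"
OLD="$(cat /var/db/sentinel.sha256)"
[ "${NEW}" = "${OLD}" ] || { echo "RESTORE TEST FAILED for ${LATEST}" >&2; exit 1; }

echo "restore test passed for ${LATEST}"
```

Wire the failure branch into whatever alerting you already have; a restore test that fails silently is no better than no test.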

1/🧵 I've been invited to partner with a local county to put on a presentation on how private non-profits can best prepare for disasters in terms of mutual aid, and how to access federal aid for recovery.

There's such a need for education for all organizations these days; this stuff is too complicated unless you can afford consultants.

Which creates #equity issues that many governments have difficulty addressing.

I want to send some data into the ether that may interest people who wonder how US governments recover from the increasing occurrences of #disasters.

In 2021, I sent an email saying that, given the situation, I would not be able to keep your machines running on the current infrastructure. You replied that the funds were allocated "for migration to a cloud-based solution," so the machines had to be kept running as they were. I strongly suggested you reconsider: your connections are unstable and slow, and working over Remote Desktop against Windows servers from more than 30 thin clients would be impossible in those conditions. You responded that the budget was now allocated for this, and that the consultant had assured you everything would be perfect.

The migration never happened – after tens of thousands of euros were spent – due to "lack of connection quality."

I "forgive" you.

At the beginning of 2023, I wrote to you that there were two critical issues that would soon compromise the security and reliability of the machines. You replied that you were "working on the plan for migration to a centralized solution in one of your locations." I responded that the connectivity was inadequate, that that location has a history of data center flooding (the equipment should at least be moved to an upper floor), and that, in case of problems, all nine of your locations would go down with it. Given the situation, it would be better to keep a server and a replica at each site, with offsite backups, kept current and consistent.

Minimal financial investment, maximum uptime.

You replied that "the defined path is now set, so we are proceeding with it" because a "consultant has guaranteed maximum reliability."

I withdrew, predicting disaster, and for the usual reason: these consultants present themselves elegantly, with glossy catalogs and buzzwords. They certainly seem credible and modern!

Go ahead, it’s your money and your data.

This morning at 5:30, you woke me up because your location had electrical issues, and the other branches are facing external connectivity problems caused by the (same) Internet provider (and by the backup line, which uses the same channels as the primary, unlike what the vendors had promised).

A downtime of at least 3 days is expected.

Everything is down. You want me to take the backups (made by others; I don't even know how) and restore them to the old branch servers (from 2015), since you believe that is the only way to resume work – as suggested by the "elegant consultant."

I replied that I wish you good luck and went back to sleep with a clear conscience.

This morning, a (non-critical) FreeBSD VPS went down. The provider mentioned a "problem that would be resolved as quickly as possible," without providing an ETA.

I immediately spun up another VPS and, thanks to zfs send and zfs receive, replicated zroot/bastille from the last backup. I set myself a maximum time limit of one hour. In 5 minutes, plus a few more for the copy, I already had the replacement ready.

58 minutes later, the original VPS was back online, and I discarded the restored one. I was almost disappointed at that point; I had half hoped to put the replacement online instead 😆

Monitoring shouts at me: "This server is DOWN!"
I immediately check - it doesn’t respond to ping requests. I try to reboot it remotely - no luck.
I attempt to request a remote console; after more than 45 minutes, there’s still no reply.
I check the logs: the last ZFS send/receive based backup occurred just 23 minutes before the outage (it's an hourly backup).

I call the client to explain the situation: we can either wait or restore from a backup. They express a preference to get back to work after lunch (13:30).

I set up a VPS, install FreeBSD and some packages, then connect to the backup server:
zfs send -RLvw [mybckdataset]/bastille@lastSnap | pigz - | mbuffer -m512M | ssh destserver "pigz -d - | zfs receive -x canmount -x readonly zroot/bastille"

After a few minutes (50 GB later):
zfs load-key -r zroot/bastille (since they’re encrypted)
zfs mount -a
service bastille start

Everything's up and running. DNS record changed - disaster recovered. Time: 12:48.
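The hourly send/receive backup that made this recovery possible could look roughly like the following. This is a hedged sketch only: `prodserver`, the pull direction, and the state handling are my assumptions, not the actual setup; only `zroot/bastille` and the `-x canmount -x readonly` receive flags come from the post above.

```shell
#!/bin/sh
# Sketch: hourly raw, encrypted, incremental replication, pulled by the
# backup host over key-based SSH. Host and dataset names are hypothetical.
set -eu

SRC="zroot/bastille"                 # on the production host
DST="mybckdataset/bastille"          # local dataset on the backup host
NOW="hourly-$(date +%Y%m%d%H)"

# Snapshot the whole jail tree recursively on the production host.
ssh prodserver zfs snapshot -r "${SRC}@${NOW}"

# Find the newest snapshot we already hold locally; send only the delta.
LAST="$(zfs list -t snapshot -o name -s creation -H "${DST}" | tail -1 | cut -d@ -f2)"

# -w sends raw encrypted blocks, so the backup host never needs the keys;
# -x keeps the replica unmounted and read-only on this side.
ssh prodserver "zfs send -Rw -i @${LAST} ${SRC}@${NOW}" \
    | zfs receive -F -x canmount -x readonly "${DST}"
```

One side note on the restore pipeline above: with `-w` the stream is already encrypted, so `pigz` mostly adds CPU overhead rather than compression; it costs little, but it is optional.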

I call the client and say, "Hey, you’re back up. Now we’ll wait for the original server to come back, and then we’ll resync the datasets."

The customer, with a witty remark that conveys gratitude without stating it outright, replies, "Oh come on, and I was hoping to extend my lunch break! 😆"

FreeBSD, jails, and ZFS have, once again, done an excellent job.

Now, I can have my lunch.

#FreeBSD #ZFS #jails

No way, this can't be real, tell me it isn't true: so the problem is the contractor, the immigrants, the workers or the company's owners, the evil eye, the nail, the Nutella...

OR

is it SHAMEFUL that a country's rail system runs WITHOUT any serious redundancy, and without even trivial #disasterrecovery measures planned to prevent incidents like this one?

In other words, we look at the effect, and NOT THE CAUSE?