A new Desert Sage letter reflects on July, neighborhood fireworks and reporting on disasters.

https://www.europesays.com/2282618/ Korea’s nuclear waste cleaning robot throws bottle with precision #BottleThrow #DisasterRecovery #humanoid #korea #KoreanRobot #nuclear #NuclearDecommissioning #NuclearDisaster #RescueOperation #RescueRobots
"ZFS snapshots as poor man's ransomware recovery"
It holds up. Better than you'd think.
Ransomware hits a server? I roll back to a snapshot taken 10 minutes ago. Immutable, local, instant.
No restore wizard. No cloud latency. No vendor lock-in.
Just:
zfs rollback pool/dataset@safe
Gone. Like it never happened.
You want real ransomware defense?
Immutable local snapshots
Offsite ZFS send/mirror
Key-based SSH, no password logins
Restore script you actually test
ZFS isn’t "enterprise." It’s survival-grade.
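As a rough sketch of those four points in practice (pool and dataset names, snapshot labels, and the offsite host below are placeholders, not from the post):
# local snapshots every few minutes from cron, then an incremental push offsite over key-based SSH
zfs snapshot pool/dataset@auto-20250601-1210
zfs send -i @auto-20250601-1200 pool/dataset@auto-20250601-1210 | ssh backup@offsite zfs receive -u backuppool/dataset
# the recovery drill you actually test: roll back to the last known-good snapshot (-r discards anything newer)
zfs rollback -r pool/dataset@auto-20250601-1200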
https://www.alojapan.com/1327115/historic-crested-ibis-release-set-for-june-2026-in-ishikawa/ Historic Crested Ibis Release Set for June 2026 in Ishikawa #birds #CrestedIbis #culture #DisasterRecovery #Hakui #IshikawaPrefecture #Japan #JapanNews #JapaneseCrestedIbis #Minamigata #NaturalDisaster #news #noto #NotoEarthquake #NotoPeninsula #Reconstruction The city of Hakui, located on the Noto Peninsula in Ishikawa Prefecture, has been selected as a release site for the crested ibis (toki). The toki is a Special Natural Monument.
When disaster strikes, your recovery plan decides your uptime & bottom line. Dive into Warm Standby vs Multi-Site for AWS — a practical look at speed, cost & business continuity.
https://www.europesays.com/2214948/ Trump’s DOGE Cuts Are a Texas-Sized Disaster #ClimateChange #DepartmentOfGovernmentEfficiency #DisasterRecovery #doge #ElonMusk #GulfCoastProject #hurricanes #Musk
Houston’s Housing and Community Development Department will hold three community meetings this month to discuss its spending plan for $315 million in federal disaster recovery dollars.
It feels bizarre seeing news articles about the work I perform. I led this process this time around, my first time as a Coordinator rather than a Team Lead. It was one of the most complex and political damage assessments, probably due to the current climate.
The article does get one thing wrong. Harney County had a lot of flood damage to homes, but Douglas County had more infrastructure impacts: about $8-12 million from the events versus Harney's $1.5-2 million.
Now we wait and see if the Feds will fund this recovery for an area that voted 90% for this Administration. My money is on a denial.
#InfoQ #podcast: Julia Furst Morgado (Veeam) & Olimpiu Pop dive into #Kubernetes edge resilience in the face of ransomware attacks.
They explore the real challenges – limited resources, network issues, and security threats – and uncover the vital solutions: strong backups, secure write-protected storage, and automated & tested recovery.
Listen to the full discussion here: https://bit.ly/3HcIOWs
#transcript included
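The episode doesn't prescribe a specific tool, but as one concrete sketch of "automated & tested recovery" on a cluster, here is roughly what a scheduled backup plus a restore drill look like with Velero (my example choice, not theirs; the schedule, namespace, and backup names are placeholders):
velero schedule create edge-apps --schedule="0 */6 * * *" --include-namespaces shop-floor   # automated backups every 6 hours
velero backup get                                                                            # verify that backups actually complete
velero restore create --from-backup edge-apps-20250601060000 --namespace-mappings shop-floor:restore-drill   # periodic restore test into a scratch namespace
Write-protected storage is handled on the backend (for example, object-lock/immutability on the bucket), not in these commands.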
LA Times: This massive map helps Altadena fire victims feel seen
https://www.latimes.com/lifestyle/story/2025-04-02/altadena-eaton-fire-map
Introducing #DiRMA: The Disaster Recovery Testing Maturity Assessment!
A new framework to measure & improve the maturity of #DiRT programs across three key dimensions: People, Processes & Tools.
Learn how DiRMA helps organizations strengthen their disaster recovery strategy in this #InfoQ article by Yury Niño Roa https://bit.ly/3DXAgBu
Now the race: threw my laundry in the wash and I'm recharging everything in here... while the AQI is good, there's no wind, and we still have power, LOL. Plus, with 8% humidity, line drying will be near instant LOL. #DisasterRecovery
1/ I've been invited to partner with a local county to put on a presentation on how private non-profits can best prepare for disasters in terms of mutual aid and access federal aid for recovery.
There's such a need for education for all organizations these days; this stuff is too complicated unless you can afford consultants.
Which creates #equity issues that many governments have difficulty addressing.
I want to send some data into the ether that may interest people who wonder how US governments recover from the increasing occurrences of #disasters.
In 2021, I sent you an email saying that, given the situation, I would not be able to keep your machines running on the existing infrastructure. You replied that the funds were allocated "for migration to a cloud-based solution," so the machines had to be kept running as they were. I strongly suggested you reconsider: your connections were unstable and slow, and working over Remote Desktop against Windows servers from more than 30 thin clients would be impossible in those conditions. You responded that the budget was already allocated for this, and that the consultant had assured you everything would be perfect.
The migration would never happen – after tens of thousands of euros had been spent – due to "lack of connection quality."
I "forgive" you.
At the beginning of 2023, I wrote to you that there were two critical issues that would soon compromise the security and reliability of the machines. You replied that you were "working on the plan for migration to a centralized solution in one of your locations." I responded that the connectivity was inadequate, that that location has a history of data center flooding (the equipment should at least be moved to an upper floor), and that, in case of problems, all nine of your locations would go down with it. Given the situation, it would therefore be better to keep a server and a replica at each site, with offsite backups, kept current and consistent.
Minimal financial investment, maximum uptime.
You replied that "the defined path is now set, so we are proceeding with it" because a "consultant has guaranteed maximum reliability."
I withdrew, predicting disaster, and for the same old reason: they present themselves elegantly, with glossy catalogs and buzzwords – they certainly seem credible and modern!
Go ahead, it’s your money and your data.
This morning at 5:30, you woke me up because your main location had electrical issues, and the other branches were facing external connectivity problems due to the (same) Internet provider (the backup link, contrary to what the vendors had promised, uses the same channels as the primary).
A downtime of at least 3 days is expected.
Everything is down. You want me to take the backups (made by others; I don't even know how) and restore the old 2015 servers in the branches from them, since you believe this is the only way to resume work – as suggested by the "elegant consultant."
I replied that I wish you good luck and went back to sleep with a clear conscience.
This morning, a (non-critical) FreeBSD VPS went down. The provider mentioned a "problem that would be resolved as quickly as possible," without providing an ETA.
I immediately took another VPS and, thanks to zfs send and zfs receive, replicated zroot/bastille from the last backup. I set myself a maximum time limit of one hour. In 5 minutes, plus a few more for the copy, I already had the replacement ready.
58 minutes later, the original VPS was back online. I discarded the restored one. I was almost disappointed at that point; I had almost hoped to put the replacement online instead.
Monitoring shouts at me: "This server is DOWN!"
I immediately check - it doesn’t respond to ping requests. I try to reboot it remotely - no luck.
I attempt to request a remote console; after more than 45 minutes, there’s still no reply.
I check the logs: the last ZFS send/receive based backup occurred just 23 minutes before the outage (it's an hourly backup).
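That hourly job itself isn't shown here; a minimal sketch of what it might look like (the backup host name is an assumption, and [mybckdataset] is the placeholder used below):
# hourly, from cron on the production VPS: snapshot, then raw incremental send to the backup server
zfs snapshot -r zroot/bastille@hourly-2025060111
zfs send -RLvw -i @hourly-2025060110 zroot/bastille@hourly-2025060111 | ssh backupserver zfs receive -u [mybckdataset]/bastille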
I call the client to explain the situation: we can either wait or restore from a backup. They express a preference to get back to work after lunch (13:30).
I set up a VPS, install FreeBSD and some packages, then connect to the backup server:
zfs send -RLvw [mybckdataset]/bastille@lastSnap | pigz - | mbuffer -m512M | ssh destserver "pigz -d - | zfs receive -x canmount -x readonly zroot/bastille"
After a few minutes (50 GB later):
zfs load-key -r zroot/bastille (since they’re encrypted)
zfs mount -a
service bastille start
Everything's up and running. DNS record changed - disaster recovered. Time: 12:48.
I call the client and say, "Hey, you’re back up. Now we’ll wait for the original server to come back, and then we’ll resync the datasets."
The customer, with a witty remark that conveys gratitude without saying it outright, replies, "Oh come on, and I was hoping to extend my lunch break!"
FreeBSD, jails, and ZFS have, once again, done an excellent job.
Now, I can have my lunch.
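The resync back to the original server, mentioned above, isn't shown either; one way to do it with the same tools, once the machine reappears (snapshot names are placeholders), is to roll the original back to the last snapshot both sides still share and send only the delta from the temporary VPS:
# on the original server (jails stopped): discard anything written after the last common snapshot
zfs rollback -r zroot/bastille@lastSnap
# on the temporary VPS: freeze the current state and ship only the changes back
zfs snapshot -r zroot/bastille@handback
zfs send -RLvw -i @lastSnap zroot/bastille@handback | ssh originalserver zfs receive -F zroot/bastille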
$53 billion (BILLION) price tag for Hurricane Helene in North Carolina alone.
No way, it can't be, tell me it isn't true: so the problem is the company, the immigrants, the workers or the company's owners, the evil eye, the nail, the Nutella...
OR
is it SHAMEFUL that a country's rail system has NO serious redundancy and never planned even trivial #disasterrecovery measures to avoid incidents like this one?
In other words, people see the effect, and NOT THE CAUSE?
Transylvania County resources for food and water
"Water and food was delivered to Transylvania County Tuesday by helicopter. Meal distributions in the community are listed below:"
https://text.bpr.org/2024/10/03/transylvania-county-resources-for-food-and-water/