I was talking to a colleague last week about what REAL #StrategicAutonomy would mean.
For software in critical infrastructure, I think it would include contractual provisions like those for military hardware, requiring e.g. the full #audit and #escrow of all source code, including for updates… and the right, at any time, for governments facing threats to supply or continued operation to use that escrowed code to build their own versions of the software, and to run them until those threats are removed.
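To make the escrow piece concrete, here's a minimal sketch of how a deposit could be checked against what the vendor actually released. Everything in it (the manifest format, the file paths, the version naming) is an illustrative assumption, not any real escrow scheme:

```python
import hashlib
import json
from pathlib import Path

def digest_tree(root: Path) -> dict:
    """SHA-256 every file under the escrowed source tree, keyed by relative path."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def verify_escrow(source_root: Path, manifest_path: Path) -> list:
    """Return paths whose digests differ from the vendor's release manifest.
    (A real scheme would also flag files present in the tree but not in the
    manifest, and would check the manifest's signature first.)"""
    manifest = json.loads(manifest_path.read_text())  # {"relative/path": "sha256-hex"}
    actual = digest_tree(source_root)
    return [p for p, d in manifest.items() if actual.get(p) != d]

# Hypothetical layout: one escrow deposit per released version.
mismatches = verify_escrow(Path("escrow/v2.4.1"), Path("escrow/v2.4.1.manifest.json"))
print("escrow verified" if not mismatches else f"MISMATCH: {mismatches}")
```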
In fact, governments should probably only EVER deploy executables they have built themselves, using their own compilers (see Ken Thompson's classic computer science paper, Reflections on Trusting Trust).
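A first step short of fully countering Thompson's attack is reproducible builds: rebuild from the audited source in a pinned environment and insist the result is bit-identical to what the vendor shipped. A rough sketch, assuming a trivial C project and the SOURCE_DATE_EPOCH timestamp-pinning convention from reproducible-builds.org; the build command and file names are made up:

```python
import hashlib
import os
import subprocess
from pathlib import Path

def sha256(path: str) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def build(out: str) -> str:
    """Build the audited source in a pinned environment; return the binary digest."""
    env = dict(os.environ, SOURCE_DATE_EPOCH="0")  # pin embedded timestamps
    subprocess.run(["gcc", "-O2", "-o", out, "main.c"], check=True, env=env)
    return sha256(out)

# Two builds from the audited source should be bit-identical, and should also
# match the vendor-shipped binary; any divergence blocks deployment.
a, b = build("app-a"), build("app-b")
assert a == b, "build is not reproducible"
print("matches vendor binary:", a == sha256("vendor-shipped/app"))
```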
You’d also need chip #microcode auditing and verification for security-critical systems. And some level of chip assurance. And Cell-like audits (in the mould of HCSEC, the UK's Huawei "Cell")… Details to be determined.
@1br0wn For reliability reasons, let alone autonomy, you likely also want staged deployment of updates, with automatic rollback if the updated version shows any problems (see also #CrowdStrike).
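The control loop is simple to state. A sketch below, where the deploy/metrics/rollback hooks are placeholders for whatever orchestration is in use, not a real API, and the thresholds are assumed values:

```python
import time

STAGES = [0.01, 0.05, 0.25, 1.0]   # fraction of the fleet at each stage
ERROR_BUDGET = 0.001               # max tolerated error rate (assumed threshold)
SOAK_SECONDS = 600                 # how long to watch each stage before expanding

def rollout(version, deploy, error_rate, rollback) -> bool:
    """Expand stage by stage; abort and roll back on any health regression."""
    for fraction in STAGES:
        deploy(version, fraction)
        deadline = time.monotonic() + SOAK_SECONDS
        while time.monotonic() < deadline:
            if error_rate(version) > ERROR_BUDGET:
                rollback(version)  # automatic: no human in the loop
                return False
            time.sleep(30)         # poll health metrics
    return True                    # fully deployed

# e.g. rollout("v2.4.2", deploy=my_deploy, error_rate=my_metrics, rollback=my_rollback)
```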
And no ability for third-party code to communicate outside the government’s own domain, except via tightly controlled government proxies that monitor and control all data in and out.
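The core of such a proxy's policy is default-deny plus an audit trail. A toy sketch of just the decision function; the hostnames and log path are invented for illustration:

```python
import logging
from urllib.parse import urlsplit

# Hypothetical allowlist; in practice this would be centrally managed policy.
ALLOWED_HOSTS = {"updates.example.gov", "telemetry.example.gov"}

logging.basicConfig(filename="egress-audit.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

def egress_allowed(url: str) -> bool:
    """Default-deny egress check: only allowlisted hosts get out, and every
    attempt, permitted or denied, is written to the audit log."""
    host = urlsplit(url).hostname or ""
    allowed = host in ALLOWED_HOSTS
    logging.info("egress %s host=%s url=%s",
                 "ALLOW" if allowed else "DENY", host, url)
    return allowed
```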
@1br0wn In the US you can't even require the auditing of voting machines. "Why not" is one of those questions to which the most cynical response is almost certainly the correct one.
@1br0wn When this deal was done https://www.ft.com/content/74782def-1046-4ea5-b796-0802cfb90260 one key argument was that agencies could only hire enough IT staff if they chose a commercial platform these folk were familiar with. All that free cloud largesse to undergrads had by then had its desired effect of mass indoctrination. Oh for the glorious FOSS days of GDS…