eupolicy.social is one of the many independent Mastodon servers you can use to participate in the fediverse.

When I hear about AI-based programming, I think back several decades to a time when I was dealing with a hairy set of data, and I wrote a pretty complex bit of code generating an even more complex bit of SQL. I don't remember now if it ended up proving useful or not, though I think it did. But that's not the point.

The point was when I came back to it after a few months ... I couldn't figure it out at all. Neither the generator, nor the generated code.

And I HAD WRITTEN IT. Myself, from scratch, sorting out what I wanted and how to get there.
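The kind of trap described above can be sketched roughly like this (a hypothetical illustration with made-up table and column names, not the original code): a terse generator whose emitted SQL is even harder to read than the generator itself.

```python
# Hypothetical sketch of a "clever" SQL generator: compact when written,
# opaque months later -- both the generator and what it generates.

def build_query(cols, filters):
    """Emit a SELECT with one correlated subquery per filter column.

    Nothing here records *why* the query is shaped this way, which is
    exactly the knowledge that evaporates after a few months.
    """
    subs = ", ".join(
        f"(SELECT MAX({c}) FROM t2 WHERE t2.key = t1.key AND {cond}) AS {c}_m"
        for c, cond in filters.items()
    )
    return f"SELECT {', '.join(cols)}, {subs} FROM t1"

sql = build_query(["key", "val"], {"val": "t2.flag = 1"})
print(sql)
```

Even this toy version takes a moment to unpick; the real thing, with more layers of string-building, is the hairball the thread is about.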

There's a principle in programming that debugging and maintenance are far harder than coding. Which means you should never write code that you are too stupid to debug and maintain. Which is precisely the principle I'd violated in my anecdote.

And of course, Management, in its infinite wisdom, typically puts far greater emphasis on new development than on testing, or, Heavens Forfend!!!, maintenance. So all the brightest talent (or so it's perceived, at any rate) goes to New Development.

(There's a great essay from about a decade ago, "In Praise of Maintenance," which you, and by "you" I mean "I", should really (re)read: freakonomics.com/podcast/in-pr).

With AI-based code generation, presuming it works at all, we get code that's like computer-chess or computer-Go (the game, not the lang). It might work, but there's no explanation or clarity to it. Grandmasters are not only stumped but utterly dispirited because they can't grok the strategy.

I can't count the number of times I've heard AI described as search, or as "solution without explanation", an idea I'd first twigged to in the late 2010s. That is, where scientific knowledge tells us about the causes of things, AI/ML/GD/LLM systems simply tell us the answer without being able to show their work. Or worse: even if they could show their work, that wouldn't tell us anything meaningful.

(This ... may not be entirely accurate, I'm not working in the field. But the point's been iterated enough times from enough different people at least some of whom should know that I tend to believe it.)

A major cause of technical debt is loss of institutional knowledge of how code works and which parts do what. I've worked enough maintenance jobs that I've seen this in organisations of all sizes and kinds. At one gig, I cut the amount of code roughly in half just so I could run it in the interactive environment, which made debugging more viable. I never really fully understood what all of that program did (though I could fix bugs, make changes, and even anticipate some problems which later emerged). Funny thing was when one of the prior Hired Guns who'd worked on the same project before my time there turned up at my front door some years later ... big laughs from both of us...

But this AI-generated code? It's going to be hairballs on hairballs on hairballs. And at some point it's gonna break.

Which leaves us with two possible situations:

  • We won't have an AI smart enough to deal with the mess.
  • Or, maybe, we will. Which, as I think about the possibility whilst typing this, seems potentially even more frightening.

Though my bet's on the first case.


.. continued ..

I've been coding since the mid-1980s, and I was wondering how other developers think about the potential issues with "AI"-generated code possibly including parts of, or whole pieces of, code that is covered by an Open Source licence like GPLv2 or AGPLv2, without you knowing it.

Regardless of companies' and people's disclaimers, etc., this, to me, seems like a potential can of worms from a legal perspective, no? 🤔