My biggest fear is how AI will affect our society at a fundamental level. If social media destroyed self-esteem and attention spans… AI will inevitably destroy our ability to think and reason. You might think this is overreacting… but I’m talking about the effects on future generations. The ones whose frontal lobes aren’t fully developed before the age of 25. If we’re not careful, we are going to make ourselves comfortable with being both dumb and insecure. Let’s be careful, is all I ask.
On that note, good luck with all your AI-driven projects, guys! Don’t destroy the world, if you can help it.
Edit: This isn’t a reply to David, just to the conversation in general.
I think the worst will come when the generation that already has social anxiety problems finds that being a recluse is the best option. AI will usher in an era of one-to-one entertainment, information, and emotional bubbles that feel so convenient that nobody will be interested in doing anything hard. Everyone will be the king and queen of the universe that is their little bubble. The dystopian AI will feed that little bubble with everything the lizard brain requires to be happy.
This is all going to make the mental disease epidemic witnessed in Japan and Korea look like a joke.
I don’t know what to tell you. If you look through my posting history here, you’ll see that I’ve written multiple interactive debugging interfaces for TADS3: a general interactive debugger for the stack; one for the transcript (the TADS3 message-displaying system); and eventually a module for implementing simple, special-purpose interactive debuggers. I’ve written code for implementing generic linters. A tool and generic gameworld for automated regression testing. Performance testing tools. Statistical analysis tools. And tens of thousands of lines of code for general IF stuff, usually accompanied by as close an approximation of unit testing as the TADS3 build system allows.
That’s the kind of approach I take to code I’ve written myself. Because if I’m going to release code I want it to work, and actually just working isn’t really sufficient. Like I recently had a testing thread for a web widget for a feelie/documentation for the WIP, and I spent hours tweaking fonts and easing animations, and I’m about 90% certain that most players will probably only look at that stuff for a couple of minutes.
Your example of “surprisingly admirable” work was, by your own account, something that contained errors of basic syntax. I wouldn’t just sorta hand-wave that away in code in general. Certainly not my own code. I don’t see how asking for consistent standards of code, agnostic of where it’s coming from, is “an axe to grind”.
Then I guess we better hope the models get much, much better at reasoning out of corpus or hope we’ve already invented all the languages we’re ever going to need. Go and Rust, for example, are both around 15 years old.
In my experience, productivity dictates outcomes. New platforms may get created, but they will very likely be created in conjunction with GenAI. The entire corpus of software design patterns will eventually be codified so that GenAI can be more accurate, more productive, and immensely faster than a human.
It will start with more IT departments adopting the current platforms that GenAI excels at, like Python and TypeScript, though Rust, Go, and Swift are only marginally less so.
I’d even expect efforts to get C# and Java “up to speed”.
But eventually Anders Hejlsberg (C#/TypeScript) or someone else who designs languages will start envisioning (if they haven’t already) ways to design a platform that leans into GenAI strengths without regard for human interaction. No IDE, no documentation, no user guide. A pure platform that AI can be accurately directed to use to engineer new software.
On the bright side, we humans become the IDE, and I think that’s going to be a good thing.
I’m not clear that the “LLM-first language” is coming. To do a better job you’d need more, and potentially better, training data, and where is that coming from? Otherwise you are generalising from out-of-set training data, but you’re doing that already, so where’s the benefit?
So I infer a different path. Now that we’re getting acclimated to using GenAI, we can create the pathways it needs to be deterministic on a set of logic. We can invent the data needed to train the model to understand a new programming paradigm.
We invented the egg. Now we can go back and invent the chicken.
We learn from patterns. The smart people that created LLMs are not software developers. They are data scientists. Now that language designers have been using LLMs and will continue to use them, they can add these new patterns to their language design processes. One science inspires another.
A language designer before GenAI would be concerned with how a human will use it: garbage collection, inheritance, typing, messaging, generics, etc.
A language designer may alter their perception of what is fundamental to software creation without the human element. Design patterns can be encoded (factory, singleton, strategy, etc.) so that GenAI knows not just the base elements of the language, but also foundational constructs for design patterns. A language designer with GenAI as their only customer has a very different set of assumptions.
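To make the idea concrete, here’s a minimal sketch in TypeScript of the kind of boilerplate being talked about: today, a model has to re-derive the strategy pattern from interfaces and classes every time, whereas a hypothetical GenAI-first language might offer it as a single built-in construct. (The names `SortStrategy`, `Sorter`, etc. are illustrative, not from any real proposal.)

```typescript
// Conventional strategy pattern, spelled out by hand.
// In a language designed for GenAI, this whole shape might be one primitive.

interface SortStrategy {
  sort(xs: number[]): number[];
}

class Ascending implements SortStrategy {
  sort(xs: number[]): number[] {
    return [...xs].sort((a, b) => a - b);
  }
}

class Descending implements SortStrategy {
  sort(xs: number[]): number[] {
    return [...xs].sort((a, b) => b - a);
  }
}

// The context object delegates to whichever strategy it was configured with.
class Sorter {
  constructor(private strategy: SortStrategy) {}
  run(xs: number[]): number[] {
    return this.strategy.sort(xs);
  }
}

console.log(new Sorter(new Ascending()).run([3, 1, 2]));  // [1, 2, 3]
console.log(new Sorter(new Descending()).run([3, 1, 2])); // [3, 2, 1]
```

The interesting design question is how much of that scaffolding survives once no human ever reads it.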
Computer languages allow humans with expert knowledge to perform one cognitive task really well, and that is pattern matching.
Since the job of a developer is rapidly evolving, the real value add they bring is accountability. Humans cannot outdo machines in pattern matching.
The problem has shifted to observability and verification. What is happening now is that we are all converging on the same thing: what is the new artifact, alongside the “thing” itself, that allows humans to verify outcomes and trace behaviours back? Simply put, how can you be accountable for PRs the size of a planet? This is the domain that all companies with large tech teams are investing in.
I am sure exciting new stuff will emerge as different solutions are revealed. It will be interesting to see what shape the output takes that accommodates the human mind operating effectively. It surely cannot be thousands of lines of code.
@DavidC Sounds like an efficient AI coding language will become a machine language model that only the AI will truly understand. It could be a strange pattern of symbols, like some minified code block. We’ll essentially be forced to do what you’re already proposing with a human readable spec format and that’s where our input ends. Write a screenplay for your software development, basically.
“David listened while the Terminator laid it all down: Claude, Sharpee, Judgment Day, the history of things to come…” Just promise us that if a woman named Sarah Connor visits you, just do as she says and don’t argue.
Oddly, I would have no qualms about a software spec-only development paradigm if it weren’t for the fact that it’s AI-driven and the perceived societal detriment that brings. I wonder if AI could create an OS to rival what is already out there. It could look through compiled software and port that software over for free access for all. I mean, big companies use art, music, and writing without permission to make their AIs… so why can’t we take their software and reproduce it too? It’s going to be interesting times ahead. Is the genie out of the bottle yet? When do I get my hover car?
There are at least three vibe-coded OSes currently available. As far as I can tell, none of them has actually been tested beyond a superficial level, and the projects have gone cold—these people were more interested in saying “I built an OS” than in the hard work of actually maintaining one, especially one whose code they don’t understand.
This, to be clear, is not meant as a dig at DavidC, who clearly does have experience with the work that goes into specs, testing, maintenance, and all the stuff that happens beyond “Claude, write me an OS”. But a lot of people on the internet do not.
In 10 years you guys will be the basement woodshop workers with all manual tools. Still producing high quality, but at a snail’s pace. Also highly rewarding (I know several such people).
Well, not that likely. We are a one-coder office with vast amounts of legacy software that only I really understand, and it won’t be much more than 10 years before I get to retire, so hopefully I never quite have to live the professional life where I don’t actually get to write code, because it’s the one bit of the job I actually enjoy.
Yes. You arrogantly dismissed almost all the ethical issues with LLMs. You insultingly called people who know how to write code and believe in doing so “code typists”. You pushed the authoritarian position that human programming is already over and we’re powerless to resist.
It’s the kind of rhetoric that makes LLM (and GenAI) proponents so unpopular.