The Pitfalls of AI

@DavidC Ha! Well, I can admit when I’m wrong. Sorry about making it seem like you didn’t appreciate things before computers. I just think that we sometimes don’t appreciate what we have, until it’s gone… and I picked a weird angle to approach that sentiment. Sorry about that.
[Note to self: Don’t post stuff online at late hours.]

If you would indulge me, what is it that excites you about using AI for application development? I’m not talking about the productivity side… I’m talking about something deeper. What is it about the process that feels satisfying?

1 Like

I have always been able to identify gaps and build logic around problems, but my ability to write varying kinds of code has always been limited (ADHD, probably). Certainly I have always struggled with “lower level” coding and excelled at “business process” programming. I have always felt that if I had a partner, I would simply have more fun.

Claude provides most, but not all, of that missing piece. It’s still better to have human compatriots when building and creating things.

But Claude is a pretty damn good stand-in.

And it does a LOT of things I really dislike, and it does them very well. It can do Xcode iOS development at a high level. It can do React web apps at a very high level. And very fast.

It would take me a lifetime to learn all the things Claude has done for my ideas in the last 6 months.

@Encorm You reminded me of a video (I wish I could find the link) about technical debt (the accumulation of bad code) not being noticed until it was too late, leaving companies with huge costs to repair the damage. Anyway, the thing I took away from that video was that…

AI is most dangerous when it’s almost right.

And it produces a staggering number of “almost right” solutions, with mistakes that even experts miss. The errors are usually so basic that it’s unthinkable a human would have made them. It’s not necessarily laziness on the part of the user, either. It’s just that the “next word predictor” machine has fooled us into believing it has real understanding. It’s a constant illusion that we have to stay vigilant about. But as you said… we are bad at vigilance tasks.
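A hypothetical illustration of the flavor of bug I mean (not from the video, just the kind of basic, “almost right” error that slips past review):

```sh
#!/bin/sh
# An "almost right" cleanup script of the sort an AI might produce.
# It passes every casual test... until a log file has a space in its name.
for f in /var/log/myapp/*.log; do
  rm $f    # almost right: unquoted $f splits on spaces; should be rm "$f"
done
```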

Anyway, I really liked what you shared. Great perspective. Thanks for providing those links!

2 Likes

I was not offended or butt hurt. I was laughing.

1 Like

I’m kind of middle-ground with regard to AI. I think all the hype is overblown and true AGI is nowhere near existing (if it ever does). It can also become a dangerous (and error-prone) crutch if used indiscriminately, but it can be a useful tool if used right.

When coding, I like designing things to remove failure states and making code do as little work as possible to accomplish a task. I find Claude is great at setting up all the scaffolding bullshit that I hate to do. It does make brain-dead decisions occasionally, and I always check its work, but it saves me tons of time on things I know how to do but hate doing. It’s also good at unit test boilerplate and at helping me get up to speed on a new API or library quickly. I think it is great for filling in gaps in areas where I am already competent, but less so in those where I lack enough knowledge to guide it.

1 Like

@DavidC I think I get it. Being good at business processes is more about writing well-thought-out rules. So working with an AI and providing prompts (rules/guardrails) probably tickles all the right parts of your brain. You are probably more conscientious and critical about the rules you apply than most people can appreciate.

When you say Claude is a pretty good stand-in for a human partner, it’s not far off from a craftsman saying he wouldn’t know what he’d do without his best tools. My father used to joke that his CNC machine was the best employee he ever had. It never complained or needed a coffee break. :wink:

I think AI gets a bad rap from people who lack expertise: they produce work with AI that feels “almost right” but fails at the basic aspects and creative concepts in such a fundamental way… cue the unstoppable slop.

Although AIs never need a break, they do need to be killed and resurrected every so often. You can only compact that context so many times before the AI is effectively lobotomized!

3 Likes

I close and reopen Claude regularly. This is also one of the things the script kiddies fail to understand.

1 Like

This is true. Short sessions are generally better. Another issue I’ve had: hitting the usage limit on Claude makes it difficult to resume partially finished work later; all the processing that hasn’t been finalized gets tossed. It’s very annoying, but it seems to be more a capitalism problem than a technology one. It has led me to work in smaller batches, and if I am close to the limit, I’ll wait rather than risk hitting it.

1 Like

You can do /clear instead of closing and reopening.

That seems to have appeared out of nowhere. /compact was the older way. Looking at the docs, /clear is essentially the same as quit/start. Good to know.
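Roughly, as I understand the docs (the slash commands are real; the behavior notes are my paraphrase):

```sh
# Inside an interactive `claude` (Claude Code) session:
/compact   # summarize the conversation so far and continue from the
           # summary, freeing up context window
/clear     # wipe the conversation history entirely; effectively the
           # same as quitting and restarting the session
```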

It’s weird but I never get warned about usage limits anymore. Not since I purchased the Pro Max $230/month subscription. I can have 4 Claude Code sessions going at the same time for hours.

I can see why. No way I’m paying that much.

I probably could go back to a lower level or use the API, but I like not caring about limits.

Which should tell us something about the way we’ve been designing software systems. All the power was there, but it was locked behind a layer of complex incantations. Every computer could have booted to an interface that dared you not to program it. There are plenty of examples: Smalltalk, HyperCard, Pure Data, Decker. Command-line users were halfway to writing a program just by typing successive commands (collect them into a .bas, .bat, or .sh file and you have a program; see the sketch below).

But I think there’s no funding for that, because “we’re going to trick you into learning to program” is a tough sell. Hopefully this AI stuff will help people see the value in a computer that can be used as a computer (as opposed to a portable TV or an infinite newspaper), by proving that regular users will embrace that power when it’s within reach. Because it’s fairly silly to build systems that require so many quirky details and secret handshakes to get up and running that people don’t even want to bother without an LLM to keep track. We could be paying people to build systems that a person would want to use, instead of paying people to use systems that nobody really likes.
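For instance, a minimal sketch of that idea; the file names and the ImageMagick convert command are hypothetical stand-ins:

```sh
# Two one-off commands typed at the prompt...
convert photo.jpg -resize 50% photo_small.jpg
mv photo_small.jpg ~/site/images/

# ...collected into a file, they become a program:
cat > shrink.sh <<'EOF'
#!/bin/sh
convert "$1" -resize 50% "${1%.*}_small.jpg"
mv "${1%.*}_small.jpg" ~/site/images/
EOF
chmod +x shrink.sh
./shrink.sh photo.jpg
```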

2 Likes

Programmers started trying to build 4GLs in the ’70s. I knew consulting firms that made a living as PDP/VAX service bureaus leveraging home-grown 4GL systems. It 100% lowered the complexity of building a standard business application.

AI is like most tools: in the hands of someone great, it makes possible things that were impossible before, but for most people the results are lackluster or just bad. Compare it to, say, colored pencils or colored ink pens: in the hands of a master artist, you get beautiful comics, illustrations, Tarot cards, etc., but most people never produce more than crude doodles or sketches.

Of course, part of the problem with AI is that its doodles are good enough to trick people with no artistic ability into thinking they’re actually good, and to make profit-maximizing business executives who favor quantity over quality think them good enough to replace their graphic design team; and the AI can generate thousands of doodles in the time it takes a human to make one. Cue the flood of AI-generated illustrations better than the stuff an amateur artist would be embarrassed to let anyone see, but not up to snuff for what a professional graphic designer would consider acceptable. And repeat for every other thing unskilled people are using AI for: they’re too excited that their ideas are being made into anything to realize the result is subpar, or business executives are too busy counting the money they aren’t spending on human labor to notice the drop in quality.

And well, the AI companies are marketing to those unskilled wannabe artists and those business executives who don’t want to hire human artists, because there aren’t enough people with the skill to properly leverage AI as a way to improve their workflow without negatively impacting the results to justify the huge sums being invested.

It looks like Adobe is getting hard pushback on its AI capabilities. I think the “moment” has passed and a normalization period has begun. I think graphic artists will eventually get their work streams back.

That’s true, but I believe computer-assisted art’s trajectory has irreversibly changed (I know it doesn’t have to be done on a computer; “computer-assisted” is just a general term I’ve conjured for the sake of this post). I think the change will mirror what happened to painting when photography surged. Back then, instead of painting nature or whatever they saw (because photography already did that perfectly), people started moving toward more abstract concepts, creating movements like cubism, and fewer paintings were made with money as the goal. I believe the same will happen to graphic artists: it will increasingly become something done less for money and more as art. Maybe genAI will create a Renaissance 2.0 in some ways, with human-made art becoming a rich person’s hobby, like it was in the Renaissance.

I am not a native speaker, but I hope I have articulated myself well.

3 Likes

I found this short video to be a sincere message for programmers. The guy, I’ll call him Antonio Banderas, describes the place he’s at after a year of AI coding. He mentions technical debt (debt is debt), but he talks about understanding code from an objective perspective that I found refreshing.

The video thumbnail is sensational clickbait, but the video is quite good.

1 Like