I’ve seen enough nominally technically competent people say absolute nonsense about genAI to know that’s simply not true. The entire field is rife with misinformation. The people who put the most money into this are repeatedly telling the world they’re making the computer from I Have No Mouth, and I Must Scream in their basement.
There are already ethically trained models and some of them are relatively small and will be able to run on a machine or server easily.
The reality is that the well is extremely poisoned against this technology by its largest investors and boosters; it’s not really plausible to trust that a model is ‘ethically trained’, discern what exactly that means, or separate that from the models that are trained through mass-scraping without permission. More generally, I simply do not think any use of this technology is ethical.
Actually training your own from scratch is already becoming easier, even without knowing Python and its libraries. This technology is changing fast. MCP, RAG, and other aspects of genAI will be in their infancy for years.
I’m sure this is not your intention, but when you talk about ‘training your own from scratch’ that perpetuates the exact misconception I’m talking about here. People think they are training models exclusively on writing that they own, or that they are able to ‘ethically train’ their own models from zero, which is not the reality of how this tech works. I’m not sure what the relevance of MCP and RAG is to this, either.
There are already improvements in model size and capability, and fat GPUs will gradually move into average PCs.
I own a cheap bottom-of-the-line laptop that I use as a tertiary machine; it has integrated Intel graphics. This is the type of machine that might very well be the most that a lot of people can afford, and historically people who can’t afford much more than something like that have been able to judge and even submit to IFComp. I highly doubt these machines will ever be able to reasonably run an LLM locally.
I do not believe that comparable machines will at any point in the foreseeable future come with discrete graphics or have a ‘fat GPU’, and I’d be willing to bet on it. Regardless, what future hardware can do seems beside the point.
I don’t like the idea of flat-out banning anything (except CSAM), and there’s likely going to be an audience for AI-driven stories.
The IFComp has always had an “experimental” aspect and we should maintain that. If we want a secondary comp that explicitly bans any AI, I’m all for it.
By all means let’s keep an open mind, but not so much that our brains fall out. What LLM usage mostly means is not ‘experimentation’; it’s low-effort slop and an overall crowding out of actual formal or thematic experimentation.
Take Penny Nichols, which people are apparently so enamored with… chatGPT is doing basically all of the heavy lifting here. I’ve read the prompt and it’s barely sensical; if anything about this entry is impressive, it’s the fact that chatGPT can generate something apparently coherent out of it at all.
For me it highlights the noxious nature of this technology. A lot of what the entry’s author clearly thinks they are doing is not really what’s going on: telling chatGPT to roll dice doesn’t actually make it roll dice; it just makes it generate text claiming that dice were rolled. The prompt is largely providing role-playing game jargon for chatGPT to weave into its responses, creating the impression of game mechanics. And I’m not sure even the author understands that, let alone users.
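To make the distinction concrete, here’s a minimal sketch (the prompt and the commented-out model call are hypothetical, purely for illustration): a real dice roll is a program producing a number nobody authored in advance, while a prompted ‘roll’ is just more generated prose.

```python
import random

# A real game mechanic: the program produces a result nobody wrote in advance.
def roll_d20():
    return random.randint(1, 20)

print(f"You rolled a {roll_d20()}.")  # genuinely random on every run

# What a prompt like "roll a d20 and narrate the outcome" gets you instead:
# there are no dice anywhere. The model just continues the text with whatever
# number and outcome read as plausible. (Hypothetical call, for illustration
# only; the exact client API doesn't matter to the point.)
#
#   reply = model.generate("Roll a d20 for the player and narrate the outcome.")
#   # -> "You rolled a 17! Your blade slips past the goblin's guard..."
```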
It’s just this fundamentally dishonest machine, but the thing is that chatGPT is so adept at generating apparently coherent and extremely agreeable output that you can prompt it with nearly anything and get a result that feels like it coheres, even though almost nothing about the resulting experience is authored other than the vague general premise. This is toxic to the entire idea of authorship.