The forum is currently under a bit of a flood of spam, so I have turned on manual approval for new users. Apologies to any new people, but we’ll attempt to not keep you waiting too long. Please sign up with a meaningful username - if it looks like a random group of letters you probably won’t be approved.
This is already the case on some other forums. Would a CAPTCHA help reduce the registrations attempted by bots?
I’d have to install a plugin, and I’m not sure how effective it would be. But I’ll consider it if the number of signups we get becomes overwhelming.
Captcha should be outlawed as an ableist crime against the visually impaired and hard of hearing. Plus, I’ve heard AI image recognition has reached the point where most visual captchas are only effective against the dumbest of bots.
I can see and hear, and I fail Captcha at least half the time. I mean, is the square with a tiny sliver of the car in it showing a car? Does that tiny picture of a busy street have cars in it? *gets out magnifying glass* That might be a car! I click on it. No, I’ve failed. Now we have streetlights! Does the square with part of the pole of the streetlight count? It must count, so I click on it. Nope, another fail.
There has just GOT to be a better way.
That little passage really made me laugh—I totally recognized myself in that sense of dismay. But the underlying idea from @Pebblerubble is good; we need a protection mechanism that’s both user-friendly and relatively effective because the AI-based cyberflood is quickly going to become unbearable.
I’ve tried looking for a better way, but I’ve never found one. I doubt anyone else in this thread will find one, either.
On some forums I’ve seen a rotating question used instead of a captcha. Example: “What is the name of the software that this forum is about?” The important part is that the question rotates (it’s not always the same one).
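For discussion’s sake, a minimal sketch of what that rotating-question idea could look like server-side. Everything here is made up for illustration (the question pool, the accepted answers, the function names); it’s not taken from any real forum plugin:

```python
import random

# Hypothetical question pool. Each entry pairs a question with a set of
# accepted answers; the pool is what makes the challenge "rotate".
QUESTION_POOL = [
    ("What is the name of the software this forum runs on?",
     {"discourse"}),
    ("What genre of games does this community discuss?",
     {"interactive fiction", "if", "text adventures"}),
]

def pick_question():
    """Pick a random question so different signups see different challenges."""
    return random.choice(QUESTION_POOL)

def check_answer(question, answer):
    """Accept any of the listed answers, ignoring case and extra whitespace."""
    _, accepted = question
    return answer.strip().lower() in accepted

# A registration form would call pick_question() when rendering, then
# check_answer() on submit before creating the account.
```

The weakness, of course, is that the pool has to be large enough (and refreshed often enough) that spammers can’t just hard-code the answers — and, as noted below, current LLM-driven bots can probably answer trivia like this anyway.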
It now asks you for your (optional) pronouns when you register, which is accidentally acting as a honeypot for the spam accounts - they fill it with random characters just like the username. But I don’t know how we could make use of that to then block the account.
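One way a plugin *could* exploit that accidental honeypot — purely a sketch, and I don’t know of a Discourse plugin that actually does this — is to flag for manual review any registration whose optional fields look like random keyboard mashing rather than real words:

```python
import re

def looks_like_mashing(text, max_run=4):
    """Crude heuristic: long unbroken runs of consonants or digits
    (e.g. 'xkqjwzrf') rarely occur in real entries like 'she/her',
    so flag them for the moderator's manual-approval queue."""
    cleaned = re.sub(r"[^a-z0-9]", "", text.lower())
    if not cleaned:
        return False  # an empty optional field is perfectly normal
    # Search for a run of max_run or more non-vowel characters.
    return bool(re.search(r"[^aeiou]{%d,}" % max_run, cleaned))
```

This wouldn’t block anyone automatically — false positives are guaranteed with a heuristic this crude — but it could sort the approval queue so the obvious gibberish accounts surface first.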
Anyways, for now manually approving accounts is working well.
The one thing I considered suggesting is making a “sandbox” category (with a better name) and limiting level 0 users from posting anywhere but there. It would contain the “introducing ourselves” thread and require users to reply to the introduction thread to get trust level one. If it was AI or spam it’d be much more obvious if it was limited to that category.
I believe it’s doable, but would be a significant change for new users. Lurker accounts who never post would be deleted after (I think?) 30 days. We can change that setting also.
@AmandaB, I found the things annoying when I could see, but they became absolutely infuriating and dehumanizing after I went blind. And the audio versions, where they exist, aren’t any better since it’s often a pain in the anatomy to navigate captchas with a keyboard to find the “get audio challenge” button, then the play button, then find the text box to type what I think I heard, then find the verify button. Oh, and the audio they use tends to be whisper quiet if the volume on my computer is set for my screen reader, so I have to fight with web design that should get someone fired while my screen reader is yelling in my ear. And that’s not even getting into cases where you basically have to speedrun through the damn things because you’re barely given any time before the current challenge expires or when the audio refuses to play and I have to go through the hassle of downloading the audio as mp3 and installing something that can play an mp3 file.
Concur with Peter: IIRC, the mechanism he describes is already in use in the registration for the Spring Thing.
Best regards from Italy,
dott. Piergiorgio.
Ha, I feel the same way with Captcha. One of my short story ideas is a person who thinks they are a real human being, but they later discover they are actually a robot. The first inkling of discovery is when they can’t get past the Captcha test.
I believe there was some research in the news last year suggesting that robots are now better than humans at solving captchas, so the time for that idea may have passed…
Yeah, as others have mentioned AI is pretty good at captchas by now. (And image recognition, and answering riddles and–)
I do not think the time for this idea has come (if it ever will), but for discussion purposes:
the concept I see tossed around for "this is how we will have human-only forums in an age of ubiquitous genAI spambots" is requiring people to pay a nominal amount for an account. Think $1 per year or $5 for a lifetime forum account or something. (The concept is not that it would be _impossible_ for spambots to have access to the financial system, but that $1 per spambot is way above the break-even point for spam.)
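The break-even intuition can be made concrete with some toy numbers. Every figure below is a made-up assumption, just to show the shape of the argument:

```python
# All numbers are hypothetical, purely to illustrate the economics.
account_cost = 1.00          # dollars per spam account per year
revenue_per_message = 0.001  # expected dollars earned per spam post
messages_before_ban = 20     # posts a spam account gets out before removal

expected_revenue = revenue_per_message * messages_before_ban
profitable = expected_revenue > account_cost

# Here: 20 posts * $0.001 = $0.02 of expected revenue against a $1 cost.
# Even under generous assumptions the spammer loses money, which is the
# whole point of charging a nominal fee per account.
```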
Something something late stage capitalism, where the most identifiable difference between me and genAI is that I am willing to pay $1 a year to participate in a niche interest forum…
Isn’t the point of the CAPTCHA to detect your movements while you’re doing it, rather than the challenge itself (while also feeding data to further algorithms)? Tracking the cursor and typing for organic movement and for a lack of consistency (perfectly straight lines, etc.).
From my experience as a web user I don’t think so.
This is what Google’s reCAPTCHA version 2 does. It uses behavioural heuristics like those to detect “organic” behaviour and assign a score to it. If in doubt, it reverts to image recognition (but still uses data such as where exactly the clicks are detected, timings, and mouse movements during the challenge). Version 3 is entirely based on site interaction heuristics in the background and does not require an explicit challenge. I’m not qualified to make any judgement on their effectiveness.
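As a toy illustration of that kind of heuristic — emphatically *not* how reCAPTCHA works internally, since its scoring is proprietary — a server could score how “organic” a cursor trace looks by checking whether the sampled points fall on a perfectly straight line:

```python
def path_straightness(points):
    """Return a score in (0, 1]: 1.0 means the cursor moved in a
    perfectly straight line from start to end (suspiciously robotic);
    lower means the path wobbled, as human hands tend to do.
    `points` is a list of (x, y) cursor samples."""
    if len(points) < 3:
        return 1.0  # too few samples to judge
    (x0, y0), (x1, y1) = points[0], points[-1]
    direct = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    travelled = sum(
        ((bx - ax) ** 2 + (by - ay) ** 2) ** 0.5
        for (ax, ay), (bx, by) in zip(points, points[1:])
    )
    if travelled == 0:
        return 1.0
    # Ratio of straight-line distance to distance actually travelled:
    # equal only when every sample lies on the segment.
    return direct / travelled

robotic = path_straightness([(0, 0), (50, 50), (100, 100)])  # ≈ 1.0
human = path_straightness([(0, 0), (40, 60), (55, 45), (100, 100)])
```

Which also illustrates the objection raised below: anything this simple to compute is equally simple for a bot to fake by adding random jitter to its synthetic cursor path.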
This is really not my area, but I think right now mouse movements and cookies are both part of it. If those yield a bad verdict, then websites usually are configured to give an image recognition task or something. For example, reading this (bolding mine):
But anyway, whether it’s mouse movements or “which of these is a motorcycle” doesn’t make a difference to the concern I was trying to point out. If they can train a program to recognize erratic mouse movements, I bet they can train one to create erratic mouse movements. I haven’t heard of an easily-implemented-at-scale, performance-based “task” that humans reliably perform better at right now. (I mean, it may be that programs that create erratic mouse movements aren’t currently widespread, but now we’re talking about the speed at which spam bots are getting smarter.)
Now, there are the other, non-performance options also mentioned above: gating based on cookies (i.e., have you logged into a Google account recently on the same device and browser) or trying to gauge how trusted an IP address is. But that’s probably not a complete solution unless a website is willing to outright require Google login, or to exclude people using Tor or connecting from public wifi, etc.