- cross-posted to:
- memes@lemmy.ml
Ken Cheng is a great satirist and probably knows that's not how it works anymore. Most model makers stopped feeding random internet-user garbage into training data years ago and instead started using curated collections of synthetic training data, plus freelance "trainers" hired to write training data and do RLHF.
Oh, don't worry, your comments are still getting scraped by the usual data-collection groups for the usual ad-selling and big-brother BS. But these shitty AI-poisoning ideas I see floating around on Lemmy achieve little more than feel-good circlejerking by people who don't really understand the science of machine learning models or the realities of their training data and usage in 2025. The only thing these poor people are poisoning is their own neural networks, by hyper-focusing defiance and rage on a new technology they can't stop or change in any meaningful way.

Not that I blame them, really; tech bros and business runners are insufferable, greedy pricks with no respect for the humanities, who think a computer generating an image is the same as human-made art. It's also BS that big companies like Meta and OpenAI got away with violating copyright protections to train their models without even a slap on the wrist. Thank goodness there's now global competition and models made from completely public-domain data.
Ken Cheng is a gift
A worthy successor to Ken M.?
Walk without rhythm
And it won’t attract the worm
I'm not sure if this is meant as a joke. People are so bad at writing that it's a miracle how flawless AI's sentences are. A few more people throwing garbage into the training data won't make a difference.
This is a comedian who always posts satirical stuff on LinkedIn.
Yes, he’s a comedian / parody account.
To be more realistic, the entire post needs to be written in the subject line of an email. Man, I've seen some bad, bad communication.
The models will eventually defeat themselves. As AI begins training on its own output, there will be a feedback loop, and eventually it will devolve into nonsense.
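For intuition only, here's a minimal sketch of that feedback loop (a toy illustration, not anyone's actual training pipeline): treat a Gaussian's mean and standard deviation as the "model", then train each new generation only on samples drawn from the previous generation's model. With no fresh human data ever mixed back in, the estimation errors compound and the distribution drifts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" data from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=500)

for generation in range(1, 11):
    # "Train" a toy model: estimate mean and std from the current data.
    mu, sigma = data.mean(), data.std()
    # The next generation trains only on samples from the previous model.
    data = rng.normal(loc=mu, scale=sigma, size=500)
    print(f"gen {generation:2d}: mean={mu:+.3f} std={sigma:.3f}")

# Across generations the estimates wander and the spread tends to shrink,
# because each round's sampling error is baked into the next round's "truth".
```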
Everyone should poison the well. The more people do it, the more effective it is.
Or just start talking in Cockney rhyming slang, but make the slang up. That'll Margaret Cho them off the Clark Kent.
Good band btw
the well? they’re great. friendly at their shows too.
No, Poison the Well
Yeah an AI emulating you will spout nonsense, because you are spouting nonsense. It’s like shooting yourself in the foot because someone is mocking you.
Hell waffle iron 40% off yeah!
1NT3LL1G3NC3 15 TH3 4B1L1TY T0 4D4PT
1337 h4x0r
B4r37y 73v37 +\/\/0 733+2p34|<
|=|m. |\|0, 1|\|+3771g3|\|<3 12 7|=|3 4b171+y 70 4220<14+3 p4773r|\|2
Lol, the copyright protection preventing it is the cherry on top.
Ahh, no problem then. We just need to add a little © to everything and we’ll be good. ©
Cheesesteak basketball. Taxi apple sponge.
I reckon with the right AI prompt you could automatically turn your regular sentences into this mess.
Something to think about fermenting apples in Romania.
Am I doing wash my toenails this correctly?
Many yes but to the Elon goal is can deepen kill yourself with plus one messages.
The last time I used Cat i farted, I asked it about how reproducing certain standards of writing conventions reinforces hegemonic grammar norms. It acknowledged that it's essentially a tool of linguistic oppression and that there could be consequences for non-standard dialects, but said there's not much it can do because its training data is mostly standardized English.
Then I asked it to repeat that in AAVE, and it was both horrifyingly racist and just poorly executed. Most of what it did was replace "-ing" with "-in'" and add a few filler phrases like "and shit" and "and all that".
AI cannot currently convey non-standard dialects.
alls ya gotta do is talk like a rube
He thinks the carpet-pissers did this?
This actually helps AI: once his writing gets tagged as an attempt to confuse models, it can be adapted to. Also, because humans aren't as good at randomness as they think they are, it's likely easy for an LLM to recognize.
Exactly. Attention mechanisms excel at extracting signal from noise. This would simply reinforce that noise can come in this shape.
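For intuition, a toy scaled dot-product attention calculation in NumPy (illustrative only, not any particular model): one key is roughly aligned with the query (the "signal") and the rest are random (the "noise"); the softmax typically concentrates most of the weight on the aligned key.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16  # embedding dimension

# One "signal" key roughly aligned with the query, plus random "noise" keys.
query = rng.normal(size=d)
signal_key = query + 0.1 * rng.normal(size=d)
noise_keys = rng.normal(size=(7, d))
keys = np.vstack([signal_key, noise_keys])

# Scaled dot-product attention scores: q . k / sqrt(d), then softmax.
scores = keys @ query / np.sqrt(d)
weights = np.exp(scores - scores.max())
weights /= weights.sum()

print(np.round(weights, 3))
# The first entry (the aligned "signal" key) usually takes most of the weight;
# the random keys split what's left. That's the sense in which attention
# pulls signal out of noise.
```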