In 2020, No Food for Thought asked whether AI's next achievement would be unlimited trolling. As far as I know, large-scale peer production has so far remained mostly spared from such disruption.
That's not because AI hasn't "progressed" in recent years. It's just that, so far, it has mostly been directed towards social engineering and spamming. Teachers have to worry about AI more than peer producers do.
Very little is "lacking" for this to change very quickly, though. And more importantly, there is still virtually nothing preventing it from changing.
The conclusion was all too right. Today, we can say that the time for globally verifiable identities was years ago. And unfortunately, very little progress has been made since.
Artificial intelligence is not the problem. It is our neglect and lateness which have condemned us to years of inefficiency, multi-level disruption, distrust, and, most problematically, out-of-control disinformation. As long as it lacks vision and unity, our species will keep going from one crisis to more crises.
"Trolling" might not be the best name, but it's now clear that artificially generated text is starting to hurt peer production, with Stack Overflow possibly the most obvious victim, and still no solution in sight.
The advent of bug bounties made profitable what could have remained a predictable nuisance: bogus security vulnerability reports. It appears AI text generation is making low-quality reports look credible, wasting even more developer time on invalid reports, which may now be the majority.