The Turing Test, proposed by Alan Turing, was intended as a test of an intelligent system: could a human determine whether the other party in a conversation was a machine? This was an interesting way of imagining how powerful computers might become and the kinds of answers they might give to a human. Interestingly enough, Turing didn't argue about the correctness of the answers, just that they appeared to come from a human.
In some sense, I wonder how many people would have been fooled by the GPT-3 bot on Reddit. It posted comments on a variety of threads for a week. You can look through the posts by the "thegentlemetre" user, but this one caught my eye, and as I read it, I was surprised by how much it looks like things I've seen posted on the Internet.
Is this AI bot intelligent? I don't know, but I do think the quality of comments and posts on all sorts of threads and articles has declined over the years. Maybe it's humans who are becoming worse at online communication rather than computers getting smarter. Really, I think both things are happening.
AI/ML systems are getting better at mimicking what humans do, and I suspect that in many cases, especially in small samples, they can fool many people, perhaps most. That's disconcerting, especially since I already feel many people are worse versions of themselves online, without direct feedback and social cues available. Having bots add to the volume of poor communication and comments doesn't seem useful to society in general.
While I do think AI systems can dramatically help us with mundane tasks and tedious work, I also think there can be problems if they become rigid in their actions, without allowing for some flexibility. Humans have discretion, and while they might not use it fairly, or even in ways their organizations approve of, they are flexible. Seeing these posts, I wonder if AIs can learn to be flexible as well. I think they can.