It's early days, but I found this article to be very interesting. In this case, an AI model was trained on patient feedback about knee pain rather than on an established methodology. The old methodology did work, but it had problems with certain populations.
I haven't thought that much of our ML/AI (machine learning/artificial intelligence) work is particularly smart. The algorithms learn well and can match or out-perform humans, but these systems are really mimicking what humans do. They can be more reliable and definitely more scalable, but they're doing what we humans already do, rarely leaping ahead.
Often we train these models on previous data and results from human experts. However, what we think of as expert advice, and what many people accept, is often flawed in and of itself. Humans typically work with a small set of data and experiences. They find patterns and create a solution that works, but not always as well as we'd like, especially as the solution is applied to a wider variety of situations.
In this case, researchers looked at alternative methodologies and used AI/ML to test whether a different solution might be better. This won't replace the current methodology right away, but it might prompt more doctors and researchers to rethink how they approach this particular issue.
This might be one area where AI/ML truly helps humans move forward. By looking for gaps, oversights, and other problems in our existing methods, the computer might spur humans to make new leaps that drive us ahead.