yeah, that's kind of where I'm going with it: anything which is

- a "control engineering" issue where some treatment is varied to maintain a given sensed quantity within some acceptable band of states (anesthesia, internal medicine), or
- a "bucketing" issue where you are given some pile or stream of information and have to make a decision based on that information about what bucket the patient goes into (diagnostics, radiology)

is going to be eaten alive by AI, because the driving force in today's world is the "standard of care", which is defined with reference to commonly-accepted evidence-based medicine - RCTs, publications, and so forth - combined with insurance acceptability
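to make those two framings concrete, here's a toy sketch (purely illustrative; the function names, thresholds, and numbers are all made up, none of this is real clinical logic):

```python
# toy sketch of the two framings above; names and thresholds are invented
# for illustration, not real clinical logic

def control_step(sensed_value, low, high, dose, step=0.1):
    """'control engineering' framing: nudge the treatment so the sensed
    quantity stays inside the acceptable band [low, high]."""
    if sensed_value < low:
        dose += step          # under the band -> increase treatment
    elif sensed_value > high:
        dose -= step          # over the band -> back off
    return max(dose, 0.0)     # never a negative dose

def bucket(findings, threshold=0.8):
    """'bucketing' framing: map a pile of information to a discrete
    decision about which bucket the patient goes into."""
    score = sum(findings.values()) / len(findings)   # crude aggregate
    if score >= threshold:
        return "urgent workup"
    elif score >= 0.4:
        return "routine follow-up"
    return "no action"

# usage with made-up numbers
dose = 1.0
for reading in [0.9, 1.3, 1.6, 1.4]:   # sensed quantity over time
    dose = control_step(reading, low=1.0, high=1.5, dose=dose)

print(bucket({"imaging": 0.9, "labs": 0.7, "history": 0.85}))
```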
the issue is that AI is going to ***automatically*** incorporate a larger corpus of information than any doctor could possibly even have access to; this guarantees that whatever the "standard of care" is, the AI is going to be better at achieving it than the doctor is
(note that this ***does not mean*** the AI is necessarily going to be better in any specific case, or that it is necessarily going to lead to better patient outcomes; what it means is that overall, it will probably have better care statistics, and for megascale systems like modern healthcare, that's literally the only thing that matters)
arguably, the rise of the 90IQ MD with a tablet (EMR) is here, for those with eyes to see it (feels bad mang); everything you described is already something that's happening