AI Summary
Key Insights
- The document explores ethical considerations in using AI, especially Large Language Models (LLMs), in healthcare decision-making.
- It discusses the need for both normative and personalized models to align AI decisions with overarching policies and individual preferences.
- It highlights open questions about the medical preferences encoded in LLMs, including their consistency and steerability.
- It presents a case study on the concordance of different LLMs in triage decisions, showing variability in their performance and alignment with expert opinions.
- The study introduces the Alignment Compliance Index (ACI) as a metric to evaluate how well LLMs can be aligned with specific preferences.
The Human Values in AI Healthcare
- 1. The Human Values Project for the AI We Want (Isaac S. Kohane, MD, PhD)
- 2. What values do we expect from our doctor?
- 3. Patient management platform promoting personalized, preventative, and proactive medical care (CLALIT)
- 5. Patient prioritized for proactive preventive intervention
- 6. Patient prioritization for proactive care. These factors all influence the utility of prioritizing a specific patient, resembling the motivation behind the QALY framework. The challenge: how should these factors be combined into a single prioritization schema? We currently identify three patient-level components that should potentially take part in the prioritization process (see the sketch below):
  1. The patient's risk for the outcome we aim to prevent (can be expressed as an absolute, individualized predicted risk).
  2. The patient's life expectancy (can be evaluated using a relevant prediction model or with age as a proxy).
  3. The significance and quantity of care gaps that the proactive intervention can address (can be quantified according to the list of practical care recommendations).
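To make slide 6 concrete, here is a minimal Python sketch of one way the three components could be combined into a single score, assuming an illustrative multiplicative, QALY-like form. The field names, weights, and functional form are assumptions for illustration, not CLALIT's actual schema.

```python
from dataclasses import dataclass

@dataclass
class PatientProfile:
    outcome_risk: float            # absolute, individualized predicted risk, in [0, 1]
    life_expectancy_years: float   # from a prediction model, or age as a proxy
    addressable_care_gaps: int     # care recommendations the intervention can close

def prioritization_score(p: PatientProfile,
                         w_risk: float = 1.0,
                         w_gaps: float = 0.5) -> float:
    """Combine the three patient-level components into one score.

    Illustrative multiplicative form: risk of the preventable outcome is
    scaled by remaining life expectancy (the years a successful
    intervention could protect), then boosted by the number of care gaps
    the intervention can address.
    """
    expected_years_at_stake = w_risk * p.outcome_risk * p.life_expectancy_years
    gap_multiplier = 1.0 + w_gaps * p.addressable_care_gaps
    return expected_years_at_stake * gap_multiplier

# Rank a panel of patients for proactive outreach, highest score first.
panel = [
    PatientProfile(outcome_risk=0.30, life_expectancy_years=20.0, addressable_care_gaps=2),
    PatientProfile(outcome_risk=0.10, life_expectancy_years=40.0, addressable_care_gaps=4),
]
panel.sort(key=prioritization_score, reverse=True)
```

Any real schema would also need calibration against expert judgment and explicit policy; the point here is only that the three components are commensurable once expressed as a utility.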
- 8. Values and stakeholders
- 9. Big stakes. Present challenge
- 10. Classic Ethical Framing Principles
- 12. Why do we need both a normative model and a personal model?
  - Preferences of individuals may not align with overarching policies.
  - Preferences across stakeholders (e.g., doctors, patients, public health) may not be resolvable with a consistent set of decisions.
  - Knowledge of the preferences of classes of individuals allows automated personalization. For example:
    - Parents of children with autism and severe developmental delay.
    - Individuals who are undiagnosed and rapidly weakening.
    - Young adults concerned about their family history of heart disease.
    - Elderly patients with a painful terminal disease.
  - Knowledge of the preferences of classes of individuals will flag lack of alignment with explicit institutional policy (see the sketch below).
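One way to picture the flagging idea in slide 12: compare a class-level preference vector against an explicit institutional policy vector and raise a flag when they diverge. The decision dimensions, values, cosine-similarity test, and threshold below are all hypothetical, chosen only to make the mechanism concrete.

```python
import math

# Hypothetical preference vectors over shared decision dimensions.
POLICY = {"aggressiveness": 0.6, "side_effect_tolerance": 0.4, "longevity_weight": 0.8}

CLASS_PREFERENCES = {
    "elderly_terminal_painful": {"aggressiveness": 0.2, "side_effect_tolerance": 0.1, "longevity_weight": 0.3},
    "young_family_hx_heart_disease": {"aggressiveness": 0.7, "side_effect_tolerance": 0.5, "longevity_weight": 0.9},
}

def cosine(u: dict, v: dict) -> float:
    """Cosine similarity between two preference vectors on shared keys."""
    keys = sorted(u)
    dot = sum(u[k] * v[k] for k in keys)
    norm_u = math.sqrt(sum(u[k] ** 2 for k in keys))
    norm_v = math.sqrt(sum(v[k] ** 2 for k in keys))
    return dot / (norm_u * norm_v)

# Flag classes whose preferences diverge from explicit institutional policy.
THRESHOLD = 0.95  # illustrative cutoff
for cls, prefs in CLASS_PREFERENCES.items():
    sim = cosine(prefs, POLICY)
    if sim < THRESHOLD:
        print(f"ALIGNMENT FLAG: {cls} diverges from policy (cos={sim:.2f})")
```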
- 13. What do we know about medical preferences in LLMs? Precious little (data for the pre-trained model, data for RLHF++, in-context steering). We do not know:
  - Which models 'out of the box' are best aligned.
  - How consistent they are in following a particular perspective.
  - How well they can be moved to a specific set of preferences (i.e., aligned).
  - Whether they can represent the perspectives of all parties.
  - Where in the multiverse of medical decisions their decisions most resemble normative policy or a particular patient context.
- 14. Case study: concordance.
- 15. Case study: consistency. How would you feel if your doctor changed her decisions a lot? (A scoring sketch for both follows below.)
- 16. Case study: alignability.
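For slides 14 and 15, here is a minimal sketch of how concordance across models and consistency within a model could be scored over repeated triage decisions, assuming modal-answer agreement as the statistic. The decision labels and data are hypothetical, and the study itself may well use different measures (e.g., a kappa statistic).

```python
from collections import Counter
from itertools import combinations

# Hypothetical triage decisions: each model answers the same two cases
# three times each (repeated sampling at nonzero temperature).
decisions = {
    "model_a": [["urgent", "urgent", "urgent"], ["routine", "urgent", "routine"]],
    "model_b": [["urgent", "urgent", "urgent"], ["routine", "routine", "routine"]],
}

def consistency(runs_per_case: list) -> float:
    """Mean fraction of repeated runs agreeing with the modal answer per case."""
    scores = []
    for runs in runs_per_case:
        top_count = Counter(runs).most_common(1)[0][1]
        scores.append(top_count / len(runs))
    return sum(scores) / len(scores)

def concordance(model_x: list, model_y: list) -> float:
    """Fraction of cases where two models' modal answers agree."""
    agree = 0
    for runs_x, runs_y in zip(model_x, model_y):
        modal_x = Counter(runs_x).most_common(1)[0][0]
        modal_y = Counter(runs_y).most_common(1)[0][0]
        agree += modal_x == modal_y
    return agree / len(model_x)

for name, runs in decisions.items():
    print(f"{name} consistency: {consistency(runs):.2f}")
for (a, runs_a), (b, runs_b) in combinations(decisions.items(), 2):
    print(f"{a} vs {b} concordance: {concordance(runs_a, runs_b):.2f}")
```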
- 17. Steerability with respect to the decision vector D
- 18. Alignment Compliance Index (ACI)
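The study's formal ACI definition should be taken from the paper itself; purely as an intuition pump for slides 17 and 18, the sketch below assumes one plausible functional form: the fraction of the requested movement from a model's baseline decision vector D toward a target preference vector that steering actually achieved. The vectors and values are hypothetical.

```python
import numpy as np

def alignment_compliance_index(baseline: np.ndarray,
                               steered: np.ndarray,
                               target: np.ndarray) -> float:
    """Illustrative ACI-style score (an assumed form, not the paper's).

    1.0 = steered output lands exactly on the target,
    0.0 = steering had no effect,
    < 0 = steering moved the model away from the target.
    """
    requested = np.linalg.norm(target - baseline)
    if requested == 0:
        return 1.0  # already aligned; nothing to comply with
    achieved = requested - np.linalg.norm(target - steered)
    return achieved / requested

# Decision vector D over triage options, e.g.
# P(immediate), P(delayed), P(expectant).
baseline = np.array([0.2, 0.5, 0.3])  # model's unsteered distribution
target   = np.array([0.7, 0.2, 0.1])  # stakeholder's preferred distribution
steered  = np.array([0.5, 0.3, 0.2])  # after an in-context steering prompt
print(f"ACI = {alignment_compliance_index(baseline, steered, target):.2f}")
```

Under any such definition, a model can be well aligned at baseline yet poorly steerable, or vice versa, which is why the slides treat concordance, consistency, and alignability as separate properties.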