The case for unlearning that removes information from LLM weights — AI Alignment Forum, 2024-10-15