Today, only 6% of business leaders take hiring action to ensure the responsible development of machine learning (ML). Experts doubt that ethical design practices for AI systems will be widely adopted in the next decade. Data scientists and their managers seem happy to collect salaries on par with doctors and civil engineers, but often without the same basic harm-reduction obligations. Given all this, perhaps it’s not surprising that the Partnership on AI’s incident database now contains over 1,200 reports of public AI system failures. With public mistrust and regulatory scrutiny growing, it’s time for ML to grow up! But how exactly do we govern such a dynamic technology?

To help answer this question, I’ll provide motivations, general best practices, and forward-looking commentary for ML model governance. For motivation, we’ll take a tour of public AI incidents, from the humorous to the deadly, and discuss who suffers the most when data scientists move fast and break things. With real-world failures covered, we’ll turn to established best practices from model risk management and cybersecurity, and discuss why combining common standards from these fields is a solid starting point for ML governance. We’ll also touch on why so many “ethical AI” or “responsible AI” programs are struggling. Finally, we’ll consider the most realistic paths from today’s AI wild west to a future of mature, regulated ML.