E4: #EUAIRegs Focus on standardization to mitigate bias

ATGO AI | Accountability, Trust, Governance and Oversight of Artificial Intelligence | - A podcast by ForHumanity Center

ATGO AI is a short-form podcast from ForHumanity. This series is about the recent draft EU regulations for AI. ForHumanity fellows, leading international experts on AI, are interviewed by international hosts and share their thoughts on the regulations. The draft #EUAIRegs mandate the classification of high-risk AI and require specific approaches to ensure that such AI systems do not harm people. The regulation proposes penalties of up to 6% of global revenues or 30 million euros for violations.

Dr. Shea Brown is a researcher, lecturer, speaker, and consultant in AI ethics, machine learning, and astrophysics. He earned his Ph.D. in astrophysics from the University of Minnesota. He is the founder and CEO of BABL AI and a current ForHumanity fellow focusing on algorithmic auditing and AI governance.

In this episode, Shea discusses his perspectives on the EU AI regulations. He argues that a focus on standardization to mitigate bias is crucial, and that these regulations will push other countries, such as the USA, and the rest of the world to continue developing and evolving their own AI regulations.

Visit us at https://forhumanity.center/ to learn more --- Send in a voice message: https://podcasters.spotify.com/pod/show/ryan-carrier3/message