# OpenBox - Dangers of watermarked images in training

ATGO AI | Accountability, Trust, Governance and Oversight of Artificial Intelligence | - A podcast by ForHumanity Center

OPENBOX aims to make open problems easier to understand, which in turn helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems across a variety of areas of Artificial Intelligence and Machine Learning, and distill a simplified understanding of these problems. The interviews are published as a podcast series for curious minds looking to solve real-world problems.

This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization dedicated to minimizing the downside risks of AI and autonomous systems. It develops criteria for independent audits of AI systems. To learn more, visit https://forhumanity.center/.

Today, we have with us Kirill. He is a Machine Learning Ph.D. student at Technische Universität Berlin and a member of the UMI lab, where he works on Interpretability and Explainable AI. He studies abstractions and representations in Deep Neural Networks. He is also a passionate photographer. We will be talking about his recent paper "Mark My Words: Dangers of Watermarked Images in ImageNet", which he presented at the European Conference on Artificial Intelligence a few months ago.

This is part 1 of the podcast.