Keynote Announcement

Keynote: A Quantitative Approach to Measuring Responsible AI, Reflections from RAIN by Professor Jerry John Kponyo

Bio: Prof. Jerry John Kponyo is the Dean of the Quality Assurance and Planning Office of the Kwame Nkrumah University of Science and Technology (KNUST) under the Vice-Chancellor’s Office, and the former Dean of the Faculty of Electrical and Computer Engineering, KNUST. Prior to becoming Dean of that faculty, he was Head of the Electrical Engineering Department. He is currently the Project Lead of the KNUST Engineering Education Project (KEEP), a $5.5 million Africa Center of Excellence (ACE) Impact project sponsored by the World Bank with a focus on digital development and energy. He is Co-Founder of the Responsible AI Network (RAIN) Africa, a collaborative effort between KNUST and TUM Germany. Between 2016 and 2019 he was a visiting professor at ESIGELEC, France, on a staff mobility programme, where he taught postgraduate courses in Business Intelligence and conducted research with ESIGELEC staff. He has done extensive research in IoT, intelligent systems, and AI, and currently leads the Emerging Networks and Technologies Research Laboratory at the Faculty of Electrical and Computer Engineering, KNUST, which focuses on digital development technologies research. Prof. Kponyo’s Ph.D. research focused on applying AI to a traffic problem in Vehicular Ad hoc Networks (VANETs). He has published over 50 articles in refereed journals and conference proceedings, and is a member of the Ghana Institution of Engineers. He is currently the coordinator of the West Africa Sustainable Engineering Network for Development (WASEND), and the PI and Scientific Director of the Responsible Artificial Intelligence Lab, a CA$1 million grant sponsored by IDRC and GIZ. He is also PI for the Partner-Afrika project, which is sponsored by BMZ.

Keynote Announcement

Keynote: Larger isn’t always better by Abeba Birhane

Bio: Abeba Birhane is a cognitive scientist researching human behaviour, social systems, and responsible and ethical Artificial Intelligence. Her interdisciplinary research sits at the intersections of embodied cognitive science, complexity science, critical data and algorithm studies, and afro-feminist theories. Her work includes audits of computational models and large-scale datasets. Birhane is a Senior Fellow in Trustworthy AI at the Mozilla Foundation and an Adjunct Assistant Professor at the School of Computer Science and Statistics at Trinity College Dublin, Ireland.

Keynote Announcement

Keynote: Training Dynamics and Trust in Machine Learning by Nicolas Papernot

Bio: Nicolas Papernot is an Assistant Professor of Computer Engineering and Computer Science at the University of Toronto. He also holds a Canada CIFAR AI Chair at the Vector Institute and is a faculty affiliate at the Schwartz Reisman Institute. His research interests span the security and privacy of machine learning. Some of his group’s recent projects include proof-of-learning, collaborative learning beyond federation, dataset inference, and machine unlearning. Nicolas is an Alfred P. Sloan Research Fellow in Computer Science. His work on differentially private machine learning received an Outstanding Paper Award at ICLR 2022 and a Best Paper Award at ICLR 2017. He serves as an associate chair of the IEEE Symposium on Security and Privacy (Oakland) and an area chair of NeurIPS. He co-created and co-chaired the first IEEE Conference on Secure and Trustworthy Machine Learning (SaTML) in 2023. Nicolas earned his Ph.D. at the Pennsylvania State University, working with Prof. Patrick McDaniel and supported by a Google PhD Fellowship. Upon graduating, he spent a year at Google Brain, where he still spends some of his time.

Keynote Announcement

Keynote: TBD by Aisha Walcott

Bio: Dr. Aisha Walcott is a Senior Staff Research Scientist and Head of Google Research Kenya. She has over a decade of experience working in Africa and leading teams to develop innovative technologies that leverage AI and computing to address some of Africa’s most pressing challenges. Her current work focuses on the challenges of Africa’s food systems and explores how advances in AI tools can improve food security by building resilience in those systems.

Prior to her time at Google, she was a Senior Technical Staff Member at IBM Research Africa, where she led projects developing AI tools for global health, water management and access, and transportation. She currently serves on the board of the African Institute for Mathematical Sciences (AIMS) doctoral research program in data science, and is a Workshops co-chair for the International Conference on Learning Representations 2023 (ICLR’23), to be held in Rwanda in May 2023. Dr. Walcott earned her PhD in the Electrical Engineering and Computer Science Department at MIT, with a focus on robotics.

CFP

CFP @ Trustworthy AI Workshop 2023

Recent years have seen an overwhelming body of work on fairness and robustness in machine learning (ML) models. This is not unexpected: these concerns become increasingly important as ML models are used to support decision-making in high-stakes applications such as mortgage lending, hiring, and diagnosis in healthcare. Trustworthy AI aims to provide an explainable, robust, and fair decision-making process. Transparency and security also play a significant role in improving the adoption and impact of ML solutions.

We need to assess potential disparities in outcomes that can be carried into, and deepened by, our ML solutions. In particular, solutions for developing countries often rely on data and models imported from external sources, which introduces potential security risks. When such data and models diverge from the population at hand, the decision-making process also loses transparency and explainability. This workshop aims to bring together researchers, policymakers, and regulators to discuss ways of ensuring security and transparency while addressing fundamental problems in developing countries, particularly when data and models are imported and/or collected locally with little attention to ethical considerations and governance guidelines.
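As one concrete illustration of the kind of quantitative outcome audit described above, the sketch below measures the gap in positive-prediction rates between demographic groups. It is our own minimal illustration rather than a prescribed method, and the column names group and prediction are hypothetical placeholders; it shows just one simple fairness diagnostic among many.

```python
# Minimal sketch of an outcome-disparity audit (demographic parity gap).
# The column names "group" and "prediction" are hypothetical placeholders.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame,
                           group_col: str = "group",
                           pred_col: str = "prediction") -> float:
    """Absolute difference between the highest and lowest
    positive-prediction rates across groups."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

# Toy example: binary model predictions for two groups.
toy = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "prediction": [1,   1,   0,   1,   0,   0],
})
print(f"Demographic parity gap: {demographic_parity_gap(toy):.2f}")  # 0.33
```

A gap of 0 means all groups receive positive predictions at the same rate; larger values flag disparities worth investigating, though no single metric by itself settles whether a model is fair.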

We’re looking for extended abstracts, to be presented as contributed talks (10 to 15 minutes), on topics related to:

  • Audit techniques for data and ML models.
  • Advances in algorithms and metrics for robust ML.
  • Uncertainty quantification techniques and fairness studies.
  • Applications and research in data and model privacy/security.
  • Methodologies or case studies for explainable and transparent AI.

If you’re interested in presenting your work at the TrustAI Workshop, please submit your extended abstract here before July 28th, 2023 (extended from the original deadline of July 21st, 2023). The short paper must be formatted using the DLI Author Kit.

This year we will use CMT to manage submissions. If this is your first time using the platform, you can watch a great tutorial here on how to create an account and make a submission.