CFP @ Trustworthy AI Workshop 2023

Recent years have seen an overwhelming body of work on fairness and robustness in machine learning (ML) models. This is not unexpected: these concerns grow increasingly important as ML models are used to support decision-making in high-stakes applications such as mortgage lending, hiring, and diagnosis in healthcare. Trustworthy AI aims to provide an explainable, robust, and fair decision-making process. Transparency and security also play a significant role in improving the adoption and impact of ML solutions.

We need to assess potential disparities in outcomes that can be carried into and deepened by our ML solutions. In particular, when solutions are built for developing countries, data and models are often imported from external sources, which raises potential security issues. When the data and models diverge from the population at hand, the decision-making process also loses transparency and explainability. This workshop aims to bring together researchers, policymakers, and regulators to discuss ways to ensure security and transparency while addressing fundamental problems in developing countries, particularly when data and models are imported and/or collected locally with little attention to ethical considerations and governance guidelines.

We’re looking for extended abstracts, to be presented as contributed talks (10 to 15 minutes), related to the workshop themes outlined above.

If you’re interested in presenting your work at the TrustAI Workshop, please submit your extended abstract here before July 28th, 2023 (extended from the original deadline of July 21st, 2023). Extended abstracts must be formatted using the DLI Author Kit.

This year we will use CMT to manage submissions. If this is your first time using the platform, you can watch a great tutorial here on how to create an account and make a submission.