In many application areas of machine learning (ML), trustworthiness is in high demand. Trustworthy ML systems must meet strict requirements with respect to privacy, explainability, fairness and robustness. Currently, these facets are addressed only in isolation: Explainable AI (XAI) aims to improve the explainability of ML models, while Privacy-Preserving Machine Learning (PPML) aims to strengthen data protection. Holistic approaches to trustworthiness that give equal weight to all facets are lacking. In addition, companies need concrete tools to address trustworthiness not only during the development but also during the operation of ML systems.
The overall goal of this research project is to investigate trustworthiness in the development and operation of ML systems and to derive tools and best practices that support companies in putting trustworthiness into practice. Privacy, explainability, fairness and robustness, as essential facets of trustworthiness, are treated in a balanced and comprehensive way to enable a multi-perspective design of these tools and best practices. The tools cover the entire ML lifecycle in order to explicitly account for the intertwining of development and operations (MLOps). Specifically, they aim to make trustworthiness evaluable, testable and monitorable. The tools and best practices are to be validated against numerous use cases provided by the corporate partner. Finally, the findings are expected to transfer to other ML applications in highly sensitive domains.
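As a purely illustrative sketch, not a description of the project's actual tooling, the following Python snippet shows one way a "testable" trustworthiness property could be expressed as an automated check in an MLOps pipeline: here a fairness metric (demographic parity difference) is computed and gated against a threshold. All function names, the example data and the 0.1 threshold are hypothetical assumptions.

```python
# Hypothetical illustration: a fairness check that could run as an automated
# test in an MLOps pipeline. All names and the 0.1 threshold are assumptions,
# not part of the project description.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Largest gap in positive-prediction rates between any two groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def test_model_fairness():
    # In a real pipeline these would come from a held-out evaluation set.
    y_pred = np.array([1, 0, 1, 0, 1, 0, 0, 1])
    group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
    gap = demographic_parity_difference(y_pred, group)
    assert gap <= 0.1, f"Demographic parity gap {gap:.2f} exceeds threshold"
```

Run as part of a test suite (e.g., with pytest), such a check would fail the pipeline whenever a newly trained model exceeds the fairness tolerance; analogous gates could monitor privacy or robustness metrics.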