Virginia Dignum, Umeå University (Sweden)
The last few years have seen enormous growth in the capabilities and applications of Artificial Intelligence (AI). Hardly a day goes by without news about technological advances and the societal impact of AI. Expectations of AI's potential to help solve many current problems and to support the well-being of all are high, but concerns are also growing about the impact of AI on society and human well-being. In response, many principles and guidelines have been proposed for trustworthy, ethical, or responsible AI.
In this talk, I argue that ensuring responsible AI is about more than designing systems whose behavior is aligned with ethical principles and societal values. It is, above all, about the way we design them, why we design them, and who is involved in designing them. This requires novel theories and methods to put in place the social and technical constructs that ensure that AI systems are developed and used responsibly and that their behavior can be trusted.
Virginia Dignum is Professor of Social and Ethical Artificial Intelligence at Umeå University, Sweden, and is associated with TU Delft in the Netherlands. She is the scientific director of WASP-HS, the Wallenberg Program on Humanities and Society for AI, Autonomous Systems and Software. She is a Fellow of the European Association for Artificial Intelligence (EurAI), a member of the European Commission High Level Expert Group on Artificial Intelligence, of the World Economic Forum’s Global Artificial Intelligence Council, and of the Executive Committee of the IEEE Initiative on Ethically Aligned Design, and a founding member of ALLAI-NL, the Dutch AI Alliance. She is the author of “Responsible Artificial Intelligence: Developing and Using AI in a Responsible Way”, published by Springer in 2019.
She has a PhD in Artificial Intelligence from Utrecht University, and in 2006 she was awarded the prestigious Veni grant by the NWO (Dutch Organization for Scientific Research). She is a well-known speaker on the social and ethical impacts of Artificial Intelligence, is a member of the reviewing boards of all major journals and conferences in AI, and has published over 180 peer-reviewed papers.
Sascha Ossowski, University Rey Juan Carlos (Spain)
Many challenges in today’s society can be tackled by open distributed software systems. Instilling coordination in such systems is particularly demanding, as usually only part of them can be directly controlled at runtime. In this talk, I will first introduce the agreement technologies (AT) approach to the development of open multiagent systems. I will argue that, by adequately combining models and methods from the AT sandbox, one can achieve an appropriate level of coordination in domains with different degrees of openness, and I will back this claim with various application examples.
Sascha Ossowski is a full professor of computer science and director of the Centre for Intelligent Information Technologies (CETINIA) at University Rey Juan Carlos in Madrid. He received an MSc degree in informatics from the University of Oldenburg (Germany) and a PhD in artificial intelligence from TU Madrid (Spain). His research focuses on models and mechanisms for coordination in all sorts of agent systems and environments. He was co-founder and first chair of the board of directors of the European Association for Multiagent Systems (EURAMAS), and chaired the European COST Action on Agreement Technologies. He is an emeritus member of the board of directors of the International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS) and belongs to the steering committee of the ACM Annual Symposium on Applied Computing (SAC).