Virginia Dignum
Umeå University (Sweden)

Alessio Lomuscio
Imperial College London (UK)

Sascha Ossowski
Universidad Rey Juan Carlos (Spain)

Virginia Dignum
Developing and using AI responsibly

Virginia Dignum, Umeå University (Sweden)


Abstract

The last few years have seen huge growth in the capabilities and applications of Artificial Intelligence (AI). Hardly a day goes by without news about technological advances and the societal impact of the use of AI. Expectations are high for AI's potential to help solve many current problems and to support the well-being of all, but concerns are also growing about the impact of AI on society and human well-being. Many principles and guidelines have now been proposed for trustworthy, ethical, or responsible AI.

In this talk, I argue that ensuring responsible AI is more than designing systems whose behavior is aligned with ethical principles and societal values. It is, above all, about the way we design them, why we design them, and who is involved in designing them. This requires novel theories and methods for putting in place the social and technical constructs that ensure AI systems are developed and used responsibly and that their behavior can be trusted.

Alessio Lomuscio
Towards formally verifying forthcoming multi-agent systems

Alessio Lomuscio, Imperial College London (UK)


Abstract

Over the past 15 years, methods for the formal verification of multi-agent systems, such as model checking, have achieved a considerable degree of sophistication and increasingly complex systems have been analysed.

However, two design paradigms are rapidly emerging in applications of agent-based systems: swarming and machine learning (ML). In swarm-based systems, such as swarm robotics or IoT systems, the number of agents is not known at design time and may vary at run-time. In ML-based agent systems, the agents are not programmed in an agent-based programming language but are learned from data. Traditional verification methods cannot deal with systems with these characteristics.

In this talk I will summarise some of the recent work in our lab towards the verification of agent-based systems in which the number of components is unbounded at design time and the agents are synthesised via ML-based methods.

(The talk is based on joint work with several members of the Verification of Autonomous Systems research group at Imperial College London).

Sascha Ossowski
Building multiagent applications with agreement technologies

Sascha Ossowski, Universidad Rey Juan Carlos (Spain)


Abstract

Many challenges in today’s society can be tackled by open distributed software systems. Instilling coordination in such systems is particularly demanding, as usually only part of them can be directly controlled at runtime. In this talk, I will first introduce the agreement technologies (AT) approach to the development of open multiagent systems. I will then argue that, by adequately combining models and methods from the AT sandbox, one can achieve an appropriate level of coordination in domains with different degrees of openness, and I will back this claim with various application examples.
