This theme focuses on the fundamental question: How do AI agents act and learn in a society?
Agents should not reason, learn and act in isolation; they will need to do so with and among others. This theme therefore explores the foundations of how AI systems should communicate, collaborate, negotiate and reach agreements with other AI agents and (eventually) humans within a multi-agent system (MAS). We will move from intelligence centred in a single agent to social intelligence and social behaviours, laying the foundations for understanding and engineering hybrid societies composed of AI systems and humans. Computation is increasingly distributed, and the IoT will enable devices to become more intelligent, to communicate and, ultimately, to socialise. Social AI will be observable within Massive Multi-Agent Systems (MMAS), encompassing all sorts of devices and different modes of interaction with people, organisations and institutions. This theme will examine how current AI techniques can bring the social component into the foundations of AI.
The main questions that will drive the research on the fundamentals of social AI are:
- How do we empower individual AI agents to communicate with each other, collaborate, negotiate and reach agreements? How can agents coordinate to fairly share common resources?
- How can we make agents learn from each other in a responsible and fair way, leading to more intelligent behaviour?
- How can we create trustworthy hybrid human-AI societies that fulfil humans' expectations and follow their requirements?
Contact: Ana Paiva (firstname.lastname@example.org)