Information and Communication Technology 2022 | ICT22-023

Training and Guiding AI Agents with Ethical Rules
Principal Investigator:
Agata Ciabattoni
Institution:
TU Wien
Co-Principal Investigator(s):
Thomas Eiter (TU Wien)
Ezio Bartocci (TU Wien)
Status:
Ongoing (01.05.2023 – 30.04.2027)
Funding volume:
€ 799,570

Autonomous agents are increasingly becoming an integral part of our world. It is essential that they act in legal, ethically sensitive, and socially acceptable ways; more broadly, their behavior must be regulated by norms. While the importance of this endeavor is widely acknowledged, the question of how to implement such agents remains open. Two different approaches have emerged: one uses symbolic Artificial Intelligence (AI) techniques (Logic, Knowledge Representation and Reasoning), while the other relies on sub-symbolic AI (i.e., Machine Learning), where in particular Reinforcement Learning (RL) has proven to be a powerful technique for training autonomous agents to solve complex tasks in sophisticated environments. As both approaches have strengths and weaknesses, the three partners of TAIGER, E. Bartocci (Cyber-Physical Systems), A. Ciabattoni (Logic), and T. Eiter (Knowledge Representation and ASP), aim to integrate them and thus get the best of both worlds. Specifically, TAIGER will introduce effective frameworks for equipping RL-based agents with the ability to comply with norms that may interact with their goals. Grounded in formal reasoning, the frameworks will be modular and will support transparent justification of the agents' judgments. Moreover, they will cope with potential contradictions in normative requirements and handle situations in which full compliance is impossible, without deviating too much from the optimal behavior the agent has learned.
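To make the general idea concrete, the following Python sketch is purely illustrative and is not TAIGER's actual framework: it assumes a hypothetical shielding-style setup in which hand-written symbolic norms (predicates with priorities) filter the actions proposed by a learned policy, and in which a normative conflict is resolved by choosing the action with the least severe violation rather than failing outright.

```python
# Illustrative sketch only (not TAIGER's framework): a shielding-style layer
# in which symbolic norms filter the actions proposed by a learned policy.

# Hypothetical norms as predicates over (state, action); each carries a priority
# used to rank violations when no fully compliant action exists.
NORMS = [
    {"name": "no_harm",   "priority": 2,
     "forbids": lambda s, a: a == "push" and s["human_nearby"]},
    {"name": "keep_pace", "priority": 1,
     "forbids": lambda s, a: a == "wait" and s["urgent_task"]},
]

def violated_norms(state, action):
    """Norms that the given action would violate in this state."""
    return [n for n in NORMS if n["forbids"](state, action)]

def shielded_action(state, actions, q_value):
    """Choose the highest-valued action that complies with all norms.
    If every action violates some norm (a normative conflict), fall back to
    the action whose worst violation has the lowest priority, so the agent
    still acts instead of failing."""
    ranked = sorted(actions, key=lambda a: q_value(state, a), reverse=True)
    compliant = [a for a in ranked if not violated_norms(state, a)]
    if compliant:
        return compliant[0]
    return min(ranked, key=lambda a: max(n["priority"] for n in violated_norms(state, a)))

# Toy usage: action values would normally come from a trained RL policy; fixed here.
state = {"human_nearby": True, "urgent_task": True}
toy_q = {"push": 1.0, "wait": 0.8, "detour": 0.5}
print(shielded_action(state, ["push", "wait", "detour"], lambda s, a: toy_q[a]))  # -> detour
```

In an actual integration along the lines described above, the hand-written predicates would be replaced by a formal reasoning component and the fixed values by a learned policy; the sketch only illustrates the separation between learned behavior and normative filtering.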

Scientific disciplines: 102001 (55%) | 101013 (25%) | 102034 (20%)
