Member of the ERCIM executive committee representing Luxembourg; member of the faculty council.
Knowledge representation is an issue that arises in both cognitive science and artificial intelligence. In cognitive science it concerns how people store and process information. In artificial intelligence the primary aim is to store knowledge so that programs can process it and approach the verisimilitude of human intelligence. AI researchers have borrowed representation theories from cognitive science: representation techniques such as frames, rules and semantic networks originated in theories of human information processing. Since knowledge is used to achieve intelligent behavior, the fundamental goal of knowledge representation is to represent knowledge in a manner that facilitates inferencing, i.e. drawing conclusions from it.
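As a minimal sketch of this idea (the network and all names below are illustrative, not from any particular system), a semantic network stores knowledge as links, and inferencing amounts to traversing them:

```python
# Toy semantic network: knowledge is stored as "is-a" links,
# and inferencing is traversal of those links.
ISA = {"canary": "bird", "penguin": "bird", "bird": "animal"}

def is_a(thing, kind):
    """Infer whether `thing` is a `kind` by following is-a links upward."""
    while thing in ISA:
        thing = ISA[thing]
        if thing == kind:
            return True
    return False
```

The fact that a canary is an animal is never stored explicitly; it is inferred from the stored links, which is the point of choosing a representation that supports inferencing.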
We work on logics for knowledge representation and reasoning. Input/output logic (IOL) is a theory of input/output operations resembling inference, but where input propositions are not in general included among the outputs, and the operation is not in any way reversible. Examples arise in contexts of conditional obligations, goals, ideals, preferences, actions, and beliefs. Four operations are singled out: simple-minded, basic (making intelligent use of disjunctive inputs), simple-minded reusable (in which outputs may be recycled as inputs), and basic reusable. They are defined semantically and characterised by derivation rules, as well as in terms of relabeling procedures and modal operators. Their behaviour is studied on both semantic and syntactic levels.
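As a toy illustration of the difference between simple-minded and simple-minded reusable output (my own minimal encoding: formulas are treated as atoms, so the logical-closure steps of the real definitions collapse to identity), the generating set G is a set of (body, head) pairs:

```python
def out1(G, A):
    """Simple-minded output (toy): detach the head of every pair
    whose body is among the inputs."""
    return {head for body, head in G if body in A}

def out3(G, A):
    """Simple-minded reusable output (toy): outputs may be recycled
    as inputs, so iterate detachment to a fixpoint."""
    current = set(A)
    while True:
        produced = {head for body, head in G if body in current}
        if produced <= current:
            return produced
        current |= produced
```

With G = {('a', 'b'), ('b', 'c')} and input {'a'}, out1 yields only {'b'}, while out3 recycles 'b' and yields {'b', 'c'}; note that neither operation includes the input 'a' among its outputs.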
A multi-agent system (MAS) is a system composed of several agents, collectively capable of reaching goals that are difficult to achieve by an individual agent or monolithic system. The exact nature of the agents is a matter of some controversy. They are sometimes claimed to be autonomous: for example, a household floor-cleaning robot can be autonomous in that it depends on a human operator only to start it up. In practice, however, all agents are under active human supervision, and the more important an agent's activities are to humans, the more supervision it receives. In fact, full autonomy is seldom desired; instead, interdependent systems are needed. MAS may include human agents as well: human organizations, and society in general, can be considered examples of multi-agent systems.
We work on normative multi-agent systems, the study of general and domain-independent properties of norms. The field builds on results obtained in deontic logic, the logic of obligations and permissions, for the representation of norms as rules, the application of such rules, contrary-to-duty reasoning, and the relation to permissions. However, it goes beyond logical relations among obligations and permissions by explaining the relation between social norms and obligations, relating regulative norms to constitutive norms, explaining the evolution of normative systems, and much more.
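The classic "dog and sign" contrary-to-duty scenario gives a feel for the problem; the encoding below is my own toy sketch, with norms as (body, head) pairs and obligations detached against a factual context:

```python
def detach(norms, facts):
    """Factual detachment (toy): a norm's head becomes obligatory when its
    body holds among the facts; 'T' marks an unconditional norm."""
    return {head for body, head in norms if body == "T" or body in facts}

# "There ought to be no dog; if there is a dog, there ought to be a sign."
norms = [("T", "no_dog"), ("dog", "sign")]
```

Against the empty context only no_dog is detached; once dog is a fact, sign is detached as well, while the violated ideal no_dog remains. Handling that tension without inconsistency is exactly what contrary-to-duty reasoning is about.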
Our normative multi-agent systems are based on the BOID architecture, an abstract agent representation consisting of four components: Beliefs, Obligations, Intentions and Desires. The simple-minded BOID is a lightweight stimulus-response agent that only exhibits reactive behavior. Our BOID consists of two phases: the first phase results in an intermediate epistemic state, and the second phase results in new intended actions. This simple-minded BOID is extended (as time and resources allow) with capabilities for deliberation, which may result in more complex (e.g. pro-active) behavior.
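The two-phase loop can be sketched as follows. This is a toy version under my own assumptions: literals are strings, negation is a "-" prefix, and conflicts between components are resolved by a fixed priority order B > O > I > D (in the BOID literature, beliefs-first orders characterise realistic agents, and obligations before desires a socially-minded one; other orders give other agent types):

```python
PRIORITY = {"B": 0, "O": 1, "I": 2, "D": 3}  # beliefs override all else

def neg(lit):
    """Negation as a '-' prefix on atom names."""
    return lit[1:] if lit.startswith("-") else "-" + lit

def fire(rules, facts):
    """Fire applicable (component, body, head) rules to a fixpoint, in
    priority order, skipping any head that contradicts what is derived."""
    state = set(facts)
    changed = True
    while changed:
        changed = False
        for comp, body, head in sorted(rules, key=lambda r: PRIORITY[r[0]]):
            if set(body) <= state and head not in state and neg(head) not in state:
                state.add(head)
                changed = True
    return state

def boid_step(rules, observations):
    """Two-phase loop: belief rules build the intermediate epistemic state;
    obligation/intention/desire rules then yield new intended actions."""
    epistemic = fire([r for r in rules if r[0] == "B"], observations)
    return fire([r for r in rules if r[0] != "B"], epistemic) - epistemic

# If it rains the floor gets wet; duty says mop, desire says don't.
rules = [("B", ("rain",), "wet"),
         ("O", ("wet",), "mop"),
         ("D", ("wet",), "-mop")]
```

Here the belief rule produces the epistemic fact wet, the obligation rule then detaches the action mop, and the conflicting desire -mop is blocked by the higher-priority obligation.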
Curriculum Vitae
I was born in Rotterdam, the Netherlands. At Erasmus University Rotterdam I held positions at EURIDIS and the Department of Computer Science, during which I obtained my MS (August 1992) and my PhD in computer science (February 1997). I worked on deontic logic in computer science (with Yao-Hua Tan).
In the following two years I visited the Max Planck Institute for Computer Science and the IRIT laboratory in Toulouse, France, as a Marie Curie fellow. There I worked on qualitative decision theory (with Jerome Lang and Emil Weydert) and started to work on input/output logics (with David Makinson).
Returning to the Netherlands, I worked at the Vrije Universiteit Amsterdam in the SINS project and at CWI on the ArchiMate project, on agent theory and cognitive science. I initiated the BOID project (with Jan Broersen, Mehdi Dastani, Zhisheng Huang and Joris Hulstijn) and the work on normative multi-agent systems (with Guido Boella).
I started in January 2006 at the University of Luxembourg. I am the COST ICT Domain Committee member for Luxembourg, ERCIM executive committee member for Luxembourg, and responsible for priority P1 on security and trust within the University of Luxembourg.
Research
Research is guided by the insight that intelligent systems are characterized not only by their individual reasoning capacity, but also by their social interaction potential. Our overarching goal is to develop and investigate formal models and computational realizations of individual and collective rationality. This includes in particular the generalization of existing frameworks for single agent reasoning to multi-agent communities, the modeling and study of interactions between agents in complex social environments, and the development of generalized inference techniques, e.g. for aggregating conflicting evidence and norms.
Teaching
Introduction to intelligent systems: agents and reasoning,
agents 1: knowledge representation,
agents 2: multiagent systems,
game theory,
selected topics in AI (MiCS),
discrete mathematics 2,
methods in science,
introduction to intelligent and adaptive systems (BINFO).