0. Opening 1. Minutes of the previous general meeting (ALV) 2. Finances 3. Past and future activities 4. Any other business 5. Closing
Joint work with Thomas Agotnes and Michael Wooldridge.
We discuss logics for cooperation in which agents have control over certain aspects of the world. Moreover, this control can be delegated to other agents. We then discuss Coalition Logic: we show how adding preferences to Pauly's coalition logic CL enables one to express several game-theoretic concepts, and we finally show how adding a restricted form of quantification gives a language that is equally expressive as, but exponentially more succinct than, CL.
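To make the coalition-logic reading concrete, here is a minimal sketch of one-step model checking for a formula [C]p over a one-shot strategic game: a coalition C can enforce p if it has a joint action such that, whatever the other agents do, the outcome satisfies p (alpha-effectivity). The agents, actions, and states below are illustrative and not from the talk.

```python
from itertools import product

# One-shot strategic game: two agents each choose an action; a joint
# action determines an outcome state, and states satisfy atomic facts.
actions = {1: ["a", "b"], 2: ["x", "y"]}
outcome = {  # joint action (agent 1, agent 2) -> resulting state
    ("a", "x"): "s1", ("a", "y"): "s1",
    ("b", "x"): "s2", ("b", "y"): "s3",
}
truth = {"s1": {"p"}, "s2": set(), "s3": set()}

def joint(coalition, c_choice, others, o_choice):
    # Assemble the full joint action from the two partial choices.
    move = dict(zip(coalition, c_choice)) | dict(zip(others, o_choice))
    return tuple(move[i] for i in sorted(move))

def can_enforce(coalition, goal_states):
    """[C]phi: C has a joint action such that every counter-choice of
    the remaining agents still yields an outcome in goal_states."""
    others = [i for i in actions if i not in coalition]
    return any(
        all(
            outcome[joint(coalition, c_choice, others, o_choice)] in goal_states
            for o_choice in product(*(actions[i] for i in others))
        )
        for c_choice in product(*(actions[i] for i in coalition))
    )

p_states = {s for s, v in truth.items() if "p" in v}
print(can_enforce([1], p_states))  # True: agent 1 enforces p by playing "a"
print(can_enforce([2], p_states))  # False: agent 2 alone cannot enforce p
```

The same effectivity test, iterated over preference orders on outcome states, is the kind of building block that makes game-theoretic notions expressible once preferences are added to CL.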
Traditional modelling is usually centered around crisp (precise) definitions of the modelled reality. The underlying assumption is that the objective reality is, or can be, crisply modelled. On the other hand, the environments in which intelligent systems are embedded are often much more complex and dynamic. Approximate concepts and relations seem to be omnipresent in every physical-world description, in particular when perception is involved. Knowledge representation formalisms used to model such environments then have to be partial and approximate in nature.
In the talk we shall discuss approximate techniques, generalizing rough sets, and show how the traditional multimodal formalization of belief-goal-intention (BGI) models of multi-agent systems can be reformulated to deal with approximate concepts and theories.
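The rough-set idea the talk generalizes can be sketched in a few lines: a concept that cannot be described exactly in terms of the available attributes is bracketed by a lower approximation (objects certainly in the concept) and an upper approximation (objects possibly in it). The universe, attribute, and target set below are made up for illustration.

```python
# Minimal rough-set approximation of a concept.
universe = {1, 2, 3, 4, 5, 6}
# Indiscernibility: objects with the same attribute value cannot be told apart.
attribute = {1: "red", 2: "red", 3: "blue", 4: "blue", 5: "green", 6: "green"}
target = {1, 2, 3}  # the concept we try to describe

def equivalence_class(x):
    # All objects indistinguishable from x given the attribute.
    return {y for y in universe if attribute[y] == attribute[x]}

# Lower approximation: equivalence class lies entirely inside the concept.
lower = {x for x in universe if equivalence_class(x) <= target}
# Upper approximation: equivalence class overlaps the concept.
upper = {x for x in universe if equivalence_class(x) & target}
boundary = upper - lower  # the vague region where membership is undecided

print(lower)     # {1, 2}
print(upper)     # {1, 2, 3, 4}
print(boundary)  # {3, 4}
```

The boundary region is exactly where crisp modelling breaks down; the approximate BGI reformulation discussed in the talk replaces crisp modal accessibility with approximations of this kind.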
In collaboration with Philippe Balbiani (Institut de Recherche en Informatique de Toulouse - IRIT), Alexandru Baltag (Oxford University), Andreas Herzig (IRIT), Tomohiro Hoshi (Stanford University) and Tiago de Lima (IRIT)
Public announcement logic is an extension of multi-agent epistemic logic with dynamic operators to model the informational consequences of announcements to the entire group of agents. We propose an extension of public announcement logic, called arbitrary announcement logic, with a dynamic modal operator that expresses what is true after arbitrary announcements. Intuitively, <> phi expresses that there is an announcement psi after which phi is true.
As an example, let us work our way upwards from a concrete announcement. When an atomic proposition p is true, it becomes known by announcing it. Formally, in public announcement logic, p & [p] K p. This is equivalent to
< p > K p
which stands for 'the announcement of p can be made and after that the agent knows p'. More abstractly, this means that there is an announcement psi, namely psi = p, that makes the agent know p; slightly more formally:
there is a formula psi such that < psi > K p
We introduce a dynamic modal operator that expresses exactly that:
<> K p
Obviously, the truth of this expression depends on the model: p has to be true. If p is false, we can achieve <> K ~p instead. The formula <> (K p v K ~p) is valid.
Deontic logics aim at modelling the reasoning involved in dealing with "external" motivations, such as obligations, prohibitions and permissions. In this talk I briefly sketch some of the issues in formulating deontic logics in a multi-agent setting. In particular, I discuss different views on the notion of "action" and several different ways of introducing deontic operators in a STIT setting (STIT is an acronym for "Seeing To It That"). I also discuss the relation of STIT with ATL (Alternating-time Temporal Logic), to emphasize the relevance of deontic STIT theory for defining deontic operators in ATL.
Consider the following problem:
Victor is a secret agent, and keeping his intelligence secret has a high priority. However, his mission is to protect Peggy from great dangers, so when needed, protecting Peggy takes priority over keeping his information secret. Now he is confronted with the following situation: Victor does not know whether certain information X known to him is also known to Peggy. (`Peggy is kindly invited for a dinner at Mallory's place.') Victor knows that Mallory is a very malicious person. If Peggy does know that she is kindly invited, Victor would like to send her a warning message (`Don't go there, it is a trap. You will get killed in case you go there.'). However, if Peggy has somehow not received the invitation X, Victor would like to keep his warning to himself, as well as his knowledge of Peggy's invitation. Therefore, Victor asks Peggy to prove her knowledge of the invitation. Only after the proof will Victor disclose his warning to Peggy. In the protocol, Peggy does not learn whether Victor actually knew about the invitation, other than from his possible next actions, such as sending a warning.
Peggy is willing to prove her knowledge of the invitation X, but only if she can make sure that Victor does not cheat on her by actually finding out about the invitation because he tricks her into telling him that she has been invited. That is, she only wants to prove her knowledge of the invitation if Victor actually knew about the invitation beforehand.
In this problem, higher-order epistemic knowledge has to be exchanged, while the lower-order epistemic state has to remain unchanged. A solution to this problem helps to reconcile the demands of the opposing parties in the privacy debate. For example, flight passenger information between the EU and the US can be exchanged in such a way that only the privacy of the terrorists is infringed upon.
In the lecture, a solution for this problem will be given and analyzed.