Planning under Uncertainty


In the Intelligent Systems Lab, one of our research foci is planning under uncertainty. That is, we compute plans for single agents as well as cooperative multiagent systems, in domains in which an agent is uncertain about the exact consequences of its actions. Furthermore, the agent is equipped with imperfect sensors, resulting in noisy sensor readings that provide only limited information. For single agents, such planning problems are naturally framed in the partially observable Markov decision process (POMDP) paradigm. In a POMDP, uncertainty in acting and sensing is captured in probabilistic models, which allow an agent to plan over its belief state: a summary of all the information the agent has received regarding its environment. For the multiagent case, we frame our planning problems in the decentralized POMDP (Dec-POMDP) framework.
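
The belief-state bookkeeping described above can be made concrete. Below is a minimal sketch, in Python with NumPy, of the standard Bayesian belief update used in POMDPs; the array layout, function name, and the tiny two-state model are illustrative assumptions, not code from the lab's software.

<syntaxhighlight lang="python">
import numpy as np

def belief_update(b, a, o, T, O):
    """Bayesian belief update: b'(s') is proportional to
    O[a, s', o] * sum_s T[a, s, s'] * b(s).

    b: belief over states, shape (|S|,)
    T: transition model, T[a, s, s'] = P(s' | s, a), shape (|A|, |S|, |S|)
    O: observation model, O[a, s', o] = P(o | s', a), shape (|A|, |S|, |O|)
    """
    b_pred = b @ T[a]              # predict: sum_s T[a, s, s'] * b(s)
    b_new = O[a, :, o] * b_pred    # correct with the observation likelihood
    return b_new / b_new.sum()     # normalize (assumes P(o | b, a) > 0)

# Hypothetical two-state example with one noisy action and a noisy sensor.
T = np.array([[[0.9, 0.1], [0.1, 0.9]]])   # actions slip with probability 0.1
O = np.array([[[0.8, 0.2], [0.3, 0.7]]])   # sensor reports the state imperfectly
b = np.array([0.5, 0.5])                   # initially uncertain belief
print(belief_update(b, a=0, o=1, T=T, O=O))
</syntaxhighlight>

After acting and observing, the agent plans directly on the updated belief rather than on the hidden state, which is what makes the belief state a sufficient statistic for decision making in this setting.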

Some of our results in this research area can be found at the [http://mediawiki.isr.ist.utl.pt/wiki/From_Bio-Inspired_to_Institutional-Inspired_Collective_Robotics MAIS+S] and [http://decpucs.isr.ist.utl.pt DecPUCS] project pages, as well as Matthijs Spaan's [http://users.isr.ist.utl.pt/~mtjspaan/publications/ publications].