MoDSeM: Modular Framework for Distributed Semantic Mapping

Title: MoDSeM: Modular Framework for Distributed Semantic Mapping
Authors: Gonçalo S. Martins (Edifício B CTCV); João F. Ferreira (Institute of Systems and Robotics, University of Coimbra, and Computational Neuroscience and Cognitive Robotics Laboratory, Nottingham Trent University); David Portugal (Institute of Systems and Robotics, University of Coimbra); Micael S. Couceiro (Edifício B CTCV)
Year: 2019
Citation: Martins, G. S., Ferreira, J. F., Portugal, D., Couceiro, M. S., (2019). MoDSeM: Modular Framework for Distributed Semantic Mapping. UK-RAS19 Conference: “Embedded Intelligence: Enabling and Supporting RAS Technologies” Proceedings, 12-15. doi: 10.31256/UKRAS19.4

Abstract:

This paper presents MoDSeM, a novel software framework for spatial perception supporting teams of robots. MoDSeM aims to provide a semantic mapping approach able to represent all spatial information perceived during autonomous missions involving teams of field robots, and to formalize the development of perception software, promoting reusable modules that can fit varied team constitutions. Preliminary experiments took place in simulation, using a 100x100x100 m simulated map to demonstrate our work-in-progress prototype’s ability to receive, store and retrieve spatial information. Results show the appropriateness of ROS and OpenVDB as back-ends for supporting the prototype, achieving promising performance in all aspects of the task and supporting future developments.
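The abstract describes storing and retrieving spatial information in an OpenVDB-backed map, but no code accompanies this page. As a minimal sketch of that store-and-retrieve pattern only, the C++ snippet below writes and reads voxel values in a sparse OpenVDB grid; the grid name ("occupancy"), the 0.5 m voxel size, and the example values are assumptions for illustration, not details taken from the paper.

// Minimal sketch (assumed 0.5 m voxels and a single "occupancy" layer)
// of storing and retrieving spatial information with OpenVDB.
#include <openvdb/openvdb.h>
#include <iostream>

int main() {
    openvdb::initialize();

    // Sparse float grid; the background value 0.0 stands for "unobserved".
    openvdb::FloatGrid::Ptr grid = openvdb::FloatGrid::create(0.0f);
    grid->setName("occupancy");

    // Map voxel indices to world coordinates: with 0.5 m voxels, a
    // 100x100x100 m volume spans 200x200x200 voxels.
    grid->setTransform(openvdb::math::Transform::createLinearTransform(0.5));

    openvdb::FloatGrid::Accessor acc = grid->getAccessor();

    // Store: write an occupancy estimate at a voxel coordinate.
    acc.setValue(openvdb::Coord(10, 20, 30), 0.9f);

    // Retrieve: read the value back (unwritten voxels return the background value).
    float p = acc.getValue(openvdb::Coord(10, 20, 30));
    std::cout << "occupancy at (10, 20, 30): " << p << std::endl;

    return 0;
}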
