The real world is 3D, and robots should be able to move both horizontally and vertically within it. Until recently, however, brain-inspired robot navigation was studied primarily in planar environments, leaving much unknown about how to build 3D navigation systems for robots. Recent discoveries of the neural basis of the brain's 3D navigation, including 3D place cells, 3D head-direction cells, and 3D grid cells, show that these cells play a fundamental role in how mammals and other animals navigate 3D environments: bats flying in 3D, fish and dolphins swimming through volumes, rats and humans walking over 3D terrain. These findings offer a new source of biological inspiration for autonomous robot navigation in complex 3D indoor environments. This project aims to develop a highly robust 3D cognitive navigation system, based on these biological findings, for real robot platforms with lightweight sensors, such as intelligent drones and autonomous vehicles.
More info on www.cognav.org
Building a cognitive indoor positioning system (IPS) with life-long robustness in large, dynamic, complex 3D indoor environments is a long-term challenge. Humans and other mammals, however, can sense their location and direction very well. Recent findings in neuroscience have begun to explain the neural mechanisms of spatial cognition in the brain, e.g. the cognitive map, place cells, grid cells, head-direction cells, speed cells, and boundary cells. They offer us a new source of biological inspiration for building novel indoor positioning technologies for pedestrians.
How can the reliability, accuracy, adaptability, and life-long robustness of IPS in large, dynamic, complex 3D indoor environments be improved by drawing inspiration from the brain? How can these neural models of spatial cognition from computational neuroscience be transformed into a cognitive indoor positioning system?
The CogIPS project focuses on developing computational models of the spatial cognition processes in mammalian and other animal brains, and on applying them to pedestrian positioning tasks in challenging environments. We use neuromorphic computing models (e.g. continuous attractor networks, CANs) to estimate and update the pedestrian's real-time location and direction with neural dynamics, rather than classical probabilistic filter models (e.g. particle filters, Kalman filters). Meanwhile, many external cues, including spatial features, signal features (Wi-Fi, Bluetooth, geomagnetic, etc.), visual features, and semantic features, can be sensed by the rich sensors on mobile devices (e.g. smartphones, smart wearables). All of these cues can serve as inputs to the neural models, which re-localize and update the pedestrian's location through neural dynamics. These novel methods may improve the reliability, accuracy, adaptability, and life-long robustness of IPS in dynamic, complex indoor environments.
More info on www.cognav.org
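To make the CAN idea concrete, here is a minimal rate-based ring attractor in Python/NumPy. It is an illustrative sketch only, not CogIPS's actual model: a bump of activity over neurons with preferred directions on a ring is initialized by a cue at π/2 and then sustains itself through recurrent dynamics, holding the heading estimate as a memory. In a full model, angular-velocity input would rotate the bump, and external cues would correct its position.

```python
import numpy as np

# 1D continuous attractor ("ring attractor") for heading estimation.
N = 100                                   # neurons with preferred directions on a ring
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)

# Recurrent weights: local excitation minus uniform inhibition ("Mexican hat").
diff = np.angle(np.exp(1j * (theta[:, None] - theta[None, :])))  # wrapped angle differences
W = np.exp(-diff ** 2 / 0.5) - 0.3

# Initialize an activity bump at the cue direction (pi/2), e.g. from a landmark observation.
cue_dir = np.pi / 2
activity = np.exp(-np.angle(np.exp(1j * (theta - cue_dir))) ** 2 / 0.5)
activity /= np.linalg.norm(activity)

# Run the recurrent dynamics with no further input: the bump is self-sustaining,
# so the network holds the heading estimate as a memory.
for _ in range(200):
    drive = np.maximum(W @ activity, 0.0)            # rectified recurrent drive
    activity = 0.9 * activity + 0.1 * drive
    activity /= np.linalg.norm(activity)             # simple divisive normalization

# Decode the heading as the population-vector angle of the bump.
heading = np.angle(np.sum(activity * np.exp(1j * theta)))
```

The same principle extends to 2D sheets of neurons for position, which is how CAN-based models can track location as well as direction.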
WearNav: Wearable Indoor Cognitive Navigation System
The project was supported by the Fundamental Research Funds for National Universities of CUG (No. 1610491T08). It aims to build an indoor cognitive navigation system based on smart wearable devices, e.g. smart glasses and smartwatches. The WearNav system was developed on an intelligent hybrid indoor positioning approach that fuses Wi-Fi, BLE, PDR, and vision with spatial model-aided enhanced localization. In addition, a cognitive navigation model was developed that utilizes indoor semantic landmarks to improve navigation performance and reduce the user's cognitive load.
Participants: Fangwen Yu (Leader), Yifan Zhang, Zhiyong Zhou, Xinyi Tang, Wen Chen, Yongfeng Wu, Jianga Shang, et al.
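The PDR component of a hybrid pipeline like WearNav's is essentially step-and-heading dead reckoning: each detected step advances the position estimate by an assumed step length along the current heading. A minimal sketch (the 0.7 m step length and the short walk below are invented for illustration, not WearNav's actual parameters):

```python
import math

def pdr_update(x, y, heading_rad, step_length=0.7):
    """Advance the position by one detected step along the current heading."""
    return (x + step_length * math.cos(heading_rad),
            y + step_length * math.sin(heading_rad))

# A short walk: 3 steps east, then 2 steps north (headings in radians).
x, y = 0.0, 0.0
for heading in [0.0, 0.0, 0.0, math.pi / 2, math.pi / 2]:
    x, y = pdr_update(x, y, heading)
# x is about 2.1 m east, y about 1.4 m north of the start
```

PDR drifts over time, which is why the system fuses it with absolute cues such as Wi-Fi, BLE, and visual landmarks.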
SeLoMo: Multi-Layered Indoor Semantic Location Model
2013.01 – 2016.12
The project was supported by the National Natural Science Foundation of China (No. 41271440). It aims to build a multi-layered indoor semantic location model to support enhanced indoor positioning and intelligent location-based services, such as position queries, nearest-neighbour queries, range queries, navigation, and visualization, for context-aware applications in ubiquitous computing environments, especially indoor spaces. In particular, the model can represent spatial features for indoor positioning, such as floor maps, sensory landmarks, spatial constraints, spatial topology, and signal maps, which can be used for enhanced spatial model-aided indoor positioning.
Participants: Jianga Shang, Fangwen Yu, Zhiyong Zhou, Xinyi Tang, Jinjin Yan, Xin Wang, Chao Wang, Jie Ma, et al.
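One way to picture a multi-layered location model is as linked layers, where each semantic place references its geometry and its topological neighbours. The sketch below is illustrative only; the class and layer names are assumptions, not SeLoMo's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Place:
    """A semantic indoor place linked to its geometric and topological layers."""
    name: str                    # semantic layer, e.g. "Room 101"
    floor: int
    polygon: list                # geometric layer: (x, y) outline vertices
    neighbors: list = field(default_factory=list)  # topological layer: connected places

room = Place("Room 101", floor=1, polygon=[(0, 0), (5, 0), (5, 4), (0, 4)])
corridor = Place("Corridor A", floor=1, polygon=[(0, 4), (5, 4), (5, 6), (0, 6)])
room.neighbors.append(corridor)      # a door connects the room to the corridor
corridor.neighbors.append(room)

def reachable(a: Place, b: Place) -> bool:
    """Navigation and range queries can walk the topological layer."""
    return b in a.neighbors
```

Signal maps and sensory landmarks would attach to the same places as additional layers, which is what lets a positioning engine exploit the model.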
SpaLoc: Spatial Model-aided Indoor Localization
2016.05 – 2017.09
The project was supported by the National Key Research and Development Program of China (No. 2016YFB0502200). A key challenge in indoor positioning is how to balance low cost, high accuracy, and ubiquity. The project aims to utilize spatial information to improve indoor positioning accuracy and robustness. The spatial information includes geometry, topological graphs, grid models, sensory landmarks, signal characteristic maps, etc., which can be used to constrain motion and improve positioning accuracy at low cost.
Participants: Jianga Shang, Wen Cheng, Yongfeng Wu, Pan Chen, Fangwen Yu, Xuke Hu, Ao Guo, Fuqiang Gu, et al.
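As a toy illustration of how spatial constraints help (the geometry here is invented, not from SpaLoc): if the floor map says a wall separates two points, a motion update that would cross it can be rejected and the previous estimate kept.

```python
def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 crosses segment q1-q2 (orientation tests;
    collinear edge cases are ignored in this sketch)."""
    def orient(a, b, c):
        v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (v > 0) - (v < 0)
    return (orient(p1, p2, q1) != orient(p1, p2, q2)
            and orient(q1, q2, p1) != orient(q1, q2, p2))

def constrain(prev, cand, walls):
    """Reject motion updates that would pass through a wall."""
    for w in walls:
        if segments_intersect(prev, cand, w[0], w[1]):
            return prev          # keep the old estimate; the step is infeasible
    return cand

# One wall from (2, 0) to (2, 4); a candidate step from (1, 1) to (3, 1) crosses it.
wall = ((2.0, 0.0), (2.0, 4.0))
pos = constrain((1.0, 1.0), (3.0, 1.0), [wall])   # blocked -> stays at (1, 1)
```

In a probabilistic localizer the same check is often used to kill particles whose trajectories cross walls, which is one way map constraints raise accuracy at no sensing cost.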
UbiEyes: Universal Indoor Real-time Positioning System
The project was supported by the Fundamental Research Funds for National Universities (No. CUGL090247). It aims to develop a universal indoor real-time positioning system. The UbiEyes system is an independent software platform, written in Java, used to build and support real-time positioning or location-aware applications. The system includes several core service components, such as the Localization Engine (uLocEngine), LBS Server (uLBSServer), Context Data Engine (CDE), and Message-Oriented Middleware (MOM), as well as several applications, such as location monitoring software (uLocView), a mobile location-based application (uLocMobile), and system management software (uManager). The system supports different localization technologies, including Wi-Fi, BLE, PDR, nanoLOC, ZigBee, and GPS, and uses a variety of strategies to achieve enhanced positioning accuracy. It also combines the indoor semantic location model to provide basic indoor location-based services.
Participants: Jianga Shang, Fangwen Yu, Fuqiang Gu, Xuke Hu, Ao Guo, Bin Ge, et al.
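A localization engine such as uLocEngine typically matches live signal measurements against a pre-surveyed radio map. A minimal Wi-Fi fingerprinting sketch (the RSSI values and locations below are invented, and this is not UbiEyes' actual algorithm) compares a measurement to stored fingerprints by Euclidean distance:

```python
import math

# Radio map: surveyed location -> RSSI fingerprint (dBm) for three access points.
radio_map = {
    (0.0, 0.0): [-40, -70, -80],
    (5.0, 0.0): [-70, -45, -75],
    (0.0, 5.0): [-75, -72, -42],
}

def locate(measurement):
    """Nearest-neighbour fingerprint matching: return the surveyed location
    whose stored RSSI vector is closest to the live measurement."""
    return min(radio_map,
               key=lambda loc: math.dist(radio_map[loc], measurement))

pos = locate([-42, -68, -79])    # strongest on the first AP -> matches (0.0, 0.0)
```

Real engines average the k nearest fingerprints and fuse the result with PDR and other technologies, but the matching step above is the core of fingerprint-based positioning.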
iSoNe: Indoor Location-based Mobile Social Network System
The project was supported by the Fundamental Research Funds for National Universities (No. 1310491B07). It aims to build a mobile social network system based on indoor real-time positioning and location-based query technologies. The iSoNe system includes several core service components, such as the Localization Engine (iLocEngine), LBS Server (iLBSServer), Chat Server (iChatServer), database, and Message-Oriented Middleware (MOM), as well as an Android mobile application (iSoNe App). The system supports core functions including peer-friend continuous navigation, nearby-friend continuous query, fine-grained check-in, friend tracking, and fine-grained geo-notification. The system was tested in the Optic Valley Mall, the Yifu Museum of CUG, and other venues.
Participants: Zhiyong Zhou, Xuke Hu, Xinyi Tang, Rui Wang, Yang Zhou, Yue Zhuo, Fangwen Yu, Jianga Shang, et al.
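At its core, a nearby-friend continuous query repeatedly filters friends' latest positions by indoor distance. A toy sketch (the names and coordinates are invented; a real system would also match floors and recompute as positions stream in):

```python
import math

# Latest known (x, y) positions of friends, in metres.
friends = {"alice": (1.0, 2.0), "bob": (8.0, 9.0), "carol": (2.0, 2.5)}

def nearby(me, radius):
    """Range query: names of friends within `radius` metres of `me`."""
    return sorted(name for name, p in friends.items()
                  if math.dist(me, p) <= radius)

near = nearby((1.5, 2.0), 3.0)    # alice and carol are close; bob is not
```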
SmartENav: Context-Aware Smart Indoor Navigation System for Emergency
The project was supported by the Fundamental Research Funds for National Universities (No. 1310491B07). It aims to build a smart indoor navigation system for emergencies based on context awareness and real-time positioning. It supports hybrid indoor positioning fusing Wi-Fi, BLE, and PDR on a smartphone. The core functions include emergency-exit navigation, rescue-attendant tracking, and security monitoring. It was tested in the school hospital of CUG.
Participants: Fangwen Yu (Leader), Jinjin Yan, Zhiyong Zhou, Xiao Zhang, Xiaonan Wang, Xuke Hu, Donghe Kang, Yibo Ge, Jianga Shang, et al.
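Emergency-exit navigation reduces to a shortest-path search over the building's topological graph. A breadth-first search sketch on an invented corridor graph (not SmartENav's actual map or routing code):

```python
from collections import deque

# Invented topological graph: rooms and corridors -> adjacent nodes.
graph = {
    "Room101":   ["CorridorA"],
    "CorridorA": ["Room101", "CorridorB", "ExitEast"],
    "CorridorB": ["CorridorA", "ExitWest"],
    "ExitEast":  ["CorridorA"],
    "ExitWest":  ["CorridorB"],
}

def route_to_exit(start, exits):
    """Breadth-first search for the fewest-hop route to any exit."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] in exits:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

route = route_to_exit("Room101", {"ExitEast", "ExitWest"})
```

A real emergency router would weight edges by distance and drop edges blocked by hazards reported through the context-aware layer, but the graph search itself is the same.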