A Sliding Window Filter with GNSS-State Constraint for RTK-Visual-Inertial Navigation (paper link)
Authors: Xiaohong Huang, Cui Yang, Miaowen Wen
RTK-Visual-Inertial-Navigation-JetsonTX2 is a faster version of RTK-Visual-Inertial-Navigation. The purpose of this project is to improve the efficiency of RTK-Visual-Inertial-Navigation so that it can run on embedded devices such as the Jetson TX2. RTK-Visual-Inertial-Navigation-JetsonTX2 achieves real-time state estimation with a state-update rate of 20-25 Hz on the Jetson TX2.
RTK-Visual-Inertial-Navigation is a navigation system that tightly fuses GNSS, visual, and inertial measurements. It uses a sliding window filter (SWF) with GNSS-state constraints for sensor fusion. That is, the GNSS states (i.e., the position, orientation, and velocity of the body and the inertial biases at the time of capturing GNSS measurements) are retained in the SWF to construct more appropriate constraints between measurements and states. It also uses a parallel elimination strategy with a predefined elimination ordering, which solves the Gauss-Newton problem and simultaneously obtains the covariance needed for ambiguity resolution. The system can perform the following types of navigation:
- RTK-Visual-Inertial Navigation;
- RTD-Visual-Inertial Navigation;
- SPP-Visual-Inertial Navigation;
- SPP-Visual-Inertial Navigation with Carrier-Phase Fusion;
- Visual-Inertial Navigation.
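As a rough illustration of the GNSS-state-constraint idea (a hypothetical sketch, not the project's actual data structures; the names `State` and `SlidingWindow` are invented for this example), the filter keeps dedicated states at GNSS measurement epochs inside the sliding window so that GNSS factors can attach to them directly:

```python
from collections import deque

# Hypothetical sketch: a sliding window that flags states created at
# GNSS measurement epochs. The real system stores position, orientation,
# velocity, and IMU biases in each state and marginalizes old states
# into a prior factor rather than simply dropping them.

class State:
    def __init__(self, t, is_gnss):
        self.t = t              # timestamp of this state
        self.is_gnss = is_gnss  # True if created at a GNSS epoch

class SlidingWindow:
    def __init__(self, max_size):
        self.max_size = max_size
        self.states = deque()

    def add(self, t, is_gnss=False):
        self.states.append(State(t, is_gnss))
        if len(self.states) > self.max_size:
            # Simplification: drop the oldest state; a real SWF would
            # marginalize it to retain its information as a prior.
            self.states.popleft()

    def gnss_states(self):
        return [s for s in self.states if s.is_gnss]

win = SlidingWindow(max_size=5)
for k in range(8):
    # every 4th state coincides with a GNSS epoch in this toy timeline
    win.add(t=0.05 * k, is_gnss=(k % 4 == 0))
print(len(win.states), len(win.gnss_states()))  # 5 states, 1 GNSS state
```

Retaining these GNSS-time states (rather than interpolating measurements to camera times) is what lets the system build constraints between raw GNSS measurements and the exact states they observe.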
This package requires some features of C++11.
This package is developed under the ROS Melodic environment.
Our code uses OpenCV 3 and the OpenCV extra modules for image processing.
Clone the repository to your catkin workspace (for example, ~/catkin_ws/):
cd ~/catkin_ws/src/
git clone https://github.com/xiaohong-huang/RTK-Visual-Inertial-Navigation-JetsonTX2.git
Clone the packages for interfacing ROS with OpenCV:
cd ~/catkin_ws/src/
git clone -b melodic https://github.com/ros-perception/vision_opencv.git
In our source code, we have developed our solving strategy based on Ceres-Solver. The original version of Ceres-Solver does not meet the needs of our project. To build the project, you need to build our modified Ceres-Solver with:
# CMake
sudo apt-get install cmake
# Eigen3
sudo apt-get install libeigen3-dev
# Ceres-Solver-Modified
cd ~/catkin_ws/src/RTK-Visual-Inertial-Navigation-JetsonTX2
tar -xvf ceres-solver-modified.tar
cd ceres-solver-modified/
sh build.sh
The modified version is installed only in the workspace folder, so the installation will not change your system-wide settings.
Then build the package with:
cd ~/catkin_ws/
catkin_make
For the Jetson TX2 platform, we use the quad-core ARM Cortex-A57 MPCore for state optimization, one of the Denver cores for front-end processing, and the NVIDIA CUDA cores for image feature extraction. To enable this setting, set the Jetson TX2 to MAXN power mode by:
sudo nvpmodel -m 0
sudo nvpmodel -q verbose
sudo jetson_clocks --fan
sudo jetson_clocks
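The core assignment described above can be reproduced at the process level with Linux CPU affinity. The sketch below is illustrative only (the project assigns its threads internally); the core numbering in the comment is a commonly reported TX2 layout, so check /proc/cpuinfo on your own board:

```python
import os

# Illustrative sketch: restrict the current process to a subset of cores.
# On many Jetson TX2 boards in MAXN mode, cores 0 and 3-5 are the
# Cortex-A57 cluster and cores 1-2 are the Denver cores (an assumption;
# verify on your hardware). We intersect with the cores actually
# available so the sketch also runs on other machines.

available = os.sched_getaffinity(0)              # cores we may use now
target = {c for c in {0, 3, 4, 5} if c in available} or available
os.sched_setaffinity(0, target)                  # pin to the A57 cluster
print(sorted(os.sched_getaffinity(0)))
```

Per-thread pinning works the same way by passing a thread's native ID instead of 0, or by calling `pthread_setaffinity_np` from C++.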
Our equipment is as follows: a grayscale camera (MT9V034, 752x480 @ 25 Hz), a MEMS-grade IMU (BMI088, 400 Hz), and a GNSS receiver.
Download our Dataset and launch the rviz via:
source ~/catkin_ws/devel/setup.bash
roslaunch rtk_visual_inertial rtk_visual_inertial_rviz.launch
Open another terminal and run the project by:
source ~/catkin_ws/devel/setup.bash
rosrun rtk_visual_inertial rtk_visual_inertial_node src/RTK-Visual-Inertial-Navigation-JetsonTX2/yaml/SETTING.yaml YOUR_BAG_FOLDER/BAG_NAME.bag output.csv
YOUR_BAG_FOLDER is the folder where you save our dataset. BAG_NAME is the name of our dataset. SETTING.yaml is the settings file for RTK-Visual-Inertial-Navigation-JetsonTX2. You can use the following settings to perform the different types of navigation.
rtk_visual_inertial_config.yaml #RTK-Visual-Inertial-Navigation
rtd_visual_inertial_config.yaml #RTD-Visual-Inertial-Navigation
spp_visual_inertial_config.yaml #SPP-Visual-Inertial-Navigation
spp_CP_visual_inertial_config.yaml #SPP-Visual-Inertial-Navigation with carrier-phase fusion
visual_inertial_config.yaml #Visual-Inertial-Navigation
We have also provided a demo for evaluating the positioning errors (see evaluate.py).
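As a minimal sketch of such an evaluation (not the repo's evaluate.py; it assumes the estimate and ground truth are already time-aligned lists of (x, y, z) positions in a common local frame), a 3D RMSE can be computed like this:

```python
import math

# Hypothetical positioning-error metric: root-mean-square of the 3D
# distance between time-aligned estimated and ground-truth positions.

def rmse_3d(estimate, truth):
    assert len(estimate) == len(truth) and len(truth) > 0
    sq = [(ex - tx) ** 2 + (ey - ty) ** 2 + (ez - tz) ** 2
          for (ex, ey, ez), (tx, ty, tz) in zip(estimate, truth)]
    return math.sqrt(sum(sq) / len(sq))

# Toy data: a 10 cm error on two of three epochs.
est = [(0.0, 0.0, 0.0), (1.1, 0.0, 0.0), (2.0, 0.1, 0.0)]
gt  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
print(round(rmse_3d(est, gt), 4))  # 0.0816
```

A real evaluation additionally needs to interpolate the ground truth to the estimate's timestamps and transform both trajectories into the same frame before differencing.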
The VIO framework is adapted from VINS-Mono. The Ceres-Solver-Modified is developed based on Ceres-Solver.
The source code is released under the GPLv3 license.