
How to manage execution logs and process data? (scripts available in dev only)


This page describes the Python scripts used to read and process SLAMBench execution logs.

Preparation

Let's assume you want to do some experiments with kfusion over the TUM dataset. If your framework is not ready yet, you can prepare it this way:

make toon eigen 
make kfusion
make ./datasets/TUM/freiburg1/rgbd_dataset_freiburg1_xyz.slam
make slambench APPS=kfusion

To generate one sample point, we run slambench using kfusion over the freiburg1_xyz data. In the following command, please note the -o output0.log option:

>>> build/bin/benchmark_loader -i datasets/TUM/freiburg1/rgbd_dataset_freiburg1_xyz.slam -load ./build/lib/libkfusion-cpp-library.so  -o output0.log
Parameter input assigned value datasets/TUM/freiburg1/rgbd_dataset_freiburg1_xyz.slam
Parameter load-slam-library assigned value ./build/lib/libkfusion-cpp-library.so
new library name: ./build/lib/libkfusion-cpp-library.so
Configuration consumed 0 bytes
SLAM library loaded: ./build/lib/libkfusion-cpp-library.so
Parameter log-file assigned value output0.log
Process every frame mode enabled
*** Start memory tracking
*** Test XU3 Monitoring.
*** XU3 Monitoring failed.
*** There is no available power monitoring techniques on this system.
input Size is = 640,480
camera is = 591.1,590.1,331,234
Last frame processed.
End of program.
Clean SLAM system ...
Algorithm cleaning succeed.
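
If you want several sample points (as assumed in the next section), a small driver script can repeat this command with a different log file each time. The sketch below is only an illustration: the res/ output directory, the number of runs and the fixed parameters are assumptions, and in practice you would also vary the algorithm parameters between runs.

#!/usr/bin/env python
# Sketch: run benchmark_loader several times, writing one log file per run.
# Paths follow the example above; adapt them to your own checkout.
import os
import subprocess

DATASET = "datasets/TUM/freiburg1/rgbd_dataset_freiburg1_xyz.slam"
LIBRARY = "./build/lib/libkfusion-cpp-library.so"
OUTDIR  = "res"  # directory later given to readlog.py (assumption)

if not os.path.isdir(OUTDIR):
    os.makedirs(OUTDIR)

for run in range(3):
    logfile = os.path.join(OUTDIR, "output%d.log" % run)
    cmd = ["build/bin/benchmark_loader",
           "-i", DATASET,
           "-load", LIBRARY,
           "-o", logfile]
    # Vary the algorithm parameters here to explore the design space.
    subprocess.check_call(cmd)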

readlog.py / reading files

Let's assume you run the system multiple times, with different parameters or in different situations. Eventually, you end up with several log files in a directory.

The readlog.py script can be used to read these files:

>>> ./framework/tools/python/readlog.py res/
Log files... |################################| 111/111

readlog.py / basic plot

The tool can be used to visualize the data using the -p argument.

./framework/tools/python/readlog.py --help
usage: readlog.py [-h] [-v] [-a] [-d DUPLICATE] [-s SAVE] [-p] N [N ...]

positional arguments:
  N                     Log/Directory/Summary to process

optional arguments:
  -h, --help            show this help message and exit
  -v, --verbose         turn verbose on
  -a, --accuracy        print accuracy record
  -d DUPLICATE, --duplicate DUPLICATE
                        Look for duplicate
  -s SAVE, --save SAVE  Save summary into summary file
  -p, --plot            Plot the results

summary files

The tool can also be used to generate a summary file. Summary files are faster to read, so when there is a lot of data it is usually better to work with this kind of file. To generate a summary file, use the -s filename option.

>>> ./framework/tools/python/readlog.py res/ -s summary.gkl
Log files... |################################| 111/111

And then reusing the file is as simple as replacing the directory name with the file name:

./framework/tools/python/readlog.py  summary.gkl -p

Summary files... |################################| 1/1
Preparing test.png out of 111 points ...
Segment /home/toky/slambench2//datasets/ICL_NUIM/living_room_traj2_loop.slam : 13 pareto points.
Segment /home/toky/slambench2//datasets/ICL_NUIM/living_room_traj2_loop.slam : [19, 28, 31, 34, 39, 53, 54, 65, 66, 78, 99, 103, 106]
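
The "pareto points" reported above are the runs that are not dominated by any other run, i.e. no other run is at least as good on every metric and strictly better on one. The snippet below is only a generic illustration of such a filter, not the actual readlog.py code; the two metrics (error and frame time) and the sample values are made up.

# Illustration only: a generic Pareto filter over (error, time) pairs,
# where lower is better on both axes.
def pareto_front(points):
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p))) and q != p
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# (median absolute error in metres, mean frame time in seconds) per run
runs = [(0.020, 0.15), (0.025, 0.10), (0.030, 0.20), (0.018, 0.30)]
print(pareto_front(runs))  # -> [0, 1, 3], the non-dominated runs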

API

You can reproduce what the readlog script does to perform any data-mining operation you have in mind; the data can be collected this way:

import slamlog
data = slamlog.load_inputs(["summary.gkl"])

data is a map: each key is a filename, and each value is another map containing the keys ['date', 'Statistics', 'Properties', 'Summary']. You can then access the data this way:

data['res/algorithmic1163.log']['Summary']['algo']['AbsoluteError']['MEDIAN']
 2.26483511925
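
For example, a minimal data-mining loop over every run could look like the sketch below. It only relies on the keys shown above; runs whose summary does not contain the 'algo' / 'AbsoluteError' statistics are simply skipped.

import slamlog

data = slamlog.load_inputs(["summary.gkl"])

# Print the median absolute error of every run, sorted by filename.
for filename in sorted(data):
    summary = data[filename].get('Summary', {})
    try:
        median_ate = summary['algo']['AbsoluteError']['MEDIAN']
    except (KeyError, TypeError):
        continue  # this run has no accuracy record
    print("%s : %f" % (filename, median_ate))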