Object detection on short LiDAR videos from buses in Trondheim, Norway. This was part of the course TDT17 - Visual Intelligence at NTNU, autumn 2021.


# LiDAR object detection project

## How to run

This project is run from the personal notebooks in the `notebooks` folder.

The final delivery notebook is `final.ipynb`.

YOLOv5 is the main architecture used so far.

## Generating training data

To train on the LiDAR videos, they first need to be converted to images, frame by frame.
The `src` directory contains a script that does exactly this:

```shell
python src/dataset_builder.py [--merge] [--patches]
```

Both flags are optional. `--merge` merges the three video channels (ambient, intensity and range) into RGB images. `--patches` splits every video frame into eight 128x128 images.
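As an illustration, the two optional flags could be wired up with `argparse`. This is a hypothetical reconstruction of the script's command-line interface, not the actual contents of `src/dataset_builder.py`:

```python
import argparse


def build_parser():
    # Hypothetical reconstruction of the dataset_builder.py command line;
    # the real script may define its flags differently.
    parser = argparse.ArgumentParser(
        description="Convert LiDAR videos to training images, frame by frame")
    parser.add_argument("--merge", action="store_true",
                        help="merge the ambient, intensity and range "
                             "channels into RGB images")
    parser.add_argument("--patches", action="store_true",
                        help="split each frame into eight 128x128 images")
    return parser


# Both flags default to False when omitted.
args = build_parser().parse_args(["--merge"])
```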

The script can also be run via the notebook.
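A rough sketch of what the two flags do, assuming frames are numpy arrays and a hypothetical 128x1024 frame size (the actual channel layout and resolution are defined in `src/dataset_builder.py`):

```python
import numpy as np


def merge_channels(ambient, intensity, range_ch):
    # --merge: stack the ambient, intensity and range channels
    # into one RGB-style image.
    return np.stack([ambient, intensity, range_ch], axis=-1)


def split_into_patches(image, size=128):
    # --patches: tile the frame into size x size crops
    # (assumes both frame dimensions are divisible by `size`).
    h, w = image.shape[:2]
    return [image[r:r + size, c:c + size]
            for r in range(0, h, size)
            for c in range(0, w, size)]


# With a hypothetical 128x1024 frame, tiling yields the eight
# 128x128 images mentioned above.
frame = merge_channels(*(np.zeros((128, 1024), dtype=np.uint8) for _ in range(3)))
patches = split_into_patches(frame)
```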

## Requirements

Please refer to the YOLOv5 documentation for installation and use.

The Python scripts were developed with Python 3.8, but are expected to work with other versions as well.

If you are using pip, run the following at the command line to install the project dependencies:

```shell
pip install -r requirements.txt
```
