Machine vision based landing zone recognition
Virtanen, Arno (2019)
Sähkötekniikan DI-ohjelma - Degree Programme in Electrical Engineering
Informaatioteknologian ja viestinnän tiedekunta - Faculty of Information Technology and Communication Sciences
This publication is copyrighted. You may download, display and print it for Your own personal use. Commercial use is prohibited.
Approval date
2019-11-06
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:tuni-201911014276
Abstract
This thesis presents how machine vision can be used in the automatic landing process of an unmanned aerial vehicle (UAV). The goal of this work is to study the concepts related to machine vision and UAVs, and to implement a system that can recognise a suitable landing location from the air and land at that location without user intervention in the landing procedure.
The work was implemented using open-source tools and libraries as far as possible; only the landing logic was implemented during this work. Different options for marking the landing location of the UAV were considered. The landing location must allow the UAV to determine the coordinates and orientation at which it will land, and it must be uniquely identifiable in an environment containing multiple landing locations. A landing is considered successful if the UAV lands within 15 cm of the landing location centre and with at most a 10-degree orientation deviation. In this work a QR-code-like marker, the ArUco marker, was used.
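The success criterion stated above (within 15 cm of the marker centre, at most 10 degrees of yaw deviation) can be expressed as a small check. This is an illustrative sketch based only on the numbers in the abstract; the function name and argument conventions are my own, not the thesis's.

```python
import math

def landing_success(dx_m, dy_m, yaw_err_deg,
                    max_offset_m=0.15, max_yaw_deg=10.0):
    """Return True if a landing meets the thesis's success criterion:
    horizontal error (dx_m, dy_m) within 15 cm of the marker centre
    and yaw error within 10 degrees of the marker orientation."""
    offset = math.hypot(dx_m, dy_m)  # Euclidean distance from centre
    return offset <= max_offset_m and abs(yaw_err_deg) <= max_yaw_deg

# The simulator accuracy reported below (~5 cm, <= 0.5 deg) passes easily:
print(landing_success(0.03, 0.04, 0.5))   # True (5 cm offset)
print(landing_success(0.12, 0.10, 2.0))   # False (~15.6 cm offset)
```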
The system functionality was developed in the Gazebo simulation environment, in which almost all of the components could be simulated and could communicate with the software developed for this work. The developed software implemented the control logic of the UAV and utilised a framework specialised for robot applications, the Robot Operating System (ROS). The tests conducted in the simulation environment were promising: the landing accuracy was in the range of 5 cm and the orientation deviation was at most 0.5 degrees. These tests had an overall success rate of over 90 %.
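The control logic described above steers the UAV over the detected marker before descending. A minimal sketch of that idea is a proportional controller that maps the marker's pixel offset from the image centre to horizontal velocity setpoints. This is an assumed, simplified stand-in for the thesis's actual ROS-based controller; the function, gain, and limits are hypothetical.

```python
def velocity_setpoint(marker_px, image_size, gain=0.5, max_v=1.0):
    """Proportional controller sketch: map the marker's pixel position
    (u, v) in an image of (width, height) to horizontal velocity
    setpoints (vx, vy) in m/s, driving the image centre over the marker.
    Not the thesis's exact controller; an illustrative assumption."""
    u, v = marker_px
    w, h = image_size
    # Normalised offset from the image centre, each in [-1, 1].
    ex = (u - w / 2) / (w / 2)
    ey = (v - h / 2) / (h / 2)
    clamp = lambda x: max(-max_v, min(max_v, x))
    return clamp(gain * ex), clamp(gain * ey)

# Marker exactly at the image centre: no correction needed.
print(velocity_setpoint((320, 240), (640, 480)))  # (0.0, 0.0)
```

In a real system the commanded velocities would be published as ROS messages to the flight controller, and the descent would proceed only while the marker stays near the image centre.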
The software developed with the help of the simulation environment could be transferred to the actual hardware without modifications. During early test flights it became clear that real-world flights should have been started alongside the simulator tests: in the simulation environment the GPS signal was nearly perfect, whereas in reality it contains large errors. During the software development phase it was not understood that the control signals used were GPS-based. This caused severe problems during the real-world tests, and the experiments were stopped because the UAV's movement was erratic and unpredictable, so the UAV could not land at all.
It was concluded that applications requiring high accuracy cannot be based on GPS data alone; rather than relying on a single sensor, the system should combine data from a variety of sensors. The system developed in this work functioned as intended in the ideal simulation environment but did not function properly in a real-world environment. The use of GPS data in the control scheme was determined to be the cause of the failure of the real-world experiments.
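The conclusion above can be illustrated with a simple thought experiment: if the controller steers perfectly to the *measured* position, the residual landing error is just the GPS measurement noise. The sketch below compares a near-ideal simulator-like GPS with an assumed real-world error of a few metres; the noise figures are illustrative assumptions, not measurements from the thesis.

```python
import random

def fraction_within_criterion(gps_noise_std_m, trials=1000, seed=42):
    """Monte Carlo sketch: assume the UAV lands exactly at the
    GPS-measured position, so the landing error equals the Gaussian
    measurement noise (std per axis, in metres). Return the fraction
    of landings within the 15 cm requirement."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        ex = rng.gauss(0.0, gps_noise_std_m)
        ey = rng.gauss(0.0, gps_noise_std_m)
        if (ex * ex + ey * ey) ** 0.5 <= 0.15:
            hits += 1
    return hits / trials

# Near-ideal simulator GPS vs. an assumed ~2 m real-world GPS error:
print(fraction_within_criterion(0.02))  # essentially always within 15 cm
print(fraction_within_criterion(2.0))   # almost never within 15 cm
```

This is why the abstract's recommendation is sensor fusion: the camera-based marker pose provides the centimetre-level relative accuracy that GPS cannot.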