Group 1/example subpage

Project description
A quadcopter has two levels of control, local and global; these terms are used throughout the project and are defined as follows. Local control is the stable flight of the quadcopter: it makes the quadcopter hover in place and move as dictated by the global control. Global control is the behavior of the quadcopter: it dictates in which direction the quadcopter should fly and prevents it from colliding with other objects.
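The interplay between the two levels can be illustrated with a minimal sketch. This is a hypothetical toy example, not our actual flight code: the global control picks a setpoint (here an altitude), and the local control runs a PID loop to track it against a crude plant model.

```python
# Hypothetical sketch of the local/global split -- not the actual flight code.
class PID:
    """Minimal PID controller used by the local control to hold a setpoint."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Global control decides *where* to go (here: hold 1.5 m altitude);
# local control decides *how* to get there (throttle correction).
altitude_pid = PID(kp=0.8, ki=0.1, kd=0.2)
height = 0.0
for _ in range(200):                  # 200 steps of a 50 Hz loop (4 seconds)
    thrust = altitude_pid.update(setpoint=1.5, measurement=height, dt=0.02)
    height += thrust * 0.02           # crude plant model: thrust changes height
```

The gains and the plant model are made up for illustration; the real local control runs on the MultiWii board.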

For this project specifically, the quadcopter will be able to fly under human global control with the aid of an RF transceiver pair; the RF transceiver can also switch the quadcopter to autopilot. The autopilot will navigate the quadcopter through a static indoor environment in order to explore it and create a map. The quadcopter will navigate with the aid of visual sensors.

A typical situation where this quadcopter can be used is a building that cannot be entered by humans due to a potential hazard. The quadcopter can be sent into the building to explore it and generate a map, after which rescue teams or other, more specialized robots can be sent in. The quadcopter could be developed further to perform simple rescue tasks itself.

The project is divided into three tracks:
 * Local control
 * Line follow system with a camera
 * Distance measuring and mapping with a camera

The tracks do not directly depend on each other, so they can be carried out in parallel. The project consists of multiple products, divided over two groups: the first group contains the products that must be delivered (must have); the second group contains products that will be done if there is enough time and there are no problems with the products they depend on. The products are as follows:


 * Must have:
 * Flying quadcopter with open source software
 * Stable local control
 * Able to carry an image sensor
 * Distance measuring with vision
 * Follow a line with vision
 * Mapping a building


 * Nice to have:
 * Autonomous navigation in a static environment
 * Object avoidance

The environment must be static: there may not be any moving objects around the system.

The pictures above show the general setup for our hardware and applications.

Group Progress

 * Tuesday 26-03-2013
 * OpenCV working in a virtual machine on the PC.


 * Thursday 19-04-2013
 * Finally our last package has arrived and we can start building the frame! Pictures will follow once we have finished building. We still need cables and a battery connector.


 * Friday 19-04-2013
 * MultiWii: the Arducopter software project is running and communicating with the PC.
 * Kinect: example project in Linux with depth image.
 * Beagle Board: Kubuntu running on the Beagle Board, communication over Wifi (dongle), and able to control the Kinect.
 * OpenCV: capturing a webcam stream, saving it as a video, and some simple video transformations. Inter-socket communication (UDP); not able to stream video yet.


 * Friday 24-05-2013
 * Quadcopter: it is flying (and has been for a while), not completely stable yet, but this will be tuned in the next weeks. Video
 * Bottom cam: we have managed to get our bottom cam running on the Beagle Board with OpenCV; in the end we succeeded using the Trust Spotlight webcam.
 * Line detection: with the bottom cam we can detect a line and live-stream the result from the Beagle Board to a PC over UDP. Next up is recognizing a crossing or T-junction and creating global control to follow the line.
 * Sonar: we have started working on our sonar for height detection.
 * Kinect: we can process the depth image from the Kinect and construct a 3D view.
 * A video of our progress so far:
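The idea behind the line detection can be sketched in a few lines of plain Python (a simplified stand-in, not our actual OpenCV implementation): threshold the bottom-cam frame to a binary mask, then use image moments to estimate the line's lateral offset and heading angle.

```python
import math

# Simplified sketch of line detection: given a binary mask of the
# bottom-cam frame, image moments give the line's centroid (lateral
# offset) and principal axis (heading angle).
def line_offset_and_angle(mask):
    """mask: 2D list of 0/1 pixels. Returns (offset_px, angle_deg)."""
    pts = [(x, y) for y, row in enumerate(mask)
                  for x, v in enumerate(row) if v]
    n = len(pts)
    cx = sum(p[0] for p in pts) / n
    cy = sum(p[1] for p in pts) / n
    # central second moments give the principal axis of the blob
    mu20 = sum((x - cx) ** 2 for x, y in pts) / n
    mu02 = sum((y - cy) ** 2 for x, y in pts) / n
    mu11 = sum((x - cx) * (y - cy) for x, y in pts) / n
    angle = 0.5 * math.atan2(2 * mu11, mu20 - mu02)
    width = len(mask[0])
    return cx - width / 2, math.degrees(angle)

# A vertical line in the right half of a 9x9 "image":
mask = [[1 if x == 6 else 0 for x in range(9)] for _ in range(9)]
offset, angle = line_offset_and_angle(mask)
# offset > 0 -> the line lies to the right of the image center
```

In the real pipeline the mask would come from an OpenCV threshold on the webcam frame; the offset and angle are the quantities handed to the global control.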


 * Monday 24-06-2013
 * Quadcopter: it is flying more stably and the height control is implemented; we had some issues with the power management.
 * Sonar: the 5V supply to our sonar was quite noisy, which caused inaccuracies in the height measurement. We have tried different power source implementations, but in the end the best solution is a separate power source (battery) due to the high level of noise.
 * Kinect: we have connected the Kinect to the Beagle Board and are now able to transmit 5 fps of depth data over a UDP broadcast. This broadcast is received by the mapping application, which can now show a point cloud map of a static environment.
 * Line detection: the algorithm is implemented and will be tested on the quadcopter this week.
 * Global control: it is working in theory and will be tested this week.
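Sending depth frames over UDP means splitting each frame into chunks that fit in a datagram. A minimal sketch of that scheme (localhost here instead of a broadcast address, and a made-up 6-byte header; the real setup broadcasts from the Beagle Board):

```python
import socket, struct

# Sketch of chunked UDP frame transfer. Header layout (our own convention
# for this example): frame id, chunk sequence number, total chunk count.
CHUNK = 1024  # UDP payloads should stay well below the MTU in practice

def send_frame(sock, addr, frame_id, payload):
    """Split one depth frame into numbered chunks: (frame_id, seq, total) + data."""
    total = (len(payload) + CHUNK - 1) // CHUNK
    for seq in range(total):
        chunk = payload[seq * CHUNK:(seq + 1) * CHUNK]
        sock.sendto(struct.pack("!HHH", frame_id, seq, total) + chunk, addr)

recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
addr = recv.getsockname()

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
fake_depth = bytes(range(256)) * 10          # stand-in for one depth frame
send_frame(send, addr, frame_id=1, payload=fake_depth)

# Reassemble on the receiving side
parts = {}
expected = (len(fake_depth) + CHUNK - 1) // CHUNK
while len(parts) < expected:
    data, _ = recv.recvfrom(2048)
    fid, seq, total = struct.unpack("!HHH", data[:6])
    parts[seq] = data[6:]
reassembled = b"".join(parts[i] for i in range(len(parts)))
```

On a real network, UDP chunks can be lost or reordered, so the receiver should drop incomplete frames rather than block; for a 5 fps depth stream, losing a frame now and then is acceptable.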


 * Friday 05-07-2013
 * Demo: we have been working hard this week to be able to give a demonstration today; unfortunately one small part of our implementation is not working yet. The communication between the global control (Beagle Board) and local control (MultiWii) sometimes fails, making it hard to control the quadcopter and tune the global control. The problem could lie in the inter-process communication (named pipe) or the serial connection between the Beagle Board and the MultiWii ... or maybe something else.
 * Point cloud stitching: furthermore, there are still some issues with the point cloud stitching (mapping application).
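One way to rule the named pipe in or out as the failing link is to exercise it in isolation. A minimal sketch (hypothetical commands; the real pipe carries our control messages between the Beagle Board processes):

```python
import os, tempfile, threading

# Minimal isolated test of a named-pipe (FIFO) link like the one between
# the global control process and the MultiWii bridge process.
fifo = os.path.join(tempfile.mkdtemp(), "control_pipe")
os.mkfifo(fifo)

def writer():
    # Stand-in for the global control writing commands into the pipe.
    with open(fifo, "w") as f:
        for cmd in ("THROTTLE 1500", "PITCH 20", "YAW -5"):
            f.write(cmd + "\n")

t = threading.Thread(target=writer)
t.start()
with open(fifo) as f:              # open blocks until the writer connects
    received = [line.strip() for line in f]
t.join()
```

If this round-trip is reliable while the full system still drops commands, the serial connection becomes the prime suspect.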


 * Wednesday 10-07-2013
 * Kinect: we have finally decided to strip down the Kinect to reduce the weight the quadcopter has to carry. With the original Kinect we had to fly at 70% of full power to hover; with the stripped-down version we only need 50%.
 * Serial communication: we have isolated our communication issue. The serial connection sometimes fails due to timeout errors; we will try a different implementation of the serial communication to solve this.
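Independent of which serial implementation we end up with, timeouts can be absorbed by a small retry wrapper. A sketch with a stand-in read function (on the quadcopter, `read_packet` would be a read on the serial port that raises on timeout):

```python
import time

# Sketch of retry logic around a serial read that can time out.
# read_packet is a stand-in; the real one reads from the MultiWii link.
def read_with_retry(read_packet, retries=3, delay=0.0):
    last_err = None
    for attempt in range(retries):
        try:
            return read_packet()
        except TimeoutError as err:
            last_err = err
            time.sleep(delay)          # brief pause before retrying
    raise last_err

# Simulate a link that times out twice, then succeeds:
attempts = {"n": 0}
def flaky_read():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TimeoutError("serial read timed out")
    return b"$M>..."                   # placeholder response bytes

packet = read_with_retry(flaky_read)
```

Retrying only masks the symptom, of course; it buys tolerance while we look for the underlying cause of the timeouts.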

Hardware Components
See Plan of Action slides for the component list

Beagle Board
We are using the Beagle Board xM, revision C1.
 * USB: the board has one on-board high-speed USB hub with 4 USB ports and one USB OTG (On-The-Go) port. Our Wifi, Kinect and bottom cam were all connected to this same USB hub. We already suspected this would be a bottleneck in our design, and when running both the Kinect streaming application and the line detection application we noticed that performance was very poor. We therefore decided to use the USB OTG port as an extra USB host. The OTG port is normally used to power the board or to connect it to a PC, in which case the port is forced into slave mode. By grounding pin 4 of the connector, the OTG port can be forced into master mode, giving the board a second USB host; the revision C1 board has a jumper (J1) to connect pin 4 to ground. Our performance doubled with this setup! We should mention that the OTG port can only supply 100 mA. Even though our camera uses at most 30 mA, we could not get it to work, so we next powered the camera from a separate 5 V source (disconnecting its power line from the USB host). The Beagle Board still gave an error ("dmesg" shows messages for externally connected devices) saying that the device requests too much power (the exact error: "rejected 1 configuration due to insufficient available bus power"). USB devices have a negotiation protocol, and apparently our webcam tells the USB host that it requires more than 100 mA. We finally solved this by placing a USB hub between the camera and the USB OTG port.

Bottom camera
As a bottom camera we use the Trust Spotlight, which works with the Beagle Board and OpenCV. We'll add details on how to set this up.

Kinect
To create a depth map of the environment we use the XBOX 360 Kinect, with the library libfreenect to read its data; this is described in more detail in the software section. The Kinect is normally powered by a 12V adapter; research on the internet taught us that the Kinect operates at least in the voltage range of 12V down to 8.5V. We measured that the adapter outputs 12.3V, so we concluded it would be safe to connect the Kinect directly to our LiPo battery pack, which ranges from 12.4V down to 9.9V (we only drain our cells down to 3.3V). We have tested this setup: the Kinect draws 0.3A at 12.4V and about 0.4A at 8.5V. The special Kinect USB connectors are hard to come by, so we cut the wire of the adapter to connect the Kinect to our battery. This cable has two wires: brown is the + wire and gray is the GND wire. We use a jack connector so that the Kinect can be connected either to its original power source or to the battery.

To reduce the weight of the Kinect we have stripped down the device.

The casing and stand of the Kinect have been removed and a new cable has been constructed; the battery power supply, the USB power supply and the USB data are connected directly to the header on the Kinect. The picture above shows the connection diagram. A connector was hard to come by, so we modified a (small) flat-cable connector from an old PC.

Software
...

Image Stitching
Besides a depth map we can also capture normal RGB images with the Kinect's camera. Image stitching is a nice technique for turning the pictures captured during a 360-degree rotation into a panorama.

Bottom camera
We use OpenCV to read out the bottom camera; the video is processed online to detect a line. The direction and position of the line are calculated by our line detection algorithm, and this information is sent to the global control to steer the quadcopter.

Kinect
libfreenect is a lightweight library to read the data from both the RGB and the depth camera of the Kinect. An example of how to access these cameras can be found in the C++ wrapper.
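Note that the depth camera returns 11-bit raw values, not meters. A commonly cited approximation from the OpenKinect community converts raw values to metric depth; a sketch (the constants come from community calibration, so treat them as approximate):

```python
# Convert a raw 11-bit Kinect depth value to meters, using the commonly
# cited OpenKinect community approximation (valid roughly 0.4 m - 5 m).
def raw_to_meters(raw):
    if raw >= 2047:            # 2047 marks "no reading" (shadow / out of range)
        return None
    return 1.0 / (raw * -0.0030711016 + 3.3309495161)

# e.g. build a metric depth row from raw samples
row = [600, 700, 2047]
depths = [raw_to_meters(r) for r in row]
```

This per-pixel conversion is what turns a depth frame into the metric point cloud used by the mapping application.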

Beagle Board in general
...

 * Setting process priorities: nice & (where -20 is the highest priority and 19 the lowest)
 * Setting the priority of a running process: renice : link
 * General performance can be viewed in "top" (command: top)
 * When the priority has been set, the processes no longer respond to Ctrl+C; a process can be killed with: kill -KILL
 * Running multiple processes in parallel from a bash script: link

Slides

 * [[media:PoA_group1.pdf|Our Plan of Action]]

Installing OpenCV on Kubuntu
We have installed Kubuntu on a VMWare Virtual Machine


 * Install:
 * http://docs.opencv.org/doc/tutorials/introduction/linux_install/linux_install.html#linux-installation
 * sudo apt-get update
 * sudo apt-get install build-essential [gcc]
 * sudo apt-get install cmake
 * sudo apt-get install git
 * sudo apt-get install python-dev python-numpy
 * sudo apt-get install pkg-config
 * sudo apt-get install libgtk2.0-dev
 * sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev [movie codecs]
 * sudo apt-get install libjpeg-dev libpng-dev libtiff-dev libjasper-dev [image codecs]
 * [install openCV from sourceforge: http://sourceforge.net/projects/opencvlibrary/files/latest/download]
 * sudo apt-get install default-jre [Java]
 * [install eclipse: http://www.eclipse.org/downloads/download.php?file=/technology/epp/downloads/release/juno/SR2/eclipse-cpp-juno-SR2-linux-gtk.tar.gz]

 * Make and build:
 * qc@ubuntu:~$ cd Documents/
 * qc@ubuntu:~/Documents$ cd opencv-2.4.4/
 * qc@ubuntu:~/Documents/opencv-2.4.4$ mkdir bin
 * qc@ubuntu:~/Documents/opencv-2.4.4$ cd bin
 * qc@ubuntu:~/Documents/opencv-2.4.4/bin$ cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local ..

-- Configuring done -- Generating done -- Build files have been written to: /home/qc/Documents/opencv-2.4.4/bin

[takes some time...]
 * qc@ubuntu:~/Documents/opencv-2.4.4/bin$ make


 * qc@ubuntu:~/Documents/opencv-2.4.4/bin$ sudo make install

Add the library path to your path variables in ~/.bashrc: export LD_LIBRARY_PATH=/usr/local/lib

Example project in Eclipse: http://docs.opencv.org/doc/tutorials/introduction/linux_eclipse/linux_eclipse.html

Run the application: qc@ubuntu:~/Documents/opencv-sources/Debug$ ./DisplayImage ../QC.png

We have some examples running, but we are having problems connecting our webcam to the VMWare Virtual Machine; we would like to hear from you if you have managed to do this.