DIY Robocars KuaiKai – Autonomous Vehicle Racing

13 engineers. 10 countries. 5 days. 2 full-sized autonomous cars.

To make a self-driving car from scratch in less than a week is not an easy task at all… But nothing is stronger than a team with a balance of motivation, experience and perseverance.

– Oscar Rovira, 2018

So what do the profiles of the team of engineers look like?

- Alexander (Alex) is an Assistant Professor at Nagoya University in Japan and has several years of experience in autonomous vehicle and robotic systems development.
- David Wong has just completed his PhD in the field of visual localization systems and is now an Assistant Professor at Nagoya University. Alex and David are also part of Tier IV in Japan, the company that maintains Autoware, a software stack for autonomous vehicles.
- Rohan is an Electrical Engineering and Robotics student at the Indian Institute of Technology (IIT) Madras, has interned on NVIDIA’s self-driving hardware team, and is now working on his own startup, Dynamove.
- Oscar is a Mechanical and Electrical Engineering student at the University of Bath, UK; he is finishing the Udacity Nanodegree on self-driving cars and is currently interning at the SAIC Technical Centre, UK.
- Matteo is a student at Cardiff University in the UK and a member of the Cardiff Driverless Racing team.
- Mo is pursuing his PhD in Civil Engineering at Cardiff University and is also a member of that team.
- Mingyang is a Masters student at the Ludwig Maximilian University (LMU) of Munich in Germany and is currently interning at BMW in the USA.
- Ernie is a Masters student at the Robotics Institute at Carnegie Mellon University (RI, CMU) and is currently interning with Uber Advanced Technologies Group (ATG).
- David Qiu is also a Masters student at RI, CMU, and is currently interning at NASA’s Jet Propulsion Laboratory, working on the design of autonomous Mars rovers.
- JiaWei is an engineer on the autonomous vehicles team at SAIC Motors in Shanghai, China.
- Adam is a Masters student at Lanzhou University and a member of their autonomous vehicle team.
- Kevin is an engineer at an autonomous driving startup in Shenzhen, China.
- Khaled is a senior software engineer at Valeo in Egypt, working on simulator environments for self-driving cars.

Believe me when I say that the profiles above are only brief introductions to these amazing people. Every one of them brings a diverse background and years of technical experience. We all applied online for the DIY Robocars KuaiKai self-driving racing event and were selected to attend in May 2018. We met each other for the first time at the C-Zone Hackspace in Guiyang on 19 May 2018. On the first day, we introduced ourselves and our fields of expertise, and then began discussing the challenges posed by the event. The organizers, PIX Moving, had designed the event as an obstacle course, with 16 challenges for us to complete.

The racetrack comprised roads in the Guizhou Science and Technology Industrial Park. It was closed to the public on the day of the race, and when we needed to collect data, we went there early in the morning or late at night so that we could capture clean 3D LIDAR point clouds.

The challenges were split into basic and advanced challenges. The basic challenges were traffic light perception, pedestrian avoidance, bicycle avoidance, a bus station, a U-Turn, an S-Bend, speed limits and, finally, a stop line. The advanced challenges were a GPS outage, perception of an under-construction sign (lane switch), vehicle queuing, automatic parking, slope driving, a refueling simulation, accident avoidance and, once again, a stop line. We had the option of attempting either the basic or the advanced challenges.

Since we had just 5 days to build the autonomous vehicle system, we had to use an existing open-source self-driving platform. Our options included Baidu’s Apollo Auto platform and Tier IV’s Autoware platform. We unanimously chose Autoware, since several team members already had experience with it, and it supported the VLP-16 and VLP-32 LIDAR sensors that we had access to.

Once we settled on Autoware, we realized that splitting into different teams would not make much sense, since a large portion of the software stack would be shared. We coalesced into a single team with a single goal – beating the human driver with our AI driver to win the Ultimate Challenge! There was a long list of tasks to be performed, and we divided them among ourselves into different verticals.

We were provided with two new full-sized EU400 electric sedans from BAIC Motors, whose CAN buses had been hacked by the PIX Moving team to provide drive-by-wire control. Our jobs included mounting the various sensors (GPS, cameras and LIDAR), installing the vehicle computer that would run the autonomous driving software, installing batteries and an inverter to power the computer, and developing and deploying the code for the challenges.

Different sets of people worked with the PIX Moving employees to mount the sensors on the two vehicles using custom-designed hardware mounts; the C-Zone hackspace had the facilities to fabricate all of this in-house. We deliberated on the positions of the various sensors and, since we were working as a single team, we decided to use two cameras and two LIDAR sensors on each car, giving our algorithms both additional data and redundancy. We were also using a new RTK GPS device based on the BeiDou system, so we needed someone to work on reading data from it and integrating it with Autoware.

The industrial cameras were provided by MindVision, and we found Chinese instruction manuals for them online. It took us some time to get them working with our computers because of an issue with the provided drivers: they would only work on specific laptops with USB 3.1 interfaces. After discussing the problem with MindVision’s support staff, we obtained updated drivers and got the cameras working on all of our computers. We then developed ROS nodes to publish the images from the cameras so that they could be consumed by the other computer vision nodes. We also tried to calibrate the cameras using MATLAB’s camera calibration tool and the checkerboard we were using for the traffic light perception task.
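The publisher nodes themselves were simple. Here is a minimal sketch of the idea, with OpenCV’s generic capture standing in for the MindVision SDK, and with the node name, topic name and camera index chosen purely for illustration (they are not the exact ones we used):

```python
# Minimal sketch of a ROS camera publisher node (illustrative only:
# cv2.VideoCapture stands in for the MindVision SDK, and the node,
# topic name and camera index are assumptions).
import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

def main():
    rospy.init_node("camera_publisher")
    pub = rospy.Publisher("/camera/image_raw", Image, queue_size=1)
    bridge = CvBridge()
    cap = cv2.VideoCapture(0)   # hypothetical camera index
    rate = rospy.Rate(30)       # publish at roughly 30 Hz
    while not rospy.is_shutdown():
        ok, frame = cap.read()
        if ok:
            pub.publish(bridge.cv2_to_imgmsg(frame, encoding="bgr8"))
        rate.sleep()

if __name__ == "__main__":
    main()
```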

Once we had the vehicles set up, we had to collect 3D point cloud data of the racetrack, so that the vehicle could be localized and could navigate around the course. We drove the vehicles around the racetrack manually, while collecting ROSbag data from the sensors. Once we had done that, we were able to build a point cloud map using the NDT Mapping functionality in Autoware. Using the point cloud map, the vehicle was able to localize itself in the environment.
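Conceptually, NDT Mapping registers each new LIDAR scan against the map built so far and appends it at the estimated pose. The sketch below illustrates that scan-to-map idea using Open3D’s ICP as a stand-in for NDT registration; it is not the Autoware implementation, and the file paths and parameters are placeholders.

```python
# Scan-to-map accumulation sketch (Open3D ICP used as a stand-in for the
# NDT registration that Autoware's ndt_mapping performs; parameters and
# file paths are placeholders).
import copy
import numpy as np
import open3d as o3d

def build_map(scan_files, voxel=0.5):
    map_cloud = o3d.io.read_point_cloud(scan_files[0]).voxel_down_sample(voxel)
    pose = np.eye(4)                       # running estimate of the vehicle pose
    for path in scan_files[1:]:
        scan = o3d.io.read_point_cloud(path).voxel_down_sample(voxel)
        result = o3d.pipelines.registration.registration_icp(
            scan, map_cloud, 1.0, pose,    # 1.0 m max correspondence distance
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        pose = result.transformation       # pose that aligns this scan to the map
        map_cloud += copy.deepcopy(scan).transform(pose)
        map_cloud = map_cloud.voxel_down_sample(voxel)  # keep the map compact
    return map_cloud
```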

For the different challenges, we had two options: we could add them to the vector map – a map containing physical assets such as traffic signs, bus stops and fuel stops – or we could use the cameras and computer vision to send control signals to the car as it perceived them. We chose to work on these two approaches in parallel. A number of people worked on camera and LIDAR perception – identifying the bicycle, the pedestrian, the construction and accident zones, the traffic signal (which was replaced by a checkerboard, since it wasn’t possible to set up an actual signal), the fuel stop sign, the bus stop sign, the queuing vehicle and the cones for the S-Bend challenge. First we developed the computer vision or signal processing code required for each of these challenges, and then we built ROS nodes to integrate their outputs and control the vehicle through Autoware. Building the vector map required proprietary software from Tier IV, for which we got permission and access courtesy of Alex’s colleague at Nagoya University.

We also had a group working on getting commands from the software to the CAN bus to drive the vehicle. This involved testing the drive-by-wire interface (hardware and software) developed by PIX Moving, as well as writing the code (a ROS twist and velocity node) to interface with it from Autoware. The CAN team overcame numerous obstacles along the way, and we owe them a great deal for enabling the safe movement of the vehicles. The vehicles were fitted with the Vehicle Control Unit (VCU) from the PIX By-Wire kit.
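Conceptually, the bridge takes the velocity and yaw-rate commands that Autoware publishes and forwards them to the VCU. A stripped-down sketch of that idea follows; the send_to_vcu() helper is a hypothetical placeholder for the hardware-specific CAN write, the topic name reflects a typical Autoware setup rather than our exact configuration, and the real node handled far more (mode switching, limits, safety checks).

```python
# Bare-bones sketch of bridging Autoware velocity commands to the by-wire
# VCU. send_to_vcu() is a hypothetical placeholder for the hardware-specific
# CAN interface; the actual PIX interface and our node were more involved.
import rospy
from geometry_msgs.msg import TwistStamped

def send_to_vcu(speed_mps, yaw_rate):
    # Placeholder: the real implementation wrote frames onto the CAN bus.
    rospy.loginfo("cmd: %.2f m/s, %.2f rad/s", speed_mps, yaw_rate)

def on_twist(msg):
    send_to_vcu(msg.twist.linear.x, msg.twist.angular.z)

if __name__ == "__main__":
    rospy.init_node("bywire_bridge")
    rospy.Subscriber("/twist_cmd", TwistStamped, on_twist)
    rospy.spin()
```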

Once we could control the car and had the point cloud map, we needed a planner for navigation. Again we had two options: the OpenPlanner in Autoware, which would use waypoint following and pure pursuit, or the Dynamic Planner (DP), which would be able to dynamically generate new paths when obstacles appeared along the original path. We did use the OpenPlanner to test the vehicle and make sure it could drive autonomously around the racetrack, but we also really wanted to get the DP planner working so that we could clear the obstacle avoidance challenges of pedestrian and bicycle detection.
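For reference, the core of pure pursuit is a few lines of geometry: pick a lookahead point on the path and steer along the circular arc that passes through it. The sketch below shows that calculation for a bicycle model; it is a textbook illustration of the idea, not Autoware’s implementation, and the numbers in the example are arbitrary.

```python
# Pure pursuit steering for a bicycle model: steer toward a lookahead
# point given in the vehicle frame (x forward, y left). Illustrative only.
import math

def pure_pursuit_steering(lookahead_x, lookahead_y, wheelbase):
    d2 = lookahead_x ** 2 + lookahead_y ** 2   # squared distance to lookahead point
    if d2 == 0.0:
        return 0.0
    curvature = 2.0 * lookahead_y / d2         # arc through the origin and the point
    return math.atan(wheelbase * curvature)    # steering angle in radians

# Example: a point 5 m ahead and 1 m to the left, with a 2.7 m wheelbase.
print(pure_pursuit_steering(5.0, 1.0, 2.7))
```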

To detect the checkered flag, we tried a variety of classical computer vision approaches, since training a deep-learning-based classifier would have taken too long. We had a picture of the flag and tried SIFT/SURF features, but this produced many false positives because of the abundance of corner features both in typical scenes and in the flag itself. We asked the organizers to replace the flag – which flaps around and is therefore hard to detect – with a rigid checkerboard. We then tried thresholding and simple blob detection in OpenCV, which gave really good results. Similarly, for the signs (bus, fuel stop, construction, accident zone), we used simple colour thresholding and contour detection, which detected the signs reliably with few false negatives or positives, and we classified them by comparing against template images with the SSIM measure.
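A rough sketch of this pipeline is below: threshold the image and run OpenCV’s blob detector for the checkerboard, and score sign candidates against a template with SSIM. The threshold and blob parameters here are illustrative, not the values we actually tuned on site.

```python
# Sketch of the threshold + blob detection and SSIM classification steps
# (parameter values are illustrative, not the ones we tuned at the event).
import cv2
from skimage.metrics import structural_similarity as ssim

def detect_checkerboard_blobs(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 90, 255, cv2.THRESH_BINARY)
    params = cv2.SimpleBlobDetector_Params()
    params.filterByArea = True
    params.minArea = 50                      # ignore small speckles
    detector = cv2.SimpleBlobDetector_create(params)
    return detector.detect(binary)           # dark squares show up as blobs

def classify_sign(candidate_bgr, template_gray):
    # Resize the candidate region to the template size and compare with SSIM;
    # a score close to 1.0 means a good match.
    gray = cv2.cvtColor(candidate_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (template_gray.shape[1], template_gray.shape[0]))
    return ssim(gray, template_gray)
```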

To detect the cones, we used Euclidean clustering on the LIDAR point cloud data, followed by a simple intensity-based filter, since the cones were highly reflective and therefore produced high-intensity returns. This worked really well, and we were able to use it to identify the S-Bend and navigate accordingly. To detect vehicles and pedestrians, we used pre-trained models for the Single-Shot Multibox Detector (SSD) architecture, which ran reasonably fast on a CPU and in real time on a GPU.
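A sketch of the cone detector idea is below, with scikit-learn’s DBSCAN standing in for the Euclidean clustering step; the intensity threshold and clustering parameters are illustrative assumptions.

```python
# Cone detection sketch: cluster the LIDAR points, then keep clusters whose
# mean intensity is high (cones are retro-reflective). DBSCAN stands in for
# Euclidean clustering; all thresholds here are illustrative.
import numpy as np
from sklearn.cluster import DBSCAN

def find_cones(points_xyzi, eps=0.4, min_points=5, intensity_min=100.0):
    """points_xyzi: (N, 4) array of x, y, z, intensity for one LIDAR scan."""
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(points_xyzi[:, :3])
    cones = []
    for label in set(labels):
        if label == -1:                           # DBSCAN's noise label
            continue
        cluster = points_xyzi[labels == label]
        if cluster[:, 3].mean() > intensity_min:  # keep only reflective clusters
            cones.append(cluster[:, :3].mean(axis=0))
    return cones                                  # list of cone centroids
```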

It took a long time (several hours) and a lot of processing power (32 GB of RAM and a quad-core Intel Core i7 processor) to build the point cloud map of the racetrack from the ROSbag LIDAR data; we used Alex’s laptop for this purpose. Once this was done, we used the proprietary software to build the vector map, which worked perfectly in simulation, but we were unable to get it working on the actual vehicle. The DP planner also had some issues that we could not resolve on the actual vehicle. We were also unable to get stable GPS output, so we chose not to use GPS for the final race; localization worked perfectly with NDT matching against our point cloud map alone.

For sending commands to the vehicle via Autoware, we tried publishing messages to the “light_color” topic so that the car would stop at a red light, but these were not being picked up by the car. After going through the Autoware code, we found that the light state was ultimately republished to a managed topic, so we decided to subscribe to that topic and directly set the x-velocity of the vehicle to zero when a red light was reported. Luckily, in practice the braking was not abrupt – zeroing the command brought the car to a smooth stop.
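The override amounted to a small node that watches the light state and zeroes the forward velocity in the command stream. The sketch below shows only the idea: the topic names and the plain Int32 message are stand-ins for the Autoware traffic light message and managed topic we actually used, and the red-light encoding is an assumption.

```python
# Sketch of the stop-for-red override. Topic names and the Int32 message
# are stand-ins for the Autoware traffic light message and managed topic we
# actually used; the RED encoding is an assumption.
import rospy
from geometry_msgs.msg import TwistStamped
from std_msgs.msg import Int32

RED = 0                           # hypothetical encoding of the light state
light_state = None

def on_light(msg):
    global light_state
    light_state = msg.data

def on_twist(msg):
    if light_state == RED:
        msg.twist.linear.x = 0.0  # zero the forward velocity to stop the car
    cmd_pub.publish(msg)

if __name__ == "__main__":
    rospy.init_node("red_light_override")
    cmd_pub = rospy.Publisher("/twist_cmd", TwistStamped, queue_size=1)
    rospy.Subscriber("/light_color_managed", Int32, on_light)
    rospy.Subscriber("/twist_raw", TwistStamped, on_twist)
    rospy.spin()
```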

The event culminated in two self-driving cars that scored a recorded 105 points out of a maximum of 150 across the challenges. After the event, we continued working on the perception tasks well into the evening, tested them at night with the headlights on, and succeeded at those as well. Maybe now our score would be around 120!

All in all, this competition was an exhilarating experience, with the opportunity to meet and work with brilliant minds from across the world! The challenges only made it more realistic and useful for the self-driving industry. It was an honour to work in this exciting field with motivated and talented people. Perhaps next time, we may be able to beat the human drivers too!
