Library of Congress Cataloging-in-Publication Data
Names: Liu, Shaoshan, author.
Title: Engineering autonomous vehicles and robots : the DragonFly modular-based approach / Shaoshan Liu.
Description: First edition. | Hoboken : Wiley-IEEE Press, 2020. | Includes bibliographical references and index.
Identifiers: LCCN 2019058288 (print) | LCCN 2019058289 (ebook) | ISBN 9781119570561 (hardback) | ISBN 9781119570554 (adobe pdf) | ISBN 9781119570547 (epub)
Subjects: LCSH: Automated vehicles. | Mobile robots.
Classification: LCC TL152.8 .L585 2020 (print) | LCC TL152.8 (ebook) | DDC 629.04/6–dc23
LC record available at https://lccn.loc.gov/2019058288
LC ebook record available at https://lccn.loc.gov/2019058289
Cover Design: Wiley
Cover Images: Courtesy of Shaoshan Liu; Background © Chainarong Prasertthai/Getty Images
1 Affordable and Reliable Autonomous Driving Through Modular Design
1.1 Introduction
In recent years, autonomous driving has become a popular topic in the research community, in industry, and even in the press. But beyond the fact that it is exciting and revolutionary, why should we deploy autonomous vehicles? One reason is that ridesharing using clean-energy autonomous vehicles would revolutionize the transportation industry by reducing pollution and traffic problems, by improving safety, and by making our economy more efficient.
More specifically, and starting with pollution reduction: there are about 260 million cars in the US today. If we were to convert all cars to clean-energy cars, we would reduce annual carbon emissions by 800 million tons, which would account for 13.3% of the US commitment to the Paris Agreement [1]. Also, with near-perfect scheduling, if ridesharing autonomous vehicles could be deployed, the number of cars could be reduced by 75% [2]. Consequently, these two changes combined have the potential to yield an annual reduction of 1 billion tons in carbon emissions, an amount roughly equivalent to 20% of the US commitment to the Paris Agreement.
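The emission figures above can be sanity-checked with a few lines of arithmetic. The total US Paris Agreement commitment is not stated in the text; it is derived here from the 13.3% figure, and the chapter rounds the resulting combined share up to roughly 20%.

```python
# Back-of-envelope check of the emission figures cited in the text.
clean_energy_reduction_tons = 800e6   # from converting all cars to clean energy
share_of_commitment = 0.133           # 13.3% of the US Paris Agreement commitment

# Implied commitment total (derived, not stated in the text): ~6 billion tons.
commitment_tons = clean_energy_reduction_tons / share_of_commitment

combined_reduction_tons = 1e9         # clean energy + ridesharing, per the text
combined_share = combined_reduction_tons / commitment_tons
print(f"Implied US commitment: {commitment_tons / 1e9:.1f} B tons")
print(f"Combined reduction share: {combined_share:.1%}")  # ~17%, stated as roughly 20%
```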
As for safety improvement, human drivers have a crash rate of 4.2 accidents per million miles (PMM), while the current autonomous vehicle crash rate is 3.2 crashes PMM [3]. As the safety of autonomous vehicles continues to improve, dropping the autonomous vehicle crash rate below 1 PMM could save some 30 000 lives annually in the US alone [4].
Lastly, consider the impact on the economy. Each ton of carbon emissions has an estimated $220 impact on the US GDP, which means that $220 B could be saved annually by converting all vehicles to ride-sharing clean-energy autonomous vehicles [5]. Also, since the average cost per crash is about $30 000 in the US, dropping the autonomous vehicle crash rate below 1 PMM could yield another annual cost reduction of $300 B [6]. Therefore, in the US alone, the universal adoption of ride-sharing clean-energy autonomous vehicles could save as much as $520 B annually, an amount close to the GDP of Sweden.
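The savings estimate can be reproduced with simple arithmetic. The annual mileage per car is not given in the text; the 12 000 miles/year figure below is an assumption chosen for illustration, and it recovers the stated totals.

```python
# Back-of-envelope reproduction of the annual savings estimates.
# Assumed: ~12,000 miles driven per car per year (not stated in the text).
cars = 260e6
miles_per_car = 12_000                  # assumption
total_million_miles = cars * miles_per_car / 1e6

crashes_avoided_pmm = 4.2 - 1.0         # human rate minus target autonomous rate
cost_per_crash = 30_000
crash_savings = crashes_avoided_pmm * total_million_miles * cost_per_crash

carbon_savings = 220 * 1e9              # $220 per ton x 1 billion tons
print(f"Crash savings:  ${crash_savings / 1e9:.0f} B")   # ~$300 B
print(f"Carbon savings: ${carbon_savings / 1e9:.0f} B")  # $220 B
print(f"Total:          ${(crash_savings + carbon_savings) / 1e9:.0f} B")  # ~$520 B
```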
Nonetheless, the large-scale adoption of autonomous vehicles still faces several barriers, including reliability, ethical and legal considerations, and, not least, affordability. What are the problems behind building and deploying autonomous vehicles, and how can we solve them? Answering these questions demands that we first look at the underlying design.
1.2 High Cost of Autonomous Driving Technologies
In this section we break down the costs of existing autonomous driving systems and demonstrate that the high costs of sensors, computing systems, and High-Definition (HD) maps are the major barriers to autonomous driving deployment [7] (Figure 1.1).
1.2.1 Sensing
The typical sensors used in autonomous driving include Global Navigation Satellite System (GNSS), Light Detection and Ranging (LiDAR), cameras, radar and sonar: GNSS receivers, especially those with real-time kinematic (RTK) capabilities, help autonomous vehicles localize themselves by updating global positions with at least meter-level accuracy. A high-end GNSS receiver for autonomous driving could cost well over $10 000.
LiDAR is normally used for HD map creation, real-time localization, and obstacle avoidance. LiDAR works by bouncing a laser beam off surfaces and measuring the reflection time to determine distance. LiDAR units suffer from two problems: first, they are extremely expensive (an autonomous-driving-grade LiDAR can cost over $80 000); second, they may not provide accurate measurements under bad weather conditions, such as heavy rain or fog.
Cameras are mostly used for object recognition and tracking tasks, such as lane detection, traffic light detection, and pedestrian detection. Existing implementations usually mount multiple cameras around the vehicle to detect, recognize, and track objects. However, camera sensors have two important drawbacks: the data they provide may not be reliable under bad weather conditions, and the sheer volume of data creates high computational demands. Note that these cameras usually run at 60 Hz and, combined, can generate over 1 GB of raw data per second.
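To see where the 1 GB/s figure comes from, consider a rough data-rate estimate. The camera count, resolution, and pixel depth below are assumptions for illustration; the text only states 60 Hz and "over 1 GB of raw data per second" for the combined stream.

```python
# Illustrative data-rate estimate for a multi-camera rig.
# Assumed: 8 cameras at 1280x720, 3 bytes per pixel (raw RGB).
cameras = 8
width, height = 1280, 720
bytes_per_pixel = 3
fps = 60  # frame rate stated in the text

bytes_per_second = cameras * width * height * bytes_per_pixel * fps
print(f"{bytes_per_second / 1e9:.2f} GB/s")  # ~1.33 GB/s
```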
Figure 1.1 Cost breakdown of existing autonomous driving solutions.
Radar and sonar: The radar and sonar subsystems serve as the last line of defense in obstacle avoidance. The data they generate indicate the distance to the nearest object in the vehicle's path. A major advantage of radar is that it works under all weather conditions. Sonar usually covers a range of 0–10 m, whereas radar covers a range of 3–150 m. Combined, these sensors cost less than $5000.
1.2.2 HD Map Creation and Maintenance
Traditional digital maps are usually generated from satellite imagery and have meter-level accuracy. Although this accuracy is sufficient for human drivers, autonomous vehicles demand maps with higher accuracy for lane-level information. Therefore, HD maps are needed for autonomous driving.
Just as with traditional digital maps, HD maps have many layers of information. At the bottom layer, instead of satellite imagery, a grid map is generated from raw LiDAR data, with a grid granularity of about 5 cm by 5 cm. Each cell of this grid records the elevation and reflection information of the environment. As an autonomous vehicle moves and collects new LiDAR scans, it localizes itself by comparing the new scans in real time against the grid map, using initial position estimates provided by GNSS [8].
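The localization step described above can be sketched as a simple grid search: candidate poses near the GNSS estimate are scored by how well the new scan overlaps occupied cells of the grid map, and the best-scoring pose wins. This is a toy illustration only (2D, translation-only, binary occupancy); production systems match elevation and reflectance values and search over heading as well.

```python
import numpy as np

CELL = 0.05  # grid granularity in meters (5 cm, as in the text)

def score(grid, scan_pts, pose):
    """Sum of map values at the cells hit by the scan, shifted by `pose`."""
    cells = np.round((scan_pts + pose) / CELL).astype(int)
    h, w = grid.shape
    valid = (cells[:, 0] >= 0) & (cells[:, 0] < h) & \
            (cells[:, 1] >= 0) & (cells[:, 1] < w)
    cells = cells[valid]
    return grid[cells[:, 0], cells[:, 1]].sum()

def localize(grid, scan_pts, gnss_estimate, search=0.5):
    """Grid search over translations within +/- `search` m of the GNSS estimate."""
    best_pose, best_score = gnss_estimate, -np.inf
    for dx in np.arange(-search, search + CELL, CELL):
        for dy in np.arange(-search, search + CELL, CELL):
            pose = gnss_estimate + np.array([dx, dy])
            s = score(grid, scan_pts, pose)
            if s > best_score:
                best_pose, best_score = pose, s
    return best_pose

# Toy example: a map with one occupied wall, a scan of that wall, and a
# GNSS estimate that is 20 cm off the true position.
grid = np.zeros((100, 100))
grid[40, 20:80] = 1.0                                  # wall at x = 2.0 m
scan = np.array([[0.0, y] for y in np.arange(1.0, 4.0, CELL)])
true_pose = np.array([2.0, 0.0])
pose = localize(grid, scan, true_pose + np.array([0.2, 0.0]))
```

Correlation-based matching of this kind is robust to GNSS error as long as the initial estimate lands within the search window, which is why the GNSS prior is needed to keep the search small.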
On top of the grid layer, there are several layers of semantic information. For instance, lane information is added