Engineering Autonomous Vehicles and Robots

      Author: Shaoshan Liu

      Publisher: John Wiley & Sons Limited

      ISBN: 9781119570547
or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

       Library of Congress Cataloging-in-Publication Data

      Names: Liu, Shaoshan, author.

      Title: Engineering autonomous vehicles and robots : the DragonFly modular-based approach / Shaoshan Liu.

      Description: First edition. | Hoboken : Wiley-IEEE Press, 2020. | Includes bibliographical references and index.

      Identifiers: LCCN 2019058288 (print) | LCCN 2019058289 (ebook) | ISBN 9781119570561 (hardback) | ISBN 9781119570554 (adobe pdf) | ISBN 9781119570547 (epub)

      Subjects: LCSH: Automated vehicles. | Mobile robots.

      Classification: LCC TL152.8 .L585 2020 (print) | LCC TL152.8 (ebook) | DDC 629.04/6–dc23

      LC record available at https://lccn.loc.gov/2019058288

      LC ebook record available at https://lccn.loc.gov/2019058289

      Cover Design: Wiley

      Cover Images: Courtesy of Shaoshan Liu; Background © Chainarong Prasertthai/Getty Images

      1.1 Introduction

      In recent years, autonomous driving has become a popular topic in the research community, in industry, and even in the press. But beyond the fact that it is exciting and revolutionary, why should we deploy autonomous vehicles? One reason is that ridesharing with clean-energy autonomous vehicles could completely revolutionize the transportation industry by reducing pollution, easing traffic, improving safety, and making our economy more efficient.

      More specifically, start with pollution reduction: there are about 260 million cars in the US today. If we were to convert all of them to clean-energy cars, we would reduce annual carbon emissions by 800 million tons, which would account for 13.3% of the US commitment to the Paris Agreement [1]. In addition, with near-perfect scheduling, deploying ridesharing autonomous vehicles could reduce the number of cars by 75% [2]. Combined, these two changes have the potential to yield an annual reduction of 1 billion tons in carbon emissions, roughly 20% of the US commitment to the Paris Agreement.

      As for safety improvement, human drivers have a crash rate of 4.2 accidents per million miles (PMM), while the current autonomous vehicle crash rate is 3.2 crashes PMM [3]. As the safety of autonomous vehicles continues to improve, if the autonomous vehicle crash rate PMM can be made to drop below 1, some 30 000 lives could be saved annually in the US alone [4].

      Lastly, consider the impact on the economy. Each ton of carbon emission has around a $220 impact on the US GDP, which means that $220 B could be saved annually by converting all vehicles to ride-sharing clean-energy autonomous vehicles [5]. Also, since the average cost per crash is about $30 000 in the US, dropping the autonomous vehicle crash rate PMM below 1 would yield another annual cost reduction of $300 B [6]. Therefore, in the US alone, the universal adoption of ride-sharing clean-energy autonomous vehicles could save as much as $520 B annually, an amount comparable to the GDP of Sweden.
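As a back-of-the-envelope check, the savings figures above can be reproduced with a few lines of arithmetic. The annual US vehicle-miles-traveled figure (~3.2 trillion) is an outside assumption, not from the text:

```python
# Reproduce the chapter's annual savings estimates.
TONS_CO2_SAVED = 1.0e9        # tons/year from clean energy + ridesharing (Section 1.1)
COST_PER_TON = 220            # USD impact on US GDP per ton of CO2 [5]
ANNUAL_MILES = 3.2e12         # assumed US vehicle miles traveled per year (not from the text)
HUMAN_CRASH_RATE_PMM = 4.2    # human crashes per million miles [3]
TARGET_CRASH_RATE_PMM = 1.0   # target autonomous crash rate per million miles
COST_PER_CRASH = 30_000       # average USD cost per crash [6]

carbon_savings = TONS_CO2_SAVED * COST_PER_TON  # ≈ $220 B
crashes_avoided = (HUMAN_CRASH_RATE_PMM - TARGET_CRASH_RATE_PMM) * ANNUAL_MILES / 1e6
crash_savings = crashes_avoided * COST_PER_CRASH  # ≈ $300 B under the mileage assumption

print(f"Carbon savings: ${carbon_savings / 1e9:.0f} B/year")
print(f"Crash savings:  ${crash_savings / 1e9:.0f} B/year")
```

Under the assumed mileage, the crash-cost arithmetic lands close to the chapter's $300 B figure, and the two terms together give roughly the quoted $520 B.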

      1.2.1 Sensing

      The typical sensors used in autonomous driving include Global Navigation Satellite System (GNSS) receivers, Light Detection and Ranging (LiDAR), cameras, radar, and sonar.

      GNSS receivers, especially those with real-time kinematic (RTK) capabilities, help autonomous vehicles localize themselves by updating global positions with at least meter-level accuracy. A high-end GNSS receiver for autonomous driving can cost well over $10 000.

      LiDAR is normally used for the creation of HD maps, real-time localization, and obstacle avoidance. LiDAR works by bouncing a laser beam off surfaces and measuring the reflection time to determine distance. LiDAR units suffer from two problems: first, they are extremely expensive (an autonomous-driving-grade LiDAR can cost over $80 000); second, they may not provide accurate measurements under bad weather conditions, such as heavy rain or fog.
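The ranging principle behind LiDAR is simple time-of-flight arithmetic: distance is the round-trip time of the reflected pulse multiplied by the speed of light, halved. A minimal sketch (the pulse time used is purely illustrative):

```python
# Time-of-flight ranging: distance = speed * round_trip_time / 2.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def lidar_range(round_trip_seconds: float) -> float:
    """Distance in meters to the surface that reflected the pulse."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A return pulse arriving ~667 ns after emission corresponds to a surface ~100 m away.
print(f"{lidar_range(667e-9):.1f} m")
```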

      Radar and sonar: The radar and sonar subsystems are used as the last line of defense in obstacle avoidance. The data generated by radar and sonar give the distance to the nearest object in the vehicle's path. Note that a major advantage of radar is that it works under all weather conditions. Sonar usually covers a range of 0–10 m whereas radar covers a range of 3–150 m. Combined, these sensors cost less than $5000.
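A last-line-of-defense check of this kind might be sketched as follows. The range envelopes come from the text, but the fusion policy (take the nearest in-range reading) is purely an illustrative assumption, not the book's method:

```python
# Illustrative fusion of radar and sonar range readings for emergency
# obstacle detection. Readings outside each sensor's envelope are discarded.
from typing import Optional

SONAR_MAX_M = 10.0   # sonar effective range: 0-10 m
RADAR_MIN_M = 3.0    # radar effective range: 3-150 m
RADAR_MAX_M = 150.0

def nearest_obstacle(sonar_m: Optional[float], radar_m: Optional[float]) -> Optional[float]:
    """Return the distance to the nearest credible obstacle ahead, or None."""
    readings = []
    if sonar_m is not None and 0.0 <= sonar_m <= SONAR_MAX_M:
        readings.append(sonar_m)
    if radar_m is not None and RADAR_MIN_M <= radar_m <= RADAR_MAX_M:
        readings.append(radar_m)
    return min(readings) if readings else None

# e.g. sonar sees a wall at 2 m while radar reports nothing in range:
print(nearest_obstacle(2.0, None))  # 2.0
```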

      1.2.2 HD Map Creation and Maintenance

      Traditional digital maps are usually generated from satellite imagery and have meter-level accuracy. Although this accuracy is sufficient for human drivers, autonomous vehicles demand maps with higher accuracy for lane-level information. Therefore, HD maps are needed for autonomous driving.

      Just as with traditional digital maps, HD maps have many layers of information. At the bottom layer, instead of using satellite imagery, a grid map is generated from raw LiDAR data, with a grid granularity of about 5 cm by 5 cm. Each cell of the grid records the elevation and reflection information of the environment. As an autonomous vehicle moves and collects new LiDAR scans, it performs self-localization by comparing the new scans in real time against the grid map, with initial position estimates provided by GNSS [8].
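The localization step described above can be sketched as a brute-force search over candidate poses near the GNSS estimate, scoring each by how well the scan's reflectivity agrees with the stored grid. This is an illustrative toy, not the book's implementation: the function names and the reflectivity-difference score are assumptions, and rotation is omitted for brevity:

```python
# Toy grid-map localization: search x/y offsets around a GNSS estimate
# and keep the pose whose scan best matches the map's reflectivity.
import numpy as np

CELL = 0.05  # 5 cm x 5 cm grid cells, as described in the text

def localize(grid_map, scan, gnss_xy, search_m=1.0):
    """Return the (x, y) pose that best aligns the scan with the map.

    grid_map: 2D array of per-cell reflectivity values.
    scan: (N, 3) array of (x, y, reflectivity) points in the vehicle frame.
    gnss_xy: initial (x, y) position estimate from GNSS.
    Searches +/- search_m around the GNSS estimate (rotation omitted).
    """
    steps = int(search_m / CELL)
    best_score, best_xy = -np.inf, gnss_xy
    for di in range(-steps, steps + 1):
        for dj in range(-steps, steps + 1):
            # Project scan points into grid cells under this candidate pose.
            i = np.rint((scan[:, 0] + gnss_xy[0]) / CELL + di).astype(int)
            j = np.rint((scan[:, 1] + gnss_xy[1]) / CELL + dj).astype(int)
            ok = (i >= 0) & (i < grid_map.shape[0]) & (j >= 0) & (j < grid_map.shape[1])
            if not ok.any():
                continue
            # Score a pose by agreement between scan and map reflectivity.
            score = -np.abs(grid_map[i[ok], j[ok]] - scan[ok, 2]).mean()
            if score > best_score:
                best_score = score
                best_xy = (gnss_xy[0] + di * CELL, gnss_xy[1] + dj * CELL)
    return best_xy
```

A production system would replace the exhaustive search with a matching method such as particle filtering or scan matching against the map, but the structure — GNSS gives a coarse prior, LiDAR-to-grid comparison refines it — is the same.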

      On top of the grid layer, there are several layers of semantic information. For instance, lane information is added