A Step toward an Autonomous Future
By Radovan Miucic, Technical Fellow/Team Lead, Changan US R&D Center, Inc.
There is a difference between autonomous and automated vehicles; the main differentiator is the degree of human intervention. A fully Autonomous Vehicle (AV) is classified as Level 4 or 5 under the SAE classification; such a vehicle depends entirely on the machine for all driving tasks. An automated vehicle employs technologies at Level 3 and below, where the driver remains ultimately responsible for controlling the vehicle.
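As a rough illustration, the distinction can be expressed as a simple lookup. The level descriptions below are paraphrased from the SAE J3016 taxonomy, and the helper function is hypothetical, not part of any standard:

```python
# Illustrative mapping of SAE J3016 driving-automation levels to the
# degree of human intervention (descriptions paraphrased, not the
# official standard text).
SAE_LEVELS = {
    0: "No automation: the human driver performs all driving tasks",
    1: "Driver assistance: a single assisted function, e.g. adaptive cruise",
    2: "Partial automation: combined steering/speed control, driver supervises",
    3: "Conditional automation: system drives, driver must take over on request",
    4: "High automation: no driver intervention within a defined domain",
    5: "Full automation: no driver intervention anywhere",
}

def is_fully_autonomous(level: int) -> bool:
    """Levels 4 and 5 leave all driving tasks to the machine."""
    return level >= 4

print(is_fully_autonomous(3))  # False: the driver is still the fallback
print(is_fully_autonomous(4))  # True
```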
Automated and autonomous vehicle development is reaching a peak on the hype curve. Some automated vehicles are already in production (such as GM's Super Cruise and Tesla's Autopilot), and most other major car companies have announced that their highly automated vehicles will be available within a few years or sooner. Much has been invested in startup companies that produce technology for autonomous vehicles. This article briefly covers some of the central, most promising technologies that enable innovation in autonomous vehicle development. These technology domains can be classified into three categories: sensing, planning, and controls. Machine learning and vehicle connectivity span and aid the functionality of all three.
Environmental sensing enables the car to see and understand other moving vehicles as well as pedestrians on the road. An autonomous vehicle may use many sensors, such as multiple cameras, radars, and LiDARs. Data from these sensors is fused to create an environmental model that becomes the input to the planning block. The vehicle plans its route according to the available driving path and the state of the surrounding vehicles. Understanding of the surroundings depends on the perception layer. For example, a camera's perception layer may use classical or deep learning methods to derive information about the remote vehicles ahead: pixel information from the image of the front car is used to predict its distance, angle, and intention.
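One simple way to fuse overlapping estimates of the same quantity, such as a camera range and a radar range for the same vehicle, is inverse-variance weighting. The sketch below is illustrative only (the sensor variances are made-up values, and a production stack would typically run a Kalman filter per tracked object):

```python
def fuse_estimates(measurements):
    """Inverse-variance weighted fusion of independent estimates of the
    same quantity. Each item is (value, variance); returns the fused
    (value, variance). The tighter (lower-variance) sensor dominates."""
    weights = [1.0 / var for _, var in measurements]
    fused_value = sum(v * w for (v, _), w in zip(measurements, weights)) / sum(weights)
    fused_variance = 1.0 / sum(weights)
    return fused_value, fused_variance

# Hypothetical readings for one remote vehicle:
camera = (42.0, 4.0)   # range in meters, variance in m^2 (noisier)
radar = (40.0, 1.0)    # radar range is tighter
value, var = fuse_estimates([camera, radar])
print(round(value, 2), round(var, 2))  # fused estimate leans toward the radar
```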
The performance of such a system depends on many factors, such as camera resolution, weather, and distance to the remote vehicle. Closer vehicles occupy more pixels in the camera image, so their information can be predicted more accurately; conversely, vehicles that are farther away occupy fewer pixels, and the information inferred about them is less accurate.
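The pixel/distance relationship described here follows from the pinhole camera model: an object's pixel height shrinks inversely with range, so range can be estimated as focal length × real height / pixel height. The numbers below are illustrative assumptions, not calibrated values:

```python
def estimate_range_m(focal_length_px: float, real_height_m: float,
                     pixel_height_px: float) -> float:
    """Pinhole-model range estimate: distance = f * H / h.
    Fewer pixels on target (small h) means a coarser estimate."""
    return focal_length_px * real_height_m / pixel_height_px

F = 1000.0   # assumed focal length, in pixels
H = 1.5      # assumed vehicle height, in meters

# A one-pixel measurement error matters far more for a distant vehicle:
near_err = estimate_range_m(F, H, 100.0) - estimate_range_m(F, H, 101.0)
far_err = estimate_range_m(F, H, 10.0) - estimate_range_m(F, H, 11.0)
print(round(near_err, 2), round(far_err, 2))  # 0.15 13.64
```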
When people talk about vehicle connectivity, they usually mean connectivity for infotainment. Yet there is another branch of vehicle connectivity that deals with traffic safety. It includes modalities such as Vehicle to Vehicle (V2V), Vehicle to Infrastructure (V2I), and Vehicle to Pedestrian (V2P); collectively, these communication modalities are referred to as Vehicle to Everything (V2X). Historically, the automotive industry has developed advanced cooperative safety applications using Dedicated Short Range Communication (DSRC) and, more recently, Cellular V2X (C-V2X, also known as LTE-V). V2X communications have great potential to increase awareness of surrounding vehicles and pedestrians, as well as to relay traffic light information to autonomous vehicles.

V2V is a direct communication system that requires no support from cellular infrastructure. On-board sensors provide information such as position, speed, and brake status, and each vehicle broadcasts its own state to other vehicles in the area. This information is the best available because the host vehicle does not need to infer it from raw sensor data: radar, for example, infers the speed of a remote vehicle from the Doppler effect, whereas in V2V communication the remote vehicle reports its speed directly in its broadcast messages. V2X approaches a "perfect" sensor as long as the positioning is accurate and a sufficient number of other vehicles, pedestrians, and roadside infrastructure objects are equipped with it. Recall that camera perception performance degrades with distance; the accuracy of information received over V2V does not. Moreover, because V2X is a radio-based technology, it works in non-line-of-sight situations such as blind intersections and around obstructing vehicles, where a traditional sensor such as a camera, which requires a direct line of sight, cannot.
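A V2V broadcast of the host vehicle's own state can be sketched as follows. The field names are loosely modeled on the SAE J2735 Basic Safety Message, but the JSON layout and UDP transport here are illustrative assumptions; real V2V uses DSRC or C-V2X radios with a standardized binary encoding:

```python
import json
import socket
import time

def make_bsm(vehicle_id, lat, lon, speed_mps, brake_active):
    """Illustrative safety message: the host reports its own state
    directly, so receivers need not infer it from raw sensor data."""
    return {
        "id": vehicle_id,
        "timestamp": time.time(),
        "latitude": lat,
        "longitude": lon,
        "speed_mps": speed_mps,
        "brake_active": brake_active,
    }

def broadcast(message, addr="255.255.255.255", port=47000):
    """Send one message over UDP (fire-and-forget, like a radio
    broadcast: no acknowledgment from receivers is expected)."""
    payload = json.dumps(message).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(payload, (addr, port))

bsm = make_bsm("veh-42", 42.33, -83.05, 13.4, brake_active=False)
broadcast(bsm, addr="127.0.0.1")  # loopback for demo; sent ~10x per second in practice
```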
Historically, V2X communication has been intended mainly to address traffic crashes for manually driven vehicles, but we believe it also has the potential to ease environmental perception for autonomous vehicles.
Car companies have been developing autonomous vehicles with the help of their suppliers or acquired companies; for example, GM works with Cruise Automation and Ford with Argo. Each AV project has been advanced, more or less, in a bubble, so there is a great deal of duplicated effort: some of the same challenges have been tackled independently by multiple companies. It also means that future AVs will behave differently from one another on the road. Clearly, AV standards need to be established. While it is certainly understood that the AV business is highly competitive, a precompetitive consortium within the auto industry might be a more efficient way to reach the goal of a fully autonomous vehicle. Automakers should put aside competitive differences and work collaboratively for the good of the industry, for consumer safety, and for ease of government regulation.
Legacy Transformation with API Led Approach
Ashish Kalve, Director of Software Development, American Tower
From Big to Small Data
Driss Temsamani, Head of Digital Channels and Enterprise Banking Services, Latin America, Citi