Autonomous Car Literature Review


Table of contents 

  • Abstract
  • 1. Introduction
  • 2. History of Self Driving Cars
    • 2.1. The First Attempts to Automate Vehicle Control
    • 2.2. The First Self Driving Car
    • 2.3. DARPA Grand Challenge
    • 2.4. DARPA Urban Grand Challenge
    • 2.5. The First Self Driving Car Approved For Road Traffic
  • 3. Solutions and Algorithms
    • 3.1. Camera Image for Environment Recognition
      • 3.1.1. Traffic Lane Detection
      • 3.1.2. Semantic Image Segmentation and Object Recognition
      • 3.1.3. Visual SLAM (Image based environment mapping)
    • 3.2. Using Sensors for Environment Recognition
      • 3.2.1. Sensor Data Based on Environment Mapping (SLAM)
    • 3.3. Sensor Data Fusion
      • 3.3.1. Detection, Classification and Tracking of Moving Objects in a 3D Environment (DATMO)
    • 3.4. Decision Making and Planning
      • 3.4.1. DDM (Driving Decision Making)
      • 3.4.2. Mission Planning (Route planning)
      • 3.4.3. Control of the Vehicle Movement
      • 3.4.4. Artificial Intelligence for Prediction
  • 4. Conclusion
  • 5. REFERENCES

Abstract


In recent years, autonomous vehicles, which can travel without human intervention, have become the most advanced application of intelligent systems and can already perform almost all driving tasks successfully in controlled environments. The road and traffic situation around an autonomous vehicle is modelled with the help of communication systems and sensors, and the motion of the vehicle is then planned by applying various techniques and algorithms [1].
Autonomous vehicles are equipped with cameras, sensors and communication systems that enable them to produce enormous amounts of data, which help the vehicle see, hear and think while driving. According to Gartner, by 2020, 250 million cars will be connected to each other and to the surrounding infrastructure through various communication systems.

In this project we aim to add an emergency-vehicle priority awareness feature to autonomous cars. When an emergency vehicle approaches, all other traffic must stop or move to the right to allow it to pass. However, today's autonomous cars do not have features to recognize emergency vehicles.

In order to solve this problem, we will use various sensors and cameras. First, the car will check all lanes and, if a lane to its right is available, move into it in order to balance the traffic across the lanes. With audio sensors the vehicle will recognise sirens, and with cameras it will check that the emergency vehicle is behind the car rather than on the opposite side of the road. After the emergency vehicle has passed, the car will return to its previous lane.

1. Introduction


The most remarkable vehicle technology today is undoubtedly the autonomous vehicle. This technology, which will shape the future, is being actively developed and deployed around the world. In this part of our report we give brief background information about autonomous vehicle technology.
Autonomous vehicles are automobiles that can move without a driver: their control systems detect the road, the traffic flow and surrounding objects and act on that information without any human intervention. These vehicles perceive their environment using technologies and techniques such as radar, lidar, GPS, odometry and computer vision.
In brief, autonomous driving works as follows: ultrasonic sensors near the wheels detect the positions of braking or parked vehicles, data from a wide range of sensors are analysed by a central computer system, and actions such as steering, braking and acceleration are carried out accordingly. This is only the beginning. As computer technology becomes more accessible and cheaper, the future of driverless cars becomes more achievable. Though still in its infancy, self-driving technology is becoming increasingly common and could radically transform our transportation system (and by extension, our economy and society).


Figure 1 Shows Sensor Technologies on the Autonomous Car.

Self-driving cars are classified into six levels of driving automation, from Level 0 to Level 5.

Level 0- At Level Zero there is no automation; all tasks are performed by the driver.

Level 1- At Level One the vehicle has driver assistance, but a driver is still needed for the primary tasks such as accelerating, braking and monitoring. The vehicle can assist with simple functions, for example braking a little harder when you get too close to another car on the road.

Level 2- Level Two, also known as Partial Automation, means that steering and acceleration functions are assisted by the vehicle. Drivers are disengaged from some of their tasks but must be ready to take control of the vehicle in emergency situations.

Level 3- At Level Three (Conditional Automation), the most important improvement is that the vehicle uses sensors (lidar, radar etc.) to monitor its entire environment. A driver is still needed, but for safety-critical functions the vehicle can respond by itself.

Level 4- At Level Four (High Automation), in addition to the features of the lower levels, the vehicle can decide when to change lanes, turn and use its signals. However, it cannot judge traffic congestion or merge onto the highway by itself.

Level 5- Level Five is Complete Automation. There is no need for a human driver to control any task of the vehicle, so there is no steering wheel, pedals or brakes; all tasks are handled by the vehicle itself.

With increasing competition among firms in the automotive sector and the rising number of traffic accidents caused by driver error, investment in this field has increased and artificial intelligence has entered it. Prototype autonomous vehicles are slowly being released into traffic, and sales are scheduled to begin in 2020.

DRIVE AGX Pegasus – NVIDIA announced DRIVE AGX Pegasus, an energy-efficient, high-performance AI computer aimed at Level Five driverless cars. The company's software expertise makes it stand out.

Waymo – Waymo, formerly known as the Google Car, is a small, cute prototype autonomous vehicle. Its exact release date and cost are still unknown. The clearest information about the vehicle is that it is completely autonomous. It is said to be integrated with the Google Maps application, which works better than most navigation systems.

Tesla – Tesla is the first brand that comes to mind when it comes to electric cars, and it is now considered one of the leading brands in autonomous driving technology.

2. History of Self Driving Cars


With improvements in computer hardware and software over time, the development of self-driving vehicles has accelerated rapidly.

2.1. The First Attempts to Automate Vehicle Control


1925, Francis Houdina demonstrated the radio-controlled “American Wonder” for the first time. The American Wonder was a 1926 Chandler carrying a transmitting antenna on its tonneau, and it was operated by a second car that followed it and emitted radio pulses picked up by that antenna. The antenna fed the signals into circuit breakers, which operated small electric motors to guide each movement of the car.

1939, General Motors introduced the idea of autonomous vehicle design for the first time at the “FUTURAMA” exhibition at the New York World’s Fair.

1953, RCA Labs successfully manufactured a miniature car that was guided and controlled by wires that were laid on the laboratory floor [2]. 

1956, The General Motors Firebird II was equipped with receivers for detector circuits embedded in roadways.

1979, The Stanford Cart was able to move indoors without human intervention using an image processing algorithm known as the Cart’s vision algorithm. This algorithm was inspired by the Blocks World planning method and consisted of reducing the image to its edges, but it proved unsuitable for outdoor use with many complex shapes and colours [3].

2.2. The First Self Driving Car


In 1984, the first truly autonomous and self-sufficient vehicle was created within Carnegie Mellon University’s Navlab and ALV (Autonomous Land Vehicle) projects. The vehicle could travel on roads at 31 kilometres per hour. In 1986 an obstacle-avoidance mechanism was added, and by 1987 the vehicle could operate in both day and night conditions.

2.3. DARPA Grand Challenge


On March 13, 2004, in the Mojave Desert region of the United States, DARPA (Defense Advanced Research Projects Agency) organized a prize competition for American autonomous vehicles, named the DARPA Grand Challenge, to accelerate the development of autonomous driving technologies that could be applied to military needs. None of the vehicles could complete the difficult desert route, so the $1 million prize went unclaimed.

The second Grand Challenge, with a $2 million prize, was held in the desert Southwest near the California–Nevada state line on October 8, 2005. In total, five vehicles successfully completed the challenge.

2.4. DARPA Urban Grand Challenge


On November 3, 2007, at the site of the Southern California Logistics Airport in California, the third competition in the DARPA Grand Challenge series was organized, again with a $2 million prize. The winner was Tartan Racing, a collaborative effort by Carnegie Mellon University and General Motors Corporation, with their vehicle “Boss”, a heavily modified Chevrolet Tahoe [4]. Boss used perception, planning and behavioural software to understand traffic conditions and decide on the way to the destination. The vehicle was equipped with various lasers, cameras and radars that allowed its route to be planned while taking static and dynamic obstacles and moving objects into account. This information was combined with algorithms used to find and recognize environmental features, lanes, parking limits, waypoints and more. In addition, algorithms were applied to detect dangerous behaviour by other drivers. The most important features of the technology developed by the Tartan Racing team are:

  • Driving in accordance with the road traffic rules (considering the priority rules of crossing intersections)
  • Detection and tracking of other vehicles over long distances
  • Finding parking spaces and parking itself
  • Maintaining a safe distance to vehicles ahead
  • Reacting to non-standard events (e.g. traffic jams) [5]

2.5. The First Self Driving Car Approved for Road Traffic


The Tesla Model S is an all-electric five-door liftback sedan produced by Tesla, Inc. and introduced on June 22, 2012. As of April 23, 2019, the Model S Long Range has an EPA range of 370 miles (600 km), higher than any other battery electric car [6]. It also has an autonomous driving option with software called Autopilot, which offers the second level of automation. In addition, it has features such as Enhanced Summon, which allows the car to drive through a parking lot to find you without any help from a driver, and Sentry Mode, which senses and records any suspicious activity around the car.
This is a summary of the history of autonomous vehicles from past to present. New companies are joining the sector every day, and some software is intended to be open source so that it can contribute to the sector.

3. Solutions and Algorithms


This part of the report covers the basic digital image processing algorithms, decision algorithms and information storage used in autonomous vehicle systems. Developing fully autonomous vehicle software requires a great deal of knowledge and a variety of proven solutions, since numerous events may occur on the road, the images are highly diverse, and traffic rules differ between countries. Almost any analysis or image processing algorithm can be used in autonomous vehicle systems, but the choice of algorithms is determined by the type of sensors used and limited by the capabilities of the devices.

3.1. Camera Image for Environment Recognition


Cameras are the eyes of autonomous vehicles, used for recognition of traffic lanes, identification of objects and image-based environment mapping. In addition to cameras, radar sensors (insensitive to weather, dust and pollution) and more expensive laser sensors (lidars) are used [7].

3.1.1. Traffic Lane Detection


The lines on the road indicate to the driver where the lane is and serve as a criterion for determining which direction the vehicle should travel and how the vehicle interacts harmoniously with other road users. Identifying lanes in an image involves finding lines that converge at a single point.


Figure 2 Shows line recognition of Self Driving Car.

The VP (Vanishing Point) method for finding a road in a desert, proposed by C. Rasmussen [8], consists of calculating the dominant orientations in image segments (image resolution 640 x 480 pixels divided into 72 segments), estimating the position of the point of convergence and tracking that point in subsequent image frames. The Rasmussen method, developed to recognize the road in a desert area, today plays an important role in detecting road lanes in an image [9] and forms the basis for more advanced algorithms [10].

Among the many approaches to lane tracking, the Hough transform and the spatial CNN are the most popular.

Hough transform 

The Hough transform is a technique which can be used to isolate features of a particular shape within an image. Because it requires that the desired features be specified in some parametric form, the classical Hough transform is most commonly used for the detection of regular curves such as lines, circles, ellipses, etc. [11]


Figure 3 Shows Hough Transform on a given frame.
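To make the idea concrete, a minimal lane-detection sketch using OpenCV's probabilistic Hough transform could look as follows; the input file name, region-of-interest polygon and all thresholds are illustrative assumptions, not values taken from the cited works.

    import cv2
    import numpy as np

    # Hypothetical input: a single dash-cam frame.
    frame = cv2.imread("road_frame.jpg")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)

    # Keep only a rough trapezoidal region of interest in front of the car.
    h, w = edges.shape
    mask = np.zeros_like(edges)
    roi = np.array([[(0, h), (w // 2 - 50, h // 2 + 50),
                     (w // 2 + 50, h // 2 + 50), (w, h)]], dtype=np.int32)
    cv2.fillPoly(mask, roi, 255)
    edges = cv2.bitwise_and(edges, mask)

    # Probabilistic Hough transform: detect straight line segments.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=100)
    if lines is not None:
        for x1, y1, x2, y2 in lines.reshape(-1, 4):
            cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 3)
    cv2.imwrite("road_frame_lanes.jpg", frame)

In practice the detected segments are usually grouped into a left and a right lane boundary and averaged over several frames to reduce jitter.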

CNN and Spatial CNN 

Although convolutional neural networks (CNNs) have proven to be effective methods for identifying low-level image features (e.g., edges, colour gradients) as well as deeper complex features and entities (e.g., object recognition), they often have difficulty representing the “poses” of these features and entities (a CNN is well suited for extracting semantics from raw pixels, but does not perform well when capturing spatial relationships such as rotation and translation relationships of pixels in a frame). In a traditional layer-by-layer CNN, each convolutional layer receives input from its previous layer, performs convolution and nonlinear activation, and then sends the output to the next layer. Spatial convolutional neural networks (SCNN) take this one step further by treating the rows and columns of each feature map as “layers”, applying the same process in turn and thereby allowing pixel information to pass messages between neurons in the same layer.


Figure 4 Shows how Spatial CNN works for Traffic Scene Understanding.
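The slice-by-slice message passing described above can be sketched in a few lines of PyTorch. The snippet below is a simplified, assumed illustration of only the downward pass (the published SCNN also passes messages upward and sideways, and the channel count and kernel width here are arbitrary choices):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SpatialMessagePassing(nn.Module):
        # Each row of the feature map receives a convolved, activated copy of
        # the row above it, so lane evidence can propagate vertically.
        def __init__(self, channels, kernel_w=9):
            super().__init__()
            self.conv = nn.Conv1d(channels, channels, kernel_w, padding=kernel_w // 2)

        def forward(self, x):                            # x: (N, C, H, W)
            rows = list(x.split(1, dim=2))               # H slices of shape (N, C, 1, W)
            for i in range(1, len(rows)):
                msg = self.conv(rows[i - 1].squeeze(2))  # message from the previous row
                rows[i] = rows[i] + F.relu(msg).unsqueeze(2)
            return torch.cat(rows, dim=2)

    feat = torch.randn(1, 128, 36, 100)                  # dummy backbone feature map
    print(SpatialMessagePassing(128)(feat).shape)        # torch.Size([1, 128, 36, 100])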

SCNN is relatively new, published in 2018, but it has already outperformed the likes of ReNet (RNN), MRFNet (MRF+CNN) and much deeper ResNet architectures, and it placed first in the TuSimple Benchmark Lane Detection Challenge with 96.53% accuracy [12].


Figure 5 Shows accuracy rates among architectures.

3.1.2. Semantic Image Segmentation and Object Recognition


Segmentation is critical for image analysis tasks. Semantic segmentation is the process of associating each pixel of an image with a category label; it detects objects of specific categories, such as people, road signs and cars, in complex images.


Figure 6 Semantic Segmentation
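As a hedged illustration of such per-pixel labelling, the sketch below uses a pre-trained DeepLabV3 model from torchvision (an assumed stand-in, not a network from the cited sources; the input file name is hypothetical):

    import torch
    import torchvision
    from torchvision import transforms
    from PIL import Image

    # Pre-trained DeepLabV3 segmentation network (classes include person, car, bus).
    model = torchvision.models.segmentation.deeplabv3_resnet50(pretrained=True).eval()

    preprocess = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    image = Image.open("street_scene.jpg").convert("RGB")
    with torch.no_grad():
        logits = model(preprocess(image).unsqueeze(0))["out"]   # (1, classes, H, W)

    labels = logits.argmax(dim=1)   # per-pixel class index, i.e. the segmentation map
    print(labels.shape, labels.unique())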

R-CNN 
R-CNN is a deep-learning object detection algorithm based on CNNs. Building on it, Faster R-CNN provides faster object detection, and Mask R-CNN adds object instance segmentation (image segmentation).

SSD (Single Shot MultiBox Detector) 
SSD is a popular algorithm for object detection and is faster than R-CNN.

YOLO (You Only Look Once) 
YOLO is an object detection algorithm that has recently become very popular, mainly because it is much faster than other detection algorithms. It can be trained to detect any object you want.

TensorFlow Object Detection API 
The TensorFlow Object Detection API simplifies object detection by providing pre-trained detection models, so objects can be detected with little extra work.
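As an example of how little code such pre-trained detectors require, the sketch below uses a Faster R-CNN model pre-trained on COCO from torchvision (an assumed stand-in for whichever detector a real system would use; the image file name and score threshold are also assumptions):

    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    # Detector pre-trained on COCO (classes include car, person, traffic light).
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    model.eval()

    image = Image.open("street_scene.jpg").convert("RGB")
    with torch.no_grad():
        pred = model([to_tensor(image)])[0]

    # Keep only confident detections.
    for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
        if score > 0.6:
            print(label.item(), round(score.item(), 2), [round(v, 1) for v in box.tolist()])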


Figure 7 Object Detection on Self Driving Car

3.1.3. Visual SLAM (Image based environment mapping)


Simultaneous Localization and Mapping (SLAM) can accurately determine the location of the platform relative to its environment. Visual SLAM is a specific type of SLAM system in which the trajectory of the platform and the locations of all landmarks are estimated online without any prior knowledge of the location. Because such an algorithm must identify and track landmarks in every frame of the camera image, the computing-power requirements placed on the device are very strict.
The framework is mainly composed of three modules as follows:

  1. Initialization
  2. Tracking
  3. Mapping 

Depending on the purpose of the application, the following two additional modules are also included in visual SLAM algorithms:

  1. Re-localization
  2. Global map optimization 

Some vSLAM Algorithms:

  • EKF-SLAM (EKF – Extended Kalman Filter)
  • FastSLAM algorithm which uses RBPFs (Rao-Blackwellised Particle Filters)
  • RGB-D SLAM which utilizes image and image depth (Kinect) [13].
  • ORB-SLAM and ORB-SLAM2 for single images, stereo images (stereoscopic vision) and RGB-D cameras [14]. The algorithm utilizes the ORB (Oriented FAST and Rotated BRIEF) descriptor [15].
  • LSD-SLAM, an algorithm that generates depth maps from individual image frames without using landmarks [16].
  • L-SLAM – an algorithm that reduces the dimensionality of FastSLAM algorithms [17]. 
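To illustrate the landmark tracking that the feature-based methods above (e.g. ORB-SLAM) rely on, the sketch below extracts and matches ORB features between two consecutive camera frames with OpenCV. The frame file names and feature count are assumptions, and a full SLAM system would go on to estimate camera pose and landmark positions from such matches:

    import cv2

    # Two consecutive camera frames (hypothetical file names).
    img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Brute-force Hamming matching, since ORB descriptors are binary strings.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    print(f"{len(matches)} tentative landmark correspondences between frames")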

3.2. Using Sensors for Environment Recognition


Autonomous vehicles use three main types of sensor: cameras, radar and lidar. Using these sensors together gives the car a picture of the surrounding environment and helps it determine the speed, distance and three-dimensional shape of nearby objects. In addition, a sensor called an inertial measurement unit helps track the acceleration and position of the vehicle.

Radar (Radio Detection and Ranging) sensors have long been used in the automotive industry to determine the speed, range and direction of movement of objects. Radar sensors complement camera vision in low-visibility conditions such as night driving and improve the detection capabilities of autonomous vehicles.

By emitting invisible laser pulses at a very high rate, LIDAR (Light Detection and Ranging) sensors build detailed 3D images from the reflected signals. These signals form a “point cloud” that represents the environment around the vehicle and enhances the safety and diversity of the sensor data. LIDAR gives driverless cars a 360-degree 3D view of their surroundings, the shape and depth of the surrounding cars and pedestrians, and the geography of the road. Moreover, like radar, it works well in low-light conditions.


Figure 8 Point Cloud with LIDAR
Less advanced and relatively inexpensive devices are also used, such as ultrasonic sensors mounted on, for example, a car bumper (parking sensors), which measure the distance to an obstacle.

3.2.1. Sensor Data Based on Environment Mapping (SLAM)


SLAM typically uses several different sensor types, and the functionality and limitations of the various sensor types have been the main drivers of new algorithms.
SLAM algorithms can be classified by sensor type:

Radar:

  • The offline GraphSLAM algorithm [18], based on constructing graphs and occupancy-grid mapping of the environment [19].
  • Graph-Based SLAM based on constructing graphs and finding node configurations of minimal error [20].
  • Real-Time Radar SLAM, which utilizes FastSLAM and GraphSLAM, and which is performed within 45 milliseconds (Intel Core i7-3830K) [21].

LIDAR:

  • ML-SLAM (ML – Maximum Likelihood) based on maximum likelihood estimation [22].
  • Credibilist SLAM (or C-SLAM) [23] based on TBM (Transferable Belief Model) [24].
  • ICP-SLAM [25], which utilizes the ICP (Iterative Closest Point) method of registering three-dimensional shapes [26] (a minimal ICP sketch is given at the end of this subsection).
  • Google’s Cartographer SLAM, which utilizes, in addition to a lidar, IMU (Inertial Measurement Unit) and images from cameras [27], [28].

The LOAM algorithm (Lidar Odometry And Mapping), which consists of using distance measurements made by a biaxial lidar moving in six degrees of freedom, is also used for environment mapping [29].
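For reference, the ICP registration mentioned above can be sketched as follows; this is a deliberately minimal point-to-point variant on synthetic data, not the implementation used in any of the cited systems:

    import numpy as np
    from scipy.spatial import cKDTree

    def icp(source, target, iterations=20):
        # Minimal point-to-point ICP: repeatedly match each source point to its
        # nearest target point and solve for the rigid transform (R, t) that
        # best aligns the matched pairs via the SVD closed-form solution.
        src = source.copy()
        tree = cKDTree(target)
        R_total, t_total = np.eye(source.shape[1]), np.zeros(source.shape[1])
        for _ in range(iterations):
            _, idx = tree.query(src)                 # nearest-neighbour correspondences
            matched = target[idx]
            mu_s, mu_t = src.mean(0), matched.mean(0)
            H = (src - mu_s).T @ (matched - mu_t)    # cross-covariance matrix
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                 # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = mu_t - R @ mu_s
            src = src @ R.T + t
            R_total, t_total = R @ R_total, R @ t_total + t
        return R_total, t_total

    # Synthetic example: a 2-D scan rotated by 10 degrees and shifted.
    scan = np.random.rand(200, 2)
    angle = np.deg2rad(10)
    R_true = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]])
    moved = scan @ R_true.T + np.array([0.3, -0.1])
    R_est, t_est = icp(scan, moved)
    print(R_est, t_est)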

3.3. Sensor Data Fusion


Sensor data fusion combines data delivered from multiple sensor sources and yields better accuracy than using those sources individually.


Figure 9 Multiple Sensor Perception System

For merging multiple sensor data, the Kalman filter algorithm (one of the most popular fusion algorithms) has been used since it was invented in 1960 by Rudolf Kalman. This algorithm, also known as linear quadratic estimation (LQE), uses a series of measurements of a dynamic system observed over time and handles noise and other inaccuracies by estimating a joint probability distribution over the unknown variables at each time step, producing more accurate results. For example, multiple sensors can report different distance values for the same object; the Kalman filter allows us to estimate the real distance with better accuracy.


Figure 10 Kalman Filter Circle
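The following toy sketch shows the predict/update cycle of Figure 10 for a single state variable, e.g. smoothing noisy distance readings to the car ahead; the noise variances and measurements are made up for illustration:

    import numpy as np

    def kalman_1d(measurements, process_var=1e-3, meas_var=0.5):
        # Minimal 1-D Kalman filter: fuse a stream of noisy readings into a
        # smoother estimate by weighting each reading with the Kalman gain.
        x, p = measurements[0], 1.0          # initial state estimate and variance
        estimates = []
        for z in measurements:
            p += process_var                 # predict: uncertainty grows over time
            k = p / (p + meas_var)           # Kalman gain
            x += k * (z - x)                 # update with the new measurement
            p *= (1 - k)
            estimates.append(x)
        return estimates

    # Hypothetical noisy readings (metres) of the distance to the car ahead.
    readings = 20 + np.random.normal(0, 0.7, size=50)
    print(round(kalman_1d(readings)[-1], 2))

A full sensor-fusion filter works the same way, but with vector-valued states (position, velocity, heading) and matrix-valued gains and covariances.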

3.3.1. Detection, Classification and Tracking of Moving Objects in a 3D Environment (DATMO)


Obtaining reliable perception of the surrounding environment is the most important step in building an intelligent vehicle. This step is usually divided into two sub-tasks: simultaneous localization and mapping (SLAM) and detection and tracking of moving objects (DATMO).
The purpose of SLAM is to provide the vehicle with a map consisting of the static entities of the environment, while DATMO uses that map to detect and track dynamic entities [30].


Figure 11 Experimental results showing the detection, classification and tracking of moving objects

3.4. Decision Making and Planning


In the beginning, self-driving vehicles were mostly semi-autonomous because their functions were typically limited to lane tracking, adaptive cruise control and a few other basic features. A wider range of capabilities was demonstrated in the 2007 DARPA Urban Challenge (DUC) [31]. Although the performance of self-driving vehicles was still far below the level of human drivers, this event showed the applicability of autonomous driving in urban environments and highlighted significant research challenges for autonomous driving.

3.4.1. DDM (Driving Decision Making)


The driving decision-making mechanism (DDM) is identified as the key technology for ensuring the safe driving of autonomous vehicles, and it is primarily affected by vehicle status and road conditions. Using its sensors, the autonomous vehicle detects and collects traffic information, including vehicle conditions and road conditions, in real time and feeds it into a data processing program that produces the input variables of the DDM.
Based on these input variables, the DDM searches for relevant information, matches the appropriate driving decisions against its learned experience, and then transmits the decision to the control system. This learned experience refers to the driving decision-making rules in the DDM, which are obtained by learning from a large amount of actual driving experience. The control system then operates the control actuators (including the steering system, pedals and automatic gear shifting).


Figure 12 Schematic architecture of the driving decision-making process of autonomous vehicle. DDM: Driving Decision-making Mechanism.

Methods used for decision making include:

  • Decision trees [32],[33] and diagrams [34].
  • Partially Observable Markov Decision Processes (POMDP) [35].
  • Machine learning: 
    • Support Vector Machine Regression (SVR) with Particle Swarm Optimization (PSO)[36].
    • Markov Decision Processes (MDP) with Reinforcement Learning (RL) [37] (see the toy sketch after this list).
    • Deep Reinforcement Learning (DRL) [38].
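As a toy illustration of the reinforcement-learning approach referenced above, the sketch below learns a tabular Q-function for a two-state, two-action decision. The states, actions, rewards and transition model are all invented for this project's emergency-vehicle scenario and carry no claim about the cited methods:

    import random

    # Invented toy problem: keep the lane or move right when an emergency
    # vehicle is approaching from behind.
    states = ["clear", "emergency_behind"]
    actions = ["keep_lane", "change_right"]
    reward = {("clear", "keep_lane"): 1.0, ("clear", "change_right"): -0.2,
              ("emergency_behind", "keep_lane"): -1.0,
              ("emergency_behind", "change_right"): 1.0}

    Q = {(s, a): 0.0 for s in states for a in actions}
    alpha, gamma, epsilon = 0.1, 0.9, 0.1

    for _ in range(5000):
        s = random.choice(states)
        a = random.choice(actions) if random.random() < epsilon \
            else max(actions, key=lambda act: Q[(s, act)])
        r = reward[(s, a)]
        s_next = random.choice(states)       # toy transition model
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s_next, b)] for b in actions) - Q[(s, a)])

    print({k: round(v, 2) for k, v in Q.items()})

After training, the learned Q-values favour keeping the lane when the road is clear and changing right when an emergency vehicle is behind.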

3.4.2. Mission Planning (Route planning)


The route that the autonomous vehicle takes is called the route plan. In order to locate the vehicle and determine its route on the map, GPS sensors and geographic location data are applied first. Additionally, there are solutions that use maps based on image semantics and 3D lidar data.

Route planning is typically carried out with search algorithms, lattice planning and reinforcement learning.

A* (A-star search algorithm) - one of the algorithms used to find the shortest path.
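A minimal A* sketch on a small occupancy grid is shown below; the grid, start and goal are invented, and the Manhattan distance is used as the admissible heuristic:

    import heapq

    def a_star(grid, start, goal):
        # A* over a 4-connected grid (0 = free, 1 = obstacle). Each queue entry
        # stores (f = g + h, g, node, path); the Manhattan heuristic never
        # overestimates, so the first goal expansion yields a shortest path.
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
        open_set = [(h(start), 0, start, [start])]
        visited = set()
        while open_set:
            _, g, node, path = heapq.heappop(open_set)
            if node == goal:
                return path
            if node in visited:
                continue
            visited.add(node)
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                r, c = node[0] + dr, node[1] + dc
                if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                    heapq.heappush(open_set,
                                   (g + 1 + h((r, c)), g + 1, (r, c), path + [(r, c)]))
        return None

    grid = [[0, 0, 0, 0],
            [1, 1, 0, 1],
            [0, 0, 0, 0]]
    print(a_star(grid, (0, 0), (2, 0)))   # shortest path around the obstacle row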

Lattice planning - builds a path by repeatedly connecting a fixed set of possible path segments. It is suitable for constrained environments but can cause difficulties in evasive manoeuvres.

Reinforcement learning - a behaviourally inspired machine learning approach that deals with the actions an agent must take to obtain the highest reward in its environment.

The behaviour planner is responsible for making decisions to ensure that the vehicle follows traffic rules and interacts with other agents in a routine, safe manner, while gradually progressing along the route provided by the mission planner.

3.4.3. Control of the Vehicle Movement


The control section sets the steering, speed and braking state of the vehicle.

Classical Control Methods: 

Nonlinear control includes tracking the trajectory of the vehicle's motion and maintaining that trajectory by steering the vehicle along a specified route.

A controller structure commonly used in many applications is feedback-feedforward control. This structure can reduce the negative effects of parameter changes, modelling errors and unwanted disturbances. Additionally, it can shape the transient behaviour of the system and the effects of measurement noise.

The most widely used feedback control method is PID (Proportional-Integral-Derivative) control. Steering control is based on information from the lane tracking system. Driving results can be optimized by feeding the output of the lane tracking system into other neural networks.
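The sketch below shows the basic PID law applied to steering, where the error is taken to be the lateral offset from the lane centre reported by the lane-tracking system; the gains and the sample error value are arbitrary illustrative choices:

    class PID:
        # Minimal PID controller: output = Kp*e + Ki*integral(e) + Kd*de/dt.
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, error, dt):
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    pid = PID(kp=0.8, ki=0.05, kd=0.2)
    cross_track_error = 0.4                  # metres left of the lane centre (made up)
    steering_cmd = pid.step(cross_track_error, dt=0.05)
    print(round(steering_cmd, 3))            # steering command toward the lane centre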

A model of the system can also be used in the control application. The control method that uses a system model to optimize over a forward time horizon is called Model Predictive Control (MPC) in the literature.

Parallel autonomy systems provide additional safety by taking over driver duties in dangerous situations. A parallel autonomy system can also take over the operation of the vehicle at the driver's request.

3.4.4. Artificial Intelligence for Prediction


In the prediction step, the vehicle predicts the behaviour of other vehicles or people in its environment, estimating their direction and speed. In this way, an autonomous vehicle can react to different events in advance. For this purpose, recurrent neural networks (RNNs) are used.

Recurrent neural networks – Convolutional neural networks (CNNs) process information in individual image frames regardless of the information in previous frames. The RNN architecture, by contrast, supports memory, so it can benefit from past observations when calculating future forecasts. Therefore, an RNN provides a natural way to predict the next step.
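A minimal sketch of such a predictor in PyTorch is given below: a GRU reads the last few observed (x, y) positions of a nearby vehicle and outputs a guess for the next position. All shapes and data here are synthetic; a real predictor would be trained on recorded trajectories:

    import torch
    import torch.nn as nn

    class TrajectoryRNN(nn.Module):
        def __init__(self, hidden=32):
            super().__init__()
            self.rnn = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 2)

        def forward(self, seq):                  # seq: (batch, time, 2) past positions
            out, _ = self.rnn(seq)
            return self.head(out[:, -1])         # predicted next (x, y)

    model = TrajectoryRNN()
    past = torch.randn(4, 10, 2)                 # 4 vehicles, 10 past positions each
    print(model(past).shape)                     # torch.Size([4, 2])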

4. Conclusion


Self-driving cars are one of the most important and interesting technologies of the future. These cars use various kinds of sensors, analytical algorithms and AI to perceive their environment and perform specific tasks (object recognition, mission planning, decision making and control).
With today's technology, fully autonomous cars have not yet been achieved, and big companies are investing heavily in order to develop them continuously. Autonomous cars still face open problems such as ethical issues (the trolley dilemma: is it wrong to kill one person to save five?) and the question of what the rules of a future driver's licence test for such a driver would be.
This literature review aimed to look deeper into self-driving car algorithms and systems, so it does not provide a list of the best algorithms for self-driving cars, because the performance of the algorithms depends on:

  1. The computing power of the device, which affects the speed at which the algorithm executes.
  2. The type and quality of the sensors, which depend on the size and purpose of the vehicle.

We aim to add an emergency-vehicle priority awareness feature to autonomous cars. In our project, we plan to use artificial intelligence, machine learning and image processing methods, and to test the results in a simulation environment. The autonomous driving simulator that we will use needs to let us simulate sensors such as lidar, GPS and radar and to provide realistic sensor outputs; with these outputs, and by trying out possible traffic scenarios, we will improve the software we develop.

Based on these outputs from the sensors and cameras, our primary targets are: a lane tracking system using the Hough transform and spatial CNN; identifying objects and traffic signs with object detection and semantic image segmentation methods; and detecting an approaching emergency vehicle with audio recognition, then determining its direction and geographical position from camera data. The system will estimate the possible movements of surrounding vehicles with data fusion algorithms and use machine learning algorithms for planning and the control mechanism.

5. REFERENCES

[1] https://dergipark.org.tr/tr/download/article-file/614340 
[2] https://en.wikipedia.org/wiki/Self-driving_car 
[3] http://cyberneticzoo.com/cyberneticanimals/1960-stanford-cart-american/ 
[4] https://en.wikipedia.org/wiki/DARPA_Grand_Challenge 
[5] Carnegie Mellon University: Tartan Racing Technology. http://www.cs.cmu.edu/~tartanrace/tech.html [Retrieved: 30.08.2018]. 
[6] https://en.wikipedia.org/wiki/Tesla_Model_S 
[7] Hirz M., Walzel B.: Sensor and object recognition technologies for self-driving cars. Computer-Aided Design and Application. 2018. 
[8] Rasmussen C.: Grouping dominant orientations for ill-structured road following. International Conference on Computer Vision and Pattern Recognition. IEEE, 2004. 
[9] Miksik O.: Rapid Vanishing Point Estimation for General Road Detection. Proceedings – IEEE International Conference on Robotics and Automation. IEEE, 2012. 
[10] Moon Y. Y., Geem Z. W., Han G. T.: Vanishing point detection for self-driving car using harmony search algorithms. Swarm and Evolutionary Computation. 2018. 
[11] http://homepages.inf.ed.ac.uk/rbf/HIPR2/hough.html 
[12] https://towardsdatascience.com/tutorial-build-a-lane-detector-679fd8953132 
[13] Endres F., Hess J., Engelhard N.: An evaluation of the RGB-D SLAM system. International Conference on Robotics and Automation. IEEE, 2012. 
[14] Mur-Artal E., Tardos J.D.: ORB-SLAM2: an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras. Transactions on Robotics PP (99). IEEE, 2016 
[15] Rublee E., Rabaud V., Konolige K., Bradski G.: ORB: an efficient alternative to SIFT or SURF. http://www.willowgarage.com/sites/default/files/orb_final.pdf 
[16] Engel J., Shöps T., Cremers D.: LSD-SLAM: Large-Scale Direct Monocular SLAM. Computer Vision. ECCV, 2014. 
[17] Petridis V., Zikos N.: L-SLAM: Reduced dimensionality FastSLAM algorithms. The 2010 International Joint Conference on Neural Networks (IJCNN). 2010. 
[18] Thrun S., Montemerlo M.: The GraphSLAM Algorithm with Applications to Large-Scale Mapping of Urban Structures. The International Journal of Robotics Research. 2006. 
[19] Collins T., Collins J. J., Ryan C.: Occupancy grid mapping: An empirical evaluation. Control & Automation. IEEE, 2007. 
[20] Shuster F., Keller C., Rapp M., Haueis M., Curio C.: Landmark based radar slam using graph optimization. Intelligent Transportation Systems (ITSC). IEEE, 2016 
[21] Shuster F., Keller C., Rapp M., Haueis M., Curio C.: Landmark based radar slam using graph optimization. Intelligent Transportation Systems (ITSC). IEEE, 2016 
[22] Thrun S., Burgard W., Fox D.: A real-time algorithm for mobile robot mapping with applications to multi-robot and 3D mapping. International Conference on Robotics and Automation. IEEE, 2000 
[23] Trehard C., Pollard E., Bradai B., Nashashibi F.: Credibilist SLAM Performances with Different Laser Set-ups. 13th International Conference on Control, Automation, Robotics and Vision (ICARCV). 2014. 
[24] Smets P., Kennes R.: The transferable belief model. Artificial Intelligence (66), No. 2. Elsevier, 1994. 
[25] Mendes E., Koch P., Lacroix S.: ICP-based pose-graph SLAM. International Symposium on Safety, Security, and Rescue Robotics (SSRR). IEEE, 2016. 
[26] Besl P. J., McKay H. D.: A method for registration of 3-D shapes, IEEE PAMI (14), nr 2. IEEE, 1992 
[27] Datta A.: Google releases SLAM tool Cartographer to open source community. Geospatial World, 2016.https://www.geospatialworld.net/blogs/google-open-sources-slam-tool-cartographer 
[28] Nuchter A., Bleier M., Schauer J., Janotta P.: Improving Google’s Cartographer 3D Mapping by Continuous-Time SLAM. https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLII-2-W3/543/2017/isprs-archives-XLII-2-W3-543-2017.pdf 
[29] Zhang J., Singh S.: LOAM: Lidar Odometry and Mapping in Real-time. Conference: Robotics: Science and Systems Conference. 2014. 
[30] https://lig-membres.imag.fr/aycard/html/Publications/2012/Azeem12.pdf 
[31] Buehler M., Iagnemma K., Singh S.: The DARPA Urban Challenge: Autonomous Vehicles in City Traffic. Springer: Berlin, Germany, 2009; Volume 56. 
[32] Olsson M.: Behavior Trees for decision-making in Autonomous Driving. 2016. http://www.diva-portal.org/smash/get/diva2:907048/FULLTEXT01.pdf 
[33] Hu M., Liao Y., Wang W.: Decision Tree-Based Maneuver Prediction for Driver Rear- End Risk-Avoidance Behaviors in Cut-In Scenarios. Journal of Advanced Transportation. 2017. 
[34] Claussmann L., Carvalho A., Schildbach G.: A path planner for autonomous driving on highways using a human mimicry approach with binary decision diagrams. European Control Conference. 2015. 
[35] Ulbrich S., Maurer M.: Probabilistic Online POMDP Decision Making for Lane Changes in Fully Automated Driving. 16th International IEEE Conference on Intelligent Transportation Systems (ITSC 2013). IEEE, 2013. 
[36] Zhang J., Liao Y., Wang S., Han J.: Study on Driving Decision-Making Mechanism of Autonomous Vehicle Based on an Optimized Support Vector Machine Regression. Applied Sciences. 2017. 
[37] Mueller M. A.: Reinforcement Learning: MDP Applied to Autonomous Navigation. 2017. http://aircconline.com/mlaij/V4N4/4417mlaij01.pdf 
[38] Legrand N.: Deep Reinforcement Learning for Autonomous Vehicle Control among Human Drivers. 2017. https://ai.vub.ac.be/sites/default/files/thesis_legrand.pdf 
Figure 1: https://medium.com/deep-learning-turkiye/otonom-araclardaki-derin-ogrenme-mantigi-nasil-calisir-9f0fb59ba0a5
Figure 2: https://towardsdatascience.com/tutorial-build-a-lane-detector-679fd8953132
Figure 3: http://homepages.inf.ed.ac.uk/rbf/HIPR2/hough.html 
Figure 4, 5: https://towardsdatascience.com/tutorial-build-a-lane-detector-679fd8953132
Figure 6: https://www.mathworks.com/help/vision/ug/getting-started-with-semantic-segmentation-using-deep-learning.html 
Figure 7: https://medium.com/deep-learning-turkiye/otonom-araclardaki-derin-ogrenme-mantigi-nasil-calisir-9f0fb59ba0a5
Figure 8: https://blogs.nvidia.com/blog/2019/04/15/how-does-a-self-driving-car-see/ 
Figure 9: https://www.researchgate.net/profile/Jelena_Kocic3/publication/329153240/figure/fig2/AS:696154882842627@1542987666684/Multiple-sensor-perception-system-28.ppm
Figure 10: https://medium.com/@wilburdes/sensor-fusion-algorithms-for-autonomous-driving-part-1-the-kalman-filter-and-extended-kalman-a4eab8a833dd 
Figure 11: https://lig-membres.imag.fr/aycard/html/Publications/2012/Azeem12.pdf
Figure 12: Zhang J., Liao Y., Wang S., Han J.: Study on Driving Decision-Making Mechanism of Autonomous Vehicle Based on an Optimized Support Vector Machine Regression [36].
