Autonomous Driving

Leishen Intelligent System Co., Ltd.


What is Autonomous Driving all about?

Just as robots perform prescribed tasks without human intervention, autonomous driving refers to vehicles and transport systems that operate without a human driver.

They are a systematic combination of advanced sensor technologies, intelligent control systems, and intelligent actuators.

In 2014, SAE International (formerly the Society of Automotive Engineers) published the J3016 standard, which defines the levels of driving automation up to fully autonomous vehicles. The levels range from Level 0 (no automation) to Level 5 (full vehicle autonomy).
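As a quick reference, the six J3016 levels can be represented as a simple enumeration. The sketch below is in Python; the class and helper names are illustrative, not part of the standard or of any library.

```python
from enum import IntEnum

# Illustrative encoding of the SAE J3016 driving-automation levels.
class SAELevel(IntEnum):
    NO_AUTOMATION = 0          # Driver performs the entire driving task
    DRIVER_ASSISTANCE = 1      # A single assist feature, e.g. adaptive cruise control
    PARTIAL_AUTOMATION = 2     # Combined steering and speed control; driver supervises
    CONDITIONAL_AUTOMATION = 3 # System drives in limited conditions; driver takes over on request
    HIGH_AUTOMATION = 4        # No intervention needed within the system's operational domain
    FULL_AUTOMATION = 5        # Drives everywhere, under all conditions

def driver_must_supervise(level: SAELevel) -> bool:
    """At Levels 0-2 the human driver must monitor the environment at all times."""
    return level <= SAELevel.PARTIAL_AUTOMATION

print(driver_must_supervise(SAELevel.PARTIAL_AUTOMATION))  # True
print(driver_must_supervise(SAELevel.HIGH_AUTOMATION))     # False
```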

Need for Autonomous Driving: Safety

From the 1950s to the present, vehicle safety features have evolved in stages:

1950 - 2000

Safety/Convenience Features: Cruise Control, Seat Belts, Anti-Lock Brakes

2000 - 2010

Advanced Safety Features: Electronic Stability Control, Blind Spot Detection, Forward Collision Warning, Lane Departure Warning

2010 - 2016

Advanced Driver Assistance Features: Rear-view Video Systems, Automatic Emergency Braking, Pedestrian Automatic Emergency Braking, Rear Automatic Emergency Braking, Rear Cross-Traffic Alert, Lane Centering Assist

2016 - 2025

Partially Automated Safety Features: Lane Keeping Assist, Adaptive Cruise Control, Traffic Jam Assist

2025+

Fully Automated Safety Features

How does Autonomous Driving work?

Sensing Part

The sensing part is equivalent to a human's eyes and ears: it perceives the environment and the vehicle itself through sensors such as in-vehicle cameras, LiDAR, and millimetre-wave radar, collects data from the surrounding environment, and transmits it to the decision-making layer.

Decision-making Part

The decision-making part is equivalent to a human's brain: it processes the received data in real time and outputs the corresponding operating instructions through the operating system, chip, and computing platform.

Execution End

The execution end is equivalent to a human's limbs: it carries out the received instructions on vehicle components such as the powertrain, steering, and lights.
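Together, these three parts form a sense-decide-act loop. The sketch below illustrates that loop; every type and function name in it is hypothetical, chosen only to mirror the architecture described above, not taken from any real autonomy stack.

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    camera_image: bytes   # raw frame from an in-vehicle camera
    lidar_points: list    # LiDAR point cloud, e.g. (x, y, z, intensity) tuples
    radar_tracks: list    # object tracks from millimetre-wave radar

@dataclass
class Command:
    steering_angle: float  # radians, positive = left
    throttle: float        # 0.0 .. 1.0
    brake: float           # 0.0 .. 1.0

def sense() -> SensorFrame:
    """Sensing part: collect data from cameras, LiDAR, and radar."""
    return SensorFrame(camera_image=b"", lidar_points=[], radar_tracks=[])

def decide(frame: SensorFrame) -> Command:
    """Decision-making part: fuse the sensor data and plan the next action."""
    obstacle_ahead = len(frame.lidar_points) > 0  # placeholder perception logic
    if obstacle_ahead:
        return Command(steering_angle=0.0, throttle=0.0, brake=1.0)
    return Command(steering_angle=0.0, throttle=0.3, brake=0.0)

def act(cmd: Command) -> None:
    """Execution end: forward the command to the powertrain, steering, and lights."""
    print(f"steer={cmd.steering_angle:+.2f}  throttle={cmd.throttle}  brake={cmd.brake}")

# One iteration of the loop; a real vehicle runs this many times per second.
act(decide(sense()))
```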

Machine Learning

Machine learning enables vehicles to make decisions based on what they have learned from past data; this is what makes self-driving vehicles possible. Learned models allow a vehicle to collect data on its surroundings from cameras and other sensors, interpret it, and decide what actions to take. In doing so, they help reduce accidents and save lives.
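As a toy illustration of decision-making from remembered experience, the sketch below chooses the action whose stored situation most closely matches the current one. Production systems use trained neural networks rather than a lookup table like this; the feature layout and action labels are invented for the example.

```python
import math

# Remembered situations: (distance_to_object_m, relative_speed_mps) -> action
memory = [
    ((50.0, 0.0), "cruise"),
    ((15.0, -5.0), "brake"),
    ((30.0, -2.0), "slow_down"),
]

def recall_action(situation):
    """1-nearest-neighbour lookup: pick the action of the closest stored situation."""
    _, action = min(memory, key=lambda pair: math.dist(pair[0], situation))
    return action

print(recall_action((18.0, -4.0)))  # -> "brake"
```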

Smart sensors in the perception layer are the “eyes” of an intelligent driving vehicle. The mainstream sensors currently used for environmental perception fall into four categories: cameras, millimetre-wave radar, ultrasonic radar, and LiDAR.

The differences between the sensors and the advantages of LiDAR

  • Cameras are less effective in backlight or complex lighting conditions
  • Millimetre-wave radar has poor recognition of static objects
  • Ultrasonic radar has a limited measurement distance and is susceptible to adverse weather conditions
  • LiDAR can detect most objects (including static ones), has a relatively long detection distance (0-300 m) and high accuracy (about 5 cm), can construct 3D models of the environment, and offers good real-time performance

Therefore, LiDAR is the core sensor for advancing smart driving to Level 3 and beyond.
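Most LiDARs measure distance by time of flight: the sensor emits a laser pulse and times its reflection, so the range is d = c·t/2. A minimal sketch, with illustrative numbers rather than the specification of any particular product:

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(round_trip_s: float) -> float:
    """Convert a measured round-trip pulse time to a one-way distance in metres."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0

# A pulse returning after 2 microseconds corresponds to roughly 300 m,
# the upper end of the detection range quoted above.
print(f"{tof_distance(2e-6):.1f} m")  # ~299.8 m
```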

Currently, no company is able to offer fully autonomous vehicles at scale. As the technology matures, autonomous driving could become a key driver of the economy, and over the next decade autonomous vehicles are expected to open up markets on a larger scale around the world. Contact us to know more.


