Fully automated driving remains the ultimate AI technology goal for the automotive industry. It is likely that years of development, innovation, and cultural change will be needed before this dream is realised. However, computer vision-based AI models are making a difference for many drivers right now.
One exciting AI application that promises to make the driving experience easier is in-cabin AI. Machine learning powered systems can recognize objects and human behaviour, making a range of use cases possible. In this blog we will show how computer vision AI is making driving safer and more convenient from inside the cabin.
Firstly, this blog will look at how in-cabin AI can make driving safer by monitoring driver behaviour. Secondly, we will show how object recognition can make travelling more user-friendly. These useful features are made possible by annotated training data. Finally, we will identify the ways in which data annotation tools can make a difference for developers.
Keeping drivers safe
Mistakes made by drivers cause a significant majority of road accidents. Tired, impaired or distracted drivers can hurt themselves as well as other road users and pedestrians. AI can be used to monitor drivers and give early warning of potential accidents. Machine learning allows in-cabin cameras to recognize movements and facial expressions that show a driver is impaired in some way.
For example, an in-cabin system can detect when a driver is falling asleep and trigger an alarm to wake them. The driver can then look for a safe place to rest before continuing their journey. These applications can also help logistics organisations train their employees, giving managers early notice of potentially dangerous recurring behaviour.
Machine learning models are trained with annotated video and images of in-car behaviour so that they can spot problems in real life. Human annotators add labels to each frame of video data using annotation tools. This includes tracing body movements with polylines, tracking pupil movement to show where a person is looking, and identifying the emotions a person is displaying.
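To make this concrete, here is a minimal sketch of what per-frame labels for driver monitoring might look like, and how they could be used to surface risky behaviour. The field names and values are hypothetical, not any specific tool's format.

```python
# Hypothetical per-frame labels an annotator might produce for
# driver-monitoring training data.
labels = [
    {"frame": 1, "gaze": "road", "emotion": "alert",
     "body_polyline": [(120, 80), (125, 140), (130, 200)]},  # spine trace
    {"frame": 2, "gaze": "down", "emotion": "drowsy",
     "body_polyline": [(118, 85), (126, 142), (131, 201)]},
]

def flag_risky_frames(labels):
    """Return frame numbers where the labelled behaviour suggests
    impairment (eyes off the road or a drowsy expression)."""
    return [f["frame"] for f in labels
            if f["gaze"] != "road" or f["emotion"] == "drowsy"]

print(flag_risky_frames(labels))  # [2]
```

A trained model would learn these judgements from many such labelled frames rather than from hand-written rules, but the labels themselves take this general shape.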
Finding in-cabin objects
Object recognition allows AI models to make driving more convenient for busy people. Accidentally leaving an object in your car is a common (and annoying) mistake. Lots of drivers waste time searching for keys or phones at home or at work when they should be looking in the car.
This experience can be exasperating, but leaving young children or pets unattended in a vehicle can be dangerous. AI applications can identify important objects and people and warn users if they are left behind when they leave the car.
Object recognition is made possible by semantic segmentation annotation. When performing semantic segmentation, annotators assign every pixel in a digital image or video frame to a class. This allows small objects, like phones and keys, to be highlighted and contextualised.
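A semantic segmentation label is just a grid of class IDs, one per pixel. The sketch below illustrates this with a tiny hypothetical frame; the class IDs and the 4x6 size are assumptions for illustration.

```python
import numpy as np

# Hypothetical class IDs for a tiny in-cabin frame.
BACKGROUND, SEAT, PHONE = 0, 1, 2

# Every pixel is assigned exactly one class ID.
mask = np.zeros((4, 6), dtype=np.uint8)   # all background to start
mask[1:4, 0:3] = SEAT                     # a seat region
mask[2, 4] = PHONE                        # a phone left behind

# Pixel-level labels let a model locate even very small objects.
phone_pixels = np.argwhere(mask == PHONE)
print(phone_pixels)  # [[2 4]]
```

Because every pixel carries a label, a model trained on such masks can pick out a phone that covers only a handful of pixels in the frame.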
Data labelling tools
The two AI applications discussed above are made possible by image and video annotation tools. AI developers need the right annotation tools to create precise training data. Keylabs is an annotation platform designed to streamline annotation with effective and unique features:
- Video annotation: Keylabs allows multiple annotators to work on the same video at the same time. When annotations are complete, Keylabs seamlessly merges the video segments to create exceptional training data fast.
- Interpolation: Keylabs features comprehensive interpolation options. Tracking objects from frame to frame makes the annotation process faster.
- Analytics: Keylabs gives managers more information with detailed analytics. This means that annotation tasks can be assigned to the workers best suited to them or performing best overall.
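The interpolation idea mentioned above can be illustrated with a simple sketch: an annotator labels a bounding box on two keyframes, and the frames in between are filled in automatically. Linear interpolation is a simplifying assumption here; production tools may use more sophisticated tracking.

```python
def interpolate_box(box_a, box_b, frame_a, frame_b, frame):
    """Linearly interpolate a bounding box (x, y, w, h) between two
    annotated keyframes. A simplified sketch of the technique."""
    t = (frame - frame_a) / (frame_b - frame_a)
    return tuple(a + t * (b - a) for a, b in zip(box_a, box_b))

# Annotator labels frame 0 and frame 10; frames 1-9 are filled in.
start = (100, 50, 40, 40)  # hypothetical box on frame 0
end = (160, 80, 40, 40)    # hypothetical box on frame 10
mid = interpolate_box(start, end, 0, 10, 5)
print(mid)  # (130.0, 65.0, 40.0, 40.0)
```

Instead of drawing a box on every frame, the annotator only labels keyframes and corrects the interpolated boxes where needed, which is why interpolation speeds up video annotation so much.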