Google recently released a video detailing its Soli radar sensing technology, which the company says will reduce gadget interruptions by interpreting subtle human signals. The Motion Sense feature on the Pixel 4 already uses the same technology.
Google has shared more information on its Soli radar technology for nonverbal interactions between humans and machines, such as waving your hand or tilting your head to communicate with a gadget. The technology combines motion sensors with algorithms, and the company is continuing to develop it.
In a new video, Google illustrates how the sensors and algorithms work, along with potential use cases. According to the company, the technology can help reduce device overload and make gadgets more useful and less obtrusive.
Gestures such as ‘approach and depart’ are handled by deep learning algorithms that assess whether someone is within a device’s ‘personal space’. According to Google, overlapping personal space is a strong indicator of whether individuals will interact or simply pass by.
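Google has not published how Soli models personal space, but the idea can be sketched in plain code. The following is a hypothetical illustration only, assuming a fixed personal-space radius and a sequence of radar range estimates; it is not Google's actual algorithm, and all names and thresholds here are invented for the example.

```python
# Hypothetical sketch (not Google's Soli code): treat "personal space"
# as a radius around the user and the device, and flag an interaction
# when radar-estimated distance implies the two spaces overlap.

PERSONAL_SPACE_RADIUS_M = 0.6  # assumed radius; Google publishes no figure


def spaces_overlap(distance_m: float,
                   radius_m: float = PERSONAL_SPACE_RADIUS_M) -> bool:
    """Two personal spaces overlap when the measured distance is
    smaller than the sum of the two radii."""
    return distance_m < 2 * radius_m


def classify_trajectory(distances_m: list[float]) -> str:
    """Label a sequence of range readings as 'approach', 'depart',
    or 'pass by', depending on whether the user ever enters the
    overlap zone and whether the distance is shrinking overall."""
    entered = any(spaces_overlap(d) for d in distances_m)
    closing = distances_m[-1] < distances_m[0]
    if entered and closing:
        return "approach"
    if entered:
        return "depart"
    return "pass by"


print(classify_trajectory([2.0, 1.4, 0.9]))  # → approach
print(classify_trajectory([0.9, 1.4, 2.0]))  # → depart
print(classify_trajectory([2.0, 1.9, 2.1]))  # → pass by
```

A real system would replace the fixed threshold with a learned model over richer radar features, but the overlap test captures the intuition Google describes: crossing into personal space is the signal that an interaction is likely.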
Actions such as ‘turning away/towards’ and ‘look’ are recognised by machine learning algorithms that can read more subtle body language. This technology can, for example, detect the angle at which your head is oriented and predict how likely you are to engage.
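The mapping from head angle to engagement can be pictured as a score that is highest when the head points straight at the device and falls off as it turns away. This is a minimal sketch under that assumption; the function name and the Gaussian falloff are illustrative choices, not Soli's actual model.

```python
import math

# Hypothetical illustration: an angle of 0 degrees means the head is
# oriented directly at the device; the engagement score decays smoothly
# as the head turns away. The 45-degree falloff is an assumed constant.


def engagement_likelihood(head_angle_deg: float,
                          falloff_deg: float = 45.0) -> float:
    """Return a score in [0, 1] with an assumed Gaussian falloff."""
    return math.exp(-((head_angle_deg / falloff_deg) ** 2))


print(round(engagement_likelihood(0.0), 2))   # facing the device → 1.0
print(round(engagement_likelihood(90.0), 2))  # looking away → 0.02
```

In practice such a score would be one input among many (distance, motion, gesture history) that the deep learning models combine before a device decides whether to respond.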
“As technology becomes more pervasive in our lives, we feel it’s only fair to start asking technology to take a few more signals from humans,” says Leonardo Giusti, ATAP’s head of design.
The Soli radar sensor was introduced in 2015 and has since been utilised in a variety of Google products. In the Pixel 4, Google used it for Motion Sense, which detects hand gestures and lets users pause music or alarms without touching their phones. The sensor is also used in the Sleep Sensing feature of the Nest Hub smart display, which tracks your sleep quality by sensing your breathing patterns and movement.