Built on research in ML, computer vision, and 3D displays, Project Starline gives the effect of the other person sitting just across from you in a conversation.

Google has released a sample video (featured above) showing people interacting over video as if they were actually sitting together in the same room. One participant described the call as a mind-blowing experience, as if the other person were right in front of her. The current system is large: it is a whole booth, full of lights and multiple cameras, with a seat for the participant. Google relies heavily on custom-built hardware.
Project Starline combines research in computer vision, machine learning, real-time compression, and spatial audio.
The team at Google achieved a breakthrough in building a light field display system that gives a sense of volume and depth, eliminating the need for headsets or glasses to perceive the interaction in 3D.
The video conferencing tech uses a “light field display system”, which relies on an array of cameras and sensors to capture a person’s appearance, which is then rendered as a 3D model.

ML plays a major role in Project Starline: it processes the series of captured images and helps convert them into a 3D, holographic representation.
The image sequence serves as the primary data for 3D scene reconstruction using the structure-from-motion (SfM) technique.
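Google has not published the details of Starline's reconstruction pipeline, but the core geometric step behind any SfM-style system is triangulation: given the same point seen from two calibrated cameras, recover its 3D position. A minimal sketch of the standard DLT triangulation method, using toy camera matrices chosen purely for illustration:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Triangulate one 3D point from two views via the DLT method.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: (u, v) pixel coordinates of the same point in each view.
    """
    # Stack the four linear constraints x × (P X) = 0 from both views.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # convert from homogeneous coordinates

# Two toy cameras: one at the origin, one translated one unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known 3D point into both views, then recover it.
X_true = np.array([0.5, 0.2, 4.0])
h1 = P1 @ np.append(X_true, 1.0)
h2 = P2 @ np.append(X_true, 1.0)
x1, x2 = h1[:2] / h1[2], h2[:2] / h2[2]

print(np.allclose(triangulate(P1, P2, x1, x2), X_true))  # True
```

A real SfM pipeline repeats this over thousands of matched feature points while also estimating the camera poses themselves; this sketch shows only the geometric core.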

ML can be used to improve the quality of an image or to extract useful information from it. It is applied in fields like medical imaging, and it can even be used to hide data inside an image. ML techniques here focus on transforming images from one form to another, while computer vision systems help the computer understand what an image contains.
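As a concrete illustration of hiding data inside an image, the classical (non-ML) baseline is least-significant-bit (LSB) embedding: the lowest bit of each pixel is replaced with one bit of the secret message, changing the image imperceptibly. A minimal NumPy sketch:

```python
import numpy as np

def embed_lsb(image, message):
    """Hide a byte string in the least significant bits of a uint8 image."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = image.flatten()  # flatten() copies, so the input is untouched
    assert bits.size <= flat.size, "message too large for this image"
    # Clear each pixel's lowest bit, then write one message bit into it.
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_lsb(image, n_bytes):
    """Recover n_bytes previously hidden by embed_lsb."""
    bits = image.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
stego = embed_lsb(cover, b"starline")
print(extract_lsb(stego, 8))  # b'starline'
```

ML-based steganography learns an embedding like this end to end instead of using a fixed bit rule, but the goal is the same: the carrier image looks unchanged while the payload survives extraction.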
For more details about Google’s I/O event, refer to blog.google.io
For more details about Machine Learning (ML), visit www.guvi.com