All projects were made using Unity 3D and C#.
The goal of this prototype was to explore 3D scene reconstruction and gesture understanding on mobile. The research results and the prototype were then handed over to the REFLEKT Remote product team - Remote is a remote collaboration app with AR support - for future product feature integration.
In the first part of the video, you can see the user moving around the scene to gather enough feature points for the point cloud. An invisible mesh is then constructed from it. When the user draws an annotation on the screen, the gesture is recognized and translated into the corresponding shape (point, arrow, or circle), and the resulting 3D annotation is projected onto the mesh.
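The projection step above can be sketched in Unity as a raycast from the camera through the drawn screen point onto the invisible reconstructed mesh. This is a minimal sketch, not the actual implementation: it assumes the mesh carries a MeshCollider, and the prefab fields and the GestureShape enum are hypothetical names.

```csharp
using UnityEngine;

public enum GestureShape { Point, Arrow, Circle }

// Sketch only: places a recognized annotation shape on the invisible
// reconstructed mesh. Assumes the mesh has a MeshCollider attached;
// the prefab fields are hypothetical placeholders.
public class AnnotationPlacer : MonoBehaviour
{
    public GameObject pointPrefab;   // prefabs for the three
    public GameObject arrowPrefab;   // recognized shapes
    public GameObject circlePrefab;

    // Called once a drawn gesture has been classified.
    public void PlaceAnnotation(Vector2 screenPoint, GestureShape shape)
    {
        Ray ray = Camera.main.ScreenPointToRay(screenPoint);
        if (Physics.Raycast(ray, out RaycastHit hit))
        {
            GameObject prefab = shape switch
            {
                GestureShape.Arrow  => arrowPrefab,
                GestureShape.Circle => circlePrefab,
                _                   => pointPrefab,
            };
            // Orient the annotation along the mesh surface normal,
            // so it sits flat on the reconstructed geometry.
            Instantiate(prefab, hit.point, Quaternion.LookRotation(hit.normal));
        }
    }
}
```

Raycasting against the collider rather than the raw point cloud keeps the annotation stable even as the cloud continues to grow.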
Not part of this video: another user on a computer can see what the phone user sees, and can annotate from the computer. All annotations are reflected in augmented reality in the mobile user's environment.
Remotehololens2 (1)-1 Trim-1.m4v
Conception and implementation of a HoloLens version of REFLEKT Remote, a remote collaboration app for mobile with AR support. The user can sign in to their REFLEKT Remote account and see a list of people they can call. When a HoloLens user calls, the peer on a computer or mobile device can see what the HoloLens user sees.
This is unfortunately not visible in this screen recording, as HoloLens does not allow sharing the device's camera while recording the screen.
Ar Annotations Prototype Into Remote Hl Compressed-1.m4v
This video shows the perspective of a computer user calling someone on HoloLens. Clicking on the screen creates a spatial AR annotation, visible to both users.
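One way the click-to-annotation flow could work on the HoloLens side: the desktop peer sends the click in normalized viewport coordinates, and the HoloLens unprojects it through its own camera and raycasts into the spatial mesh. A minimal sketch, assuming a messaging channel exists; the SendToPeers method and the prefab are hypothetical.

```csharp
using UnityEngine;

// Sketch only: turns a remote desktop click into a shared 3D annotation.
// The networking layer (SendToPeers) is a hypothetical placeholder.
public class RemoteAnnotationReceiver : MonoBehaviour
{
    public GameObject annotationPrefab; // hypothetical marker prefab

    // Called when the desktop peer sends a click in normalized
    // viewport coordinates (0..1 in both axes).
    public void OnRemoteClick(Vector2 viewportPoint)
    {
        Ray ray = Camera.main.ViewportPointToRay(viewportPoint);
        if (Physics.Raycast(ray, out RaycastHit hit))
        {
            var annotation = Instantiate(
                annotationPrefab, hit.point,
                Quaternion.LookRotation(hit.normal));

            // Broadcast the resulting world pose so every peer
            // renders the same spatial anchor.
            SendToPeers(annotation.transform.position,
                        annotation.transform.rotation);
        }
    }

    void SendToPeers(Vector3 position, Quaternion rotation)
    {
        // Placeholder for the app's real-time messaging channel.
    }
}
```

Sending normalized viewport coordinates rather than pixel coordinates keeps the mapping independent of the two devices' screen resolutions.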
Some of our customers didn't have a 3D model of the real-world object they wanted to track in augmented reality. The idea of this prototype is to let them do basic tracking, and therefore overlay augmented reality information, on objects for which a 3D model is missing.
The user takes a photo of the side of the object, and a flat 3D model is generated from it. This model can later be used to initialize model tracking. After initialization, tracking during the session is handled by SLAM-based frameworks such as ARKit and ARCore.
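The flat-model generation step could be as simple as mapping the captured photo onto a quad of the object's real-world dimensions. A minimal Unity sketch under that assumption; the class name and the idea that the physical size comes from user input are mine, not from the actual prototype.

```csharp
using UnityEngine;

// Sketch only: builds a flat, textured stand-in "model" from a single
// photo of the object's side, to be used later for model-tracking
// initialization. widthMeters/heightMeters would come from user input
// or a known object size.
public static class FlatModelBuilder
{
    public static GameObject FromPhoto(Texture2D photo,
                                       float widthMeters,
                                       float heightMeters)
    {
        GameObject quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
        quad.name = "FlatTrackingModel";

        // Scale the unit quad to the object's physical dimensions,
        // so the tracked pose comes out at real-world scale.
        quad.transform.localScale = new Vector3(widthMeters, heightMeters, 1f);

        // Unlit material so scene lighting doesn't distort the
        // reference image used for initialization.
        var material = new Material(Shader.Find("Unlit/Texture"));
        material.mainTexture = photo;
        quad.GetComponent<MeshRenderer>().material = material;
        return quad;
    }
}
```

Once the pose is initialized against this quad, the session can be handed over to the SLAM tracking of ARKit or ARCore, as described above.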