Capturing point cloud data is an interesting pursuit; post-production to clean up the scanned assets, however, is often the opposite and can be quite tedious depending on your use case. Continuing our LiDAR journey, today we look at ways to make things a little more interesting in this space. How so? Easy: introduce some VR!
What are we looking at here? I don’t have access to a £40k laser scanner…
First off, we’ll briefly look at the capture process, using LiDAR point cloud data (a series of vertices with photo scan data) obtained with any recent iOS gadget carrying the ‘Pro’ moniker; hopefully the Android crowd follow suit. It should be noted that LiDAR is not a prerequisite for the VR aspects to follow: there are other methods of obtaining 3D data, such as photogrammetry, or traditional point cloud data with its eye-watering number of data points gathered by one of these beauties. We might give laser scan data a go in a follow-up blog; the process will be a little different with such dense data (optimisation is the word!) and we’re excited to find out.
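As a taste of what that optimisation might involve, here’s a minimal Python sketch using the open-source Open3D library to thin out a dense point cloud export; the file name and voxel size are placeholders rather than anything from our workflow:

```python
# Minimal sketch: thin out a dense point cloud before doing anything else with it.
# Assumes the open-source Open3D library (pip install open3d) and a .PLY export
# named "scan.ply" -- both are illustrative placeholders.
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")
print(f"Original points: {len(pcd.points):,}")

# Voxel downsampling keeps one representative point per 2 cm cube,
# which tames the eye-watering point counts of laser-scan-density data
downsampled = pcd.voxel_down_sample(voxel_size=0.02)
print(f"Downsampled points: {len(downsampled.points):,}")

o3d.io.write_point_cloud("scan_downsampled.ply", downsampled)
```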
Quick Comparison
A large LiDAR scene (such as the bathing machine below) will consist of roughly 1.7 million vertices and 1,000 image scans totalling around 1.8 GB of data, compared with traditional laser scans that run into several hundred gigabytes.
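For a rough sense of where that 1.8 GB goes, a quick back-of-envelope sum (assuming 32-bit floats per vertex position, which is my assumption rather than anything the apps document) shows the geometry itself is a tiny slice; the photo scans account for nearly all of it:

```python
# Back-of-envelope: how much of a 1.8 GB LiDAR capture is raw geometry?
# Assumes x, y, z stored as 32-bit floats per vertex -- an illustrative assumption.
vertices = 1_700_000
bytes_per_vertex = 3 * 4                       # three float32 coordinates

geometry_mb = vertices * bytes_per_vertex / 1024**2
print(f"Vertex positions: ~{geometry_mb:.0f} MB")          # roughly 20 MB

image_share = 1 - (geometry_mb / 1024) / 1.8
print(f"Image scans and the rest: ~{image_share:.0%}")     # roughly 99%
```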
Polycam & 3D Scanner App
There are a dozen apps out there that can capture LiDAR data. Our previous blog looked at apps for scanning as-built information to create floor plans. Two apps which seem to do the job well for 3D objects are Polycam, a paid application at around £8 per month, and 3D Scanner App, which is free. Both are very intuitive and offer various export formats such as .PLY and .OBJ.
Anecdotally, my experience comparing the two apps has been mixed. On the odd occasion with 3D Scanner App, particularly with larger scenes, I have noticed 3D data getting overwritten or doubled up when making more than a single pass around an object, resulting in unwanted artifacts, which is frustrating. Polycam, on the other hand, does a great job of removing shadows and toning down overly saturated images, which is ideal at later stages. I would recommend giving both apps a go to see what works for your situation, as results differ with each scenario; see the images below for an idea. Beginner tip: larger planes are ideal when capturing information; you’ll have a hard time picking up smaller objects, say around 1 inch (25 mm).




With your data captured and processed, export your model as .OBJ, a widely supported 3D format; we’ll be using this next. As a side note, even with no further post-production, these scans make great background objects in 3D scenes, such as for Architectural Visualisation, adding a touch of realism – just don’t get up too close!
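If you want to sanity-check the export before heading into VR, a few lines of Python with the trimesh library will do it; this is just a sketch, and “scan.obj” is a placeholder file name:

```python
# Quick sanity check of the exported .OBJ before taking it further.
# Uses the trimesh library (pip install trimesh); "scan.obj" is a placeholder.
import trimesh

# force="mesh" merges multi-material OBJ exports into a single mesh
mesh = trimesh.load("scan.obj", force="mesh")

print(f"Vertices: {len(mesh.vertices):,}")
print(f"Faces:    {len(mesh.faces):,}")
print(f"Extents:  {mesh.bounding_box.extents}")   # rough real-world dimensions
print(f"Watertight: {mesh.is_watertight}")        # scans almost never are
```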
The Fun Begins
Oculus Medium: The Cleanup Crew
Oculus (Adobe) Medium is a free sculpting tool for Oculus headsets. It’s a whole suite of tools that are particularly good for cleaning and patching up 3D models, which is why I decided to throw an ungodly mess of a scan at it to see what it could do: the scene was large and full of artifacts that needed removing – a tricky, lengthy process on 2D screens.
It certainly beats sitting with a mouse and keyboard, and it’s great to be “working” whilst being active. Minus the exaggerated movements, I imagine this is what being a dental hygienist feels like: you can get into every tricky crevice, and that’s where it beats a desktop workflow. Once you’re done, you can export out the way you came in: OBJ or FBX.
Gravity Sketch: 3D Model Creation
It may be that your point cloud data is too messy or complex and it’s better to start afresh. In this case, Gravity Sketch can help: it’s a free 3D design tool for creation, collaboration and review.
What’s particularly great in this instance are Gravity Sketch’s powerful modelling tools. In the video shown, I’ve imported the OBJ scan and locked it in place as a reference, recreating the model in its entirety with new 3D components. As with Oculus Medium, this can then be exported for post-production elsewhere, such as in Blender or Substance Painter, to give it texture, lighting and so on.
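For reference, that Blender hand-off can be scripted as well. Purely as a sketch, assuming Blender 3.x and a placeholder file path (Blender 4.x renamed the operator to bpy.ops.wm.obj_import):

```python
# Minimal sketch: pull the Gravity Sketch / Medium OBJ export into Blender
# for texturing and lighting. Run from Blender's scripting workspace.
# Assumes Blender 3.x; the path is a placeholder.
import bpy

bpy.ops.import_scene.obj(filepath="/path/to/export.obj")

# Freshly imported objects come in selected -- list them as a quick check
for obj in bpy.context.selected_objects:
    print(obj.name, obj.type, len(obj.data.vertices) if obj.type == 'MESH' else "")
```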
The benefit here is that you have clean 3D components to work with; I can only imagine the possibilities for things such as historical recreation.
What are your experiences with VR apps and tools? Do you think they will replace desktop workflows? Let us know in the comments below and thanks for reading!