
Reality Capture for VR: are you meshing out?

Who’s had a go on a VR headset? While they were once the province of gaming enthusiasts, people at parties who wanted to experience motion sickness, and impossibly futuristic technology adverts, VR headsets and the accompanying technology are finally seeing mainstream adoption in the AEC world. Firms are finding that VR is an exciting and persuasive way of showing clients their designs; letting them step into the design adds the wow factor that makes approval that bit more likely, and lets firms show stakeholders the rationale behind certain design choices far better than drawings or printouts can. And now, forward-thinking firms are asking how they can weave another strand of data into their VR environment: reality capture.
After all, reality capture data is vital to many projects, showing how a design will interact with the real world – and as the built world increases in density, and refurbishments, extensions and alterations increase in popularity, reality capture will only become more valuable. Being able to show designs alongside as-built data in a VR environment can massively enhance the power of the VR presentation, and it opens up new use cases for project teams to work on the data in VR. It’s something that a lot of early adopters are already experimenting with, and initial results are promising – so in this blog I want to review some of the methods people are employing to get reality capture data into VR.
In the interests of full disclosure, I also want to discuss how PointFuse offers a new way of getting reality capture data into a VR environment that may be the best yet. It’s based on our integration with Unity, which enables users to stream our intelligent mesh models directly into Unity Reflect to view them alongside other data in VR.

Method 1: building your as-built with design elements

Some people use their point cloud data to recreate their as-built data in a CAD programme, and then stream that data into their VR environment. This method gives you a good deal of control over the final output, but there’s no getting round the fact that you’re doing a lot of work by hand, which is time-consuming – and therefore expensive. Given that one of the reasons to use reality capture in the first place is to reduce the need for manual work, this is a pretty counter-intuitive way of importing the data into VR, and it would likely only make sense if you’re not able to work with your scan data for some reason. Recreating scan data by hand also puts accuracy at risk – even the steadiest hand and keenest eye can miss details from the original scan. So, assuming you can make use of your scan data, let’s look at other methods of getting reality capture into VR.

Method 2: working with raw point cloud data

The simplest reality capture data, in many senses, is the original point cloud produced by 3D scanners. It’s certainly possible to stream this data into a VR environment, and in terms of the steps needed to get the data from the scanner into VR, it’s arguably the easiest option. However, the principal drawbacks are the same as they are any time you try to work with point clouds: the files are massive and the data is essentially unstructured. If you’ve scanned a whole building, for instance, the point cloud file can run to hundreds of gigabytes, making it almost impossible to handle on most computers (and quite probably overwhelming your poor VR environment). The lack of structure means there’s nothing to tell you (or your VR software) which parts of your point cloud are what. In fact, there’s nothing to even tell the software that a wall is a solid thing; as you get nearer to it, it will start to look like a bunch of points (because it is just a bunch of points) and you may even be able to see through it. All this can make point cloud data confusing to the non-expert when viewing it in VR; a bit like stepping into an impressionist painting when you were expecting a Lego model. If you really need to see all your as-built data exactly as it was scanned, with absolutely no alterations, then streaming the point cloud might be the right move (if you have a high-spec computer to handle the data). However, most people will look at other methods, such as meshes.
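
To give a sense of the scale problem, here’s a small illustrative sketch using the open-source Open3D library in Python – this isn’t part of any of the VR pipelines discussed here, and the file name is made up – showing why raw clouds usually get thinned out before anyone even tries to view them:

```python
# Illustrative only: Open3D is an open-source point cloud library, used here
# just to show the size problem. "building_scan.ply" is a made-up file name.
import open3d as o3d

pcd = o3d.io.read_point_cloud("building_scan.ply")
print(f"Raw scan: {len(pcd.points):,} points")

# Thin the cloud to one point per 5 cm voxel so a viewer stands a chance.
downsampled = pcd.voxel_down_sample(voxel_size=0.05)
print(f"Downsampled: {len(downsampled.points):,} points")

# Quick desktop preview; a VR viewer would have to ingest similar geometry.
o3d.visualization.draw_geometries([downsampled])
```

Even after aggressive downsampling like this, of course, the points remain just points – nothing tells the viewer which of them make up a wall, a floor or a desk.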

Method 3: creating a single mesh

Meshes are the method that most technically comfortable organisations will work with, as the file size is so much smaller than the corresponding point cloud. For those not in the know, “meshing” converts a point cloud into a series of triangles, essentially turning the point cloud into one giant, simple 3D model. This is convenient in some regards – but what you’ll often find is that the mesh contains everything in the point cloud, including people, objects, even animals that you might not want in your VR environment. For instance, if you’re trying to visualise an office with new furniture, a mesh model that includes the old furniture will really spoil the effect. Removing that extraneous data typically needs to be done at the point cloud stage, and therefore has to be done manually – you choose all the points that make up the table, person, or seagull, and delete them. That’s more painstaking work – and possibly error-prone, depending on how many hours you’ve been staring at the screen. Of course, the same is true for those streaming their original point cloud data as mentioned in method 2. On top of this, many firms are finding that they want richer information from their reality capture data. It’s not mandatory by any means, but it does enable firms to differentiate themselves by offering a far more versatile end product. Which brings us to what I believe is the most advanced way of importing reality capture data into VR…
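
If you’re curious what “meshing” looks like in practice, here’s a rough sketch – again using the open-source Open3D library purely as an illustration, with made-up file names, not the workflow of any particular vendor – that reconstructs a single triangle mesh from a point cloud. Note that everything in the cloud ends up in the one mesh, which is exactly the limitation described above:

```python
# Illustrative only: a basic point-cloud-to-mesh conversion with Open3D.
# The file names are made up; a production pipeline would do far more.
import open3d as o3d

pcd = o3d.io.read_point_cloud("building_scan.ply")

# Surface reconstruction needs normals, estimated from local neighbourhoods.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30)
)

# Poisson reconstruction turns the points into one big triangle mesh --
# including any furniture, people or seagulls that were in the scan.
mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)
print(f"Mesh: {len(mesh.triangles):,} triangles")

# Export in a format a game engine or VR toolchain can load.
o3d.io.write_triangle_mesh("building_mesh.obj", mesh)
```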

 

Method 4: creating an intelligent mesh model

Intelligent mesh models give you the file size reduction that meshing brings compared to point clouds, while giving you greater control over the data. Our solution, PointFuse, automatically classifies elements of the mesh so you can highlight, filter, or delete them as you wish – a handy way to get rid of data you don’t want to see, and background noise – while reducing file size by around 90%. So when it comes time to stream into your VR environment, you can decide exactly what your viewers will see, increasing the quality of your presentation (and its usefulness, for instance if you’re trying to demonstrate how new M&E components will work in an existing building). It’s incredibly simple to do – the intelligence of the meshing process means it doesn’t require years of experience working with scan data, unlike designing your own as-built elements in a CAD tool or manually editing point clouds before streaming them.

Intelligent meshes also open up a new world of opportunities in the form of metadata. This is where extra data is attached to your mesh model. That could be as simple as written notes and observations made during the scan, or as complex as API-enabled links to asset management databases giving you vital information about components you’ve scanned (a service schedule for a pump, for instance). You could even attach videos and imagery (imagine being able to show people a video of how an engine you’ve scanned works, for instance, as they look round it in VR). It might feel like we’re getting back into the realms of unrealistic technology adverts here, but it’s really not as far away as it may sound.
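
To make the idea concrete, here’s a deliberately simplified sketch of what a classified mesh with metadata might look like as a data structure. To be clear, this is not PointFuse’s actual API – the class names and fields below are invented purely for illustration – but it shows how classification lets you filter out unwanted elements and carry metadata through to the VR scene:

```python
# Hypothetical sketch only -- these class and field names are invented for
# illustration and are not PointFuse's API. The point is the data model:
# classified mesh elements that can be filtered and can carry metadata.
from dataclasses import dataclass, field

@dataclass
class MeshElement:
    name: str
    classification: str          # e.g. "wall", "floor", "furniture", "noise"
    triangle_count: int
    metadata: dict = field(default_factory=dict)   # notes, links, media

elements = [
    MeshElement("wall_01", "wall", 12_400),
    MeshElement("old_desk_03", "furniture", 3_100),
    MeshElement("pump_07", "mep", 5_800,
                metadata={"service_schedule": "https://example.com/pump-07"}),
]

# Drop the classes we don't want in the VR presentation, e.g. old furniture.
to_publish = [e for e in elements if e.classification not in {"furniture", "noise"}]

for element in to_publish:
    print(element.name, element.classification, element.metadata)
```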

PointFuse and Unity Reflect – an exciting evolution

As I mentioned above, PointFuse has just launched a new integration with Unity, making it possible to stream our intelligent meshes directly into Unity Reflect, a suite of products for creating immersive real-time 3D experiences, including in VR and AR. We’re really excited about the possibilities this opens up for how AEC professionals digitize their processes, now that it’s simple to view reality capture data alongside other datasets in VR. Of particular interest to existing PointFuse users is that, through this new integration, you no longer have to work with file formats such as OBJ, FBX and IFC, many of which mean compromising on either textures or metadata. Instead, by publishing meshes directly to Unity Reflect, you can add as much additional information as you want to the digital twin.

But of course, mine is just one opinion, based though it may be on years of working in the reality capture industry. I’d love to hear if you have other ways of loading reality capture data into a VR environment, or thoughts about the methods I’ve described here. However you do it, it’s plain that bringing reality capture data into VR is a big step forward for AEC professionals everywhere. Being able to visualise all your data in one place – both designed and as-built – gives firms the opportunity to dramatically optimise their workflows and present data to project teams and stakeholders in new and useful ways. And best of all, companies like PointFuse are already working hard to bring these benefits to users faster (in near real time, for PointFuse users), more economically, and more simply than ever before.