DxR: Bridging 2D Data Visualization into Immersive Spaces

Richard Hackathorn · Feb 22

(Image: ShutterStock #616733024, used with permission)

At IEEE VIS 2018, researchers from several universities presented a paper describing a new toolkit for rapid deployment of immersive data visualizations, called DxR (Data visualization in miXed Reality). [1]

The paper represents a significant effort to merge several technology clusters into an integrated development environment.

It bridges JSON Vega-style visualization designs with Unity3D scenes, supporting commercial VR products like the Oculus Rift, HTC Vive, and Microsoft HoloLens (mid-priced headsets).

This work came to my attention via an article by Danielle Szafir, a professor at the University of Colorado Boulder.

She summarizes several data visualization projects and characterizes the significance of DxR as follows:

“While our understanding of the utility of MR [mixed reality] for visualization is still limited, tools like DxR could help accelerate growth in this area.

By making it easier to build and deploy immersive visualizations, we can better understand exactly when and why these display technologies are useful for data analysis.

These tools aid both skeptics and supporters alike, allowing them to build new applications and rapidly conduct studies that test the limits of what immersive visualizations might offer.

” [2]

She concludes that more experimentation with mixed-reality visualization is needed to “understand exactly when and why” it is useful, along with its limits.

As the Szafir article notes, advanced data visualization is a critical element enabling many business applications.

The pressure to use the latest display technologies (driven by the video gaming industry) is pushing toward the vision described in Visualization beyond the Desktop — The Next Big Thing.

The authors surveyed the potential of VR across several applications and technologies, summarizing it as follows:

“Visualization researchers need to develop and adapt to today’s new devices and tomorrow’s technology.

Today, people interact with visual depictions through a mouse.

Tomorrow, they’ll be touching, swiping, grasping, feeling, hearing, smelling, and even tasting data.

” [3]

Hands Dirty

Let’s see if the DxR toolkit performs as advertised by playing with it…

First, read the DxR paper! It is well written and quite approachable if you understand a little about visualization grammars and Unity3D development.

https://sites.google.com/view/dxr-vis/home

Second, go to the DxR website and view the 5-min video, which is concise and clear.

Take time to browse the Vega-Lite website and scroll through the numerous Examples (noticing the vis specification on the left).

It’s quite a spectrum of use cases! This gives a flavor of the functionality that DxR is bringing into immersive spaces.
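For orientation, here is a minimal Vega-Lite bar chart spec of the kind shown on that Examples page (the data values are invented):

```json
{
  "$schema": "https://vega.github.io/schema/vega-lite/v2.json",
  "data": { "values": [
    { "category": "A", "amount": 28 },
    { "category": "B", "amount": 55 },
    { "category": "C", "amount": 43 }
  ] },
  "mark": "bar",
  "encoding": {
    "x": { "field": "category", "type": "ordinal" },
    "y": { "field": "amount", "type": "quantitative" }
  }
}
```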

Third, execute the code, as follows:

Download the UnityPackage file by clicking on the link at Download DxR.

Choose the most recent version.

Save it as a file; it is a UnityPackage, not a GZ archive.

This file will be imported into your Unity project later.

Also, be sure that your Unity version matches the one the package was built with, since strange things can happen otherwise.

Use the new Unity Hub to manage all your Unity versions, or just install a new Unity version.

Click on Quick Start Guide and read through the Overview.

Understand the example of a Vega-Lite-style spec within a Unity scene.

Focus on the five steps in the Setting Up section, which is most of the work.

NOTE: You need newbie familiarity with Unity3D to create a new Unity project, import Unity asset packages, understand the Hierarchy, Project, and Inspector panels, and play the Unity scene.

If you need help, check out the excellent introductory tutorials for Unity3D.

Try some of the canned examples.

First, scan the DxR Examples page.

All those can be found in the DxRExamples assets folder.

Double-click the Classics scene to load its assets into the Hierarchy window.

Open the Collection folder and note the DxR… subfolders for bar charts, scatterplots and the like.

Click on the first item DxRBarchartBasic.

In the Inspector panel, note the Vis script and its global variable Vis Specs URL, which points to Examples/Classics/barchart_basic.json.

In the JSON of this Vis Spec, a “data” attribute contains the inline values to be displayed, much like the “values” array in the Vega-Lite example above. In other Vis Specs, a “url” attribute names an external JSON data file instead, as sketched below.
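A minimal sketch of the external-file variant, assuming DxR’s Vega-Lite-style grammar (the cube mark, field names, and file path are illustrative assumptions; consult barchart_basic.json for the authoritative spec):

```json
{
  "data": { "url": "Examples/Classics/barchart_data.json" },
  "mark": "cube",
  "encoding": {
    "x": { "field": "category", "type": "ordinal" },
    "y": { "field": "amount", "type": "quantitative" }
  }
}
```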

Create your own vis by following the next section on Creating your First Immersive Vis. There are additional tutorials covering the GUI controls and custom marks (using prefabs).

Use your own data, and have fun!

Create a custom mark.

I especially liked the flexible use of prefab GameObjects in Unity as DxR marks.

The tutorials in Custom Marks and Channels are informative; study the fire mark and its C# particle script.

Reflections

DxR is a good beginning for bridging conventional 2D data visualization into immersive spaces.

As explained below, there are several extensions that would be desirable in future DxR versions.

These extensions will hopefully come from continued research funding and open-source community contributions.

Extending the DxR mark encodings.

DxR has implemented an impressive scope of the Vega-Lite specification, which is not obvious initially.

Study the DxR Grammar documentation, along with the Google Sheet docs.

Lots of latent capabilities here… For example, DxR provides 3D visualizations using mark encodings for x, y, z position and rotation, similar to the Unity Transform type.

Below is a standard example called Streamline, shown in the Examples section.

Vis Spec and rendering for the Streamline example

In the above vis spec, a mark of cone is used.

Notice the use of the xdirection channel to control the orientation vector of the mark.

This differs from the xrotation channel, which controls the Unity Transform rotation property.

Change the vis spec to see the difference.
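A minimal sketch of such an encoding, in the style of the DxR grammar (the field names are hypothetical, and I am assuming ydirection and zdirection exist by symmetry with xdirection; the actual Streamline spec ships in DxRExamples):

```json
{
  "data": { "url": "streamline_data.json" },
  "mark": "cone",
  "encoding": {
    "x": { "field": "px", "type": "quantitative" },
    "y": { "field": "py", "type": "quantitative" },
    "z": { "field": "pz", "type": "quantitative" },
    "xdirection": { "field": "vx", "type": "quantitative" },
    "ydirection": { "field": "vy", "type": "quantitative" },
    "zdirection": { "field": "vz", "type": "quantitative" },
    "color": { "field": "speed", "type": "quantitative" }
  }
}
```

Swapping xdirection for xrotation here would spin each cone about its x axis rather than aim it along the data vector.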

A hidden treasure in the DxR documentation!

However, Unity has rich rendering capabilities.

How can mark encodings be extended to handle textures, shaders, sound, animations, particles, and the like? The capabilities list for action functionality in Hutong Games’ PlayMaker would be the goal.

PyViz as the evolution of Vega.

A recent community — PyViz — has formed to integrate several Python-based visualization packages, including Vega.

This PyViz background section paints the grand picture.

Currently, 3D viz is limited to Plotly using WebGL, as explained in the FAQ entry “Does PyViz include 3D support?”

The Roadmap page provides their current/future development directions.

No mention of Unity3D.

Definitely a fast-moving community to be leveraged.

Enhancing the Data Integration Capability.

The current data integration capability supports data embedded explicitly in the Vis Spec (for very small datasets) or static JSON/CSV files within the Unity StreamingAssets folder.

However, the url attribute within the Vis Spec could be extended to support remote SQL database access via web service calls.

Further, DxR could gain the ability to stream data into Unity, auto-updating the vis when the data changes, and to interact with the data server.

This is a significant effort that can be aided by existing open-source resources (like the PyViz community).
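Purely as a speculative sketch of that direction (nothing here exists in DxR today; the endpoint, query string, and refresh attribute are invented for illustration):

```json
{
  "data": {
    "url": "https://data.example.com/api/query?sql=SELECT+ts,+temp+FROM+sensors",
    "refresh": "30s"
  },
  "mark": "sphere",
  "encoding": {
    "x": { "field": "ts", "type": "temporal" },
    "y": { "field": "temp", "type": "quantitative" }
  }
}
```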

Leveraging the Microsoft Mixed Reality Toolkit.

DxR is built upon HoloToolkit within the Unity environment.

Microsoft recently converted HoloToolkit to Mixed Reality Toolkit (MRTK) and is moving aggressively.

Architecture of the Mixed Reality Toolkit

Its objective is a “collection of scripts and components intended to accelerate development of applications targeting Microsoft HoloLens and Windows Mixed Reality headsets”, which does include SteamVR (like the HTC Vive and Oculus Rift) and OpenXR platforms.

The MRTK GitHub repository is under the MIT License and built upon the Windows 10 UWP Mixed Reality APIs.

The issue is how to incorporate MRTK into DxR to leverage its emerging capabilities, while balancing competing VR toolkits.

Unity Entity-Component-System.

Another relevant development is Unity’s new Entity-Component-System architecture, along with a Job System and Burst Compiler.

Driven by video games featuring hordes of armies, Unity is beta-testing this new architecture, which separates the data driving GameObject behaviors (as Components of Entities) from the code that executes those behaviors (as Systems).

The best introduction is a series of six video tutorials plus examples.

When ECS matures, its potential will be to create and animate Unity objects (glyphs with complex behaviors and appearances), millions of them! Packing all the data for a glyph into an Entity that can be streamed to the GPU would be the only way to create such visualizations.

These capabilities may allow new, complex visualization approaches.

Only One More Dimension?

The above focuses on bridging 2D visualization into 3D immersive spaces.

However, does this imply that there is only one more dimension to utilize for our visualizations?

In 2015, I wrote a blog post, Is Virtual Reality Useful for Data Visualization? [4]

It was based on comments by Tamara Munzner, a data visualization researcher at the University of British Columbia.

Her short answer was ‘NO’ because 3D is only needed for spatial settings, and the cost of VR, broadly defined, outweighs its benefits.

Scientific visualizations often have a natural 3D context within which to display data, making them intuitive for the observer.

If a natural 3D context does not exist, is there still a justification for VR? My conclusion was that…

…virtual data worlds should provide unique capabilities (such as collaboration) and be synergistic with 2D visualizations (blending them in-world). [If we] naively use virtual data worlds to extend 2D visualizations by one more dimension, we will fail to find effective use cases [for VR visualizations].

Since then, I have investigated several approaches to using that one more dimension creatively.

Here are some suggestions that I am pursuing:

Think of building a virtual world that represents the dynamics of an entire complex system (like a digital city).

The horizontal plane would resemble the physical world, while the vertical dimension would represent increasing levels of abstraction (and analysis).

Remember, there is no gravity here!

Support a multi-user environment to facilitate collaboration among people observing the same behaviors at the same time, each with their own specialized tools.

Overload glyphs (marks) with information, as in Gistualizer [5], which builds on the study Gist of a Scene [6].

Note that DxR examples hint at this phenomenon.

Visualize the latent space (embedding) within the hidden layers of a trained neural network. [7] One approach is to overload glyphs by creating Unity textures from the weights.

A second, more intriguing, approach is to explore the entire 100+ dimensional space, three dimensions at a time, within Unity.

Needed is a smart navigator within Unity to wander around this high-dimensional space! A sketch of one such three-at-a-time mapping follows.
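A speculative sketch, assuming the embedding vectors have been exported to a JSON file with fields dim0 through dim99 plus a label (all names here are hypothetical); the navigator would simply rewrite which three fields feed x, y, and z:

```json
{
  "data": { "url": "embedding_vectors.json" },
  "mark": "sphere",
  "encoding": {
    "x": { "field": "dim3", "type": "quantitative" },
    "y": { "field": "dim17", "type": "quantitative" },
    "z": { "field": "dim42", "type": "quantitative" },
    "color": { "field": "label", "type": "nominal" }
  }
}
```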

If this is of interest, let’s build immersive worlds! Please collaborate by joining the Immersive Analytics community [website, twitter, linkedin, slack, github].

References

[1] Sicat, Ronell, et al. DXR: A Toolkit for Building Immersive Data Visualizations. (2018). https://sites.google.com/view/dxr-vis/home and https://github.com/ronellsicat/DxR

[2] Szafir, Danielle. IEEE VIS 2018: Color, Interaction, & Augmented Reality. (2018). https://medium.com/multiple-views-visualization-research-explained/ieee-vis-2018-color-interaction-augmented-reality-ae999b227c7

[3] Roberts, Jonathan, et al. Visualization beyond the Desktop — the Next Big Thing. (2014). https://ieeexplore.ieee.org/document/6879056

[4] Hackathorn, Richard. Is Virtual Reality Useful for Data Visualization? (2015). https://www.immersiveanalytics.com/2015/10/is-virtual-reality-useful-for-data-visualization/

[5] Bellgardt, Martin, et al. Gistualizer: An Immersive Glyph for Multidimensional Datapoints. (2017). https://vr.rwth-aachen.de/publication/02157/

[6] Oliva, Aude. Gist of a Scene. (2005). https://www.sciencedirect.com/science/article/pii/B9780123757319500458

[7] Koehrsen, Will. Neural Network Embeddings Explained. (2018). https://towardsdatascience.com/neural-network-embeddings-explained-4d028e6f0526
