Perception of Stereoscopic DataViz in AR and VR

Panayot Cankov, Jun 13

My team has been working on AR VR DataViz for a while, and I’ve built up some impressions and thoughts on depth perception in VR and its use in DataViz that I wanted to share with you.

Explore and Analyze Your Data in Stereoscopic 3D with AR and VR – Progress Telerik: www.telerik.com

Vision for each eye spans about 120 degrees horizontally and 100 degrees vertically.

If we take eye rotations into account, this extends the view to about 210 degrees horizontally.

If, on average, the smallest detectable visual offset between visual features is about 0.0006 degrees, this gives a resolution of more than 350,000 pixels horizontally per eye to match what we can potentially see in reality.
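As a sanity check, the arithmetic behind that estimate is simple; here is a minimal sketch using only the figures quoted above:

```python
# Back-of-the-envelope check of the per-eye resolution estimate above,
# using only the figures quoted in the text.
horizontal_span_deg = 210.0          # horizontal field including eye rotation
min_detectable_offset_deg = 0.0006   # smallest detectable visual offset

pixels_per_eye = horizontal_span_deg / min_detectable_offset_deg
print(f"{pixels_per_eye:,.0f} pixels horizontally per eye")  # 350,000
```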

The VR Book explains this, and a lot of other things about VR, in depth.

I highly recommend it: The VR Book (www.thevrbook.net).

Today, consumer grade VR devices are shipping by the millions.

Sony has sold over 4 million PSVR headsets; that makes about 1 in 20 PlayStation 4 buyers.

Oculus Quest standalone units are shipping rapidly, with overall high user satisfaction.

But all these consumer grade devices are still VR generation one.

They have nice controllers, ergonomic headsets, good positional tracking, easy setup and maintenance.

But when you take a step back and look at the technology, the innovation is not revolutionary, it is evolutionary: the hardware is little more than lenses attached to a phone screen, packaged together and mounted on your head.

Hand input is done through infrared-camera-tracked controllers, but finger input is captured through buttons, so fine motor skills have little play.

Feedback is provided by haptics — controller vibrations.

Sound quality is decent.

There is no body tracking.

There is no skin touch, heat, smell, etc.

How do we See in VR?

Human vision relies on two eyes.

We receive two slightly different images.

Our brain combines the state of our eye muscles, our head position and those two images to form a single perception of a world rich in depth.

Visual Acuity

Each of our eyes contains several types of sensors.

We will not go into detail about these types, but we will note their distribution.

Visual acuity, the ability to see clearly and in high resolution, is concentrated in the fovea — the center of the retina.

So of those 350,000 pixels we can potentially detect, we do not receive all that information at once: the fovea sees only the central 2 degrees of the visual field in high resolution.
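To put that in perspective, here is the same back-of-the-envelope arithmetic restricted to the fovea; a minimal sketch using the same figures as above:

```python
# How much of the ~350,000-pixel budget falls on the fovea at one instant,
# using the same figures as above (2-degree fovea, 0.0006-degree resolution).
fovea_span_deg = 2.0
min_detectable_offset_deg = 0.0006

fovea_pixels = fovea_span_deg / min_detectable_offset_deg
print(f"~{fovea_pixels:,.0f} pixels across the fovea")  # ~3,333
```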

Saccades

To perceive more detail in a 2D or 3D picture, our eyes jump between points to bring different details onto the fovea — these movements are called saccades.

The brain performs these in succession and fills in the missing pieces by extrapolating fine detail.

If you could track these movements, you could generate an attention map.

But the most important limitation of consumer grade VR headsets is that they simply cannot deliver the resolution to completely fill the fovea.

To present high quality models, high quality textures, and the perceived detail of a real-world premium object, the VR display would have to match the fovea's resolution.

This can be done with eye tracking, streaming a very small but high-resolution image targeted at the fovea (foveated rendering).

But VR is not there yet.

Consumer VR may not be the right choice for presenting premium, high-detail products, pictures, textures, tiles, fine art, etc., because the headsets cannot deliver that level of detail.

Horopter Plane

We see two images, one with each eye.

Objects at a distance that have no disparity between those two images form a plane — the horopter plane.

Those objects are perceived as crystal clear.

Near that plane is an area where objects generate disparity, but the brain can still fuse them together.

Everything outside that area contains disparity too great to fuse, causing objects to appear doubled.
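To get a feel for the geometry, here is a minimal sketch of how angular disparity relative to the fixation point could be estimated. The interpupillary distance and the fusion threshold are illustrative assumptions, not values from this article:

```python
import math

# Rough sketch of the geometry behind the horopter / fusion area described
# above. The IPD and the fusion threshold below are illustrative assumptions.
IPD_M = 0.064            # assumed interpupillary distance, metres
FUSION_LIMIT_DEG = 0.2   # assumed fusion threshold near the fovea, degrees

def vergence_angle_deg(distance_m: float) -> float:
    """Angle between the two lines of sight when fixating at distance_m."""
    return math.degrees(2.0 * math.atan(IPD_M / (2.0 * distance_m)))

def disparity_deg(fixation_m: float, object_m: float) -> float:
    """Angular disparity of an object relative to the current fixation."""
    return abs(vergence_angle_deg(object_m) - vergence_angle_deg(fixation_m))

# Fixating at 1 m: an object at 1.05 m still fuses, an object at 2 m doubles.
for obj in (1.05, 2.0):
    d = disparity_deg(1.0, obj)
    print(f"object at {obj} m: disparity {d:.3f} deg, "
          f"{'fuses' if d <= FUSION_LIMIT_DEG else 'appears doubled'}")
```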

Vergence

Vergence is the rotation of the eyes in opposite directions in order to obtain sharp and comfortable vision.

The position of the eyes during this type of movement provides effective depth cues up to about 2 meters.

Accommodation

Our eyes contain lenses that change shape, trying to focus the incoming images on the retina.

That process is called accommodation.

The sensation of the eye muscles contracting to change the lens provides a distance cue up to about 2 meters.

When the effects of vergence and accommodation are combined, objects that are in focus appear as a single clear image.

Objects that are not in focus appear blurry and doubled.

Consumer headsets have screens and lenses that focus the images at a fixed distance, yet they present two different images to the eyes.

This causes a vergence vs. accommodation conflict.

Consumer headsets benefit from the fact that vergence contributes more to depth perception than accommodation does.

This enables consumer VR headsets to deliver depth-rich information suitable for engineering visualizations, buildings, machinery, floorplans and charts.
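To make the conflict concrete, it is often quantified as the mismatch, in dioptres, between where the eyes converge and where the optics focus. Below is a minimal sketch; the fixed focal distance is an illustrative assumption for a typical headset, not a figure from this article:

```python
# Illustrative sketch of the vergence-accommodation conflict discussed above.
# The fixed focal distance is an assumed value for a typical headset; the
# conflict is expressed in dioptres (1 / distance in metres).
HEADSET_FOCAL_DISTANCE_M = 1.5   # assumed fixed optical focus of the display

def va_conflict_dioptres(virtual_object_m: float) -> float:
    """Mismatch between where the eyes converge and where the lenses focus."""
    vergence_demand = 1.0 / virtual_object_m            # dioptres
    accommodation_demand = 1.0 / HEADSET_FOCAL_DISTANCE_M
    return abs(vergence_demand - accommodation_demand)

# Near virtual objects create the largest conflict; objects at the headset's
# focal distance create none.
for d in (0.5, 1.5, 3.0):
    print(f"virtual object at {d} m: conflict {va_conflict_dioptres(d):.2f} D")
```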

Depth Perception

Accommodation and vergence play a role in depth perception.

But the overall process is much more complicated.

Our brains automatically process an array of stimuli and mix them with our knowledge and prior experience.

This allows us to perceive depth from 2D media or even from a single eye.

Consider the following list of factors contributing to depth perception, again from “The VR Book”:

Pictorial Depth Cues
- Occlusion (front objects hide rear objects)
- Linear perspective
- Relative/familiar size
- Shadows/shading
- Texture gradient
- Height relative to horizon
- Aerial perspective (rear objects look dull, disappearing into fog)

Motion Depth Cues
- Motion parallax (when you move, near objects seem to move faster)
- Kinetic depth effect (changes caused by moving objects)

Binocular Depth Cues

Oculomotor Depth Cues
- Vergence
- Accommodation

Contextual Distance Factors
- Intended action
- Fear

This raises the bar for designing 3D stereoscopic display experiences that rely on depth perception.

You must take into account the oculomotor depth cues enabled by the stereoscopic display, as well as the motion depth cues enabled by head tracking.

2D vs Stereoscopic 3D Displays for VR DataViz

Traditional data visualization techniques present data on a single 2D screen.

This medium fits all the presented information within focus, satisfying vergence and accommodation easily.

That is, the whole 2D presentation lies very close to the horopter plane.

There is a trend toward curved displays and TVs, which position even the edges of the screen on the horopter plane.

In VR, curved displays are frequently used for visualizing 2D movies and sometimes menu systems.

On the other hand, the human brain can only consume a limited amount of input at a time.

This leads to a conflict: bigger screens and better resolutions allow more information to be displayed at once, but our brains are not capable of handling it all.

On 2D you can rely only on the visual acuity distribution to control what is in focus, which leads to grouping of data, guidelines for content padding, etc.

This leads to a striving for overall simplicity in 2D data visualization.

Don’t go 3D on 2D Media

This is a rule of thumb.

Any old book on DataViz will talk about avoiding 3D in general.

It leads to noise, a high ink-per-data ratio and visual clutter.

Perceiving 3D data on a 2D display cannot generate the accommodation and vergence depth cues.

On printed media, where the models cannot be moved, occlusion hides rear data and there are no cues from motion parallax.

This can be overcome with interactive 2D displays, but it still requires active input from the user, or positional animations, to change the point of view in order to read the data.

Shadows/shading and aerial perspective require photorealistic rendering; on 2D displays, DataViz of this type often lacks the context of the environment — where light comes from, room corners forming cues of linear perspective, and so on.

Go 3D on 3D Media

Stereoscopic 3D displays are very different.

Rich 3D models of the data can be presented.

Moving your gaze through these models will push and pull that horopter plane, moving data sets in and out of focus.

Depth cues from vergence are provided thanks to the stereoscopic display.

Head tracking will allow you to naturally change your point of view, triggering depth perception cues that come from occlusion and motion parallax.

Occluded back objects can be easily and naturally revealed.

Using proper geometry can also trigger depth cues from linear perspective and relative/familiar size.

Consumer grade VR headsets can generate exceptional depth perception in users, which enables a third dimension to be used: the perception of 2D areas can be extended to 3D volumes.
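A quick sketch of the motion-parallax cue that head tracking provides: when the head translates sideways, nearer objects shift by a larger visual angle than farther ones. The distances used below are illustrative assumptions:

```python
import math

# Minimal sketch of the motion-parallax depth cue from head tracking:
# nearer objects shift by a larger visual angle when the viewer's head
# translates sideways. Distances are illustrative assumptions.
def angular_shift_deg(object_distance_m: float, head_shift_m: float) -> float:
    """Visual angle an object appears to move for a sideways head shift."""
    return math.degrees(math.atan(head_shift_m / object_distance_m))

head_shift = 0.10  # a 10 cm lean to the side
for d in (0.5, 2.0, 8.0):
    print(f"object at {d} m shifts by {angular_shift_deg(d, head_shift):.1f} deg")
```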

So how can we make good use of this property in AR VR DataViz?

Sales Map

One thing that comes to mind is to use clusters of data placed at various distances.

A horizontally placed map in a stereoscopic AR VR DataViz, with clusters of charts positioned by region, can effectively replace a drill-down by region in a traditional 2D story.

Instead of checking sales by navigating back and drilling down into a different region, you just use the most natural interaction possible — sliding your gaze.

GeoSpatial VR DataViz

Yes, you can visualize a map on a 2D display, but if you overlay the charts, they all end up in perfect focus and you quickly get that too-much-information effect.

You can rely only on visual acuity distribution to separate groups.

The shading of the map provides height information both in 2D and AR VR DataViz, but in stereoscopic 3D depth cues are also triggered from vergence on high contrast areas — road lines, city names, pins.

The 3D bars we selected are aligned with the table sides and the floor tiles, so depth cues come from linear perspective and from familiarity with the bar sizes.

Combine Reports and Share Context

Combine several 2D reports in a single stereoscopic 3D AR VR DataViz experience.

The closest thing to this in 2D is having two monitors running a single reporting app — a multiscreen app that is aware of the monitors’ placement.

In our SalesDashboard application we placed the products on one axis and shared it between multiple 2D reports arranged amphitheatrically.

Each report focuses on a different aspect of the data, and you can switch context by moving your gaze while mapping the products from one report to the other.

The navigation feels much more natural than switching tabs in a browser.

Panorama view of our four amphitheatrically placed reports

Keep in mind that the above image is a projection of the VR experience onto 2D media.

When experienced on a device, vergence will mask the interior of the room; the sharp edges of the TV screens and the room corners will not interfere with the actual charts.

Switching attention from one report to the other is followed with head and eye movement.

When you look at the leftmost dashboard, the rightmost one is hidden behind you.

Only when you take a few steps back do you capture the whole picture in a single view.

Insights from Trends

We have laid bar charts on a table.

On one side of the table we have products, on the other we have mapped time.

The amount of data that is visible is huge — 324 points.

On a 2D display you can gaze at one bar, and only the visual acuity distribution will limit the visibility of faraway bars.

VR DataViz — 3D bar chart footage as presented on a 2D display

When you stand in front of that table in VR and gaze at a product, your vision is affected by that horopter plane.

It enables you to gaze at the bars for a product while your brain filters out the nearer and farther products.

Perceived VR view — gazing at the bars of a product places them on the horopter plane, blurring the rest

The bars have clearly visible square caps; that familiarity allows us to detect changes in size and contributes to depth perception.

They are also placed on a grid at the bottom, with bar walls parallel to the room walls.

This makes it easier to follow the array of bars for a product.

Graphs

The 3rd dimension allows for richer graph layouts.

Naturally occurring graphs like those from social networks cannot be presented on 2D screens without having their edges overlapping.

In 3D the layout constraints are much more relaxed, as lines can easily pass one behind the other.

In combination with head tracking, head movements trigger depth cues from vergence, motion parallax and the kinetic depth effect.

The nodes of the graph can display rich data emphasized in node size and colors.

However, displaying quantitative data as size can break depth cues from relative/familiar size.

Thus, our choice was to use qualitative values for size by snapping node sizes to common values.

In the Twitter Graph, all users that only retweeted content are displayed as circles of a common fixed size, triggering relative/familiar size depth cues for the mass of users, while only the top connected users are displayed at far greater sizes based on the content they produced.
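A minimal sketch of that size-snapping idea follows. The field names, thresholds and sizes are illustrative assumptions, not the actual Twitter Graph implementation:

```python
# Sketch of size snapping: retweet-only users share one common node size,
# while only content producers get a few larger, discrete sizes.
from dataclasses import dataclass

@dataclass
class TwitterUser:
    name: str
    original_tweets: int   # content produced by the user
    retweets: int

BASE_SIZE = 1.0
LARGE_SIZES = [2.0, 4.0, 8.0]   # a handful of familiar, discrete sizes

def node_size(user: TwitterUser) -> float:
    """Snap node size to a small set of values instead of scaling continuously."""
    if user.original_tweets == 0:
        return BASE_SIZE                   # retweet-only users: common size
    if user.original_tweets < 50:
        return LARGE_SIZES[0]
    if user.original_tweets < 500:
        return LARGE_SIZES[1]
    return LARGE_SIZES[2]                  # top producers only

print(node_size(TwitterUser("lurker", 0, 120)))      # 1.0
print(node_size(TwitterUser("influencer", 800, 5)))  # 8.0
```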

Footage of the VR Twitter Graph as it appears on a 2D display

How the graph is perceived in VR when focusing on nodes at different depths

Bubble Charts

Bubble charts share common traits with graphs.

However, the represented data points are not interconnected and instead of topology they communicate data values.

This kind of data exists in 2D in the form of 2D bubble charts.

In these charts the position of each bubble identifies two of its properties, and another two properties can be encoded in size and color.

The 3D version of a bubble chart has 3 axes to plot onto.

Another property can be represented by color.

But what about size? If all data points share the same size, this triggers depth cues from familiar size, boosting the readability of the three properties encoded in position.

So, this is a tradeoff you can take.

The depth cues from shadows/shading are lost, because the large number of data points makes it hard to relate a bubble to its own shadow.

There is one quality of VR bubble charts that is very important though — they can easily visualize grouped clouds based on three properties, not just two as in 2D.

For our bubble chart the axes are drawn parallel to the room corners and the desk to enable linear perspective depth cues, and we decided to keep uniform sizes for the data points.
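Here is a minimal sketch of that encoding choice: three properties mapped to the axes, a fourth to color, and a uniform radius for every bubble. The field names and the color ramp are illustrative assumptions, not our actual implementation:

```python
# Sketch of the 3D bubble-chart encoding: x/y/z carry three data properties,
# color carries a fourth, and size is never used for data so that
# relative/familiar size remains a usable depth cue.
from dataclasses import dataclass

@dataclass
class Bubble:
    x: float
    y: float
    z: float
    color: tuple[float, float, float]  # RGB, 0..1
    radius: float

UNIFORM_RADIUS = 0.05  # metres; identical for every data point

def encode(sales: float, margin: float, growth: float,
           region_index: int, region_count: int) -> Bubble:
    """Map four data properties to position and color, never to size."""
    hue = region_index / max(region_count - 1, 1)      # simple color ramp
    return Bubble(x=sales, y=margin, z=growth,
                  color=(hue, 0.4, 1.0 - hue),
                  radius=UNIFORM_RADIUS)

print(encode(sales=1.2, margin=0.3, growth=0.8, region_index=2, region_count=5))
```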

Point Clouds

There is a special case of bubble charts — point clouds.

These usually present points of the same size but in huge quantities.

A “point” in general is not supposed to have a volume in 3D or area in 2D, so consider it as a single pixel in space.

These cannot generate depth cues based on familiar size because the points are so small.

To compensate, point clouds are displayed in a context where it is either very easy to rotate the cloud, boosting the depth cues obtained from parallax, or where the cloud is a sensor reading of a well-known object, so the cloud itself forms a body we are familiar with, triggering relative/familiar size cues for the whole group or for individual clusters within the point cloud.

AR VR DataViz Gone Bad

We’ve seen some of the sources of depth perception underutilized in AR VR DataViz.

The following examples have been taken out of their context, so I would like to apologize in advance and pay my respects to their authors before using their images to illustrate what I think could be done better in VR.

Bubble Charts

Real world: the biggest screen where you can analyze 3D data.

The whole world really does provide huge real estate for presenting data; this kind of visualization is better placed into the environment.

The beach is a non-uniform structure where linear perspective is totally lost.

The following chart would be far better suited to an office room, with axes aligned to the room walls, corners and windows.

Globe Chart

Globes in AR and VR look wonderful.

Placing 3D bar charts on them has caveats.

When you rotate the globe to position a country facing you, the bars occlude themselves — that is, the top of a bar hides its height, and depth perception is not good enough to rely on for communicating values.

To compensate for this, colors can encode the value, but then the whole thing becomes a heatmap.

Data visualization must not lie, including AR VR DataViz.

The above presentation has yet another issue.

It looks like a heat map, but bars at the sides are seen in their entirety while bars in focus are represented only by their caps.

This may end up making small bars at the sides appear larger in value than bigger bars at the center.

3D for the Sake of 3D

Just because you have a third dimension in AR VR doesn’t mean you must use it.

If you have two-dimensional data, you may be better off presenting it on a 2D plane, or with 3D geometry objects still placed in a simple array.

It is easy to cause occlusion or to form false depth perception based on relative/familiar size.

The top ring of the above presentation looks fine.

August is occluded a little by June at the far left.

Still, enough of the August bar is visible that you can read its height without moving.

The ring may represent a cyclic, yearly recurring pattern, so the layout of the array makes sense.

The lower ring, however, represents countries — that layout doesn’t represent anything cyclic.

The bars at the far left and right occlude each other, so the values there are not readable; you can obtain trend information from the difference in cap heights, but not the overall values of the bars.

The bottom caps look the same as the top caps, and since the bars are semitransparent, the bottom caps limit readability at the far left and right sides of the circle.

It may have been better for the bottom ring to be a 2D map placed on the ground.

You cannot Experience an AR VR DataViz on a 2D Display

Whether it is a 2D image or a video captured from an AR VR experience, projecting it onto your monitor limits your ability to perceive depth.

All of the 3D objects are flattened onto a single plane — your horopter plane — making everything perfectly clear; you can no longer focus on clusters like you could in AR and VR, and the depth cues from vergence are gone.

You lose contextual distance factors such as intended action, because someone else is taking the actions in the video.

And because you are not immersed in the AR and VR experience, fear has little play.

Presenting VR on a 2D screen ends up relying on shadows, texturing, linear perspective, relative/familiar size and height relative to the horizon.

Occlusion happens, but you cannot interact with it; together with motion parallax and the kinetic depth effect, it must be explicitly exaggerated when recording a 3D video.

Here are a few secrets we used to capture the Twitter Graph video on our landing page.

View the following video and notice the motion blur during teleportation and the parallax effect.

During the recording we used a homemade third-person VR camera to capture 2D video, which allowed us to enable motion blur and trigger the kinetic depth effect; it happens very clearly near 2:15, during teleportation.

The motion blur and the animated transition are also supposed to compensate for the lost intended-action cues.

If these were actually applied in VR, they would cause motion sickness.

Then, shortly after 2:15, exaggerated sideways movement strengthens the parallax effect on the graph.

This compensates for the lost depth cues from vergence due to the lack of stereoscopic 3D display.

Thank You!

Thank you for reading this far.

I would be happy to hear your thoughts on the matter — drop a line in the comments below.

Also, if you are interested in what we are doing, you can find out more, contact us or request a demo on our page: Explore and Analyze Your Data in Stereoscopic 3D with AR and VR – Progress Telerik (www.telerik.com).

For future blogs, follow our publication “Telerik AR VR” here on Medium.

If you are into short updates, follow us on Twitter: Georgi Atanasov, Deyan Yosifov, Hristo Zaprianov and me, Panayot Cankov.
