Hey Vlad,

Yep, timing/latency is definitely the key.

It's great to hear that rAF will be able to support future scheduling to make time-sensitive things better. But wouldn't it be best to have this sort of implementation utilised by DeviceOrientation/DeviceMotion too? The Rift is really just using a low-latency IMU that's pretty similar to what's in high-end phones. Or am I missing something else here? From our perspective we're just using it to drive the orientation of the virtual camera(s) in a 3D scene. This is exactly the same as doing that on a mobile device for AR.
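
To make that concrete, here's roughly all we do on the mobile side today.
A minimal sketch, assuming three.js and an existing scene/camera/renderer,
and skipping the screen-orientation correction a real implementation needs:

  // Cache the latest deviceorientation reading and apply it to the camera
  // on each animation frame.
  var latest = null;
  window.addEventListener('deviceorientation', function (e) {
    latest = e; // alpha, beta, gamma arrive in degrees
  });

  function toRad(deg) { return deg * Math.PI / 180; }

  function render() {
    if (latest) {
      // Device angles are intrinsic 'ZXY'; three.js wants them as 'YXZ' here.
      var euler = new THREE.Euler(
        toRad(latest.beta), toRad(latest.alpha), -toRad(latest.gamma), 'YXZ');
      camera.quaternion.setFromEuler(euler);
      // Look out of the back of the device, not the top (-90 degrees about X).
      camera.quaternion.multiply(
        new THREE.Quaternion(-Math.SQRT1_2, 0, 0, Math.SQRT1_2));
    }
    renderer.render(scene, camera);
    requestAnimationFrame(render);
  }
  requestAnimationFrame(render);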

Definitely keen to hear any other thoughts you have on the Kinect and our awe.js work too.

roBman



On 17/04/14 3:02 PM, Vladimir Vukicevic wrote:
On Tuesday, April 15, 2014 8:17:44 PM UTC-4, Rob Manson wrote:
We've also put together a plugin for our open source awe.js framework
that uses getUserMedia() to turn the Rift into a video-see-thru AR
device too. And for the 6dof tracking we just use the open source
oculus-bridge app that makes this data available via a WebSocket which
is enough for this type of proof of concept.
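
The client side of that bridge is only a few lines. Roughly like this (the
port and message shape here are just illustrative, the oculus-bridge README
has the actual protocol):

  // Keep the latest orientation quaternion pushed by the oculus-bridge app
  // over a local WebSocket; the render loop applies it to the camera.
  var rift = { quat: [0, 0, 0, 1] };
  var socket = new WebSocket('ws://localhost:9005');
  socket.onmessage = function (msg) {
    var data = JSON.parse(msg.data);
    if (data.quaternion) {
      rift.quat = data.quaternion; // [x, y, z, w]
    }
  };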

Of course if that just turned up as the DeviceOrientation API when you
plugged in the Rift then that would be even better.

This is actually not a good API for this; as you know, latency is death in VR.
For this to work well, the most up-to-date orientation information needs to be
available right when a frame is being rendered, and ideally needs to be
predicted for the time that the frame will be displayed on-screen.

Currently the prototype API I have allows for querying VR devices, and then 
returns a bag of HMDs and various positional/orientation sensors that might be 
present (looking towards a future with Sixense and similar support; Leap might
also be interesting).  Once those device objects are queried, methods on them 
return the current, immediate state of the position/orientation, and optionally 
take a time delta for prediction.
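
In other words, something with roughly this shape (the names below are
placeholders to illustrate it, not the actual prototype):

  // Illustrative only: invented names showing the query/state shape above.
  navigator.getVRDevices(function (devices) {
    var hmd = devices.filter(function (d) { return d.type === 'hmd'; })[0];
    var sensor = devices.filter(function (d) {
      return d.type === 'orientation';
    })[0];

    var state = sensor.getState();       // current, immediate state
    var predicted = sensor.getState(20); // optionally predicted ~20 ms ahead
  });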

Conveniently, requestAnimationFrame is passed in a frame time which at some 
point in the near future (!) will become the actual scheduled frame time for 
that frame, so we have a nice system whereby we can predict and render things 
properly.
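
So the render loop ends up looking something like this (again, getState()
is just the placeholder name from the sketch above; camera, renderer and
scene are assumed from an existing three.js setup):

  // Predict the pose for the moment this frame will actually be displayed,
  // using the scheduled frame time that rAF hands us.
  function render(frameTime) {
    var delta = frameTime - performance.now(); // how far ahead it will show
    var state = sensor.getState(delta);
    camera.quaternion.fromArray(state.orientation);
    renderer.render(scene, camera);
    requestAnimationFrame(render);
  }
  requestAnimationFrame(render);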

Very cool to hear about awe.js and similar.  Will definitely take a look.

On a slightly related note we've also implemented Kinect support that
exposes the OpenNI Skeleton data via a WebSocket. This allows you to use
the Kinect to project your body into a WebGL scene. This is great for VR
and is definitely a new area that no existing open web standard covers yet.
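
The browser side of that is also only a handful of lines. Roughly (again,
the port and message shape are illustrative rather than the actual bridge
protocol):

  // Joint positions arrive as JSON over a WebSocket and drive simple meshes
  // in an existing three.js scene.
  var joints = {}; // joint name -> THREE.Mesh, created lazily
  var skeletonSocket = new WebSocket('ws://localhost:8181');
  skeletonSocket.onmessage = function (msg) {
    var skeleton = JSON.parse(msg.data); // e.g. { head: {x, y, z}, ... }
    Object.keys(skeleton).forEach(function (name) {
      if (!joints[name]) {
        joints[name] = new THREE.Mesh(
          new THREE.SphereGeometry(0.03),
          new THREE.MeshNormalMaterial());
        scene.add(joints[name]);
      }
      var p = skeleton[name];
      joints[name].position.set(p.x, p.y, p.z);
    });
  };
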
Also interesting -- Kinect was brought up earlier as another device to explore, 
and I think there's value in figuring out how to add it to this framework.

     - Vlad

--
Rob

Check out my new book "Getting started with WebRTC" - it's a 5-star hit
on Amazon: http://www.amazon.com/dp/1782166300/?tag=packtpubli-20

CEO & co-founder
http://MOB-labs.com

Chair of the W3C Augmented Web Community Group
http://www.w3.org/community/ar

Invited Expert with the ISO, Khronos Group & W3C

