Thanks for the response! I don't recall this happening on desktop VR when teleporting; I believe I could just update the camera position. Is it done differently on desktop VR? If you want to move the position, you must create a parent object to do so, whether or not that makes sense for your setup. The effect this creates is a feeling of "transform lockout" on the camera. But in that case I would be overriding Unity's native black-magic positioning, and I don't know at which moment in script execution that happens: in Update? In FixedUpdate? In LateUpdate? How can I be sure that I can override the position and rotation while maintaining the native integration of the rendering, etc.?
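For what it's worth, the usual way to sidestep the ordering question entirely is the parent-object pattern mentioned above: never write to the tracked camera's transform, and instead move the parent so the tracked head lands where you want. A minimal sketch (the component and method names here are made up for illustration, not from the project):

```csharp
using UnityEngine;

// Hypothetical teleport helper: instead of fighting native head tracking
// for the camera's transform, move a parent "rig" so that the tracked
// head ends up at the desired world position. The camera stays a child
// and keeps its native tracking untouched.
public class VRRigTeleport : MonoBehaviour
{
    public Transform head; // the tracked camera, a child of this rig

    public void TeleportHeadTo(Vector3 worldTarget)
    {
        // Offset the rig by the difference between where the head
        // currently is and where we want it to be.
        Vector3 headOffset = head.position - transform.position;
        transform.position = worldTarget - headOffset;
    }
}
```

Because only the parent moves, it doesn't matter at which point in the frame Unity re-applies the tracked pose to the child camera.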
I think I could do something like this if I have full control of the camera's rotation and position via InputTracking.GetLocalPosition(VRNode.RightEye), etc. With my setup I end up with "wrong behaviour", and what I want is "correct behaviour". The problem is that when I have my custom cameras, one per eye, the "Stereo Separation" setting doesn't do anything to their local positions (i.e. the offset from the head center), so I have to apply the offset manually, like in the old Cardboard versions without native integration: an empty "head" game object that is rotated by the gyroscope values, with two fixed, offset cameras as children. The cameras don't rotate locally; they inherit rotation from the parent. In a standard setup you have a single camera with Target Eye set to "Both", which renders through two invisible cameras whose positions are offset a couple of centimeters depending on the "Stereo Separation" parameter; I also suppose these two runtime-created cameras have rotations influenced by the "Stereo Convergence" parameter.
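The manual offset described above could be sketched roughly like this, using the Unity 5.6 VRNode API. This is only a sketch under the assumption that native tracking does not overwrite the explicit left/right cameras' positions (which is exactly the open question in this thread); field names are mine:

```csharp
using UnityEngine;
using UnityEngine.VR; // Unity 5.6 namespace; renamed UnityEngine.XR in 2017.1+

// Old-Cardboard-style rig: a "head" parent driven by the tracked head
// pose, and two child eye cameras (Target Eye = Left/Right) whose local
// positions are set manually from the per-eye tracking nodes, since
// Stereo Separation is not applied to explicit per-eye cameras.
public class ManualEyeOffsets : MonoBehaviour
{
    public Transform head;     // empty parent object
    public Transform leftEye;  // camera with Target Eye = Left
    public Transform rightEye; // camera with Target Eye = Right

    void LateUpdate()
    {
        // Drive the head from the tracked center pose...
        Vector3 headPos = InputTracking.GetLocalPosition(VRNode.Head);
        head.localPosition = headPos;
        head.localRotation = InputTracking.GetLocalRotation(VRNode.Head);

        // ...and offset each eye relative to it. The eye nodes are
        // reported in tracking space, so subtract the head position
        // to get a local offset from the head center.
        leftEye.localPosition =
            InputTracking.GetLocalPosition(VRNode.LeftEye) - headPos;
        rightEye.localPosition =
            InputTracking.GetLocalPosition(VRNode.RightEye) - headPos;
    }
}
```

The eye cameras keep identity local rotation and inherit it from the head, just like the old gyroscope-driven setup.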
My problem now is that I did a quick test, and it seems that when you don't use the "Both" option on a single camera, but instead use a separate camera for each eye, each camera only receives the VR local rotation, not the local position. My setup has the flexibility to render both eyes individually with exclusive content for each eye (like different 360 skyboxes) instead of one single camera with Target Eye = Both, and it also has the flexibility to cast the "center camera" to Chromecast while the smartphone still renders both eyes to the phone screen.
Hi people, I have a question regarding native Cardboard VR in Unity 5.6. It is related to this thread, so I'll write here; maybe some of the devs who already answered above can illuminate me. My setup is:

A - empty game object which is the world offset (or head position in the game); this is freely moved by scripts
B - camera with Target Eye = Left and a layer mask showing only left-eye stuff
C - camera with Target Eye = Right and a layer mask showing only right-eye stuff
D - center camera with Target Eye = None, which is used when I compile to WebGL, so instead of VR I have only one camera rendering full screen; I also use this camera to cast to Chromecast
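To make A-D concrete, this is roughly how I'd express that rig in a setup script. The layer names and the component itself are assumptions for illustration, not my actual project code:

```csharp
using UnityEngine;

// Rough shape of the rig described in A-D. Cameras B, C and D are
// children of the rig object (A), which scripts move freely.
public class CardboardRigSetup : MonoBehaviour
{
    public Camera leftEye, rightEye, center;

    void Start()
    {
        // B: left camera masks out right-eye-only content.
        leftEye.stereoTargetEye = StereoTargetEyeMask.Left;
        leftEye.cullingMask = ~(1 << LayerMask.NameToLayer("RightEyeOnly"));

        // C: right camera masks out left-eye-only content.
        rightEye.stereoTargetEye = StereoTargetEyeMask.Right;
        rightEye.cullingMask = ~(1 << LayerMask.NameToLayer("LeftEyeOnly"));

        // D: center camera renders to no eye; used for the WebGL build
        // (single full-screen camera) and for casting to Chromecast.
        center.stereoTargetEye = StereoTargetEyeMask.None;
    }
}
```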