06-29-2018 11:40 AM
Hey there, I'm working in Unity to try and create a realistic gun system. The component I've been working on most recently is the weapon's scope. As of right now I have a camera that rotates based on the player's head position relative to itself, rendering to a render texture, so that the player must look straight through the scope to aim accurately. The problem with this is that the scope cam's position is based on the midpoint between the eyes. I can fix that by offsetting it by half the lens distance, but because SteamVR in Unity only has one camera for both eyes, I can only offset it for one eye at a time.
I don't know exactly how the SteamVR camera works: whether it's one camera that does two passes per frame, or whether the two eyes have separate cameras that are just inaccessible for some reason. What I need is one of two things: access to the individual eye cameras so I can set culling masks and put two render textures in my scene (one seen by each eye cam), or, if it's a multi-pass system, some way of updating either the SteamVR camera or the render texture between the passes. I have been looking online and through the Unity documentation for days now and cannot figure out either of these, or any other solution. If any of you folks know how to solve this I'd be eternally grateful.
06-29-2018 03:35 PM - edited 06-29-2018 03:35 PM
Have you taken a look at this thread?
I'd also look at the single pass rendering article. When using multi-pass rendering, the culling mask will be shared between both eyes.
Technical Specialist - San Francisco, CA; Monday-Friday
07-01-2018 02:50 PM
Thanks, I added a second camera to the rig and was able to set the culling masks to get a stereoscopic effect. The only thing I still need to figure out is how to get the distance between (or preferably the actual positions of) the eyes.
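In case it helps future readers, here's a minimal sketch of what that two-camera culling-mask setup can look like. The layer names and component name are my own assumptions, not from this thread; the idea is just that each eye camera renders only its own scope quad:

```csharp
using UnityEngine;

// Illustrative sketch: one camera per eye, each hiding the other eye's scope quad.
public class StereoScopeSetup : MonoBehaviour
{
    public Camera leftEyeCam;
    public Camera rightEyeCam;

    void Start()
    {
        // Render each camera to one eye only.
        leftEyeCam.stereoTargetEye = StereoTargetEyeMask.Left;
        rightEyeCam.stereoTargetEye = StereoTargetEyeMask.Right;

        // Put the two scope quads on their own layers ("ScopeLeft"/"ScopeRight"
        // are assumed layer names), then mask out the opposite eye's quad.
        int leftScopeLayer = LayerMask.NameToLayer("ScopeLeft");
        int rightScopeLayer = LayerMask.NameToLayer("ScopeRight");
        leftEyeCam.cullingMask &= ~(1 << rightScopeLayer);
        rightEyeCam.cullingMask &= ~(1 << leftScopeLayer);
    }
}
```

Each scope quad would then display the render texture from the scope camera offset for that eye.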
07-02-2018 09:37 AM - edited 07-02-2018 09:38 AM
@OutofRam, the current best practice is to scale the entire "head" via its transform depending on the IPD reported by OpenVR's APIs. If you simply "move" the camera rigs, it's very easy to end up in a situation where the scene makes little perceptual sense, increasing the risk of motion sickness.
This is the relevant part of OpenVR: IVRSystem::GetEyeToHeadTransform
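A rough sketch of the head-scaling approach described above (the field names and target value are illustrative assumptions; it also assumes SteamVR is initialized so `OpenVR.System` is non-null):

```csharp
using UnityEngine;
using Valve.VR;

// Illustrative sketch: scale the camera rig so the apparent eye separation
// in the scene matches a desired world-scale IPD.
public class HeadScaler : MonoBehaviour
{
    public Transform cameraRig;           // e.g. the SteamVR [CameraRig] root (assumption)
    public float targetWorldIpd = 0.064f; // desired apparent IPD in world units (assumption)

    void Update()
    {
        // m3 is the x component of each eye-to-head translation.
        float leftX = OpenVR.System.GetEyeToHeadTransform(EVREye.Eye_Left).m3;
        float rightX = OpenVR.System.GetEyeToHeadTransform(EVREye.Eye_Right).m3;
        float actualIpd = rightX - leftX; // full IPD reported by the HMD, in meters

        if (actualIpd > 0f)
            cameraRig.localScale = Vector3.one * (targetWorldIpd / actualIpd);
    }
}
```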
08-08-2018 07:20 PM - edited 08-08-2018 07:26 PM
Sorry for the late reply, I've been unable to test this until now. I'm having a bit of trouble figuring out how to implement this code, as I don't have much experience dealing with matrices. For example, if I just wanted to get the float distance between the eyes, I use:
Valve.VR.OpenVR.System.GetEyeToHeadTransform(EVREye.Eye_Left); and
Valve.VR.OpenVR.System.GetEyeToHeadTransform(EVREye.Eye_Right);
to get the matrices, but then how do I extract the IPD from those?
Edit
Never mind, I figured it out:
Valve.VR.OpenVR.System.GetEyeToHeadTransform(EVREye.Eye_Left).m3 contains the eye's horizontal offset (the x translation of the eye-to-head matrix, roughly half the IPD)
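To spell out the matrix part: HmdMatrix34_t is a 3x4 row-major matrix, so in the C# wrapper m0..m3 are row 0 and m3 is the x component of the translation. The left eye's offset is negative and the right eye's is positive, so subtracting gives the full IPD. A small sketch (the class and method names are mine):

```csharp
using Valve.VR;

public static class IpdUtil
{
    // left/right are the results of GetEyeToHeadTransform for each eye.
    // m3 (row 0, column 3) is each eye's horizontal offset from the head center,
    // so the difference is the full interpupillary distance in meters.
    public static float IpdFromEyeTransforms(HmdMatrix34_t left, HmdMatrix34_t right)
    {
        return right.m3 - left.m3;
    }
}
```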