I am looking to leverage the SteamVR interaction system, but some aspects of the object system aren't clear to me.
Which class ought to be responsible for mapping the "Application Menu" button press to the activation of new objects/controls? For example, in "The Lab" this would activate a "Teleport sphere" the user could bring to their face to transition to a different Scene.
If I wanted to give the user a Tablet they could bring up to control things, I might follow the pattern I see in Teleport, for example: a "Tablet" script whose Update() method polls with GetPress() each frame to see whether the user has pressed the "ApplicationMenu" button. Replicated across multiple places/objects, this seems like a lot of overhead processing. I am more used to there being a "Controller" object that invokes state changes through the object system when the user initiates them.
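For concreteness, here is a minimal sketch of the per-object polling pattern being described, using the legacy SteamVR 1.x plugin API (SteamVR_TrackedObject / SteamVR_Controller). The "Tablet" script name and the tabletModel field are illustrative, not part of the Interaction System:

```csharp
using UnityEngine;

// Hypothetical per-object polling script, modeled on the Teleport pattern.
// Attach to a controller GameObject that has a SteamVR_TrackedObject.
[RequireComponent(typeof(SteamVR_TrackedObject))]
public class Tablet : MonoBehaviour
{
    public GameObject tabletModel;  // the tablet mesh/UI to show or hide

    private SteamVR_TrackedObject trackedObject;

    void Awake()
    {
        trackedObject = GetComponent<SteamVR_TrackedObject>();
    }

    void Update()
    {
        // Poll the controller every frame -- this per-frame check is the
        // overhead that multiplies when many objects repeat the pattern.
        var device = SteamVR_Controller.Input((int)trackedObject.index);
        if (device.GetPressDown(SteamVR_Controller.ButtonMask.ApplicationMenu))
        {
            tabletModel.SetActive(!tabletModel.activeSelf);
        }
    }
}
```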
@iraytrace, I've looked into this, and it turns out this overhead-intensive method is a result of how the SteamVR input for Unity was written (which Valve may alter in the future). The Vive Input Utility contains alternative, more convenient methods for checking event buttons, and it will see an update soon that makes checking them even easier. You can also look at VRTK's approach. This tutorial has also been helpful to many.
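Until the plugin changes, one way to get the centralized "Controller" behavior the question asks for is to poll in exactly one place and broadcast a plain C# event that any object can subscribe to. This is a sketch of that idea, not any library's API; the class and event names are made up:

```csharp
using System;
using UnityEngine;

// Polls the ApplicationMenu button once per frame and raises an event,
// so individual objects no longer need their own Update() polling.
[RequireComponent(typeof(SteamVR_TrackedObject))]
public class ControllerInputBroadcaster : MonoBehaviour
{
    public static event Action ApplicationMenuPressed;

    private SteamVR_TrackedObject trackedObject;

    void Awake()
    {
        trackedObject = GetComponent<SteamVR_TrackedObject>();
    }

    void Update()
    {
        var device = SteamVR_Controller.Input((int)trackedObject.index);
        if (device.GetPressDown(SteamVR_Controller.ButtonMask.ApplicationMenu))
        {
            if (ApplicationMenuPressed != null)
                ApplicationMenuPressed();
        }
    }
}

// Example subscriber: reacts to the button without polling itself.
public class TabletListener : MonoBehaviour
{
    public GameObject tabletModel;  // illustrative field

    void OnEnable()  { ControllerInputBroadcaster.ApplicationMenuPressed += Toggle; }
    void OnDisable() { ControllerInputBroadcaster.ApplicationMenuPressed -= Toggle; }

    void Toggle()    { tabletModel.SetActive(!tabletModel.activeSelf); }
}
```

The per-frame poll still happens, but only once, and the state change flows outward through the object system rather than each object checking the hardware.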