Hi, thanks a lot for this explanation.
Looking at the demo project, I see the rat model was copied 3 times onto 3 different layers: one for the normal render, one for the x-ray, and one for the stencil. The cameras then filter these layers. This makes a lot of sense.
If this were an actual character with some logic, wouldn't these three separate objects execute their logic independently of each other? How could we make sure their models overlap each other perfectly? I would assume they might fall out of sync at some point.
My idea would be that only one of the models is an actual character with logic, and the others are instances of that character's model, mirroring its current state within the scene. But I can't find a way to do that; MeshInstance doesn't accept another node as a source for the model.
Yes. They need to be animated and moved simultaneously via signal connections and RemoteTransforms. Basically, your main game-logic model (the one that's drawn 'normally' and not in a viewport):

- has a RemoteTransform child pointing at each viewport copy, so the copies follow its position and rotation, and
- has its AnimationPlayer connected (via signals) to the copies' AnimationPlayers, so they all play the same animations.

Those two things will take care of most cases.
When you move your player with the keyboard/AI/whatever, the remote transform (or a parent-child relationship) moves and rotates the copies to match in the world, so they don't need physics bodies of their own or any keyboard handling.
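For the transform side, something like this attached to the main character should do it (untested sketch, Godot 3 API; the node paths and the `KinematicBody` base are made up for illustration, adjust them to your scene):

```gdscript
extends KinematicBody

# Give the main character one RemoteTransform child per viewport copy.
# Each copy then mirrors this node's global transform every frame,
# with no physics or input logic of its own.
func _ready():
    for target_path in ["/root/Main/XrayViewport/RatXray",
                        "/root/Main/MaskViewport/RatMask"]:
        var rt = RemoteTransform.new()
        add_child(rt)
        # remote_path is resolved relative to the RemoteTransform itself.
        rt.remote_path = rt.get_path_to(get_node(target_path))
```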
When you play an animation, the signal connection should make sure that the viewport models also play their animations at the same time.
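The animation side works the same way: connect the main AnimationPlayer's signal and forward it to the copies. Again a sketch with hypothetical paths, assuming each copy has its own AnimationPlayer with identically named animations:

```gdscript
extends KinematicBody

# The copies' AnimationPlayers, found once at startup.
onready var _copy_players = [
    get_node("/root/Main/XrayViewport/RatXray/AnimationPlayer"),
    get_node("/root/Main/MaskViewport/RatMask/AnimationPlayer"),
]

func _ready():
    # Whenever the main AnimationPlayer starts an animation,
    # replay the same one on every copy.
    $AnimationPlayer.connect("animation_started", self, "_on_animation_started")

func _on_animation_started(anim_name):
    for player in _copy_players:
        player.play(anim_name)
```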
To clarify, that means the Xray and Mask models are just a MeshInstance with an AnimationPlayer or AnimationTree matching the original's, plus the x-ray and mask materials. All the logic of actually playing animations and moving is handled by the main model, through its AnimationPlayer and its RemoteTransform nodes.