iFacialMocap (Desktop Application)
iFacialMocap for Nvidia is a desktop application that performs face tracking from a webcam, using either an Nvidia RTX card or MediaPipe.
This guide will walk through setting up the iFacialMocap PC app from the Microsoft Store and the corresponding VRCFT tracking module.
Setup
For the best quality, use the Nvidia Broadcast input, which requires an Nvidia RTX-series card (RTX 2000 series or newer).
You can run this without an Nvidia RTX card by changing the input to MediaPipe, but expect lower tracking quality.
You MUST have good lighting to get good results, especially when using MediaPipe.
The INPUT called iFacialMocap can be ignored. It lets you connect an iPhone or Android device running iFacialMocap / MeowFace, but you should connect those directly to VRCFT with their respective modules rather than routing them through this app.
- Install iFacialMocap for Nvidia BROADCAST from the Microsoft Store.
- If you wish to use an RTX Card, install the latest Nvidia AR SDK from the webpage.
Make sure you download the AR SDK build that matches YOUR series of cards, or it may not work correctly. Grab the latest version of the SDK, which should also be the version used by VTube Studio; you can verify this below the download options in the AR SDK column.
- Start VRCFaceTracking and install the "iFacialMocap" VRCFT module from the VRCFaceTracking Module Registry.
- Start the "iFacialMocap Powered by Nvidia BROADCAST" app from your taskbar.
- Change the INPUT to either Nvidia Broadcast (if using an RTX Card) or MediaPipe.
- Change the OUTPUT to iFacialMocap.
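Once the OUTPUT is set to iFacialMocap, the desktop app streams tracking data over UDP on your local machine, and the VRCFT module receives it. If the module reports no data, one way to check whether packets are actually being sent is to listen on the port yourself. This is a debugging sketch, not part of the setup: close VRCFaceTracking first so the port is free, and note that the port number 49983 is an assumption based on iFacialMocap's commonly documented default.

```python
import socket

def listen_once(port: int, timeout: float = 2.0):
    """Wait for one UDP packet on `port`; return its bytes, or None on timeout."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.bind(("0.0.0.0", port))
        data, _addr = sock.recvfrom(65535)
        return data
    except socket.timeout:
        return None
    finally:
        sock.close()

# 49983 is the UDP port iFacialMocap is commonly documented to use;
# adjust it if your setup differs. A steady stream of packets means
# the desktop app's OUTPUT is sending data.
packet = listen_once(49983, timeout=2.0)
print("receiving data" if packet else "no data - check the OUTPUT setting")
```

If you see data arriving here but not in VRCFT, the problem is on the module side rather than in the desktop app.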
Setting up iFacialMocap
Camera input
To change the camera used by iFacialMocap, click "Open Advanced Setting" and look for the section called "Nvidia Display" (RTX) or "Show/Hide Camera" (MediaPipe). Just below these options you should see the name of your camera along with its index number.
Sometimes this index can be wrong and your camera may appear under a different name. This usually happens if you plugged in a new video device before opening the app (i.e. you connected a capture card) or because of a camera driver installation.
In these situations, you may have selected the correct camera (i.e. Logitech Camera) but get no input.
If the demo avatar or camera output is not showing:
- Make sure the camera is not being used by another app
- Try other cameras and see if one of them works
Sometimes, switching to what would seem to be the wrong input device fixes the issue; for example, changing to an Elgato capture card may start using your actual webcam. Make sure to save the settings once the correct camera has been selected.
iFacialMocap might not work with the default Windows drivers. If your camera is still not initializing after the troubleshooting above, look up your webcam or laptop model and install its respective drivers, or try another tracking method such as FoxyFace.
Calibration
You can reset your head, eye and mouth rotations by clicking the Calibration button.
For the best calibration:
- Position your camera where you want it to be (preferably above the monitor)
- While looking forwards towards the center of your monitor (i.e. towards your VRC desktop cursor), tap the "Calibration" button in the app
- If using Nvidia Broadcast, you should see the demo face shift in place and look straight ahead, indicating that the head orientation and face tracking have been successfully reset
Adjusting Weight and Smoothing
Most people shouldn't need to mess around with these settings, but feel free to tweak them if you think you can improve something or you'd like a blendshape to be easier to trigger.
Click on "Open Advanced Setting" to open the settings menu. Below the Camera settings and Send Version section, you should find the Smoothing options.
You can increase the smoothing value to reduce jitter, but increasing this too much will introduce latency, so adjust carefully.
Further down in the settings, you can increase or decrease the Weight of specific blendshapes (how strongly a given blendshape is triggered). If you're going to tweak this, it's recommended to ONLY increase the weight value, as the other options are aimed at vtubing apps and might have undesired side effects when used with VRCFT.
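Conceptually, the two settings behave like a gain followed by a low-pass filter. The sketch below only illustrates that idea; the function names and the exponential-moving-average formula are assumptions for illustration, not the app's actual implementation.

```python
def apply_weight(raw: float, weight: float) -> float:
    """Scale a raw blendshape value (0..1) by its weight, clamped back into range."""
    return min(max(raw * weight, 0.0), 1.0)

def apply_smoothing(current: float, previous: float, smoothing: float) -> float:
    """Exponential moving average: a higher `smoothing` value (0..1) means
    less jitter but more latency, because each new frame moves the
    output a smaller step toward the latest reading."""
    return previous + (1.0 - smoothing) * (current - previous)

# Example: a jawOpen reading of 0.6 with a weight of 1.5 becomes ~0.9,
# so the blendshape triggers more easily from the same facial movement.
boosted = apply_weight(0.6, 1.5)

# With smoothing at 0.5, the displayed value only moves halfway toward
# the new reading each frame, damping jitter at the cost of response time.
damped = apply_smoothing(boosted, 0.0, 0.5)
```

This is why raising the smoothing value too far makes your avatar's face lag behind your own: every frame of tracking data is averaged against the past.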
Tracked Blendshapes
Although the app lists all 52 ARKit blendshapes in its settings, Nvidia Broadcast does not track the tongueOut and cheekPuff blendshapes, and MediaPipe may track even fewer. Expect both tracking methods to struggle with subtle blendshapes such as mouthShrug.
Pros & Cons
For a webcam, this is one of the best tracking methods. It's easy to set up, and works universally across vtubing apps and VRCFT.
Compared to FoxyFace, it tracks fewer blendshapes, but it is easier to calibrate and the initial setup is faster.
Finally, compared to an iPhone/iPad, expect worse results and a need for very good lighting for this to work properly. Apple devices simply work better for face tracking.
Module
Interested in the source code? Check out the iFacialMocap module source repository