Chimera LipSync Bridge
Bridge audio to MetaHuman lip sync
The `ChimeraLipSyncBridgeComponent` connects audio streams to Epic's RuntimeMetaHumanLipSync system, enabling real-time lip synchronization for MetaHuman characters.
Properties
| Property | Type | Description |
|---|---|---|
| `TargetMetaHumanActor` | `AActor*` | Reference to the MetaHuman actor (auto-discovered if empty) |
| `FaceMeshSearchTag` | `FString` | Substring used to find the face mesh component (default: `"Face"`) |
| `TargetFaceMeshName` | `FName` | Exact-name override for the face mesh |
| `bAutoWireToAnimBP` | `bool` | Automatically inject the LipSyncGenerator |
| `InitialMood` | `ELipSyncMood` | Starting mood (`Neutral`/`Happy`/`Sad`/`Angry`) |
Events
OnLipSyncStarted
Fired when lip sync processing begins.
OnLipSyncStopped
Fired when lip sync processing stops.
How It Works
- Audio Reception: The Publisher receives audio from the server
- Audio Routing: The audio is passed to the LipSync Bridge
- Phoneme Analysis: RuntimeMetaHumanLipSync analyzes the audio
- Animation: The face mesh blendshapes are driven in real time
Auto-Wiring
When bAutoWireToAnimBP is enabled (default), the component automatically:
- Finds the MetaHuman's Face mesh component
- Creates a RuntimeMetaHumanLipSync generator
- Injects it into the Animation Blueprint
- Starts processing when audio arrives
Setup Guide
Prerequisites
- MetaHuman character in your level
- RuntimeMetaHumanLipSync plugin enabled
- Chimera Publisher component receiving audio
Automatic Setup (Recommended)
- Add `ChimeraLipSyncBridgeComponent` to your actor
- Set `TargetMetaHumanActor` to your MetaHuman (or leave it empty for auto-discovery)
- Keep `bAutoWireToAnimBP` enabled
- The component handles everything else automatically
Manual Setup
Blueprint Setup
- Add `Chimera LipSync Bridge` to your actor
- In the Details panel, set `Target MetaHuman Actor`
- Keep `Auto Wire to AnimBP` enabled
- The component automatically binds to the Publisher's `OnAudioDataReceived`
Troubleshooting
Lip sync not moving
- Verify MetaHuman has a valid Face mesh component
- Check that audio is being received (OnAudioDataReceived firing)
- Ensure RuntimeMetaHumanLipSync plugin is enabled
- Check Output Log for "LipSync" messages
Face mesh not found
- Set `FaceMeshSearchTag` to match your setup (default: `"Face"`)
- Or use `TargetFaceMeshName` for an exact match
- Check that the MetaHuman Blueprint has the expected structure
Audio delay
The lip sync has minimal latency by design. If you notice delay:
- Check overall audio pipeline latency
- Verify sample rates match (48000 Hz recommended)