# Blueprint Setup Guide

How to set up `BP_ChimeraActor` with Chimera components.

This guide explains how to create and configure a Blueprint actor with Chimera components for real-time audio streaming and MetaHuman lip sync.
## Overview

A typical Chimera actor needs three components:

- **Chimera Publisher** – connects to the room and streams audio in both directions
- **Chimera Audio Capture** – captures microphone audio with echo cancellation
- **Chimera LipSync Bridge** – drives MetaHuman lip sync from received audio
## Step 1: Create the Blueprint

- In the Content Browser, right-click → Blueprint Class
- Select Actor as the parent class
- Name it `BP_ChimeraActor`
- Double-click to open the Blueprint Editor
## Step 2: Add Components
In the Components panel:
- Click Add → search "Chimera Publisher" → add it
- Click Add → search "Chimera Audio Capture" → add it
- Click Add → search "Chimera LipSync Bridge" → add it
Your component hierarchy should look like:

- BP_ChimeraActor (Actor)
  - ChimeraPublisher
  - ChimeraAudioCapture
  - ChimeraLipSyncBridge
## Step 3: Configure Component Properties

### Chimera Publisher
Select the ChimeraPublisher component and configure in the Details panel:
| Property | Value | Notes |
|---|---|---|
| Room URL | ws://localhost:7880 | Or your server URL |
| Access Token | (leave empty) | Will use env var CHIMERA_TOKEN |
| Client Role | Both | For bidirectional audio |
| Receive Audio | ✓ Enabled | Required for lip sync |
| Sample Rate | 48000 | Recommended |
| Channels | 1 | Mono recommended |
| Auto-Connect Mic Capture | ✓ Enabled | Auto-connects to AudioCapture |
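If you prefer to set these defaults in C++, the equivalent might look like the sketch below. The class, enum, and property names (`UChimeraPublisher`, `EChimeraClientRole`, `bReceiveAudio`, etc.) are assumptions inferred from the Details-panel labels above and may differ from the actual plugin API; treat this as an illustrative, non-compiling sketch, not the plugin's real header.

```cpp
// Illustrative sketch only: class and property names are guessed from the
// Details-panel labels in the table above and may not match the real plugin API.
#include "ChimeraPublisher.h" // hypothetical plugin header

void AMyChimeraActor::ConfigurePublisher(UChimeraPublisher* Publisher)
{
    Publisher->RoomURL = TEXT("ws://localhost:7880"); // or your server URL
    Publisher->AccessToken = TEXT("");                // empty: falls back to the CHIMERA_TOKEN env var
    Publisher->ClientRole = EChimeraClientRole::Both; // bidirectional audio
    Publisher->bReceiveAudio = true;                  // required for lip sync
    Publisher->SampleRate = 48000;                    // recommended
    Publisher->Channels = 1;                          // mono recommended
    Publisher->bAutoConnectMicCapture = true;         // auto-connects to AudioCapture
}
```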
### Chimera Audio Capture
Select the ChimeraAudioCapture component:
| Property | Value | Notes |
|---|---|---|
| Enable Echo Cancellation | ✓ Enabled | Removes speaker echo |
| Enable Noise Suppression | ✓ Enabled | Reduces background noise |
| Enable Gain Control | ✓ Enabled | Auto volume adjustment |
| AEC Warm-Up Time | 100 | Milliseconds |
### Chimera LipSync Bridge
Select the ChimeraLipSyncBridge component:
| Property | Value | Notes |
|---|---|---|
| Target MetaHuman Actor | (your MetaHuman) | Or leave empty for auto-discovery |
| Face Mesh Search Tag | Face | Default works for standard MetaHumans |
| Auto Wire to AnimBP | ✓ Enabled | Automatically configures lip sync |
## Step 4: Create Component Variables
For Blueprint logic, create variables to reference the components:
- In My Blueprint panel → Variables → click +
- Create these variables:
| Variable Name | Type |
|---|---|
| ChimeraPublisher | Chimera Publisher (Object Reference) |
| ChimeraAudioCapture | Chimera Audio Capture (Object Reference) |
- For each variable:
  - Select the variable
  - In Details, find Default Value
  - Drag from the Components panel to set the reference
## Step 5: Set Up the Event Graph

### Wire `OnAudioDataReceived` to `FeedRenderAudioFloat`
This is critical for echo cancellation to work. The received audio (AI voice) must be fed to the AEC so it can remove it from the microphone signal.
- Select the ChimeraPublisher component
- In the Details panel → scroll to the Events section
- Click the + next to `On Audio Data Received`; this creates an event node in the Event Graph
Now connect the logic in the Event Graph:
- Drag the ChimeraAudioCapture variable into the graph
- Drag from its output pin → search "Feed Render Audio Float"
- Connect the pins:
  - Audio Data → Audio Data
  - Frames Per Channel → Frames Per Channel
  - Channels → In Channels
- Connect the exec pin from the event to the function
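For reference, the same wiring in C++ amounts to binding the delegate and forwarding the received samples into the echo canceller. The delegate signature below is an assumption inferred from the Blueprint pin names above (Audio Data, Frames Per Channel, Channels), and the `Publisher`/`AudioCapture` members are assumed component references; this is a hedged sketch, not the plugin's actual header.

```cpp
// Sketch: assumes OnAudioDataReceived is a dynamic multicast delegate whose
// parameters mirror the Blueprint pins, and that Publisher/AudioCapture are
// component references on this actor.
void AMyChimeraActor::BeginPlay()
{
    Super::BeginPlay();
    Publisher->OnAudioDataReceived.AddDynamic(this, &AMyChimeraActor::HandleAudioReceived);
}

void AMyChimeraActor::HandleAudioReceived(const TArray<float>& AudioData,
                                          int32 FramesPerChannel,
                                          int32 Channels)
{
    // Feed the received AI voice to the AEC so it can be subtracted
    // from the microphone signal.
    AudioCapture->FeedRenderAudioFloat(AudioData, FramesPerChannel, Channels);
}
```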
## Step 6: Connect to MetaHuman (Optional)
If your MetaHuman is a separate actor in the level:
- Add a variable `TargetMetaHuman` of type Actor (Object Reference)
- Make it Instance Editable (click the eye icon)
- In BeginPlay, set the LipSync target:
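In Blueprint, this is a BeginPlay event that passes the `TargetMetaHuman` variable to the LipSync Bridge component. A hedged C++ equivalent is sketched below; the setter name `SetTargetMetaHumanActor` is an assumption derived from the "Target MetaHuman Actor" property in Step 3 and may differ in the actual plugin.

```cpp
// Sketch: "SetTargetMetaHumanActor" is an assumed setter name for the
// "Target MetaHuman Actor" property shown in Step 3; LipSyncBridge is an
// assumed component reference on this actor.
void AMyChimeraActor::BeginPlay()
{
    Super::BeginPlay();
    if (TargetMetaHuman)
    {
        LipSyncBridge->SetTargetMetaHumanActor(TargetMetaHuman);
    }
}
```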
## Complete Event Graph
Here's the full Event Graph setup:
## Step 7: Place in Level
- Drag `BP_ChimeraActor` into your level
- If using a separate MetaHuman:
  - Select BP_ChimeraActor in the level
  - In Details, set `Target MetaHuman` to your MetaHuman actor
- If the MetaHuman is a child of BP_ChimeraActor:
  - The LipSync Bridge will auto-discover it
## Verification
When you Play:
- The Output Log should show:
  - "Chimera Publisher connecting..."
  - "Connected to room"
  - "LipSync Bridge: Found face mesh"
- Audio flow:
  - Microphone audio is captured
  - Echo cancellation removes speaker audio
  - Clean audio is sent to the server
  - Received audio drives lip sync
## Troubleshooting

### Components not found when searching
Make sure the Chimera Bridge plugin is enabled in Edit → Plugins.
### `OnAudioDataReceived` not firing
- Check `Receive Audio` is enabled on the Publisher
- Verify the server is sending audio
- Check connection status in Output Log
### Echo cancellation not working
- Verify `FeedRenderAudioFloat` is connected correctly
- Check `Enable Echo Cancellation` is enabled
- Ensure audio is actually being received (`OnAudioDataReceived` firing)
### Lip sync not working
- Check `Target MetaHuman Actor` is set
- Verify the RuntimeMetaHumanLipSync plugin is enabled
- Check Output Log for "Face mesh not found" errors