Chimera Bridge

Blueprint Setup

How to set up BP_ChimeraActor with Chimera components

This guide explains how to create and configure a Blueprint actor with Chimera components for real-time audio streaming and MetaHuman lip sync.

Overview

A typical Chimera actor needs three components:

BP_ChimeraActor
├── Chimera Publisher         (connection & audio streaming)
├── Chimera Audio Capture     (microphone with echo cancellation)
└── Chimera LipSync Bridge    (MetaHuman lip sync)

Step 1: Create the Blueprint

  1. In Content Browser, right-click → Blueprint Class
  2. Select Actor as parent class
  3. Name it BP_ChimeraActor
  4. Double-click to open the Blueprint Editor

Step 2: Add Components

In the Components panel:

  1. Click Add → search "Chimera Publisher" → add it
  2. Click Add → search "Chimera Audio Capture" → add it
  3. Click Add → search "Chimera LipSync Bridge" → add it

Your component hierarchy should look like:

BP_ChimeraActor (Self)
├── DefaultSceneRoot
├── ChimeraPublisher
├── ChimeraAudioCapture
└── ChimeraLipSyncBridge

Step 3: Configure Component Properties

Chimera Publisher

Select the ChimeraPublisher component and configure in the Details panel:

Property                    Value                  Notes
─────────────────────────────────────────────────────────────────────────
Room URL                    ws://localhost:7880    Or your server URL
Access Token                (leave empty)          Will use env var CHIMERA_TOKEN
Client Role                 Both                   For bidirectional audio
Receive Audio               ✓ Enabled              Required for lip sync
Sample Rate                 48000                  Recommended
Channels                    1                      Mono recommended
Auto-Connect Mic Capture    ✓ Enabled              Auto-connects to AudioCapture
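The recommended format (48000 Hz, mono) determines buffer sizes throughout the pipeline. As a rough sketch of the arithmetic (the 10 ms packet duration below is an assumption for illustration, not a documented Chimera value):

```cpp
#include <cassert>
#include <cstddef>

// Frames (samples per channel) in one audio packet.
// Example: 48000 Hz at an assumed 10 ms packet -> 480 frames.
constexpr std::size_t FramesPerPacket(std::size_t SampleRate, std::size_t PacketMs)
{
    return SampleRate * PacketMs / 1000;
}

// Total interleaved samples in one packet across all channels.
// With mono (Channels = 1) this equals FramesPerPacket.
constexpr std::size_t SamplesPerPacket(std::size_t SampleRate, std::size_t PacketMs,
                                       std::size_t Channels)
{
    return FramesPerPacket(SampleRate, PacketMs) * Channels;
}
```

Mono keeps these buffers half the size of stereo, which is why it is the recommended default for voice.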

Chimera Audio Capture

Select the ChimeraAudioCapture component:

Property                    Value        Notes
───────────────────────────────────────────────────────
Enable Echo Cancellation    ✓ Enabled    Removes speaker echo
Enable Noise Suppression    ✓ Enabled    Reduces background noise
Enable Gain Control         ✓ Enabled    Auto volume adjustment
AEC Warm-Up Time            100          Milliseconds
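The warm-up time gives the echo canceller a convergence window before its output is trusted. A hypothetical sketch of how such a gate could behave (this is an illustration, not the plugin's actual implementation):

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical warm-up gate: treat AEC output as unusable until the
// configured warm-up time worth of samples has been processed.
class WarmUpGate
{
public:
    WarmUpGate(std::size_t SampleRate, std::size_t WarmUpMs)
        : WarmUpSamples(SampleRate * WarmUpMs / 1000) {}

    // Advance by one processed block; returns true once the
    // warm-up window has elapsed and output can be used.
    bool Advance(std::size_t NumSamples)
    {
        SamplesSeen += NumSamples;
        return SamplesSeen >= WarmUpSamples;
    }

private:
    std::size_t WarmUpSamples;
    std::size_t SamplesSeen = 0;
};
```

At 48000 Hz, the default of 100 ms corresponds to 4800 samples before the canceller is considered settled.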

Chimera LipSync Bridge

Select the ChimeraLipSyncBridge component:

Property                   Value               Notes
────────────────────────────────────────────────────────────────────────
Target MetaHuman Actor     (your MetaHuman)    Or leave empty for auto-discovery
Face Mesh Search Tag       Face                Default works for standard MetaHumans
Auto Wire to AnimBP        ✓ Enabled           Automatically configures lip sync

Step 4: Create Component Variables

For Blueprint logic, create variables to reference the components:

  1. In My Blueprint panel → Variables → click +
  2. Create these variables:
     Variable Name          Type
     ────────────────────────────────────────────────────
     ChimeraPublisher       Chimera Publisher (Object Reference)
     ChimeraAudioCapture    Chimera Audio Capture (Object Reference)

  3. For each variable:
    • Select the variable
    • In Details, find Default Value
    • Drag from the Components panel to set the reference

Step 5: Set Up Event Graph

Wire OnAudioDataReceived to FeedRenderAudioFloat

This wiring is critical for echo cancellation. The received audio (the AI voice played through the speakers) must be fed to the AEC so it can be subtracted from the microphone signal.

  1. Select the ChimeraPublisher component
  2. In Details panel → scroll to Events section
  3. Click the + next to On Audio Data Received
  4. This creates an event node in the Event Graph

Now connect the logic:

┌─────────────────────────────────────┐
│ Event OnAudioDataReceived           │
│ (ChimeraPublisher)                  │
├─────────────────────────────────────┤
│ ○ Audio Data ───────────────────────────┐
│ ○ Frames Per Channel ───────────────────┤
│ ○ Channels ─────────────────────────────┤
│ ○ Sample Rate (unused)              │   │
└─────────────○───────────────────────┘   │
              │ (exec)                    │
              ▼                           │
┌─────────────────────────────────────┐   │
│ Feed Render Audio Float             │   │
│ Target: ChimeraAudioCapture         │◄──┘
├─────────────────────────────────────┤
│ ● Audio Data                        │
│ ● Frames Per Channel                │
│ ● In Channels                       │
└─────────────────────────────────────┘

To create this in Blueprint:

  1. Drag the ChimeraAudioCapture variable into the graph
  2. Drag from its output pin → search "Feed Render Audio Float"
  3. Connect the pins:
    • Audio Data → Audio Data
    • Frames Per Channel → Frames Per Channel
    • Channels → In Channels
  4. Connect the exec pin from the event to the function
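Conceptually, this wiring hands the echo canceller a reference copy of whatever is about to play through the speakers, so it can be removed from the next microphone capture. A minimal C++ model of that data flow (illustration only: a real AEC is adaptive and delay-compensated, and ToyEchoCanceller is not the plugin's API):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Toy model of the FeedRenderAudioFloat data flow. FeedRender() stores the
// audio sent to the speakers; ProcessCapture() then subtracts that reference
// from the microphone signal, leaving only the local speech.
class ToyEchoCanceller
{
public:
    void FeedRender(const std::vector<float>& RenderAudio)
    {
        Reference = RenderAudio; // remember what the speakers played
    }

    std::vector<float> ProcessCapture(const std::vector<float>& MicAudio) const
    {
        std::vector<float> Clean = MicAudio;
        for (std::size_t i = 0; i < Clean.size() && i < Reference.size(); ++i)
        {
            Clean[i] -= Reference[i]; // remove the echoed speaker signal
        }
        return Clean;
    }

private:
    std::vector<float> Reference;
};
```

If the render feed is never connected, the canceller has no reference signal, and the AI's own voice loops back to the server through the microphone.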

Step 6: Connect to MetaHuman (Optional)

If your MetaHuman is a separate actor in the level:

  1. Add a variable TargetMetaHuman of type Actor (Object Reference)
  2. Make it Instance Editable (click the eye icon)
  3. In BeginPlay, set the LipSync target:
Event BeginPlay
       │
       ▼
┌─────────────────────────────────────┐
│ Set Target MetaHuman Actor          │
│ Target: ChimeraLipSyncBridge        │
│ ● New Target: TargetMetaHuman var   │
└─────────────────────────────────────┘

Complete Event Graph

Here's the full Event Graph setup:

┌──────────────────┐
│ Event BeginPlay  │
└────────○─────────┘


    (Optional: Set MetaHuman target)


    (Connection happens automatically via Publisher settings)


┌─────────────────────────────────────┐
│ Event OnAudioDataReceived           │
│ (ChimeraPublisher)                  │
└────────○────○───○───○───────────────┘
         │    │   │   │
         │    │   │   └── Sample Rate (unused)
         │    │   └────── Channels
         │    └────────── Frames Per Channel
         └─────────────── Audio Data


┌────────────────────────────────────┐
│ Feed Render Audio Float            │
│ Target: ChimeraAudioCapture        │
│ ● Audio Data ◄─────────────────────
│ ● Frames Per Channel ◄─────────────
│ ● In Channels ◄────────────────────
└────────────────────────────────────┘

Step 7: Place in Level

  1. Drag BP_ChimeraActor into your level
  2. If using a separate MetaHuman:
    • Select BP_ChimeraActor in the level
    • In Details, set Target MetaHuman Actor to your MetaHuman actor
  3. If MetaHuman is a child of BP_ChimeraActor:
    • The LipSync Bridge will auto-discover it

Verification

When you Play:

  1. Output Log should show:

    • "Chimera Publisher connecting..."
    • "Connected to room"
    • "LipSync Bridge: Found face mesh"
  2. Audio flow:

    • Microphone audio is captured
    • Echo cancellation removes speaker audio
    • Clean audio is sent to server
    • Received audio drives lip sync

Troubleshooting

Components not found when searching

Make sure the Chimera Bridge plugin is enabled in Edit → Plugins.

OnAudioDataReceived not firing

  1. Check Receive Audio is enabled on Publisher
  2. Verify server is sending audio
  3. Check connection status in Output Log

Echo cancellation not working

  1. Verify FeedRenderAudioFloat is connected correctly
  2. Check Enable Echo Cancellation is enabled
  3. Ensure audio is actually being received (OnAudioDataReceived firing)

Lip sync not working

  1. Check Target MetaHuman Actor is set
  2. Verify RuntimeMetaHumanLipSync plugin is enabled
  3. Check Output Log for "Face mesh not found" errors