j.foucher 224af6a27b WIP: Add ElevenLabsLipSyncComponent with spectral analysis lip sync
Real-time lip sync component that performs client-side spectral analysis
on the agent's PCM audio stream (ElevenLabs doesn't provide viseme data).

Pipeline: 512-point FFT (16kHz) → 5 frequency bands → 15 OVR visemes
→ ARKit blendshapes (MetaHuman compatible) → auto-apply morph targets.
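The first two stages of this pipeline (spectrum → band energies) can be sketched in plain C++ with no engine dependencies. The band edges, Hann window, and naive DFT below are illustrative assumptions for the sketch; the component itself would use a real 512-point FFT and its own band table.

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <complex>
#include <vector>

// Illustrative constants matching the commit message.
constexpr int kFftSize = 512;
constexpr double kSampleRate = 16000.0;
constexpr double kPi = 3.14159265358979323846;

// Sum spectral power into 5 frequency bands over one 512-sample PCM frame.
// Band edges (Hz) are assumed, not taken from the component.
std::array<double, 5> BandEnergies(const std::vector<float>& pcm) {
    const double edges[6] = {0.0, 500.0, 1000.0, 2000.0, 4000.0, 8000.0};
    std::array<double, 5> bands{};
    for (int k = 1; k < kFftSize / 2; ++k) {  // unique bins for real input
        std::complex<double> sum{0.0, 0.0};
        for (int n = 0; n < kFftSize; ++n) {
            // Hann window to limit spectral leakage between bands.
            const double w = 0.5 - 0.5 * std::cos(2.0 * kPi * n / (kFftSize - 1));
            const double a = -2.0 * kPi * k * n / kFftSize;
            sum += w * static_cast<double>(pcm[n])
                     * std::complex<double>(std::cos(a), std::sin(a));
        }
        const double freq = k * kSampleRate / kFftSize;  // 31.25 Hz per bin
        for (int b = 0; b < 5; ++b) {
            if (freq >= edges[b] && freq < edges[b + 1]) {
                bands[b] += std::norm(sum);  // accumulate |X[k]|^2
                break;
            }
        }
    }
    return bands;
}
```

With these assumed edges, a 300 Hz tone lands almost entirely in band 0 and a 3 kHz tone in band 3; the ratios between band energies would then drive the OVR viseme weights.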

Currently uses SetMorphTarget(), which may be overridden by the MetaHuman
Face AnimBP; face animation is not yet working. Debug logs added to
diagnose audio flow, spectrum energy, and morph target name matching.
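One plausible cause of the name-matching failures being diagnosed: the component writes ARKit-style curve names (e.g. jawOpen) while MetaHuman meshes typically expose prefixed curves such as CTRL_expressions_jawOpen. A standalone sketch of a tolerant lookup, using hypothetical names and no engine types, could look like:

```cpp
#include <algorithm>
#include <cassert>
#include <cctype>
#include <string>
#include <vector>

// Hypothetical helper: resolve an ARKit blendshape name against the
// mesh's actual morph target list. Exact match first, then a
// case-insensitive suffix match to catch prefixed MetaHuman curves.
std::string ResolveMorphName(const std::string& arkitName,
                             const std::vector<std::string>& meshTargets) {
    auto lower = [](std::string s) {
        std::transform(s.begin(), s.end(), s.begin(),
                       [](unsigned char c) { return std::tolower(c); });
        return s;
    };
    for (const auto& t : meshTargets)      // 1) exact match
        if (t == arkitName) return t;
    const std::string needle = lower(arkitName);
    for (const auto& t : meshTargets) {    // 2) case-insensitive suffix match
        const std::string hay = lower(t);
        if (hay.size() >= needle.size() &&
            hay.compare(hay.size() - needle.size(), needle.size(), needle) == 0)
            return t;
    }
    return {};                             // no match: log and skip this curve
}
```

A bare suffix match can be ambiguous on a real mesh, so a production version should cache a curated per-mesh mapping rather than matching on every frame.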

Next steps: verify the debug output, then fix the MetaHuman morph target
override (likely needs AnimBP integration, similar to the Convai approach).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-22 11:23:34 +01:00
Description: Eleven Labs Integration for Unreal Engine 5.5
Repository size: 3.5 GiB
Languages: C++ 90%, Python 4.8%, HTML 4.3%, Batchfile 0.6%, C# 0.3%