Real-time lip-sync component that performs client-side spectral analysis on the agent's PCM audio stream (ElevenLabs does not provide viseme data). Pipeline: 512-point FFT at 16 kHz → 5 frequency bands → 15 OVR visemes → ARKit blendshapes (MetaHuman compatible) → morph targets applied automatically. Currently calls SetMorphTarget(), whose values may be overridden by MetaHuman's Face AnimBP, so face animation is not yet working. Debug logs added to diagnose audio flow, spectrum energy, and morph-target name matching. Next steps: verify the debug output and fix the MetaHuman morph-target override (likely requires AnimBP integration, similar to the Convai approach).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
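The FFT → band-energy step of the pipeline can be sketched in standard C++. A 512-point FFT on 16 kHz audio yields 257 magnitude bins spanning 0–8 kHz (~31.25 Hz per bin); the band edges and max-normalization below are illustrative assumptions, not the component's actual values:

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <vector>

// Collapse FFT magnitude bins into 5 band energies that can later drive
// viseme weights. Band edges are assumed for illustration.
std::array<float, 5> ComputeBandEnergies(const std::vector<float>& Magnitudes)
{
    // Upper edge of each band in Hz (assumed split of the 0-8 kHz range).
    static const std::array<float, 5> BandEdgesHz =
        {500.f, 1000.f, 2000.f, 4000.f, 8000.f};
    const float BinHz = 16000.f / 512.f; // ~31.25 Hz per FFT bin

    std::array<float, 5> Energies = {};
    std::size_t Band = 0;
    for (std::size_t Bin = 0; Bin < Magnitudes.size(); ++Bin)
    {
        const float FreqHz = static_cast<float>(Bin) * BinHz;
        while (Band < 4 && FreqHz >= BandEdgesHz[Band]) { ++Band; }
        Energies[Band] += Magnitudes[Bin] * Magnitudes[Bin]; // accumulate power
    }

    // Normalize so the strongest band is 1.0; relative band energies
    // are what a viseme-selection step would compare.
    float MaxEnergy = 0.f;
    for (float E : Energies) { MaxEnergy = std::max(MaxEnergy, E); }
    if (MaxEnergy > 0.f)
    {
        for (float& E : Energies) { E /= MaxEnergy; }
    }
    return Energies;
}
```

In the actual component this output would feed the 5-band → 15-viseme mapping; the dominant band (and the ratios between bands) selects and blends viseme weights before conversion to ARKit blendshape values.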
Description: Eleven Labs Integration for Unreal Engine 5.5
Languages: C++ 90%, Python 4.8%, HTML 4.3%, Batchfile 0.6%, C# 0.3%