Compare commits

...

4 Commits

Author SHA1 Message Date
5323da6908 no message 2026-03-01 12:46:26 +01:00
350558ba50 v3.0.1: Packaged build fixes, conversation-triggered body tracking, diagnostic cleanup
Packaged build fixes:
- Use EvaluateCurveData() for emotion and lip sync curves (works with
  compressed/cooked data instead of raw FloatCurves)
- Add FMemMark wrapper for game-thread curve evaluation (FBlendedCurve
  uses TMemStackAllocator)
- Lazy binding in AnimNodes and LipSyncComponent for packaged build
  component initialization order
- SetIsReplicatedByDefault(true) instead of SetIsReplicated(true)
- Load AudioCapture module explicitly in plugin StartupModule
- Bundle cacert.pem for SSL in packaged builds
- Add DirectoriesToAlwaysStageAsNonUFS for certificates
- Add EmotionPoseMap and LipSyncPoseMap .cpp implementations

Body tracking:
- Body tracking now activates on conversation start (HandleAgentResponseStarted)
  instead of on selection, creating a natural notice→engage two-step:
  eyes+head track on selection, body turns when agent starts responding
- SendTextMessage also enables body tracking for text input

Cleanup:
- Remove all temporary [DIAG] and [BODY] debug logs
- Gate PostureComponent periodic debug log behind bDebug flag
- Remove obsolete editor-time curve caches (CachedCurveData, RebuildCurveCache,
  FPS_AI_ConvAgent_CachedEmotionCurves, FPS_AI_ConvAgent_CachedPoseCurves)
  from EmotionPoseMap and LipSyncPoseMap — no longer needed since
  EvaluateCurveData() reads compressed curves directly at runtime

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 12:45:11 +01:00
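A minimal sketch of the curve-evaluation pattern behind the first two packaged-build fixes above, assuming an arbitrary UAnimSequence* Seq and float Time (the full versions appear in the FacialExpression and LipSync diffs below):

    FMemMark Mark(FMemStack::Get());      // FBlendedCurve allocates via TMemStackAllocator
    FBlendedCurve Curve;
    Seq->EvaluateCurveData(Curve, Time);  // reads raw (editor) or compressed (cooked) curves
    Curve.ForEachElement([](const UE::Anim::FCurveElement& E)
    {
        // E.Name / E.Value: curve name and its evaluated value at Time
    });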
21298e01b0 Add LAN test scripts: Package, Host, Join batch files
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 11:41:58 +01:00
765da966a2 Add Listen Server networking: exclusive NPC lock, Opus audio broadcast, LOD
- Replicate conversation state (bNetIsConversing, NetConversatingPlayer) for exclusive NPC locking
- Opus encode TTS audio on server, multicast to all clients for shared playback
- Replicate emotion state (OnRep) so clients compute facial expressions locally
- Multicast speaking/interrupted/text events so lip sync and posture run locally
- Route mic audio via Server RPC (client→server→ElevenLabs WebSocket)
- LOD: cull audio beyond 30m, skip lip sync beyond 15m for non-speaker clients
- Auto-detect player disconnection and release NPC on authority
- InteractionComponent: skip occupied NPCs, auto-start conversation on selection
- No changes to LipSync, Posture, FacialExpression, MicCapture or AnimNodes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 11:04:07 +01:00
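A condensed sketch of the exclusive-lock flow described above; the component class name is shortened to UAgentComp here for brevity, and the real implementation is in the ElevenLabsComponent diff below:

    // Client asks for the NPC; the server grants the lock only if it is free.
    void UAgentComp::ServerRequestConversation_Implementation(APlayerController* PC)
    {
        if (bNetIsConversing)
        {
            ClientConversationFailed(TEXT("NPC busy"));  // deny: another player holds it
            return;
        }
        bNetIsConversing = true;       // replicated: other clients now skip this NPC
        NetConversatingPlayer = PC;    // replicated: identifies the speaking player
        StartConversation_Internal();  // server opens the ElevenLabs WebSocket
    }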
26 changed files with 7423 additions and 103 deletions

.gitignore
View File

@ -20,3 +20,5 @@ Unreal/PS_AI_Agent/*.suo
# Documentation generator script (dev tool, output .pptx is committed instead)
generate_pptx.py
~$PS_AI_Agent_ElevenLabs_Documentation.pptx
Unreal/PS_AI_Agent/Build/
Unreal/PS_AI_Agent/Builds/

View File

@ -1,3 +1,6 @@
[/Script/EngineSettings.GeneralProjectSettings]
ProjectID=C89AEFD7484597308B8E6EB1C7AE0965
[/Script/UnrealEd.ProjectPackagingSettings]
+DirectoriesToAlwaysStageAsNonUFS=(Path="Certificates")

File diff suppressed because it is too large

View File

@ -0,0 +1,25 @@
@echo off
REM ============================================================================
REM HOST — Launch the game as a Listen Server
REM This PC is the server + first player
REM Other PCs use Join.bat to connect
REM ============================================================================
echo.
echo ========================================
echo PS_AI_Agent — HOST (Listen Server)
echo ========================================
echo.
REM Detect IP address for display
for /f "tokens=2 delims=:" %%a in ('ipconfig ^| findstr /C:"IPv4"') do (
set IP=%%a
goto :found
)
:found
REM Strip the leading space left over from the "delims=:" split
set IP=%IP: =%
echo Other players must enter this IP: %IP%
echo.
echo Press any key to launch...
pause >nul
start "" "%~dp0PS_AI_Agent\Binaries\Win64\PS_AI_Agent.exe" /PS_AI_ConvAgent/Demo_Metahuman?listen -log -windowed -ResX=1920 -ResY=1080 -WinX=0 -WinY=0

View File

@ -0,0 +1,25 @@
@echo off
REM ============================================================================
REM JOIN — Connect to a Host on the LAN
REM Enter the IP shown by the Host
REM ============================================================================
echo.
echo ========================================
echo PS_AI_Agent — JOIN (Client)
echo ========================================
echo.
set /p HOST_IP="Enter the Host IP: "
if "%HOST_IP%"=="" (
echo No IP entered. Aborting.
pause
exit /b 1
)
echo.
echo Connecting to %HOST_IP% ...
echo.
start "" "%~dp0PS_AI_Agent\Binaries\Win64\PS_AI_Agent.exe" %HOST_IP% -log -windowed -ResX=1920 -ResY=1080 -WinX=0 -WinY=0

View File

@ -36,6 +36,10 @@
"Linux",
"Android"
]
},
{
"Name": "AudioCapture",
"Enabled": true
}
]
}

View File

@ -0,0 +1,56 @@
@echo off
REM ============================================================================
REM Package PS_AI_Agent — Development build for LAN testing
REM Output: Builds\Windows\
REM ============================================================================
set UE_ROOT=C:\Program Files\Epic Games\UE_5.5
set PROJECT=C:\ASTERION\GIT\PS_AI_Agent\Unreal\PS_AI_Agent\PS_AI_Agent.uproject
set OUTPUT=C:\ASTERION\GIT\PS_AI_Agent\Unreal\PS_AI_Agent\Builds
echo.
echo ========================================
echo Packaging PS_AI_Agent (Development)
echo ========================================
echo.
echo Output: %OUTPUT%\Windows\
echo.
"%UE_ROOT%\Engine\Build\BatchFiles\RunUAT.bat" BuildCookRun ^
-project="%PROJECT%" ^
-noP4 ^
-platform=Win64 ^
-clientconfig=Development ^
-build ^
-cook ^
-stage ^
-pak ^
-archive ^
-archivedirectory="%OUTPUT%" ^
-utf8output
if %ERRORLEVEL% NEQ 0 (
echo.
echo ========================================
echo BUILD FAILED
echo ========================================
pause
exit /b 1
)
echo.
echo ========================================
echo BUILD SUCCESSFUL
echo ========================================
echo.
echo Copy the Builds\Windows\ folder to each PC.
echo Use Host.bat and Join.bat to launch.
echo.
REM Copy Host/Join scripts into the build output
copy /Y "%~dp0Host.bat" "%OUTPUT%\Windows\"
copy /Y "%~dp0Join.bat" "%OUTPUT%\Windows\"
echo Host.bat and Join.bat scripts copied to Builds\Windows\
echo.
pause

File diff suppressed because it is too large

View File

@ -57,6 +57,30 @@ void FAnimNode_PS_AI_ConvAgent_FacialExpression::Update_AnyThread(const FAnimati
{
BasePose.Update(Context);
// Lazy lookup: in packaged builds, Initialize_AnyThread may run before
// components are created. Retry discovery until found.
if (!FacialExpressionComponent.IsValid())
{
if (const FAnimInstanceProxy* Proxy = Context.AnimInstanceProxy)
{
if (const USkeletalMeshComponent* SkelMesh = Proxy->GetSkelMeshComponent())
{
if (AActor* Owner = SkelMesh->GetOwner())
{
UPS_AI_ConvAgent_FacialExpressionComponent* Comp =
Owner->FindComponentByClass<UPS_AI_ConvAgent_FacialExpressionComponent>();
if (Comp)
{
FacialExpressionComponent = Comp;
UE_LOG(LogPS_AI_ConvAgent_FacialExprAnimNode, Log,
TEXT("PS AI ConvAgent Facial Expression AnimNode (late) bound to component on %s."),
*Owner->GetName());
}
}
}
}
}
// Cache emotion curves from the facial expression component.
// GetCurrentEmotionCurves() returns CTRL_expressions_* curves
// extracted from emotion pose AnimSequences, already smoothly blended.

View File

@ -57,6 +57,30 @@ void FAnimNode_PS_AI_ConvAgent_LipSync::Update_AnyThread(const FAnimationUpdateC
{
BasePose.Update(Context);
// Lazy lookup: in packaged builds, Initialize_AnyThread may run before
// components are created. Retry discovery until found.
if (!LipSyncComponent.IsValid())
{
if (const FAnimInstanceProxy* Proxy = Context.AnimInstanceProxy)
{
if (const USkeletalMeshComponent* SkelMesh = Proxy->GetSkelMeshComponent())
{
if (AActor* Owner = SkelMesh->GetOwner())
{
UPS_AI_ConvAgent_LipSyncComponent* Comp =
Owner->FindComponentByClass<UPS_AI_ConvAgent_LipSyncComponent>();
if (Comp)
{
LipSyncComponent = Comp;
UE_LOG(LogPS_AI_ConvAgent_LipSyncAnimNode, Log,
TEXT("PS AI ConvAgent Lip Sync AnimNode (late) bound to component on %s."),
*Owner->GetName());
}
}
}
}
}
// Cache ARKit blendshape data from the lip sync component.
// GetCurrentBlendshapes() returns ARKit names (jawOpen, mouthFunnel, etc.).
// These are injected as-is into the pose curves; the downstream

View File

@ -196,7 +196,7 @@ void FAnimNode_PS_AI_ConvAgent_Posture::CacheBones_AnyThread(const FAnimationCac
{
ChainBoneIndices.Add(FCompactPoseBoneIndex(INDEX_NONE));
ChainRefPoseRotations.Add(FQuat::Identity);
UE_LOG(LogPS_AI_ConvAgent_PostureAnimNode, Warning,
UE_LOG(LogPS_AI_ConvAgent_PostureAnimNode, Verbose,
TEXT(" Chain bone [%d] '%s' NOT FOUND in skeleton!"),
i, *ChainBoneNames[i].ToString());
}
@ -223,14 +223,14 @@ void FAnimNode_PS_AI_ConvAgent_Posture::CacheBones_AnyThread(const FAnimationCac
}
else
{
UE_LOG(LogPS_AI_ConvAgent_PostureAnimNode, Warning,
UE_LOG(LogPS_AI_ConvAgent_PostureAnimNode, Verbose,
TEXT("Head bone '%s' NOT FOUND in skeleton. Available bones:"),
*HeadBoneName.ToString());
const int32 NumBones = FMath::Min(RefSkeleton.GetNum(), 10);
for (int32 i = 0; i < NumBones; ++i)
{
UE_LOG(LogPS_AI_ConvAgent_PostureAnimNode, Warning,
UE_LOG(LogPS_AI_ConvAgent_PostureAnimNode, Verbose,
TEXT(" [%d] %s"), i, *RefSkeleton.GetBoneName(i).ToString());
}
}
@ -324,6 +324,30 @@ void FAnimNode_PS_AI_ConvAgent_Posture::Update_AnyThread(const FAnimationUpdateC
{
BasePose.Update(Context);
// Lazy lookup: in packaged builds, Initialize_AnyThread may run before
// components are created. Retry discovery until found.
if (!PostureComponent.IsValid())
{
if (const FAnimInstanceProxy* Proxy = Context.AnimInstanceProxy)
{
if (const USkeletalMeshComponent* SkelMesh = Proxy->GetSkelMeshComponent())
{
if (AActor* Owner = SkelMesh->GetOwner())
{
UPS_AI_ConvAgent_PostureComponent* Comp =
Owner->FindComponentByClass<UPS_AI_ConvAgent_PostureComponent>();
if (Comp)
{
PostureComponent = Comp;
UE_LOG(LogPS_AI_ConvAgent_PostureAnimNode, Log,
TEXT("PS AI ConvAgent Posture AnimNode (late) bound to component on %s."),
*Owner->GetName());
}
}
}
}
}
// Cache posture data from the component (game thread safe copy).
// IMPORTANT: Do NOT reset CachedHeadRotation to Identity when the
// component is momentarily invalid (GC pause, re-registration, etc.).
@ -398,7 +422,7 @@ void FAnimNode_PS_AI_ConvAgent_Posture::Evaluate_AnyThread(FPoseContext& Output)
const bool bHasEyeBones = (LeftEyeBoneIndex.GetInt() != INDEX_NONE);
UE_LOG(LogPS_AI_ConvAgent_PostureAnimNode, Warning,
UE_LOG(LogPS_AI_ConvAgent_PostureAnimNode, Verbose,
TEXT("[%s] Posture Evaluate: HeadComp=%.2f EyeComp=%.2f DriftComp=%.2f Valid=%s HeadRot=(%s) Eyes=%d Chain=%d Ancestors=%d"),
NodeRole,
CachedHeadCompensation,
@ -420,7 +444,7 @@ void FAnimNode_PS_AI_ConvAgent_Posture::Evaluate_AnyThread(FPoseContext& Output)
const FQuat Delta = AnimRot * RefRot.Inverse();
const FRotator DeltaRot = Delta.Rotator();
UE_LOG(LogPS_AI_ConvAgent_PostureAnimNode, Warning,
UE_LOG(LogPS_AI_ConvAgent_PostureAnimNode, Verbose,
TEXT(" Chain[0] '%s' AnimDelta from RefPose: Y=%.2f P=%.2f R=%.2f (this gets removed at Comp=1)"),
*ChainBoneNames[0].ToString(),
DeltaRot.Yaw, DeltaRot.Pitch, DeltaRot.Roll);
@ -454,7 +478,7 @@ void FAnimNode_PS_AI_ConvAgent_Posture::Evaluate_AnyThread(FPoseContext& Output)
}
if (++EyeDiagLogCounter % 300 == 1)
{
UE_LOG(LogPS_AI_ConvAgent_PostureAnimNode, Warning,
UE_LOG(LogPS_AI_ConvAgent_PostureAnimNode, Verbose,
TEXT("[EYE DIAG MODE 1] Forcing CTRL_expressions_eyeLookUpL=1.0 | Left eye should look UP if Control Rig reads CTRL curves"));
}
@ -473,7 +497,7 @@ void FAnimNode_PS_AI_ConvAgent_Posture::Evaluate_AnyThread(FPoseContext& Output)
}
if (++EyeDiagLogCounter % 300 == 1)
{
UE_LOG(LogPS_AI_ConvAgent_PostureAnimNode, Warning,
UE_LOG(LogPS_AI_ConvAgent_PostureAnimNode, Verbose,
TEXT("[EYE DIAG MODE 2] Forcing ARKit eyeLookUpLeft=1.0 | Left eye should look UP if mh_arkit_mapping_pose drives eyes"));
}
@ -494,7 +518,7 @@ void FAnimNode_PS_AI_ConvAgent_Posture::Evaluate_AnyThread(FPoseContext& Output)
Output.Curve.Set(ZeroCTRL, 0.0f);
if (++EyeDiagLogCounter % 300 == 1)
{
UE_LOG(LogPS_AI_ConvAgent_PostureAnimNode, Warning,
UE_LOG(LogPS_AI_ConvAgent_PostureAnimNode, Verbose,
TEXT("[EYE DIAG MODE 3] Forcing FACIAL_L_Eye bone -25° pitch | Left eye should look UP if bone rotation drives eyes"));
}
#endif
@ -576,7 +600,7 @@ void FAnimNode_PS_AI_ConvAgent_Posture::Evaluate_AnyThread(FPoseContext& Output)
#if !UE_BUILD_SHIPPING
if (EvalDebugFrameCounter % 300 == 1 && CachedEyeCurves.Num() > 0)
{
UE_LOG(LogPS_AI_ConvAgent_PostureAnimNode, Warning,
UE_LOG(LogPS_AI_ConvAgent_PostureAnimNode, Verbose,
TEXT(" Eyes: Comp=%.2f → %s (anim weight=%.0f%%, posture weight=%.0f%%)"),
Comp,
Comp > 0.001f ? TEXT("BLEND") : TEXT("PASSTHROUGH"),
@ -633,7 +657,7 @@ void FAnimNode_PS_AI_ConvAgent_Posture::Evaluate_AnyThread(FPoseContext& Output)
if (++DiagLogCounter % 90 == 0)
{
UE_LOG(LogPS_AI_ConvAgent_PostureAnimNode, Warning,
UE_LOG(LogPS_AI_ConvAgent_PostureAnimNode, Verbose,
TEXT("DIAG Phase %d: %s | Timer=%.1f"), Phase, PhaseName, DiagTimer);
}
}
@ -695,7 +719,7 @@ void FAnimNode_PS_AI_ConvAgent_Posture::Evaluate_AnyThread(FPoseContext& Output)
if (EvalDebugFrameCounter % 300 == 1)
{
const FRotator CorrRot = FullCorrection.Rotator();
UE_LOG(LogPS_AI_ConvAgent_PostureAnimNode, Warning,
UE_LOG(LogPS_AI_ConvAgent_PostureAnimNode, Verbose,
TEXT(" DriftCorrection: Y=%.1f P=%.1f R=%.1f | Comp=%.2f"),
CorrRot.Yaw, CorrRot.Pitch, CorrRot.Roll,
CachedBodyDriftCompensation);

View File

@ -4,6 +4,11 @@
#include "Developer/Settings/Public/ISettingsModule.h"
#include "UObject/UObjectGlobals.h"
#include "UObject/Package.h"
#include "Misc/Paths.h"
#include "HAL/PlatformFileManager.h"
#include "Interfaces/IPluginManager.h"
DEFINE_LOG_CATEGORY_STATIC(LogPS_AI_ConvAgent, Log, All);
IMPLEMENT_MODULE(FPS_AI_ConvAgentModule, PS_AI_ConvAgent)
@ -22,6 +27,54 @@ void FPS_AI_ConvAgentModule::StartupModule()
LOCTEXT("SettingsDescription", "Configure the PS AI ConvAgent - ElevenLabs plugin"),
Settings);
}
// Ensure the AudioCapture module is loaded (registers WASAPI factory).
// In packaged builds, the module may be linked but not started unless
// explicitly loaded — without it, microphone capture is silent.
if (!FModuleManager::Get().IsModuleLoaded("AudioCapture"))
{
FModuleManager::Get().LoadModule("AudioCapture");
UE_LOG(LogPS_AI_ConvAgent, Log, TEXT("Loaded AudioCapture module for microphone support."));
}
// Ensure SSL certificates are available for packaged builds.
// UE5 expects cacert.pem in [Project]/Content/Certificates/.
// The plugin ships a copy in its Resources/Certificates/ folder.
// If the project doesn't have one, auto-copy it.
EnsureSSLCertificates();
}
void FPS_AI_ConvAgentModule::EnsureSSLCertificates()
{
const FString ProjectCertPath = FPaths::ProjectContentDir() / TEXT("Certificates") / TEXT("cacert.pem");
if (FPlatformFileManager::Get().GetPlatformFile().FileExists(*ProjectCertPath))
{
UE_LOG(LogPS_AI_ConvAgent, Log, TEXT("SSL cacert.pem found at: %s"), *ProjectCertPath);
return;
}
// Try to auto-copy from the plugin's Resources directory.
TSharedPtr<IPlugin> Plugin = IPluginManager::Get().FindPlugin(TEXT("PS_AI_ConvAgent"));
if (Plugin.IsValid())
{
const FString PluginCertPath = Plugin->GetBaseDir()
/ TEXT("Resources") / TEXT("Certificates") / TEXT("cacert.pem");
if (FPlatformFileManager::Get().GetPlatformFile().FileExists(*PluginCertPath))
{
FPlatformFileManager::Get().GetPlatformFile().CreateDirectoryTree(
*(FPaths::ProjectContentDir() / TEXT("Certificates")));
if (FPlatformFileManager::Get().GetPlatformFile().CopyFile(*ProjectCertPath, *PluginCertPath))
{
UE_LOG(LogPS_AI_ConvAgent, Log, TEXT("Copied SSL cacert.pem from plugin to: %s"), *ProjectCertPath);
return;
}
}
}
UE_LOG(LogPS_AI_ConvAgent, Warning, TEXT("SSL cacert.pem not found at: %s. "
"WebSocket wss:// connections may fail in packaged builds. "
"Copy cacert.pem to [Project]/Content/Certificates/."), *ProjectCertPath);
}
void FPS_AI_ConvAgentModule::ShutdownModule()

View File

@ -10,6 +10,9 @@
#include "Sound/SoundAttenuation.h"
#include "Sound/SoundWaveProcedural.h"
#include "GameFramework/Actor.h"
#include "GameFramework/PlayerController.h"
#include "Net/UnrealNetwork.h"
#include "VoiceModule.h"
DEFINE_LOG_CATEGORY_STATIC(LogPS_AI_ConvAgent_ElevenLabs, Log, All);
@ -22,6 +25,9 @@ UPS_AI_ConvAgent_ElevenLabsComponent::UPS_AI_ConvAgent_ElevenLabsComponent()
// Tick is used only to detect silence (agent stopped speaking).
// Disable if not needed for perf.
PrimaryComponentTick.TickInterval = 1.0f / 60.0f;
// Enable network replication for this component.
SetIsReplicatedByDefault(true);
}
// ─────────────────────────────────────────────────────────────────────────────
@ -31,6 +37,7 @@ void UPS_AI_ConvAgent_ElevenLabsComponent::BeginPlay()
{
Super::BeginPlay();
InitAudioPlayback();
InitOpusCodec();
// Auto-register with the interaction subsystem so InteractionComponents can discover us.
if (UWorld* World = GetWorld())
@ -120,6 +127,17 @@ void UPS_AI_ConvAgent_ElevenLabsComponent::TickComponent(float DeltaTime, ELevel
}
}
// Network: detect if the conversating player disconnected (server only).
if (GetOwnerRole() == ROLE_Authority && bNetIsConversing && NetConversatingPlayer)
{
if (!IsValid(NetConversatingPlayer) || !NetConversatingPlayer->GetPawn())
{
UE_LOG(LogPS_AI_ConvAgent_ElevenLabs, Warning,
TEXT("Conversating player disconnected — releasing NPC."));
ServerReleaseConversation_Implementation();
}
}
// Silence detection.
// ISSUE-8: broadcast OnAgentStoppedSpeaking OUTSIDE AudioQueueLock.
// OnProceduralUnderflow (audio thread) also acquires AudioQueueLock — if we broadcast
@ -172,6 +190,12 @@ void UPS_AI_ConvAgent_ElevenLabsComponent::TickComponent(float DeltaTime, ELevel
Tht, LastClosedTurnIndex);
}
OnAgentStoppedSpeaking.Broadcast();
// Network: notify all clients.
if (GetOwnerRole() == ROLE_Authority)
{
MulticastAgentStoppedSpeaking();
}
}
}
@ -179,6 +203,31 @@ void UPS_AI_ConvAgent_ElevenLabsComponent::TickComponent(float DeltaTime, ELevel
// Control
// ─────────────────────────────────────────────────────────────────────────────
void UPS_AI_ConvAgent_ElevenLabsComponent::StartConversation()
{
if (GetOwnerRole() == ROLE_Authority)
{
// Server (or standalone): open WebSocket directly.
// In networked mode, also set replicated conversation state.
if (GetWorld() && GetWorld()->GetNetMode() != NM_Standalone)
{
APlayerController* PC = GetWorld()->GetFirstPlayerController();
bNetIsConversing = true;
NetConversatingPlayer = PC;
}
StartConversation_Internal();
}
else
{
// Client: request conversation via Server RPC.
APlayerController* PC = GetWorld() ? GetWorld()->GetFirstPlayerController() : nullptr;
if (PC)
{
ServerRequestConversation(PC);
}
}
}
void UPS_AI_ConvAgent_ElevenLabsComponent::StartConversation_Internal()
{
if (!WebSocketProxy)
{
@ -214,17 +263,29 @@ void UPS_AI_ConvAgent_ElevenLabsComponent::StartConversation()
void UPS_AI_ConvAgent_ElevenLabsComponent::EndConversation()
{
StopListening();
// ISSUE-4: StopListening() may set bWaitingForAgentResponse=true (normal turn end path).
// Cancel it immediately — there is no response coming because we are ending the session.
// Without this, TickComponent could fire OnAgentResponseTimeout after EndConversation().
bWaitingForAgentResponse = false;
StopAgentAudio();
if (WebSocketProxy)
if (GetOwnerRole() == ROLE_Authority)
{
WebSocketProxy->Disconnect();
WebSocketProxy = nullptr;
StopListening();
// ISSUE-4: StopListening() may set bWaitingForAgentResponse=true (normal turn end path).
// Cancel it immediately — there is no response coming because we are ending the session.
// Without this, TickComponent could fire OnAgentResponseTimeout after EndConversation().
bWaitingForAgentResponse = false;
StopAgentAudio();
if (WebSocketProxy)
{
WebSocketProxy->Disconnect();
WebSocketProxy = nullptr;
}
// Reset replicated state so other players can talk to this NPC.
bNetIsConversing = false;
NetConversatingPlayer = nullptr;
}
else
{
// Client: request release via Server RPC.
ServerReleaseConversation();
}
}
@ -313,17 +374,6 @@ void UPS_AI_ConvAgent_ElevenLabsComponent::StartListening()
Mic->StartCapture();
}
// Enable body tracking on the sibling PostureComponent (if present).
// Voice input counts as conversation engagement, same as text.
if (AActor* OwnerActor = GetOwner())
{
if (UPS_AI_ConvAgent_PostureComponent* Posture =
OwnerActor->FindComponentByClass<UPS_AI_ConvAgent_PostureComponent>())
{
Posture->bEnableBodyTracking = true;
}
}
const double T = TurnStartTime - SessionStartTime;
UE_LOG(LogPS_AI_ConvAgent_ElevenLabs, Log, TEXT("[T+%.2fs] [Turn %d] Mic opened%s — user speaking."),
T, TurnIndex, bExternalMicManagement ? TEXT(" (external)") : TEXT(""));
@ -455,7 +505,14 @@ void UPS_AI_ConvAgent_ElevenLabsComponent::FeedExternalAudio(const TArray<float>
if (MicAccumulationBuffer.Num() >= GetMicChunkMinBytes())
{
WebSocketProxy->SendAudioChunk(MicAccumulationBuffer);
if (GetOwnerRole() == ROLE_Authority)
{
if (WebSocketProxy) WebSocketProxy->SendAudioChunk(MicAccumulationBuffer);
}
else
{
ServerSendMicAudio(MicAccumulationBuffer);
}
MicAccumulationBuffer.Reset();
}
}
@ -485,6 +542,12 @@ void UPS_AI_ConvAgent_ElevenLabsComponent::HandleConnected(const FPS_AI_ConvAgen
UE_LOG(LogPS_AI_ConvAgent_ElevenLabs, Log, TEXT("[T+0.00s] Agent connected. ConversationID=%s"), *Info.ConversationID);
OnAgentConnected.Broadcast(Info);
// Network: notify the requesting remote client that conversation started.
if (GetOwnerRole() == ROLE_Authority && NetConversatingPlayer)
{
ClientConversationStarted(Info);
}
// In Client turn mode (push-to-talk), the user controls listening manually via
// StartListening()/StopListening(). Auto-starting would leave the mic open
// permanently and interfere with push-to-talk — the T-release StopListening()
@ -494,6 +557,7 @@ void UPS_AI_ConvAgent_ElevenLabsComponent::HandleConnected(const FPS_AI_ConvAgen
{
StartListening();
}
}
void UPS_AI_ConvAgent_ElevenLabsComponent::HandleDisconnected(int32 StatusCode, const FString& Reason)
@ -518,6 +582,13 @@ void UPS_AI_ConvAgent_ElevenLabsComponent::HandleDisconnected(int32 StatusCode,
FScopeLock Lock(&MicSendLock);
MicAccumulationBuffer.Reset();
}
// Reset replicated state on disconnect.
if (GetOwnerRole() == ROLE_Authority)
{
bNetIsConversing = false;
NetConversatingPlayer = nullptr;
}
OnAgentDisconnected.Broadcast(StatusCode, Reason);
}
@ -545,6 +616,22 @@ void UPS_AI_ConvAgent_ElevenLabsComponent::HandleAudioReceived(const TArray<uint
QueueBefore, (static_cast<float>(QueueBefore) / 16000.0f) * 1000.0f);
}
// Network: Opus-compress and broadcast to all clients before local playback.
if (GetOwnerRole() == ROLE_Authority && OpusEncoder.IsValid())
{
uint32 CompressedSize = static_cast<uint32>(OpusWorkBuffer.Num());
int32 Remainder = OpusEncoder->Encode(PCMData.GetData(), PCMData.Num(),
OpusWorkBuffer.GetData(), CompressedSize);
if (CompressedSize > 0)
{
TArray<uint8> CompressedData;
CompressedData.Append(OpusWorkBuffer.GetData(), CompressedSize);
MulticastReceiveAgentAudio(CompressedData);
}
}
// Server local playback (Listen Server is also a client).
EnqueueAgentAudio(PCMData);
// Forward raw PCM to any listeners (e.g. LipSync component for spectral analysis).
OnAgentAudioData.Broadcast(PCMData);
@ -568,6 +655,11 @@ void UPS_AI_ConvAgent_ElevenLabsComponent::HandleAgentResponse(const FString& Re
if (bEnableAgentTextResponse)
{
OnAgentTextResponse.Broadcast(ResponseText);
// Network: broadcast text to all clients for subtitles.
if (GetOwnerRole() == ROLE_Authority)
{
MulticastAgentTextResponse(ResponseText);
}
}
}
@ -576,6 +668,12 @@ void UPS_AI_ConvAgent_ElevenLabsComponent::HandleInterrupted()
bWaitingForAgentResponse = false; // Interrupted — no response expected from previous turn.
StopAgentAudio();
OnAgentInterrupted.Broadcast();
// Network: notify all clients.
if (GetOwnerRole() == ROLE_Authority)
{
MulticastAgentInterrupted();
}
}
void UPS_AI_ConvAgent_ElevenLabsComponent::HandleAgentResponseStarted()
@ -587,6 +685,18 @@ void UPS_AI_ConvAgent_ElevenLabsComponent::HandleAgentResponseStarted()
bAgentGenerating = true;
bWaitingForAgentResponse = false; // Server is generating — response timeout cancelled.
// Enable body tracking: the agent is responding, so conversation has started.
// Body tracking was deliberately NOT enabled on selection (only eyes+head)
// so the agent "notices" the player first and turns its body only when engaging.
if (AActor* OwnerActor = GetOwner())
{
if (UPS_AI_ConvAgent_PostureComponent* Posture =
OwnerActor->FindComponentByClass<UPS_AI_ConvAgent_PostureComponent>())
{
Posture->bEnableBodyTracking = true;
}
}
const double Now = FPlatformTime::Seconds();
const double T = Now - SessionStartTime;
const double LatencyFromTurnEnd = TurnEndTime > 0.0 ? Now - TurnEndTime : 0.0;
@ -621,6 +731,12 @@ void UPS_AI_ConvAgent_ElevenLabsComponent::HandleAgentResponseStarted()
TEXT("[T+%.2fs] [Turn %d] Agent generating. (%.2fs after turn end)"),
T, LastClosedTurnIndex, LatencyFromTurnEnd);
OnAgentStartedGenerating.Broadcast();
// Network: notify all clients.
if (GetOwnerRole() == ROLE_Authority)
{
MulticastAgentStartedGenerating();
}
}
void UPS_AI_ConvAgent_ElevenLabsComponent::HandleAgentResponsePart(const FString& PartialText)
@ -628,6 +744,11 @@ void UPS_AI_ConvAgent_ElevenLabsComponent::HandleAgentResponsePart(const FString
if (bEnableAgentPartialResponse)
{
OnAgentPartialResponse.Broadcast(PartialText);
// Network: broadcast partial text to all clients.
if (GetOwnerRole() == ROLE_Authority)
{
MulticastAgentPartialResponse(PartialText);
}
}
}
@ -828,6 +949,12 @@ void UPS_AI_ConvAgent_ElevenLabsComponent::EnqueueAgentAudio(const TArray<uint8>
OnAgentStartedSpeaking.Broadcast();
// Network: notify all clients that agent started speaking.
if (GetOwnerRole() == ROLE_Authority)
{
MulticastAgentStartedSpeaking();
}
if (AudioPreBufferMs > 0)
{
// Pre-buffer: accumulate audio before starting playback.
@ -936,6 +1063,12 @@ void UPS_AI_ConvAgent_ElevenLabsComponent::StopAgentAudio()
T, LastClosedTurnIndex, AgentSpokeDuration, TotalTurnDuration);
OnAgentStoppedSpeaking.Broadcast();
// Network: notify all clients.
if (GetOwnerRole() == ROLE_Authority)
{
MulticastAgentStoppedSpeaking();
}
}
}
@ -969,7 +1102,14 @@ void UPS_AI_ConvAgent_ElevenLabsComponent::OnMicrophoneDataCaptured(const TArray
if (MicAccumulationBuffer.Num() >= GetMicChunkMinBytes())
{
WebSocketProxy->SendAudioChunk(MicAccumulationBuffer);
if (GetOwnerRole() == ROLE_Authority)
{
if (WebSocketProxy) WebSocketProxy->SendAudioChunk(MicAccumulationBuffer);
}
else
{
ServerSendMicAudio(MicAccumulationBuffer);
}
MicAccumulationBuffer.Reset();
}
}
@ -992,3 +1132,265 @@ TArray<uint8> UPS_AI_ConvAgent_ElevenLabsComponent::FloatPCMToInt16Bytes(const T
return Out;
}
// ─────────────────────────────────────────────────────────────────────────────
// Network: Replication
// ─────────────────────────────────────────────────────────────────────────────
void UPS_AI_ConvAgent_ElevenLabsComponent::GetLifetimeReplicatedProps(
TArray<FLifetimeProperty>& OutLifetimeProps) const
{
Super::GetLifetimeReplicatedProps(OutLifetimeProps);
DOREPLIFETIME(UPS_AI_ConvAgent_ElevenLabsComponent, bNetIsConversing);
DOREPLIFETIME(UPS_AI_ConvAgent_ElevenLabsComponent, NetConversatingPlayer);
DOREPLIFETIME(UPS_AI_ConvAgent_ElevenLabsComponent, CurrentEmotion);
DOREPLIFETIME(UPS_AI_ConvAgent_ElevenLabsComponent, CurrentEmotionIntensity);
}
void UPS_AI_ConvAgent_ElevenLabsComponent::OnRep_ConversationState()
{
if (!bNetIsConversing)
{
// Conversation ended on server — clean up local playback.
if (bAgentSpeaking)
{
StopAgentAudio();
}
}
}
void UPS_AI_ConvAgent_ElevenLabsComponent::OnRep_Emotion()
{
// Fire the existing delegate so FacialExpressionComponent picks it up on clients.
OnAgentEmotionChanged.Broadcast(CurrentEmotion, CurrentEmotionIntensity);
}
// ─────────────────────────────────────────────────────────────────────────────
// Network: Server RPCs
// ─────────────────────────────────────────────────────────────────────────────
void UPS_AI_ConvAgent_ElevenLabsComponent::ServerRequestConversation_Implementation(
APlayerController* RequestingPlayer)
{
if (bNetIsConversing)
{
UE_LOG(LogPS_AI_ConvAgent_ElevenLabs, Log,
TEXT("ServerRequestConversation denied — NPC is already in conversation."));
ClientConversationFailed(TEXT("NPC is already in conversation with another player."));
return;
}
bNetIsConversing = true;
NetConversatingPlayer = RequestingPlayer;
UE_LOG(LogPS_AI_ConvAgent_ElevenLabs, Log,
TEXT("ServerRequestConversation granted for %s."),
*GetNameSafe(RequestingPlayer));
StartConversation_Internal();
}
void UPS_AI_ConvAgent_ElevenLabsComponent::ServerReleaseConversation_Implementation()
{
UE_LOG(LogPS_AI_ConvAgent_ElevenLabs, Log, TEXT("ServerReleaseConversation."));
StopListening();
bWaitingForAgentResponse = false;
StopAgentAudio();
if (WebSocketProxy)
{
WebSocketProxy->Disconnect();
WebSocketProxy = nullptr;
}
bNetIsConversing = false;
NetConversatingPlayer = nullptr;
}
void UPS_AI_ConvAgent_ElevenLabsComponent::ServerSendMicAudio_Implementation(
const TArray<uint8>& PCMBytes)
{
if (WebSocketProxy && WebSocketProxy->IsConnected())
{
WebSocketProxy->SendAudioChunk(PCMBytes);
}
}
void UPS_AI_ConvAgent_ElevenLabsComponent::ServerSendTextMessage_Implementation(
const FString& Text)
{
if (WebSocketProxy && WebSocketProxy->IsConnected())
{
WebSocketProxy->SendTextMessage(Text);
}
}
void UPS_AI_ConvAgent_ElevenLabsComponent::ServerRequestInterrupt_Implementation()
{
InterruptAgent();
}
// ─────────────────────────────────────────────────────────────────────────────
// Network: Client RPCs
// ─────────────────────────────────────────────────────────────────────────────
void UPS_AI_ConvAgent_ElevenLabsComponent::ClientConversationStarted_Implementation(
const FPS_AI_ConvAgent_ConversationInfo_ElevenLabs& Info)
{
SessionStartTime = FPlatformTime::Seconds();
TurnIndex = 0;
LastClosedTurnIndex = 0;
UE_LOG(LogPS_AI_ConvAgent_ElevenLabs, Log,
TEXT("[Client] Conversation started. ConversationID=%s"), *Info.ConversationID);
OnAgentConnected.Broadcast(Info);
// Auto-start listening (same logic as HandleConnected).
if (bAutoStartListening && TurnMode == EPS_AI_ConvAgent_TurnMode_ElevenLabs::Server)
{
StartListening();
}
}
void UPS_AI_ConvAgent_ElevenLabsComponent::ClientConversationFailed_Implementation(
const FString& Reason)
{
UE_LOG(LogPS_AI_ConvAgent_ElevenLabs, Warning,
TEXT("[Client] Conversation request failed: %s"), *Reason);
OnAgentError.Broadcast(Reason);
}
// ─────────────────────────────────────────────────────────────────────────────
// Network: Multicast RPCs
// ─────────────────────────────────────────────────────────────────────────────
void UPS_AI_ConvAgent_ElevenLabsComponent::MulticastReceiveAgentAudio_Implementation(
const TArray<uint8>& OpusData)
{
// Server already handled playback in HandleAudioReceived.
if (GetOwnerRole() == ROLE_Authority) return;
if (!OpusDecoder.IsValid()) return;
// LOD: skip audio if too far (unless this client is the speaker).
const float Dist = GetDistanceToLocalPlayer();
const bool bIsSpeaker = IsLocalPlayerConversating();
if (!bIsSpeaker && AudioLODCullDistance > 0.f && Dist > AudioLODCullDistance) return;
// Decode Opus → PCM.
const uint32 MaxDecompressedSize = 16000 * 2; // 1 second of 16kHz 16-bit mono
TArray<uint8> PCMBuffer;
PCMBuffer.SetNumUninitialized(MaxDecompressedSize);
uint32 DecompressedSize = MaxDecompressedSize;
OpusDecoder->Decode(OpusData.GetData(), OpusData.Num(),
PCMBuffer.GetData(), DecompressedSize);
if (DecompressedSize == 0) return;
PCMBuffer.SetNum(DecompressedSize);
// Local playback.
EnqueueAgentAudio(PCMBuffer);
// Feed lip-sync (within LOD or speaker).
if (bIsSpeaker || LipSyncLODDistance <= 0.f || Dist <= LipSyncLODDistance)
{
OnAgentAudioData.Broadcast(PCMBuffer);
}
}
void UPS_AI_ConvAgent_ElevenLabsComponent::MulticastAgentStartedSpeaking_Implementation()
{
if (GetOwnerRole() == ROLE_Authority) return;
bAgentSpeaking = true;
OnAgentStartedSpeaking.Broadcast();
}
void UPS_AI_ConvAgent_ElevenLabsComponent::MulticastAgentStoppedSpeaking_Implementation()
{
if (GetOwnerRole() == ROLE_Authority) return;
bAgentSpeaking = false;
SilentTickCount = 0;
OnAgentStoppedSpeaking.Broadcast();
}
void UPS_AI_ConvAgent_ElevenLabsComponent::MulticastAgentInterrupted_Implementation()
{
if (GetOwnerRole() == ROLE_Authority) return;
StopAgentAudio();
OnAgentInterrupted.Broadcast();
}
void UPS_AI_ConvAgent_ElevenLabsComponent::MulticastAgentTextResponse_Implementation(
const FString& ResponseText)
{
if (GetOwnerRole() == ROLE_Authority) return;
if (bEnableAgentTextResponse)
{
OnAgentTextResponse.Broadcast(ResponseText);
}
}
void UPS_AI_ConvAgent_ElevenLabsComponent::MulticastAgentPartialResponse_Implementation(
const FString& PartialText)
{
if (GetOwnerRole() == ROLE_Authority) return;
if (bEnableAgentPartialResponse)
{
OnAgentPartialResponse.Broadcast(PartialText);
}
}
void UPS_AI_ConvAgent_ElevenLabsComponent::MulticastAgentStartedGenerating_Implementation()
{
if (GetOwnerRole() == ROLE_Authority) return;
OnAgentStartedGenerating.Broadcast();
}
// ─────────────────────────────────────────────────────────────────────────────
// Network: Opus codec
// ─────────────────────────────────────────────────────────────────────────────
void UPS_AI_ConvAgent_ElevenLabsComponent::InitOpusCodec()
{
if (!FVoiceModule::IsAvailable()) return;
FVoiceModule& VoiceModule = FVoiceModule::Get();
if (GetOwnerRole() == ROLE_Authority)
{
OpusEncoder = VoiceModule.CreateVoiceEncoder(
PS_AI_ConvAgent_Audio_ElevenLabs::SampleRate,
PS_AI_ConvAgent_Audio_ElevenLabs::Channels,
EAudioEncodeHint::VoiceEncode_Voice);
}
OpusDecoder = VoiceModule.CreateVoiceDecoder(
PS_AI_ConvAgent_Audio_ElevenLabs::SampleRate,
PS_AI_ConvAgent_Audio_ElevenLabs::Channels);
OpusWorkBuffer.SetNumUninitialized(8 * 1024); // 8 KB scratch buffer for Opus encode/decode
}
// ─────────────────────────────────────────────────────────────────────────────
// Network: Helpers
// ─────────────────────────────────────────────────────────────────────────────
float UPS_AI_ConvAgent_ElevenLabsComponent::GetDistanceToLocalPlayer() const
{
if (UWorld* World = GetWorld())
{
if (APlayerController* PC = World->GetFirstPlayerController())
{
if (APawn* Pawn = PC->GetPawn())
{
return FVector::Dist(GetOwner()->GetActorLocation(),
Pawn->GetActorLocation());
}
}
}
return MAX_FLT;
}
bool UPS_AI_ConvAgent_ElevenLabsComponent::IsLocalPlayerConversating() const
{
if (UWorld* World = GetWorld())
{
if (APlayerController* PC = World->GetFirstPlayerController())
{
return NetConversatingPlayer == PC;
}
}
return false;
}

View File

@ -0,0 +1,3 @@
// Copyright ASTERION. All Rights Reserved.
#include "PS_AI_ConvAgent_EmotionPoseMap.h"

View File

@ -156,15 +156,23 @@ TMap<FName, float> UPS_AI_ConvAgent_FacialExpressionComponent::EvaluateAnimCurve
TMap<FName, float> CurveValues;
if (!AnimSeq) return CurveValues;
// Use runtime GetCurveData() — GetDataModel() is editor-only in UE 5.5.
const TArray<FFloatCurve>& FloatCurves = AnimSeq->GetCurveData().FloatCurves;
for (const FFloatCurve& Curve : FloatCurves)
// Use UAnimSequence::EvaluateCurveData() — works with both raw (editor)
// and compressed (cooked/packaged) curve data.
// FBlendedCurve uses TMemStackAllocator which requires an active FMemMark.
// We're on the game thread (TickComponent), not in the anim evaluation pipeline,
// so we must set up the mark manually.
{
const float Value = Curve.FloatCurve.Eval(Time);
if (FMath::Abs(Value) > 0.001f)
FMemMark Mark(FMemStack::Get());
FBlendedCurve BlendedCurve;
AnimSeq->EvaluateCurveData(BlendedCurve, Time);
BlendedCurve.ForEachElement([&CurveValues](const UE::Anim::FCurveElement& Element)
{
CurveValues.Add(Curve.GetName(), Value);
}
if (FMath::Abs(Element.Value) > 0.001f)
{
CurveValues.Add(Element.Name, Element.Value);
}
});
}
return CurveValues;
@ -212,6 +220,25 @@ void UPS_AI_ConvAgent_FacialExpressionComponent::TickComponent(
{
Super::TickComponent(DeltaTime, TickType, ThisTickFunction);
// ── Lazy binding: in packaged builds, BeginPlay may run before the
// ElevenLabsComponent is fully initialized. Retry discovery until bound.
if (!AgentComponent.IsValid())
{
if (AActor* Owner = GetOwner())
{
auto* Agent = Owner->FindComponentByClass<UPS_AI_ConvAgent_ElevenLabsComponent>();
if (Agent)
{
AgentComponent = Agent;
Agent->OnAgentEmotionChanged.AddDynamic(
this, &UPS_AI_ConvAgent_FacialExpressionComponent::OnEmotionChanged);
UE_LOG(LogPS_AI_ConvAgent_FacialExpr, Log,
TEXT("Facial expression (late) bound to agent component on %s."),
*Owner->GetName());
}
}
}
// Nothing to play
if (!ActiveAnim && !PrevAnim)
return;

View File

@ -128,11 +128,20 @@ UPS_AI_ConvAgent_ElevenLabsComponent* UPS_AI_ConvAgent_InteractionComponent::Eva
UPS_AI_ConvAgent_ElevenLabsComponent* CurrentAgent = SelectedAgent.Get();
// Get local player controller for occupied-NPC check.
APlayerController* LocalPC = World->GetFirstPlayerController();
for (UPS_AI_ConvAgent_ElevenLabsComponent* Agent : Agents)
{
AActor* AgentActor = Agent->GetOwner();
if (!AgentActor) continue;
// Network: skip agents that are in conversation with a different player.
if (Agent->bNetIsConversing && Agent->NetConversatingPlayer != LocalPC)
{
continue;
}
const FVector AgentLocation = AgentActor->GetActorLocation() + FVector(0.0f, 0.0f, AgentEyeLevelOffset);
const FVector ToAgent = AgentLocation - ViewLocation;
const float DistSq = ToAgent.SizeSquared();
@ -243,21 +252,20 @@ void UPS_AI_ConvAgent_InteractionComponent::SetSelectedAgent(UPS_AI_ConvAgent_El
NewAgent->GetOwner() ? *NewAgent->GetOwner()->GetName() : TEXT("(null)"));
}
// Network: auto-start conversation if the agent isn't connected yet.
if (!NewAgent->IsConnected() && !NewAgent->bNetIsConversing)
{
NewAgent->StartConversation();
}
// Ensure mic is capturing so we can route audio to the new agent.
if (MicComponent && !MicComponent->IsCapturing())
{
MicComponent->StartCapture();
}
// ── Listening: start ─────────────────────────────────────────────
// Body tracking is enabled by ElevenLabsComponent itself (in StartListening
// and SendTextMessage) so it works for both voice and text input.
if (bAutoManageListening)
{
NewAgent->StartListening();
}
// ── Posture: attach ──────────────────────────────────────────────
// ── Posture: attach (eyes+head only — body tracking is enabled later
// by ElevenLabsComponent when the agent starts responding) ──
if (bAutoManagePosture && World)
{
// Cancel any pending detach — agent came back before detach fired.
@ -277,6 +285,15 @@ void UPS_AI_ConvAgent_InteractionComponent::SetSelectedAgent(UPS_AI_ConvAgent_El
}
}
// ── Listening: start ─────────────────────────────────────────────
// Opens the mic but does NOT enable body tracking. Body tracking
// is enabled later by HandleAgentResponseStarted (agent starts
// responding) or SendTextMessage (explicit text engagement).
if (bAutoManageListening)
{
NewAgent->StartListening();
}
OnAgentSelected.Broadcast(NewAgent);
}
else

View File

@ -444,55 +444,59 @@ void UPS_AI_ConvAgent_LipSyncComponent::ExtractPoseCurves(const FName& VisemeNam
{
if (!AnimSeq) return;
// Use runtime GetCurveData() — GetDataModel() is editor-only in UE 5.5.
TMap<FName, float> CurveValues;
const TArray<FFloatCurve>& FloatCurves = AnimSeq->GetCurveData().FloatCurves;
for (const FFloatCurve& Curve : FloatCurves)
// Use UAnimSequence::EvaluateCurveData() — works with both raw (editor)
// and compressed (cooked/packaged) curve data. Extract at frame 0.
// FBlendedCurve uses TMemStackAllocator which requires an active FMemMark.
// We're on the game thread (BeginPlay), not in the anim evaluation pipeline,
// so we must set up the mark manually.
{
const FName CurveName = Curve.GetName();
const float Value = Curve.FloatCurve.Eval(0.0f);
FMemMark Mark(FMemStack::Get());
FBlendedCurve BlendedCurve;
AnimSeq->EvaluateCurveData(BlendedCurve, 0.0f);
// Skip curves with near-zero values — not part of this pose's expression
if (FMath::Abs(Value) < 0.001f) continue;
CurveValues.Add(CurveName, Value);
// Auto-detect naming convention from the very first non-zero curve we encounter
if (!bPosesUseCTRLNaming && PoseExtractedCurveMap.Num() == 0 && CurveValues.Num() == 1)
BlendedCurve.ForEachElement([&CurveValues](const UE::Anim::FCurveElement& Element)
{
bPosesUseCTRLNaming = CurveName.ToString().StartsWith(TEXT("CTRL_"));
if (FMath::Abs(Element.Value) >= 0.001f)
{
CurveValues.Add(Element.Name, Element.Value);
}
});
}
if (bDebug)
{
UE_LOG(LogPS_AI_ConvAgent_LipSync, Log,
TEXT("Pose '%s' (%s): Extracted %d curves via EvaluateCurveData."),
*VisemeName.ToString(), *AnimSeq->GetName(), CurveValues.Num());
}
// Auto-detect naming convention from the first non-zero curve
if (!bPosesUseCTRLNaming && PoseExtractedCurveMap.Num() == 0 && CurveValues.Num() > 0)
{
for (const auto& Pair : CurveValues)
{
bPosesUseCTRLNaming = Pair.Key.ToString().StartsWith(TEXT("CTRL_"));
if (bDebug)
{
UE_LOG(LogPS_AI_ConvAgent_LipSync, Log,
TEXT("Pose curve naming detected: %s (from curve '%s')"),
bPosesUseCTRLNaming ? TEXT("CTRL_expressions_*") : TEXT("ARKit / other"),
*CurveName.ToString());
*Pair.Key.ToString());
}
break;
}
}
if (CurveValues.Num() > 0)
{
PoseExtractedCurveMap.Add(VisemeName, MoveTemp(CurveValues));
if (bDebug)
{
UE_LOG(LogPS_AI_ConvAgent_LipSync, Log,
TEXT("Pose '%s' (%s): Extracted %d non-zero curves."),
*VisemeName.ToString(), *AnimSeq->GetName(),
PoseExtractedCurveMap[VisemeName].Num());
}
}
else
{
// Still add an empty map so we know this viseme was assigned (silence pose)
// Empty map: silence pose or no data available
PoseExtractedCurveMap.Add(VisemeName, TMap<FName, float>());
if (bDebug)
{
UE_LOG(LogPS_AI_ConvAgent_LipSync, Log,
TEXT("Pose '%s' (%s): All curves are zero — neutral/silence pose."),
*VisemeName.ToString(), *AnimSeq->GetName());
}
}
}
@ -503,11 +507,8 @@ void UPS_AI_ConvAgent_LipSyncComponent::InitializePoseMappings()
if (!PoseMap)
{
if (bDebug)
{
UE_LOG(LogPS_AI_ConvAgent_LipSync, Log,
TEXT("No PoseMap assigned — using hardcoded ARKit mapping."));
}
UE_LOG(LogPS_AI_ConvAgent_LipSync, Warning,
TEXT("InitializePoseMappings: PoseMap is NULL — using hardcoded ARKit mapping."));
return;
}
@ -597,6 +598,22 @@ void UPS_AI_ConvAgent_LipSyncComponent::InitializePoseMappings()
TEXT("No phoneme pose AnimSequences assigned — using hardcoded ARKit mapping."));
}
if (bDebug)
{
int32 TotalCurves = 0;
int32 EmptyPoses = 0;
for (const auto& Entry : PoseExtractedCurveMap)
{
TotalCurves += Entry.Value.Num();
if (Entry.Value.Num() == 0) ++EmptyPoses;
}
UE_LOG(LogPS_AI_ConvAgent_LipSync, Log,
TEXT("InitializePoseMappings: PoseMap=%s, Assigned=%d, "
"PoseEntries=%d (empty=%d), TotalCurves=%d, CTRL=%s"),
PoseMap ? *PoseMap->GetName() : TEXT("NULL"),
AssignedCount, PoseExtractedCurveMap.Num(), EmptyPoses,
TotalCurves, bPosesUseCTRLNaming ? TEXT("YES") : TEXT("NO"));
}
}
void UPS_AI_ConvAgent_LipSyncComponent::EndPlay(const EEndPlayReason::Type EndPlayReason)
@ -633,6 +650,45 @@ void UPS_AI_ConvAgent_LipSyncComponent::TickComponent(float DeltaTime, ELevelTic
{
Super::TickComponent(DeltaTime, TickType, ThisTickFunction);
// ── Lazy binding: in packaged builds, BeginPlay may run before the ────────
// ElevenLabsComponent is fully initialized. Retry discovery until bound.
if (!AgentComponent.IsValid())
{
if (AActor* Owner = GetOwner())
{
UPS_AI_ConvAgent_ElevenLabsComponent* Agent =
Owner->FindComponentByClass<UPS_AI_ConvAgent_ElevenLabsComponent>();
if (Agent)
{
AgentComponent = Agent;
AudioDataHandle = Agent->OnAgentAudioData.AddUObject(
this, &UPS_AI_ConvAgent_LipSyncComponent::OnAudioChunkReceived);
Agent->OnAgentPartialResponse.AddDynamic(
this, &UPS_AI_ConvAgent_LipSyncComponent::OnPartialTextReceived);
Agent->OnAgentTextResponse.AddDynamic(
this, &UPS_AI_ConvAgent_LipSyncComponent::OnTextResponseReceived);
Agent->OnAgentInterrupted.AddDynamic(
this, &UPS_AI_ConvAgent_LipSyncComponent::OnAgentInterrupted);
Agent->OnAgentStoppedSpeaking.AddDynamic(
this, &UPS_AI_ConvAgent_LipSyncComponent::OnAgentStopped);
Agent->bEnableAgentPartialResponse = true;
UE_LOG(LogPS_AI_ConvAgent_LipSync, Log,
TEXT("Lip sync (late) bound to agent component on %s."),
*Owner->GetName());
}
}
}
// Also retry caching the facial expression component if it wasn't found initially.
if (!CachedFacialExprComp.IsValid())
{
if (AActor* Owner = GetOwner())
{
CachedFacialExprComp = Owner->FindComponentByClass<UPS_AI_ConvAgent_FacialExpressionComponent>();
}
}
// ── Consume queued viseme analysis frames at the FFT window rate ─────────
// Each 512-sample FFT window at 16kHz = 32ms of audio.
// We consume one queued frame every 32ms to match the original audio timing.
@ -1017,13 +1073,6 @@ void UPS_AI_ConvAgent_LipSyncComponent::OnAudioChunkReceived(const TArray<uint8>
const int16* Samples = reinterpret_cast<const int16*>(PCMData.GetData());
const int32 NumSamples = PCMData.Num() / sizeof(int16);
static bool bFirstChunkLogged = false;
if (!bFirstChunkLogged)
{
UE_LOG(LogPS_AI_ConvAgent_LipSync, Verbose, TEXT("First audio chunk received: %d bytes (%d samples)"), PCMData.Num(), NumSamples);
bFirstChunkLogged = true;
}
FloatBuffer.Reset(NumSamples);
for (int32 i = 0; i < NumSamples; ++i)
{

View File

@ -0,0 +1,3 @@
// Copyright ASTERION. All Rights Reserved.
#include "PS_AI_ConvAgent_LipSyncPoseMap.h"

View File

@ -436,12 +436,11 @@ void UPS_AI_ConvAgent_PostureComponent::TickComponent(
}
}
// ── Debug (every ~2 seconds) ─────────────────────────────────────────
#if !UE_BUILD_SHIPPING
DebugFrameCounter++;
if (DebugFrameCounter % 120 == 0)
// ── Debug (every ~2 seconds, only when bDebug is on) ────────────────
if (bDebug && TargetActor)
{
if (TargetActor)
DebugFrameCounter++;
if (DebugFrameCounter % 120 == 0)
{
const float FacingYaw = Owner->GetActorRotation().Yaw + MeshForwardYawOffset;
const FVector TP = TargetActor->GetActorLocation() + TargetOffset;
@ -450,13 +449,14 @@ void UPS_AI_ConvAgent_PostureComponent::TickComponent(
const float Delta = FMath::FindDeltaAngleDegrees(FacingYaw, TgtYaw);
UE_LOG(LogPS_AI_ConvAgent_Posture, Log,
TEXT("Posture [%s -> %s]: Delta=%.1f | Head=%.1f/%.1f | Eyes=%.1f/%.1f | EyeGap=%.1f"),
TEXT("Posture [%s -> %s]: Delta=%.1f | Head=%.1f/%.1f | Eyes=%.1f/%.1f | Body: enabled=%s TargetYaw=%.1f ActorYaw=%.1f"),
*Owner->GetName(), *TargetActor->GetName(),
Delta,
CurrentHeadYaw, CurrentHeadPitch,
CurrentEyeYaw, CurrentEyePitch,
Delta - CurrentHeadYaw);
bEnableBodyTracking ? TEXT("Y") : TEXT("N"),
TargetBodyWorldYaw,
Owner->GetActorRotation().Yaw);
}
}
#endif
}

View File

@ -96,4 +96,7 @@ public:
private:
UPS_AI_ConvAgent_Settings_ElevenLabs* Settings = nullptr;
/** Copy SSL cacert.pem from plugin Resources to project Content if missing. */
void EnsureSSLCertificates();
};

View File

@ -7,12 +7,14 @@
#include "PS_AI_ConvAgent_Definitions.h"
#include "PS_AI_ConvAgent_WebSocket_ElevenLabsProxy.h"
#include "Sound/SoundWaveProcedural.h"
#include "Interfaces/VoiceCodec.h"
#include <atomic>
#include "PS_AI_ConvAgent_ElevenLabsComponent.generated.h"
class UAudioComponent;
class USoundAttenuation;
class UPS_AI_ConvAgent_MicrophoneCaptureComponent;
class APlayerController;
// ─────────────────────────────────────────────────────────────────────────────
// Delegates exposed to Blueprint
@ -270,11 +272,11 @@ public:
FOnAgentClientToolCall OnAgentClientToolCall;
/** The current emotion of the agent, as set by the "set_emotion" client tool. Defaults to Neutral. */
UPROPERTY(BlueprintReadOnly, Category = "PS AI ConvAgent|ElevenLabs")
UPROPERTY(ReplicatedUsing = OnRep_Emotion, BlueprintReadOnly, Category = "PS AI ConvAgent|ElevenLabs")
EPS_AI_ConvAgent_Emotion CurrentEmotion = EPS_AI_ConvAgent_Emotion::Neutral;
/** The current emotion intensity. Defaults to Medium. */
UPROPERTY(BlueprintReadOnly, Category = "PS AI ConvAgent|ElevenLabs")
UPROPERTY(ReplicatedUsing = OnRep_Emotion, BlueprintReadOnly, Category = "PS AI ConvAgent|ElevenLabs")
EPS_AI_ConvAgent_EmotionIntensity CurrentEmotionIntensity = EPS_AI_ConvAgent_EmotionIntensity::Medium;
// ── Raw audio data (C++ only, used by LipSync component) ────────────────
@ -282,6 +284,93 @@ public:
* Used internally by UPS_AI_ConvAgent_LipSyncComponent for spectral analysis. */
FOnAgentAudioData OnAgentAudioData;
// ── Network state (replicated) ───────────────────────────────────────────
/** True when a player is currently in conversation with this NPC.
* Replicated to all clients so InteractionComponents can skip occupied NPCs. */
UPROPERTY(ReplicatedUsing = OnRep_ConversationState, BlueprintReadOnly, Category = "PS AI ConvAgent|Network")
bool bNetIsConversing = false;
/** The player controller currently in conversation with this NPC (null if free).
* Replicated so each client knows who is speaking (used for posture target, LOD). */
UPROPERTY(ReplicatedUsing = OnRep_ConversationState, BlueprintReadOnly, Category = "PS AI ConvAgent|Network")
TObjectPtr<APlayerController> NetConversatingPlayer = nullptr;
// ── Network LOD ──────────────────────────────────────────────────────────
/** Distance (cm) beyond which remote clients stop receiving agent audio entirely.
* The speaking player always receives full audio regardless of distance. */
UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "PS AI ConvAgent|Network|LOD",
meta = (ClampMin = "0", ToolTip = "Distance beyond which audio is culled for non-speaking players. 0 = no cull."))
float AudioLODCullDistance = 3000.f;
/** Distance (cm) beyond which remote clients skip lip-sync / emotion processing.
* Audio still plays (if within AudioLODCullDistance) but without facial animation.
* The speaking player always gets full lip-sync regardless of distance. */
UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "PS AI ConvAgent|Network|LOD",
meta = (ClampMin = "0", ToolTip = "Distance beyond which lip-sync is skipped for non-speaking players. 0 = no LOD."))
float LipSyncLODDistance = 1500.f;
// ── Network RPCs ─────────────────────────────────────────────────────────
/** Request exclusive conversation with this NPC. Called by clients; the server
* checks availability and opens the WebSocket connection if the NPC is free. */
UFUNCTION(Server, Reliable)
void ServerRequestConversation(APlayerController* RequestingPlayer);
/** Release this NPC so other players can talk to it. */
UFUNCTION(Server, Reliable)
void ServerReleaseConversation();
/** Stream accumulated mic audio from the speaking client to the server.
* Unreliable: minor packet loss is acceptable for audio streaming. */
UFUNCTION(Server, Unreliable)
void ServerSendMicAudio(const TArray<uint8>& PCMBytes);
/** Send a text message via the server's WebSocket connection. */
UFUNCTION(Server, Reliable)
void ServerSendTextMessage(const FString& Text);
/** Request an agent interruption through the server. */
UFUNCTION(Server, Reliable)
void ServerRequestInterrupt();
/** Broadcast Opus-compressed agent audio to all clients. */
UFUNCTION(NetMulticast, Unreliable)
void MulticastReceiveAgentAudio(const TArray<uint8>& OpusData);
/** Notify all clients that the agent started speaking (first audio chunk). */
UFUNCTION(NetMulticast, Reliable)
void MulticastAgentStartedSpeaking();
/** Notify all clients that the agent stopped speaking. */
UFUNCTION(NetMulticast, Reliable)
void MulticastAgentStoppedSpeaking();
/** Notify all clients that the agent was interrupted. */
UFUNCTION(NetMulticast, Reliable)
void MulticastAgentInterrupted();
/** Broadcast the agent's complete text response (subtitles). */
UFUNCTION(NetMulticast, Reliable)
void MulticastAgentTextResponse(const FString& ResponseText);
/** Broadcast streaming partial text (real-time subtitles). */
UFUNCTION(NetMulticast, Reliable)
void MulticastAgentPartialResponse(const FString& PartialText);
/** Notify all clients that the agent started generating (thinking). */
UFUNCTION(NetMulticast, Reliable)
void MulticastAgentStartedGenerating();
/** Notify the requesting client that conversation started successfully. */
UFUNCTION(Client, Reliable)
void ClientConversationStarted(const FPS_AI_ConvAgent_ConversationInfo_ElevenLabs& Info);
/** Notify the requesting client that conversation request was denied. */
UFUNCTION(Client, Reliable)
void ClientConversationFailed(const FString& Reason);
// ── Control ───────────────────────────────────────────────────────────────
/**
@ -360,8 +449,16 @@ public:
virtual void EndPlay(const EEndPlayReason::Type EndPlayReason) override;
virtual void TickComponent(float DeltaTime, ELevelTick TickType,
FActorComponentTickFunction* ThisTickFunction) override;
virtual void GetLifetimeReplicatedProps(TArray<FLifetimeProperty>& OutLifetimeProps) const override;
private:
// ── Network OnRep handlers ───────────────────────────────────────────────
UFUNCTION()
void OnRep_ConversationState();
UFUNCTION()
void OnRep_Emotion();
// ── Internal event handlers ───────────────────────────────────────────────
UFUNCTION()
void HandleConnected(const FPS_AI_ConvAgent_ConversationInfo_ElevenLabs& Info);
@ -498,4 +595,19 @@ private:
/** Compute the minimum bytes from the user-facing MicChunkDurationMs.
* Formula: bytes = SampleRate * (ms / 1000) * BytesPerSample = 16000 * ms / 1000 * 2 = 32 * ms */
int32 GetMicChunkMinBytes() const { return MicChunkDurationMs * 32; }
// ── Opus codec (network audio compression) ───────────────────────────────
TSharedPtr<IVoiceEncoder> OpusEncoder; // Server only
TSharedPtr<IVoiceDecoder> OpusDecoder; // All clients
TArray<uint8> OpusWorkBuffer; // Reusable scratch buffer for encode/decode
void InitOpusCodec();
// ── Network helpers ──────────────────────────────────────────────────────
/** Distance from this NPC to the local player's pawn. Returns MAX_FLT if unavailable. */
float GetDistanceToLocalPlayer() const;
/** True if the local player controller is the one currently in conversation. */
bool IsLocalPlayerConversating() const;
/** Internal: performs the actual WebSocket setup (called by both local and RPC paths). */
void StartConversation_Internal();
};

View File

@ -45,6 +45,9 @@ struct PS_AI_CONVAGENT_API FPS_AI_ConvAgent_EmotionPoseSet
* The component plays the AnimSequence in real-time (looping) to drive
* emotion-based facial expressions (eyes, eyebrows, cheeks, mouth mood).
* Lip sync overrides the mouth-area curves on top.
*
* Curve data is read at runtime via UAnimSequence::EvaluateCurveData()
* which works with both raw (editor) and compressed (cooked) curves.
*/
UCLASS(BlueprintType, Blueprintable, DisplayName = "PS AI ConvAgent Emotion Pose Map")
class PS_AI_CONVAGENT_API UPS_AI_ConvAgent_EmotionPoseMap : public UPrimaryDataAsset
@ -52,6 +55,8 @@ class PS_AI_CONVAGENT_API UPS_AI_ConvAgent_EmotionPoseMap : public UPrimaryDataA
GENERATED_BODY()
public:
// ── Emotion Poses ─────────────────────────────────────────────────────────
/** Map of emotions to their AnimSequence sets (Normal / Medium / Extreme).
* Add entries for each emotion your agent uses (Joy, Sadness, Anger, Surprise, Fear, Disgust).
* Neutral is recommended since it plays by default at startup (blinking, breathing). */

View File

@ -17,9 +17,11 @@ class UAnimSequence;
* assign your MHF_* AnimSequences once, then reference this single asset
* on every MetaHuman's PS AI ConvAgent Lip Sync component.
*
* The component extracts curve data from each pose at BeginPlay and uses it
* to drive lip sync replacing the hardcoded ARKit blendshape mapping with
* artist-crafted poses that coordinate dozens of facial curves.
* The component extracts curve data from each pose at BeginPlay using
* UAnimSequence::EvaluateCurveData() (works with both raw and compressed
* curves in cooked builds) and uses it to drive lip sync, replacing the
* hardcoded ARKit blendshape mapping with artist-crafted poses that
* coordinate dozens of facial curves.
*/
UCLASS(BlueprintType, Blueprintable, DisplayName = "PS AI ConvAgent Lip Sync Pose Map")
class PS_AI_CONVAGENT_API UPS_AI_ConvAgent_LipSyncPoseMap : public UPrimaryDataAsset
@ -27,6 +29,7 @@ class PS_AI_CONVAGENT_API UPS_AI_ConvAgent_LipSyncPoseMap : public UPrimaryDataA
GENERATED_BODY()
public:
// ── Phoneme Poses (15 OVR visemes) ───────────────────────────────────────
/** Silence / neutral pose. Mouth at rest. (OVR viseme: sil) */