group2 0.1.0
CSE 125 Group 2
Client Class Reference

TCP stream client — sends input to the server and receives state updates. More...

#include <Client.hpp>


Classes

struct  DelayedOutbound
 One outbound UDP datagram queued for delayed send. More...
struct  DelayedInbound
 One inbound payload queued for delayed dispatch. More...

Public Types

using LocalPlayerReadyFn = std::function<void(entt::entity localEntity)>
using ParticleEventCallback = std::function<void(const NetParticleEvent& evt, entt::entity localEntity)>
using MatchStateUpdateFn = std::function<void(const MatchStatePacket&)>
using KillEventCallback = std::function<void(const NetKillEvent&)>
using ShotDebugCallback = std::function<void(const net::shotdebug::ShotDebugCapture&)>
 PR-20: callback for SHOT_DEBUG_REPORT.

Public Member Functions

bool init (const char *addr, Uint16 port, const TransportConfig &transport={})
 Create the TCP socket and connect to the server.
void shutdown ()
 Close the socket and release the resolved address.
bool send (const void *data, uint32_t size)
 Send a raw message to the server.
bool sendInputSnapshot (const InputSnapshot &snap)
 Push the latest input into the redundant ring and send to the server.
bool sendShotIntent (std::uint32_t shotInputTick, std::uint16_t targetClientId, const AnimSnapshot &targetAnim)
 PR-27 (netsync): send a SHOT_INTENT packet describing the client's view of the target's animation state at fire time.
void sendPing ()
 Send a PING packet to the server for RTT measurement.
void updateStats (float dt)
 Update bandwidth stats. Call once per frame with the frame delta time.
void onLocalPlayerReady (LocalPlayerReadyFn fn)
void onParticleEvent (ParticleEventCallback fn)
void onMatchStateUpdate (MatchStateUpdateFn fn)
void onKillEvent (KillEventCallback fn)
void onShotDebugReport (ShotDebugCallback fn)
bool poll (Registry &registry)
 Receive and process one pending message.
const NetworkStats & getNetStats () const
 Access current network statistics.
uint32_t getServerAckedClientTick () const noexcept
 Latest server-acked client predict tick.
bool consumeSnapshotApplied () noexcept
 Whether a snapshot was applied since the last call to consumeSnapshotApplied().
float getSnapshotAlpha () const
 Render-time interpolation alpha based on snapshot timing.
std::optional< entt::entity > getLocalPlayerEntity () const
 PR-21: server-assigned local-player entity (post-mapping).
Uint64 getInterpolationRenderTimeNs () const
 Render time the renderer should display non-local entities at.
Uint64 getSnapshotIntervalNs () const
 Approximate snapshot interval in nanoseconds.
void applyInterpolatedTransforms (Registry &registry)
 PR-19: overwrite Position.value (and InputSnapshot.yaw) for every non-local entity with an InterpolationBuffer to its interpolated render-time value.
void setSimulatedLatencyMs (int totalMs) noexcept
 Phase 6 testing: simulate added round-trip latency.
int getSimulatedLatencyMs () const noexcept
 Get the currently-effective simulated total RTT.
void setSimulatedLossPercent (int percent) noexcept
 Phase 6 testing: simulate UDP packet loss.
int getSimulatedLossPercent () const noexcept
 Get the currently-effective simulated packet loss %.

Static Public Attributes

static constexpr size_t k_inputRedundancy = 5
 Number of recent inputs included in each INPUT packet for redundancy.

Private Member Functions

bool acceptReliableSequence (uint16_t seq)
 Sliding-window dedup helper.
bool shouldDropPacketLocked ()
 Roll the loss RNG.
bool sendUdpDelayed (net::PacketHeader hdr, const void *data, int len)
 Send a UDP datagram immediately if the latency simulator is off, otherwise queue it for delayed send.
void recvUdpDelayed (std::vector< uint8_t > &&payload)
 Enqueue an assembled UDP message into udpRecvQueue_ immediately if the simulator is off, otherwise hold it in the inbound delay queue.
void networkLoop ()
 Network-thread main loop body.
void dispatchMessage (const uint8_t *data, Uint32 size, Registry &registry)
 Decode and dispatch a single complete framed message.
void recordInterpolationSamples (Registry &registry, Uint64 captureNs)
 PR-11: append a sample to every replicated remote entity's InterpolationBuffer, AFTER the loader has rewritten the registry from the just-arrived snapshot.

Private Attributes

MessageStream msgStream {nullptr}
 Framed message stream for server communication.
NET_Address * serverAddr = nullptr
 Resolved server address.
std::optional< registry_serialization::Loader > registryLoader
LocalPlayerReadyFn localPlayerReadyFn
 Called once the server assigns a player entity.
ParticleEventCallback particleEventFn_
 Called for each replicated particle event from server.
MatchStateUpdateFn matchStateUpdateFn_
 Called whenever a MATCH_STATE packet is received.
KillEventCallback killEventFn_
 Called for each replicated kill event from server.
ShotDebugCallback shotDebugFn_
 PR-20: called for each SHOT_DEBUG_REPORT from server.
std::optional< entt::entity > localPlayerEntity
 The local player's entity, once assigned by the server.
bool localPlayerReadyNotified = false
 True if localPlayerReadyFn has been called.
std::vector< uint8_t > keyframePayload_
std::uint32_t keyframeTick_ = 0
NetworkStats stats
 Live network metrics.
uint64_t bytesSentWindow = 0
uint64_t bytesRecvWindow = 0
uint32_t registryUpdatesWindow = 0
float statsAccumulator = 0.0f
std::array< InputSnapshot, k_inputRedundancy > inputRing_ {}
size_t inputRingHead_ = 0
 Next write index, wraps mod k_inputRedundancy.
size_t inputRingCount_ = 0
 Valid entries in ring; saturates at k_inputRedundancy.
OutboundQueue outbound_
std::mutex stateMutex_
std::thread networkThread_
std::atomic< bool > shouldStop_ {false}
std::atomic< bool > socketDead_ {false}
 Latched-true once the network thread observes a socket error.
Uint64 lastSnapshotApplyNs_ = 0
Uint64 prevSnapshotApplyNs_ = 0
int interpDelaySnapshots_ = 2
Uint64 snapshotIntervalEmaNs_ = k_defaultSnapshotIntervalNs
uint32_t serverAckedClientTick_ = 0
bool snapshotAppliedFlag_ = false
TransportConfig transportConfig_
net::UdpEndpoint udpEndpoint_
net::UdpEndpointAddr serverUdpAddr_
uint32_t connectionId_ = 0
uint16_t udpInputSequence_ = 0
 Per-channel sequence for INPUT datagrams.
std::vector< std::vector< uint8_t > > udpRecvQueue_
 UDP-received payloads waiting for the game thread to dispatch.
net::FragmentReassembler unreliableReassembler_
 Phase 3d-4: reassembly buffer for fragmented snapshot datagrams on the Unreliable channel.
uint16_t reliableHighestSeen_ = 0
 Phase 3d-5: sliding-window bitset for ReliableOrdered channel dedup.
uint64_t reliableSeenBitmask_ = 0
bool reliableHasAny_ = false
 False until the first reliable event arrives.
std::atomic< int > simulatedLatencyMs_ {0}
 Total simulated RTT in ms (slider value, 0–200).
std::atomic< int > simulatedLossPercent_ {0}
 Per-direction independent UDP-drop probability (slider value, 0–100).
std::mt19937 simLossRng_ {}
 PRNG for the loss simulator.
std::deque< DelayedOutbound > simLatOutbound_
std::deque< DelayedInbound > simLatInbound_

Static Private Attributes

static constexpr Uint64 k_defaultSnapshotIntervalNs = 1'000'000'000ULL / 128ULL

Detailed Description

TCP stream client — sends input to the server and receives state updates.

Member Typedef Documentation

◆ KillEventCallback

using Client::KillEventCallback = std::function<void(const NetKillEvent&)>

◆ LocalPlayerReadyFn

using Client::LocalPlayerReadyFn = std::function<void(entt::entity localEntity)>

◆ MatchStateUpdateFn

using Client::MatchStateUpdateFn = std::function<void(const MatchStatePacket&)>

◆ ParticleEventCallback

using Client::ParticleEventCallback = std::function<void(const NetParticleEvent& evt, entt::entity localEntity)>

◆ ShotDebugCallback

using Client::ShotDebugCallback = std::function<void(const net::shotdebug::ShotDebugCapture&)>

PR-20: callback for SHOT_DEBUG_REPORT.

Fired on the game thread inside dispatchMessage after the bytes have been parsed back into a ShotDebugCapture. The DebugUI registers this and pairs the report with its own client-side fire-time snapshot by shotInputTick.

Member Function Documentation

◆ acceptReliableSequence()

bool Client::acceptReliableSequence ( uint16_t seq)
private

Sliding-window dedup helper.

Returns true if the caller should dispatch this sequence (i.e. it's new); false if it's a duplicate or too old to track.
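The sliding-window rule can be sketched as a self-contained struct, reconstructed from the documented fields (reliableHighestSeen_, reliableSeenBitmask_, reliableHasAny_); names and details here are illustrative, not the actual member code:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative sketch of the 64-sequence sliding-window dedup.
// accept() returns true when the sequence is new and should be dispatched.
struct ReliableDedup {
    uint16_t highest = 0;   // highest sequence seen so far
    uint64_t seenMask = 0;  // bit i set => sequence (highest - i) already seen
    bool hasAny = false;    // false until the first sequence arrives

    bool accept(uint16_t seq) {
        if (!hasAny) {                   // first ever sequence: always new
            hasAny = true;
            highest = seq;
            seenMask = 1;                // bit 0 represents highest itself
            return true;
        }
        int16_t delta = static_cast<int16_t>(seq - highest); // wrap-aware
        if (delta > 0) {                 // newer than anything seen: shift window
            seenMask = (delta >= 64) ? 0 : (seenMask << delta);
            seenMask |= 1;
            highest = seq;
            return true;
        }
        int16_t age = static_cast<int16_t>(-delta);
        if (age >= 64) return false;     // too old to track: treat as duplicate
        uint64_t bit = 1ULL << age;
        if (seenMask & bit) return false; // duplicate
        seenMask |= bit;                 // late but new: mark and dispatch
        return true;
    }
};
```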


◆ applyInterpolatedTransforms()

void Client::applyInterpolatedTransforms ( Registry & registry)

PR-19: overwrite Position.value (and InputSnapshot.yaw) for every non-local entity with an InterpolationBuffer to its interpolated render-time value.

Runs once per frame, BEFORE any renderer / particle / sfx / tracer code reads pos.value — every visual consumer thereafter sees a single, consistent interpolated source of truth.

Pre-PR-19 the renderer interpolated at 3 specific call sites while tracers, ribbon trails, smoke emitters, and beam endpoints kept reading raw pos.value. At 128 Hz × 2-snapshot delay (~16 ms) that's ~6-unit visible separation between the body and effects originating from "where the body really is right now".

Why mutate Position.value in place rather than ship a separate RenderPosition component? Two reasons: (1) every consumer already reads pos.value, no per-call-site touch-up needed; (2) the next snapshot apply unconditionally overwrites pos.value with the server-authoritative value (entt's continuous_loader), so the mutation has no lasting effect on registry state — it's effectively a per-frame derived view. Concrete cycle:

  1. Snapshot apply → pos.value = server's value at tick T.
  2. recordInterpolationSamples reads server value, appends.
  3. (this method) → pos.value = interp_sample(buffer, renderTime).
  4. Renderer + particles + tracers + sfx read pos.value.
  5. Next snapshot apply re-overwrites pos.value with new server value (step 1 again).

No-op when render-delay interp is disabled (interpDelaySnapshots_ == 0) or no buffered playback yet (renderTimeNs == 0). Excludes local player (which has no InterpolationBuffer because recordInterpolationSamples filters local out, and which is driven by client-side prediction anyway).


◆ consumeSnapshotApplied()

bool Client::consumeSnapshotApplied ( )
inline nodiscard noexcept

Whether a snapshot was applied since the last call to consumeSnapshotApplied().

Phase 5b: the game thread reads this each iterate() to know when to trigger reconciliation. Self-resets so a single snapshot only triggers a single reconciliation pass.
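A minimal sketch of the consume-on-read pattern (names illustrative, not the real members):

```cpp
#include <cassert>

// Illustrative self-resetting flag: one snapshot apply triggers
// exactly one reconciliation pass on the game thread.
struct SnapshotFlag {
    bool applied = false;
    void markApplied() { applied = true; }  // dispatch side, on snapshot apply
    bool consume() {                        // game-thread side, each iterate()
        bool was = applied;
        applied = false;                    // self-reset on read
        return was;
    }
};
```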

◆ dispatchMessage()

void Client::dispatchMessage ( const uint8_t * data,
Uint32 size,
Registry & registry )
private

Decode and dispatch a single complete framed message.

Called by poll(registry) after pulling the bytes out of recvBuf.


◆ getInterpolationRenderTimeNs()

Uint64 Client::getInterpolationRenderTimeNs ( ) const
nodiscard

Render time the renderer should display non-local entities at.

PR-11 (server-perf): Valorant / Fortnite / Source-engine cl_interp style render-delay interpolation. Returns SDL_GetTicksNS() − delayTicks × snapshotIntervalNs() where delayTicks is read from GROUP2_CLIENT_INTERP_DELAY_SNAPSHOTS (default 2) and snapshotIntervalNs is the EMA of the last two snapshot apply times.

Returns 0 until two snapshots have been applied — callers treat 0 as "no buffered playback yet, fall back to the Phase-5a alpha path" (see entity_interpolation::sample).

Why N=2? At 32 Hz snapshot rate, 2 ticks ≈ 62.5 ms — enough that the renderer always has at least one buffered "future" sample to interpolate toward, so a single dropped snapshot is invisible. Trade-off: visual feedback for remote players is delayed by 62.5 ms from server truth, but lag-comp on the server already accounts for the client's display-time-to-fire-time gap (Phase 6 lag comp).
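A hedged sketch of the bookkeeping, assuming a simple two-sample averaging EMA; InterpClock and its fields are illustrative stand-ins for the real members:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative cl_interp-style render-delay clock.
struct InterpClock {
    uint64_t lastApplyNs = 0;            // most recent snapshot apply time
    uint64_t prevApplyNs = 0;            // the one before it
    uint64_t intervalEmaNs = 31'250'000; // ~32 Hz fallback before data arrives
    int delaySnapshots = 2;              // render this many intervals behind

    void onSnapshotApplied(uint64_t nowNs) {
        if (lastApplyNs != 0) {
            uint64_t sample = nowNs - lastApplyNs;
            intervalEmaNs = (intervalEmaNs + sample) / 2; // crude EMA
        }
        prevApplyNs = lastApplyNs;
        lastApplyNs = nowNs;
    }

    // 0 means "no buffered playback yet"; caller falls back to the alpha path.
    uint64_t renderTimeNs(uint64_t nowNs) const {
        if (prevApplyNs == 0) return 0;  // fewer than two snapshots applied
        return nowNs - static_cast<uint64_t>(delaySnapshots) * intervalEmaNs;
    }
};
```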


◆ getLocalPlayerEntity()

std::optional< entt::entity > Client::getLocalPlayerEntity ( ) const
inline nodiscard

PR-21: server-assigned local-player entity (post-mapping).

Returns nullopt before the first snapshot containing the local player has applied (i.e. before the localPlayerReadyFn callback has fired). After that, returns the LOCAL entt::entity (mapped through continuous_loader) that the bot or game thread can use to find its own player in the registry.

◆ getNetStats()

const NetworkStats & Client::getNetStats ( ) const
inline

Access current network statistics.

◆ getServerAckedClientTick()

uint32_t Client::getServerAckedClientTick ( ) const
inline nodiscard noexcept

Latest server-acked client predict tick.

Phase 5b: when the server applies an INPUT packet stamped with client-tick T, then later sends a snapshot, the snapshot's local-player position represents state-after-applying-input-T. The client uses this value to know the tick from which to start replaying stored inputs for reconciliation. 0 if no snapshot has been applied yet, or if the local player wasn't in the most recent snapshot.

◆ getSimulatedLatencyMs()

int Client::getSimulatedLatencyMs ( ) const
inline nodiscard noexcept

Get the currently-effective simulated total RTT.

◆ getSimulatedLossPercent()

int Client::getSimulatedLossPercent ( ) const
inline nodiscard noexcept

Get the currently-effective simulated packet loss %.

◆ getSnapshotAlpha()

float Client::getSnapshotAlpha ( ) const
nodiscard

Render-time interpolation alpha based on snapshot timing.

Phase 5a: with the snapshot rate decoupled from the physics tick rate (Phase 4a default = 32 Hz vs 128 Hz physics), the renderer can no longer use accumulator / k_physicsDt as the lerp alpha — that span is ~7.8 ms while two consecutive snapshots are ~31 ms apart. The result was the entity stepping in 7.8 ms bursts every 31 ms.

This helper returns alpha as (now - lastSnapshotApplyNs) / (lastSnapshotApplyNs - prevSnapshotApplyNs) clamped to [0, 1]. Self-correcting if the server changes its snapshot rate; freezes at 1.0 (entity at "current" pos, no extrapolation) when a snapshot is overdue. Returns 1.0 before two snapshots have arrived (no interpolation reference yet).

Note
PR-11 supersedes this for non-local entities. When the InterpolationBuffer path is in effect, the renderer uses getInterpolationRenderTimeNs() to play back at now − delay instead of lerping forward over the most recent interval. The alpha here remains the local-player / fallback path.
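The alpha formula above, as a standalone sketch (an illustrative helper, not the member itself):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Illustrative: alpha = (now - lastApply) / (lastApply - prevApply),
// clamped to [0, 1]; 1.0 before two snapshots have arrived.
float snapshotAlpha(uint64_t nowNs, uint64_t lastApplyNs, uint64_t prevApplyNs) {
    if (prevApplyNs == 0 || lastApplyNs <= prevApplyNs)
        return 1.0f;                           // no interpolation reference yet
    float span  = static_cast<float>(lastApplyNs - prevApplyNs);
    float alpha = static_cast<float>(nowNs - lastApplyNs) / span;
    return std::clamp(alpha, 0.0f, 1.0f);      // freeze at 1 when overdue
}
```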

◆ getSnapshotIntervalNs()

Uint64 Client::getSnapshotIntervalNs ( ) const
nodiscard

Approximate snapshot interval in nanoseconds.

EMA over the last two snapshot apply times. Falls back to the default 32 Hz period (~31.25 ms) before two snapshots have arrived.

◆ init()

bool Client::init ( const char * addr,
Uint16 port,
const TransportConfig & transport = {} )

Create the TCP socket and connect to the server.

Parameters
addr: Hostname or IP address of the server.
port: TCP port the server is listening on. The UDP sidecar (Phase 3d) connects to the same port.
transport: Phase 3d: which UDP features to enable.
Returns
False on socket creation or DNS failure.

◆ networkLoop()

void Client::networkLoop ( )
private

Network-thread main loop body.


◆ onKillEvent()

void Client::onKillEvent ( KillEventCallback fn)
inline

◆ onLocalPlayerReady()

void Client::onLocalPlayerReady ( LocalPlayerReadyFn fn)
inline

◆ onMatchStateUpdate()

void Client::onMatchStateUpdate ( MatchStateUpdateFn fn)
inline

◆ onParticleEvent()

void Client::onParticleEvent ( ParticleEventCallback fn)
inline

◆ onShotDebugReport()

void Client::onShotDebugReport ( ShotDebugCallback fn)
inline

◆ poll()

bool Client::poll ( Registry & registry)

Receive and process one pending message.

Returns
True if a message was received, false if the queue is empty.

◆ recordInterpolationSamples()

void Client::recordInterpolationSamples ( Registry & registry,
Uint64 captureNs )
private

PR-11: append a sample to every replicated remote entity's InterpolationBuffer, AFTER the loader has rewritten the registry from the just-arrived snapshot.

Skips the local player (the LocalPlayer tag is set by the localPlayerReadyFn callback, which fires earlier in dispatchMessage's UPDATE_REGISTRY/_DELTA path, so by the time this runs the exclude filter is correct). No-op when interpDelaySnapshots_ is 0 (kill switch).

Parameters
registry: Client registry post-Loader::apply.
captureNs: Wall-clock timestamp to stamp on every sample — same value for every entity in the same snapshot.

◆ recvUdpDelayed()

void Client::recvUdpDelayed ( std::vector< uint8_t > && payload)
private

Enqueue an assembled UDP message into udpRecvQueue_ immediately if the simulator is off, otherwise hold it in the inbound delay queue.

Caller MUST already hold stateMutex_.


◆ send()

bool Client::send ( const void * data,
uint32_t size )

Send a raw message to the server.

Parameters
data: Pointer to the payload bytes.
size: Payload length in bytes.
Returns
False if the send fails.

◆ sendInputSnapshot()

bool Client::sendInputSnapshot ( const InputSnapshot & snap)

Push the latest input into the redundant ring and send to the server.

Each call appends snap to a small ring buffer (capacity k_inputRedundancy) and emits one INPUT packet containing the last N stored snapshots in tick order, oldest-first. The server dedups by InputSnapshot.tick against lastAppliedInputTick, so resending the last few inputs costs ~5x bandwidth on this packet type while making the input stream resilient to single-packet loss or reorder. Caller is responsible for stamping snap.tick with the current clientPredictTick before calling.
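The ring mechanics can be sketched as follows; InputSnap, kRedundancy, and pushAndFlatten are hypothetical stand-ins for InputSnapshot, k_inputRedundancy, and the real send path:

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative redundant input ring: each send appends the newest snapshot
// and flattens the last N stored snapshots in tick order, oldest first.
struct InputSnap { uint32_t tick; };

constexpr size_t kRedundancy = 5;

struct InputRing {
    std::array<InputSnap, kRedundancy> ring{};
    size_t head = 0;   // next write index, wraps mod kRedundancy
    size_t count = 0;  // saturates at kRedundancy

    std::vector<InputSnap> pushAndFlatten(const InputSnap& snap) {
        ring[head] = snap;
        head = (head + 1) % kRedundancy;
        count = std::min(count + 1, kRedundancy);
        std::vector<InputSnap> packet;
        packet.reserve(count);
        size_t oldest = (head + kRedundancy - count) % kRedundancy;
        for (size_t i = 0; i < count; ++i)
            packet.push_back(ring[(oldest + i) % kRedundancy]);
        return packet;  // oldest-first, ready to serialize into one INPUT packet
    }
};
```

The server dedups each entry by tick, so a re-sent old input is simply ignored.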


◆ sendPing()

void Client::sendPing ( )

Send a PING packet to the server for RTT measurement.


◆ sendShotIntent()

bool Client::sendShotIntent ( std::uint32_t shotInputTick,
std::uint16_t targetClientId,
const AnimSnapshot & targetAnim )

PR-27 (netsync): send a SHOT_INTENT packet describing the client's view of the target's animation state at fire time.

Server pairs this with the corresponding INPUT (by (shooterClientId, shotInputTick)) and computes the anim-state delta against its own historical state at the rewound tick. Sent once per rising-edge of input.shooting. targetClientId = 0xFFFF when the client wasn't aiming at any specific target.


◆ sendUdpDelayed()

bool Client::sendUdpDelayed ( net::PacketHeader hdr,
const void * data,
int len )
private

Send a UDP datagram immediately if the latency simulator is off, otherwise queue it for delayed send.

Caller MUST already hold stateMutex_ (matching the existing UDP send call sites).

Returns
False if the immediate send failed; true otherwise (queued sends always optimistically return true — failures surface later from the network thread's drain).

◆ setSimulatedLatencyMs()

void Client::setSimulatedLatencyMs ( int totalMs)
noexcept

Phase 6 testing: simulate added round-trip latency.

Setting this to N causes outbound UDP datagrams to be held for N/2 ms before the kernel sees them, and incoming UDP messages to be held for N/2 ms before being delivered to the game-thread dispatch queue. The two halves combined produce an extra N ms of round-trip on top of whatever the real network has.

Range: 0–200 ms (slider-bounded; values outside the range are clamped on entry). 0 disables the simulator entirely — packets take the same fast path they did before this feature existed, no per-packet allocation, no extra mutex contention.

Why split into outbound + inbound halves? It models a symmetric real network: client→server and server→client each take half the RTT. With outbound-only delay, the server would see stale inputs but reply at full speed, leaving lag-comp's RTT/2 rewind formula systematically under-correcting by the inbound half. Symmetric delay matches the formula and gives the same hit-feel as a real WAN player at the slider's RTT.
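A minimal sketch of one direction's half-RTT hold queue, assuming a monotonic nanosecond clock; DelayedPacket and LatencySim are illustrative, not the real DelayedOutbound machinery:

```cpp
#include <cassert>
#include <cstdint>
#include <deque>
#include <utility>
#include <vector>

// Illustrative half-RTT delay queue for one direction.
struct DelayedPacket {
    uint64_t releaseNs;           // earliest time the packet may leave the queue
    std::vector<uint8_t> bytes;
};

struct LatencySim {
    int totalRttMs = 0;
    std::deque<DelayedPacket> queue;

    // Hold each direction for half the simulated RTT.
    void enqueue(uint64_t nowNs, std::vector<uint8_t> bytes) {
        uint64_t oneWayNs = static_cast<uint64_t>(totalRttMs) * 1'000'000ULL / 2;
        queue.push_back({nowNs + oneWayNs, std::move(bytes)});
    }

    // Drain everything whose hold time has elapsed (FIFO preserves order).
    std::vector<std::vector<uint8_t>> drain(uint64_t nowNs) {
        std::vector<std::vector<uint8_t>> ready;
        while (!queue.empty() && queue.front().releaseNs <= nowNs) {
            ready.push_back(std::move(queue.front().bytes));
            queue.pop_front();
        }
        return ready;
    }
};
```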

◆ setSimulatedLossPercent()

void Client::setSimulatedLossPercent ( int percent)
noexcept

Phase 6 testing: simulate UDP packet loss.

Setting this to N makes each outbound and each inbound UDP datagram an independent N% Bernoulli drop. With redundancy disabled (PING/PONG) you'll see N% loss directly. With redundancy on (5-input INPUT packets, 3x reliable events, fragmented snapshots) effective loss is much lower:

  • Inputs: a tick is lost only if 5 consecutive packets are dropped — at 50% loss that's ~3% per-tick loss.
  • Reliable events: lost only if all 3 redundant copies are dropped — at 50% loss that's ~12.5% per-event loss.
  • Fragmented snapshots: any single fragment loss kills the whole snapshot — at 50% loss with 5 fragments that's ~97% per-snapshot loss. Use small values (5–15%) for testing snapshot resilience.

Range: 0–100. 0 disables. Higher values are accepted but will effectively stall the connection (the slider in the debug UI caps at 50%).
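The arithmetic behind the three bullets, as a small sketch (assuming independent per-datagram drops with probability p):

```cpp
#include <cassert>
#include <cmath>

// Effective loss rates under redundancy, given independent drop probability p.
double inputTickLoss(double p)     { return std::pow(p, 5); }  // all 5 copies dropped
double reliableEventLoss(double p) { return std::pow(p, 3); }  // all 3 copies dropped
double snapshotLoss(double p, int fragments) {
    return 1.0 - std::pow(1.0 - p, fragments);  // any one fragment lost kills it
}
```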

◆ shouldDropPacketLocked()

bool Client::shouldDropPacketLocked ( )
private

Roll the loss RNG.

Caller MUST already hold stateMutex_.

Returns
True when the caller should treat the packet as dropped.

◆ shutdown()

void Client::shutdown ( )

Close the socket and release the resolved address.

◆ updateStats()

void Client::updateStats ( float dt)

Update bandwidth stats. Call once per frame with the frame delta time.

Member Data Documentation

◆ bytesRecvWindow

uint64_t Client::bytesRecvWindow = 0
private

◆ bytesSentWindow

uint64_t Client::bytesSentWindow = 0
private

◆ connectionId_

uint32_t Client::connectionId_ = 0
private

◆ inputRing_

std::array<InputSnapshot, k_inputRedundancy> Client::inputRing_ {}
private

◆ inputRingCount_

size_t Client::inputRingCount_ = 0
private

Valid entries in ring; saturates at k_inputRedundancy.

◆ inputRingHead_

size_t Client::inputRingHead_ = 0
private

Next write index, wraps mod k_inputRedundancy.

◆ interpDelaySnapshots_

int Client::interpDelaySnapshots_ = 2
private

◆ k_defaultSnapshotIntervalNs

Uint64 Client::k_defaultSnapshotIntervalNs = 1'000'000'000ULL / 128ULL
static constexpr private

◆ k_inputRedundancy

size_t Client::k_inputRedundancy = 5
static constexpr

Number of recent inputs included in each INPUT packet for redundancy.

At 128 Hz client tick rate, 5 inputs covers ~40 ms of redundancy — enough to recover from single-packet loss without retransmission, at the cost of ~5x INPUT-packet payload (still tiny: ~200 bytes/packet).

◆ keyframePayload_

std::vector<uint8_t> Client::keyframePayload_
private

◆ keyframeTick_

std::uint32_t Client::keyframeTick_ = 0
private

◆ killEventFn_

KillEventCallback Client::killEventFn_
private

Called for each replicated kill event from server.

◆ lastSnapshotApplyNs_

Uint64 Client::lastSnapshotApplyNs_ = 0
private

◆ localPlayerEntity

std::optional<entt::entity> Client::localPlayerEntity
private

The local player's entity, once assigned by the server.

◆ localPlayerReadyFn

LocalPlayerReadyFn Client::localPlayerReadyFn
private

Called once the server assigns a player entity.

◆ localPlayerReadyNotified

bool Client::localPlayerReadyNotified = false
private

True if localPlayerReadyFn has been called.

◆ matchStateUpdateFn_

MatchStateUpdateFn Client::matchStateUpdateFn_
private

Called whenever a MATCH_STATE packet is received.

◆ msgStream

MessageStream Client::msgStream {nullptr}
private

Framed message stream for server communication.

◆ networkThread_

std::thread Client::networkThread_
private

◆ outbound_

OutboundQueue Client::outbound_
private

◆ particleEventFn_

ParticleEventCallback Client::particleEventFn_
private

Called for each replicated particle event from server.

◆ prevSnapshotApplyNs_

Uint64 Client::prevSnapshotApplyNs_ = 0
private

◆ registryLoader

std::optional<registry_serialization::Loader> Client::registryLoader
private

◆ registryUpdatesWindow

uint32_t Client::registryUpdatesWindow = 0
private

◆ reliableHasAny_

bool Client::reliableHasAny_ = false
private

False until the first reliable event arrives.

◆ reliableHighestSeen_

uint16_t Client::reliableHighestSeen_ = 0
private

Phase 3d-5: sliding-window bitset for ReliableOrdered channel dedup.

Each event arrives k_reliableRedundancy times; only the first occurrence triggers dispatch. The window is 64 sequences wide, enough to cover RTT × redundancy at any reasonable network speed. Sequences older than that get dropped (very rare — would require 64 events to arrive during one RTT).

◆ reliableSeenBitmask_

uint64_t Client::reliableSeenBitmask_ = 0
private

◆ serverAckedClientTick_

uint32_t Client::serverAckedClientTick_ = 0
private

◆ serverAddr

NET_Address* Client::serverAddr = nullptr
private

Resolved server address.

◆ serverUdpAddr_

net::UdpEndpointAddr Client::serverUdpAddr_
private

◆ shotDebugFn_

ShotDebugCallback Client::shotDebugFn_
private

PR-20: called for each SHOT_DEBUG_REPORT from server.

◆ shouldStop_

std::atomic<bool> Client::shouldStop_ {false}
private

◆ simLatInbound_

std::deque<DelayedInbound> Client::simLatInbound_
private

◆ simLatOutbound_

std::deque<DelayedOutbound> Client::simLatOutbound_
private

◆ simLossRng_

std::mt19937 Client::simLossRng_ {}
private

PRNG for the loss simulator.

Always accessed under stateMutex_ (every loss-roll site already holds it for other reasons). Seeded once in init() so behaviour varies run-to-run; not cryptographically secure, but the simulator is a debug aid, not a security boundary.

◆ simulatedLatencyMs_

std::atomic<int> Client::simulatedLatencyMs_ {0}
private

Total simulated RTT in ms (slider value, 0–200).

Atomic so the UI thread can write while the network thread reads.

◆ simulatedLossPercent_

std::atomic<int> Client::simulatedLossPercent_ {0}
private

Per-direction independent UDP-drop probability (slider value, 0–100).

Each outbound and each inbound datagram rolls against this; rolls below the threshold are dropped silently.

◆ snapshotAppliedFlag_

bool Client::snapshotAppliedFlag_ = false
private

◆ snapshotIntervalEmaNs_

Uint64 Client::snapshotIntervalEmaNs_ = k_defaultSnapshotIntervalNs
private

◆ socketDead_

std::atomic<bool> Client::socketDead_ {false}
private

Latched-true once the network thread observes a socket error.

poll(registry) checks this and reports false to the game thread, so the existing "server died" disconnect path still works.

◆ stateMutex_

std::mutex Client::stateMutex_
private

◆ stats

NetworkStats Client::stats
private

Live network metrics.

◆ statsAccumulator

float Client::statsAccumulator = 0.0f
private

◆ transportConfig_

TransportConfig Client::transportConfig_
private

◆ udpEndpoint_

net::UdpEndpoint Client::udpEndpoint_
private

◆ udpInputSequence_

uint16_t Client::udpInputSequence_ = 0
private

Per-channel sequence for INPUT datagrams.

◆ udpRecvQueue_

std::vector<std::vector<uint8_t> > Client::udpRecvQueue_
private

UDP-received payloads waiting for the game thread to dispatch.

Filled by the network thread under stateMutex_; drained by Client::poll. Format of each entry is [PacketType][rest] — same as a complete framed message off the TCP path so the same dispatchMessage() handles both.

◆ unreliableReassembler_

net::FragmentReassembler Client::unreliableReassembler_
private

Phase 3d-4: reassembly buffer for fragmented snapshot datagrams on the Unreliable channel.

Tracks one in-progress reassembly per client (the most-recent sequence). Older fragments dropped via the FragmentReassembler's drop-stale rule.


The documentation for this class was generated from the following files: