group2 0.1.0
CSE 125 Group 2
Server Class Reference

TCP stream socket — receives client packets and echoes them back. More...

#include <Server.hpp>


Classes

struct  ClientNetState
 PR-2b (server-perf): bulk-snapshot every connected client's last-reported network state in one (mostly lock-free) operation. More...
struct  Connection
 Per-client connection state. More...
struct  ClientRttSnapshot
 Atomic-published snapshot of every connected client's network state. More...

Public Member Functions

bool init (const char *addr, Uint16 port, const TransportConfig &transport={})
 Bind a TCP socket to the given address and port.
void shutdown ()
 Close the socket and release resources.
void poll ()
 No-op since stage 3b moved I/O onto a dedicated network thread.
bool isEmpty ()
 Check whether the event queue is empty.
Event dequeueEvent ()
 Remove and return the next event from the queue.
void drainEvents (std::vector< Event > &out)
 PR-2b (server-perf): drain every queued event in FIFO order into out under a single mutex acquisition.
bool notifyPlayerClientId (ClientId clientId, entt::entity playerEntity)
 Update client with new entity id.
void broadcastRegistry (const Registry &registry)
 Broadcast the full registry state to all clients.
void broadcastParticleEvents (const std::vector< NetParticleEvent > &events)
 Broadcast particle events to all clients for effect replication.
int getClientCount ()
 Get the number of currently connected clients.
uint16_t getClientRttMs (ClientId clientId)
 Phase 6: get this client's most-recently-reported smoothed RTT.
void snapshotClientNetStates (std::vector< ClientNetState > &out)
void broadcastMatchStatus (MatchStatePacket packet)
 Broadcast match status updates to clients.
void broadcastKillEvents (const std::vector< NetKillEvent > &events)
 Broadcast kill events to clients for kill feed updates.
bool sendToClient (const ClientId &clientId, const void *data, int len)
 PR-20: unicast a serialized SHOT_DEBUG_REPORT (or any other already-framed payload) to a single client.
void flushAllOutbound ()
 Drain every connection's outbound queue to its socket.

Private Member Functions

void handleMessage (Connection &client, const void *data, Uint32 len)
 Dispatch a single decoded message from a client.
void acceptClients ()
 Accept up to one new client connection per call.
void disconnectClient (Connection &conn)
 Disconnect a client and clean up resources.
void readClients ()
 Read and process pending messages from all connected clients.
ClientId getNextClientId ()
 Generate next unique client ID.
bool enqueueTo (const ClientId &clientId, uint8_t replaceKey, const void *data, int len)
 Enqueue raw data for one client.
void enqueueBroadcast (uint8_t replaceKey, const void *data, int len)
 Enqueue raw data for all currently-connected clients.
void enqueueReliableEvent (const void *data, int len)
 Phase 3d-5: enqueue a reliable event for all clients.
void networkLoop ()
 Network-thread main loop body.
void handleUdpUnreliable (uint32_t connId, const net::UdpEndpointAddr &from, const uint8_t *payload, uint32_t len)
 Dispatch a UDP datagram received with channel == Unreliable.

Private Attributes

NET_Server * server = nullptr
 Underlying SDL_net server handle.
std::unordered_map< ClientId, Connection > clients
 Currently connected clients.
EventQueue eventQueue
 Incoming events awaiting processing.
ClientId nextClientId
 Counter for assigning client IDs.
net::UdpEndpoint udpEndpoint_
TransportConfig transportConfig_
std::unordered_map< uint32_t, ClientId > connIdToClient_
 UDP connection-id → ClientId lookup.
std::shared_mutex stateMutex_
std::thread networkThread_
std::atomic< bool > shouldStop_ {false}
std::shared_ptr< const std::vector< uint8_t > > pendingSnapshotPayload_
std::shared_ptr< const std::vector< uint8_t > > pendingSnapshotFramed_
std::vector< uint8_t > keyframeRaw_
std::uint32_t keyframeTick_ = 0
std::uint32_t snapshotCounter_ = 0
std::shared_ptr< const ClientRttSnapshot > rttSnapshotAtomic_
std::atomic< std::uint32_t > clientCountAtomic_ {0}

Detailed Description

TCP stream socket — receives client packets and echoes them back.

Since stage 3b, socket I/O runs continuously on a dedicated network thread started by init(); poll() remains as a no-op for source compatibility, and the game thread consumes input via drainEvents(). Extend handleMessage() with proper packet dispatch as the game protocol grows.

Member Function Documentation

◆ acceptClients()

void Server::acceptClients ( )
private

Accept up to one new client connection per call.


◆ broadcastKillEvents()

void Server::broadcastKillEvents ( const std::vector< NetKillEvent > & events)

Broadcast kill events to clients for kill feed updates.


◆ broadcastMatchStatus()

void Server::broadcastMatchStatus ( MatchStatePacket packet)

Broadcast match status updates to clients.


◆ broadcastParticleEvents()

void Server::broadcastParticleEvents ( const std::vector< NetParticleEvent > & events)

Broadcast particle events to all clients for effect replication.


◆ broadcastRegistry()

void Server::broadcastRegistry ( const Registry & registry)

Broadcast the full registry state to all clients.


◆ dequeueEvent()

Event Server::dequeueEvent ( )

Remove and return the next event from the queue.

Returns
The front event.

◆ disconnectClient()

void Server::disconnectClient ( Connection & conn)
private

Disconnect a client and clean up resources.

PR-5b (server-perf): now takes a reference. Before PR-5b it took Connection by value, which quietly made a per-disconnect std::vector<...> copy plus a deque copy. Once Connection grew an internal std::mutex (in OutboundQueue) it stopped being copyable and the by-value form no longer compiled, so switching to a reference was both faster and required. The function mutates nothing beyond the final socket destroy and udpAddr release; the caller is expected to erase the entry from clients afterwards.


◆ drainEvents()

void Server::drainEvents ( std::vector< Event > & out)

PR-2b (server-perf): drain every queued event in FIFO order into out under a single mutex acquisition.

Pre-PR-2b the game thread's tick loop did isEmpty()+dequeueEvent() per event, each acquiring the state mutex separately; at 100 bots × 128 Hz that was the dominant tick scope (12 ms p99). This collapses the per-event cost to a single lock + swap per tick.

◆ enqueueBroadcast()

void Server::enqueueBroadcast ( uint8_t replaceKey,
const void * data,
int len )
private

Enqueue raw data for all currently-connected clients.

Parameters
replaceKey: See enqueueTo.

◆ enqueueReliableEvent()

void Server::enqueueReliableEvent ( const void * data,
int len )
private

Phase 3d-5: enqueue a reliable event for all clients.

The event is pushed to each client's reliableQueue with a fresh per-client sequence and k_reliableRedundancy send budget. Network loop ships entries via UDP each cycle and decrements the budget; entries with budget==0 get popped. Falls back to TCP OutboundQueue (with replaceKey=0) when the events-over-udp toggle is off, so the same broadcast helpers work in both modes.


◆ enqueueTo()

bool Server::enqueueTo ( const ClientId & clientId,
uint8_t replaceKey,
const void * data,
int len )
private

Enqueue raw data for one client.

Parameters
replaceKey: See OutboundEntry::replaceKey (0 = always append, non-zero = replace existing entry with same key).
Returns
False if the client is not connected.

◆ flushAllOutbound()

void Server::flushAllOutbound ( )

Drain every connection's outbound queue to its socket.

Call once per server tick, after all per-tick broadcasts. Disconnects any client whose socket reports an error during drain.

Stage 3a: runs on the game thread. Stage 3b moves the actual I/O to a dedicated network thread; this method's behaviour from the gameplay layer's perspective stays the same.


◆ getClientCount()

int Server::getClientCount ( )

Get the number of currently connected clients.

Returns
The client count.

◆ getClientRttMs()

uint16_t Server::getClientRttMs ( ClientId clientId)

Phase 6: get this client's most-recently-reported smoothed RTT.

Parameters
clientId: Network client identifier.
Returns
RTT in milliseconds, or 0 if the client isn't connected or hasn't sent its first INPUT packet yet (which carries the RTT field — see the wire format note in Connection::lastReportedRttMs).

◆ getNextClientId()

ClientId Server::getNextClientId ( )
private

Generate next unique client ID.


◆ handleMessage()

void Server::handleMessage ( Connection & client,
const void * data,
Uint32 len )
private

Dispatch a single decoded message from a client.

Parameters
client: The connection the message arrived on.
data: Pointer to the message payload.
len: Payload length in bytes.

◆ handleUdpUnreliable()

void Server::handleUdpUnreliable ( uint32_t connId,
const net::UdpEndpointAddr & from,
const uint8_t * payload,
uint32_t len )
private

Dispatch a UDP datagram received with channel == Unreliable.

Reads the first byte of payload as a PacketType discriminator (mirrors the TCP wire format) and routes to the appropriate handler. Currently handles INPUT (Phase 3d-2) and PING (3d-3). All others are dropped silently — they're either meant for TCP or not yet ported to UDP.


◆ init()

bool Server::init ( const char * addr,
Uint16 port,
const TransportConfig & transport = {} )

Bind a TCP socket to the given address and port.

Parameters
addr: Hostname or IP to bind to (e.g. "127.0.0.1").
port: TCP port to listen on. The UDP sidecar binds to the same port (different protocol = different socket; OS handles the demux).
transport: Phase 3d: which UDP features to enable.
Returns
False on DNS or socket creation failure.

◆ isEmpty()

bool Server::isEmpty ( )

Check whether the event queue is empty.

Returns
True if no events are pending.

◆ networkLoop()

void Server::networkLoop ( )
private

Network-thread main loop body.

Runs continuously between init and shutdown, taking the mutex briefly for each I/O phase so the game thread isn't starved while (e.g.) draining 100 client outbound queues. Uses SDL_Delay(1) between cycles for a ~1 kHz tick — fast enough that game-thread enqueues turn into wire bytes within a millisecond, slow enough not to burn a full core.


◆ notifyPlayerClientId()

bool Server::notifyPlayerClientId ( ClientId clientId,
entt::entity playerEntity )

Update client with new entity id.

Returns
true if sent, otherwise false.

◆ poll()

void Server::poll ( )
inline

No-op since stage 3b moved I/O onto a dedicated network thread.

Kept as a public function so existing ServerGame code keeps compiling; the network thread (started by init and stopped by shutdown) does the real work continuously in the background. Safe to call from the game thread; just doesn't do anything.

◆ readClients()

void Server::readClients ( )
private

Read and process pending messages from all connected clients.


◆ sendToClient()

bool Server::sendToClient ( const ClientId & clientId,
const void * data,
int len )

PR-20: unicast a serialized SHOT_DEBUG_REPORT (or any other already-framed payload) to a single client.

Wraps the private enqueueTo so call sites in ServerGame can address the shooter without the broadcast cost paid by every player.

Returns
False if the client isn't currently connected.

◆ shutdown()

void Server::shutdown ( )

Close the socket and release resources.

◆ snapshotClientNetStates()

void Server::snapshotClientNetStates ( std::vector< ClientNetState > & out)

Member Data Documentation

◆ clientCountAtomic_

std::atomic<std::uint32_t> Server::clientCountAtomic_ {0}
private

◆ clients

std::unordered_map<ClientId, Connection> Server::clients
private

Currently connected clients.

◆ connIdToClient_

std::unordered_map<uint32_t, ClientId> Server::connIdToClient_
private

UDP connection-id → ClientId lookup.

◆ eventQueue

EventQueue Server::eventQueue
private

Incoming events awaiting processing.

◆ keyframeRaw_

std::vector<uint8_t> Server::keyframeRaw_
private

◆ keyframeTick_

std::uint32_t Server::keyframeTick_ = 0
private

◆ networkThread_

std::thread Server::networkThread_
private

◆ nextClientId

ClientId Server::nextClientId
private

Counter for assigning client IDs.

◆ pendingSnapshotFramed_

std::shared_ptr<const std::vector<uint8_t> > Server::pendingSnapshotFramed_
private

◆ pendingSnapshotPayload_

std::shared_ptr<const std::vector<uint8_t> > Server::pendingSnapshotPayload_
private

◆ rttSnapshotAtomic_

std::shared_ptr<const ClientRttSnapshot> Server::rttSnapshotAtomic_
private

◆ server

NET_Server* Server::server = nullptr
private

Underlying SDL_net server handle.

◆ shouldStop_

std::atomic<bool> Server::shouldStop_ {false}
private

◆ snapshotCounter_

std::uint32_t Server::snapshotCounter_ = 0
private

◆ stateMutex_

std::shared_mutex Server::stateMutex_
private

◆ transportConfig_

TransportConfig Server::transportConfig_
private

◆ udpEndpoint_

net::UdpEndpoint Server::udpEndpoint_
private

The documentation for this class was generated from the following files: