RPC/Remote¶
1. Purpose & Motivation¶
Problem Solved¶
The RPC/Remote domain provides transparent remote access to Viper databases, commit systems, and services over network connections. It solves the challenge of building distributed Viper applications where clients and servers run on different machines or processes.
Key problems addressed:
- Distributed Database Access - Access Viper databases remotely without changing application code
- Remote Commit Collaboration - Enable distributed commit workflows across network boundaries
- Service-Oriented Architecture - Expose Viper function pools as network services
- Type-Safe Remote Calls - Maintain Viper's strong typing guarantees across RPC boundaries
- Binary Protocol Efficiency - Optimize network serialization for Viper's type system
Without RPC/Remote, applications would be limited to:
- Local-only database access (single-process architecture)
- Manual network protocol implementation (error-prone, no type safety)
- No transparent remote access (breaking Viper's polymorphic interface design)
Use Cases¶
Developers use RPC/Remote when they need to:
- Multi-Tier Applications
  - Client applications accessing centralized Viper databases
  - Separation of presentation layer (client) from data layer (server)
  - Example: Desktop applications connecting to shared database server
- Distributed Commit Systems
  - Collaborative editing across network (multiple clients, one commit database)
  - Remote commit synchronization and mutation tracking
  - Example: Multi-user document editing with commit history
- Microservices Architecture
  - Expose Viper function pools as network services
  - Service discovery and remote function invocation
  - Example: Computation service exposing mesh processing functions
- Remote Database Administration
  - Query available databases on remote servers
  - Centralized database management
  - Example: Database browser connecting to multiple servers
- Cross-Machine Workflows
  - Distribute workloads across multiple machines
  - Leverage remote compute resources
  - Example: Rendering farm with central asset database
Position in Architecture¶
Infrastructure Layer - C++ Only
RPC/Remote is a C++ infrastructure domain with no Python bindings. It provides the network transport layer for distributed Viper applications but is not directly accessible from Python.
┌─────────────────────────────────────────────────────────┐
│ Application Layer │
│ (C++ applications using Viper) │
└─────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────┐
│ Functional Layer (Remote Proxies) │
│ DatabaseRemote | CommitDatabaseRemote | ServiceRemote │
│ (implements Databasing/CommitDatabasing) │
└─────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────┐
│ RPC Protocol Layer (Infrastructure) │
│ RPCConnection | RPCPacket | RPCMessageReader │
│ (generic packet-based RPC) │
└─────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────┐
│ Socket Layer (OS) │
│ TCP/IP (AF_INET) | Unix Sockets (AF_LOCAL) │
└─────────────────────────────────────────────────────────┘
Architectural characteristics:
- 2-Layer Design: Protocol layer (generic RPC) + Remote layer (typed proxies)
- C++ Infrastructure: No Python exposure (used internally by C++ servers/clients)
- Synchronous RPC: Blocking call semantics (matches local interface behavior)
- Interface Polymorphism: DatabaseRemote : public Databasing enables transparent usage
Why C++ only?
- Network infrastructure requires low-level socket control
- Used by C++ server applications (database servers, service hosts)
- Python applications use local databases or connect via other protocols (future: Python RPC client)
2. Domain Overview¶
Scope¶
RPC/Remote provides capabilities for:
- Generic RPC Protocol - Packet-based remote procedure call infrastructure
- Remote Database Access - Transparent proxy over the Databasing interface
- Remote Commit Database - Transparent proxy over the CommitDatabasing interface
- Remote Service Access - Proxy for remote function pool invocation
- Wire Protocol - Binary-safe framing with network byte order
- Client/Server Architecture - Client-side proxies, server-side dispatchers
- Synchronous RPC - Blocking call/return semantics
RPC/Remote does NOT provide:
- Asynchronous RPC (no futures/promises, always blocking)
- Python bindings (C++ infrastructure only)
- REST/HTTP protocols (custom binary protocol only)
- Connection pooling (one connection per client)
- Load balancing (application responsibility)
Key Concepts¶
1. RPC Protocol - Generic Packet-Based Remote Procedure Call¶
The foundation of RPC/Remote is a generic protocol layer that handles:
- Binary packet encoding/decoding (Command pattern)
- Network message framing (size-prefixed wire protocol)
- Protocol registration (mapping packet IDs to decoders)
- Connection management (socket lifecycle)
Core abstraction: RPCConnection orchestrates encoding, sending, receiving, and decoding packets.
2. Proxy Pattern - Transparent Remote Access¶
Remote classes implement Viper's local interfaces, enabling polymorphic usage:
// Same interface works locally or remotely
std::shared_ptr<Databasing> db;
// Local
db = DatabaseSQLite::open(path);
// Remote (transparent proxy)
db = DatabaseRemote::connect(dbName, hostname, port);
// Identical usage
db->beginTransaction(mode);
db->commit();
Benefit: Applications don't need to know whether the database is local or remote.
3. Command Pattern - 106 Typed Packets¶
Every RPC operation is encapsulated as a typed packet object:
- 76 Call packets (RPCPacketCall*): Requests with typed parameters
  - Examples: RPCPacketCallBeginTransaction, RPCPacketCallBlob, RPCPacketCallCommitData
- 30 Return packets (RPCPacketReturn*): Responses with typed results
  - Examples: RPCPacketReturnBool, RPCPacketReturnBlob, RPCPacketReturnValue
Benefit: Compile-time type safety for RPC calls (vs runtime "method string + args" approach).
4. Synchronous RPC - Blocking Call Semantics¶
RPC calls are synchronous and blocking:
auto callPacket = RPCPacketCallBeginTransaction::make(mode);
auto returnPacket = connection->call(component, callPacket); // BLOCKS
// execution resumes when server responds
Design rationale:
- Matches local interface semantics (Databasing methods are synchronous)
- Simpler client code (no async/await complexity)
- Acceptable for database operations (latency dominated by disk I/O, not network)
Trade-off: Cannot pipeline multiple requests (one at a time per connection).
5. Wire Protocol - Size-Prefixed Binary Framing¶
Network messages use a simple binary framing protocol:
┌──────────────────┬────────────────────────────┐
│ Size (8 bytes) │ Payload (Size bytes) │
│ (network order) │ (serialized packet) │
└──────────────────┴────────────────────────────┘
Details:
- Size field: uint64_t in network byte order (big-endian via htonll/ntohll)
- Payload: Serialized packet using StreamTokenBinaryCodec (type-safe binary)
- Binary-safe: No text delimiters (works with arbitrary binary data)
- Framing: Reader knows exactly how many bytes to expect (prevents partial reads)
Benefit: Reliable framing for binary data, prevents protocol desynchronization.
External Dependencies¶
USES (Foundation Layer):
- Socket (External, OS) - TCP/IP and Unix socket transport
- Stream/Codec (Medium coupling, 10+ includes) - Binary serialization (StreamTokenBinaryCodec)
- Blob Storage (Medium coupling, 8+ includes) - BlobId, Blob transport
- Type & Value (Low coupling, 5+ includes) - Value encoding/decoding
USES (Functional Layer):
- Database (High coupling, 25+ includes) - Implements Databasing interface
- Commit System (High coupling, 30+ includes) - Implements CommitDatabasing interface
- Function Pools (Medium coupling, 15+ includes) - Remote function pool access
- Services (High coupling, 20+ includes) - ServiceRemote implements service access
USED BY (Applications):
- Distributed Applications - C++ clients/servers using remote database access
- Service Hosts - Servers exposing Viper function pools over network
- Database Servers - Centralized database servers with remote client access
Coupling note: RPC/Remote has strong unidirectional coupling to Database, Commit System, and Services (implements their interfaces) but those domains do NOT depend on RPC/Remote (they define interfaces, RPC/Remote implements them).
3. Functional Decomposition¶
3.1 Sub-domains¶
1. Protocol Core¶
The foundation of RPC infrastructure, managing connections and protocol registration.
- RPCConnection - Orchestrates RPC call flow (Facade pattern)
- RPCProtocol - Registry of packet decoders (Registry pattern)
- RPCProtocols - Factory for standard protocols (Factory pattern)
Purpose: Provide generic RPC infrastructure independent of specific operations.
Key pattern: RPCConnection encapsulates complexity of encoding, sending, blocking wait, receiving, and decoding.
2. Message I/O¶
Handles network transport with state-based message reading and writing.
- RPCMessageReader - State machine for incremental socket reads (State Machine pattern)
- RPCMessageWriter - Synchronous message sending
- RPCMessageErrors - Transport-level error handling
Purpose: Abstract socket I/O with proper framing and error handling.
State transitions (RPCMessageReader):
ReadHeader → ReadData → MessageAvailable → (ReadHeader ...)
↓
EndOfStream
Critical behavior: step() handles partial socket reads (non-blocking incremental I/O).
3. Packet System¶
Implements Command pattern with 106 typed packet classes.
- RPCPacket - Base class with packetId and serialization interface
- RPCPacketEncoder - Stateless encoding (packet → Blob)
- RPCPacketDecoder - Stateless decoding (Blob → packetId + payload)
- 76 RPCPacketCall* - Request packets with typed parameters
- 30 RPCPacketReturn* - Response packets with typed results
Purpose: Type-safe RPC operations with compile-time parameter validation.
Categories of Call packets:
- Transaction: BeginTransaction, Commit, Rollback, InTransaction
- Database Metadata: CodecName, UUId, Documentation, DataVersion, Path
- Definitions: Definitions, DefinitionsHexDigest, ExtendDefinitions
- Blob Operations: Blob, BlobIds, BlobInfo, CreateBlob, ReadBlob, WriteBlob, FreezeBlob
- Blob Streaming: BlobStreamCreate, BlobStreamWrite, BlobStreamClose, BlobStreamDelete
- Blob Batch: BlobDatas, CreateBlobs, UnknownBlobIds
- Commit Operations: Commit, CommitData, CommitDatas, CommitExists, CommitHeader
- Commit Graph: ChildrenCommitIds, NephewCommitIds, FirstCommitId, LastCommitId
- Commit Mutations: CommitMutatingSet, CommitMutatingDiff, CommitMutatingUpdate
- Collection Mutations: UnionInSet, SubtractInSet, UnionInMap, SubtractInMap, UpdateInMap
- XArray Mutations: InsertInXArray, UpdateInXArray, RemoveInXArray
- Commit Getting: CommitGettingGet, CommitGettingHas, CommitGettingKeys
- Attachments: GetInAttachment, SetInAttachment, HasInAttachment, DelInAttachment, KeysInAttachment
- Functions: Function, CommitFunction
- Service: Service, Ping
- Database Selection: Databases, GetDatabase, SetDatabase, UnsetDatabase
Design decision: One packet type per operation (vs generic "call method X with args Y") ensures compile-time type safety.
4. Remote Database¶
Transparent proxy over Databasing interface for remote database access.
- DatabaseRemote - Client-facing proxy implementing Databasing (Proxy pattern)
- DatabaseRemoteRPCSideClient - RPC call orchestrator (delegates to RPCConnection)
- DatabaseRemoteRPCSideServer - Server-side dispatcher (Dispatcher pattern)
- DatabaseRemoteClientContext - Server-side client state management
- DatabaseRemoteErrors - Remote database-specific errors
Purpose: Enable transparent remote database access via polymorphic Databasing* interface.
Architecture:
DatabaseRemote (client)
↓ delegates to
DatabaseRemoteRPCSideClient
↓ uses
RPCConnection::call()
↓ network
RPCConnection::receiveCall()
↓ dispatches to
DatabaseRemoteRPCSideServer
↓ delegates to
DatabaseRemoteClientContext (manages actual Database)
Key benefit: Application code unchanged whether using DatabaseSQLite (local) or DatabaseRemote (network).
5. Remote Commit Database¶
Transparent proxy over CommitDatabasing interface for distributed commit workflows.
- CommitDatabaseRemote - Client-facing proxy implementing CommitDatabasing (Proxy pattern)
- CommitDatabaseRemoteRPCSideClient - RPC call orchestrator
- CommitDatabaseRemoteRPCSideServer - Server-side dispatcher
- CommitMutatingRemote - Implements CommitMutating for remote mutations (Direct RPC pattern)
Purpose: Enable distributed commit collaboration with remote mutation operations.
Unique aspect: CommitMutatingRemote calls RPCConnection directly (no RPCSideClient layer) for lower overhead.
CRDT-compatible mutations:
- unionInSet() / subtractInSet() - Set operations (commutative)
- unionInMap() / subtractInMap() / updateInMap() - Map operations
- insertInXArray() / updateInXArray() / removeInXArray() - Position-based array operations
Design rationale: CRDT operations transported over RPC maintain eventual consistency guarantees.
6. Remote Service¶
Proxy for remote function pool invocation.
- ServiceRemote - Client-facing service access
- ServiceRemoteRPCSideClient - RPC call orchestrator
- ServiceRemoteRPCSideServer - Server-side function pool dispatcher
- ServiceRemoteFunctionPool - Remote regular function pool wrapper
- ServiceRemoteCommitFunctionPool - Remote commit-aware function pool wrapper
- ServiceRemoteFunctionPoolFunction - Individual remote function wrapper
- ServiceRemoteCommitFunctionPoolFunction - Individual remote commit function wrapper
- ServiceRemoteRPCCaller - Low-level RPC caller
- ServiceRemoteFunctionPoolReader/Writer - Function pool serialization
- ServiceRemoteCommitFunctionPoolReader/Writer - Commit function pool serialization
Purpose: Expose Viper function pools as network services (microservices architecture).
Integration: Works with Function Pools domain (remote access to FunctionPool and CommitFunctionPool).
Workflow:
1. Client: ServiceRemote::connect() establishes connection
2. Client: queryFunctionPool() discovers available pools
3. Client: call(poolId, funcName, args) invokes remote function
4. Server: Dispatcher routes to actual function pool
5. Server: Function executes, result serialized and returned
6. Client: Receives typed Value result
7. Wire Protocol (Focus Area)¶
Binary framing protocol ensuring reliable message transport.
Framing Structure:
┌─────────────────────────────────────────────────────────┐
│ Message Frame │
├──────────────────┬──────────────────────────────────────┤
│ Size (8 bytes) │ Payload (Size bytes) │
│ uint64_t │ Serialized Packet │
│ (network order) │ (StreamTokenBinaryCodec) │
└──────────────────┴──────────────────────────────────────┘
Size Field (8 bytes):
- Type: uint64_t (unsigned 64-bit integer)
- Byte order: Network byte order (big-endian)
- Conversion: htonll() on send, ntohll() on receive (handles endianness)
- Purpose: Tells receiver exactly how many payload bytes to expect
- Validation: Server checks size < max allowed (prevents DoS via huge size claim)
Payload (variable length):
- Content: Serialized packet using StreamTokenBinaryCodec
- Structure: packetId (UUId) + packet data (Blob)
- Encoding: Binary (type-safe, efficient)
- Safety: No text delimiters (works with arbitrary binary data including NUL bytes)
Read State Machine (RPCMessageReader):
State: ReadHeader (expecting 8 bytes)
↓
Read size from network (may be partial, multiple reads)
↓
Parse size: size = ntohll(network_bytes)
↓
Validate: size < MAX_MESSAGE_SIZE (prevent DoS)
↓
State: ReadData (expecting size bytes)
↓
Read payload from network (incremental, handles partial reads)
↓
State: MessageAvailable (complete message buffered)
↓
Application calls blob() to consume message
↓
State: ReadHeader (ready for next message)
Write Flow (RPCMessageWriter):
Packet (typed C++ object)
↓
RPCPacketEncoder::encode() → Blob (binary serialization)
↓
Prepend size: uint64_t size = htonll(blob.size())
↓
Write size (8 bytes) to socket
↓
Write payload (blob.size() bytes) to socket
↓
Done (message sent)
Key Properties:
- Binary-Safe: No text delimiters (e.g., "\r\n") that could appear in binary data
- Exact Framing: Reader knows exact byte count (prevents reading too much/little)
- Incremental Reads: State machine handles partial socket reads (non-blocking friendly)
- Endianness Handling: Network byte order ensures cross-platform compatibility
- DoS Protection: Size validation prevents malicious clients claiming huge sizes
- Zero Ambiguity: Clear message boundaries (no protocol desynchronization)
Error Handling:
- Invalid size (> MAX_MESSAGE_SIZE): Exception thrown, connection closed
- Socket error (read returns -1): Exception propagated to caller
- End of stream (read returns 0): State transitions to EndOfStream, caller notified
Performance Characteristics:
- Overhead: 8 bytes per message (size field)
- Latency: Single round-trip for call/return (synchronous blocking)
- Buffering: Messages buffered fully before processing (simplifies parsing)
- Zero-copy: Blob payloads can be referenced without copying (where possible)
Comparison to Alternatives:
| Approach | Binary Safe? | Framing | Overhead | Complexity |
|---|---|---|---|---|
| Size-prefixed (Viper) | ✅ Yes | Exact | 8 bytes | Low |
| Delimiter-based ("\r\n") | ❌ No (escaping needed) | Ambiguous | ~2 bytes | High (escaping) |
| Length-prefixed varint | ✅ Yes | Exact | 1-10 bytes | Medium |
| HTTP chunked | ✅ Yes | Exact | ~10 bytes | High |
Design decision: Size-prefixed (fixed 8 bytes) chosen for simplicity and reliability over varint (smaller but variable).
3.2 Key Components (Entry Points)¶
| Component | Purpose | Entry Point File |
|---|---|---|
| RPCConnection | RPC call orchestrator (Facade) | Viper_RPCConnection.hpp |
| RPCProtocol | Packet decoder registry | Viper_RPCProtocol.hpp |
| RPCProtocols | Standard protocol factory | Viper_RPCProtocols.hpp |
| RPCMessageReader | Socket read state machine | Viper_RPCMessageReader.hpp |
| RPCMessageWriter | Socket write operations | Viper_RPCMessageWriter.hpp |
| RPCPacket | Base class for all packets | Viper_RPCPacket.hpp |
| RPCPacketEncoder | Packet serialization | Viper_RPCPacketEncoder.hpp |
| RPCPacketDecoder | Packet deserialization | Viper_RPCPacketDecoder.hpp |
| DatabaseRemote | Remote database proxy | Viper_DatabaseRemote.hpp |
| CommitDatabaseRemote | Remote commit database proxy | Viper_CommitDatabaseRemote.hpp |
| ServiceRemote | Remote service proxy | Viper_ServiceRemote.hpp |
| CommitMutatingRemote | Remote commit mutations | Viper_CommitMutatingRemote.hpp |
Note: 106 packet types (RPCPacketCall*, RPCPacketReturn*) not listed individually for brevity.
3.3 Component Map (Visual)¶
┌─────────────────────────────────────────────────────────────────┐
│ Application Layer │
│ (C++ apps using Viper) │
└─────────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────────┐
│ Remote Proxy Layer │
│ ┌──────────────┐ ┌────────────────────┐ ┌──────────────┐ │
│ │ Database │ │ CommitDatabase │ │ Service │ │
│ │ Remote │ │ Remote │ │ Remote │ │
│ │ (: Databasing) │ (: CommitDatabasing) │ │ │
│ └──────┬───────┘ └─────────┬──────────┘ └──────┬───────┘ │
│ │ │ │ │
│ ↓ ↓ ↓ │
│ ┌──────────────┐ ┌────────────────────┐ ┌──────────────┐ │
│ │ Database │ │ CommitDatabase │ │ Service │ │
│ │ RemoteRPC │ │ RemoteRPCSide │ │ RemoteRPC │ │
│ │ SideClient │ │ Client │ │ SideClient │ │
│ └──────┬───────┘ └─────────┬──────────┘ └──────┬───────┘ │
└─────────┼────────────────────┼─────────────────────┼────────────┘
│ │ │
└────────────────────┴─────────────────────┘
↓
┌─────────────────────────────────────────────────────────────────┐
│ RPC Protocol Layer │
│ │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ RPCConnection │ │
│ │ (Facade: orchestrates encode → send → recv → decode) │ │
│ └────┬──────────────────────────────────────────────┬──────┘ │
│ │ │ │
│ ↓ ↓ │
│ ┌──────────┐ ┌────────────┐ ┌──────────┐ ┌──────────────┐ │
│ │ RPC │ │ RPC │ │ RPC │ │ RPC │ │
│ │ Message │ │ Message │ │ Protocol │ │ Packet │ │
│ │ Reader │ │ Writer │ │ Registry │ │ Encoder/ │ │
│ │ (State │ │ │ │ │ │ Decoder │ │
│ │ Machine) │ │ │ │ │ │ │ │
│ └──────────┘ └────────────┘ └──────────┘ └──────────────┘ │
│ ↑ ↓ ↑ ↑↓ │
│ │ (wire protocol) │ │ │
│ │ size + payload │ │ │
│ └───────────────┬───────────────┘ │ │
│ ↓ ↓ │
│ ┌───────────────────────────────────────────────────────┐ │
│ │ 106 Packet Types (Command Pattern) │ │
│ │ 76 Call: CallBeginTransaction, CallBlob, ... │ │
│ │ 30 Return: ReturnBool, ReturnBlob, ReturnValue, ... │ │
│ └───────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────────┐
│ Socket Layer (OS) │
│ TCP/IP (AF_INET, hostname:port) │
│ Unix Sockets (AF_LOCAL, /path/to/socket) │
└─────────────────────────────────────────────────────────────────┘
3.4 Design Patterns¶
1. Proxy Pattern (Remote → Databasing/CommitDatabasing)
- Intent: Provide transparent remote access to local interfaces
- Structure: DatabaseRemote : public Databasing delegates all methods to _client
- Benefit: Polymorphic usage (app doesn't know if database is local or remote)
- Example: src/Viper/Viper_DatabaseRemote.cpp:54-77 (all methods delegate)
2. Command Pattern (106 RPCPacket types)
- Intent: Encapsulate RPC operations as typed objects
- Structure: Each operation = one packet class with typed parameters
- Benefit: Compile-time type safety for RPC calls
- Example: RPCPacketCallBeginTransaction::make(mode) creates command
3. Registry Pattern (RPCProtocol)
- Intent: Map packet IDs to decoder functions for extensibility
- Structure: std::unordered_map<UUId, Decoder> where Decoder = std::function<...>
- Benefit: New packets can be registered at runtime
- Example: src/Viper/Viper_RPCProtocol.hpp:29 (registerDecoder)
4. State Machine Pattern (RPCMessageReader)
- Intent: Handle incremental socket reads with buffering
- States: ReadHeader → ReadData → MessageAvailable → EndOfStream
- Benefit: Robust handling of partial reads (non-blocking friendly)
- Example: src/Viper/Viper_RPCMessageReader.cpp:32-58 (step() transitions)
5. Facade Pattern (RPCConnection)
- Intent: Simplify complex RPC workflow
- Structure: Orchestrates RPCPacketEncoder, RPCMessageWriter, RPCMessageReader, RPCProtocol
- Benefit: Single call(packet) method hides encoding/socket/decoding complexity
- Example: src/Viper/Viper_RPCConnection.cpp:45-71 (call method)
6. Dispatcher Pattern (RPCSideServer::_call)
- Intent: Route incoming RPC calls to appropriate handlers
- Structure: Giant if/else chain checking callPacket->packetId
- Benefit: Centralized request routing
- Example: src/Viper/Viper_DatabaseRemoteRPCSideServer.cpp:64-100 (dispatcher)
7. Factory Pattern (RPCProtocols)
- Intent: Create protocol instances with registered decoders
- Structure: 3 factory functions (remoteDatabasing, remoteCommitDatabasing, remoteService)
- Benefit: Pre-configured protocols ready to use
- Example: src/Viper/Viper_RPCProtocols.hpp:14-16 (factory methods)
8. Strategy Pattern (StreamCodecInstancing)
- Intent: Pluggable encoding strategy
- Structure: streamCodecInstancing member in RPCConnection (default: StreamTokenBinaryCodec)
- Benefit: Can swap codec for different serialization format
- Example: src/Viper/Viper_RPCConnection.cpp:31 (codec instance)
4. Developer Usage Patterns (Practical)¶
Note: RPC/Remote has NO unit tests. All examples extracted from real C++ implementations (exp/ project and Viper source code).
4.1 Core Scenarios¶
Scenario 1: Remote Database Client (Unix Socket)¶
When to use: Connect to a local RPC database server via Unix socket (same machine, inter-process communication)
Source: exp/Exp_Test_DatabaseRemote.cpp:49-53
#include "Exp_Database.hpp"
int main() {
// Connect to database server via Unix socket
std::filesystem::path socketPath = "/tmp/viper_db.sock";
std::string databaseName = "mydb";
auto db = Exp::Database::connect(databaseName, socketPath);
// Use exactly like local database (transparent proxy)
db->beginTransaction(Viper::DatabaseTransactionMode::Exclusive);
// ... perform database operations ...
// (blobs, attachments, definitions - all work transparently)
db->commit();
db->close();
return 0;
}
Key APIs: Database::connect(dbName, socketPath), transparent Databasing interface
Benefits:
- Transparent remote access: Same API as DatabaseSQLite::open()
- Unix socket: Fast IPC (no network overhead, local machine only)
- Process isolation: Database server runs in separate process (crash isolation)
Implementation detail: socketPath must point to active server socket (server must be running).
Scenario 2: Remote Database Client (TCP/IP)¶
When to use: Connect to a remote RPC database server over network (different machines)
Source: exp/Exp_Test_DatabaseRemote.cpp:52, exp/Exp/Exp_DatabaseRemote.cpp:18-24, 43-52
#include "Exp_Database.hpp"
int main() {
// Connect to database server via TCP/IP
std::string hostname = "192.168.1.100"; // or "db.example.com"
std::string port = "54322";
std::string databaseName = "mydb";
auto db = Exp::Database::connect(databaseName, hostname, port);
// Identical usage to Unix socket (polymorphic interface)
db->beginTransaction(Viper::DatabaseTransactionMode::Exclusive);
// ... database operations ...
db->commit();
db->close();
return 0;
}
Key APIs: Database::connect(dbName, hostname, port), Socket::makeActiveInet()
Network settings (applied automatically):
- setNoSigPipe(true): Prevent SIGPIPE on broken connection (handle as exception instead)
- setNoDelay(true): Enable TCP_NODELAY (disable Nagle algorithm for low latency)
Benefits:
- Network transparency: Access databases across network boundaries
- Centralized data: Multiple clients accessing shared database server
- Same code: Application code unchanged vs Unix socket version
Performance note: Network latency becomes dominant factor (typically 1-10ms round-trip).
Scenario 3: List Remote Databases¶
When to use: Query available databases on a remote server (discovery, database browser)
Source: exp/Exp/Exp_Database.cpp:29-34, exp/Exp/Exp_DatabaseRemote.cpp:10-24
#include "Viper_DatabaseRemote.hpp"
#include <iostream>
int main() {
// List databases via Unix socket
std::filesystem::path socketPath = "/tmp/viper_db.sock";
auto databases = Viper::DatabaseRemote::databases(socketPath);
std::cout << "Available databases (local):" << std::endl;
for (auto const & dbName : databases) {
std::cout << " - " << dbName << std::endl;
}
// List databases via TCP/IP
std::string hostname = "192.168.1.100";
std::string port = "54322";
auto remoteDatabases = Viper::DatabaseRemote::databases(hostname, port);
std::cout << "Available databases (remote):" << std::endl;
for (auto const & dbName : remoteDatabases) {
std::cout << " - " << dbName << std::endl;
}
return 0;
}
Key APIs: DatabaseRemote::databases(socketPath), DatabaseRemote::databases(hostname, port)
Characteristics:
- Static methods: No database connection required (lightweight query)
- Temporary connection: Creates connection, queries, closes immediately
- Server-side: Server returns list of databases in configured directory
Use case: Database browser UI showing available databases before opening.
Scenario 4: RPC Server-Side Handler¶
When to use: Implement a database RPC server (host databases for remote clients)
Source: src/Viper/Viper_DatabaseRemoteRPCSideServer.cpp:40-52 (inferred server loop)
#include "Viper_DatabaseRemoteRPCSideServer.hpp"
#include "Viper_DatabaseRemoteClientContext.hpp"
#include "Viper_Socket.hpp"
#include "Viper_Logging.hpp"
int main() {
// Create server socket (Unix or TCP)
std::filesystem::path socketPath = "/tmp/viper_db.sock";
auto serverSocket = Viper::Socket::makePassiveLocal(socketPath);
// Configure databases directory
std::filesystem::path databasesPath = "/var/lib/viper/databases";
auto logging = Viper::Logging::make();
while (true) {
// Accept client connection (blocking)
auto clientSocket = serverSocket->accept();
// Create server-side context (manages client state)
auto clientContext = std::make_shared<Viper::DatabaseRemoteClientContext>(
databasesPath
);
// Create RPC server handler for this client
auto server = Viper::DatabaseRemoteRPCSideServer::make(
clientSocket,
clientContext
);
// Server loop: receive calls, dispatch, send returns
// (blocks until client disconnects)
server->loop(logging);
// Cleanup: rollback uncommitted transactions
server->cleanup();
logging->info("Client disconnected");
}
return 0;
}
Key APIs:
- DatabaseRemoteRPCSideServer::make(socket, context)
- loop(logging) - Blocking server loop
- cleanup() - Rollback uncommitted transactions
Server workflow:
1. accept() - Wait for client connection
2. Create DatabaseRemoteClientContext - Manages client's open database
3. Create DatabaseRemoteRPCSideServer - Handles RPC protocol
4. loop() - Dispatcher loop (receive call → dispatch → send return)
5. cleanup() - Ensure transactional integrity on disconnect
Pattern: One thread per client connection (simple threading model).
Scenario 5: Low-Level RPC Call Flow¶
When to use: Understand internal RPC mechanics (usually hidden by proxies, useful for debugging or custom protocols)
Source: src/Viper/Viper_RPCConnection.cpp:45-71, src/Viper/Viper_DatabaseRemoteRPCSideClient.cpp:60-73
#include "Viper_RPCConnection.hpp"
#include "Viper_RPCPackets.hpp"
#include "Viper_RPCProtocols.hpp"
#include "Viper_Socket.hpp"
int main() {
// Create connection (client-side)
auto socket = Viper::Socket::makeActiveInet("127.0.0.1", "54322");
auto protocol = Viper::RPCProtocols::remoteDatabasing();
auto connection = Viper::RPCConnection::make(socket, protocol);
// Prepare RPC call packet
auto mode = Viper::DatabaseTransactionMode::Exclusive;
auto callPacket = Viper::RPCPacketCallBeginTransaction::make(mode);
// Synchronous RPC call (blocks until server responds)
std::string component = "MyClient";
Viper::RPCConnection::Info info;
auto returnPacket = connection->call(component, callPacket, &info);
// Extract result from return packet
bool success = Viper::RPCSideClientCall::returnVoid(returnPacket);
// Log call info (optional)
std::cout << connection->info(
info.callPacket, info.callBlobSize,
info.returnPacket, info.returnBlobSize
) << std::endl;
// Output: [127.0.0.1:54322]:1:BeginTransaction(Exclusive) -> Void [42 -> 18]
return 0;
}
Key APIs:
- RPCConnection::call(component, packet, info) - Synchronous RPC call
- RPCPacketCallXXX::make(...) - Create typed call packet
- RPCSideClientCall::returnType(packet) - Extract result from return packet
Call flow:
1. Encode: RPCPacketEncoder::encode(callPacket) → Blob
2. Send: messageWriter->send(blob) → Socket
3. Block: while (messageReader->step()) {} - Wait for response
4. Receive: messageReader->blob() → Return packet blob
5. Decode: protocol->decode(returnPacketBlob) → Return packet object
6. Extract: RPCSideClientCall::returnVoid(returnPacket) → Typed result
Info struct: Optional diagnostic info (packet sizes, call count).
Scenario 6: Remote Commit Mutations¶
When to use: Mutate commit data over RPC (used internally by ServiceRemote for remote commit function calls)
Source: src/Viper/Viper_CommitMutatingRemote.cpp:73-96
#include "Viper_CommitMutatingRemote.hpp"
#include "Viper_RPCConnection.hpp"
#include "Viper_StreamTokenBinaryCodec.hpp"
#include "Viper_Logging.hpp"
// Typically created by ServiceRemote, shown here for illustration
int main() {
auto connection = /* RPCConnection to service */;
auto definitions = /* Service definitions */;
auto streamCodec = Viper::StreamTokenBinaryCodec::Instance();
auto logging = Viper::Logging::make();
// Create remote mutating proxy
auto mutating = Viper::CommitMutatingRemote::make(
connection,
definitions,
streamCodec,
logging
);
// Query attachment keys
auto attachment = /* Attachment */;
auto keys = mutating->keys(attachment);
// Check if key exists
auto key = /* ValueKey */;
bool exists = mutating->has(attachment, key);
// Get value (returns Optional)
auto value = mutating->get(attachment, key);
// Set value (mutation)
auto newValue = /* Value */;
mutating->set(attachment, key, newValue);
// CRDT-compatible collection mutations
auto path = /* Path to nested collection */;
auto setToUnion = /* ValueSet */;
mutating->unionInSet(attachment, key, path, setToUnion);
// XArray position-based mutations
Viper::UUId beforePos = /* position UUID */;
Viper::UUId newPos = Viper::UUId::make();
mutating->insertInXArray(attachment, key, path, beforePos, newPos, value);
return 0;
}
Key APIs:
- CommitMutatingRemote::has(), get(), set() - Basic attachment operations
- unionInSet(), subtractInSet() - CRDT set operations
- unionInMap(), subtractInMap(), updateInMap() - CRDT map operations
- insertInXArray(), updateInXArray(), removeInXArray() - CRDT array operations
Architectural note: CommitMutatingRemote calls connection->call() directly (no RPCSideClient layer) for lower overhead.
CRDT semantics: Mutations are designed to be commutative (can be applied in any order, eventual consistency).
Scenario 7: ServiceRemote Function Call¶
When to use: Call remote functions on a Viper service (microservices architecture, remote function pools)
Source: src/Viper/Viper_ServiceRemote.hpp:29-53, 65-69
#include "Viper_ServiceRemote.hpp"
#include "Viper_Definitions.hpp"
int main() {
// Connect to remote service
std::filesystem::path socketPath = "/tmp/viper_service.sock";
auto clientDefinitions = /* Client definitions */;
auto service = Viper::ServiceRemote::connect(socketPath, clientDefinitions);
// Query available function pools
auto pools = service->functionPools();
std::cout << "Available function pools: " << pools.size() << std::endl;
// Get specific pool
auto pool = service->checkFunctionPool("GeometryTools");
// Call remote function (stateless)
Viper::UUId poolId = pool->poolId;
std::string funcName = "computeVolume";
std::vector<std::shared_ptr<Viper::Value>> args = {meshValue};
auto result = service->call(poolId, funcName, args);
// result is std::shared_ptr<Value> (typed according to function signature)
// Call remote commit function (with mutations)
auto commitMutating = /* CommitMutating */;
auto commitPoolId = /* commit pool UUID */;
auto commitResult = service->call(
commitMutating,
commitPoolId,
"applyTransform",
{transformValue}
);
service->close();
return 0;
}
Key APIs:
- ServiceRemote::connect(socketPath, definitions) - Connect to service
- functionPools() - List available pools
- checkFunctionPool(name) - Get pool by name (throws if not found)
- call(poolId, funcName, args) - Invoke remote function
- call(commitMutating, poolId, funcName, args) - Invoke remote commit function
Integration: Works with Function Pools domain (remote access to FunctionPool and CommitFunctionPool).
Workflow:
1. Connect to service with client definitions
2. Discover available function pools
3. Call functions by pool ID + function name
4. Server executes function, serializes result
5. Client receives typed Value result
Type safety: Function signatures validated against service definitions (type mismatch throws exception).
4.2 Integration Patterns¶
RPC/Remote + Database:
- DatabaseRemote : public Databasing enables polymorphic database access
- Application code unchanged (local vs remote transparent)
- All Databasing methods proxied over RPC (transactions, blobs, attachments, definitions)
RPC/Remote + Commit System:
- CommitDatabaseRemote : public CommitDatabasing for distributed commits
- CommitMutatingRemote : public CommitMutating for remote mutations
- CRDT operations (unionInSet, insertInXArray) maintain eventual consistency over network
RPC/Remote + Services + Function Pools:
- ServiceRemote exposes remote function pools
- Integrates with Function Pools domain (remote access to FunctionPool, CommitFunctionPool)
- Microservices architecture (computation services, remote rendering, etc.)
RPC/Remote + Stream/Codec:
- Uses StreamTokenBinaryCodec for packet serialization (type-safe binary)
- Wire protocol payload = encoded packet blob
- Ensures type safety across network boundary
4.3 C++ Infrastructure Note¶
NO Python Bindings: RPC/Remote is C++ infrastructure only, not exposed to Python.
Why C++ only?
- Low-level socket control (TCP options, Unix sockets)
- Server infrastructure (database servers, service hosts)
- Performance-critical network I/O
- Future: Python RPC client bindings may be added
Implication: Python applications currently use:
- Local databases (dsviper.Database wraps DatabaseSQLite)
- Future: Python could gain RPC client support (not server)
4.4 Real-World Example¶
exp/Exp_Test_DatabaseRemote.cpp: Complete client example
- Connects to database server (Unix socket or TCP/IP)
- Runs transaction with metadata, blobs, attachments
- Demonstrates transparent remote access
Generated code pattern (exp/):
- Exp::DatabaseRemote generated by kibo from Exp.dsm
- Wraps Viper::DatabaseRemote with project-specific types
- Same pattern for CommitDatabaseRemote, ServiceRemote
5. Technical Constraints¶
Error Handling & Error Taxonomy¶
RPC Protocol Errors (from Viper_RPCProtocolErrors.hpp):
RPC/Remote has comprehensive error handling with specific exceptions for different failure modes:
RPCProtocolErrors (Connection & Transport):
- RPCProtocolErrors::unknown - Unknown error occurred in RPC operation
- RPCProtocolErrors::timeout - RPC call exceeded timeout limit (blocking wait)
- RPCProtocolErrors::disconnected - Connection lost during RPC operation
- RPCProtocolErrors::invalidPacket - Malformed packet received (decoding failure)
- RPCProtocolErrors::unknownPacket - Packet type not registered in protocol
- RPCProtocolErrors::protocolViolation - Client/server protocol mismatch
SocketErrors (from Socket backend):
- SocketErrors::connectionRefused - Server not accepting connections (socket error)
- SocketErrors::addressInUse - Server port already bound (AF_INET)
- SocketErrors::broken - Socket closed unexpectedly (SIGPIPE)
- SocketErrors::timeout - Socket read/write timeout exceeded
DatabaseErrors (propagated through proxy):
- DatabaseErrors::doesNotExist - Remote database file not found
- DatabaseErrors::incompatibleSchema - Schema version mismatch
- DatabaseErrors::inTransaction - Transaction state error
- All other Database errors propagated transparently
Error Handling Pattern (C++):
try {
auto db = Viper::Database::connect(
"mydb",
"127.0.0.1",
"54322"
);
db->beginTransaction(Viper::DatabaseTransactionMode::Exclusive);
db->set(attachment, key, document);
db->commit();
db->close();
} catch (Viper::RPCProtocolErrors::disconnected const & e) {
std::cerr << "Server disconnected: " << e.what() << std::endl;
} catch (Viper::RPCProtocolErrors::timeout const & e) {
std::cerr << "RPC timeout: " << e.what() << std::endl;
} catch (Viper::SocketErrors::connectionRefused const & e) {
std::cerr << "Server not running: " << e.what() << std::endl;
} catch (Viper::DatabaseErrors::doesNotExist const & e) {
std::cerr << "Database not found on server: " << e.what() << std::endl;
} catch (std::exception const & e) {
std::cerr << "Unexpected error: " << e.what() << std::endl;
}
Exception Safety Guarantee:
- Connection: Auto-closed on exception (RAII via std::shared_ptr)
- Transactions: Remote transaction state synchronized with local state
- Partial reads: RPCMessageReader handles incomplete packets gracefully
- Network errors: All socket errors translated to typed exceptions
Error Propagation Chain:
Socket Error → RPCConnection → Remote Proxy → Client Application
| Socket Error | RPCConnection | Remote Proxy | Client Application |
|---|---|---|---|
| EINVAL | invalidPacket | DatabaseErrors | User catch block |
| EPIPE | disconnected | inTransaction | |
| EAGAIN | timeout | doesNotExist | |
Memory Model¶
Reference Semantics - All RPC objects use std::shared_ptr in C++ (reference counting):
auto db1 = Viper::Database::connect("mydb", "127.0.0.1", "54322");
auto db2 = db1; // Both refer to same remote connection
db1->close();
// db2 also disconnected (shared reference)
RAII (Resource Acquisition Is Initialization):
- RPCConnection: Closes socket on destruction (automatic disconnect)
- RPCMessageReader/Writer: Clean up state on destruction
- Remote proxies: Close connection via RAII when the last reference is destroyed
Thread Safety: - NOT thread-safe - Each RPCConnection designed for single-threaded use - Pattern: Create separate connections per thread - Socket: Not safe for concurrent reads/writes (serialized by RPCConnection)
// WRONG: Concurrent access from multiple threads
std::shared_ptr<Database> db = Database::connect(...);
std::thread t1([db]() { db->get(attachment, key1); });
std::thread t2([db]() { db->get(attachment, key2); });
// CORRECT: One connection per thread
std::thread t1([]() {
auto db = Database::connect(...);
db->get(attachment, key1);
});
std::thread t2([]() {
auto db = Database::connect(...);
db->get(attachment, key2);
});
Memory Overhead:
- RPCConnection: ~2KB (socket fd + message reader/writer + protocol registry)
- DatabaseRemote: ~500 bytes (proxy + client pointer)
- CommitDatabaseRemote: ~800 bytes (proxy + client pointer + commit state)
- Per-call overhead: ~1KB packet buffer (reused across calls)
No Python Bindings:
- C++ only - All RPC infrastructure unavailable in Python
- Implication: Python applications use local databases only
- Future consideration: Python RPC client support (server would remain C++)
Performance Characteristics¶
Synchronous Blocking Model:
- All RPC calls block until response received
- No async/await support (C++ coroutines not used)
- Implication: One RPC call completes before the next starts
Latency:
- Local socket (AF_LOCAL): ~0.1-1ms per call
- TCP/IP localhost (AF_INET): ~1-5ms per call
- TCP/IP LAN: ~5-50ms per call (depends on network)
- Serialization overhead: ~10-100µs per packet (binary codec)
Throughput:
- Sequential calls: Limited by round-trip latency
- Bulk operations: Use Database::setBatch() to reduce round-trips
- Streaming blobs: Efficient (single RPC call initiates stream, chunks sent incrementally)
Comparison Table:
| Operation | Local Database | Remote (Unix) | Remote (TCP LAN) |
|---|---|---|---|
| get(attachment, key) | ~50µs | ~0.5ms | ~10ms |
| set(attachment, key) | ~100µs | ~1ms | ~15ms |
| Transaction commit | ~500µs | ~2ms | ~20ms |
| Blob stream (10MB) | ~50ms | ~80ms | ~200ms |
Trade-offs:
- ✅ Simplicity: Synchronous model is easier to reason about
- ✅ Transparency: Same API as local database
- ❌ Latency: 10-100x slower than local access
- ❌ Concurrency: No pipeline parallelism (blocking waits)
Constraints Summary¶
Protocol Constraints:
- ❌ No async/await (synchronous blocking only)
- ❌ No bidirectional streaming (request-response only)
- ❌ No server-initiated calls (client always initiates)
- ✅ Full type safety (binary codec with schema enforcement)
Connection Constraints:
- ❌ Single-threaded per connection (not thread-safe)
- ✅ Multiple connections from same client allowed
- ❌ No connection pooling (manual management required)
- ❌ No automatic reconnection on disconnect
Transport Constraints:
- ✅ TCP/IP (AF_INET) for network access
- ✅ Unix sockets (AF_LOCAL) for local IPC
- ❌ No TLS/SSL encryption (plaintext only)
- ❌ No authentication/authorization (trust-based)
Security Constraints:
- ⚠️ NO encryption - All data sent in plaintext
- ⚠️ NO authentication - Server trusts all clients
- ⚠️ NO authorization - All clients have full database access
- Recommendation: Use only on trusted networks or add a VPN/SSH tunnel
Platform Constraints:
- ✅ macOS (AF_INET + AF_LOCAL)
- ✅ Linux (AF_INET + AF_LOCAL)
- ✅ Windows (AF_INET only, no Unix sockets)
Python Constraints:
- ❌ NO Python bindings - C++ infrastructure only
- ❌ Python applications cannot create RPC servers
- ❌ Python applications cannot use RPC clients
- ✅ Python uses local databases (dsviper.Database wraps DatabaseSQLite)
6. Cross-References¶
Related Documentation¶
Viper Documentation:
- doc/Internal_Viper.md - RPC architecture overview
- Section "RPC System" (line ~800): RPCProtocol, RPCConnection, packet registry
- Mentions 106 packet types for Database/CommitDatabase/Service protocols
- Wire protocol overview (size-prefixed binary framing)
doc/Getting_Started_With_Viper.md - No RPC examples
- Tutorial focuses on local database (no remote access covered)
- Future: Could add a "Remote Database" section to introduce RPC clients
Domain Documentation:
- doc/domains/Database.md - Local database reference
- DatabaseSQLite implements Databasing interface (local strategy)
- DatabaseRemote implements same interface (remote strategy)
- Section 5: Error taxonomy (DatabaseErrors propagated through RPC)
doc/domains/Commit_System.md - Collaborative editing foundation
- CommitDatabaseSQLite (local) vs CommitDatabaseRemote (remote)
- CommitMutatingRemote for direct RPC mutation commands (CRDT operations)
- Event sourcing over RPC (remote commit log access)
doc/domains/Stream_Codec.md - Serialization infrastructure
- StreamTokenBinaryCodec used for RPC packet encoding
- Binary format ensures type safety and efficiency
- Section 3.2: StreamTokenBinary vs StreamBinary comparison
doc/domains/Type_And_Value_System.md - Type system foundations
- All RPC packets encode/decode using Viper Value types
- Definitions sent over wire for schema synchronization
- Type safety enforced at serialization boundaries
doc/domains/Blob_Storage.md - Binary data integration
- BlobId content-addressable identifiers sent via RPC
- Remote blob streaming using RPCPacket commands
- Database blobs accessible through DatabaseRemote proxy
Dependencies¶
This domain USES (Foundation + Functional):
RPC/Remote
├── Stream/Codec (CRITICAL)
│ ├── StreamCodecInstancing - Codec selection for packet serialization
│ ├── StreamTokenBinaryCodec - Type-safe binary encoding (default)
│ └── Codec - Packet encode/decode interface
│
├── Type and Value System (STRONG)
│ ├── Definitions - Schema sent to server for validation
│ ├── Value - All RPC packet fields encoded as Values
│ ├── Attachment - Remote database attachment access
│ └── ValueKey - Document identifiers in remote operations
│
├── Database (IMPLEMENTS)
│ ├── Databasing - Interface implemented by DatabaseRemote
│ ├── DatabaseClient - RPCConnection + packet calls
│ └── DatabaseErrors - Propagated through RPC
│
├── Commit System (IMPLEMENTS)
│ ├── CommitDatabasing - Interface implemented by CommitDatabaseRemote
│ ├── CommitMutating - Interface for direct mutation RPCs
│ └── CommitMutation - CRDT operations sent over RPC
│
└── Blob Storage (WEAK)
├── BlobId - Content identifiers in remote database
└── BlobStream - Remote blob streaming support
Platform Dependencies:
├── POSIX Sockets (AF_INET, AF_LOCAL)
├── Network Byte Order (htonll/ntohll for uint64 size field)
└── C++ std::shared_ptr (RAII for connection lifecycle)
Domains that USE this domain:
Services (FUTURE)
└── ServiceRemote - Remote service access via RPC
Currently no domains depend on RPC/Remote because:
- Python applications use local databases only
- Services domain is pending documentation
- RPC is optional infrastructure (local alternatives exist)
Code Locations¶
Core RPC Protocol (120 headers + 102 implementations = 222 files):
src/Viper/
├── Viper_RPCProtocol.{hpp,cpp} - Protocol registry (UUId → Decoder)
├── Viper_RPCConnection.{hpp,cpp} - RPC call orchestration
├── Viper_RPCPacket.{hpp,cpp} - Base packet class
├── Viper_RPCPacketEncoder.{hpp,cpp} - Packet serialization
├── Viper_RPCPacketDecoder.{hpp,cpp} - Packet deserialization
├── Viper_RPCMessageReader.{hpp,cpp} - Socket read state machine
├── Viper_RPCMessageWriter.{hpp,cpp} - Socket write with framing
├── Viper_RPCClient*.{hpp,cpp} - 106 packet definitions (76 Call + 30 Return)
└── Viper_RPCProtocolErrors.{hpp,cpp} - Error taxonomy
Remote Proxies (24 headers + 40 implementations = 64 files):
src/Viper/
├── Viper_DatabaseRemote.{hpp,cpp} - Implements Databasing (18 methods)
├── Viper_DatabaseClient.{hpp,cpp} - DatabaseRemote backend
├── Viper_CommitDatabaseRemote.{hpp,cpp} - Implements CommitDatabasing (45 methods)
├── Viper_CommitDatabaseClient.{hpp,cpp} - CommitDatabaseRemote backend
├── Viper_CommitMutatingRemote.{hpp,cpp} - Direct RPC mutations
├── Viper_ServiceRemote.{hpp,cpp} - Service access
└── Viper_ServiceClient.{hpp,cpp} - ServiceRemote backend
Examples & Tests:
exp/
├── Exp_Test_DatabaseRemote.cpp - Client example (TCP + Unix socket)
└── Exp_Test_ServiceRemote.cpp - Service client example
service/
└── service_example/ - Generated RPC server/client code
Code Generation (kibo templates):
templates/cpp/
├── Database/project_DatabaseRemote.stg - Generated DatabaseRemote wrapper
├── Commit/project_CommitDatabaseRemote.stg
└── Service/project_ServiceRemote.stg
Test Coverage¶
C++ Infrastructure Tests: ❌ NONE
- No unit tests for RPC protocol layer
- No unit tests for remote proxies
- No integration tests for wire protocol
- Testing strategy: Real-world usage in exp/ directory
Python Bindings: ❌ NONE
- No Python tests (no bindings exist)
- Python applications use local databases only
Real-World Validation:
- ✅ exp/Exp_Test_DatabaseRemote.cpp - Client example validates protocol
- ✅ Production use in internal Digital Substrate projects
- ✅ Wire protocol stable since Viper 1.0
Test Gap Explanation: RPC/Remote is infrastructure-level C++ code designed for production use but not extensively unit-tested. The domain relies on:
1. Real-world validation through exp/ examples
2. Type safety from Stream/Codec and Value system
3. Protocol stability from versioned packet definitions
Recommendation: Future work should add:
- Unit tests for the RPCMessageReader state machine
- Integration tests for packet round-trip encoding
- Error injection tests for connection failures
External Dependencies¶
Required External Libraries:
- CLI11 (https://github.com/CLIUtils/CLI11) - Command-line parsing
  - Used in exp/Exp_Test_DatabaseRemote.cpp for argument parsing
  - Header-only library (MIT license)
  - Version: 2.x (bundled in Viper repository)
Platform APIs:
- POSIX sockets - AF_INET, AF_LOCAL, TCP/IP stack
- Network byte order functions - htonll(), ntohll() for uint64 conversion
- Standard C++ library - <filesystem>, <iostream>, <algorithm>
No other external dependencies - All RPC protocol code is self-contained Viper infrastructure.
Document Metadata:
- Version: 1.0
- Generated: 2025-11-15
- Methodology: v1.3.1 (Slug-Based Deterministic Naming)
- Status: Complete (RPC Protocol Layer + Remote Proxy Layer)
- C++ Files: 286 (144 headers + 142 implementations)
- Python Bindings: 0 (C++ infrastructure only)
- Test Files: 0 (real-world validation via exp/ examples)
- Design Patterns: 8 (Proxy, Command, Registry, State Machine, Facade, Dispatcher, Factory, Strategy)
- Sub-Domains: 7 (RPC Protocol, Connection, Messaging, Proxies, Wire Protocol, Clients, Servers)