Developer Guide
This guide is intended for developers who wish to extend the Data Diode Connector (DDC) by adding support for new protocols, creating custom filters, or modifying the core transport logic.
Project Structure
The codebase is organized as a Rust workspace with several crates:
- framework/: Core libraries shared by all components.
  - osdd: The main application runner and trait definitions.
  - bip_utils: Utilities for the Bipartite Buffer.
  - logging: Centralized logging setup.
- protocol_handlers/: Plugins for external communication.
  - ph_kafka: Kafka Producer/Consumer implementation.
  - ph_udp: Raw UDP socket handling.
  - ph_mock_handler: Test stub.
- filters/: Content inspection plugins.
  - filter: Basic keyword filtering.
- settings/: Default configuration files.
Adding a New Protocol Handler
To add support for a new protocol (e.g., MQTT, HTTP, AMQP), you need to create a new crate in protocol_handlers/ and implement the standard traits.
1. Create the Crate
Generate a new library crate inside protocol_handlers/:
```shell
cargo new --lib protocol_handlers/ph_mqtt
```

2. Implement the Traits
DDC uses a generic system where handlers are loaded based on configuration. You will need to implement structs that handle the initialization and the run loop.
Key Traits to Implement:
For Ingress (Reading data): You typically create a struct that reads from your source and writes to a BipBufferWriter.
```rust
// Pseudo-code structure
pub struct MqttIngress {
    // ... connection details
}

impl MqttIngress {
    pub fn run(&mut self, mut writer: BipBufferWriter) {
        loop {
            let message = self.mqtt_client.recv();
            // Serialize message to internal binary format
            // Write to 'writer'
        }
    }
}
```

For Egress (Writing data): You create a struct that reads from a BipBufferReader and writes to your destination.
```rust
// Pseudo-code structure
pub struct MqttEgress {
    // ... connection details
}

impl MqttEgress {
    pub fn run(&mut self, mut reader: BipBufferReader) {
        loop {
            if let Some(data) = reader.read() {
                // Deserialize 'data'
                // Publish to MQTT broker
            }
        }
    }
}
```

3. Register the Handler
You must update the main factory logic (usually in framework/osdd or the main binary entry point) to recognize your new type = "ph_mqtt" string in Config.toml and instantiate your struct.
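The registration step can be sketched as a match on the configured type string. This is a minimal sketch, not the actual DDC factory: the enum, function name, and variants below are illustrative assumptions, and the real dispatch in framework/osdd may be structured differently.

```rust
// Hypothetical factory sketch: map the `type = "..."` string from
// Config.toml to a handler kind. Names here are assumptions.
#[derive(Debug, PartialEq)]
enum HandlerKind {
    Kafka,
    Udp,
    Mqtt,
}

fn parse_handler_type(type_str: &str) -> Option<HandlerKind> {
    match type_str {
        "ph_kafka" => Some(HandlerKind::Kafka),
        "ph_udp" => Some(HandlerKind::Udp),
        // The newly registered handler:
        "ph_mqtt" => Some(HandlerKind::Mqtt),
        // Unknown types are rejected so a config typo fails loudly.
        _ => None,
    }
}
```

Once the type is recognized, the factory would instantiate the corresponding ingress or egress struct and hand it the appropriate buffer endpoint.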
Working with Data Formats
DDC is payload-agnostic, but ph_kafka uses a specific KafkaMessage struct serialized via bincode.
If you are integrating with the existing ecosystem, you should likely respect this format or wrap your data similarly:
```rust
#[derive(Serialize, Deserialize)]
pub struct MyMessage {
    pub payload: Vec<u8>,
    pub metadata: String,
}
```

Creating a Custom Filter
Filters are the easiest component to extend.
- Look at filters/filter/src/lib.rs.
- The core function signature usually resembles:

```rust
pub fn filtering(
    buffer: &[u8],
    length: usize,
    writer: &mut BipBufferWriter,
    // ... custom args
)
```

- Logic:
  1. Deserializes the packet from buffer.
  2. Inspects the content.
  3. If Allowed: Writes it to writer (the next stage).
  4. If Denied: Does not write to writer, effectively dropping it. Adds to dropped_packets stats.
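That allow/deny flow can be sketched end to end. BipBufferWriter is project-specific, so this sketch substitutes a Vec-backed stand-in; the stand-in writer, the stats struct, and the denied_keyword argument are illustrative assumptions, not the actual DDC API.

```rust
// Stand-in for BipBufferWriter: collects accepted packets in a Vec.
pub struct StandInWriter {
    pub accepted: Vec<Vec<u8>>,
}

// Hypothetical stats counter for dropped packets.
pub struct FilterStats {
    pub dropped_packets: u64,
}

pub fn filtering(
    buffer: &[u8],
    length: usize,
    writer: &mut StandInWriter,
    stats: &mut FilterStats,
    denied_keyword: &[u8],
) {
    let packet = &buffer[..length];
    // Inspect the content: deny any packet containing the keyword.
    let denied = !denied_keyword.is_empty()
        && packet.windows(denied_keyword.len()).any(|w| w == denied_keyword);
    if denied {
        // Denied: do not forward; record the drop.
        stats.dropped_packets += 1;
    } else {
        // Allowed: forward to the next stage.
        writer.accepted.push(packet.to_vec());
    }
}
```

The real filter would deserialize the internal message format before inspecting it and write to the bipartite buffer rather than a Vec, but the control flow is the same.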
Building and Testing
Running Tests
Use standard Cargo commands:
```shell
cargo test --workspace
```

Building for Release
```shell
cargo build --release
```

Cross-Compilation
The project supports ARM64 (for edge devices).
```shell
cross build --target aarch64-unknown-linux-gnu --release
```