PROTOCOL STATUS: OPERATIONAL

Train ML models privately.
Data stays local. Weights travel encrypted.

DROMEUS is a decentralized federated learning protocol. Trainers contribute compute and earn x402. Users submit jobs and get trained models. AI agents can orchestrate training autonomously via A2A.

1,284 workers online
3,921 jobs completed
142K rounds trained
dromeus — protocol init
$
// protocol features

const features = {

🔒
"privacy": Data Never Leaves

Training scripts execute locally on trainer machines. Only encrypted model weights travel over the AXL P2P mesh. Raw data stays where it belongs.

📊
"aggregation": Federated Averaging

Coordinator merges weight updates via FedAvg: a sample-weighted average of worker updates that approximates centralized training without centralizing the data. Rounds time out at 120s, after which the coordinator aggregates whatever updates have arrived (partial aggregation).

🌐
"p2p-mesh": Encrypted P2P Mesh

AXL nodes form an end-to-end encrypted mesh (Yggdrasil + gVisor). No TUN, no root, no port forwarding. Nodes operate behind NATs and firewalls.

🔧
"frameworks": Framework Agnostic

Weights are opaque bytes. Coordinator uses torch for FedAvg but workers can use PyTorch, TensorFlow, JAX, sklearn — any framework that reads/writes weight files.

}
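The aggregation and framework-agnostic features above fit together: the coordinator can average weights without knowing which framework produced them. As a minimal sketch (function names are illustrative, and it assumes workers ship weights as NumPy .npz blobs, which is just one possible encoding of the "opaque bytes"):

```python
import io

import numpy as np

def decode_weights(blob):
    """Decode a worker's opaque weight bytes (here: a NumPy .npz archive)."""
    data = np.load(io.BytesIO(blob))
    return [data[k] for k in data.files]

def fedavg(updates):
    """FedAvg: sample-weighted average of worker weight updates.

    updates: list of (blob, num_samples) pairs. Each worker's
    contribution is weighted by how many samples it trained on,
    so total sums the sample counts across workers.
    """
    decoded = [(decode_weights(blob), n) for blob, n in updates]
    total = sum(n for _, n in decoded)
    num_layers = len(decoded[0][0])
    # Weighted sum, layer by layer.
    return [
        sum(w[i] * (n / total) for w, n in decoded)
        for i in range(num_layers)
    ]
```

The key design point is that decoding happens only at the aggregation boundary; everywhere else in the protocol, weights stay opaque bytes in transit.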
// two interfaces — one protocol

Built for humans & AI agents

interface Human {

Web Dashboard

Submit training jobs through the browser. Upload your dataset, paste your training script, set rounds and workers. Watch loss curves update in real time via SSE (Server-Sent Events).

GET /jobs — list all jobs
POST /jobs — submit training job
SSE /metrics/{job_id} — live loss/accuracy
}
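The same endpoints can be driven from a script. A hedged sketch of the two client-side pieces: building the POST /jobs body (field names assumed to match the submit example later on this page) and parsing one event from the /metrics/{job_id} SSE stream (assuming the stream carries JSON in standard "data:" lines):

```python
import base64
import json

def build_job_request(script_bytes, dataset_url, num_rounds=10, min_workers=3):
    """Build the JSON body for POST /jobs. Field names are assumed."""
    return {
        "script_b64": base64.b64encode(script_bytes).decode("ascii"),
        "dataset_url": dataset_url,
        "num_rounds": num_rounds,
        "min_workers": min_workers,
    }

def parse_sse_event(line):
    """Parse one 'data: {...}' line from the /metrics/{job_id} stream.

    Returns None for non-data lines (comments, keep-alives, blanks).
    """
    if not line.startswith("data: "):
        return None
    return json.loads(line[len("data: "):])
```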
interface Agent {

A2A / AXL Protocol

AI agents discover DROMEUS coordinators via A2A AgentCards. An agent can submit jobs, monitor training, and retrieve weights — all programmatically, no human in the loop.

agent → coordinator
// Claude/GPT submits a job
POST /a2a/coordinator
{
 "type": "JOB_ASSIGN",
 "script_b64": "...",
 "dataset_url": "https://..."
}
}
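For an agent, the whole loop is programmatic. A minimal sketch of posting the JOB_ASSIGN message above (the coordinator URL is assumed to come from A2A AgentCard discovery; the message fields follow the example):

```python
import base64
import json
import urllib.request

def build_job_assign(script_bytes, dataset_url):
    """Build a JOB_ASSIGN message with fields from the example above."""
    return {
        "type": "JOB_ASSIGN",
        "script_b64": base64.b64encode(script_bytes).decode("ascii"),
        "dataset_url": dataset_url,
    }

def submit_job_a2a(coordinator_url, message):
    """POST a message to the coordinator's A2A endpoint and return the reply."""
    req = urllib.request.Request(
        coordinator_url + "/a2a/coordinator",
        data=json.dumps(message).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```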
// become a trainer

$ dromeus start

Two commands: dromeus setup, then dromeus start. Your machine trains models and you earn x402. No code required. The CLI handles everything: the AXL node, keypair, Python deps, and a live training dashboard.

install & setup
# Install globally (requires bun)
$ npm install -g dromeus

# One-time setup
$ dromeus setup --coordinator
 <coordinator-ip>

# Start earning
$ dromeus start
 DROMEUS ● ONLINE waiting for jobs...
submit a job
# Base64-encode train.py and submit the job
$ curl -X POST /jobs \
  -H 'Content-Type: application/json' \
  -d "{
    \"script_b64\": \"$(base64 < train.py)\",
    \"dataset_url\": \"https://...\",
    \"num_rounds\": 10,
    \"min_workers\": 3
  }"

# watch live metrics
$ curl /jobs/{job_id}
open source · github.com/trainmesh/dromeus · MIT license