DROMEUS is a decentralized federated learning protocol. Trainers contribute compute and earn x402. Users submit jobs and get trained models. AI agents can orchestrate training autonomously via A2A.
Training scripts execute locally on trainer machines. Only encrypted model weights travel over the AXL P2P mesh. Raw data stays where it belongs.
Coordinator merges weight updates via FedAvg — a close stand-in for centralized training, without ever centralizing the data. Rounds time out after 120s; the coordinator then aggregates whatever updates have arrived (partial aggregation) rather than stalling on stragglers.
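The merge step itself is small. A minimal FedAvg sketch (shapes and the flat-list weight layout are simplifying assumptions; the real coordinator operates on torch tensors):

```python
def fedavg(updates, sample_counts):
    """Merge worker weight updates: element-wise mean,
    weighted by each worker's local sample count."""
    total = sum(sample_counts)
    merged = []
    for params in zip(*updates):  # i-th weight from every worker
        merged.append(sum(w * n for w, n in zip(params, sample_counts)) / total)
    return merged

# Two workers; the second trained on twice the data, so it gets twice the weight.
merged = fedavg([[1.0, 2.0], [4.0, 8.0]], sample_counts=[1, 2])
print(merged)  # [3.0, 6.0]
```

With partial aggregation, `updates` simply contains fewer entries for a timed-out round; the weighted mean is taken over whoever showed up.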
AXL nodes form an end-to-end encrypted mesh (Yggdrasil + gVisor). No TUN, no root, no port forwarding. Nodes operate behind NATs and firewalls.
Weights are opaque bytes. Coordinator uses torch for FedAvg but workers can use PyTorch, TensorFlow, JAX, sklearn — any framework that reads/writes weight files.
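The contract this implies is narrow: a worker serializes its weights however its framework likes, and the mesh just ships bytes. A sketch of that round-trip (the pickle format here is illustrative, not the protocol's wire format — `torch.save`, `np.savez`, or joblib would slot in the same way):

```python
import io
import pickle

def pack_weights(weights_obj) -> bytes:
    """Worker side: serialize framework-native weights into opaque bytes."""
    buf = io.BytesIO()
    pickle.dump(weights_obj, buf)  # swap in torch.save / np.savez / joblib.dump
    return buf.getvalue()

def unpack_weights(blob: bytes):
    """Worker side, next round: restore weights from the opaque bytes."""
    return pickle.load(io.BytesIO(blob))

blob = pack_weights({"layer1": [0.1, 0.2]})
restored = unpack_weights(blob)
```

Nothing between `pack` and `unpack` inspects the payload — which is what lets the coordinator stay framework-agnostic.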
Submit training jobs through the browser. Upload your dataset, paste your training script, set rounds and workers. Watch loss curves update in real-time via SSE.
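Consuming that SSE stream is a few lines in any client. A hedged sketch (the event shape — JSON with `round` and `loss` fields — is an assumption, not the documented schema):

```python
import json

def parse_sse(stream_lines):
    """Yield decoded metric events from raw SSE lines ('data: {...}')."""
    for line in stream_lines:
        if line.startswith("data:"):
            yield json.loads(line[len("data:"):].strip())

# What a couple of stream lines might look like:
sample = ['data: {"round": 1, "loss": 0.93}', "", 'data: {"round": 2, "loss": 0.71}']
events = list(parse_sse(sample))
```

The same parser drives a terminal loss curve just as easily as the browser UI.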
AI agents discover DROMEUS coordinators via A2A AgentCards. An agent can submit jobs, monitor training, and retrieve weights — all programmatically, no human in the loop.
```
// Claude/GPT submits a job
POST /a2a/coordinator
{
  "type": "JOB_ASSIGN",
  "script_b64": "...",
  "dataset_url": "https://..."
}
```
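An agent-side helper for building that request body might look like this (field names come from the snippet above; the HTTP transport and any auth are left to the agent's client and are not shown):

```python
import base64
import json

def build_job_assign(script: str, dataset_url: str) -> str:
    """Build the JSON body an agent would POST to /a2a/coordinator."""
    payload = {
        "type": "JOB_ASSIGN",
        "script_b64": base64.b64encode(script.encode()).decode(),
        "dataset_url": dataset_url,
    }
    return json.dumps(payload)

body = build_job_assign("print('train')", "https://example.com/data.csv")
```

Base64-encoding the script keeps arbitrary training code safe inside a JSON string.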
Two commands. Your machine trains models. You earn x402. No code required. The CLI handles everything — AXL node, keypair, Python deps, and a live training dashboard.
```
# Install globally (requires bun)
$ npm install -g dromeus

# One-time setup
$ dromeus setup --coordinator <coordinator-ip>

# Start earning
$ dromeus start

DROMEUS ● ONLINE
waiting for jobs...
```
```
# Encode your training script and submit a job
$ curl -X POST /jobs \
    -d "{
      \"script_b64\": \"$(base64 < train.py)\",
      \"dataset_url\": \"https://...\",
      \"num_rounds\": 10,
      \"min_workers\": 3
    }"

# Watch live metrics
$ curl /jobs/{job_id}
```