TL;DR: CoFlow learns a natively joint-coupled averaged velocity field for offline multi-agent reinforcement learning. It combines Coordinated Velocity Attention, adaptive coordination gating, and a finite-difference consistency surrogate so coordinated multi-agent trajectories can be generated in 1–3 denoising steps without distilling a joint teacher into independent agents.
Overview
CoFlow targets the quality-efficiency dilemma in offline multi-agent trajectory generation. Existing diffusion methods preserve coordination but require many denoising steps; existing few-step approaches accelerate inference but weaken cross-agent coupling. CoFlow aims at the region of the Pareto frontier where few-step inference and coordination preservation coexist.
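To make the few-step idea concrete, here is a minimal toy sketch of sampling from a joint (all agents jointly denoised) velocity field with a handful of Euler steps. Everything here is illustrative and not the CoFlow model: `joint_velocity` stands in for the learned field (with a single point target instead of Coordinated Velocity Attention), and `sample` is a hypothetical Euler integrator. For straight probability paths toward a point target, the velocity `(target - x) / (1 - t)` is self-consistent, so even one Euler step lands exactly on the target, which illustrates why a consistent averaged velocity field admits 1--3-step generation.

```python
import numpy as np

def joint_velocity(x, t):
    """Hypothetical stand-in for a learned joint velocity field.

    x: (n_agents, dim) joint state at flow time t in [0, 1).
    A real model would couple agents (e.g. via attention); here a
    single point target keeps the sketch self-contained. For a
    straight path to `target`, the consistent velocity is
    (target - x) / (1 - t).
    """
    target = np.ones_like(x)  # toy data mode shared by all agents
    return (target - x) / (1.0 - t)

def sample(n_agents=3, dim=4, n_steps=2, seed=0):
    """Few-step Euler integration from joint noise (t=0) to data (t=1):
    x_{t+dt} = x_t + dt * v(x_t, t)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_agents, dim))  # joint noise sample
    dt = 1.0 / n_steps
    for k in range(n_steps):
        t = k * dt  # t ranges over [0, 1 - dt], so 1 - t never hits 0
        x = x + dt * joint_velocity(x, t)
    return x

traj = sample(n_steps=2)
```

Because the toy velocity field is exactly consistent with its straight flow, `sample(n_steps=1)` and `sample(n_steps=2)` both reach the target; a learned field is only approximately consistent, which is what a finite-difference consistency surrogate would regularize.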