Research on an autonomous cooperation algorithm for USV swarms based on large-small model collaboration

Autonomous cooperation of USV swarms via large-small language model collaboration

  • Abstract:
    Objective To address the problems of high reasoning latency, redundant information, and poor real-time performance that arise when large language models (LLMs) apply chain-of-thought (CoT) reasoning to multi-agent cooperation tasks, this paper proposes an autonomous cooperation algorithm for unmanned surface vehicle (USV) swarms based on large-small model collaboration.
    Method First, two-stage logical compression is performed through sentence-level masking tests and word-level semantic pruning, which strips redundant logical descriptions from the CoT and produces lightweight keyword sequences. These keyword sequences are then used to fine-tune a lightweight language model (the small model), enabling it to generate the core CoT content directly from task descriptions. Second, a multi-model collaboration mechanism is introduced: three 8B lightweight models generate candidate decision solutions in parallel, and a 32B verifier model performs confidence-based verification to rapidly select the optimal solution, forming a closed-loop "compression-generation-verification" pipeline.
    Results Experimental results show that in the USV dynamic obstacle avoidance task, the proposed method reduces single-step reasoning latency from 3.45 s to 1.53 s while maintaining a task success rate of 98.8%, outperforming conventional CoT inference acceleration methods.
    Conclusion The proposed method significantly reduces inference latency while preserving output quality, providing an efficient technical path for real-time decision-making in complex maritime environments.
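    The "compression-generation-verification" selection step can be sketched as follows. This is a minimal illustration only: the draft functions and the verifier's confidence score are toy stand-ins (the names draft_a, verifier_confidence, etc. are hypothetical), not the paper's 8B/32B models.

    ```python
    # Sketch of the draft-and-verify loop: three small "draft" models propose
    # candidate decisions in parallel, a larger "verifier" scores each one,
    # and the highest-confidence candidate is selected.
    from concurrent.futures import ThreadPoolExecutor

    def draft_a(task): return f"turn_left({task})"    # stand-ins for the three
    def draft_b(task): return f"turn_right({task})"   # parallel 8B draft models
    def draft_c(task): return f"hold_course({task})"

    def verifier_confidence(task, candidate):
        # Stand-in for the 32B verifier's confidence score.
        # Toy heuristic: shorter action strings score higher.
        return 1.0 / len(candidate)

    def decide(task, drafts=(draft_a, draft_b, draft_c)):
        # Generate candidates in parallel, then keep the best-scored one.
        with ThreadPoolExecutor(max_workers=len(drafts)) as pool:
            candidates = list(pool.map(lambda d: d(task), drafts))
        return max(candidates, key=lambda c: verifier_confidence(task, c))

    print(decide("avoid_buoy"))
    ```

    In the real system the parallel calls would be batched model inferences and the confidence score would come from the verifier model's output probabilities; only the select-the-best structure carries over from this sketch.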


    Abstract:
    Objective To address the challenges of high reasoning latency, redundant logical chains, and poor real-time performance when applying chain-of-thought (CoT) reasoning with large language models (LLMs) to the collaborative autonomy of unmanned surface vehicle (USV) swarms, this paper proposes an autonomous cooperation algorithm based on the collaboration between large and small language models. The objective is to substantially reduce the inference latency of the decision-making model while maintaining output quality, thereby providing a highly effective technical framework for real-time decision-making in complex maritime environments.
    Method The proposed approach is built around two core technical innovations. First, a two-stage logical compression process is implemented through sentence-level masking and word-level semantic pruning. This process removes redundant logical expressions from the original CoT reasoning sequences and produces streamlined keyword-based representations. These optimized sequences are then used to fine-tune lightweight models, enabling them to generate the essential CoT reasoning content directly from task descriptions. Second, a multi-model collaborative mechanism is introduced, in which three lightweight 8B-parameter models generate candidate decision solutions in parallel. A 32B-parameter verifier model then performs confidence-based evaluation and rapidly selects the optimal solution. This architecture establishes an efficient closed-loop "compression-generation-verification" pipeline.
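    The two-stage compression described above can be sketched as follows. The "is the answer still recoverable without this sentence?" oracle and the stopword list are illustrative assumptions; in the actual method these roles are played by masking tests against the model's output and by semantic pruning.

    ```python
    # Sketch of two-stage logical compression:
    # stage 1 drops sentences whose removal does not break the answer,
    # stage 2 prunes the survivors down to a keyword sequence.
    STOPWORDS = {"the", "so", "we", "that", "is", "a", "to", "then", "first"}

    def answer_preserved(sentences, answer):
        # Stand-in oracle: does the remaining chain still contain the answer token?
        return any(answer in s for s in sentences)

    def sentence_level_mask(cot_sentences, answer):
        kept = list(cot_sentences)
        for s in list(kept):
            trial = [t for t in kept if t is not s]
            if answer_preserved(trial, answer):  # sentence is redundant: drop it
                kept = trial
        return kept

    def word_level_prune(sentences):
        # Keep only content words, yielding a lightweight keyword sequence.
        return [w for s in sentences for w in s.split() if w.lower() not in STOPWORDS]

    cot = ["First we check the obstacle position",
           "The obstacle is to port",
           "So we steer starboard"]
    kept = sentence_level_mask(cot, "starboard")
    print(word_level_prune(kept))  # keyword sequence used for fine-tuning data
    ```

    The resulting keyword sequences would serve as compressed fine-tuning targets for the small model, so that it learns to emit only the decision-critical tokens rather than the full chain.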
    Results Extensive experiments were conducted on three representative collaborative tasks: multi-objective pursuit, dynamic obstacle avoidance, and resource-constrained missions. In the dynamic obstacle avoidance task, the method reduces single-step reasoning latency from 3.45 s to 1.53 s while maintaining a task success rate of 98.8%, with the logical chain achieving an average compression rate of 72.3%. In the multi-objective pursuit task, it attains a 97.6% success rate and reduces single-step reasoning latency from 3.21 s to 1.42 s. In the resource-constrained mission, the proposed method achieves a 95.1% success rate and decreases the average number of decision steps from 29.7 to 21.4, a marked improvement in planning efficiency. Ablation studies confirm the complementary effects of the logical compression and speculative decoding modules in balancing decision quality and response speed.
    Conclusion By integrating a two-stage logical compression mechanism with a multi-model speculative decoding framework, the proposed method substantially reduces model inference latency without compromising output quality. It provides a robust and efficient technical solution for real-time collaborative decision-making in USV swarms operating in complex and dynamic maritime environments, demonstrating strong application potential and practical value.
