"SRMT: Shared Memory for Multi-agent Lifelong Pathfinding"

The accompanying podcast was generated with Google's Illuminate.

Unlock multi-agent coordination with implicitly shared memory in transformers.

The paper addresses the challenge of coordinating multiple agents in pathfinding tasks. It proposes a novel architecture, the Shared Recurrent Memory Transformer (SRMT), which lets agents implicitly share information, enhancing cooperation and improving performance in complex navigation scenarios.

-----

Paper - https://arxiv.org/abs/2501.13200

Original Problem 🤖:

→ Coordinating multiple agents in decentralized Multi-Agent Pathfinding (MAPF) is difficult.

→ Without a communication channel, cooperation requires each agent to explicitly predict the behavior of other agents.

→ Existing methods often suffer from deadlocks and generalize poorly to new environments.

-----

Solution in this Paper 💡:

→ This paper introduces the Shared Recurrent Memory Transformer (SRMT).

→ SRMT extends memory transformers to multi-agent settings.

→ It enables agents to implicitly exchange information through shared memory.

→ Each agent maintains a personal recurrent memory.

→ SRMT pools these individual memories into a shared global memory.

→ Agents use cross-attention to access and utilize this shared memory.

→ This allows implicit coordination without explicit communication protocols, as sketched below.
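
To make the mechanism concrete, here is a minimal PyTorch sketch of the shared-memory idea: each agent keeps one recurrent memory vector, all memories are pooled into a shared memory visible to every agent, and each agent cross-attends over that pool before updating only its own memory. The module name `SRMTSketch`, the dimensions, and the GRU-cell update rule are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class SRMTSketch(nn.Module):
    """Illustrative shared-recurrent-memory block (not the authors' code)."""

    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.obs_encoder = nn.Linear(d_model, d_model)
        # Each agent's query attends over the pooled memories of all agents.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Recurrent update of the agent's *personal* memory (assumed GRU-style).
        self.mem_update = nn.GRUCell(d_model, d_model)

    def forward(self, obs: torch.Tensor, memories: torch.Tensor):
        # obs, memories: (n_agents, d_model)
        n_agents, d = memories.shape
        # Pool all personal memories into one shared global memory,
        # replicated so every agent can read it (implicit info exchange).
        shared = memories.unsqueeze(0).repeat(n_agents, 1, 1)
        # Query = agent's own encoded observation combined with its memory.
        query = (self.obs_encoder(obs) + memories).unsqueeze(1)  # (n_agents, 1, d)
        attended, _ = self.cross_attn(query, shared, shared)
        attended = attended.squeeze(1)                           # (n_agents, d)
        # Each agent writes back only to its own recurrent memory slot.
        new_memories = self.mem_update(attended, memories)
        return attended, new_memories

# Usage: features feed a per-agent policy head; memories persist across steps.
model = SRMTSketch()
obs = torch.randn(8, 64)       # 8 agents, encoded observations
mem = torch.zeros(8, 64)       # initial personal memories
features, mem = model(obs, mem)
```

Because the shared pool is read by all agents but written per-agent, coordination emerges from attention over others' memories rather than from any explicit message-passing protocol.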

-----

Key Insights from this Paper ✨:

→ Shared recurrent memory enhances coordination in decentralized multi-agent systems.

→ SRMT outperforms baselines, especially with sparse rewards and in long corridors.

→ Implicit information sharing via memory is effective for multi-agent cooperation.

→ SRMT demonstrates good generalization to unseen environments and scales well.

-----

Results 📊:

→ Shared Recurrent Memory Transformer (SRMT) achieves a higher Cooperative Success Rate (CSR) on Bottleneck tasks, especially under sparse rewards.
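
For reference, Cooperative Success Rate counts an episode as successful only when every agent reaches its goal. A minimal sketch of the metric (function and variable names are illustrative):

```python
from typing import List

def cooperative_success_rate(episodes: List[List[bool]]) -> float:
    """episodes[i][j] is True if agent j reached its goal in episode i."""
    if not episodes:
        return 0.0
    return sum(all(agents) for agents in episodes) / len(episodes)

# Example: all agents succeed in 2 of 3 episodes -> CSR = 2/3
print(cooperative_success_rate([[True, True], [True, False], [True, True]]))
```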
