The relationships between objects and language are fundamental to meaningful communication between humans and AI, and to practically useful embodied intelligence. We introduce HieraNav, a multi-granularity, open-vocabulary goal navigation task in which agents interpret natural language instructions to reach targets at four semantic levels: scene, room, region, and instance. To this end, we present Language as a Map (LangMap), a large-scale benchmark built on real-world 3D indoor scans with comprehensive human-verified annotations and tasks spanning these levels. LangMap provides region labels, discriminative region descriptions, discriminative instance descriptions covering 414 object categories, and over 18K navigation tasks. Each target features both a concise and a detailed description, enabling evaluation across different instruction styles. LangMap achieves superior annotation quality, outperforming GOAT-Bench by 23.8% in discriminative accuracy while using four times fewer words. Comprehensive evaluations of strong zero-shot and supervised models on LangMap reveal that richer context and memory improve success, while long-tailed, small, context-dependent, and distant goals, as well as multi-goal completion, remain challenging. HieraNav and LangMap establish a rigorous testbed for advancing language-driven embodied navigation.
@article{miao2026langmap,
  title   = {LangMap: A Hierarchical Benchmark for Open-Vocabulary Goal Navigation},
  author  = {Bo Miao and Weijia Liu and Jun Luo and Lachlan Shinnick and Jian Liu and Thomas Hamilton-Smith and Yuhe Yang and Zijie Wu and Vanja Videnovic and Feras Dayoub and Anton {van den Hengel}},
  journal = {arXiv preprint arXiv:2602.02220},
  year    = {2026}
}