Robotics · 2026-02-25

Navigation Without Maps: Embodied Intelligence in Unstructured Environments

How Object navigates complex, unstructured environments using perception alone — no pre-built maps required.

Webbeon Robotics Team

The dominant paradigm in robot navigation assumes the world is known in advance. A SLAM system builds a map, a planner plots a path through it, and a controller follows that path. This pipeline works well in structured environments — factory floors, hospital corridors, suburban roads — where geometry is stable and the map stays valid. It fails in exactly the conditions where autonomous robots would be most valuable: disaster sites where structures have collapsed, outdoor terrain that shifts with weather and season, cluttered warehouses where inventory reconfigures daily. Object was designed for these environments. It navigates without pre-built maps, constructing spatial understanding on the fly from raw perception and maintaining that understanding as a living, revisable model of its surroundings.

Building Spatial Understanding from Sensory Input

Object perceives the world through a sensor suite comprising six stereo camera pairs, a 128-beam lidar, and an inertial measurement unit fused with leg odometry. Rather than constructing a traditional occupancy grid or point-cloud map, Object maintains what we call a spatial memory — a learned, compressed representation of traversed and observed geometry stored as a neural implicit field. This representation captures not just where surfaces are, but their material properties (estimated from visual texture and lidar reflectance), their traversability (can this surface support the robot's weight? is the slope within stability limits?), and their semantic identity (door, staircase, debris pile, human). The spatial memory is updated continuously as new observations arrive and old regions are revisited, allowing the system to notice and adapt to changes — a door that was open is now closed, a pile of boxes has been moved. Critically, this representation scales sublinearly with environment size. A traditional voxel map of a 10,000-square-meter warehouse requires gigabytes of storage; Object's spatial memory for the same area fits in 340 MB and can be queried in constant time for any local region.
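
As a rough illustration of the kind of interface the spatial memory exposes, consider the Python sketch below. The names here (SpatialMemory, SurfaceQuery, update, query) are illustrative rather than the production API, and a dictionary keyed by discretized coordinates is only a stand-in for the learned neural implicit field described above.

from dataclasses import dataclass

@dataclass
class SurfaceQuery:
    occupied: bool        # is there observed surface geometry at this location?
    material: str         # estimated from visual texture and lidar reflectance
    traversable: bool     # within weight-support and slope stability limits?
    semantic_label: str   # e.g. "door", "staircase", "debris pile", "human"

class SpatialMemory:
    """Toy stand-in for the compressed, continuously revised spatial model."""

    def __init__(self, cell_size_m: float = 0.25):
        self.cell_size_m = cell_size_m
        self._cells: dict[tuple[int, int, int], SurfaceQuery] = {}

    def _key(self, xyz):
        x, y, z = xyz
        return (int(x // self.cell_size_m),
                int(y // self.cell_size_m),
                int(z // self.cell_size_m))

    def update(self, xyz, observation: SurfaceQuery) -> None:
        # Newer observations overwrite stale ones, so revisited regions reflect
        # changes: a door that has closed, boxes that have been moved.
        self._cells[self._key(xyz)] = observation

    def query(self, xyz):
        # Constant-time lookup for any local region, mirroring the
        # constant-time local query property described above.
        return self._cells.get(self._key(xyz))

A real deployment fuses several sensor streams per update and stores a learned embedding per region rather than a single labeled record; the sketch only captures the query semantics, where a local lookup returns geometry, material, traversability, and semantic identity together.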

Integrating Oracle's Perception with Motor Planning

Navigation is not a perception problem alone — it requires closing the loop between understanding the world and acting in it. Object's motor planner operates on the spatial memory in a hierarchical fashion. A high-level policy, informed by Oracle's scene understanding capabilities, sets waypoints and strategic intentions: "cross the room via the clear corridor along the east wall" or "ascend the staircase to reach the second floor." Oracle contributes semantic reasoning that pure geometric perception cannot provide — recognizing that a closed door can be opened, that a narrow gap between shelves is passable if the robot rotates its torso, that a darkened hallway likely continues beyond the range of current sensors. The low-level locomotion controller then executes the plan, choosing footstep placements and body poses that maintain stability on the local terrain. The two levels communicate through a shared cost representation that encodes both geometric feasibility and semantic context. When the low-level controller encounters an obstacle the high-level plan did not anticipate — a cable stretched across the floor, a puddle of uncertain depth — it triggers replanning at the strategic level, and the system adapts within 200 milliseconds.
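
The Python sketch below shows the shape of that two-level loop; it is illustrative only. The callables plan_waypoints, step_toward, perceive, and robot_pose are hypothetical stand-ins for the Oracle-informed high-level policy, the locomotion controller, and the state estimator, and the replanning budget mirrors the 200-millisecond figure above.

import math
import time

REPLAN_BUDGET_S = 0.2        # the ~200 ms adaptation window described above
WAYPOINT_TOLERANCE_M = 0.5   # illustrative arrival threshold

def navigate(goal, memory, perceive, plan_waypoints, step_toward, robot_pose):
    """Strategic waypoints on top, footstep-level execution underneath."""
    waypoints = plan_waypoints(goal, memory)       # high-level, semantics-aware plan
    while waypoints:
        memory.update(*perceive())                 # perceive() -> (xyz, observation)
        target = waypoints[0]

        blocked = step_toward(target, memory)      # low-level footsteps and body pose
        if blocked:
            # An obstacle the strategic plan did not anticipate (a cable across
            # the floor, a puddle of uncertain depth): replan at the strategic
            # level and, purely for illustration, check the adaptation budget.
            start = time.monotonic()
            waypoints = plan_waypoints(goal, memory)
            if time.monotonic() - start > REPLAN_BUDGET_S:
                raise TimeoutError("strategic replanning exceeded its time budget")
        elif math.dist(robot_pose(), target) < WAYPOINT_TOLERANCE_M:
            waypoints.pop(0)                       # waypoint reached; advance the plan

In the real system the two levels also exchange a shared cost representation rather than only the spatial memory, which is how semantic context (a door that can be opened, a gap that is passable with a torso rotation) reaches geometric planning.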

Results Across Three Domains

We evaluated Object's mapless navigation in three deployment domains chosen for their diversity. In warehouse environments, Object was tasked with navigating to specified shelf locations in a 5,000-square-meter fulfillment center where inventory layout changed between sessions. Over 200 trials, Object reached the target location 96.2% of the time, with a mean path efficiency (ratio of actual path length to optimal path length) of 1.14 — meaning its emergent routes were, on average, only 14% longer than the shortest possible path. In outdoor terrain — a 2-kilometer course through mixed forest, gravel, mud, and rocky slopes — Object completed the traverse in 94% of attempts, with failures concentrated on steep, loose-gravel descents where the stability margin was genuinely marginal. The most demanding test was a simulated disaster-response scenario in a partially collapsed parking structure. Floors were buckled, debris blocked expected pathways, lighting was absent in large sections, and dust degraded lidar returns. Object navigated to the designated search zones in 87% of trials, relying heavily on Oracle's semantic reasoning to distinguish passable rubble from structural hazards.
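
For clarity on the path-efficiency figure, the metric is simply the executed path length divided by the length of the shortest feasible path, averaged over successful trials. The snippet below uses made-up numbers purely to illustrate the arithmetic; they are not real trial logs.

def path_efficiency(actual_length_m: float, optimal_length_m: float) -> float:
    """Ratio of executed path length to the shortest feasible path; 1.0 is ideal."""
    return actual_length_m / optimal_length_m

# Purely illustrative numbers, not measured data:
print(path_efficiency(57.0, 50.0))   # 1.14, i.e. a route 14% longer than optimal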

Why Mapless Navigation Matters

The practical argument for mapless navigation is deployment speed. Map-dependent systems require a mapping phase before they can operate — someone must drive the robot through the space, or process architectural drawings, or fly a drone to collect survey data. This phase can take hours or days and must be repeated whenever the environment changes significantly. Object can be placed in a new environment and begin useful navigation immediately. But the deeper argument is about the nature of spatial intelligence itself. Biological organisms do not navigate by consulting a pre-built map. They build understanding incrementally, revise it constantly, and act under uncertainty about what lies beyond the current horizon of perception. Object's approach moves robotic navigation closer to this biological model — not because biomimicry is a goal in itself, but because the biological solution is well-adapted to the fundamental structure of the problem: a world that is too large, too dynamic, and too complex to be fully known in advance.

Related Research
2026-03-12 · Object's First Steps: Learning Dexterous Manipulation from Scratch
2026-01-30 · The Sim-to-Real Gap: What We've Learned Transferring Intelligence to Hardware