
We introduce RAVEN, a 3D open-set memory-based behavior tree framework for aerial outdoor semantic navigation. RAVEN not only navigates reliably toward detected targets, but also performs long-range semantic reasoning and LVLM-guided informed search.
AirLab

@AirLabCMU
We develop perception, control, & planning algorithms for robot autonomy | @CMU_Robotics | https://t.co/gWjGiUaBeP | https://t.co/9wO6amxfFc

Want to push the online 🌎 understanding & search capabilities of robots? Introducing RayFronts 🌟→
💡 Semantics within & beyond depth sensing
🏃‍♂️ Online & real-time mapping
🔍 Querying with images & text
⚙️ Operating in any environment
rayfronts.github.io The trick →🧵👇

🐞 A bug led to a RA-L paper 🤪 Our paper AirIO started when we accidentally used raw IMU data in the body frame—and it worked better. Turns out, keeping body-frame observability helps generalization. No control inputs. No extra sensors. Just better IO. air-io.github.io
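To make the body-frame vs world-frame distinction concrete, here is a minimal sketch (not AirLab's code; the rotation matrix, function names, and the example values are all hypothetical). Conventional IMU preprocessing rotates accelerometer readings into the world frame using the estimated orientation; the AirIO tweet describes instead feeding the raw body-frame measurement, which is independent of yaw and so preserves body-frame observability:

```python
import numpy as np

def world_frame_input(acc_body, R_wb):
    # Conventional preprocessing: rotate the body-frame accelerometer
    # reading into the world frame with the body-to-world rotation R_wb.
    # The result changes with the vehicle's heading.
    return R_wb @ acc_body

def body_frame_input(acc_body, R_wb):
    # Body-frame input (the choice the tweet describes): pass the raw
    # measurement through unchanged, so it is invariant to heading.
    return acc_body

# Hypothetical example: a 90-degree yaw, so the body x-axis points
# along the world y-axis.
R_wb = np.array([[0., -1., 0.],
                 [1.,  0., 0.],
                 [0.,  0., 1.]])
acc = np.array([1.0, 0.0, 0.0])  # forward acceleration in the body frame

print(world_frame_input(acc, R_wb))  # rotated by the current yaw
print(body_frame_input(acc, R_wb))   # identical regardless of yaw
```

The same forward maneuver produces a different world-frame vector at every heading but the same body-frame vector, which is one intuition for why the body-frame representation can generalize better across trajectories.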
