Robots and Drones Are the Same Thing (And Pretending Otherwise Is Holding Us Back)
Guest post by Luca Herlein
The artificial divide between robotics and drone ecosystems made sense once. Today, it’s holding back the entire industry.
For the last decade and a half, two parallel ecosystems have grown up around machines that move through the real world:
- The robotics world, centered around frameworks like ROS 2 and YARP
- The drone world, built around ArduPilot, PX4, Pixhawk, and MAVLink
Technically, these are not different classes of machines. A drone is simply a robot that happens to fly.
Yet the tooling, expectations, and development experience between these two worlds could not feel more different.
That disconnect made sense once. Today, it doesn’t. And increasingly, it’s becoming a real problem.
Two Worlds, Two Philosophies
Let’s look at a simple, concrete example: GPS integration.
The “Robot” World: Total Freedom, Total Complexity
In the robotics ecosystem, integrating GPS is powerful—but unforgiving.
If you want globally referenced localization in ROS 2, you quickly encounter robot_localization, nav2, simulated GPS pipelines, frame transforms, covariance tuning, and topic wiring that assumes deep familiarity with the system.
The upside is enormous flexibility:
- Arbitrary sensor fusion
- Full physical simulation
- Precise control over every assumption
The downside is equally real:
- High barrier to entry
- Steep learning curve for even “basic” capabilities
- Every integration is custom
The output of this work is usually excellent data—but it’s just data. Consuming it for autonomy, navigation, or higher-level behavior still requires writing additional software.
This ecosystem optimizes for engineering freedom, not time-to-function.
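The covariance tuning mentioned above is easier to appreciate with a toy example. This is not robot_localization itself (which fuses a full multidimensional state), just a minimal one-dimensional Kalman-style sketch showing the core idea: a measurement’s covariance decides how strongly it corrects the current estimate. All numbers and names here are illustrative.

```python
# Minimal 1-D Kalman-style fusion sketch: a dead-reckoned position
# estimate is corrected by a GPS fix, weighted by the two variances.
# Illustrative only -- not the actual robot_localization implementation.

def fuse(estimate: float, est_var: float,
         measurement: float, meas_var: float) -> tuple[float, float]:
    """Fuse one scalar measurement into the estimate (Kalman update)."""
    k = est_var / (est_var + meas_var)            # Kalman gain
    new_estimate = estimate + k * (measurement - estimate)
    new_var = (1.0 - k) * est_var                 # uncertainty shrinks
    return new_estimate, new_var

# Dead reckoning drifted to 10.0 m with high uncertainty (4.0 m^2);
# the GPS reports 12.0 m with lower uncertainty (1.0 m^2).
pos, var = fuse(10.0, 4.0, 12.0, 1.0)
print(pos, var)  # the fix pulls the estimate most of the way toward GPS
```

Getting those variances right, per sensor and per axis, is exactly the tuning work the robotics ecosystem leaves to you.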
The “Drone” World: Constrained, Standardized, Productized
Now contrast that with the drone ecosystem.
Buy a GPS module designed for Pixhawk. Plug it in. It speaks DroneCAN or MAVLink. Position estimates appear immediately inside the flight stack. Autonomous missions already know how to consume it.
Want RTK? Buy a compatible module. Same wiring. Same expectations.
This world is optimized for:
- Plug-and-play modules
- Strict interface standards
- Immediate usability
The cost of this simplicity is reduced flexibility. The benefit is that things just work.
This ecosystem optimizes for deployment, not abstraction purity.
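Part of why position estimates “just appear” in the drone stack is that MAVLink fixes the wire format down to field order and units: the GLOBAL_POSITION_INT message (id 33) carries latitude and longitude as little-endian int32 values scaled by 1e7, altitudes in millimetres, velocities in cm/s, and heading in centidegrees. Below is a sketch of decoding that payload with only the standard library; the field names follow the MAVLink message definition, but the sample bytes are fabricated for illustration.

```python
import struct

# MAVLink GLOBAL_POSITION_INT payload layout (message id 33), little-endian:
# uint32 time_boot_ms, int32 lat/lon (deg * 1e7), int32 alt/relative_alt (mm),
# int16 vx/vy/vz (cm/s), uint16 hdg (centidegrees). 28 bytes total.

def decode_global_position_int(payload: bytes) -> dict:
    (time_boot_ms, lat, lon, alt, rel_alt,
     vx, vy, vz, hdg) = struct.unpack("<IiiiihhhH", payload)
    return {
        "time_boot_ms": time_boot_ms,
        "lat_deg": lat / 1e7,
        "lon_deg": lon / 1e7,
        "alt_m": alt / 1000.0,
        "relative_alt_m": rel_alt / 1000.0,
        "vx_m_s": vx / 100.0,
        "vy_m_s": vy / 100.0,
        "vz_m_s": vz / 100.0,
        "heading_deg": hdg / 100.0,
    }

# Encode a made-up fix (47.3977419 N, 8.5455938 E, 488 m) and decode it.
raw = struct.pack("<IiiiihhhH", 123456, 473977419, 85455938,
                  488_000, 10_000, 100, -50, 0, 9000)
fix = decode_global_position_int(raw)
print(fix["lat_deg"], fix["alt_m"], fix["heading_deg"])  # 47.3977419 488.0 90.0
```

Because every autopilot, ground station, and GPS-capable module agrees on this layout, no per-integration glue code is needed. That agreement is the whole trade.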
Why the Split Existed (And Why It Made Sense)
This divide wasn’t arbitrary.
ROS emerged around 2007, when robotics platforms were wildly heterogeneous. Pixhawk and MAVLink followed a few years later, targeting a much narrower problem space: multirotors and fixed-wing aircraft.
At the time:
- Robots had to be generic
- Drones could be standardized
Quadcopters shared motors, sensors, control loops, and failure modes. It made sense to lock down interfaces and build a modular ecosystem around them.
Robots, by contrast, ranged from tank drives to humanoids to robotic arms. ROS had to be a universal abstraction layer.
The split was necessary.
Why It No Longer Is
Fast forward 15 years.
Drones now:
- Run companion computers
- Perform onboard AI inference
- Carry manipulators, gimbals, winches, and payloads
- Execute complex autonomous missions
- Coordinate as fleets
Meanwhile, the robotics ecosystem has only grown more powerful—and more intimidating.
Ironically, many things the drone world standardized years ago still require substantial effort in robotics frameworks. At the same time, the drone world is straining to fit increasingly complex behaviors into protocols and assumptions that were never designed for general robotics.
We’ve ended up in a strange place:
- Robotics systems are incredibly capable—but hard to use
- Drone systems are easy to use—but increasingly constrained
Both worlds are bending toward each other, but from opposite directions.
The Distinction Is Now Artificial
The core assumption separating these ecosystems—that drones are platforms and robots are purpose-built machines—no longer holds.
Everything that:
- Perceives the world
- Makes decisions
- Acts autonomously
…is a robot.
A quadcopter with obstacle avoidance is a robot. A rover following GPS waypoints is a robot. A surface vessel coordinating with others is a robot.
The fact that one flies and one drives is an implementation detail—not a category boundary.
The software problems they face are now fundamentally the same:
- Distributed systems
- Sensor fusion
- Autonomy and safety boundaries
- Simulation vs reality gaps
- Human oversight and control
- Failure handling and recovery
Yet we still treat them as separate worlds.
The Cost of Clinging to Old Boundaries
This artificial separation has real consequences:
- Higher barriers to entry in robotics than necessary
- Increasing technical debt in drone stacks as complexity grows
- Duplicated effort across ecosystems solving the same problems differently
- Missed opportunities to apply modern distributed-systems thinking consistently
Most importantly, it slows progress.
Engineers are forced to choose between:
- Power and flexibility
- Usability and speed
That’s a false choice in 2026.
A Convergence Is Inevitable
Robotics and drones are converging whether we like it or not.
The question isn’t if these worlds merge—it’s how intentionally we do it.
There is room for a new approach that:
- Treats drones and robots as the same class of system
- Preserves the autonomy and simulation strengths of robotics
- Retains the plug-and-play pragmatism of the drone world
- Applies modern distributed-software principles end-to-end
That’s the direction we believe the industry is heading. And it’s the problem space we’re building for.
More on that soon.