If software changes something in the real world, it’s a robot.
Yeah, I said it. That definition might make a few folks uncomfortable. When most people hear “robotics,” they picture R2-D2 or a humanoid arm welding car frames. Metal bodies. Motors. Sensors. Physical stuff in the physical world.
But that’s not the whole picture. Not anymore. And honestly? It never was.
What Actually Makes a Robot?
A robot is fundamentally software that perceives, reasons, and takes action to change something in the world. The motors and sensors? Those are just the interface to the physical world. They’re not what makes it a robot.
Think about it. A submarine’s autopilot? Robot. Your thermostat that adjusts to your schedule? Robot. An algorithm executing trades on Wall Street? That’s absolutely a robot - one that moves billions of dollars around every single day.
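What does that loop actually look like? Here's a minimal sketch in Python of the perceive-reason-act cycle a smart thermostat runs - the sensor and heater here are simulated stand-ins, not any particular device's API:

```python
import random
import time

TARGET_TEMP_F = 68.0     # desired state
CHECK_INTERVAL_S = 1     # loop period in seconds (short so the demo finishes quickly)


def read_temperature() -> float:
    """Perceive: stand-in for a real temperature sensor (simulated here)."""
    return random.uniform(60.0, 75.0)


def set_heater(on: bool) -> None:
    """Act: stand-in for a real actuator call (just printed here)."""
    print(f"heater {'ON' if on else 'OFF'}")


def control_loop(cycles: int = 5) -> None:
    for _ in range(cycles):
        current = read_temperature()         # perceive the world
        heat_on = current < TARGET_TEMP_F    # reason: compare world to desired state
        set_heater(heat_on)                  # act: change the world
        time.sleep(CHECK_INTERVAL_S)


if __name__ == "__main__":
    control_loop()
```

Swap the simulated functions for real hardware calls and nothing about the loop changes. That's the point.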
The physical actuator is just one possible interface. The real world includes:
- Your file system
- Your production infrastructure
- Financial markets
- Supply chains
- The code you ship to users
Anything that exists and can be changed by software is “the real world.” And software that changes it autonomously is robotics. Period.
Traditional Robotics: The Physical Stuff
Don’t get me wrong - physical robots are having a moment. A big one.
Jensen Huang stood on stage at CES 2026 and declared that “the ChatGPT moment for physical AI is here.” NVIDIA released GR00T N1.6, an open vision-language-action model for humanoid robots. They unveiled Cosmos, a foundation model that can simulate environments governed by actual physics. Boston Dynamics, Caterpillar, LG - they’re all building on NVIDIA’s robotics stack.
And Huang says robots will have some human-level capabilities this year. This year!
“We believe physical AI and robotics will eventually be the largest consumer electronics segment in the world,” said Ali Kani, NVIDIA’s automotive VP. Largest consumer electronics segment. Let that sink in.
The physical stuff is real, it’s happening, and it’s accelerating faster than most people realize.
But here’s where it gets interesting…
The Three Robots You Didn’t Know Were Robots
1. High-Frequency Trading Systems
High-frequency trading systems fire off millions of orders a day. Milliseconds matter. Microseconds matter. These systems move roughly 10-40% of all equity trading volume, depending on the market. They provide liquidity, tighten spreads, and occasionally cause flash crashes that wipe out billions in minutes.
Is a trading algorithm a robot? It perceives (market data). It reasons (statistical models, pattern recognition). It acts (executes trades). And it changes the real world - specifically, it moves money. A lot of money.
The 2010 Flash Crash? That was robots behaving badly. Algorithmic trades cascading through interconnected markets, triggering stop-losses, withdrawing liquidity. Close to a trillion dollars in market value erased in minutes, most of it snapping back almost as fast. That's about as real-world as it gets.
These aren’t toys. These are autonomous systems making consequential decisions faster than any human could review them. Robots.
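To make the parallel concrete, here's a deliberately toy version of that perceive-reason-act loop - a naive mean-reversion signal with orders just printed, nothing like a real exchange feed or order-routing API:

```python
from collections import deque
from statistics import mean


class ToyTradingBot:
    """Toy mean-reversion loop: perceive a price, reason about it, act on it."""

    def __init__(self, window: int = 20, threshold: float = 0.01):
        self.prices = deque(maxlen=window)   # rolling window of recent prices
        self.threshold = threshold           # how far from the mean before acting

    def on_tick(self, price: float) -> None:
        self.prices.append(price)                  # perceive: market data
        if len(self.prices) < self.prices.maxlen:
            return                                 # not enough history to reason yet
        avg = mean(self.prices)                    # reason: simple statistical model
        if price < avg * (1 - self.threshold):
            self.place_order("BUY", price)         # act
        elif price > avg * (1 + self.threshold):
            self.place_order("SELL", price)        # act

    def place_order(self, side: str, price: float) -> None:
        """Stand-in for a real order-routing call (hypothetical)."""
        print(f"{side} 100 shares @ {price:.2f}")
```

Drive it with a stream of prices - `for price in feed: bot.on_tick(price)` - and it perceives, reasons, and acts with no human in the loop. Production systems replace every piece with microsecond-grade infrastructure, but the loop is the same.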
2. GitOps and Infrastructure Automation
You’re running ArgoCD or Flux? Congratulations, you’re operating robots.
GitOps treats your infrastructure like code - declarative descriptions of what your production environment should look like, stored in Git. When you merge a PR, an automated system reads the desired state and makes it so. It creates servers. Kills containers. Scales clusters. Configures networks.
That’s a robot taking real-world action. Your infrastructure exists. It costs money. It serves users. When that GitOps controller reconciles drift and spins up a new pod, it’s making a change to the physical world - somewhere, an actual CPU in an actual data center starts running your actual code.
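Strip it down and the reconcile loop looks something like this - a hypothetical sketch with hard-coded states, not ArgoCD's or Flux's actual implementation:

```python
def get_desired_state() -> dict:
    """Perceive: what Git says production should look like (hard-coded here)."""
    return {"web": 3, "worker": 2}          # service -> desired replica count


def get_actual_state() -> dict:
    """Perceive: what the cluster is actually running (hard-coded here)."""
    return {"web": 2, "worker": 2}


def apply_change(service: str, replicas: int) -> None:
    """Act: stand-in for the API call that changes real infrastructure."""
    print(f"scaling {service} to {replicas} replicas")


def reconcile_once() -> None:
    desired = get_desired_state()
    actual = get_actual_state()
    for service, replicas in desired.items():   # reason: diff desired vs actual
        if actual.get(service) != replicas:
            apply_change(service, replicas)      # act: reconcile the drift


if __name__ == "__main__":
    reconcile_once()   # real controllers run this loop forever, on every change
```

In a real controller, that `apply_change` stand-in is an API call that changes what's actually running in a data center.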
Every DevOps engineer is a roboticist. They just don’t call it that.
3. AI-Assisted Software Development
And now we get to my favorite: tools like Claude Code.
When I fire up Claude Code and say “refactor this module to use dependency injection and add comprehensive tests,” what happens? The AI perceives (reads my codebase). It reasons (understands architecture, identifies patterns). It acts (writes code, creates files, runs tests). And the changes are real.
That code ships. It runs on servers. It affects users. Money changes hands because of it.
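The shape of the loop should look familiar by now. Here's a hypothetical sketch of a single agent cycle - an illustration of the pattern, not how Claude Code is actually built, and it assumes a pytest-based test suite:

```python
import subprocess
from pathlib import Path


def propose_edits(source: dict, task: str) -> dict:
    """Reason: hypothetical stand-in for the model deciding what to change."""
    return {}   # a real agent would return {path: new_file_contents}


def agent_cycle(repo: Path, task: str) -> bool:
    """One perceive-reason-act cycle for a coding agent (illustrative only)."""
    source = {p: p.read_text() for p in repo.rglob("*.py")}   # perceive the codebase
    edits = propose_edits(source, task)                       # reason about the task
    for path, new_text in edits.items():
        path.write_text(new_text)                             # act: change the code
    result = subprocess.run(["pytest", "-q"], cwd=repo)       # act: check against reality
    return result.returncode == 0   # did the world reach the desired state?
```

Same loop as the thermostat. The "world" is just a codebase instead of a living room.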
I’ve written before about how Claude Code transformed my workflow. The 100x productivity gains aren’t just theoretical - I’m actually shipping more software to production. Real changes to real systems that affect real people.
If an AI agent writing code that gets deployed to production isn’t a robot, I don’t know what is.
The Mouse and Keyboard Era Is Ending
For decades, software meant a person sitting at a keyboard, typing instructions, clicking a mouse. The human was the actuator. Software did nothing without a person manually commanding each action.
That model is dying.
Look at what happened in 2025: Anthropic's Model Context Protocol, released in late 2024, took off - within months, hundreds of MCP servers appeared. AI agents started connecting to Slack, Jira, databases, cloud infrastructure. Google announced the Agent2Agent (A2A) protocol for swarms of collaborative agents. The Linux Foundation created the Agentic AI Foundation.
Gartner reported a 1,445% surge in multi-agent system inquiries from Q1 2024 to Q2 2025. That’s not a trend. That’s a phase change.
The trajectory is clear: software is becoming autonomous. It’s not waiting for you to click. It’s perceiving, reasoning, and acting on its own. Making changes. Getting things done.
That’s robotics. It’s just robotics without the metal body.
NVIDIA Gets It
Jensen Huang isn’t just talking about humanoid robots and autonomous vehicles. He’s talking about “physical AI” - AI that understands and acts in the real world. But here’s the key insight: the real world includes everything that exists and can be changed.
NVIDIA’s Cosmos models simulate physics because physical robots need to learn physics. But the same reasoning architectures power AI agents that navigate codebases, manage infrastructure, and orchestrate business processes. The substrate is different. The principle is the same.
When NVIDIA says they want to be the “Android of generalist robotics,” they’re not just talking about humanoids. They’re talking about the entire space of autonomous systems that take action in the world. That includes your CI/CD pipeline. That includes your AI coding assistant.
What This Means for You
If you’re a software engineer, you’re building robots. Your deployment pipeline is a robot. Your monitoring and auto-scaling? Robot. That AI assistant helping you write code? Absolutely a robot.
This isn't just a game of semantics. The mindset matters.
When you think of automation as robotics, you start asking the right questions:
- What happens when this system acts on bad data?
- How do we handle cascading failures?
- What are the safety boundaries?
- Who’s responsible when the robot makes a mistake?
These are questions that traditional roboticists have grappled with for decades. Now they’re everyone’s questions.
Conclusion
The old definition of robotics - metal bodies with motors and sensors manipulating physical objects - was always too narrow. It was just the most visible form of a broader concept.
A robot is software that autonomously changes the world. The interface to the world can be a servo motor. It can be an API. It can be a database connection. It can be a file write operation. The principle is the same: perception, reasoning, action, consequence.
You’re already building robots. You’re already deploying robots. You’re already relying on robots to keep your systems running.
Might as well call them what they are.
Long live the robots!