Nick Hodges wrote something recently in InfoWorld that stuck with us. He described spending a Saturday building a small project with Claude Code — getting it live, working exactly as intended — and then realizing something interesting: the website itself was almost beside the point. Nobody was going to manually navigate to it. Instead, they'd want their own systems to call it programmatically.
So he had Claude build a CLI.
The whole arc — idea to deployed cross-platform command-line tool — happened in an afternoon.
It's a small story, but it captures something larger that's already underway.
Across many industries, the assumption that a person logs into a system, navigates a dashboard, and manually operates it is starting to break down. More and more, the work is orchestrated: automation pipelines, scheduled processes, optimization jobs, and increasingly autonomous systems.
That raises an important design question we've been thinking about at Full Stack Energy:
Are we building for users, or for systems acting on behalf of users?
Increasingly, the answer is both — and that changes where you put your energy.
The Real Surface Area Is Underneath the Dashboard
Behind every polished interface is something more fundamental: a command, an API call, a file transformation, a structured output.
When modern systems run, they ultimately resolve to something programmable. Whether a human triggers it or an automated process does, the execution layer looks the same.
In the energy sector this shows up clearly in optimization workflows.
A battery scheduling algorithm, for example, might take in a set of day-ahead prices and output the optimal charge and discharge schedule. Traditionally, that capability might live behind a web dashboard where a user uploads a file and downloads the results.
But the real capability isn't the upload form.
It's the optimization engine underneath.
If that engine can only be accessed through a GUI, it creates friction in automated workflows. If it can be called directly — predictably and programmatically — it becomes something much more useful.
For example, an optimization engine might expose a simple CLI interface:
battery-optimize --prices CSV [options]
battery-optimize --prices CSV --out schedule.json
battery-optimize --prices CSV --stdout
battery-optimize --prices CSV --format pretty
A system can feed it a CSV of day-ahead prices and receive back a structured JSON schedule describing the optimal battery dispatch.
That output can then flow directly into other systems — forecasting pipelines, trading tools, asset controllers, or version-controlled operational plans.
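To make the engine-level contract concrete, here is a minimal sketch in Python. The function name, parameters, and the deliberately naive greedy heuristic are all illustrative assumptions, not the real optimization engine; a production engine would model state of charge, round-trip efficiency, and cycling costs. The point is the shape of the contract: prices in, structured JSON schedule out.

```python
import json

def greedy_schedule(prices, capacity_mwh=10.0, power_mw=5.0):
    """Toy dispatch: charge in the cheapest hours, discharge in the dearest.

    `prices` is a list of hourly day-ahead prices. This is only a sketch
    of the input/output contract, not a real optimization model.
    """
    hours_needed = int(capacity_mwh / power_mw)      # hours to fill or empty
    ranked = sorted(range(len(prices)), key=lambda h: prices[h])
    charge_hours = set(ranked[:hours_needed])        # cheapest hours
    discharge_hours = set(ranked[-hours_needed:])    # most expensive hours

    schedule = []
    for h, price in enumerate(prices):
        if h in charge_hours:
            action, mw = "charge", power_mw
        elif h in discharge_hours:
            action, mw = "discharge", power_mw
        else:
            action, mw = "idle", 0.0
        schedule.append({"hour": h, "price": price, "action": action, "mw": mw})
    return {"schedule": schedule}

prices = [42, 38, 35, 33, 36, 45, 60, 75, 70, 55, 50, 48,
          46, 44, 47, 52, 65, 85, 90, 80, 66, 55, 48, 44]
print(json.dumps(greedy_schedule(prices)["schedule"][:3], indent=2))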
Once that layer exists, the dashboard becomes optional.
We're Building Infrastructure, Not Interfaces
When you build energy optimization tools, there are two ways to approach the problem.
The traditional model is interface-first: you build a web form and some configuration panels, let users upload files, and display results in a dashboard.
That works if a person is always in the loop.
But modern energy systems increasingly operate as coordinated workflows. Forecasts generate price expectations, optimization models compute schedules, asset controllers execute dispatch strategies, and monitoring systems evaluate performance.
In that environment, the optimization engine is more useful as a callable service than as a webpage.
A command-line interface or API allows the capability to run:
- inside automated scheduling systems
- as part of trading or forecasting pipelines
- in batch optimization jobs
- in reproducible workflows managed through version control
Instead of being a tool someone opens, it becomes infrastructure other systems can depend on.
We prefer infrastructure.
The CLI as an Interoperability Layer
One interesting thing we've observed is that the command line has quietly re-emerged as one of the most stable interoperability layers in modern software.
CLI tools are:
- scriptable
- testable
- automatable
- composable
- easy to integrate into pipelines
For engineering teams working with energy systems, that matters.
A CLI-based optimization engine can slot into workflows like:
fetch-prices → battery-optimize → validate-schedule → deploy
Or:
forecast-prices | battery-optimize --stdout | store-schedule
These kinds of pipelines are simple, transparent, and reproducible.
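What keeps a stage like `battery-optimize --stdout` pipe-compatible is simply that it reads structured text on stdin and writes structured text on stdout. A minimal sketch of such a stage, assuming a hypothetical `hour,price` CSV format on the input side:

```python
import csv
import json
import sys

# Sketch of a pipe-friendly stage: read "hour,price" CSV rows on stdin,
# emit a JSON document on stdout for the next stage to consume.
def read_prices(stream):
    return [float(row["price"]) for row in csv.DictReader(stream)]

if __name__ == "__main__":
    prices = read_prices(sys.stdin)
    json.dump({"n_hours": len(prices), "prices": prices}, sys.stdout)
```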
Humans Still Make the Decisions
None of this removes humans from the process.
Human judgment still matters enormously in energy systems — from trading strategy to asset configuration to risk management.
But execution increasingly happens through automated processes.
When optimization needs to run daily, hourly, or across many assets simultaneously, it becomes far more reliable to treat that capability as something programmable.
The products that hold up in these environments are the ones that integrate cleanly into larger operational systems.
Composable. Predictable. Pipeline-compatible.
Designing for a System-Driven Future
The shift we're seeing is subtle but important.
Instead of asking:
"What does the user click?"
We start with:
"How does this capability fit into a larger operational workflow?"
When you design from that starting point, priorities change.
Clean APIs matter more.
Structured outputs like JSON or version-controlled artifacts matter more.
Clear contracts between systems matter more.
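One lightweight way to make such a contract explicit is a typed, versioned schema for the engine's output. The field names below are illustrative assumptions, not a real spec; the point is that downstream systems validate against a declared schema rather than scraping a dashboard.

```python
import json
from dataclasses import asdict, dataclass

# Illustrative output contract for a dispatch schedule. Field names are
# hypothetical; a schema_version field lets consumers detect breaking changes.
@dataclass
class DispatchInterval:
    hour: int
    action: str        # "charge" | "discharge" | "idle"
    power_mw: float

@dataclass
class Schedule:
    schema_version: str
    asset_id: str
    intervals: list[DispatchInterval]

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

plan = Schedule("1.0", "battery-a",
                [DispatchInterval(0, "charge", 5.0),
                 DispatchInterval(18, "discharge", 5.0)])
print(plan.to_json())
```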
The interface becomes something you add on top of a solid programmable core, not the thing you build toward.