A lot of terminal tools are useful.
Very few are memorable.
That difference has been on my mind a lot lately.
Engineers are good at building tools that are fast, scriptable, and powerful. But that still does not explain why some tools feel sharp to use while others feel like work.
I started noticing this while building a small CLI tool called spool to experiment with multi-step terminal workflows. It forced me to pay attention to something I had mostly taken for granted: how a tool feels to use, not just whether it works.
You can get a sneak peek here: Spool video.
If you have ever run a command, stared at the output, and thought, "okay, but am I done?", you have felt the UX gap.
The command finished.
The hesitation did not.
That is the standard I keep coming back to when building terminal tools, especially tools with AI in the loop:
Good tools reduce hesitation.
Not by adding more text. Not by sounding friendlier. By removing doubt at the exact moment a user is deciding what to do next.
The UX problem is hesitation
A lot of terminal tooling still treats output like a log stream. It reports that something happened, but not whether the user is now safe to proceed.
That creates friction in small but expensive ways:
- users rerun commands they already completed
- users stop to inspect files manually
- users reopen the help text
- users ask a teammate what the output means
- users hesitate because they do not trust the result
This is not just a documentation problem. It is a product problem.
A command is not done when the process exits. It is done when the user understands:
- what just happened
- whether it worked
- whether they are ready to continue
- what to do next
That is the bar.
A simple test for command-line UX
When I look at a CLI flow, I use a simple check:
Does this output remove hesitation?
If the user still has to pause and infer the state of the world, the tool has more work to do.
In practice, good command output usually covers four things:
- progress: what changed
- acknowledgement: did it succeed or fail
- readiness: am I safe to continue
- next step: what should I do now
You can treat that as a checklist. You do not need to over-formalize it. But if your output consistently covers those four points, the tool feels sharper immediately.
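As an illustration only (Spool is my own tool, but nothing below is its real code; every name is a hypothetical sketch), the checklist can be enforced with a small result type that cannot be rendered unless all four points are supplied:

```python
from dataclasses import dataclass

# Hypothetical sketch: a result object that forces output to cover
# progress, acknowledgement, readiness, and next step.
@dataclass
class CommandResult:
    changes: list[str]   # progress: what changed
    ok: bool             # acknowledgement: did it succeed or fail
    ready: bool          # readiness: am I safe to continue
    next_step: str       # next step: what should I do now

    def render(self) -> str:
        mark = "\u2713" if self.ok else "\u2717"
        lines = [f"{mark} " + ("Done" if self.ok else "Failed")]
        lines += [f"- {change}" for change in self.changes]
        lines.append(
            "Status: "
            + ("safe to continue" if self.ready else "review before continuing")
        )
        lines.append(f"Next: {self.next_step}")
        return "\n".join(lines)

print(CommandResult(
    changes=["Config created at .spool/config.json"],
    ok=True,
    ready=True,
    next_step="run `spool verify` to confirm external access",
).render())
```

The point is not the specific layout; it is that the type makes "output that skips a checklist item" unrepresentable.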
1. Tell the user what just happened
Vague success messages are one of the fastest ways to make a tool forgettable.
This:
```
$ spool init
Setup complete.
```

technically tells me the command finished.
It does not tell me what was created, whether anything was validated, or whether I can trust the environment.
This is better:
```
$ spool init
✓ Workspace initialized
✓ Config created at .spool/config.json
✓ Local environment looks valid

Next: run `spool verify` to confirm external access
```

The difference is not cosmetic.
The second version reduces three forms of uncertainty at once:
- what changed
- whether initialization succeeded
- what the next action should be
That is useful UX because it shortens the distance between output and confident action.
2. Failure should reduce confusion, not increase it
A surprising number of tools treat failure as a dead end.
They report the problem, but not in a way that helps the user recover.
This:
```
$ spool verify
Validation failed.
```

is not really a failure message. It is a refusal to explain.
A more useful version looks like this:
```
$ spool verify
✗ Verification failed

Reason:
- API token not found

Fix:
- Set `SPOOL_API_TOKEN` in your shell environment

Next: rerun `spool verify`
```

This is a better user experience for a simple reason:
The user no longer has to translate failure into action.
The tool already did that work.
That matters because the worst part of many failure states is not that they failed. It is that the user is left wondering whether the problem is configuration, permissions, network state, or something more serious.
Good tools narrow the uncertainty quickly.
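One way to make that structural rather than aspirational is an error type that cannot be constructed without a reason, a fix, and a next step. This is a hypothetical sketch, not Spool's actual implementation; every name in it is invented for illustration:

```python
class ToolError(Exception):
    """Hypothetical failure type: carries its own recovery path."""

    def __init__(self, title: str, reason: str, fix: str, next_step: str):
        super().__init__(reason)
        self.title = title
        self.reason = reason
        self.fix = fix
        self.next_step = next_step

    def render(self) -> str:
        # Mirror the Reason / Fix / Next layout from the example above.
        return (
            f"\u2717 {self.title}\n\n"
            f"Reason:\n- {self.reason}\n\n"
            f"Fix:\n- {self.fix}\n\n"
            f"Next: {self.next_step}"
        )

err = ToolError(
    title="Verification failed",
    reason="API token not found",
    fix="Set `SPOOL_API_TOKEN` in your shell environment",
    next_step="rerun `spool verify`",
)
print(err.render())
```

Because the constructor demands all four fields, a bare "Validation failed." becomes impossible to emit by accident.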
3. Success is incomplete if readiness is unclear
A command can succeed and still leave the user stuck.
This happens most often after long-running commands.
Output like this is common:
```
$ spool sync
Done.
```

But `Done.` is not a useful state. It is only a termination signal.
What I actually want to know is:
- what did you process
- what did you skip
- am I safe to continue
- should I inspect anything before moving on
A more helpful result:
```
$ spool sync
✓ Sync complete

Processed: 18 items
Skipped: 2 items
Status: safe to continue

Next: run `spool inspect --skipped` to review anything incomplete
```

This is where memorable CLI UX starts to emerge.
The command is no longer just reporting that it ran. It is managing the handoff between one action and the next.
That handoff is where hesitation lives.
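The handoff can be made explicit in code. Here is a hypothetical sketch of a summary renderer (again, not Spool's real code) in which the readiness line is unconditional, so the user never has to infer whether they can continue:

```python
def render_sync_summary(processed: int, skipped: int) -> str:
    # Hypothetical helper: readiness is always stated explicitly.
    lines = [
        "\u2713 Sync complete",
        f"Processed: {processed} items",
        f"Skipped: {skipped} items",
        "Status: safe to continue",
    ]
    if skipped:
        # Skipped items are safe, but worth a look before relying on them.
        lines.append(
            "Next: run `spool inspect --skipped` to review anything incomplete"
        )
    else:
        lines.append("Next: nothing to review; continue when ready")
    return "\n".join(lines)

print(render_sync_summary(processed=18, skipped=2))
```

Note that even the zero-skips branch still prints a next step; silence is never the terminal state.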
4. The next step should be explicit
One of the easiest ways to improve a CLI is to stop making users remember the workflow.
If a command naturally leads to another command, say so.
Do not assume the user remembers the docs, internal sequence, or happy path.
A lot of tools stop too early:
```
Completed successfully.
```

That forces the user to ask:
- now what?
- what command comes next?
- do I inspect something first?
- am I already done?
Even a small prompt changes the experience:
```
Next: run `spool check onboarding-notes.md`
```

That line does more UX work than most styling, animation, or copy polish ever will.
It removes a decision.
And in tooling, removing unnecessary decisions is often the whole game.
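One lightweight way to guarantee the hint is a static map from each command to its natural successor. The command names and successor text below are hypothetical; a real tool would hang this off its command registry:

```python
# Hypothetical workflow map: each command knows what usually follows it.
NEXT_STEP = {
    "init": "run `spool verify` to confirm external access",
    "verify": "run `spool sync` to pull remote state",
    "sync": "run `spool inspect --skipped` to review anything incomplete",
}

def next_hint(command: str) -> str:
    step = NEXT_STEP.get(command)
    # Fall back to an explicit "you are done" rather than silence.
    return f"Next: {step}" if step else "Done. No further steps required."

print(next_hint("init"))
```

The interesting property is the fallback: a command with no successor says so out loud, which answers "am I already done?" directly.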
AI tools make this problem worse
This matters even more now because more developers are running real workflows through AI tools like Claude Code and OpenAI Codex from the terminal.
Traditional commands usually fail in bounded ways. AI-powered commands introduce a different kind of uncertainty:
- did it understand what I meant
- did it produce the right kind of output
- should I trust this result
- what should I review before using it
That means AI tools need stronger signaling, not weaker signaling.
This is not enough:
```
$ spool draft --topic onboarding
Draft generated.
```

That output hides the most important UX questions.
A better version:
```
$ spool draft --topic onboarding
✓ Draft created: onboarding-notes.md

Interpreted request:
- topic: onboarding
- style: concise internal memo

Review before using:
- summary section
- action items
- open questions

Next: run `spool check onboarding-notes.md`
```

This works better because it addresses the real source of hesitation in AI systems.
The user is not only asking did it run.
They are also asking:
- what did the tool think I asked for
- what exactly did it produce
- what deserves human review before I rely on it
That is why AI UX cannot stop at generation. It has to support verification.
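A sketch of what supporting verification might look like in code (hypothetical; this is not how Claude Code, Codex, or Spool actually render output): the renderer echoes the tool's interpretation and a review checklist alongside the artifact:

```python
def render_draft_result(
    path: str, interpreted: dict[str, str], review: list[str]
) -> str:
    # Hypothetical: surface the model's interpretation and what needs
    # human review, not just the fact that a file was written.
    lines = [f"\u2713 Draft created: {path}", "", "Interpreted request:"]
    lines += [f"- {key}: {value}" for key, value in interpreted.items()]
    lines += ["", "Review before using:"]
    lines += [f"- {item}" for item in review]
    lines += ["", f"Next: run `spool check {path}`"]
    return "\n".join(lines)

print(render_draft_result(
    "onboarding-notes.md",
    {"topic": "onboarding", "style": "concise internal memo"},
    ["summary section", "action items", "open questions"],
))
```

Echoing the interpreted request is the key move: if the tool misread the intent, the user finds out here, not after shipping the draft.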
Partial success is where good tools prove themselves
The most interesting UX problems are not clean success or clean failure.
They are messy states:
- some items succeeded, others did not
- output was created, but not fully usable
- the command is safe to continue from, but something still needs attention
This is where vague wording becomes actively harmful.
For example:
```
$ spool publish
Completed with warnings.
```

That tells me almost nothing.
Warnings about what? Did anything actually publish? Am I supposed to stop?
A better version:
```
$ spool publish
⚠ Publish completed with warnings

Published:
- release-notes.md
- status-update.md

Not published:
- handoff-checklist.md

Reason:
- missing required frontmatter: `owner`

Next: add the missing field and rerun `spool publish handoff-checklist.md`
```

This kind of output respects the user.
It separates success from failure, tells the truth about the current state, and makes recovery obvious.
That is the kind of clarity people remember.
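Partial success is also easy to model honestly. A hypothetical sketch (invented names, not Spool's implementation) that keeps the two outcome lists separate instead of blending them into one warning:

```python
def render_publish(published: list[str], failed: dict[str, str]) -> str:
    # Hypothetical: failures carry their reasons; the lists never mix.
    status = "Publish complete" if not failed else "Publish completed with warnings"
    mark = "\u2713" if not failed else "\u26a0"
    lines = [f"{mark} {status}", "", "Published:"]
    lines += [f"- {path}" for path in published]
    if failed:
        lines += ["", "Not published:"]
        for path, reason in failed.items():
            lines += [f"- {path}", f"  reason: {reason}"]
        first = next(iter(failed))
        lines += ["", f"Next: fix the issue and rerun `spool publish {first}`"]
    return "\n".join(lines)

print(render_publish(
    ["release-notes.md", "status-update.md"],
    {"handoff-checklist.md": "missing required frontmatter: `owner`"},
))
```

Because each failure is stored with its reason, the renderer cannot produce "Completed with warnings." without also saying what the warnings were.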
A practical standard for better CLI output
If you build tools, especially terminal tools, it helps to review every important command with four questions:
- what just happened
- did it work
- am I ready to continue
- what should I do next
If the output leaves any of those unanswered, there is a good chance the user will hesitate.
And if the user hesitates, the tool is still making them do work.
That is the core UX mistake.
Useful tools are memorable not because they are flashy, but because they are decisive. They help the user move forward without second-guessing the state of the system.
That is what good command-line UX feels like.
A command is not done when it exits.
It is done when the next step is obvious.