Working with Text on the CLI

Clay Ratliff
14 min read · Dec 31, 2022


Photo by Max Chen on Unsplash: an unreadable screen of highlighted, unformatted code on the command line

Intro

The CLI tends to be a very opinionated environment. Everyone has their own preferred tools, and everyone’s workflow is slightly different depending on personal preferences, the tasks they regularly perform, and even workplace culture. But regardless of preference, culture, or daily tasks, some things remain common.

Overview of CLI Usage

When using the CLI you’re always working with text, and you always want working with it to be as convenient as possible. There are also common scenarios that you deal with on a regular basis. Broadly, you can lump these scenarios into working with formatted documents (code, LaTeX, JSON, etc.), working with logs, and interacting with the shell. There is admittedly overlap between the first two (logs are certainly formatted, and some are written in JSON), but we interact with logs differently than we interact with other formatted documents: with logs we’re extracting information, and they’re the last thing we want to edit; with formatted documents, editing is almost always involved.

When we interact with the CLI we’re dealing with a serial execution environment, which involves blocking actions. If you execute something like tail -f then, unless you do something to interfere, that shell session is now blocked. The same is basically true when you run any executable: until that executable completes, you can’t interact with that shell session. This means that to work effectively you really need a way to work with multiple interactive shells simultaneously, be that through opening multiple terminals, using an emulator that provides a tabs feature, tmux/screen, etc.
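To make the blocking part concrete:

sleep 300     # this shell does nothing else for five minutes
echo "done"   # only runs once sleep has finished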

Working with a Blocking Interface

My preferred tool for managing multiple interactive shells is tmux. This is something that gets pretty opinionated pretty quickly so let me outline my reasons for using it.

First, while tmux is a pretty new tool in terms of the *nix tooling, its popularity has grown enough over the last 15 years that if it doesn’t come pre-installed, then it’s easy to install with whatever package manager is available. Second, while it’s extremely customizable, it’s also usable out of the box. A vanilla install gives me the ability to manage multiple shell sessions, group screens together, pick up previous shell sessions where I left off, and much more.
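A minimal sketch of that out-of-the-box workflow (the keybindings below are the stock defaults, with Ctrl-b as the prefix):

tmux new -s work      # start a new named session
# ... do some work, then detach with Ctrl-b d ...
tmux ls               # list the sessions that are still running
tmux attach -t work   # pick the session back up where you left off
# Ctrl-b c opens a new window, Ctrl-b n and Ctrl-b p cycle between windows,
# and Ctrl-b % / Ctrl-b " split the current window into panes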

I prefer this over using tab-capable emulators because I’ll almost always have access to it, whereas customizing iTerm2 to meet my needs means my workflow will be awkward on anything other than a Mac.

I also don’t rely on creating a new terminal for each session I need because it’s annoying to organize, and if I don’t have a GUI, I can’t create another terminal anyway. Think of logging into a Linux VM on a cloud provider like GCP or AWS: you can either use a browser to ssh into a terminal session, or you can just ssh from your local machine into a terminal session. If you don’t have some kind of multiplexer, you’re stuck opening a new terminal and SSH connection for each session you need.

Regardless of how you solve the problem of working with a blocking UI, be it tmux, iTerm2, screen, or something else entirely, it’s something you’re going to contend with.

Interacting with the Shell

For the purposes of this article, I’m not going into which shell you should select as, again, this is a very opinionated topic. My shell of choice has changed over the years as my workflow, needs, and wants have evolved. Also, a good argument can be made for using any shell I can think of, with the exception of the Thompson and Mashey shells, both called sh. Unless you’re working with Ancient Unix, you’ll never encounter them.

The most common and useful tools used while interacting with the command line follow the UNIX philosophy. Love it or hate it, what’s relevant to this discussion is that any tooling worth its salt will assume that its output will be the input to another tool that it knows nothing about. In other words, the vast majority of tools can be strung together with pipes to do new things.

For example, if I want to see the recent lines containing the word error, I can run tail file.log | grep error, and if the logfile is live, I can run tail -f file.log | grep error, which will display new log lines only if they contain the word error.

The examples above are clearly simplistic and maybe a tad contrived, but the point is that you will have a plethora of tools at your disposal and they can almost all be strung together to accomplish far more than any single one could. They’re Lego bricks: learn how to stack them together and you can save yourself a lot of time and pain. Let’s see an example that’s a little more “real-world”.

If you work with REST APIs at all, you’re probably familiar with cURL, which is used to send an HTTP request and show the response. A common pattern for a POST request like the one below is to return a JSON payload, and unformatted JSON is ugly and difficult to read. We also want to play with OpenAI because that’s all the rage at the moment. First, let’s send the curl command so that we can verify the response is what we expect (YOUR_API_KEY stands in for a real API key):

curl https://api.openai.com/v1/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{"model": "text-davinci-003", "prompt": "Say this is a test", "temperature": 0, "max_tokens": 7}'

This returned this ugly piece of JSON:

{"id":"cmpl-6SXlLQAk9smbBI2TuNjNCEdKH6FHW","object":"text_completion","created":1672260987,"model":"text-davinci-003","choices":[{"text":"\n\nThis is indeed a test","index":0,"logprobs":null,"finish_reason":"length"}],"usage":{"prompt_tokens":5,"completion_tokens":7,"total_tokens":12}}

Let’s pipe this to jq to pretty-print it:

curl https://api.openai.com/v1/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer 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" \
-d '{"model": "text-davinci-003", "prompt": "Say this is a test", "temperature": 0, "max_tokens": 7}' | jq '.'

Which yields the much more readable version seen below. (The first three lines are curl’s progress meter, which is written to stderr; add the -s flag to curl if you want to suppress it.)

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   383  100   287  100    96    318    106 --:--:-- --:--:-- --:--:--   425
{
  "id": "cmpl-6SXtWoZY29vj2hIK2PqdsvIe1jUoc",
  "object": "text_completion",
  "created": 1672261494,
  "model": "text-davinci-003",
  "choices": [
    {
      "text": "\n\nThis is indeed a test",
      "index": 0,
      "logprobs": null,
      "finish_reason": "length"
    }
  ],
  "usage": {
    "prompt_tokens": 5,
    "completion_tokens": 7,
    "total_tokens": 12
  }
}

By adding one simple tool, we’ve made it possible to read the returned JSON. In addition to pipes, redirections are another tool that makes your life easier. Redirection allows you to send output from STDOUT, which by default is the screen, to a file. A single redirection symbol (>) will create a new file if one doesn’t exist, or overwrite an existing file if it does. Two redirection symbols (>>) will append to an existing file rather than overwrite it. Now I can simply add >> sample_response.json after the jq command and the JSON results are in a file ready to use in an editor.
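A quick side-by-side of the two forms (the file name and sample input are just examples):

echo '{"status":"ok"}' | jq '.' >  sample_response.json   # create the file, or overwrite it if it already exists
echo '{"status":"ok"}' | jq '.' >> sample_response.json   # append to the end of the file instead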

There are more tools out there than you can count, all of which can be used as building blocks to create results that are only limited in usefulness and power by your willingness to understand and use them.

Working with Logs

When it comes to working with logs, we’re usually trying to locate a specific part (or parts) of the log in some manner, e.g., a time period, a specific message type, a specific identifier (customer number, service ID, etc.), or some combination of factors like the ones listed above.

The other common possibility is that we may be aggregating logs to discover trends. For example, maybe we’re looking for unauthorized login attempts over the past hour or we want to know how many connection timeouts have occurred on a particular service over the past two weeks.
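A rough sketch of the failed-login half of that (ignoring the time window for simplicity; the exact message text, field positions, and log path vary by distribution and sshd configuration):

grep -c "Failed password" /var/log/auth.log                 # how many failed logins in total
grep "Failed password" /var/log/auth.log \
  | awk '{ print $(NF-3) }' | sort | uniq -c | sort -rn     # the same failures, counted per source IP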

Now, analytics and monitoring can obviously be done far more efficiently in a production environment if we use tools and services specifically built for those purposes, such as Grafana, Prometheus, Datadog, etc., but there are occasions when it’s just easier to investigate the raw logs themselves with ad-hoc tools. Sometimes, the raw logs are the only source of the data you need for your investigation.

Logs (at least good ones) will have a consistent format. Many logs are modeled on the Linux syslog format, which looks like this by default:

priority, timestamp, hostname, service name, message

Below is a snippet of the log output by sshd when a login attempt occurs and the user doesn’t exist. Note that in this case the priority is not logged; as an aside, you’ll find that many logs omit the priority from their messages. You can find more detailed information in the syslog documentation.

Jul 7 10:51:24 chaves sshd[19537]: Invalid user admin from spongebob.lab.ossec.net

There are a number of common CLI tools that we can use to make studying logs easy.

Grep:

I think the first tool that comes to mind for most people is grep. Grep provides flexible and powerful search capabilities, and the key required to unlock that power is regular expressions, or regex: grep’s power comes from its ability to apply regex patterns to its search. I would also recommend using the -E flag (extended regular expressions) rather than the default basic regular expressions, which require a lot of extra backslash escapes that are both more error-prone and more difficult to read. As an example, the command below will search the syslog file for any entry that contains the text system76, regardless of what comes before or after it, as long as the text is followed by a colon and whitespace (“: ”). It will also print the line number where it found the reference.

grep -n -E ".*system76.*: .*" /var/log/syslog 

Full details on the regex syntax it understands and all of the command flags can be found in the grep man page.

Head and Tail:

If you just need a quick peek at the start or end of a log, say, to make sure that a program started or stopped without errors, then you’ll find the head and tail commands useful. They simply display the first N or last N lines of a file. tail also has a convenient -f flag that tells it to keep printing any data appended to the file as it grows, which is extremely useful in a development environment. You can take a look at the man pages for the full details.
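The basic forms look like this (both commands default to 10 lines if you don’t pass -n):

head -n 20 file.log   # the first 20 lines
tail -n 50 file.log   # the last 50 lines
tail -f file.log      # keep printing new lines as they are appended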

Awk and Cut:

Since all useful logs follow some kind of consistent format we can take advantage of awk to parse them for specific information. While Awk is its own text-processing language, and far beyond the scope of this article, it can be a very useful tool for working with any document that has consistent formatting, not just logs. Awk is another tool that requires at least a minimal working knowledge of regex to be used effectively. Going back to our previous log example:

Jul 7 10:51:24 chaves sshd[19537]: Invalid user admin from spongebob.lab.ossec.net

The following command will locate the line using a regex and print the username and hostname that originated the failed request. Note that the comma between the two fields tells awk to insert its output field separator (a space by default) between them:

awk "/.*Invalid user.*/ { print $8, $10}" /var/log/auth.log

The cut utility is used to select delimited text and print it to standard output. Note that despite its name, it does not alter the source file's text.

Using cut you can select text by character or byte position (or a range of them), or by field using an arbitrary delimiter. As an example, say we want a list of all accounts currently on the machine for a security audit. We know that we can find a definitive list of accounts for the machine in /etc/passwd. Let’s say our file looks like this:

root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin

If we want to create a file with a list of all of the accounts created on the machine we can do something like this which selects the first field from the file using the colon (:) as the delimiter, and writes the result into the file called account-list.txt:

cut -d ':' -f 1 /etc/passwd > account-list.txt

The result would be a file containing this:

root
daemon
bin
sys
sync
games
man
lp
mail
news
uucp
proxy

There’s a lot of information out there for both of these commands, but the man pages for awk and cut will get you started.

Working with Formatted Documents

I’ve saved this for last because it’s both a very simple and very complex topic. While all of the tools previously mentioned can certainly be used to work with formatted documents, there are tools designed specifically for editing text.

Sed:

Sed is a stream editor that allows you to perform insert, delete, and substitution operations based on selected text. A quick example using the input from /etc/passwd used in the cut command above might be to flag all service accounts, since we know that all accounts that have /usr/sbin/nologin instead of a startup shell are going to be service accounts.

A word of caution here: we’re now starting to get into editing, which can be dangerous when you’re talking about system files like /etc/passwd, so I recommend always using the -i flag when running sed on system documents. The -i flag lets you specify a backup suffix so a copy is created before any editing occurs. Even better, don’t edit system files directly: create a duplicate and perform your edits on that, and make a backup of the original anyway in case something breaks when you start using the new one.
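For example, the working copy edited in the next command could be set up like this (cut-test.txt is just the scratch file name used here):

cp /etc/passwd cut-test.txt   # edit the copy, never the original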

The example below will create a backup of the working copy called cut-test.txt.backup, read the contents of the file, replace everything between the first and last delimiters (:) with a space, and replace /usr/sbin/nologin with the text Service account.

sed -i'.backup' -e 's/:.*:/ /g' -e 's/\/usr\/sbin\/nologin/Service account/g' cut-test.txt

After this command executes the file will look like this:

root /bin/bash
daemon Service account
bin Service account
sys Service account
sync /bin/sync
games Service account
man Service account
lp Service account
mail Service account
news Service account
uucp Service account
proxy Service account

As you can see, while I was expecting to see the login shell of any non-service-account users, I did miss the sync account, which, well, is an artifact of an earlier era (pre-1993) that was used to safely shut down systems without needing to know the root password. I don’t know of any modern systems that still use it. If I’m wrong, please let me know in the comments. Always happy to learn something new!

Note that if you want to take full advantage of sed, you’ll also want to be comfortable with basic regular expressions.

JQ:

We already briefly met jq in the Interacting with the Shell section. If you work with JSON, chances are you’re already familiar with jq: it’s a utility for parsing and formatting JSON documents. The . operator by itself represents the root of the document, and you can use it to reference the children of the root. Given a JSON document called pets.json which looks like this:

{
  "pets": {
    "cat": {
      "name": "Mr Fuzzy",
      "color": "calico",
      "age": 8
    },
    "dog": {
      "name": "Sir Barky",
      "color": "blonde",
      "age": 12
    }
  }
}

Then the command jq . pets.json will render the entire document in a pretty-printed format, which would look identical to the raw document above. The command jq .pets pets.json will render the pets field and its child fields cat and dog. The command jq .pets.dog pets.json will render just the dog field, as seen below:

{
  "name": "Sir Barky",
  "color": "blonde",
  "age": 12
}

It also allows you to work with JSON arrays (including array slices), provides built-in functions for filtering, mapping values, and transforming documents, and, you guessed it, supports regular expressions: the test function determines whether an input matches an expression’s criteria.
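A small sketch of those features combined (the inline array here is invented for illustration):

echo '[{"name":"Mr Fuzzy","age":8},{"name":"Sir Barky","age":12},{"name":"Rex","age":3}]' \
  | jq '.[0:2] | map(select(.name | test("^Sir")))'
# => [ { "name": "Sir Barky", "age": 12 } ]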

Again, the entire topic is too large for this article and there are already tons of articles that dive into jq usage; the official jq manual is a good place to get started.

And since this could continue forever, I’m going to end with both the easiest and the most difficult tool to recommend: easiest because it’s a no-brainer that you’re going to need one, and most difficult because opinions are so strong as to be toxic on occasion. Terminal-based editors.

Terminal Editors:

Terminal editors are a very opinionated topic. Everyone has a go-to favorite. When it comes to editors for the terminal, almost everyone thinks of either vim or emacs. While those two dominate the terminal editor space, there are a surprising number of editors available for the terminal. Most of them I have never used, at least not in a very long time, but outside of vim and emacs there are a few worth noting.

Nano — Nano has been part of the GNU ecosystem since 2001. It was created as a free replacement for the Pico editor, which was part of Pine, which was THE email client for UNIX systems at the time. Pico was the editor embedded in Pine used to create and edit emails. Its biggest advantage in my view is that it’s common to find it on many systems, it’s small, and it’s simple. Like a kinder vi.

Mcedit - Midnight Commander is a fairly popular terminal file manager (well, it used to be, but I really don’t know anymore, to be honest), and mcedit is its built-in editor. The only reason to bother with this one is if you use MC. I definitely don’t think it’s worth installing MC just to be able to use mcedit, but if you’re already an MC user, I think people sometimes forget it’s there.

There are many more alternatives out there, but I place a heavy weight on ubiquity and skill transference. If I put my time into learning a tool well, I want to make sure that the tool is available in as many environments as possible and that if possible I can transfer those skills to other tools.

Vim and Emacs:

The holy wars between vim and emacs are well documented as are the editors themselves, with countless tutorials available so I won’t go over them here.

The only comments I have to add about them are that they both meet my criteria of ubiquity and skill transference. They can both be found on virtually any platform by default, and the skills from both have some transference to other tools.

Regarding skill transference, all shells that I’ve used in the past 20 years have keybindings for both emacs and vi. This means that I can edit on the interactive shell using the same keystrokes I use for my editor. I can also use emacs or vim key bindings on my favorite GUI editors, e.g., all JetBrains editors, Eclipse, VS Code, etc. If only Word or Google Docs would do that.
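In bash, switching between the two styles in the interactive shell is a one-liner (zsh has its own equivalents):

set -o emacs   # emacs-style line editing (the default in most shells)
set -o vi      # vi-style line editing instead
# in zsh the equivalents are: bindkey -e   and   bindkey -v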

With regard to vi/vim/nvim specifically: if you can use vim or neovim, you can use vi despite its age. You might miss the fancy bells and whistles the others give you, but you’ll be fine. This is why people refer to them as if they were a single item. I am, however, starting to separate neovim from vim in my head. With the announcement that vim will rewrite its Vimscript engine from scratch, rather than follow in neovim’s footsteps and use an existing language for plugins and custom scripts, I think the distinction is warranted. While we’ll see backward compatibility between nvim and vim for the foreseeable future, I think vim made a mistake and will suffer for it. Why reinvent the wheel and spend all the time, effort, and tears creating an entirely new engine from scratch, debugging it, and so on, instead of just providing a means to use an existing language that is already well-known, well-documented, and well-loved?

Personally, I use neovim. My choice comes from two simple facts.

  • It’s lighter-weight
  • My first experiences working with emacs were frustrating and the keybindings hurt my hands after extended use.
  • Bonus fact: When I first started trying emacs there were a lot of different versions, and compatibility between them was not guaranteed. That may have contributed to some of my frustrations.

As always, whatever tools you choose to devote your time to learning, do it deliberately and do it well.


Written by Clay Ratliff

Looking for our dreams in the second half of our lives as novice sailors, as we learn to live on our floating home, SV Fearless: https://svfearless.substack.com/