# Tutorials
These tutorials walk through real-world workflows with sample data you can copy and paste. Each tutorial builds on the basics and introduces progressively more advanced features.
!!! tip "Jumping to columns with `c`"
    In any tutorial, press `c` to open a column picker and jump directly to a column by name. This is much faster than scrolling with `h` / `l` when your data has many columns.
## 1. Terminology
Before diving in, here are the core concepts you'll encounter throughout nless.
### Buffers
A buffer is a single view of data. When you open a file, nless creates a buffer to display it. Buffers are like tabs — you can have several open at once and switch between them.
Buffers are created automatically when you:
- Filter (`f`/`F`/`e`/`E`) — a new buffer opens showing only the matching (or excluded) rows
- Drill into a pivot (++enter++ on a grouped row) — a new buffer opens with the detail rows behind that group
- Create one manually (`N`) — a fresh buffer from the original data
Switch between buffers with `L` (next) and `H` (previous), or press `1`–`9` to jump directly. Press `q` to close the current buffer. When the last buffer is closed, nless exits.
Each buffer maintains its own independent state — sort order, search position, column visibility, and scroll position. This means you can have one buffer sorted by price while another is filtered to a specific customer, without them interfering with each other.
### Buffer Groups
A buffer group is a collection of related buffers. When you first open a file, nless creates a group to hold its buffers. Groups let you keep separate data sources organized.
New groups are created when you:
- Open a file (`O`) — creates a group with a 📄 icon
- Run a shell command (`!`) — creates a group with a ⏵ icon indicating a streaming source
- Start nless with a file argument — the initial group
Switch between groups with `}` (next) and `{` (previous). Press `R` to rename a group for easy identification.
Within a group, buffers work as described above — filter, pivot, and create new buffers, all scoped to that group's data.
### Other Key Terms
| Term | Meaning |
|---|---|
| Delimiter | The character or pattern used to split each line into columns. Auto-detected for CSV, TSV, JSON, and space-aligned formats. Change with `D`. |
| Column delimiter | A secondary delimiter applied to a single column to split it into sub-columns (`d`). |
| Pivot / Unique key | Mark columns with `U` to group rows by their values, adding a count column. Multiple `U` presses create composite keys. |
| Filter | A regex applied to a column (or all columns) to show only matching rows. |
| Exclude filter | The inverse — hides rows matching the pattern. |
| Pinned column | A column frozen to the left side of the screen with `m`. Pinned columns stay visible during horizontal scrolling — useful for keeping identifiers (timestamp, name, ID) in view while exploring wide datasets. Shown with a `P` label in the header. |
| Tail mode | Keeps the cursor pinned to the bottom so you always see the latest data as it streams in (`t`). |
| Highlights | New rows from streaming sources appear highlighted in green. Press `x` to clear. |
## 2. Exploring a CSV

Create a file called `orders.csv`:

```
order_id,customer,product,quantity,price,status
1001,alice,widget,3,9.99,shipped
1002,bob,gadget,1,24.99,pending
1003,alice,widget,1,9.99,shipped
1004,carol,gizmo,2,14.99,cancelled
1005,bob,widget,5,9.99,shipped
1006,dave,gadget,2,24.99,pending
1007,alice,gizmo,1,14.99,shipped
1008,carol,widget,4,9.99,pending
1009,dave,gizmo,3,14.99,shipped
1010,bob,gadget,1,24.99,cancelled
```

Open it:

```
nless orders.csv
```
Navigate the data:
- `j`/`k` to move up and down
- `h`/`l` to move left and right
- `g` to jump to the first row, `G` to jump to the last
- `0` to jump to the first column, `$` to jump to the last
- `c` to open a column picker — select a column by name to jump straight to it
Search for a value:
- Press `/`, type `alice`, press ++enter++
- The first match is highlighted. Press `n` to jump to the next match, `p` to go back.
Filter to a specific customer:
- Press `c` and select `customer` to jump to that column
- Press `f`, type `bob`, press ++enter++
- A new buffer opens showing only Bob's orders
- Press `q` to close the filtered buffer and return to the original
Quick filter by cell value:
- Move the cursor to a cell that says `shipped`
- Press `F` — the column is instantly filtered to only `shipped` rows
Sort a column:
- Press `c` and select `price` to jump to the price column
- Press `s` to sort ascending (indicated by ▲)
- Press `s` again to sort descending (▼)
- Press `s` once more to clear the sort
Column aggregations:
- Press `c` and select `quantity` to jump to that column
- Press `a` — a notification shows count, distinct, sum, avg, min, and max for the visible rows
- Try filtering first (`f` on `status` for `shipped`), then press `a` again — aggregations update to reflect only filtered rows
Exclude rows:
- Press `c` and select `status`
- Press `e`, type `cancelled`, press ++enter++
- Cancelled orders are excluded from the view
## 3. Pivoting and Grouping

Using the same `orders.csv` from above:

```
nless orders.csv
```
Group by a single column:
- Press `c` and select `status` to jump to it
- Press `U` — the data is deduplicated by `status`, and a `count` column appears on the left
- The view automatically focuses on just the key and count columns, hiding the rest so you can see the summary clearly
You should see something like:
```
count  status
5      shipped
3      pending
2      cancelled
```

!!! note "Streaming with pivots"
    If you're watching live data (e.g. `kubectl get pods -w | nless`), the hidden columns automatically reappear when new lines arrive, so you see the full row detail alongside updated counts.
Drill into a group:
- With the cursor on the `shipped` row, press ++enter++
- A new buffer opens showing all 5 shipped orders with full detail
Composite keys — group by multiple columns:
- Start from the original data (press `q` to go back to earlier buffers)
- Press `c` and select `customer`, then press `U`
- Press `c` and select `status`, then press `U` again
- Now data is grouped by the combination of `customer` + `status`:

    ```
    count  customer  status
    3      alice     shipped
    1      bob       pending
    ...    ...       ...
    ```
This is equivalent to `SELECT customer, status, COUNT(*) FROM orders GROUP BY customer, status`.
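A composite-key pivot is just a group-by count. The same aggregation on the sample `orders.csv`, sketched in plain Python (illustrative only, not nless's implementation):

```python
from collections import Counter

# The (customer, status) pairs from orders.csv, in file order
rows = [
    ("alice", "shipped"), ("bob", "pending"), ("alice", "shipped"),
    ("carol", "cancelled"), ("bob", "shipped"), ("dave", "pending"),
    ("alice", "shipped"), ("carol", "pending"), ("dave", "shipped"),
    ("bob", "cancelled"),
]

# Count rows per (customer, status) composite key
counts = Counter(rows)
print(counts[("alice", "shipped")])  # alice has 3 shipped orders
```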
## 4. Working with JSON Lines

Create a file called `events.jsonl`:

```
{"ts":"2025-03-01T10:00:00Z","event":"login","user":{"id":1,"name":"alice"},"meta":{"ip":"10.0.0.1","browser":"firefox"}}
{"ts":"2025-03-01T10:05:00Z","event":"purchase","user":{"id":2,"name":"bob"},"meta":{"ip":"10.0.0.2","browser":"chrome"}}
{"ts":"2025-03-01T10:10:00Z","event":"login","user":{"id":3,"name":"carol"},"meta":{"ip":"10.0.0.3","browser":"safari"}}
{"ts":"2025-03-01T10:15:00Z","event":"logout","user":{"id":1,"name":"alice"},"meta":{"ip":"10.0.0.1","browser":"firefox"}}
{"ts":"2025-03-01T10:20:00Z","event":"purchase","user":{"id":2,"name":"bob"},"meta":{"ip":"10.0.0.2","browser":"chrome"}}
{"ts":"2025-03-01T10:25:00Z","event":"login","user":{"id":4,"name":"dave"},"meta":{"ip":"10.0.0.4","browser":"firefox"}}
```

Open it:

```
nless events.jsonl
```
nless auto-detects JSON and parses each line into columns: `ts`, `event`, `user`, `meta`.
Extract nested fields with `J`:

- Press `c` and select `user` to jump to that column
- Press `J` — a dropdown appears listing the nested keys
- Select `user.name` — a new column is added with just the user's name
- Press `c` and select `meta`, then press `J` and select `meta.ip`
You now have flat columns for `user.name` and `meta.ip` alongside the original nested data.
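What `J` does can be pictured as plain dictionary access on the parsed JSON. A minimal sketch using one line of `events.jsonl` (illustrative only — `flat` is a hypothetical name, not an nless internal):

```python
import json

line = '{"ts":"2025-03-01T10:00:00Z","event":"login","user":{"id":1,"name":"alice"},"meta":{"ip":"10.0.0.1","browser":"firefox"}}'
row = json.loads(line)

# Flatten the nested fields the way J exposes them as new columns
flat = {"user.name": row["user"]["name"], "meta.ip": row["meta"]["ip"]}
print(flat)  # {'user.name': 'alice', 'meta.ip': '10.0.0.1'}
```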
Extract nested fields with a column delimiter:

- Press `c` and select `user`
- Press `d`, type `json`, press ++enter++
- All keys inside `user` (`id`, `name`) are extracted as new columns at once
Filter and group the extracted data:

- Press `c` and select `user.name`, then press `f`, type `bob`, press ++enter++ — filtered to Bob's events
- Press `q` to return, then press `c` to select `event` and press `U` to see event counts per type
## 5. Parsing Logs with Regex Capture Groups

Regex named capture groups let you define column structure with a pattern. This is one of the most powerful features in nless.

Create a file called `access.log`:

```
2025-03-01 10:00:01 GET /api/users 200 45ms
2025-03-01 10:00:02 POST /api/orders 201 120ms
2025-03-01 10:00:03 GET /api/users/42 200 38ms
2025-03-01 10:00:04 DELETE /api/orders/99 403 12ms
2025-03-01 10:00:05 GET /api/health 200 5ms
2025-03-01 10:00:06 POST /api/users 400 67ms
2025-03-01 10:00:07 GET /api/orders 200 89ms
2025-03-01 10:00:08 PUT /api/users/42 200 55ms
2025-03-01 10:00:09 GET /api/orders/100 404 15ms
2025-03-01 10:00:10 POST /api/orders 500 230ms
```

Open and parse with a regex delimiter:

```
nless access.log
```
The default delimiter may split on spaces, but you can get structured columns using regex named capture groups:
- Press `D` to change the delimiter
- Enter this regex:

    ```
    (?P<date>\d{4}-\d{2}-\d{2}) (?P<time>\d{2}:\d{2}:\d{2}) (?P<method>\w+) (?P<path>\S+) (?P<status>\d+) (?P<duration>\d+)ms
    ```

- Press ++enter++
The data is now parsed into clean columns: `date`, `time`, `method`, `path`, `status`, `duration`.
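Because the delimiter uses Python-style `(?P<name>...)` groups, you can debug a pattern with the `re` module before pasting it into `D`:

```python
import re

# The same pattern entered as the nless delimiter above
pattern = re.compile(
    r"(?P<date>\d{4}-\d{2}-\d{2}) (?P<time>\d{2}:\d{2}:\d{2}) "
    r"(?P<method>\w+) (?P<path>\S+) (?P<status>\d+) (?P<duration>\d+)ms"
)

line = "2025-03-01 10:00:04 DELETE /api/orders/99 403 12ms"
columns = pattern.match(line).groupdict()
print(columns["method"], columns["status"])  # DELETE 403
```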
Analyze the structured data:

- Press `c` and select `status`, then press `f` and type `^[45]` to match 4xx and 5xx status codes
- Press `c` and select `duration`, then press `s` to sort by response time
- Press `c` and select `method`, then press `U` to see request counts per HTTP method
You can also set the regex delimiter directly from the CLI:

```
nless -d '(?P<date>\d{4}-\d{2}-\d{2}) (?P<time>\d{2}:\d{2}:\d{2}) (?P<method>\w+) (?P<path>\S+) (?P<status>\d+) (?P<duration>\d+)ms' access.log
```
### The Regex Wizard

Writing `(?P<name>...)` for every group is tedious. nless has a built-in regex wizard that lets you write unnamed groups and then name them interactively.
- Press `D` and enter a regex with unnamed groups:

    ```
    (\d{4}-\d{2}-\d{2}) (\d{2}:\d{2}:\d{2}) (\w+) (\S+) (\d+) (\d+)ms
    ```

- The wizard detects unnamed groups and prompts you to name each one in order:
    - "Name for group 1 (`\d{4}-\d{2}-\d{2}`):" → type `date`
    - "Name for group 2 (`\d{2}:\d{2}:\d{2}`):" → type `time`
    - "Name for group 3 (`\w+`):" → type `method`
    - ...and so on
- After naming all groups, the wizard transforms the regex into the named version and applies it
The wizard validates each name — it must be a valid Python identifier and can't duplicate an existing group name. Press ++escape++ or submit an empty name to cancel.
The wizard also works with column delimiters (`d`), so you can use unnamed groups when splitting a column too.
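The wizard's renaming step amounts to splicing `?P<name>` into each `(`. A simplified sketch of the idea (it assumes flat capturing groups only — no escaped, nested, or non-capturing parens — and is not nless's actual code):

```python
def name_groups(pattern: str, names: list[str]) -> str:
    """Turn each unnamed '(' group into a named '(?P<...>' group, in order.
    Simplification: assumes every '(' opens a plain capturing group."""
    parts = pattern.split("(")
    out = parts[0]
    for name, rest in zip(names, parts[1:]):
        out += f"(?P<{name}>" + rest
    return out

raw = r"(\w+) (\S+) (\d+)ms"
named = name_groups(raw, ["method", "path", "duration"])
print(named)  # (?P<method>\w+) (?P<path>\S+) (?P<duration>\d+)ms
```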
## 6. Splitting Columns with Regex Capture Groups

Column delimiters (`d`) also support regex capture groups — useful for breaking apart a single column into structured sub-columns.
Create a file called `requests.csv`:

```
id,request,response_time
1,GET /api/users?page=1&limit=10,45ms
2,POST /api/orders?ref=abc,120ms
3,GET /api/users/42?fields=name,38ms
4,DELETE /api/sessions?token=xyz,12ms
5,PUT /api/users/42?role=admin&notify=true,55ms
```

```
nless requests.csv
```
Split the `request` column with a regex:

- Press `c` and select `request`
- Press `d` to apply a column delimiter
- Enter the regex:

    ```
    (?P<method>\w+) (?P<path>[^?]+)\?(?P<query>.*)
    ```

- Press ++enter++
The `request` column is now split into `method`, `path`, and `query` columns.
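As before, the column-splitting pattern can be checked against a sample cell with Python's `re` module before applying it:

```python
import re

# The column delimiter entered above
pattern = re.compile(r"(?P<method>\w+) (?P<path>[^?]+)\?(?P<query>.*)")

cell = "GET /api/users?page=1&limit=10"
parts = pattern.match(cell).groupdict()
print(parts["path"])              # /api/users
print(parts["query"].split("&"))  # ['page=1', 'limit=10']
```

The final `split("&")` mirrors the follow-up step below, where `&` is applied as a second delimiter on the `query` column.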
Go further — split the query string:

- Press `c` and select the new `query` column
- Press `d`, type `&`, press ++enter++
- Each query parameter is split into its own column
## 7. Kubectl and Aligned Output

nless works well with space-aligned output from tools like `kubectl`, `docker`, and `ps`.

```
kubectl get pods -A | nless
```

Or simulate with this sample data — create a file called `pods.txt`:

```
NAMESPACE     NAME                         READY   STATUS    RESTARTS   AGE
default       nginx-7c5b4f6b9-abc12        1/1     Running   0          5d
default       redis-6d8f7a3c2-def34        1/1     Running   2          12d
kube-system   coredns-5c98db65d4-ghi56     1/1     Running   0          30d
kube-system   etcd-master                  1/1     Running   0          30d
monitoring    prometheus-8b7c6d5e4-jkl78   1/1     Running   1          7d
monitoring    grafana-9a8b7c6d5-mno90      0/1     Pending   0          1d
logging       fluentd-4e3d2c1b0-pqr12      1/1     Running   3          20d
logging       elasticsearch-sts-0          1/1     Running   0          20d
```

```
nless pods.txt
```
nless auto-detects the double-space-aligned format. If it doesn't, press `D` and enter two spaces as the delimiter.
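The essence of the aligned-format parse — splitting on runs of two or more spaces — can be sketched with a regex split (an illustration, not nless's actual detector):

```python
import re

line = "kube-system   coredns-5c98db65d4-ghi56   1/1   Running   0   30d"

# Split on runs of two or more spaces, as in space-aligned tool output;
# single spaces inside a cell would survive this split
columns = re.split(r" {2,}", line.strip())
print(columns)  # ['kube-system', 'coredns-5c98db65d4-ghi56', '1/1', 'Running', '0', '30d']
```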
Useful workflows:

- Press `c` and select `NAMESPACE`, then press `f` and type `monitoring` to see only monitoring pods
- Press `c` and select `STATUS`, then press `U` to see a count of pods in each status
- Press `c` and select `RESTARTS`, then press `s` to find pods with the most restarts
- Press `c` and select `STATUS`, then press `e` and type `Running` to see non-running pods
## 8. Raw Pager Mode
When nless can't detect a delimiter — or when you just want to browse a file as plain text — it switches to raw pager mode. Raw mode uses a virtual-rendering pager optimized for speed: it handles million-line files without the overhead of column parsing.
### When raw mode activates

Raw mode activates automatically when:

- The input has no consistent delimiter (e.g. source code, config files, free-form logs)
- You explicitly pass `--raw` on the command line

You can also switch any buffer to raw mode (and back) with `D`.
### Browsing unstructured text

Create a file called `app.conf`:

```
# Application configuration
[server]
host = 0.0.0.0
port = 8080
workers = 4

[database]
url = postgresql://localhost:5432/mydb
pool_size = 10
timeout = 30

[logging]
level = INFO
format = %(asctime)s %(levelname)s %(message)s
file = /var/log/app.log
```

```
nless app.conf
```
nless detects no consistent delimiter and opens in raw mode. The background is subtly tinted to indicate you're in raw mode rather than tabular mode.
Navigate the file:

- `j`/`k` to scroll line by line
- `g` to jump to the top, `G` to the bottom
- ++ctrl+d++ / ++ctrl+u++ to page down and up
- `h`/`l` to scroll horizontally for long lines
Search within raw text:

- Press `/`, type `database`, press ++enter++
- Press `n` to jump to the next match, `p` to go back
### Switching from raw to structured
Raw mode is a starting point — you can switch to a structured delimiter at any time.
Create a file called `mixed.log`:

```
=== Server Startup Log ===
Generated at: 2025-03-01 08:00:00
Environment: production
---
timestamp,level,message,user,ip
2025-03-01 08:00:01,INFO,server started,system,10.0.0.1
2025-03-01 08:00:15,INFO,GET /api/health,system,10.0.0.1
2025-03-01 08:01:22,WARN,rate limit exceeded,alice,10.0.0.50
2025-03-01 08:01:45,ERROR,internal server error,bob,10.0.0.51
2025-03-01 08:02:10,INFO,GET /api/users,alice,10.0.0.50
2025-03-01 08:02:33,ERROR,database timeout,carol,10.0.0.52
```

```
nless mixed.log
```
nless detects the CSV data and skips the preamble header lines automatically, parsing `timestamp`, `level`, `message`, `user`, `ip` as columns. If you'd rather see the raw text:
- Press `D` and select `raw` — the file is shown as plain text with no column splitting
- Press `D` again and select `,` — the data is re-parsed as CSV
This round-trip between raw and structured views is useful when you need to see the original text alongside the parsed data.
### Forcing raw mode from the CLI

For large files where you don't need column parsing, `--raw` skips delimiter inference entirely:

```
nless --raw /var/log/syslog
```
This is the fastest way to browse a file — nless loads data incrementally and renders only the visible lines, so even a million-line file is responsive immediately.
## 9. Live Streaming

nless can ingest data in real time from pipes and shell commands. As new lines arrive, they are highlighted in green so you can instantly distinguish fresh data from what was already on screen. Once you've reviewed the new data, press `x` to clear the green highlights and reset everything to normal.
### Streaming from a pipe

Pipe a long-running command directly into nless:

```
kubectl get events -w | nless
```

Or try it locally:

```
ping localhost | nless
```
New lines appear at the bottom highlighted in green. Press `t` to enable tail mode — the cursor stays pinned to the bottom so you always see the latest data as it arrives. When the green highlighting becomes distracting, press `x` to reset it — the next batch of new lines will be highlighted fresh.
### Streaming with `!` shell commands
You can also launch streaming commands from inside nless without leaving the app:
- Open any file: `nless orders.csv`
- Press `!` and type: `tail -f /var/log/syslog`
- A new buffer group opens (indicated by ⏵ in the group name) and lines stream in, highlighted in green as they arrive
- Press `t` to enable tail mode and follow the output
- Press `x` to reset the green highlights once you've seen the new data
- Press `}`/`{` to switch between buffer groups, or `L`/`H` to switch buffers within a group
### Monitoring a live log with structure

Stream a log and apply a regex delimiter to parse it on the fly:

```
tail -f /var/log/nginx/access.log | nless
```
- Wait for a few lines to arrive (they appear in green)
- Press `D` and enter a regex to structure the data:

    ```
    (?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<method>\w+) (?P<path>\S+) \S+" (?P<status>\d+) (?P<bytes>\d+)
    ```

- All existing and future lines are parsed into columns
- Press `t` for tail mode — new lines continue arriving, now structured and highlighted in green
- Press `x` to clear the highlights, then press `c` and select `status`, press `f` and type `^5` to filter to 5xx errors in real time
### Watching Kubernetes pods

```
kubectl get pods -A -w | nless -d ' '
```
- The initial pod list loads as normal text
- As pods change state, new lines stream in highlighted in green
- Press `x` to reset highlights after reviewing the changes
- Press `c` and select `STATUS`, then press `U` to pivot by status — the view focuses on just `STATUS` and `count`
- When a new line arrives, all columns reappear automatically with updated counts, and the new row is highlighted in green
- Press `t` to tail and watch changes as they happen
### Running multiple streams side by side
You can open several streaming commands in separate buffers:
- Start with: `kubectl get pods -w | nless`
- Press `!` and type `kubectl get events -w` — a second buffer opens with the event stream
- Press `!` and type `tail -f /var/log/app.log` — a third buffer opens
- Each `!` command opens in its own buffer group — switch between groups with `}`/`{`
- Each group streams independently, with new lines highlighted in green — press `x` in any buffer to reset its highlights
### Opening additional files with `O`
You can open more files without leaving nless:
- Start with: `nless orders.csv`
- Press `O` and type the path to another file (autocomplete suggests files in the current directory)
- A new buffer group opens (indicated by 📄 in the group name)
- Press `}`/`{` to switch between groups
- Press `R` to rename a group for easy identification
### Streaming JSON logs

Many applications emit structured JSON logs. nless handles these in real time:

```
docker logs -f my-app | nless
```
If each log line is a JSON object, nless auto-detects the format and parses fields into columns. As new JSON lines stream in:
- They appear highlighted in green with fields already parsed
- Press `c` and select `level` (or whatever your log level field is called)
- Press `f` and type `error` to filter to errors — the filter applies to new lines as they arrive too
- Press `J` on a nested field to extract it as a column
## 10. Reshaping Data with Column Visibility

Create a file called `employees.csv`:

```
id,first_name,last_name,email,department,title,salary,start_date,office,phone
1,alice,smith,alice@co.com,engineering,senior engineer,120000,2020-03-15,NYC,555-0101
2,bob,jones,bob@co.com,marketing,manager,95000,2019-07-01,SF,555-0102
3,carol,williams,carol@co.com,engineering,staff engineer,140000,2018-01-10,NYC,555-0103
4,dave,brown,dave@co.com,sales,account exec,85000,2021-06-20,CHI,555-0104
5,eve,davis,eve@co.com,engineering,junior engineer,90000,2023-01-05,NYC,555-0105
6,frank,miller,frank@co.com,marketing,director,130000,2017-04-12,SF,555-0106
7,grace,wilson,grace@co.com,sales,manager,100000,2020-11-30,CHI,555-0107
8,hank,moore,hank@co.com,engineering,manager,135000,2019-02-18,NYC,555-0108
```

```
nless employees.csv
```
Filter columns to focus on what matters:

With 10 columns, scrolling to find the right one is slow. Use `c` to jump directly:

- Press `C` to filter columns
- Type `name|department|salary` and press ++enter++
- Only columns matching the regex are shown
To show all columns again, press `C` and type `all`.
Pin columns to keep them visible:

With wide datasets, important columns like `first_name` scroll off screen as you explore. Pin them to the left:

- Press `c` and select `first_name`, then press `m` — the column moves to the left and stays fixed while other columns scroll
- Press `c` and select `department`, then press `m` — now both `first_name` and `department` are pinned
- Scroll right with `l` — pinned columns stay visible with a separator on the left, while unpinned columns scroll normally
- To unpin, press `c` to jump to a pinned column and press `m` again
Pinned columns show a `P` label in the header so you can tell which columns are frozen.
Reorder columns:

- Press `c` and select `salary` to jump straight to it
- Press `<` to move it left, `>` to move it right
- Rearrange columns to your preferred layout — pinned columns can be reordered among themselves, but can't be moved past the pinned/unpinned boundary
Combine with other features:

- Press `C` and type `department|title|salary` to focus
- Press `c` and select `salary`, then press `s` to sort by compensation
- Press `c` and select `department`, then press `U` to see employee counts per department
- Press ++enter++ on `engineering` to see all engineers
## 11. Exporting Results

After filtering, sorting, and reshaping data, you can export the current view.

```
nless orders.csv
```
- Press `c` and select `status`, then press `f` and type `shipped`
- Press `c` and select `quantity`, then press `s` to sort
- Press `W`, type `shipped-orders.csv`, press ++enter++
The output format is inferred from the file extension:
| Extension | Format |
|---|---|
| `.csv` | CSV (comma-separated) |
| `.tsv` | TSV (tab-separated) |
| `.json`, `.jsonl` | JSON Lines (one object per row) |
| `.txt`, `.log` | Raw (tab-separated, no header) |
| anything else | CSV (default) |
So `shipped-orders.json` would write JSON Lines, `shipped-orders.tsv` would write tab-separated, etc.
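The extension-to-format mapping in the table above could be sketched as a small lookup (an illustration of the table, not nless's code — `infer_format` is a hypothetical helper):

```python
from pathlib import Path

# Extension -> output format, per the table above
FORMATS = {".csv": "csv", ".tsv": "tsv", ".json": "jsonl", ".jsonl": "jsonl",
           ".txt": "raw", ".log": "raw"}

def infer_format(filename: str) -> str:
    # Anything else falls back to CSV
    return FORMATS.get(Path(filename).suffix.lower(), "csv")

print(infer_format("shipped-orders.json"))  # jsonl
print(infer_format("notes.unknown"))        # csv
```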
Copy a single cell:
Move the cursor to any cell and press `y` to copy its contents to the clipboard.
Write to stdout:
Press `W` and type `-` to write to stdout and exit — useful for piping nless output to other tools:

```
nless data.csv   # filter/sort interactively, then W and -
```
## 12. Live Debugging a Web Server
This tutorial combines live streaming, regex parsing, and interactive analysis. Start a log stream in one terminal:
```
# Simulate a live access log (or use a real one)
while true; do
  echo "$(date '+%Y-%m-%d %H:%M:%S') $(shuf -n1 -e GET POST PUT DELETE) /api/$(shuf -n1 -e users orders health sessions) $(shuf -n1 -e 200 200 200 201 400 404 500) $(shuf -n1 -e 5 12 45 89 120 230)ms"
  sleep 1
done > /tmp/live-access.log &
```

Now open it with nless:

```
tail -f /tmp/live-access.log | nless
```
- Lines stream in and are highlighted in green as they arrive
- Press `D` and enter the regex to structure the data:

    ```
    (?P<date>\S+) (?P<time>\S+) (?P<method>\w+) (?P<path>\S+) (?P<status>\d+) (?P<duration>\d+)ms
    ```

- Press `t` to enable tail mode — you're now watching structured data scroll by in real time
- Press `c` and select `status`, then press `f` and type `^[45]` — you're filtering to errors live
- New lines still stream in (highlighted in green), but only errors pass the filter
- Press `c` and select `path`, then press `U` — the view focuses on `path` and `count` so you can see which endpoints are failing most
- As new errors stream in, all columns reappear with updated counts and the new rows highlighted in green
- Press ++enter++ on a path to drill into the specific errors for that endpoint
- Press `W` and type `errors.csv` to snapshot the current errors to a file
## 13. Time Windows and Arrival Timestamps
When working with streaming data, you often want to focus on recent activity. nless records an arrival timestamp for every row and lets you filter by time window.
### Viewing arrival timestamps

Start a streaming source:

```
ping localhost | nless
```
- Wait for a few lines to arrive
- Press `A` to toggle the `_arrival` column — it appears pinned on the left, showing the UTC timestamp (with millisecond precision) when each row was received
- Press `A` again to hide it
### Filtering by time window

The `@` key lets you show only rows that arrived within a time window of now:
- Press `@` and type `30s` to show only the last 30 seconds of data
- Rows older than 30 seconds are filtered out
- Supported formats: `30s`, `5m`, `1h`, `2h30m`, `2d`, or a plain number (treated as minutes)
- To clear the time window, press `@` and type `0`, `off`, `clear`, or `none`
### Rolling time windows

Append `+` to make the window rolling — it continuously re-evaluates to drop expired rows:
- Press `@` and type `1m+`
- The window automatically refreshes every few seconds, dropping rows older than 1 minute
- The status bar shows the active window duration
This is useful for monitoring dashboards where you want a sliding view of the last N minutes of activity.
### Combining time windows with other features
Time windows work alongside filters, sorts, and pivots:
- Start with: `kubectl get events -w | nless`
- Press `@` and type `5m+` to see only the last 5 minutes (rolling)
- Press `c` and select `TYPE`, then press `f` and type `Warning` to narrow to warnings
- Press `c` and select `REASON`, then press `U` to pivot — you're now watching a live count of warning reasons in the last 5 minutes
### Column-based time windows
Instead of filtering by arrival time, you can filter by parsed timestamps in a column. This is useful for log files with a timestamp column where you want "the last 5 minutes of the log" rather than "rows that arrived in the last 5 minutes."
Create a file called `server.log`:

```
timestamp,level,service,message
2024-01-15 09:50:00,INFO,auth,User login
2024-01-15 09:55:00,WARN,gateway,High latency detected
2024-01-15 09:58:00,ERROR,billing,Payment timeout
2024-01-15 10:00:00,INFO,auth,Token refreshed
2024-01-15 10:02:00,INFO,gateway,Request completed
2024-01-15 10:04:00,ERROR,auth,Invalid credentials
```

```
nless server.log
```
- nless auto-detects `timestamp` as a DATETIME column
- Press `@` and type `timestamp 5m` — only rows within the last 5 minutes of the log's timestamps are shown (relative to the max timestamp in the column, not wall clock)
- Press `@` and type `off` to clear
The autocomplete suggests column-prefixed durations (e.g. `timestamp 5m`, `timestamp 15m+`) for any detected DATETIME column.
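The "relative to the max timestamp in the column" rule can be illustrated with the `server.log` timestamps:

```python
from datetime import datetime, timedelta

stamps = [datetime.strptime(s, "%Y-%m-%d %H:%M:%S") for s in [
    "2024-01-15 09:50:00", "2024-01-15 09:55:00", "2024-01-15 09:58:00",
    "2024-01-15 10:00:00", "2024-01-15 10:02:00", "2024-01-15 10:04:00",
]]

# 'timestamp 5m': keep rows within 5 minutes of the column's max, not wall clock
cutoff = max(stamps) - timedelta(minutes=5)
recent = [s for s in stamps if s >= cutoff]
print(len(recent))  # 3 rows: 10:00:00, 10:02:00, 10:04:00
```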
### Timestamp format conversion
You can convert a timestamp column to a different display format. This creates a new buffer with the converted values — sort, filter, and search all work on the converted output.
Using the same `server.log`:

- Press `@` and type `timestamp -> relative` — timestamps become `2h ago`, `5m ago`, etc.
- Press `q` to close the converted buffer
- Press `@` and type `timestamp -> %H:%M:%S` — timestamps become `09:50:00`, `09:55:00`, etc.
- Press `q` to close
Convert to epoch:

- Press `@` and type `timestamp -> epoch` — timestamps become Unix epoch seconds
With timezone conversion:

- Press `@` and type `timestamp -> UTC>US/Eastern %H:%M:%S` — converts from UTC to Eastern time
- The autocomplete suggests common timezones when you type `>`
The autocomplete after `->` shows format options with example output (e.g. `iso` → `2024-01-15T10:30:00`, `relative` → `2h ago`).
### From the command line

You can also set a time window on startup:

```
kubectl get events -w | nless --tail -w '5m+'
```

Column-based windows work from the CLI too:

```
nless server.log -w 'timestamp 5m'
```
## 14. Auto-Detecting Log Formats
nless can automatically detect common log formats and apply the right regex delimiter with a single keypress. This saves you from manually writing regex patterns for well-known formats like syslog, Apache access logs, Spring Boot, and more.
### One-press log parsing

Create a file called `syslog.log`:

```
Jan 5 14:23:01 myhost sshd[12345]: Accepted publickey for deploy from 10.0.0.5 port 52341
Jan 5 14:23:02 myhost sshd[12345]: pam_unix(sshd:session): session opened for user deploy
Jan 5 14:23:03 myhost cron[999]: (root) CMD (/usr/bin/cleanup --force)
Jan 5 14:24:00 myhost kernel: TCP: out of memory -- consider tuning tcp_mem
Jan 5 14:24:01 myhost systemd[1]: Starting Daily apt download activities...
```

```
nless syslog.log
```
- The file opens in space-aligned or raw mode — not very useful for analysis
- Press `P` — nless samples the data, matches it against 19 built-in log formats, and detects "Syslog (RFC 3164)"
- The data is instantly parsed into columns: `timestamp`, `host`, `process`, `pid`, `message`
- The status bar shows `delim: Syslog (RFC 3164)` instead of a raw regex
Now you can use all the usual tools — filter by host, sort by process, pivot by pid, search within message.
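A named-group regex in the spirit of that detection, testable in plain Python (an illustrative pattern — the actual built-in Syslog format may differ):

```python
import re

# Illustrative RFC 3164-style pattern; the pid bracket is optional
pattern = re.compile(
    r"(?P<timestamp>\w{3} +\d+ \d{2}:\d{2}:\d{2}) (?P<host>\S+) "
    r"(?P<process>[\w.-]+)(?:\[(?P<pid>\d+)\])?: (?P<message>.*)"
)

line = "Jan 5 14:23:01 myhost sshd[12345]: Accepted publickey for deploy from 10.0.0.5 port 52341"
fields = pattern.match(line).groupdict()
print(fields["process"], fields["pid"])  # sshd 12345
```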
### Supported formats

Press `P` on any of these log formats and nless will detect them automatically:
- Web servers — Apache/nginx Combined and Common, NGINX error logs
- System logs — Syslog RFC 3164 (BSD) and RFC 5424
- Java/Spring — Spring Boot / Logback, ISO 8601 + Level + Logger
- Python — `WARNING:root:message` format and `timestamp - logger - LEVEL - message` format
- Go — stdlib `log` package, Logrus / slog text output
- Ruby/Rails — Rails Logger format
- PHP/Laravel — Monolog format
- Rust — env_logger format
- Elixir — Elixir Logger format
- C#/.NET — ASP.NET Core logger format
- AWS — CloudWatch / Lambda log format
- Generic — ISO 8601 timestamps with level, bracket timestamp formats
If no known format matches (e.g. CSV data), nless shows "No known log format detected".
### Saving custom log formats

If your application uses a non-standard log format, you can save it for future `P` detection:
- Press `D` and enter a regex that matches your log format (named or unnamed capture groups both work — the regex wizard will help you name any unnamed groups)
- After the delimiter is applied, nless prompts: "Save as log format? Enter name (Esc to skip)"
- Type a name (e.g. "My App Log") and press ++enter++
The format is saved to `~/.config/nless/log_formats.json` and will be checked first (with higher priority) the next time you press `P`. See Custom Log Formats for details on editing the file directly.
## 15. Pipe Workflows
nless can participate in Unix pipelines — not just as a data sink, but as a middle stage where you interactively explore, filter, and transform data before passing it downstream.
### Batch mode with `--no-tui`

Use `--no-tui` to skip the TUI entirely. nless reads the data, applies any CLI transforms (`-f`, `-s`, `-u`, `-c`, `-F`), and writes the result to stdout:

```
# Filter and sort a CSV, output as TSV
nless data.csv --no-tui -f 'status=shipped' -s 'date=desc' -o tsv

# Extract specific columns from JSON lines
cat events.jsonl | nless --no-tui -c 'timestamp|level|message' -o json

# Convert timestamps to epoch and output as JSON
nless events.csv --no-tui -F 'timestamp -> epoch' -o json

# Convert timestamps to a short time format
cat events.csv | nless --no-tui -F 'timestamp -> %H:%M'
```
### Interactive pipe mode

When stdout is a pipe but no CLI transforms are specified, nless opens the TUI normally. When you quit (`q`), the current buffer is automatically written to stdout:

```
# Explore interactively, then pipe the result to another tool
nless orders.csv | sort -t, -k2 | uniq
```
The status bar shows `⇥ Pipe (N rows) · Q to send` to remind you that output goes to the pipe on quit. Press `Q` to quit immediately — in pipe mode this sends the current buffer to stdout; outside pipe mode it's a quick way to exit without closing tabs one by one.
### Auto-batch detection

When stdout is a pipe and CLI transforms are present, nless automatically uses batch mode (no TUI):

```
# Auto-batch: stdout is a pipe + transforms present
cat data.csv | nless -f 'region=US' -s 'revenue=desc' | head -10
```

To override auto-batch and force the TUI open (so you can explore before piping), use `--tui`:

```
cat data.csv | nless --tui -f 'region=US' -s 'revenue=desc' | head -10
```
### Output formats

Control the output format with `--output-format` / `-o`:
| Format | Description |
|---|---|
| `csv` | (default) Comma-separated values with header row |
| `tsv` | Tab-separated values with header row |
| `json` | One JSON object per line (JSON Lines) |
| `raw` | Original lines as-is, no column parsing |
## 16. Multiple Regex Highlights
nless lets you pin multiple search patterns as persistent colored highlights, making it easy to visually distinguish different patterns in your data simultaneously.
Using the `app.log` from the previous tutorial, or any log file:

```
nless app.log
```
Pin search terms as highlights with the color picker:

- Press `/`, type `ERROR`, press ++enter++ — matches are highlighted with the search style
- Press `+` — a color picker appears with 8 colors (red, orange, yellow, green, cyan, purple, pink, blue-grey)
- Select red — ERROR is pinned as a red persistent highlight, and the search clears
- Press `/`, type `WARN`, press ++enter++ — WARN matches are highlighted
- Press `+`, select orange — WARN is pinned as an orange highlight
- Both ERROR (red) and WARN (orange) are now visible simultaneously
Navigate between highlight matches:
- Press `-` — a list of pinned highlights appears, each showing its match count and 🎨 / 🗑 options
- Select ERROR (3) — ERROR becomes the active search, and the cursor jumps to the first match
- Press `n` to jump to the next ERROR match, `p` for the previous one
- Press `-` again, select WARN — now `n`/`p` navigate between WARN matches instead
Recolor a highlight:
- Press `-` and select 🎨 ERROR — a color picker appears
- Select yellow — ERROR is now highlighted in yellow instead of red
Remove a single highlight:
- Press `-` and select 🗑 WARN — a confirmation prompt appears
- Select Yes — the WARN highlight is removed, ERROR remains
Clear all highlights:
Press `+` when no search is active to clear all pinned highlights at once.
17. Sessions¶
Sessions let you save and restore your complete workspace — all buffer groups, filters, sort order, column visibility, highlights, delimiter, search terms, cursor position, and more — tied to a specific data source. When you reopen the same file, nless can auto-restore the session so you pick up exactly where you left off.
Using the app.log from the previous tutorials, or any data file:
nless app.log
Save a session:
- Set up your view — apply some filters, sort a column, pin a few highlights
- Press `S` — the session menu opens
- Select Save current session… — a text prompt appears
- Type `error-investigation` and press ++enter++ — the session is saved to `~/.config/nless/sessions/error-investigation.json`
Load a session:
- Press `S` — the session menu shows your saved sessions (sorted by most recently used) with their data sources and group counts
- Select error-investigation — all your filters, sort, highlights, search term, cursor position, and column settings are restored
Auto-restore on file open:
- Close nless and reopen the same file: `nless app.log`
- nless detects a saved session matching this file and prompts: "Session 'error-investigation' found for this file. Load it?"
- Select Yes — your full workspace is restored automatically
Load from CLI:
nless --session error-investigation app.log
This skips the prompt and loads the session directly.
Rename a session:
- Press `S` — select the ✏️ option next to a session
- Type the new name and press ++enter++
Delete a session:
- Press `S` — select the 🗑 option next to a session
- Confirm deletion — the session is removed
18. Views¶
While sessions save your entire workspace tied to a specific file, views save a single buffer's analysis settings as a reusable template. Views are portable — you can save a view while analyzing one dataset and apply it to a completely different file.
Using the app.log from the previous tutorials, or any data file:
nless app.log
Save a view:
- Set up your analysis — filter to `ERROR` rows, sort by timestamp, hide some columns
- Press `v` — the view menu opens
- Select 💾 Save current view… — a text prompt appears
- Type `errors-only` and press ++enter++ — the view is saved to `~/.config/nless/views/errors-only.json`
Load a view on different data:
- Open a completely different file: `nless other-app.log`
- Press `v` — the view menu shows your saved views
- Select 📌 Load errors-only — the filter, sort, and column settings are applied to the new data
- If some settings reference columns that don't exist in the new data, the notification tells you what was skipped (e.g. "2 skipped: sort (column 'response_time' not found), filter on 'status_code'")
Undo a view:
- After loading a view, press `v` again
- Select ↩️ Undo last view — the buffer is restored to exactly how it was before the view was applied, including any rows that were filtered out
Rename a view:
- Press `v` — select the ✏️ option next to a view
- Type the new name and press ++enter++
Delete a view:
- Press `v` — select the 🗑 option next to a view
- Confirm deletion — the view is removed
Sessions vs. Views
Sessions (`S`) save your entire workspace (all buffer groups, cursor position, tab layout) and are tied to the data source — great for resuming work on a specific file. Views (`v`) save a single buffer's analysis settings and work across any data — great for reusable analysis patterns like "show only errors" or "pivot by status code".
19. Merging Multiple Streams¶
When investigating an issue across multiple log files, you can merge them into a single view ordered by arrival time with a _source column to identify where each row came from.
CLI Merge¶
Create two sample files:
# app.log
timestamp,level,message
2024-01-15T10:00:01,INFO,User login
2024-01-15T10:00:03,ERROR,Database timeout
2024-01-15T10:00:05,INFO,Request completed
# worker.log
timestamp,level,message
2024-01-15T10:00:02,INFO,Job started
2024-01-15T10:00:04,WARN,Retry attempt 1
2024-01-15T10:00:06,INFO,Job completed
Merge them from the command line:
nless -m app.log worker.log
The merged view shows all rows interleaved by arrival time, with a `_source` column pinned on the left identifying which file each row came from. You can filter by `_source` to isolate one file's data (e.g. `f` on the `_source` column, then type `app.log`).
Delimiter conflicts: If merged files use different delimiters (e.g. CSV + TSV), nless auto-switches to raw mode so all lines render cleanly with the `_source` column. Override with `--delimiter` if needed.
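Conceptually, the merge tags each row with its origin and interleaves the streams. The Python sketch below illustrates this with the two sample files above, using the timestamp as a stand-in for arrival order — a rough model of the merged view, not nless's implementation:

```python
import heapq

# Rows from the two sample files above, as (timestamp, level, message).
app = [
    ("2024-01-15T10:00:01", "INFO", "User login"),
    ("2024-01-15T10:00:03", "ERROR", "Database timeout"),
    ("2024-01-15T10:00:05", "INFO", "Request completed"),
]
worker = [
    ("2024-01-15T10:00:02", "INFO", "Job started"),
    ("2024-01-15T10:00:04", "WARN", "Retry attempt 1"),
    ("2024-01-15T10:00:06", "INFO", "Job completed"),
]

# Tag every row with a _source column, then interleave the two streams.
tagged_app = [("app.log",) + row for row in app]
tagged_worker = [("worker.log",) + row for row in worker]
merged = list(heapq.merge(tagged_app, tagged_worker, key=lambda r: r[1]))

for row in merged:
    print(*row)
```

The result alternates between the two sources, with the `_source` tag in the first column of every row.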
Opening Multiple Files as Separate Groups¶
Without -m, passing multiple files opens each in its own buffer group:
nless app.log worker.log metrics.tsv
Switch between groups with } and {. Each group is independent — you can apply different filters, sorts, and columns to each.
In-App Merge¶
You can also merge buffers that are already open:
- Open two files as separate groups with `O`
- Press `M` to open the merge picker
- Select the buffer you want to merge with the current one
- A new "merged" tab appears with combined data and the `_source` column
The _source column can be hidden/shown like any other column using C (column filter) or h (toggle hidden columns).
20. Ex Mode¶
Ex mode gives you a command-line prompt inside nless for quick operations without remembering keybindings. Press : to open the prompt.
Using orders.csv from earlier tutorials:
nless orders.csv
Substitution — rewrite cell values:
- Press `c` and select `status` to jump to that column
- Press `:` and type `s/shipped/delivered/` — all `shipped` values in the status column become `delivered`
- Press `:` and type `s/pending/in-progress/g` — the `g` flag applies the substitution across all columns, not just the current one
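The difference the `g` flag makes can be sketched in a few lines of Python — substitution confined to the current column versus applied to every column. The row and column names here are hypothetical, and this models the described semantics rather than nless's code:

```python
import re

# One row, with "status" as the current (cursor) column — hypothetical data.
row = {"customer": "alice", "status": "shipped", "note": "shipped early"}

# s/shipped/delivered/ — no flag: substitute only in the current column.
row["status"] = re.sub("shipped", "delivered", row["status"])

# s/shipped/sent/g — g flag: substitute across every column of the row.
row = {col: re.sub("shipped", "sent", val) for col, val in row.items()}

print(row)  # {'customer': 'alice', 'status': 'delivered', 'note': 'sent early'}
```

Note that `note` still contained `shipped` after the first, column-scoped substitution; only the `g`-flagged pass touched it.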
Filtering and sorting by name:
- Press `:` and type `filter customer alice` — a new buffer opens with only Alice's rows
- Press `q` to close the filtered buffer
- Press `:` and type `sort price` — sorts by the price column (cycles asc → desc → none)
- Press `:` and type `sort price desc` — sorts descending directly
- Press `:` and type `exclude status cancelled` — removes cancelled orders
File operations:
- Press `:` and type `w filtered.csv` — writes the current buffer to `filtered.csv`
- Press `:` and type `o other-data.csv` — opens a file in a new buffer group
- Press `:` and type `q` — closes the current buffer (same as pressing `q`)
Settings:
- Press `:` and type `set theme monokai` — switches to the monokai theme
- Press `:` and type `set keymap emacs` — switches to emacs keybindings
- Press `:` and type `delim ,` — changes the delimiter to comma
Autocomplete:
Ex mode supports autocomplete — start typing a command and press ++tab++ to see suggestions. Column names, theme names, and keymap names are all suggested contextually.
21. Putting It All Together¶
This tutorial ties together regex parsing, filtering, pivoting, excluded lines, and export into a single investigation workflow. Create a file called app.log:
2025-03-01 08:00:01 INFO server started on port 8080
2025-03-01 08:00:15 INFO GET /api/health 200 user=system ip=10.0.0.1
2025-03-01 08:01:22 WARN GET /api/users 429 user=alice ip=10.0.0.50
2025-03-01 08:01:45 ERROR POST /api/orders 500 user=bob ip=10.0.0.51
2025-03-01 08:02:10 INFO GET /api/users/1 200 user=alice ip=10.0.0.50
2025-03-01 08:02:33 ERROR GET /api/orders/99 500 user=carol ip=10.0.0.52
2025-03-01 08:03:01 WARN POST /api/users 400 user=dave ip=10.0.0.53
2025-03-01 08:03:15 INFO GET /api/health 200 user=system ip=10.0.0.1
2025-03-01 08:04:00 ERROR DELETE /api/users/5 403 user=eve ip=10.0.0.54
2025-03-01 08:04:22 INFO PUT /api/users/1 200 user=alice ip=10.0.0.50
2025-03-01 08:05:10 ERROR POST /api/orders 500 user=bob ip=10.0.0.51
2025-03-01 08:05:45 INFO GET /api/orders 200 user=frank ip=10.0.0.55
Step 1 — Parse with regex capture groups:
nless app.log
Press D and enter:
(?P<date>\d{4}-\d{2}-\d{2}) (?P<time>\S+) (?P<level>\w+)\s+(?P<method>\w+) (?P<path>\S+) (?P<status>\d+) user=(?P<user>\w+) ip=(?P<ip>\S+)
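You can sanity-check this pattern outside nless with Python's `re` module, which shares the `(?P<name>…)` named-group syntax. The sketch below runs it against one line of the sample `app.log` and confirms the `server started` line doesn't match (it lacks the method/path/status fields):

```python
import re

# The capture-group pattern from this tutorial, split for readability.
pattern = re.compile(
    r"(?P<date>\d{4}-\d{2}-\d{2}) (?P<time>\S+) (?P<level>\w+)\s+"
    r"(?P<method>\w+) (?P<path>\S+) (?P<status>\d+) "
    r"user=(?P<user>\w+) ip=(?P<ip>\S+)"
)

line = "2025-03-01 08:01:45 ERROR POST /api/orders 500 user=bob ip=10.0.0.51"
m = pattern.match(line)
print(m.groupdict())
# {'date': '2025-03-01', 'time': '08:01:45', 'level': 'ERROR', 'method': 'POST',
#  'path': '/api/orders', 'status': '500', 'user': 'bob', 'ip': '10.0.0.51'}

# The startup line has no method/path/status/user/ip, so it doesn't match:
print(pattern.match("2025-03-01 08:00:01 INFO server started on port 8080"))  # None
```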
Step 2 — Investigate errors:
- Press `c` and select `level`, then press `f`, type `ERROR`, press ++enter++
- You see only error lines with structured columns
Step 3 — Find repeat offenders:
- Press `c` and select `user`, then press `U` to group by user
- Bob appears twice — press ++enter++ on his row to see his specific errors
Step 4 — Check excluded lines:
- Press `q` to go back to the original regex-parsed buffer
- Press `~` to see lines that were excluded — this includes both lines that didn't match the regex pattern and lines removed by filters
- The `server started` line appears here (it has no method/path/status/user/ip)
- Press `~` again from this buffer to chain further — each `~` accumulates exclusions from all ancestor buffers, letting you drill into what's being filtered out at every level
Step 5 — Export findings:
- Navigate back to the error-filtered buffer (press `L`/`H` to switch buffers)
- Press `W`, type `errors.csv`, press ++enter++