This talk is about two tools:
- feedgnuplot - a tool to plot data coming in on STDIN http://www.github.com/dkogan/feedgnuplot
- vnlog - a toolkit to read, write and manipulate columnar ASCII data http://www.github.com/dkogan/vnlog
Both of these are free software, and both are available in Debian/unstable now. feedgnuplot has been around much longer than vnlog, so feedgnuplot is available in many more distros currently.
These tools are written in the spirit of UNIX: a large number of simple tools that communicate via pipes, usually as part of a larger shell pipeline. They make very powerful one-liners possible, which are very useful for the initial exploration of data. For complex tasks, more powerful toolkits (numpy, Matlab, Excel, etc.) are a better choice, but you can go a LONG way with just the shell.
Why do any of this in the shell? Because we already live in the shell, we can combine this with other tools to get a LOT of leverage out of it. Examples:
- remote web-server monitoring by visualizing (in real time) web server logs over ssh
- 802.11 network quality logging
- data throughput monitoring
You wouldn’t even attempt any of these with Matlab or Excel.
These tools are not the PERFECT choice for any one task, but are a GOOD choice for a wide range of tasks. They are great for prototyping and initial data exploration, since you can quickly get something running.
Overarching philosophy: do not create new knowledge for the user to acquire. These are all largely wrappers around other core tools, so most of the usage is inherited from those.
feedgnuplot has much clearer applicability, so I’ll talk about it first.
This is a frontend to the gnuplot plotting tool. It doesn’t actually plot anything: it generates commands for gnuplot, and IT makes the plots. We visualize standard input!
Let’s go over the basics, and get something running.
Before we can plot anything, we need data to plot:
seq 100
Let’s plot it!
seq 100 | feedgnuplot
That’s it. We didn’t ask for anything specific, so we got a plot that uses the default settings. Let’s plot lines AND points.
seq 100 | feedgnuplot --lines --points
Most of the interaction between feedgnuplot and gnuplot consists of passing strings verbatim to gnuplot, but VERY common stylings, such as --points and --lines, have their own feedgnuplot options. If we want to change the point style or point size, we tell gnuplot about it:
seq 100 | feedgnuplot --with 'linespoints pointsize 3 pointtype 7'
This MIGHT look cryptic, but it is the gnuplot syntax. If you know how to talk to gnuplot, there’s nothing to learn. If you don’t, then you get to learn two tools for the price of one.
Furthermore, my tool does not even know what this string means, it just passes it down to gnuplot verbatim. So everything the underlying tool supports is available here.
And since I’m reading a pipe, all the other normal pipe things are available. So far, we fed data into the tool, and once all the data had been read in, we made a plot. Instead, we can make a plot of the data AS IT COMES IN by passing --stream. This is REALLY useful for all types of realtime monitoring, so let’s do some of that: let’s investigate the temperature in my laptop. There’re a number of thermal probes in this machine:
cat /proc/acpi/ibm/thermal
Let me read off the temperatures every 1 second, strip "temperatures:", and send it to the plotter.
while true; do < /proc/acpi/ibm/thermal awk '{$1=""; print}'; sleep 1; done |
feedgnuplot --stream --with linespoints --exit
It looks like there’re a number of sensors that aren’t hooked up, and always return -128. Let me ignore those, and let’s also label the axes and the datasets.
while true; do < /proc/acpi/ibm/thermal awk '{print $2,$3,$4,$6,$8,$10,$11,$12}'; sleep 1; done |
feedgnuplot --stream --with linespoints --autolegend \
--xlabel 'Time (s)' \
--ylabel 'Temperature (degrees C)' \
--title 'Laptop temperatures vs. time' \
--exit
Cool. The data looks uninteresting right now, but I can make it more interesting by spinning a core:
while true; do true; done
Apparently probe 7 is the one sitting on the cpu. We learned something!
Let’s take it a step further. Let’s say I really care about the temperature of this laptop. I’m going to log the temperatures to a file, which makes it possible to analyze them later.
while true; do < /proc/acpi/ibm/thermal awk '{print $2,$3,$4,$6,$8,$10,$11,$12}'; sleep 1; done > \
/tmp/temperatures.log
Whenever I like, I can then plot the data in this file, to look at ALL the past temperature history.
< /tmp/temperatures.log \
feedgnuplot --with linespoints --autolegend \
--xlabel 'Time (s)' \
--ylabel 'Temperature (degrees C)' \
--title 'Laptop temperatures vs. time'
Or, I can read the data off the end of this file to get realtime telemetry
tail -f /tmp/temperatures.log | \
feedgnuplot --with linespoints --autolegend \
--xlabel 'Time (s)' \
--ylabel 'Temperature (degrees C)' \
--title 'Laptop temperatures vs. time' \
--stream --xlen 10 --exit
We get BOTH: logging AND realtime visualization.
feedgnuplot can do much much more than what I showed here. Colors! 3D plots! Images! Histograms! Contours! Hardcopies! Self-plotting data!
We just made a very rudimentary data logging and visualization system. But if I care about the temperature of this laptop as much as I say I do, this has shortcomings. These will be familiar to everyone who ever needed to log anything.
First of all, if I look at these logs in a year, I won’t know what any of this is: what do the numbers mean? who generated them? how often? on what hardware? We need support for comments.
This is a time series, so a time column is essential. If I made analysis tools to work with these logs, and then decided to add a leading time column later, the existing tools that expect N columns of temperature would break.
Similarly, here probe 7 was sitting on the CPU, but maybe I’ll want to process data from some other laptop where probe 5 is on the CPU. This also breaks existing tools.
So what I’d actually do (for years!) is to write out annotated log lines like
time=123 cpu_temp=5 gpu_temp=6
This is unambiguous, but it’s very verbose. And you MUST parse this data before being able to do ANYTHING with it (plot it, load it into numpy, etc). This extra parsing step can be done with awk or perl, but it is tedious and error-prone.
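A hedged sketch of what that parsing step looks like in awk, using the made-up key=value records from the example above (the field names are illustrative):

```shell
# pull the cpu_temp values out of key=value annotated log lines
printf 'time=123 cpu_temp=5 gpu_temp=6\ntime=124 cpu_temp=7 gpu_temp=6\n' |
  awk '{for(i=1;i<=NF;i++) if(split($i,kv,"=") && kv[1]=="cpu_temp") print kv[2]}'
```

This prints 5 and then 7. Every downstream consumer needs some variant of this loop, which is exactly the tedium being complained about.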
After you live this way for a while, you get some clarity about what the issues are, and how to fix them.
This is a good segue to talk about vnlog. But before we get there, let’s look at feedgnuplot’s data interpretation in more detail.
There are 2 on/off switches that control the interpretation of the data read by feedgnuplot. Both switches are off by default.
- --[no]domain
If --domain is given, then the FIRST item on each line is the x-coordinate for the rest of the points on that line. I.e. each line is interpreted as
x y0 y1 y2 y3 y4 ....
Otherwise the line number is used for the x-coordinate.
- --[no]dataid
Each dataset has an ID. By default, the IDs are numeric, indexed by the data position on each line. So if we run with --nodomain --nodataid and we have a line
y0 y1 y2 y3 y4
then this line describes 5 points, one in each of 5 different datasets. The datasets have IDs 0,1,2,3,4.
If we pass --dataid, then each point is represented by 2 items: a string ID followed by the data. So if we run with --nodomain --dataid and we have a line
position y0 speed y1 direction y2 temperature y3
then this line describes 4 points, one in each of 4 different datasets. The datasets have IDs "position", "speed", "direction" and "temperature".
--domain and --dataid are independent, so together they can describe 4 different data formats.
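As an illustration (the values and the IDs "a","b" are made up), here is the same pair of points, one in each of two datasets, in each of the 4 formats:

```
--nodomain --nodataid:   1 4            (x = line number; dataset IDs 0,1)
--domain   --nodataid:   10 1 4         (x = 10;          dataset IDs 0,1)
--nodomain --dataid:     a 1 b 4        (x = line number; dataset IDs "a","b")
--domain   --dataid:     10 a 1 b 4     (x = 10;          dataset IDs "a","b")
```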
The data parser is as permissive as possible. Each line can have different numbers of points. Some datasets can appear much more often than others (--dataid makes this possible).
The IDs are used for 2 things:
- dataset labels made with --autolegend
- curve-specific styling
Example:
seq 20 | awk '$1%2 { print $1*$1,"odd",$1 - 5} !($1%2){ print $1*$1,"even",$1}'
seq 20 | awk '$1%2 { print $1*$1,"odd",$1 - 5} !($1%2){ print $1*$1,"even",$1}' | \
feedgnuplot --domain --dataid \
--style odd 'with points pt 7' \
--style even 'with lines' \
--legend odd "Odd domain" \
--autolegend
So far each point was described by one domain value (possibly implicit with --nodomain) and one range value, but this is just a special case. I can specify range counts with --rangesizeall (for ALL the data in a plot) or --rangesize (for each dataset separately). Alternately I can ask for --tuplesizeall/--tuplesize if I’d rather count domain+range together. The extra range values are used for various fancier gnuplot styles: errorbars, vectors, colors, symbol sizes, etc. The gnuplot docs describe the specific formats. For instance:
gnuplot -e 'help yerrorbars'
It is the user’s responsibility to make sure the right data is passed for a specific style: feedgnuplot doesn’t know anything about styles, and just passes on the data to gnuplot. Example: let’s add colors and point sizes to the previous plot:
seq 20 | awk '$1%2 { print $1*$1,"odd",$1 - 5,$1,$1} !($1%2){ print $1*$1,"even",$1,$1}'
seq 20 | awk '$1%2 { print $1*$1,"odd",$1 - 5,$1,$1} !($1%2){ print $1*$1,"even",$1,$1}' | \
feedgnuplot --domain --dataid \
--style odd 'with points pt 7 palette ps variable' \
--tuplesize odd 4 \
--style even 'with lines palette' \
--tuplesize even 3 \
--legend odd "Odd domain" \
--autolegend
For streaming plots to work, feedgnuplot must receive its input as soon as it is available. Thus any buffering upstream must be turned off. Look at fflush() in gawk and -Winteractive in mawk for instance.
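A minimal sketch of the awk side of this (fflush() is supported by gawk, mawk, busybox awk and the one-true-awk):

```shell
# double each input line and flush immediately, so a downstream consumer
# (e.g. feedgnuplot --stream) sees each line as soon as it is produced
seq 3 | awk '{print $1*2; fflush()}'
```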
We can also plot in 3d. This works like one would expect:
- We’re now plotting (z1,z2, …) against (x,y), so we have 2 domain values
- --domain MUST be given: the line number alone can’t provide both x and y
Example:
seq 200 | perl -nE 'chomp; $c=cos($_/10); $s=sin($_/10); \
say "$c $s $_ $_ " . ($_+30) . " " . (200-$_);'
seq 200 | perl -nE 'chomp; $c=cos($_/10); $s=sin($_/10); \
say "$c $s $_ $_ " . ($_+30) . " " . (200-$_);' | \
feedgnuplot --3d \
--domain \
--with 'linespoints pt 7 palette' \
--tuplesizeall 4
I also provide direct histogram support. For instance, let’s look at the distribution of file sizes in /tmp.
ls -l /tmp | awk '$1 != "total" {print $5/1000000}' | \
feedgnuplot --histogram 0 --binwidth 1
I can ask for a square aspect ratio with --square. If plotting in 3D, I can ask for a square xy, but a free z, with --square_xy.
I can set/unset gnuplot variables with --set/--unset.
I can plot on top of an image with --image (very useful for computer vision).
I can also make hardcopies. Let’s save our beautiful histogram to a file
ls -l /tmp | awk '$1 != "total" {print $5/1000000}' | \
feedgnuplot --histogram 0 --binwidth 1 --hardcopy /tmp/filesizes.pdf
Finally, since this is firmly rooted in the world of UNIXy shells, I can make self-plotting data files. For instance:
cat selfplotting.dat
./selfplotting.dat
Note that all these things work together. I can have a histogram updating in real time with errorbars and colored circles plotted on top and so on.
To make the temperature logging nice AND widely useful I want:
- An ASCII table for interoperability with various tools
- Support for comments
- Field labels. These are at least a comment for humans, but a set of tools that automatically interface with these would be really nice
The vnlog toolkit is a set of libraries and tools to read, write and manipulate such data. The tools are all independent; you can use all of them, or just one.
The first part of vnlog is conceptual: it is a data format.
This data format is trivial, and is exactly what one would expect:
- newline-separated records, whitespace-separated fields: just like awk
- lines beginning with # are comments
- first non-##, non-#! comment is a legend, labeling the fields
Here’s a valid vnlog:
## comment
# time temperature
1 20
## another comment
2 21
3 25
4 -
5 22
This format "just works" with awk. It "just works" with feedgnuplot. You can easily read it into Matlab or Excel or numpy. And you can easily write it even with just printf().
Since this is trivial, you don’t NEED any special tools to do any work. The vnlog toolkit provides some libraries and tools to make working with this data nicer, but again, none of these tools or libraries are strictly necessary.
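For instance, a minimal sketch of producing and consuming a vnlog with nothing but printf and awk (the data and the /tmp/demo.vnl path are made up):

```shell
# write a tiny vnlog with printf alone
printf '# time temperature\n1 20\n2 21\n' > /tmp/demo.vnl
# read it back with awk, skipping comments: print temperatures above 20
awk '!/^#/ && $2 > 20 {print $2}' /tmp/demo.vnl
```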
Philosophy:
- as before, minimal new knowledge is created. I don’t actually do any work. Everything is a wrapper for something else, that we’re already familiar with. A friendly learning curve results
- In a data processing pipeline, as much as possible, each step should use this format for both input and output. This produces a uniformity that’s really pleasant to work with
The vnlog toolkit provides some libraries and some tools to manipulate textual data. In my own work I now read and write this format for pretty much EVERYTHING I do. And a common workflow is to write EVERYTHING to these logs (routinely hundreds of columns!), and to use the tools to pull out the stuff I need for analysis.
The provided libraries to read/write vnlog are useful, but not very interesting, and I won’t talk about them here. I want to focus on the shell tools.
Let’s do some case studies to highlight some useful workflows
Let’s revisit our temperature logging, but let’s add a header identifying the fields
(echo '# temp0 temp1 temp2 temp3 temp4 temp5 temp6 temp7';
while true; do < /proc/acpi/ibm/thermal awk '{print $2,$3,$4,$6,$8,$10,$11,$12}'; sleep 1; done) > \
/tmp/temperatures.vnl
I made this a valid vnlog simply by echoing a legend line. This extra line is still a comment, so tools that ignore all comments still work. The previous plot-temperatures-from-file command still works with no changes:
< /tmp/temperatures.vnl \
feedgnuplot --with linespoints --autolegend \
--xlabel 'Time (s)' \
--ylabel 'Temperature (degrees C)' \
--title 'Laptop temperatures vs. time'
But we can do more things. I can tell feedgnuplot that this is a vnlog (with --vnl), and it’s then able to label the fields.
Note: I didn’t have to manually tell feedgnuplot the name of each column: it figured those out from the log file
< /tmp/temperatures.vnl \
feedgnuplot --with linespoints --autolegend \
--xlabel 'Time (s)' \
--ylabel 'Temperature (degrees C)' \
--title 'Laptop temperatures vs. time' \
--vnl
And I can do fancier things. For instance I can pull out just the temperatures from probe 7 (the CPU probe), rename that column to indicate that it’s from the CPU probe, and convert it to degrees Fahrenheit:
< /tmp/temperatures.vnl \
vnl-filter -p CPUtempF='32+temp7*9./5.' | head
Note that the output of vnl-filter is still a valid vnlog, so I can plot it too, with the identical plot command.
< /tmp/temperatures.vnl \
vnl-filter -p CPUtempF='32+temp7*9./5.' | \
feedgnuplot --with linespoints --autolegend \
--xlabel 'Time (s)' \
--ylabel 'Temperature (degrees C)' \
--title 'Laptop temperatures vs. time' \
--vnl
Let’s talk about Apriltags: https://april.eecs.umich.edu/software/apriltag.html
Here’s an example showing some of these tags. They’re similar to QR codes, but encode much less data in a much more robust way.
They are useful in robotics. You can place them on robots, and then build tracking systems that are based on detecting these visually
This system was designed and built by Edwin Olson, who’s now a professor at the University of Michigan. There’s a free-software library available to detect the tags in an image
These work well. But how well, exactly? How robust are they to noise? How robust are they to changes in contrast? Let’s find out!
I added a python interface and a commandline tool to the AprilTag library, and pushed these all to Debian (install with "apt install apriltag"). Let’s run it.
apriltag --vnl orig.jpg | tee orig.vnl
Note that I save the output to a file. So if we have downstream tools that ingest tag detections, they can use this format, and I can send them the precomputed file if I want. If everything in a pipeline uses this format for both input and output you get a caching system for free, and you can analyze each stage in the pipeline with the same tools.
Let’s look at the data. There’s a lot of stuff. Let’s align the columns so that we (high-maintenance humans) can more clearly see what’s what.
< orig.vnl vnl-align
That’s better.
vnl-align realigns the columns for easier reading. Since the vnlog format is not whitespace-sensitive, this doesn’t change the meaning of the data.
Note that here we have a record that reports a detection count, followed by the detections themselves, written as one detection per record. The null data fields are represented with "-". Storing a detection count separately in this way is not required, but is often useful.
Let’s visualize these detections to see if the detector worked.
First, we filter the log to keep only the data we want. Columns xc and yc are the pixel coordinates of the centers of detected tags, and id indicates which tag we’re seeing.
< orig.vnl vnl-filter -p xc,id,yc
And with the filtered data, we can plot it overlaid on top of our image
< orig.vnl vnl-filter -p xc,id,yc | \
feedgnuplot --autolegend --image orig.jpg --square --domain --dataid --with 'points pt 7 ps 2'
So the detector looks like it works.
Note that I gave feedgnuplot xc,id,yc in that order specifically, and that I used --domain --dataid. With --domain the first value on each line (xc) is picked up as the x-coordinate, and with --dataid the id is interpreted as the dataset ID.
The detector works, but how robust is it to changes in contrast and to noise? Let’s find out!
Let’s pretend that I gathered lots of images at different lighting levels, and that I had a tool to evaluate the illumination and noise levels of each. For this talk I simulate this by tweaking contrast levels and adding noise:
for c (`seq -40 5 40`) { convert orig.jpg -brightness-contrast x${c}% +noise Gaussian image${c}.jpg }
geeqie image*.jpg(Om)
This creates a set of images with the contrast level in the filename; I pull that level out into a separate vnlog. If we had REAL images, I’d get this from the image intensities. I write these into "contrast.vnl":
(echo '# path contrast'; for fil (image*.jpg) { echo -n "$fil "; echo $fil | sed 's/image//; s/.jpg//' }) | tee contrast.vnl
Let’s run the apriltag detector over each image, dumping everything into one big log file. This is a choice; we can write one result file per image. The tools don’t care.
apriltag --vnl image*.jpg | tee images.vnl
Cool! I now have two logs, one containing the apriltag detections, and another containing the contrast info. Let’s join them:
vnl-join -j path images.vnl contrast.vnl | tee joint.vnl
I just performed a database-style inner join. It matched up the "path" columns in the two input data files, and concatenated the columns of each matching row. So each line now has the appropriate "contrast" column.
This tool is a wrapper around the "join" UNIX tool you already have on your system. Since it is a wrapper, all the various options, flags and optimizations of the "join" tool are supported. The reasons this wrapper exists are:
- I can refer to columns by NAME instead of number. Here I asked to join on the “path” column, not “column 1”.
- The vnlog legend is read on input, and written on output. The output is a valid vnlog
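To make the relationship concrete, here’s a join done with the underlying join tool directly (made-up files and values; note you must sort the inputs on the key and count columns by position, which is exactly the tedium vnl-join hides):

```shell
# two files keyed on the first column; join(1) needs them sorted on the key
printf 'a.jpg 3\nb.jpg 5\n'   > /tmp/det.txt  # path ndetections
printf 'a.jpg -40\nb.jpg 10\n' > /tmp/con.txt  # path contrast
join -j 1 /tmp/det.txt /tmp/con.txt
```

This prints each path followed by the fields from both files.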
Along the same lines there’re more wrappers (vnl-sort, vnl-tail, vnl-ts, ...). All of these wrappers barely need any documentation. The instructions are "do what you would do with the core tool, but give it column names".
Since we can, let’s sort the above numerically by contrast, and let’s align the columns
< joint.vnl vnl-sort -s -k contrast -n | vnl-align | tee joint2.vnl
Again, I’m not going to tell you what the flags to "vnl-sort" do: they’re normal "sort" flags that you either already know about, or can look up with "man sort".
We now have a log that contains the input contrast values and the output performance numbers, so we can see how the contrast affects performance. Does it do anything to the detection counts?
< joint2.vnl vnl-filter -p contrast,+Ndetections | \
feedgnuplot --vnl --autolegend --lines --points --domain --xlabel contrast --ymin 0 --ymax 8
Apparently it does not; the detector is fairly robust.
There’re two new features in the above command:
- "vnl-filter -p +something" is equivalent to "vnl-filter --has something -p something". And "--has something" returns ONLY the rows that aren’t "-" in the "something" column. Remember that some of my rows have detection COUNTS in them, and some have DETECTIONS; here I pick one of those.
- "feedgnuplot --vnl" parses the vnlog legend, and takes the dataset IDs from it. Thus --autolegend created the plot legend from the column names.
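The "--has" behavior can be emulated with plain awk (a sketch, NOT what vnl-filter actually generates; the records below are made up, with the id in column 2):

```shell
# rows are "Ndetections id margin"; count rows use "-" for the id column.
# keep only the rows that actually have an id
printf '2 - -\n- 14 30\n- 10 25\n' |
  awk '$2 != "-" {print $2}'
```

This drops the count row and prints the two ids.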
What, specifically, was the difference between a 6-detection case and a 7-detection case? Let’s look at the detection IDs for these two cases:
< joint.vnl vnl-filter 'contrast==-40' -p +id
< joint.vnl vnl-filter 'contrast==10' -p +id
I can eyeball this, and see that in the contrast==10 case we saw tag 10 twice, but in the contrast==-40 case we only saw it once. If we had lots of data, I wouldn’t be able to eyeball this, but a command can do the comparison for me:
comm -3 <(< joint.vnl vnl-filter 'contrast==-40' -p +id | vnl-sort -k id | vnl-uniq -c | sort) \
<(< joint.vnl vnl-filter 'contrast==10' -p +id | vnl-sort -k id | vnl-uniq -c | sort)
For each case I tallied the detection counts of each tag, and reported counts that don’t match. The only such mismatches here are
- 1 detection of tag 10 in the first data file (contrast==-40)
- 2 detections of tag 10 in the second data file (contrast==10)
Did the detector REALLY work even with the darkest, noisiest image? Let’s look at it
c=-40; < joint.vnl vnl-filter -p xc,id,yc contrast==$c | \
feedgnuplot --autolegend --with 'points pt 7 ps 3' --domain --dataid --image image${c}.jpg --square
Apparently it did.
Note the "contrast==$c" in the "vnl-filter" invocation above. The "$c" is expanded by the shell, so vnl-filter sees "contrast==-40". This is a row-filter expression: only rows for which that expression is true are returned (similar to tcpdump filters). I.e. I picked the darkest image.
Let’s look at the detection consistency. I plot ALL the detections on top of an arbitrary image.
< joint.vnl vnl-filter -p +xc,contrast,yc | \
feedgnuplot --autolegend --with 'points pt 2 ps 3' --domain --dataid --image orig.jpg --square
Here I use contrast (not the tag id) as the dataid. Looks like the detections are fairly consistent. If we need help identifying specific detections, we can plot them with labels:
< joint.vnl vnl-filter -p +xc,yc,contrast | \
feedgnuplot --autolegend --with labels --domain --tuplesizeall 3 --image orig.jpg --square
Let’s empirically quantify the spread. I pick an arbitrary tag, and plot a histogram of the detection centers, separately for x and y:
< joint.vnl vnl-filter id==14 -p +xc,yc | \
feedgnuplot --autolegend --vnl --histogram xc,yc --binwidth 0.1
I can separate the axes, and make the histograms appear next to each other, but it’s not worth the typing to do it. These histograms aren’t very interesting since we don’t have a lot of data. Let’s get the basic statistics
< joint.vnl vnl-filter id==14 -p +xc | \
ministat
< joint.vnl vnl-filter id==14 -p +yc | \
ministat
ministat is not a vnlog tool, but it works with generic data, which is what this is. The detections of tag 14 look fairly consistent.
Let’s look at the detection metrics over contrast.
< joint2.vnl vnl-filter -p contrast,id,+margin | \
feedgnuplot --autolegend --domain --lines --points --dataid
Looks like the detector knows that its results become less reliable as the contrast/noise gets extreme: the "margin" metric clearly prefers the not-too-dark and not-too-bright images.
vnl-filter is not purely a wrapper, and it has enough features of its own that they need to be discussed.
This tool
- Reads the input data up-to and including the legend line
- Constructs an awk program that performs the requested function (uses mawk by default for performance)
- execs that program
So none of the actual work is done by vnl-filter. For debugging, we can ask for the generated program. For instance:
< joint.vnl vnl-filter -p contrast,+margin 'xc > 5' --dumpexprs
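A toy sketch of that idea in plain shell (this is NOT the program vnl-filter actually generates, just the shape of it): read the legend, map a column name to awk’s $N, then run the generated filter. The column name "temp" and the data are made up:

```shell
# some vnlog data:
data='# time temp
1 15
2 25
3 35'
# 1. read the legend and find which field "temp" is
n=$(printf '%s\n' "$data" |
    awk '/^# /{sub(/^# /,""); for(i=1;i<=NF;i++) if($i=="temp") print i; exit}')
# 2. run the generated filter: the name "temp" became awk'\''s $2
printf '%s\n' "$data" | awk -v n="$n" '!/^#/ && $n > 20 {print $1}'
```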
If for whatever reason mawk isn’t good enough for us, we can use perl instead by passing --perl.
We can also use vnl-filter as a thin frontend to awk, that allows column names:
< joint.vnl vnl-filter --eval 'xc > 5 {print contrast}' --dumpexprs
It still makes sure comments (including the legend) are ignored.
"vnl-filter -p" picks columns for output. This can be a comma-separated list, or multiple -p options can be given.
First vnl-filter tries to find columns that match the requested names exactly. If that finds nothing, it falls back to a regex. For instance, to pick all the x,y coordinates in the above examples you can do "vnl-filter -p '^[xy]'". This will pick ALL of
xc yc xlb ylb xrb yrb xrt yrt xlt ylt
The tool is maximally permissive: if we actually had a column named "^[xy]", then the above command would pick THAT column instead. And if you had such a column, that’s probably what you would have intended.
If we pick columns that start with '!', we’re asking to EXCLUDE the matching columns. All the -p are processed in order, adding/removing columns as requested. If the first -p is an exclusion, we implicitly add ALL the columns first.
If we pick a column of the form 'a=xxx', then we get a column "a" in the output whose value is the awk (or perl) expression xxx. For instance I can say "vnl-filter -p 'a=(b+c+d)/e'". The expression string is passed down to the core language verbatim, after replacing all the field names.
vnl-filter buffers the output by default. To enable streaming, pass "--unbuffered".