I'm glad you mention that because I forgot something: I try and stick with only stuff I can find on most computers in order to minimize dependencies. That's a big advantage imo of learning how to use awk and tr for basic data analysis, for example. You can ssh into any server and work without installing anything.
I don't like this approach. One is being peddled a bucket of old shit that might have been okay in the '80s for portability reasons. That portability is usually not necessary for my one-time tasks anyway. Today there are really cool, small tools that do everything with comfort, and if you want some kind of portability you can still put statically linked tools on the given system quickly. But of course that is a personal opinion and depends on the situation.
Really depends on your use case though. If you're always going to be working on your own machine, of course customize it to make yourself maximally productive. But if you're regularly going into new situations, it's good to know how to be effective with tools you can count on being there.
My (admittedly very lame) analogy: you're hiring a ninja assassin and have two candidates. One tells you about all the special swords and staffs and smoke bombs he carries; the other says, "I just need my hands." Who do you hire?
I have a simple thought about this. If your work involves many machines, you usually have enough permissions to transfer your programs. I think that the benefits of a pleasant way of working outweigh the effort.
And by the way, I would prefer the first ninja if the second one is handless.
I'm surprised by some of the responses you've received on this.
But I absolutely know what you are talking about. I often SSH into new machines that I may not have root access on, that may not even have internet access, or that may not even be running a distro with a package manager (e.g. some switch running a custom distribution).
In those situations it is a huge advantage to know the tool, rather than try to do some gymnastics to get your tools onto the box or the data off the box.
These "old" and "broken" tools are installed as defaults on all these systems for a reason.
I agree 99%, but sometimes a tool is such a huge productivity boost that it's worth installing. fzf is one of those tools for me. If I had to deal with CSV files a lot, then xsv would be worthwhile.
Here is a script that I use, built mainly around the Toybox version of awk, that will extract all of the WiFi passwords stored on an Android device.
This is enormously portable, and does not require any new software installations for Android users who have root.
Requiring xsv would reduce availability to a fraction of where it can be run now.
#!/bin/sh
# Find every WifiConfigStore.xml under /data and print the
# SSID / pre-shared-key pairs, undoing the XML escaping by hand.
find /data \
    -name WifiConfigStore.xml \
    -print0 |
    xargs -0 awk '
        /"SSID/     { s = 1 }   # this line carries an SSID
        /PreShared/ { p = 1 }   # this line carries a pre-shared key
        s || p {
            gsub(/[<][^>]+[>]/, "")   # strip the XML tags
            sub(/^[&]quot;/, "")      # drop the enclosing escaped quotes
            sub(/[&]quot;$/, "")
            gsub(/[&]quot;/, "\"")    # unescape the common entities
            gsub(/[&]amp;/, "\\&")
            gsub(/[&]lt;/, "<")
            gsub(/[&]gt;/, ">")
        }
        s { s = 0; printf "%-32.32s ", $0 }   # SSID in a padded column
        p { p = 0; print }                    # passphrase ends the line
    ' | sort -f
Note that the -print0 null handling is not POSIX. I consider that a reasonable compromise on standards compliance, as it does not shrink the base of users who can run this.
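If strict POSIX compliance matters, find's standard -exec ... {} + form can stand in for the -print0 | xargs -0 pair, since it batches filenames the same way and handles spaces safely. A self-contained sketch under those assumptions (the sample XML and temp directory below are made up for illustration; on a device you would point find at /data):

```shell
#!/bin/sh
# POSIX-portable variant: "-exec utility {} +" batches filenames
# like xargs, without the non-POSIX -print0/-0 null separators.
# A throwaway sample file stands in for a real WifiConfigStore.xml.
tmp=$(mktemp -d)
printf '%s\n' \
    '<string name="SSID">&quot;HomeNet&quot;</string>' \
    '<string name="PreSharedKey">&quot;hunter&amp;2&quot;</string>' \
    > "$tmp/WifiConfigStore.xml"

result=$(find "$tmp" -name WifiConfigStore.xml -exec awk '
    /"SSID/     { s = 1 }
    /PreShared/ { p = 1 }
    s || p {
        gsub(/[<][^>]+[>]/, "")
        sub(/^[&]quot;/, "")
        sub(/[&]quot;$/, "")
        gsub(/[&]quot;/, "\"")
        gsub(/[&]amp;/, "\\&")
        gsub(/[&]lt;/, "<")
        gsub(/[&]gt;/, ">")
    }
    s { s = 0; printf "%-32.32s ", $0 }
    p { p = 0; print }
' {} + | sort -f)

printf '%s\n' "$result"
rm -rf "$tmp"
```

On the sample input this prints the SSID in its padded column followed by the unescaped key (HomeNet ... hunter&2).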
I did try to do this first with arrays, but awk segfaulted.
That is quite a nifty implementation of reverse HTML escaping. But in Python that could be done with much less work:
import html
print(html.unescape(foo))  # foo holds the escaped string
And the best part: you don't need to debug/update the (g)sub list every time you stumble upon a new weird &whatever;. And there are a lot of those out there.
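To make that point concrete, a small runnable sketch (the input strings are made up for illustration): html.unescape knows the full HTML5 named-entity table as well as decimal and hex numeric references, so none of them need a hand-maintained substitution list.

```python
import html

# Named, decimal, and hex entities are all handled by the same call;
# no hand-maintained gsub list required.
print(html.unescape("&quot;Caf&eacute;&quot;"))  # -> "Café"
print(html.unescape("&#72;&#x69;"))              # -> Hi
```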
XSV is a tool for exploring and manipulating X-separated value files (CSV, TSV, …), which is why I mentioned it in reply to a comment which talks about the exploration of CSV files, and furthermore specifically mentions that you should install xsv.