Curly brace expansion is one of the most useful and overlooked topics in these kinds of posts. And I'm always surprised it isn't mentioned alongside compiling.
Worth noting that for the particular case you've cited, you can just use make, even without a makefile:
~ cat test.cpp
#include <iostream>
using namespace std;
int main()
{
    cout << "hi\n";
    return 0;
}
~ stat Makefile
stat: cannot stat 'Makefile': No such file or directory
~ make test
g++ test.cpp -o test
~ ./test
hi
'sudo !!' is ingrained in muscle memory. The other, even more useful one is !$, which gives the last word of the previous line. It's probably the terminal feature I use most.
Well, I tend to assume that I'm not the only one who continually forgets to add 'sudo' in front of 'apt-get', or similarly that journalctl usually requires elevated privileges, or that I don't own the files I'm trying to chmod. I suspect that people who do recall such things perfectly are rare. As to muscle memory, well, probably my assumptions there are off. I've been using Linux exclusively both professionally and at home for more than ten years now, and I have a bad habit of coding at odd hours of the night. Other people may not have quite the same exposure or error rate.
I certainly wouldn't advocate blindly retrying; I only use 'sudo !!' when my shell tells me to. That unfortunately is pretty often :(
!$ is not merely the last word, but the last argument - particularly handy when it's a long path. Also love the way zsh expands these with a single tab press - great for clarity or further editing.
That's technically incorrect. !$ is a shortcut for !!:$, which is the last word of the previous command in history. A filename is treated as a single word, as you say. This is, however, distinct from the last argument, which is accessible with $_. As an example:
$ echo 'example' > file.txt
Here !$ would return the filename, and $_ would return 'example', which was the last argument to echo.
Disappointingly, this list treats "yes" like a toy that just prints things over and over, and doesn't mention actually useful uses for "yes", like accepting all defaults without having to press Enter repeatedly.
Practical example: when you are doing "make oldconfig" on the kernel, and you don't care about all those questions:
yes "" | make oldconfig
Or, if you prefer answering no instead:
yes "n" | yourcommand
Also, the author refers to watch as a "supervisor" ("supervise command" - his words). That is bad terminology. Process supervision has well defined meaning in this context, and watch isn't even close to doing it.
Examples of actual supervisors are supervisord, runit, monit, s6, and of course systemd (which also does service management and, er, too much other stuff, honestly).
Some handy tips there but I would recommend changing some of the `find` examples:
find . -exec echo {} \; # One file by line
You don't need to execute echo to do that as find will output by default anyway. There is also a `-print` flag if you wish to force `find` to output.
find . -exec echo {} \+ # All in the same line
This I think is a dangerous example, because any file name containing spaces will look like two files rather than one delimited by a space.
Lastly, in both the above examples you're returning files and directories rather than just files. If you wanted to exclude directories then use the `-type f` flag to specify files:
find . -type f ...
(equally you could specify only directories with `-type d`)
Other `find` tips I've found useful that might be worth adding:
* You don't need to specify the source directory in GNU find (you do on FreeBSD et al) so if you're lazy then `find` will default to your working directory:
find -type f -name "*.txt"
* You can do case insensitive named matches with `-iname` (this is also GNU specific):
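The example that followed here seems to have been lost; an illustrative one (file names are just made up for the demo):

```shell
# Create two files differing only in case, then match both
# with a case-insensitive name pattern
mkdir -p /tmp/inamedemo && cd /tmp/inamedemo
touch report.txt SUMMARY.TXT
find . -type f -iname "*.txt"   # matches both files
```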
For some reason I recalled -iname failing on FreeBSD, but I've just logged onto some dev boxes (not that I didn't trust your post!) and it seems you're right, as the option is there in the man pages.
Apologies for this. It just goes to show how fallible the human memory is. :-/
One of the cruelest things you can do is create a filename that consists only of a combining diacritic (without a glyph for it to combine with). It will break the output of various programs (starting with ls) in sometimes hilarious ways.
If you're trying it out now and cannot figure out how to delete it: "ls -li" to find the file's inode number, then `find -inum $INODE_NUMBER -delete`.
Wow, that's really horrible. I have a file sitting around with a couple of newlines in the name just so I can see how many programs don't cope with it, but I hadn't thought of using a lone combining diacritic.
If anyone wants a command to make one, try
touch $'\U035F'
(using U+035F COMBINING DOUBLE MACRON BELOW for no particular reason, see [1] for more)
Indeed. This is one of the reasons why I wrote a shell that handles file names as JSON strings.
However, for normal day-to-day usage, file names with \n are rare while files with spaces in their names are common. So returning a list of space-delimited file names is dangerous in common scenarios, whereas find's default behaviour is only dangerous for weird and uncommon edge cases. (And if you think those are a likely issue then you probably shouldn't be doing your file handling inside a POSIX shell in the first place.)
Not criticising the author here, as grep -P is good, but you might also not know that you can enter tabs and other control characters in bash by pressing Ctrl-V first. So you could also type:
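The original example appears to have been lost; here's an illustrative sketch. At an interactive prompt you would type Ctrl-V then Tab to insert a literal tab into the pattern; in a script, bash's $'\t' quoting produces the same character:

```shell
# Make a demo file with one tab-containing line and one without
printf 'name\tvalue\nplain line\n' > demo.txt
# Interactively you'd type: grep '<Ctrl-V><Tab>' demo.txt
# The scriptable equivalent:
grep $'\t' demo.txt    # matches only the line containing a tab
```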
And Ctrl-V does not work in GVim on Windows, but Ctrl-Q does instead. E.g. if you have a text file with mixed Unix and DOS/Windows line endings, like Ctrl-J (line feed == LF == ASCII 10) and Ctrl-M (carriage return == CR == ASCII 13) + Ctrl-J, you can search for the Ctrl-Ms with /Ctrl-Q[Enter] and for tabs with /Ctrl-Q[Tab]. When you type the Ctrl-Q, it does not show as Ctrl-Q, but as a caret (^). And you can also use the same quoted Ctrl-M (Enter) pattern in a search and replace operation to remove them globally, by replacing them with a null pattern:
:%s/Ctrl-Q[Enter]//g[RealEnter]
or replace each tab with 4 spaces:
:%s/Ctrl-Q[Tab]/[4 spaces]/g[RealEnter]
where [RealEnter] means an unquoted Enter.
This is mainly useful for making such changes in a single file (being edited) at a time. For batch operations of this kind, there are many other ways to do it: dos2unix (and unix2dos), which come built in on many Unixen; tr; maybe sed and awk (need to check); or a simple custom command-line utility like dos2unix, which can easily be written in C, Perl, Python, Ruby or any other language that supports command-line conventions (arguments, reading from standard input or from filenames given as arguments, pipes, etc.).
Funnily enough, I was saying this to a work colleague yesterday when I was looking at a shell script and scratching my head, wondering how it was working when a variable named $RANDOM was never assigned a value. After a little investigation it turned out Bash (and possibly other shells?) returns a random number with each expansion of the $RANDOM variable. Handy little trick.
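A quick sketch of the behaviour (bash-specific; $RANDOM is not POSIX sh):

```shell
# Each expansion of $RANDOM yields a fresh value between 0 and 32767
echo $RANDOM $RANDOM        # almost certainly two different numbers
echo $(( RANDOM % 6 + 1 ))  # e.g. simulate a die roll
```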
There is also the built-in variable $SECONDS, which counts the seconds since the bash instance started. Makes it really easy to print 'time elapsed' at the end of a bash script.
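For example (a minimal sketch; resetting SECONDS at the top makes it time the script rather than the whole shell session):

```shell
#!/usr/bin/env bash
# SECONDS counts seconds since this bash instance started;
# assigning to it resets the counter.
SECONDS=0
sleep 2                       # stand-in for the real work
echo "Elapsed: ${SECONDS}s"
```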
# will give 2 digits after the decimal point, unlike bc -l, which gives more digits of precision after the decimal point than we usually want for this particular calculation.
Interesting. I suppose you had to type Ctrl-D or Ctrl-Z to signal no input to awk, since it expects some in the above command? Never tried running awk without either a filename as input or stdin coming from a file or a pipe. Will check it and see what happens.
Indeed, I was about to say this. I clicked thinking "No way I'll learn something here". Turns out there are 2 or 3 commands that I didn't know and might prove useful!
> 6. List contents of tar.gz and extract only one file
> tar -ztvf file.tgz
> tar -zxvf file.tgz filename
Tar no longer requires the compression modifier when extracting an archive. So whether it's a .tar.gz, .tar.Z, or .tar.bz2, you can just use "tar xvf".
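For instance (archive names here are just illustrative):

```shell
# Modern tar inspects the file and picks the decompressor itself:
tar tvf archive.tar.gz    # list contents, no -z needed
tar xvf archive.tar.bz2   # extract, no -j needed
```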
As long as we are on the topic of Linux shell commands I would like to share a tip which has helped me, whenever you are using wildcards with rm you can first test them with ls and see the files that will be deleted and then replace the ls with rm. This along with -i and -I flags makes rm just a tad bit less scary for me. Kinda basic but hopefully somebody finds it helpful :)
As mbrock says, you can also use echo instead of ls for that. And "echo *" serves, in a pinch, as a rudimentary ls, when the ls command has been deleted and you are trying to do some recovery of a Unix system. Similarly dd can be used to cat a file if cat is deleted: dd < file
And in fact whenever you want to do some command like:
cmd some_flags_and_args_and_metacharacters ...
you can just replace cmd with echo first to see what that full command will expand to, before you actually run it, so you know you will get the result you want (or not). This works because the $foo and its many variants and other metacharacters are all expanded by the shell, not by the individual commands. However watch out for > file and >> file which will overwrite or append to the given file with the output of the echo command.
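A small sketch of the echo-first habit (BUILD_DIR and the .o files are just illustrative):

```shell
mkdir -p ./build && touch ./build/a.o ./build/b.o
BUILD_DIR=./build
# Prefix the destructive command with echo to preview the expansion:
echo rm -rf "$BUILD_DIR"/*.o
# The variable and glob are expanded by the shell before echo runs,
# so the printed line is exactly what rm would have received.
```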
Was not really an issue for me personally, since it was not on my machine. Used to encounter such situations with some regularity when I was a Unix (and general) system engineer in the field, for a large hardware + Unix vendor, early in my career. The job involved solving all sorts of software problems (sometimes involving many levels of the stack) for customers of my employer. Sometimes, less technically savvy customers, like data entry operators or end users of the Unix systems sold by that vendor to them, would end up doing things like that - deleting critical files, losing backups or not taking backups and then the hard disk or OS crashed, corrupting the data on disk, etc. Got to handle many and diverse interesting problems in this area, and learned a lot from it; some of those skills have served me in good stead later in my career too - troubleshooting, Unix fundamentals and skills, interaction of apps with the OS, etc.
As sethrin says, you don't have to acquire the scars personally. And thanks, sethrin, for those links - should make for some good reading. I'm interested in this area though I have not worked on it for some time. It can be good fun and a mental exercise in problem solving.
There are a number of accounts of that sort of situation floating around the Internet; it's not necessary to acquire the scars personally. As to how we get into those situations, I suspect that the universal path would be an erroneous 'rm -rf' command affecting the /bin directory.
and this will rename all of my files that start with flight. to flight-new., preserving the file extension. So useful when you've got a bunch of different files with the same name but different extensions, such as html, txt and sms.
Check out qmv from renameutils.[1] Using it you can use your $EDITOR to batch-rename files. It does sanity checking like making sure you don't rename two files to the same name and lets you fix such mistakes before trying again.
I guess I'm just overly sensitive (and maybe English is not the poster's first language), but I cringe at "the Linux shell"... I have used ksh, bash, (a little) csh, and zsh over the years, and love the architecture that makes "the shell" just another non-special binary that's not unique to the OS.
And yes, a lot of the things mentioned work in a lot of shells, but some don't, or act differently.
This is more correct than the previous poster's approach, because it will work with files that contain space characters, while the previous poster's version breaks in this situation.
Another approach that works is
find . -type f -print0 | xargs -0 chmod -x
although find's built-in -exec can be easier to use than xargs for constructing some kinds of command lines.
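For comparison, the same operation with -exec and the + terminator, which batches arguments much like xargs does:

```shell
# Remove the execute bit from regular files only;
# {} + appends as many file names per chmod invocation as fit
find . -type f -exec chmod -x {} +
```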
Nice! Point 23 is a bit misleading though: comm only works on sorted input files. Also, "disown -h" in Point 21 works on bash but not on zsh. Also, in Point 22, "timeout" only works if the command does not trap SIGTERM (or you have to use -k).
> If a program eats too much memory, the swap can get filled with the rest of the memory and when you go back to normal, everything is slow. Just restart the swap partition to fix it: sudo swapoff -a; sudo swapon -a
Is this true? Not impossible, but I am surprised. If true, what is this fixing? In my naive view (never studied swap management), if at the current time a page is swapped out (and by now we have more memory -- we can kill swap completely and do fine), it should get swapped in when needed next time. As there is more memory now it should not, in general, be swapped out again.
If true we are exchanging a number of short delays later for a longer delay now, which to me hardly looks like a win.
If you run a very memory-hungry program (like some specific scientific program), it can happen that 3GB of your memory from the browser, text editor, etc. is moved to swap. Then, when you kill the program and want to go back to normal, just changing the tab in your browser can take a long time.
By flushing the swap, you wait some time first but then it all runs smoothly. When using a hard drive and not SSD the difference is even bigger.
I've hit the same kind of problem before. Big task runs & eats all the memory. Later on, a login to the machine or existing shells are horribly slow until they get swapped back in.
However, swapoff/swapon only solves part of the problem - you still have binaries, libraries and other file-backed memory that were thrown out under the memory pressure and they won't be reloaded with the swapoff/swapon. Does anyone know how to force these kinds of things to be re-loaded before they are needed again?
It is useful in the case that you want a predictable and expected delay immediately, rather than unpredictable delays for an unknown length of time to come.
I get that part. I was wondering if by doing it all at once you somehow gain significantly over doing it normally. I doubt it, but prepared to learn otherwise. If it is much quicker to swap everything in maybe it is worthwhile to expose this functionality directly instead of doing hacks like swap off/on.
It depends on when you next need to use the system. If it is immediately after the memory hogging code exits, there's probably not much of a win.
But if you run the memory hogging program, then go to lunch, if the swapoff/swapon is triggered before you get back, you will be avoiding the delays entirely.
In my experience, it makes a big difference. My swap is on a hard drive, which can handle the sequential reading of the swap quite quickly. Whereas the random accessing of it in normal use is much slower per byte.
If you have a memory-constrained system, yes. There's a surprising amount of stuff that sits untouched in RAM for very long periods of time. I generally run fairly low-end systems, so it's not unusual for me to have a few GB in swap. (Though I just found out last week that I could buy another 4GB for $26 and free shipping, and I have to admit that was a worthwhile investment.)
Yes, if you have 8GB of RAM which is enough for me 99% of the time, but you'd rather your computer slow down 1% of the time rather than crash and have to start over.
But is hibernate relevant? I've found suspend to be much more reliable, and good enough in terms of energy consumption (thanks to modern low-power states in CPUs).
I find it relevant. I use it to store the state my work laptop is in at the end of the day, and then restore that state when I get back to work the next work day. That way I don't need to keep it switched on when I'm not using it/transporting it.
Is there a better solution to this other than hibernating?
> (sort -R sorts by hash, which is not really randomisation.)
I looked at the source code for GNU sort and what they're doing is reading 16 bytes from the system CSPRNG and then initializing an MD5 digest object with those 16 bytes of input. Then the input lines are sorted according to the hash of each line with the 16 bytes prepended.
Although they should no longer use MD5 for this, I don't think we know anything about the structure of MD5 that would even allow an adversary to have any advantage above chance in distinguishing between output created this way and an output created via a different randomization method. (Edit: or distinguishing between the distribution of output created this way and the distribution of output created via another method!)
The output of sort -R is different on each run and ordinarily covers the whole range of possible permutations.
$ for i in $(seq 10000); do seq 6 | sort -R | sha256sum; done | sort -u | wc -l
720
I know whatever < file.txt is slightly more efficient, but there is value in keeping things going left to right with pipes in between. It makes it easy to insert a grep or sort, or swap out a part.
The shell (not the command) is the one expanding those metacharacters, so (within limits), in:
cmd < file
or
< file cmd
where you put that piece (< file) on the line does not matter, because the actual command cmd never sees the redirection at all - it is done [1] by the time the command starts. The cmd command will only see the command line arguments (other than redirections) and options (arguments that start with a dash). Even expansion of $ and other sigils (unless quoted) happens before the command starts.
[1] I.e. the shell and kernel set up the standard input of the command to be connected to the file opened for reading (in the case of < file).
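A quick demonstration (notes.txt is an illustrative file name):

```shell
printf 'one\ntwo\n' > notes.txt
# These are equivalent: the shell wires up stdin before wc ever runs,
# so the position of the redirection on the line doesn't matter.
wc -l < notes.txt
< notes.txt wc -l
```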
Is it possible to register new file formats in ripgrep though (say for example slim files so that I could do rg -tslim PATTERN)? Last time I checked it was somehow possible but way too complicated for so common a task.
You'll definitely want to check `free -m` before calling `swapoff`, to make sure you really have enough memory for everything in swap, lest you invoke the wrath of the OOM killer.
Not a command as such, but I recommend skimming the readline man page[0] and trying some of the bindings out to see if you're missing out on anything. I went years without knowing about ctrl+R (search command history).
Not really a single command, it's a one-liner - a pipeline, but might be interesting, not only for the one-liner but for the comments about Unix processes that ensued on my blog:
It's not nearly as flexible as bc overall, but GNU units has lots more constants built-in and also trig functions.
I checked and it has all of the ones that you mentioned, sometimes under slightly different names. I was surprised that e is defined as the elementary charge rather than Euler's number!
That might be useful if you want floating point math, but most shell scripts can just use the integer math in shell:
$ x=10
$ echo $((x - 1))
9
Though if I need to do a floating point calculation at the shell, I start python or R, which both have their own interactive shells (with the same readline interface, which I like).
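For a one-off calculation that doesn't warrant a full interactive session, a python one-liner also works:

```shell
# Quick floating point without leaving the shell
python3 -c 'print(10 / 3)'
```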
Don't read it like this:
rename <expression> <replacement file> ...
Read it like this:
rename <expression> <replacement> <file(s)...>
Argument 1 is a string (not a regex), argument 2 is the string that replaces it, and arguments 3 onward are a file, a list of files, or a glob.
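Since this rename isn't installed everywhere, here's a rough equivalent with a plain shell loop, reusing the flight. example from upthread (file names are illustrative):

```shell
touch flight.html flight.txt
# Equivalent in spirit to: rename flight. flight-new. flight.*
# -i prompts before clobbering, which rename itself doesn't offer
for f in flight.*; do
  mv -i -- "$f" "flight-new.${f#flight.}"
done
```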
The man page includes a warning about the lack of safeguards. It is unfortunate that rename doesn't have an equivalent of mv or cp's -i flag because it's so easy to overwrite files if your expressions aren't exactly correct.
On some Ubuntu systems, I've seen another version of rename that's actually a Perl script that uses regular expressions. I think that version lets you set some safeguards, which are sorely lacking in the binary version of rename that ships with CentOS.
curly brace substitution:
caret substitution:
global substitution (and history shortcut):
I have a (WIP) ebook with more such tricks on GitHub if anyone is interested: https://tenebrousedge.github.io/shell_guide/
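The examples for these seem to have been dropped; illustrative sketches of each (these are standard bash features, file names made up):

```shell
# Brace expansion: one token expands to several
touch notes.txt
mv notes.{txt,bak}        # becomes: mv notes.txt notes.bak
# Caret substitution (interactive only): rerun the last command
# with the first occurrence of 'old' replaced:   ^old^new
# Global variant via history expansion:          !!:gs/old/new/
```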