The term control structure does not quite capture the meaning, and neither does control flow. I am talking about the building blocks of any imperative programming language:
These are the processes, decisions, repetitions and delegations of our everyday life, determined by the way we think and by how successful our actions are. Here I would like to summarize how these things can be done with a UNIX shell script.
The UNIX shell has been around for a long time now, and it won't die out. It is a great helper for automating things on UNIX platforms, and with the help of open-source projects like CYGWIN it is also available to WINDOWS users. There has been the Bourne Shell (Stephen Bourne), the C Shell (William Joy), the Korn Shell (David Korn), and a number of derivatives like ash, zsh, ..., specialized for different environments. The latest and most widely supported shell is bash (the "Bourne-again shell" :-), distributed with LINUX. Their compatibilities differ; I refer to the Bourne Shell here, because it was born first :-) and thus should be compatible with all the others (except the C Shell, which is a special case).
Mind that the term "shell" does not include all those useful applications like grep, sed, ex, find, awk, ... that are available on UNIX platforms. The shell was made to bind them together for automating different tasks.
The simplest form of a computer program is a sequence of statements:
cd $HOME
mkdir png
cd png
cp ../images/*.png .
rm _*.png
This script changes into your personal home directory, creates a directory named "png", goes into it, copies all png files from $HOME/images to there, then removes all files that start with an underscore.
A shell script is a collection of commands like the ones you enter on a terminal screen. There were times when everything was done that way, and you could automate quickly just by collecting your daily commands into a script and letting the script do the work. Try this with our graphical tools today :-!
Sometimes the order of statements matters, sometimes it could be changed. In the above example, the mkdir png command is a precondition for the following cd png command. But in the following script no command depends on its predecessor:
echo "Logged in at `date +'%Y-%m-%e %H:%M'`" >>logfile
rm -rf WASTEBASKET/*
mount /windisk
ls -la
This appends date and time to the file "logfile", cleans the wastebasket, mounts a drive and then lists the current directory. We could do these statements in any order.
What I want to make clear is that programming languages have no means to distinguish between these two types of sequences (statement order significant or not), which is an underestimated cause of bugs. When seeing a sequence of statements, you never know whether the programmer ordered them intentionally or not. It is best to write explicit conditions instead of bare sequences when order is significant.
A pipe is a special way to chain a sequence of commands. The output (stdout) of the leftmost command is fed into the input (stdin) of the next command, which again might produce output for the next command, and so on. That way a lot of applications can be used to process a stream of data.
ls -1 *.png | grep "edit" | sed 's/\.png$/.old.png/'
This code lists all .png files in the current directory (every file name on its own line, without additional information), then keeps only the names that contain "edit" (grep = globally search a regular expression and print), and then replaces the trailing ".png" by ".old.png" (sed = stream editor).
The UNIX shell finds out whether a command was successful or not by checking its exit code. The $? variable always holds the exit code of the most recently executed command.
$ test -n hello
$ echo $?
0
This uses the test command to check whether "hello" is a non-empty (-n) string. Then it outputs the exit code of the latest command.
An exit code of 0 (zero) indicates success; anything else is to be interpreted as an error number.
(Which is exactly the opposite of most other programming languages, where if (0) would be false!)
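For contrast, the same test with an empty string fails and yields a non-zero exit code:
$ test -n ""
$ echo $?
1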
So when we write sequences of commands that depend on each other, we can make this explicit by testing whether the previous command succeeded or not.
if mkdir png
then
    cp images/*.png png
else
    echo "Could not create directory png!"
fi
This looks much more robust, right? The "if" is a reserved shell keyword. The command after it gets executed, its exit code is queried, and the sequence after "then" is executed when the exit code indicated success (0). If not, the sequence in the "else" section is executed. Any such condition must be closed by "fi". The "else" is optional. There is also an "elif" for chaining conditions:
if test "$1" = "me"
then
echo "First parameter was me"
elif test "$1" = "you"
then
echo "First parameter was you"
else
echo "First parameter was >$1<"
fi
This condition chain uses the test command to check the first script (or function) parameter $1 for the values "me" and "you", and outputs it by calling the echo command.
The test command is the one most used with if-conditions, because it can do a lot of different things. And it can be written in a way that gives the feeling of a real programming language.
The following two conditions do exactly the same, but the second uses the built-in "shortcut" for the test command, which materializes as [ condition ]:
if test 2 -gt 1; then echo "test can compare numbers!"; fi
if [ 2 -gt 1 ]; then echo "test can be written as brackets!"; fi
Mind how the semicolon ; can replace newlines. This example shows that the test command can also compare numbers.
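The test command offers much more than comparisons; for example, file tests: -f checks for a regular file, -d for a directory. A small sketch:
if [ -d "$HOME/bin" ]; then echo "You have a bin directory"; fi
if [ -f "$HOME/.profile" ]; then echo "You have a .profile"; fi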
You can combine several conditions within the test command using the options -a (AND) or -o (OR). The following script produces output when the variable count is not empty, greater than 0 (-gt), and less than or equal to 10 (-le).
count=4
if [ -n "$count" -a "$count" -gt 0 -a "$count" -le 10 ]
then
    echo "Not empty and between 1 and 10: $count"
fi
You can also use the operators && (AND) or || (OR) outside the test command to build composite conditions:
count=-1
if test -n "$count" && ( test "$count" -lt 0 || test "$count" -gt 10 )
then
    echo "Not empty and less than 0 or greater than 10: $count"
fi
This script uses ( parentheses ) to group conditions together. Mind that in the shell, unlike in C, && does not bind stronger than ||; both have equal precedence and are evaluated left to right, so without the parentheses the grouping would be wrong.
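A minimal demonstration of this left-to-right grouping, using the standard false command:
# (false && echo "A") fails as a whole, so the || runs echo "B": this prints "B"
false && echo "A" || echo "B"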
As you might suspect, using the brackets and parentheses together can result in quite cryptic expressions, so try to stay "human readable":-)
The following kind of condition I call "one-armed". It has no "else" or "elif".
pwd || echo "Print Working Directory failed!"
The || demands that the part to the right is executed only when the left part failed. Reads as "print working directory or echo error". So if the pwd command failed (which is not very likely :-), this outputs an error message.
grep "you" logfile && echo "Found you in logfile!"
As expected, the &&
demands that the part to the right is executed only when the left condition succeeded. Reads as "when finding 'you' in logfile, echo success".
The one-armed condition is not restricted to just one follower command. You can enclose any number of follower commands into braces:
grep "$pattern" logfile >/dev/null && {
echo "Found $pattern in logfile!"
exit 3
}
This checks whether a pattern (given as a shell variable) is to be found in logfile. The output redirection >/dev/null causes the output of the grep command to be discarded. When the pattern was found, the block within the braces gets executed, and the script exits with code 3.
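Most modern grep implementations also offer the -q (quiet) option, which suppresses the output without a redirection; the same check could be sketched as:
grep -q "$pattern" logfile && {
    echo "Found $pattern in logfile!"
    exit 3
}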
What is switch-case in most programming languages is a pattern-matching case keyword in the shell.
case "$parameter" in
no\ version)
echo "Not a version: $parameter"
;;
*[Dd]ocumentation* | [Ss]pecification* )
echo "Is documentation or specification: $parameter"
;;
*)
echo "Unknown: $parameter"
;;
esac
The case gives you the opportunity to test a variable or command output against several patterns. Mind that these are shell wildcard patterns, a reduced form of matching, not full regular expressions. The above case first tests for the value "no version", then for text containing "documentation" or starting with "specification" (whereby the first letter may be capitalized), and outputs "Unknown" in any other case.
To find some information, we have to look at a lot of records and compare them to some criteria. Maybe we want to print any element of a collection. We might want to wait until a condition gets true. There are a lot of use cases for a loop.
The shell has no C-like for (int i = 0; i < limit; i++) loop. Instead you have a for-in loop.
todos="
a
b
c"
for todo in $todos
do
echo $todo
done
The shell splits the $todos variable at whitespace (spaces, tabs or newlines), and the loop outputs every word of it.
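You can also loop directly over a wildcard pattern; the shell expands it to the matching file names:
# echo every .png file in the current directory
for f in *.png
do
    echo "Found: $f"
done
# mind: when nothing matches, the loop runs once with the literal "*.png"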
To have a loop count you would need to write:
count=0
for todo in x y z
do
    echo "$count: $todo"
    count=`expr \$count + 1`
done
The basic shell can not do arithmetic, but the expr command can. The above script defines a count variable and uses expr in a so-called "command substitution" to increment it on every loop pass (see the chapter about procedure calls).
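Modern POSIX shells (including bash) also have a built-in arithmetic expansion $(( )), which avoids spawning an expr process; the increment could then be sketched as:
count=0
count=$((count + 1))  # built-in arithmetic, no external command needed
echo $count           # outputs 1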
A conditional loop is a merge of loop and condition. The while loop repeats as long as its condition is true.
while read line
do
    echo $line
done <.profile
This loop is fed by lines from the file .profile that normally resides in the $HOME directory. It outputs every line from that file. Mind that the input redirection has to be written after the loop end, not after the read command in the first line.
while read input && [ "$input" != "OVER" ]
do
echo $input
done
This script reads from stdin and will output any line you type on the keyboard. You can terminate the loop by typing OVER, which makes the second condition false. Or you can type Ctrl-d, the UNIX end-of-input key code, which makes read exit with a code different from 0 and thus makes the first condition false.
For your pleasure there is also an until loop:
count=0
until [ "$count" -gt 10 ]
do
    count=`expr \$count + 1`
    echo $count
done
Now that we can write sequences, conditions and loops, we can write a computer program. But we want to reuse other programs in it, and maybe we want to nest our program into an even bigger program.
When our ready-made shell script is in some directory that is in our execution PATH (echo $PATH), any other script is able to call our script. A convenient way to do this is to have a $HOME/bin directory where all the scripts are, and to have that in $PATH (can be done in .profile or .bashrc). Do not forget to make the script executable: chmod 754 scriptfile.sh.
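A typical entry in .profile for this could look like the following sketch:
# prepend the private bin directory to the command search path
PATH=$HOME/bin:$PATH
export PATH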
What we need to know is how scripts can receive the information they need to do their work (parameters).
if [ -z "$1" -o -z "$2" -o -z "$3" ]
then
echo "Not enough parameters"
exit 1
fi
echo "Starting to work ..."
This script checks whether its command line arguments are non-empty and whether there are three of them. If fewer than three parameters were passed, or when one of them is empty, the script terminates with exit code 1. Store that script in a file checkarg.sh, call chmod 754 checkarg.sh, and then test it:
$ ./checkarg.sh
Not enough parameters
$ ./checkarg.sh 1
Not enough parameters
$ ./checkarg.sh 1 2
Not enough parameters
$ ./checkarg.sh 1 2 3
Starting to work ...
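An alternative sketch would check $#, the number of passed parameters, instead of testing each one; mind that it would accept empty strings as parameters, which the -z variant above does not:
if [ $# -lt 3 ]
then
    echo "Not enough parameters"
    exit 1
fi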
The second way to receive information is via the environment. The calling script can do the following:
JAVA_HOME=$HOME/jdk18
export JAVA_HOME
The difference between shell variables and environment variables is that environment variables are already exported, while shell variables need to be explicitly exported to be present in a called script.
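A small sketch demonstrating the difference; the child shell sees only the exported variable:
localvar=hello     # plain shell variable, invisible to child processes
EXPORTED=world
export EXPORTED    # now visible to child processes
sh -c 'echo "local: >$localvar<, exported: >$EXPORTED<"'
# outputs: local: ><, exported: >world<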
The called script that needs a pre-set JAVA_HOME environment variable should do the following:
[ -z "$JAVA_HOME" ] && {
echo "Need JAVA_HOME to work!"
exit 1
}
On a UNIX shell you can list the environment using printenv or env.
A way to structure script code is to use functions. The word function is a reserved shell keyword. A function is an "in-file" script, called in the same way as an external script or command. Functions can have parameters, but they do not declare them in a parameter list. There are three different ways to define functions:
function foo {
echo "foo: $1"
}
bar() {
echo "bar: $1"
}
function foobar () {
echo "foobar: $1 $2"
}
foo Hello
bar World
foobar Hello World
Functions receive their parameters like a script, in $1 - $9, thus they can not see their enclosing script's top-level parameters.
The above script outputs:
foo: Hello
bar: World
foobar: Hello World
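If a function needs the enclosing script's parameters, you must pass them on explicitly; a small sketch (showfirst is just an example name):
function showfirst {
    echo "Function received: >$1<"
}

# pass the script's first parameter on to the function
showfirst "$1"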
You can write "library scripts" that contain just functions and no main section. When you want to use these functions in some script, you can read the functions using the source
command, better known as .
command.
Store the foo(), bar() and foobar() functions above in a script called foobar.sh (remove their calls at the bottom). Then create a script called libtest.sh where you write the following:
source foobar.sh # kind of import
foo Hello
bar World
foobar Hello World
The foobar.sh library must be in PATH, or in the same directory as the calling script.
Command substitution is a smart procedure-call mechanism to quickly receive and use the output of some command or pipe.
oldDir=`pwd`
doSomeUncalculableWork
cd $oldDir # change back to the directory we were in before
The command enclosed in `backquotes` will be executed by a subshell, and its output pasted to where the backquotes have been. In modern shells, the backquotes can be replaced by $(commandOrPipe), so it would be $(pwd).
In the script above, the output of pwd will be the current working directory. The script remembers this directory in the variable oldDir, then executes some function that may change the directory to wherever it pleases; the caller then safely returns to where it was before. (Mind that this will fail when doSomeUncalculableWork also uses a variable oldDir, because all shell variables are global!)
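One way around this is to run the directory-changing work in a ( subshell ): the caller's working directory and variables stay untouched, because a subshell only gets copies of them:
# the cd inside the parentheses does not affect the calling shell
( cd /tmp && doSomeUncalculableWork )
pwd    # still the original working directory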
case "`uname`" in
CYGWIN*)
echo "This is a special operating system"
;;
*)
echo "We are on some UNIX system"
;;
esac
The above code calls the uname command, which outputs the name of the operating system. The output is then tested to decide whether we are on a UNIX or a CYGWIN system.
You might already have noticed that interpreted script languages are not suitable for bigger projects. They are made for small tasks, to be done quick and dirty.
The UNIX shell is no exception: there is no data typing, there are no encapsulation mechanisms, every variable you declare is global. A big problem is keeping control over the working directory when using the cd command (change directory), especially when using functions that change the directory.
Shell scripts are small and efficient, but developing them takes some time and nerves. Their escaping mechanism and quote precedences must be studied thoroughly. Every shell command has its own flavor of regular expressions: what you can apply in a case you can not use with grep, and what egrep understands is not known to sed. Batch editors like ed and ex are very useful, but developing their commands is always an extra task.
Nevertheless, scripts can be cheap and flexible replacements for big administration applications. Remember that make provides the possibility to execute shell commands. So use it, and enjoy your scripts!
ɔ⃝ Fritz Ritzberger, 2015-05-01