Examples of Commonly Used Utilities

Directory listings: ls

When doing a long listing of a directory or file and looking at the date, you typically want to see when the contents of the file were last changed. This is the default behavior with the -l option. However, there may be cases where you want to see when other aspects of the file were changed, such as the permissions. This is done by adding the -c option (i.e., -lc). Note that if you leave off the -l option, you may not see any dates at all. Instead, the output is simply sorted in columns by the time the file was changed.

Typically, when you do a simple ls of a directory, the only piece of information you get is the filename. However, you can use the -F option (or -p, which marks only directories) to display a little bit more. For example, you might end up with something that looks like this:

Data/ letter.txt script* script2@

Here you can see that at the end of many of the files are a number of different symbols. The / (forward slash) indicates it is a directory, the @ says it is a symbolic link, and * (asterisk) says it is executable.

For many years, this was the extent of what you could do (that is, differentiate file types by which symbol was displayed). However, with newer systems there is a lot more that you can do. If your terminal can display colors, it is possible to color-code the output of ls. Newer versions of ls have the option --color=, followed by when it should display colors: never, always, or auto. If set to auto, the output will only be in color if it is going to a terminal. If, for example, you used ls in a script, it may not be useful to have the output displayed in color. In fact, it might mess up your script. On some systems, you can also set it to tty so that color is only turned on when running on a console or terminal that supports colors.

By default, a number of different file types and their associated colors are specified in the /etc/DIR_COLORS file. For example, dark red is used for executable files, light red is used for archives (tar, rpm), dark blue is for directories, magenta is for image files, and so forth. If you have a symbolic link that points nowhere (i.e., the target file does not exist), the name will blink red. If you want to change the system defaults, copy /etc/DIR_COLORS to .dir_colors in your home directory.

On some Linux distributions, the ls command is by default an alias, defined like this:

alias ls='/bin/ls $LS_OPTIONS'

where $LS_OPTIONS might contain --color=tty. I have run into cases where the different colors are very hard to see. In such cases, the easiest thing to do is simply disable the alias like this:

unalias ls

In many cases, you may have a long list of files in which you want to find the most recent ones (such as log files). You could do a long listing and check the date of each one individually. Instead, you can use the -t option to ls, which sorts the files by their modification time, that is, when the data was last changed. With the -r option, ls prints them in reverse order, so the most recent ones are at the bottom of the list. So, to get the 10 most recent files, you would have a command like this:

ls -lrt | tail -10
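As a sketch (the filenames are invented, and GNU touch's -d option is assumed for setting the timestamps), finding the most recent files might look like this:

```shell
# Create three files with distinct modification times (GNU touch -d).
touch -d '2024-01-01' notes.old
touch -d '2024-06-01' notes.mid
touch -d '2024-12-01' notes.new

# -t sorts by modification time, -r reverses: most recent last.
ls -rt notes.old notes.mid notes.new

# In a large directory, the 10 most recent files:
ls -lrt | tail -10
```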

Removing files: rm

-i queries you before removing the file

-r recursively removes files

-f forces removal

The way files and directories are put together in Linux has some interesting side effects. In the section on files and filesystems, we talk about the fact that a directory is essentially just a list of the files and pointers to where the files are on the hard disk. If you were to remove the entry in the directory list, the system would not know where to find the file. That basically means you have removed the file. That means that even if you did not have write permission on a file, you could remove it if you had write permission on its parent directory. The same thing applies in reverse. If you did not have write permissions on the directory, you could not remove the file.
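A quick sketch of this behavior (filenames invented; note that when running as root the write-permission check is bypassed anyway):

```shell
# A writable directory containing a file we have no write permission on.
mkdir -p scratch
touch scratch/readonly.txt
chmod 444 scratch/readonly.txt

# -f suppresses the "remove write-protected file?" prompt; the removal
# succeeds because it is the directory entry, not the file, being changed.
rm -f scratch/readonly.txt
```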

Copying files: cp

More than likely you'll sometimes need to make a copy of an existing file. This is done with the cp command, which typically takes two arguments, the source name and destination name. By default, the cp command does not work on directories. To do that, you would use the -r option which says to recursively copy the files.

Typically the cp command only takes two arguments, the source and destination of the copy. However, you can use more than two arguments if the last argument is a directory. That way you could copy multiple files into a directory with a single command.
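For example (filenames invented), copying several files into a directory with a single command might look like this:

```shell
touch a.txt b.txt    # two source files
mkdir -p backup      # the target directory

# More than two arguments are allowed because the last one is a directory.
cp a.txt b.txt backup/
ls backup
```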

One thing to keep in mind is that the system opens the target file for writing, and if the file does not yet exist, it is created using the default values for permissions, owner, and group. However, if the file already exists, the contents are written to the target file using its old values for permissions, owner, and group. Assume we have the following two files:

-rw-r--r-- 1 root  root  29 Mar 19 18:59 file1
-rw-r--r-- 1 jimmo root 538 Mar 19 19:01 file2

If I ran this command as root:

cp file1 file3

I end up with a new file that looks like this:

-rw-r--r-- 1 root root 29 Mar 19 19:06 file3

However, if I then ran this command:

cp file1 file2

I end up with files that look like this:

-rw-r--r-- 1 root  root 29 Mar 19 18:59 file1
-rw-r--r-- 1 jimmo root 29 Mar 19 19:09 file2
-rw-r--r-- 1 root  root 29 Mar 19 19:06 file3

The owner of file2 did not change. This is because the file was not created; rather, its contents were simply overwritten with the contents of file1. You can use the -p option to "preserve" the attributes of the source file on the new file.

Often you do not want to overwrite an existing file. This is where the -i, --interactive option comes in. It interactively queries you to ask whether the target file should be overwritten. The opposite is the -f, --force option, which forces cp to overwrite the target file.

I also use the -R, -r, --recursive options to recursively copy a directory tree from one place to another. That is, all of the files and directories from the source directory are copied into the target.

Option Meaning
-a, --archive same as -dpR
--backup[=CONTROL] make a backup of each existing destination file
-b like --backup but does not accept an argument
--copy-contents copy contents of special files when recursive
-d same as --no-dereference --preserve=link
--no-dereference never follow symbolic links
-f, --force if an existing destination file cannot be opened, remove it and try again
-i, --interactive prompt before overwrite
-H follow command-line symbolic links
-l, --link link files instead of copying
-L, --dereference always follow symbolic links
-p same as --preserve=mode,ownership,timestamps
--preserve[=ATTR_LIST] preserve the specified attributes (default: mode,ownership,timestamps), if possible additional attributes: links, all
--no-preserve=ATTR_LIST don't preserve the specified attributes
-R, -r, --recursive copy directories recursively
-s, --symbolic-link make symbolic links instead of copying
--target-directory=DIRECTORY copy all SOURCE arguments into DIRECTORY
-u, --update copy only when the SOURCE file is newer than the destination file or when the destination file is missing
-v, --verbose explain what is being done

Renaming and moving files: mv

To rename files, you use the mv command (for "move"). The logic here is that you are moving the files from one name to another. You would also use this command to move a file between directories. Theoretically, one could say you are "renaming" the entire path to the file; therefore, "rename" might be a better command name.

You can see the effects of this if you compare the time to copy a very large file with the time to move it. In the first case, the entire contents need to be rewritten. In the second case, only the filename is changed, which is obviously a lot faster.

Note that simply changing the file name only works if the source and target are on the same filesystem. If you move files between filesystems, the data must be rewritten, which takes basically the same time as a copy.

Like the cp command mv also takes the -i, --interactive option to query you prior to overwriting an existing file.
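A minimal sketch (filenames invented) of both uses of mv:

```shell
mkdir -p archive
echo "quarterly numbers" > report.txt

mv report.txt report.old   # rename: only the directory entry changes
mv report.old archive/     # move into another directory
ls archive
```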

Option Meaning
--backup[=CONTROL] make a backup of each existing destination file
-b like --backup but does not accept an argument
-f, --force do not prompt before overwriting (equivalent to --reply=yes)
-i, --interactive prompt before overwrite (equivalent to --reply=query)
-u, --update move only when the SOURCE file is newer than the destination file or when the destination file is missing
-v, --verbose explain what is being done

Linking files: ln

Linux provides a couple of different ways of giving a file multiple names. One place this is frequently used is for scripts that either start a program or stop it, depending on the name. If you were to simply copy one file to another, and you needed to make a change, you would have to change both files. Instead, you would create a "link". Links are nothing more than multiple files, with different names, but referring to the exact same data on the hard disk.

There are actually two different kinds of links: "hard" and "soft". A hard link simply creates a new directory entry for that particular file. This new directory entry can be in the current directory or any other directory on the same filesystem. This is an important restriction, because Linux keeps track of files using a numbered table, with each number representing a single set of data on your hard disk. This number (the inode) is unique within each filesystem. Therefore, you cannot have hard links between files on different filesystems. (We'll get into the details of inodes in the section on hard disk layout.) You can see this number by using the -i option to the ls command. You might end up with output that looks like this:

184494 -rw-r--r-- 2 root root 2248 Aug 11 17:54 chuck
184494 -rw-r--r-- 2 root root 2248 Aug 11 17:54 jimmo
184502 -rw-r--r-- 1 root root  761 Aug 11 17:55 john

Look at the inode number associated with the files jimmo and chuck; it is the same (184494). This means that the two names are linked together and refer to the exact same file. If you were to change one, the other would change as well.

To solve the limitation that links cannot cross filesystems, you would use a soft or "symbolic" link. Rather than creating a new directory entry, like in the case of a hard link, a symbolic link is actually a file that contains the pathname to the other file. Since a symbolic link contains the path, it can point to files on other file systems, including files on completely different machines (for example, if you are using NFS).

The downside of symbolic links is that when you remove the target file of a symbolic link, your data is gone, even though the symbolic link itself still exists. To create either kind of link, you use the ln command, adding the -s option when you want a symbolic link. The syntax is basically the same as for the cp or mv command:

ln source destination
ln -s source destination

In this case, "source" is the original file and "destination" is the new link.

In addition to being able to link files across filesystems, symbolic links can be used to link directories. Creating links to directories is not possible with hard links.
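The two kinds of link can be sketched like this (filenames invented; the -i option to ls shows the inode numbers):

```shell
echo "some data" > original

ln original hardlink       # hard link: a second directory entry, same inode
ln -s original softlink    # symbolic link: a small file holding the path

ls -li original hardlink softlink
```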

Display the contents of files: cat

You can display the contents of a file using the cat command. The syntax is very simple:

cat filename

If the file is large, it may scroll off the screen and you won't be able to see it all at one time. In that case, you would probably use either the more or the less command. Both allow you to display the contents of a file a page at a time, while less also allows you to scroll forward and backward through the file. However, less is not found on every Unix operating system, so becoming familiar with more is useful.

One might think that cat isn't very useful. However, it is often used to send the contents of a file through another command (using a pipe). For example:

cat filename | sed 's/James/Jim/g'

This sends the file through sed, replacing all occurrences of "James" with "Jim".

I often use cat to quickly create files without having to use an editor. One common case is short-n-sweet shell scripts, created something like this (the command inside the script is just a placeholder):

cat > script
echo "Hello, world"
(press CTRL-D)
chmod +x script

The first line redirects the standard output of the cat command into the file script. Since we did not pass a filename to cat, it reads its input from standard input. The input is read until I press CTRL-D. I then change the permissions and have an executable shell script. For details on these constructs, see the section on basic shell scripting.

Note that the CTRL-D key combination is normally the default for the end-of-file character. This can be displayed and changed using the stty command; the end-of-file character is shown by the value of eof=.

cat also has a few options that change its behaviour. For example, -E or --show-ends shows a dollar sign ($) at the end of each line. The -v or --show-nonprinting option displays non-printable characters. Both of these are useful for determining whether there are characters in your file that you would not normally see.

Option Meaning
-b, --number-nonblank number nonblank output lines
-E, --show-ends display $ at end of each line
-n, --number number all output lines
-s, --squeeze-blank never display more than one single blank line
-T, --show-tabs display TAB characters as ^I
-t equivalent to -vT
-v, --show-nonprinting use ^ and M- notation to show non-printable characters, except for LFD and TAB
-e equivalent to -vE
-A, --show-all equivalent to -vET

The cat command can also be used to combine multiple files. Here we need to consider two things. First, the cat command simply writes all of the files listed to standard output. So to display three files, we might have this command:

cat filename1 filename2 filename3

In the section on pipes and redirection, we talked about being able to redirect standard output to a file using the greater-than symbol (>). Combining these concepts, we might end up with this:

cat filename1 filename2 filename3 > newfile

This sends the contents of the three files (in the order given) into the file "newfile". Note that if the file already exists, it will be overwritten. As we also discussed, you can append to an existing file using two greater-than symbols (>>).
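A sketch with invented filenames and contents:

```shell
printf 'first part\n'  > f1
printf 'second part\n' > f2
printf 'third part\n'  > f3

cat f1 f2 f3 > newfile     # combine; overwrites newfile if it exists
cat f1 >> newfile          # two greater-than symbols append instead
cat newfile
```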

Display the contents of files: more

Display the contents of files: less

Display the contents of files with line numbers: nl

As in many other cases, the nl command does some of the same things that other commands do. For example, cat -n shows you line numbers in the output, just as nl does.

Option Meaning
-b, --body-numbering=STYLE use STYLE for numbering body lines
-d, --section-delimiter=CC use CC for separating logical pages
-f, --footer-numbering=STYLE use STYLE for numbering footer lines
-h, --header-numbering=STYLE use STYLE for numbering header lines
-i, --page-increment=NUMBER line number increment at each line
-l, --join-blank-lines=NUMBER group of NUMBER empty lines counted as one
-n, --number-format=FORMAT insert line numbers according to FORMAT
-p, --no-renumber do not reset line numbers at logical pages
-s, --number-separator=STRING add STRING after (possible) line number
-v, --first-page=NUMBER first line number on each logical page
-w, --number-width=NUMBER use NUMBER columns for line numbers

Display the beginning of files: head

The head command displays the beginning or "head" of a file. By default, it displays the first 10 lines. Using the -n or --lines= option, you can specify how many lines to display. In some versions of head, you can simply precede the number of lines with a dash, like this:

head -20

I commonly use head -1 when I want just the first line of a file.

You can also specify multiple files on the command line, in which case head shows you the name of each file before its output. This can be suppressed with the -q or --quiet option. Conversely, -v, --verbose will always display the headers.

Note that head can also read from standard input. This means that it can serve as one end of a pipe, so you can send the output of another command through head. For example:

sort filename | head -5

This will sort the file and then give you the first five lines of the sorted output.
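For example (file contents invented):

```shell
printf 'pear\napple\nbanana\n' > fruit

sort fruit | head -2       # the first two lines of the sorted output
```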

Option Meaning
-c, --bytes=SIZE print first SIZE bytes
-n, --lines=NUMBER print first NUMBER lines instead of first 10
-q, --quiet, --silent never print headers giving file names
-v, --verbose always print headers giving file names

Display the end of files: tail

The counterpart to the head command is tail. Instead of printing the start of a file, tail prints the end of the file.

One very useful option that I use all the time is -f. This "follows" a file or, in other words, continues to display the end of the file as it is being written to. I use this quite often when analyzing log files. Sometimes entries come in too fast, so I pipe the whole thing through more, like this:

tail -f logfile | more

Once this is running, you can end it by pressing Ctrl-C. (Or whatever stty says your interrupt key is).

If you use a plus sign (+) along with a number, tail starts at that line number and displays the rest of its input. This is often useful if you want the output of a particular command, but not the header information the command displays. (Newer versions of tail require this to be written as -n +NUMBER.) For example, I often use it like this:

vmstat 3 10 | tail +3

This starts at line 3 of the vmstat output and displays everything from there on.
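A self-contained sketch using a file instead of vmstat (contents invented; the modern -n +3 spelling is used):

```shell
printf 'header 1\nheader 2\nrow a\nrow b\n' > output.txt

# Skip the two header lines and print everything from line 3 on.
tail -n +3 output.txt
```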

Option Meaning
--retry keep trying to open a file even if it is inaccessible when tail starts or if it becomes inaccessible later; useful only with -f
-c, --bytes=N output the last N bytes
-f, --follow[={name|descriptor}] output appended data as the file grows; -f, --follow, and --follow=descriptor are equivalent
-F same as --follow=name --retry

Separating files by column: cut

The cut command is, as its name implies, used to cut up files. This can be done after a specific number of characters or at specific "fields" within the file. If you look in the /etc/init.d/ directory, you will find that there are quite a few scripts that use cut in one way or another.

In some cases, the file (or output) has fields of a fixed width. For example, a particular column might always start at character 18 and the next one at character 35. If you wanted to display just that one field, your command might look like this:

cut -c18-34 filename

Note that if you specify only a single number, you will get just the single character at that position. If you leave off the last number (e.g., -c18-), cut displays everything from the given position to the end of the line.

If the file (or output) separates the fields with a particular character (e.g., a tab, comma, or colon), you cannot split the file at a specific character number; instead, you need to split it by field number. For example, if you wanted a list of the real names of all users in /etc/passwd, your command might look like this:

cut -f 5 -d: /etc/passwd

Here, too, you can specify a range of fields. For example, -f 5-8 would display fields 5 through 8. If you want specific, non-adjoining fields, you separate them with a comma. For example, to display the 1st and 5th fields in the previous example, the command might look like this:

cut -f 1,5 -d: /etc/passwd
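A sketch using a made-up passwd-style line rather than the real /etc/passwd:

```shell
line='jimmo:x:500:100:James Mohr:/home/jimmo:/bin/bash'

echo "$line" | cut -d: -f5     # the real-name field
echo "$line" | cut -d: -f1,5   # the 1st and 5th fields
echo "$line" | cut -c1-5       # the first five characters
```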
Option Meaning
-b, --bytes=LIST output only these bytes
-c, --characters=LIST output only these characters
-d, --delimiter=DELIM use DELIM instead of TAB for field delimiter
-f, --fields=LIST output only these fields; also print any line that contains no delimiter character, unless the -s option is specified
-n (ignored)
-s, --only-delimited do not print lines not containing delimiters
--output-delimiter=STRING use STRING as the output delimiter the default is to use the input delimiter

Combining files: paste

The paste command is used to combine files. Lines in the second file that correspond sequentially to lines in the first file are appended to the lines in the first file. Assume the first file consists of these lines:

jim
david
daniel

and the second file looks like this:

jan
dec
sept

When you paste the two together you end up with this:

jim	jan
david	dec
daniel	sept
Option Meaning
-d, --delimiters=LIST reuse characters from LIST instead of TABs
-s, --serial paste one file at a time instead of in parallel
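The example above can be reproduced like this (filenames invented):

```shell
printf 'jim\ndavid\ndaniel\n' > names
printf 'jan\ndec\nsept\n'     > months

paste names months             # corresponding lines joined by a TAB
paste -d, names months         # -d changes the delimiter to a comma
```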

Combining files: join

You can think of join as an enhanced version of paste. However, in the case of join, the files you are combining must have a field in common. For example, assume the first file consists of these lines:

jim jan
david dec
daniel sept

and the second looks like this:

jim pizza
david soda
daniel ice cream

When you join the two together you end up with this:

join three four

jim jan pizza
david dec soda
daniel sept ice cream

This only works because both of the files have a common field. Note that the common field is printed only once, not repeated as it would be had you used paste.

To avoid problems with join not finding matches, I suggest that you first sort the files before you use join. Note that you do not necessarily need to match on the first field, as we did in the example. If necessary, the fields that match can be in any position in either file. The -1 option defines which field to use in file 1, and -2 defines the field to use in file 2.
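Putting that advice into practice (filenames invented; "ice-cream" is hyphenated here to keep it a single field):

```shell
printf 'jim jan\ndavid dec\ndaniel sept\n'         > three
printf 'jim pizza\ndavid soda\ndaniel ice-cream\n' > four

# join expects its inputs sorted on the join field.
sort three > three.sorted
sort four  > four.sorted
join three.sorted four.sorted
```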

Option Meaning
-a FILENUM print unpairable lines coming from file FILENUM, where FILENUM is 1 or 2, corresponding to FILE1 or FILE2
-e EMPTY replace missing input fields with EMPTY
-i, --ignore-case ignore differences in case when comparing fields
-j FIELD (obsolescent) equivalent to `-1 FIELD -2 FIELD'
-j1 FIELD (obsolescent) equivalent to `-1 FIELD'
-j2 FIELD (obsolescent) equivalent to `-2 FIELD'
-o FORMAT obey FORMAT while constructing output line
-t CHAR use CHAR as input and output field separator
-v FILENUM like -a FILENUM, but suppress joined output lines
-1 FIELD join on this FIELD of file 1
-2 FIELD join on this FIELD of file 2

Copying and converting files: dd

The dd command is used to make a low-level copy of a file. (The name is usually traced back to the "Data Definition" statement of IBM's JCL, whose option syntax it mimics.) It works very simply by opening the source and destination files in binary mode and copying the contents of one to the other. In essence, this is what the cp command does. However, dd also works with device nodes, so you can use dd to copy entire devices from one to the other.

Note that if you use the dd command to copy a filesystem from one device to another (for example, /dev/hda1 to /dev/hdb1), you are not copying individual files. Instead, you are copying an image of the filesystem. This means that all of the metadata of the target filesystem (i.e., the inode table) is overwritten and you lose any data that was on it. If the target device is smaller, you won't be able to fit all of the old data on the new one. Further, if the target device is just one of several partitions, you may end up overwriting parts of other filesystems.

In its simplest form, dd looks like this:

dd if=input_file of=output_file

Where it reads from the input file and writes to the output file.

Two very useful options are ibs= (input block size) and obs= (output block size). Here you tell dd how many bytes to read or write at a time. When used properly, this can save a great deal of time. For example, if you are copying from one hard disk to another, the system reads one block and then writes it. Because of the rotational latency of the spinning disk, it takes time for the disk to rotate back to the correct position. If you choose a block size equal to the sector size of the hard disk, you can read a whole sector at once, thus saving time.

The dd command can also be used when converting from one encoding to another. For example you can convert files from ASCII to EBCDIC.
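Two small sketches (filenames invented; GNU dd assumed):

```shell
# Create a 4096-byte file of zeros: 4 input blocks of 1024 bytes each.
dd if=/dev/zero of=blank.img bs=1024 count=4 2>/dev/null

# conv=ucase converts the data to upper case as it is copied.
echo 'hello' | dd conv=ucase 2>/dev/null
```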

Option Meaning
bs=BYTES force ibs=BYTES and obs=BYTES
cbs=BYTES convert BYTES bytes at a time
conv=KEYWORDS convert the file as per the comma separated keyword list
count=BLOCKS copy only BLOCKS input blocks
ibs=n input block size (defaults to 512 byte blocks)
if=FILE read from FILE instead of stdin
obs=n output block size (defaults to 512 byte blocks)
of=FILE write to FILE instead of stdout
seek=BLOCKS skip BLOCKS obs-sized blocks at start of output
skip=BLOCKS skip BLOCKS ibs-sized blocks at start of input

Copyright 2002-2009 by James Mohr. Licensed under modified GNU Free Documentation License (Portions of this material originally published by Prentice Hall, Pearson Education, Inc). See here for details. All rights reserved.


