Linux

  

These are my personal notes that I use as a quick help in my work.
You are welcome to read them.


Introduction

 

Basic Diagnostics

 

Help for Tru64

select instance_name, host_name from v$instance;

 


Basics

 

Syntax

Dash '-' in place of an input file indicates standard input and dash in place of an output file indicates standard output

Double dash '--' indicates end of options. Useful for filenames starting with dash: ls -ltr -- -a_file
Another option: ls -ltr ./-a_file

cmd; cmd Semi-colon separates two commands on the same line

Wildcards:
* Zero or more chars except a leading '.'
? Any single character
[aeiouAEIOU] Set
[A-Z] Range
[^set] Matches characters NOT in the set
[!set] Same as previous. For literal ^ or !, don't put as first character
[]set] Literal bracket
[-set] Literal dash first or last
[set-] Literal dash first or last
{a,b,c} Expands to "a b c"
Filename{1,2,3}.txt Expands to "Filename1.txt Filename2.txt Filename3.txt"

~username Expands to user's home directory
~ expands to my home directory

Escape control characters with ^V; in particular ^V^I gives a tab character.
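
A few globbing and brace-expansion examples (the file and directory names are made up for illustration):
ls -l report[0-9].txt              # report0.txt through report9.txt
ls -l *.[ch]                       # all .c and .h files
cp notes.txt{,.bak}                # expands to: cp notes.txt notes.txt.bak
mkdir -p project/{src,doc,test}    # creates three sub-directories in one command
echo file{1..3}.txt                # bash range expansion: file1.txt file2.txt file3.txt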

Look for help

apropos something
Show all entries in the whatis database that contain "something" (searches the short command descriptions). Same as man -k.
whatis something
Show match based on complete word
man a_command
See manual for a_command
q for quit
more aFile
See file contents. Type h for help.
:n skip to next file
:p go to previous file
:f display current file
which aCommand
Displays the file name of the executable behind a command
which
Show current aliases, and full path
alias
Show current aliases
alias aString
Show the definition of the alias aString (reveals hidden default options added through an alias)
alias alias_string='command_string'
Define an alias (BASH, Korn, and C shells)
Examples:
alias cp='cp -i'
alias mv='mv -i'
alias rm='rm -i'
type a_command
Tells how the command name would be interpreted: alias, shell builtin, function, or path of the executable (use the file command for a file's type)
a_command --help
On most commands, gives a help page
echo $SHELL
Show which shell I am using
man ascii
Show the ASCII codes. Useful for the control codes.
 
 
 
 

 


find

find . -name doc
find file doc in directory and sub-directories
Other search operators are:
-iname: case insensitive
-path: search for files in a particular path, with wildcards in path. The argument contains filename in addition to path
-regex: regular expression
-maxdepth n: limit the recursive search to n levels
-mindepth n: only report matches at or beyond n levels
-size:
find . -iname "whatever"
Case insensitive
find . -mtime -1 -print
find files changed within the last day. Remember the minus sign, otherwise it will look for files exactly a day old
find find -print
To find all accessible files whose path name begins with find
(-print is the default action in most implementations, so the output is the same with or without it)
find / -name .profile -print
List all files in the file system with a given base file name
find . -name file\*
Finds all files that have a name starting with file. Notice that the asterisk must be escaped to prevent the shell from interpreting it as a special character.
find . -name file\* -group adm
Finds all files that have a name starting with file and an owning group of adm. Notice that this is the default behavior, and is identical to the next example using the -a operator.
find . -name file\* -o -group adm
Finds all files that have a name starting with file or that have an owning group of adm.
find . -name '*' -mtime -1
find changes within last day. Remember the minus sign. Use single quotes (double quotes seem to work the same way)
find -ctime: inode change time (metadata change), not creation time
find . -name '*' -mtime +7 -exec rm -f {} \;
Find files changed that are older than a week and remove them. Use single quotes (double quotes seem to work the same way). The curly brackets tell the command that this is where the outcome of the find goes. The escaped ";" indicates the end of the "exec" command
Note that the "exec" spawns a different process for each result of the find command. Try replacing "\;" with "\+", which takes the output of the find command as one long argument. "-ok" instead of "-exec" prompts the user for each result.
find . -name '*sh' -print0 | xargs -0 grep something
Find script files ending in 'sh' and grep for something. Same output as below
find . -type f -exec grep -il something {} \;
Look for regular files and grep for "something". Only show the file names, not the matching content. Same output as below
find . -type f -name "*py" -exec grep -iH something {} \;
Find script files ending in 'py' and grep for "something". List the file names (option H). Option i is for case insensitive
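
A small sketch contrasting -exec with \;, -exec with +, and -ok (directory and pattern are made up):
find /tmp -name '*.log' -mtime +7 -exec ls -l {} \;    # runs one ls per file found
find /tmp -name '*.log' -mtime +7 -exec ls -l {} +     # passes all files to a single ls
find /tmp -name '*.log' -mtime +7 -ok rm {} \;         # prompts before each rm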

 


Directories and Files

For viewing files, see further below

cd
to home directory. The tilde ~ also indicates the user's home directory
cd ..
go up
locate filename_or_directory
Searches throughout all the disk. Partial names allowed.
Unlike find, locate searches a pre-built index (updated by updatedb), so it is fast but may be out of date.
chgrp grp file
change the group ownership of a file
chmod ugo+rwx filename
User, Group, Others(world)
Read, Write, eXecute.
Use - instead of + to remove
-R to apply recursively
chmod go-r filename
Prevent others from reading
chown new_user[:new_group] the_file
Change ownership (generally as root)
-R to apply recursively
cp [-i]
copy files, option i: prompt to overwrite
-p copy with same date
-r copy directory
ls -lt
sort by date, descending
ls -ltr
sort by date, ascending
ls -a
all
ls -l
show details. Depending on first character on the line: d=dir, c=character special device, b=block special device, l=symbolic link, p=named pipe
-F /=dir, *=exec, @=links
-R sub-dir
-t1 Lists all file names with no details, formatted in a single column, newest file first
-b list non-printing characters in \ddd octal notation
-d list only a directory without its contents
-s lists size first, useful for sorting
-C multi-column (default when output goes to a terminal)
-t sort by time of modification
-L list linked file if link
-i Show inode number
fdisk -l
View partitions. Also view partition file: cat /proc/partitions. File systems: cat /etc/fstab and mounted file systems: /etc/mtab.
Note: fdisk -l only lists; fdisk run on a device without -l opens an interactive editor that can modify the partition table.
df -k
mount points (~ disks)
df -h
mount points with more usable disk sizes (linux)
ln -s /realpath link
Symbolic link. It's "ln" with "n" followed by "-s"
Without the -s option, it is a hard link.
Remember that hard links cannot link directories and cannot cross file system boundaries, and soft links can.
find . -xtype l 2> /dev/null
Find broken soft links created with ln -s. Optionally use bit bucket so as not to list errors such as denied permission
head -20 file_name
Show first 20 lines of file
tail -20 file_name
Show last 20 lines of file
tail -1 -f file_name
every second, shows all new lines
tail `ls -t1 thread* | head -n 1`
Show the tail of the most recently modified file
du -rk .
disk space used by directory and sub-directories in KB
du -k: KBytes
du -r: Show error if directory not accessible (not on Ubuntu)
du -s: current directory only
du -kr | sort -nr | head -n 20
du -k | sort -nr | head -n 20
du -sk /*: show size of each directory in KB (in the example: root directories)
du -Sht1G *
t1G: Show files larger than 1G
h: human readable size
S: do not include large sub-directories
mkdir
make a directory.
Make many levels in one command: mkdir a/b/c
mv
move or rename files
pwd
Print Working Directory. Display the pathname of the current working directory
pushd dir
Push directory (not in ksh)
popd
Pop directory (not in ksh)
dirs
Show stack
rm [-i] filename
remove (unlink) files or directories. -i to prompt before deleting
rmdir dir
remove an empty directory. Use rm -rf dir to remove a directory and its contents without warning
umask
show the permissions that are given to view files by default
zcat
view compressed file (extension Z)
mkdir /mnt/usb-drive
mount /dev/sda1 /mnt/usb-drive
Mount a drive
If the directory /mnt/usb-drive exists and contains files, these files are hidden.
Do not copy on top of a mount point (overwriting the mount point). If something goes wrong, you erase all attached disks

 

Viewing Files

cat file
display to screen
cat f1 > f2
overwrite
cat f1 >> f2
append to f2
cat > a_file_name
Create a file with input from standard input. End with ctrl-D
cat -vet aFile
Show non-printable characters. ^I is tab; $ is end of line.
cat << END_OF_TEXT_LABEL > a_file_name
something..
something else...
END_OF_TEXT_LABEL
This puts "something \n something else" in a file. Use ">>" when adding to an existing file.
The END_OF_TEXT_LABEL label has to go at the beginning of the line
This construct is called a "here document"
cat -vet a_file_with_null_chars | grep "\^A"
Look for control characters in the file (cat -v shows Ctrl-A as ^A and a null byte as ^@)
more
browse or page through a text file, typically do: | more ("pipe" character)
Space: next page, / search, n next occurrence, :n next file.
less filename
Display contents, but with use of arrows up and down
strings aFile
Show user-readable content in aFile
file aFile
display the file type

 

fsck

umount the file system before running fsck.
Then run fsck /dev/abcd: see man for options. You may want to force a test of bad clusters and verbose output. Fedora: options fv.

If trouble shows up, go into single-user mode (init 1), run fsck several times until no damage is reported, and reboot.
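
A minimal sketch of that sequence (the device and mount point are assumptions; substitute the real ones):
umount /dev/sdb1           # the file system must not be mounted during the check
fsck -fv /dev/sdb1         # f forces the check, v is verbose (the options mentioned above for Fedora)
# repeat fsck until no damage is reported, then remount or reboot
mount /dev/sdb1 /mnt/point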

File systems

 

Comments on NFS (Network File System):
Time stamps are defined by the client, and different clients can have different times
NFS itself does not do file locking, so it comes with a separate file-locking daemon for files accessed by several clients. NFS does not know if files are open or closed.

 

Disk Partitions

 


Users and security

 

id
Gives the user id and the group membership. Also see with:
grep $USER /etc/passwd
Username:password:UID (user ID):GID (group ID):comment:home directory:default shell (/sbin/nologin if appropriate)
Root is UID 0, system accounts have UID less than 100
grep $USER /etc/group
Account name:password:group ID:member list
useradd -m -s /bin/bash user_name # simplest
useradd -g primary_group -G supplementary_groups -d /home/usrs/default_dir -m the_new_user
create a new user.
-s defines the default shell
-m creates a home directory
-c "fname the_last_nm" is the full name (optional)
-G sambashare adds the user to a supplementary group
usermod ...
Change the user settings
userdel -r the_user
Delete user. -r removes the default directory
groupadd
create a user group
groups a-user
List groups the a-user is in
usermod -a -G <groupname> username
Add a user to a group
usermod -g <groupname> username
Modify a user's primary group
passwd [the_user]
change local or Network Information System (NIS) password information
Some places use yppasswd.
Specify user if it is not the current user
passwd --expire the_user
Force the user to change the password
rlogin host-name
start a login to a different machine. See below
ssh
Secure shell
logout or ctrl-D
Logout of session
su
login as root
su - username
login as "username"
sudo -u username command
Execute the command as the user "username"
Note that many people do not login as root but do "sudo command" instead.
runuser - username -c command
Execute the command as the user "username" inside a script.
The script must be run with root
who
Shows who is on the system
w -u
List the process IDs linked to users
w
Shows what users are doing
last [username]
Shows when a person logged on

The /etc/passwd file contents are :
account name : pw placeholder (pw in shadow) : user ID : default group ID : comment : home directory : login shell

Note that if the account name changes but not the user ID, then file permissions are not changed.

The /etc/group file contents are :
group name : pw placeholder : group ID : member list

Edit password file with vipw, edit groups with vigr. Change passwords for groups with gpasswd.

Steps to add a user in Linux (or use the useradd command for adding and userdel for deleting; see useradd.conf file for configuration):

The command passwd -Sa gives the status of all users. The L means locked, P means a password is set, NP means no password (linux).
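
A hedged end-to-end example of the commands above (user and group names are made up):
useradd -m -s /bin/bash -c "Jane Doe" jdoe    # -m creates /home/jdoe, -s sets the shell
passwd jdoe                                   # set the initial password
usermod -a -G sambashare jdoe                 # add to a supplementary group
id jdoe                                       # verify UID, GID and group membership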

 

Remote Logins

rlogin host_name
On the remote host, the user's .rhosts file must contain the name of the originating host; otherwise rlogin prompts for a password. The .profile is bypassed.

 

SSH

Create key:
ssh-keygen -t rsa -f ~/.ssh/aName: Creates aName and aName.pub. Share the .pub file. If no name, it generates id_rsa and id_rsa.pub
Or: ssh-keygen -t dsa -b 1024 -f ~/.ssh/aName
cp authorized_keys authorized_keys_bkup; cat tmp >> authorized_keys: back up authorized_keys, then append the new public key (copied to the remote host, here as the file tmp) to it
ssh -i ~/.ssh/aName username@remotehost
ssh-keygen -f ~/.ssh/id_rsa.pub -m pem -e: Convert from pub format to pem
ssh-keygen -f ~/.ssh/id_rsa -p : Change passphrase
The public key is in the file with the extension .pub.
Secure the files with chmod go-wrx *.pub


Log in with ssh -i name_of_private_key_file server_name_or_ip
ssh -i .ssh/id_file_name remote_host "ls -l"
scp -i .ssh/id_file_name the_file remote_username@remote_host:the_file
sftp -vb batch_file remote_username@remote_host (-v is for verbose)

Start ssh agent
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_my
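
Putting the pieces together, a sketch of key-based login setup (host name and key file name are assumptions):
ssh-keygen -t rsa -f ~/.ssh/aName                  # creates aName (private) and aName.pub (public)
ssh-copy-id -i ~/.ssh/aName.pub user@remotehost    # appends the public key to ~/.ssh/authorized_keys on the remote host
ssh -i ~/.ssh/aName user@remotehost                # log in with the key (and its passphrase if one was set)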

 

Secure the Server

See http://phanvinhthinh.blogspot.com/2010/02/how-to-secure-your-freenas-server.html

Copy /etc/ssh/sshd_config and edit:
PasswordAuthentication no # comment out to allow ssh-copy-id
AuthenticationMethods publickey # comment out to allow ssh-copy-id
PubkeyAuthentication yes # comment out to allow ssh-copy-id
PermitRootLogin no
PermitEmptyPasswords no
# other changes, to be confirmed:
ChallengeResponseAuthentication no
UsePAM no
RSAAuthentication yes
# Limit User ssh access: by default, all systems user can login via SSH using their password or public key
AllowUsers chris someone
# restart
systemctl restart sshd

config file:
Host my-ssh-host
  HostName 10.0.0.5
  Port 22
  User myuser
    IdentityFile ~/.ssh/thekey # Not the .pub file, but the secret key file
    IdentitiesOnly yes

 

 


Processes

ps $$
Show current login shell
Note: options preceded by a dash are UNIX/Linux style, options without a dash are BSD style
ps
All my processes
ps ax / ps -A
All processes
ps -ef
All processes, full format (System V style; works on Linux and HP-UX)
ps -f -u a_user
All processes for a particular user a_user (this works on linux)
ps -uuser1,user2
Processes linked to users user1 and user2
ps -t console
Processes linked to a console
ps -o pid,user,stat
Show pid, user and stat columns
ps -a
all processes
ps g
all processes
ps l
more details: ppid, cp, pri, nice, vsize, rssize and wchan.
ps -u username
all processes for a user
ps u
user-oriented output: user, pcpu, pmem, vsize, rssize, and start.
ps v
virtual-memory-oriented output: cputime, sl, pagein, vsize, rssize, pcpu, and pmem.
ps aux
Best ???
ps aux | grep abc
pgrep abc
Look for a specific process
ps aux | grep bash
Look for bash shells, typically open terminal windows
ps x
also processes without terminal
pstree
Show tree of process dependencies
top
Interactive process monitor. Inside top, press M (upper case) to see the top memory consumers
free -m
Show available memory
vmstat
Show swap space
kill
send a signal to a process, or terminate a process
kill -9
First use ps to see processes

Nice values from -20 to 19. 0 is default. -20 is highest priority. A "nice" process is a process that is nice to others and is willing to wait more.

renice 0 pid
Put back to default 0 priority
renice +1 pid
Lower priority
renice -1 pid
Higher priority. Sudo may be necessary.
renice -n -2 pid
Set to nice value of -2 (more aggressive)


kill pid
Stop job
kill -9 pid
Stop harder
kill -STOP pid
kill -CONT pid
Temporarily stop a process (it is not killed)
pkill firefox
Stop all processes with "firefox"
ps aux | grep Xorg
sudo kill -9 pid-of-Xorg
# or pkill -9 Xorg ???
sudo init 5 # this goes back to graphical interface
This stops the graphical interface.
You may have to do sudo init 3 first

 

 

Columns in ps output. Which columns appear depends on the options: the default shows pid, tty, stat, time, and cmd; -l adds F, S, UID, PPID, C, PRI, NI, ADDR, SZ, and WCHAN; -f adds UID, PPID, C, and STIME; -u adds %CPU, %MEM, and start time.
pid      process ID
tty      terminal
stat / S process state (see codes below)
time     CPU time consumed
cmd      command
F        process flags
UID      user ID
PPID     parent process ID
C / %CPU % of CPU
PRI      priority
NI       nice value (relative runtime priority)
ADDR     (ignore)
SZ       memory size (blocks of 1K)
WCHAN    where in kernel space the process is waiting
STIME    start time
%MEM     % of memory
PSR      processor
U/D Sleep (uninterruptible, input/output blocked)
R Runnable, i.e. in the run queue
S Sleeping (less than about 20 sec)
I Idle (sleeping more than about 20 sec)
T Traced or stopped (could be a ^Z by user)
Z Zombie process
H Halted process

 

Tru64

ps -A -o pmem,pid,rss,command | sort -nr | head -20
Memory in %, process ID, resident memory, command, sorted
ps -A -o vsz,pcpu,time,user,nswap
Virtual memory size, percent CPU, total CPU used, username, swapping
ps -A -o pmem,pid,rss,vsz,pcpu,nswap,time,stime,user,command | sort -nr | head -20
Useful on Tru64
#!/usr/bin/ksh
ps -A -o pcpu,psr,pmem,rss,vsz,nswap,pid,time,stime,user,command | head -1
ps -A -o pcpu,psr,pmem,rss,vsz,nswap,pid,time,stime,user,command | grep -v "\%CPU" | sort -nr | head -10
echo
ps -A -o pmem,rss,vsz,nswap,pcpu,psr,pid,time,stime,user,command | head -1
ps -A -o pmem,rss,vsz,nswap,pcpu,psr,pid,time,stime,user,command | grep -v "\%CPU" | sort -nr | head -10
Even better on Tru64: it shows processes by CPU consumption and memory consumption. The header is shown.
swapon -s (with root)
swap

 

?? box at hbsp:
ps -o pcpu,vsz,ruser,pid,ppid,nice,time,tty -A | sort -n

vmstat s n
Virtual memory statistics, repeated every s seconds, n times. First line is since boot.
iostat s n
I/O statistics, with the same interval and count arguments. No other options.
vmstat -i
vmstat -w
vmstat -r
Show statistics, choice of different sets of columns
vmstat -P
Breakdown of page use
vmstat -p s n
Show statistics every s seconds, for n times
vmstat -s
Accumulated statistics

Explanation of columns for "vmstat s n" (first line is since boot):

active pages
In use but can be used for paging
wired pages
In use but cannot be used for paging
inactive pages
Could be used for paging
free pages
Free pages ready for use
reattach / react
Use a page from the inactive list
cow
Copy-on-write page fault: a child process needs a page for writing, so a copy is made.
page in / pin
Number of page requests
page out / pout
Number of pages paged out. If constantly over 20, then a lot of paging is happening. Look at io too. A lot of io means a lot of paging.
us sy id iowait
CPU information: us = user and normal processes; sy = system time; id = idle time; iowait = % of iowait.

 

View uptime by looking at this file: cat /proc/loadavg

 

Booting

Lilo's purpose is to start up the second stage loader. See /usr/doc/lilo for doc. The configuration is in /etc/lilo.conf

The next step is loading the kernel. Then the init program is run. The init reads the /etc/inittab file. Typically, the inittab will first tell the init program to run the initialization script such as /etc/init.d/rcS for Debian. Then the scripts for each run-level are executed. The filesystems in /etc/fstab are checked (fsck) and mounted (mount).

The INPUTRC variable says which readline init file to use for keyboard input. It could be the user's $HOME/.inputrc, /etc/inputrc, or it could be set in /etc/profile.

 

Shutdown / Boot

shutdown -h now
shutdown button on the left to power off
shutdown -r now
reboot
>>> b
boot
systemctl suspend
Suspend (remember to keep the power cord in, otherwise it will act as if it crashed)

man init; man inittab: specifics for startup/shutdown

 

Set up terminal (put in .profile, or whatever):

init

/etc/inittab: format is unique ID : run levels : action : process
Levels are typically: 0 halt, 1 single-user, 2-3 multi-user (3 with networking on Red Hat-style systems), 5 multi-user with a graphical login, 6 reboot.

The /etc/rc.d directory has the configurations of each run level. See the current runlevel with the runlevel command. The symbolic links starting with K correspond to processes that are stopped, and symbolic links starting with S correspond to those that are started. Do ./xyz status to see if one of them is running

 

Get out of the graphical interface (GUI):sudo init 3

 

Log files

 

Other

who
users who are logged on
nohup command &
Allow a process to continue to run in the background even though the calling user has logged out (no hangup)
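
Typical usage (the script and log file names are made up):
nohup ./long_job.sh > long_job.log 2>&1 &    # keeps running after logout; output goes to the log file
tail -f long_job.log                         # watch the output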

 


Scripting

 

Simple script: the #! is called a shebang (from sharp-bang) and marks the file as a script; the operating system executes it with the interpreter named on the rest of the first line.
A colon (":") as the first line is a no-op command and can be found at the top of old Bourne shell scripts; the first line should still correspond to a valid interpreter, so be careful when moving a script to a new machine.
See /bin/sh --help for flags
#!/bin/bash
TXT="Hello World"   #
No spaces around the equal. Surround the value with quotes if there are spaces in the string. See more in "variables" below.
set -e: scripts stop on error
set -x: each command is echoed

Execute with:
  ./script_name
Note that execute permissions may have to be set:
  chmod u+x script_name
Run in the background:
  ./script_name &
List background jobs with:
  jobs
Stop a foreground job with ^Z.
Put a stopped job in the background with:
  bg %1

Single quotes and double quotes are used to surround strings. However, variables expand in double quotes (hence the term partial or weak quoting) and do not expand in single quotes (full or strong quoting).
Double quotes protect all special characters except $, ` (backquote), and \ (backslash, used for escape). Also ! for history substitution in C shell.
Single quotes suppresses filename and variable substitution
Back quotes "`" is command substitution

Separate commands on the same line with ";"
Continue a command to the next line with \
The colon (:) is a null command (does nothing).

CommandA && commandB: commandB executes only if commandA is successful
CommandA || commandB: commandB executes if commandA failed.
Therefore, [ a_cond ] && commandB is equivalent to : if [ a_cond ]; then
commandB
fi

Zero before the number means octal notation (012)
Zero X before the number means hexadecimal notation (0xA23D)
nn# before the number means nn base (binary: 2#1001001001)
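
These notations can be checked with arithmetic expansion, for example:
echo $(( 012 ))             # leading zero means octal: prints 10
echo $(( 0xA23D ))          # hexadecimal: prints 41533
echo $(( 2#1001001001 ))    # base 2: prints 585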

Eventually put in production in /usr/local/bin

Comments start with #. The # loses its special meaning if escaped or quoted (preceded by \, or inside " or ') or in certain pattern-matching expressions.

Redirection:
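
Common redirections, a minimal sketch (file names are placeholders):
command > out.txt          # redirect standard output (overwrite)
command >> out.txt         # append standard output
command 2> err.txt         # redirect standard error
command > out.txt 2>&1     # send both stdout and stderr to the same file
command < in.txt           # read standard input from a file
command1 | command2        # pipe the output of command1 into command2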

Startup files

Question: find equivalent for the other shells

Shell startup files:
bash: user's default settings $HOME/.bash_profile; user's init file $HOME/.bashrc; executed when exiting the shell $HOME/.bash_logout; system-wide defaults /etc/profile; system-wide aliases /etc/bashrc
ksh: user's default settings $HOME/.profile

set prompt = `hostname`':\!)'   # (not sure about this one: see PS1 below)
set prompt = `hostname`'-'`whoami`':\!)'
alias rm "rm -i"
alias mv "mv -i"
alias cp "cp -i"
alias ll 'ls -la'
alias ls "ls -F"

In BASH, Bourne, and Korn: export PS1=`hostname`'('`whoami`'):$PWD-> '   # $PWD is the current directory (Korn shell)

 

Variables:

 

Escaping

Syntax:

The blocks below compare the csh, Bourne shell, Korn shell, and Bash syntax. Keywords are recognized as the first word of a command.
  if [expression]
then
    ...
elif [expression]
then
    ...
else
    ...
fi
if [expression]
then
    ...
elif [expression]
then
    ...
else
    ...
fi
if [expression];
then
    ...
else if [expression]
then
    ...
else
    ...
fi
foreach one_item (list)
    ...
    continue # skip the rest and go to beginning of next item
    ...
    break # skip the rest and exit loop
    ...
end
a_list="green blue red"
for a_var in $a_list # do not quote ("$a_list") so as to handle each item separately
do
done

for a_var in `ls dir`
do
   ll $a_var
done
for a_var in word1 word2
do
done

for a_var in `ls dir`
do
   ll $a_var
done
for a_var in $( ls )
do
   ll $a_var
done
while (expr)
end
while command # commands include [[ ]]
do
done
while command
do
done
while [ $I -gt 0 ]; do
   let I=$I-1
done
  until command # commands include [[ ]]
do
done
same until [ $I -gt 0 ]; do
   let I=$I-1
done
switch (word) 
case str1: 
  breaksw 
case str2: 
  breaksw 
default: 
  breaksw 
endsw
case value in  
  str1)  
    ;; 
  str2|str3)  
    ;; 
  [abc])  
    ;; 
  wildc*rd)  
    ;; 
  *) #otherwise 
    ;; 
esac
same
case expression in  
  pattern) 
    ... 
    ;; 
  pattern)  
    ... 
    ;; 
  *)  
    # default 
    ;;  
esac

Do not forget the double semi-colons
  #!/bin/sh
function func_name ()
{
PARAMETER1=$1
PARAMETER2=$2
...
}
#(need to work on the variants between shells)
don't forget the brackets "()"
Call the function without brackets:
func_name param1 param2
function func_name #or func_name()
{
PARAMETER1=$1
PARAMETER2=$2
...
return $A_VAR
}

"function" allows Korn shell semantics
Need a space after the () and the command
Call the function without brackets:
func_name param1 param2
[function] func_name ()
# with or without "()"
{
PARAMETER1=$1
PARAMETER2=$2
...
return $A_VAR
}

Call the function without brackets:
func_name param1 param2
set A_VAR=value
setenv A_VAR value
A_VAR=value
export A_VAR
A_VAR=value
export A_VAR=value
A_VAR=value
export A_VAR=value
`command` `command` $(command) `command` or $(command)
       

 

 

 


Technically, the "[expression]" is a command and the exit status is the condition. If the exit status is 0 then the following statement is executed. The left bracket is a dedicated command, so "if command" can be used too. However, the double left bracket "[[" is a keyword (bash 2.02) and allows more extended tests.

Another construct is "if ((expr))". The (( )) evaluates an arithmetic expression. Note that (( 0 )) returns 1 and is considered false, and (( 1 )) returns 0 and is considered true.
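
A short sketch contrasting the three forms (variable names are arbitrary):
a="hello"
count=5
if [ "$a" = "hello" ]; then echo "test command"; fi          # [ is the test command
if [[ $a == h* ]]; then echo "keyword, pattern match"; fi    # [[ allows pattern matching (bash/ksh)
if (( count > 3 )); then echo "arithmetic"; fi               # (( )) evaluates an arithmetic expression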

Best practice is to surround the variables with curly brackets and double quotes as in:
if [ "${1}" = "" or "${2}" = "" ]
if [ "${AP_BASE}" = "" ]
Best is to use the test operators -n (non-zero-length string) and -z (zero-length string) instead of comparing to "":
if [ -z "$1" ]; then
   echo "Usage: $0 argument"
fi

Loop example:
i=1
while [ $i -lt 9 ] ; do
  i=`expr $i + 1`
done

The test expression needs spaces around the operator (do NOT write "a=b" but "a = b").
The "[" is an alias for the test command (you can do man test).
When comparing strings, add a character in each one: if [ "x$the_variable" = "xsomething" ]
List of operators (see man test for shell-specific operators):

Korn shell:  if [ .... ]:

This list may be different for [[...]] (see man ksh, under "Conditional Expressions"):

for i in $( ls ); do   # the variable $i takes values in the list
     ...
     if [ ... ]; then break; fi
done

COUNTER=0
while [ $COUNTER -lt 10 ]; do
    ...
    let COUNTER+=1
done

COUNTER=0
until [ $COUNTER -gt 10 ]; do
    echo COUNTER $COUNTER
    let COUNTER+=1
done

case ${the_var} in
   "MED"|"TMT" )
     echo "must ..."
     return 1
     ;;
   "a" )
     echo "OK"
     ;;
   * )
     echo "Valid parameters are ..."
     return 1
     ;;
esac

Conditional on success of previous command

ls -1 non_existant_file
if [ $? -eq 0 ]
then
    echo "success"
else
    echo "failed"
fi
echo "done"

function fctn_name {
   echo $1
}

echo -n   # -n does not break the line

exit n   # exits the script with exit code n, with 0 meaning successful, 1 to 255 meaning error. If no argument then the exit status of the last command is returned. Note that some exit codes have special meanings, but there is no "official" list.

OPTIONS="list of words"
select opt in $OPTIONS; do   #
This prompts the user to choose one of the options. Needs exit option.
    ...
done

echo Please enter ...
read VAR_NAME VAR_NAME_2   #
Prompts user to enter a value / input from command line
Use stty -echo; read DB_PWD; stty echo for passwords

Command substitution:
$(commands) expands to the output of the commands. Nesting is possible. Newline characters are not possible.
`commands` expands to the output of the commands (the characters ` is referred to as "back tick"). Newline characters are not possible.
Example: base_list=`cat /etc/oratab | grep -v "#" | awk -F\: '{print "-"$1"-"}'`

 

Script for user input of options (tldp.org)
Not yet tested
#!/bin/bash
OPTIONS="cmd1 cmd2 x"
select opt in $OPTIONS; do
    if [ "$opt" = "cmd1" ]; then
        ...
    elif [ "$opt" = "cmd2" ]; then
        ...
    elif [ "$opt" = "x" ]; then
        exit
    else
        echo Option "$opt" not known
    fi
done

 

Sample wrapper for calling another script

LOG_FILE=/u01/usr/target/logs/a.log
ERR_FILE=/u01/usr/target/logs/e.log
SOURCE_FILE=/u01/usr/target/data_file.txt

echo "`date '+%Y-%m-%d %H:%M'` Starting ... \n" >> ${LOG_FILE}
echo "`date '+%Y-%m-%d %H:%M'` Starting ... \n" >> ${ERR_FILE}

if [ -f "${SOURCE_FILE}" ]
then
   ./a_script_file.sh ${SOURCE_FILE} >> ${LOG_FILE} 2>> ${ERR_FILE}
   if [ ${?} -eq 0 ]
   then
     echo "`date '+%Y-%m-%d %H:%M'` Successful execution \n" >> ${LOG_FILE}
     exit 0
   else
     echo "\n ERROR ##### \n" >> ${LOG_FILE}
     echo "`date '+%Y-%m-%d %H:%M'` Failure \n" >> ${LOG_FILE}
     exit 1
   fi
else
   echo "\n ERROR ##### \n" >> ${LOG_FILE}
   echo "`date '+%Y-%m-%d %H:%M'` File $SOURCE_FILE not found \n" >> ${LOG_FILE}
   exit 1
fi

Infinite loop:
while :; do
echo "infinite loop (remember to put an exit condition!!)"
done

 


awk

ls -lt udump | awk '{print "ls -l "$9}' > a_file
Print "ls -l " followed by the 9th column (the file name) into a_file. Run a_file, or turn the listing into delete commands with :%s/^ls -l/rm/
$0 returns the whole line (to be tested)
awk -F'\t' '{print $2, $3}' the_file
Show the second and third columns of a tab-delimited file
awk '{print $3}' | sort -u
Sort the third column and eliminate the duplicates
awk '$1 == "something" {print $1, "|", $3}'
awk '$1 ~ /^something$/ {print $1, "|", $3}'
Two ways to look for rows where first column is equal to something.
In the output, separate the two columns with a vertical bar.
awk '$1 !~ /^$/ {print $1, "|", $3}'
Look for rows where first column is empty.
awk '{print $1","$2",\""$3"\",\""$4"\""}' a_file > a_csv_file.csv
Create a comma-separated-version file. The last two columns are surrounded by double quotes.
awk '{print $1"\t"$2"\t"$3"\t"$4}' a_file > a_tab_delim_file
Create a tab-delimited file.
awk '{print substr($0, 10, 30)}' a_file
Substring, here substring of the whole line.
substr($0, n [, m]): n is the start position (1 is left-most), m is the optional length
awk -F"\t" -v OFS="\t" '{print FILENAME,$0}' file_in > file_out
input tab-delimited, output tab-delimited (OFS = specifies output delimiter)
awk -v var_name=${a_variable} '{...}'
Assign a variable for use within the awk command
ls -la | awk -F"/" '{print $NF }'
Get the file name without the path. $NF is the last "column"

cook book / Fre

grep "package succeed" thread* | awk '{print $6":"$10 }' | awk -F":" '{print $1 $2}' | sort -u
grep "package fail"    thread* | awk '{print $6":"$10 }' | awk -F":" '{print $1 $2}' | sort -u
Processes that succeeded with the hours
awk '$3 == "package" {print $4":"$6":"$10 }' thread* | awk -F":" '{print $2, $1, $3}' | sort -u
Processes, success/failure, and hour
awk '{print substr($0, 20)}' a_file | sort -u
List distinct values of lines starting with 20th character
awk '{ sub("\r$", ""); print }' dos_format > unix_format
Get rid of the CR at the ends of the lines
awk 'sub("$", "\r")' unix_file > dos_format
Change LF into CRLF
awk -F"|" '{print $1 " " $2}' dmrt_xxx.ext | sort -u
Get a distinct list of the fac ID and P.A.N. (MRN is $51 in pat)
awk -F"|" '{print $51, $2, $1}' the_file
Show a subset of the columns in a pipe-delimited file
awk -F"|" '$51 == "(10 spaces)" {print $1 " " $2 " " $51}' the_pat_file
Rows with empty MRN
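
One more example in the same spirit, not from the cookbook above (the column number is an assumption): sum and average a numeric column of a pipe-delimited file:
awk -F"|" '{ sum += $3; n++ } END { if (n > 0) print "sum:", sum, "avg:", sum/n }' the_file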
 
 
 
 

Documentation:

 

 


Networking

 

ifconfig [-a] [-v] [interf]
Network configuration, a for all, v for verbose. Lists interfaces. "interf" can be eth0, lo, ...
ip addr
IP addresses
netstat [-i]
Summary of interfaces
ping -c n
n is number of packets to be sent
traceroute
traceroute6 for IPv6
tracepath ip-or-name
On Linux, similar to traceroute
mtr ip-or-name
On Linux, combines traceroute and ping
host ip-or-name
On Linux, gives IP address, or, for IP address, gives hostname
ifplugstatus [eth0]
On Linux, gives status of the interfaces. A message like "link beat detected" means plugged in, otherwise, a message like "unplugged"
(install with sudo apt-get install ifplugd)
route -n
 
netstat -anp
All sockets, numeric display, display pid
netstat -nr
Display in numeric output, display routing table
Option a for all open ports
Option l for listening ports
netstat -a
find all open ports
/etc/inetd.conf
/etc/services
Files to help find open ports

See /etc/sysconfig/network. This is where the hostname is defined.
Each interface has a configuration file in /etc/sysconfig/network-scripts.

Network file is: /etc/network/interfaces.
It contains the definitions for the network interfaces.
Two situations: dynamic (with DHCP server) or static:
Dynamic addresses (with DHCP server):
auto eth0
iface eth0 inet dhcp

Static addresses:
auto eth0
iface eth0 inet static
address 192.168.1.10
gateway 192.168.1.1
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255

Restart network: /etc/init.d/networking restart

See the hostname: execute /bin/hostname or look at file /etc/hostname

Address lookups (host file): /etc/hosts
The file /etc/resolv.conf points to a specific server for name lookups (dns servers). In the case of a router, the address of the router would be in this file.

Samba

The Samba configuration is in /etc/samba/smb.conf
Restart Samba server after changing the configuration: /etc/init.d/samba restart
To install, use the Synaptic Package Manager and search for Samba.
Query the Samba server: smbclient -L ubuntu -U% (Replace the % with a username to see what a specific user will see.)

 

 


tar

-c --> create
-r --> write at end of archive
-t --> list contents
-u --> add files to tape
-x --> extract
options:
-b : block factor (block size) for tapes
-e : exclude
-f : tar file name
-v : verbose

examples:

tar cvf /dev/ntape/tape0 -e ./foo $HOME
create a tar on tape with $HOME, exclude ./foo
 
tar cvf the_file.tar sub_dir/*
create a tar with all of the sub-directory sub_dir
 
tar -xvf abc.tar
extract the contents of the tar.
tar xvf abc.tar -C dir
extract the contents of the tar into the directory "dir" (GNU tar)

 

 

compress *.tar
uncompress *.z

Other utilities: gzip, zip
gzip a_file gives a_file.gz (tar directories before gzip)
gzip -d a_file.gz (or gunzip a_file.gz) decompresses
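
Combining tar and gzip, a hedged sketch (the directory name is made up):
tar cvf - a_dir | gzip > a_dir.tar.gz    # archive and compress in one pipeline
gunzip < a_dir.tar.gz | tar xvf -        # decompress and extract
tar tzvf a_dir.tar.gz                    # GNU tar: list the contents of a compressed archive directly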

 

 


vi

 

Configuration in .exrc in home directory
In vim, do :edit $MYVIMRC. Locations shown in :version.
See variable values with :echo $VIM and :echo $HOME
Try C:\users\the_user\_vimrc or ...\gVimPortable\Data\settings\_vimrc.

 

vi -R Edit in read-only mode
ESC cancels a partially entered command
DEL interrupts an executing command
^L refreshes a scrambled screen
^G Shows current line and file information

Insert

i insert, ESC to end insert
I Insert at beginning of line
a insert after the cursor
A insert at the end of the line
o New line (below) and insert
O New line (above) and insert
J Join current line with the next
^W Erase last word when in insert mode
^H Erase last character when in insert mode

Moving

^U
^D
Up --> Scroll Up
Down --> Scroll Down
^B
^F
Back --> Page back
Forward --> Page forward
^Y
^E
Go up one line
Go down one line
 k
h l
 j
Go up, left, right, down (like arrow). Note that J is join!
iG
:i
Go to line i
G Go to end of file
+ or CR Go to first non blank character of next line
- Go to first non blank character of previous line
H, iH
M
L, iL
Home --> Move to upper left of screen (home), to line i of screen
Middle --> Move to middle of screen
Low --> Move to low part of screen, to line i from bottom
space move to next character
backspace move to previous character
w, W Move to next word, next big word
b, B Move to previous word, previous big word
e, E Move to end of word, end of big word
(, ) Move to beginning of previous, next sentence
{, } Move to beginning of previous, next paragraph
0 Move to the beginning of the line
^ Move to the first non-blank character
$ Move to the end of the line
mc Put a marker c (use 'a' to 'z')
'c Move to marker c
'' Move to previous position
% Move to the matching parenthesis or brace.

Deleting

x Delete current character
ix Delete i characters
dd Delete line
dw Delete word
db Delete backwards
d) Delete to end of sentence
^W Erase last word when in insert mode
^H Erase last character when in insert mode

Searching

/text Search
/\%uUUUU Search for a character with unicode UUUU (hex notation). Slash back-slash percent lowercase u ...
?text Search backwards
n Search next
/^text, /text$ Search for text at beginning, end of line
fc Find character c on current line. Semi-colon ";" repeats
Fc Find character c backwards on line. Semi-colon ";" repeats
tc, Tc Find character c and put cursor before. Semi-colon ";" repeats

Advanced Editing

. Repeat last change
xp Transpose characters
y Yank
Y Yank line
p Put yanked line
s Substitute (esc to end)
r, ir replace with next character, replace next i characters
R Replace, end with ESC
dd delete line
dw delete word
c$ change until the end of the line
cw change word
cc or S Change line, end with ESC
:[%]s/s1/s2/[g]
:4,8s/s1/s2/
Substitute, % is for all lines, g (=global) all occurrences (otherwise just the first). See section on regular expressions. Repeat with "&"
Range with comma: perform substitution on specific lines (set nu to see line numbers)
sed 'command' file Apply command to each line of the file. Examples:
sed 's/ //g' file > output_file # Remove spaces
   
u undo (undo the undo with ctrl-r)
U Restore current line as before
v, V Start visual selection, then move and do one of these commands: d, c, y, >, <
V selects the whole line.
Ctrl-v after moving selects a box (nice trick)
V>, V< Shift the whole line right or left
gu, gU Put selection in lower case, upper case
ga Gives the hex and octal representation of the character under the cursor
qa  (...commands...)  q
@a
Register commands in "a"
Re-play with @a. There are 26 registers, a..z.

EX commands, including setting parameters and exiting

:f Show current file and line
:w write to the file
:r filename Insert file 'filename' after the current line
:w name write to file 'name'
:q quit
:q! quit and discard changes
ZZ Write changes and exit (normal-mode command, no colon needed)
!!cat filename Replace current line with file 'filename'
:set Set various parameters, e.g. :set ic ignore case
:set all lists all parameters
Configurations can also go into the .exrc file
Show value: :set option?
See possible parameters below. See vimdoc too
:ab short_string string Abbreviation: replaces "short_string" with "string"
Remove with :una short_string
:vi Go back to vi mode when in EX mode
:X (uppercase) Set the pw for the file.
:setlocal cm? to see the method. Set method with :setlocal cm=mm with mm=blowfish2, blowfish, or zip.
Note that zip method is backwards compatible, but breakable too.

 

Some vim tips

Modify multiple files:
vi *html # or whatever wildcard is appropriate
:bufdo %s/look-for/replace-with/ge | update

Sample vimrc file:

set lines=40
set columns=150
set ignorecase
set noautoindent
set indentexpr=""
set expandtab
set noshowmatch
set backup
set backupext=.bak
set patchmode=.orig.txt

Portable version
Execute with C:\progfile\papps\gVimPortable\App\vim\vim73\gvim.exe
vimrc in "C:\progfile\papps\gVimPortable\Data\settings\_vimrc"

Macros

For macros in vim, see below. Consider using the macro function in reflexion too.

To record a macro: press q followed by a register letter (for example qa), type the commands, then press q again to stop recording.

Play back the macro with @ followed by the register letter (for example @a). Repeat the last playback with @@, or prefix a count such as 5@a.

 


Regular expressions

Alphanumeric characters stand for themselves. Prefix other characters with \ if needed.

. (period) Any one character
\ Next character is taken literally
$ Match end of line
^ Match beginning of line
% All lines
[xy] Match with 'x' or 'y'
[0-9],[A-Z],[A-Za-z],[a-zA-Z0-9_] More than one option for a character
[^A-Z] Match with any character except upper case.
x*
x+
xx*
x?
.*
Match with 0 or more 'x'
1 or more 'x' (repeating character, use [ ]+ too)
1 or more 'x' ("+" does not seem to work)
0 or 1 occurrences of 'x'
0 or more of any characters
(reg_expr)*
(?:abc)
Match with 0 or more occurrences of a group of characters: capture group
Non-capturing (cluster) group: groups "abc" without creating a capture
x*?
x+?
"Ungreedy" matching
one_string|another several possible matchings. Best if used in a cluster group: (?:one_string|another)
\{min, max\} Specifies a minimum and a maximum number of occurrences
\{x\} Matches x occurrences
\(...\) Store matched character
\n Retrieve stored characters (n=1..9)
(a string|another|3rd option) Search for at least one of three strings (alternation)
   
i i switch at end indicates case-insensitive. Switches can be cumulated: gei
g g switch at end indicates all occurrences (otherwise just the first)
e e switch at end treats the substitution text as a normal Perl expression

See www.regular-expressions.info (explore some more)

Examples:

:%s/x.*$//

asdf x 7890 --> asdf

:%s/^..../&y/ asdf --> asdfy (& stands for the matched text)
:%s/</&lt;/g
:%s/>/&gt;/g
:%s/$/<BR>/
Prepare for HTML
s/xyz/abc/g g is global option
sed '5d' delete line 5
sed '/[Tt]est/d' Delete lines with "test" or "Test"
sed -n '20,25p' Print lines 20-25
sed '1,10s/xyz/abc/g' change in first 10 lines
sed '/jan/s/-1/-5/' change -1 to -5 on lines with jan
sed 's/^...//' Delete the first 3 chars
sed 's/...$//' Delete last 3 chars
sed 's/[ ]*$//' Delete trailing spaces at end of lines
sed 's/\x93/"/g' Look for character with unicode 93 (in hex) and replace
sed -n 'l' Print all lines
non printable chars --> \nn
tab --> >
'^[a-zA-Z_]+@[a-zA-Z]+\.[a-zA-Z]+$' Email
   
   
   
   

 

 

 


Misc

echo
echo arguments to standard output
grep str filename
search a file for a string or regular expression
Surround "str" with double quotes if there is a space, or single quotes if there is a special character such as $, *, [ or other. The filename can have a wildcard
^ = beginning of line ; $ = end of line; . = single char; \ escape the following char
-i ignore case
-n line number
-l list files
-v lines that do not contain the string
-h Do not display filenames when searching multiple files.
sort -nr filename
-n interpret first field as numbers
-r reverse order
-u eliminate duplicate lines
-o give file for output
uniq in_file out_file
eliminates consecutive matching lines
-d gives only the duplicate lines
-c give count of duplicate lines
cut -c list [file ...]
Extract based on character position.
cut -b for byte position (equivalent to -c option for single-byte character sets)
list of positions: 1,4,7 = pos 1, 4, and 7; 1-3 = pos 1 to 3; -5 = pos 1 to 5; 3- = pos 3 to last
cut -f list [-d char] [-s] [file ...]
Extract based on field delimiter char. List is same as for cut -c. Put quotes around char if it has a special meaning. Default is tab.
Option -s to suppress output of lines with no delimiters
cut -f 2-4 -d: get columns 2 to 4 defined by colon as a delimiter
grep something * | cut -f 1 -d: | sort -u tidy up grep output
paste -d list file1 file2
Merge the two files on a line by line basis, with [line from file1][delimiter][line from file2]
The delimiter is tab by default. With -d option, the members of the list will be used in order as delimiters. Enclose comma in double quotes: paste -d "," file1 file2
/usr/xpg4/bin/grep -E 'full regex' filename
The pattern is a full regular expression (this option is only available for some version of grep such as xpg4)
/usr/xpg4/bin/grep -E '(792503|796006|801801)' a_file
Output the rows with at least one of the three numbers
tr 'abc' 'ABC' < a_file.txt
translate
-d delete
tr -d ' ' #deletes spaces
tr -c 'a' '-' everything but 'a'
tr e z
tr '[a-z]' '[A-Z]'
tr '[:lower:]' '[:upper:]' character class tokens, such as [:alnum:], [:alpha:], [:digit:], [:blank:] (space and tab), [:space:] (all whitespace), [:punct:]
tr '()' '{}'
tr -s '[:blank:]' squeeze out multiple occurrences
Note: tr translates character to character. Also see sed
crontab -l
list crontab
mm hh * * d where mm=minutes, hh=hour, d=day in week (0=Sun)
See below for more details on cron configuration
echo $SHELL
See current shell
xhost +
export DISPLAY=ip-add:0.0
Have an x-windows run on another machine
sysconfig -s
System configuration
sysconfig -q vm
Sub-system configuration (here virtual memory)
date +"%m%d %H%M %Y"
Shows date (mon-day hour-minutes year)
%Y year, %m month, %d day, %a day of week, %H hour 00..23, %M minute, %S seconds
%F YYYY-MM-DD, %z time zone, %T HH24:MI:SS, %s seconds since 1970-01-01
Remember the "+" at beginning of format
-u shows GMT
mv a_name.txt a_name`date +"%Y%m%d%H%M"`.txt: rename file with timestamp
date mmddHHMMyyyy
Change the date (default format on Tru64)
date +%s
current epoch time
cal [m] year
Show calendar for year, optionally just a month
bc
Basic calculator. Enter a calculation then press ENTER. End with ctrl-D.
Change the number of decimal places with scale = n
grep "19\-OCT\-2004" ...log/listener.log | grep -i genio | grep -v SID
Check for connections to the database on a given date
echo $PATH
Display the $PATH variable
PATH=${PATH}:new_path_to_add
Add a path to $PATH
ls | xargs
Take a multi-line list and make a single-line list
dos2unix in_file_name > out_file_name
Get rid of the CR at the ends of the lines
awk '{ sub("\r$", ""); print }' dos_format > unix_format
sed 's/^M$//' dos_format > unix_format (sed -e option might be needed). Try also tr -d '\r'
unix2dos in_file_name > out_file_name
Change LF into CRLF
awk 'sub("$", "\r")' unix_file > dos_format
sed 's/$'"/`echo \\\r`/" unix_file > dos_format (verify this)
diff file1 file2
Show the differences between two files
-c gives a context of three lines around the changes, -C n gives a context of n lines
-u produces a merged view of the differences
-b ignore blanks
-y side by side (-by is good for side by side of two files)
comm file1 file2
Compare two sorted files. Displays three columns:
rows only in file1, rows only in file2, rows in both files
diff -rq dir1 dir2
Show the differences between two directories
Option -r shows more details
^S ... ^Q
Stop input and output (remember the screen lock of old days...)
cat a_file | tr '|' '\t'
tr "|" "\t" < cat a_file
Replace pipes with tabs. Remember to pipe in the file or use "<" for file input. The quotes can be simple or double.
sar -u -s 02:00:00
See stored sar results since 2AM
cal 3 2012 | cut -c3-18
One month, Monday to Friday

 

History

!!
Redo last command (C shell)
!^
First argument of last command (C shell)
!$
Last argument of last command (C shell)
!*
All arguments of last command (C shell)
!-n
Command typed n commands ago
history
Show previous commands
set -o vi
Edit the past commands in vi mode. Press escape to edit. Press i to insert a new command.
With this, to see previous command(s): [esc]k (Korn shell)
 
 
 
 
 
 
 
 
 
 
 
 
 
 

 

Keyboard Shortcuts

Ctrl-C Interrupt
Ctrl-Z Suspend
Ctrl-D Exit the bash shell
Ctrl-L Clear the screen
Ctrl-S and Ctrl-Q Stop scrolling and resume output to the screen

Editing line:
Ctrl-A or Home: beginning of line
Ctrl-E or End: end of line
Ctrl-B Back one character. Alt-B back one word
Ctrl-F Forward one character. Alt-F Forward one word
Ctrl-XX Go to beginning of line, then with Ctrl-XX return to previous position
Ctrl-D Delete the character under the cursor. Alt-D Delete to the end of the current word
Ctrl-H or backspace
Ctrl-_ (underscore): undo last key press
Ctrl-T Swap the previous two characters

Cut and paste:
Ctrl-W Cut the word before the cursor
Ctrl-K Cut from cursor to end of line
Ctrl-U Cut from beginning of line to cursor
Ctrl-Y Paste what was previously yanked. Note that the clipboard is local to the bash shell.
Ctrl-Ins and Shift-Ins to copy and paste from computer's clipboard
Alt-U Convert to upper case from cursor to end of current word
Alt-L Convert to lower case from cursor to end of current word
Alt-C Convert character under cursor to upper case and go to end of word

History:
Ctrl-P or up arrow: previous command in history
Ctrl-N or down arrow: next command in history
Alt-R Revert changes to command
Ctrl-R Recall: press ctrl-R and type characters (reverse-i-search)
Ctrl-O Run the command found with ctrl-R above
Ctrl-G Exit this recall mode
Tab completion: press tab to complete a directory or file name.
It appears that the shortcuts above require emacs mode (set -o emacs) as opposed to the vi mode (set -o vi)
Many thanks to L.H. at How-To Geek. The website is an excellent source of tips, tricks, and in-depth explanations

 

Fonts

Location:
- ~/.fonts
- ~/.local/share/fonts
Update with fc-cache command after adding or removing

Formats:
- TrueType (.ttf)
- PostScript Type 1 (.pfb + .pfm)
- OpenType (.otf)

 

Miscellaneous

Double-click a text to copy, and right-click to paste

 

curl -O url: Download a file (capital "O")
wget url: Download a file

 

Cron

Examples:


0 0 * * 0 = weekly
0 0 * * * = daily, at midnight
0 8 * * * = daily, at 08:00
0 * * * * = hourly

# +------------- minute (0 - 59)
# | +------------- hour (0 - 23)
# | | +------------- day of the month (1 - 31)
# | | | +------------- month (1 - 12)
# | | | | +------------- day of the week (0 - 6) (Sunday to Saturday;
# | | | | |                                   7 is also Sunday on some systems)
# | | | | |
# | | | | |
# * * * * * command-to-execute
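
For example, a hedged crontab entry (the script path and log file are assumptions):
# run a backup script every weekday at 02:30 and append its output to a log
30 2 * * 1-5 /home/user/backup.sh >> /home/user/backup.log 2>&1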

See Wikipedia - cron

 

 


Terraform Scripts

 

See working implementation on https://github.com/yekesys/terraform_ec2. It creates a local file and spins up an EC2 instance.

See Module Structure

# comment
// comment
/* multi-line comment */
Basic Commands

Test installation with:
terraform -v

First run after defining Terraform files:
terraform init
This is idempotent, and can be executed multiple times

Show the changes that will be made (like a dry run):
terraform plan

Apply the changes:
terraform apply or
terraform apply -auto-approve

Remove objects with:
terraform destroy

Format the code:
terraform fmt
Check validity:
terraform validate

Download a new version (to be verified):
terraform init -upgrade

Basic structure of a Terraform block:

<BLOCK TYPE> "<BLOCK LABEL>" "<BLOCK LABEL>" {
  # Block body
  <IDENTIFIER> = <EXPRESSION> # Argument
}

State File

The state is stored in the file terraform.tfstate.

Terraform is "declarative". We define the desired end state. Terraform figures out how to get there.
This approach requires that Terraform know the current state. This is in the state file terraform.tfstate (a json file)

Update the state file:
terraform refresh

Use the -refresh-only mode as a safe way to check Terraform state against real infrastructure. It does not update the infrastructure nor the state file:
terraform plan -refresh-only
terraform apply -refresh-only

Best practices for the state file:

Show the state from state file:
terraform show

Logging

Log Terraform core:
export TF_LOG_CORE=TRACE
export TF_LOG_PATH=logs.txt
Log a provider:
export TF_LOG_PROVIDER=TRACE
export TF_LOG_PATH=logs.txt
When done, unset the variables:
export TF_LOG_CORE=
export TF_LOG_PROVIDER=

Providers

First, we define the providers

Providers are defined in the root, not in the modules.

Examples of providers: aws, local
Doc for local: https://registry.terraform.io/providers/hashicorp/local/latest/docs
Doc for AWS: https://registry.terraform.io/providers/hashicorp/aws/latest/docs

Provider information should only be in the root module.

Set AWS profile (user) in the main.tf file:

# main.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }

  required_version = ">= 1.2.0"
}

provider "aws" { 
  profile = "user2" 
}

Assume a role:

provider "aws" {
  assume_role {
    role_arn = "arn:aws:iam::123456789012:role/dev-full-access"
  } 
}

list of providers:
https://registry.terraform.io/browse/providers
Documentation:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs

Resources

The resource block identifies the type of the resource and name of the resource.
Below is the basic structure of a resource block:
resource "provider_type" "name" {
  ...
}

The resource type is composed of the provider (above provider, which corresponds to the name of the provider) and the type (above type). The name of the resource is my choice (above name).
The id of the resource is provider_type.name (as in the first line of the resource block, with "." between the resource type and the name).

List resources:
terraform state list

 

Locks

remote backend:
-lock-timeout=10m : wait 10 minutes for lock to be released
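
For example (the timeout value is the one above; the command is otherwise arbitrary):
terraform plan -lock-timeout=10m
terraform apply -lock-timeout=10m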

Variables

Define as follows:

variable "var_name" {
  description = "Text ..."
  type        = string  # or bool or number or list(string)
}

Ideally, put all variables in the variables.tf file, although they can be in any file

Get the value with var.var_name in other files.
Inside a string, use "....${var.var_name}..."

Local variables:
locals { var_name = a_value }

Get the value with local.var_name

Define an object:

variable "a-name" {
  type = object({
    name    = string
    address = string
  })
}

Prevent sensitive values from showing in the output. However, nothing prevents these values from showing in the state file, which therefore has to be treated as holding confidential data:

variable "a-name" {
  sensitive = true   # will not show in outputs
}

Outputs

output "a_value" {
  value = aws_instance.example.public_ip
  description = "Text..."
}

Show all outputs with terraform output and a specific variable with terraform output a_value.

For output in json format, do terraform output -json.

Modules

All .tf files in the same directory are part of the same module.
Each module (or directory) should ideally have a README.md file, or README
And should always contain main.tf, variables.tf, outputs.tf, even if empty

Some people say to not use the file names main.tf, variable.tf, outputs.tf in all the modules, but use different file names. Likewise, some people suggest using json files instead of the tfvars format, and calling the files name.tfvars.json instead of name.tfvars. Instead of outputs, use data blocks and tags.

Child modules should not have a provider block. Provider configurations should only be in the root module. If they are in child modules, there can only be one. But requirements for providers can exist in the various modules in "required_providers" block

Example:

# main.tf in root
provider "aws" {
  region = "us-east-1"
}
module "abc" {
  source = "../../abc_module"    # For local sub-directory, use "./sub-dir" notation
  var_name = "asdfasdf"
}

The name "abc" above is the local name that the calling module uses to designate the instance of the module. The source is mandatory.

Multiple module blocks can use the same source. This creates multiple copies, usually with different variables.

All configuration values should be in the variable.tf file

If a variable is used but does not have a value, provide the value at run-time:
terraform apply -var "server_port=8080"
Or:
export TF_VAR_server_port=8080

Catch the output variable values with:
module.the-module-name.the-output-name
Note: the-module-name is not the sub-directory name, but what follows the keyword module in the calling module. The output name is the output defined in the called module.

Loops

Count:

Create three things that are the same:

resource "..." "example" {
  count = 3
  name  = "something${count.index}"
}

count.index starts at 0, so this creates something0, something1, and something2

List:

Define the variable as type "list(string)"

variable "var_name" {
  description = "Text ..."
  type        = list(string)
}

In the resource creation:

  count = length(var.var_name)
  filename = var.var_name[count.index]

Get the values with [*]:

output "all_..." {
  value       = aws_iam_user.example[*].arn
  description = "..."
}

Collection:

Define the variable as type "list(string)"

variable "var_name" {
  description = "Text ..."
  type        = list(string)
}

In the resource creation:

  for_each = toset(var.var_name)
  filename = each.value

Since for_each creates a map of resources, get the values with values():
output "all_..." {
  value       = values(aws_iam_user.example)[*].arn
  description = "..."
}

Module Structure

The root module calls a module located in the sub-directory ./the_module.

Root module:

terraform {
  required_providers {
    providername = {
      source = "hashicorp/providername"
    }
  }
}

provider "providername" {
    version = "~> 1.4"
}

module "the_module_name" {
  source       = "./the_module"
  variable1 = "A text"
  variable2 = "filea.txt"
}

output "public_ip" {
  value       = module.the_module_name.public_ip
  description = "The public IP of the web server"
}

output "module_outputs" {
  value       = module.the_module_name
  description = "All the outputs of module"
} # Shows public_ip, variable1, and any others

Module in ./the_module:

variable "variable1" {
  description = "...."
}

variable "variable2" {
  description = "...."
  default     = "t2.micro"    # 'default' not 'value'
}

resource "providername_resourcetype1" "the_resource1_name" {
  param1 = var.variable1
}

resource "providername_resourcetype2" "the_resource2_name" {
  param1 = var.variable1
  param2 = var.variable2
  param3 = providername_resourcetype1.the_resource1_name.property
}

output "public_ip" {
  value = providername_resourcetype2.the_resource2_name.public_ip
  description = "Web server public IP"
}

output "variable1" {
  value = var.variable1
  description = "...."
}

Notes:

Example: Local File

main.tf in root:

provider "local" {
#  version = "~> 1.4"
}
module "create_a_file" {
  source = "./file_module"
  file_content = "\nA text"
  file_name = "filea.txt"
}
module "create_b_file" {
  source = "./file_module"
  file_content = "\nB text"
  file_name = "fileb.txt"
}

file_module/variable.tf:

variable "file_content" {
  description = "What goes in the file"
  type = string
}
variable "file_name" {
  description = "Name of the file"
  type = string
}

file_module/main.tf:

resource "local_file" "a-file" {
  content = var.file_content
  filename = var.file_name
}
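To try the example, run from the root directory:

terraform init
terraform plan
terraform apply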
Example: Multiple Files (with for_each)

main.tf in root:

provider "local" {
#  version = "~> 1.4"
}
module "create_a_file" {
  source = "./file_module"
  file_content = "\nA text"
  file_names = ["file1.txt", "file2.txt", "file3.txt"]
}
module "create_b_file" {
  source = "./file_module"
  file_content = "\nB text"
  file_names = ["fileb1.txt", "fileb2.txt", "fileb3.txt"]
}

file_module/variable.tf:

variable "file_content" {
  description = "What goes in the file"
  type = string
}
variable "file_names" {
  description = "List of file names"
  type = list(string)
}

file_module/main.tf:

resource "local_file" "a-file" {
  content = var.file_content
  for_each = toset(var.file_names)
  filename = each.value
}
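A possible file_module/outputs.tf to expose the generated file names (a sketch; because of for_each, local_file.a-file is a map, hence values()):

output "created_files" {
  value       = values(local_file.a-file)[*].filename
  description = "Names of the files created by this module"
}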
Example: AWS Instance
provider "aws" {
  region = "us-east-1"
}

variable "server_port" {
  description = "The port the server will use for HTTP requests"
  type        = number
}


resource "aws_security_group" "instance" {
  name = "terraform-example-instance"
  ingress {
    from_port   = var.server_port
    to_port     = var.server_port
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "example" {
  ami           = "ami-0022f774911c1d690"
  instance_type = "t2.micro"

  # reference to another part of the configuration:
  vpc_security_group_ids = [aws_security_group.instance.id]


  user_data = <<-EOF
              #!/bin/bash
              echo "Hello, World" > index.html
              nohup busybox httpd -f -p "${var.server_port}" &
              EOF

  tags = {
    Name = "terraform-example"
  }

}

output "public_ip" {
  value       = aws_instance.example.public_ip
  description = "The public IP of the web server"
}

output "port" {
  value       = var.server_port
  description = "The port of the web server"
}
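After apply, the server can be reached through the two outputs (assuming the security group above allows the port and the instance has finished booting):

curl "http://$(terraform output -raw public_ip):$(terraform output -raw port)"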
Example: S3 Bucket and DynamoDB For Storing tfstate
terraform {
  backend "s3" {
    bucket         = "bucket name"
    key            = "global/s3/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-up-and-running-locks-table-name"
    encrypt        = true
  }
}

provider "aws" {
  region = "us-east-1"
}


resource "aws_s3_bucket" "terraform_state" {
  bucket = "bucket name"
  versioning { 
    enabled = true 
  } 
  # Enable server-side encryption by default
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}


resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-up-and-running-locks-table-name"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"
  # the key has to have "LockID"
  attribute {
    name = "LockID"
    type = "S"
  }
}

output "s3_bucket_arn" {
  value       = aws_s3_bucket.terraform_state.arn
  description = "The ARN of the S3 bucket"
}
output "dynamodb_table_name" {
  value       = aws_dynamodb_table.terraform_locks.name
  description = "The name of the DynamoDB table"
}
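Note that the bucket and the DynamoDB table must exist before they can serve as a backend: a common approach is to create them first with local state, then add the backend "s3" block and re-initialize so the existing state is copied to S3 (older Terraform versions prompt for the migration instead of taking the flag):

terraform init -migrate-state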

 

https://learn.hashicorp.com/collections/terraform/aws-get-started
https://blog.gruntwork.io/an-introduction-to-terraform-f17df9c6d180
https://blog.gruntwork.io/terraform-tips-tricks-loops-if-statements-and-gotchas-f739bbae55f9
Reference

 

 

Terragrunt

 

Terragrunt keeps Terraform code DRY (Don't Repeat Yourself).

Installation Steps:

The terragrunt.hcl file typically contains:

  1. The Terraform source
    terraform { source = "tfr:...." }   # link to the terraform module source and version

  2. The AWS region, in a "generate" block

  3. Input values for the variables of the module (see the inputs sketch just after this list).
    Equivalent to putting the contents of the map in a tfvars file and passing it to terraform.
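The inputs of point 3 are a plain map (a sketch; the variable names and values are placeholders for whatever the module expects):

inputs = {
  instance_type = "t2.micro"
  min_size      = 1
  max_size      = 3
}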

Example:

include "root" {
  path = find_in_parent_folders()
}

dependencies {
  paths = ["../vpc", "../mysql", "../redis"]
}

Location:

Run terragrunt commands instead of the equivalent terraform commands:

terragrunt plan
terragrunt apply
terragrunt output
terragrunt destroy

In the root remote_state configuration, the state key is typically set relative to the including folder:

key = "${path_relative_to_include()}/terraform.tfstate" # basically the local folder

Terragrunt will generate the backend file indicated here:

generate = {
  path      = "backend.tf"
  if_exists = "overwrite_terragrunt"
}
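These pieces usually sit together in a remote_state block of the root terragrunt.hcl (a sketch; bucket and table names are placeholders):

remote_state {
  backend = "s3"
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }
  config = {
    bucket         = "my-terraform-state"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "my-lock-table"
  }
}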

An include block searches up the directory tree for the root terragrunt.hcl and pulls in the remote_state configuration from that root:

include "root" {
  path = find_in_parent_folders()
}

https://terragrunt.gruntwork.io/docs/getting-started/quick-start/

Further documentation:

 

Plz

 

The root of the repo is defined by the .plzconfig file, which sits at the repo root.

Profile-specific config files can override it (here the profile is "remote").
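A minimal .plzconfig sketch (the file is INI-style; the version value is just an example):

[please]
version = 17.0.0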

Sample BUILD file:

python_library(
    name = "my_library",
    srcs = ["file1.py", "file2.py"],
)

python_binary(
    name = "my_binary",
    main = "my_main.py",                 # main = entry point
    deps = [":my_library"],
)

python_test(
    name = "my_library_test",
    srcs = ["my_library_test.py"],
    deps = [":my_library"],
)

Do these:

plz build
plz run   # builds and runs
plz test  # builds and runs tests

With example above:

plz build //package:my_library
plz build //package:my_binary
plz run //package:my_binary
plz test //package:my_library_test
plz build //package:all    # all targets in the sub-dir package
plz build //package/...    # all targets in sub-dir package and below

package is the path from the repo root

https://please.build/quickstart.html

 

 

 


Oracle

Set variables
ORACLE_SID=db
export ORACLE_SID
 
mount /dev/disk/cdrom0c /cdrom
ls /cdrom
cd /cdrom/runInstaller
Insert CD and start installer
ssh -l user remote_machine command
Execute command on remote machine. Without command, opens remote shell
uerf -r 300 -R | more
See OS events linked to startup

Some examples:   

sqlplus /nolog <<END_OF_TEXT_LABEL >>a_file
connect / as sysdba;
shutdown immediate;
END_OF_TEXT_LABEL

With the following, spool_file and output_file contain the same thing, except when there is an error: the error shows in output_file, but spool_file is left untouched (you may want to delete it explicitly).
sqlplus -s $DB_USER/$DB_PWD@$ORACLE_SID <<END_OF_TEXT_LABEL >>output_file
set pagesize 0 feedback off trimspool on
spool spool_file
select * from ...;
spool off
END_OF_TEXT_LABEL

Get the date:
DATE=`date +"%y%m%d"`

 


Mail

Local mail

mail
Read mails, if mails exist
mail user-name
Send a mail to a user on the machine. End with "." on empty line.
<enter>
Read next
d
Delete current mail
p
Print current
-
Print previous
q
quit
 

 


Linux Specifics

Go through this and remove duplicate information

 

Run a script: ./script_name
echo $ORACLE_SID --> shows value of variable
echo ORACLE_SID --> shows "ORACLE_SID"

srvmgrl --> run the server manager

Set an environment variable
variable_name=value; export variable_name (Bourne shell)
setenv variable_name value (C shell)

To update the environment after changing environment variables in .profile or .login:
. .profile (Bourne or Korn Shell)
source .cshrc (C shell)

env
obtain or alter environment variables for command execution
set
set the values of all shell variables
setenv
set environment variables
uname [-a]
display name of the current system. -a gives more details.
cat /etc/issue
Shows the distribution
cat /proc/version
Shows the distribution
hostnamectl
Shows the hostname, and also other information including the distribution and kernel version
uname -srm
Shows the kernel version
lshw
Hardware information
lshw | grep product
PC model number

Software Updates

Ubuntu

Redhat

 

Ubuntu Notes

USB stick is mounted as /media/SWISSMEMORY with /dev/sda1 (on T30) and with /dev/sdb1 (on DELL)
To mount explicitly:
mkdir /mnt/usb-drive
mount /dev/sda1 /mnt/usb-drive
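To unmount afterwards:
umount /mnt/usb-drive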

 

Define the keyboard in the menu System > Preferences > Keyboard. Look for "Switzerland" and expand with the little triangle.

Sudo notes:

Some links:

 

 


FreeNAS

 

Basics

 

Installation

Steps