Oracle Backup & Restore
These are my personal notes that I use as a quick help in my work.
In parameter file:
log_archive_start = true
log_archive_dest_1 = "location=/u01/oradata/db/arch"
log_archive_format = arch_%t_%s.arc
%s = redo log sequence number
%t = thread number
log_archive_max_processes = 3
Stop the database then:
startup mount pfile=...
alter database [no]archivelog;
alter system set log_archive_start = { false | true } scope = spfile; -- 9i
Set archivelog mode in 9i:
stop immediate
startup mount
alter database archivelog;
alter system set log_archive_start = true scope = spfile;
stop immediate
startup
Remove archivelog mode in 9i: same as above, but with "noarchivelog" and "log_archive_start = false":
alter database noarchivelog;
alter system set log_archive_start = false scope = spfile;
Some commands:
ARCHIVE LOG LIST
SELECT SEQUENCE#, DECODE(STATUS, 'CURRENT', '<--', ' '), STATUS, ARCHIVED, DECODE(ARCHIVED, 'NO', '<--', ' ')
FROM V$LOG;
ALTER SYSTEM SWITCH LOGFILE; -- Returns without waiting for the archiving to complete
ALTER SYSTEM ARCHIVE LOG CURRENT; -- Similar, but waits until the archiving is complete
ALTER SYSTEM ARCHIVE LOG STOP; --> stops the ARCH process.
ALTER SYSTEM ARCHIVE LOG START TO 'dest'; -- Redirect archived log files
ALTER SYSTEM ARCHIVE LOG START; -- Restart the archiver after it has failed (for example after disk full)
SHOW PARAMETER ARCHIVE;
ALTER DATABASE [NO] FORCE LOGGING;
ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=3;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = DEFER ;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = ENABLE ;
ALTER SYSTEM SET log_archive_trace=127;
Views
select log_mode from v$database; -- ARCHIVELOG or NOARCHIVELOG mode
select archiver from v$instance;
V$ARCHIVED_LOG -- size in KB: blocks * block_size / 1024
V$ARCHIVE_DEST
V$ARCHIVE_PROCESSES
V$BACKUP_REDOLOG
V$LOG -- (also populated in NOARCHIVELOG mode)
V$LOG_HISTORY
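For example, a quick sketch using the size calculation above (columns as in V$ARCHIVED_LOG):

```sql
-- List archived logs with their size in KB
select name, sequence#, round(blocks * block_size / 1024) kb
from v$archived_log
order by sequence#;
```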
Analysis:
Recommendations:
Physical cold backup
Physical hot backup (datafile by datafile, but all of a tablespace is in backup mode)
alter tablespace x begin backup;
-- backup associated data files
alter tablespace x end backup;
alter system archive log current;
-- backup archived log files
alter database backup controlfile to 'backup_ctrlfile.ctl' reuse; --binary
alter database backup controlfile to trace; -- SQL file in USER_DUMP_DEST
select 'alter database datafile ' || file# || ' end backup;'
from v$backup where status = 'ACTIVE';
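Putting the steps together, a minimal hot-backup sketch for one tablespace (tablespace name and paths are placeholders):

```sql
alter tablespace users begin backup;
-- copy the tablespace's data files at OS level, e.g. from SQL*Plus:
host cp /u01/oradata/db/users01.dbf /backup/users01.dbf
alter tablespace users end backup;
alter system archive log current;
-- then copy the archived log files and back up the control file:
alter database backup controlfile to '/backup/backup_ctrlfile.ctl' reuse;
```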
-- In case of error "File ... needs media recovery"
There is no need to back up on-line redo logs, and it can be dangerous when restoring because the current on-line redo logs could be over-written.
Recovery situations
- Corrupted or missing data file
- Put tablespaces off-line, restore, recover and put on-line (ARCHIVELOG mode).
- All datafiles lost
- Restore the files from a backup and apply the archived logs (ARCHIVELOG mode). Resetlogs is not necessary if on-line redo logs are available (in addition to the archived logs).
- Corrupted or missing control file
- One member: copy from other members
- If all are missing, then recreate the control files
- Redo log file
- Most difficult problem to handle if it is the current redo log file. If only one member is affected, copy from the other members. If all members are affected, then data is lost.
Note that if the database was closed when the redo logs were lost, then there is no problem. If a redo log was already archived, then there is no problem.
- Archived log file
- This only affects the ability to restore the database. Do a full backup of the database as soon as possible.
- Password file
- Recreate the password file
- Init file
- Restore init file from a backup.
- Table accidentally dropped
- Incomplete recovery of the whole database or tablespace point-in-time recovery or import utility
- Corrupt blocks in tables (shows in alert.log file)
- Block recovery if possible, otherwise restore and recover the data file/tablespace.
- Disaster recovery
- Use backed up data on tapes or other media
- Statement error
- Get the Oracle error and handle it
- User error (data deleted)
- Point-in-time recovery (either database or tablespace). Best would be of course to have exports.
- Instance failure
- The database crashes or aborts. Crash recovery is performed automatically from the on-line redo logs at the next startup.
Test various methods!
Recovery
SELECT * FROM v$recover_file;
recover database; -- or alter database recover...
alter database open;
alter tablespace t offline temporary;
recover tablespace t;
alter tablespace t online;
recover datafile 'd';
Should show "Log applied" and then "Media recovery complete".
See also Oracle8i Backup and Recovery Guide, chap. 5, Performing Media Recovery.
If the recovery is incomplete (not all archived logs available and/or on-line redo logs not available), then: alter database open resetlogs; Do a backup as soon as possible. All transactions in the missing archived logs and redo logs are lost. Open resetlogs is always needed for a database in noarchivelog mode.
If old, backed-up on-line redo logs are used, do "open resetlogs" anyway so that the archived log timelines are not duplicated, which could cause confusion if another restore is needed.
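A sketch of such an incomplete (point-in-time) recovery, assuming the data files have already been restored from a backup (the timestamp is a placeholder):

```sql
startup mount;
-- UNTIL TIME uses the fixed format 'YYYY-MM-DD:HH24:MI:SS'
recover database until time '2003-06-15:14:30:00';
alter database open resetlogs;
-- take a full backup as soon as possible
```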
Export (Logical backup)
Exports can be done in table, user or full mode. A query mode exists too.
EXP PARFILE=parms_exp.txt
FILE=files\exp_file
LOG=files\exp_log_file
USERID=user@SID (password entered when executing. User SYS for full)
FULL=Y (Either put FULL=Y or put OWNER=..)
OWNER=the_owner (just one schema)
TABLES=(tablename,tablename)
CONSTRAINTS=Y
BUFFER=1000000
CONSISTENT=Y (necessary if there are a lot of updates during export)
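A sample parms_exp.txt combining the options above (user and file names are placeholders):

```
USERID=scott@ORCL
FILE=files\exp_file
LOG=files\exp_log_file
OWNER=scott
CONSTRAINTS=Y
BUFFER=1000000
CONSISTENT=Y
```

Run with EXP PARFILE=parms_exp.txt (the password is prompted for).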
Imports can be done in table, user or full mode.
IMP PARFILE=parms_imp.txt
FILE=files\exp_file
LOG=files\imp_log_file
USERID=user@new_SID (password entered when executing. User SYS for full)
FULL=Y (Either put FULL=Y or put FROMUSER/TOUSER)
(create database before full import)
FROMUSER=old_user
TOUSER=new_user (generally the same)
TABLES=(tablename,tablename)
GRANTS=Y
INDEXES=Y
ROWS=Y (if N, then no data is imported, but the tables are created)
BUFFER=1000000
SHOW=Y (this does NOT import, but just shows what is in the dump file. Be careful because it says "successfully" but nothing is imported.)
create user "OPS$THE_DOMAIN\THE_USER" identified externally
default tablespace usr_data temporary tablespace temp;
grant create session to "OPS$THE_DOMAIN\THE_USER" ;
grant exp_full_database to "OPS$THE_DOMAIN\THE_USER" ;
alter user "OPS$THE_DOMAIN\THE_USER" default role all;
With OS authentication, put FILE=the_dump_file_name in the_param_file and run:
EXP /@the_sid PARFILE=the_param_file
RMAN
RMAN also requires regular "normal" (OS) disk backups.
A channel ~ a server process.
A recovery catalog is preferred, but do not put it in the target database.
The catalog packages are DBMS_RCVCAT and DBMS_RCVMAN (created by dbmsrman.sql and prvtrmns.plb) and DBMS_BACKUP_RESTORE (created by dbmsbkrs.sql and prvtbkrs.plb).
CREATE TABLESPACE rman_ts DATAFILE '.../RMAN01.dbf' SIZE 20M AUTOEXTEND ON NEXT 20M;
CREATE USER rman IDENTIFIED BY rman
DEFAULT TABLESPACE rman_ts
TEMPORARY TABLESPACE temp
QUOTA UNLIMITED ON rman_ts;
GRANT recovery_catalog_owner TO rman;
GRANT dba, connect, resource TO rman;
GRANT sysdba TO rman;
connect rman/rman
@.../rdbms/admin/dbmsrman.sql
RMAN
RMAN TARGET user/pw@db [RCVCAT rman/rman@the_catalog_db ]
RMAN TARGET ... RCVCAT ... CMDFILE script_file [MSGLOG output_file [APPEND]]
REGISTER DATABASE;
RESET DATABASE;
RESET DATABASE TO INCARNATION id;
LIST INCARNATION OF DATABASE;
RESYNC CATALOG [FROM CONTROLFILECOPY control-file-name];
RESYNC CATALOG FROM BACKUP CONTROLFILE 'control-file-name';
CATALOG { ARCHIVELOG | DATAFILECOPY | CONTROLFILECOPY } '...' TAG = '...';
ALLOCATE CHANNEL name { TYPE=DISK | TYPE='type of tape' | NAME='...' } options;
ALLOCATE CHANNEL FOR DELETE { TYPE=DISK | TYPE='type of tape' | NAME='...' } options;
Backup command details:
%p = piece number within the backup set
%s = backup set number, incremented by the control file
%d = name of target database
%n = padded name of database
%t = timestamp
%u = name created by rman
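For example, a minimal backup using these format specifiers (the path is a placeholder):

```
run {
  allocate channel c1 type disk;
  backup database format '/backup/db_%d_%s_%p.bkp';
  release channel c1;
}
```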
Some views
SET NEWNAME FOR DATAFILE {'old_name' | number } TO 'new_name';
RESTORE DATABASE;
RESTORE DATAFILE number;
RESTORE TABLESPACE '..';
RESTORE ARCHIVELOG ALL ;
SWITCH DATAFILE number_or_new_name;
RECOVER DATABASE [UNTIL ...];
RECOVER TABLESPACE name;
RECOVER DATAFILE name_or_number;
RESTORE CONTROLFILE TO 'location';
RESTORE DATABASE;
SQL "ALTER DATABASE OPEN RESETLOGS";
Point-in-time restore (set NLS_DATE_FORMAT='YYYY-MM-DD:HH24:MI:SS' first):
SET UNTIL TIME = 'yyyy-mm-dd:hh24:mi:ss';
RESTORE DATABASE;
RECOVER DATABASE;
SQL "ALTER DATABASE OPEN RESETLOGS";
To use the following scripts, connect with:
export ORACLE_SID=...
rman TARGET /
Example of backup script (full database backup with control files). Note that BACKUP DATABASE by itself does not back up the archived logs; they must be backed up explicitly.
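A sketch of what such a script could look like, backing up the archived logs and control file explicitly (names and paths are placeholders):

```
run {
  allocate channel c1 type disk;
  backup database include current controlfile
    format '/backup/db_%d_%s_%p.bkp';
  backup archivelog all format '/backup/arch_%d_%s_%p.bkp';
  release channel c1;
}
```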
Example of restore script for one data file:
connect catalog rman/rman@the_db
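A sketch of a possible restore script for a single data file (the file number is a placeholder):

```
run {
  allocate channel c1 type disk;
  sql "alter database datafile 4 offline";
  restore datafile 4;
  recover datafile 4;
  sql "alter database datafile 4 online";
  release channel c1;
}
```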
See scripts, catalog command and change command in the course.
Standby database
Note that the database must be in archivelog mode.
Preparations:
Add an entry for the standby database in the TNSNAMES.ORA (sid=.. or service_name=...) on the primary site and in the LISTENER.ORA on the standby site.
Add a destination for the archived logs in the parameter file:
LOG_ARCHIVE_DEST_3 = 'SERVICE=stby' (where stby is defined in the TNSNAMES.ORA)
STANDBY_ARCHIVE_DEST = "..." (define on standby site)
Note that the DB_NAME must be the same for both the primary and the standby databases.
1. Copy datafiles
alter tablespace ... begin backup;
copy
alter tablespace ... end backup;
select * from v$backup;
Returns "ACTIVE" or "NOT ACTIVE".
2. Create standby controlfile and copy to standby site. See v$controlfile
for file locations.
alter database create standby controlfile as '$ORACLE_BASE/standby_<SID>.ctl' reuse;
select 'rcp ',
'$ORACLE_BASE/standby_<SID>.ctl',
' oracle@<remote_server>:' || substr(name, 1, 100)
from v$controlfile;
3. Copy the newest archive log files to the standby site.
4. Mount:
STARTUP NOMOUNT;
ALTER DATABASE MOUNT STANDBY DATABASE; -- ALTER DATABASE MOUNT won't work anyway
[ALTER DATABASE RENAME FILE 'old_name' TO 'new_name';] -- if needed (or use conversion initialization parameters)
RECOVER [AUTOMATIC] [FROM '.../arch'] STANDBY DATABASE; -- This applies archived logs that were transferred manually
5. Managed recovery mode / open in read-only. Receives the archived logs via Net8. You can go from open read-only to managed standby (the database is closed) only if there are no connections.
ALTER DATABASE OPEN READ ONLY; -- Open in read-only, but does not apply the archived logs
RECOVER MANAGED STANDBY DATABASE [TIMEOUT integer]; -- Automatically applies archived logs that are sent
6. Tell main (primary) database where to archive. Remember to modify the init.ora. Note that it may still be necessary to shutdown/startup for archived logs to be sent automatically.
-- modify init.ora and add log_archive_dest_2 = 'service=... OPTIONAL reopen=60'
ALTER SYSTEM SET log_archive_dest_2 = 'service=... OPTIONAL reopen=60';
ALTER SYSTEM SET log_archive_dest_state_2 = DEFER;
ALTER SYSTEM SET log_archive_dest_state_2 = ENABLE;
If the standby database is stopped, then there is a gap. Copy the files manually, then:
RECOVER STANDBY DATABASE;
RECOVER MANAGED STANDBY DATABASE;
Activation / failover (cannot be undone). Open in read-only to test, do not activate.
ALTER SYSTEM ARCHIVE LOG CURRENT; -- On primary database
RECOVER AUTOMATIC [FROM '/logs'] STANDBY DATABASE;
ALTER DATABASE ACTIVATE STANDBY DATABASE;
ALTER TABLESPACE TEMP ADD TEMPFILE '...' REUSE; -- Add the temporary files
After activation, it is best to back up immediately.
SHUTDOWN IMMEDIATE;
backup
STARTUP;
N.B. The backup of a database in managed recovery is not consistent, but the backup of a database in read-only mode is consistent.
If a no-logging operation occurred on the primary database, then copy the database from the primary to the standby:
alter system set log_archive_dest_state_2 = defer;
alter tablespace ... begin backup;
copy
alter tablespace ... end backup;
select * from v$backup;
alter database create standby controlfile as
'$ORACLE_BASE/standby_<SID>.ctl' reuse;
select 'rcp ', '$ORACLE_BASE/standby_<SID>.ctl',
' oracle@<remote_server>:' || substr(name, 1, 100)
from v$controlfile;
STARTUP NOMOUNT;
ALTER DATABASE MOUNT STANDBY DATABASE;
RECOVER STANDBY DATABASE;
alter system set log_archive_dest_state_2 = enable;
If data files are in backup mode, stop the database, then:
startup nomount
alter database mount standby database;
select 'alter database datafile ' || file# || ' end backup;' from v$backup
where status = 'ACTIVE';
select * from v$backup;
--> verify that the backup is "NOT ACTIVE"
N.B. New parameter in 9i: STANDBY_FILE_MANAGEMENT=AUTO, which causes replication of file creation on the standby database.
select max (sequence#) from v$log_history;
select max(sequence#) from v$archived_log;
select max(sequence#) from v$archived_log where trim(name)
in (select trim(destination) from v$archive_dest where target = 'STANDBY');
select status, sequence# from v$log;
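The queries above can be combined into a single gap check (a sketch: it compares the last sequence generated on the primary with the last one archived):

```sql
select (select max(sequence#) from v$log_history)  last_generated,
       (select max(sequence#) from v$archived_log) last_archived
from dual;
```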
Real Application Clusters (RAC)
(Needs Enterprise Edition.) These notes are from the 9i documentation.
The operating system-dependent (OSD) components contain two primary subcomponents: the Cluster Manager (CM) and the Interprocess Communication (IPC) software (UNIX-specific; on Windows they come with the Oracle installation).
See database basics about the SID and database names.
The control file, server parameter file, and all datafiles are shared; they reside on a shared cluster file system or on shared disks. Each instance has its own redo log files, with shared access (names include the instance number: db_redo1_1 db_redo1_2 for instance 1, db_redo2_1 db_redo2_2 for instance 2). The files must be shared in a cluster file system environment or must be shared raw devices in UNIX environments, or a shared logical partition in Windows environments.(NMI&U).
Disk setup needs extra care: see Oracle documentation. Installation is done on one node; the OUI copies to the other nodes. This includes patches, etc.
To create a database with DBCA, use the "Oracle cluster database" option. If nodes are missing in the list, perform diagnostics.
Use the srvconfig command to import or export the contents of the shared configuration file to or from a text file. The srvConfig.loc file is in the /var/opt/oracle directory (HP-UX: /etc).
Do not use the default location of the server parameter file, because all instances must use the same spfile. Instead, put a pfile ('init$ORACLE_SID.ora' or 'init%ORACLE_SID%.ora') in the default location containing: SPFILE='/dev/vx/rdsk/oracle_dg/db_spfile'
SRVCTL architecture: the console talks to the agent on each node, which talks to SRVCTL (server control), which talks to the GSD (Global Services Daemon), not only on the local node but on all nodes (communication with RMI). The GSD must run at all times (check with $ORACLE_HOME/bin/gsdctl stat).
TNSNAMES.ORA must have entries for the database and for each instance.
LISTENER.ORA must have a listener defined for each node with the TCP/IP address, and entries for the SIDs in SID_LIST_listener_name.
When load balancing is on, the client randomly selects a database to connect to.
Cross-node registration:
This requires configuration of the LOCAL_LISTENER and REMOTE_LISTENER (global
list of listeners) initialization parameters. The LOCAL_LISTENER parameter identifies
the local listener and the REMOTE_LISTENER parameter identifies the global list
of listeners of the instance. (The dispatcher setting can be changed to override
the local and remote listener settings.)
The default listener's name has the node name included (in addition to the SID).
Do not use the global_dbname so as to allow failover.
See the Net8 page for information about the TNSNAMES.ORA file.
include FAILOVER_MODE parameter settings in the CONNECT_DATA portion of a connect descriptor. See other documentation for this.
See the V$SESSION view for details on fail overs (specifically columns FAILOVER_TYPE, FAILOVER_METHOD, and FAILED_OVER ).
Primary/secondary configurations: configure the parameter file with ACTIVE_INSTANCE_COUNT=1 on both instances. Turn load balancing off.
"Golden Rule" for oracle recovery: two disk drives:
Data disk drive with
- The data files, control files and on-line redo logs, parameter file, tnsnames.ora, and listener.ora
Backup disk drive with
- Backups of the datafiles and control files
- Archived log files since the last backup
- Duplicates of the control files and on-line redo logs (multiplexing, mirroring, or whatever). A problem with multiplexing the control file can make the whole instance fail, so also use mirroring or another type of copy for the control files. Do not back up on-line redo logs, because it is useless.
- Copies of parameter file, tnsnames.ora, and listener.ora
For emergencies, keep this information handy:
You should also keep the following documentation about the software configuration:
Edit and put these in appropriate places:
ALTER DATABASE MOUNT STANDBY DATABASE;
ALTER DATABASE OPEN READ ONLY;
ALTER DATABASE ACTIVATE STANDBY DATABASE;
ALTER DATABASE OPEN [READ WRITE] [NO]RESETLOGS;
Export / import
Workaround for corruption with a large buffer:
export ORA_OCI_NO_OPTIMIZED_FETCH=1 (UNIX ksh)
set ORA_OCI_NO_OPTIMIZED_FETCH=1 (Windows)
dbverify:
dbv file=the_datafile
Fast table copy:
CREATE TABLE tgt_tb NOLOGGING AS SELECT * FROM src_tb@db WHERE 0=1;
ALTER SESSION ENABLE PARALLEL DML; -- on releases without the ENABLE_PARALLEL_DML hint
INSERT /*+ APPEND ENABLE_PARALLEL_DML */ INTO tgt_tb SELECT * FROM src_tb@db WHERE ....;
COMMIT;