Oracle Database
on Linux OS
Wednesday, March 3, 2021
Database Administration Tips: Upgrade from 11.2.0.3 to 19c Using GoldenGate [Coo...: Introduction: In this post, I'll discuss the migration of 11.2.0.3 RAC DB on Oracle Linux 6 to 19.4 RAC on a different cluster on Ora...
Tuesday, February 2, 2016
"ORA-10567: Redo is inconsistent with data block" on Physical Standby 11g
Today is a wonderful day. It's cool and sunny; we are in spring. But I don't want to talk about the weather right now. I'm sipping a cup of coffee and writing a note about my Oracle Database experience from yesterday.
I have an 11g Data Guard configuration with a physical standby. After a long period of activity and some end-of-year batch jobs, the UNDO tablespace had grown very large, yet a quick check showed that only a small part of that space was actually in use. So I decided to resize the datafile of the UNDO tablespace. The commands succeeded and the file now has a smaller size.
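For reference, the resize itself is just a datafile resize on the primary. The path and target size below are illustrative only, not the exact values I used:
SQL> -- hypothetical path and target size; use your own UNDO datafile and a size above its actual usage
SQL> alter database datafile '/oradata/PROD/undotbs02.dbf' resize 8G;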
But on the standby database, although archived logs were still being received from the primary database, they could not be applied. The alert log showed the error:
Recovered data files to a consistent state at change 1443961551
Errors in file /oracle/app/diag/rdbms/proddr/PRODDR/trace/PRODDR_pr00_2930.trc:
ORA-00448: normal completion of background process
Errors in file /oracle/app/diag/rdbms/proddr/PRODDR/trace/PRODDR_mrp0_2918.trc:
ORA-00600: internal error code, arguments: [3020], [6], [11627], [25177451], [], [], [], [], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 6, block# 11627, file offset is 95248384 bytes)
ORA-10564: tablespace UNDOTBS1
ORA-01110: data file 6: '/oradata/PRODDR/undotbs02.dbf'
ORA-10560: block type 'KTU UNDO BLOCK'
MRP0: Background Media Recovery process shutdown (PRODDR)
I have faced loss of synchronization on a standby database before and resolved it by following the guide at http://arup.blogspot.com/2009/12/resolving-gaps-in-data-guard-apply.html. But yesterday's error was different. The following are the steps I performed to fix it.
Check the SCN on Primary:
SQL> select current_scn, scn_to_timestamp(current_scn) from v$database;
CURRENT_SCN SCN_TO_TIMESTAMP(CURRENT_SCN)
----------- ---------------------------------------------------------------------------
1447116920 02-FEB-16 10.43.58.000000000 AM
Check the SCN on Standby:
SQL> select current_scn from v$database;
CURRENT_SCN
-----------
1446590956
SQL> select scn_to_timestamp(1446590956) from dual;
SCN_TO_TIMESTAMP(1446590956)
---------------------------------------------------------------------------
02-FEB-16 09.26.47.000000000 AM
Check the archived logs on the standby: logs are transferred but cannot be applied
SQL> select dest_id, sequence#, applied, deleted, to_char(first_time,'dd/mm/yyyy hh24:mi:ss') first_time
2 from (
3 select dest_id, first_time, sequence#, applied, deleted from v$archived_log order by first_time desc, dest_id
4 )
5 where rownum<=5
6 ;
DEST_ID SEQUENCE# S APPLIED DEL FIRST_TIME
---------- ---------- - --------- --- -------------------
1 19220 A NO NO 02/02/2016 10:30:18
1 19219 A NO NO 02/02/2016 10:13:53
1 19218 A NO NO 02/02/2016 09:56:23
1 19217 A NO NO 02/02/2016 09:39:35
1 19216 A NO NO 02/02/2016 09:30:16
Check the processes on the standby: the MRP (apply) process is not running
SQL> select process, status from v$managed_standby;
PROCESS STATUS
--------- ------------
ARCH CLOSING
ARCH CLOSING
ARCH CONNECTED
ARCH CLOSING
RFS IDLE
RFS IDLE
On the primary: back up datafile 6 (the file reported in the alert log) and copy the backup piece to the standby host:
RMAN> backup format '/BACKUP/undofile2%U' datafile 6;
# scp /BACKUP/undofile2d8qsrpml_1_1 oracle@standby-host:/BACKUP/
On the standby: catalog the new backup piece; cancel recovery, shut down, mount the database, and restore datafile 6:
RMAN> catalog start with '/BACKUP/undofile2d8qsrpml_1_1';
SQL> alter database recover managed standby database cancel;
SQL> shutdown immediate;
SQL> startup mount;
RMAN> restore datafile 6;
Starting restore at 01-FEB-16
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=577 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=768 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=959 device type=DISK
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00006 to /oradata/PRODDR/undotbs02.dbf
channel ORA_DISK_1: reading from backup piece /BACKUP/undofile2d8qsrpml_1_1
channel ORA_DISK_1: piece handle=/BACKUP/undofile2d8qsrpml_1_1 tag=TAG20160201T181341
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:36
Finished restore at 01-FEB-16
Start the apply process:
SQL> alter database recover managed standby database using current logfile disconnect from session;
Now you can re-check the archived logs and the MRP process on the standby, as shown above.
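A quick sanity check, using the same views as before: MRP0 should now be listed (typically with status APPLYING_LOG or WAIT_FOR_LOG), and the applied sequence should keep moving forward:
SQL> select process, status, sequence# from v$managed_standby where process like 'MRP%';
SQL> select max(sequence#) from v$archived_log where applied = 'YES';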
Then take a cup of coffee as a reward for the hard work.
Finally, I want to say thank you to Bhavani and Arup Nanda. Your posts helped me solve my problems.
Wednesday, January 6, 2016
Configuration files for Oracle Business Intelligence system components
BI Component: Oracle BI Server
Configuration files: NQSConfig.INI, logconfig.xml
File location: ORACLE_INSTANCE\config\OracleBIServerComponent\coreapplication_obisn
For example: \instances\instance1\config\OracleBIServerComponent\coreapplication_obis1
Note: Although DBFeatures.ini is also located in this directory, do not edit this file directly. See Oracle Fusion Middleware Metadata Repository Builder's Guide for Oracle Business Intelligence Enterprise Edition for information about how to edit features for a database.

BI Component: Oracle BI Presentation Services
Configuration files: instanceconfig.xml
File location: ORACLE_INSTANCE\config\OracleBIPresentationServicesComponent\coreapplication_obipsn
For example: \instances\instance1\config\OracleBIPresentationServicesComponent\coreapplication_obips1
Note: Do not add elements to the instanceconfig.xml file unless you are overriding the stated default values. Override only those settings that are necessary for configuring the system to meet the needs of your organization.

BI Component: Cluster Controller
Configuration files: ClusterConfig.xml, ccslogging.xml
File location: ORACLE_INSTANCE\config\OracleBIApplication\coreapplication
For example: \instances\instance1\config\OracleBIApplication\coreapplication

BI Component: Oracle BI Scheduler
Configuration files: instanceconfig.xml, ccslogging.xml (for Cluster Controller)
File location: ORACLE_INSTANCE\config\OracleBISchedulerComponent\coreapplication_obischn
For example: \instances\instance1\config\OracleBISchedulerComponent\coreapplication_obisch1

BI Component: JavaHost
Configuration files: config.xml, logging_config.xml
File location: ORACLE_INSTANCE\config\OracleBIJavaHostComponent\coreapplication_obijhn
For example: \instances\instance1\config\OracleBIJavaHostComponent\coreapplication_obijh1

BI Component: Oracle BI Presentation Services Plug-in
Configuration files: bridgeconfig.properties
File location: MW_HOME\user_projects\domains\domain_name\config\fmwconfig\biinstances\coreapplication
For example: mw_home\user_projects\domains\bifoundation_domain\config\fmwconfig\biinstances\coreapplication
Friday, May 8, 2015
Exporting data from Table to CSV file
One beautiful day, your manager says: "Give me the database log." Of course, you cannot tell your manager that the log is far too large to query and read; I'm sure you will take on the task cheerfully. You will probably think of exporting the log data from tables to CSV files. If that is what you have in mind, the following is what you can do to finish your task.
I'm not the author of these scripts; I only cache them here and have re-tested them.
CREATE OR REPLACE PACKAGE csv AS
--
--------------------------------------------------------------------------
-- Name         : http://oracle-base.com/dba/miscellaneous/cvs.sql
-- Author       : Tim Hall
-- Description  : Basic CSV API. For usage notes see:
--                http://oracle-base.com/articles/9i/GeneratingCSVFiles.php
--
-- CREATE OR REPLACE DIRECTORY dba_dir AS '/u01/app/oracle/dba/';
-- ALTER SESSION SET NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS';
--
-- EXEC csv.generate('DBA_DIR', 'generate.csv', p_query => 'SELECT * FROM emp');
--
-- Requirements : UTL_FILE, DBMS_SQL
-- Amendments   :
--   When         Who       What
--   ===========  ========  =================================================
--   14-MAY-2005  Tim Hall  Initial Creation
--
--------------------------------------------------------------------------
PROCEDURE generate (p_dir IN VARCHAR2,
p_file IN VARCHAR2,
p_query IN VARCHAR2);
END csv;
/
SHOW ERRORS
CREATE OR REPLACE PACKAGE BODY csv AS
--
--------------------------------------------------------------------------
-- Name         : http://oracle-base.com/dba/miscellaneous/cvs.sql
-- Author       : Tim Hall
-- Description  : Basic CSV API. For usage notes see:
--                http://oracle-base.com/articles/9i/GeneratingCSVFiles.php
--
-- CREATE OR REPLACE DIRECTORY dba_dir AS '/u01/app/oracle/dba/';
-- ALTER SESSION SET NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS';
--
-- EXEC csv.generate('DBA_DIR', 'generate.csv', p_query => 'SELECT * FROM emp');
--
-- Requirements : UTL_FILE, DBMS_SQL
-- Amendments   :
--   When         Who       What
--   ===========  ========  =================================================
--   14-MAY-2005  Tim Hall  Initial Creation
--
--------------------------------------------------------------------------
g_sep VARCHAR2(5) := ',';
PROCEDURE generate (p_dir IN VARCHAR2,
p_file IN VARCHAR2,
p_query IN VARCHAR2) AS
l_cursor PLS_INTEGER;
l_rows PLS_INTEGER;
l_col_cnt PLS_INTEGER;
l_desc_tab DBMS_SQL.desc_tab;
l_buffer VARCHAR2(32767);
l_file UTL_FILE.file_type;
BEGIN
l_cursor := DBMS_SQL.open_cursor;
DBMS_SQL.parse(l_cursor, p_query, DBMS_SQL.native);
DBMS_SQL.describe_columns (l_cursor, l_col_cnt, l_desc_tab);
FOR i IN 1 .. l_col_cnt LOOP
DBMS_SQL.define_column(l_cursor, i, l_buffer, 32767 );
END LOOP;
l_rows := DBMS_SQL.execute(l_cursor);
l_file := UTL_FILE.fopen(p_dir, p_file, 'w', 32767);
-- Output the column names.
FOR i IN 1 .. l_col_cnt LOOP
IF i > 1 THEN
UTL_FILE.put(l_file, g_sep);
END IF;
UTL_FILE.put(l_file, l_desc_tab(i).col_name);
END LOOP;
UTL_FILE.new_line(l_file);
-- Output the data.
LOOP
EXIT WHEN DBMS_SQL.fetch_rows(l_cursor) = 0;
FOR i IN 1 .. l_col_cnt LOOP
IF i > 1 THEN
UTL_FILE.put(l_file, g_sep);
END IF;
DBMS_SQL.COLUMN_VALUE(l_cursor, i, l_buffer);
UTL_FILE.put(l_file, l_buffer);
END LOOP;
UTL_FILE.new_line(l_file);
END LOOP;
UTL_FILE.fclose(l_file);
EXCEPTION
WHEN OTHERS THEN
IF UTL_FILE.is_open(l_file) THEN
UTL_FILE.fclose(l_file);
END IF;
IF DBMS_SQL.is_open(l_cursor) THEN
DBMS_SQL.close_cursor(l_cursor);
END IF;
RAISE;
END generate;
END csv;
/
SHOW ERRORS
exec sys.csv.generate('DUMP_DIR', 'testcsv.dat', 'select username, account_status from dba_users');
Running the call above produces a CSV file with a header line followed by the data rows. If you want to split the output across multiple files, the following modified version of the package adds a p_row_per_file parameter:
CREATE OR REPLACE PACKAGE csv AS
--
--------------------------------------------------------------------------
-- Name         : http://oracle-base.com/dba/miscellaneous/cvs.sql
-- Author       : Tim Hall
-- Description  : Basic CSV API. For usage notes see:
--                http://oracle-base.com/articles/9i/GeneratingCSVFiles.php
--
-- CREATE OR REPLACE DIRECTORY dba_dir AS '/u01/app/oracle/dba/';
-- ALTER SESSION SET NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS';
--
-- EXEC csv.generate('DBA_DIR', 'generate.csv', p_query => 'SELECT * FROM emp');
--
-- Requirements : UTL_FILE, DBMS_SQL
-- Amendments   :
--   When         Who       What
--   ===========  ========  =================================================
--   14-MAY-2005  Tim Hall  Initial Creation
--
--------------------------------------------------------------------------
PROCEDURE generate (p_dir IN VARCHAR2,
p_file IN VARCHAR2,
p_query IN VARCHAR2,
p_row_per_file IN NUMBER :=0);
END csv;
/
CREATE OR REPLACE PACKAGE BODY csv AS
--
--------------------------------------------------------------------------
-- Name         : http://oracle-base.com/dba/miscellaneous/cvs.sql
-- Author       : Tim Hall
-- Description  : Basic CSV API. For usage notes see:
--                http://oracle-base.com/articles/9i/GeneratingCSVFiles.php
--
-- CREATE OR REPLACE DIRECTORY dba_dir AS '/u01/app/oracle/dba/';
-- ALTER SESSION SET NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS';
--
-- EXEC csv.generate('DBA_DIR', 'generate.csv', p_query => 'SELECT * FROM emp');
--
-- Requirements : UTL_FILE, DBMS_SQL
-- Amendments   :
--   When         Who       What
--   ===========  ========  =================================================
--   14-MAY-2005  Tim Hall  Initial Creation
--
--------------------------------------------------------------------------
g_sep VARCHAR2(5) := ',';
PROCEDURE generate (p_dir IN VARCHAR2,
p_file IN VARCHAR2,
p_query IN VARCHAR2,
p_row_per_file IN NUMBER :=0) AS
l_cursor PLS_INTEGER;
l_rows PLS_INTEGER;
l_col_cnt PLS_INTEGER;
l_desc_tab DBMS_SQL.desc_tab;
l_buffer VARCHAR2(32767);
l_file UTL_FILE.file_type;
i_row PLS_INTEGER;
i_page PLS_INTEGER;
BEGIN
l_cursor := DBMS_SQL.open_cursor;
DBMS_SQL.parse(l_cursor, p_query, DBMS_SQL.native);
DBMS_SQL.describe_columns(l_cursor, l_col_cnt, l_desc_tab);
FOR i IN 1 .. l_col_cnt LOOP
DBMS_SQL.define_column(l_cursor, i, l_buffer, 32767 );
END LOOP;
l_rows := DBMS_SQL.execute(l_cursor);
i_page := 0;
l_file := UTL_FILE.fopen(p_dir, TO_CHAR(i_page)||p_file, 'w', 32767);
-- Output the column names.
FOR i IN 1 .. l_col_cnt LOOP
IF i > 1 THEN
UTL_FILE.put(l_file, g_sep);
END IF;
UTL_FILE.put(l_file, l_desc_tab(i).col_name);
END LOOP;
UTL_FILE.new_line(l_file);
-- Output the data.
i_row := 0;
LOOP
EXIT WHEN DBMS_SQL.fetch_rows(l_cursor) = 0;
FOR i IN 1 .. l_col_cnt LOOP
IF i > 1 THEN
UTL_FILE.put(l_file, g_sep);
END IF;
DBMS_SQL.COLUMN_VALUE(l_cursor, i, l_buffer);
UTL_FILE.put(l_file, l_buffer);
END LOOP;
UTL_FILE.new_line(l_file);
i_row := i_row+1 ;
-- New file with column header
IF nvl(p_row_per_file,0) >= 10 THEN
IF MOD(i_row, p_row_per_file)=0 THEN
i_page := i_page + 1;
UTL_FILE.fclose(l_file);
l_file := UTL_FILE.fopen(p_dir, TO_CHAR(i_page)||p_file, 'w', 32767);
-- Output the column names.
FOR i IN 1 .. l_col_cnt LOOP
IF i > 1 THEN
UTL_FILE.put(l_file, g_sep);
END IF;
UTL_FILE.put(l_file, l_desc_tab(i).col_name);
END LOOP;
UTL_FILE.new_line(l_file);
END IF;
END IF;
END LOOP;
UTL_FILE.fclose(l_file);
EXCEPTION
WHEN OTHERS THEN
IF UTL_FILE.is_open(l_file) THEN
UTL_FILE.fclose(l_file);
END IF;
IF DBMS_SQL.is_open(l_cursor) THEN
DBMS_SQL.close_cursor(l_cursor);
END IF;
RAISE;
END generate;
END csv;
/
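The call is the same as before, with one extra parameter. As a usage sketch (assuming the same DUMP_DIR directory object as above), the following should write files named 0testcsv.dat, 1testcsv.dat, and so on, each full file holding 1000 data rows plus a header line. Note that the package only splits when p_row_per_file is 10 or more, and a small result set like this one will most likely fit in the single file 0testcsv.dat:
exec sys.csv.generate('DUMP_DIR', 'testcsv.dat', 'select username, account_status from dba_users', p_row_per_file => 1000);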