Wednesday, April 15, 2009
Here are some scripts related to tables and indexes.
Tabs w/ Questionable Inds
TABLES WITH QUESTIONABLE INDEX(ES) NOTES:
• Owner - Owner of the table
• Table Name - Name of the table
• Column - Name of the column in question
• The query below shows all tables that have more than one index with the same leading column. These indexes can cause queries to use an inappropriate index; in other words, Oracle will use the index that was created most recently if two indexes are of equal ranking. This can cause different indexes to be used from one environment to the next (e.g., from DEV to TEST to PROD).
• The information does not automatically indicate that an index is incorrect; however, you may need to justify the existence of each of the indexes above.
select TABLE_OWNER,
TABLE_NAME,
COLUMN_NAME
from dba_ind_columns
where COLUMN_POSITION=1
and TABLE_OWNER not in ('SYS','SYSTEM')
group by TABLE_OWNER, TABLE_NAME, COLUMN_NAME
having count(*) > 1
Tabs With More Than 5 Inds
TABLES WITH MORE THAN 5 INDEXES NOTES:
• Owner - Owner of the table
• Table Name - Name of the table
• Index Count - Number of indexes
select OWNER,
TABLE_NAME,
COUNT(*) index_count
from dba_indexes
where OWNER not in ('SYS','SYSTEM')
group by OWNER, TABLE_NAME
having COUNT(*) > 5
order by COUNT(*) desc, OWNER, TABLE_NAME
Tables With No Indexes
TABLES WITHOUT INDEXES NOTES:
• Owner - Owner of the table
• Table Name - Name of the table
select OWNER,
TABLE_NAME
from
(
select OWNER,
TABLE_NAME
from dba_tables
minus
select TABLE_OWNER,
TABLE_NAME
from dba_indexes
)
orasnap_noindex
where OWNER not in ('SYS','SYSTEM')
order by OWNER,TABLE_NAME
Tables With No PK
NO PRIMARY KEY NOTES:
• Table Owner - Owner of the table
• Table Name - Name of the table
select OWNER,
TABLE_NAME
from dba_tables dt
where not exists (
select 'TRUE'
from dba_constraints dc
where dc.OWNER = dt.OWNER
and dc.TABLE_NAME = dt.TABLE_NAME
and dc.CONSTRAINT_TYPE='P')
and OWNER not in ('SYS','SYSTEM')
order by OWNER, TABLE_NAME
Disabled Constraints
DISABLED CONSTRAINT NOTES:
• Owner - Owner of the table
• Table Name - Name of the table
• Constraint Name - Name of the constraint
• Constraint Type - Type of constraint
• Status - Current status of the constraint
select OWNER,
TABLE_NAME,
CONSTRAINT_NAME,
decode(CONSTRAINT_TYPE, 'C','Check',
'P','Primary Key',
'U','Unique',
'R','Foreign Key',
'V','With Check Option') type,
STATUS
from dba_constraints
where STATUS = 'DISABLED'
order by OWNER, TABLE_NAME, CONSTRAINT_NAME
FK Constraints
FOREIGN KEY CONSTRAINTS NOTES:
• Table Owner - Owner of the table
• Table Name - Name of the table
• Constraint Name - Name of the constraint
• Column Name - Name of the column
• Referenced Table - Name of the referenced table
• Reference Column - Name of the referenced column
• Position - Position of the column
select c.OWNER,
c.TABLE_NAME,
c.CONSTRAINT_NAME,
cc.COLUMN_NAME,
r.TABLE_NAME referenced_table,
rc.COLUMN_NAME referenced_column,
cc.POSITION
from dba_constraints c,
dba_constraints r,
dba_cons_columns cc,
dba_cons_columns rc
where c.CONSTRAINT_TYPE = 'R'
and c.OWNER not in ('SYS','SYSTEM')
and c.R_OWNER = r.OWNER
and c.R_CONSTRAINT_NAME = r.CONSTRAINT_NAME
and c.CONSTRAINT_NAME = cc.CONSTRAINT_NAME
and c.OWNER = cc.OWNER
and r.CONSTRAINT_NAME = rc.CONSTRAINT_NAME
and r.OWNER = rc.OWNER
and cc.POSITION = rc.POSITION
order by c.OWNER, c.TABLE_NAME, c.CONSTRAINT_NAME, cc.POSITION
FK Index Problems
FK CONSTRAINTS WITHOUT INDEX ON CHILD TABLE NOTES:
• Owner - Owner of the table
• Constraint Name - Name of the constraint
• Column Name - Name of the column
• Position - Position of the index
• Problem - Nature of the problem
• It is highly recommended that an index be created if the foreign key column is used in joins or often appears in a WHERE clause. Otherwise, Oracle may place a table-level lock on the child table whenever the parent key is updated or deleted.
select acc.OWNER,
acc.CONSTRAINT_NAME,
acc.COLUMN_NAME,
acc.POSITION,
'No Index' Problem
from dba_cons_columns acc,
dba_constraints ac
where ac.CONSTRAINT_NAME = acc.CONSTRAINT_NAME
and ac.OWNER = acc.OWNER
and ac.CONSTRAINT_TYPE = 'R'
and acc.OWNER not in ('SYS','SYSTEM')
and not exists (
select 'TRUE'
from dba_ind_columns b
where b.TABLE_OWNER = acc.OWNER
and b.TABLE_NAME = acc.TABLE_NAME
and b.COLUMN_NAME = acc.COLUMN_NAME
and b.COLUMN_POSITION = acc.POSITION)
order by acc.OWNER, acc.CONSTRAINT_NAME, acc.COLUMN_NAME, acc.POSITION
Inconsistent Column Datatypes
INCONSISTENT COLUMN DATATYPE NOTES:
• Owner - Owner of the table
• Column - Name of the column
• Table Name - Name of the table
• Datatype - Precision (for NUMBER columns) or length of the column
select OWNER,
COLUMN_NAME,
TABLE_NAME,
decode(DATA_TYPE, 'NUMBER', DATA_PRECISION, DATA_LENGTH) datatype
from dba_tab_columns
where (COLUMN_NAME, OWNER) in
(select COLUMN_NAME,
OWNER
from dba_tab_columns
group by COLUMN_NAME, OWNER
having min(decode(DATA_TYPE, 'NUMBER', DATA_PRECISION, DATA_LENGTH)) <
max(decode(DATA_TYPE, 'NUMBER', DATA_PRECISION, DATA_LENGTH)) )
and OWNER not in ('SYS', 'SYSTEM')
order by COLUMN_NAME,DATA_TYPE
Object Extent Warning
TABLES THAT CANNOT EXTEND NOTES:
• Owner - Owner of the object
• Object Name - Name of the object
• Object Type - Type of object
• Tablespace - Name of the tablespace
• Next Extent - Size of next extent (bytes)
select OWNER,
SEGMENT_NAME,
SEGMENT_TYPE,
TABLESPACE_NAME,
NEXT_EXTENT
from (
select seg.OWNER,
seg.SEGMENT_NAME,
seg.SEGMENT_TYPE,
seg.TABLESPACE_NAME,
t.NEXT_EXTENT
from dba_segments seg,
dba_tables t
where (seg.SEGMENT_TYPE = 'TABLE'
and seg.SEGMENT_NAME = t.TABLE_NAME
and seg.owner = t.OWNER
and NOT EXISTS (
select TABLESPACE_NAME
from dba_free_space free
where free.TABLESPACE_NAME = t.TABLESPACE_NAME
and BYTES >= t.NEXT_EXTENT))
union
select seg.OWNER,
seg.SEGMENT_NAME,
seg.SEGMENT_TYPE,
seg.TABLESPACE_NAME,
c.NEXT_EXTENT
from dba_segments seg,
dba_clusters c
where (seg.SEGMENT_TYPE = 'CLUSTER'
and seg.SEGMENT_NAME = c.CLUSTER_NAME
and seg.OWNER = c.OWNER
and NOT EXISTS (
select TABLESPACE_NAME
from dba_free_space free
where free.TABLESPACE_NAME = c.TABLESPACE_NAME
and BYTES >= c.NEXT_EXTENT))
union
select seg.OWNER,
seg.SEGMENT_NAME,
seg.SEGMENT_TYPE,
seg.TABLESPACE_NAME,
i.NEXT_EXTENT
from dba_segments seg,
dba_indexes i
where (seg.SEGMENT_TYPE = 'INDEX'
and seg.SEGMENT_NAME = i.INDEX_NAME
and seg.OWNER = i.OWNER
and NOT EXISTS (
select TABLESPACE_NAME
from dba_free_space free
where free.TABLESPACE_NAME = i.TABLESPACE_NAME
and BYTES >= i.NEXT_EXTENT))
union
select seg.OWNER,
seg.SEGMENT_NAME,
seg.SEGMENT_TYPE,
seg.TABLESPACE_NAME,
r.NEXT_EXTENT
from dba_segments seg,
dba_rollback_segs r
where (seg.SEGMENT_TYPE = 'ROLLBACK'
and seg.SEGMENT_NAME = r.SEGMENT_NAME
and seg.OWNER = r.OWNER
and NOT EXISTS (
select TABLESPACE_NAME
from dba_free_space free
where free.TABLESPACE_NAME = r.TABLESPACE_NAME
and BYTES >= r.NEXT_EXTENT))
)
orasnap_objext_warn
order by OWNER,SEGMENT_NAME
Segment Fragmentation
OBJECTS WITH MORE THAN 50% OF MAXEXTENTS NOTES:
• Owner - Owner of the object
• Tablespace Name - Name of the tablespace
• Segment Name - Name of the segment
• Segment Type - Type of segment
• Size - Size of the object (bytes)
• Extents - Current number of extents
• Max Extents - Maximum extents for the segment
• Percentage - Percentage of extents in use
• As of v7.3.4, you can set MAXEXTENTS=UNLIMITED to avoid ORA-01631: max # extents (%s) reached in table %s.%s.
• To calculate the MAXEXTENTS value on versions < 7.3.4, use the following equation: DB_BLOCK_SIZE / 16 - 7
• Here are the MAXEXTENTS for common blocksizes: 1K=57, 2K=121, 4K=249, 8K=505, and 16K=1017
• Multiple extents in and of themselves aren't bad. However, if you also have chained rows, this can hurt performance.
select OWNER,
TABLESPACE_NAME,
SEGMENT_NAME,
SEGMENT_TYPE,
BYTES,
EXTENTS,
MAX_EXTENTS,
(EXTENTS/MAX_EXTENTS)*100 percentage
from dba_segments
where SEGMENT_TYPE in ('TABLE','INDEX')
and EXTENTS > MAX_EXTENTS/2
order by (EXTENTS/MAX_EXTENTS) desc
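The pre-7.3.4 MAXEXTENTS arithmetic above can be sanity-checked straight from SQL*Plus; a quick sketch using DUAL:

```sql
-- Sanity-check the pre-7.3.4 MAXEXTENTS formula (DB_BLOCK_SIZE / 16 - 7)
-- for the common block sizes listed above.
select 1024/16 - 7   "1K",
       2048/16 - 7   "2K",
       4096/16 - 7   "4K",
       8192/16 - 7   "8K",
       16384/16 - 7  "16K"
from dual;
```

This should return 57, 121, 249, 505, and 1017, matching the table of common block sizes given in the notes.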
Extents reaching maximum
SEGMENTS WITHIN 3 EXTENTS OF MAXIMUM NOTES:
• Owner - Owner of the segment
• Segment Name - Name of the segment
• Type - Type of segment
• Tablespace - Name of the tablespace
• Ext - Current number of extents
• Max - Maximum extents for the segment
select owner "Owner",
segment_name "Segment Name",
segment_type "Type",
tablespace_name "Tablespace",
extents "Ext",
max_extents "Max"
from dba_segments
where ((max_extents - extents) <= 3)
and owner not in ('SYS','SYSTEM')
order by owner, segment_name
Analyzed Tables
ANALYZED TABLE NOTES:
• Owner - Owner of the table
• Analyzed - Number of analyzed tables
• Not Analyzed - Number of tables that have not been analyzed
• Total - Total number of tables owned by user
• The ANALYZE statement allows you to validate and compute statistics for an index, table, or cluster. These statistics are used by the cost-based optimizer when it calculates the most efficient plan for retrieval. In addition to its role in statement optimization, ANALYZE also helps in validating object structures and in managing space in your system. You can choose the following operations: COMPUTE, ESTIMATE, and DELETE. Early versions of Oracle7 produced unpredictable results when the ESTIMATE operation was used. It is best to compute your statistics.
• A COMPUTE will cause a table-level lock to be placed on the table during the operation.
select OWNER,
sum(decode(NUM_ROWS, NULL, 0, 1)) analyzed,
sum(decode(NUM_ROWS, NULL, 1, 0)) not_analyzed,
count(TABLE_NAME) total
from dba_tables
where OWNER not in ('SYS', 'SYSTEM')
group by OWNER
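For reference, the three ANALYZE operations mentioned above look like this (SCOTT.EMP is only a placeholder table):

```sql
-- Compute exact statistics (places a table-level lock for the duration)
analyze table scott.emp compute statistics;

-- Estimate statistics from a sample instead of a full scan
analyze table scott.emp estimate statistics sample 20 percent;

-- Remove previously gathered statistics
analyze table scott.emp delete statistics;
```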
Recently Analyzed Tables
LAST ANALYZED TABLE NOTES:
• Owner - Owner of the table
• Table Name - Name of the table
• Last Analyzed - Last analyzed date/time
select OWNER,
TABLE_NAME,
to_char(LAST_ANALYZED,'MM/DD/YYYY HH24:MI:SS') last_analyzed
from dba_tab_columns
where OWNER not in ('SYS','SYSTEM')
and LAST_ANALYZED is not null
and COLUMN_ID=1
and (SYSDATE-LAST_ANALYZED) < 30
order by (SYSDATE-LAST_ANALYZED)
Cached Tables
CACHED TABLE NOTES:
• Owner - Owner of the table
• Table Name - Name of the table
• Cache - Cached?
• Oracle 7.1+ provides a mechanism for caching tables in the buffer cache. Caching tables speeds up data access and improves performance by finding the data in memory and avoiding disk reads.
select OWNER,
TABLE_NAME,
CACHE
from dba_tables
where OWNER not in ('SYS','SYSTEM')
and CACHE like '%Y'
order by OWNER,TABLE_NAME;
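To change the caching attribute of a table yourself, a minimal sketch (SCOTT.DEPT is a placeholder table):

```sql
-- Mark a small, frequently read table to be kept at the MRU end of the buffer cache
alter table scott.dept cache;

-- Revert to the default behavior
alter table scott.dept nocache;
```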
Wednesday, April 8, 2009
Some Important Concurrent Request & Manager Related Queries
SQL> select REQUEST_ID,
       to_char(LAST_UPDATE_DATE,'DD-MON-YYYY=>hh24:mi:ss') LAST_UPDATE_DATE,
       LAST_UPDATED_BY,
       REQUEST_DATE,
       REQUESTED_BY,
       PROGRAM_APPLICATION_ID,
       CONCURRENT_PROGRAM_ID,
       LAST_UPDATE_LOGIN
     from fnd_concurrent_requests
     where REQUEST_ID='&reqid';
SQL> select USER_ID,USER_NAME from fnd_user where LAST_UPDATED_BY=9852;
script to find out whether NLS patches have been applied
SELECT driver.driver_file_name
,TO_CHAR(run.start_date,'DD-Mon HH24:MI:SS') start_date
,TO_CHAR(run.end_date ,'DD-Mon HH24:MI:SS') end_date
,lang.language
FROM ad_patch_runs run
,ad_patch_driver_langs lang
,ad_patch_drivers driver
,ad_applied_patches applied
WHERE run.patch_driver_id = driver.patch_driver_id
AND driver.applied_patch_id = applied.applied_patch_id
AND applied.patch_name = '&p_patch_number'
AND lang.patch_driver_id = driver.patch_driver_id
ORDER BY 1,2,3;
Running concurrent request
set lines 132
col os form A7 head AppProc
col spid form a6 head DBProc
col program form A43 trunc
set pages 38
col time form 9999.99 head Elapsed
col "Req Id" form 9999999
col "Parent" form a7
col "Prg Id" form 999999
col qname head "Concurrent Manager Queue" format a25 trunc
col sid format 99999 head SID
set recsep off
select q.concurrent_queue_name || ' - ' || target_node qname
,a.request_id "Req Id"
,decode(a.parent_request_id,-1,NULL,a.parent_request_id) "Parent"
,a.concurrent_program_id "Prg Id"
,a.phase_code,a.status_code
,nvl(a.os_process_id,b.os_process_id) "OS"
,vs.sid
,vp.spid
,(nvl(actual_completion_date,sysdate)-actual_start_date)*1440 "Time"
,c.concurrent_program_name||' - '||
c2.user_concurrent_program_name "program"
from APPLSYS.fnd_Concurrent_requests a,APPLSYS.fnd_concurrent_processes b
,applsys.fnd_concurrent_queues q
,APPLSYS.fnd_concurrent_programs_tl c2
,APPLSYS.fnd_concurrent_programs c
,gv$session vs
,gv$process vp
where a.controlling_manager = b.concurrent_process_id
and a.concurrent_program_id = c.concurrent_program_id
and a.program_application_id = c.application_id
and c2.concurrent_program_id = c.concurrent_program_id
and a.phase_code in ('I','P','R','T')
and b.queue_application_id = q.application_id
and b.concurrent_queue_id = q.concurrent_queue_id
and c2.language = 'US'
and vs.process (+) = b.os_process_id
and vs.paddr = vp.addr (+)
and vs.inst_id = vp.inst_id
order by 1,2
/
concurrent request status
col os form A6
col program form A40
set pages 38
set verify off
col time head Elapsed form 9999.99
col "Req Id" form 9999999
col "Prg Id" form 999999
col "Started On" format a10
col "Finished On" format a10
col "Submitted By" format a30 trunc
col argument_text head "Arguments" format a40
col statustxt head Status format a10 trunc
col phasetxt head Phase format a10 trunc
set recsep off
accept cmreqid number prompt 'What is the concurrent request id : '
select l2.meaning phasetxt
,l1.meaning statustxt
,(nvl(actual_completion_date,sysdate)-actual_start_date)*1440 "Time"
,to_char(a.actual_start_date,'mm/dd/yy hh:mi:ssAM') "Started On"
,to_char(a.actual_completion_date,'mm/dd/yy hh:mi:ssAM') "Finished On"
,u.user_name || ' - ' || u.description "Submitted By"
,a.argument_text
from APPLSYS.fnd_Concurrent_requests a
,applsys.fnd_user u
,applsys.fnd_lookup_values l1
,applsys.fnd_lookup_values l2
where u.user_id = a.requested_by
and a.request_id = &cmreqid
and l1.lookup_type = 'CP_STATUS_CODE'
and l1.lookup_code = a.status_code
and l1.language = 'US'
and l1.enabled_flag = 'Y'
and (l1.start_date_active <= sysdate and l1.start_date_active is not null)
and (l1.end_date_active > sysdate or l1.end_date_active is null)
and l2.lookup_type = 'CP_PHASE_CODE'
and l2.lookup_code = a.phase_code
and l2.language = 'US'
and l2.enabled_flag = 'Y'
and (l2.start_date_active <= sysdate and l2.start_date_active is not null)
and (l2.end_date_active > sysdate or l2.end_date_active is null)
/
query to find the details of long-running and waiting requests
select description, request_date, requested_start_date, actual_start_date ,
actual_completion_date,request_id , cprogram, argument_text ,user_name,phase_code,status_code
,to_number(substr(run_time,9,2))*24+to_number(substr(run_time,12,2))||'.'||(decode(length(substr(run_time,15,2)),1,'0'||substr(run_time,15,2),substr(run_time,15,2))) RUN_HHMI_NEW
,to_number(substr(wait_time,9,2))*24+to_number(substr(wait_time,12,2))||'.'||(decode(length(substr(wait_time,15,2)),1,'0'||substr(wait_time,15,2),substr(wait_time,15,2))) WAIT_HHMI_NEW
from (
select
p.user_concurrent_program_name description, r.request_date, r.requested_start_date, r.actual_start_date ,
r.actual_completion_date,r.request_id,
p.concurrent_program_name cprogram,r.argument_text ,r.cancel_or_hold, fu.user_name,lup1.meaning phase_code,lup2.meaning status_code,
rtrim((cast(r.actual_completion_date as timestamp) - cast (r.actual_start_date as timestamp)),'000000') run_time,
rtrim((cast(r.actual_start_date as timestamp) - cast (r.request_date as timestamp)),'000000') wait_time
from apps.fnd_concurrent_queues fcq,
apps.fnd_concurrent_queue_content fcqc,
apps.fnd_concurrent_request_class c,
apps.fnd_application f,
apps.fnd_concurrent_programs_vl p,
apps.fnd_concurrent_requests r,
apps.fnd_user fu,
apps.fnd_lookups lup1,
apps.fnd_lookups lup2
where r.program_application_id = p.application_id
and r.concurrent_program_id = p.concurrent_program_id
and r.phase_code <> 'P'
and r.requested_by = fu.user_id
and trunc(request_date) between :P_FDATE and :P_TDATE
and p.application_id = f.application_id
and r.program_application_id = f.application_id
and r.request_class_application_id = c.application_id(+)
and r.concurrent_request_class_id = c.request_class_id(+)
and r.request_class_application_id = fcqc.type_application_id(+)
and r.concurrent_request_class_id = fcqc.type_id(+)
and fcqc.queue_application_id = fcq.application_id(+)
and fcqc.concurrent_queue_id = fcq.concurrent_queue_id(+)
and lup1.lookup_code = r.phase_code
and lup1.lookup_type = 'CP_PHASE_CODE'
and lup2.lookup_code = r.status_code
and lup2.lookup_type = 'CP_STATUS_CODE'
group by
c.request_class_name,
f.application_short_name,
p.concurrent_program_name,lup1.meaning ,lup2.meaning ,
p.user_concurrent_program_name,r.request_date,r.requested_start_date,
r.priority,r.actual_start_date ,r.request_id,r.actual_completion_date,fu.user_name,
r.argument_text,r.cancel_or_hold
order by 6 )
most executed concurrent requests day wise with node details
select request_id,logfile_node_name, p.user_concurrent_program_name "CONCURRENT PROGRAM NAME",
p.concurrent_program_name "PROGRAM",request_date
from apps.fnd_concurrent_queues fcq,
apps.fnd_concurrent_queue_content fcqc,
apps.fnd_concurrent_request_class c,
apps.fnd_application f,
apps.fnd_concurrent_programs_vl p,
apps.fnd_concurrent_requests r
where r.program_application_id = p.application_id
and r.concurrent_program_id = p.concurrent_program_id
--and r.status_code in ('C','G','E')
and trunc(request_date) between :P_FDATE and :P_TDATE
and p.application_id = f.application_id
and r.program_application_id = f.application_id
and r.request_class_application_id = c.application_id(+)
and r.concurrent_request_class_id = c.request_class_id(+)
and r.request_class_application_id = fcqc.type_application_id(+)
and r.concurrent_request_class_id = fcqc.type_id(+)
and fcqc.queue_application_id = fcq.application_id(+)
and fcqc.concurrent_queue_id = fcq.concurrent_queue_id(+)
--and p.user_concurrent_program_name like '%Purge%Manager%'
group by
request_id,c.request_class_name,
f.application_short_name,
p.concurrent_program_name,
p.user_concurrent_program_name,
r.priority, request_date,logfile_node_name
order by 3
most executed concurrent requests
select
p.user_concurrent_program_name "description",
p.concurrent_program_name "program",
count(*) "NUMS"
from apps.fnd_concurrent_queues fcq,
apps.fnd_concurrent_queue_content fcqc,
apps.fnd_concurrent_request_class c,
apps.fnd_application f,
apps.fnd_concurrent_programs_vl p,
apps.fnd_concurrent_requests r
where r.program_application_id = p.application_id
and r.concurrent_program_id = p.concurrent_program_id
--and r.status_code in ('C','G','E')
and trunc(request_date) between :P_FDATE and :P_TDATE
and p.application_id = f.application_id
and r.program_application_id = f.application_id
and r.request_class_application_id = c.application_id(+)
and r.concurrent_request_class_id = c.request_class_id(+)
and r.request_class_application_id = fcqc.type_application_id(+)
and r.concurrent_request_class_id = fcqc.type_id(+)
and fcqc.queue_application_id = fcq.application_id(+)
and fcqc.concurrent_queue_id = fcq.concurrent_queue_id(+)
group by
c.request_class_name,
f.application_short_name,
p.concurrent_program_name,
p.user_concurrent_program_name,
r.priority
order by count(*) desc
most executed concurrent requests day wise
select p.user_concurrent_program_name "CONCURRENT PROGRAM NAME",
p.concurrent_program_name "PROGRAM",to_char(request_date,'DD-MON-YYYY') "DATE",
count(*) "NUMS"
from apps.fnd_concurrent_queues fcq,
apps.fnd_concurrent_queue_content fcqc,
apps.fnd_concurrent_request_class c,
apps.fnd_application f,
apps.fnd_concurrent_programs_vl p,
apps.fnd_concurrent_requests r
where r.program_application_id = p.application_id
and r.concurrent_program_id = p.concurrent_program_id
--and r.status_code in ('C','G','E')
and trunc(request_date) between :P_FDATE and :P_TDATE
and p.application_id = f.application_id
and r.program_application_id = f.application_id
and r.request_class_application_id = c.application_id(+)
and r.concurrent_request_class_id = c.request_class_id(+)
and r.request_class_application_id = fcqc.type_application_id(+)
and r.concurrent_request_class_id = fcqc.type_id(+)
and fcqc.queue_application_id = fcq.application_id(+)
and fcqc.concurrent_queue_id = fcq.concurrent_queue_id(+)
group by
c.request_class_name,
f.application_short_name,
p.concurrent_program_name,
p.user_concurrent_program_name,
r.priority, to_char(request_date,'DD-MON-YYYY')
order by 3, count(*) desc
number of requests submitted per day
select to_char(request_date,'DD-MON-YYYY') "DATE", count(request_date) "NUM_REQS"
from fnd_concurrent_requests
where trunc(request_date) between :P_FDATE and :P_TDATE
group by to_char(request_date,'DD-MON-YYYY') order by 1
query to find the number of requests submitted the previous day
select to_char(sysdate-1,'DD-MON-YYYY') "DATE", count(request_date) "NUM_REQS"
from fnd_concurrent_requests
where trunc(request_date)=trunc(sysdate-1);
Saturday, February 7, 2009
On fresh install: Forms session <1> failed during startup FRM-92101
Problem Description:
We installed a fresh 12.0.4 instance and then upgraded it to RUP6. After the RUP6 activity completed, the instance was working fine and all the services were running without any failure.
The main problem: when accessing forms, they opened fine at first, but after some time they went into a hung state, and FRM-92101/FRM-92102 errors arose frequently. The root cause of this issue is below:
[Wed Feb 4 11:11:12 2009] [notice] Oracle-Application-Server-10g/10.1.3.0.0 Oracle-HTTP-Server configured -- resuming normal operations
[Wed Feb 4 11:11:12 2009] [notice] Accept mutex: fcntl (Default: fcntl)
[Wed Feb 4 11:26:19 2009] [error] [client 122.169.29.45] [ecid: 1233726979:192.168.1.202:6608:0:23,0] Directory index forbidden by rule: /erpapps/UAT/apps/apps_st/comn/java/classes/
[Wed Feb 4 11:45:02 2009] [error] [client 122.169.46.193] [ecid: 1233728102:192.168.1.202:6547:0:37,0] Directory index forbidden by rule: /erpapps/UAT/apps/apps_st/comn/java/classes/
[Wed Feb 4 11:45:02 2009] [error] [client 122.169.46.193] [ecid: 1233728102:192.168.1.202:6547:0:38,0] File does not exist: /erpapps/UAT/apps/apps_st/comn/java/classes/oracle/apps/media/oracle/apps/media/splash.gif
[Wed Feb 4 11:45:02 2009] [error] [client 122.169.46.193] [ecid: 1233728102:192.168.1.202:6547:0:39,0] File does not exist: /erpapps/UAT/apps/apps_st/comn/java/classes/oracle/apps/media/oracle/apps/media/splash.gif
[Wed Feb 4 13:35:05 2009] [error] [client 122.169.50.30] [ecid: 1233734705:192.168.1.202:5705:0:62,0] Directory index forbidden by rule: /erpapps/UAT/apps/apps_st/comn/java/classes/
[Wed Feb 4 13:37:40 2009] [error] [client 122.169.50.30] [ecid: 1233734860:192.168.1.202:5704:0:124,0] Directory index forbidden by rule: /erpapps/UAT/apps/apps_st/comn/java/classes/
[Wed Feb 4 13:37:40 2009] [error] [client 122.169.50.30] [ecid: 1233734860:192.168.1.202:5704:0:125,0] File does not exist: /erpapps/UAT/apps/apps_st/comn/java/classes/oracle/apps/media/oracle/apps/media/splash.gif
[Wed Feb 4 13:37:41 2009] [error] [client 122.169.50.30] [ecid: 1233734861:192.168.1.202:5704:0:126,0] File does not exist: /erpapps/UAT/apps/apps_st/comn/java/classes/oracle/apps/media/oracle/apps/media/splash.gif
[Wed Feb 4 13:42:02 2009] [error] [client 122.169.50.30] [ecid: 1233735122:192.168.1.202:5709:0:72,0] Directory index forbidden by rule: /erpapps/UAT/apps/apps_st/comn/java/classes/
[Wed Feb 4 13:42:02 2009] [error] [client 122.169.50.30] [ecid: 1233735122:192.168.1.202:5709:0:73,0] File does not exist: /erpapps/UAT/apps/apps_st/comn/java/classes/oracle/apps/media/oracle/apps/media/splash.gif
[Wed Feb 4 13:42:02 2009] [error] [client 122.169.50.30] [ecid: 1233735122:192.168.1.202:5709:0:74,0] File does not exist: /erpapps/UAT/apps/apps_st/comn/java/classes/oracle/apps/media/oracle/apps/media/splash.gif


Cause:
The Rapid Install created symbolic links to an area not on the system, possibly an internal Oracle test area. The file "ldflags" in $ORACLE_HOME/lib32 is missing or pointing to a wrong location.
Solution:
To implement the solution, please execute the following steps:
1. The file "ldflags" in $ORACLE_HOME/lib32 is missing or pointing to a wrong location; recreate the symbolic link:
$ rm $ORACLE_HOME/lib32/ldflags
$ ln -s $ORACLE_HOME/lib/ldflags $ORACLE_HOME/lib32/ldflags
2. Relink the forms executables. Please follow the steps below:
2.1. Source the Applications environment file.
2.2. Run the following command to relink the forms from the 10.1.2 home/lib folder:
make -f ins_forms.mk install
or
make -if ins_forms.mk install
2.3. Generate all forms through adadmin.
2.4. Re-test the issue.
Monday, February 2, 2009
Usage of adclonectx.pl in Oracle Apps
This Perl script is used to retrieve the applications context file if it has been corrupted or deleted. The applications and database context files are retrieved by running the adclonectx.pl script.
To Retrieve the Applications Tier Context File (R12):
1. Execute the following script:
perl /clone/bin/adclonectx.pl retrieve
2. On being prompted for the context file to be retrieved, select the option of retrieving the applications tier context file that has been lost and retrieve it to the default location specified by the script.
This procedure is useful when the INST_TOP is still available. If it was also accidentally lost, the applications tier context file is retrieved as follows:
1. Execute the script below on the database tier:
perl /appsutil/clone/bin/adclonectx.pl retrieve
2. On being prompted for the context file to be retrieved, select the option of retrieving the applications tier context file that has been lost.
3. While confirming the location of the context file, set it to any existing directory with write permissions.
4. Once the context file has been generated in the specified location, move it to the location specified for the context file by the context variable 's_contextfile'.
To Retrieve the Database Tier Context File:
1. Execute the script below on the database tier:
perl /appsutil/clone/bin/adclonectx.pl retrieve
2. On being prompted for the context file to be retrieved, select the option of retrieving the database tier context file that has been lost and retrieve it to the default location specified by the script .
Enabling Mails in Linux
First run a quick test to make sure the “sendmail” application is installed and working correctly. Execute the following command, replacing “you@youremailid.com” with your e-mail address.
Enable the mail services on the server
To enable the mail services, follow these steps:
You must know the name server of the e-mail server.
You must log in as superuser to enable the services.
1. vi /etc/mail/sendmail.mc ----open the file, search for 127, and comment out the loopback-only restriction by prefixing the line with dnl:
dnl DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA')
2. m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
3. /etc/init.d/sendmail restart
# mail -s "Hello world" you@youremailid.com
Hit the return key and you will come to a new line. Enter the text “This is a test from my server”. Follow up the text by hitting the return key again. Then hit the key combination of Control+D to continue. The command prompt will ask you if you want to mark a copy of the mail to any other address, hit Control+D again. Check your mailbox. This command will send out a mail to the email id mentioned with the subject, “Hello world”.
To add content to the body of the mail while running the command you can use the following options. If you want to add text on your own:
# echo "This will go into the body of the mail." | mail -s "Hello world" you@youremailid.com
And if you want mail to read the content from a file:
# mail -s "Hello world" you@youremailid.com < /home/calvin/application.log
Some other useful options in the mail command are:
-s subject (The subject of the mail)
-c email-address (Mark a copy to this “email-address”, or CC)
-b email-address (Mark a blind carbon copy to this “email-address”, or BCC)
Here’s how you might use these options:
# echo "Welcome to the world of Calvin n Hobbes" | mail -s "Hello world" u@umailid.com -c copier@copiermail.com -b bcc@bccmail.com
Tuesday, January 27, 2009
RAC Basics: CRS, Voting Disk, OCR, Cache Fusion ...
Oracle Clusterware (Cluster Ready Services in 10g / Cluster Manager in 9i) - provides the infrastructure that binds multiple nodes so that they operate as a single server. Clusterware monitors all components, such as instances and listeners. There are two important components in Oracle Clusterware: the Voting Disk and the OCR (Oracle Cluster Registry).
Voting Disk - a file that resides on shared storage and manages cluster membership. The voting disk reassigns cluster ownership between the nodes in case of failure.
OCR (Oracle Cluster Registry) - resides on shared storage and maintains information about the cluster configuration and the cluster database. The OCR contains information such as which database instances run on which nodes and which services run on which database.
CRS Resource - anything that Oracle Clusterware manages is classified as a CRS resource: a database, instance, service, listener, VIP address, and so on.
Cache Fusion - a diskless cache-coherency mechanism in Oracle RAC that provides copies of data blocks directly from the memory cache of the instance that holds a block to the instance that requested it. Cache Fusion provides a single buffer cache (across all instances in the cluster) through the interconnect. In a single-node Oracle database, an instance looking for a data block first checks the cache; if the block is not in the cache, it goes to disk to pull the block into the cache and returns it to the client. In a RAC database there is also a remote cache, so an instance should look not only in the local cache (the cache local to the instance) but also in the remote cache (the cache on a remote instance). If the block is available in the local cache, it is returned from there; if not, instead of going to disk the instance first checks the remote cache (via the interconnect), because accessing a data block from a remote cache is faster than accessing it from disk.
Cache Fusion Model
CVU (Cluster Verification Utility) - a utility to verify that the system meets all the criteria for Oracle Clusterware installation.
Saturday, January 24, 2009
Oracle Monitoring and Performance Tuning
One of the biggest responsibilities of a DBA is to ensure that the Oracle database is tuned properly. The Oracle RDBMS is highly tunable and allows the database to be monitored and adjusted to increase its performance.
One should do performance tuning for the following reasons:
The speed of computing might be wasting valuable human time (users waiting for response);
Enable your system to keep up with the speed at which business is conducted; and
Optimize hardware usage to save money (companies are spending millions on hardware).
Although this FAQ is not overly concerned with hardware issues, one needs to remember that you cannot tune a Buick into a Ferrari.
What database aspects should be monitored?
One should implement a monitoring system to constantly monitor the following aspects of a database. This can be achieved by writing custom scripts, implementing Oracle's Enterprise Manager, or buying a third-party monitoring product. If an alarm is triggered, the system should automatically notify the DBA (e-mail, page, etc.) to take appropriate action.
Infrastructure availability:
Is the database up and responding to requests?
Are the listeners up and responding to requests?
Are the Oracle Names and LDAP Servers up and responding to requests?
Are the Web Listeners up and responding to requests?
Etc.
Things that can cause service outages:
Is the archive log destination filling up?
Objects getting close to their max extents
Tablespaces running low on free space / objects that would not be able to extend
User and process limits reached
Etc.
Things that can cause bad performance:
Where should the tuning effort be directed?
Consider the following areas for tuning. The order in which steps are listed needs to be maintained to prevent tuning side effects. For example, it is no good increasing the buffer cache if you can reduce I/O by rewriting a SQL statement.
Database Design (if it's not too late):
Poor system performance usually results from a poor database design. One should generally normalize to the 3NF. Selective denormalization can provide valuable performance improvements. When designing, always keep the "data access path" in mind. Also look at proper data partitioning, data replication, aggregation tables for decision support systems, etc.
Application Tuning:
Experience shows that approximately 80% of all Oracle system performance problems are resolved by coding optimal SQL. Also consider proper scheduling of batch tasks after peak working hours.
Memory Tuning:
Properly size your database buffers (shared_pool, buffer cache, log buffer, etc) by looking at your buffer hit ratios. Pin large objects into memory to prevent frequent reloads.
Disk I/O Tuning:
Database files need to be properly sized and placed to provide maximum disk subsystem throughput. Also look for frequent disk sorts, full table scans, missing indexes, row chaining, data fragmentation, etc.
Eliminate Database Contention:
Study database locks, latches and wait events carefully and eliminate where possible.
Tune the Operating System:
Monitor and tune operating system CPU, I/O and memory utilization. For more information, read the related Oracle FAQ dealing with your specific operating system.
What tuning indicators can one use?
The following high-level tuning indicators can be used to establish if a database is performing optimally or not:
Buffer Cache Hit Ratio
Formula: Hit Ratio = (Logical Reads - Physical Reads) / Logical Reads
Action: Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i) to increase hit ratio
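As a sketch, the buffer cache hit ratio can be computed from V$SYSSTAT (these statistic names are standard, but verify them on your release):

```sql
-- Buffer cache hit ratio from V$SYSSTAT (sketch; run as a DBA user).
-- Logical reads = 'db block gets' + 'consistent gets'.
select round(
         (1 - phys.value / (db.value + cons.value)) * 100, 2
       ) "Buffer Cache Hit Ratio %"
from   v$sysstat phys, v$sysstat db, v$sysstat cons
where  phys.name = 'physical reads'
and    db.name   = 'db block gets'
and    cons.name = 'consistent gets';
```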
Library Cache Hit Ratio
Action: Increase the SHARED_POOL_SIZE to increase hit ratio
Etc.
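A similar sketch for the library cache hit ratio, computed from V$LIBRARYCACHE:

```sql
-- Library cache hit ratio from V$LIBRARYCACHE (sketch).
-- PINS = executions; PINHITS = executions satisfied from the cache.
select round(sum(pinhits) / sum(pins) * 100, 2)
         "Library Cache Hit Ratio %"
from   v$librarycache;
```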
What tools/utilities does Oracle provide to assist with performance tuning?
Oracle provides the following tools/utilities to assist with performance monitoring and tuning:
TKProf
UTLBSTAT.SQL and UTLESTAT.SQL - Begin and end stats monitoring
Statspack
Oracle Enterprise Manager - Tuning Pack
What is STATSPACK and how does one use it?
Statspack is a set of performance monitoring and reporting utilities provided by Oracle from Oracle8i and above. Statspack provides improved BSTAT/ESTAT functionality, though the old BSTAT/ESTAT scripts are still available. For more information about STATSPACK, read the documentation in file $ORACLE_HOME/rdbms/admin/spdoc.txt.
Install Statspack:
cd $ORACLE_HOME/rdbms/admin
sqlplus "/ as sysdba" @spdrop.sql -- Drop any existing Statspack installation
sqlplus "/ as sysdba" @spcreate.sql -- Install Statspack; enter tablespace names when prompted
Use Statspack:
sqlplus perfstat/perfstat
exec statspack.snap; -- Take a performance snapshot
exec statspack.snap;
-- Get a list of snapshots
select SNAP_ID, SNAP_TIME from STATS$SNAPSHOT;
@spreport.sql -- Enter two snapshot id's for difference report
Other Statspack Scripts:
sppurge.sql - Purge a range of Snapshot Id's between the specified begin and end Snap Id's
spauto.sql - Schedule a dbms_job to automate the collection of STATPACK statistics
spcreate.sql - Installs the STATSPACK user, tables and package on a database (Run as SYS).
spdrop.sql - Deinstall STATSPACK from database (Run as SYS)
spreport.sql - Report on differences between values recorded in two snapshots
sptrunc.sql - Truncates all data in Statspack tables
When is cost based optimization triggered?
It's important to have statistics on all tables for the CBO (Cost Based Optimizer) to work correctly. If one table involved in a statement does not have statistics, Oracle has to revert to rule-based optimization for that statement. So you really want all tables to have statistics right away; it won't help much to just have the larger tables analyzed.
Generally, the CBO can change the execution plan when you:
Change statistics of objects by doing an ANALYZE;
Change some initialization parameters (for example: hash_join_enabled, sort_area_size, db_file_multiblock_read_count).
How can one optimize %XYZ% queries?
It is possible to improve %XYZ% queries by forcing the optimizer to scan all the entries from the index instead of the table. This can be done by specifying hints.
If the index is physically smaller than the table (which is usually the case) it will take less time to scan the entire index than to scan the entire table.
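As a hedged example (the table and index names below are hypothetical), a hint can be used to ask the CBO for a fast full scan of the index:

```sql
-- Hypothetical table CUSTOMERS with index CUST_NAME_IDX on the NAME column.
-- The INDEX_FFS hint requests a fast full scan of the index, which is
-- usually cheaper than a full table scan when the query can be answered
-- from the indexed column(s) alone.
select /*+ INDEX_FFS(c cust_name_idx) */ c.name
from   customers c
where  c.name like '%XYZ%';
```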
Where can one find I/O statistics per table?
The UTLESTAT report shows I/O per tablespace, but one cannot see which tables in the tablespace have the most I/O.
The $ORACLE_HOME/rdbms/admin/catio.sql script creates a sample_io procedure and table to gather the required information. After executing the procedure, one can do a simple SELECT * FROM io_per_object; to extract the required information.
For more details, look at the header comments in the $ORACLE_HOME/rdbms/admin/catio.sql script.
My query was fine last week and now it is slow. Why?
The likely cause of this is because the execution plan has changed. Generate a current explain plan of the offending query and compare it to a previous one that was taken when the query was performing well. Usually the previous plan is not available.
Some factors that can cause a plan to change are:
Which tables are currently analyzed? Were they previously analyzed? (i.e. was the query using RBO before and CBO now?)
Has OPTIMIZER_MODE been changed in INIT.ORA?
Has the DEGREE of parallelism been defined/changed on any table?
Have the tables been re-analyzed? Were the tables analyzed using estimate or compute? If estimate, what percentage was used?
Have the statistics changed?
Has the INIT.ORA parameter DB_FILE_MULTIBLOCK_READ_COUNT been changed?
Has the INIT.ORA parameter SORT_AREA_SIZE been changed?
Have any other INIT.ORA parameters been changed?
What do you think the plan should be? Run the query with hints to see if this produces the required performance.
Why is Oracle not using the damn index?
This problem normally only arises when the query plan is being generated by the Cost Based Optimizer. The usual cause is that the CBO calculates that executing a full table scan would be faster than accessing the table via the index. Fundamental things that can be checked are:
USER_TAB_COLUMNS.NUM_DISTINCT - This column defines the number of distinct values the column holds.
USER_TABLES.NUM_ROWS - If NUM_DISTINCT = NUM_ROWS then using an index would be preferable to doing a FULL TABLE SCAN. As NUM_DISTINCT decreases, the cost of using an index increases, thereby making the index less desirable.
USER_INDEXES.CLUSTERING_FACTOR - This defines how ordered the rows are in the index. If CLUSTERING_FACTOR approaches the number of blocks in the table, the rows are ordered. If it approaches the number of rows in the table, the rows are randomly ordered. In such a case, it is unlikely that index entries in the same leaf block will point to rows in the same data blocks.
Decrease the INIT.ORA parameter DB_FILE_MULTIBLOCK_READ_COUNT - A higher value will make the cost of a FULL TABLE SCAN cheaper.
Remember that you MUST supply the leading column of an index, for the index to be used (unless you use a FAST FULL SCAN or SKIP SCANNING).
There are many other factors that affect the cost, but sometimes the above can help to show why an index is not being used by the CBO. If from checking the above you still feel that the query should be using an index, try specifying an index hint. Obtain an explain plan of the query either using TKPROF with TIMED_STATISTICS, so that one can see the CPU utilization, or with AUTOTRACE to see the statistics. Compare this to the explain plan when not using an index.
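The statistics mentioned above can be pulled together with a query along these lines (the index name is a placeholder):

```sql
-- Compare CLUSTERING_FACTOR against table BLOCKS and NUM_ROWS.
-- A value close to BLOCKS suggests well-ordered rows (index-friendly);
-- a value close to NUM_ROWS suggests randomly ordered rows.
select i.index_name,
       i.clustering_factor,
       t.blocks,
       t.num_rows
from   user_indexes i, user_tables t
where  i.table_name = t.table_name
and    i.index_name = 'MY_INDEX';   -- placeholder index name
```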
When should one rebuild an index?
You can run the 'ANALYZE INDEX VALIDATE STRUCTURE' command on the affected indexes - each invocation of this command creates a single row in the INDEX_STATS view. This row is overwritten by the next ANALYZE INDEX command, so copy the contents of the view into a local table after each ANALYZE. The 'badness' of the index can then be judged by the ratio of 'DEL_LF_ROWS' to 'LF_ROWS'.
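The check described above can be sketched as follows (schema and index names are placeholders; the 20% threshold is only a common rule of thumb, not an Oracle-documented limit):

```sql
-- Validate the index structure; this populates INDEX_STATS with one row
-- (overwritten by the next ANALYZE INDEX, so capture it before re-running).
analyze index my_schema.my_index validate structure;

-- Ratio of deleted leaf rows to leaf rows; a high ratio (e.g. > 20%)
-- suggests the index may benefit from a rebuild.
select name,
       del_lf_rows,
       lf_rows,
       round(del_lf_rows / lf_rows * 100, 2) "Deleted %"
from   index_stats
where  lf_rows > 0;
```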
How does one tune Oracle Wait events?
Some wait events from V$SESSION_WAIT and V$SYSTEM_EVENT views:
Event Name:
Tuning Recommendation:
db file sequential read
Tune SQL to do less I/O. Make sure all objects are analyzed. Redistribute I/O across disks.
buffer busy waits
Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i); analyze contention from SYS.V$BH
log buffer space
Increase LOG_BUFFER parameter or move log files to faster disks
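To see which waits dominate on a given system, a sketch query against V$SYSTEM_EVENT (the idle-event exclusion list below is partial and release-dependent):

```sql
-- Top non-idle wait events by total time waited (sketch).
select event,
       total_waits,
       time_waited,
       average_wait
from   v$system_event
where  event not in ('SQL*Net message from client',
                     'pmon timer', 'smon timer', 'rdbms ipc message')
order  by time_waited desc;
```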
What is the difference between DBFile Sequential and Scattered Reads?
Both "db file sequential read" and "db file scattered read" events signify time waited for I/O read requests to complete. Time is reported in hundredths of a second for Oracle 8i and below, and in thousandths of a second (milliseconds) for Oracle 9i and above. Most people confuse these events with each other because they think of how data is read from disk; instead, they should think of how data is read into the SGA buffer cache.
db file sequential read:
A sequential read operation reads data into contiguous memory (usually a single-block read with p3=1, but can be multiple blocks). Single block I/Os are usually the result of using indexes. This event is also used for rebuilding the controlfile and reading datafile headers (P2=1). In general, this event is indicative of disk contention on index reads.
db file scattered read:
Similar to db file sequential read, except that the session reads multiple data blocks and scatters them into discontiguous buffers in the SGA. This statistic NORMALLY indicates disk contention on full table scans. In the rare case where the data from a full table scan fits into a contiguous buffer area, these waits show up as sequential reads instead of scattered reads.
The following query shows average wait time for sequential versus scattered reads:
prompt "AVERAGE WAIT TIME FOR READ REQUESTS"
select a.average_wait "SEQ READ", b.average_wait "SCAT READ"
from sys.v_$system_event a, sys.v_$system_event b
where a.event = 'db file sequential read'
and b.event = 'db file scattered read';
Where can one get more info about Oracle Tuning?
OracleTuning.com - Great site for Oracle Tuning Information
Ixora = Unix + Oracle - scripts, tips and advanced performance tuning information
Kevin Loney's New DBA FAQs: Part 2 - Database Monitoring
On-Line Services:
Yet Another Performance Profiling (YAPP) Method - Upload your BSTAT/ ESTAT and StatsPack output for analysis
itrprof SQL Analyzer - Upload your SQL_TRACE/Event10046 trace files to find bottlenecks and tuning advice
Thursday, January 22, 2009
DMZ Configuration with 11i
E-Business Suite 11.5.10.2 offers various options, ranging from simple to more complex and tightly secured architectures, as relatively simple out-of-the-box solutions.
Typically Ebusiness Suite supports the following topologies in a DMZ architecture
* Using Reverse Proxies in DMZ
* Using Separate Oracle E-Business Suite 11i Web Tier in DMZ
* Using HTTP Hardware Load Balancers in DMZ
* Using Reverse Proxies only in DMZ
You can find more details on the supported methodologies for the DMZ architecture in the following Metalink note:
DMZ Configuration with Oracle E-Business Suite 11i
In this post I will discuss a less complex case, where an E-Business Suite application tier is used as the external tier of the DMZ architecture. The diagram below depicts this architecture in simplified form.
As discussed earlier, we will build a second application tier for Oracle Applications using Rapid Clone (adpreclone.pl/adcfgclone.pl), which will serve as the external application tier for the instance.
Current Configuration
Oracle Applications 11.5.10.2
Dual Node
Operating System Sun Solaris
DMZ Patches
Ensure the following patches have been applied on the instance before proceeding ahead.
3942483 - An AutoConfig patch to support your reverse proxy and DMZ configuration
5478710 - An AutoConfig template rollup
Pre Clone
Shut down the application tier services and run adpreclone.pl on the application tier of your instance.
#su - applsam
$cd samappl
$. ./APPSORA.env
$cd $COMMON_TOP/admin/scripts/SAM_SAMLXA
$perl adpreclone.pl appsTier
Filesystem Copy
Then copy the application tier filesystem across to the external machine:
tar -cf - applsam | ssh Node_B tar -xf - -C /u02/sam/
Post Clone on the External Tier
Now run post clone (adcfgclone.pl) on the external tier and specify that it is a web server node only:
# su - applsam
$ cd samcomn/clone/bin
$ perl adcfgclone.pl appsTier
Run txkChangeProfH.sql
Run the txkChangeProfH.sql script to update the profile option hierarchy type to SERVRESP (server-responsibility):
$ cd $FND_TOP/patch/115/sql/
$ sqlplus apps/xxxxxxxx @txkChangeProfH.sql SERVRESP
Startup Services
Startup the application services both on the internal and external tier and test the connectivity.
Verify the Nodes
Verify that the nodes have been registered as expected in the instance.
Navigate to
System Administrator->Install->Nodes
Update Node Trust Level
Update the node trust level by identifying the node that you want to be your external node.
Navigate to
System Administrator->Profile->System and query for 'Node Trust Level'
Set it as external for the external node at the server level only.
Update Responsibilities
After updating the node trust level, users who come in from the external application server will only be able to see those responsibilities that have been marked as external.
Navigate to
System Administrator->Profile->System and query for 'Responsibility Trust Level'
Choose the responsibilities that you want to be external; in my case it was iSupplier Registration.
Update Home Page Mode
Lastly, update the home page mode to Framework Only.
Navigate to
System Administrator->Profile->System and query for 'Self-Service Personal Home Page Mode' and set it to 'Framework Only'.
Autoconfig
Run autoconfig on both the nodes and restart the application tier services.
Wednesday, January 21, 2009
MIGRATING THE DATABASE FROM ONE MACHINE TO ANOTHER MACHINE USING RMAN
DBA11(TARGET), DBA12(CATALOG), DBA15(AUXILIARY DATABASE)
1) create the directory to store rman backup at target database
DBA11$cd /disk3/oradata/san
san$mkdir rman
2) create the password file in dba directory at target database
DBA11$cd $ORACLE_HOME/dbs
dbs$orapwd file=orapw$ORACLE_SID password=sys force=y
3) configure listener at target database
$cp $ORACLE_HOME/network/admin/listener.ora ~
~$vi listener.ora
rman =
(port=2044)(host=dba11)
(sid=
:wq!
4) start the listener at target side
$lsnrctl start
5)create tablespace and user at catalog database
DBA12$
sys>create tablespace rman datafile '/disk3/oradata/san/rman.dbf' size 100m;
sys>create user rman identified by rman;
sys>grant connect,resource,recovery_catalog_owner to rman identified by rman;
sys>alter user rman default tablespace rman;
sys>exit
6)configure tnsnames at catalog database
$cp $ORACLE_HOME/network/admin/tnsnames.ora ~
~$vi tnsnames.ora
torman =
(port=2044) (host=dba11)
(sid=
:wq!
$tnsping torman
7)create catalog under rman user
$rman catalog rman/rman
RMAN> create catalog;
RMAN>exit
$sqlplus rman/rman
SQL>select count(*) from tab; -- about 90 rows, confirming the catalog tables were created
SQL>exit
$rman catalog rman/rman@torman
RMAN>register database;
RMAN>spool log to script.log
RMAN>show all;
RMAN>exit
8)configure the script.log file
$rman catalog rman/rman@torman
RMAN>@script.log
RMAN>backup database plus archivelog;
DBA15(AUXILIARY DATABASE)
$export ORACLE_SID=san
9)now configure listener at auxiliary database
$cp $ORACLE_HOME/network/admin/listener.ora ~
$vi listener.ora
aux =
(port=5678)(host=dba15)
(sid=san)
:wq!
$lsnrctl start aux
10) configure tnsnames.ora in the catalog database
DBA12$vi tnsnames.ora
toaux =
(port=5678)(host=dba15)
(sid=san)
:wq!
$tnsping toaux
DBA15$
11) create the password file at the auxiliary database
dbs$ orapwd file=orapw$ORACLE_SID password=xyz force=y
dbs$cp init.ora init$ORACLE_SID.ora
dbs$vi initsan.ora
db_name=sandeep (the auxiliary database name and the target database name should be the same)
control_files=/disk1/oradata/san/control.ctl
user_dump_dest=/disk1/oradata/san/udump
core_dump_dest=/disk1/oradata/san/cdump
background_dump_dest=/disk1/oradata/san/bdump
db_file_name_convert=/disk3/oradata/sandeep /disk1/oradata/san (if you want the files in a different location, use these two parameters)
log_file_name_convert=/disk3/oradata/sandeep /disk1/oradata/san
:wq!
$mkdir -p /disk3/oradata/sandeep/{cdump,bdump,udump,arch,rman} (create target database structure)
$mkdir -p /disk1/oradata/san/{cdump,udump,bdump,arch} (create auxiliary c,r,d file location)
$cd /disk3/oradata/sandeep/rman
rman$ftp -i dba11
loginid:sandeep
password:
ftp>cd /disk3/oradata/sandeep/rman
ftp>mget *
ftp>bye
rman$du -h
358M (this is the backup size of your target database)
12) now keep the auxiliary database in nomount stage
sys>startup nomount
13) now connect to auxiliary database using rman
DBA12$rman catalog rman/rman target sys/sys@torman auxiliary sys/xyz@toaux
now it will connect to target database as well as auxiliary database
RMAN>duplicate target database to san; (here we have to mention the database name; my database name is san)
14) Once the above command completes, the migration is done.
