Tuesday, January 27, 2009
RAC Basics : CRS, Voting Disk, OCR, Cache Fusion ….
Oracle Clusterware (Cluster Ready Services in 10g, Cluster Manager in 9i) - provides the infrastructure that binds multiple nodes so that they operate as a single server. Clusterware monitors all cluster components, such as instances and listeners. There are two important components in Oracle Clusterware: the Voting Disk and the OCR (Oracle Cluster Registry).
Voting Disk - a file that resides on shared storage and manages cluster membership. In case of failure, the voting disk is used to arbitrate cluster ownership among the surviving nodes.
OCR (Oracle Cluster Registry) - resides on shared storage and maintains information about the cluster configuration and the cluster database, such as which database instances run on which nodes and which services run on which database.
CRS Resource - anything that Oracle Clusterware manages is classified as a CRS resource: a database, instance, service, listener, VIP address and so on.
Cache Fusion - the diskless cache-coherency mechanism in Oracle RAC that ships copies of data blocks directly from the memory cache of the instance that holds a block to the instance that requests it. In effect, Cache Fusion presents a single buffer cache for all instances in the cluster, connected through the interconnect. In a single-node Oracle database, an instance looking for a data block first checks its cache; if the block is not in the cache, it reads the block from disk into the cache and returns it to the client. In a RAC database there are also remote caches, so an instance must look not only in its local cache but also in the caches of the remote instances. If the block is in the local cache, it is returned from there; if not, instead of going to disk the instance first checks (via the interconnect) whether the block is available in a remote cache. This is because shipping a data block from a remote cache is faster than reading it from disk.
Cache Fusion Model
CVU (Cluster Verification Utility) - a utility that verifies the system meets all the criteria for an Oracle Clusterware installation.
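On a 10g system, the state of these components can be checked from the command line; a quick sketch, run as the clusterware owner (the node names in the CVU example are hypothetical):

```shell
# Overall health of the clusterware stack (CSS, CRS and EVM daemons)
crsctl check crs

# Location of the voting disk(s) on shared storage
crsctl query css votedisk

# OCR integrity, version and location
ocrcheck

# State of all CRS resources (instances, listeners, VIPs, services)
crs_stat -t

# CVU pre-installation check for two hypothetical nodes
cluvfy stage -pre crsinst -n node1,node2
```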
Saturday, January 24, 2009
Oracle Monitoring and Performance Tuning
One of the biggest responsibilities of a DBA is to ensure that the Oracle database is tuned properly. The Oracle RDBMS is highly tunable and allows the database to be monitored and adjusted to increase its performance.
One should do performance tuning for the following reasons:
The speed of computing might be wasting valuable human time (users waiting for response);
Enable your system to keep-up with the speed business is conducted; and
Optimize hardware usage to save money (companies are spending millions on hardware).
Although this FAQ is not overly concerned with hardware issues, one needs to remember that you cannot tune a Buick into a Ferrari.
Which database aspects should be monitored?
One should implement a monitoring system to constantly monitor the following aspects of a database. This can be achieved by writing custom scripts, implementing Oracle's Enterprise Manager, or buying a third-party monitoring product. If an alarm is triggered, the system should automatically notify the DBA (e-mail, page, etc.) to take appropriate action.
Infrastructure availability:
Is the database up and responding to requests
Are the listeners up and responding to requests
Are the Oracle Names and LDAP Servers up and responding to requests
Are the Web Listeners up and responding to requests
Etc.
Things that can cause service outages:
Is the archive log destination filling up?
Objects getting close to their max extents
Tablespaces running low on free space / objects that would not be able to extend
User and process limits reached
Etc.
Things that can cause bad performance:
Where should the tuning effort be directed?
Consider the following areas for tuning. The order in which steps are listed needs to be maintained to prevent tuning side effects. For example, it is no good increasing the buffer cache if you can reduce I/O by rewriting a SQL statement.
Database Design (if it's not too late):
Poor system performance usually results from a poor database design. One should generally normalize to the 3NF. Selective denormalization can provide valuable performance improvements. When designing, always keep the "data access path" in mind. Also look at proper data partitioning, data replication, aggregation tables for decision support systems, etc.
Application Tuning:
Experience shows that approximately 80% of all Oracle system performance problems are resolved by coding optimal SQL. Also consider proper scheduling of batch tasks after peak working hours.
Memory Tuning:
Properly size your database buffers (shared_pool, buffer cache, log buffer, etc) by looking at your buffer hit ratios. Pin large objects into memory to prevent frequent reloads.
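Pinning can be done with the DBMS_SHARED_POOL package (created by $ORACLE_HOME/rdbms/admin/dbmspool.sql); a sketch, with a hypothetical package name:

```sql
-- Pin a large, frequently executed package so it is not aged out ('P' = package)
EXEC SYS.DBMS_SHARED_POOL.KEEP('APPS.MY_LARGE_PKG', 'P');

-- How much shared pool memory is currently free
SELECT bytes FROM v$sgastat
 WHERE pool = 'shared pool' AND name = 'free memory';
```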
Disk I/O Tuning:
Database files need to be properly sized and placed to provide maximum disk subsystem throughput. Also look for frequent disk sorts, full table scans, missing indexes, row chaining, data fragmentation, etc.
Eliminate Database Contention:
Study database locks, latches and wait events carefully and eliminate where possible.
Tune the Operating System:
Monitor and tune operating system CPU, I/O and memory utilization. For more information, read the related Oracle FAQ dealing with your specific operating system.
What tuning indicators can one use?
The following high-level tuning indicators can be used to establish if a database is performing optimally or not:
Buffer Cache Hit Ratio
Formula: Hit Ratio = (Logical Reads - Physical Reads) / Logical Reads
Action: Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i) to increase hit ratio
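The ratio can be read from V$SYSSTAT, where logical reads are the sum of the 'db block gets' and 'consistent gets' statistics; a sketch:

```sql
SELECT ROUND((1 - phy.value / (cur.value + con.value)) * 100, 2) "Hit Ratio %"
  FROM v$sysstat cur, v$sysstat con, v$sysstat phy
 WHERE cur.name = 'db block gets'
   AND con.name = 'consistent gets'
   AND phy.name = 'physical reads';
```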
Library Cache Hit Ratio
Action: Increase the SHARED_POOL_SIZE to increase hit ratio
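The library cache hit ratio can be derived from V$LIBRARYCACHE (pin hits versus total pins); a sketch:

```sql
SELECT ROUND(SUM(pinhits) / SUM(pins) * 100, 2) "Library Cache Hit Ratio %"
  FROM v$librarycache;
```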
Etc.
What tools/utilities does Oracle provide to assist with performance tuning?
Oracle provides the following tools/utilities to assist with performance monitoring and tuning:
TKProf
UTLBSTAT.SQL and UTLESTAT.SQL - Begin and end stats monitoring
Statspack
Oracle Enterprise Manager - Tuning Pack
What is STATSPACK and how does one use it?
Statspack is a set of performance monitoring and reporting utilities provided by Oracle from Oracle8i and above. Statspack provides improved BSTAT/ESTAT functionality, though the old BSTAT/ESTAT scripts are still available. For more information about STATSPACK, read the documentation in file $ORACLE_HOME/rdbms/admin/spdoc.txt.
Install Statspack:
cd $ORACLE_HOME/rdbms/admin
sqlplus "/ as sysdba" @spdrop.sql -- Drop Statspack, if previously installed
sqlplus "/ as sysdba" @spcreate.sql -- Enter tablespace names when prompted
Use Statspack:
sqlplus perfstat/perfstat
exec statspack.snap; -- Take a performance snapshot
exec statspack.snap; -- Take a second snapshot, some time later
-- Get a list of snapshots
select SNAP_ID, SNAP_TIME from STATS$SNAPSHOT;
@spreport.sql -- Enter two snapshot id's for difference report
Other Statspack Scripts:
sppurge.sql - Purge a range of Snapshot Id's between the specified begin and end Snap Id's
spauto.sql - Schedule a dbms_job to automate the collection of STATPACK statistics
spcreate.sql - Installs the STATSPACK user, tables and package on a database (Run as SYS).
spdrop.sql - Deinstall STATSPACK from database (Run as SYS)
sppurge.sql - Delete a range of Snapshot Id's from the database
spreport.sql - Report on differences between values recorded in two snapshots
sptrunc.sql - Truncates all data in Statspack tables
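Connected as PERFSTAT, a job similar in spirit to the one spauto.sql submits can be scheduled with DBMS_JOB; a sketch, assuming hourly snapshots are wanted:

```sql
VARIABLE jobno NUMBER;
BEGIN
   DBMS_JOB.SUBMIT(:jobno,
                   'statspack.snap;',
                   TRUNC(SYSDATE + 1/24, 'HH'),       -- first run at the top of the next hour
                   'TRUNC(SYSDATE + 1/24, ''HH'')');  -- then every hour thereafter
   COMMIT;
END;
/
PRINT jobno
```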
When is cost based optimization triggered?
It's important to have statistics on all tables for the CBO (Cost Based Optimizer) to work correctly. If one table involved in a statement does not have statistics, Oracle has to revert to rule-based optimization for that statement. So you really want all tables to have statistics right away; it won't help much to just have the larger tables analyzed.
Generally, the CBO can change the execution plan when you:
Change statistics of objects by doing an ANALYZE;
Change some initialization parameters (for example: hash_join_enabled, sort_area_size, db_file_multiblock_read_count).
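Statistics can be gathered with ANALYZE or, from 8i onwards, with the preferred DBMS_STATS package; a sketch with hypothetical schema and table names:

```sql
-- Older syntax
ANALYZE TABLE emp COMPUTE STATISTICS;

-- Preferred from 8i onwards; cascade => TRUE also gathers index statistics
BEGIN
   DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT',
                                 tabname => 'EMP',
                                 cascade => TRUE);
END;
/
```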
How can one optimize %XYZ% queries?
It is possible to improve %XYZ% queries by forcing the optimizer to scan all the entries from the index instead of the table. This can be done by specifying hints.
If the index is physically smaller than the table (which is usually the case) it will take less time to scan the entire index than to scan the entire table.
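A sketch with hypothetical table and index names; note that for a fast full scan the index must contain every column the query references:

```sql
-- Force a full scan of the index instead of the table
SELECT /*+ INDEX_FFS(e emp_ename_idx) */ ename
  FROM emp e
 WHERE ename LIKE '%SON%';
```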
Where can one find I/O statistics per table?
The UTLESTAT report shows I/O per tablespace, but one cannot see which tables in a tablespace have the most I/O.
The $ORACLE_HOME/rdbms/admin/catio.sql script creates a sample_io procedure and table to gather the required information. After executing the procedure, one can do a simple SELECT * FROM io_per_object; to extract the required information.
For more details, look at the header comments in the $ORACLE_HOME/rdbms/admin/catio.sql script.
My query was fine last week and now it is slow. Why?
The likely cause of this is because the execution plan has changed. Generate a current explain plan of the offending query and compare it to a previous one that was taken when the query was performing well. Usually the previous plan is not available.
Some factors that can cause a plan to change are:
Which tables are currently analyzed? Were they previously analyzed? (i.e. was the query using RBO and now CBO?)
Has OPTIMIZER_MODE been changed in INIT.ORA?
Has the DEGREE of parallelism been defined/changed on any table?
Have the tables been re-analyzed? Were the tables analyzed using estimate or compute? If estimate, what percentage was used?
Have the statistics changed?
Has the INIT.ORA parameter DB_FILE_MULTIBLOCK_READ_COUNT been changed?
Has the INIT.ORA parameter SORT_AREA_SIZE been changed?
Have any other INIT.ORA parameters been changed?
What do you think the plan should be? Run the query with hints to see if this produces the required performance.
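A current plan can be generated with EXPLAIN PLAN and displayed with the utlxpls.sql script that ships with the database (a PLAN_TABLE, created by utlxplan.sql, must exist); the statement below is hypothetical:

```sql
EXPLAIN PLAN FOR
SELECT ename FROM emp WHERE deptno = 10;

@?/rdbms/admin/utlxpls.sql
```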
Why is Oracle not using the damn index?
This problem normally only arises when the query plan is being generated by the Cost Based Optimizer. The usual cause is because the CBO calculates that executing a Full Table Scan would be faster than accessing the table via the index. Fundamental things that can be checked are:
USER_TAB_COLUMNS.NUM_DISTINCT - This column defines the number of distinct values the column holds.
USER_TABLES.NUM_ROWS - If NUM_DISTINCT = NUM_ROWS then using an index would be preferable to doing a FULL TABLE SCAN. As NUM_DISTINCT decreases, the cost of using an index increases, thereby making the index less desirable.
USER_INDEXES.CLUSTERING_FACTOR - This defines how ordered the rows are in the index. If CLUSTERING_FACTOR approaches the number of blocks in the table, the rows are ordered. If it approaches the number of rows in the table, the rows are randomly ordered. In such a case, it is unlikely that index entries in the same leaf block will point to rows in the same data blocks.
Decrease the INIT.ORA parameter DB_FILE_MULTIBLOCK_READ_COUNT - A higher value will make the cost of a FULL TABLE SCAN cheaper.
Remember that you MUST supply the leading column of an index, for the index to be used (unless you use a FAST FULL SCAN or SKIP SCANNING).
There are many other factors that affect the cost, but sometimes the above can help to show why an index is not being used by the CBO. If from checking the above you still feel that the query should be using an index, try specifying an index hint. Obtain an explain plan of the query either using TKPROF with TIMED_STATISTICS, so that one can see the CPU utilization, or with AUTOTRACE to see the statistics. Compare this to the explain plan when not using an index.
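The figures above can be inspected directly in the data dictionary; a sketch with hypothetical table, column and index names:

```sql
SELECT num_distinct
  FROM user_tab_columns
 WHERE table_name = 'EMP' AND column_name = 'DEPTNO';

SELECT num_rows
  FROM user_tables
 WHERE table_name = 'EMP';

SELECT clustering_factor, leaf_blocks
  FROM user_indexes
 WHERE index_name = 'EMP_DEPTNO_IDX';
```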
When should one rebuild an index?
You can run the 'ANALYZE INDEX VALIDATE STRUCTURE' command on the affected indexes - each invocation of this command creates a single row in the INDEX_STATS view. This row is overwritten by the next ANALYZE INDEX command, so copy the contents of the view into a local table after each ANALYZE. The 'badness' of the index can then be judged by the ratio of 'DEL_LF_ROWS' to 'LF_ROWS'.
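A sketch with a hypothetical index name; remember that INDEX_STATS only ever holds the result of the last ANALYZE:

```sql
ANALYZE INDEX emp_deptno_idx VALIDATE STRUCTURE;

SELECT name, lf_rows, del_lf_rows,
       ROUND(del_lf_rows / lf_rows * 100, 2) "Deleted %"
  FROM index_stats;
-- A commonly quoted rule of thumb is to consider a rebuild
-- when the deleted percentage goes above 20%.
```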
How does one tune Oracle Wait events?
Some wait events from V$SESSION_WAIT and V$SYSTEM_EVENT views:
Event Name - Tuning Recommendation:
db file sequential read - Tune SQL to do less I/O. Make sure all objects are analyzed. Redistribute I/O across disks.
buffer busy waits - Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i); analyze contention from SYS.V$BH.
log buffer space - Increase the LOG_BUFFER parameter or move the log files to faster disks.
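The worst offenders can be listed from V$SYSTEM_EVENT; a sketch (the idle-event exclusion list below is only partial):

```sql
SELECT event, total_waits, time_waited
  FROM v$system_event
 WHERE event NOT IN ('pmon timer', 'smon timer', 'rdbms ipc message',
                     'SQL*Net message from client')
 ORDER BY time_waited DESC;
```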
What is the difference between DBFile Sequential and Scattered Reads?
Both "db file sequential read" and "db file scattered read" events signify time waited for I/O read requests to complete. Time is reported in hundredths of a second for Oracle 8i releases and below, and thousandths of a second for Oracle 9i and above. Most people confuse these events with each other because they think of how data is read from disk; instead, they should think of how data is read into the SGA buffer cache.
db file sequential read:
A sequential read operation reads data into contiguous memory (usually a single-block read, P3=1, but it can be multiple blocks). Single-block I/Os are usually the result of using indexes. This event is also used for rebuilding the control file and for reading datafile headers (P2=1). In general, this event is indicative of disk contention on index reads.
db file scattered read:
Similar to db file sequential read, except that the session reads multiple data blocks and scatters them into different, discontiguous buffers in the SGA. This statistic NORMALLY indicates disk contention on full table scans. Rarely, data from a full table scan fits into a contiguous buffer area; those waits then show up as sequential reads instead of scattered reads.
The following query shows average wait time for sequential versus scattered reads:
prompt "AVERAGE WAIT TIME FOR READ REQUESTS"
select a.average_wait "SEQ READ", b.average_wait "SCAT READ"
from sys.v_$system_event a, sys.v_$system_event b
where a.event = 'db file sequential read'
and b.event = 'db file scattered read';
Where can one get more info about Oracle Tuning?
OracleTuning.com - Great site for Oracle Tuning Information
Ixora = Unix + Oracle - scripts, tips and advanced performance tuning information
Kevin Loney's New DBA FAQs: Part 2 - Database Monitoring
On-Line Services:
Yet Another Performance Profiling (YAPP) Method - Upload your BSTAT/ ESTAT and StatsPack output for analysis
itrprof SQL Analyzer - Upload your SQL_TRACE/Event10046 trace files to find bottlenecks and tuning advice
Thursday, January 22, 2009
DMZ Configuration with 11i
The E-Business Suite 11.5.10.2 release gives you options ranging from the simple to more complex, tightly secured architectures, as relatively simple out-of-the-box solutions.
Typically the E-Business Suite supports the following topologies in a DMZ architecture:
* Using Reverse Proxies in DMZ
* Using Separate Oracle E-Business Suite 11i Web Tier in DMZ
* Using HTTP Hardware Load Balancers in DMZ
* Using Reverse Proxies only in DMZ
You can find more details on the supported topologies within the DMZ architecture in the following Metalink note:
DMZ Configuration with Oracle E-Business Suite 11i
In the current post I will discuss a less complex case where we use an E-Business Suite application tier as the external tier of our DMZ architecture. The diagram below depicts this architecture in simplified form.
As discussed earlier, we will build a second application server for Oracle Applications using adclone, which will be used as the external application tier of the instance.
Current Configuration
Oracle Applications 11.5.10.2
Dual Node
Operating System Sun Solaris
DMZ Patches
Ensure the following patches have been applied on the instance before proceeding ahead.
3942483 - An autoconfig patch to support your reverse proxy and DMZ configuration
5478710 An autoconfig template roll up.
Pre Clone
Shut down the application tier services and run adpreclone.pl on the application tier of your instance.
#su - applsam
$cd samappl
$. ./APPSORA.env
$cd $COMMON_TOP/admin/scripts/SAM_SAMLXA
$perl adpreclone.pl appsTier
Filesystem Copy
After that, copy the application tier filesystem across to the external machine:
tar -cf - applsam | ssh Node_B tar -xf - -C /u02/sam/
Post Clone on the External Tier
Now run the post clone (adcfgclone.pl) on the external tier and specify that it is your Web Server node only:
# su - applsam
$ cd samcomn/clone/bin
$ perl adcfgclone.pl appsTier
Run the txkChangeProfH.sql
Run the txkChangeProfH.sql script to update the default hierarchy type of your profile options to server:
$ cd $FND_TOP/patch/115/sql/
$ sqlplus apps/xxxxxxxx @txkChangeProfH.sql SERVRESP
Startup Services
Start the application services on both the internal and external tiers and test connectivity.
Verify the Nodes
Verify that the nodes have been registered as expected in the instance.
Navigate to
System Administrator->Install->Nodes
Update Node Trust Level
Update the node trust level by identifying the node that you want to act as your external node.
Navigate to
System Administrator->Profile->System and query for 'Node Trust Level'
Set it as external for the external node at the server level only.
Update Responsibilities
After updating the node trust level, users who come in from the external application server will only be able to see those responsibilities that have been marked as external.
Navigate to
System Administrator->Profile->System and query for 'Responsibility Trust Level'
Choose the responsibilities that you want to be external; in my case it was iSupplier Registration.
Update Home Page Mode
Lastly, update the home page mode to Framework Only.
Navigate to
System Administrator->Profile->System and query for 'Self-Service Personal Home Page Mode' and set it to framework only.
Autoconfig
Run autoconfig on both the nodes and restart the application tier services.
Wednesday, January 21, 2009
MIGRATING THE DATABASE FROM ONE MACHINE TO ANOTHER MACHINE USING RMAN
DBA11(TARGET), DBA12(CATALOG), DBA15(AUXILIARY DATABASE)
1) create the directory to store rman backup at target database
DBA11$cd /disk3/oradata/san
san$mkdir rman
2) create the password file in dba directory at target database
DBA11$cd $ORACLE_HOME/dbs
dbs$orapwd file=orapw$ORACLE_SID password=sys force=y
3) configure listener at target database
$cp $ORACLE_HOME/network/admin/listener.ora ~
~$vi listener.ora
rman =
(port=2044)(host=dba11)
(sid=
:wq!
4) start the listener at target side
$lsnrctl start
5)create tablespace and user at catalog database
DBA12$
sys>create tablespace rman datafile '/disk3/oradata/san/rman.dbf' size 100m;
sys>create user rman identified by rman;
sys>grant connect,resource,recovery_catalog_owner to rman identified by rman;
sys>alter user rman default tablespace rman;
sys>exit
6)configure tnsnames at catalog database
$cp $ORACLE_HOME/network/admin/tnsnames.ora ~
~$vi tnsnames.ora
torman =
(port=2044) (host=dba11)
(sid=
:wq!
$tnsping torman
7)create catalog under rman user
$rman catalog rman/rman
RMAN> create catalog;
RMAN>exit
$sqlplus rman/rman
SQL>select count(*) from tab;
90 rows
SQL>exit
$rman catalog rman/rman@torman
RMAN>register database;
RMAN>spool log to script.log
RMAN>show all;
RMAN>exit
8)configure the script.log file
$rman catalog rman/rman@torman
RMAN>@script.log
RMAN>backup database plus archivelog;
DBA15(AUXILIARY DATABASE)
$export ORACLE_SID=san
9)now configure listener at auxiliary database
$cp $ORACLE_HOME/network/admin/listener.ora ~
$vi listener.ora
aux =
(port=5678)(host=dba15)
(sid=san)
:wq!
$lsnrctl start aux
10) configure tnsnames.ora in the catalog database
DBA12$vi tnsnames.ora
toaux =
(port=5678)(host=dba15)
(sid=san)
:wq!
$tnsping toaux
DBA15$
11) create the password file at auxiliary database
dbs$ orapwd file=orapw$ORACLE_SID password=xyz force=y
dbs$cp init.ora init$ORACLE_SID.ora
dbs$vi initsan.ora
db_name=sandeep(the auxiliary database name and target database name should be same)
control_files=/disk1/oradata/san/control.ctl
user_dump_dest=/disk1/oradata/san/udump
core_dump_dest=/disk1/oradata/san/cdump
background_dump_dest=/disk1/oradata/san/bdump
db_file_name_convert=/disk3/oradata/sandeep /disk1/oradata/san (use these two parameters if you want the files in a different location)
log_file_name_convert=/disk3/oradata/sandeep /disk1/oradata/san
:wq!
$mkdir -p /disk3/oradata/sandeep/{cdump,bdump,udump,arch,rman} (create target database structure)
$mkdir -p /disk1/oradata/san/{cdump,udump,bdump,arch} (create auxiliary c,r,d file location)
$cd /disk3/oradata/sandeep/rman
rman$ftp -i dba11
loginid:sandeep
password:
ftp>cd /disk3/oradata/sandeep/rman
ftp>mget *
ftp>bye
rman$du -h
358m (this is the backup size of your target database)
12) now keep the auxiliary database in nomount stage
sys>startup nomount
13) now connect to auxiliary database using rman
DBA12$rman catalog rman/rman target sys/sys@torman auxiliary sys/xyz@toaux
now it will connect to target database as well as auxiliary database
RMAN>duplicate target database to san; (here we have to mention the database name of the target database; my database name is san)
14) Once we issue the above command, the migration will be completed.
