Oracle Performance Tuning

Table of Contents

1. Overview

2. Assumption

3. Tuning Methodology

4. Application Tuning

5. Instance Tuning - Key Initialization Parameters

6. Special Parameters

7. I/O Tuning

8. Contention Tuning

9. Oracle Tuning Utilities

10. Optimizer Hints

11. Tuning Views

12. Identify missing indexes

13. Identify index fragmentation

14. Database Sizing

15. O/S Tuning

16. Kernel Tuning

17. Swap Tuning

18. Memory Tuning

19. O/S Monitoring

20. Rollback segment management

21. Monitor Archive Log Status

22. Managing Datafiles

23. Managing Sessions

24. Managing the Control File

25. Managing the Online Redo Log Files

26. Managing Tablespace

27. Tablespace Fragmentation

28. Data Block Fragmentation

29. Index Rebuild

30. Conclusion

31. Appendix

1. Overview

An Oracle database needs to be monitored on a continuous basis, and performance tuning and maintenance must be done regularly. Oracle performance management can be classified into two types: proactive and reactive. Proactive performance management involves designing and developing the performance architecture for a system during the early stages of an implementation. This covers hardware selection, performance and capacity planning, mass storage system selection, I/O sub-system configuration and tuning (i.e., RAID), and tailoring the various components to suit the complex needs of the application and the Oracle database. The reactive component involves performance evaluation, troubleshooting, tuning and fine-tuning an Oracle environment within the boundaries of the existing hardware and performance architecture. The primary mission of this document is to provide a methodology for performance tuning Oracle and a core (minimal) set of parameters and components that require configuration and tuning. The document also covers database maintenance, defragmentation and index rebuilds.

2. Assumption

This document assumes the reader has a working knowledge of Oracle on the UNIX platform. Exposure to performance tuning and database maintenance will be a big plus in understanding the document.

3. Tuning Methodology

The following outlines an effective methodology to engage in a performance tuning effort:

1. Set reasonable performance tuning goals.

2. Measure and document current performance.

3. Identify the current Oracle performance bottlenecks (what Oracle is waiting for, which SQL statements are part of that wait event).

4. Identify the current OS bottlenecks.

5. Tune the required component (application, database, I/O, contention, OS, and so on).

6. Track and exercise change-control procedures.

7. Measure and document the resulting performance.

8. Repeat steps 3 through 7 until the tuning goal is met.

4. Application Tuning

The troubleshooting effort should start by looking at v$system_event to find the significant wait events on the system. The next step is to look at v$session_event for the sessions that are causing and/or experiencing those wait events. The final step is to get the details behind the event from v$session_wait. In parallel with this effort, a similar drill-down should be done from the O/S, by analyzing core CPU, memory and I/O related statistics.

The First Step – Query V$System_Event

The v$system_event view provides a bird’s eye view of all the events in an Oracle system. Even though it does not contain session-level information (current or past), it does sum all the waits since the last time the instance was bounced. The statistics in this dynamic performance view are reset to zero on instance startup; for that reason, the statistics in this view should be sampled over time. A sample query follows the column descriptions below.

The columns in the v$system_event dynamic performance view are:

EVENT: This is the name of an event. Some of the more common events are enqueue wait, buffer busy wait, latch waits, db file scattered read, db file sequential read, and free buffer waits.

TOTAL_WAITS: This is the total number of waits for a given event since the time the instance was bounced.

TOTAL_TIMEOUTS: This column provides the total number of wait time-outs for the specific event since the time the instance was bounced.

TIME_WAITED: This is the total wait time (in 1/100ths of a second) by all sessions for a given event since the time the instance was bounced.

AVERAGE_WAIT: This is the average wait time (in 1/100ths of a second) by all sessions for a given event since the instance was bounced. AVERAGE_WAIT = TIME_WAITED / TOTAL_WAITS.
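As an illustration, a query like the following lists the events with the most accumulated wait time (a minimal sketch; the threshold of 10000 centiseconds is arbitrary and should be adjusted to the system):

select event, total_waits, time_waited, average_wait
from v$system_event
where time_waited > 10000
order by time_waited desc;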

The Second Step - Query V$Session_Event

The v$session_event view provides the same information as v$system_event at the session level, plus relevant session information (e.g., SID). This view allows drilling down from the system-wide events to the session level, to determine which session is causing and/or experiencing a given wait event.
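To drill down from a suspicious system-wide event to the sessions experiencing it, a query along the following lines can be used (a sketch; 'db file scattered read' is only an example event):

select se.sid, s.username, se.total_waits, se.time_waited
from v$session_event se, v$session s
where se.sid = s.sid
and se.event = 'db file scattered read'
order by se.time_waited desc;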

The Final Step - Query V$Session_Wait

The v$session_wait view provides low-level drill-down information at the session level for a given event. Unlike some of the other views, which display totals, this view displays session-level wait information in real time. It is important to note that iteratively querying this view may show different results; this can be attributed to the activity happening on the database.

The relevant columns that provide the needed information in the v$session_wait dynamic performance view are:

SID: This is the session identifier number.

SEQ#: This number is an internal sequence number for the wait event related to this session. This column can be utilized to determine the number of waits for a given event that a session has experienced.

EVENT: This is the name of the event. Some of the more common events are enqueue wait, buffer busy wait, latch waits, db file scattered read, db file sequential read, and free buffer waits.

P1, P2, P3: These parameters point to more details about the specific wait. The values in these columns have logical relationships (foreign keys) to other views, and the interpretation of a given value is wait-event dependent.

For example, for the wait event - db file scattered read (full table scan), p1 is the file number (points to v$filestat or dba_data_files) and p2 is the block number (related to dba_extents, sys.uet$). But for latch waits, p2 is the number of the latch, which points to v$latch. For the complete list of the waits with their associated parameters, please refer to the Oracle Reference Manual.

STATE: The state of given event is a very important indicator, as it provides details for interpreting the following two columns, namely wait_time and seconds_in_wait. Without fully understanding the state, the wait_time and seconds_in_wait numbers can be worthless.

There are four possible states:

  1. Waiting - The session is currently waiting for the event.

  2. Waited Unknown Time - The Oracle initialization parameter timed_statistics is not set to TRUE, i.e., is set to FALSE.

  3. Waited Short Time - The session waited for an insignificant amount of time; not really worth investigating.

  4. Waited Known Time - If the resource that is waited upon is gained at a later time, the state changes from Waiting to Waited Known Time.

WAIT_TIME: The value for this column is state dependent. A drill-down query sketch follows.
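Putting the drill-down together, the real-time detail for a given session can be retrieved with a query such as the following sketch (the SID of 42 is hypothetical; substitute the session of interest):

select sid, seq#, event, p1, p2, p3, state, wait_time, seconds_in_wait
from v$session_wait
where sid = 42;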

It is commonly estimated that 80% of all Oracle system performance problems are resolved by coding optimal, commonsense SQL. Any application tuning effort involves prudent usage of indexes, appropriate use of full-table scans, setting the optimal degree of parallelism, selection of the right join methodology, and so on.

These are some of the key elements required for optimal application performance. There is really no substitute for optimal SQL, as it results in reduced CPU utilization, fewer I/O requests, and better response times. Put another way, optimal SQL provides a level of application scalability that hardware cannot offer without very significant expense; and even after such expense, performance sometimes does not increase significantly.


5. Instance Tuning - Key Initialization Parameters

The following are the core Oracle initialization parameters (configured in the init.ora file) that are known to provide the biggest impact on instance tuning. All of the below-mentioned parameters affect the workings of Oracle and the System Global Area (SGA). Any changes made to these parameters need to be tested before being implemented in a production environment. Once the changes are made on production, monitoring of the relevant Oracle dynamic performance views and O/S statistics needs to be done to detect any unusual behavior caused by the configuration change.

DB_BLOCK_SIZE

This parameter is set prior to database creation. It determines the size of each block within the database and cannot be changed without recreating the database. In benchmarks, it has been observed that on each doubling of the database block size, response times for I/O-intensive batch operations have reduced by 40%. On most operating systems, using the 32-bit version of the Oracle software, DB_BLOCK_SIZE can be configured as a power of 2 up to 16K.

The configuration of db_block_size should usually adhere to the following formula:

DB_BLOCK_SIZE = FILESYSTEM BLOCKSIZE >= O/S PAGESIZE

This will ensure that Oracle-generated I/Os are optimally serviced and the I/O sub-system is not stressed by the overhead of redundant or unnecessary I/O. Further, this formula will also ensure that any I/O performed with respect to paging does not incur the overhead of multiple I/O system calls. For PeopleSoft applications, DB_BLOCK_SIZE should be 8K.

SHARED_POOL_SIZE

Set in bytes, this parameter defines the size of the shared pool area in the SGA. The sizing of this component is very heavily dependent on the nature of the application (i.e., whether it reuses SQL or whether it generates dynamic SQL etc.). It is also dependent on the number of concurrent users and whether or not the instance is configured to support the multi-threaded server (MTS). When an instance is configured for MTS, the shared pool area needs to be configured significantly larger, as the cursor-state and the user-session-data components of the Program Global Area (PGA) are moved into the shared pool.

This parameter needs attention only if the “cache-hit ratios” of the library cache or the dictionary cache are persistently below 90%. But if the application does not use bind variables and/or shared cursors, then no amount of memory will bring the shared pool cache hit ratios to over 90%.

Over-allocation of the shared pool area can result in increased parse time and, in some cases, abnormal hanging of SQL statements. If the application does not re-use SQL effectively, any amount of memory configured for the library cache and dictionary cache will not improve their cache-hit ratios. Another factor that needs to be considered in the sizing of the shared pool is the amount of stored PL/SQL code that needs to be readily accessible by the application. For PeopleSoft applications, the ideal SHARED_POOL_SIZE is 300 MB.
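The library cache and dictionary cache hit ratios referred to above can be sampled with queries such as the following (a sketch based on the standard v$librarycache and v$rowcache views):

select sum(pins - reloads) / sum(pins) * 100 "Library Cache Hit %"
from v$librarycache;

select sum(gets - getmisses) / sum(gets) * 100 "Dictionary Cache Hit %"
from v$rowcache;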

LOG_BUFFER

Monitoring the log buffer space wait event in V$SESSION_WAIT is required to tune this memory structure. It should be mentioned that a badly sized online redo log file can also cause waits for redo requests. For PeopleSoft applications, the ideal LOG_BUFFER is 5 MB.

ROLLBACK_SEGMENTS

The rule of thumb for the number of rollback segments is (# of concurrent users/4). Rollback segment sizing needs to be done in a proactive fashion, such that the extent sizes are adequate and the number of MINEXTENTS is 20. The formula to size the extents of a “normal” rollback segment, is to take the average value of USED_UBLK from v$transaction and round up to the next “power of 2”. It is obvious that the rollback segments used for large operations need to be sized significantly larger, to avoid dynamic extension of the rollback segment at runtime. To distribute I/O operations against rollback segments, it is recommended that the rollback segment tablespace be striped across multiple disks.
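The average USED_UBLK figure used in the sizing formula above can be sampled while a representative workload is running, for example (a sketch; multiply the result by DB_BLOCK_SIZE to convert blocks to bytes):

select avg(used_ublk) from v$transaction;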

PROCESSES

This parameter defines the number of processes that can attach to the Oracle SGA and should be set keeping in mind the SEMMNS O/S kernel parameter. Setting this beyond the value of SEMMNS will result in failures on database startup.

6. Special Parameters

The following parameters should not be modified from the default values until all documented performance-tuning techniques have been exhausted. The working efficacy of these parameters should be tested thoroughly before implementing them in a production environment. These parameters are most significant for packaged applications where the code is usually inaccessible. It is also relevant for environments that require hash joins for their “batch processing”, but want to hold back the optimizer from being over-influenced by the hash join method.

OPTIMIZER_MAX_PERMUTATIONS

This parameter defaults to a value of 80000. When set to a value below 80000, it forces the optimizer to try up to four different tables as the driving table for queries that involve joins, with the optimizer then picking the least expensive of the plans it generates. Usually, the default behavior is to build a plan with the smallest table as the driving table. The default behavior may not always generate the most suitable plan, especially for packaged applications. The result of setting this parameter is a nominal increase in parse time, but a potentially significant reduction in the execution time of SQL statements.

OPTIMIZER_INDEX_COST_ADJ

This parameter defaults to a value of 100. The range of values for this parameter is 1-10000. When it is set to a low value (1-10), the optimizer is more strongly encouraged to use index scans over full table scans.

OPTIMIZER_INDEX_CACHING

This parameter defaults to a value of 0. The range of values for this parameter is 0-100. When it is set to 99, the optimizer is encouraged to use nested loops over other join methods. Hash joins are suitable for applications where small tables are joined with very large tables and a significant portion of the large tables is processed. Setting this parameter to a value of 99 does not turn off hash joins, but it does hold the optimizer back from choosing hash joins as the default join method.
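For testing, all three of these parameters can be trialed at the session level before being committed to the init.ora. The following sketch uses the values discussed above purely as illustrations, not as recommendations:

alter session set optimizer_max_permutations = 79000;
alter session set optimizer_index_cost_adj = 10;
alter session set optimizer_index_caching = 99;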

7. I/O Tuning

I/O tuning is a critical component of system tuning. It involves, among other tasks, spreading hot files across multiple drives/volumes, employing optimal striping methodologies, identifying I/O sub-system bottlenecks, identifying controller bottlenecks, and choosing the optimal level of RAID based on the type of application. I/O tuning should be done after gaining a full understanding of the workings of Oracle and studying the architecture of the Oracle RDBMS. I/O tuning should always be preceded and followed by I/O statistics monitoring, such as average service time, IOPS, and average length of the disk queue.

8. Contention Tuning

Most of the contention problems related to Oracle can be proactively managed by configuring the concerned initialization parameters. Improper configuration of the latch parameters in the init.ora can cause contention, but rather than blowing this out of proportion, the required parameters can be configured and proactively dealt with.

Database objects such as tables can have multiple points of contention. A key one is the number of freelists configured (the default is 1). The freelist structure maintains the blocks in the table that are available for insert. It is relevant to configure this for those tables where a significant number of concurrent inserts occur. To proactively counter freelist contention, the FREELISTS parameter can be configured during table creation, as shown in the sketch below. A value of (2 * # of CPUs) is considered optimal. Proactive freelist contention tuning is the way to fix the problem before it becomes one.
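The FREELISTS storage parameter is set at object creation time. The following sketch (the table, column and tablespace names are hypothetical) configures 8 freelists, corresponding to a 4-CPU machine:

create table orders
( order_id number,
  order_date date )
tablespace users_ts
storage (freelists 8);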

9. Oracle Tuning Utilities

ANALYZE

If using the cost-based optimizer, run the ANALYZE command for tables and indexes in the database. This allows the optimizer to generate efficient access plans for stored and dynamic SQL statements. If using the rule-based optimizer, this is not necessary.
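Typical usage is sketched below; the object names are hypothetical, and ESTIMATE with a sample percentage is a common compromise for very large tables:

analyze table ps_ledger compute statistics;
analyze table ps_ledger estimate statistics sample 20 percent;
analyze index ps_ledger_idx compute statistics;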

SQL Trace and TKPROF

Oracle’s SQL trace facility provides performance information on individual SQL statements. The SQL trace facility generates the following statistics for each statement:

Parse, execute, and fetch counts

CPU and elapsed times

Physical reads and logical reads

Number of rows processed

Misses on the library cache

The SQL trace facility is enabled for a session or for an instance. When the SQL trace facility is enabled, performance statistics for all SQL statements executed in a user session or in an instance are placed into a trace file.

The TKPROF program is run to format the contents of the trace file and place the output into a readable output file (a usage sketch follows the list below). Optionally, TKPROF can also:

Determine the execution plans of SQL statements.

Create a SQL script that stores the statistics in the database.
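A typical session-level trace cycle might look like the following sketch (the trace file name and the scott/tiger credentials are placeholders; the trace file itself is written to USER_DUMP_DEST):

alter session set timed_statistics = true;
alter session set sql_trace = true;
-- ... run the SQL statements to be analyzed ...
alter session set sql_trace = false;

The trace file is then formatted from the operating system prompt, for example:

tkprof ora_12345.trc report.txt explain=scott/tiger sys=no sort=exeela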

10. Optimizer Hints

SQL statements can be manually tuned, and the optimizer’s default actions can be overridden, by including hints (in the form of comments) within each statement. This forces the statements to be processed as specified rather than how the optimizer would normally handle them. For example, in the following SELECT statement, the hint “/*+ALL_ROWS*/” has been added:

SELECT /*+ALL_ROWS*/
A.EMPLID,
A.NAME
FROM table_name A;

The available Oracle optimizer hints (comments) are listed and defined in the following table:

Comment            Instruction

/*+ALL_ROWS*/      Optimize SQL for best throughput.
/*+AND_EQUAL*/     Use index merging on specified tables.
/*+CLUSTER*/       Use a cluster scan for a specified table.
/*+COST*/          Use cost-based optimizer always.
/*+FIRST_ROWS*/    Optimize SQL for best response times.
/*+FULL*/          Use a full-table scan on a specified table.
/*+HASH*/          Use a hash search on a specified table.
/*+INDEX*/         Force the use of a specified index for a specified table (assuming the index and the predicates exist for the index).
/*+INDEX_ASC*/     Same as INDEX.
/*+INDEX_DESC*/    Same as INDEX, but in descending order.
/*+ORDERED*/       Use the FROM clause join sequence.
/*+ROWID*/         Use ROWID (row identifier) access method.
/*+RULE*/          Use rule-based optimizer only.
/*+USE_MERGE*/     Use sort-merge join technique on specified tables.
/*+USE_NL*/        Use nested-loop join technique on specified tables.

11. Tuning Views

Views are effectively SELECT statements and can be tuned just as any SELECT statement can. Remember that views cannot retrieve data any faster than the underlying SELECT statement can. At all costs, avoid specifying views of views or views within SQL sub-query clauses; these tend to confuse the optimizer, resulting in full-table scans.

12. Identify missing indexes

There is no guaranteed way of finding missing indexes. The following is intended to help identify where beneficial indexes do not exist.

To find the top SQL statements that have caused most block buffer reads:

Select buffer_gets, sql_text from v$sqlarea where buffer_gets > 10000 order by buffer_gets desc;

If this returns a large number of rows, then increase the ‘buffer_gets’ threshold; if it returns no rows, then decrease this threshold.

Typically, most of these will be select statements of some sort. Considering each in turn, identify what indexes would help with the query and then check that those indexes exist. Create them if necessary.

To find the most frequently executed SQL:

Select executions, buffer_gets, sql_text from v$sqlarea where executions > 10000 order by executions desc;

If this returns a large number of rows, then increase the ‘executions’ threshold. If it returns no rows, then decrease it.

13. Identify index fragmentation

To obtain information about an index:

Analyze index index_name validate structure;

This populates the table ‘index_stats’. It should be noted that this table contains only one row, and therefore only one index can be analyzed at a time.

An index should be considered for rebuilding under any of the following conditions:

  • The percentage of deleted rows exceeds 30% of the total, i.e. if del_lf_rows / lf_rows > 0.3.

  • If the ‘HEIGHT’ is greater than 4.

  • If the number of rows in the index (‘LF_ROWS’) is significantly smaller than ‘LF_BLKS’, this can indicate a large number of deletes, and the index should be rebuilt. (A query for checking these criteria is sketched below.)
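After the ANALYZE ... VALIDATE STRUCTURE, the criteria above can be checked with a query like this sketch against INDEX_STATS (the DECODE guards against division by zero on an empty index):

select name, height, lf_rows, lf_blks, del_lf_rows,
       del_lf_rows / decode(lf_rows, 0, 1, lf_rows) del_ratio
from index_stats;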

14. Database Sizing

Database sizing covers the SGA (System Global Area) size when configuring or installing the database to host the PeopleSoft applications. PeopleSoft can have four kinds of environments, and the SGA size can differ between them. There is no hard-and-fast rule for sizing the Oracle database for PeopleSoft, but as a general rule, the SGA can be sized as follows in the different environments:

DEMO: - 150-200 MB

DEV: - 200-300 MB

TEST: - 400-500 MB

PROD: - 700- 1000 MB.

Beyond these sizes, one can customize the environment per the requirements and the availability of resources in the DEV, TEST and PROD environments; for DEMO applications, however, this is the standard size, and PeopleSoft applications will work fine with this amount of memory.

15. O/S Tuning

Tuning of O/S kernel parameters should be done per the guidelines defined in the Installation and Configuration Guide. Most O/S kernel parameters can retain their default values. There are some that do require attention and these are discussed in the following sections.

16. Kernel Tuning

Two key UNIX kernel parameters that require special attention are SHMMAX and SEMMNS. The former defines the size of the largest shared memory segment that the kernel will support and the latter defines the number of semaphores in the system. SHMMAX should be configured to support the largest shared pool area for any database. While having the entire SGA in one shared memory segment is good, the performance difference in having the SGA split across multiple shared memory segments versus 1 segment is less than 1%. On Solaris, the entire SGA needs to be in one shared memory for USE_ISM to work effectively.

SEMMNS should be configured to support the maximum number of processes that will attach to the SGAs of all instances on a given machine. If a database needs to support 1000 users, Oracle will need 1000 semaphores just to support the user population (not to mention the required semaphores for the background processes). It is useful to mention here that SEMMNS can be configured at (2 * # of processes on the system) to ensure that the system does not undergo semaphore starvation, as Oracle procures the 1000 semaphores up front, regardless of whether 1000 processes actually attach to the Oracle SGA.


It is worth mentioning that the upper bound for the file system buffer cache (usually configured by a kernel parameter) should be limited to 10-15% of total RAM. Failure to configure this parameter on some flavors of UNIX (e.g., HP-UX) can cause significant degradation in performance, noticeable in the form of intense levels of paging, process deactivation and swapping. This is because in some cases the default kernel parameters allow up to 50% of total RAM for the file system buffer cache.

17. Swap Tuning

If the system is configured with adequate memory and the various components are allocated optimally, the need for swap areas is very minimal. In many implementations, it has been observed that with an optimum memory configuration, the amount of swap area actually required falls well below the traditional (3 x total RAM) rule of thumb. This is more relevant if the SGA is pinned or locked in memory using the required initialization parameters. Ideally, if things are configured right, there should be very low levels of paging, let alone swapping.

18. Memory Tuning

The total available memory on a system should be configured in such a manner that all components of the system function at optimum levels. The following is a rule-of-thumb breakdown to assist in memory allocation for the various components in a system with an Oracle back-end.

SYSTEM COMPONENT                         % MEMORY ALLOCATION

Oracle SGA Components                    50%
Operating System + Related Components    15%
User Memory                              35%

The following is a rule-of-thumb breakdown of the 50% of memory that is allocated for an Oracle SGA. These are good starting numbers and will potentially require fine-tuning when the nature and access patterns of the application are determined.

ORACLE SGA COMPONENT       % MEMORY ALLOCATION

Database Buffer Cache      80%
Shared Pool Area           12%
Fixed Size + Misc.         1%
Redo Log Buffer            0.1%

The following is an example to illustrate the above guidelines. In the following example, it is assumed that the system is configured with 2 GB of memory, with an average of 100 concurrent sessions at any given time. The application requires response times within a few seconds and is mainly transactional. But it does support batch reports at regular intervals.

SYSTEM COMPONENT                         ALLOCATED MEMORY (IN MB)

Oracle SGA Components                    1024
Operating System + Related Components    306
User Memory                              694

In the aforementioned breakdown, approximately 694MB of memory will be available for Program Global Areas (PGA) of all Oracle Server processes. Again, assuming 100 concurrent sessions, the average memory consumption for a given PGA should not exceed ~7MB. It should be noted that SORT_AREA_SIZE is part of the PGA.

ORACLE SGA COMPONENT       ALLOCATED MEMORY (IN MB)

Database Buffer Cache      800
Shared Pool Area           128 - 188
Fixed Size + Misc.         8
Redo Log Buffer            1 (average size is 512K)

NOTE:

It should be noted that the final configuration, after multiple iterations of fine-tuning, may look different from the above. Also, keep in mind that these numbers are a good start; frequent monitoring of the various cache-hit ratios, the ratio of sorts in memory vs. sorts on disk, etc., will determine the final numbers for the above components.

19. O/S Monitoring

Monitoring the operating system needs to be done while the database is busy, as the nature of activity on the database and its effect on the system, will be reflected by the operating system via different metrics. For example, to measure the level of CPU utilization one can use the system activity reporter (sar –u interval frequency), mpstat (on Sun Solaris), iostat, top (most flavors of UNIX) and vmstat. Sar and vmstat can also be used to determine a variety of other metrics namely memory utilization, I/O metrics, misc. queue sizes, misc. queue waits, amount of paging/swapping activity etc. On Solaris, the mpstat utility can also be used to determine some of the aforementioned metrics for each CPU. On Solaris, Adrian’s Tool for performance management can be utilized. One or more of these utilities should be used to determine the health of the system and to determine any utilization issues or bottlenecks.

20. Rollback segment management

The following checklist can be used to find the rollback segments in the database and their status:

Note: In Oracle 8.1.x, the svrmgrl utility can be used to perform DBA work. In Oracle 9.x there is no svrmgrl utility, and everything has to be done from SQL*Plus. In Oracle 8.1.x, SQL*Plus can also be used for DBA tasks, provided you log in as SYSDBA.

#  TASK                                                                 DONE (Y/N)

1. Log in to the operating system and confirm that the desired Oracle instance is running. Verify the status of the instance using the Oracle processes. To check the correct instance name, verify the ORACLE_SID environment variable.

2. Log in to SQL*Plus as SYSDBA.

3. Using the following query, the DBA can find each rollback segment name and its status (online or offline):

select name, status from v$rollname, v$rollstat where v$rollname.usn = v$rollstat.usn;

The following checklist can be followed to create a new rollback segment in a database instance.

#  TASK                                                                 DONE (Y/N)

1. Log in to the operating system and confirm that the desired Oracle instance is running. One can check the status of the instance using the Oracle processes. To check the correct instance name, verify the ORACLE_SID environment variable.

2. Log in to SQL*Plus as SYSDBA.

3. Prerequisite for creating a new rollback segment: one must have the CREATE ROLLBACK SEGMENT system privilege.

4. Creating a new rollback segment covers the following options:

  1. Public or private: public means available to any instance. By default a rollback segment is private and will be available to an instance if it is mentioned in that instance's parameter file.
  2. Rollback segment name: its identifier, which must abide by the Oracle identifier rules.
  3. Tablespace name: the name of the tablespace in which this rollback segment is to be created. If no tablespace is specified, it will be created in the SYSTEM tablespace, which is not recommended.
  4. Storage clause: specifies the characteristics of the rollback segment, such as initial extent, next extent, maxextents and minextents.

5. Example of creating a new rollback segment:

CREATE ROLLBACK SEGMENT rbs_big
   TABLESPACE big_ts
   STORAGE
   ( INITIAL 10K
     NEXT 10K MAXEXTENTS UNLIMITED );

6. Points to take care of:

  1. Do not use PUBLIC unless running Oracle Parallel Server.
  2. INITIAL should be equal to NEXT.
  3. PCTINCREASE should be 0.
  4. MINEXTENTS should be 20.

The following checklist can be followed to alter a rollback segment in a database instance.

#  TASK                                                                 DONE (Y/N)

1. Log in to the operating system and confirm that the desired Oracle instance is running. Verify the status of the instance using the Oracle processes. To check the correct instance name, verify the ORACLE_SID environment variable.

2. Log in to SQL*Plus as SYSDBA.

3. Prerequisite for altering a rollback segment: one must have the ALTER ROLLBACK SEGMENT system privilege.

4. Altering a rollback segment covers the following options:

  1. Rollback segment name.
  2. Online or offline: brings the segment online if it is offline, or takes it offline if it is online. When a new rollback segment is created it remains offline, and it has to be brought online using ALTER ROLLBACK SEGMENT.
  3. Storage clause: to change the storage parameters. The MAXEXTENTS, OPTIMAL and SHRINK parameters can be changed, but INITIAL and MINEXTENTS cannot be changed.

5. Example of altering a rollback segment:

ALTER ROLLBACK SEGMENT rbs_big
   STORAGE ( NEXT 10K MAXEXTENTS UNLIMITED );

The following checklist can be followed to drop a rollback segment in a database instance.

#  TASK                                                                 DONE (Y/N)

1. Log in to the operating system and confirm that the desired Oracle instance is running. Verify the status of the instance using the Oracle processes. To check the correct instance name, verify the ORACLE_SID environment variable.

2. Log in to SQL*Plus as SYSDBA.

3. Prerequisite for dropping a rollback segment: one must have the DROP ROLLBACK SEGMENT system privilege.

4. Restrictions on dropping a rollback segment:

  1. You cannot drop a rollback segment that is online.
  2. You cannot drop the SYSTEM rollback segment.
  3. After dropping a rollback segment, it must be removed from the ROLLBACK_SEGMENTS entry in the database parameter file; otherwise the instance will not start the next time.

5. Example of dropping a rollback segment:

Drop rollback segment rollback_segment_name;

The following checklist can be followed to assign a rollback segment to a transaction.

#  TASK                                                                 DONE (Y/N)

1. Log in to the operating system and confirm that the desired Oracle instance is running. Verify the status of the instance using the Oracle processes. To check the correct instance name, verify the ORACLE_SID environment variable.

2. Log in to SQL*Plus as SYSDBA.

3. Prerequisite: if you use a SET TRANSACTION statement, it must be the first statement in your transaction.

4. Options for the SET TRANSACTION statement:

  1. READ ONLY: the current transaction will be read-only. Only SELECT, LOCK TABLE, SET ROLE, ALTER SESSION and ALTER SYSTEM statements can be used within such a transaction.
  2. USE ROLLBACK SEGMENT: specifies the rollback segment for the current transaction.
  3. NAME: assigns a name to the current transaction.

5. Examples of naming a transaction and of assigning it a rollback segment:

SET TRANSACTION READ ONLY NAME 'trans1';

SET TRANSACTION USE ROLLBACK SEGMENT rbs_big NAME 'trans1';

The following checklist can be followed to tune the rollback segments.

#  TASK                                                                 DONE (Y/N)

1. Log in to the operating system and confirm that the desired Oracle instance is running. Verify the status of the instance using the Oracle processes. To check the correct instance name, verify the ORACLE_SID environment variable.

2. Log in to SQL*Plus as SYSDBA.

3. Some tips for tuning the rollback segments:

  1. Keep the minimum extents at 20 and the maximum extents at 225.
  2. Do not place rollback segments in the SYSTEM tablespace.
  3. Commit transactions frequently.
  4. If an ORA-01555 (snapshot too old) error occurs, extend the rollback segments manually.

Short note about undo management

Oracle 9i provides a new feature for undo data management, known as automatic undo management.

Normally one undo tablespace is allocated per Oracle instance. For automatic undo management, the DBA has to configure two parameters in the init file:

  1. UNDO_MANAGEMENT
  2. UNDO_TABLESPACE

The value of the first parameter can be either AUTO or MANUAL. For the second parameter, the DBA has to specify the name of the undo tablespace. UNDO_TABLESPACE can be changed later manually using ALTER SYSTEM SET UNDO_TABLESPACE = 'undotbs_name';
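A minimal init.ora fragment for automatic undo management might look like the following (undotbs1 is a hypothetical tablespace name, and 900 seconds is purely an illustrative retention value):

undo_management = AUTO
undo_tablespace = undotbs1
undo_retention = 900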

There are two ways to configure the undo tablespace for an instance.

  1. While creating the database: the DBA can create an undo tablespace while creating the database, using the following command.

Create Database db_test

……..

Undo tablespace undo_tbs datafile '/u03/oradata/undo_tbs01.dbf' size 20M autoextend on;

  2. To add another undo tablespace to an existing database, the following syntax can be used; note that only one undo tablespace can be online in an instance at any time.

Create undo tablespace undo_tbs2 datafile '/u03/oradata/undo_tbs02.dbf' size 20M autoextend on;

Alter Undo Tablespace

Using the ALTER TABLESPACE command, one can do the following with an undo tablespace:

  1. Add a datafile: alter tablespace undo_tbs1 add datafile '/u03/oradata/undo_tbs03.dbf' size 20M;
  2. Rename a datafile.
  3. Take a datafile online or offline.
  4. Begin backup.
  5. End backup.

Suppose one has created a new undo tablespace and wants the currently running database instance to use the newly created tablespace; the following command can be used.

Alter system set undo_tablespace = 'undo_tbs2';

An undo tablespace can be dropped, provided it is offline and all transactions in the candidate tablespace have completed, using drop tablespace undo_tbs;

Other Parameters for Automatic Undo Management

1. UNDO_SUPPRESS_ERRORS: it can be set to either TRUE or FALSE. If TRUE, it suppresses the errors that would otherwise be raised when SET TRANSACTION is used to assign a rollback segment to a transaction while automatic undo management is in effect.

2. UNDO_RETENTION: controls the amount of undo data, in seconds, to retain for consistent reads.

21. Monitor Archive Log Status

Monitor whether automatic archiving is enabled for the archive log:

ARCHIVE LOG LIST;

This shows the archiving status and the archive file destination. Make sure that enough space is available in the archive log destination for new log files to be archived. Take a backup of old archive files, and delete the old files to make space for new archives.
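In Oracle 9i, the archiving destinations and their state can also be checked from SQL, for example with the following sketch (column availability differs in earlier releases):

select dest_id, status, destination from v$archive_dest;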

22. Managing Datafiles

Determine the Number of Datafiles

At least one datafile is required for the SYSTEM tablespace of a database. The following are some guidelines to consider when determining the number of datafiles for your database.

Determine the Value of the DB_FILES Initialization Parameter

When starting an Oracle instance, the DB_FILES initialization parameter indicates the amount of SGA space to reserve for datafile information and thus, the maximum number of datafiles that can be created for the instance. This limit applies for the life of the instance. You can change the value of DB_FILES (by changing the initialization parameter setting), but the new value does not take effect until you shut down and restart the instance. When determining a value for DB_FILES, take the following into consideration:

· If the value of DB_FILES is too low, you cannot add datafiles beyond the DB_FILES limit without first shutting down the database.

· If the value of DB_FILES is too high, memory is unnecessarily consumed.

Limitations When Adding Datafiles to a Tablespace

You can add datafiles to tablespaces, subject to the following limitations

· Operating systems often impose a limit on the number of files a process can open simultaneously. More datafiles cannot be created when the operating system limit of open files is reached.

· Oracle imposes a maximum limit on the number of datafiles for any Oracle database opened by any instance. This limit is operating system specific.

· You cannot exceed the number of datafiles specified by the DB_FILES initialization parameter.

· When you issue CREATE DATABASE or CREATE CONTROLFILE statements, the MAXDATAFILES parameter specifies an initial size of the Datafiles portion of the control file. However, if you attempt to add a new file whose number is greater than MAXDATAFILES, but less than or equal to DB_FILES, the control file will expand automatically so that the Datafiles section can accommodate more files.

Store Datafiles Separate from Redo Log Files

Datafiles should not be stored on the same disk drive that stores the database’s redo log files. If the Datafiles and redo log files are stored on the same disk drive and that disk drive fails, the files cannot be used in your database recovery procedures.

Creating Datafiles and Adding Datafiles to a Tablespace

When creating a tablespace, you should estimate the potential size of database objects and create sufficient datafiles. Later, if needed, you can create additional datafiles and add them to a tablespace to increase the total amount of disk space allocated to it, and consequently the database. Preferably, place datafiles on multiple devices, so as to ensure that data is spread evenly across all devices.

>Create tablespace tablespace_name datafile 'path' size n M;

>Alter tablespace tablespace_name add datafile 'path' size n M;

Enabling and Disabling Automatic Extension for a Datafile

You can create datafiles or alter existing datafiles so that they automatically increase in size when more space is needed in the database. The files increase in specified increments up to a specified maximum.

Setting your datafiles to extend automatically provides these advantages:

· Reduces the need for immediate intervention when a tablespace runs out of space.

· Ensures that applications will not halt because of failures to allocate extents.

To determine whether a datafile is auto-extensible, query the DBA_DATA_FILES view and examine the AUTOEXTENSIBLE column. You can specify automatic file extension by specifying an AUTOEXTEND ON clause when you create datafiles using the following SQL statements:

CREATE DATABASE

CREATE TABLESPACE

ALTER TABLESPACE

You can enable or disable automatic file extension for existing datafiles using the following statement.

>Alter database datafile 'path' autoextend on next n K maxsize n M;

The value of NEXT is the minimum size of the increments added to the file when it extends. The value of MAXSIZE is the maximum size to which the file can automatically extend. The next example disables automatic extension for a datafile.

>Alter database datafile ‘path’ autoextend off;

Manually Resizing a Datafile

You can manually increase or decrease the size of a datafile using the ALTER DATABASE statement.

>Alter database datafile ‘path’ resize n M;

Bringing Datafiles Online or Taking Them Offline

One situation in which you might be required to alter the availability of a datafile is when Oracle has problems writing to it and automatically takes it offline. Later, after resolving the problem, you can bring the datafile back online manually.

>Alter database datafile ‘path’ online;

>Alter database datafile ‘path’ offline;

Renaming and Relocating Datafiles

You can rename datafiles to change their names or to relocate them. The following options and procedures can be followed.

1. Take the non-SYSTEM tablespace that contains the datafiles offline.

>Alter tablespace tablespace_name offline normal;

2. Rename the datafiles using the operating system.

3. Use the ALTER TABLESPACE statement with the RENAME DATAFILE option to change the filenames within the database.

>Alter tablespace tablespace_name rename datafile ‘old datafile path’ to ‘new datafile path’;

The new files must already exist; this statement does not create the files. Always provide complete filenames (including their paths) to properly identify the old and new datafiles. In particular, specify the old datafile name exactly as it appears in the DBA_DATA_FILES view of the data dictionary.

>Select file_name,tablespace_name from DBA_DATA_FILES;

4. Back up the database. After making any structural changes to a database, always perform an immediate and complete backup.

23. Managing Sessions

In some situations, you might want to terminate current user sessions. For example, you might want to perform an administrative operation and need to terminate all non-administrative sessions.

Identify the session to terminate by its system identifier (SID) and serial number, and check its status (whether or not the session is active).

The following query identifies the SID, SERIAL# and STATUS of all sessions:

>SELECT SID, SERIAL#, STATUS FROM V$SESSION;

The following statement terminates the session

>ALTER SYSTEM KILL SESSION 'SID,SERIAL#';
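For example, a worked sequence might look like the following (the SID and serial number shown are hypothetical; sessions with a NULL username are background processes and should be left alone):

SELECT SID, SERIAL#, USERNAME, STATUS FROM V$SESSION WHERE USERNAME IS NOT NULL;

ALTER SYSTEM KILL SESSION '23,1045';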

24. Managing the Control File

Multiplex Control Files on Different Disks

Every Oracle database should have at least two control files, each stored on a different disk. If a control file is damaged due to a disk failure, the associated instance must be shut down. Once the disk drive is repaired, the damaged control file can be restored using the intact copy from the other disk, and the instance can be restarted; in this case, no media recovery is required. Specify the control file names using the CONTROL_FILES initialization parameter in the database’s initialization parameter file.

Place Control Files Appropriately

As already suggested, each copy of a control file should be stored on a different disk drive. One practice is to store a control file copy on every disk drive that stores members of online redo log groups, if the online redo log is multiplexed. By storing control files in these locations, you minimize the risk that all control files and all groups of the online redo log will be lost in a single disk failure.

Back Up Control Files

It is very important that you back up your control files. This is true initially, and at any time after you change the physical structure of your database. Such structural changes include:

· Adding, dropping, or renaming datafiles

· Adding or dropping a tablespace, or altering the read-write state of the tablespace

· Adding or dropping redo log files or groups

>Alter database backup controlfile to trace;

This creates a trace file in the USER_DUMP_DEST directory.

Creating Additional Copies, Renaming, and Relocating Control Files

You can create an additional control file copy by copying an existing control file to a new location and adding the file’s name to the list of control files. Similarly, you rename an existing control file by copying the file to its new name or location, and changing the file’s name in the control file list. In both cases, to guarantee that control files do not change during the procedure, shut down the instance before copying the control file.

To Multiplex or Move Additional Copies of the Current Control Files

1. Shut down the database.

2. Copy an existing control file to a different location, using operating system commands.

3. Edit the CONTROL_FILES parameter in the database’s initialization parameter file to add the new control file’s name, or to change the existing control filename.

4. Restart the database.

Creating New Control Files

It is necessary to create new control files in the following situations.

· All control files for the database have been permanently damaged and you do not have a control file backup.

· You want to change one of the permanent database parameter settings originally specified in the CREATE DATABASE statement. These settings include the database’s name and the following parameters: MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY, MAXDATAFILES, and MAXINSTANCES.

For example, you would change a database’s name if it conflicted with another database’s name in a distributed environment. Or, as another example, you can change the value of MAXLOGFILES if the original setting is too low.

The CREATE CONTROLFILE Statement

An example is shown below (the MAX* limits shown are illustrative values):

CREATE CONTROLFILE
SET DATABASE prod
LOGFILE GROUP 1 ('/u01/oracle/prod/redo01_01.log',
                 '/u01/oracle/prod/redo01_02.log'),
        GROUP 2 ('/u01/oracle/prod/redo02_01.log',
                 '/u01/oracle/prod/redo02_02.log'),
        GROUP 3 ('/u01/oracle/prod/redo03_01.log',
                 '/u01/oracle/prod/redo03_02.log')
NORESETLOGS
DATAFILE '/u01/oracle/prod/system01.dbf' SIZE 3M,
         '/u01/oracle/prod/rbs01.dbs' SIZE 5M,
         '/u01/oracle/prod/users01.dbs' SIZE 5M,
         '/u01/oracle/prod/temp01.dbs' SIZE 5M
MAXLOGFILES 50
MAXLOGMEMBERS 3
MAXLOGHISTORY 400
MAXDATAFILES 200
MAXINSTANCES 6
ARCHIVELOG;

After creating the control file, open the database:

ALTER DATABASE OPEN;

If the control file was created with the RESETLOGS option, open the database with:

ALTER DATABASE OPEN RESETLOGS;

25. Managing the Online Redo Log Files

A database requires a minimum of two online redo log files (groups). Oracle provides the capability to multiplex an instance’s online redo log files to safeguard against damage. When multiplexing online redo log files, LGWR concurrently writes the same redo log information to multiple identical online redo log files, thereby eliminating a single point of redo log failure. If one member becomes unavailable to LGWR, the other members remain accessible, so the instance can continue to function. If you archive the redo log, spread online redo log members across disks to eliminate contention between the LGWR and ARCn background processes. For example, if you have two groups of duplexed online redo log members, place each member on a different disk and set your archiving destination to a fifth disk.

Consequently, there is never contention between LGWR (writing to the members) and ARCn (reading the members). Datafiles and online redo log files should also be on different disks to reduce contention in writing data blocks and redo records.

Choosing the Number of Online Redo Log Files

Examine the contents of the LGWR trace file and the database’s alert log. If messages indicate that LGWR frequently has to wait for a group because a checkpoint has not completed or a group has not been archived, add groups.

Creating Online Redo Log Groups

>Alter database add logfile group n ('path') size n M;

To create new online redo log members for an existing group, use the SQL statement:

>Alter database add logfile member 'path' to group n;

Relocating and Renaming Online Redo Log Members

This procedure is necessary, for example, if the disk currently used for some online redo log files is going to be removed, or if datafiles and a number of online redo log files are stored on the same disk and should be separated to reduce contention.

Steps for Renaming Online Redo Log Members

1. Shut down the database.

>Shutdown;

2. Copy the online redo log files to the new location, using operating system commands (for example, mv or cp on UNIX).

3. Start up the database and mount it, but do not open it.

>Startup mount;

4. Rename the online redo log members.

>Alter database rename file ‘old path’ to ‘new path’;

5. Open the database for normal operation.

>Alter database open;

Dropping Online Redo Log Groups and Members

In some cases, you may want to drop an entire group of online redo log members. For example, you want to reduce the number of groups in an instance’s online redo log. In a different case, you may want to drop one or more specific online redo log members. For example, if a disk failure occurs, you may need to drop all the online redo log files on the failed disk so that Oracle does not try to write to the inaccessible files. In other situations, particular online redo log files become unnecessary. For example, a file might be stored in an inappropriate location.

>Alter database drop logfile group n;

>Alter database drop logfile member 'path';

Forcing Log Switches

You can force a log switch to make the currently active group inactive and available for online redo log maintenance operations. For example, you want to drop the currently active group, but are not able to do so until the group is inactive. You may also wish to force a log switch if the currently active group needs to be archived at a specific time before the members of the group are completely filled. This option is useful in configurations with large online redo log files that take a long time to fill.

>ALTER SYSTEM SWITCH LOGFILE;

Clearing an Online Redo Log File

An online redo log file might become corrupted while the database is open, and ultimately stop database activity because archiving cannot continue. In this situation reinitialize the file without shutting down the database.

>ALTER DATABASE CLEAR LOGFILE GROUP n;

This statement overcomes two situations where dropping redologs is not possible:

· If there are only two log groups

· The corrupt redo log file belongs to the current group

If the corrupt redo log file has not been archived, use the UNARCHIVED keyword in the statement.

>ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP n;

This statement clears the corrupted redo logs and avoids archiving them.

Viewing Online Redo Log Information

Use the following views to display online redo log information. The following query returns the control file information about the online redo log for a database.

>SELECT * FROM V$LOG;

To see the names of all members of a group, use a query similar to the following:

>SELECT * FROM V$LOGFILE;

V$LOG           Displays the redo log file information from the control file.
V$LOGFILE       Identifies redo log groups and members, and member status.
V$LOG_HISTORY   Contains log history information.

Accessing Information Stored in Redo Logs

Oracle Corporation provides SQL access to the redo logs through LogMiner, which is part of the Oracle database server. LogMiner presents the information in the redo logs through the V$LOGMNR_CONTENTS fixed view. This view contains historical information about changes made to the database including, but not limited to, the following:

· The type of change made to the database (INSERT, UPDATE, DELETE, or DDL).

· The SCN at which a change was made (SCN column).

· The SCN at which a change was committed (COMMIT_SCN column).

· The transaction to which a change belongs (XIDUSN, XIDSLT, and XIDSQN columns).

· The table and schema name of the modified object (SEG_NAME and SEG_OWNER columns).

· The name of the user who issued the DDL or DML statement to make the change (USERNAME column).

· Reconstructed SQL statements showing SQL that is equivalent (but not necessarily identical) to the SQL used to generate the redo records (SQL_REDO column). If a password is part of the statement in a SQL_REDO column, the password is encrypted.

· Reconstructed SQL statements showing the SQL statements needed to undo the change (SQL_UNDO column). SQL_UNDO columns that correspond to DDL statements are always NULL. Similarly, the SQL_UNDO column may be NULL for some datatypes and for rolled back operations.

The redo logs contain internally generated numerical identifiers to identify tables and their associated columns. To reconstruct SQL statements, LogMiner needs to know how the internal identifiers map to user-defined names. This mapping information is stored in the data dictionary for the database. LogMiner provides a procedure (DBMS_LOGMNR_D.BUILD) that lets you extract the data dictionary.
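A minimal LogMiner session might be sketched as follows; the file and table names are hypothetical, and it assumes a dictionary file was previously extracted with DBMS_LOGMNR_D.BUILD:

BEGIN
  -- register the archived log to analyze (hypothetical path)
  DBMS_LOGMNR.ADD_LOGFILE(
    LogFileName => '/u01/oracle/prod/arch/arch_100.arc',
    Options     => DBMS_LOGMNR.NEW);
  -- start the LogMiner session using the extracted dictionary file
  DBMS_LOGMNR.START_LOGMNR(
    DictFileName => '/u01/oracle/prod/logmnr/dictionary.ora');
END;
/

SELECT USERNAME, SCN, SQL_REDO, SQL_UNDO
FROM V$LOGMNR_CONTENTS
WHERE SEG_NAME = 'EMP';

EXECUTE DBMS_LOGMNR.END_LOGMNR;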

26. Managing Tablespace

Tablespace space management

One can view information about the used, free and available space in the respective tablespaces.

Used space in a tablespaces

>SELECT * FROM SM$TS_USED;

Free space in a tablespaces

>SELECT * FROM SM$TS_FREE;

Available space in a tablespaces

>SELECT * FROM SM$TS_AVAIL;

Locally Managed Tablespaces

All tablespaces, including the SYSTEM tablespace, can be locally managed. Additionally, the DBMS_SPACE_ADMIN package provides maintenance procedures for locally managed tablespaces.


Create locally managed tablespace using the following statement

>CREATE TABLESPACE tablespace_name DATAFILE 'path' SIZE n M

EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

>CREATE TABLESPACE tablespace_name DATAFILE 'path' SIZE n M

EXTENT MANAGEMENT LOCAL UNIFORM SIZE n M;

Dictionary-Managed Tablespaces

The following statement creates the dictionary managed tablespace

>CREATE TABLESPACE tablespace_name DATAFILE 'path' SIZE n DEFAULT STORAGE (INITIAL n K NEXT n K MINEXTENTS n MAXEXTENTS n PCTINCREASE n);

Altering a Dictionary-Managed Tablespace

One reason for using an ALTER TABLESPACE statement is to add a datafile. The following statement creates a new datafile for the tablespace:

>ALTER TABLESPACE tablespace_name ADD DATAFILE 'path' SIZE n;

Temporary Tablespaces

To improve the concurrency of multiple sort operations, reduce their overhead, or avoid Oracle space management operations altogether, create temporary tablespaces. A temporary tablespace can be shared by multiple users and can be assigned to users with the CREATE USER statement when you create users in the database.

You can view the allocation and deallocation of space in a temporary tablespace sort segment using the V$SORT_SEGMENT view. The V$TEMPSEG_USAGE view identifies the current sort users in those segments.

You also use different views for viewing information about tempfiles than you would for datafiles. The V$TEMPFILE and DBA_TEMP_FILES views are analogous to the V$DATAFILE and DBA_DATA_FILES views.

Creating a Locally Managed Temporary Tablespace

>CREATE TEMPORARY TABLESPACE tablespace_name TEMPFILE 'path' SIZE n M REUSE EXTENT MANAGEMENT LOCAL UNIFORM SIZE n M;


Altering a Locally Managed Temporary Tablespace

>ALTER TABLESPACE tablespace_name ADD TEMPFILE 'path' SIZE n M REUSE;

However, the ALTER DATABASE statement can be used to alter tempfiles.
The following statements take offline and bring online temporary files

>ALTER DATABASE TEMPFILE 'PATH' OFFLINE;

>ALTER DATABASE TEMPFILE 'PATH' ONLINE;

The following statement resizes a temporary file

>ALTER DATABASE TEMPFILE 'path' RESIZE n M;

The following statement drops a temporary file and deletes the operating system file

>ALTER DATABASE TEMPFILE 'PATH' DROP INCLUDING DATAFILES;

Creating a Dictionary Managed Temporary Tablespace

>CREATE TABLESPACE tablespace_name DATAFILE 'path' SIZE n M DEFAULT STORAGE (INITIAL n M NEXT n M MINEXTENTS n PCTINCREASE n) EXTENT MANAGEMENT DICTIONARY TEMPORARY;

Altering a Dictionary-Managed Temporary Tablespace

You can change an existing permanent dictionary-managed tablespace to a temporary tablespace, using the ALTER TABLESPACE statement. For example:

>ALTER TABLESPACE tablespace_name TEMPORARY;

Coalescing Free Space in Dictionary-Managed Tablespaces

Over time, the free space in a dictionary-managed tablespace can become fragmented, making it difficult to allocate new extents. Ways of defragmenting this free space are discussed in this section. A free extent in a dictionary-managed tablespace is comprised of a collection of contiguous free blocks. When allocating new extents to a tablespace segment, the free extent closest in size to the required extent is used. In some cases, when segments are dropped, their extents are deallocated and marked as free, but any adjacent free extents are not immediately recombined into larger free extents. The result is fragmentation that makes allocation of larger extents more difficult. The SMON background process periodically coalesces neighboring free extents when the PCTINCREASE value for a tablespace is nonzero. If you set PCTINCREASE=0, no coalescing of free extents will occur. If you are concerned about the overhead of SMON’s ongoing coalescing, an alternative is to set PCTINCREASE=0 and periodically coalesce free space manually.

You can use the ALTER TABLESPACE ... COALESCE statement to manually
coalesce any adjacent free extents.

>ALTER TABLESPACE TABLESPACE_NAME COALESCE;

Monitoring Free Space

You can use the DBA_FREE_SPACE view for monitoring free space in a tablespace.

If the view shows contiguous free extents, issue the following command to combine them:

ALTER TABLESPACE ... COALESCE
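To judge whether manual coalescing is worthwhile, the DBA_FREE_SPACE_COALESCED dictionary view reports, per tablespace, how many of the free extents are already coalesced. A minimal sketch:

>SELECT TABLESPACE_NAME, TOTAL_EXTENTS, EXTENTS_COALESCED, PERCENT_EXTENTS_COALESCED
FROM DBA_FREE_SPACE_COALESCED;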

Taking Tablespaces Offline

You may want to take a tablespace offline for any of the following reasons:

· To make a portion of the database unavailable while allowing normal access to the remainder of the database

· To perform an offline tablespace backup (even though a tablespace can be backed up while online and in use)

· To make an application and its group of tables temporarily unavailable while updating or maintaining the application

When a tablespace is taken offline, Oracle takes all the associated files offline. The SYSTEM tablespace can never be taken offline.

>ALTER TABLESPACE TABLESPACE_NAME OFFLINE;

Migrating from a Dictionary-Managed to a Locally Managed Tablespace

You can migrate a dictionary-managed tablespace to a locally managed tablespace using the DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL procedure.

Let us assume that the database block size is 2K and the existing extent sizes in tablespace tbs_1 are 10, 50, and 10,000 blocks (used, used, and free). The MINIMUM EXTENT value is 20K (10 blocks). In this scenario, you allow the bitmap allocation unit to be chosen by the system. The value of 10 blocks is chosen because it is the greatest common divisor of the existing extent sizes and does not exceed MINIMUM EXTENT.

The statement to convert tbs_1 to a locally managed tablespace is as follows:

EXEC DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL ('tbs_1');
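To confirm the migration, the EXTENT_MANAGEMENT column of DBA_TABLESPACES should now show LOCAL:

>SELECT TABLESPACE_NAME, EXTENT_MANAGEMENT FROM DBA_TABLESPACES WHERE TABLESPACE_NAME = 'TBS_1';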

Listing Tablespaces and Default Storage Parameters:

To list the names and default storage parameters of all tablespaces in a database, use the following query on the DBA_TABLESPACES view:

>SELECT TABLESPACE_NAME "TABLESPACE", INITIAL_EXTENT "INITIAL_EXT",
NEXT_EXTENT "NEXT_EXT", MIN_EXTENTS "MIN_EXT", MAX_EXTENTS "MAX_EXT",
PCT_INCREASE FROM DBA_TABLESPACES;

Listing the Datafiles and Associated Tablespaces of a Database:

To list the names, sizes (in blocks), and associated tablespaces of the datafiles in a database, enter the following query on the DBA_DATA_FILES view:

>SELECT FILE_NAME, BLOCKS, TABLESPACE_NAME FROM DBA_DATA_FILES;

Displaying Statistics for Free Space (Extents) of Each Tablespace:

To produce statistics about free extents and coalescing activity for each tablespace in the database, enter the following query:

>SELECT TABLESPACE_NAME "TABLESPACE", FILE_ID, COUNT(*) "PIECES",
MAX(BLOCKS) "MAXIMUM", MIN(BLOCKS) "MINIMUM", AVG(BLOCKS) "AVERAGE",
SUM(BLOCKS) "TOTAL" FROM DBA_FREE_SPACE GROUP BY TABLESPACE_NAME, FILE_ID;

27. Tablespace Fragmentation

Tablespace freespace fragmentation (TFF) refers to the existence of tablespace-level free space chunks. These chunks are available for segment growth, which occurs by creating a new extent and placing it within a free space chunk. There are two fundamental types of TFF: bubbles and honeycombs. Bubbles are stand-alone chunks of freespace, whereas honeycombs are multiple adjacent bubbles.

If an object is created, the free extent and used extent tables are referenced and updated, but TFF is not the result. However, if an object is dropped, its extents become free. That is, rows are deleted from the used extent table and inserted into the free extent table. If the resulting free extents reside next to each other, a honeycomb is created (which SMON may quickly coalesce); if they do not, scattered bubbles result.

In summary, only dropping an object will create TFF, and this can usually be controlled with solid application design and properly managed storage parameters.

Detecting and viewing TFF is very simple and does not significantly affect performance. TFF is detected and observed by looking directly at the free extent data dictionary table, sys.fet$, which contains a single row for each free extent or bubble. By querying sys.fet$, the severity of TFF can be determined: careful inspection of the starting file and block numbers, combined with each free extent's size, reveals whether TFF exists and whether it takes the form of bubbles or honeycombs (as sketched below). A fuller picture can be obtained by combining the free extent table with the used extent table, sys.uet$.
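A minimal sketch of such a query for dictionary-managed tablespaces, joining the free extent table to sys.ts$ for the tablespace name; ordering by file and block makes adjacent bubbles, i.e. honeycombs, easy to spot:

>SELECT ts.name "TABLESPACE", f.file# "FILE_NO", f.block# "START_BLOCK", f.length "BLOCKS"
FROM sys.fet$ f, sys.ts$ ts WHERE f.ts# = ts.ts#
ORDER BY ts.name, f.file#, f.block#;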

By understanding how TFF is created, one quickly realizes that only creating an object, extending an object, and dropping an object can possibly be negatively impacted by TFF. The likelihood of a segment rapidly creating extents can be reduced with proper storage parameter management. The dropping issue can also be reduced with proper storage parameter management (i.e., creating fewer extents) and eliminated by truncating the table instead of dropping it. The bottom line is that it is extremely rare for TFF to negatively impact performance; if it does, adjusting the object's storage parameters or making simple application adjustments will eliminate the problem.

While TFF rarely impacts performance, it can impact the database from an operational or administrative perspective. For example, if two non-adjacent 100MB bubbles exist and a segment needs to extend by 200MB of contiguous space, the allocation will fail. Remember, an extent requires a contiguous chunk of freespace; at the present time, Oracle cannot and will not split an extent.

TFF can be resolved from an application perspective or from an administrative perspective. The application perspective focuses on setting object storage parameters (e.g., using standard extent sizes) and good code practices (e.g., using truncate instead of drop/create). The administrative perspective focuses on some type of "rebuild."

Rebuilding any part of an Oracle database is undesirable for a number of reasons. Typically, there is associated downtime or non-full application usage, which may make it impossible to rebuild. If storage parameters are set correctly or application coding practices take TFF into account, rebuilding will not be necessary.

If a rebuild is necessary, get as granular as possible. That is, do not rebuild the entire database. Focus on just the fragmented tablespace. This reduces downtime, reduces complexity, and reduces risk.

The only way to eliminate TFF is to physically rearrange Oracle segments. This can be done a number of ways, but the basic strategy is to move a segment(s) out from the fragmented tablespace, de-fragment the tablespace, and move the segment(s) back into the tablespace.

28. Data Block Fragmentation

When Oracle table data blocks become fragmented, performance is negatively impacted, especially if the database system processes terabytes of data each day. Unfortunately, data block fragmentation (DBF) is typically overlooked or simply ignored because it is relatively difficult to detect and even more difficult to quantify.

Data block fragmentation occurs whenever a row is deleted from a table and therefore removed from a data block. Some objects are, by their nature, very susceptible to DBF, while others are not.

Detecting DBF is relatively difficult because it is hard to determine, for each block, exactly how much space is free, how much is occupied by rows, and how much is overhead. Fortunately, an exact number is not needed, and there are ways to estimate DBF quickly.

It is important to have a good grasp of the three relevant space measures: allocated space, once used space, and currently used space.

Allocated Space. This is simply how much space Oracle has allocated for the object. This is the sum of the object’s extents, which can be found by querying dba_extents.

Once Used Space. This is known as the high water mark (HWM). This statistic marks the peak row space usage, measured in Oracle blocks. The HWM increases when inserted rows push it higher. The only way to reduce the HWM is to drop and recreate the object or to truncate it; both reset the HWM. After the table has been analyzed, the HWM value is contained in the BLOCKS column of the DBA_TABLES view.

Currently Used Space. This is simply the sum of each row’s size. There are many ways to determine the currently used space and each has advantages and disadvantages.
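Hedged sketches of each measure follow, assuming an analyzed table with the hypothetical name SALES_HIST.

Allocated space:

>SELECT SUM(BYTES) "ALLOCATED_BYTES" FROM DBA_EXTENTS WHERE SEGMENT_NAME = 'SALES_HIST';

Once used space (high water mark, in blocks):

>SELECT BLOCKS "HWM_BLOCKS" FROM DBA_TABLES WHERE TABLE_NAME = 'SALES_HIST';

Currently used space (a quick estimate based on optimizer statistics):

>SELECT NUM_ROWS * AVG_ROW_LEN "USED_BYTES" FROM DBA_TABLES WHERE TABLE_NAME = 'SALES_HIST';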

As mentioned above, DBF reduces query performance and, to a lesser extent, DML performance. Performance suffers because the higher the DBF, the less row data is retrieved per block access; that is, the overhead per row retrieved is higher. Keep in mind that this overhead exists regardless of whether the block resides on disk or in the data block buffer cache.

There are two basic resolution methods. The first method is to alter the table's storage parameters allowing the blocks to naturally fill up the holes in each block. The second method is to rebuild the object. Rebuilding the object is rarely the best solution, especially with large production objects (which are the objects we are typically interested in anyway).

One could write a procedure that builds a list of all the substantially fragmented blocks and defragments just those blocks. Block by block, the procedure would copy the fragmented block's rows elsewhere, delete them (emptying the block), and then re-insert them. Such a procedure could be written to slowly and deliberately scan all database blocks that "need" to be rebuilt.

By decreasing the pctfree parameter, each table block will accept more rows than before. For example, instead of the average number of rows per block being 20, decreasing pctfree may raise the average to 25. By increasing the pctused parameter, a partially emptied block returns to the free list sooner, increasing the chance that it will attract new rows for insertion.
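For example, the following statement (table name and values hypothetical) applies new settings that affect block usage from this point forward:

>ALTER TABLE SALES_HIST PCTFREE 5 PCTUSED 60;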

29. Index Rebuild

1. Before starting an index rebuild, check the free space in the index tablespace; it should always be more than 30%. If it is not, increase the size of the corresponding index tablespace.

2. Note down the used space and free space for the corresponding index tablespaces, to gauge the level of fragmentation.

Index Rebuild Procedure

Note: Rebuilding indexes during production hours is not recommended.

Log in to SQL*Plus as SYS and record the used space and free space for the index tablespaces using the SM$TS_USED and SM$TS_FREE views.
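For example (the LIKE pattern used to pick out the index tablespaces is hypothetical):

>SELECT * FROM SM$TS_USED WHERE TABLESPACE_NAME LIKE '%IDX%';
>SELECT * FROM SM$TS_FREE WHERE TABLESPACE_NAME LIKE '%IDX%';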

Once you have the space details, close the session.

Log in to SQL*Plus with a valid PeopleSoft database user ID (e.g., sysadm/sysadm@).

Get the script to rebuild the indexes for the PeopleSoft user ID by executing the following steps (a consolidated sketch follows the steps).

Log in: sqlplus sysadm/sysadm@

Spool the script to a file:

SPOOL \path\

SELECT 'ALTER INDEX ' || INDEX_NAME || ' REBUILD NOLOGGING;' FROM USER_INDEXES;

SPOOL OFF

Execute the generated script with @c: or @\path\
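Put together, the sequence might look like the following sketch; the spool path is a hypothetical placeholder, and the SET commands suppress SQL*Plus headings and feedback so the spooled file contains only the ALTER INDEX statements:

>SET HEADING OFF
>SET FEEDBACK OFF
>SET PAGESIZE 0
>SPOOL /tmp/rebuild_indexes.sql
>SELECT 'ALTER INDEX ' || INDEX_NAME || ' REBUILD NOLOGGING;' FROM USER_INDEXES;
>SPOOL OFF
>@/tmp/rebuild_indexes.sql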

Execution time will vary depending on the number of indexes and the degree of index fragmentation.

Check List After Index Rebuild:

To check how much free space was reclaimed after rebuilding the indexes:

Log in to SQL*Plus as SYS, get the used space and free space for the index tablespaces using the SM$TS_USED and SM$TS_FREE views, and compare them with the values recorded earlier.

Note: It is recommended to rebuild the indexes once a week.

30. Conclusion

Managing Oracle database performance requires adherence to a systematic methodology to ensure that all the core avenues and issues are addressed. Most issues can be managed proactively. Understanding O/S-specific behavior is one of the keys to success, as is configuring systems with a balanced amount of hardware.

Eighty percent of all system performance problems can be fixed by writing optimal SQL. The guidelines in this document help ensure smooth and safe database performance. Tablespace defragmentation and index rebuilds should be part of regular database maintenance; these activities keep the database running smoothly.
