Using dbms_profiler
The dbms_profiler package is a built-in set of procedures to capture performance information from PL/SQL. The dbms_profiler package has these procedures:
· dbms_profiler.start_profiler
· dbms_profiler.flush_data
· dbms_profiler.stop_profiler
The basic idea behind profiling with dbms_profiler is for the developer to understand where their code is spending the most time, so they can detect and optimize it. The profiling utility allows Oracle to collect data in memory structures and then dump it into tables as application code is executed. dbms_profiler is to PL/SQL what tkprof and Explain Plan are to SQL.
Once you have run the profiler, Oracle will place the results inside the dbms_profiler tables.
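At a high level, a profiling session is just a start call, the PL/SQL to be measured, and a stop call, after which the tables can be queried. The sketch below shows that flow; my_proc and the run comment are placeholders rather than objects used later in this article.

SQL> execute dbms_profiler.start_profiler('sample run');   -- begin collecting
SQL> execute my_proc;                                       -- the code being measured (placeholder)
SQL> execute dbms_profiler.stop_profiler;                   -- flush remaining data and stop

select runid, run_comment, round(run_total_time/1000000000, 2) as seconds
from plsql_profiler_runs
order by runid;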
The dbms_profiler procedures are not a part of the base installation of Oracle. The profiler tables need to be installed along with the Oracle-supplied PL/SQL package. In the $ORACLE_HOME/rdbms/admin directory, two files exist that create the environment needed for the profiler to execute.
· profload.sql - Creates the package header and package body for DBMS_PROFILER. This script must be executed as the SYS user.
· proftab.sql - Creates three tables and a sequence in the schema of the user who runs it, and must be executed before the profiler is used.
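A quick, optional sanity check (not part of the install scripts) is to ask the dictionary whether the profiler objects already exist in the current schema:

select object_name, object_type
from user_objects
where object_name like 'PLSQL_PROFILER%';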
$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus / as sysdba
/* Run profload.sql, which will install the DBMS_PROFILER package. */
SQL> Start
C:\oraclexe\app\oracle\product\10.2.0\server\RDBMS\ADMIN\profload.sql
Package created.
Grant succeeded.
Synonym created.
Library created.
Package body created.
Testing for correct installation
SYS.DBMS_PROFILER successfully loaded.
PL/SQL procedure successfully completed.
/* Now create the user for a test of DBMS_PROFILER. */
SQL> create user info identified by test;
User created.
SQL> grant connect, resource to info;
Grant succeeded.
SQL> conn info/test
Connected.
/* Once connected, run proftab.sql, which will create the tables where the profiler puts its results. */
SQL> Start C:\oraclexe\app\oracle\product\10.2.0\server\RDBMS\ADMIN\proftab.sql
drop table plsql_profiler_data cascade constraints
*
ERROR at line 1:
ORA-00942: table or view does not exist

drop table plsql_profiler_units cascade constraints
*
ERROR at line 1:
ORA-00942: table or view does not exist

drop table plsql_profiler_runs cascade constraints
*
ERROR at line 1:
ORA-00942: table or view does not exist

drop sequence plsql_profiler_runnumber
*
ERROR at line 1:
ORA-02289: sequence does not exist
Table created.
Comment created.
Table created.
Comment created.
Table created.
Comment created.
Sequence created.
DBMS_PROFILER setup is now complete. Next, create the table used by the PL/SQL code that we will investigate through the DBMS_PROFILER package.
SQL> create table package_test (col1 varchar2(30), col2 varchar2(30));
Table created.
Create two procedures: one using literals in its queries and the other using bind variables.
SQL> create or replace procedure literals is
       vNumber number;
     begin
       for i in 1..100000 loop
         vNumber := dbms_random.random;
         -- the insert is built by concatenating literal values into the statement
         execute immediate
           'insert into package_test values ('||vNumber||','||vNumber||')';
       end loop;
     end;
     /
Procedure created.
SQL> create or replace procedure binds is
       vNumber number;
     begin
       for i in 1..100000 loop
         vNumber := dbms_random.random;
         -- a static insert, so Oracle binds the values automatically
         insert into package_test values (vNumber, vNumber);
       end loop;
     end;
     /
Procedure created.
· dbms_profiler.start_profiler - Starts the profiler before the execution of the PL/SQL program.
· dbms_profiler.stop_profiler - Stops the profiler once the program finishes executing.
First, run the procedure whose queries contain literals.
SQL> execute dbms_profiler.start_profiler ('literals');
PL/SQL procedure successfully completed.
SQL> exec literals;
PL/SQL procedure successfully completed.
SQL> execute dbms_profiler.stop_profiler;
PL/SQL procedure successfully completed.
The literals run is finished; now run the procedure that uses bind variables.
SQL> execute dbms_profiler.start_profiler ('binds');
PL/SQL procedure successfully completed.
SQL> exec binds;
PL/SQL procedure successfully completed.
SQL> execute dbms_profiler.stop_profiler;
PL/SQL procedure successfully completed.
Now that the binds run is complete, we can see the results of both runs captured by DBMS_PROFILER.
select
runid,
run_owner,
run_date,
Round(run_total_time/1000000000,2) run_total_time,
run_comment
from plsql_profiler_runs;
select
   s.text,
   p.total_occur,
   ROUND(p.total_time/1000000000,2) total_time,
   ROUND(p.min_time/1000000000,2) min_time,
   ROUND(p.max_time/1000000000,2) max_time
from plsql_profiler_data p, user_source s, plsql_profiler_runs r
where p.line# = s.line
and p.runid = r.runid
and r.run_comment = 'literals'
and s.name = 'LITERALS';
select
   s.text,
   p.total_occur,
   ROUND(p.total_time/1000000000,2) total_time,
   ROUND(p.min_time/1000000000,2) min_time,
   ROUND(p.max_time/1000000000,2) max_time
from plsql_profiler_data p, user_source s, plsql_profiler_runs r
where p.line# = s.line
and p.runid = r.runid
and r.run_comment = 'binds'
and s.name = 'BINDS';
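To compare the two runs side by side, the total run times can also be pulled straight from plsql_profiler_runs; this comparison query is an extra convenience and not part of the original example:

select run_comment,
       round(run_total_time/1000000000, 2) as run_seconds
from plsql_profiler_runs
where run_comment in ('literals', 'binds')
order by run_seconds desc;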
Starting a Profiling Session
The profiler does not begin capturing performance information until the call to start_profiler is executed.
SQL> exec dbms_profiler.start_profiler ('Test of raise procedure by Scott');
Flushing Data during a Profiling Session
The flush command enables the developer to dump statistics during program execution without stopping the profiling utility. The only other time Oracle saves data to the underlying tables is when the profiling session is stopped, as shown below:
SQL> exec dbms_profiler.flush_data();
PL/SQL procedure successfully completed.
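Because flush_data can also be called from inside PL/SQL, a long-running job can persist its statistics periodically while it is still executing. The block below is only a sketch; do_batch_step is a hypothetical procedure standing in for the real work.

begin
  for i in 1 .. 10 loop
    do_batch_step(i);           -- hypothetical unit of work
    dbms_profiler.flush_data;   -- write the statistics collected so far to the profiler tables
  end loop;
end;
/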
Stopping a Profiling Session
A profiler execution is stopped after the developer has gathered performance benchmarks for an adequate period of time. Once the developer stops the profiler, all the remaining (unflushed) data is loaded into the profiler tables.
SQL> exec dbms_profiler.stop_profiler();
PL/SQL procedure successfully completed.
The dbms_profiler package also provides procedures that suspend and resume profiling: pause_profiler() and resume_profiler().
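A sketch of pausing and resuming collection around a section that should not be measured (for example, a setup step) might look like this:

SQL> exec dbms_profiler.pause_profiler();
-- work done here is not recorded by the profiler
SQL> exec dbms_profiler.resume_profiler();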
Analyzing dbms_profiler data
The plsql_profiler_runs table contains information related to a profiling session. Things such as when the run was started, who started it, and how long the run lasted are contained in this table. This table has the following important columns:
· runid - The unique run identifier given to each profiler execution.
· related_run - Runid of a related run that can be called by the programmer.
· run_owner - User who started the run.
· run_date - Timestamp of the date of the run.
· run_comment - User-provided text describing anything about this run that they wish to specify. This is used mainly for documentation, since runid is hard to remember.
· run_total_time - Total elapsed time for this run.
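Since runid values are hard to remember, a convenient helper query (not part of the package itself) is to look up the most recent run:

select runid, run_owner, run_date, run_comment
from plsql_profiler_runs
where runid = (select max(runid) from plsql_profiler_runs);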
The plsql_profiler_units table defines each PL/SQL component (unit) that was executed during a profiler run. Benchmarks for each of the units are stored in this table in the following columns:
· runid - References plsql_profiler_runs(runid).
· unit_number - Internally generated library unit number.
· unit_type - Library unit type (PACKAGE, PROCEDURE, etc.).
· unit_owner - Library unit owner name (the owner of the object).
· unit_name - Library unit name (the name of the object as defined in the user_objects view).
· unit_timestamp - Time when the unit was created, the "unit" being the procedural object (procedure, function, package). This column holds the same data as the created column in the user_objects view.
· total_time - Total time used by this unit for the given run.
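These columns make it easy to see which unit consumed the most time within a given run. A sketch of such a per-unit summary, with :runid supplied as a bind variable, could be:

select u.unit_owner,
       u.unit_name,
       u.unit_type,
       round(u.total_time/1000000000, 2) as seconds
from plsql_profiler_units u
where u.runid = :runid
order by u.total_time desc;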
The plsql_profiler_data table is where the real performance benchmarks are stored from executing dbms_profiler. This table contains the execution statistics for each line of code contained in our PL/SQL units. It can be joined to the user_source view to extract the actual line of code for each benchmark. The primary key includes runid, unit_number, and line#.
The plsql_profiler_data table has the following important columns, as indicated by the select list of the following query:
select runid, unit_number, line#, total_occur, total_time, min_time, max_time
from plsql_profiler_data;
The profiler utility populates three tables with information: plsql_profiler_runs, plsql_profiler_units, and plsql_profiler_data. Each "run" is initiated by a user and contains zero or more "units". Each unit contains "data" about its execution, the guts of the performance benchmarks.
The performance information for a line in a unit needs to be tied back to the line source in user_source. Once that join is made, the developer will have all of the information they need to optimize, enhance, and tune their application code, as well as the SQL.
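A sketch of that join, going through plsql_profiler_units so that each data row is matched to the correct unit and line (:runid is a bind variable for the run of interest):

select u.unit_name,
       d.line#,
       s.text,
       d.total_occur,
       round(d.total_time/1000000000, 4) as seconds
from plsql_profiler_data d, plsql_profiler_units u, user_source s
where d.runid = :runid
and   d.runid = u.runid
and   d.unit_number = u.unit_number
and   s.name = u.unit_name
and   s.type = u.unit_type
and   s.line = d.line#
order by d.total_time desc;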
Using the dbms_profiler Scripts
To extract high-level data, including the length of a particular run, the script (profiler_runs.sql) below can be executed:
column runid format 990
column type format a15
column run_comment format a20
column object_name format a20

select a.runid,
       substr(b.run_comment, 1, 20) as run_comment,
       decode(a.unit_name, '', '<anonymous>',
              substr(a.unit_name, 1, 20)) as object_name,
       TO_CHAR(a.total_time/1000000000, '99999.99') as sec,
       TO_CHAR(100*a.total_time/b.run_total_time, '999.9') as pct
from plsql_profiler_units a, plsql_profiler_runs b
where a.runid = b.runid
order by a.runid asc;
RUNID UNIT_NUMBER OBJECT_NAME          TYPE                  SEC    PCT
----- ----------- -------------------- --------------- --------- ------
    1           1 <anonymous>                                .00     .0
    1           2 <anonymous>                               1.01     .0
    1           3 BMC$PKKPKG           PACKAGE BODY      6921.55   18.2
    1           4 <anonymous>                                .02     .0
    2           1 <anonymous>                                .00     .0
    2           2 <anonymous>                                .01     .0
Note that anonymous PL/SQL blocks are also included in the profiler tables. Anonymous blocks are less useful from a tuning perspective since they cannot be tied back to a source object in user_source. Anonymous PL/SQL blocks are simply runtime source objects and do not have a corresponding dictionary object (package, procedure, function). For this reason, anonymous blocks should be eliminated from most reports, as in the sketch below.
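One way to exclude them, assuming the unit_type recorded for such blocks is 'ANONYMOUS BLOCK', is a simple predicate on plsql_profiler_units:

select u.runid,
       u.unit_owner,
       u.unit_name,
       u.unit_type,
       round(u.total_time/1000000000, 2) as seconds
from plsql_profiler_units u
where u.unit_type <> 'ANONYMOUS BLOCK'
order by u.runid, u.total_time desc;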
From the data displayed above, the next step is to focus on the lines within the package body, testproc, that are taking the longest. The script (profiler_top10_lines.sql) below displays the line numbers and performance benchmarks of the 10 worst performing lines of code.
select line#,
       total_occur,
       decode(total_occur, null, 0, 0, 0, total_time/total_occur/1000) as avg,
       decode(total_time, null, 0, total_time/1000) as total_time,
       decode(min_time, null, 0, min_time/1000) as min,
       decode(max_time, null, 0, max_time/1000) as max
from plsql_profiler_data
where runid = 1
and unit_number = 3   -- testproc
and rownum < 11       -- only show Top 10
order by total_time desc;
     LINE# TOTAL_OCCUR        AVG TOTAL_TIME        MIN        MAX
---------- ----------- ---------- ---------- ---------- ----------
       156           1   5008.457   5008.457   5008.457   5008.457
        27           1    721.879    721.879    721.879    721.879
      2113           1    282.717    282.717    282.717    282.717
        89           1    138.565    138.565    138.565    138.565
      2002           1    112.863    112.863    112.863    112.863
      1233           1     94.984     94.984     94.984     94.984
        61           1     94.984     94.984     94.984     94.984
       866           1     94.984     94.984     94.984     94.984
       481           1     92.749     92.749     92.749     92.749
       990           1     90.514     90.514     90.514     90.514

10 rows selected.
Taking it one step further, the query below (profiler_line_source.sql) will extract the actual source code for the 10 worst performing lines.
select a.line#,
       decode(a.total_occur, null, 0, 0, 0,
              a.total_time/a.total_occur/1000) as avg,
       c.text as source
from plsql_profiler_data a, plsql_profiler_units b, user_source c
where a.runid = 1
and a.unit_number = 3
and a.runid = b.runid
and a.unit_number = b.unit_number
and b.unit_name = c.name
and a.line# = c.line
and rownum < 11
order by a.total_time desc;
     LINE#        AVG SOURCE
---------- ---------- ------------------------------------------------------------
       156   5008.457 select sum(bytes) into reusable_var from dba_free_space;
        27    721.879 execute immediate dml_str USING current_time
      2113    282.717 select OBJ#, TYPE# from SYS.OBJ$;
        89    138.565 OBJ_TYPES(BOBJ(I)) := BTYP(I);
      2002    112.863 select count(*) into reusable_var from dba_objects
      1233     94.984 delete from pkk_daily_activity
        61     94.984 update_stats_table(33, reusable_var, null);
       866     94.984 latest_executions := reusable_var - total_executions;
       481     92.749 time_number := hours + round(minutes * 100/60/100,2);
       990     90.514 update_stats_table(45, LOBS, null);

10 rows selected.
Notice from the output above that most of the information needed to diagnose and fix PL/SQL performance issues is provided. For lines containing SQL statements, the tuner can optimize the SQL, perhaps by adding optimizer hints, eliminating full table scans, etc. Consult Chapter 5 for more details on using the tkprof utility to diagnose SQL issues.
Other useful scripts that are hidden within the Oracle directory structure ($ORACLE_HOME/PLSQL/DEMO) include a few gems that help report and analyze profiler information.
· profdemo.sql - A demo script for collecting PL/SQL profiler data.
· profsum.sql - A collection of useful SQL scripts that are executed against the profiler tables.
· profrep.sql - Creates views and a package (unwrapped) that populates the views based on the three underlying profiler tables.
· Wrap only for production - Wrapping code is desired for production environments but not for profiling. It is much easier to read the unencrypted form of the text in our reports than it is to connect line numbers to source versions. Use dbms_profiler before you wrap your code in a test environment, wrap it, and then put it in production.
· Eliminate system packages most of the time - Knowing the performance data for internal Oracle processing does not buy you much, since you cannot change anything. However, knowing that the performance problem is within the system packages will save you the time of trying to tune your own code when the problem is elsewhere.
· When analyzing lines of code, it is best to concentrate on the following:
· Lines of code that are frequently executed - For example, a loop that executes 5,000 times is a great candidate for tuning. Guru Oracle tuners typically look for the "low hanging fruit" in which one line or a group of lines of code is executed much more than others. The benefits of tuning one line of code that is executed often far outweigh tuning lines that may cost more yet are executed infrequently in comparison.
· Lines of code with a high value for average time executed - The minimum and maximum values of execution time are interesting, although not as useful as the average execution time. Min and max only tell us how much the execution time varies depending on database activity. Line by line, a PL/SQL developer should focus on the lines that cost the most on an average execution basis. dbms_profiler does not provide the average, but it does provide enough data to allow it to be computed (Total Execution Time / # Times Executed), as in the sketch after this list.
· Lines of code that contain SQL syntax - The main resource consumers are the lines that execute SQL. Once the data is sorted by average execution time, the worst statements usually contain SQL. Optimize and tune the SQL through utilities such as Explain Plan, tkprof, and third-party software.
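A sketch of that computation, deriving the average from total_time and total_occur for a given run and unit (:runid and :unit_number are bind variables), is shown below; times are divided by 1,000,000,000 to report seconds, matching the earlier queries:

select line#,
       total_occur,
       round(total_time/1000000000, 4) as total_seconds,
       round(total_time / nullif(total_occur, 0) / 1000000000, 6) as avg_seconds
from plsql_profiler_data
where runid = :runid
and unit_number = :unit_number
order by avg_seconds desc nulls last;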