Friday, February 28, 2014

ORA-01653: unable to extend table in tablespace SYSTEM

You tried to allocate an extent for a table segment in the specified tablespace, but the tablespace has run out of free space.

1) First, check the free space in the tablespace:

SELECT TABLESPACE_NAME, SUM(BYTES) FREE FROM DBA_FREE_SPACE GROUP BY TABLESPACE_NAME;

SELECT SUM(BYTES)/(1024*1024) AS MB
FROM DBA_FREE_SPACE  WHERE TABLESPACE_NAME='SYSTEM';

2) Identify where the tablespace's datafile physically resides. For Oracle XE on Windows, the SYSTEM datafile is typically:

     C:\ORACLEXE\ORADATA\XE\SYSTEM.DBF
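If the datafile location differs on your system, you can look it up in the data dictionary (this uses the standard DBA_DATA_FILES view columns):

SELECT FILE_NAME, BYTES/(1024*1024) AS MB
FROM DBA_DATA_FILES
WHERE TABLESPACE_NAME = 'SYSTEM';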

3) Run the query below to extend the datafile:

ALTER DATABASE DATAFILE 'C:\ORACLEXE\ORADATA\XE\SYSTEM.DBF' RESIZE 2000000;

Note that RESIZE takes a size in bytes unless you append K, M, or G, and the new size must be larger than the current size.



4) Re-run the free-space query to verify the increase:

SELECT SUM(BYTES)/(1024*1024) AS MB
FROM DBA_FREE_SPACE WHERE TABLESPACE_NAME='SYSTEM';

Thursday, February 27, 2014

EXCHANGE PARTITION TABLE

EXCHANGE PARTITION swaps the data segment of a table partition with the segment of a standalone table. It is a data dictionary operation, so it is fast regardless of data volume. Generic syntax:

ALTER TABLE ORIGINAL_TABLE
EXCHANGE PARTITION TEMP_TABLE_PARTITION_NAME
WITH TABLE TEMP_TABLE
WITHOUT VALIDATION UPDATE GLOBAL INDEXES;

ALTER TABLE WC_TEST_TMP_FS
EXCHANGE PARTITION PERIOD
WITH TABLE WC_TEST_TMP1_FS
INCLUDING INDEXES
WITHOUT VALIDATION;
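For reference, a minimal end-to-end sketch (the table and partition names below are invented for illustration):

-- A range-partitioned target table
CREATE TABLE SALES_HIST (SALE_ID NUMBER, SALE_DT DATE)
PARTITION BY RANGE (SALE_DT)
(PARTITION P_2014_Q1 VALUES LESS THAN (TO_DATE('2014-04-01','YYYY-MM-DD')));

-- The standalone table must have the same column structure
CREATE TABLE SALES_STAGE (SALE_ID NUMBER, SALE_DT DATE);

-- After loading SALES_STAGE, swap its segment with the partition
ALTER TABLE SALES_HIST
EXCHANGE PARTITION P_2014_Q1
WITH TABLE SALES_STAGE
WITHOUT VALIDATION;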

Tuesday, February 25, 2014

A NQSConfig.INI File Configuration



The Oracle BI Server software uses an initialization file called NQSConfig.INI to set parameters upon startup. This initialization file includes parameters to customize behavior based on the requirements of each individual installation. This appendix lists the NQSConfig.INI file parameters and gives a brief description and any required syntax for each parameter. The parameters are generally listed in the order they appear in the configuration file.

About Parameters in the NQSConfig.INI File

Each instance of the Oracle BI Server has its own NQSConfig.INI file. When you update NQSConfig.INI parameters for a clustered deployment, ensure that you make the change in the NQSConfig.INI file for each instance of the Oracle BI Server.

Some parameters in NQSConfig.INI are centrally managed by Fusion Middleware Control and cannot be updated manually in NQSConfig.INI. Instead, use Fusion Middleware Control to change these parameters. Parameters that are centrally managed by Fusion Middleware Control are marked as such in this appendix, and are also identified by comments in the NQSConfig.INI file.

MAX_ROWS_PER_CACHE_ENTRY

Specifies the maximum number of rows in a query result set to qualify for storage in the query cache. Limiting the number of rows is a useful way to avoid consuming the cache space with runaway queries that return large numbers of rows. If the number of rows a query returns is greater than the value specified in the MAX_ROWS_PER_CACHE_ENTRY parameter, then the query is not cached.

When set to 0, there is no limit to the number of rows per cache entry.

Example: MAX_ROWS_PER_CACHE_ENTRY = 100000;

MAX_CACHE_ENTRY_SIZE

Specifies the maximum size for a cache entry. Potential entries that exceed this size are not cached. The default size is 20 MB.

Specify GB for gigabytes, KB for kilobytes, MB for megabytes, and no units for bytes.

Example: MAX_CACHE_ENTRY_SIZE = 20 MB;

MAX_CACHE_ENTRIES

Specifies the maximum number of cache entries allowed in the query cache to help manage cache storage. The actual limit of cache entries might vary slightly depending on the number of concurrent queries. The default value is 1000.

Example: MAX_CACHE_ENTRIES = 1000;

POPULATE_AGGREGATE_ROLLUP_HITS

Specifies whether to aggregate data from an earlier cached query result set and create a new entry in the query cache for rollup cache hits. The default value is NO.

Typically, if a query gets a cache hit from a previously executed query, then the new query is not added to the cache. A user might have a cached result set that contains information at a particular level of detail (for example, sales revenue by ZIP code). A second query might ask for this same information, but at a higher level of detail (for example, sales revenue by state). The POPULATE_AGGREGATE_ROLLUP_HITS parameter overrides this default when the cache hit occurs by rolling up an aggregate from a previously executed query (in this example, by aggregating data from the first result set stored in the cache). That is, Oracle Business Intelligence sales revenue for all ZIP codes in a particular state can be added to obtain the sales revenue by state. This is referred to as a rollup cache hit.

Normally, a new cache entry is not created for queries that result in cache hits. You can override this behavior specifically for cache rollup hits by setting POPULATE_AGGREGATE_ROLLUP_HITS to YES: when a query gets an aggregate rollup hit, the rolled-up result is put into the cache. Non-rollup cache hits are not affected by this parameter. Setting this parameter to YES might result in better performance, but results in more entries being added to the cache.

Example: POPULATE_AGGREGATE_ROLLUP_HITS = NO;

USE_ADVANCED_HIT_DETECTION

When caching is enabled, each query is evaluated to determine whether it qualifies for a cache hit. A cache hit means that the server was able to use the cache to answer the query and did not go to the database at all. The Oracle BI Server can use the query cache to answer queries at the same or a higher level of aggregation.

The parameter USE_ADVANCED_HIT_DETECTION enables an expanded search of the cache for hits. The expanded search has a performance impact, which is not easily quantified because of variable customer requirements. Customers that rely heavily on query caching and are experiencing misses might want to test the trade-off between better query matching and overall performance for high user loads. See also the parameter "MAX_SUBEXPR_SEARCH_DEPTH" for related information.
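Following the example convention of this appendix, with the default value (expanded hit detection disabled):

Example: USE_ADVANCED_HIT_DETECTION = NO;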

MAX_SUBEXPR_SEARCH_DEPTH

Lets you configure how deep the hit detector looks for an inexact match in an expression of a query. The default is 5.

For example, at level 5, a query on the expression SIN(COS(TAN(ABS(ROUND(TRUNC(profit)))))) misses on profit, which is at level 7. Changing the search depth to 7 opens up profit for a potential hit.
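A setting that covers the example above (7 here is an illustrative value; the default is 5):

Example: MAX_SUBEXPR_SEARCH_DEPTH = 7;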

DISABLE_SUBREQUEST_CACHING

When set to YES, disables caching at the subrequest (subquery) level. The default value is NO.

Caching subrequests improves performance and the cache hit ratio, especially for queries that combine real-time and historical data. In some cases, however, you might disable subrequest caching, such as when other methods of query optimization provide better performance.

Example: DISABLE_SUBREQUEST_CACHING = NO;

GLOBAL_CACHE_STORAGE_PATH

In a clustered environment, Oracle BI Servers can be configured to access a shared cache that is referred to as the global cache. The global cache resides on a shared file system storage device and stores seeding and purging events and the result sets that are associated with the seeding events.

This parameter specifies the physical location for storing cache entries shared across clustering. This path must point to a network share. All clustering nodes share the same location.

You can specify the size in KB, MB, or GB, or enter a number with no suffix to specify bytes.

Syntax: GLOBAL_CACHE_STORAGE_PATH = "directory name" SIZE;

Example: GLOBAL_CACHE_STORAGE_PATH = "C:\cache" 250 MB;

MAX_GLOBAL_CACHE_ENTRIES

The maximum number of cache entries stored in the location that is specified by GLOBAL_CACHE_STORAGE_PATH.

Example: MAX_GLOBAL_CACHE_ENTRIES = 1000;

CACHE_POLL_SECONDS

The interval in seconds that each node polls from the shared location that is specified in GLOBAL_CACHE_STORAGE_PATH.

Example: CACHE_POLL_SECONDS = 300;

CLUSTER_AWARE_CACHE_LOGGING

Turns on logging for the cluster caching feature. Used only for troubleshooting. The default is NO.

Example: CLUSTER_AWARE_CACHE_LOGGING = NO;

General Section Parameters

The General section contains general server default parameters, including localization and internationalization, temporary space and memory allocation, and other default parameters used to determine how data is returned from the Oracle BI Server to a client.

LOCALE

Specifies the locale in which data is returned from the server. This parameter also determines the localized names of days and months.

To successfully run Oracle Business Intelligence, ensure that you configure the appropriate locales on the operating system for the language in which you run the applications. (In some cases, you might install additional content on the system to support the locale.) The Oracle BI Server sets the C-runtime locale during the server startup. Some locale- and language-related settings are interrelated and help determine how the Oracle BI Server sorts data. Ensure that the settings for the following parameters work together:


·         LOCALE

·         SORT_ORDER_LOCALE

·         SORT_TYPE

·         CASE_SENSITIVE_CHARACTER_COMPARISON

For example, "english-usa" is a valid platform-independent value for both LOCALE and SORT_ORDER_LOCALE:


LOCALE = "english-usa";

SORT_ORDER_LOCALE = "english-usa";

SORT_TYPE = "binary";

SORT_TYPE

Specifies the type of sort to perform. The default value is BINARY. Binary sorts are faster than nonbinary sorts.

Valid values are BINARY and DEFAULT. If you specify DEFAULT, then a nonbinary sort is performed; this yields better sort results for data that contains accented characters.

Example: SORT_TYPE = "BINARY";

CASE_SENSITIVE_CHARACTER_COMPARISON

Specifies whether the Oracle BI Server differentiates between uppercase and lowercase characters when performing comparison operations.

Valid values are ON and OFF. When set to OFF, case is ignored. When set to ON, case is considered for comparisons. This parameter is set to ON by default. For binary sorts, case sensitivity for the server and for the relational database should be set the same way.

NULL_VALUES_SORT_FIRST

Specifies whether NULL values sort before other values (ON) or after them (OFF). ON and OFF are the only valid values. The value of NULL_VALUES_SORT_FIRST should conform to the underlying database. If multiple underlying databases sort NULL values differently, then set the value to correspond to the database that is used the most in queries.

Example: NULL_VALUES_SORT_FIRST = OFF;

DATE_TIME_DISPLAY_FORMAT


Specifies the format for how date/time stamps are input to and output from the Oracle BI Server. The default value is yyyy/mm/dd hh:mi:ss.

Example: DATE_TIME_DISPLAY_FORMAT = "yyyy/mm/dd hh:mi:ss";

DATE_DISPLAY_FORMAT

Specifies the format for how dates are input to and output from the Oracle BI Server. The default value is yyyy/mm/dd.

Example: DATE_DISPLAY_FORMAT = "yyyy/mm/dd";

TIME_DISPLAY_FORMAT

Specifies the format for how times are input to and output from the Oracle BI Server. The default value is hh:mi:ss.

Example: TIME_DISPLAY_FORMAT = "hh:mi:ss";

WORK_DIRECTORY_PATHS

Specifies one or more directories for temporary space.

Each directory listed must be an existing, writable path name, with double quotation marks ( " ) surrounding the path name. Specify mapped directories only.

You can specify either fully qualified paths, or relative paths. When you specify a path that does not start with "/" (on UNIX) or "<drive>:" (on Windows), the Oracle BI Server assumes that the path is relative to the local writable directory. For example, if you specify the path "temp," then at run time, the Oracle BI Server uses the following:

ORACLE_INSTANCE/tmp/OracleBIServerComponent/coreapplication_obisn/temp

Specify multiple directories with a comma-delimited list. Valid values are any relative path, or fully qualified path to an existing, writable directory. UNC path names and network mapped drives are allowed only if the service runs under a qualified user account.

For optimum performance, temporary directories should reside on high-performance storage devices. If you specify multiple directories, then they should reside on different physical drives.

Syntax: WORK_DIRECTORY_PATHS = "path_1" [, "path_2"{, "path_n"}];

Example 1: WORK_DIRECTORY_PATHS = "temp" ;

Example 2: WORK_DIRECTORY_PATHS = "D:\temp", "F:\temp";

VIRTUAL_TABLE_PAGE_SIZE

Several operations, such as sort, join, union, and database fetch, can require memory resources beyond those available to the Oracle BI Server. To manage this condition, the server uses a virtual table management mechanism that provides a buffering scheme for processing these operations. When the amount of data exceeds the VIRTUAL_TABLE_PAGE_SIZE, the remaining data is buffered in a temporary file and placed in the virtual table as processing continues. This mechanism supports dynamic memory sizes and ensures that any row can be obtained dynamically for processing queries.

VIRTUAL_TABLE_PAGE_SIZE specifies the size of a memory page for Oracle BI Server internal processing. A larger value reduces I/O but increases memory usage, especially in a multiuser environment.

When VIRTUAL_TABLE_PAGE_SIZE is increased, I/O operations are reduced. Complex queries might use 20 to 30 virtual tables, while simple queries might not even require virtual tables. The default size of 128 KB is a reasonable size when one considers that the size for virtual paging in Windows NT is 64 KB. This parameter can be tuned depending on the number of concurrent users and the average query complexity. In general, setting the size larger than 256 KB does not yield a corresponding increase in throughput due to the 64 KB size limit of Windows NT system buffers, as each I/O still goes through the system buffers. 128 KB is also a reasonable value on UNIX systems.

Example: VIRTUAL_TABLE_PAGE_SIZE = 128 KB;

USE_LONG_MONTH_NAMES

Specifies whether month names are returned as full names, such as JANUARY and FEBRUARY, or as three-letter abbreviations, such as JAN and FEB. Valid values are YES and NO. Specify YES to have month names returned as full names, or NO to have month names returned as three-letter abbreviations. The default value is NO.

Example: USE_LONG_MONTH_NAMES = NO;

USE_LONG_DAY_NAMES

Specifies whether day names are returned as full names, such as MONDAY and TUESDAY, or as three-letter abbreviations, such as MON and TUE. Valid values are YES and NO. Specify YES to have day names returned as full names, or NO to have day names returned as three-letter abbreviations. The default value is NO.

Example: USE_LONG_DAY_NAMES = NO;

UPPERCASE_USERNAME_FOR_INITBLOCK

You can use the special syntax :USER in initialization blocks to pass through user names. When this parameter is set to YES, user names passed through initialization blocks using :USER are changed to all uppercase. Otherwise, case is maintained in the user names.

Example: UPPERCASE_USERNAME_FOR_INITBLOCK = NO;

Security Section Parameters

The security parameters specify default values for the Oracle BI Server security features.

DEFAULT_PRIVILEGES

Specifies the privileges that users and groups are assigned when they are initially created.

Valid values are NONE and READ. The default value is READ.

Example: DEFAULT_PRIVILEGES = READ;

PROJECT_INACCESSIBLE_COLUMN_AS_NULL

Controls how security-sensitive columns are displayed to unauthorized users. If this parameter is set to YES, then a NULL expression replaces the original column expression in the query and secured columns are hidden from unauthorized users in analyses.

If this parameter is set to NO, then when a user attempts to run a report that contains a secured column the user is not authorized to see, an unresolved column error occurs.

The default value is YES.

Example: PROJECT_INACCESSIBLE_COLUMN_AS_NULL = YES;

IGNORE_LDAP_PWD_EXPIRY_WARNING

Determines whether users can log in even when the LDAP server issues a password expiration warning. Valid values are YES and NO. Specify YES to allow users to log in when the LDAP server issues a password expiration warning, or specify NO to reject user logins when the warning is issued. The default value is NO.

After user passwords have actually expired in the LDAP server, users cannot log in, regardless of the value of this parameter.

Example: IGNORE_LDAP_PWD_EXPIRY_WARNING = NO;

SSL

This parameter, along with the remaining parameters in this section, relates to Secure Sockets Layer (SSL) communication between Oracle Business Intelligence components.

The default setting for SSL is NO.
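Following the example convention used for the other parameters, with the default value:

Example: SSL = NO;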

SSL_CERTIFICATE_FILE

Specifies the directory path to the certificate file. For components acting as SSL servers, such as Oracle BI Server and Oracle BI Scheduler, this is the Server Certificate file name. For client components, such as Oracle Business Intelligence ODBC Client Data Source, this is the Client Certificate file name.

This parameter is centrally managed.

Example (Server): SSL_CERTIFICATE_FILE = "servercert.pem";

Example (Client): SSL_CERTIFICATE_FILE = "client-cert.pem";

SSL_PRIVATE_KEY_FILE

Specifies the private key file. For server components, this is the Server Private Key file name. For client components, this is the Client Private Key file name.

This parameter is centrally managed.

Example (Server): SSL_PRIVATE_KEY_FILE = "serverkey.pem";

Example (Client): SSL_PRIVATE_KEY_FILE = "client-key.pem";

SSL_CA_CERTIFICATE_FILE

Specifies the name and path of the trusted CA Certificate used to verify the server or client certificate when Verify Peer is set to YES. Takes effect only when client authentication is required.

This parameter is centrally managed.

Example: SSL_CA_CERTIFICATE_FILE = "CACertFile";

SSL_VERIFY_PEER

Specifies whether the server requires client authentication or not. When set to YES, the Oracle Business Intelligence component verifies that the other component in the connection has a valid certificate (that is, mutual authentication). The default value of NO permits a connection to any peer.

This parameter is centrally managed.

Example: SSL_VERIFY_PEER = NO;

SSL_TRUSTED_PEER_DNS

Specifies individual named clients that are allowed to connect by Distinguished Name (DN). The DN identifies the entity that holds the private key that matches the public key of the certificate.

This parameter is not centrally managed.

Example: SSL_TRUSTED_PEER_DNS = "";

SSL_CERT_VERIFICATION_DEPTH

The depth of the certificate chain. A depth of one means a certificate has to be signed by a trusted CA. A depth of two means the certificate was signed by a CA that was further verified by a CA. The default value is 9.

This parameter is not centrally managed.

Example: SSL_CERT_VERIFICATION_DEPTH = 9;

SSL_CIPHER_LIST

A list of permitted cipher suites that the server uses. The default is an empty string, which is equivalent to "ALL".

You must set this parameter only when you want to use a cipher suite other than the default choice.

This parameter is not centrally managed.

Example: SSL_CIPHER_LIST = "EXP-RC2-CBC-MD5";

Server Section Parameters

The parameters in the Server section define defaults and limits for the Oracle BI Server.

READ_ONLY_MODE

Permits or forbids changing Oracle BI repository files when the Administration Tool is in either online or offline mode. The default is NO, meaning that repositories can be edited.

When this parameter is set to YES, it prevents the Administration Tool from making any changes to repository files. When the Administration Tool opens the repository, a message informs the user that the repository is read-only. If this parameter is set to NO, then the Administration Tool can make changes to the repository.

Note that even when READ_ONLY_MODE is set to NO, there are still situations when Administration Tool opens repositories in read-only mode. For example, if you open a repository in offline mode, but the Oracle BI Server or another Administration Tool client holds a lock on the repository, then the repository opens in read-only mode. In online mode, a repository might open as read-only if an offline Administration Tool held a lock on the repository at the time the Oracle BI Server started.

In addition, the Administration Tool also opens in read-only mode when Oracle Business Intelligence has been clustered, and the Administration Tool is connected in online mode to a slave node. This occurs because the master node holds a lock on the repository. To avoid this situation when running in a clustered environment, ensure that the Oracle BI Server ODBC DSN that is used by the Administration Tool has been configured to point to the cluster controller rather than to a particular Oracle BI Server.
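Following the example convention of this appendix, with the default value:

Example: READ_ONLY_MODE = NO;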

MAX_SESSION_LIMIT

Specifies the maximum number of concurrent connections that are allowed by the server. When this number is exceeded, the server refuses the connection request.

The limit is 65,535 connections.

Example: MAX_SESSION_LIMIT = 2000;

MAX_REQUEST_PER_SESSION_LIMIT

Specifies the maximum number of logical requests per session; that is, the maximum number of requests that can be open at the same time in a single session.

The limit is 65,535 logical requests per session.

Example: MAX_REQUEST_PER_SESSION_LIMIT = 500;

SERVER_THREAD_RANGE

For each Oracle BI Server request, SERVER_THREAD_RANGE specifies configuration information for thread allocation. The lower number in the range specifies the number of threads that are initially allocated, and the larger number specifies the maximum number of threads to be allocated. The thread pool grows and shrinks in 5-thread increments until the upper or lower bound is reached. If there are fewer threads than sessions, then sessions share the available threads on a first-come, first-served basis.

Although setting both values to the same number maximizes the benefits of thread pooling, there is a cost associated with doing so. If you set the lower boundary and the upper boundary to the same number, then that number of threads is always allocated, which consumes stack space.

Example: SERVER_THREAD_RANGE = 10-200;

SERVER_THREAD_STACK_SIZE

Specifies the memory stack size that is allocated for each server thread. A value of 0 sets the stack size as 256 KB for each server thread for 32-bit platforms, or 1 MB for 64-bit systems.

The default value is 0. If you change this value, then ensure that the value that you provide is appropriate for the memory resources that are available on the system.

Example: SERVER_THREAD_STACK_SIZE = 0;

DB_GATEWAY_THREAD_RANGE

Specifies the minimum and maximum number of threads in the Oracle Business Intelligence Database Gateway thread pool, analogous to SERVER_THREAD_RANGE.

The default value is 40-200.

Example: DB_GATEWAY_THREAD_RANGE = 40-200;

DB_GATEWAY_THREAD_STACK_SIZE

Specifies the memory stack size that is allocated for each Oracle Business Intelligence Database Gateway thread. A value of 0 sets the stack size as 256 KB per server thread for 32-bit platforms, or 1 MB for 64-bit systems.

The default value is 0. If you change this value, then ensure that the value that you provide is appropriate for the memory resources that are available on the system.

Example: DB_GATEWAY_THREAD_STACK_SIZE = 0;

MAX_EXPANDED_SUBQUERY_PREDICATES

Controls the maximum number of values that can be in an IN value list that is populated by a subquery. The default is 8,192 values. The Oracle BI Server generates an error if this limit is exceeded.

Note that there is also a database feature setting called MAX_ENTRIES_PER_IN_LIST. This value is set according to how many literals can be supported by the given data source. If this limit is exceeded, then the Oracle BI Server breaks the IN list into smaller ones and ORs them together. However, if the original IN list is too long, it might exceed the SQL statement length limit for that data source, resulting in a database error or failure. The MAX_EXPANDED_SUBQUERY_PREDICATES parameter provides a second limit to ensure that this situation does not occur.

Example: MAX_EXPANDED_SUBQUERY_PREDICATES = 8192;

INIT_BLOCK_CACHE_ENTRIES

Controls the number of initialization block result sets that are cached with row-wise initialization. The cache key is the fully instantiated initialization block SQL.

The default value is 20. Because this parameter affects internal operations for localized versions of Oracle Business Intelligence, it is recommended that you do not change this value unless instructed to do so.

Example: INIT_BLOCK_CACHE_ENTRIES = 20;

CLIENT_MGMT_THREADS_MAX

Specifies the number of management threads to allocate for managing Oracle BI Server client/server communications. Each client process consumes a management thread. The client/server communication method for Oracle BI Server is TCP/IP.

Because the default value of 5 is typically sufficient for server communications with clients, do not change the value of this parameter.

Example: CLIENT_MGMT_THREADS_MAX = 5;

DISCONNECTED

This parameter has been deprecated and is no longer used.

AUTOMATIC_RESTART

Specifies whether the Oracle BI Server should be automatically restarted after a failure. Automatic restart applies only to an Oracle BI Server platform; it does not apply to a clustered Oracle BI Server environment. The default value is YES.

Example: AUTOMATIC_RESTART = YES;

VARIABLE_VALUE_LIMIT

Specifies the maximum length of returned session variable values when client tools call the NQSGetSessionValues() function.

Example: VARIABLE_VALUE_LIMIT = 10;

FMW_SECURITY_SERVICE_MAX_NUMBER_OF_CONNECTIONS

Limits the number of connections from the Oracle BI Server to the Oracle Fusion Middleware security service to avoid overloading the Oracle WebLogic Server with too many connections. Do not change.

Example: FMW_SECURITY_SERVICE_MAX_NUMBER_OF_CONNECTIONS = 2000;

FMW_SECURITY_SERVICE_MAX_NUMBER_OF_RETRIES

Specifies the maximum number of times to attempt to connect to the Oracle Fusion Middleware security service.

Example: FMW_SECURITY_SERVICE_MAX_NUMBER_OF_RETRIES = 0;

Dynamic Library Section Parameters

This section contains one entry for each dynamic link library (DLL) or set of shared objects that is used to make connections to the Oracle BI Server, for both Windows and UNIX systems.

Syntax: logical name = dynamic library;

In this syntax:

·         logical name: A logical name for the dynamic link library. These logical names also appear in the Connection Pool dialog.

·         dynamic library: The name of the associated dynamic library. These libraries are located in:

ORACLE_HOME/bifoundation/server/bin
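A sample entry (the logical name and library file name below are illustrative, not taken from an actual installation):

ODBC200 = nqsdbgatewayodbc;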

Usage Tracking Section Parameters

The usage tracking parameters define default values for the collection of usage tracking statistics on each logical query submitted to the Oracle BI Server.

ENABLE

Enables or disables the collection of usage tracking statistics.

Valid values are YES and NO. The default value is NO. When set to NO, statistics are not accumulated. When set to YES, statistics are accumulated for each logical query.

Example: ENABLE = NO;

DIRECT_INSERT

Specifies whether statistics are inserted directly into a database table or written to a local file.

·         When DIRECT_INSERT is set to NO, data is written to a flat file.

·         When DIRECT_INSERT is set to YES, data is inserted into a table.

Because direct insertion into a database table is recommended, the default value is YES.
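An example, matching the recommended default:

Example: DIRECT_INSERT = YES;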

STORAGE_DIRECTORY

Specifies the full path to the directory that is used to store usage tracking log files. The directory listed must be a valid fully qualified, writable directory path name, with double quotation marks ( " ) surrounding the path name. Specify mapped directories only.

Valid values are any fully qualified path name to an existing, writable directory.

The parameter STORAGE_DIRECTORY is valid only if the parameter DIRECT_INSERT is set to NO. When usage tracking is enabled, but no storage directory is specified, the files are written to the following location:

ORACLE_INSTANCE\diagnostics\logs\OracleBIServerComponent\coreapplication_obisn

Example: STORAGE_DIRECTORY = "C:\Temp\UsageTracking";

CHECKPOINT_INTERVAL_MINUTES

Specifies how often the usage tracking data is flushed to disk. Setting this interval to a larger number increases the amount of data that might be lost if the server shuts down abnormally; setting it lower incurs additional overhead.

The default is 5 minutes.

Example: CHECKPOINT_INTERVAL_MINUTES = 5;

FILE_ROLLOVER_INTERVAL_MINUTES

Specifies the time, in minutes, before the current usage tracking log file is closed and a new file is created. For example, if this entry is set to 60 minutes, then 24 usage tracking log files are created each day.

The default is 30 minutes.

When the checkpoint interval equals or exceeds the rollover interval, only the rollover occurs explicitly; the checkpoint occurs implicitly only when the old usage tracking log file is closed.

Example: FILE_ROLLOVER_INTERVAL_MINUTES = 240;

CODE_PAGE

For multilingual repositories, this specifies the type of output code page to use when writing statistics to disk. Valid values include any valid code page number (such as 1252), and other globally recognized output code page types.

The default value is ANSI. The type depends upon the database loader being used. For example, to support multilingual repositories for database loaders used by Oracle Database and DB2, specify UTF8. Enclose the value in double quotation marks. UCS-2 is currently not supported.

Example: CODE_PAGE = "ANSI";

PHYSICAL_TABLE_NAME

Specifies the table in which to insert records that correspond to the query statistics. The table name is the fully qualified name as it appears in the Physical layer of the Administration Tool.

The general structure of this parameter depends on the type of database being used:

·         For SQL Server, use the following general structure:

PHYSICAL_TABLE_NAME = "Database"."Catalog"."Schema"."Table";

Example:

PHYSICAL_TABLE_NAME = "OracleBI Usage"."Catalog"."dbo"."S_NQ_ACCT";

·         For Oracle Database, use the following general structure:

PHYSICAL_TABLE_NAME = "Database"."Schema"."Table";

Examples:

PHYSICAL_TABLE_NAME = "OracleBI Usage"."S_NQ_SCHED"."S_NQ_ACCT";

In the preceding example, the structure is as follows:

o    "OracleBI Usage" represents the database component

o    "S_NQ_SCHED" represents the schema component

o    "S_NQ_ACCT" represents the table name

CONNECTION_POOL

Specifies the connection pool to use for inserting records into the usage tracking table. This is the fully qualified name as it appears in the Physical layer of the Administration Tool.

Example: CONNECTION_POOL = "OracleBI Usage"."Connection Pool";

BUFFER_SIZE

Specifies the amount of memory that is used to temporarily store insert statements. The buffer allows the insert statements to be issued to the usage tracking table independently of the query that produced the statistics to be inserted. When the buffer fills up, then the statistics of subsequent queries are discarded until the insert threads service the buffer entries.

You can specify the size in KB or MB, or enter a number with no suffix to specify bytes.

Example: BUFFER_SIZE = 10 MB;

BUFFER_TIME_LIMIT_SECONDS

Specifies the maximum amount of time that an insert statement remains in the buffer before it is issued to the usage tracking table. This time limit ensures that the Oracle BI Server issues the insert statements quickly even during periods of extended quiescence.

Example: BUFFER_TIME_LIMIT_SECONDS = 5;

NUM_INSERT_THREADS

Specifies the number of threads that remove insert statements from the buffer and issue them to the usage tracking table. The number of threads should not exceed the total number of threads that are assigned to the connection pool.

Example: NUM_INSERT_THREADS = 5;

MAX_INSERTS_PER_TRANSACTION

Specifies the number of records to group as a single transaction when inserting into the usage tracking table. Increasing the number might slightly increase performance, but also increases the possibility of inserts being rejected due to deadlocks in the database.

Example: MAX_INSERTS_PER_TRANSACTION = 1;
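
Taken together, the usage tracking parameters above might appear in NQSConfig.INI as in the following sketch. The section header and object names are illustrative assumptions; use the fully qualified names from your own repository's Physical layer and confirm the section name against your own file.

```ini
# Illustrative usage tracking fragment for NQSConfig.INI.
# Section header and object names are placeholders; verify against
# your own NQSConfig.INI and repository Physical layer.
[USAGE_TRACKING]
PHYSICAL_TABLE_NAME = "OracleBI Usage"."S_NQ_SCHED"."S_NQ_ACCT";
CONNECTION_POOL = "OracleBI Usage"."Connection Pool";
BUFFER_SIZE = 10 MB;
BUFFER_TIME_LIMIT_SECONDS = 5;
NUM_INSERT_THREADS = 5;
MAX_INSERTS_PER_TRANSACTION = 1;
```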
SUMMARY_STATISTICS_LOGGING

This parameter is reserved for a future release.
SUMMARY_ADVISOR_TABLE_NAME

This parameter is reserved for a future release.
Query Optimization Flags Section Parameters

There is one parameter in the Query Optimization Flags section. It is a special parameter to override the behavior of the Oracle BI Server in certain situations.
STRONG_DATETIME_TYPE_CHECKING

Use this parameter to relax strong type checking to prevent some date/time data type incompatibilities in queries from being rejected. For example, a query of the form "date/time op string-literal" technically contains a date/time data type incompatibility and would normally be rejected by the Oracle BI Server.

Valid values are ON and OFF. The default value is ON, which means that strong type checking is enabled and queries containing date/time data type incompatibilities are rejected. This is the recommended setting.

To relax the strong type checking, set the value to OFF. Note that invalid queries or queries with severe date/time incompatibilities are still rejected. Note also that the query could still fail, for example, if the relational database implements similar strong type checking.

Example: STRONG_DATETIME_TYPE_CHECKING = ON;

MDX Member Name Cache Section Parameters

The parameters in this section are for a cache subsystem that maps between a unique name and the captions of members of all SAP/BW cubes in the repository.
ENABLE

This parameter indicates whether the feature is enabled.

The default value is NO because this only applies to SAP/BW cubes.
DATA_STORAGE_PATH

Specifies the path to the location where the cache is persisted. Only a single storage location is supported.

The number at the end of the entry indicates the storage capacity. When the feature is enabled, the string <full directory path> must be replaced with a valid path.

Example: DATA_STORAGE_PATH = "C:\OracleBI\server\Data\Temp\Cache" 500 MB;
MAX_SIZE_PER_USER

The maximum disk space that is allowed for each user for cache entries.

Example: MAX_SIZE_PER_USER = 100 MB;
MAX_MEMBER_PER_LEVEL

The maximum number of members in a level that can be persisted to disk.

Example: MAX_MEMBER_PER_LEVEL = 1000;
MAX_CACHE_SIZE

The maximum size of each individual cache entry.

Example: MAX_CACHE_SIZE = 100 MB;

Aggregate Persistence Section Parameters

Oracle Business Intelligence provides an aggregate persistence feature that automates the creation and loading of the aggregate tables and their corresponding Oracle Business Intelligence metadata mappings. The parameters in this section relate to configuring and using the aggregate persistence feature.
AGGREGATE_PREFIX

Specifies the name prefix for system-generated aggregate tables. The prefix must be between 1 and 8 characters long and should not contain special characters ('_' is allowed).

Example: AGGREGATE_PREFIX = "SA_";
AGGREGATE_THREAD_POOL_SIZE

Specifies the number of threads to be started for aggregate persistence. Within each phase, relational loads are executed in separate threads to improve the load performance. The default value is 5.

Example: AGGREGATE_THREAD_POOL_SIZE = 5;
AGGREGATE_AW_NAME

Specifies the name of the Analytic Workspace object that is created in the target Oracle Database. The aggregate AW cubes and dimensions are created under this container.

Example: AGGREGATE_AW_NAME = "OBI_AW";
PREAGGREGATE_AW_CUBE

Specifies whether the system-generated AW cube for aggregate persistence must be fully solved. The default value is YES. Note that a YES value significantly increases storage space usage.

Example: PREAGGREGATE_AW_CUBE = YES;
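
As a sketch, the aggregate persistence parameters above might be grouped in NQSConfig.INI as follows; the values repeat the examples quoted above, and the section header is an assumption to verify against your own file.

```ini
# Illustrative aggregate persistence fragment; confirm the section
# name and values against your own NQSConfig.INI.
[AGGREGATE_PERSISTENCE]
AGGREGATE_PREFIX = "SA_";
AGGREGATE_THREAD_POOL_SIZE = 5;
AGGREGATE_AW_NAME = "OBI_AW";
PREAGGREGATE_AW_CUBE = YES;
```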
JavaHost Section Parameters

There is only one parameter in this section. It provides information about the computers where the Java Host process is running.
JAVAHOST_HOSTNAME_OR_IP_ADDRESSES

This parameter provides information about JavaHost connectivity. The default port value is 9810.

Syntax: JAVAHOST_HOSTNAME_OR_IP_ADDRESSES = "host_name1:port1", "host_name2:port2";

Example: JAVAHOST_HOSTNAME_OR_IP_ADDRESSES = "MYHOST:9810";
Datamart Automation Section Parameters

There is only one parameter in this section. It provides information about Essbase Studio.
ESSBASE_STUDIO_URL

This parameter is reserved for a future release.

Monday, February 24, 2014

GENERIC-HOST-PROCESS-FOR-WIN32-SERVICES-ERROR

GENERIC-HOST-PROCESS-FOR-WIN32-SERVICES-ERROR:-

Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.

C:\Documents and Settings\XYZ>cd\

C:\>netsh
netsh>winsock
netsh winsock>dir
The following command was not found: dir.
netsh winsock>reset

Sucessfully reset the Winsock Catalog.
You must restart the machine in order to complete the reset.

Informatica PowerCenter Repository tables

Informatica PowerCenter Repository tables
OPB_SUBJECT - PowerCenter folders table
This table stores the name of each PowerCenter repository folder.
Usage: Join any of the repository tables that have SUBJECT_ID as a column with SUBJ_ID in this table to get the folder name.
OPB_MAPPING - Mappings table
This table stores the name and ID of each mapping and its corresponding folder.
Usage: Join any of the repository tables that have MAPPING_ID as a column with MAPPING_ID in this table to get the mapping name.
OPB_TASK - Tasks table (sessions, workflows, etc.)
This table stores the name and ID of each task (session, workflow) and its corresponding folder.
Usage: Join any of the repository tables that have TASK_ID as a column with TASK_ID/SESSION_ID in this table to get the task name. Note that both sessions and workflows are stored as tasks in the repository: TASK_TYPE for a session is 68 and for a workflow is 71.
OPB_SESSION - Session & mapping linkage table
This table stores the linkage between each session and its corresponding mapping. As noted above, you can use the SESSION_ID in this table to join with TASK_ID of the OPB_TASK table.
OPB_TASK_ATTR - Task attributes table
This table stores the attribute values (such as the session log name) for tasks.
Usage: Join the ATTR_ID of this table with the ATTR_ID of the OPB_ATTR table to find what each attribute means.
OPB_WIDGET - Transformations table
This table stores the names and IDs of all the transformations along with their folder details.
Usage: Join WIDGET_ID from this table with the WIDGET_ID of the other tables to get the transformation name and folder details. Use this table in conjunction with OPB_WIDGET_ATTR or OPB_WIDGET_EXPR to learn more about each transformation.
OPB_WIDGET_FIELD - Transformation ports table
This table stores the names and IDs of all the transformation fields for each transformation.
Usage: Match the FIELD_ID from this table against the FIELD_ID of tables such as OPB_WIDGET_DEP to get the corresponding information.
OPB_WIDGET_ATTR - Transformation properties table
This table stores the property details for each transformation.
Usage: Join the ATTR_ID of this table with the ATTR_ID of the OPB_ATTR table to find what each attribute means.
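
As a sketch of how these tables join together, the query below lists sessions with their folder names. The table and column names follow the descriptions above; treat them as assumptions and verify against your PowerCenter repository version before relying on the query.

```sql
-- Hypothetical illustration: list sessions per folder.
-- Joins OPB_TASK.SUBJECT_ID to OPB_SUBJECT.SUBJ_ID as described above.
select s.subj_name  as folder_name,
       t.task_name  as session_name
from   opb_task t,
       opb_subject s
where  t.subject_id = s.subj_id
and    t.task_type  = 68        -- 68 = session, 71 = workflow
order  by s.subj_name, t.task_name;
```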

dbms_profiler package



Using dbms_profiler
The dbms_profiler package is a built-in set of procedures to capture performance information from PL/SQL.   The dbms_profiler package has these procedures:
·         dbms_profiler.start_profiler
·         dbms_profiler.flush_data
·         dbms_profiler.stop_profiler
The basic idea behind profiling with dbms_profiler is for developers to understand where their code is spending the most time, so they can detect and optimize the hot spots. The profiling utility allows Oracle to collect data in memory structures and then dump it into tables as application code is executed. dbms_profiler is to PL/SQL what tkprof and Explain Plan are to SQL.
Once you have run the profiler, Oracle will place the results inside the dbms_profiler tables. 
The dbms_profiler procedures are not a part of the base installation of Oracle.  Two tables need to be installed along with the Oracle supplied PL/SQL package.  In the $ORACLE_HOME/rdbms/admin directory, two files exist that create the environment needed for the profiler to execute. 
·     profload.sql - Creates the package header and package body for DBMS_PROFILER. This script must be executed as the SYS user.
·     proftab.sql - Creates three tables and a sequence. Run it in each schema that will collect profiler data.
$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus / as sysdba

/*
 Run profload.sql which will install the DBMS_PROFILER package.
*/

SQL> Start C:\oraclexe\app\oracle\product\10.2.0\server\RDBMS\ADMIN\profload.sql


Package created.


Grant succeeded.


Synonym created.


Library created.


Package body created.

Testing for correct installation
SYS.DBMS_PROFILER successfully loaded.

PL/SQL procedure successfully completed.

/*
 Now create the user for a test of DBMS_PROFILER.
*/

SQL> create user info identified by test;

User created.

SQL> grant connect , resource to info;

Grant succeeded.

SQL> conn info/test
Connected.

/*
 Once connected run proftab.sql which will create special tables where
 profiler puts its results.
*/

SQL> Start C:\oraclexe\app\oracle\product\10.2.0\server\RDBMS\ADMIN\proftab.sql

drop table plsql_profiler_data cascade constraints
           *
ERROR at line 1:
ORA-00942: table or view does not exist


drop table plsql_profiler_units cascade constraints
           *
ERROR at line 1:
ORA-00942: table or view does not exist


drop table plsql_profiler_runs cascade constraints
           *
ERROR at line 1:
ORA-00942: table or view does not exist


drop sequence plsql_profiler_runnumber
              *
ERROR at line 1:
ORA-02289: sequence does not exist



Table created.


Comment created.


Table created.


Comment created.


Table created.


Comment created.


Sequence created.
Setup is now complete. Create the table that we will use in the PL/SQL code to be investigated through the DBMS_PROFILER package (named package_test, since the procedures below insert into it).
SQL> create table package_test (col1 varchar2(30), col2 varchar2(30));

Table created.
Create two different procedures: one using literals in its queries and the other using bind variables.
SQL>
create or replace procedure literals
is
  vNumber number;
begin
  for i in 1..100000 loop
    vNumber := dbms_random.random;
    execute immediate
      'insert into package_test values ('||vNumber||','||vNumber||')';
  end loop;
end;
/

Procedure created.

SQL>
create or replace procedure binds
is
  vNumber number;
begin
  for i in 1..100000 loop
    vNumber := dbms_random.random;
    insert into package_test values (vNumber, vNumber);
  end loop;
end;
/

Procedure created.
dbms_profiler.start_profiler
 Starts the profiler before the execution of the PL/SQL program.

dbms_profiler.stop_profiler
 Stops the profiler once the program finishes executing.

First run the procedure with the queries that contain literals.
SQL> execute dbms_profiler.start_profiler ('literals');

PL/SQL procedure successfully completed.

SQL> exec literals;

PL/SQL procedure successfully completed.

SQL> execute dbms_profiler.stop_profiler;

PL/SQL procedure successfully completed.
The literals run is finished; now run the program that contains queries with bind variables.
SQL> execute dbms_profiler.start_profiler ('binds');

PL/SQL procedure successfully completed.

SQL> exec binds;

PL/SQL procedure successfully completed.

SQL> execute dbms_profiler.stop_profiler;

PL/SQL procedure successfully completed.
Now that the binds run is complete, we can see the results of these runs as captured by DBMS_PROFILER.

select runid,
       run_owner,
       run_date,
       round(run_total_time/1000000000, 2) as run_total_time,
       run_comment
from   plsql_profiler_runs;
 
 
select s.text,
       p.total_occur,
       round(p.total_time/1000000000, 2) total_time,
       round(p.min_time/1000000000, 2) min_time,
       round(p.max_time/1000000000, 2) max_time
from   plsql_profiler_data p, user_source s, plsql_profiler_runs r
where  p.line# = s.line
and    p.runid = r.runid
and    r.run_comment = 'literals'
and    s.name = 'LITERALS';

select s.text,
       p.total_occur,
       round(p.total_time/1000000000, 2) total_time,
       round(p.min_time/1000000000, 2) min_time,
       round(p.max_time/1000000000, 2) max_time
from   plsql_profiler_data p, user_source s, plsql_profiler_runs r
where  p.line# = s.line
and    p.runid = r.runid
and    r.run_comment = 'binds'
and    s.name = 'BINDS';

Oracle - Starting a Profiling Session

The profiler does not begin capturing performance information until the call to start_profiler is executed.

SQL> exec dbms_profiler.start_profiler ('Test of raise procedure by Scott');

Flushing Data during a Profiling Session 

The flush command enables the developer to dump statistics during program execution without stopping the profiling utility. The only other time Oracle saves data to the underlying tables is when the profiling session is stopped, as shown below:

SQL> exec dbms_profiler.flush_data();

PL/SQL procedure successfully completed.


Stopping a Profiling Session 

Stopping a profiler execution using the Oracle dbms_profiler package is done after an adequate period of gathering performance benchmarks, as determined by the developer. Once the developer stops the profiler, all the remaining (unflushed) data is loaded into the profiler tables.

SQL> exec dbms_profiler.stop_profiler();

PL/SQL procedure successfully completed.


Oracle dbms_profiler package also provides procedures that suspend and resume profiling (pause_profiler(), resume_profiler()).
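
A minimal sketch of using these, in the same style as the calls above:

```sql
-- Suspend data collection without ending the run, then resume it.
SQL> exec dbms_profiler.pause_profiler();
SQL> exec dbms_profiler.resume_profiler();
```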
Analyzing dbms_profiler data
The plsql_profiler_runs table contains information related to a profiling session. Details such as when the run was started, who started it, and how long the run lasted are contained in this table. This table has the following important columns:
·     runid - The unique run identifier given to each profiler execution.
·     related_run - The runid of a related run that can be set by the programmer.
·     run_owner - The user who started the run.
·     run_date - Timestamp of the date of the run.
·     run_comment - User-provided text about the run. This is used mainly for documentation, since run_id is hard to remember.
·     run_total_time - Total elapsed time for this run.
The plsql_profiler_units table defines each PL/SQL component (unit) that was executed during a profiler run. Benchmarks for each of the units are stored in this table in the following columns:
·     runid - References plsql_profiler_runs(runid).
·     unit_number - Internally generated library unit number.
·     unit_type - Library unit type (PACKAGE, PROCEDURE, etc.).
·     unit_owner - Library unit owner name (the owner of the object).
·     unit_name - Library unit name (the name of the object as defined in the user_objects view).
·     unit_timestamp - Time when the unit was created. The "unit" is the procedural object (procedure, function, package). This column holds the same data as the created column in the user_objects view.
·     total_time - Total time used by this unit for the given run.
The plsql_profiler_data table is where the real performance benchmarks are stored from executing dbms_profiler. This table contains the execution statistics for each line of code contained in a PL/SQL unit. It can be joined to the user_source view to extract the actual line of code for each benchmark. The primary key includes runid, unit_number, and line#.
The plsql_profiler_data table has the following important columns, as indicated by the results of the following query:
select runid, unit_number, line#, total_occur, total_time,   
       min_time, max_time
from plsql_profiler_data;

The profiler utility populates three tables with information: plsql_profiler_runs, plsql_profiler_units, and plsql_profiler_data. Each "run" is initiated by a user and contains zero or more "units". Each unit contains "data" about its execution, the guts of the performance data benchmarks.

The performance information for a line in a unit needs to be tied back to the line source in user_source. Once that join is made, the developer will have all of the information needed to optimize, enhance, and tune their application code, as well as the SQL.

Using the dbms_profiler Scripts

To extract high-level data, including the length of a particular run, the script (profiler_runs.sql) below can be executed:
column runid format 990
column unit_number format 990
column type format a15
column object_name format a20
 
select a.runid,
       a.unit_number,
       decode(a.unit_name, '', '<anonymous>',
              substr(a.unit_name, 1, 20)) as object_name,
       a.unit_type as type,
       to_char(a.total_time/1000000000, '99999.99') as sec,
       to_char(100*a.total_time/b.run_total_time, '999.9') as pct
from   plsql_profiler_units a, plsql_profiler_runs b
where  a.runid = b.runid
order  by a.runid asc;
 
 
RUNID UNIT_NUMBER OBJECT_NAME          TYPE            SEC       PCT
----- ----------- -------------------- --------------- --------- ------
    1           1 <anonymous>                                .00     .0
    1           2 <anonymous>                               1.01     .0
    1           3 BMC$PKKPKG           PACKAGE BODY      6921.55   18.2
    1           4 <anonymous>                                .02     .0
    2           1 <anonymous>                                .00     .0
    2           2 <anonymous>                                .01     .0
 
Note that anonymous PL/SQL blocks are also included in the profiler tables. Anonymous blocks are less useful from a tuning perspective since they cannot be tied back to a source object in user_source. Anonymous PL/SQL blocks are simply runtime source objects and do not have a corresponding dictionary object (package, procedure, function). For this reason, anonymous blocks should be eliminated from most reports.
From the data displayed above, the next step is to focus on the lines within the package body, testproc, that are taking the longest. The script (profiler_top10_lines.sql) below displays the line numbers and performance benchmarks of the 10 worst performing lines of code.
select *
from (
  select line#, total_occur,
         decode(total_occur, null, 0, 0, 0, total_time/total_occur/1000) as avg,
         decode(total_time, null, 0, total_time/1000) as total_time,
         decode(min_time, null, 0, min_time/1000) as min,
         decode(max_time, null, 0, max_time/1000) as max
  from   plsql_profiler_data
  where  runid = 1
  and    unit_number = 3          -- testproc
  order  by total_time desc )
where rownum < 11;                -- only show Top 10
 
 
     LINE# TOTAL_OCCUR        AVG TOTAL_TIME        MIN        MAX
---------- ----------- ---------- ---------- ---------- ----------
       156           1              5008.457   5008.457   5008.457
        27           1               721.879    721.879    721.879
      2113           1               282.717    282.717    282.717
        89           1               138.565    138.565    138.565
      2002           1               112.863    112.863    112.863
      1233           1                94.984     94.984     94.984
        61           1                94.984     94.984     94.984
       866           1                94.984     94.984     94.984
       481           1                92.749     92.749     92.749
       990           1                90.514     90.514     90.514
 
10 rows selected.

Taking it one step further, the query below (profiler_line_source.sql) will extract the actual source code for the top 10 worst performing lines. 
select *
from (
  select a.line#,
         decode(a.total_occur, null, 0, 0, 0,
                a.total_time/a.total_occur/1000) as avg,
         c.text as source
  from   plsql_profiler_data a, plsql_profiler_units b, user_source c
  where  a.runid       = 1
  and    a.unit_number = 3
  and    a.runid       = b.runid
  and    a.unit_number = b.unit_number
  and    b.unit_name   = c.name
  and    a.line#       = c.line
  order  by a.total_time desc )
where rownum < 11;
 
 
 
 
     LINE#        AVG SOURCE
---------- ---------- --------------------
       156   5008.457   select sum(bytes) into reusable_var from dba_free_space;
        27    721.879   execute immediate dml_str USING  current_time
      2113    282.717   select OBJ#, TYPE# from SYS.OBJ$;
        89    138.565   OBJ_TYPES(BOBJ(I)) := BTYP(I);
      2002    112.863   select count(*) into reusable_var from dba_objects
      1233     94.984   delete from pkk_daily_activity
        61     94.984   update_stats_table(33, reusable_var, null);
       866     94.984   latest_executions := reusable_var - total_executions;
       481     92.749   time_number := hours + round(minutes * 100/60/100,2);
       990     90.514   update_stats_table(45, LOBS, null); 
 
10 rows selected.
Notice from the output above that most of the information needed to diagnose and fix PL/SQL performance issues is provided. For lines containing SQL statements, the tuner can optimize the SQL, perhaps by adding optimizer hints, eliminating full table scans, and so on. Consult Chapter 5 for more details on using the tkprof utility to diagnose SQL issues.
Other useful scripts that are hidden within the Oracle directory structure ($ORACLE_HOME/PLSQL/DEMO) include a few gems that help report and analyze profiler information.  
·     profdemo.sql -A demo script for collecting PL/SQL profiler data.
·     profsum.sql - A collection of useful SQL scripts that are executed against profiler tables. 
·     profrep.sql – Creates views and a package (unwrapped) that populates the views based on the three underlying profiler tables. 
Best Practices for Using dbms_profiler
·     Wrap only for production - Wrapping code is desirable for production environments but not for profiling. It is much easier to read the unencrypted form of the text in reports than to connect line numbers to source versions. Profile your code unwrapped in a test environment, then wrap it and move it to production.
·     Eliminate system packages most of the time - Knowing the performance data for internal Oracle processing does not buy you much, since you cannot change anything. However, knowing that a performance problem lies within the system packages will save you time otherwise spent tuning your own code when the problem is elsewhere.
·     When analyzing lines of code, it is best to concentrate on the following:
·     Lines of code that are frequently executed - For example, a loop that executes 5000 times is a great candidate for tuning.  Guru Oracle tuners typically look for that “low hanging fruit” in which one line or a group of lines of code are executed much more than others.  The benefits of tuning one line of code that is executed often far outweigh tuning those lines that may cost more yet are executed infrequently in comparison.
·     Lines of code with a high value for average time executed – The minimum and maximum values of execution time are interesting although not as useful as the average execution time.  Min and max only tell us how much the execution time varies depending on database activity.  Line by line, a PL/SQL developer should focus on those lines that cost the most on an average execution basis.  dbms_profiler does not provide the average, but it does provide enough data to allow it to be computed (Total Execution Time / # Times Executed).
·     Lines of code that contain SQL syntax - The main resource consumers are those lines that execute SQL.  Once the data is sorted by average execution time, the statements that are the worst usually contain SQL.  Optimize and tune the SQL through utilities, such as Explain Plan, tkprof, and third party software.
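
As the bullets above note, dbms_profiler does not store the per-line average directly, but it can be computed from plsql_profiler_data. A hedged sketch, assuming times are stored in nanoseconds (consistent with the divisions by 1000000000 in the earlier scripts):

```sql
-- Average time per execution for each line, in milliseconds.
-- greatest(total_occur, 1) guards against division by zero.
select line#,
       round(total_time / greatest(total_occur, 1) / 1000000, 3) as avg_ms
from   plsql_profiler_data
where  runid = 1               -- substitute the runid of interest
order  by avg_ms desc;
```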

