Improving session performance for a flat file source:-
How can we do performance tuning in Informatica?
- What is the row size for each source/target?
- Have you remembered to tick the 'sorted input' option in joiner3?
- Do you NEED to sort the data before joining?
- Is the master the smaller of the two incoming streams in all cases (as mentioned above)?
- Are the join columns for each joiner set up in the correct order (the one with the highest number of distinct values first)?
1.Use the smaller of the two tables as the master table, as this reduces the cache size.
2.Increase the memory for the session:
Maximum Memory Allowed for Auto Memory Attributes
Maximum Percentage of Total Memory Attributes
3.Make sure the DTM buffer size is sufficient.
4.If you load a large number of records into the target, increase the commit interval.
5.If possible, also add a partition when you read from the source.
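The master/detail advice above mirrors a hash join: the master rows are cached, and the detail rows stream past that cache, so the cache size tracks the master input. A minimal Python sketch of the idea (table and column names are hypothetical):

```python
# Hash-join sketch: cache the smaller (master) input, stream the larger (detail).
def hash_join(master_rows, detail_rows, key):
    # Build phase: the cache size is proportional to the master input,
    # which is why the smaller table should be the master.
    cache = {}
    for row in master_rows:
        cache.setdefault(row[key], []).append(row)
    # Probe phase: stream every detail row past the cache.
    joined = []
    for row in detail_rows:
        for m in cache.get(row[key], []):
            joined.append({**m, **row})
    return joined

countries = [{"country_id": 1, "name": "US"}, {"country_id": 2, "name": "DE"}]
orders = [{"country_id": 1, "order_id": 10}, {"country_id": 1, "order_id": 11},
          {"country_id": 3, "order_id": 12}]
result = hash_join(countries, orders, "country_id")
```

Caching the two-row `countries` table rather than the order stream keeps the cache as small as possible, which is the point of choosing the smaller input as the master.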
Optimizing the Bottlenecks:-
1) If the source is a flat file, ensure that the flat file is local to the Informatica server. If the source is a relational table, try not to use synonyms or aliases.
2) If the source is a flat file, reduce the number of bytes Informatica reads per line (1024 bytes by default) by decreasing the Line Sequential Buffer Length setting in the session properties, provided the rows fit in the smaller buffer.
3) If possible, add a conditional query in the source qualifier so that records are filtered out as early as possible in the process.
4) In the source qualifier, if the query has an ORDER BY or GROUP BY clause, create an index on the source table and order by the indexed field of the source table.
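The benefit of filtering in the source qualifier can be pictured outside Informatica: when the filter runs before the downstream transformations, discarded rows are never processed at all. A hypothetical Python illustration:

```python
# Early-filter sketch: rows rejected at the "source" never reach the transform.
rows = [{"id": i, "active": i % 2 == 0} for i in range(10)]

processed = 0  # counts how many rows the downstream transform actually sees

def transform(row):
    global processed
    processed += 1
    return {**row, "flag": "Y"}

# Filter first (as a WHERE clause in the source qualifier would),
# then transform: only the surviving rows incur transformation cost.
out = [transform(r) for r in rows if r["active"]]
```

Here only 5 of the 10 rows are ever transformed; pushing the same condition after the transform would have processed all 10.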
PERFORMANCE TUNING OF TARGETS :-
------------------------------------------------------
1) If the target is a flat file, ensure that the flat file is local to the Informatica server.
2) If the target is a relational table, try not to use synonyms or aliases.
3) Use bulk loading whenever possible.
4) Increase the commit interval.
5) Drop the table's constraints and indexes before loading.
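The commit-interval tip amounts to committing once per batch of rows rather than once per row, since each commit forces expensive work on the target database. A sketch using Python's sqlite3 as a stand-in for the target (the table and interval are hypothetical):

```python
import sqlite3

# Commit-interval sketch: commit once per batch instead of once per row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER)")

rows = [(i,) for i in range(10_000)]
COMMIT_INTERVAL = 2_000  # analogous to the session's commit interval

for start in range(0, len(rows), COMMIT_INTERVAL):
    batch = rows[start:start + COMMIT_INTERVAL]
    conn.executemany("INSERT INTO target VALUES (?)", batch)
    conn.commit()  # 5 commits total for 10,000 rows, not 10,000 commits

count = conn.execute("SELECT COUNT(*) FROM target").fetchone()[0]
```

A larger interval means fewer commits and faster loads, at the cost of more rows to reprocess if the session fails mid-batch.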
PERFORMANCE TUNING OF MAPPINGS:-
-------------------------------------------------------
A mapping channels the flow of data from source to target through all the transformations in between; it is the skeleton of the Informatica loading process.
Avoid executing major SQL queries from mapplets or mappings.
When SQL overrides are used, make sure the queries are optimized.
Reduce the number of transformations in the mapping. Active transformations like Rank, Joiner, Filter, and Aggregator should be used as little as possible.
Remove all unnecessary links between the transformations in the mapping.
If a single mapping contains many targets, dividing it into separate mappings can improve performance.
If we need to use a single source more than once in a mapping, keep only one source and source qualifier in the mapping, then create different data flows as required into different targets or the same target.
If a session joins many source tables in one source qualifier, optimizing the query will improve performance.
The SQL query that Informatica generates may contain an ORDER BY clause. Remove the ORDER BY clause if it is not needed, or at least reduce the number of columns in the list. For better performance it is best to order by the indexed field of the table.
Combine the mappings that use the same set of source data.
In a mapping, fields carrying the same information should be given the same type and length throughout; otherwise time is spent on field conversions.
Instead of doing complex calculations in the query, use an Expression transformation and do the calculation in the mapping.
If data passes through multiple staging areas, removing a staging area will increase performance.
Stored procedures reduce performance; keep the stored procedures used in mappings simple.
Avoid unnecessary data type conversions, since they impact performance.
Transformation errors degrade performance. Try running the mapping after removing all transformations; if it takes significantly less time than with the transformations, the transformations need to be fine-tuned.
Keep database interactions to a minimum.
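The single-source-qualifier tip above can be pictured as one pass over the source feeding several targets, instead of scanning the source once per target. A hypothetical Python sketch (the split condition and target names are illustrative):

```python
# Single-pass fan-out sketch: read the source once, route rows to two targets.
source = [{"id": i, "amount": i * 10} for i in range(6)]

big_orders, small_orders = [], []  # stand-ins for two target tables
for row in source:                 # exactly one pass over the source
    if row["amount"] >= 30:
        big_orders.append(row)
    else:
        small_orders.append(row)
```

One scan feeds both targets; duplicating the source and source qualifier would double the read work for the same result.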
PERFORMANCE TUNING OF SESSIONS:-
----------------------------------------------------
1) A session specifies where the data is taken from, where the transformations are done, and where the data is loaded. It has various properties that help us schedule and run the job the way we want.
2) Partition the session: this creates multiple connections to the source and target and loads data in parallel pipelines, each independent of the others. The performance of the session will not improve if the number of records is small, nor if the session mainly does updates and deletes, so session partitioning should be used only when the volume of data is huge and the job mainly inserts data.
3) Run independent sessions in parallel rather than serially to save time.
4) Drop constraints and indexes before running the session and rebuild them after the session completes. Dropping can be done in a pre-session script and rebuilding in a post-session script. If there is too much data for this to be practical, stage all the data, pre-create the index, use a transportable tablespace, and then load into the database.
5) Use bulk loading, external loading, etc. Bulk loading can be used only if the table does not have an index.
6) A session has options to treat rows as 'Data Driven', 'Insert', 'Update', or 'Delete'. If update strategies are used, it has to be kept as 'Data Driven'; but when the session only inserts rows into the target table, set it to 'Insert' to improve performance.
7) Increase the database commit interval (the point at which the Informatica server commits data to the target table; for example, it can be set to commit every 50,000 records).
8) Avoid built-in functions where possible. For example, for concatenation the operator '||' is faster than the function CONCAT(), so use operators instead of functions where possible. Functions like IS_SPACES(), IS_NUMBER(), IIF(), and DECODE() reduce performance to a large extent, in this order; preference should be in the opposite order.
9) String functions like SUBSTR, LTRIM, and RTRIM reduce performance. In the sources, use delimited flat files where possible, or use the varchar data type.
10) Manipulating high-precision data types slows down the Informatica server, so disable 'high precision' if it is not needed.
11) Localize all source and target tables, stored procedures, views, sequences, etc. Try not to connect across synonyms; synonyms and aliases slow down performance.
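Session partitioning splits the source rows into independent slices that flow through parallel pipelines. Informatica does this inside its engine; the following hypothetical Python sketch only illustrates the shape of the idea:

```python
from concurrent.futures import ThreadPoolExecutor

# Partitioned-session sketch: split the rows into independent partitions
# and run one "pipeline" per partition in parallel.
rows = list(range(100))
N_PARTITIONS = 4
# Round-robin split: row i goes to partition i % N_PARTITIONS.
partitions = [rows[i::N_PARTITIONS] for i in range(N_PARTITIONS)]

def load_partition(partition):
    # Stand-in for one pipeline: transform and "load" its own slice of rows.
    return [r * 2 for r in partition]

with ThreadPoolExecutor(max_workers=N_PARTITIONS) as pool:
    loaded = [row for part in pool.map(load_partition, partitions)
              for row in part]
```

Because the partitions share nothing, they can run concurrently; as the text notes, this only pays off when the data volume is large enough to cover the overhead.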
DATABASE OPTIMISATION:-
---------------------------------------
1)To gain the best Informatica performance, the database tables, stored procedures and queries used in Informatica should be tuned well.
2)If the source and target are flat files, then they should be present in the system in which the Informatica server is present.
3) Increase the network packet size. The performance of the Informatica server is related to its network connections: data generally moves across a network at less than 1 MB per second, whereas a local disk moves data five to twenty times faster.
4) Network connections therefore often affect session performance, so avoid them where possible.
5) Optimize the target databases.
PERFORMANCE TUNING IN INFORMATICA:-
-------------------------------------------------------
The goal of performance tuning is to optimize session performance so that sessions run during the available load window for the Informatica Server. Increase session performance as follows:
1) The performance of the Informatica Server is related to network connections. Data generally moves across a network at less than 1 MB per second, whereas a local disk moves data five to twenty times faster. Network connections therefore often affect session performance, so avoid them where possible.
2) Flat files: if your flat files are stored on a machine other than the Informatica server, move them to the machine on which the Informatica server runs.
3) Relational data sources: minimize the connections between sources, targets, and the Informatica server to improve session performance. Moving the target database onto the server machine may improve session performance.
4) Staging areas: if you use staging areas, you force the Informatica server to perform multiple passes over the data. Removing staging areas may improve session performance.
5) You can run multiple Informatica servers against the same repository. Distributing the session load across multiple Informatica servers may improve session performance.
6) Running the Informatica server in ASCII data movement mode improves session performance, because ASCII data movement mode stores a character value in one byte, whereas Unicode mode takes two bytes per character.
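The byte counts in point 6 are easy to verify: a single-byte codec stores one byte per character, while a two-byte Unicode encoding such as UTF-16 stores two. A quick Python check (the sample field name is arbitrary):

```python
# One byte per character in a single-byte codec versus two bytes per
# character in UTF-16 -- the sense in which "Unicode mode takes 2 bytes".
text = "CUSTOMER_NAME"  # 13 characters

ascii_bytes = len(text.encode("ascii"))      # 1 byte per character
utf16_bytes = len(text.encode("utf-16-le"))  # 2 bytes per character
```

For purely single-byte data, Unicode mode doubles the bytes moved per row, which is why ASCII mode is faster when the data allows it.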
7) If a session joins multiple source tables in one Source Qualifier, optimizing the query may improve performance. Also, single-table SELECT statements with an ORDER BY or GROUP BY clause may benefit from optimization such as adding indexes.
8) We can improve session performance by configuring the network packet size, which controls how much data crosses the network at one time. To do this, go to the Server Manager and configure the database connections.
9) If your target has key constraints and indexes, they slow the loading of data. To improve session performance in this case, drop the constraints and indexes before you run the session and rebuild them after the session completes.
10) Running sessions in parallel using concurrent batches also reduces the time taken to load the data, so concurrent batches may also increase session performance.
11) Partitioning the session improves session performance by creating multiple connections to sources and targets and loading data in parallel pipelines.
12) In some cases, if a session contains an Aggregator transformation, you can use incremental aggregation to improve session performance.
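Incremental aggregation (point 12) keeps the previous run's aggregate values and folds in only the newly arrived rows, instead of re-aggregating the full history every run. A hypothetical Python sketch of the idea (the keys and amounts are illustrative):

```python
# Incremental-aggregation sketch: carry running totals between runs and
# apply only the delta rows from the latest load.
def aggregate(totals, new_rows):
    for key, amount in new_rows:
        totals[key] = totals.get(key, 0) + amount
    return totals

totals = aggregate({}, [("east", 100), ("west", 50)])  # initial run
totals = aggregate(totals, [("east", 25)])             # incremental run: 1 row
```

The second run touches one row instead of three, which is where the performance gain comes from when history is large and deltas are small.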
13) Avoid transformation errors to improve session performance. If the session contains a Lookup transformation, you can improve session performance by enabling the lookup cache.
14) If your session contains a Filter transformation, place it as close to the sources as possible, or use a filter condition in the Source Qualifier.
15) Aggregator, Rank, and Joiner transformations often decrease session performance because they must group the data before processing it. To improve session performance in this case, use the sorted input option.
Improving Mapping Performance in Informatica-
Mapping optimization:
The best time in the development cycle to optimize mappings is after system testing. Focus on mapping-level optimization only after optimizing the target and source databases.
Use the session log to identify whether the source, target, or transformations are the performance bottleneck.
Identifying Target Bottlenecks:
The most common performance bottleneck occurs when the Informatica Server writes to a target database. You can identify target bottlenecks by configuring the session to write to a flat file target. If the session performance increases significantly when you write to a flat file, you have a target bottleneck.
Tasks to be performed to increase performance:
* Drop indexes and key constraints.
* Increase checkpoint intervals.
* Use bulk loading.
* Use external loading.
* Increase database network packet size.
* Optimize target databases.
Identifying Source Bottlenecks:
If the session reads from a relational source, you can use a Filter transformation, a read test mapping, or a database query to identify source bottlenecks:
* Filter Transformation - measure the time taken to process a given amount of data, then add an always-false Filter transformation in the mapping after each source qualifier so that no data is processed past the filter transformation. You have a source bottleneck if the new session runs in about the same time.
* Read Test Session - compare the time taken to process a given set of data using the session with that of a session based on a copy of the mapping in which all transformations after the source qualifier are removed and the source qualifiers are connected to file targets. You have a source bottleneck if the new session runs in about the same time.
* Extract the query from the session log and run it in a query tool. Measure the time taken to return the first row and the time to return all rows. If there is a significant difference between the two, you can use an optimizer hint to eliminate the source bottleneck.
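The always-false filter test can be mimicked outside Informatica: a filter that rejects every row makes the run time approximate the pure cost of reading the source, since nothing downstream does any work. A hypothetical Python sketch:

```python
import time

# Always-false-filter sketch: read every "source" row, let none past the
# filter, and time the run -- the elapsed time approximates read cost only.
def run_session(source_rows, predicate):
    start = time.perf_counter()
    passed = [r for r in source_rows if predicate(r)]
    return passed, time.perf_counter() - start

source = list(range(100_000))
passed, read_time = run_session(source, lambda r: False)  # always-false filter
```

If this read-only time is close to the full session's time, the source is the bottleneck; if it is much smaller, look downstream.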
Tasks to be performed to increase performance:
* Optimize the query.
* Use conditional filters.
* Increase database network packet size.
* Connect to Oracle databases using IPC protocol.
Identifying Mapping Bottlenecks:
If you determine that you do not have a source or target bottleneck, the mapping itself may be the bottleneck. Add an always-false Filter transformation before each target definition so that no data is loaded; if the session still runs in about the same time, you have a mapping bottleneck.
How to Increase Informatica Server Performance:
Many factors can affect session performance. Before doing tuning that is specific to Informatica:
1. Check the hard disks on related machines. (Slow disk access on source and target databases, source and target file systems, and the Informatica Server and repository machines can slow session performance.)
2. Improve network speed. (Slow network connections can slow session performance.)
3. Check the CPUs on related machines. (Make sure the Informatica Server and related machines run on high-performance CPUs.)
4. Configure physical memory for the Informatica Server to minimize disk I/O. (Configure the physical memory of the Informatica Server machine to minimize paging to disk.)
5. Optimize the database configuration.
6. Staging areas: if you use a staging area, you force the Informatica Server to perform multiple passes over your data. Where possible, remove staging areas to improve performance.
7. You can run multiple Informatica Servers on separate systems against the same repository. Distributing the session load across separate Informatica Server systems increases performance.
Informatica specific:
- Transformation tuning
- Using Caches
- Avoiding lookups by using DECODE for small, frequently used tables
- Applying filters at the earliest point in the data flow
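The DECODE idea above trades a database lookup for an in-expression mapping when the reference set is small and static. In Python terms (the status codes here are hypothetical, standing in for a DECODE expression):

```python
# DECODE-style sketch: for a small, static reference table, an inline
# mapping replaces a lookup against the database entirely.
STATUS_DECODE = {"A": "Active", "I": "Inactive", "P": "Pending"}

def decode_status(code, default="Unknown"):
    # Equivalent in spirit to DECODE(code, 'A', 'Active', ..., 'Unknown').
    return STATUS_DECODE.get(code, default)
```

This only makes sense for tables small enough to embed in the expression; large or volatile reference data still belongs in a cached Lookup transformation.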
Informatica PowerCenter Partitioning Option
Delivering High Performance for Processing Massive Data Volumes
The PowerCenter® Partitioning Option increases the performance of PowerCenter through parallel data processing, and it has been instrumental in establishing PowerCenter's industry performance leadership. This option provides a thread-based architecture and automatic data partitioning that optimizes parallel processing on multiprocessor and grid-based hardware environments.
Partitioning Option
Key Features
Data Smart Parallelism:
• Automatically aligns PowerCenter partitions with database table partitions to improve performance.
• Automatically guarantees data integrity by leveraging the parallel engine of PowerCenter, which dynamically realigns data partitions for set-oriented transformations.
Session Design Tools:
• Create user-defined partitioning schemes quickly and easily.
• Provide a graphical partitioning map for determining the best partitioning points.
• Gather statistics on configurable session options, such as error handling, recovery strategy, memory allocation, and logging, to maximize performance.
Integrated Monitoring Console:
• Gathers session statistics, such as throughput, rows/second, error details, and performance optimizations, to identify potential bottlenecks and recognize trends.
• Shows all session execution and dependency details.
Multiple Partition Schemes:
• Support parallelization through multiple mechanisms, including key range, hash algorithm-based, round robin, or file partitions.
• Maximize data throughput via concurrent processing of specified partitions along the data transformation pipeline.
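Each partition scheme named above can be sketched as a rule assigning a row to a partition number. Hypothetical Python illustrations of key range, hash, and round robin:

```python
# Partition-scheme sketches: each function maps a row to a partition index.

def key_range_partition(row_key, boundaries):
    # Key range: boundaries [100, 200] define partitions
    # (-inf, 100), [100, 200), and [200, +inf).
    for i, bound in enumerate(boundaries):
        if row_key < bound:
            return i
    return len(boundaries)

def hash_partition(row_key, n_partitions):
    # Hash: rows with the same key always land in the same partition.
    return hash(row_key) % n_partitions

def round_robin_partition(row_index, n_partitions):
    # Round robin: rows are dealt out evenly regardless of content.
    return row_index % n_partitions
```

Key range suits range-partitioned target tables, hash keeps related rows together for grouping, and round robin simply balances load.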
Partitioning Option Benefits:
Scale Cost-Effectively to Handle Large Data Volumes:
With the Partitioning Option, you can execute optimal parallel sessions by dividing data processing into subsets that are run in parallel and spread among the available CPUs in a multiprocessor system. When different processors share the computational load, large data volumes can be processed faster. When sourcing and targeting relational databases, the Partitioning Option enables PowerCenter to automatically align its partitions with database table partitions to improve performance. Unlike approaches that require manual data partitioning, data integrity is automatically guaranteed because the parallel engine of PowerCenter dynamically realigns data partitions for set-oriented transformations (e.g., aggregators or sorters).
Enhance Developer Productivity:
The Partitioning Option provides intuitive, GUI-based session design tools that reduce the time spent on initial and ongoing configuration and performance tuning tasks. You can easily create user-defined partitioning schemes, and a graphical partitioning map helps you determine the best points of partitioning. Configurable session options, such as error handling, recovery strategy, memory allocation, and logging, make it easier to gather the statistics used to maximize performance.
Optimize System Performance in Response to Changing Business Requirements:
The Partitioning Option lets you easily gather in-depth session statistics such as throughput, rows/second, error details, and performance optimizations. These statistics help you identify potential bottlenecks and recognize trends. An integrated monitoring console lets you view all session execution and dependency details. With the metadata-driven architecture of PowerCenter, data transformation logic is abstracted from the physical execution plan. This enables rapid performance tuning without compromising the logic and design of the original data mappings. You can continually and easily optimize system performance in the face of increasing data loads and changing business requirements.
Conclusion
The goal of performance tuning is to optimize session performance so that sessions run during the available load window for the Informatica Server.
Informatica is a leading provider of enterprise data integration software and services. With Informatica, organizations can gain greater business value by integrating all their information assets from across the enterprise. Thousands of companies worldwide rely on Informatica to reduce the cost and expedite the time to address data integration needs of any complexity and scale.