Monday, August 27, 2007
Terror strikes the twin city again
Terror struck the twin city (Hyderabad) again on August 25, 2007, three months after the terror attack on the Mecca Masjid. People were just returning to normalcy after that horrifying attack. Like every other time, the target was again innocent people. Why innocent people are targeted is perplexing; only the mindless people who executed the attack, and the Almighty who watches them, can answer that. Whatever the reasons may be, they will not achieve their objectives through such a barbaric act.
Allah's Apostle said, "Allah will not be merciful to those who are not merciful to mankind."
-- Narrated by Jarir bin 'Abdullah (Sahih Bukhari).
There were so many incidents in the Prophet's (PBUH) life in which he emphasized brotherhood among all people. Despite this, if anyone attacks Allah's creations, they will surely pay for their ruthless act on the Day of Judgment. Let us all join hands to fight against those barbarians and pray for the injured. Let us also pray for the departed souls.
Thursday, August 23, 2007
Data Pump Architecture & Its Salient Features
Oracle Data Pump is one of Oracle's greatest innovations in the Database 10g release. I was amazed by its simple approach to the complex job of moving data. In this post, let us look at the limitations of the traditional exp/imp utilities and at why Data Pump is so much more powerful (please do not think of Data Pump as an upgraded version of traditional exp/imp; it is a genuinely new and powerful feature).
Limitation of exp/imp
*********************
Traditional exp/imp is a client-side utility. During export, the server process extracts the data from the database, and the user process (on the client side) formats it and writes it into the dump file. During import, the user process builds the SQL statements to be executed in the database, and the server process executes them. If you look closely at the exp/imp architecture, the workload falls heavily on the user process, and the server keeps no information about the export/import activity. So if the session is lost or the instance crashes, the whole export/import has to be restarted from the beginning. For contrast, a typical client-side export command is shown below.
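As a small illustration (the file names here are only examples), a traditional full export is driven entirely from the client:
EXP SYSTEM/MANAGER FILE=apps_exp.dmp LOG=apps_exp.log FULL=Y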
Data Pump Architecture
**********************
Data Pump is a server-side utility, which means the whole job runs inside the server, unlike traditional exp/imp. When the user process launches an export/import job, the following components are created on the server side to manage it.
1) Data Pump Master process (DMnn)
Starts when the Data Pump job is launched. When multiple Data Pump jobs are running, each job has its own DMnn process. It controls the worker processes (DWnn).
2) Data Pump Worker processes (DWnn)
The master process starts n worker processes, where n is the degree of parallelism specified when the job is launched.
3) Control or Command queue
All of the work requests created by the master process, and the associated responses from the worker processes, pass through this queue. It operates on a deliver-exactly-once model: messages are enqueued by DMnn and dequeued by DWnn.
4) Status queue
The master process posts the job status and error details to this queue. It operates on a publish-and-subscribe model: any user process can subscribe to the queue to monitor the job's progress.
5) Master Table
This table contains detailed information about the current Data Pump operation: each object being exported or imported and its location in the dump file set, the parameters supplied by the calling program, worker status information, and output file information. It holds everything needed to restart a failed or stopped Data Pump job. Each Data Pump job has its own master table; it is created in the schema that launches the job, and its name is the same as the job name, so it can be queried like any other table while the job runs (see the sketch below).
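As a minimal sketch, assuming a hypothetical export job named FULL_EXPORT launched from the SYSTEM schema (so the master table is SYSTEM.FULL_EXPORT), you could peek at its contents with:
SELECT * FROM SYSTEM.FULL_EXPORT WHERE ROWNUM <= 10;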
Data Pump job working process
*****************************
- The user process establishes a session through the server process.
- The user process launches the Data Pump job: the master table is created, the master process is started, and the master table is updated with the job details.
- The master process then starts the worker processes and creates the control and status queues.
- The master process reads the job details from the master table and places the commands to be executed by the worker processes on the control queue.
- Each worker process picks a command from the control queue and executes it (extracting data and writing it into the dump file, or reading the dump file and executing SQL statements against the database). The response is sent back through the control queue, and the worker also updates the master table with its status.
- The master process updates the master table with the completed and remaining work, posts the completed job status and error details to the status queue, and writes the job history to the log file. The processes behind a running job can be observed from the data dictionary, as shown below.
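For example, while a job is running, the views DBA_DATAPUMP_JOBS and DBA_DATAPUMP_SESSIONS show the job state and the sessions attached to it (these queries are just a quick illustration, not part of the job itself):
SELECT OWNER_NAME, JOB_NAME, OPERATION, JOB_MODE, STATE, DEGREE FROM DBA_DATAPUMP_JOBS;
SELECT OWNER_NAME, JOB_NAME, SADDR FROM DBA_DATAPUMP_SESSIONS;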
Steps to launch the Data Pump jobs
********************************
- Create an Oracle directory object in the database. Data Pump writes (or, for import, reads) the dump file and the log file in this directory location. The directory object is owned by the SYS schema even when it is created by a user who only has the CREATE DIRECTORY privilege.
- Grant read and write access on that directory to the user running the Data Pump job.
- Launch expdp/impdp from the client side, for example as shown below.
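As a rough sketch (the directory path, file names and FULL=Y mode are only assumptions for illustration); the first two statements run in SQL*Plus as a privileged user, and the expdp command runs at the operating system prompt:
CREATE DIRECTORY exp_dump AS '/u01/app/oracle/dpump';
GRANT READ, WRITE ON DIRECTORY exp_dump TO SYSTEM;
EXPDP SYSTEM/MANAGER DIRECTORY=exp_dump DUMPFILE=apps_exp_dump%U.dmp LOGFILE=apps_exp_dump.log FULL=Y PARALLEL=5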
In this command we specified PARALLEL=5, which means the master process will start five worker processes (the %U wildcard in the dump file name lets Data Pump create a separate file for each of them as needed).
(For more expdp/impdp options, run the utility with HELP=Y.)
Data Pump Features
*******************
- Faster performance than traditional exp/imp
- Generate SQL files: with the SQLFILE parameter, impdp writes the DDL it would run into a script file instead of executing it
IMPDP SYSTEM/MANAGER DIRECTORY=exp_dump DUMPFILE=apps_exp_dump%U.dmp LOGFILE=apps_imp_dump.log SQLFILE=create_database_sql.sql
- Transfer data without creating a dump file: with the NETWORK_LINK parameter, data is pulled directly over a database link
IMPDP SYSTEM/MANAGER DIRECTORY=exp_dump NETWORK_LINK=db_link LOGFILE=apps_imp_dump.log
- Fine-grained object selection with the INCLUDE and EXCLUDE parameters
IMPDP SYSTEM/MANAGER DIRECTORY=exp_dump NETWORK_LINK=db_link LOGFILE=apps_imp_dump.log INCLUDE=TABLE:"IN ('EMP','DEPT')"
Only the tables 'EMP' and 'DEPT' will be imported.
IMPDP SYSTEM/MANAGER DIRECTORY=exp_dump NETWORK_LINK=db_link LOGFILE=apps_imp_dump.log EXCLUDE=TABLE:"= 'BONUS'"
All tables except 'BONUS' will be imported.
- Easy monitoring of Data Pump jobs: any session can attach to a running job and check its status
IMPDP SYSTEM/MANAGER ATTACH='JOB_NAME'
Import> STATUS
Other ways to monitor the job are querying the master table, querying the dynamic performance view V$SESSION_LONGOPS, and looking into the log file; for instance:
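As a small illustration (the job-name filter is only an assumption; for Data Pump jobs OPNAME normally carries the job name):
SELECT SID, OPNAME, SOFAR, TOTALWORK, ROUND(SOFAR/TOTALWORK*100, 2) PCT_DONE FROM V$SESSION_LONGOPS WHERE OPNAME LIKE 'SYS_EXPORT%' AND TOTALWORK > 0;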