Data pump import (IMPDP) is a modern, server-based, and highly scalable data loading utility. On typical multiprocessor servers with good disk I/O subsystems, loading hundreds of gigabytes to terabytes of data is both reliable and reasonably fast. And even though the dump files remain Oracle proprietary, they contain easily identifiable uses of XML. Uncompressed dump files are therefore semi-readable in a text editor and, as before, can be scanned with operating system commands such as strings on UNIX.
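As a quick illustration of that semi-readability, the sketch below fabricates a stand-in "dump" file (the file name and contents are invented for the demo, not taken from a real Data Pump file) and pulls the printable XML fragments out of the binary with strings:

```shell
# Fabricate a stand-in "dump" file: nonprintable bytes surrounding an XML
# fragment, mimicking the XML runs inside an uncompressed Data Pump file.
# The file name and contents here are illustrative only.
printf '\001\002<ROWSET><ROW>SCOTT</ROW></ROWSET>\003\004' > /tmp/fake.dmp

# strings extracts the printable runs; grep narrows them to the XML tags.
strings /tmp/fake.dmp | grep '<ROWSET>'
# prints <ROWSET><ROW>SCOTT</ROW></ROWSET>
```

Against a real (uncompressed) dump file, the same pipeline surfaces the embedded XML tags and other readable fragments without opening the file in an editor.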
Start by identifying the most frequently used data pump import (IMPDP) command-line parameters:
ATTACH: [SCHEMA.]JOB_NAME
    Name of an already existing and executing job to attach to. Requires the IMP_FULL_DATABASE privilege for jobs in other schemas.

CONTENT: ALL | METADATA_ONLY | DATA_ONLY
    Filters the import of dump file contents to the specified criteria.

DATA_OPTIONS: <null> | SKIP_CONSTRAINT_ERRORS
    Affects how non-deferred constraint violations are handled.

DIRECTORY: DATA_PUMP_DIR | DIRECTORY_NAME
    Name of a directory object pointing to a valid server directory.

DUMPFILE: [DIRECTORY_NAME:]FILE_NAME [, ...]
    Name (and optionally the directory) of the import dump file.

ESTIMATE: BLOCKS | STATISTICS
    For network imports, requests that the source estimate the data being sent.

EXCLUDE: OBJECT_TYPE[:NAME_FILTER_EXPRESSION] [, ...]
    Database object types, either as a whole or filtered by object name, to specifically exclude from the import.

FULL: N | Y
    Whether to perform a full database import. Requires the IMP_FULL_DATABASE privilege.

INCLUDE: OBJECT_TYPE[:NAME_FILTER_EXPRESSION] [, ...]
    Database object types, either as a whole or filtered by object name, to specifically include in the import.

JOB_NAME: SYS_IMPORT_<mode>_NN | JOB_NAME
    Name by which the import job can be referenced (e.g., by ATTACH).

LOGFILE: IMPORT.LOG | [DIRECTORY_NAME:]FILE_NAME
    Name (and optionally the directory) of the import log file.

NOLOGFILE: N | Y
    Whether or not to suppress creation of the import log file.

PARALLEL: 1 | INTEGER
    Maximum number of concurrent worker threads for the import.

PARFILE: [DIRECTORY_SPECIFICATION]FILE_NAME
    Name of the operating-system-specific parameter file.

PARTITION_OPTIONS: NONE | DEPARTITION | MERGE
    Specifies how to implement source partitioning on the target.

QUERY: [[SCHEMA.]TABLE_NAME:]FILTER_EXPRESSION
    Data filter condition applied to all tables, or to specific tables by schema and object name, during the import.

REMAP_DATA: [SCHEMA.]TABLE_NAME.COLUMN_NAME:[SCHEMA.]PKG.FUNCTION
    Function that generates a new value for a column during the import.

REMAP_SCHEMA: SOURCE_SCHEMA:TARGET_SCHEMA
    Permits creating objects in a different schema than the export.

REMAP_TABLE: [SCHEMA.]OLD_TABLENAME[.PARTITION]:NEW_TABLENAME
    Naming scheme for tables created by partition options.

REMAP_TABLESPACE: SOURCE_TABLESPACE:TARGET_TABLESPACE
    Permits creating objects in a different tablespace than the export (requires sufficient quota on the target tablespace).

SCHEMAS: SCHEMA [, ...]
    The schema or schemas to import. Requires the IMP_FULL_DATABASE privilege for other schemas.

SKIP_UNUSABLE_INDEXES: N | Y
    Whether or not to skip indexes that were marked unusable during the import.

SQLFILE: [DIRECTORY_OBJECT:]FILE_NAME
    Name (and optionally the directory) of the file where import records all of the DDL it would have executed.

STATUS: 0 | INTEGER
    Frequency, in seconds, at which the job displays client feedback.

TABLE_EXISTS_ACTION: SKIP | APPEND | TRUNCATE | REPLACE
    What import should do when encountering tables that already exist in the target.

TABLES: [SCHEMA.]TABLE_NAME[:PARTITION_NAME] [, ...]
    List of tables for a table-mode database import. Restricted to a single schema.

TABLESPACES: TABLESPACE_NAME [, ...]
    List of tablespaces for a tablespace-mode database import.
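To tie several of these parameters together, here is one plausible schema-mode invocation. Every name in it (the connect string, the DP_DIR directory object, the hr_prod.dmp file, the HR and HR_TEST schemas) is a placeholder for your own environment, not a value taken from the text:

```
REM Hypothetical schema-mode import: DP_DIR, hr_prod.dmp, HR, and HR_TEST
REM are placeholders for your own environment.
impdp system@orcl DIRECTORY=DP_DIR DUMPFILE=hr_prod.dmp LOGFILE=hr_prod_imp.log SCHEMAS=HR REMAP_SCHEMA=HR:HR_TEST TABLE_EXISTS_ACTION=TRUNCATE PARALLEL=4
```

The same settings can instead be kept in a parameter file, so the command line reduces to, say, impdp system@orcl PARFILE=hr_imp.par, with one PARAMETER=VALUE pair per line in the file. This is handy for repeated runs and for values containing characters the shell would otherwise mangle.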
Moreover, to use the data pump stop and restart job capabilities, run data pump in interactive-command mode, which presents the data pump prompt (for example, by pressing Ctrl+C in a running client session, or by attaching to a job with the ATTACH parameter). The following commands are then also quite frequently useful: