Shulou (Shulou.com) 06/02 Report, 2025-01-17 update. From: SLTechnology News & Howtos > Servers
Tools covered: exp/imp, Data Pump (expdp/impdp), SQL*Loader, RMAN
Logical backup and recovery of databases: exp and imp
1. Full database mode: export and import the entire database
2. Tablespace mode: export and import tablespaces
3. User mode: export and import all objects owned by a user
4. Table mode: export and import tables or table partitions
(1) commit=y: commit after each buffer of rows is inserted. This slows the import down, but helps when the data being imported is too large for the rollback/undo space.
(2) ignore=y/n: ignore object-creation errors (for example, "table already exists") and continue.
(3) full=y: export or import the whole database.
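As a sketch, these parameters can also be collected in an imp parameter file instead of being typed on the command line; the file names here are assumptions that follow this article's paths:

```text
# imp.par (sketch; names and paths are examples)
userid=scott/123456
file=/oracle/bak_oracle/emp_test.dmp
log=/oracle/bak_oracle/imp_emp_test.log
full=y
ignore=y
commit=y
```

Invoked as: imp parfile=imp.par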
1. Export and import tables
Back up the test table:
create table emp_test as select * from emp;
Export a table with exp:
exp scott/123456@orcl file=/oracle/bak_oracle/emp_test.dmp tables=emp_test
Export selectively with a where condition:
exp scott/123456@10.1.1.10:1521/orcl file=/oracle/bak_oracle/emp_test.dmp tables=emp_test query=\"where sal between 1000 and 2000\" log=/oracle/bak_oracle/emp_test.log
Export tables belonging to different users with a privileged user:
exp \'sys/system as sysdba\' file=/oracle/bak_oracle/emp_test.dmp tables=scott.emp_test,test.tab_test
Note: when logged in on the server as the oracle OS user with TNSNAMES configured, you can connect as scott/123456 directly;
for a backup run from another machine, use the full connect string: scott/123456@10.1.1.10:1521/orcl
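The backslash-escaping needed for query= on the command line can be avoided with a parameter file, where the clause is written plainly. A sketch using the same table and paths:

```text
# exp_query.par (sketch)
userid=scott/123456@orcl
tables=emp_test
query="where sal between 1000 and 2000"
file=/oracle/bak_oracle/emp_test.dmp
log=/oracle/bak_oracle/emp_test.log
```

Run with: exp parfile=exp_query.par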
Delete the test table (the purge keyword skips the recycle bin):
drop table emp_test purge;
Import a table with imp:
Query the default tablespace in which the user's data is stored:
select username, default_tablespace from dba_users where username = 'SCOTT';
In special cases a user's data needs to go into a specific tablespace; change the user's default first:
alter user scott default tablespace XXX;
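Besides the default tablespace, the user also needs quota on it before an import can create segments there. A minimal SQL sketch, assuming the target tablespace is named USERS:

```sql
-- assumption: the target tablespace is USERS
alter user scott default tablespace users;
alter user scott quota unlimited on users;
```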
imp scott/123456 file=/oracle/bak_oracle/emp_test.dmp
Import another user's tables with a privileged user:
imp \'sys/system as sysdba\' file=/oracle/bak_oracle/emp_test.dmp fromuser=scott
Verify that the dropped table was imported successfully:
select count(*) from emp_test;
2. Export and import a user
Export all data owned by a user:
exp scott/123456 file=/oracle/bak_oracle/scott.dmp owner=scott
Delete the user:
SQL> drop user scott cascade;
drop user scott cascade
*
ERROR at line 1:
ORA-01940: cannot drop a user that is currently connected
(1) Check whether any sessions are holding locks on the user's objects:
select sess.inst_id, sess.machine,
       sess.program, sess.sql_id,
       sess.sid, sess.serial#,
       sess.process,
       lo.oracle_username, lo.os_user_name,
       lo.locked_mode,
       ao.object_name, ao.object_type,
       ao.status
from gv$locked_object lo, dba_objects ao, gv$session sess
where ao.object_id = lo.object_id
and lo.session_id = sess.sid
and sess.username = 'SCOTT';
-- kill the session by sid, serial#
alter system kill session '10,11562';
(2) Check for sessions that have not yet ended:
SQL> select username, sid, serial#, paddr, status from v$session where username = 'SCOTT';
USERNAME   SID  SERIAL#  PADDR             STATUS
SCOTT        1      281  000000008E51C510  KILLED
SCOTT       20      362  000000008E491150  INACTIVE
SCOTT       21      175  000000008E48D050  INACTIVE
SCOTT       28      169  000000008E51C510  KILLED
SQL> select program from v$process where addr = '000000008E490110';
PROGRAM
oracle@master.example.com
-- kill each session by 'sid,serial#'
alter system kill session '1,281';
alter system kill session '20,362';
alter system kill session '21,175';
alter system kill session '28,169';
SQL> drop user scott cascade;
User dropped.
Import the user's data back
Grant privileges before the import (connect plus, here, DBA) so that all of SCOTT's tables can be restored:
grant connect, resource to scott identified by 123456;
grant dba to scott;
imp scott/123456 file=/oracle/bak_oracle/scott.dmp full=y
Import the tables with a privileged user:
imp \'sys/system@10.1.1.10:1521/orcl as sysdba\' file=/oracle/bak_oracle/scott.dmp fromuser=scott touser=scott
Note: touser= can point at a different user.
3. Export and import tablespaces
(unfinished, to be continued)
When the amount of data to back up is large, Data Pump is the better choice.
Logical backup and recovery of databases: expdp and impdp
The data pump utilities run only on the database server side.
Create a directory object on the server to hold the dump files:
create directory DATA_BAK as '/oracle';
Grant the user read and write on the directory:
grant read, write on directory DATA_BAK to TEST;
View directories, their owners and paths:
select * from dba_directories;
General form:
expdp <user>/<password>@<ip>:<port>/<service> schemas=<schema> directory=<directory> dumpfile=<name>.dmp logfile=<name>.log
expdp test/test@10.1.1.10:1521/orcl schemas=TEST directory=DATA_BAK dumpfile=xxx.dmp logfile=xxx.log
Note: dumpfile and logfile take bare file names, not paths; the files are written under the directory object's path.
schemas: the schema to export; directory: the directory object holding the output; dumpfile: the dump file to write; logfile: the log file; compression: compress data during the export.
Export with a parameter file:
[oracle@master oracle]$ cat aa.par
userid="test/test"
# userid="/ as sysdba"   export as DBA
directory=DATA_BAK
dumpfile=emp_test2.dmp
logfile=emp_test2.log
# parallel=5
compression=all
# nologfile=no
schemas=test
job_name=emp_test2_expdp
Invoke it:
nohup expdp parfile=aa.par &
Export with a single command:
expdp test/test@10.1.1.10/orcl directory=DATA_BAK dumpfile=emp_test2.dmp logfile=emp_test2.log schemas=test
Import with the data pump:
impdp test/test@10.1.1.10:1521/orcl remap_schema=test:test table_exists_action=replace directory=DATA_BAK dumpfile=emp_test2.dmp logfile=emp_test2.log
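The same import can be driven by a parameter file; this sketch additionally remaps the objects into a different schema (TEST2 is a hypothetical target, and remapping into another schema needs a suitably privileged account):

```text
# imp_emp_test2.par (sketch; TEST2 is hypothetical)
userid=test/test@10.1.1.10:1521/orcl
directory=DATA_BAK
dumpfile=emp_test2.dmp
logfile=imp_emp_test2.log
remap_schema=test:test2
table_exists_action=replace
```

Run with: impdp parfile=imp_emp_test2.par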
Bulk loading into the database: SQL*Loader
Connect as the target user and check the table structure first.
Control file:
[oracle@master oracle]$ cat EMP_LOADERTEST2.ctl
LOAD DATA
INFILE '/oracle/EMP_LOADERTEST2.dat'
TRUNCATE INTO TABLE EMP_LOADERTEST2
FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
(EMPNO, ENAME, JOB)
Data file:
[oracle@master oracle]$ cat EMP_LOADERTEST2.dat
11,a1,1,11,22,33,44,55
11,a2,2,11,22,33,44,55
11,a3,2,11,22,33,44,55
11,a4,3,11,22,33,44,55
The fields in the data file must correspond, in order, to the columns listed in the control file,
and their values must match the column types of the target table.
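To illustrate that correspondence, a small shell sketch that writes a data file whose first three fields line up with (EMPNO, ENAME, JOB); the /tmp path is an assumption, and quoting a field is optional because of OPTIONALLY ENCLOSED BY '"':

```shell
# Write two sample rows; enclosing a field in double quotes is optional,
# matching OPTIONALLY ENCLOSED BY '"' in the control file.
printf '%s\n' '11,"a1",1' '12,a2,2' > /tmp/EMP_LOADERTEST2.dat
cat /tmp/EMP_LOADERTEST2.dat
```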
sqlldr test/test control=/oracle/EMP_LOADERTEST2.ctl data=EMP_LOADERTEST2.dat
sqlldr writes a log file; check it for the result of the load,
then verify that the rows arrived in the table.
Tablespace growth
If the SYSTEM tablespace keeps growing unexpectedly large (it auto-extends), check the following.
1. Check whether auditing is enabled; audit records can make the audit table very large.
show parameter audit
2. Turn auditing off:
SQL> alter system set audit_sys_operations=FALSE scope=spfile;
System altered.
SQL> alter system set audit_trail=NONE scope=spfile;
System altered.
Restart the database for the change to take effect.
Note: possible values of audit_trail:
none: disables database auditing
os: enables auditing and writes audit records to the operating system's audit trail
db: enables auditing and writes all audit records to the database's SYS.AUD$ table
xml: enables auditing and writes all records to operating system files in XML format
xml,extended: as xml, and additionally records all columns of the audit record, including the SqlText and SqlBind values
3. Check whether the optimizer statistics history has grown too large.
Statistics history is kept for 31 days by default; the retention can be shortened to, say, 10 days or one week, and expired history is purged automatically.
SQL> select dbms_stats.get_stats_history_retention from dual;
SQL> conn / as sysdba
Connected.
SQL> exec dbms_stats.alter_stats_history_retention(10);
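If shortening the retention is not enough, history older than a cutoff can be removed at once with the DBMS_STATS.PURGE_STATS procedure; the 10-day cutoff below is only an example:

```sql
-- purge optimizer statistics history older than 10 days (example cutoff)
exec dbms_stats.purge_stats(sysdate - 10);
```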