2025-02-25 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 05/31 Report--
This article shows you what database logical backup and recovery look like. The content is concise and easy to understand; I hope you get something out of the detailed walkthrough below.
(1) Problem:
My first contact with database backup and recovery came from this scenario:
We usually develop in our own development environment, and after development is finished we need to move the Oracle database (installed on a Linux server) to the customer's deployment server (also Linux).
So the database in the development environment needs to be backed up and restored onto the new server.
(2) Solution: use the expdp/impdp data pump
Follow these steps:
(1) Back up the tablespace creation script:
Copy the creation statements for all tablespaces of the database to be backed up; they will be used later to create the corresponding tablespaces on the new database.
(2) Use the expdp command on your Windows machine to export the database by user:
expdp system/password@172.16.1.61/baseinfo dumpfile=user1.dmp directory=backup schemas=user1
Log in as the administrator system and export the schema belonging to user user1 from the baseinfo instance on server 172.16.1.61. The dmp file is placed in the oradata directory of the remote server 172.16.1.61 (note that this directory is resolved relative to Oracle's working directory on the server; if the export fails with "invalid operation, cannot manipulate log file", change to another export directory, or simply omit the directory parameter).
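When the parameter list gets long, expdp can also read it from a parameter file. A minimal sketch, assuming the same connection details and directory object as above (the parfile and log file names are placeholders); run it with expdp system/password@172.16.1.61/baseinfo parfile=export_user1.par:

```
# export_user1.par -- hypothetical parameter file for the export above
schemas=user1
dumpfile=user1.dmp
logfile=user1_exp.log
directory=backup
```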
(3) Transfer the dmp file:
Use software such as PenguiNet to connect to Linux and copy the dmp file exported on the server down to Windows, then upload it to the target Linux server (the new database). Alternatively, use the scp command to copy the dmp file directly between the source and target Linux servers.
Note: if you are overwriting an existing database, that is, the user1 user already exists in the target database you want to import into, skip the next two steps and go directly to step 6.
(4) Create a directory to hold the tablespace files:
Connect to the target Linux server with PenguiNet and create a new directory to hold the new database's tablespace files:
mkdir -p /oradata/newDataBase
chown -R oracle:oogroup /oradata/newDataBase
Create the directory, then give ownership of it to the oracle user in the oogroup group. Because the database instance was created under the oracle user, that user needs read and write permission on this directory.
(5) Create the tablespaces:
Use Toad to connect to the target Linux server (log in as system), paste in the tablespace creation script backed up in step 1, change the tablespace file paths to the directory created in the previous step, and then execute it to create all the tablespaces the database needs.
(6) Use the impdp command on your Windows machine to import the data into the specified instance on the target Linux server:
Note: before importing, it is best to run drop user userName cascade to drop the user about to be imported, then run the import command below.
Incidentally, if you want to import multiple users, drop one and import one at a time, because there may be dependencies between them; if you drop them all first and then import them one by one, errors may occur.
The best approach, though, is to export all the users together and then import them together, with the user names separated by commas.
impdp system/password@192.168.9.226/baseinfo dumpfile=user1.dmp schemas=user1
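As a sketch, the "export together, import together" approach for several users could be assembled like this (the schema list and the second user are hypothetical examples, not from the original setup):

```shell
#!/bin/sh
# Build multi-schema data pump commands; user2 is a hypothetical second schema.
SCHEMAS="user1,user2"
EXP_CMD="expdp system/password@172.16.1.61/baseinfo dumpfile=users.dmp directory=backup schemas=${SCHEMAS}"
IMP_CMD="impdp system/password@192.168.9.226/baseinfo dumpfile=users.dmp schemas=${SCHEMAS}"
# Print the commands; run them with eval once the details are verified.
echo "$EXP_CMD"
echo "$IMP_CMD"
```

Echoing the commands first makes it easy to review the schema list before anything touches the databases.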
Note: connect with PenguiNet's SCP, navigate to the directory where the user1.dmp file is located, and run the impdp command above so it can find the dmp file.
If an error occurs, it is usually because the dmp file cannot be found; check the error message to see where the import is looking for the dmp file, and put the file there.
At this point the database export and import are finished and you have the same database on the new machine; test it with Toad by logging in as the user1 user.
(3) Differences from exp/imp:
1. Oracle export and import are commonly used migration tools. In Oracle 10g, Oracle introduced the data pump (expdp/impdp), which can be more efficient than exp/imp by using parallelism.
2. expdp/impdp call server-side APIs to perform the operation, which runs as a job inside the database. They can be used remotely, but the generated dump file lives in a directory on the server.
3. Compared with exp/imp, the data pump's performance is greatly improved, and the biggest factor is parallelism. Look at it this way: expdp/impdp = exp/imp + direct path + parallel. So to get speed out of the data pump, you need to set the parallel parameter.
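For instance, a sketch of an export parameter file that actually exploits that parallelism might look like this (degree 4 is an arbitrary example; the %U wildcard lets each parallel worker write its own dump file):

```
# hypothetical parfile: parallel data pump export
schemas=user1
directory=backup
dumpfile=user1_%U.dmp
parallel=4
```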
Logical backup and recovery of an Oracle database -- implemented with the exp and imp commands
exp and imp are logical backup tools provided by Oracle. A logical backup creates a logical copy of database objects and stores it in a binary dump file. This kind of backup requires the database to be open. An export essentially reads a set of database records (even the data dictionary) and writes them to a file; the export of these records has nothing to do with their physical location. An import essentially reads the dump file and executes the commands in it. This backup method is implemented through the Oracle utilities export and import: export unloads data from the database, and import loads the data unloaded by export back into a database. A variety of tasks can be handled with these tools, such as backing up the entire database, rebuilding table structures, transferring data, and moving objects between users.
The Oracle export/import tools provide flexible and diverse export/import modes; the three most commonly used are user, table, and full database. In addition, you can decide during export/import whether to include object-related data dictionary information such as indexes, constraints, and permissions.
Note: with logical backup you can back up the entire database or only some important data. Because it is a logical backup, it can only be used to recover data logically; once the database is physically damaged and cannot start, logical backup data cannot help restore it.
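The three common export modes can be sketched as exp command lines (credentials, file names, owner, and table names here are placeholders only):

```shell
#!/bin/sh
# The three common exp modes as command strings; all names are placeholders.
USER_MODE="exp system/password file=user1.dmp owner=user1"
TABLE_MODE="exp system/password file=tabs.dmp tables=emp,dept"
FULL_MODE="exp system/password file=full.dmp full=y"
printf '%s\n' "$USER_MODE" "$TABLE_MODE" "$FULL_MODE"
```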
First, the command usage of export/import:
(1) Import one user's objects into another user's schema
Export an existing user from the database to a dmp file (e.g. export the objects of user olduser to *.dmp):
Command: exp sys/password file=*.dmp owner=olduser grants=N indexes=Y rows=Y
Import one user's objects from the dmp file into another user's schema (from *.dmp, import olduser's objects into user newuser):
Command: imp sys/password file=*.dmp fromuser=olduser touser=newuser indexes=Y rows=Y
Avoiding the ORA-00942 (table or view does not exist) error during import
Because export proceeds in the order in which objects were created in the database, dependencies between database objects can mean some imported objects cannot find the objects they depend on (views, procedures, and so on), which causes an ORA-00942 (table or view does not exist) error. The usual way to solve this is to import twice, with the following settings for the first import:
Command: imp sys/password file=*.dmp full=Y commit=Y rows=Y
If errors occur during the import, change the settings to rows=N and ignore=N and import again:
Command: imp sys/password file=*.dmp full=Y rows=N commit=Y ignore=N
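The two-pass sequence above can be sketched as a small script; the dump file name is a placeholder, and the commands are echoed rather than executed so the sequence is easy to review before running:

```shell
#!/bin/sh
# Two-pass import to work around ORA-00942 dependency errors.
DUMP=backup.dmp   # placeholder dump file name
# Pass 1: create objects and load rows; some dependent objects may fail.
PASS1="imp sys/password file=${DUMP} full=Y commit=Y rows=Y"
# Pass 2: re-run without reloading rows so the failed dependent objects get created.
PASS2="imp sys/password file=${DUMP} full=Y rows=N commit=Y ignore=N"
echo "$PASS1"
echo "$PASS2"
```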
(2) Example of full database backup and recovery without increments
Export the full database to a dmp file (e.g. *.dmp):
Command: exp sys/password file=*.dmp full=y rows=Y
Import the full-database backup *.dmp file into the database:
Command: imp sys/password file=*.dmp ignore=y full=y rows=Y
Note: export/import is the most thorough way to reduce disk fragmentation.
Second, the following database backup strategies:
(3) Incremental backup and recovery of the whole database:
Export the full database to a dmp file (e.g. *.dmp):
Command: exp sys/manager file=*.dmp full=y inctype=complete
Incremental export to a dmp file on day 1 (e.g. day1.dmp):
Command: exp sys/manager file=day1.dmp inctype=incremental
Incremental export to a dmp file on day 2 (e.g. day2.dmp):
Command: exp sys/manager file=day2.dmp inctype=incremental
Incremental export to a dmp file on day 3 (e.g. day3.dmp):
Command: exp sys/manager file=day3.dmp inctype=incremental
Now suppose the Oracle database is corrupted on day 3 and a new Oracle database is built. First import the dmp file from the last incremental backup into the database:
Command: imp sys/manager file=day3.dmp full=y ignore=y inctype=system
Import the dmp file from the full-database backup into the database:
Command: imp sys/manager file=*.dmp ignore=y full=y inctype=restore
Import the dmp file from the day-1 incremental backup into the database:
Command: imp sys/manager file=day1.dmp ignore=y full=y inctype=restore
Import the dmp file from the day-2 incremental backup into the database:
Command: imp sys/manager file=day2.dmp ignore=y full=y inctype=restore
Import the dmp file from the day-3 incremental backup into the database:
Command: imp sys/manager file=day3.dmp ignore=y full=y inctype=restore
(4) Back up the database regularly with the Windows Task Scheduler:
Write each of the commands above into its own bat script, then add a scheduled task. The configuration steps are as follows:
My environment is the Windows XP operating system: open System Tools under Accessories, select Scheduled Tasks, then click "Add Scheduled Task" and follow the wizard step by step to set the execution time of each bat file, so that the database backup task runs on schedule.
The above is what database logical backup and recovery look like. Have you picked up the knowledge or skills? If you want to learn more or enrich your knowledge reserve, you are welcome to follow the industry information channel.