2025-03-06 Update From: SLTechnology News&Howtos
In order to improve the SGA and PGA settings, I will adjust those parameters during the move; and to minimize mis-operation and complete the data migration successfully, I give the specific parameter values, commands and directories to be created at each step. To explain up front: after analyzing the new server, I plan to allocate 16 GB of memory to the SGA, 3 GB to the PGA, and leave the remaining 5 GB to the operating system.
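The byte values used later in step 11 follow directly from these sizes; a quick shell sanity check of the arithmetic (assuming "GB" here means GiB, i.e. 1024^3 bytes):

```shell
# 16 GiB for sga_max_size and 3 GiB for pga_aggregate_target, in bytes
sga_max=$((16 * 1024 * 1024 * 1024))
pga=$((3 * 1024 * 1024 * 1024))
echo "sga_max_size=$sga_max"
echo "pga_aggregate_target=$pga"
```

Note that the sga_target value used in step 11 (17072495001, about 15.9 GiB) is slightly below sga_max_size, which is allowed: sga_target may be set anywhere up to sga_max_size.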
1. A few days before the migration, stop the existing Data Guard setup, and install the operating system and database software on the new server in advance.
2. The day before the migration, remove the DELETE ALL INPUT clause from the RMAN script, so that archived log files are no longer deleted.
3. Before the full backup, query V$LOG to determine the current log sequence number, in preparation for identifying the archived log files needed for recovery.
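For example, the current sequence can be read with a query like:

```sql
-- The row with STATUS = 'CURRENT' is the log now being written.
SELECT GROUP#, THREAD#, SEQUENCE#, STATUS FROM V$LOG;
```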
4. Create a new directory (/?) on the current server to hold the RMAN backup files.
5. Make a full backup with the RMAN backup command, specifying the newly created directory (/?) as the destination. Save the RMAN backup log at the same time; it is used to identify the backup piece that contains the Oracle control file.
The command is as follows:
run {
backup format '/?/db_%s_%p_%T'
database plus archivelog
format '/?/arch_%s_%p_%T';
}
6. Use CREATE PFILE FROM SPFILE on the original database to regenerate the PFILE and ensure all parameter settings are current.
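Run as SYSDBA on the original database. With no path specified, Oracle writes init&lt;SID&gt;.ora into $ORACLE_HOME/dbs by default:

```sql
CREATE PFILE FROM SPFILE;
```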
7. Create a new database on the target server with DBCA, using the same database name as the original (our production database is POMSPRO). The locations of the other files do not need to be changed, because this step only establishes the initial instance (INSTANCE).
8. Delete all of the newly created database's data files, log files and control files, and delete its SPFILE as well.
9. On the target server, create a new directory with the same path as the backup directory created on the original server in steps 4-5; it will hold the RMAN full backup set.
10. Transfer all backup pieces of the backup set via FTP into the new directory (/?) on the target server.
11. Modify the database PFILE on the target server so that it matches the original database's PFILE, then change some of the parameters as follows:
log_archive_dest_1='LOCATION=/u01/oradata/gelcprod10g VALID_FOR=(ALL_LOGFILES,ALL_ROLES)'
pga_aggregate_target=3221225472
sga_max_size=17179869184
sga_target=17072495001
Also remove (or comment out) the Data Guard-related parameters.
12. Start the database to the NOMOUNT state; at this point the instance is started and memory has been allocated.
13. Create a new directory on the target server to hold the restored control files. Referring to the configuration in the PFILE, create the following directory:
/oradata/gelcprod10g/GELCPRO/
14. Restore the control file with the following command:
restore controlfile from '/?';
Here /? represents the backup piece, identified in step 5, that contains the control file.
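For illustration only, with a hypothetical backup-piece name following the db_%s_%p_%T format from step 5:

```sql
-- Hypothetical piece name; run from the RMAN prompt with the instance in NOMOUNT.
RESTORE CONTROLFILE FROM '/backup/db_21_1_20250306';
```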
15. Start the database to the MOUNT state; at this point the database has loaded the control file.
16. Create the data file directory, log directories and archive directory on the target server, along with the various trace file directories. The directories we need to create are as follows:
/u01/oradata/gelcprod10g (archive log location)
/oradata/gelcprod10g/GELCPRO/ (data file location)
/oradata/gelcprod10g/GELCPRO/ (log file group 1 location)
/u01/oradata/gelcprod10g/GELCPRO/ (log file group 2 location)
/home/oraprod/admin/GELCPRO/adump
/home/oraprod/admin/GELCPRO/bdump
/home/oraprod/admin/GELCPRO/cdump
/home/oraprod/admin/GELCPRO/udump
/home/oraprod/admin/GELCPRO/dpdump
/home/oraprod/admin/GELCPRO/pfile
The last six directories store the various dump and trace files.
17. Restore the data files with RESTORE DATABASE.
At this point the preparatory work is complete. The steps that follow require disconnecting all applications and stopping all business operations.
18. Close the application and make sure that no new data is written to the database.
19. Before the migration, extract the object counts of the original database with the following queries:
select count(*) from user_tables;
select count(*) from user_indexes;
select count(*) from user_views;
select count(*) from user_synonyms;
select OBJECT_TYPE, count(*)
from user_objects
where OBJECT_TYPE not in ('TABLE', 'INDEX', 'VIEW', 'SYNONYM')
group by OBJECT_TYPE;
select count(*) from dba_users;
select count(*) from dba_db_links;
select count(*) from user_jobs;
20. Perform multiple log switches to ensure that all changed data has been written to the archived logs. Our database has 3 log groups, so switch 4-6 times to make sure all data reaches the archive logs. Use the command ALTER SYSTEM SWITCH LOGFILE;
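A common alternative (my suggestion, not the original author's command) is ARCHIVE LOG CURRENT, which both switches the log and waits until it has actually been archived:

```sql
-- Switches the current log and blocks until archiving completes.
ALTER SYSTEM ARCHIVE LOG CURRENT;
```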
21. Determine the current log sequence (SEQUENCE#) by querying V$LOG on the original database, identify the archived log files that need to be copied, and for safety include a few archives earlier than the earliest one strictly required.
22. Copy the original server's archived logs via FTP into the corresponding directory on the target server, /u01/oradata/gelcprod10g.
23. Recover by applying the archived log files. The command is as follows:
recover database until logseq **
Here ** represents the SEQUENCE# of the last archived log file.
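For illustration only, assuming the last archived log had SEQUENCE# 1234 (a hypothetical value). Note that RMAN's UNTIL SEQUENCE/LOGSEQ boundary is exclusive, so recovery stops just before the specified sequence:

```sql
-- Hypothetical RMAN session; 1234 is an assumed last-archived SEQUENCE#.
-- UNTIL SEQUENCE is exclusive, so 1235 applies logs through 1234.
RUN {
  SET UNTIL SEQUENCE 1235 THREAD 1;
  RECOVER DATABASE;
}
```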
24. Use ALTER DATABASE OPEN RESETLOGS to open the database.
25. Use the command SHUTDOWN IMMEDIATE to shut down the database.
26. Use the command STARTUP MIGRATE to start the database.
27. Run the utlirp.sql script under $ORACLE_HOME/rdbms/admin to invalidate the existing PL/SQL so it can be recompiled for the new platform.
28. Use the command SHUTDOWN IMMEDIATE to shut down the database.
29. Use the command STARTUP to start the database.
30. Run the utlrp.sql script under $ORACLE_HOME/rdbms/admin to recompile all invalid objects.
31. Use the command SHUTDOWN IMMEDIATE to shut down the database.
32. Use the command STARTUP to start the database.
When the data migration is complete, the following steps carry out the IP switch and listener configuration. At this point you need to shut down the original database and disable its network service to avoid IP conflicts.
33. Shut down the network service on the original production server with the following command:
service network stop
34. On the target server, modify /etc/sysconfig/network-scripts/ifcfg-eth0 as follows:
DEVICE=eth0
BOOTPROTO=static
BROADCAST=192.168.8.255
IPADDR=192.168.8.9
NETMASK=255.255.255.0
NETWORK=192.168.8.0
ONBOOT=yes
TYPE=Ethernet
35. In /etc/hosts, update the entry for the hostname to point to 192.168.8.9.
36. Restart the network service with the following command:
service network restart
37. Copy the original database's listener.ora and tnsnames.ora files via FTP into the network/admin directory on the target server.
38. Restart the Oracle listener with the following commands:
lsnrctl stop
lsnrctl start
When the IP switch is complete, the next step is object verification. As a DBA you can only verify the various objects in the database, not the specific data.
39. Verify the number of tables owned by the user with the following command:
select count(*) from user_tables;
40. Verify the number of indexes owned by the user with the following command:
select count(*) from user_indexes;
41. Verify the number of views owned by the user with the following command:
select count(*) from user_views;
42. Verify the number of synonyms owned by the user with the following command:
select count(*) from user_synonyms;
43. Verify the remaining objects, including procedures, functions, triggers, etc., with the following query:
select OBJECT_TYPE, count(*)
from user_objects
where OBJECT_TYPE not in ('TABLE', 'INDEX', 'VIEW', 'SYNONYM')
group by OBJECT_TYPE;
44. Verify the number of users with the following command:
select count(*) from dba_users;
45. Verify the database links with the following command:
select count(*) from dba_db_links;
46. Verify the jobs owned by the user with the following command:
select count(*) from user_jobs;
At this point, the data verification we DBAs can do is finished. Developers still need to perform detailed data sampling checks, and once their verification is complete the application can be started.