Shulou (Shulou.com) 05/31 Report — SLTechnology News & Howtos, Database (updated 2025-02-27)
This article explains in detail how to build Oracle RAC. The walkthrough is practical, and I share it here as a reference; I hope you get something out of it.
Building Oracle RAC
I. Preparation
  1. Planning
  2. Network adapter preparation (rac1 & rac2)
  3. Turn off the firewall and SELinux (rac1 & rac2)
  4. Modify the hostname (rac1 & rac2)
  5. Modify the hosts file (rac1 & rac2)
  6. Software package preparation
II. Modify parameters
  1. Modify system parameters (rac1 & rac2)
  2. Modify the limits file (rac1 & rac2)
III. Add groups, users, and directories
  1. Add groups (rac1 & rac2)
  2. Add users (rac1 & rac2)
  3. Create directories (rac1 & rac2)
IV. Configure environment variables (rac1 & rac2)
V. Prepare the shared disk
  1. Using Openfiler
  2. Find the WWID of each device
  3. Prepare the raw-device (udev) rules (rac1 & rac2)
VI. Decompress grid and configure SSH mutual trust
  1. Extract grid (rac1)
  2. Configure mutual trust (rac1)
  3. Check mutual trust (rac1 & rac2)
  4. Install the cvuqdisk package after grid is decompressed (rac1 & rac2)
VII. Configure the local yum repository and install packages (rac1 & rac2)
VIII. Shut down unnecessary services (rac1 & rac2)
IX. Install the GI software (rac1)
X. Create an ASM disk group
XI. Install the database software
XII. Build the database with DBCA
XIII. Post-installation checks
  1. Check the listener
  2. Check the instances
I. Preparation
1. Planning
Item              rac1                          rac2
Public IP         192.168.131.100 (rac1)        192.168.131.101 (rac2)
Private IP        10.0.0.100 (rac1-priv)        10.0.0.101 (rac2-priv)
VIP               192.168.131.123 (rac1-vip)    192.168.131.124 (rac2-vip)
SCAN IP           192.168.131.125 (racscan-ip)
Hostname          rac1                          rac2
OS version        RHEL 6.4                      RHEL 6.4
DB version        11.2.0.4                      11.2.0.4
Cluster software  GI 11.2.0.4                   GI 11.2.0.4
Storage           Openfiler (192.168.131.191)
2. Network adapter preparation (rac1 & rac2)
In the virtual machine settings, add two network adapters: one for the public network (NAT mode) and one for inter-node communication (host-only mode). Configure the NICs as follows:
Public IP     192.168.131.100 (rac1)    192.168.131.101 (rac2)
Private IP    10.0.0.100 (rac1-priv)    10.0.0.101 (rac2-priv)
3. Turn off the firewall and SELinux (rac1 & rac2)
[root@rac01 ~]# chkconfig iptables off
[root@rac01 ~]# service iptables stop
Disable SELinux:
[root@rac01 ~]# vi /etc/selinux/config
SELINUX=disabled
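If you prefer not to edit the file by hand, the same change can be made with a one-line sed. A minimal sketch, demonstrated against a copy of the config so it is safe to run anywhere; on the real nodes you would point it at /etc/selinux/config itself (as root):

```shell
# Sample config file standing in for /etc/selinux/config
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"

# Rewrite the SELINUX= line in place, leaving everything else untouched
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"
```

A reboot (or `setenforce 0` for the running session) is still needed for the change to take effect.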
4. Modify the hostname (rac1 & rac2)
vi /etc/sysconfig/network
HOSTNAME=rac1    (HOSTNAME=rac2 on the second node)
GATEWAY=192.168.131.1
5. Modify the hosts file (rac1 & rac2)
vi /etc/hosts
Add the following:
192.168.131.100 rac1
192.168.131.101 rac2
10.0.0.100 rac1-priv
10.0.0.101 rac2-priv
192.168.131.123 rac1-vip
192.168.131.124 rac2-vip
## the VIPs must not be pingable before installation; the clusterware brings them up itself
192.168.131.125 racscan-ip
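The entries can also be appended from a script, skipping any line that is already present so the step is safe to re-run. A sketch using the addresses from the planning table, demonstrated on a temporary file; set HOSTS=/etc/hosts on the real nodes:

```shell
HOSTS=$(mktemp)

# add_host <ip> <name>: append only if the exact line is absent
add_host() {
    grep -qxF "$1 $2" "$HOSTS" || echo "$1 $2" >> "$HOSTS"
}

add_host 192.168.131.100 rac1
add_host 192.168.131.101 rac2
add_host 10.0.0.100      rac1-priv
add_host 10.0.0.101      rac2-priv
add_host 192.168.131.123 rac1-vip
add_host 192.168.131.124 rac2-vip
add_host 192.168.131.125 racscan-ip
add_host 192.168.131.100 rac1      # re-running a step adds no duplicate
cat "$HOSTS"
```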
6. Software package preparation
(rac1)
GI directory: /u01/setup/grid
DB directory: /u01/setup/db
OS directory: /u01/setup/os
(rac2)
OS directory: /u01/setup/os
II. Modify parameters
1. Modify system parameters (rac1 & rac2)
vi /etc/sysctl.conf
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmax = 4294967296    ## set this to 75% of physical memory, in bytes
kernel.shmmni = 4096
kernel.shmall = 2097152
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
Make the parameters take effect immediately:
[root@rac01 ~]# /sbin/sysctl -p
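Rather than hard-coding the 75% figure mentioned above, the value can be derived from the node's actual memory. A minimal sketch, assuming a Linux host with /proc/meminfo:

```shell
# MemTotal is reported in kB; convert to bytes and take 75%
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
shmmax=$(( mem_kb * 1024 * 3 / 4 ))
echo "kernel.shmmax = $shmmax"
```

Paste the printed line into /etc/sysctl.conf in place of the fixed value.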
2. Modify the limits file (rac1 & rac2)
[root@rac01 ~]# vi /etc/security/limits.conf
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
III. Add groups, users, and directories
1. Add groups (rac1 & rac2)
groupadd -g 501 oinstall
groupadd -g 502 dba
groupadd -g 503 oper
groupadd -g 504 asmadmin
groupadd -g 505 asmoper
groupadd -g 506 asmdba
2. Add users (rac1 & rac2)
(1) Add the users:
useradd -g oinstall -G dba,asmdba,oper oracle
useradd -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
(2) Set passwords for the grid and oracle users:
[root@rac01 ~]# passwd oracle
[root@rac01 ~]# passwd grid
(3) Check:
[root@ora1 ~]# id oracle
uid=501(oracle) gid=501(oinstall) groups=501(oinstall),502(dba),503(oper),506(asmdba)
[root@ora1 ~]# id grid
uid=502(grid) gid=501(oinstall) groups=501(oinstall),502(dba),503(oper),504(asmadmin),505(asmoper),506(asmdba)
3. Create directories (rac1 & rac2)
(as root)
mkdir -p /u01/app/oracle
mkdir -p /u01/app/grid
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/oraInventory
chown -R grid:oinstall /u01/app/grid
chown -R grid:oinstall /u01/app/11.2.0
chown -R oracle:oinstall /u01/app/oracle
chown -R grid:oinstall /u01/app/oraInventory
chown -R grid:oinstall /u01/setup/grid
chown -R oracle:oinstall /u01/setup/db
chmod -R 775 /u01
chmod -R 775 /u01/app/oraInventory
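The layout above can be created in one pass. A sketch that builds the tree under a scratch ROOT so it is safe to run anywhere; on the real nodes ROOT would be empty (i.e. paths rooted at /), run as root after the grid and oracle users exist so the chown calls from the text succeed:

```shell
# Scratch root standing in for / during this sketch
ROOT=$(mktemp -d)

mkdir -p "$ROOT/u01/app/oracle" \
         "$ROOT/u01/app/grid" \
         "$ROOT/u01/app/11.2.0/grid" \
         "$ROOT/u01/app/oraInventory"
chmod -R 775 "$ROOT/u01"

# Show what was created
find "$ROOT/u01" -type d | sort
```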
IV. Configure environment variables (rac1 & rac2)
-- switch to the grid user --
rac1:
[grid@rac1 ~]$ vim .bash_profile
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:$PATH
rac2:
[grid@rac2 ~]$ vim .bash_profile
export ORACLE_SID=+ASM2
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:$PATH
-- switch to the oracle user --
rac1:
# su - oracle
$ vi /home/oracle/.bash_profile
export ORACLE_SID=orcl1
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$PATH:$ORACLE_HOME/bin
rac2:
# su - oracle
$ vi /home/oracle/.bash_profile
export ORACLE_SID=orcl2
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$PATH:$ORACLE_HOME/bin
V. Prepare the shared disk
1. Using Openfiler
(1) Download and install Openfiler
Openfiler ISO download: http://www.openfiler.com/community/download
(2) Access the Openfiler management interface
Create a new virtual machine for Openfiler with the following configuration.
Hard disk layout: disk 1 holds the Openfiler system, disks 2 and 3 are the OCR disks, and disks 4-6 are the data disks.
a) Open the Openfiler management interface in a browser at the IP address above
b) Log in with the account openfiler and the password password (the interface is quite cute)
c) You arrive at the main management screen
(3) Add disks in Openfiler
a) Open the Services tab and start (enable) the iSCSI target service
b) Open the System tab and add the network segment that is allowed access, then confirm it is listed
c) Open the Volumes tab and create a physical volume; at this point the physical volume exists
d) Create a volume group; at this point the myvg volume group exists
e) Create logical volumes; in the same way, turn /dev/sdc, /dev/sdd, /dev/sde, and /dev/sdf into LVs (the repeated steps are omitted; the final result is shown in the screenshot)
f) Add an IQN
g) Map the disks to the IQN
The logical volumes are now published. To use them, the clients must scan for them.
(4) Scan the Openfiler server from the clients (rac1 & rac2)
# iscsiadm -m discovery -t sendtargets -p 192.168.131.191 -l
If the iscsiadm command is not available, install the iscsi-initiator-utils rpm package first:
# iscsiadm -m discovery -t st -p 192.168.131.191
-bash: iscsiadm: command not found
# mount /u01/setup/os /mnt      -- mount the installation media
mount: block device /dev/cdrom is write-protected, mounting read-only
# cd /mnt/
# cd Server/
# ls -l *iscsi*
-r--r--r-- 1 root root 579386 Dec 17 2008 iscsi-initiator-utils-6.2.0.868-0.18.el5.i386.rpm
# rpm -ivh iscsi-initiator-utils-6.2.0.868-0.18.el5.i386.rpm
Then retry the iscsiadm command.
2. Find the WWID of each device
(run the following)
# for i in `cat /proc/partitions | awk '{print $4}' | grep sd`; do echo "### $i: `/lib/udev/scsi_id --whitelisted --replace-whitespace /dev/$i`"; done
The string printed for each device is the WWID we need.
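To see what the loop above is doing, it helps to split it in two: pull the sd* device names out of /proc/partitions, then feed each to scsi_id. scsi_id needs real SCSI devices, so only the name-extraction half is demonstrated here, against sample /proc/partitions output:

```shell
# Sample of what /proc/partitions looks like (header, blank line, devices)
sample='major minor  #blocks  name

   8        0   20971520 sda
   8        1   20970496 sda1
   8       16   10485760 sdb'

# Column 4 is the device name; keep only sd* entries
devs=$(echo "$sample" | awk '{print $4}' | grep '^sd')
echo "$devs"
```

On the nodes, each name in `devs` would then be passed to `/lib/udev/scsi_id --whitelisted --replace-whitespace /dev/<name>` to obtain its WWID.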
3. Prepare the raw-device (udev) rules (rac1 & rac2)
(bind the disks by WWID)
# vi /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c455234524b33457a2d493449682d6c555746", NAME="asm_ocr1", OWNER="grid", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c45524644775047362d445155342d33315251", NAME="asm_ocr2", OWNER="grid", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c4552457a786c38542d656c6c412d30547753", NAME="asm_data1", OWNER="grid", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c45526377784361742d525865332d48357a36", NAME="asm_data2", OWNER="grid", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c455271325a4a66662d5a6959672d4a337961", NAME="asm_data3", OWNER="grid", GROUP="oinstall", MODE="0660"
Restart udev to create the devices:
# start_udev
Check the devices:
[root@rac1 ~]# cd /dev
[root@rac1 dev]# ll asm*
VI. Decompress grid and configure SSH mutual trust
1. Extract grid (rac1)
# su - grid
$ cd /u01/setup/grid
$ unzip p*.zip
2. Configure mutual trust (rac1)
(run as root)
# /u01/setup/grid/grid/sshsetup/sshUserSetup.sh -user oracle -hosts "rac1 rac2" -advanced -noPromptPassphrase
# /u01/setup/grid/grid/sshsetup/sshUserSetup.sh -user grid -hosts "rac1 rac2" -advanced -noPromptPassphrase
(each user is prompted for a password 4 times)
3. Check mutual trust (rac1 & rac2)
su - grid
ssh rac1 date
ssh rac2 date
ssh rac1-priv date
ssh rac2-priv date
su - oracle
ssh rac1 date
ssh rac2 date
ssh rac1-priv date
ssh rac2-priv date
(verify that every connection returns without prompting for a password)
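The eight checks above can be expressed as one loop. Since the cluster nodes are not reachable from here, this sketch only prints the commands to run; on the nodes, drop the echo to execute them, and note that `-o BatchMode=yes` makes ssh fail outright instead of prompting, which turns a broken trust into a visible error:

```shell
# Generate one check per user/host combination
cmds=$(for u in grid oracle; do
    for h in rac1 rac2 rac1-priv rac2-priv; do
        echo "su - $u -c 'ssh -o BatchMode=yes $h date'"
    done
done)
echo "$cmds"
```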
4. Install the cvuqdisk package after grid is decompressed (rac1 & rac2)
[root@rac1 grid]# cd rpm/
# rpm -ivh cvuqdisk-1.0.9-1.rpm
VII. Configure the local yum repository and install packages (rac1 & rac2)
(as root)
# mount -o loop /u01/setup/os/*.iso /mnt
# yum-config-manager --add-repo file:///mnt
vi /etc/yum.repos.d/mnt.repo
[mnt]
name=Yum Source
baseurl=file:///mnt
enabled=1
gpgcheck=0
# yum makecache
Install the following packages:
[root@rac01 opt]# yum install -y binutils compat* elfutils-libelf elfutils-libelf-devel glibc glibc-common glibc-devel gcc gcc-c++ libaio libgcc libstdc++ libstdc++-devel make sysstat unixODBC-devel libaio-devel ksh
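The repo file can also be written in one step instead of by hand. A sketch using a temporary path; on the real nodes set REPO=/etc/yum.repos.d/mnt.repo:

```shell
REPO=$(mktemp)

# Write the repo definition exactly as shown in the text
cat > "$REPO" <<'EOF'
[mnt]
name=Yum Source
baseurl=file:///mnt
enabled=1
gpgcheck=0
EOF
cat "$REPO"
```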
VIII. Shut down unnecessary services (rac1 & rac2)
chkconfig autofs off
chkconfig acpid off
chkconfig sendmail off
chkconfig cups-config-daemon off
chkconfig cups off
chkconfig xfs off
chkconfig lm_sensors off
chkconfig gpm off
chkconfig openibd off
chkconfig pcmcia off
chkconfig cpuspeed off
chkconfig nfslock off
chkconfig ip6tables off
chkconfig rpcidmapd off
chkconfig apmd off
chkconfig arptables_jf off
chkconfig microcode_ctl off
chkconfig rpcgssd off
chkconfig ntpd off
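The chkconfig calls above can be driven from one loop, with each service listed exactly once. chkconfig only exists on the RHEL 6 nodes, so this sketch prints the commands; drop the echo to execute them for real (as root):

```shell
out=$(for svc in autofs acpid sendmail cups-config-daemon cups xfs lm_sensors \
                 gpm openibd pcmcia cpuspeed nfslock ip6tables rpcidmapd apmd \
                 arptables_jf microcode_ctl rpcgssd ntpd; do
    echo "chkconfig $svc off"
done)
echo "$out"
```

Disabling ntpd is deliberate: without NTP, the 11.2 clusterware runs its own Cluster Time Synchronization Service in active mode.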
IX. Install the GI software (rac1)
1. Feasibility check
(run the following script)
[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose
2. Install GI
$ cd /u01/setup/grid/grid
$ ./runInstaller
After the installation, check the cluster status:
[root@rac1 ~]# cd /u01/app/11.2.0/grid/bin/
[root@rac1 bin]# ./crs_stat -t
X. Create an ASM disk group
[grid@rac1 ~]$ export DISPLAY=192.168.131.1:0.0
[grid@rac1 ~]$ asmca
For the data disk group, select asm_data1, asm_data2, and asm_data3.
XI. Install the database software
$ unzip /u01/setup/db/p*.zip
$ cd /u01/setup/db/database
$ ./runInstaller
XII. Build the database with DBCA
[oracle@rac1 ~]$ dbca
XIII. Post-installation checks
1. Check the listener
The cluster listeners are created automatically during installation, so you can simply check their status, e.g. with srvctl status listener (as grid).
2. Check the instances
Verify that an instance is running on each node, e.g. with srvctl status database -d orcl.
This is the end of the article on how to build Oracle RAC. I hope it has been of some help; if you found it useful, please share it so more people can see it.