I. Operating system and software versions
Operating system: Red Hat Enterprise Linux 7.4
Oracle database version: 11.2.0.4
Oracle Grid Infrastructure version: 11.2.0.4
Corresponding files:
p13390677_112040_Linux-x86-64_1of7.zip - database software
p13390677_112040_Linux-x86-64_2of7.zip - database software
p13390677_112040_Linux-x86-64_3of7.zip - grid software
IP address planning:
DNS server: 192.168.1.168
DB node 1: 192.168.1.212 (public), 192.168.1.213 (VIP), 10.0.1.2 (private)
DB node 2: 192.168.1.214 (public), 192.168.1.215 (VIP), 10.0.1.3 (private)
SCAN: 192.168.1.216
II. Basic configuration
1. Modify the host name:
Edit the /etc/hostname configuration file, or run:
hostnamectl set-hostname <name>
Node A: redhat-212
Node B: redhat-214
2. Change the dynamic IP to a static IP
cd /etc/sysconfig/network-scripts/
BOOTPROTO="static"    # change dhcp to static
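On RHEL 7 the same change can also be made with NetworkManager's nmcli instead of editing the ifcfg file by hand. A minimal sketch for node 1's public interface, assuming the connection is named ens192 and using the addresses from the plan above (the gateway value is an assumption for this lab network):
nmcli con mod ens192 ipv4.method manual ipv4.addresses 192.168.1.212/24 ipv4.gateway 192.168.1.1    # gateway assumed
nmcli con mod ens192 connection.autoconnect yes    # equivalent of ONBOOT=yes
nmcli con up ens192                                # re-activate the connection with the new settings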
3. Modify the NIC configuration on the two RAC nodes:
Node 1:
cat /etc/sysconfig/network-scripts/ifcfg-ens192
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens192
UUID=2a2b7809-26ac-4fc6-95d0-124c7348171a
DEVICE=ens192
ONBOOT=yes
IPADDR=192.168.1.212
PREFIX=24

cat /etc/sysconfig/network-scripts/ifcfg-ens224
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens224
UUID=6da67cdc-933c-4bfe-a3b1-2896175be86b
DEVICE=ens224
ONBOOT=yes
IPADDR=10.0.1.2
PREFIX=24
Node 2:
cat /etc/sysconfig/network-scripts/ifcfg-ens192
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens192
UUID=aeead365-1e33-41c3-b0e9-b147c4a2e688
DEVICE=ens192
ONBOOT=yes
IPADDR=10.0.1.3
PREFIX=24

cat /etc/sysconfig/network-scripts/ifcfg-ens224
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens224
UUID=e4fe8fed-6e97-43b4-aec6-80ce42588ead
DEVICE=ens224
ONBOOT=yes
IPADDR=192.168.1.214
PREFIX=24
# vim /etc/resolv.conf
nameserver 218.2.2.2
Restart the network service on each node:
systemctl restart network.service
Disable the predictable network interface naming scheme. To do this, pass the kernel parameters "net.ifnames=0 biosdevname=0" at boot by editing /etc/default/grub and adding "net.ifnames=0 biosdevname=0" to the GRUB_CMDLINE_LINUX variable.
cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap net.ifnames=0 biosdevname=0 rhgb quiet"
GRUB_DISABLE_RECOVERY="true"
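Editing /etc/default/grub alone is not enough: the change has to be written into the active GRUB2 configuration and only takes effect after a reboot. A minimal sketch for a BIOS-booted RHEL 7 system (on UEFI systems the grub.cfg lives under /boot/efi/EFI/redhat/ instead):
grub2-mkconfig -o /boot/grub2/grub.cfg    # regenerate grub.cfg with the new kernel parameters
reboot                                    # the parameters apply from the next boot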
4. Turn off the firewall
systemctl stop firewalld.service       # stop the firewall
systemctl disable firewalld.service    # keep the firewall from starting at boot
5. Disable SELinux
# vim /etc/selinux/config
Modify the file:
SELINUX=disabled
To disable it temporarily:
# setenforce 0
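A quick way to confirm the change (getenforce reports Permissive after setenforce 0 and only shows Disabled after a reboot with the edited config file):
getenforce    # expect Permissive now, Disabled after a reboot
sestatus      # shows both the current mode and the mode from the config file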
6. Resolve the two RAC nodes internally via /etc/hosts:
NODE1:
vim /etc/hosts
Add the name racdb1 to the 127.0.0.1 line and append the following at the end:
127.0.0.1       racdb1 localhost.localdomain localhost4 localhost4.localdomain4
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.216   cluster clusterscan
192.168.1.212   redhat-212
192.168.1.214   redhat-214
192.168.1.213   redhat-212-vip
192.168.1.215   redhat-214-vip
10.0.1.2        raca-priv
10.0.1.3        racb-priv
NODE2:
vim /etc/hosts
Add the name racdb2 to the 127.0.0.1 line and append the following at the end:
127.0.0.1       racdb2 localhost.localdomain localhost4 localhost4.localdomain4
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.216   cluster clusterscan
192.168.1.212   redhat-212
192.168.1.214   redhat-214
192.168.1.213   redhat-212-vip
192.168.1.215   redhat-214-vip
10.0.1.2        raca-priv
10.0.1.3        racb-priv
III. Carve out the storage space with Openfiler
Openfiler is very easy to use. It is installed in a VMware environment and is really just a Linux system; we only need to create virtual disks on it. Once configured, it is managed through a web page at https://ip:446.
Perform the following installation on the Network Storage Server (openfiler1).
After configuring the network on both Oracle RAC nodes, the next step is to install the Openfiler software to the network storage server (openfiler1). The network storage server is later configured as an iSCSI storage device to meet all shared storage needs of Oracle Clusterware and Oracle RAC.
For the steps to install Openfiler, refer to the article on the Oracle website:
http://www.oracle.com/technetwork/cn/articles/hunter-rac11gr2-iscsi-083834-zhs.html
1. Use Openfiler to configure iSCSI volumes
Openfiler iSCSI / logical volume name    Volume description
racdb-crs1     racdb - ASM CRS Volume 1
racdb-crs2     racdb - ASM CRS Volume 2
racdb-crs3     racdb - ASM CRS Volume 3
racdb-data1    racdb - ASM Data Volume 1
racdb-data2    racdb - ASM Data Volume 2
racdb-data3    racdb - ASM Data Volume 3
racdb-fra1     racdb - ASM FRA Volume 1
racdb-fra2     racdb - ASM FRA Volume 2
racdb-fra3     racdb - ASM FRA Volume 3
2. iSCSI target (IQN) naming
iqn.2006-01.com.openfiler:racdb.crs1
iqn.2006-01.com.openfiler:racdb.crs2
iqn.2006-01.com.openfiler:racdb.crs3
iqn.2006-01.com.openfiler:racdb.data1
iqn.2006-01.com.openfiler:racdb.data2
iqn.2006-01.com.openfiler:racdb.data3
iqn.2006-01.com.openfiler:racdb.fra1
iqn.2006-01.com.openfiler:racdb.fra2
iqn.2006-01.com.openfiler:racdb.fra3
3. Install the iSCSI client on both nodes
# yum install -y iscsi-initiator-utils
# systemctl start iscsid.service    (or: service iscsid start)
4. Enable the iSCSI client at boot
systemctl enable iscsid.service
systemctl enable iscsi.service
5. Check the service status
# systemctl list-unit-files | grep iscsi
iscsi-shutdown.service    static
iscsi.service             enabled
iscsid.service            enabled
iscsiuio.service          disabled
iscsid.socket             enabled
iscsiuio.socket           enabled
6. Discover the iSCSI target paths on the storage server
openfiler1-priv is the IP address of your Openfiler server; here it is 10.0.1.100.
# iscsiadm -m discovery -t sendtargets -p 10.0.1.100
10.0.1.100:3260,1 iqn.2006-01.com.openfiler:racdb.crs1
10.0.1.100:3260,1 iqn.2006-01.com.openfiler:racdb.crs2
10.0.1.100:3260,1 iqn.2006-01.com.openfiler:racdb.crs3
10.0.1.100:3260,1 iqn.2006-01.com.openfiler:racdb.data1
10.0.1.100:3260,1 iqn.2006-01.com.openfiler:racdb.data2
10.0.1.100:3260,1 iqn.2006-01.com.openfiler:racdb.data3
10.0.1.100:3260,1 iqn.2006-01.com.openfiler:racdb.fra1
10.0.1.100:3260,1 iqn.2006-01.com.openfiler:racdb.fra2
10.0.1.100:3260,1 iqn.2006-01.com.openfiler:racdb.fra3
7. Each Oracle RAC node can now find the available targets on the network storage server. The next step is to manually log in to each target, which can be done with the iscsiadm command-line interface and must be run on both Oracle RAC nodes. Note that the IP address of the network storage server is specified rather than its host name (openfiler1-priv); this seems to be necessary because the discovery above reports the targets by IP address.
Log in to the iSCSI remote disks:
iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 10.0.1.100 -l
iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs2 -p 10.0.1.100 -l
iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs3 -p 10.0.1.100 -l
iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.data1 -p 10.0.1.100 -l
iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.data2 -p 10.0.1.100 -l
iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.data3 -p 10.0.1.100 -l
iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.fra1 -p 10.0.1.100 -l
iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.fra2 -p 10.0.1.100 -l
iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.fra3 -p 10.0.1.100 -l
8. Configure the sessions to reconnect automatically at boot
iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 10.0.1.100 --op update -n node.startup -v automatic
iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs2 -p 10.0.1.100 --op update -n node.startup -v automatic
iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs3 -p 10.0.1.100 --op update -n node.startup -v automatic
iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.data1 -p 10.0.1.100 --op update -n node.startup -v automatic
iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.data2 -p 10.0.1.100 --op update -n node.startup -v automatic
iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.data3 -p 10.0.1.100 --op update -n node.startup -v automatic
iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.fra1 -p 10.0.1.100 --op update -n node.startup -v automatic
iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.fra2 -p 10.0.1.100 --op update -n node.startup -v automatic
iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.fra3 -p 10.0.1.100 --op update -n node.startup -v automatic
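Because the nine targets differ only in their suffix, steps 7 and 8 can also be driven by a small loop. A sketch, assuming the target names and portal address shown above:
for t in crs1 crs2 crs3 data1 data2 data3 fra1 fra2 fra3
do
  iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.$t -p 10.0.1.100 -l                                        # log in to the target
  iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.$t -p 10.0.1.100 --op update -n node.startup -v automatic  # reconnect at boot
done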
9. View the remote disk paths and the local devices they map to
# (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
ip-10.0.1.100:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs1-lun-0 -> ../../sdc
ip-10.0.1.100:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs2-lun-0 -> ../../sdd
ip-10.0.1.100:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs3-lun-0 -> ../../sde
ip-10.0.1.100:3260-iscsi-iqn.2006-01.com.openfiler:racdb.data1-lun-0 -> ../../sdf
ip-10.0.1.100:3260-iscsi-iqn.2006-01.com.openfiler:racdb.data2-lun-0 -> ../../sdg
ip-10.0.1.100:3260-iscsi-iqn.2006-01.com.openfiler:racdb.data3-lun-0 -> ../../sdh
ip-10.0.1.100:3260-iscsi-iqn.2006-01.com.openfiler:racdb.fra1-lun-0 -> ../../sdi
ip-10.0.1.100:3260-iscsi-iqn.2006-01.com.openfiler:racdb.fra2-lun-0 -> ../../sdj
ip-10.0.1.100:3260-iscsi-iqn.2006-01.com.openfiler:racdb.fra3-lun-0 -> ../../sdk
10. Set up udev for the shared disks (the rac_udev method used for the grid installation)
# yum install -y udev
Check the disk serial numbers:
for disk in `ls /dev/sd*`
do
  echo $disk
  /usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=$disk
done
Output:
/dev/sda
/dev/sda1
/dev/sda2
/dev/sdb
/dev/sdb1
/dev/sdc
14f504e46494c45527763443337452d557347312d514f3049
/dev/sdd
14f504e46494c4552715837527a472d78444f522d6e6b5774
/dev/sde
14f504e46494c45523045727559352d706674422d43666c59
/dev/sdf
14f504e46494c455262664b78684c2d51796e512d30464179
/dev/sdg
14f504e46494c45526c36533367792d6a6265712d45705648
/dev/sdh
14f504e46494c45524159783651312d4a4554742d4f74776f
On both Oracle RAC nodes, map the disks by serial number:
# vim /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd?", ENV{ID_SERIAL}=="14f504e46494c45527763443337452d557347312d514f3049", SYMLINK+="asm_ocr_1_1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?", ENV{ID_SERIAL}=="14f504e46494c4552715837527a472d78444f522d6e6b5774", SYMLINK+="asm_ocr_1_2", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?", ENV{ID_SERIAL}=="14f504e46494c45523045727559352d706674422d43666c59", SYMLINK+="asm_data_1_1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?", ENV{ID_SERIAL}=="14f504e46494c455262664b78684c2d51796e512d30464179", SYMLINK+="asm_data_1_2", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?", ENV{ID_SERIAL}=="14f504e46494c45526c36533367792d6a6265712d45705648", SYMLINK+="asm_fra_1_1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?", ENV{ID_SERIAL}=="14f504e46494c45524159783651312d4a4554742d4f74776f", SYMLINK+="asm_fra_1_2", OWNER="grid", GROUP="asmadmin", MODE="0660"
Load the rules file (the rac_udev disk setup used for the grid installation):
# udevadm control --reload-rules
# udevadm trigger
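To confirm that a given device picked up its rule, the serial that udev matched on can be queried directly; a quick check (sdc is just an example device):
udevadm info --query=property --name=/dev/sdc | grep ID_SERIAL    # should show the serial used in the rules file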
11. View the associated paths
# ls -l /dev/asm*
lrwxrwxrwx. 1 root root 3 Dec 26 17:30 /dev/asm_data_1_1 -> sdf
lrwxrwxrwx. 1 root root 3 Dec 26 17:30 /dev/asm_data_1_2 -> sdg
lrwxrwxrwx. 1 root root 3 Dec 26 17:30 /dev/asm_fra_1_1 -> sdh
lrwxrwxrwx. 1 root root 3 Dec 26 17:30 /dev/asm_fra_1_2 -> sdi
lrwxrwxrwx. 1 root root 3 Dec 26 17:30 /dev/asm_ocr_1_1 -> sdc
lrwxrwxrwx. 1 root root 3 Dec 26 17:30 /dev/asm_ocr_1_2 -> sdd
lrwxrwxrwx. 1 root root 3 Jan 8 14:00 /dev/asm_ocr_1_3 -> sde
IV. Create operating system privilege groups, users and directories with role separation
1. Create the groups:
groupadd -g 1000 oinstall
groupadd -g 1200 asmadmin
groupadd -g 1201 asmdba
groupadd -g 1202 asmoper
groupadd -g 1300 dba
groupadd -g 1301 oper
2. Create the users:
useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper grid
useradd -u 1101 -g oinstall -G dba,oper,asmdba oracle
3. Set the passwords:
passwd grid
passwd oracle
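A quick sanity check that the accounts and group memberships came out as intended:
id grid      # expect primary group oinstall plus asmadmin, asmdba, asmoper
id oracle    # expect primary group oinstall plus dba, oper, asmdba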
4. Set up the mutual trust relationship (SSH user equivalence). Remember that both the oracle user and the grid user need it.
# su - grid
$ mkdir ~/.ssh    (if it does not already exist)
Do the following on both nodes:
ssh-keygen -t rsa
ssh-keygen -t dsa
Just press Enter at every prompt.
The following can be done on one node (id_rsa is the private key, id_rsa.pub is the public key):
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys     # store the local public keys in authorized_keys
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh redhat-214 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys    # append the second node's public keys to the local file
ssh redhat-214 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys redhat-214:~/.ssh/authorized_keys      # copy the combined file to the second node
Verify on both nodes:
ssh redhat-212 date
ssh redhat-214 date
ssh raca-priv date
ssh racb-priv date
# su - oracle
Do the following on both nodes:
ssh-keygen -t rsa
ssh-keygen -t dsa
The following can be done on one node:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys     # store the local public keys in authorized_keys
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh redhat-214 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys    # append the second node's public keys to the local file
ssh redhat-214 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys redhat-214:~/.ssh/authorized_keys      # copy the combined file to the second node
Verify on both nodes:
ssh redhat-212 date
ssh redhat-214 date
ssh raca-priv date
ssh racb-priv date
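Because the first connection to each name still prompts to accept the host key, it helps to loop over every name once for each user on both nodes so that nothing prompts during the installer's own checks; a small sketch:
for h in redhat-212 redhat-214 raca-priv racb-priv
do
  ssh $h date    # should return the date with no password or host-key prompt
done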
5. Set environment variables for grid users
The following also has to be done on both nodes. Note that the SID used by the grid user differs from the SID used by the oracle user; keep that distinction in mind and you will not get it wrong.
Log in to both Oracle RAC nodes as the grid user account and create the following login script (.bash_profile):
Note: when setting the Oracle environment variable for each Oracle RAC node, be sure to specify a unique Oracle SID for each RAC node. For this example, I use:
racnode1: ORACLE_SID=+ASM1
racnode2: ORACLE_SID=+ASM2
Node 1:
[root@racnode1 ~]# su - grid
$ vi .bash_profile
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/grid
export ORACLE_HOME=/u01/app/grid/11.2.0
export NLS_DATE_FORMAT='yyyy/mm/dd hh24:mi:ss'
export TNS_ADMIN=$ORACLE_HOME/network/admin
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export LANG=en_US
export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
umask 022
Node 2:
# su - grid
$ vi .bash_profile
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+ASM2
export ORACLE_BASE=/u01/grid
export ORACLE_HOME=/u01/app/grid/11.2.0
export NLS_DATE_FORMAT='yyyy/mm/dd hh24:mi:ss'
export TNS_ADMIN=$ORACLE_HOME/network/admin
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export LANG=en_US
export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
umask 022
6. Set environment variables for oracle users
Log in to both Oracle RAC nodes as the oracle user account and create the following login script (.bash_profile):
Note: when setting the Oracle environment variable for each Oracle RAC node, be sure to specify a unique Oracle SID for each RAC node. For this example, I use:
racnode1: ORACLE_SID=orcl1
racnode2: ORACLE_SID=orcl2
Node 1:
# su - oracle
$ vi .bash_profile
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=rac1
export ORACLE_SID=orcl1
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ORACLE_UNQNAME=orcl
export TNS_ADMIN=$ORACLE_HOME/network/admin
# export ORACLE_TERM=xterm
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export LANG=en_US
export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
export NLS_DATE_FORMAT='yyyy/mm/dd hh24:mi:ss'
umask 022
Node 2:
# su - oracle
$ vi .bash_profile
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=rac2
export ORACLE_SID=orcl2
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ORACLE_UNQNAME=orcl
export TNS_ADMIN=$ORACLE_HOME/network/admin
# export ORACLE_TERM=xterm
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export LANG=en_US
export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
export NLS_DATE_FORMAT='yyyy/mm/dd hh24:mi:ss'
umask 022
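After editing the profiles, a quick check that each account picks up the intended SID and home (run as the user in question on each node):
source ~/.bash_profile
echo $ORACLE_SID $ORACLE_BASE $ORACLE_HOME    # e.g. +ASM1 /u01/grid /u01/app/grid/11.2.0 for grid on node 1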
7. Create the Oracle base directory paths
The final step is to create the Oracle base path following the Optimal Flexible Architecture (OFA) structure with the correct permissions. This needs to be done as the root user on both Oracle RAC nodes of the cluster.
This guide assumes that the /u01 directory is created in the root file system. Note that this is done for simplicity and is not recommended as a general practice; /u01 would normally be provisioned as a separate file system configured with hardware or software mirroring.
# mkdir -p /u01/grid
# mkdir -p /u01/app/grid/11.2.0
# chown -R grid:oinstall /u01
# mkdir -p /u01/app/oracle
# chown oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01
8. Set resource limits for the Oracle software installation users
8.1 On each Oracle RAC node, add the following lines to the /etc/security/limits.conf file (the example covers the software owners oracle and grid):
# vi /etc/security/limits.conf
grid   soft nproc  2047
grid   hard nproc  16384
grid   soft nofile 1024
grid   hard nofile 65536
oracle soft nproc  2047
oracle hard nproc  16384
oracle soft nofile 1024
oracle hard nofile 65536
Or, adding the maximum stack size as well (the variant I usually use):
oracle soft nproc  2047
oracle hard nproc  16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack  10240
oracle hard stack  32768
grid   soft nproc  2047
grid   hard nproc  16384
grid   soft nofile 1024
grid   hard nofile 65536
grid   soft stack  10240
grid   hard stack  32768
8.2 On each Oracle RAC node, add or edit the following line in the /etc/pam.d/login file (if it is not already there):
# vi /etc/pam.d/login
session required pam_limits.so
8.3 Modify the kernel parameters
# vim /etc/sysctl.conf
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
Make the settings take effect:
# sysctl -p
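To confirm that the values are active, sysctl can read individual keys back; a quick check of two of the settings above:
sysctl kernel.sem net.ipv4.ip_local_port_range    # expect "250 32000 100 128" and "9000 65500"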
8.4 Install the required development packages
# yum -y install glibc glibc-devel glibc-headers libaio libaio-devel libgcc libstdc++ libstdc++-devel \
  make unixODBC unixODBC-devel pdksh compat-libcap1 compat-libstdc++-33 elfutils-libelf-devel \
  gcc gcc-c++ smartmontools cvuqdisk
(cvuqdisk is shipped on the grid installation media under the rpm/ directory if it is not available from your repositories.)
9. On RHEL 7, installing Oracle RAC 11.2.0.4 fails when root.sh reports "ohasd failed to start"
Cause of the error:
RHEL 7 uses systemd rather than initd to run and restart processes, while root.sh starts the ohasd process in the traditional initd way.
9.1 Solution:
On RHEL 7, ohasd needs to be set up as a service before the root.sh script is run.
Create the service file as the root user:
# touch /usr/lib/systemd/system/ohas.service
# chmod 777 /usr/lib/systemd/system/ohas.service
9.2 Add the following to the newly created ohas.service file
# vim /usr/lib/systemd/system/ohas.service
[Unit]
Description=Oracle High Availability Services
After=syslog.target

[Service]
ExecStart=/etc/init.d/init.ohasd run >/dev/null 2>&1
Type=simple
Restart=always

[Install]
WantedBy=multi-user.target
9.3 Run the following commands as the root user:
# systemctl daemon-reload
# systemctl enable ohas.service
# systemctl start ohas.service
9.4. View the running status
# systemctl status ohas.service
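One caveat: /etc/init.d/init.ohasd is only laid down by root.sh itself, so the first start of ohas.service may fail or show inactive until root.sh has begun running; a common workaround is to check for the file and restart the service while root.sh is executing:
ls -l /etc/init.d/init.ohasd     # appears once root.sh starts configuring Grid Infrastructure
systemctl restart ohas.service   # rerun if the earlier start happened before the file existed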
V. Install the GI cluster software
1. Connect through Xmanager (or another X11 display)
# su - grid
$ cd /home/grid/
$ unzip p13390677_112040_Linux-x86-64_3of7.zip
# yum install -y xorg-x11-server-utils    # provides the xhost command
# su - grid
$ export DISPLAY=ip:0.0    # ip is the address of the machine running the X server
$ xhost +
$ export LANG=en_US
$ ./runInstaller
If you need to add a Chinese language pack:
# mkdir -p /usr/share/fonts/zh_CN/TrueType
Put the zysong.ttf file into the /usr/share/fonts/zh_CN/TrueType directory.
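Before launching the graphical installer it can be worth running Oracle's cluster verification utility from the unzipped grid directory to catch missing prerequisites early; a sketch, assuming the zip was extracted into /home/grid/grid:
cd /home/grid/grid
./runcluvfy.sh stage -pre crsinst -n redhat-212,redhat-214 -verbose    # pre-installation checks on both nodes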
2. Installation screenshots
On the installation screen that appears, select Skip software updates at the bottom, then next.
Select the first option to install and configure Grid Infrastructure for a cluster, next.
Select the second option, the advanced installation, next.
Choose to add simplified Chinese, next
The cluster name is up to you. The SCAN name must match the SCAN IP alias in /etc/hosts; leave the SCAN port at its default and do not configure GNS, next.
Add the second node, next.
Click next (the mutual trust relationship set up earlier must already be in place, otherwise this step will fail).
Alternatively, if user equivalence has not been set up yet, click SSH Connectivity, enter the grid user's password (note that the grid password must be the same on both nodes), click Setup, and finally next.
Here you only need to confirm the network interfaces; the installation wizard has already filled them in for us, next (it automatically detects which subnet each interface on the server belongs to).
Storage selection: choose ASM, next.
Here we create an ASM disk group: give it the name "ORC", use it for the OCR, and select 3 disks, next (at least 3 disks are required here, and the screenshot is for reference only; if the disks cannot be found, try reloading the udev rules file from the rac_udev step above).
Set password: next
Choose the second option here without using IPMI
Assign a different group to ASM, next
The installation wizard fills in the installation paths for the cluster software; continue with next. Note that ORACLE_HOME must not be a subdirectory of ORACLE_BASE.
Execute the scripts as root: the root user runs two scripts, first on one node and then on the second (root.sh can take a while).
We will see an error reported here. I checked the MOS knowledge base about this problem and the information is rather vague; the error can be ignored and does not affect the installation or later use.
Click Finish, and the cluster software installation is complete.
VI. Install oracle 11.2.0.4 software
# su - oracle
$ export DISPLAY=ip:0.0    # ip is the address of the machine running the X server
$ xhost +
$ export LANG=en_US
$ cd database
$ ./runInstaller
Running the installation script in the unzipped database directory starts the installation wizard, just as for the grid installation.
We are not connected to the Internet, so there is no point filling in a MOS account; click next. The installation is again run from the rac1 node.
We only install database software here, click next
Choose the cluster installation method here, and you can choose one of the following three options:
Single instance database installation: installs the single-instance database software on the local node only.
Oracle Real Application Clusters database installation: lets you select nodes in the cluster and installs the Oracle RAC binaries on them.
Oracle RAC One Node database installation: installs the Oracle RAC One Node database binaries on the selected nodes.
On this screen, select the Oracle Real Application Clusters database installation option.
Select "select All" and click next
Or choose the language as "English/Simplified Chinese" and click next
Select "Enterprise Edition" and click "next"
Here are oracle_base and oracle_home, click next
The group assignments suggested here are fine; continue with next.
This step checks the installation environment. As long as the configuration parameters are correct there should be no problems and the checks are basically all Succeeded; click next. (The SCAN warning appears because the SCAN address is configured in /etc/hosts. Try pinging the SCAN address; if that works, the warning can be ignored. I pinged the SCAN IP successfully, so I ignored it for the time being.)
On the summary screen, click Install.
The installation will be faster here.
The following error occurred
Check the log; the error is as follows:
# vi /u01/oraInventory/logs/installActions2018-01-10_02-56-55PM.log
INFO: collect2: error: ld returned 1 exit status
INFO: make[1]: *** [/u01/app/oracle/product/11.2.0/dbhome_1/sysman/lib/emdctl] Error 1
INFO: make[1]: Leaving directory `/u01/app/oracle/product/11.2.0/dbhome_1/sysman/lib'
INFO: make: *** [emdctl] Error 2
INFO: End output from spawned process.
INFO: ----------------------------------
INFO: Exception thrown from action: make
Exception Name: MakefileException
Exception String: Error in invoking target 'agent nmhs' of makefile '/u01/app/oracle/product/11.2.0/dbhome_1/sysman/lib/ins_emagent.mk'. See '/u01/oraInventory/logs/installActions2018-01-10_02-56-55PM.log' for details.
Exception Severity: 1
Solution:
vi $ORACLE_HOME/sysman/lib/ins_emagent.mk
$ vi /u01/app/oracle/product/11.2.0/dbhome_1/sysman/lib/ins_emagent.mk
Search for the line containing $(MK_EMAGENT_NMECTL) and change it to: $(MK_EMAGENT_NMECTL) -lnnz11
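The same one-line change can also be scripted; a sketch, run as the oracle user after backing up the file:
cd $ORACLE_HOME/sysman/lib
cp ins_emagent.mk ins_emagent.mk.bak                                               # keep a backup copy
sed -i 's/\$(MK_EMAGENT_NMECTL)/\$(MK_EMAGENT_NMECTL) -lnnz11/g' ins_emagent.mk    # append -lnnz11 to the link line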
Then click Retry to continue.
After it completes you again have to run a script as the root user on both nodes.
At this point the database software installation is finished.
VII. Create an ASM disk group
1. We configured ASM disks when installing the GI cluster software, but there are still disks that have not been added to an ASM disk group; this is done with asmca.
$ su - grid
$ export DISPLAY=ip:0.0
Run the asmca command and the configuration wizard appears. Here you can see the disk group configured earlier. Click Create.
Give the new disk group a name, choose Normal redundancy, tick data_1_1 and data_1_2, and click OK.
There is a wait of about ten seconds while the disk group is created; a success message appears when it finishes.
Repeat the same steps with Normal redundancy, ticking fra_1_1 and fra_1_2, and click OK.
Now three disk groups are visible; just exit.
VIII. Configure the Oracle database
Switch to the oracle user (su - oracle) and run the dbca command to configure the database:
$ export DISPLAY=ip:0.0
$ export LANG=en_US
$ dbca
On the following screen, select the first option, the cluster (RAC) mode, then next.
You do not have to think about this one: choose Create a Database and continue with next.
Again choose General Purpose and continue with next.
The configuration type is Admin-Managed; set both the global database name and the SID to test. At the bottom choose Select All (nodes), then next.
Keep the default recommended configuration here; configure both EM and the automatic maintenance tasks and continue with next.
Give sys, system and dbsnmp the same password (choose it yourself), then next.
Choose ASM as the storage type and use OMF to manage the data files (the name here is the disk group path where the data files will be stored; customize it).
Because I selected OEM when installing the database, the ASMSNMP password has to be entered here; set it and click OK.
Whether to configure the fast recovery area here depends on your situation.
Or skip the fast recovery area; I do not configure it here and will change the spfile manually later.
Do not choose to install the sample schemas.
Or install the sample schemas so there is data to test and play with; continue with next.
Here the character set is set to ZHS16GBK - GBK, and the default national character set is fine. The other tabs need no special configuration; they can be changed later through the spfile. Continue with next.
This is the database storage configuration page; review the control files and other settings that are about to be created, then next.
Continue with Finish.
Wait for it; this also takes a while. With an SSD and a fast CPU it is basically done in about 10 minutes.
After the installation completes, click Exit.
This exits the installation and configuration wizard.
IX. Inspection
1. Check the status of crs resources
[grid@redhat212 ~]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.DATADB.dg  ora....up.type ONLINE    ONLINE    redhat212
ora.FRA.dg     ora....up.type ONLINE    ONLINE    redhat212
ora....ER.lsnr ora....er.type ONLINE    ONLINE    redhat212
ora....N1.lsnr ora....er.type ONLINE    ONLINE    redhat212
ora.ORC.dg     ora....up.type ONLINE    ONLINE    redhat212
ora.asm        ora.asm.type   ONLINE    ONLINE    redhat212
ora.cvu        ora.cvu.type   ONLINE    ONLINE    redhat212
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora....network ora....rk.type ONLINE    ONLINE    redhat212
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    redhat212
ora.ons        ora.ons.type   ONLINE    ONLINE    redhat212
ora.orcl.db    ora....se.type ONLINE    ONLINE    redhat212
ora....SM1.asm application    ONLINE    ONLINE    redhat212
ora....12.lsnr application    ONLINE    ONLINE    redhat212
ora....212.gsd application    OFFLINE   OFFLINE
ora....212.ons application    ONLINE    ONLINE    redhat212
ora....212.vip ora....t1.type ONLINE    ONLINE    redhat212
ora....SM2.asm application    ONLINE    ONLINE    redhat214
ora....14.lsnr application    ONLINE    ONLINE    redhat214
ora....214.gsd application    OFFLINE   OFFLINE
ora....214.ons application    ONLINE    ONLINE    redhat214
ora....214.vip ora....t1.type ONLINE    ONLINE    redhat214
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    redhat212
We can see that ora.gsd and the per-node gsd resources (ora....212.gsd and ora....214.gsd) are in the OFFLINE state. These resources have no effect on the database, but we can simply bring them online.
2. View the status of the node
[grid@redhat212 ~]$ srvctl status nodeapps -n redhat212
VIP redhat212-vip is enabled
VIP redhat212-vip is running on node: redhat212
Network is enabled
Network is running on node: redhat212
GSD is disabled
GSD is not running on node: redhat212
ONS is enabled
ONS daemon is running on node: redhat212
3. Enable the node applications
[grid@redhat212 ~]$ srvctl enable nodeapps
PRKO-2415: VIP is already enabled on node(s): redhat212,redhat214
PRKO-2416: Network resource is already enabled.
PRKO-2417: ONS is already enabled on node(s): redhat212,redhat214
After enabling, check the node application status again:
[grid@redhat212 ~]$ srvctl status nodeapps
VIP redhat212-vip is enabled
VIP redhat212-vip is running on node: redhat212
VIP redhat214-vip is enabled
VIP redhat214-vip is running on node: redhat214
Network is enabled
Network is running on node: redhat212
Network is running on node: redhat214
GSD is enabled
GSD is not running on node: redhat212
GSD is not running on node: redhat214
ONS is enabled
ONS daemon is running on node: redhat212
ONS daemon is running on node: redhat214
4. Start the node
$ srvctl start nodeapps
PRKO-2421: Network resource is already started on node(s): redhat212,redhat214
PRKO-2420: VIP is already started on node(s): redhat212
PRKO-2420: VIP is already started on node(s): redhat214
PRKO-2422: ONS is already started on node(s): redhat212,redhat214
5. Let's check whether all the components are online
$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.DATADB.dg  ora....up.type ONLINE    ONLINE    redhat212
ora.FRA.dg     ora....up.type ONLINE    ONLINE    redhat212
ora....ER.lsnr ora....er.type ONLINE    ONLINE    redhat212
ora....N1.lsnr ora....er.type ONLINE    ONLINE    redhat212
ora.ORC.dg     ora....up.type ONLINE    ONLINE    redhat212
ora.asm        ora.asm.type   ONLINE    ONLINE    redhat212
ora.cvu        ora.cvu.type   ONLINE    ONLINE    redhat212
ora.gsd        ora.gsd.type   ONLINE    ONLINE    redhat212
ora....network ora....rk.type ONLINE    ONLINE    redhat212
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    redhat212
ora.ons        ora.ons.type   ONLINE    ONLINE    redhat212
ora.orcl.db    ora....se.type ONLINE    ONLINE    redhat212
ora....SM1.asm application    ONLINE    ONLINE    redhat212
ora....12.lsnr application    ONLINE    ONLINE    redhat212
ora....212.gsd application    ONLINE    ONLINE    redhat212
ora....212.ons application    ONLINE    ONLINE    redhat212
ora....212.vip ora....t1.type ONLINE    ONLINE    redhat212
ora....SM2.asm application    ONLINE    ONLINE    redhat214
ora....14.lsnr application    ONLINE    ONLINE    redhat214
ora....214.gsd application    ONLINE    ONLINE    redhat214
ora....214.ons application    ONLINE    ONLINE    redhat214
ora....214.vip ora....t1.type ONLINE    ONLINE    redhat214
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    redhat212
We have completed the installation here!
X. RAC database and listener start/stop commands
These can be executed by both the grid and oracle users, on either node:
1. Start and stop the RAC listeners:
$ srvctl status listener                 # check the status of the TNS listeners
$ srvctl config listener -a              # check the TNS listener configuration
$ srvctl start listener                  # start the listeners
$ srvctl stop listener                   # stop the listeners
$ srvctl stop listener -n redhat212      # stop the listener on a specified node
$ srvctl start listener -n redhat212     # start the listener on a specified node
Example: to start and stop the database listener, switch to the grid user:
# su - grid
$ srvctl start|stop|status listener
2. Start and stop the RAC database:
srvctl status database -d RAC            # check the database status
srvctl status instance -d RAC -i rac1    # check the status of a specified instance
srvctl start database -d orcl            # start the database
srvctl stop database -d orcl             # shut down the database
srvctl start instance -d orcl -i orcl1   # start a specified instance
srvctl stop instance -d orcl -i orcl2    # stop a specified instance
Example: stop an instance on node 1 and check the status of both nodes.
Or, in the following way, start and stop the database with SQL*Plus as the oracle user:
# su - oracle
$ sqlplus sys/**** as sysdba
SQL> select status from v$instance;    -- view the database status
SQL> startup;                          -- start the database
SQL> shutdown immediate;               -- stop the database
XI. Add a tablespace
Steps: ssh as root to 192.168.1.212, then:
su - oracle
source .bash_profile
sqlplus /nolog
connect / as sysdba
Create a tablespace:
SQL> CREATE SMALLFILE TABLESPACE "TEST" DATAFILE '+DATADB/ORCL/DATAFILE/test001.dbf' SIZE 1024M AUTOEXTEND ON NEXT 500M MAXSIZE UNLIMITED LOGGING EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
This creates the TEST tablespace with the data file test001.dbf stored under the +DATADB/ORCL/DATAFILE/ directory in ASM, with an initial size of 1 GB and unlimited automatic extension in 500 MB steps.
Add data files to a tablespace:
alter tablespace CP_TM add datafile '+DATADB/orcl/datafile/cp_tm_08.dbf' size 20480M;
In RAC mode this adds the 20 GB data file cp_tm_08.dbf to the CP_TM tablespace.
alter tablespace CP_TM add datafile '+DATADB/orcl/datafile/cp_tm_09.dbf' size 10240M autoextend on next 50m maxsize 20480M;
This adds cp_tm_09.dbf to the CP_TM tablespace with an initial size of 10 GB, autoextending in 50 MB steps up to 20 GB.
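After adding space it is handy to see how full each tablespace is; a simple sketch query against the data dictionary, run from the same SQL*Plus session:
SQL> SELECT d.tablespace_name, d.total_mb, NVL(f.free_mb, 0) AS free_mb
       FROM (SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) AS total_mb
               FROM dba_data_files GROUP BY tablespace_name) d
       LEFT JOIN (SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) AS free_mb
                    FROM dba_free_space GROUP BY tablespace_name) f
         ON d.tablespace_name = f.tablespace_name
      ORDER BY d.tablespace_name;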
XII. Command summary
Cluster information commands can be executed by both the grid and oracle users.
1. Databases and instances:
List all configured databases:
$ srvctl config database
Check database details (the -d parameter is followed by your database name):
$ srvctl config database -d orcl -a
Or:
$ srvctl config database -d orcl -t
srvctl status database -d RAC            # check the database status
srvctl status instance -d RAC -i rac1    # check the status of a specified instance
srvctl start database -d rac             # start the database
srvctl stop database -d rac              # shut down the database
srvctl start instance -d rac -i rac1     # start a specified instance
srvctl stop instance -d rac -i rac2      # stop a specified instance
ASM status:
$ srvctl status asm
ASM configuration:
$ srvctl status asm -a
2. Network-related commands
TNS listener status and configuration:
$ srvctl status listener                 # check the status of the TNS listeners
$ srvctl config listener -a              # check the TNS listener configuration
$ srvctl start listener                  # start the listeners
$ srvctl stop listener                   # stop the listeners
$ srvctl stop listener -n redhat212      # stop the listener on a specified node
$ srvctl start listener -n redhat212     # start the listener on a specified node
The -n parameter of srvctl takes the node name, not the instance SID.
View the listener configuration of a node:
srvctl config listener -n node2
SCAN status and configuration:
$ srvctl status scan
$ srvctl config scan
VIP status and configuration for each node:
$ srvctl status vip -n rac1
$ srvctl status vip -n rac2
$ srvctl config vip -n rac1
$ srvctl config vip -n rac2
Node application configuration (VIP, GSD, ONS, listener):
$ srvctl config nodeapps -a -g -s -l
View the cluster status (node applications, ASM instances, databases, etc.):
$ crs_stat -t
You can also use the following commands for status checks:
crsctl stat resource -t    or    crsctl stat resource
Finally, I would like to point out a bug in version 11.2.0.1: clients may be unable to connect to the database through SCAN. The workaround is as follows:
[oracle@redhat212 ~]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Wed Mar 19 11:29:58 2014
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
SQL> show parameter local_listener
NAME              TYPE    VALUE
----------------- ------- ------------------------------------------------------------------------------------
local_listener    string  (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=redhat212-vip)(PORT=1521))))
SQL> show parameter remote_listener
NAME                             TYPE    VALUE
-------------------------------- ------- -----------------
remote_dependencies_mode         string  TIMESTAMP
remote_listener                  string  clusterscan:1521
remote_login_passwordfile        string  EXCLUSIVE
remote_os_authent                boolean FALSE
remote_os_roles                  boolean FALSE
result_cache_remote_expiration   integer 0
SQL> alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.216)(PORT=1521))))' sid='orcl1';
System altered.
SQL> alter system set remote_listener='clusterscan:1521';
System altered.
SQL> alter system register;
System altered.
Finally, configure the client tnsnames.ora file to point to the SCAN listener:
# tnsnames.ora.rac1 Network Configuration File: /u01/app/11.2.0/grid/network/admin/tnsnames.ora.rac1
# Generated by Oracle configuration tools.
ORCL =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.216)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
    )
  )