2025-01-16 Update From: SLTechnology News & Howtos
Shulou (Shulou.com) 06/01 report
Note: operations that must be performed on both hosts are marked "(rac1 and rac2)". The walkthrough below shows the commands on rac1 only.
IP Planning:
# Public IP
192.168.1.22 rac1
192.168.1.33 rac2
# Private IP
1.1.1.111 rac1-priv
1.1.1.222 rac2-priv
# Virtual IP
192.168.1.23 rac1-vip
192.168.1.34 rac2-vip
# Scan IP
192.168.1.77 rac-scan
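These plan entries later go into /etc/hosts on both nodes; a minimal heredoc sketch that prints the block (addresses taken from the plan above, adjust to your own network):

```shell
#!/bin/sh
# Print the /etc/hosts entries for the RAC IP plan;
# append the output to /etc/hosts on each node.
gen_hosts() {
  cat <<'EOF'
# Public IP
192.168.1.22 rac1
192.168.1.33 rac2
# Private IP
1.1.1.111 rac1-priv
1.1.1.222 rac2-priv
# Virtual IP
192.168.1.23 rac1-vip
192.168.1.34 rac2-vip
# Scan IP
192.168.1.77 rac-scan
EOF
}
gen_hosts
```

Run `gen_hosts >> /etc/hosts` as root on each node so both hosts files stay identical.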
Change the IP address (rac1 and rac2)
[root@rac1 network-scripts]# vi ifcfg-eno16777736
Change:
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eno16777736
UUID=eeaef3ba-b1fe-498f-95e8-3a982ec8931e
DEVICE=eno16777736
ONBOOT=yes
IPADDR=192.168.1.22
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=8.8.8.8
The test environment has no configuration file for the second NIC, so copy the first one and edit it by hand:
[root@rac1 network-scripts]# cp ifcfg-eno16777736 ifcfg-eno33554984
[root@rac1 network-scripts]# vi ifcfg-eno33554984
Add:
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eno33554984
UUID=7b040b98-b78e-44fa-91e1-5e115f0bdd9f
DEVICE=eno33554984
ONBOOT=yes
IPADDR=1.1.1.111
NETMASK=255.255.255.0
GATEWAY=1.1.1.1
DNS1=8.8.8.8
Configure rac2 the same way according to your own IP plan, then verify that the two nodes can ping each other.
Modify the hostnames (in this test environment the hostnames were already set during OS installation).
View the firewall status (rac1 and rac2)
[root@rac1 ~]# systemctl status firewalld
Stop the firewall (current session)
[root@rac1 ~]# systemctl stop firewalld
Disable the firewall (permanently)
[root@rac1 ~]# systemctl disable firewalld
Modify host files (rac1 and rac2)
[root@rac1 ~]# vi /etc/hosts
Add:
# Public IP
192.168.1.22 rac1
192.168.1.33 rac2
# Private IP
1.1.1.111 rac1-priv
1.1.1.222 rac2-priv
# Virtual IP
192.168.1.23 rac1-vip
192.168.1.34 rac2-vip
# Scan IP
192.168.1.77 rac-scan
Restart network services (rac1 and rac2)
[root@rac1 ~]# service network restart
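Once the network is back up, a quick reachability sketch over the planned names (assumes the /etc/hosts entries above are in place; each ping waits at most one second):

```shell
#!/bin/sh
# Ping every RAC host name once and report which nodes answer.
check_node() {
  if ping -c 1 -W 1 "$1" >/dev/null 2>&1; then
    echo "$1 reachable"
  else
    echo "$1 NOT reachable"
  fi
}
for h in rac1 rac2 rac1-priv rac2-priv; do
  check_node "$h"
done
```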
Configure kernel parameters (rac1 and rac2): [root@rac1 ~]# vi /etc/sysctl.conf
Add:
# for oracle 11g
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2147483648
kernel.shmmax = 68719476736
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
Apply the parameters (rac1 and rac2): [root@rac1 ~]# /sbin/sysctl -p
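A small check script can confirm the running kernel now matches the file; a sketch that reads /proc/sys directly (the names and values are assumed from the block above):

```shell
#!/bin/sh
# Compare a running kernel parameter against the value Oracle requires.
check_kernel_param() {
  name=$1; want=$2
  # sysctl names map to /proc/sys paths with dots replaced by slashes
  path="/proc/sys/$(echo "$name" | tr . /)"
  have=$(cat "$path" 2>/dev/null || echo missing)
  if [ "$have" = "$want" ]; then
    echo "$name OK ($have)"
  else
    echo "$name: have $have, want $want"
  fi
}
check_kernel_param fs.aio-max-nr 1048576
check_kernel_param fs.file-max 6815744
check_kernel_param kernel.shmmni 4096
```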
Change the limits file (rac1 and rac2): [root@rac1 ~]# vi /etc/security/limits.conf
Add:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
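After re-login, the effective limits of a session can be checked; a sketch (run it as grid and again as oracle to confirm limits.conf took effect):

```shell
#!/bin/sh
# Print the limits the current login session actually received.
show_limits() {
  echo "nofile soft=$(ulimit -Sn) hard=$(ulimit -Hn)"
  echo "nproc  soft=$(ulimit -Su) hard=$(ulimit -Hu)"
}
show_limits
```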
Change the login file (rac1 and rac2): [root@rac1 ~]# vi /etc/pam.d/login
Add:
session required pam_limits.so
Change the profile file (rac1 and rac2): [root@rac1 ~]# vi /etc/profile
Add:
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
  umask 022
fi
Disable SELinux (rac1 and rac2): [root@rac1 ~]# vi /etc/selinux/config
Modify: SELINUX=disabled
Verify after the reboot:
[root@rac1 ~]# getsebool
getsebool: SELinux is disabled
Restart the host
Add users and groups (rac1 and rac2)
groupadd -g 501 oinstall
groupadd -g 502 dba
groupadd -g 503 oper
groupadd -g 504 asmadmin
groupadd -g 505 asmoper
groupadd -g 506 asmdba
useradd -g oinstall -G dba,asmdba,oper oracle
useradd -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
Set the grid and oracle passwords (rac1 and rac2):
[root@rac1 ~] # passwd grid
[root@rac1 ~] # passwd oracle
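Wrong group membership surfaces later as cryptic ASM permission errors, so it is worth checking now. A sketch with a hypothetical check_groups helper (not part of the original procedure):

```shell
#!/bin/sh
# check_groups USER GROUP... verifies USER is a member of every listed group.
check_groups() {
  user=$1; shift
  have=$(id -nG "$user" 2>/dev/null) || { echo "$user: no such user"; return 1; }
  for g in "$@"; do
    case " $have " in
      *" $g "*) ;;                               # group present
      *) echo "$user missing group $g"; return 1 ;;
    esac
  done
  echo "$user groups OK"
}
# Usage once the accounts exist, e.g.:
#   check_groups oracle oinstall dba asmdba oper
#   check_groups grid oinstall asmadmin asmdba asmoper oper dba
```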
Create directories (rac1 and rac2):
mkdir -p /u01/app/oracle
mkdir -p /u01/app/grid
mkdir -p /u01/app/11.2.0/grid
chown -R grid:oinstall /u01/app/grid
chown -R grid:oinstall /u01/app/11.2.0
chown -R oracle:oinstall /u01/app/oracle
mkdir -p /u01/app/oraInventory
chown -R grid:oinstall /u01/app/oraInventory
chmod -R 777 /u01/app/oraInventory
chmod -R 777 /u01
Switch users and add environment variables (rac1 and rac2)
[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ vi /home/oracle/.bash_profile
Add:
export ORACLE_SID=rac1
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"
export TMP=/tmp
export TMPDIR=$TMP
export PATH=$PATH:$ORACLE_HOME/bin
Note: on rac2, use export ORACLE_SID=rac2
[oracle@rac1 ~]$ su - grid
Password:
[grid@rac1 ~]$ vim .bash_profile
Add:
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"
export PATH=$ORACLE_HOME/bin:$PATH
Note: on rac2, use export ORACLE_SID=+ASM2
Shut down both VMs and edit each virtual machine's .vmx file in a text editor (rac1 and rac2). Add:
disk.EnableUUID = "TRUE"
disk.locking = "false"
scsi1.shared = "TRUE"
diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.dataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize = "4096"
diskLib.maxUnsyncedWrites = "0"
scsi1:1.deviceType = "disk"
scsi1:2.deviceType = "disk"
scsi1:3.deviceType = "disk"
scsi1:4.deviceType = "disk"
scsi1:5.deviceType = "disk"
scsi1:1.shared = "true"
scsi1:2.shared = "true"
scsi1:3.shared = "true"
scsi1:4.shared = "true"
scsi1:5.shared = "true"
Edit the virtual machine settings and add three disks, named OCR_VOTE.vmdk, data.vmdk, and fra.vmdk (note the disk sizes).
Create the remaining two disks the same way.
Note: for each disk, set the name, size, and virtual device node; choose SCSI 1:2 and 1:3 for the second and third disks.
Add the disks in rac2 the same way. Make sure each virtual device node (SCSI 1:2, 1:3) points at the corresponding disk created on rac1.
Power the virtual machines back on.
Check whether the disks are visible (rac1 and rac2)
[root@rac1 ~]# fdisk -l
Query the disk UUIDs (rac1 and rac2).
If a UUID cannot be queried, check that the virtual machine's .vmx file was edited correctly and contains disk.EnableUUID = "TRUE".
[root@rac1 ~]# /usr/lib/udev/scsi_id -g -u /dev/sdb
36000c2917d180b5daef20885fa95bfbe
[root@rac1 ~]# /usr/lib/udev/scsi_id -g -u /dev/sdc
36000c291b9457755e6bdafe27a6dd685
[root@rac1 ~]# /usr/lib/udev/scsi_id -g -u /dev/sdd
36000c29c91113958099603eb65a72ce3
Configure the udev rules file /etc/udev/rules.d/99-oracle-asmdevices.rules, substituting the UUIDs queried above:
[root@rac1 rules.d]# vi 99-oracle-asmdevices.rules
Add:
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000c29e1359ab575540edf6a00fd489", RUN+="/bin/sh -c 'mknod /dev/asmdisk01 b $major $minor; chown grid:oinstall /dev/asmdisk01; chmod 0660 /dev/asmdisk01'"
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000c2988f455427ca667639a40ee44f", RUN+="/bin/sh -c 'mknod /dev/asmdisk02 b $major $minor; chown grid:oinstall /dev/asmdisk02; chmod 0660 /dev/asmdisk02'"
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000c29c692398db6baa48338cc7b9b8", RUN+="/bin/sh -c 'mknod /dev/asmdisk03 b $major $minor; chown grid:oinstall /dev/asmdisk03; chmod 0660 /dev/asmdisk03'"
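Rather than hand-editing three nearly identical lines, a helper can print each rule from a disk name and the UUID returned by scsi_id; a sketch (gen_asm_rule is a hypothetical helper, the rule text mirrors the lines above):

```shell
#!/bin/sh
# gen_asm_rule NAME UUID prints one udev rule that binds the disk with that
# scsi_id UUID to /dev/NAME, owned by grid:oinstall with mode 0660.
gen_asm_rule() {
  name=$1; uuid=$2
  printf 'KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="%s", RUN+="/bin/sh -c '\''mknod /dev/%s b $major $minor; chown grid:oinstall /dev/%s; chmod 0660 /dev/%s'\''"\n' \
    "$uuid" "$name" "$name" "$name"
}
gen_asm_rule asmdisk01 36000c29e1359ab575540edf6a00fd489
gen_asm_rule asmdisk02 36000c2988f455427ca667639a40ee44f
gen_asm_rule asmdisk03 36000c29c692398db6baa48338cc7b9b8
```

Redirect the output into /etc/udev/rules.d/99-oracle-asmdevices.rules on both nodes, using your own UUIDs.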
Trigger the rules (rac1 and rac2):
[root@rac1 rules.d]# /sbin/udevadm trigger --type=devices --action=change
Reload UDEV (rac1 and rac2)
[root@rac1 rules.d]# /sbin/udevadm control --reload-rules
Diagnose udev (rac1 and rac2)
[root@rac1 rules.d]# /sbin/udevadm test /sys/block/sdb
[root@rac1 rules.d]# /sbin/udevadm test /sys/block/sdc
[root@rac1 rules.d]# /sbin/udevadm test /sys/block/sdd
Check to see if the binding is successful
[root@rac1 rules.d]# ls /dev/asm*
/dev/asmdisk01  /dev/asmdisk02  /dev/asmdisk03
Extract the grid installation package.
As root, allow X clients to connect:
[root@rac1 u01]# xhost +
Switch users:
[root@rac1 u01]# su - grid
Set up the display (for a graphical run over an Xmanager remote connection):
[grid@rac1 ~]$ export DISPLAY=<remote-machine-IP>:0.0
Install grid:
[grid@rac1 ~]$ cd /u01/grid/
[grid@rac1 grid]$ ./runInstaller
Mount the CD
[root@rac1 dev]# mount /dev/cdrom /mnt/
Install the required dependency packages (rac1 and rac2)
[root@rac1 /]# cd /mnt/Packages
[root@rac1 Packages]# rpm -ivh elfutils-libelf-devel-0.163-3.el7.x86_64.rpm
[root@rac1 Packages]# rpm -ivh libaio-devel-0.3.109-13.el7.x86_64.rpm
The media lacks the pdksh-5.2.14 package; download it and install it:
[root@rac1 u01]# rpm -ivh pdksh-5.2.14-37.el5_8.1.x86_64.rpm
Run the scripts (rac1 and rac2)
[root@rac1 ~]# cd /u01/app/oraInventory/
[root@rac1 oraInventory]# ./orainstRoot.sh
[root@rac1 oraInventory]# cd /u01/app/11.2.0/grid/
[root@rac1 grid]# ./root.sh
Error when running root.sh:
ohasd failed to start
Failed to start the Clusterware. Last 20 lines of the alert log follow:
2015-05-23 23:37:45.460:
[client(13782)] CRS-2101:The OLR was formatted using version 3.
Reason: RHEL 7 uses systemd rather than initd to run and restart processes, while root.sh starts the ohasd process the traditional initd way.
Solution:
Deconfigure the failed run so root.sh can be rerun:
/u01/app/11.2.0/grid/crs/install/roothas.pl -deconfig -force -verbose
1. As root, create a service file
# touch /usr/lib/systemd/system/ohas.service
# chmod 777 /usr/lib/systemd/system/ohas.service
2. Add the following to the newly created ohas.service file
[root@rac1 init.d] # cat / usr/lib/systemd/system/ohas.service
[Unit]
Description=Oracle High Availability Services
After=syslog.target
[Service]
ExecStart=/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null
Type=simple
Restart=always
[Install]
WantedBy=multi-user.target
3. Run the following command as the root user
systemctl daemon-reload
systemctl enable ohas.service
systemctl start ohas.service
4. View running status
[root@rac1 init.d]# systemctl status ohas.service
ohas.service - Oracle High Availability Services
   Loaded: loaded (/usr/lib/systemd/system/ohas.service; enabled)
   Active: failed (Result: start-limit) since Fri 2015-09-11 16:07:32 CST; 1s ago
  Process: 5734 ExecStart=/etc/init.d/init.ohasd run >/dev/null 2>&1 (code=exited, status=203/EXEC)
 Main PID: 5734 (code=exited, status=203/EXEC)
Sep 11 16:07:32 rac1 systemd[1]: Starting Oracle High Availability Services...
Sep 11 16:07:32 rac1 systemd[1]: Started Oracle High Availability Services.
Sep 11 16:07:32 rac1 systemd[1]: ohas.service: main process exited, code=exited, status=203/EXEC
Sep 11 16:07:32 rac1 systemd[1]: Unit ohas.service entered failed state.
Sep 11 16:07:32 rac1 systemd[1]: ohas.service holdoff time over, scheduling restart.
Sep 11 16:07:32 rac1 systemd[1]: Stopping Oracle High Availability Services...
Sep 11 16:07:32 rac1 systemd[1]: Starting Oracle High Availability Services...
Sep 11 16:07:32 rac1 systemd[1]: ohas.service start request repeated too quickly, refusing to start.
Sep 11 16:07:32 rac1 systemd[1]: Failed to start Oracle High Availability Services.
Sep 11 16:07:32 rac1 systemd[1]: Unit ohas.service entered failed state.
The status is failed at this point because the /etc/init.d/init.ohasd file does not exist yet.
Now root.sh can be rerun without the ohasd failed to start error.
If the error still appears, ohas.service may not have started immediately after root.sh created init.ohasd. In that case:
While root.sh is running, repeatedly check /etc/init.d until the init.ohasd file appears, then immediately start the service by hand: systemctl start ohas.service
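The "watch and start immediately" step can be scripted instead of retyping ls by hand; a sketch with a hypothetical wait_for_file helper that polls once per second:

```shell
#!/bin/sh
# wait_for_file FILE TIMEOUT_SECONDS: return 0 as soon as FILE exists,
# nonzero if it has not appeared when the timeout runs out.
wait_for_file() {
  f=$1; t=$2
  while [ "$t" -gt 0 ]; do
    [ -e "$f" ] && return 0
    sleep 1
    t=$((t - 1))
  done
  [ -e "$f" ]   # final check decides the exit status
}
# Run in a second terminal while root.sh executes:
#   wait_for_file /etc/init.d/init.ohasd 600 && systemctl start ohas.service
```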
[root@rac1 init.d]# systemctl status ohas.service
ohas.service - Oracle High Availability Services
   Loaded: loaded (/usr/lib/systemd/system/ohas.service; enabled)
   Active: active (running) since Fri 2015-09-11 16:09:05 CST; 3s ago
 Main PID: 6000 (init.ohasd)
   CGroup: /system.slice/ohas.service
           ├─6000 /bin/sh /etc/init.d/init.ohasd run
           └─6026 /bin/sleep 10
As the grid user, check that the installation completed correctly (rac1 and rac2):
[grid@rac1 grid]$ crs_stat -t
Configure asm disk
Use the grid user to execute asmca
[grid@rac1 grid]$ asmca
As root, extract the oracle installation package; as the oracle user, install oracle:
[oracle@rac1 database]$ export DISPLAY=<remote-machine-IP>:0.0
[oracle@rac1 database]$ ./runInstaller
Oracle user: fix for the ins_emagent.mk linking error (rac1)
[oracle@rac1 lib]$ cd $ORACLE_HOME/sysman/lib
[oracle@rac1 lib]$ cp ins_emagent.mk ins_emagent.mk.bak
[oracle@rac1 lib]$ vi ins_emagent.mk
Search for NMECTL (type /NMECTL) and append -lnnz11 to that line (the letter l, then nnz, then the number 11).
Then go back to the installer and click Retry.
Run the scripts (rac1 and rac2)
[root@rac1 ~]# cd /u01/app/oracle/product/11.2.0/dbhome_1/
[root@rac1 dbhome_1]# ./root.sh
As the oracle user, run DBCA to create the database (rac1).
Make sure the ORACLE_SID matches the one set earlier in the environment variables.
At this point the installation is complete; test afterwards to confirm the environment runs properly.