
Install Oracle11g RAC in multiple virtual machine environments


1. Installation environment and network planning 1.1. Installation environment

RAC node operating system: Linux 6.4 x86_64

Cluster software: Oracle Grid Infrastructure 11gR2 (11.2.0.4)

Oracle Database software: Oracle 11gR2 (11.2.0.4)

Shared storage: ASM

1.2. Network planning

Node name  Public IP      Private IP  Virtual IP     SCAN name  SCAN IP
rac1       192.168.2.231  1.1.6.231   192.168.2.233  rac-scan   192.168.2.235
rac2       192.168.2.232  1.1.6.232   192.168.2.234  rac-scan   192.168.2.235
(The addresses match the /etc/hosts configuration in section 6.2.)

Note: the SCAN IP is introduced in Oracle 11g; it acts as a pool of VIPs in front of the cluster.

1.3. Oracle software groups

1.4. RAC nodes

1.5. Storage components

2. Create the virtual machines 2.1. VMware vSphere

Log in to the virtualized host as administrator:

Right-click the host and select "New Virtual Machine"

Select a custom configuration, next

Name the virtual machine rac1, the next step

Select the location of the virtual machine, and next step

Select the location where the virtual machine files are stored (preferably on shared storage with ample space, to prevent data loss), and next step

Select the highest virtual machine hardware version, next step

Select Linux as the operating system and Red Hat Enterprise Linux 6 (64-bit) as the version. Next.

Configure CPU, memory, next step

Select the network card and configure two network cards, one as Public and the other as Private. Next

Select SCSI drive type: LSI Logic parallel, next step

Select to create a new disk, next

Configure disk size, next step

Explanation of the above three options:

Select the virtual device node (default SCSI (0:0)) and leave the mode as not independent. Next step

Select the option to edit settings before the virtual machine is created, and continue

Edit the optical drive device, select the ISO file, and finish

View the new virtual machine configuration

Create another node, rac2, in the same way.

2.2. VMware Workstation

Create a new virtual machine rac1, select Custom, and next

Choose to install the operating system ISO, next step

Name the virtual machine, select the storage location, and next

Configure memory size, next step

Select bridged networking as the network type, and next step

Select the LSI Logic controller type, and next step

Select SCSI as the disk type, and next

Configure the disk size and choose to split the virtual disk into multiple files, next

Specify disk file, next step

Create another node, rac2, in the same way.

2.3. VirtualBox

Create a new virtual machine rac1, name the virtual machine, select the system type as Linux and the version as Red Hat (64bit). Next

Configure memory size, next step

Create a virtual hard disk, create

Select the virtual disk file type, next

Select dynamic allocation, next step

Select the location and size of the file, next

Creation completed

Set virtual machine parameters

Select Storage, in the properties on the right, click on the right side of the assigned CD-ROM drive, and select install operating system ISO file.

Create another node, rac2, in the same way.

3. Install the system 3.1. Install the system (two nodes)

The installation process is the same on both nodes, so it is not described in full; only the error-prone steps are covered briefly.

Select basic storage device

When prompted whether to discard all data, choose Yes, discard any data

Select a custom partition, Create Custom Layout

Enter the partitioning interface

Select the standard partition mode, Standard Partition

Create virtual memory (swap), boot partition, / partition respectively

Partition result

Select Format to confirm

Choose to write to disk

Just choose the default.

Select the installation type: Software Development Workstation, Customize Later; leave the rest at defaults

Start installing the system

After installation, you are prompted to restart the system and go through a series of first-boot configuration steps, including enabling Kdump

Restart the system again as prompted; the system installation is complete.

3.2. Configure the network (two nodes)

This step requires that the virtual machine has two network cards, Public and Private.

Log in to the virtual machine using the root user, right-click, and select Edit Connection

Configure the two network cards with the IPs planned in the table above, following these steps
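For reference, a minimal ifcfg file for the rac1 public interface might look like the following; treating eth0 as the public NIC and using a /24 netmask are assumptions, so adjust them to your environment:

vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.2.231
NETMASK=255.255.255.0

The private interface (for example eth1 with 1.1.6.231) is configured the same way; afterwards run service network restart.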

3.3. Delete the automatically generated virtual network card (two nodes)

The so-called virtual network card is the virbr0 listed by executing the ifconfig command, as shown in the figure

Execute the following command to delete the virtual network card

virsh net-list

virsh net-destroy default

virsh net-undefine default

service libvirtd restart

Use the ifconfig command again to see that the virtual network card no longer exists

3.4. Test network (two nodes)

1. From the physical machine, ping the Public IPs of the two virtual machines rac1 and rac2

2. From the rac1 node, ping the Public IP and Private IP of the rac2 node

3. From the rac2 node, ping the Public IP and Private IP of the rac1 node

All three checks must succeed before continuing.
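These checks can also be scripted; a small sketch using the addresses from the planning table (run on rac1, swap the targets when running on rac2):

# ping the public and private addresses of the other node
for ip in 192.168.2.232 1.1.6.232; do ping -c 3 $ip; done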

4. Add shared storage 4.1. VMware vSphere 4.1.1. Configure the rac1 node

Close the rac1 node, right-click the rac1 node to select edit settings, enter the configuration interface, and click add

Select hard drive, next step

Create a new virtual disk

Configure the disk size, select Thick Provision Eager Zeroed (required for clustering), and specify the storage location of the disk. Next step

For the virtual device node select SCSI (1:0), and set the mode to Independent (required for clustering), next step

In the virtual machine properties, select the newly added SCSI controller 1 and set its bus sharing to physical mode (to support sharing), so that the rac1 and rac2 nodes can access the disks concurrently

Repeat the above steps to add two more disks, selecting SCSI (1:1) and SCSI (1:2) as their virtual device nodes.

The rac1 node configuration information for adding 3 shared disks is as follows:

4.1.2. Configure the rac2 node

Close the rac2 node, right-click the rac2 node to select edit settings, enter the configuration interface, and click add

Select hard drive, next step

Select an existing virtual disk, next

Select the specified storage device. Note here: select the first disk created in the rac1 node

For the virtual device node select SCSI (1:0), set the mode to Independent, next step

As with the rac1 node, in the rac2 node properties, select the SCSI controller 1 drive you just added and configure it to be in physical mode for sharing

Repeat the above steps to add the remaining 2 disks. Note that the drive number should be SCSI 1:1, then SCSI 1:2.

The rac2 node configuration information for adding 3 shared disks is as follows:

4.2. VMware Workstation 4.2.1. Create the shared disks on the physical host

vmware-vdiskmanager.exe -c -s 5G -a lsilogic -t 2 "d:\Virtual Machines\RAC\shared\asm1.vmdk"

vmware-vdiskmanager.exe -c -s 5G -a lsilogic -t 2 "d:\Virtual Machines\RAC\shared\asm2.vmdk"

vmware-vdiskmanager.exe -c -s 20G -a lsilogic -t 2 "d:\Virtual Machines\RAC\shared\asm3.vmdk"

Note: -a specifies the disk adapter type; -t 2 creates a preallocated (flat) disk file.

4.2.2. Shut down the nodes and edit the vmx file with Notepad, for example rac1.vmx (two nodes)

Add the following:

# shared disks configure

disk.EnableUUID = "TRUE"
disk.locking = "FALSE"
diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.dataCacheMinReadAheadSize = "0"
diskLib.maxUnsyncedWrites = "0"
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "virtual"
scsi1:0.present = "TRUE"
scsi1:0.mode = "independent-persistent"
scsi1:0.fileName = "D:\Virtual Machines\RAC\shared\asm1.vmdk"
scsi1:0.deviceType = "disk"
scsi1:0.redo = ""
scsi1:1.present = "TRUE"
scsi1:1.mode = "independent-persistent"
scsi1:1.fileName = "D:\Virtual Machines\RAC\shared\asm2.vmdk"
scsi1:1.deviceType = "disk"
scsi1:1.redo = ""
scsi1:2.present = "TRUE"
scsi1:2.mode = "independent-persistent"
scsi1:2.fileName = "D:\Virtual Machines\RAC\shared\asm3.vmdk"
scsi1:2.deviceType = "disk"
scsi1:2.redo = ""

Note:

1. Three shared disks were added here, so three sections (scsi1:0, scsi1:1, scsi1:2) were added; add one section per additional disk.

2. The filename after scsi1:x.fileName = must match the location of the disks you created with vmware-vdiskmanager.exe on the physical host.

3. Restart both nodes and check the virtual machine configuration to confirm that the shared disk files are attached (the disks may not be recognized without a reboot).

4.3. VirtualBox 4.3.1. Configure the rac1 node

Method 1:

In the rac1 node settings, select Storage, select the SATA Controller, and click the add hard disk icon

Choose to create a new disk (Create new disk)

Select the default VDI disk mode

Select a fixed size (this option must be selected for shared disks), next

Configure shared disk storage path and size

The newly created disk is already connected to the rac1 virtual machine

Select this new disk.

Click the Modify icon and select "Shareable"

Repeat the above steps to create 2 more disks.

The final configuration after the disks are added:

Method 2:

1. Execute the following statement in the command line of the physical host to create the disk

VBoxManage.exe createhd --filename asm1.vdi --size 5120 --format VDI --variant Fixed

VBoxManage.exe createhd --filename asm2.vdi --size 5120 --format VDI --variant Fixed

VBoxManage.exe createhd --filename asm3.vdi --size 204800 --format VDI --variant Fixed

2. Attach the disks to the rac1 virtual machine

VBoxManage.exe storageattach rac1 --storagectl "SATA controller" --port 1 --device 0 --type hdd --medium asm1.vdi --mtype shareable

VBoxManage.exe storageattach rac1 --storagectl "SATA controller" --port 2 --device 0 --type hdd --medium asm2.vdi --mtype shareable

VBoxManage.exe storageattach rac1 --storagectl "SATA controller" --port 3 --device 0 --type hdd --medium asm3.vdi --mtype shareable

3. Set up disk sharing

VBoxManage.exe modifyhd asm1.vdi --type shareable

VBoxManage.exe modifyhd asm2.vdi --type shareable

VBoxManage.exe modifyhd asm3.vdi --type shareable

4. View rac1 configuration information

4.3.2. Configure the rac2 node

The rac2 node only needs to configure the connection between the disk drive and the virtual machine, as follows:

VBoxManage.exe storageattach rac2 --storagectl "SATA controller" --port 1 --device 0 --type hdd --medium asm1.vdi --mtype shareable

VBoxManage.exe storageattach rac2 --storagectl "SATA controller" --port 2 --device 0 --type hdd --medium asm2.vdi --mtype shareable

VBoxManage.exe storageattach rac2 --storagectl "SATA controller" --port 3 --device 0 --type hdd --medium asm3.vdi --mtype shareable

5. Set up shared storage 5.1. Partition the shared disks (one node only)

fdisk -l

The purposes of the three disks:

fdisk /dev/sdb -- stores the OCR and voting disk files

fdisk /dev/sdc -- stores the fast recovery area files

fdisk /dev/sdd -- stores the database files

fdisk /dev/sd[b|c|d]

n → p → 1 → Enter → Enter → w to complete the partition

fdisk -l -- check again, also on the rac2 node, to make sure it is exactly the same as rac1

After partitioning, do not format the new partitions; leave them as raw devices.
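For reference, the keystroke sequence above can also be fed to fdisk non-interactively; this is only a sketch, assuming /dev/sdb should get a single primary partition spanning the whole disk (the two empty lines accept the default first and last cylinder), and it should be repeated for /dev/sdc and /dev/sdd:

fdisk /dev/sdb <<EOF
n
p
1


w
EOF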

5.2.Configuring ASM disks (two nodes)

Use udev to configure the ASM disks

rpm -qa | grep udev -- check whether udev is installed

Execute the following command to get the scsi_id of each disk

/sbin/scsi_id -g -u -d /dev/sd[b|c|d]

36000c292f99f2349911c3766f3cc53d7

36000c293f4c9f2c1fdd38a63e5861ad3

36000c2994d5eda8fbefc5922b14ab651

Edit the udev configuration: add a rules file under /etc/udev/rules.d/ and set the owner, group and permissions as in the following parameters.

Method 1:

vi /etc/udev/rules.d/99-x-asmdisk.rules

KERNEL=="sdb1", BUS=="scsi", PROGRAM="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="36000c292f99f2349911c3766f3cc53d7", NAME="asmdiskOCR", OWNER:="grid", GROUP:="dba", MODE="0660"

KERNEL=="sdc1", BUS=="scsi", PROGRAM="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="36000c293f4c9f2c1fdd38a63e5861ad3", NAME="asmdiskDATA", OWNER:="grid", GROUP:="dba", MODE="0660"

KERNEL=="sdd1", BUS=="scsi", PROGRAM="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="36000c2994d5eda8fbefc5922b14ab651", NAME="asmdiskFRA", OWNER:="grid", GROUP:="dba", MODE="0660"

Note:

1. Each rule must be written on a single line

2. Leave a space between one key=value pair and the next

3. The value after GROUP:= is not fixed; either dba or asmadmin works

Start udev

start_udev

ls /dev/asmdisk*

Method 2:

vi /etc/udev/rules.d/99-oracle-asmdevices.rules

Enter and execute the following loop directly on the command line

for i in b c d
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"" >> /etc/udev/rules.d/99-oracle-asmdevices.rules
done

Note: b, c, d here are sdb, sdc, sdd with the sd prefix removed

Start udev

start_udev

ls /dev/asm*

6. Configure the Linux system 6.1. User groups and users (two nodes, root user) 6.1.1. Create the Oracle software groups

groupadd -g 601 oinstall
groupadd -g 602 dba
groupadd -g 603 oper
groupadd -g 604 asmadmin
groupadd -g 605 asmdba
groupadd -g 606 asmoper

6.1.2. Create grid and oracle users

useradd -u 601 -g oinstall -G asmadmin,asmdba,asmoper grid

useradd -u 602 -g oinstall -G dba,oper,asmdba oracle

6.1.3. Set passwords for grid and oracle users

passwd grid

passwd oracle

6.2. Hostname-to-IP mapping file setup (two nodes)

vi /etc/hosts -- add the following

# public:

192.168.2.231 rac1

192.168.2.232 rac2

# vip:

192.168.2.233 rac1-vip

192.168.2.234 rac2-vip

# priv

1.1.6.231 rac1-priv

1.1.6.232 rac2-priv

# SCAN

192.168.2.235 rac-scan

6.3. Configure Linux kernel parameters (two nodes)

vi /etc/sysctl.conf -- add the following

fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

sysctl -p -- make the above settings take effect

6.4. Set shell limits for the grid and oracle users (two nodes)

vi /etc/security/limits.conf -- add the following

grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768

vi /etc/pam.d/login -- add the following

session required pam_limits.so

6.5. Create the Oracle Inventory directory (two nodes)

mkdir -p /u01/app/oraInventory
chown -R grid:oinstall /u01/app/oraInventory
chmod -R 775 /u01/app/oraInventory

6.6. Create an Oracle Grid Infrastructure Home directory (two nodes)

mkdir -p /u01/app/grid
mkdir -p /u01/app/grid/crs
mkdir -p /u01/app/grid/11.2.0
chown -R grid:oinstall /u01/app/grid
chmod -R 775 /u01/app/grid

6.7. Create the Oracle RDBMS Home directory (two nodes)

mkdir -p /u01/app/oracle
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/app/oracle
mkdir -p /u01/app/oracle/product/11.2.0/db1
chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/db1
chmod -R 775 /u01/app/oracle/product/11.2.0/db1

6.8. Install the required system support packages (two nodes, 64 & 32 bit)

binutils
compat-libstdc++-33
elfutils-libelf
elfutils-libelf-devel
gcc gcc-c++
glibc
glibc-common
glibc-devel
glibc-headers
ksh
libaio
libaio-devel
libgcc
libstdc++
libstdc++-devel
make
numactl-devel
sysstat
unixODBC
unixODBC-devel

Method 1:

Copy the contents of the system CD to a local directory

mkdir /yum/

cp -r /media/* /yum/

Delete the contents of /etc/yum.repos.d/ and create yum.repo

vi /etc/yum.repos.d/yum.repo

[yum]
name=yum
baseurl=file:///yum/
enabled=1
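With the local repository in place, the packages listed above can be installed in one pass; a sketch (package names exactly as listed above, actual availability depends on the installation media):

yum install -y binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers ksh libaio libaio-devel libgcc libstdc++ libstdc++-devel make numactl-devel sysstat unixODBC unixODBC-devel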

Method 2:

rpm -ivh --nodeps --force *.rpm

6.9. Modify grid and oracle user environment variables (two nodes)

su - grid -- switch to the grid user and modify the environment variables

vi .bash_profile -- remove duplicates and add the following

export ORACLE_HOSTNAME=rac1 (rac2 on the second node)
export ORACLE_UNQNAME=rac
export ORACLE_BASE=/u01/app/grid/crs
export ORACLE_HOME=/u01/app/grid/11.2.0
export ORACLE_SID=+ASM1 (+ASM2 on the second node)
export ORACLE_TERM=xterm
export PATH=$ORACLE_HOME/bin:/usr/sbin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export TMP=/tmp
export TMPDIR=$TMP

su - oracle -- switch to the oracle user and modify the environment variables

vi .bash_profile -- remove duplicates and add the following

export ORACLE_HOSTNAME=rac1 (rac2 on the second node)
export ORACLE_UNQNAME=rac
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db1
export ORACLE_SID=rac1 (rac2 on the second node)
export ORACLE_TERM=xterm
export PATH=$ORACLE_HOME/bin:/usr/sbin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export TMP=/tmp
export TMPDIR=$TMP
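After editing .bash_profile on each node, reload it and spot-check the variables; a simple sanity check:

source ~/.bash_profile
echo $ORACLE_BASE $ORACLE_HOME $ORACLE_SID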

7. Turn off the firewall 7.1. Disable SELinux (two nodes)

1. Effective immediately

setenforce 0 -- disable SELinux immediately

getenforce -- view the SELinux status

2. Permanent effect

vi /etc/selinux/config -- disable SELinux permanently

Change SELINUX=enforcing to SELINUX=disabled

7.2. Turn off the iptables firewall (two nodes)

service iptables save

service iptables stop -- stop iptables

chkconfig iptables off -- prevent iptables from starting with the system

chkconfig --list iptables -- view the firewall status list

8. Set ssh equivalence for grid and oracle users

The procedure is the same for both users; switch to the corresponding user and repeat it.

su - grid -- on both nodes

cd /home/grid
rm -rf ~/.ssh
mkdir ~/.ssh
chmod 700 ~/.ssh
ssh-keygen -t rsa
ssh-keygen -t dsa

Configure synchronization

On the rac1 node

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys

On the rac2 node

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys rac1:~/.ssh/authorized_keys

Test connectivity -- on both nodes

ssh rac1 date
ssh rac2 date
ssh rac1-priv date
ssh rac2-priv date

Note: the first time you connect, you are asked whether to continue; enter yes.
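The four checks can also be run in one loop; a sketch using the hostnames from /etc/hosts (run as both grid and oracle on both nodes):

for h in rac1 rac2 rac1-priv rac2-priv; do ssh $h date; done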

9. Clock synchronization 9.1. Synchronize clocks using the Linux NTP service (two nodes)

vi /etc/ntp.conf -- on the primary node rac1, add the following

server 127.127.1.0
fudge 127.127.1.0 stratum 11
broadcastdelay 0.008

vi /etc/ntp.conf -- on the other node rac2, add the following

server 192.168.2.231 prefer
driftfile /var/lib/ntp/drift
broadcastdelay 0.008

vi /etc/sysconfig/ntpd -- configure the NTP service

SYNC_HWCLOCK=yes
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

/etc/init.d/ntpd restart -- start the NTP service

chkconfig ntpd on -- start the NTP service with the system

netstat -an | grep 123 -- make sure the port is listening on udp

ntpstat -- view the status of the NTP service

9.2. Synchronize clocks using the Oracle cluster CTSS service (two nodes)

11g R2 has its own time synchronization mechanism (CTSS) and can work without NTP. If NTP is present, CTSS runs in observer mode the whole time. To have the Cluster Time Synchronization Service provide synchronization within the cluster, the Network Time Protocol (NTP) and its configuration must be removed.

service ntpd stop

chkconfig ntpd off

mv /etc/ntp.conf /etc/ntp.conf.original -- while /etc/ntp.conf exists, CTSS will not become active

rm /var/run/ntpd.pid -- this file holds the pid of the NTP daemon
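Once Grid Infrastructure is installed, you can confirm whether CTSS has become the active time source; this check is not part of the original steps:

crsctl check ctss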

10. Prepare before installation

This operation needs to be performed by two nodes separately!

Upload the installation files. This installation uses Oracle 11.2.0.4, which comes as 3 installation packages: packages 1 and 2 are the database, package 3 is grid.

Unzip the 3 packages; this produces the two directories /setup/database and /setup/grid.

Note: on the rac2 node, only the grid package needs to be uploaded and unzipped.

Install the cvuqdisk operating system package (under /setup/grid/) on both RAC nodes. Without cvuqdisk, the Cluster Verification Utility cannot discover the shared disks (/dev/asm*) and you will receive a "Package cvuqdisk not installed" error later in the installation.

Install the cvuqdisk-1.0.9-1.rpm package as the root user

export CVUQDISK_GRP=oinstall

cd /setup/grid/rpm

rpm -ivh cvuqdisk-1.0.9-1.rpm

su - grid -- switch to the grid user

cd /setup/grid

./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose > 1.log -- write the output to 1.log for easier review

If an error is reported, run the following commands and then re-run the check

/tmp/CVU_11.2.0.1.0_grid/runfixup.sh

rm -rf /tmp/bootstrap

./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose

Note: in the output of the above command, warnings about the network, time synchronization, and DNS can be ignored!

./runcluvfy.sh stage -post hwos -n rac1,rac2 -verbose

Note:

1. Both CVU checks must pass before the installation can continue (network, time, and DNS warnings can be ignored).

2. At this point, make sure that the four IPs (Public and Private) of the two nodes can all reach each other via ping.

11. Install Oracle Grid Infrastructure 11.1. Installation process

This operation only needs to be performed on one node!

The graphical interface installation requires the following command to be executed by the root user

xhost +

If the output above appears, you can continue with the graphical installation.

su - grid

cd /setup/grid

./runInstaller

Skip updates

Select installation option

Select advanced installation (easy to set parameters)

The Cluster Name can be chosen freely; the SCAN Name must match the configuration in /etc/hosts; the port defaults to 1521. Do not configure GNS

Add Node 2 Information

Network information

Adopt ASM storage management

Configure ocr and voting disk storage disk groups, where the disk group name is CRS

Change the discovery path to /dev/asm* to find the shared disks

Select the planned disk to join the CRS group

Set the ASM administrative password (to facilitate the use of a unified password)

Password does not comply with oracle rules warning, select yes to continue

Do not configure IPMI

Select the grid software ORACLE_BASE and ORACLE_HOME (.bash_profile was configured beforehand, so the paths are found automatically)

Select the oraInventory directory (this directory mainly stores installation logs, etc.)

Pre-installation checks.

If ll /dev/asm* shows the same content on both nodes, the Device Checks for ASM warning can be ignored.

If you are using the Linux NTP service, a Network Time Protocol warning appears (there is no warning if you use CTSS)

There may also be some warnings about DNS and the network; just ignore them.

Skip the warning and choose yes to continue

Installation overview

Installation

The installer prompts the root user to execute the root.sh script on both nodes. Note the final hint: run root.sh on the other nodes only after it has completed on the installation node (the remaining nodes can then run it in parallel)!

Rac1 node execution completed

Next, execute root.sh on other nodes

INS-20802 is a listener-related error. The cause is that the SCAN address is configured in /etc/hosts. If you can ping the SCAN IP at this step, this error can be ignored. Click OK to continue; the grid installation is complete.

11.2. Check after installation

su - grid

Check the CRS status

crsctl check crs

Check the cluster resources

crs_stat -t

Check the CRS node information

olsnodes -n

Check the Oracle TNS listener processes on both nodes

ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'

Confirm that Oracle ASM is running for the Oracle cluster files:

su - grid

srvctl status asm -a

11.3. Configure ASM disk groups

As the grid user, run asmca and add the physical disks to ASM disk groups

Create Oracle data storage disk

Complete the configuration of oracle data disk

Repeat the above steps to create the flash recovery area disk group.

Final configuration result:

12. Install Oracle Database 12.1. Installation process

This operation only needs to be performed on one node!

The graphical interface installation requires the following command to be executed by the root user

xhost +

If the output above appears, you can continue with the graphical installation.

su - oracle

cd /setup/database

./runInstaller

Do not receive Oracle security updates

Security update mailbox warning, select yes to continue

Skip updates

Choose to install only the database software (Install database software only)

Select all nodes and test SSH connectivity

Connectivity test successful

Select language

Select Enterprise Edition

Select ORACLE_BASE and ORACLE_HOME

Oracle Management Group

Pre-installation checks. The SCAN warning appears because SCAN can be configured with up to 3 addresses and a corresponding domain name; here no DNS name is used and only one IP is configured, so a warning is reported, but it does not affect use and can be ignored!

Ignore the SCAN-IP warning and select yes to continue

Installation overview

Installation

The root user executes the root.sh script. This can be done in parallel on all nodes (unlike at the end of the grid installation), but running it node by node is recommended.

Installation completed

12.2. Install the database

At this point, check as the oracle user whether the listener exists.

su - oracle

lsnrctl status

After the listener check is complete, execute the following command to start creating the database

dbca

Create a database

Select the installation template (General Purpose or Transaction Processing)

Select Admin-Managed, fill in the Global Database Name (the ORACLE_UNQNAME configured in .bash_profile), and select all nodes

Select OEM

Set administrative password (to facilitate the use of a unified password)

Select a database file store

Enter the ASM diskgroup management password and OK to continue

Select the flash recovery ASM disk group and enable archiving as needed

Configure memory for Database and enable automatic memory management

Change the character set (ZHS16GBK is selected here because Chinese characters must be supported)

Overview of database storage

Whether to create an Oracle installation script (Generate Database Creation Scripts)

Create a database overview

Create an Oracle installation script

Create a database

After the database is created, click OK.
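Once the database is created, the character set chosen above can be verified from sqlplus; this check is not part of the original walkthrough:

su - oracle
sqlplus / as sysdba
SQL> select value from nls_database_parameters where parameter = 'NLS_CHARACTERSET';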

13. Oracle RAC maintenance

The command set of Oracle Clusterware can be divided into the following four types:

Node layer: olsnodes

Network layer: oifcfg

Cluster layer: crsctl, ocrcheck, ocrdump, ocrconfig

Application layer: srvctl, onsctl, crs_stat

Note: CRS maintenance should be performed as the grid user (the root user must run the commands from the grid $ORACLE_HOME/bin directory). Maintaining as the grid user is recommended.

13.1. Node layer

olsnodes -n -i -s

13.2. Network layer 13.2.1. List the CRS NICs

oifcfg iflist

13.2.2. Obtain CRS network card information

oifcfg getif

13.3. Cluster layer 13.3.1. Check the CRS status

crsctl check crs

13.3.2. Check a single CRS service

crsctl check cssd [crsd | evmd]

13.3.3. Check whether CRS starts automatically (root user)

crsctl disable | enable crs

13.3.4. Start, stop, and view CRS (root user)

crsctl start | stop crs

13.3.5. View the Votedisk location

crsctl query css votedisk

13.3.6. Maintain Votedisk

When installing Clusterware, if you choose the External Redundancy policy while configuring the votedisk, you can specify only one votedisk. However, even with External Redundancy as the redundancy policy, multiple votedisks can be added, but they must be added with the crsctl command. Once multiple votedisks are added, they mirror each other, preventing a single point of failure of the votedisk.

Note that the votedisk uses a "majority available" algorithm: if there is more than one votedisk, more than half of them must be available for Clusterware to work properly. For example, with 4 votedisks configured, the cluster keeps working if one goes bad; if two go bad, the majority requirement is no longer met, the cluster crashes immediately, and all nodes reboot immediately. Therefore, if you add votedisks, do not add just one; add at least 2. This differs from OCR, where only one OCR needs to be configured.

Adding and removing votedisks is a dangerous operation: you must stop the database, stop ASM, and stop the CRS stack before operating, and you must use the -force parameter.

The following actions should be performed using the root user

1) View the current configuration

./crsctl query css votedisk

2) Stop CRS on all nodes:

./crsctl stop crs

3) Add a votedisk

./crsctl add css votedisk /dev/raw/raw1 -force

Note: even after CRS is shut down, adding and removing votedisks still requires the -force parameter, and -force is safe to use only when CRS is down. Otherwise, the command reports: Cluster is not in a ready state for online disk addition.

4) Confirm the configuration after the addition:

./crsctl query css votedisk

5) Start CRS

./crsctl start crs

13.3.7. View OCR status

ocrcheck

13.3.8. Maintain OCR

1. View the automatic OCR backups

ocrconfig -showbackup

2. Backup and restore simulation case using export and import (root user)

Oracle recommends taking an OCR backup before making cluster changes such as adding or deleting nodes. You can back up to a specified file with export. After operations such as replace or restore, Oracle recommends running the cluvfy comp ocr -n all command for a full check. This command ships with the clusterware installation software.

1) First stop CRS on all nodes

./crsctl stop crs

2) Export the OCR content as the root user

./ocrconfig -export /u01/ocr.exp

3) Restart CRS

./crsctl start crs

4) Check the CRS status

./crsctl check crs

5) Destroy the OCR content

dd if=/dev/zero of=/dev/raw/raw1 bs=1024 count=102400

6) Check OCR consistency

./ocrcheck

7) Use the cluvfy tool to check consistency

./runcluvfy.sh comp ocr -n all

8) Use import to restore the OCR content

./ocrconfig -import /u01/ocr.exp

9) Check OCR again

./ocrcheck

10) Use the cluvfy tool to check again

./runcluvfy.sh comp ocr -n all

3. Moving the OCR file location, simulation case (root user)

Move the OCR from /dev/raw/raw1 to /dev/raw/raw3.

1) Check whether there is an OCR backup

./ocrconfig -showbackup

If you do not have a backup, you can immediately perform an export as a backup:

./ocrconfig -export /u01/ocrbackup -s online

2) View the current OCR configuration

./ocrcheck

Status of Oracle Cluster Registry is as follows:

Version: 2

Total space (kbytes): 147352

Used space (kbytes): 4364

Available space (kbytes): 142988

ID: 610419116

Device/File Name: /dev/raw/raw1

Device/File integrity check succeeded

Device/File not configured

Cluster registry integrity check succeeded

The output shows that there is currently only one Primary OCR, at /dev/raw/raw1, and no Mirror OCR. Because there is only one OCR file, you cannot change its location directly; you must first add a mirror and then modify it, otherwise the command reports: Failed to initialize ocrconfig.

3) Add a Mirror OCR

./ocrconfig -replace ocrmirror /dev/raw/raw4

4) Confirm that the addition succeeded

./ocrcheck

5) Change the location of the primary OCR

./ocrconfig -replace ocr /dev/raw/raw3

Confirm that the modification succeeded:

./ocrcheck

6) After the change with the ocrconfig command, the /etc/oracle/ocr.loc file on all RAC nodes is also synchronized automatically. If it is not synchronized automatically, you can change it manually to the following.

more /etc/oracle/ocr.loc

ocrconfig_loc=/dev/raw/raw1
ocrmirrorconfig_loc=/dev/raw/raw3
local_only=FALSE

13.4. Application layer 13.4.1. Check the CRS status

crs_stat -t -v

crs_stat ora.scan1.vip -- view the status of ora.scan1.vip

crs_stat -ls -- view the permission definition of each resource, in a format similar to Linux permissions

13.4.2. The onsctl command

The onsctl command is used to manage and configure ONS (Oracle Notification Service). ONS is the basis on which Oracle Clusterware implements the FAN event push model.

In the traditional model, the client needs to check the server periodically to judge the state of the server, which is essentially a pull model. Oracle 10g introduces a new PUSH mechanism-FAN (Fast Application Notification). When something happens on the server, the server will actively notify the client of the change, so that the client can know the change of the server as soon as possible. The introduction of this mechanism relies on the ONS implementation, and the ONS service needs to be configured before using the onsctl command.

View ONS service status

onsctl ping

Start ONS

onsctl start

View the details of ONS

onsctl debug

13.4.3. The srvctl command

srvctl can manage the following resources: Database, Instance, ASM, Service, Listener, and Node Application, where Node Application includes GSD, ONS, and VIP.

View database configuration

srvctl config database -- display the database names

srvctl config database -d rac -- display the details of the rac database

View resource information of each node

srvctl config nodeapps

srvctl config nodeapps -n rac1 -a -g

View listener information

srvctl config listener

srvctl config listener -n rac1

Enable or disable automatic startup of the database

srvctl enable | disable database -d rac

srvctl enable | disable database -d rac -i rac1

Start the database

srvctl start database -d rac

Start the instance

srvctl start instance -d rac -i rac1

Reference: http://www.cnblogs.com/rootq/archive/2012/11/14/2769421.html

14. Appendix 14.1. CRS glossary 14.1.1. Some CRS services and their functions

cvu: responsible for the Oracle health check process.

ons: responsible for communication between nodes and cluster synchronization.

gsd: responsible for allocating service resources; used only by 9i RAC, but kept for backward compatibility without affecting performance. It is best left alone.

oc4j: a resource for Database Workload Management (DBWLM). WLM is not available until 11.2.0.2. Leave it alone.

acfs (ASM Cluster File System): an ASM-based cluster file system added in 11.2, supporting a wider range of stored file types. ora.acfs indicates whether the service is supported.

14.1.2. Oracle Cluster Registry (OCR)

OCR maintains the configuration information of the entire cluster, including RAC and Clusterware resources. To solve the cluster "amnesia" problem, the whole cluster shares one OCR configuration; at most two copies exist, a primary OCR and a mirror OCR that mirror each other to guard against a single point of failure of the OCR.

14.1.3. Votedisk

The Voting Disk records node membership information and must be stored on shared storage. Its main purpose is to decide, in the event of a split-brain, which partition gains control; the other partitions must be evicted from the cluster. If there are four votedisks and one goes bad, the cluster keeps working; if two go bad, the majority requirement is no longer met, the cluster crashes immediately, and all nodes reboot immediately.

14.1.4. Admin-Managed and Policy-Managed

Introduction of Policy-Managed mode

Policy-based management is built on server pools (Server Pools). Simply put, you first define some server pools, each containing a certain number of servers, and then define some policies; based on these policies, Oracle automatically decides how many database instances run on which machines in the pool. The suffix of the database instance names, the number of instances, and the hosts they run on are all determined by the policy, not by the database administrator.

What kind of environment suits this new management style? When managing a large number of servers in a cluster that runs several RAC databases of differing importance and with differing policies, Policy-Managed is recommended to simplify management. In fact, Oracle recommends Policy-Managed for managing the entire database cluster only when there are more than 3 servers. Consider what Policy-Managed makes possible: with 10 servers, you can define server pool importance according to the importance of each application, and then, when some machines go down unexpectedly, the cluster can still automatically keep enough machines serving the databases of the important systems while shrinking the number of servers for non-critical systems.

Policy management: the DBA specifies the server pool (excluding Generic or Free) in which the database resource runs; Oracle Clusterware is responsible for placing the database resource on a server.
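For illustration only (this installation uses an Admin-Managed database), a server pool could be defined with srvctl; the pool name and limits below are made-up values:

srvctl add srvpool -g mypool -l 2 -u 4 -i 10
srvctl config srvpool -g mypool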

Introduction of Admin-Managed mode

In fact, the above statement has clearly explained the difference between Policy-Managed and Admin-Managed. Let's review what it was like to create a RAC database in the past. In the dbca interface, we would choose to run the database instance on several machines in the entire cluster, or two, three, or even more, but as long as a few machines are selected during installation, they will always run on these machines if we do not add or subtract nodes later. Moreover, the database instances on each host are usually automatically named dbname1 to dbnameN according to the sort of host name. These will not change automatically after the administrator has installed them. This is the Admin-Managed way.

Administrator management: DBA specifies all servers on which the database resource runs and manually places the resource as needed. This is the management policy used in previous versions of the Oracle database.

Reference: http://czmmiao.iteye.com/blog/2128771

14.1.5. Grid Naming Service (GNS)

There are three ways to configure SCAN in RAC:

- /etc/hosts -- the common way

- DNS

- GNS -- DHCP + DNS

Reference: http://blog.csdn.net/tianlesoftware/article/details/42970129

http://download.csdn.net/detail/renfengjun/4488454

14.1.6. Intelligent Platform Management Interface (IPMI)

IPMI, the Intelligent Platform Management Interface, is an open standard. Its most important physical component is the BMC (Baseboard Management Controller), an embedded management microcontroller that acts as the "brain" of platform management; through it, IPMI can monitor sensor data and log various events. It can be configured during the Grid Infrastructure installation, but this option is generally not configured.

14.2. Installation problems 14.2.1. Garbled characters in the installation interface

The system default character set is different from the installation software character set.

Solution: export LANG=en_US

14.2.2. xhost + reports an error

Solution: export DISPLAY=:0 -- as the root user

xhost +

su - grid

export DISPLAY=:0

14.2.3. The NIC name does not match the MAC address

Solution: ① vi /etc/udev/rules.d/70-persistent-net.rules

② vi /etc/sysconfig/network-scripts/ifcfg-eth0|1

Change the eth0 / eth2 names and MAC addresses in ① to the corresponding content in ②, and restart the network.

14.2.4. yum installation error on Linux 6

warning: rpmts_HdrFromFdno: Header V3 RSA/SHA256 Signature, key ID c105b9de: NOKEY

Updates/gpgkey

Public key for .rpm is not installed

Solution: rpm --import /etc/pki/rpm-gpg/RPM

Reference: http://blog.sina.com.cn/s/blog6e2aeba30100pshi.html

14.2.5. Linux systems supported by 11g

RHEL 4.7 and above, 5.2 and above

SUSE 10 SP2 and above

14.2.6. Several ways of configuring ASM disks for RAC with udev

Method 1: 60-raw.rules -- raw devices

vi /etc/udev/rules.d/60-raw.rules

ACTION=="add", KERNEL=="sdb", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="raw1", OWNER="grid", GROUP="asmadmin", MODE="660"
ACTION=="add", KERNEL=="sdc", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="raw2", OWNER="grid", GROUP="asmadmin", MODE="660"

ls -l /dev/raw/raw*

brw-rw---- 1 grid asmadmin 8, 96 Jun 29 21:56 /dev/raw/raw1
brw-rw---- 1 grid asmadmin 8, 64 Jun 29 21:56 /dev/raw/raw2

Method 2: 99-oracle-asmdevices.rules -- ASM devices

vi /etc/udev/rules.d/99-oracle-asmdevices.rules

KERNEL=="sdb", NAME="asmdiskb", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sdc", NAME="asmdiskc", OWNER="grid", GROUP="asmadmin", MODE="0660"

udevadm control --reload-rules

start_udev

Starting udev: [OK]

ll /dev/asm*

brw-rw----. 1 grid asmadmin 8, 16 Dec 16 15:52 /dev/asmdiskb
brw-rw----. 1 grid asmadmin 8, 32 Dec 16 15:52 /dev/asmdiskc

Reference: http://www.cnblogs.com/jimeper/archive/2012/12/09/2809724.html

14.2.7. Oracle 11.2.0.1 problems

1. Error while executing root.sh during the grid installation:

Cause: this is a bug in Oracle versions earlier than 11.2.0.3 (exclusive)

CRS-4124: Oracle High Availability Services startup failed.

CRS-4000: Command Start failed, or completed with errors.

ohasd failed to start: Inappropriate ioctl for device

ohasd failed to start at /u01/app/grid/11.2.0/crs/install/rootcrs.pl line 443.

Solution: open two root user windows

Window 1:

$ORACLE_HOME/root.sh -- when "Adding daemon to inittab" appears, start executing the dd command in window 2

Window 2:

/bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1

Note: this problem persists across later reboots, so this step is required on every startup!

Restart steps (two nodes):

Open two root user windows

Window 1:

$ORACLE_HOME/crs/install/roothas.pl -- clears the CRS configuration; continue with root.sh after the following output appears

Either / etc/oracle/olr.loc does not exist or is not readable

Make sure the file exists and it has read and execute access

/bin/dd: opening `': No such file or directory

Successfully deconfigured Oracle Restart stack

$ORACLE_HOME/root.sh -- when "Adding daemon to inittab" appears, start executing the dd command in window 2

Window 2:

/bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1 -- repeat until no error is returned

Any window:

su - grid

crs_start -all -- start the cluster

2. netca reports an error:

Error:

Line 190: 16835 Aborted

Solution: vi /etc/hosts

Add the missing IP and hostname mapping entries.
