
[RAC] RAC build steps: Linux 7.2 + 11g (based on VMware + Openfiler)

2025-01-16 Update From: SLTechnology News&Howtos


I. Planning
1. Network planning
2. Disk planning
3. Host hardware configuration
II. Overall approach
III. Preparatory work
1. Network card preparation (rac1&rac2)
2. Turn off the firewall and SELinux (rac1&rac2)
3. Package preparation
4. Shared disk preparation
IV. Building RAC
1. Modify system parameters (rac1&rac2)
2. Modify limits file (rac1&rac2)
3. Modify /etc/hosts file (rac1&rac2)
4. Create users and groups (rac1&rac2)
5. Create software installation directories (rac1&rac2)
6. Modify environment variables (rac1&rac2)
7. Configure local YUM (rac1&rac2)
8. Bind disks with UDEV (rac1&rac2)
9. Install grid software (rac1)
10. Configure ASM disks (rac1)
11. Create a database (rac1)
12. Build the database with DBCA (rac1)
V. Verification
1. Verify cluster status
2. Verify instance information
3. Verify data file, log file, and control file information

I. Planning

1. Network planning

Virtual IP (VIP): the VIP is an IP created after the cluster is built, and it must be on the same network segment as the physical IP. Its defining feature: when a node's server or cluster stack goes down, the VIP fails over seamlessly to another node, preserving the continuity of transactions and queries. When business workloads are separated by node, it is therefore recommended that each service connect to its node's VIP. In a two-node RAC cluster, each operating system has its own physical IP and each cluster node also needs a VIP, all on the same network segment.

SCAN IP: a floating IP that moves between the two nodes. If the business is not separated by node, it is recommended that services connect to the SCAN IP to achieve load balancing.
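As an illustration, a client that connects through the SCAN only needs the SCAN address in its connect descriptor. A minimal hedged tnsnames.ora sketch (the host name cluster-scan-ip comes from this guide's hosts file; the service name orcl is an assumption derived from the ORCL1/ORCL2 instance SIDs, so adjust both for your environment):

```
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cluster-scan-ip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )
```

Because the SCAN floats, the listener behind it hands each new connection to whichever node is less loaded, which is how the load balancing mentioned above is achieved.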

2. Disk planning

Disk planning needs to consider two levels: one is the disks needed by the cluster, and the other is the disks needed by the database.

1) The disks required by the cluster

To support cluster functions, two kinds of disks must be provided: the OCR disk and the voting disk. The OCR disk records cluster configuration information; the voting disk arbitrates when a node fails, deciding which node is evicted from the cluster.

The OCR disk and voting files can share a disk group; the size requirements are as follows:

- External Redundancy: 1 OCR (1 × 400 MB) = 0.4 GB, 1 voting file (1 × 300 MB) = 0.3 GB

- Normal Redundancy: 2 OCRs (2 × 400 MB) = 0.8 GB, 3 voting files (3 × 300 MB) = 0.9 GB

- High Redundancy: 3 OCRs (3 × 400 MB) = 1.2 GB, 5 voting files (5 × 300 MB) = 1.5 GB
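The minimum OCR-plus-voting space for each redundancy level can be sanity-checked with a little arithmetic (a quick sketch using the per-file sizes above, in MB):

```shell
# Minimum OCR + voting-file space per redundancy level (MB), per the table above.
ocr_mb=400
vote_mb=300
for level in "External 1 1" "Normal 2 3" "High 3 5"; do
  set -- $level                 # $1=name  $2=OCR copies  $3=voting files
  echo "$1: $(( $2 * ocr_mb + $3 * vote_mb )) MB"
done
# prints: External: 700 MB / Normal: 1700 MB / High: 2700 MB
```

Even high redundancy needs under 3 GB, so the 5 GB OCR/voting disk planned below has comfortable headroom.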

2) The disks required by the database

The database requires at least two disk groups: one for data files and one for archived log files. There is no minimum size requirement; allocate capacity according to the volume of business data.

3. Host hardware configuration

(this is my test environment.)

II. Overall approach

Step1: network card configuration (rac1&rac2)

Step2: turn off the firewall and SELinux (rac1&rac2)

Step3: shared disk preparation

(A test environment can use Openfiler as shared storage; a production environment needs a dedicated storage server.)

Step4: modify the kernel parameter file

Step5: modify the user limits file

Step6: modify the hosts file

Step7: create users and groups

Step8: create directories and grant permissions

Step9: configure environment variables (grid and oracle)

Step10: configure the local YUM source and install the required packages

Step11: bind shared disks with udev

Step12: install cluster software (GI)

Step13: configure ASM disks

Step14: install database software

Step15: build the database with DBCA

Step16: final validation

III. Preparatory work

1. Network card preparation (rac1&rac2)

Explanation: in a RAC cluster, each server needs two network cards: one for public network service and one for private interconnect communication.

Add a private network card

2. Turn off the firewall and SELinux (rac1&rac2)

# systemctl stop firewalld

# systemctl disable firewalld

Disable SELinux:

# vi /etc/selinux/config

SELINUX=disabled

3. Package preparation

(rac1)

GI directory: # mkdir -p /u01/setup/grid

DB directory: # mkdir -p /u01/setup/db

OS directory: # mkdir -p /u01/setup/os

(rac2)

OS directory: # mkdir -p /u01/setup/os

4. Shared disk preparation

Download address of the Openfiler ISO file:

http://www.openfiler.com/community/download

1) Create a virtual machine with the following configuration

Disk description:

Disk 1: for installing the Openfiler system, 30 GB

Disk 2: for the OCR and voting disks, 5 GB

Disks 3 and 4: for the DATA disk group, 100 GB each

Disk 5: for the ARCH disk group, 50 GB
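Adding up the disk plan above gives the raw capacity to provision for the Openfiler VM (a quick sketch; sizes in GB):

```shell
# Total raw storage for the Openfiler VM, per the disk plan above (GB).
os=30; ocr_vote=5; data1=100; data2=100; arch=50
echo "total: $(( os + ocr_vote + data1 + data2 + arch )) GB"   # → total: 285 GB
```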

2) Power on the virtual machine and install Openfiler

Press Enter to start the automatic installation.

3) Access the Openfiler management interface (the address above)

Username: openfiler

Password: password

4) Go to the Services tab and start the iSCSI service

5) Go to the System tab

Add an IP segment that is allowed access

6) Go to the Volumes tab and create a physical volume

At this point, the physical volume has been created

7) Create a volume group

8) Create a logical volume

Similarly, create logical volumes for all the physical disks planned above

9) Add an IQN

10) Map the disks

At this point, the logical volumes have been added. To use them, the client must scan for them.

11) Scan the Openfiler server from the client

# iscsiadm -m discovery -t sendtargets -p 172.16.70.176 -l

If the iscsiadm command is not available, install the iscsi-initiator-utils rpm package:

# iscsiadm -m discovery -t st -p 192.168.0.10
-bash: iscsiadm: command not found
# mount /dev/cdrom /media        <-- mount the CD to install the iscsi-initiator-utils rpm package
mount: block device /dev/cdrom is write-protected, mounting read-only
# cd /media/Server/
# ls -l *iscsi*
-r--r--r-- 55 root root 579386 Dec 17 2008 iscsi-initiator-utils-6.2.0.868-0.18.el5.i386.rpm
# rpm -ivh iscsi-initiator-utils-6.2.0.868-0.18.el5.i386.rpm

IV. Building RAC

1. Modify system parameters (rac1&rac2)

# vi /etc/sysctl.conf

fs.aio-max-nr = 1048576

fs.file-max = 6815744

kernel.shmmax = 8589934591

kernel.shmmni = 4096

kernel.shmall = 2097152

kernel.sem = 250 32000 100 128

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default = 262144

net.core.rmem_max = 4194304

net.core.wmem_default = 262144

net.core.wmem_max = 1048576

Make the parameters take effect:

# sysctl -p

2. Modify limits file (rac1&rac2)

# vi /etc/security/limits.conf

grid soft nproc 2047

grid hard nproc 16384

grid soft nofile 1024

grid hard nofile 65536

oracle soft nproc 2047

oracle hard nproc 16384

oracle soft nofile 1024

oracle hard nofile 65536

3. Modify /etc/hosts file (rac1&rac2)

172.16.70.170 rac1

172.16.70.171 rac2

10.0.0.100 rac1-priv

10.0.0.101 rac2-priv

172.16.70.173 rac1-vip

172.16.70.174 rac2-vip

172.16.70.175 cluster-scan-ip
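Before going further it is worth confirming that the fragment above has no duplicate addresses or names, since a duplicate silently breaks VIP failover. A small sketch (works on a temporary copy of the entries above):

```shell
# Sanity-check the RAC /etc/hosts fragment: every IP and every name must be unique.
cat > /tmp/rac_hosts <<'EOF'
172.16.70.170 rac1
172.16.70.171 rac2
10.0.0.100 rac1-priv
10.0.0.101 rac2-priv
172.16.70.173 rac1-vip
172.16.70.174 rac2-vip
172.16.70.175 cluster-scan-ip
EOF
dup_ips=$(awk '{print $1}' /tmp/rac_hosts | sort | uniq -d)
dup_names=$(awk '{print $2}' /tmp/rac_hosts | sort | uniq -d)
if [ -z "$dup_ips" ] && [ -z "$dup_names" ]; then
  echo "hosts fragment OK"
else
  echo "duplicates found: $dup_ips $dup_names"
fi
```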

4. Create users and groups (rac1&rac2)

(1) Create the groups

# groupadd -g 501 dba

# groupadd -g 502 oinstall

(2) Create the users

# useradd -u 501 -g oinstall -G dba -d /home/oracle oracle

# useradd -u 600 -g oinstall -G dba -d /home/grid grid

(3) Set passwords for the oracle and grid users

# passwd oracle

# passwd grid

(4) check

# id oracle

# id grid

5. Create a software installation directory (rac1&rac2)

# mkdir -p /u01/app/oracle

# mkdir -p /u01/app/oracle/product/11.2.0/db_1

# mkdir -p /u01/app/grid

# mkdir -p /u01/app/11.2.0/grid

# chown -R oracle:oinstall /u01

# chown -R grid:oinstall /u01/app/grid

# chown -R grid:oinstall /u01/app/11.2.0/grid

# chown -R oracle:oinstall /u01/app/oracle

# chmod -R 775 /u01

(run on rac1 only)

# chown -R grid:oinstall /u01/setup/grid

# chown -R oracle:oinstall /u01/setup/db

6. Modify environment variables (rac1&rac2)

-switch to grid user-

Rac1

$ vi .bash_profile

export ORACLE_SID=+ASM1

export ORACLE_BASE=/u01/app/grid

export ORACLE_HOME=/u01/app/11.2.0/grid

export LD_LIBRARY_PATH=$ORACLE_HOME/lib

export PATH=$ORACLE_HOME/bin:$PATH

Rac2

$ vim .bash_profile

export ORACLE_SID=+ASM2

export ORACLE_BASE=/u01/app/grid

export ORACLE_HOME=/u01/app/11.2.0/grid

export LD_LIBRARY_PATH=$ORACLE_HOME/lib

export PATH=$ORACLE_HOME/bin:$PATH

-switch to Oracle user-

Rac1

# su - oracle

$ vi /home/oracle/.bash_profile

export ORACLE_SID=ORCL1

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1

export LD_LIBRARY_PATH=$ORACLE_HOME/lib

export PATH=$PATH:$ORACLE_HOME/bin

Rac2

# su - oracle

$ vi /home/oracle/.bash_profile

export ORACLE_SID=ORCL2

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1

export LD_LIBRARY_PATH=$ORACLE_HOME/lib

export PATH=$PATH:$ORACLE_HOME/bin

Make the above environment variables take effect:

$ source .bash_profile

7. Configure local YUM (rac1&rac2)

(as root)

# mount -o loop /u01/setup/os/rhel-server-7.2-x86_64-dvd.iso /mnt

# vi /etc/yum.repos.d/mnt.repo

[mnt]

name=Yum Source

baseurl=file:///mnt

enabled=1

gpgcheck=0

# yum makecache

Install the following packages:

# yum install -y binutils compat* elfutils-libelf elfutils-libelf-devel glibc glibc-common glibc-devel gcc gcc-c++ libaio libgcc libstdc++ libstdc++-devel make sysstat unixODBC-devel libaio-devel ksh

8. Bind disk with UDEV (rac1&rac2)

# vi /etc/udev/rules.d/99-oracle-asmdevices.rules

KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="14f504e46494c45524932494157712d763771782d30694f30", RUN+="/bin/sh -c 'mknod /dev/asm_ocr b $major $minor; chown grid:oinstall /dev/asm_ocr; chmod 0660 /dev/asm_ocr'"

KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="14f504e46494c45526d70325177442d616f33572d35686452", RUN+="/bin/sh -c 'mknod /dev/asm_data1 b $major $minor; chown grid:oinstall /dev/asm_data1; chmod 0660 /dev/asm_data1'"

KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="14f504e46494c455251726b30676d2d337842312d55375278", RUN+="/bin/sh -c 'mknod /dev/asm_data2 b $major $minor; chown grid:oinstall /dev/asm_data2; chmod 0660 /dev/asm_data2'"

KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="14f504e46494c45527650634d78742d4a7664622d3276506e", RUN+="/bin/sh -c 'mknod /dev/asm_arch b $major $minor; chown grid:oinstall /dev/asm_arch; chmod 0660 /dev/asm_arch'"

Note: the RESULT value must be the WWID of the raw device. The udev configuration differs slightly between minor releases; the configuration above applies only to RHEL 7.2.
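The four rules differ only in WWID and device name, so they can be generated rather than typed by hand. A hedged convenience sketch (the WWIDs are the ones from this environment; on another system first read each disk's WWID, e.g. with /usr/lib/udev/scsi_id -g -u -d /dev/sdb):

```shell
# Generate the udev rules above from WWID -> device-name pairs.
while read -r wwid name; do
  printf 'KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="%s", RUN+="/bin/sh -c '\''mknod /dev/%s b $major $minor; chown grid:oinstall /dev/%s; chmod 0660 /dev/%s'\''"\n' \
    "$wwid" "$name" "$name" "$name"
done <<'EOF'
14f504e46494c45524932494157712d763771782d30694f30 asm_ocr
14f504e46494c45526d70325177442d616f33572d35686452 asm_data1
14f504e46494c455251726b30676d2d337842312d55375278 asm_data2
14f504e46494c45527650634d78742d4a7664622d3276506e asm_arch
EOF
```

Redirect the output into /etc/udev/rules.d/99-oracle-asmdevices.rules on both nodes.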

Trigger udev:

# /sbin/udevadm trigger --type=devices --action=change

View the ASM disks:

# ll /dev/asm*

9. Install the grid software (rac1)

(1) decompress grid software

# su - grid

$ cd /u01/setup/grid

$ unzip p*.zip

(2) install cvuqdisk

# cd rpm/

# rpm -ivh cvuqdisk-1.0.9-1.rpm

Copy the rpm package to node 2 and install it there as well.

(3) run the graphical interface

$ unzip /u01/setup/grid/p*

$ cd /u01/setup/grid/grid

$ ./runInstaller

Note:

1) Running the graphical interface remotely requires:

Open Xmanager and double-click Xmanager - Passive

$ export DISPLAY=<local machine IP>:0.0

2) Check whether the GUI can run:

$ xhost +

access control disabled, clients can connect from any host

If the line above appears, the graphical interface can run.

3) If garbled square characters appear, run:

$ export LANG=en_US

Skip software updates

Select Advanced installation

The SCAN name must be the scan-ip name defined in the /etc/hosts file.

The rac1/rac1-vip/rac2/rac2-vip names here correspond to the names in the /etc/hosts file.

Enter the grid user's password and click Setup to configure mutual trust.

Use oracle as the unified password.

Execute the following two scripts.

Execution order: node A runs script 1 → node B runs script 1 → node A runs script 2 → node B runs script 2.

The scripts must be executed one at a time, never in parallel.

Rac1:

Rac2

When the script is finished, click OK.

Note:

1) If the root.sh script fails, you can roll it back as follows:

# /u01/app/11.2.0/grid/crs/install/roothas.pl -deconfig -force -verbose

2) If the installation hangs at "Adding daemon to inittab" / "Adding Clusterware entries to inittab", or the ohasd process fails to start, open a new window and run the script below; once ohasd has started successfully, cancel it.

# /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1

Any error this dd reports can be ignored.

10. Configure ASM disk (rac1)

Grid user

$asmca

Note: for ordinary (non-OCR) disk groups, normal redundancy requires at least 2 disks, high redundancy at least 3 disks, and external (no) redundancy at least 1 disk.

The final results are as follows:

11. Create a database (rac1)

(Oracle users)

# su - oracle

$ cd /u01/setup/db/

$ unzip p13390677_112040_Linux-x86-64_1of7.zip

$ unzip p13390677_112040_Linux-x86-64_2of7.zip

Unzipping the two packages produces a database directory.

$ cd database/

$ pwd

/u01/setup/db/database

$ ./runInstaller

The oracle user's password is oracle; click Setup to establish mutual trust.

12. Build the database with DBCA (rac1)

(Oracle users)

$dbca

Password oracle

V. Verification

1. Verify cluster status

2. Verify instance information

3. Verify data file, log file, control file information

-end-
