Recently the company started using Oracle 12c. I was originally familiar with Oracle 11gR2 RAC; a major feature of the 12c release is the container database and pluggable database architecture (CDB and PDB), which I still need to get familiar with. So I decided to use a PVE (Proxmox VE) environment to build a 12c RAC.
Refer to the following blog:
https://blog.51cto.com/sery/2156860
The hardware configuration is planned as follows:
1. Two PVE virtual machines as instance nodes. Each VM has 4 cores, 8 GB of RAM, and 2 disks (one 32 GB and one 100 GB, of which 16 GB is planned as the swap partition), plus 2 network cards bridged to the PVE physical network.
2. One PVE virtual machine as shared storage, running openfiler, with 2 disks: one 32 GB disk for the operating system and one 200 GB shared disk. Openfiler exposes the shared storage over iSCSI, and the iSCSI client is configured on the instance nodes to use the storage shared by openfiler. The configuration is as follows:
Note that the storage disks use IDE with cache=writethrough; SCSI was selected at first, but the disk was not recognized when installing openfiler.
3. Database installation planning. I found an official Oracle 12c RAC installation guide online, but it targets RHEL 6 while my environment is CentOS 7, so I also referred to the following URL:
https://blog.51cto.com/ld0381/1923207
The installation plan is as follows:
A. Storage planning:
1. GRID cluster component disk group
+ dggrid1: NORMAL redundancy, consisting of three 10G disks (note that in 12c the OCR disks need to be larger than 77G)
2. Database installation disk group
+ dgsystem: for database basic tablespaces, control files, parameter files, etc.
+ dgrecovery: for archiving and flashback log space
+ dgdata: user database business tablespace
B. IP planning:
oraclenode1:
Public IP (ens18): 192.168.1.32
VIP: 192.168.1.36
Private IP (ens19): 192.168.170.32
oraclenode2:
Public IP (ens18): 192.168.1.33
VIP: 192.168.1.37
Private IP (ens19): 192.168.170.33
SCAN IP: 192.168.1.38
C. Software version:
Operating system: CentOS 7.2
Database: Oracle Database 12c R2
Cluster management software: Oracle Grid Infrastructure 12.2.0.1
D. Hostname planning:
# public ip
192.168.1.32 oraclenode1
192.168.1.33 oraclenode2
# private ip
192.168.170.32 oraclenode1pri
192.168.170.33 oraclenode2pri
# vip ip
192.168.1.36 oraclenode1vip
192.168.1.37 oraclenode2vip
# scan ip
192.168.1.38 oraclenodescan
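These name-to-IP mappings are meant for /etc/hosts on both nodes; a minimal way to append them (assuming the entries are not already present):
cat >> /etc/hosts <<'EOF'
# public ip
192.168.1.32 oraclenode1
192.168.1.33 oraclenode2
# private ip
192.168.170.32 oraclenode1pri
192.168.170.33 oraclenode2pri
# vip ip
192.168.1.36 oraclenode1vip
192.168.1.37 oraclenode2vip
# scan ip
192.168.1.38 oraclenodescan
EOF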
E. User and user group planning:
groupadd -g 60001 oinstall
groupadd -g 60002 dba
groupadd -g 60003 oper
groupadd -g 60004 backupdba
groupadd -g 60005 dgdba
groupadd -g 60006 kmdba
groupadd -g 60007 asmdba
groupadd -g 60008 asmoper
groupadd -g 60009 asmadmin
useradd -u 61001 -g oinstall -G asmadmin,asmdba,dba,asmoper grid
useradd -u 61002 -g oinstall -G dba,backupdba,dgdba,kmdba,asmadmin,oper,asmdba oracle
echo "grid" | passwd --stdin grid
echo "oracle" | passwd --stdin oracle
F. Directory planning:
mkdir -p /data/oracle/app/grid
mkdir -p /data/oracle/app/12.2.0.1/grid
chown -R grid:oinstall /data/oracle
mkdir -p /data/oracle/app/oraInventory
chown -R grid:oinstall /data/oracle/app/oraInventory
mkdir -p /data/oracle/app/oracle
chown -R oracle:oinstall /data/oracle/app/oracle
chmod -R 775 /data/oracle
As can be seen from the storage planning above, six LUNs are required as shared disks: three 10G LUNs used for OCR + voting (note that in 12c the OCR disks need to be larger than 77G), and three 50G LUNs for the system tablespaces, the archive and flashback log space, and the user data space respectively.
First, prepare the installation sources: the CentOS 7 installation image, the Oracle 12c R2 installation files, and the openfiler ISO installation image.
Download the installation files from the Oracle official website (you need to register an Oracle account first):
https://www.oracle.com/technetwork/database/enterprise-edition/downloads/oracle12c-linux-12201-3608234.html
Download the iso installation image of openfiler:
https://www.openfiler.com/community/download
When installing, note that the IP address is configured as a static address:
After installing openfiler, it is as follows:
The default username/password is openfiler/password. The interface after login is as follows:
On the second disk, /dev/sdb, create the partition for the shared volumes (three 10G volumes, noting that in 12c the OCR disks need to be larger than 77G, and three 50G volumes) and then create the PV, as follows:
[root@openfiler ~]# fdisk /dev/sdb

Command (m for help): p

Disk /dev/sdb: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders, total 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00089483

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1              63    16771859     8385898+  83  Linux
/dev/sdb2        16771860    18876374     1052257+  82  Linux swap / Solaris

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4, default 3): 
Using default value 3
First sector (18876375-419430399, default 18876375): 
Using default value 18876375
Last sector, +sectors or +size{K,M,G} (18876375-419430399, default 419430399): 
Using default value 419430399

Command (m for help): p

Disk /dev/sdb: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders, total 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00089483

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1              63    16771859     8385898+  83  Linux
/dev/sdb2        16771860    18876374     1052257+  82  Linux swap / Solaris
/dev/sdb3        18876375   419430399   200277012+  83  Linux

Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): 8e
Changed system type of partition 3 to 8e (Linux LVM)

Command (m for help): p

Disk /dev/sdb: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders, total 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00089483

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1              63    16771859     8385898+  83  Linux
/dev/sdb2        16771860    18876374     1052257+  82  Linux swap / Solaris
/dev/sdb3        18876375   419430399   200277012+  8e  Linux LVM

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.

[root@openfiler ~]# partprobe
[root@openfiler ~]# pvcreate /dev/sdb3
  Physical volume "/dev/sdb3" successfully created
Note that after creating the partition with fdisk, you need to update the partition information with the partprobe command before pvcreate can recognize it. After the above operation, you can see the following pv information on the web interface:
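As a side note, the interactive fdisk session above could also be done non-interactively with parted, if parted is available on the openfiler appliance. This is only a sketch, assuming the same /dev/sdb layout (the existing sdb1/sdb2 end at sector 18876374); the fdisk approach in the article works just as well:
parted -s -a none /dev/sdb mkpart primary 18876375s 100%   # create partition 3 from sector 18876375 to end of disk
parted -s /dev/sdb set 3 lvm on                             # mark it as an LVM partition (type 8e)
partprobe /dev/sdb                                          # re-read the partition table
pvcreate /dev/sdb3                                          # create the LVM physical volume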
Next, enable the iSCSI service:
Note that only one network is enabled above; if both networks are enabled, there will be a multipath problem.
Next, install the iSCSI client on the RAC hosts:
yum install -y iscsi-initiator-utils

Discover the iSCSI server from the RAC hosts:
[root@localhost ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.31
192.168.1.31:3260,1 iqn.2006-01.com.openfiler:tsn.eb490bf65b71

Log in to the target from the RAC hosts:
[root@localhost ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.eb490bf65b71 -l
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:tsn.eb490bf65b71, portal: 192.168.1.31,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:tsn.eb490bf65b71, portal: 192.168.1.31,3260] successful.

Use fdisk -l or lsblk to verify that the shared LUNs are attached:
[root@oraclenode1 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   32G  0 disk
├─sda1            8:1    0  500M  0 part /boot
└─sda2            8:2    0 31.5G  0 part
  ├─centos-root 253:0    0 29.5G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sdb               8:16   0  100G  0 disk
sdc               8:32   0 48.9G  0 disk
sdd               8:48   0 48.9G  0 disk
sde               8:64   0 48.9G  0 disk
sdf               8:80   0   83G  0 disk
sr0              11:0    1 1024M  0 rom
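One detail worth checking (not part of the original steps): the iSCSI sessions should come back automatically after a node reboot, otherwise the shared disks disappear. A sketch, assuming the target and portal shown above:
systemctl enable iscsid iscsi
systemctl start iscsid iscsi
# set the discovered node record to log in automatically at boot
iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.eb490bf65b71 -p 192.168.1.31:3260 --op update -n node.startup -v automatic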
Because virtual machines are used and the configuration parameters are identical on both nodes, for the tedious pre-installation preparation above only one virtual machine was actually set up; it was then cloned, and the node-specific parameters such as IP address and hostname were changed on the clone.
Next, upload the installation packages and install the database. The detailed steps are not repeated here; refer to the blog linked at the top of this article.
In the asm disk management section, refer to the official documents:
Download and install the asmlib package:
https://www.oracle.com/technetwork/server-storage/linux/asmlib/rhel7-2773795.html
Downloaded two packages:
oracleasmlib-2.0.12-1.el7.x86_64.rpm
oracleasm-support-2.1.11-2.el7.x86_64.rpm
Install using the yum localinstall command to resolve dependency issues.
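For example, with both downloaded rpm files in the current directory:
yum localinstall -y oracleasmlib-2.0.12-1.el7.x86_64.rpm oracleasm-support-2.1.11-2.el7.x86_64.rpm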
Configure asm:
[root@oraclenode1 software]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
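For reference, with ASMLib the disks would normally then be stamped with oracleasm createdisk on one node and picked up with scandisks on the other; in this article, however, the ASM disks are actually bound with udev as described below, so the following is only an illustrative ASMLib sketch and the partition name is hypothetical:
oracleasm init                         # load the kernel driver and mount /dev/oracleasm
oracleasm createdisk OCR1 /dev/sdc1    # stamp a partition as an ASM disk (run on one node)
oracleasm scandisks                    # pick up stamped disks (run on the other node)
oracleasm listdisks                    # list the ASM disks that are visible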
https://www.cndba.cn/Expect-le/article/1819
Prepare the ASM disk:
Use udev to bind the disks:
KERNEL=="sd*", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="14f504e46494c45523263575331752d466a34362d64385876", RUN+="/bin/sh -c 'mknod /dev/asmdiskc b $major $minor; chown grid:asmadmin /dev/asmdiskc; chmod 0660 /dev/asmdiskc'"
KERNEL=="sd*", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="14f504e46494c455246314d47436e2d6432317a2d7039576d", RUN+="/bin/sh -c 'mknod /dev/asmdiskd b $major $minor; chown grid:asmadmin /dev/asmdiskd; chmod 0660 /dev/asmdiskd'"
KERNEL=="sd*", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="14f504e46494c45524b374e4435422d63316f692d7667344d", RUN+="/bin/sh -c 'mknod /dev/asmdiske b $major $minor; chown grid:asmadmin /dev/asmdiske; chmod 0660 /dev/asmdiske'"
KERNEL=="sd*", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="14f504e46494c45527061503038772d716467662d4a303479", RUN+="/bin/sh -c 'mknod /dev/asmdiskf b $major $minor; chown grid:asmadmin /dev/asmdiskf; chmod 0660 /dev/asmdiskf'"
KERNEL=="sd*", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="14f504e46494c455274475033336f2d4e39746f2d75436d70", RUN+="/bin/sh -c 'mknod /dev/asmdiskg b $major $minor; chown grid:asmadmin /dev/asmdiskg; chmod 0660 /dev/asmdiskg'"
KERNEL=="sd*", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="14f504e46494c45524c427661724a2d69746c322d67705363", RUN+="/bin/sh -c 'mknod /dev/asmdiskh b $major $minor; chown grid:asmadmin /dev/asmdiskh; chmod 0660 /dev/asmdiskh'"

The RESULT== values above are obtained with the command /usr/lib/udev/scsi_id -g -u /dev/sd$i, where $i is c, d, e, f, g, h (reference: https://www.cndba.cn/Expect-le/article/1819).

Copy the rules above into the file /etc/udev/rules.d/99-oracle-asmdevices.rules and make them take effect:
/sbin/udevadm trigger --type=devices --action=change

Check the ASM disks:
ls -ltr /dev/asm*

If the device files cannot be found, reboot the system. If you are reinstalling after a reboot, wipe the LUNs with dd (for example dd if=/dev/zero of=/dev/sdc), otherwise the previously used LUNs remain in MEMBER state. If the capacity is very large, it is better to delete the LUN and recreate it, then redo the mapping:
1. Discover: [root@localhost ~]# iscsiadm -m discovery -t st -p 192.168.1.31
   192.168.1.31:3260,1 iqn.2006-01.com.openfiler:tsn.eb490bf65b71
2. Log out: iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.eb490bf65b71 -u
3. Delete the node record: iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.eb490bf65b71 -o delete
4. Delete the volume in openfiler, create a new volume, and re-map it (details omitted).
5. Discover again: [root@localhost ~]# iscsiadm -m discovery -t st -p 192.168.1.31
   192.168.1.31:3260,1 iqn.2006-01.com.openfiler:tsn.eb490bf65b71
6. Log in: iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.eb490bf65b71 -l
7. Verify: lsblk
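Since the RESULT== values are just the scsi_id of each LUN, the rules file can also be generated with a small loop instead of copying the IDs by hand. This is only a sketch, assuming the shared LUNs are /dev/sdc through /dev/sdh and map to /dev/asmdiskc through /dev/asmdiskh as above, and that the rules file does not already contain these entries:
for i in c d e f g h; do
  id=$(/usr/lib/udev/scsi_id -g -u /dev/sd$i)
  # print one udev rule per LUN in the same format as above
  echo "KERNEL==\"sd*\", ENV{DEVTYPE}==\"disk\", SUBSYSTEM==\"block\", PROGRAM==\"/usr/lib/udev/scsi_id -g -u -d \$devnode\", RESULT==\"$id\", RUN+=\"/bin/sh -c 'mknod /dev/asmdisk$i b \$major \$minor; chown grid:asmadmin /dev/asmdisk$i; chmod 0660 /dev/asmdisk$i'\""
done >> /etc/udev/rules.d/99-oracle-asmdevices.rules
/sbin/udevadm trigger --type=devices --action=change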
Before you install, you need to install the graphics components:
yum -y groups install "X Window System" "Fonts"
Xmanager is installed on the workstation and Xshell is configured for X11 display; that setup is not covered here.
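If the graphical installer is launched over Xmanager rather than a local X session, the DISPLAY variable typically has to point back at the workstation running the X server; a sketch, where the workstation IP is just an example value:
export DISPLAY=192.168.1.100:0.0   # IP of the workstation running Xmanager (example value)
xclock                             # quick test that the X11 display works (requires the xorg-x11-apps package)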
Run the gridSetup.sh script from the extracted Grid installation package: ./gridSetup.sh
Password: oracle
yum install -y compat-libcap1
yum install -y nfs-utils
Many hours later:
The installation is complete. Log in as the grid user and use crs_stat -t to view the cluster status:
Use ocrcheck to check the status of ocr and crsctl query css votedisk to check the status of votedisk, as follows:
[root@oraclenode2 tmp]# su - grid
Last login: Mon Jan 14 11:18:07 CST 2019
[grid@oraclenode2 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       2032
         Available space (kbytes) :     407536
         ID                       : 1486039673
         Device/File Name         :      +GRID
                                    Device/File integrity check succeeded
         Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user

[grid@oraclenode2 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   8e6efe7ec1b74f02bf229f9bd02ceb92 (/dev/asmdiskc) [GRID]
Located 1 voting disk(s).
[grid@oraclenode2 ~]$
The OCR and voting disk status is normal.
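Note that crs_stat is a deprecated legacy tool in 12c; the same resource status can also be viewed with:
crsctl stat res -t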
Install the database software:
Log in as the oracle user and run the installer: ./runInstaller
At this point, the installation of RAC's grid cluster and database software is complete.
Next, the database instances (CDB and PDB) need to be created; that will continue in a follow-up blog post.
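Once the CDB and its PDBs exist, a quick way to see the container layout mentioned at the start of this article would be something along these lines (a sketch, run as the oracle user with the environment set for the local instance):
sqlplus / as sysdba <<'EOF'
select name, cdb from v$database;     -- CDB column shows YES for a container database
select name, open_mode from v$pdbs;   -- lists the pluggable databases
EOF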
Summary:
The installation process is still fairly complicated, mainly because of the planning of shared storage, network, hosts and so on. I have always thought Oracle's design is overly complex; just installing a RAC is enough to stop a lot of people and make it look very impressive. In practice, the more complex the design, the more points of failure there are. To be honest, ordinary enterprise users rarely need RAC-level high availability; running a well-managed single-instance database with regular backups and checks is much more reliable.
A lot of preparatory work is needed before installation: prepare the software packages, find the installation instructions for the matching operating system and database versions, and plan the database. Planning is very important; plan first rather than starting the installation as soon as you sit down.
I ran into one problem during the installation: 12c has a size requirement for the disks used by the OCR (larger than 77G), which was painful because it forced me to redo all of the shared storage, including the iSCSI configuration on the hosts and the udev bindings. Some screenshots and content in this article still show the three 10G disks originally planned as the OCR/voting disks; I have not changed them, since the process is exactly the same.