2025-01-16 Update From: SLTechnology News&Howtos
Installation environment:
Operating system: Oracle Linux 4.8, 64-bit
Cluster software: 10201_clusterware_linux_x86_64.cpio.gz
Database: 10201_database_linux_x86_64.cpio.gz
CPU: 1
Memory: more than 1.5 GB (usually 2 GB)
Local disk: 20 GB (root partition 17 GB, swap 4 GB, boot partition 512 MB)
ASM download address: http://www.oracle.com/technetwork/server-storage/linux/downloads/rhel4-092650.html#oracleasm_rhel4_amd64
ASM packages: oracleasm-support-2.1.7-1.el4.x86_64.rpm
oracleasmlib-2.0.4-1.el4.x86_64.rpm
oracleasm-2.6.9-89.35.1.EL-2.0.5-1.el4.x86_64.rpm
IP address plan:
192.168.3.30 rac10g01 eth0
192.168.3.40 rac10g02 eth0
192.168.3.50 rac10g01-vip eth0:1
192.168.3.60 rac10g02-vip eth0:1
10.0.0.1 rac10g01-priv eth2
10.0.0.2 rac10g02-priv eth2
IP description:
Each server needs two network cards. The public IP and private IP must be on different network segments. The public and private addresses are written into the interface configuration files; the VIPs are added by the clusterware itself.
1. Install the operating system (graphical installation)
Disable both iptables and SELinux when installing the system.
2. Configure the network cards of the virtual machines
rac10g01 node:
[root@rac10g01 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
HWADDR=00:0C:29:54:80:1D
ONBOOT=yes
TYPE=Ethernet
IPADDR=192.168.3.30
NETMASK=255.255.255.0
[root@rac10g01 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
BOOTPROTO=static
HWADDR=00:0C:29:54:80:27
ONBOOT=yes
TYPE=Ethernet
IPADDR=10.0.0.1
NETMASK=255.255.255.0
[root@rac10g01 ~]# /etc/init.d/network restart
Shutting down interface eth0: [OK]
Shutting down loopback interface: [OK]
Setting network parameters: [OK]
Bringing up loopback interface: [OK]
Bringing up interface eth0: [OK]
Bringing up interface eth2: [OK]
[root@rac10g01 ~] #
rac10g02 node:
[root@rac10g02 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
HWADDR=00:0C:29:25:63:D2
ONBOOT=yes
TYPE=Ethernet
IPADDR=192.168.3.40
NETMASK=255.255.255.0
[root@rac10g02 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
BOOTPROTO=static
HWADDR=00:0C:29:25:63:DC
ONBOOT=yes
TYPE=Ethernet
IPADDR=10.0.0.2
NETMASK=255.255.255.0
[root@rac10g02 ~]# /etc/init.d/network restart
Shutting down interface eth0: [OK]
Shutting down loopback interface: [OK]
Setting network parameters: [OK]
Bringing up loopback interface: [OK]
Bringing up interface eth0: [OK]
Bringing up interface eth2: [OK]
[root@rac10g02 ~]#
3. Configure the hosts file for hostname and IP address resolution (same on both nodes)
[root@rac10g01 ~]# vim /etc/hosts
192.168.3.30 localhost.localdomain localhost -- written on both nodes, using each node's own IP address
192.168.3.30 rac10g01
192.168.3.40 rac10g02
192.168.3.50 rac10g01-vip
192.168.3.60 rac10g02-vip
10.0.0.1 rac10g01-priv
10.0.0.2 rac10g02-priv
[root@rac10g01 ~]# ping rac10g01 -- ping all of the addresses to check for problems
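The per-name pings can be looped. A minimal sketch, assuming the hostnames from the /etc/hosts file above; the ping command is parameterized so the loop can be dry-run, and the VIP names are deliberately left out because they only answer after the clusterware brings them up:

```shell
# check_hosts [PINGER] -- ping each cluster hostname once with the given
# command; reports ok/FAILED per host and returns the failure count.
check_hosts() {
    pinger="${1:-ping -c 1 -w 2}"
    failed=0
    for h in rac10g01 rac10g02 rac10g01-priv rac10g02-priv; do
        if $pinger "$h" >/dev/null 2>&1; then
            echo "$h: ok"
        else
            echo "$h: FAILED"
            failed=$((failed + 1))
        fi
    done
    return "$failed"
}

# Dry run with a stub in place of real ping:
check_hosts true
```

On the nodes themselves, call `check_hosts` with no argument so each name is really pinged.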
4. Mount the CD image and install the dependency packages (same on both nodes)
[root@rac10g01 ~]# mount /dev/hdc /mnt/cdrom/
mount: block device /dev/hdc is write-protected, mounting read-only
[root@rac10g01 ~]# vim /etc/yum.repos.d/tong.repo
[tong]
name=tong
baseurl=file:///mnt/cdrom/
gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1
[root@rac10g01 ~]# yum list
[root@rac10g01 ~]# yum install binutils compat-db compat-libstdc++-296 control-center gcc gcc-c++ glibc glibc-common libstdc++ libstdc++-devel make sysstat setarch glibc-devel libaio ksh glibc-headers libgnome libgcc libgnomeui libgomp openmotif libXp -y
[root@rac10g01 ~] #
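After the yum run, each dependency can be queried back to confirm it really landed. A sketch, assuming the package list above; the query command is parameterized so the loop can be rehearsed without rpm:

```shell
# check_pkgs [QUERY] -- verify each dependency is installed; QUERY defaults
# to "rpm -q", which exits non-zero for a missing package.
PKGS="binutils compat-db compat-libstdc++-296 gcc gcc-c++ glibc \
glibc-common libstdc++ libstdc++-devel make sysstat setarch glibc-devel \
libaio ksh glibc-headers libgcc openmotif libXp"

check_pkgs() {
    query="${1:-rpm -q}"
    missing=""
    for p in $PKGS; do
        $query "$p" >/dev/null 2>&1 || missing="$missing $p"
    done
    if [ -n "$missing" ]; then
        echo "missing:$missing"
        return 1
    fi
    echo "all packages present"
}

# Dry run: a stub that reports every package as installed.
check_pkgs true
```

On the real nodes, run `check_pkgs` with no argument; any name it prints under "missing:" needs another `yum install`.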
5. Modify kernel parameters (same on both nodes)
[root@rac10g01 ~]# vim /etc/sysctl.conf
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144
[root@rac10g01 ~]# sysctl -p
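Once `sysctl -p` has run, the live values can be read back from /proc to confirm they took effect. A sketch; `check_param` accepts an optional reader function so the comparison logic itself can be dry-run:

```shell
# check_param KEY EXPECTED [READER] -- compare one kernel parameter
# against its expected value; READER defaults to reading /proc/sys.
check_param() {
    key=$1; want=$2; reader=$3
    if [ -n "$reader" ]; then
        got=$($reader "$key")
    else
        got=$(cat "/proc/sys/$(echo "$key" | tr . /)")
    fi
    if [ "$got" = "$want" ]; then
        echo "$key ok ($got)"
    else
        echo "$key MISMATCH: want $want, got $got"
        return 1
    fi
}

# Dry run with a stub reader that returns the shmmax value from above:
stub_reader() { echo 2147483648; }
check_param kernel.shmmax 2147483648 stub_reader
```

On the nodes, `check_param kernel.shmmax 2147483648` (and likewise for the other keys) compares against the running kernel.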
6. Create the groups and user, and set the user's password (same on both nodes)
[root@rac10g01 ~]# groupadd -g 500 oinstall
[root@rac10g01 ~]# groupadd -g 501 dba
[root@rac10g01 ~]# groupadd -g 502 oper
[root@rac10g01 ~]# groupadd -g 503 asmadmin
[root@rac10g01 ~]# groupadd -g 504 asmdba
[root@rac10g01 ~]# groupadd -g 505 asmoper
[root@rac10g01 ~]# useradd -u 1000 -g oinstall -G dba,oper,asmdba -d /home/oracle oracle
[root@rac10g01 ~]# passwd oracle
Changing password for user oracle.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
Passwd: all authentication tokens updated successfully.
[root@rac10g01 ~] #
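A quick sanity check on the account catches a missed -G group early. A sketch, assuming the uid and groups created above; `stub_id` fakes the id(1) output so the check can be dry-run:

```shell
# check_oracle_user [IDCMD] -- confirm the oracle account has uid 1000,
# primary group oinstall, and membership in dba.
check_oracle_user() {
    idcmd="${1:-id}"
    line=$($idcmd oracle 2>/dev/null) || { echo "no oracle user"; return 1; }
    case $line in
        *"uid=1000(oracle)"*"(oinstall)"*"(dba)"*)
            echo "oracle user ok" ;;
        *)
            echo "unexpected: $line"; return 1 ;;
    esac
}

# Dry run against a canned id(1) line:
stub_id() {
    echo "uid=1000(oracle) gid=500(oinstall) groups=500(oinstall),501(dba),502(oper),504(asmdba)"
}
check_oracle_user stub_id
```

On the nodes, plain `check_oracle_user` runs the real `id oracle`.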
7. Create the directories for the oracle and clusterware homes (same on both nodes)
[root@rac10g01 ~]# mkdir -p /u01/oracle/product/10.2.0.1/db_1
[root@rac10g01 ~]# mkdir -p /u01/oracle/product/10.2.0.1/crs_1
[root@rac10g01 ~]# chown -R oracle:oinstall /u01/
8. Raise the oracle user's resource limits (same on both nodes)
[root@rac10g01 ~]# vim /etc/security/limits.conf
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft memlock 5242880
oracle hard memlock 5242880
[root@rac10g01 ~] #
9. Add environment variables for the oracle user (same on both nodes)
[root@rac10g01 ~]# vim /home/oracle/.bash_profile
export ORACLE_BASE=/u01/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0.1/db_1
export CRS_HOME=$ORACLE_BASE/product/10.2.0.1/crs_1
export ORA_CRS_HOME=$CRS_HOME
export ORACLE_SID=rac10g1
export PATH=$PATH:$ORA_CRS_HOME/bin:$ORACLE_HOME/bin
[root@rac10g01 ~]# . /home/oracle/.bash_profile
10. Set up SSH user equivalence for the oracle user (between the two nodes)
rac10g01 node:
[root@rac10g01 ~]# su - oracle
[oracle@rac10g01 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Created directory '/home/oracle/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
cb:63:66:59:9d:7a:68:8f:4c:de:83:c4:f3:f4:19:97 oracle@rac10g01
[oracle@rac10g01 ~]$ cd .ssh/
[oracle@rac10g01 .ssh]$ scp id_rsa.pub oracle@rac10g02:/home/oracle/
The authenticity of host 'rac10g02 (192.168.3.40)' can't be established.
RSA key fingerprint is 43:20:d8:f6:01:f1:e0:c0:9a:5f:6c:e2:f8:76:3e:3a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac10g02,192.168.3.40' (RSA) to the list of known hosts.
oracle@rac10g02's password:
id_rsa.pub                                    100%  225     0.2KB/s   00:00
[oracle@rac10g01 .ssh] $
rac10g02 node:
[oracle@rac10g02 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Created directory '/home/oracle/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
cb:63:66:59:9d:7a:68:8f:4c:de:83:c4:f3:f4:19:97 oracle@rac10g02
[oracle@rac10g02 ~]$ cat id_rsa.pub >> .ssh/id_rsa.pub
[oracle@rac10g02 ~]$ mv .ssh/id_rsa.pub .ssh/authorized_keys
[oracle@rac10g02 ~]$ scp .ssh/authorized_keys oracle@rac10g01:/home/oracle/.ssh/
[oracle@rac10g01 .ssh]$ ssh rac10g01 date -- test from both nodes; every name must connect without a password prompt
Sat Nov 12 22:39:54 CST 2016
[oracle@rac10g01 .ssh] $ssh rac10g02 date
Sat Nov 12 22:40:03 CST 2016
[oracle@rac10g01 .ssh] $ssh rac10g02-priv date
Sat Nov 12 22:40:07 CST 2016
[oracle@rac10g01 .ssh] $ssh rac10g01-priv date
Sat Nov 12 22:40:04 CST 2016
[oracle@rac10g01 .ssh] $
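The four `ssh … date` checks above can be looped, which also makes a lingering password prompt fail fast instead of hanging. A sketch; BatchMode is a standard OpenSSH option that refuses interactive prompts, and the ssh command is parameterized so the loop can be dry-run with a local stub:

```shell
# check_equiv [SSHCMD] -- run "date" on every cluster name; any name that
# still prompts for a password (or is unreachable) is reported as FAILED.
check_equiv() {
    sshcmd="${1:-ssh -o BatchMode=yes -o ConnectTimeout=5}"
    rc=0
    for h in rac10g01 rac10g02 rac10g01-priv rac10g02-priv; do
        if $sshcmd "$h" date >/dev/null 2>&1; then
            echo "$h: ok"
        else
            echo "$h: FAILED"
            rc=1
        fi
    done
    return "$rc"
}

# Dry run: a stub "ssh" that just runs the command locally.
stub_ssh() { shift; "$@"; }
check_equiv stub_ssh
```

Run `check_equiv` with no argument, as the oracle user, on both nodes; all four names must report ok.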
11. Add local disks, install the oracleasm packages, and set up ASM automatic storage management
[root@rac10g01 Desktop]# rpm -ivh oracleasm-support-2.1.7-1.el4.x86_64.rpm
Preparing...                ########################################### [100%]
   1:oracleasm-support      ########################################### [100%]
[root@rac10g01 Desktop]# rpm -ivh oracleasm-2.6.9-89.35.1.EL-2.0.5-1.el4.x86_64.rpm
Preparing...                ########################################### [100%]
   1:oracleasm-2.6.9-89.35.1########################################### [100%]
[root@rac10g01 Desktop]# rpm -ivh oracleasmlib-2.0.4-1.el4.x86_64.rpm
Preparing...                ########################################### [100%]
   1:oracleasmlib           ########################################### [100%]
[root@rac10g01 Desktop]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface [oracle]: oracle
Default group to own the driver interface [dba]: dba
Start Oracle ASM library driver on boot (y/n) [y]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [OK]
Scanning the system for Oracle ASMLib disks: [OK]
[root@rac10g01 Desktop]# fdisk /dev/sdb -- partition the disk
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only
until you decide to write them. After that, of course, the previous
content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-130, default 1):
Using default value 1
Last cylinder or + size or + sizeM or + sizeK (1-130, default 130):
Using default value 130
Command (m for help): p
Disk /dev/sdb: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         130     1044193+  83  Linux
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@rac10g01 ~]# raw /dev/raw/raw1 /dev/sdb1 -- used by the voting disk and OCR
/dev/raw/raw1: bound to major 8, minor 17
[root@rac10g01 ~]# raw /dev/raw/raw2 /dev/sdc1
/dev/raw/raw2: bound to major 8, minor 33
[root@rac10g01 ~]# raw /dev/raw/raw3 /dev/sdd1
/dev/raw/raw3: bound to major 8, minor 49
[root@rac10g01 ~]# raw /dev/raw/raw4 /dev/sde1
/dev/raw/raw4: bound to major 8, minor 65
[root@rac10g01 ~]# vim /etc/sysconfig/rawdevices
/dev/raw/raw1 /dev/sdb1
/dev/raw/raw2 /dev/sdc1
/dev/raw/raw3 /dev/sdd1
/dev/raw/raw4 /dev/sde1
[root@rac10g01 ~]# chown -R oracle:oinstall /dev/raw
[root@rac10g01 ~]# ll /dev/raw/raw*
crw-rw---- 1 oracle oinstall 162, 1 Nov 13 00:49 /dev/raw/raw1
crw-rw---- 1 oracle oinstall 162, 2 Nov 13 00:50 /dev/raw/raw2
crw-rw---- 1 oracle oinstall 162, 3 Nov 13 00:50 /dev/raw/raw3
crw-rw---- 1 oracle oinstall 162, 4 Nov 13 00:50 /dev/raw/raw4
[root@rac10g01 ~] # udevstart
[root@rac10g01 Desktop]# /etc/init.d/oracleasm createdisk VOL05 /dev/sdf1 -- all other disks are done the same way
Marking disk "VOL05" as an ASM disk: [OK]
[root@rac10g01 Desktop]# /etc/init.d/oracleasm createdisk VOL06 /dev/sdg1
Marking disk "VOL06" as an ASM disk: [OK]
[root@rac10g01 Desktop]# /etc/init.d/oracleasm listdisks
VOL05
VOL06
[root@rac10g01 Desktop] #
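With more than a couple of volumes, the createdisk calls can be driven from one list instead of typed one by one. A sketch; passing `echo` in place of the oracleasm script prints the calls instead of executing them, so the pairing can be checked first:

```shell
# label_disks OASM NAME DEV [NAME DEV ...] -- run "OASM createdisk" for
# each name/device pair, stopping at the first failure.
label_disks() {
    oasm=$1; shift
    while [ $# -ge 2 ]; do
        "$oasm" createdisk "$1" "$2" || return 1
        shift 2
    done
}

# Real call (on node 1 only; node 2 just runs scandisks afterwards):
#   label_disks /etc/init.d/oracleasm VOL05 /dev/sdf1 VOL06 /dev/sdg1
# Dry run that only prints the calls:
label_disks echo VOL05 /dev/sdf1 VOL06 /dev/sdg1
```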
rac10g02 node:
[root@rac10g02 Desktop]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks:
[root@rac10g02 Desktop]# /etc/init.d/oracleasm listdisks
VOL05
VOL06
[root@rac10g02 Desktop] #
12. Install the clusterware (run the pre-installation checks first)
[oracle@rac10g01 ~]$ cd clusterware/cluvfy/
[oracle@rac10g01 cluvfy]$ ./runcluvfy.sh stage -pre crsinst -n rac10g01,rac10g02 -verbose
rac10g01 node:
[root@rac10g01 ~]# sh /u01/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/oracle/oraInventory to 770.
Changing groupname of /u01/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@rac10g01 ~]# sh /u01/oracle/product/10.2.0.1/<TAB>
crs_1/  db_1/
[root@rac10g01 ~]# sh /u01/oracle/product/10.2.0.1/crs_1/root.sh
WARNING: directory '/u01/oracle/product/10.2.0.1' is not owned by root
WARNING: directory '/u01/oracle/product' is not owned by root
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/oracle/product/10.2.0.1' is not owned by root
WARNING: directory '/u01/oracle/product' is not owned by root
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
Node:
Node 1: rac10g01 rac10g01-priv rac10g01
Node 2: rac10g02 rac10g02-priv rac10g02
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw3
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac10g01
CSS is inactive on these nodes.
        rac10g02
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@rac10g01 ~] #
rac10g02 node:
[root@rac10g02 ~]# sh /u01/oracle/oraInventory/orainstRoot.sh
Changing permissions of / u01/oracle/oraInventory to 770.
Changing groupname of / u01/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@rac10g02 ~] #
Before running root.sh on the rac10g02 node, modify the vipca and srvctl files (on both nodes):
[oracle@rac10g02 ~]$ vim /u01/oracle/product/10.2.0.1/crs_1/bin/vipca
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
unset LD_ASSUME_KERNEL -- add this line below the export
[oracle@rac10g02 ~]$ vim /u01/oracle/product/10.2.0.1/crs_1/bin/srvctl
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
unset LD_ASSUME_KERNEL -- add this line below the export
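The same two-file edit can be scripted with sed instead of vim. A sketch; `patch_wrappers` takes the bin directory as an argument so the change can be rehearsed on copies first, and it relies on GNU sed's `a` (append) command and `-i` in-place editing:

```shell
# patch_wrappers BINDIR -- append "unset LD_ASSUME_KERNEL" after the
# "export LD_ASSUME_KERNEL" line in the vipca and srvctl wrappers.
patch_wrappers() {
    bindir=$1
    for f in vipca srvctl; do
        [ -f "$bindir/$f" ] || continue
        sed -i '/^export LD_ASSUME_KERNEL/a unset LD_ASSUME_KERNEL' \
            "$bindir/$f"
    done
}

# On both nodes:
#   patch_wrappers /u01/oracle/product/10.2.0.1/crs_1/bin
```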
[root@rac10g02 ~]# sh /u01/oracle/product/10.2.0.1/crs_1/root.sh
WARNING: directory '/u01/oracle/product/10.2.0.1' is not owned by root
WARNING: directory '/u01/oracle/product' is not owned by root
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/oracle/product/10.2.0.1' is not owned by root
WARNING: directory '/u01/oracle/product' is not owned by root
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
Node:
Node 1: rac10g01 rac10g01-priv rac10g01
Node 2: rac10g02 rac10g02-priv rac10g02
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw3
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
/ etc/profile: line 61: ulimit: open files: cannot modify limit: Operation not permitted
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac10g01
        rac10g02
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
The given interface(s), "eth0" is not public. Public interfaces should be used to configure virtual IPs.
[root@rac10g02 ~] #
If the following error occurs, set the public and cluster-interconnect interfaces manually:
Error 0 (Native: listNetInterfaces: [3])
[root@rac10g02 ~]# /u01/oracle/product/10.2.0.1/crs_1/bin/oifcfg setif -global eth0/192.168.3.0:public
[root@rac10g02 ~]# /u01/oracle/product/10.2.0.1/crs_1/bin/oifcfg setif -global eth2/10.0.0.0:cluster_interconnect
[root@rac10g02 ~]# /u01/oracle/product/10.2.0.1/crs_1/bin/oifcfg getif
eth0 192.168.3.0 global public
eth2 10.0.0.0 global cluster_interconnect
[root@rac10g02 ~]#
Then run vipca on the rac10g01 node to create the virtual IP addresses:
[root@rac10g01 ~]# /u01/oracle/product/10.2.0.1/crs_1/bin/vipca
Exit the vipca window when it finishes, then continue with the cluster software installation.
13. Verify that the cluster software installed successfully
rac10g01 node:
[oracle@rac10g01 ~]$ crs_stat -t
Name Type Target State Host
ora....g01.gsd application ONLINE ONLINE rac10g01
ora....g01.ons application ONLINE ONLINE rac10g01
ora....g01.vip application ONLINE ONLINE rac10g01
ora....g02.gsd application ONLINE ONLINE rac10g02
ora....g02.ons application ONLINE ONLINE rac10g02
ora....g02.vip application ONLINE ONLINE rac10g02
[oracle@rac10g01 ~] $
rac10g02 node:
[oracle@rac10g02 ~]$ crs_stat -t
Name Type Target State Host
ora....g01.gsd application ONLINE ONLINE rac10g01
ora....g01.ons application ONLINE ONLINE rac10g01
ora....g01.vip application ONLINE ONLINE rac10g01
ora....g02.gsd application ONLINE ONLINE rac10g02
ora....g02.ons application ONLINE ONLINE rac10g02
ora....g02.vip application ONLINE ONLINE rac10g02
[oracle@rac10g02 ~] $
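Eyeballing the table works for six resources, but the check can also be made mechanical. A sketch; `all_online` reads `crs_stat -t` style output on stdin and fails if any application resource's Target/State pair is not ONLINE/ONLINE:

```shell
# all_online -- read "crs_stat -t" output on stdin; print any application
# resource whose Target/State pair is not ONLINE/ONLINE and exit non-zero.
all_online() {
    awk 'NR > 1 && $2 == "application" && ($3 != "ONLINE" || $4 != "ONLINE") {
             bad++; print "not online:", $1
         }
         END { exit bad ? 1 : 0 }'
}

# On a node:  crs_stat -t | all_online && echo "cluster healthy"
# Dry run against a canned line of output:
printf 'Name Type Target State Host\nora....g01.vip application ONLINE ONLINE rac10g01\n' | all_online && echo "cluster healthy"
```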
Oracle database installation address: http://tongcheng.blog.51cto.com/6214144/1872299