Contents
Instructions
1 OS environment check
2 Disable THP and enable HugePages
2.1 Disable transparent huge pages
2.2 Enable HugePages
3 Install packages
3.1 Red Hat Enterprise Linux 7 packages
3.2 Other packages
4 Kernel parameters
4.1 Configure kernel parameters with the Preinstall RPM
4.2 Configure parameters manually
4.3 CVU (optional)
5 Network configuration
5.1 Fixed configuration
5.2 GNS + fixed configuration
6 Other configuration
6.1 Miscellaneous operating system configuration
6.2 Clock synchronization
6.3 Additional configuration for NAS storage
6.4 I/O Scheduler
6.5 SSH timeout limit
6.6 User, group, and directory configuration
6.7 Graphical interface configuration
6.8 limits.conf
6.9 Disable X11 forwarding
6.10 Direct NFS
6.11 Oracle Member Cluster
6.12 Manually configure ASM disks with UDEV
7 gridSetup.sh
7.1 gridSetup.sh
7.2 runInstaller
7.3 Patch 19.3 to 19.5.1
7.4 DBCA
Oracle 19c RAC on Linux installation manual: Instructions
Starting with Oracle Grid Infrastructure 12c Release 1 (12.1), as part of an Oracle Flex Cluster installation, Oracle ASM is configured within Oracle Grid Infrastructure to provide storage services.
Starting with Oracle Grid Infrastructure 19c, with Oracle Standalone Clusters, you can again place OCR and voting disk files directly on shared file systems.
Oracle Flex Clusters
Starting with Oracle Grid Infrastructure 12c Release 2 (12.2), Oracle Grid Infrastructure cluster configurations are Oracle Flex Cluster deployments.
Since 12.2, clusters are deployed in one of two modes: Standalone Cluster or Domain Services Cluster.
Standalone Cluster:
- Supports up to 64 nodes.
- Each node connects directly to the shared storage.
- Shared storage is mounted on each node through that node's ASM instance or a shared file system.
- The GIMR is managed locally.
- In 19c, a Standalone Cluster can choose whether or not to configure a GIMR.
- VIPs and SCANs can be configured either through GNS or manually.
Domain Services Cluster:
- One or more nodes form the Domain Services Cluster (DSC).
- One or more nodes form a Database Member Cluster.
- (Optional) one or more nodes form an Application Member Cluster.
- A centralized Grid Infrastructure Management Repository (provides the MGMTDB for each cluster in the Oracle Cluster Domain).
- A Trace File Analyzer (TFA) service for targeted diagnostic data collection for Oracle Clusterware and Oracle Database.
- A consolidated Oracle ASM storage management service.
- An optional Rapid Home Provisioning (RHP) service for installing clusters and for provisioning, patching, and upgrading Oracle Grid Infrastructure and Oracle Database homes. When you configure an Oracle Domain Services Cluster, you can also choose to configure a Rapid Home Provisioning Server.
These centralized services can be used by the member clusters in the Cluster Domain (Database Member Clusters or Application Member Clusters).
Storage access in a Domain Services Cluster:
ASM on the DSC provides a centralized storage management service, and Member Clusters can access the shared storage on the DSC in two ways:
- Direct physical connection to the shared storage.
- Over the network path, using the ASM IO Service.
All nodes in a single Member Cluster must access the shared storage in the same way, and a Domain Services Cluster can serve multiple Member Clusters.
1 OS environment check
Item: RAM
Requirement: at least 8 GB
Check: # grep MemTotal /proc/meminfo

Item: Run level
Requirement: 3 or 5
Check: # runlevel

Item: Linux version
Requirement:
Oracle Linux 7.4 with the Unbreakable Enterprise Kernel 4: 4.1.12-112.16.7.el7uek.x86_64 or later
Oracle Linux 7.4 with the Unbreakable Enterprise Kernel 5: 4.14.35-1818.1.6.el7uek.x86_64 or later
Oracle Linux 7.4 with the Red Hat Compatible Kernel: 3.10.0-693.5.2.0.1.el7.x86_64 or later
Red Hat Enterprise Linux 7.4: 3.10.0-693.5.2.0.1.el7.x86_64 or later
SUSE Linux Enterprise Server 12 SP3: 4.4.103-92.56-default or later
Check: # uname -mr; # cat /etc/redhat-release

Item: /tmp
Requirement: at least 1 GB
Check: # du -h /tmp

Item: Swap
Requirement: RAM between 4 GB and 16 GB: swap equal to RAM; RAM above 16 GB: 16 GB of swap. If HugePages is enabled, subtract the memory allocated to HugePages from the RAM used for this calculation.
Check: # grep SwapTotal /proc/meminfo

Item: /dev/shm
Requirement: check the mount type and permissions of /dev/shm
Check: # df -h /dev/shm

Item: Software space
Requirement: at least 12 GB for the Grid home and at least 10 GB for the Oracle home; reserving 100 GB is recommended. Starting with 19c, the GIMR is optional for a Standalone Cluster installation.
Check: # df -h /u01
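For convenience, the checks in the table can be run in one pass. A minimal sketch, assuming a RHEL/OL 7 host with the software homes under /u01 (adjust the paths and thresholds to your environment):
# grep MemTotal /proc/meminfo        (expect at least 8 GB)
# runlevel                           (expect 3 or 5)
# uname -mr; cat /etc/redhat-release (compare with the certified kernel list above)
# du -h /tmp                         (at least 1 GB in /tmp)
# grep SwapTotal /proc/meminfo       (sized per the swap rule above)
# df -h /dev/shm                     (check mount type and permissions)
# df -h /u01                         (12 GB grid + 10 GB oracle; 100 GB recommended)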
2 Disable THP and enable HugePages
If you use Oracle Linux, the operating system can be prepared with the Preinstallation RPM. If you install an Oracle Domain Services Cluster, the GIMR must be configured, and its SGA uses roughly 1 GB of huge pages, which must be included in the HugePages calculation. A Standalone Cluster can choose whether or not to configure the GIMR.
2.1 Disable transparent huge pages
# Check whether transparent huge pages are enabled
[root@db-oracle-node1 ~]# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
# Check whether THP defragmentation is enabled
[root@db-oracle-node1 ~]# cat /sys/kernel/mm/transparent_hugepage/defrag
[always] madvise never
Append the "transparent_hugepage=never" kernel parameter to the GRUB_CMDLINE_LINUX option:
# vi /etc/default/grub
GRUB_CMDLINE_LINUX="rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap ... transparent_hugepage=never"
Back up /boot/grub2/grub.cfg and rebuild it with grub2-mkconfig -o:
On BIOS-based machines: # grub2-mkconfig -o /boot/grub2/grub.cfg
On UEFI-based machines: # grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
Restart the system:
# shutdown -r now
Verify that the parameter is in effect:
# cat /proc/cmdline
Note: if THP is still not disabled, refer to http://blog.itpub.net/31439444/viewspace-2674001/ for the remaining steps.
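THP can also be turned off for the running system without waiting for the reboot; a minimal sketch using the same sysfs paths shown above (the setting lasts only until the next reboot):
# echo never > /sys/kernel/mm/transparent_hugepage/enabled
# echo never > /sys/kernel/mm/transparent_hugepage/defrag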
2.2 Enable HugePages
# vim /etc/sysctl.conf
vm.nr_hugepages = xxxx
# sysctl -p
# vim /etc/security/limits.conf
oracle soft memlock xxxxxxxxxxx
oracle hard memlock xxxxxxxxxxx
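The xxxx values depend on the combined SGA of the instances that will run on the node (plus roughly 1 GB for the GIMR SGA on a Domain Services Cluster). A sizing sketch, assuming the default 2 MB hugepage size and a hypothetical 16 GB total SGA:
/* Example only: 16 GB of SGA with 2 MB hugepages
/* vm.nr_hugepages = 16 GB / 2 MB = 8192 (plus a small safety margin)
/* memlock is set in KB and should be at least the hugepage allocation: 16 GB = 16777216 KB
# grep Hugepagesize /proc/meminfo
# grep HugePages_Total /proc/meminfo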
3 Install packages
3.1 Red Hat Enterprise Linux 7 packages
openssh
bc
binutils
compat-libcap1
compat-libstdc++
elfutils-libelf
elfutils-libelf-devel
fontconfig-devel
glibc
glibc-devel
ksh
libaio
libaio-devel
libX11
libXau
libXi
libXtst
libXrender
libXrender-devel
libgcc
librdmacm-devel
libstdc++
libstdc++-devel
libxcb
make
net-tools (for Oracle RAC and Oracle Clusterware)
nfs-utils (for Oracle ACFS)
python (for Oracle ACFS Remote)
python-configshell (for Oracle ACFS Remote)
python-rtslib (for Oracle ACFS Remote)
python-six (for Oracle ACFS Remote)
targetcli (for Oracle ACFS Remote)
smartmontools
sysstat
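The packages above can be installed in one pass with yum. A sketch, assuming a configured RHEL 7/OL 7 repository; exact package names can vary slightly between repositories (compat-libstdc++ is left out of the command because its name usually carries a version suffix, e.g. compat-libstdc++-33):
# yum install -y openssh bc binutils compat-libcap1 elfutils-libelf \
    elfutils-libelf-devel fontconfig-devel glibc glibc-devel ksh libaio libaio-devel \
    libX11 libXau libXi libXtst libXrender libXrender-devel libgcc librdmacm-devel \
    libstdc++ libstdc++-devel libxcb make net-tools nfs-utils python python-configshell \
    python-rtslib python-six targetcli smartmontools sysstat
# rpm -q sysstat    (spot-check that a package is installed)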
3.2 Other packages
Additional drivers and software packages are optional; you can configure PAM, OCFS2, ODBC, and LDAP as needed.
4 Kernel parameters
4.1 Configure kernel parameters with the Preinstall RPM
On Oracle Linux or Red Hat Enterprise Linux, you can configure the OS with the preinstall RPM:
# cd /etc/yum.repos.d/
# wget http://yum.oracle.com/public-yum-ol7.repo
# yum repolist
# yum install oracle-database-preinstall-19c
You can also download the preinstall RPM manually:
http://yum.oracle.com/repo/OracleLinux/OL6/latest/x86_64//
http://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64
The preinstall RPM does the following:
- Creates the oracle user and the oraInventory (oinstall) and OSDBA (dba) groups.
- Sets sysctl.conf and the system startup and driver parameters recommended by Oracle.
- Sets hard and soft user resource limits.
- Sets other recommended parameters, depending on the kernel version.
- Sets numa=off.
4.2 Configure parameters manually
If you do not use the preinstall RPM, you can configure the kernel parameters manually:
# vi /etc/sysctl.d/97-oracledatabase-sysctl.conf
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 4294967295
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
Apply the values to the running system:
# /sbin/sysctl --system
# /sbin/sysctl -a
Set the network port range:
$ cat /proc/sys/net/ipv4/ip_local_port_range
# echo 9000 65500 > /proc/sys/net/ipv4/ip_local_port_range
# /etc/rc.d/init.d/network restart
4.3 CVU (optional)
If you do not use the Oracle Preinstallation RPM, you can use the Cluster Verification Utility. Follow these steps to install CVU:
- Locate the cvuqdisk RPM package, which is in the directory Grid_home/cv/rpm, where Grid_home is the Oracle Grid Infrastructure home directory.
- Copy the cvuqdisk package to each node in the cluster. Ensure that each node is running the same version of Linux.
- Log in as root.
- Use the following command to find out whether an existing version of the cvuqdisk package is installed:
# rpm -qi cvuqdisk
- If an existing version of cvuqdisk is installed, remove it:
# rpm -e cvuqdisk
- Set the environment variable CVUQDISK_GRP to the group that owns cvuqdisk, typically oinstall. For example:
# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
- In the directory where you saved the cvuqdisk RPM, install it with rpm -iv. For example:
# rpm -iv cvuqdisk-1.0.10-1.rpm
- Run the installation verification:
$ ./runcluvfy.sh stage -pre crsinst -fixup -n node1,node2,node3
5 Network configuration
Network configuration notes:
(1) Use either all IPv4 or all IPv6 addresses; GNS can generate IPv6 addresses.
(2) VIP: starting with Oracle Grid Infrastructure 18c, using VIPs is optional for Oracle Clusterware deployments. You can specify VIPs for all or none of the cluster nodes; specifying VIPs for only some of the cluster nodes is not supported.
(3) Private: during installation, up to four private interfaces can be configured as HAIP (Highly Available IP); if more than four are configured, the extra interfaces are used automatically for redundancy. The private interfaces therefore do not need to be bonded, and the cluster makes them highly available automatically.
(4) Public/VIP names: alphanumeric characters and the "-" hyphen are allowed; the "_" underscore is not.
(5) The public, VIP, and SCAN VIP addresses must be in the same subnet.
(6) The public IP must be permanently configured on each node's network interface. VIPs, private IPs, and the SCAN can all be provided by GNS. Without GNS, each of them needs a fixed IP (the SCAN requires three); the address does not have to be pre-assigned to a network card, but it must be fixed and resolvable.
5.1 Fixed configuration
The SCAN is resolved only through DNS; the public, private, and VIP addresses are configured manually as fixed IPs and are entered by hand during installation.
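For reference, a fixed configuration usually resolves the SCAN name to three addresses in round-robin DNS. A minimal zone-file sketch, reusing the hypothetical scan19-vip.rac.libai name and 192.168.204.33-35 addresses that appear (commented out) in the /etc/hosts example below:
scan19-vip.rac.libai.    IN A 192.168.204.33
scan19-vip.rac.libai.    IN A 192.168.204.34
scan19-vip.rac.libai.    IN A 192.168.204.35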
5.2 GNS + fixed configuration
To enable GNS, you need DHCP together with DNS. DNS does not need forward or reverse records for the VIPs and SCAN; the VIP and SCAN names only have to fall within the subdomain delegated to GNS.
/etc/hosts
192.168.204.11 pub19-node1.rac.libai
192.168.204.12 pub19-node2.rac.libai
# private ip
40.40.40.41 priv19-node1.rac.libai
40.40.40.42 priv19-node2.rac.libai
# vip
192.168.204.21 vip19-node1.rac.libai
192.168.204.22 vip19-node2.rac.libai
# scan-vip
# 192.168.204.33 scan19-vip.rac.libai
# 192.168.204.34 scan19-vip.rac.libai
# 192.168.204.35 scan19-vip.rac.libai
# gns-vip
192.168.204.10 gns19-vip.rac.libai
DNS configuration:
[root@19c-node2 limits.d]# yum install -y bind bind-chroot
[root@19c-node2 limits.d]# vi /etc/named.conf
options {
    directory "/var/named";
    dump-file "/var/named/data/cache_dump.db";
    statistics-file "/var/named/data/named_stats.txt";
    memstatistics-file "/var/named/data/named_mem_stats.txt";
    allow-query { any; };    # "any" can be replaced with the network segments allowed to query this DNS server
    recursion yes;
    allow-transfer { none; };
};
zone "." IN {
    type hint;
    file "named.ca";
};
zone "rac.libai" IN {    # forward zone rac.libai
    type master;
    file "named.rac.libai";
};
zone "204.168.192.in-addr.arpa" IN {    # reverse zone for 192.168.204.0/24
    type master;
    file "named.192.168.204";
};
zone "40.40.40.in-addr.arpa" IN {    # reverse zone for 40.40.40.0/24
    type master;
    file "named.40.40.40";
};
/* Edit the forward zone for the public, private, and VIP names
[root@pub19-node2 ~]# vi /var/named/named.rac.libai
$TTL 600
@ IN SOA rac.libai. admin.rac.libai. (
        0  ; serial number
        1D ; refresh
        1H ; retry
        1W ; expire
        3H ) ; minimum
@ IN NS master
master IN A 192.168.204.12
priv19-node1.rac.libai. IN A 40.40.40.41
priv19-node2.rac.libai. IN A 40.40.40.42
pub19-node1.rac.libai. IN A 192.168.204.11
pub19-node2.rac.libai. IN A 192.168.204.12
vip.rac.libai. IN NS gns.rac.libai.
gns.rac.libai. IN A 192.168.204.10
# The last two lines delegate the subdomain vip.rac.libai to gns.rac.libai, whose address is 192.168.204.10.
# This delegation is the key to configuring GNS.
# On the gridSetup.sh SCAN configuration page, the SCAN name (scan19.vip.rac.libai) must fall within the subdomain managed by GNS, i.e. scan19.vip.rac.libai must end in vip.rac.libai.
# In gridSetup.sh, the GNS VIP is 192.168.204.10 and the subdomain is vip.rac.libai.
# Together with DHCP, the VIPs, private addresses, and SCAN can all be assigned through GNS.
From: http://blog.sina.com.cn/s/blog_701a48e70102w6gv.html
# DNS does not need to resolve the SCAN and VIPs; that is handed over to GNS. DHCP must be enabled.
[root@19c-node2 named]# vi named.192.168.204
$TTL 600
@ IN SOA rac.libai. admin.rac.libai. (
        10  ; serial
        3H  ; refresh
        15m ; retry
        1W  ; expire
        1D ) ; minimum
@ IN NS master.rac.libai.
12 IN PTR master.rac.libai.
11 IN PTR pub19-node1.rac.libai.
12 IN PTR pub19-node2.rac.libai.
10 IN PTR gns.rac.libai.
[root@19c-node2 named]# vi named.40.40.40
$TTL 600
@ IN SOA rac.libai. admin.rac.libai. (
        10  ; serial
        3H  ; refresh
        15m ; retry
        1W  ; expire
        1D ) ; minimum
@ IN NS master.rac.libai.
42 IN PTR 19cpriv-node2.rac.libai.
[root@19c-node2 named]# systemctl restart named
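Before relying on these records, the configuration and zone files can be validated and a test query run from a cluster node. A small sketch using the standard bind/bind-utils tools (192.168.204.12 is the DNS server in this example):
# named-checkconf /etc/named.conf
# named-checkzone rac.libai /var/named/named.rac.libai
# named-checkzone 204.168.192.in-addr.arpa /var/named/named.192.168.204
# dig @192.168.204.12 pub19-node1.rac.libai +short
# nslookup gns.rac.libai 192.168.204.12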
[root@19c-node1 software]# yum install -y dhcp
[root@19c-node1 software]# vi /etc/dhcp/dhcpd.conf
# see /usr/share/doc/dhcp*/dhcpd.conf.example
# see dhcpd.conf(5) man page
#
ddns-update-style interim;
ignore client-updates;
subnet 192.168.204.0 netmask 255.255.255.0 {
    option routers 192.168.204.1;
    option subnet-mask 255.255.255.0;
    option nis-domain "rac.libai";
    option domain-name "rac.libai";
    option domain-name-servers 192.168.204.12;
    option time-offset -18000; # Eastern Standard Time
    range dynamic-bootp 192.168.204.21 192.168.204.26;
    default-lease-time 21600;
    max-lease-time 43200;
}
[root@19c-node2 ~]# systemctl enable dhcpd
[root@19c-node2 ~]# systemctl restart dhcpd
[root@19c-node2 ~]# systemctl status dhcpd
/* View the lease file
/var/lib/dhcp/dhcpd.leases
/* Re-obtain a DHCP address for enp0s10
# dhclient -d enp0s10
/* Release the lease
# dhclient -r enp0s10
6 Other configuration
6.1 Miscellaneous operating system configuration
(1) Cluster name:
The cluster name is case-insensitive, must be alphanumeric, may contain the "-" hyphen but not the "_" underscore, and can be at most 15 characters long. After installation, the cluster name can only be changed by reinstalling GI.
(2) /etc/hosts
# public ip
192.168.204.11 pub19-node1.rac.libai
192.168.204.12 pub19-node2.rac.libai
# private ip
40.40.40.41 priv19-node1.rac.libai
40.40.40.42 priv19-node2.rac.libai
# vip
192.168.204.21 vip19-node1.rac.libai
192.168.204.22 vip19-node2.rac.libai
# scan-vip
# 192.168.204.33 scan19.vip.rac.libai
# 192.168.204.34 scan19.vip.rac.libai
# 192.168.204.35 scan19.vip.rac.libai
# gns-vip
192.168.204.10 gns.rac.libai
(3) Operating system hostname
hostnamectl set-hostname pub19-node1.rac.libai --static
hostnamectl set-hostname pub19-node2.rac.libai --static
6.2 Clock synchronization
Ensure that all nodes synchronize time using NTP or CTSS.
Before installation, make sure the clocks on all nodes match. If you use CTSS, you can turn off the native Linux 7 time service as follows.
By default, the NTP service available on Oracle Linux 7 and Red Hat Linux 7 is chronyd.
Deactivating the chronyd service:
To deactivate the chronyd service, stop the existing chronyd service and disable it from the initialization sequence. On Oracle Linux 7 and Red Hat Linux 7, run the following commands as the root user:
# systemctl stop chronyd
# systemctl disable chronyd
Confirming Oracle Cluster Time Synchronization Service after installation:
To confirm that ctssd is active after installation, enter the following command as the Grid installation owner:
$ crsctl check ctss
6.3 Additional configuration for NAS storage
If you use NAS, enabling the Name Service Cache Daemon (nscd) is recommended so that Oracle Clusterware better tolerates network failures of NAS devices and NAS mounts.
# chkconfig --list nscd
# chkconfig --level 35 nscd on
# service nscd start
# service nscd restart
# systemctl --all | grep nscd
6.4 I/O Scheduler
For the best Oracle ASM performance, Oracle recommends the Deadline I/O scheduler.
# cat /sys/block/${ASM_DISK}/queue/scheduler
noop [deadline] cfq
If the default disk I/O scheduler is not Deadline, set it using a rules file:
1. Using a text editor, create a UDEV rules file for the Oracle ASM devices:
# vi /etc/udev/rules.d/60-oracle-schedulers.rules
2. Add the following line to the rules file and save it:
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"
3. On clustered systems, copy the rules file to all other nodes in the cluster. For example:
$ scp 60-oracle-schedulers.rules root@node2:/etc/udev/rules.d/
4. Load the rules file and restart the UDEV service. For example, on Oracle Linux and Red Hat Enterprise Linux:
# udevadm control --reload-rules
5. Verify that the disk I/O scheduler is set to Deadline:
# cat /sys/block/${ASM_DISK}/queue/scheduler
noop [deadline] cfq
6.5 SSH timeout limit
To prevent SSH connections from timing out during installation, set the login grace time to unlimited in /etc/ssh/sshd_config on all cluster nodes:
# vi /etc/ssh/sshd_config
LoginGraceTime 0
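The new LoginGraceTime is only picked up after sshd re-reads its configuration, so a restart of the service (on systemd-based OL7/RHEL7) may be needed; a small sketch:
# systemctl restart sshd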
6.6 User, group, and directory configuration
Check whether an inventory already exists and whether the groups were created previously:
# more /etc/oraInst.loc
$ grep oinstall /etc/group
Do not place the inventory directory under an Oracle base directory, to avoid installation errors caused by permission changes made during installation.
The user and group IDs must be identical on all nodes.
# groupadd -g 54421 oinstall
# groupadd -g 54322 dba
# groupadd -g 54323 oper
# groupadd -g 54324 backupdba
# groupadd -g 54325 dgdba
# groupadd -g 54326 kmdba
# groupadd -g 54327 asmdba
# groupadd -g 54328 asmoper
# groupadd -g 54329 asmadmin
# groupadd -g 54330 racdba
# /usr/sbin/useradd -u 54321 -g oinstall -G dba,asmdba,backupdba,dgdba,kmdba,oper,racdba oracle
# useradd -u 54322 -g oinstall -G asmadmin,asmdba,racdba grid
# id oracle
# id grid
# passwd oracle
# passwd grid
It is recommended to use the OFA directory structure to ensure that the Oracle home directory path contains only ASCII characters.
For a standalone (Oracle Restart) installation, Grid can be installed under the ORACLE_BASE of the Oracle Database software owner; for other installations it cannot.
# mkdir -p /u01/app/19.0.0/grid
# mkdir -p /u01/app/grid
# mkdir -p /u01/app/oracle/product/19.0.0/dbhome_1/
# chown -R grid:oinstall /u01
# chown oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/
Grid user .bash_profile:
# su - grid
$ vi ~/.bash_profile
umask 022
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/19.0.0/grid
export PATH=$PATH:$ORACLE_HOME/bin
export NLS_DATE_FORMAT='yyyy-mm-dd hh24:mi:ss'
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib
$ . ~/.bash_profile
Oracle user .bash_profile:
# su - oracle
$ vi ~/.bash_profile
umask 022
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/19.0.0/dbhome_1
export PATH=$PATH:$ORACLE_HOME/bin
export NLS_DATE_FORMAT='yyyy-mm-dd hh24:mi:ss'
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib
$ . ~/.bash_profile
6.7 Graphical interface configuration
$ xhost + hostname
$ export DISPLAY=local_host:0.0
6.8 limits.conf
The preinstall RPM configures limits only for the oracle user; for the GI installation, copy the oracle entries and adapt them for the grid user.
Check the following for both the oracle and grid users:
File descriptors:
$ ulimit -Sn
$ ulimit -Hn
Number of processes:
$ ulimit -Su
$ ulimit -Hu
Stack size:
$ ulimit -Ss
$ ulimit -Hs
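A sketch of the corresponding /etc/security/limits.conf entries for the grid user; the numbers below are the commonly recommended Oracle values and are given here as assumptions, so align them with whatever the preinstall RPM set for oracle on your system:
grid soft nofile 1024
grid hard nofile 65536
grid soft nproc 16384
grid hard nproc 16384
grid soft stack 10240
grid hard stack 32768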
6.9 Disable X11 forwarding
To ensure that the installation does not fail because of X11 forwarding, create an ssh client configuration in the .ssh directory under the oracle and grid users' home directories:
$ vi ~/.ssh/config
Host *
ForwardX11 no
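ssh ignores a config file that is writable by group or others, so tightening its permissions may save a failed connection; a one-line sketch:
$ chmod 600 ~/.ssh/config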
6.10 Direct NFS
If you use DNFS, you can refer to the documentation to configure DNFS.
6.11 Oracle Member Cluster
If you want to create an Oracle Member Cluster, you must first create a Member Cluster Manifest File on the Oracle Domain Services Cluster. Refer to the following section of the official Oracle Grid Infrastructure Installation and Upgrade Guide:
Creating Member Cluster Manifest File for Oracle Member Clusters
6.12 Manually configure ASM disks with UDEV
/* Get the disk UUID
# /usr/lib/udev/scsi_id -g -u /dev/sdb
/* Write the UDEV rules file
# vi /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd*", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="1ATA_VBOX_HARDDISK_VB9c33adf6-29245311", RUN+="/bin/sh -c 'mknod /dev/asmocr1 b $major $minor; chown grid:asmadmin /dev/asmocr1; chmod 0660 /dev/asmocr1'"
KERNEL=="sd*", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="1ATA_VBOX_HARDDISK_VBb008c422-c636d509", RUN+="/bin/sh -c 'mknod /dev/asmdata1 b $major $minor; chown grid:asmadmin /dev/asmdata1; chmod 0660 /dev/asmdata1'"
KERNEL=="sd*", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="1ATA_VBOX_HARDDISK_VB7d37c0f6-8f45f264", RUN+="/bin/sh -c 'mknod /dev/asmfra1 b $major $minor; chown grid:asmadmin /dev/asmfra1; chmod 0660 /dev/asmfra1'"
/* Copy the UDEV rules file to the other nodes in the cluster
# scp 99-oracle-asmdevices.rules root@node2:/etc/udev/rules.d/99-oracle-asmdevices.rules
/* Reload the udev configuration and test
# /sbin/udevadm trigger --type=devices --action=change
# /sbin/udevadm control --reload
# /sbin/udevadm test /sys/block/sdb
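Once the rules have been applied, it is worth confirming that the device nodes exist with the expected ownership and mode (grid:asmadmin, 0660), using the names created above:
# ls -l /dev/asmocr1 /dev/asmdata1 /dev/asmfra1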
7 gridSetup.sh
$ su root
# export ORACLE_HOME=/u01/app/19.0.0/grid
Use the Oracle ASM command-line tool (ASMCMD) to provision the disk devices for use with Oracle ASM Filter Driver.
[root@19c-node1 grid]# asmcmd afd_label DATA1 /dev/sdb --init
[root@19c-node1 grid]# asmcmd afd_label DATA2 /dev/sdc --init
[root@19c-node1 grid]# asmcmd afd_label DATA3 /dev/sdd --init
[root@19c-node1 grid]# asmcmd afd_lslbl /dev/sdb
[root@19c-node1 grid]# asmcmd afd_lslbl /dev/sdc
[root@19c-node1 grid]# asmcmd afd_lslbl /dev/sdd
7.1 gridSetup.sh
$ unzip LINUX.X64_193000_grid_home.zip -d /u01/app/19.0.0/grid/
$ /u01/app/19.0.0/grid/gridSetup.sh
Problem encountered:
When creating the OCR ASM disk group in the graphical interface, the ASM disks could not be found even though the UDEV configuration was correct. Checking the cfgtoollogs log revealed the following error:
[root@19c-node1 ~]# su - grid
[grid@19c-node1 ~]$ cd $ORACLE_HOME/cfgtoollogs/out/GridSetupActions2020-03-090001-02-16PM
[grid@19c-node1]$ vi gridSetupActions2020-03-09-01-02-16PM.log
INFO: [Mar 9, 2020 1:15:03 PM] Executing [/u01/app/19.0.0/grid/bin/kfod.bin, nohdr=true, verbose=true, disks=all, op=disks, shallow=true, asm_diskstring='/dev/asm*']
INFO: [Mar 9, 2020 1:15:03 PM] Starting Output Reader Threads for process /u01/app/19.0.0/grid/bin/kfod.bin
INFO: [Mar 9, 2020 1:15:03 PM] Parsing Error 49802 initializing ADR
INFO: [Mar 9, 2020 1:15:03 PM] Parsing ERROR!!! Could not initialize the diag context
In the grid user's ORACLE_HOME/cfgtoollogs/out/GridSetupActions2020-03-090001-02-16PM log, the error appears when kfod scans the ASM disk path:
INFO: [Mar 9, 2020 1:15:03 PM] Executing [/u01/app/19.0.0/grid/bin/kfod.bin, nohdr=true, verbose=true, disks=all, status=true, op=disks, asm_diskstring='/dev/asm*']
INFO: [Mar 9, 2020 1:15:03 PM] Starting Output Reader Threads for process /u01/app/19.0.0/grid/bin/kfod.bin
INFO: [Mar 9, 2020 1:15:03 PM] Parsing Error 49802 initializing ADR
INFO: [Mar 9, 2020 1:15:03 PM] Parsing ERROR!!! Could not initialize the diag context
Resolution:
Run the command that precedes the error on its own:
/u01/app/19.0.0/grid/bin/kfod.bin nohdr=true verbose=true disks=all status=true op=disks asm_diskstring='/dev/asm*'
It returned an NLS DATA error, clearly related to the NLS variables set in the .bash_profile environment file. After commenting out the NLS_LANG variable, reloading the profile, and running the command again, everything worked.
[root@pub19-node1 ~]# /u01/app/oraInventory/orainstRoot.sh
[root@pub19-node2 ~]# /u01/app/oraInventory/orainstRoot.sh
[root@pub19-node1 ~]# /u01/app/19.0.0/grid/root.sh
[root@pub19-node2 ~]# /u01/app/19.0.0/grid/root.sh
7.2 runInstaller
[oracle@pub19-node1 dbhome_1]$ unzip LINUX.X64_193000_db_home.zip -d /u01/app/oracle/product/19.0.0/dbhome_1/
[oracle@pub19-node1 dbhome_1]$ ./runInstaller
[oracle@pub19-node1 dbhome_1]$ dbca
Problem encountered:
CRS-5017: The resource action "ora.czhl.db start" encountered the following error:
ORA-12547: TNS:lost contact
For details refer to "(:CLSN00107:)" in "/u01/app/grid/diag/crs/pub19-node2/crs/trace/crsd_oraagent_oracle.trc".
Resolution:
Some directories in the node 2 ORACLE_HOME path had incorrect ownership. After correcting the permissions, the database instance started manually without problems.
[root@pub19-node2 oracle] # chown oracle:oinstall product/
[root@pub19-node2 product] # chown oracle:oinstall 19.0.0
[root@pub19-node2 19.0.0] # chown oracle:oinstall dbhome_1/
[grid@pub19-node2 ~]$ srvctl start instance -db czhl -node pub19-node2.rac.libai
Starting database instances on nodes "pub19-node2.rac.libai"...
Started resources "ora.czhl.db" on node "pub19-node2"
7.3 Patch 19.3 to 19.5.1
Grid user (both nodes must be patched):
# su - grid
$ unzip LINUX.X64_193000_grid_home.zip -d /u01/app/19.0.0/grid/
$ unzip p30464035_190000_Linux-x86-64.zip
Oracle user (both nodes must be patched):
# su - oracle
$ unzip -o p6880880_190000_Linux-x86-64.zip -d /u01/app/oracle/product/19.0.0/dbhome_1/
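p6880880 is the OPatch update; after unpacking it, the OPatch version in each home can be confirmed before running opatchauto (a quick check, run as the respective software owner):
$ /u01/app/19.0.0/grid/OPatch/opatch version
$ /u01/app/oracle/product/19.0.0/dbhome_1/OPatch/opatch version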
Root user:
/* Check the patch level as you go. At this point only the node 1 GI home has been patched; continue with node 2 GI, then node 1 DB, then node 2 DB. Note that opatchauto for a GI home must be run from the GI ORACLE_HOME, and opatchauto for a DB home must be run from the DB ORACLE_HOME.
Node 1:
# /u01/app/19.0.0/grid/OPatch/opatchauto apply /software/30464035/ -oh /u01/app/19.0.0/grid
Node 2:
# /u01/app/19.0.0/grid/OPatch/opatchauto apply /software/30464035/ -oh /u01/app/19.0.0/grid
Node 1:
# /u01/app/oracle/product/19.0.0/dbhome_1/OPatch/opatchauto apply /software/30464035/ -oh /u01/app/oracle/product/19.0.0/dbhome_1
Node 2:
# ls -l /u01/app/oraInventory/ContentsXML/oui-patch.xml # Check this file's permissions now; otherwise the error below is raised, the patch ends up corrupt, and it can be neither rolled back nor applied again cleanly. Fix the permissions, then apply the patch. If an error is reported, opatchauto resume can continue the patch run.
# /u01/app/oracle/product/19.0.0/dbhome_1/OPatch/opatchauto apply /software/30464035/ -oh /u01/app/oracle/product/19.0.0/dbhome_1
Caution:
[Mar 11, 2020 8:56:05 PM] [WARNING] OUI-67124:ApplySession failed in system modification phase... 'ApplySession::apply failed: java.io.IOException: oracle.sysman.oui.patch.PatchException: java.io.FileNotFoundException: /u01/app/oraInventory/ContentsXML/oui-patch.xml (Permission denied)'
Resolution:
/* Grant permissions as indicated by the log output
# chmod 664 /u01/app/oraInventory/ContentsXML/oui-patch.xml
# /u01/app/oracle/product/19.0.0/dbhome_1/OPatch/opatchauto resume /software/30464035/ -oh /u01/app/oracle/product/19.0.0/dbhome_1
If the restore suggested in the log does not work, the patch problem can be resolved as follows:
/* The restore.sh script ultimately failed, so the software had to be copied back manually and the patch rolled back
# /u01/app/oracle/product/19.0.0/dbhome_1/OPatch/opatchauto rollback /software/30464035/ -oh /u01/app/oracle/product/19.0.0/dbhome_1
/* For every file the failure message reports as missing, copy the corresponding file from the unpacked patch directory into the location under ORACLE_HOME, then continue rolling back until the rollback succeeds.
Apply the patch to the node 2 Oracle home again:
# /u01/app/oracle/product/19.0.0/dbhome_1/OPatch/opatchauto apply /software/30464035/ -oh /u01/app/oracle/product/19.0.0/dbhome_1
Verify the patch:
$ /u01/app/19.0.0/grid/OPatch/opatch lsinv
$ /u01/app/oracle/product/19.0.0/dbhome_1/OPatch/opatch lsinv
# su - grid
$ kfod op=patches
$ kfod op=patchlvl
7.4 DBCA