Shulou (Shulou.com), SLTechnology News & Howtos > Database, updated 2025-01-15
This article explains how to modify the IP addresses of an Oracle RAC cluster. The procedure is not widely known, so it is shared here for reference; I hope you learn a lot from it.
Oracle release: 11.2.0.4.0
Netmask used in this experiment: 255.255.255.0
Scenario 1: modify the host IP, VIP, and SCAN (NIC names unchanged)
Scenario 2: modify the private IP
Scenario 3: modify the public NIC name, keeping the IP address unchanged
Scenario 4: modify the private NIC name, keeping the IP address unchanged
Scenario 5: modify the public NIC name and IP
Scenario 6: modify the private NIC name and IP
Appendix: subnet mask address calculation
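The appendix's subnet-mask arithmetic comes down to one operation: the network address is the bitwise AND of each octet of the IP with the corresponding octet of the netmask. A minimal bash sketch (the `net_addr` function name is mine, not from the original):

```shell
#!/bin/bash
# Network address = IP AND netmask, octet by octet.
net_addr() {
    local IFS=.
    read -r a b c d <<< "$1"      # IP octets
    read -r m1 m2 m3 m4 <<< "$2"  # mask octets
    echo "$((a & m1)).$((b & m2)).$((c & m3)).$((d & m4))"
}

net_addr 172.16.0.136 255.255.255.0   # -> 172.16.0.0  (new VIP subnet)
net_addr 10.168.0.123 255.255.255.0   # -> 10.168.0.0  (old private subnet)
```

These are exactly the subnet values that `oifcfg setif` expects (e.g. `eth4/172.16.0.0:public`).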
* *
* *
Backups (on both hosts):
The network configuration lives not only in the OCR but also in the GPnP profile, so back up profile.xml first:
[grid@host01 peer]$ pwd
/u01/11.2.0/grid/gpnp/host01/profiles/peer
[grid@host01 peer]$ ls
pending.xml  profile.old  profile_orig.xml  profile.xml
[grid@host01 peer]$ cp profile.xml profile.xml.bak
[root@host02 ~]#
[root@host02 ~]# cd /u01/11.2.0/grid/gpnp/host02/profiles/peer/
[root@host02 peer]# ls
profile_orig.xml  profile.xml
[root@host02 peer]# cp profile.xml profile.xml.bak
OCR backup:
[root@host01]# /u01/11.2.0/grid/bin/ocrconfig -manualbackup
host02 2018-03-29 08:48:19 /u01/11.2.0/grid/cdata/host-cluster/backup_20180329_084819.ocr
[root@host01]# /u01/11.2.0/grid/bin/ocrconfig -showbackup
host02 2018-03-29 06:08:55 /u01/11.2.0/grid/cdata/host-cluster/backup00.ocr
host02 2018-03-29 02:08:55 /u01/11.2.0/grid/cdata/host-cluster/backup01.ocr
host02 2018-03-28 22:08:54 /u01/11.2.0/grid/cdata/host-cluster/backup02.ocr
host02 2018-03-28 22:08:54 /u01/11.2.0/grid/cdata/host-cluster/day.ocr
host02 2018-03-28 22:08:54 /u01/11.2.0/grid/cdata/host-cluster/week.ocr
host02 2018-03-29 08:48:19 /u01/11.2.0/grid/cdata/host-cluster/backup_20180329_084819.ocr
Backup of the hosts configuration file:
cp /etc/hosts /tmp/hosts.bak0329
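Before touching the network, it is worth confirming that the backups taken above actually exist. A quick sketch using the paths from this environment (the `manual` filter to `ocrconfig -showbackup` limits output to manual backups):

```shell
# Sketch: sanity-check the backups before any network change.
ls -l /u01/11.2.0/grid/gpnp/host01/profiles/peer/profile.xml.bak
ls -l /tmp/hosts.bak0329
/u01/11.2.0/grid/bin/ocrconfig -showbackup manual
```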
* *
Scenario 1
* *
Modify the host IP, VIP, and SCAN (NIC names remain the same)
Before modification:
# 192.168.0.35 host01
# 192.168.0.36 host01-vip
# 192.168.0.38 host02
# 192.168.0.39 host02-vip
# 192.168.0.40 scan
After modification:
172.16.0.135 host01
172.16.0.136 host01-vip
172.16.0.138 host02
172.16.0.139 host02-vip
172.16.0.140 scan
1. Cleanly shut down the database, the listener, and CRS
# /u01/11.2.0/grid/bin/crsctl stop crs
2. Modify the /etc/hosts configuration file
172.16.0.135 host01
172.16.0.136 host01-vip
172.16.0.138 host02
172.16.0.139 host02-vip
172.16.0.140 scan
3. Change the public NIC's address at the OS layer
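On RHEL-style systems this OS-layer change is typically made in the interface's ifcfg file before restarting the interface. A hypothetical example for host01 (the network-scripts layout and keys are assumptions; adjust for your distribution):

```shell
# Hypothetical /etc/sysconfig/network-scripts/ifcfg-eth4 on host01 after the
# change (RHEL-style layout is an assumption, not from the original article):
DEVICE=eth4
BOOTPROTO=static
IPADDR=172.16.0.135
NETMASK=255.255.255.0
ONBOOT=yes
```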
4. Start CRS
# /u01/11.2.0/grid/bin/crsctl start crs
5. Modify the public IP
[grid@host01 ~]$ oifcfg iflist
eth4 172.16.0.0
eth5 10.168.0.0
eth5 169.254.0.0
[grid@host01 ~]$ oifcfg getif
eth4 192.168.0.0 global public
eth5 10.168.0.0 global cluster_interconnect
[grid@host01 ~]$ oifcfg delif -global eth4
[grid@host01 ~]$ oifcfg setif -global eth4/172.16.0.0:public
[grid@host01 ~]$ oifcfg getif
eth5 10.168.0.0 global cluster_interconnect
eth4 172.16.0.0 global public
6. Modify the VIPs (stop the database and listener first)
srvctl stop vip -n host01
srvctl stop vip -n host02
# run as the root user:
srvctl modify nodeapps -n host01 -A 172.16.0.136/255.255.255.0/eth4
srvctl modify nodeapps -n host02 -A 172.16.0.139/255.255.255.0/eth4
[root@host01]# /u01/11.2.0/grid/bin/srvctl config vip -n host01
VIP exists: /172.16.0.136/172.16.0.136/172.16.0.0/255.255.255.0/eth4, hosting node host01
[root@host01]# /u01/11.2.0/grid/bin/srvctl config vip -n host02
VIP exists: /host02-vip/172.16.0.139/172.16.0.0/255.255.255.0/eth4, hosting node host02
/u01/11.2.0/grid/bin/srvctl start vip -n host01
/u01/11.2.0/grid/bin/srvctl start vip -n host02
Confirm the local_listener setting:
Check local_listener on each node and modify it only if it is incorrect; in this experiment it was already correct.
-- on each of the two nodes, confirm:
show parameter local_listener
-- to modify, if needed:
alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.136)(PORT=1521))' sid='orcl1';
alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.139)(PORT=1521))' sid='orcl2';
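One way to run the per-node check from the shell (a sketch; the SIDs orcl1/orcl2 follow the alter system statements above):

```shell
# Sketch: confirm local_listener on node 1; repeat with ORACLE_SID=orcl2 on
# node 2. Run as the oracle software owner.
export ORACLE_SID=orcl1
sqlplus -s / as sysdba <<'EOF'
show parameter local_listener
EOF
```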
7. Modify the SCAN VIP
srvctl stop scan_listener
srvctl stop scan
srvctl status scan_listener
srvctl status scan
# run as root:
/u01/11.2.0/grid/bin/srvctl modify scan -n scan
/u01/11.2.0/grid/bin/srvctl config scan
SCAN name: scan, Network: 1/172.16.0.0/255.255.255.0/eth4
SCAN VIP name: scan1, IP: /scan/172.16.0.140
/u01/11.2.0/grid/bin/srvctl start scan
/u01/11.2.0/grid/bin/srvctl start scan_listener
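After restarting SCAN, a short verification sketch (not in the original procedure) confirms the SCAN VIP and listener are online and that the SCAN name resolves to the new address:

```shell
# Sketch: verify SCAN after the change.
/u01/11.2.0/grid/bin/srvctl status scan
/u01/11.2.0/grid/bin/srvctl status scan_listener
ping -c 1 scan    # should answer from 172.16.0.140
```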
* *
Scenario 2
* *
Modify the private IP
Before modification:
10.168.0.123/24 -- eth5
10.168.0.124/24 -- eth5
After modification:
100.16.0.23/24 -- eth5
100.16.0.24/24 -- eth5
1. Add the new network information
[grid@host01 ~]$ oifcfg getif
eth5 10.168.0.0 global cluster_interconnect
eth4 192.168.0.0 global public
[grid@host01 ~]$
## The new entry keeps the same NIC name but uses a different subnet:
[grid@host01 ~]$ oifcfg setif -global eth5/100.16.0.0:cluster_interconnect
[grid@host01 ~]$ oifcfg getif
eth5 10.168.0.0 global cluster_interconnect
eth5 100.16.0.0 global cluster_interconnect
eth4 172.16.0.0 global public
2. Stop CRS
# run as root
/u01/11.2.0/grid/bin/crsctl stop crs
3. Modify the host IP address
4. Start CRS
/u01/11.2.0/grid/bin/crsctl start crs
5. Delete the old network information
[grid@host01 ~]$ oifcfg delif -global eth5/10.168.0.0
[grid@host01 ~]$ oifcfg getif
eth5 100.16.0.0 global cluster_interconnect
eth6 172.16.0.0 global public
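Once the old subnet entry is removed, a hedged check (a sketch, not part of the original steps) that both instances actually picked up the new private network:

```shell
# Sketch: gv$cluster_interconnects shows the interconnect each instance is
# using. Run from either node as the oracle software owner.
sqlplus -s / as sysdba <<'EOF'
select inst_id, name, ip_address from gv$cluster_interconnects;
EOF
```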
* *
Scenario 3
* *
Rename the public NIC: eth4 => eth6
1. Modify the NIC information
[grid@host01 ~]$ oifcfg getif
eth5 10.168.0.0 global cluster_interconnect
eth5 100.16.0.0 global cluster_interconnect
eth4 172.16.0.0 global public
[grid@host01 ~]$ oifcfg iflist
eth6 172.16.0.0
eth5 100.16.0.0
eth5 169.254.0.0
[grid@host01 ~]$ oifcfg setif -global eth6/172.16.0.0:public
[grid@host01 ~]$ oifcfg getif
eth5 10.168.0.0 global cluster_interconnect
eth5 100.16.0.0 global cluster_interconnect
eth4 172.16.0.0 global public
eth6 172.16.0.0 global public
[grid@host01 ~]$ oifcfg delif -global eth4/172.16.0.0:public
[grid@host01 ~]$ oifcfg getif
eth5 10.168.0.0 global cluster_interconnect
eth5 100.16.0.0 global cluster_interconnect
eth6 172.16.0.0 global public
2. Stop CRS
3. Rename the NIC at the operating-system level
4. Start CRS
5. Modify the VIP and SCAN
# run as root
srvctl stop vip -n host01
srvctl stop vip -n host02
srvctl modify nodeapps -n host01 -A 172.16.0.136/255.255.255.0/eth6
srvctl modify nodeapps -n host02 -A 172.16.0.139/255.255.255.0/eth6
[root@host01 host01]# /u01/11.2.0/grid/bin/srvctl config vip -n host01
VIP exists: /172.16.0.136/172.16.0.136/172.16.0.0/255.255.255.0/eth6, hosting node host01
[root@host01 host01]# /u01/11.2.0/grid/bin/srvctl config vip -n host02
VIP exists: /host02-vip/172.16.0.139/172.16.0.0/255.255.255.0/eth6, hosting node host02
/u01/11.2.0/grid/bin/srvctl start vip -n host01
/u01/11.2.0/grid/bin/srvctl start vip -n host02
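After the VIPs restart, a quick status sweep (a sketch; both commands are standard 11.2 tools) confirms the cluster resources are back online on the renamed NIC:

```shell
# Sketch: verify cluster resources after the NIC rename.
/u01/11.2.0/grid/bin/crsctl stat res -t
/u01/11.2.0/grid/bin/srvctl status nodeapps
```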
* *
Scenario 4
* *
Rename the private NIC: eth5 => eth4
1. Add the new network information
[grid@host01 ~]$ oifcfg setif -global eth4/100.16.0.0:cluster_interconnect
[grid@host01 ~]$ oifcfg getif
eth5 100.16.0.0 global cluster_interconnect
eth6 172.16.0.0 global public
eth4 100.16.0.0 global cluster_interconnect
2. Stop CRS
3. Modify the host NIC information
4. Start CRS
5. Delete the redundant network information
[grid@host01 ~]$ oifcfg getif
eth5 100.16.0.0 global cluster_interconnect
eth6 172.16.0.0 global public
eth4 100.16.0.0 global cluster_interconnect
[grid@host01 ~]$ oifcfg delif -global eth5/100.16.0.0
[grid@host01 ~]$ oifcfg getif
eth6 172.16.0.0 global public
eth4 100.16.0.0 global cluster_interconnect
====================
Before modification:
[root@host02 network-scripts]# ip a s
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth6: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:8e:06:87 brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.138/24 brd 172.16.0.255 scope global eth6
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe8e:687/64 scope link
       valid_lft forever preferred_lft forever
3: eth4: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 00:50:56:8e:8c:39 brd ff:ff:ff:ff:ff:ff
4: eth5: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:8e:2d:83 brd ff:ff:ff:ff:ff:ff
    inet 100.16.0.24/24 brd 100.16.0.255 scope global eth5
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe8e:2d83/64 scope link
       valid_lft forever preferred_lft forever
====================
After modification:
[root@host02 network-scripts]# ip a s
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth6: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:8e:06:87 brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.138/24 brd 172.16.0.255 scope global eth6
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe8e:687/64 scope link
       valid_lft forever preferred_lft forever
3: eth4: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:8e:8c:39 brd ff:ff:ff:ff:ff:ff
    inet 100.16.0.24/24 brd 100.16.0.255 scope global eth4
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe8e:8c39/64 scope link
       valid_lft forever preferred_lft forever
4: eth5: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 00:50:56:8e:2d:83 brd ff:ff:ff:ff:ff:ff
====================
* *
Scenario 5
* *
Modify the public NIC name and IP
Before modification: