
Using OpenFiler to Simulate ASM Shared Disks and Multipath in a RAC Storage Configuration


Chapter I: Overview

A previous article, "Oracle_lhr_RAC 12cR1 installation", was published earlier, but the storage in it did not use multipathing; it used disks provided by VMware itself. So the last task of the year is to learn multipathing. This article introduces the configuration of OpenFiler, iSCSI, and multipath.

Chapter II: Installation of OpenFiler

OpenFiler is developed on the basis of rPath Linux and is distributed as a stand-alone Linux operating system. It is an excellent open-source, free storage-management operating system administered through a web interface, and it supports the popular network storage technologies IP-SAN and NAS as well as the iSCSI, NFS, SMB/CIFS, and FTP protocols.

The software required for this OpenFiler installation is as follows:

No.  Type       Content
1    Openfiler  Openfileresa-2.99.1-x86_64-disc1.iso

Note: the author has uploaded this software to Tencent Weiyun (http://blog.itpub.net/26736162/viewspace-1624453/), where friends can download it. The author has also uploaded the fully installed virtual machine, with the rlwrap software already integrated, to the cloud disk.

2.1 Installation

The author has not included screenshots of the detailed installation process; several netizens have posted step-by-step walkthroughs online. It does not matter whether the OpenFiler VM gets 1GB of memory or even less. Choose the IDE disk format for the disk. Because multipath will be configured later, install 2 network cards. After the installation completes, restart; the console looks as follows:

Note that the address shown in the box can be opened directly in a browser. The root user login is for system maintenance, while the storage can only be administered by the openfiler user. Openfiler is managed remotely through its web interface; the management address here is https://192.168.59.200:446, the initial user name is openfiler (lowercase), and the password is password. You can change this password after logging in.

2.2 Basic Configuration

2.2.1 Network Card Configuration

Configure static NIC addresses:

[root@OFLHR ~]# more /etc/sysconfig/network-scripts/ifcfg-eth0
# Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE]
DEVICE=eth0
BOOTPROTO=static
BROADCAST=192.168.59.255
HWADDR=00:0C:29:98:1A:CD
IPADDR=192.168.59.200
NETMASK=255.255.255.0
NETWORK=192.168.59.0
ONBOOT=yes
[root@OFLHR ~]# more /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
MTU=1500
USERCTL=no
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.2.200
NETMASK=255.255.255.0
HWADDR=00:0C:29:98:1A:D7
[root@OFLHR ~]# ip a
1: lo: mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:98:1a:cd brd ff:ff:ff:ff:ff:ff
    inet 192.168.59.200/24 brd 192.168.59.255 scope global eth0
    inet6 fe80::20c:29ff:fe98:1acd/64 scope link
       valid_lft forever preferred_lft forever
3: eth2: mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:98:1a:d7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.200/24 brd 192.168.2.255 scope global eth2
    inet6 fe80::20c:29ff:fe98:1ad7/64 scope link
       valid_lft forever preferred_lft forever
[root@OFLHR ~]#

2.2.2 Add a Hard Disk

Add a 100GB IDE hard disk as storage.

[root@OFLHR ~]# fdisk -l

Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000adc2c

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           63      610469      305203+  83  Linux
/dev/sda2           610470    17382329     8385930   83  Linux
/dev/sda3         17382330    19486844     1052257+  82  Linux swap / Solaris

Disk /dev/sdb: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders, total 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn't contain a valid partition table
[root@OFLHR ~]#

2.3 iSCSI Target Configuration

Two hard drives are configured on the openfiler server: the 10GB drive already holds the openfiler operating system, and the 100GB drive will be used for data storage.

2.3.1 Create a Logical Volume

Login address: https://192.168.59.200:446

Initial username and password: openfiler/password

In stand-alone storage devices, the LUN (Logical Unit Number) is the most important basic unit. A LUN can be accessed by any host in the SAN, whether through an HBA or through iSCSI. Even with software-based iSCSI, once the operating system starts, the LUN can be accessed by the software iSCSI initiator of whatever operating system is running. Under OpenFiler a LUN is called a Logical Volume (LV), so creating a LUN under OpenFiler means creating an LV.

After OpenFiler is installed, the next step is to share its disks for use by virtual machines or other hosts on the network. In a standard SAN this is done at the RAID level, but RAID cannot match the benefits and flexibility of a VG. Let's look step by step at how a VG is created under OpenFiler.

To create a VG:

(1) Enter the OpenFiler interface and select the physical hard disk to use.

(2) Format the physical hard disk to be added as a Physical Volume (PV).

(3) Create a VG and add the PV-formatted physical disks to it.

(4) Once added, they form one large VG, which the system treats as one large physical hard disk.

(5) Add a logical partition, the LUN, to this VG; in OpenFiler this is called a Logical Volume.

(6) Specify the file format of the LUN, such as iSCSI, ext3, or NFS, and format it.

(7) If it is iSCSI, it needs further configuration; other file formats can be shared via NAS. (A command-line sketch of these steps follows.)
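For reference, the web-UI steps above correspond roughly to the standard LVM command line. Below is a minimal sketch, not taken from the original environment; it assumes the data-disk partition is /dev/sdb1 and the VG name vmlhr, as used later in this article:

# Rough CLI equivalent of the OpenFiler web steps (assumptions: /dev/sdb1, VG vmlhr).
pvcreate /dev/sdb1              # step (2): format the partition as a PV
vgcreate vmlhr /dev/sdb1        # steps (3)/(4): build the VG from the PV
lvcreate -L 10G -n lv01 vmlhr   # step (5): carve out a 10GB LV, i.e. the LUN
lvs vmlhr                       # verify the new LV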

After logging in, click the Volumes tab.

Click create new physical volumes, and then click /dev/sdb.

Click Reset in the lower right corner of the page, then click Create; the partition type is Physical volume.

Click Volume Groups.

Enter a name, tick the check box, and click Add volume group.

[root@OFLHR ~]# fdisk -l

Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000adc2c

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           63      610469      305203+  83  Linux
/dev/sda2           610470    17382329     8385930   83  Linux
/dev/sda3         17382330    19486844     1052257+  82  Linux swap / Solaris

WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/sdb: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders, total 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1   209715199   104857599+  ee  GPT
[root@OFLHR ~]# pvs
  PV         VG    Fmt  Attr PSize  PFree
  /dev/sdb1  vmlhr lvm2 a-   95.34g 95.34g
[root@OFLHR ~]#

Click Add Volume.

Enter the details, set the volume size to 10G, and select block (iSCSI, FC, etc.) for the volume type.

Create a total of 4 logical volumes in this way:

[root@OFLHR ~]# vgs
  VG    #PV #LV #SN Attr   VSize  VFree
  vmlhr   1   4   0 wz--n- 95.34g 55.34g
[root@OFLHR ~]# pvs
  PV         VG    Fmt  Attr PSize  PFree
  /dev/sdb1  vmlhr lvm2 a-   95.34g 55.34g
[root@OFLHR ~]# lvs
  LV   VG    Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  lv01 vmlhr -wi-a- 10.00g
  lv02 vmlhr -wi-a- 10.00g
  lv03 vmlhr -wi-a- 10.00g
  lv04 vmlhr -wi-a- 10.00g

[root@OFLHR ~]# fdisk -l

Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000adc2c

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           63      610469      305203+  83  Linux
/dev/sda2           610470    17382329     8385930   83  Linux
/dev/sda3         17382330    19486844     1052257+  82  Linux swap / Solaris

WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/sdb: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders, total 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1   209715199   104857599+  ee  GPT

Disk /dev/dm-0: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/dm-0 doesn't contain a valid partition table

Disk /dev/dm-1: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/dm-1 doesn't contain a valid partition table

Disk /dev/dm-2: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/dm-2 doesn't contain a valid partition table

Disk /dev/dm-3: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/dm-3 doesn't contain a valid partition table
[root@OFLHR ~]#

2.3.2 Enable the iSCSI Target Service

Click the Services tab, set iSCSI Target to Enable, and start the service.

2.3.3 LUN Mapping Operation

Go back to the Volumes tab and click iSCSI Targets

Click Add

Select the LUN Mapping tab and click Map

2.3.4 Network ACL

Since iSCSI runs over the IP network, we need to allow computers on the network to access it by IP. Here is how to open the OpenFiler storage network to other hosts on the same segment.

1. Enter System in OpenFiler and scroll to the bottom of the page.

2. Enter a name for the network access entry under Network Access Configuration, such as VM_LHR.

3. Enter the IP segment of the hosts. Note that you cannot enter the IP of a single host here; it would be inaccessible. We enter 192.168.59.0, which allows access from 192.168.59.1 through 192.168.59.254.

4. Choose 255.255.255.0 for Netmask, select Share in the Type drop-down list, and then click the Update button.

Update after making the selection.

At this point you can see the authorized network segment in this OpenFiler.

In iSCSI Targets, click the Network ACL tab.

Set Access to Allow and click Update.

The configuration of the storage side is now complete.

2.3.5 /etc/initiators.deny

Comment out the line iqn.2006-01.com.openfiler:tsn.5e423e1e4d90 ALL:

[root@OFLHR ~]# more /etc/initiators.deny
# PLEASE DO NOT MODIFY THIS CONFIGURATION FILE!
# This configuration file was autogenerated
# by Openfiler. Any manual changes will be overwritten
# Generated at: Sat Jan 21 1:49:55 CST 2017
# iqn.2006-01.com.openfiler:tsn.5e423e1e4d90 ALL
# End of Openfiler configuration
[root@OFLHR ~]#
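If you prefer to make this change from the shell, something like the following would do it (a sketch, not from the original run; note the header says Openfiler regenerates this file, so the edit may be overwritten):

# Back up, then comment out the deny entry for our target.
cp /etc/initiators.deny /etc/initiators.deny.bak
sed -i 's/^iqn.2006-01.com.openfiler/# &/' /etc/initiators.deny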

Chapter III: Configuring Shared Storage in RAC

3.1 Configure iSCSI on the RAC Nodes

iSCSI (Internet Small Computer System Interface) was developed by IBM. It is a SCSI instruction set for hardware devices that can run on top of the IP protocol, allowing the SCSI protocol to run over IP networks and be routed over high-speed Gigabit Ethernet. iSCSI is a newer storage technology that combines the existing SCSI interface with Ethernet so that servers can exchange data with storage devices over an IP network. It is a TCP/IP-based protocol used to establish and manage connections between IP storage devices, hosts, and clients, and to create storage area networks (SANs).

iSCSI target: the storage-device end, i.e. the device that provides the disks or RAID; nowadays even a Linux host can be made to emulate an iSCSI target. Its purpose is to provide "disks" for other hosts to use.

iSCSI initiator: the client that uses the target, usually a server. In other words, to connect to an iSCSI target server you must first install the iSCSI initiator support, and only then can you use the disks provided by the iSCSI target.

3.1.1 iSCSI target

[root@OFLHR ~]# service iscsi-target start
Starting iSCSI target service: [  OK  ]
[root@OFLHR ~]# more /etc/ietd.conf
# WARNING: This configuration file was generated by Openfiler. DO NOT MANUALLY EDIT. #
Target iqn.2006-01.com.openfiler:tsn.5e423e1e4d90
    HeaderDigest None
    DataDigest None
    MaxConnections 1
    InitialR2T Yes
    ImmediateData No
    MaxRecvDataSegmentLength 131072
    MaxXmitDataSegmentLength 131072
    MaxBurstLength 262144
    FirstBurstLength 262144
    DefaultTime2Wait 2
    DefaultTime2Retain 20
    MaxOutstandingR2T 8
    DataPDUInOrder Yes
    DataSequenceInOrder Yes
    ErrorRecoveryLevel 0
    Lun 0 Path=/dev/vmlhr/lv01,Type=blockio,ScsiSN=22llvD-CacO-MOMA,ScsiId=22llvD-CacO-MOMA,IOMode=wt
    Lun 1 Path=/dev/vmlhr/lv02,Type=blockio,ScsiSN=BgLpy9-u7PH-csDC,ScsiId=BgLpy9-u7PH-csDC,IOMode=wt
    Lun 2 Path=/dev/vmlhr/lv03,Type=blockio,ScsiSN=38KsSC-REKL-yPgW,ScsiId=38KsSC-REKL-yPgW,IOMode=wt
    Lun 3 Path=/dev/vmlhr/lv04,Type=blockio,ScsiSN=aN5blo-NyMp-L4Jl,ScsiId=aN5blo-NyMp-L4Jl,IOMode=wt
[root@OFLHR ~]# ps -ef | grep iscsi
root       937     2  0 01:01 ?        00:00:00 [iscsi_eh]
root       946     1  0 01:01 ?        00:00:00 iscsid
root       947     1  0 01:01 ?        00:00:00 iscsid
root     13827  1217  0 02:43 pts/1    00:00:00 grep iscsi
[root@OFLHR ~]# cat /proc/net/iet/volume
tid:1 name:iqn.2006-01.com.openfiler:tsn.5e423e1e4d90
        lun:0 state:0 iotype:blockio iomode:wt path:/dev/vmlhr/lv01
        lun:1 state:0 iotype:blockio iomode:wt path:/dev/vmlhr/lv02
        lun:2 state:0 iotype:blockio iomode:wt path:/dev/vmlhr/lv03
        lun:3 state:0 iotype:blockio iomode:wt path:/dev/vmlhr/lv04
[root@OFLHR ~]# cat /proc/net/iet/session
tid:1 name:iqn.2006-01.com.openfiler:tsn.5e423e1e4d90
[root@OFLHR ~]#

3.1.2 iSCSI Initiator

3.1.2.1 Install the iSCSI Initiator

The iSCSI initiator is installed on both RAC nodes.

[root@raclhr-12cR1-N1 ~]# rpm -qa | grep iscsi
iscsi-initiator-utils-6.2.0.873-10.el6.x86_64
[root@raclhr-12cR1-N1 ~]#

If it is not installed, use yum install iscsi-initiator-utils* to install it.
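A sketch of installing and enabling the initiator on each node (assumed commands for RHEL/CentOS 6, not from the original transcript):

# On each RAC node: install the initiator utilities and start the daemon at boot.
yum install -y iscsi-initiator-utils
service iscsid start
chkconfig iscsid on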

3.1.2.2 iscsiadm

The iSCSI initiator is managed mainly through the iscsiadm command. First, check which targets are available on the iscsi target machine that provides the service:

[root@raclhr-12cR1-N1 ~]# iscsiadm --mode discovery --type sendtargets --portal 192.168.59.200
Starting iscsid: [  OK  ]
192.168.59.200:3260,1 iqn.2006-01.com.openfiler:tsn.5e423e1e4d90
192.168.2.200:3260,1 iqn.2006-01.com.openfiler:tsn.5e423e1e4d90

[root@raclhr-12cR1-N1 ~]# ps -ef | grep iscsi
root      2619     2  0 11:32 ?        00:00:00 [iscsi_eh]
root      2651     1  0 11:32 ?        00:00:00 iscsiuio
root      2658     1  0 11:32 ?        00:00:00 iscsid
root      2659     1  0 11:32 ?        00:00:00 iscsid
root      2978 56098  0 11:33 pts/1    00:00:00 grep iscsi
[root@raclhr-12cR1-N1 ~]#

At this step you can see the number and names of the iSCSI targets created on the server. For this command you only need to remember that -p is followed by the address of the iSCSI service (a hostname works too), and that 3260 is the default service port.
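The same discovery can also be written in the short-option form (st abbreviates sendtargets; the port may be given explicitly or left off, since 3260 is the default):

# Shorthand form of the discovery command above.
iscsiadm -m discovery -t st -p 192.168.59.200:3260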

Then you can log in to one of the targets; after logging in, all of the disks under that target are shared to this host:

[root@raclhr-12cR1-N1 ~]# fdisk -l | grep dev
Disk /dev/sda: 21.5 GB, 21474836480 bytes
/dev/sda1   *   1   26   204800   83   Linux
/dev/sda2   26   1332   10485760   8e   Linux LVM
/dev/sda3   1332   2611   10279936   8e   Linux LVM
Disk /dev/sdb: 107.4 GB, 107374182400 bytes
/dev/sdb1   1   1306   10485760   8e   Linux LVM
/dev/sdb2   1306   2611   10485760   8e   Linux LVM
/dev/sdb3   2611   3917   10485760   8e   Linux LVM
/dev/sdb4   3917   13055   73399296   5   Extended
/dev/sdb5   3917   5222   10485760   8e   Linux LVM
/dev/sdb6   5223   6528   10485760   8e   Linux LVM
/dev/sdb7   6528   7834   10485760   8e   Linux LVM
/dev/sdb8   7834   9139   10485760   8e   Linux LVM
/dev/sdb9   9139   10445   10485760   8e   Linux LVM
/dev/sdb10   10445   11750   10485760   8e   Linux LVM
/dev/sdb11   11750   13055   10477568   8e   Linux LVM
Disk /dev/sde: 10.7 GB, 10737418240 bytes
Disk /dev/sdc: 6442 MB, 6442450944 bytes
Disk /dev/sdd: 10.7 GB, 10737418240 bytes
Disk /dev/mapper/vg_rootlhr-Vol02: 2147 MB, 2147483648 bytes
Disk /dev/mapper/vg_rootlhr-Vol00: 10.7 GB, 10737418240 bytes
Disk /dev/mapper/vg_orasoft-lv_orasoft_u01: 21.5 GB, 21474836480 bytes
Disk /dev/mapper/vg_orasoft-lv_orasoft_soft: 21.5 GB, 21474836480 bytes
Disk /dev/mapper/vg_rootlhr-Vol01: 3221 MB, 3221225472 bytes
Disk /dev/mapper/vg_rootlhr-Vol03: 3221 MB, 3221225472 bytes
[root@raclhr-12cR1-N1 ~]# iscsiadm --mode node --targetname iqn.2006-01.com.openfiler:tsn.5e423e1e4d90 --login
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:tsn.5e423e1e4d90, portal: 192.168.59.200,3260] (multiple)
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:tsn.5e423e1e4d90, portal: 192.168.2.200,3260] (multiple)
Login to [iface: default, target: iqn.2006-01.com.openfiler:tsn.5e423e1e4d90, portal: 192.168.59.200,3260] successful.
Login to [iface: default, target: iqn.2006-01.com.openfiler:tsn.5e423e1e4d90, portal: 192.168.2.200,3260] successful.
[root@raclhr-12cR1-N1 ~]#

[root@raclhr-12cR1-N1 ~]# fdisk -l | grep dev
Disk /dev/sda: 21.5 GB, 21474836480 bytes
/dev/sda1   *   1   26   204800   83   Linux
/dev/sda2   26   1332   10485760   8e   Linux LVM
/dev/sda3   1332   2611   10279936   8e   Linux LVM
Disk /dev/sdb: 107.4 GB, 107374182400 bytes
/dev/sdb1   1   1306   10485760   8e   Linux LVM
/dev/sdb2   1306   2611   10485760   8e   Linux LVM
/dev/sdb3   2611   3917   10485760   8e   Linux LVM
/dev/sdb4   3917   13055   73399296   5   Extended
/dev/sdb5   3917   5222   10485760   8e   Linux LVM
/dev/sdb6   5223   6528   10485760   8e   Linux LVM
/dev/sdb7   6528   7834   10485760   8e   Linux LVM
/dev/sdb8   7834   9139   10485760   8e   Linux LVM
/dev/sdb9   9139   10445   10485760   8e   Linux LVM
/dev/sdb10   10445   11750   10485760   8e   Linux LVM
/dev/sdb11   11750   13055   10477568   8e   Linux LVM
Disk /dev/sde: 10.7 GB, 10737418240 bytes
Disk /dev/sdc: 6442 MB, 6442450944 bytes
Disk /dev/sdd: 10.7 GB, 10737418240 bytes
Disk /dev/mapper/vg_rootlhr-Vol02: 2147 MB, 2147483648 bytes
Disk /dev/mapper/vg_rootlhr-Vol00: 10.7 GB, 10737418240 bytes
Disk /dev/mapper/vg_orasoft-lv_orasoft_u01: 21.5 GB, 21474836480 bytes
Disk /dev/mapper/vg_orasoft-lv_orasoft_soft: 21.5 GB, 21474836480 bytes
Disk /dev/mapper/vg_rootlhr-Vol01: 3221 MB, 3221225472 bytes
Disk /dev/mapper/vg_rootlhr-Vol03: 3221 MB, 3221225472 bytes
Disk /dev/sdf: 10.7 GB, 10737418240 bytes
Disk /dev/sdi: 10.7 GB, 10737418240 bytes
Disk /dev/sdh: 10.7 GB, 10737418240 bytes
Disk /dev/sdl: 10.7 GB, 10737418240 bytes
Disk /dev/sdj: 10.7 GB, 10737418240 bytes
Disk /dev/sdg: 10.7 GB, 10737418240 bytes
Disk /dev/sdk: 10.7 GB, 10737418240 bytes
Disk /dev/sdm: 10.7 GB, 10737418240 bytes

Eight extra disks appear here, yet only four LUNs were mapped in openfiler. Why 8 instead of 4? Because openfiler has two network cards, the initiator logged in to the iscsi target once per IP, so each LUN shows up twice.
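To see which portal each sd device arrived through, the by-path names that udev creates include the portal IP and the LUN number (a quick check, not from the original transcript):

# Each LUN appears once per portal; the by-path symlinks reveal portal and LUN.
ls -l /dev/disk/by-path/ | grep iscsi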

To view information about each iscsi:

# iscsiadm -m session -P 3

[root@raclhr-12cR1-N1 ~]# iscsiadm -m session -P 3

iSCSI Transport Class version 2.0-870
version 6.2.0-873.10.el6

Target: iqn.2006-01.com.openfiler:tsn.5e423e1e4d90

Current Portal: 192.168.59.200:3260,1

Persistent Portal: 192.168.59.200:3260,1

**********

Interface:

**********

Iface Name: default

Iface Transport: tcp

Iface Initiatorname: iqn.1994-05.com.redhat:61d32512355

Iface IPaddress: 192.168.59.160

Iface HWaddress:

Iface Netdev:

SID: 1

iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN

Internal iscsid Session State: NO CHANGE

*********

Timeouts:

*********

Recovery Timeout: 120

Target Reset Timeout: 30

LUN Reset Timeout: 30

Abort Timeout: 15

*****

CHAP:

*****

Username:

Password: *

Username_in:

Password_in: *

************************

Negotiated iSCSI params:

************************

HeaderDigest: None

DataDigest: None

MaxRecvDataSegmentLength: 262144

MaxXmitDataSegmentLength: 131072

FirstBurstLength: 262144

MaxBurstLength: 262144

ImmediateData: No

InitialR2T: Yes

MaxOutstandingR2T: 1

************************

Attached SCSI devices:

************************

Host Number: 4 State: running

scsi4 Channel 00 Id 0 Lun: 0
        Attached scsi disk sdg State: running
scsi4 Channel 00 Id 0 Lun: 1
        Attached scsi disk sdj State: running
scsi4 Channel 00 Id 0 Lun: 2
        Attached scsi disk sdk State: running
scsi4 Channel 00 Id 0 Lun: 3
        Attached scsi disk sdm State: running

Current Portal: 192.168.2.200:3260,1

Persistent Portal: 192.168.2.200:3260,1

**********

Interface:

**********

Iface Name: default

Iface Transport: tcp

Iface Initiatorname: iqn.1994-05.com.redhat:61d32512355

Iface IPaddress: 192.168.2.100

Iface HWaddress:

Iface Netdev:

SID: 2

iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN

Internal iscsid Session State: NO CHANGE

*********

Timeouts:

*********

Recovery Timeout: 120

Target Reset Timeout: 30

LUN Reset Timeout: 30

Abort Timeout: 15

*****

CHAP:

*****

Username:

Password: *

Username_in:

Password_in: *

************************

Negotiated iSCSI params:

************************

HeaderDigest: None

DataDigest: None

MaxRecvDataSegmentLength: 262144

MaxXmitDataSegmentLength: 131072

FirstBurstLength: 262144

MaxBurstLength: 262144

ImmediateData: No

InitialR2T: Yes

MaxOutstandingR2T: 1

************************

Attached SCSI devices:

************************

Host Number: 5 State: running

scsi5 Channel 00 Id 0 Lun: 0
        Attached scsi disk sdf State: running
scsi5 Channel 00 Id 0 Lun: 1
        Attached scsi disk sdh State: running
scsi5 Channel 00 Id 0 Lun: 2
        Attached scsi disk sdi State: running
scsi5 Channel 00 Id 0 Lun: 3
        Attached scsi disk sdl State: running

[root@raclhr-12cR1-N1 ~] #

After logging in, the new disks can be partitioned, formatted, and mounted.

After these commands complete, the iscsi initiator records the information in the /var/lib/iscsi directory:

/var/lib/iscsi/send_targets records each target discovered, and /var/lib/iscsi/nodes records each target's nodes. The next time the iscsi initiator starts (service iscsi start), it automatically logs in to each target. If you want to log in to each target manually instead, delete all the contents of the /var/lib/iscsi/send_targets and /var/lib/iscsi/nodes directories.
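For example, a clean manual cycle could look like this (a sketch based on the paths described above, not from the original run):

# Log out of all targets, clear the cached records, then rediscover and log in again.
iscsiadm -m node --logoutall=all
rm -rf /var/lib/iscsi/send_targets/* /var/lib/iscsi/nodes/*
iscsiadm -m discovery -t sendtargets -p 192.168.59.200
iscsiadm -m node --loginall=all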

3.2 Multipath

3.2.1 Install the Multipath Software on Both RAC Nodes

1. Install the multipath packages:

[root@raclhr-12cR1-N1 ~]# mount /dev/sr0 /media/lhr/cdrom/
mount: block device /dev/sr0 is write-protected, mounting read-only
[root@raclhr-12cR1-N1 ~]# cd /media/lhr/cdrom/Packages/
[root@raclhr-12cR1-N1 Packages]# ll device-mapper-*.x86_64.rpm
-r--r--r-- 1 root root  168424 Oct 30 2013 device-mapper-1.02.79-8.el6.x86_64.rpm
-r--r--r-- 1 root root  118316 Oct 30 2013 device-mapper-event-1.02.79-8.el6.x86_64.rpm
-r--r--r-- 1 root root  112892 Oct 30 2013 device-mapper-event-libs-1.02.79-8.el6.x86_64.rpm
-r--r--r-- 1 root root  199924 Oct 30 2013 device-mapper-libs-1.02.79-8.el6.x86_64.rpm
-r--r--r-- 1 root root  118892 Oct 25 2013 device-mapper-multipath-0.4.9-72.el6.x86_64.rpm
-r--r--r-- 1 root root  184760 Oct 25 2013 device-mapper-multipath-libs-0.4.9-72.el6.x86_64.rpm
-r--r--r-- 1 root root 2444388 Oct 30 2013 device-mapper-persistent-data-0.2.8-2.el6.x86_64.rpm
[root@raclhr-12cR1-N1 Packages]# ll iscsi*
-r--r--r-- 1 root root 702300 Oct 29 2013 iscsi-initiator-utils-6.2.0.873-10.el6.x86_64.rpm
[root@raclhr-12cR1-N1 Packages]# rpm -qa | grep device-mapper
device-mapper-persistent-data-0.2.8-2.el6.x86_64
device-mapper-1.02.79-8.el6.x86_64
device-mapper-event-libs-1.02.79-8.el6.x86_64
device-mapper-event-1.02.79-8.el6.x86_64
device-mapper-libs-1.02.79-8.el6.x86_64
[root@raclhr-12cR1-N1 Packages]# rpm -ivh device-mapper-1.02.79-8.el6.x86_64.rpm
warning: device-mapper-1.02.79-8.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing...                ########################################### [100%]
        package device-mapper-1.02.79-8.el6.x86_64 is already installed
[root@raclhr-12cR1-N1 Packages]# rpm -ivh device-mapper-event-1.02.79-8.el6.x86_64.rpm
warning: device-mapper-event-1.02.79-8.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing...                ########################################### [100%]
        package device-mapper-event-1.02.79-8.el6.x86_64 is already installed
[root@raclhr-12cR1-N1 Packages]# rpm -ivh device-mapper-multipath-0.4.9-72.el6.x86_64.rpm
warning: device-mapper-multipath-0.4.9-72.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
error: Failed dependencies:
        device-mapper-multipath-libs = 0.4.9-72.el6 is needed by device-mapper-multipath-0.4.9-72.el6.x86_64
        libmpathpersist.so.0()(64bit) is needed by device-mapper-multipath-0.4.9-72.el6.x86_64
        libmultipath.so()(64bit) is needed by device-mapper-multipath-0.4.9-72.el6.x86_64
[root@raclhr-12cR1-N1 Packages]# rpm -ivh device-mapper-multipath-libs-0.4.9-72.el6.x86_64.rpm
warning: device-mapper-multipath-libs-0.4.9-72.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing...                ########################################### [100%]
   1:device-mapper-multipath########################################### [100%]
[root@raclhr-12cR1-N1 Packages]# rpm -ivh device-mapper-multipath-0.4.9-72.el6.x86_64.rpm
warning: device-mapper-multipath-0.4.9-72.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing...                ########################################### [100%]
   1:device-mapper-multipath########################################### [100%]
[root@raclhr-12cR1-N1 Packages]# rpm -qa | grep device-mapper
device-mapper-multipath-0.4.9-72.el6.x86_64
device-mapper-persistent-data-0.2.8-2.el6.x86_64
device-mapper-1.02.79-8.el6.x86_64
device-mapper-event-libs-1.02.79-8.el6.x86_64
device-mapper-event-1.02.79-8.el6.x86_64
device-mapper-multipath-libs-0.4.9-72.el6.x86_64
device-mapper-libs-1.02.79-8.el6.x86_64
[root@raclhr-12cR1-N2 Packages]#

On node 2, run the same installation:

rpm -ivh device-mapper-multipath-libs-0.4.9-72.el6.x86_64.rpm
rpm -ivh device-mapper-multipath-0.4.9-72.el6.x86_64.rpm

3.2.2 Start Multipath

Load the multipath modules into the kernel:

modprobe dm-multipath
modprobe dm-round-robin

Check that the modules are loaded:

[root@raclhr-12cR1-N1 Packages]# lsmod | grep multipath
dm_multipath           17724  1 dm_round_robin
dm_mod                 84209  16 dm_multipath,dm_mirror,dm_log
[root@raclhr-12cR1-N1 Packages]#

Set the multipathd service to start automatically at boot:

[root@raclhr-12cR1-N1 Packages]# chkconfig --level 2345 multipathd on
[root@raclhr-12cR1-N1 Packages]#
[root@raclhr-12cR1-N1 Packages]# chkconfig --list | grep multipathd
multipathd     0:off  1:off  2:on  3:on  4:on  5:on  6:off
[root@raclhr-12cR1-N1 Packages]#

Start the multipath service

[root@raclhr-12cR1-N1 Packages]# service multipathd restart
ux_socket_connect: No such file or directory
Stopping multipathd daemon: [FAILED]
Starting multipathd daemon: [  OK  ]
[root@raclhr-12cR1-N1 Packages]#

3.2.3 Configure the Multipath Software: /etc/multipath.conf

1. Configure the multipath software by editing /etc/multipath.conf.

Note: by default /etc/multipath.conf does not exist; generate it with the following command:

/sbin/mpathconf --enable --find_multipaths y --with_module y --with_chkconfig y

[root@raclhr-12cR1-N1 ~]# multipath -ll
Jan 23 12:52:54 | /etc/multipath.conf does not exist, blacklisting all devices.
Jan 23 12:52:54 | A sample multipath.conf file is located at
Jan 23 12:52:54 | /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf
Jan 23 12:52:54 | You can run /sbin/mpathconf to create or modify /etc/multipath.conf
[root@raclhr-12cR1-N1 ~]# /sbin/mpathconf --enable --find_multipaths y --with_module y --with_chkconfig y
[root@raclhr-12cR1-N1 ~]#
[root@raclhr-12cR1-N1 ~]# ll /etc/multipath.conf
-rw------- 1 root root 2775 Jan 23 12:55 /etc/multipath.conf
[root@raclhr-12cR1-N1 ~]#

2. View and record the WWIDs of the logical disks (LUNs) that the storage has assigned to the server:

[root@raclhr-12cR1-N1 multipath]# multipath -v0
[root@raclhr-12cR1-N1 multipath]# more /etc/multipath/wwids
# Multipath wwids, Version : 1.0
# NOTE: This file is automatically maintained by multipath and multipathd.
# You should not need to edit this file in normal circumstances.
#
# Valid WWIDs:
/14f504e46494c455232326c6c76442d4361634f2d4d4f4d41/
/14f504e46494c455242674c7079392d753750482d63734443/
/14f504e46494c455233384b7353432d52454b4c2d79506757/
/14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c/
[root@raclhr-12cR1-N1 multipath]#
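Each WWID in this file can be cross-checked against the raw paths with scsi_id; the two paths of a LUN must return the identical WWID (device names as in this environment):

# Both paths of LUN 0 should report the WWID recorded in /etc/multipath/wwids.
/sbin/scsi_id --whitelisted --device=/dev/sdf
/sbin/scsi_id --whitelisted --device=/dev/sdg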

Copy the contents of /etc/multipath/wwids and /etc/multipath/bindings over to node 2:

[root@raclhr-12cR1-N2 ~]# multipath -v0
[root@raclhr-12cR1-N2 ~]# more /etc/multipath/wwids
# Multipath wwids, Version : 1.0
# NOTE: This file is automatically maintained by multipath and multipathd.
# You should not need to edit this file in normal circumstances.
#
# Valid WWIDs:
/14f504e46494c455232326c6c76442d4361634f2d4d4f4d41/
/14f504e46494c455242674c7079392d753750482d63734443/
/14f504e46494c455233384b7353432d52454b4c2d79506757/
/14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c/
[root@raclhr-12cR1-N1 ~]# more /etc/multipath/bindings
# Multipath bindings, Version : 1.0
# NOTE: this file is automatically maintained by the multipath program.
# You should not need to edit this file in normal circumstances.
#
# Format:
# alias wwid
#
mpatha 14f504e46494c455232326c6c76442d4361634f2d4d4f4d41
mpathb 14f504e46494c455242674c7079392d753750482d63734443
mpathc 14f504e46494c455233384b7353432d52454b4c2d79506757
mpathd 14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c
[root@raclhr-12cR1-N1 ~]#
[root@raclhr-12cR1-N2 ~]#

[root@raclhr-12cR1-N1 multipath]# multipath -ll
mpathd (14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c) dm-9 OPNFILER,VIRTUAL-DISK
size=10G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 5:0:0:3 sdk 8:160 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 4:0:0:3 sdm 8:192 active ready running
mpathc (14f504e46494c455233384b7353432d52454b4c2d79506757) dm-8 OPNFILER,VIRTUAL-DISK
size=10G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 5:0:0:2 sdj 8:144 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 4:0:0:2 sdl 8:176 active ready running
mpathb (14f504e46494c455242674c7079392d753750482d63734443) dm-7 OPNFILER,VIRTUAL-DISK
size=10G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 4:0:0:1 sdh 8:112 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 5:0:0:1 sdi 8:128 active ready running
mpatha (14f504e46494c455232326c6c76442d4361634f2d4d4f4d41) dm-6 OPNFILER,VIRTUAL-DISK
size=10G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 4:0:0:0 sdf 8:80 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 5:0:0:0 sdg 8:96 active ready running
[root@raclhr-12cR1-N1 multipath]# fdisk -l | grep dev
Disk /dev/sda: 21.5 GB, 21474836480 bytes
/dev/sda1   *   1   26   204800   83   Linux
/dev/sda2   26   1332   10485760   8e   Linux LVM
/dev/sda3   1332   2611   10279936   8e   Linux LVM
Disk /dev/sdb: 107.4 GB, 107374182400 bytes
/dev/sdb1   1   1306   10485760   8e   Linux LVM
/dev/sdb2   1306   2611   10485760   8e   Linux LVM
/dev/sdb3   2611   3917   10485760   8e   Linux LVM
/dev/sdb4   3917   13055   73399296   5   Extended
/dev/sdb5   3917   5222   10485760   8e   Linux LVM
/dev/sdb6   5223   6528   10485760   8e   Linux LVM
/dev/sdb7   6528   7834   10485760   8e   Linux LVM
/dev/sdb8   7834   9139   10485760   8e   Linux LVM
/dev/sdb9   9139   10445   10485760   8e   Linux LVM
/dev/sdb10   10445   11750   10485760   8e   Linux LVM
/dev/sdb11   11750   13055   10477568   8e   Linux LVM
Disk /dev/sdc: 6442 MB, 6442450944 bytes
Disk /dev/sdd: 10.7 GB, 10737418240 bytes
Disk /dev/sde: 10.7 GB, 10737418240 bytes
Disk /dev/mapper/vg_rootlhr-Vol02: 2147 MB, 2147483648 bytes
Disk /dev/mapper/vg_rootlhr-Vol00: 10.7 GB, 10737418240 bytes
Disk /dev/mapper/vg_orasoft-lv_orasoft_u01: 21.5 GB, 21474836480 bytes
Disk /dev/mapper/vg_orasoft-lv_orasoft_soft: 21.5 GB, 21474836480 bytes
Disk /dev/mapper/vg_rootlhr-Vol01: 3221 MB, 3221225472 bytes
Disk /dev/mapper/vg_rootlhr-Vol03: 3221 MB, 3221225472 bytes
Disk /dev/sdf: 10.7 GB, 10737418240 bytes
Disk /dev/sdg: 10.7 GB, 10737418240 bytes
Disk /dev/sdh: 10.7 GB, 10737418240 bytes
Disk /dev/sdi: 10.7 GB, 10737418240 bytes
Disk /dev/sdj: 10.7 GB, 10737418240 bytes
Disk /dev/sdk: 10.7 GB, 10737418240 bytes
Disk /dev/sdl: 10.7 GB, 10737418240 bytes
Disk /dev/sdm: 10.7 GB, 10737418240 bytes
Disk /dev/mapper/mpatha: 10.7 GB, 10737418240 bytes
Disk /dev/mapper/mpathb: 10.7 GB, 10737418240 bytes
Disk /dev/mapper/mpathc: 10.7 GB, 10737418240 bytes
Disk /dev/mapper/mpathd: 10.7 GB, 10737418240 bytes
[root@raclhr-12cR1-N1 multipath]#

3.2.4 Editing /etc/multipath.conf

for i in f g h i j k l m
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --device=/dev/\$name\", RESULT==\"`scsi_id --whitelisted --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""
done

[root@raclhr-12cR1-N1 multipath]# for i in f g h i j k l m
> do
> echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --device=/dev/\$name\", RESULT==\"`scsi_id --whitelisted --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""
> done
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --device=/dev/$name", RESULT=="14f504e46494c455232326c6c76442d4361634f2d4d4f4d41", NAME="asm-diskf", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --device=/dev/$name", RESULT=="14f504e46494c455232326c6c76442d4361634f2d4d4f4d41", NAME="asm-diskg", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --device=/dev/$name", RESULT=="14f504e46494c455242674c7079392d753750482d63734443", NAME="asm-diskh", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --device=/dev/$name", RESULT=="14f504e46494c455242674c7079392d753750482d63734443", NAME="asm-diski", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --device=/dev/$name", RESULT=="14f504e46494c455233384b7353432d52454b4c2d79506757", NAME="asm-diskj", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --device=/dev/$name", RESULT=="14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c", NAME="asm-diskk", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --device=/dev/$name", RESULT=="14f504e46494c455233384b7353432d52454b4c2d79506757", NAME="asm-diskl", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --device=/dev/$name", RESULT=="14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c", NAME="asm-diskm", OWNER="grid", GROUP="asmadmin", MODE="0660"
[root@raclhr-12cR1-N1 multipath]#

[root@raclhr-12cR1-N1 multipath]# more /etc/multipath.conf
defaults {
    find_multipaths yes
    user_friendly_names yes
}
blacklist {
    wwid 3600508b1001c5ae72efe1fea025cd2e5
    devnode "^hd[a-z]"
    devnode "^sd[a-e]"
    devnode "^sda"
}
multipaths {
    multipath {
        wwid 14f504e46494c455232326c6c76442d4361634f2d4d4f4d41
        alias VMLHRStorage000
        path_grouping_policy multibus
        path_selector "round-robin 0"
        failback manual
        rr_weight priorities
        no_path_retry 5
    }
    multipath {
        wwid 14f504e46494c455242674c7079392d753750482d63734443
        alias VMLHRStorage001
        path_grouping_policy multibus
        path_selector "round-robin 0"
        failback manual
        rr_weight priorities
        no_path_retry 5
    }
    multipath {
        wwid 14f504e46494c455233384b7353432d52454b4c2d79506757
        alias VMLHRStorage002
        path_grouping_policy multibus
        path_selector "round-robin 0"
        failback manual
        rr_weight priorities
        no_path_retry 5
    }
    multipath {
        wwid 14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c
        alias VMLHRStorage003
        path_grouping_policy multibus
        path_selector "round-robin 0"
        failback manual
        rr_weight priorities
        no_path_retry 5
    }
}
devices {
    device {
        vendor "VMWARE"
        product "VIRTUAL-DISK"
        path_grouping_policy multibus
        getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
        path_checker readsector0
        path_selector "round-robin 0"
        hardware_handler "0"
        failback 15
        rr_weight priorities
        no_path_retry queue
    }
}
[root@raclhr-12cR1-N1 multipath]#

Restart multipathd so the configuration takes effect:

[root@raclhr-12cR1-N1 ~]# service multipathd restart
ok
Stopping multipathd daemon: [  OK  ]
Starting multipathd daemon: [  OK  ]
[root@raclhr-12cR1-N1 ~]#

[root@raclhr-12cR1-N1 ~]# multipath -ll
VMLHRStorage003 (14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c) dm-9 OPNFILER,VIRTUAL-DISK
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 5:0:0:3 sdk 8:160 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 4:0:0:3 sdm 8:192 active ready running
VMLHRStorage002 (14f504e46494c455233384b7353432d52454b4c2d79506757) dm-8 OPNFILER,VIRTUAL-DISK
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 5:0:0:2 sdj 8:144 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 4:0:0:2 sdl 8:176 active ready running
VMLHRStorage001 (14f504e46494c455242674c7079392d753750482d63734443) dm-7 OPNFILER,VIRTUAL-DISK
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 4:0:0:1 sdh 8:112 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 5:0:0:1 sdi 8:128 active ready running
VMLHRStorage000 (14f504e46494c455232326c6c76442d4361634f2d4d4f4d41) dm-6 OPNFILER,VIRTUAL-DISK
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 4:0:0:0 sdf 8:80 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 5:0:0:0 sdg 8:96 active ready running
[root@raclhr-12cR1-N1 ~]#
[root@raclhr-12cR1-N1 ~]# multipath -ll | grep LHR
VMLHRStorage003 (14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c) dm-9 OPNFILER,VIRTUAL-DISK
VMLHRStorage002 (14f504e46494c455233384b7353432d52454b4c2d79506757) dm-8 OPNFILER,VIRTUAL-DISK
VMLHRStorage001 (14f504e46494c455242674c7079392d753750482d63734443) dm-7 OPNFILER,VIRTUAL-DISK
VMLHRStorage000 (14f504e46494c455232326c6c76442d4361634f2d4d4f4d41) dm-6 OPNFILER,VIRTUAL-DISK
[root@raclhr-12cR1-N1 ~]#

When multipath configuration is enabled, a multipath logical disk is generated under / dev/mapper

[root@raclhr-12cR1-N1 ~] # cd / dev/mapper

[root@raclhr-12cR1-N1 mapper] # ll

Total 0

Crw-rw---- 1 root root 10, 58 Jan 23 12:49 control

Lrwxrwxrwx 1 root root 7 Jan 23 12:49 vg_orasoft-lv_orasoft_soft->.. / dm-3

Lrwxrwxrwx 1 root root 7 Jan 23 12:49 vg_orasoft-lv_orasoft_u01->.. / dm-2

Lrwxrwxrwx 1 root root 7 Jan 23 12:50 vg_rootlhr-Vol00->.. / dm-1

Lrwxrwxrwx 1 root root 7 Jan 23 12:50 vg_rootlhr-Vol01->.. / dm-4

Lrwxrwxrwx 1 root root 7 Jan 23 12:49 vg_rootlhr-Vol02->.. / dm-0

Lrwxrwxrwx 1 root root 7 Jan 23 12:50 vg_rootlhr-Vol03->.. / dm-5

Lrwxrwxrwx 1 root root 7 Jan 23 13:55 VMLHRStorage000->.. / dm-6

Lrwxrwxrwx 1 root root 7 Jan 23 13:55 VMLHRStorage001->.. / dm-7

Lrwxrwxrwx 1 root root 7 Jan 23 13:55 VMLHRStorage002->.. / dm-8

Lrwxrwxrwx 1 root root 7 Jan 23 13:55 VMLHRStorage003->.. / dm-9

[root@raclhr-12cR1-N1 mapper] #

At this point, the multipath configuration is complete.

3.2.5 Configure Permissions for Multipath Devices

Before version 6.2, configuring the permissions of a multipath device only required adding uid, gid, and mode to its configuration stanza:

uid 1100 # uid
gid 1020 # gid

For example:

multipath {
    wwid 360050763008101d4e00000000000000a
    alias DATA03
    uid 501 # uid
    gid 501 # gid
}

From 6.2 onward, the uid, gid, and mode parameters were removed from the multipath configuration file, so udev must be used instead. A template named 12-dm-permissions.rules ships in /usr/share/doc/device-mapper-<version>; copy it into the /etc/udev/rules.d directory and edit it to make it effective.
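A sketch of putting the template into effect (paths as shown below; on RHEL 6, start_udev reloads the rules):

# Copy the template, adjust owner/group/mode for the multipath aliases, reload udev.
cp /usr/share/doc/device-mapper-1.02.79/12-dm-permissions.rules /etc/udev/rules.d/
vi /etc/udev/rules.d/12-dm-permissions.rules   # e.g. match ENV{DM_NAME}=="VMLHRStorage*"
start_udev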

[root@raclhr-12cR1-N1 rules.d]# ll /usr/share/doc/device-mapper-1.02.79/12-dm-permissions.rules
-rw-r--r--. 1 root root 3186 Aug 13 2013 /usr/share/doc/device-mapper-1.02.79/12-dm-permissions.rules
[root@raclhr-12cR1-N1 rules.d]#
[root@raclhr-12cR1-N1 rules.d]# ll
total 24
-rw-r--r-- 1 root root  77 Jan 23 18:06 12-dm-permissions.rules
-rw-r--r-- 1 root root 190 Jan 23 15:40 55-usm.rules
-rw-r--r-- 1 root root 549 Jan 23 15:17 70-persistent-cd.rules
-rw-r--r-- 1 root root 585 Jan 23 15:09 70-persistent-net.rules
-rw-r--r-- 1 root root 633 Jan 23 15:46 99-oracle-asmdevices.rules
-rw-r--r-- 1 root root 916 Jan 23 15:16 99-oracleasm.rules
[root@raclhr-12cR1-N1 rules.d]# more /etc/udev/rules.d/12-dm-permissions.rules
ENV{DM_NAME}=="VMLHRStorage*", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
[root@raclhr-12cR1-N1 rules.d]#

Copy the file / etc/udev/rules.d/12-dm-permissions.rules to node 2.

3.2.6 Configure udev Rules

The script is as follows:

for i in f g h i j k l m
do
echo "KERNEL==\"dm-*\", BUS==\"block\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"" >> /etc/udev/rules.d/99-oracleasm.rules
done

Because multipathing makes each WWID appear twice, the duplicate lines in /etc/udev/rules.d/99-oracleasm.rules must be removed.
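One simple way to drop the duplicates is shown below (a sketch; review the file afterwards before restarting udev):

# Deduplicate the generated rules file in place.
sort -u /etc/udev/rules.d/99-oracleasm.rules -o /etc/udev/rules.d/99-oracleasm.rules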

On Node 1, do the following:

[root@raclhr-12cR1-N1 rules.d]# for i in f g h i j k l m
> do
> echo "KERNEL==\"dm-*\", BUS==\"block\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"" >> /etc/udev/rules.d/99-oracleasm.rules
> done

Open /etc/udev/rules.d/99-oracleasm.rules and delete the duplicated WWID lines, leaving one line per WWID.

[root@raclhr-12cR1-N1 ~]# cat /etc/udev/rules.d/99-oracleasm.rules
KERNEL=="dm-*", BUS=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c455232326c6c76442d4361634f2d4d4f4d41", NAME="asm-diskf", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="dm-*", BUS=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c455242674c7079392d753750482d63734443", NAME="asm-diskh", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="dm-*", BUS=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c455233384b7353432d52454b4c2d79506757", NAME="asm-diskj", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="dm-*", BUS=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c", NAME="asm-diskk", OWNER="grid", GROUP="asmadmin", MODE="0660"
[root@raclhr-12cR1-N1 ~]#

Copy the contents of /etc/udev/rules.d/99-oracleasm.rules to node 2, and then restart udev.

[root@raclhr-12cR1-N1 ~]# start_udev
Starting udev: [  OK  ]
[root@raclhr-12cR1-N1 ~]#
[root@raclhr-12cR1-N1 ~]# ll /dev/asm-*
brw-rw---- 1 grid asmadmin 8,  32 Jan 23 15:50 /dev/asm-diskc
brw-rw---- 1 grid asmadmin 8,  48 Jan 23 15:48 /dev/asm-diskd
brw-rw---- 1 grid asmadmin 8,  64 Jan 23 15:48 /dev/asm-diske
brw-rw---- 1 grid asmadmin 253, 7 Jan 23 15:46 /dev/asm-diskf
brw-rw---- 1 grid asmadmin 253, 9 Jan 23 15:46 /dev/asm-diskh
brw-rw---- 1 grid asmadmin 253, 6 Jan 23 15:46 /dev/asm-diskj
brw-rw---- 1 grid asmadmin 253, 8 Jan 23 15:46 /dev/asm-diskk
[root@raclhr-12cR1-N1 ~]#

[grid@raclhr-12cR1-N1 ~]$ $ORACLE_HOME/bin/kfod disks=all s=true ds=true
 Disk      Size Header     Path             Disk Group  User  Group
================================================================================
  1:    6144 Mb MEMBER     /dev/asm-diskc   OCR         grid  asmadmin
  2:   10240 Mb MEMBER     /dev/asm-diskd   DATA        grid  asmadmin
  3:   10240 Mb MEMBER     /dev/asm-diske   FRA         grid  asmadmin
  4:   10240 Mb CANDIDATE  /dev/asm-diskf   #           grid  asmadmin
  5:   10240 Mb CANDIDATE  /dev/asm-diskh   #           grid  asmadmin
  6:   10240 Mb CANDIDATE  /dev/asm-diskj   #           grid  asmadmin
  7:   10240 Mb CANDIDATE  /dev/asm-diskk   #           grid  asmadmin
ORACLE_SID ORACLE_HOME
================================================================================
     +ASM2 /u01/app/12.1.0/grid
     +ASM1 /u01/app/12.1.0/grid

[grid@raclhr-12cR1-N1 ~]$ asmcmd
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512    4096  1048576     10240     6487                0            6487              0             N  DATA/
MOUNTED  EXTERN  N         512    4096  1048576     10240    10144                0           10144              0             N  FRA/
MOUNTED  EXTERN  N         512    4096  1048576      6144     1672                0            1672              0             Y  OCR/
ASMCMD> lsdsk
Path
/dev/asm-diskc
/dev/asm-diskd
/dev/asm-diske
ASMCMD> lsdsk --candidate -p
Group_Num  Disk_Num  Incarn  Mount_Stat  Header_Stat  Mode_Stat  State   Path
        0         1       0  CLOSED      CANDIDATE    ONLINE     NORMAL  /dev/asm-diskf
        0         3       0  CLOSED      CANDIDATE    ONLINE     NORMAL  /dev/asm-diskh
        0         2       0  CLOSED      CANDIDATE    ONLINE     NORMAL  /dev/asm-diskj
        0         0       0  CLOSED      CANDIDATE    ONLINE     NORMAL  /dev/asm-diskk
ASMCMD>

3.3 Create a Disk Group on the New Disks

CREATE DISKGROUP TESTMUL EXTERNAL REDUNDANCY DISK '/dev/asm-diskf','/dev/asm-diskh' ATTRIBUTE 'compatible.rdbms' = '12.1', 'compatible.asm' = '12.1';

SQL> select path from v$asm_disk;

PATH
/dev/asm-diskk
/dev/asm-diskf
/dev/asm-diskj
/dev/asm-diskh
/dev/asm-diske
/dev/asm-diskd
/dev/asm-diskc

7 rows selected.

SQL> CREATE DISKGROUP TESTMUL EXTERNAL REDUNDANCY DISK '/dev/asm-diskf','/dev/asm-diskh' ATTRIBUTE 'compatible.rdbms' = '12.1', 'compatible.asm' = '12.1';

Diskgroup created.

SQL>

ASMCMD > lsdg

State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name

MOUNTED EXTERN N 512 4096 1048576 10240 6487 0 6487 0 N DATA/

MOUNTED EXTERN N 512 4096 1048576 10240 10144 0 10144 0 N FRA/

MOUNTED EXTERN N 512 4096 1048576 6144 1672 0 1672 0 Y OCR/

MOUNTED EXTERN N 512 4096 1048576 20480 20381 0 20381 0 N TESTMUL/

ASMCMD >

[root@raclhr-12cR1-N1 ~]# crsctl stat res -t | grep -2 TESTMUL
               ONLINE  ONLINE       raclhr-12cr1-n1          STABLE
               ONLINE  ONLINE       raclhr-12cr1-n2          STABLE
ora.TESTMUL.dg
               ONLINE  ONLINE       raclhr-12cr1-n1          STABLE
               ONLINE  ONLINE       raclhr-12cr1-n2          STABLE
[root@raclhr-12cR1-N1 ~]#

3.3.1 Test the Disk Group

[oracle@raclhr-12cR1-N1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jan 23 16:17:28 2017

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> create tablespace TESTMUL datafile '+TESTMUL' size 10m;

Tablespace created.

SQL> select name from v$datafile;

NAME
+DATA/LHRRAC/DATAFILE/system.258.933550527
+DATA/LHRRAC/DATAFILE/undotbs2.269.933551323
+DATA/LHRRAC/DATAFILE/sysaux.257.933550483
+DATA/LHRRAC/DATAFILE/undotbs1.260.933550575
+DATA/LHRRAC/DATAFILE/example.268.933550723
+DATA/LHRRAC/DATAFILE/users.259.933550573
+TESTMUL/LHRRAC/DATAFILE/testmul.256.934042679

7 rows selected.

SQL>

Now stop one path of the storage by taking down network card eth2:

[root@OFLHR ~]# ip a
1: lo: mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:98:1a:cd brd ff:ff:ff:ff:ff:ff
    inet 192.168.59.200/24 brd 192.168.59.255 scope global eth0
    inet6 fe80::20c:29ff:fe98:1acd/64 scope link
       valid_lft forever preferred_lft forever
3: eth2: mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:98:1a:d7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.200/24 brd 192.168.2.255 scope global eth2
    inet6 fe80::20c:29ff:fe98:1ad7/64 scope link
       valid_lft forever preferred_lft forever
[root@OFLHR ~]# ifconfig eth2 down
[root@OFLHR ~]# ip a
1: lo: mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:98:1a:cd brd ff:ff:ff:ff:ff:ff
    inet 192.168.59.200/24 brd 192.168.59.255 scope global eth0
    inet6 fe80::20c:29ff:fe98:1acd/64 scope link
       valid_lft forever preferred_lft forever
3: eth2: mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:98:1a:d7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.200/24 brd 192.168.2.255 scope global eth2
[root@OFLHR ~]#

On the RAC node, watch the log:

[root@raclhr-12cR1-N1 ~]# tail -f /var/log/messages
Jan 23 16:20:51 raclhr-12cR1-N1 iscsid: connect to 192.168.2.200:3260 failed (No route to host)
Jan 23 16:20:57 raclhr-12cR1-N1 iscsid: connect to 192.168.2.200:3260 failed (No route to host)
Jan 23 16:21:03 raclhr-12cR1-N1 iscsid: connect to 192.168.2.200:3260 failed (No route to host)

[root@raclhr-12cR1-N1 ~]# multipath -ll
VMLHRStorage003 (14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c) dm-8 OPNFILER,VIRTUAL-DISK
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 5:0:0:3 sdm 8:192 failed faulty running
  `- 4:0:0:3 sdl 8:176 active ready running
VMLHRStorage002 (14f504e46494c455233384b7353432d52454b4c2d79506757) dm-9 OPNFILER,VIRTUAL-DISK
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 5:0:0:2 sdj 8:144 failed faulty running
  `- 4:0:0:2 sdk 8:160 active ready running
VMLHRStorage001 (14f504e46494c455242674c7079392d753750482d63734443) dm-7 OPNFILER,VIRTUAL-DISK
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 4:0:0:1 sdi 8:128 active ready running
  `- 5:0:0:1 sdh 8:112 failed faulty running
VMLHRStorage000 (14f504e46494c455232326c6c76442d4361634f2d4d4f4d41) dm-6 OPNFILER,VIRTUAL-DISK
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 4:0:0:0 sdf 8:80 active ready running
  `- 5:0:0:0 sdg 8:96 failed faulty running
[root@raclhr-12cR1-N1 ~]#

The tablespace can still be accessed normally:

SQL> create table tt tablespace TESTMUL as select * from dual;

Table created.

SQL> select * from tt;

D
-
X

SQL>

By the same token, the tablespace stays accessible when eth2 is up and eth0 is down. After restarting the cluster and the storage, everything in the cluster works fine.

Chapter IV: Testing Multipath

Rebuild a multipath environment to test multipath.

The simplest test method is to use dd to write data to the disk and then use iostat to observe the traffic and status of each path, to judge whether failover and load balancing work properly:

# dd if=/dev/zero of=/dev/mapper/mpath0
# iostat -k 2

[root@orcltest ~]# multipath -ll
VMLHRStorage003 (14f504e46494c4552674a61727a472d523449782d5336784e) dm-3 OPNFILER,VIRTUAL-DISK
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 35:0:0:2 sdf 8:80  active ready running
  `- 36:0:0:2 sdg 8:96  active ready running
VMLHRStorage002 (14f504e46494c4552506a5a5954422d6f6f4e652d34423171) dm-2 OPNFILER,VIRTUAL-DISK
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 35:0:0:3 sdh 8:112 active ready running
  `- 36:0:0:3 sdi 8:128 active ready running
VMLHRStorage001 (14f504e46494c4552324b583573332d774e5a622d696d7334) dm-1 OPNFILER,VIRTUAL-DISK
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 35:0:0:1 sdd 8:48  active ready running
  `- 36:0:0:1 sde 8:64  active ready running
VMLHRStorage000 (14f504e46494c45523431576859532d643246412d5154564f) dm-0 OPNFILER,VIRTUAL-DISK
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 35:0:0:0 sdb 8:16  active ready running
  `- 36:0:0:0 sdc 8:32  active ready running
[root@orcltest ~]# dd if=/dev/zero of=/dev/mapper/VMLHRStorage001

Open another window and run iostat -k 2. In the output below, sdd and sde each carry roughly half the traffic while dm-1 (VMLHRStorage001) shows the combined total, which confirms that load balancing across the two paths is working:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    5.23   20.78    0.00   73.99

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda               9.00         0.00        92.00          0        184
scd0              0.00         0.00         0.00          0          0
sdb               0.00         0.00         0.00          0          0
sdc               0.00         0.00         0.00          0          0
sdd            1197.50      4704.00     10886.00       9408      21772
sde            1197.50      4708.00     10496.00       9416      20992
sdh               0.00         0.00         0.00          0          0
sdi               0.00         0.00         0.00          0          0
sdf               0.00         0.00         0.00          0          0
sdg               0.00         0.00         0.00          0          0
dm-0              0.00         0.00         0.00          0          0
dm-4              0.00         0.00         0.00          0          0
dm-10             0.00         0.00         0.00          0          0
dm-1           2395.00      9412.00     21382.00      18824      42764
dm-2              0.00         0.00         0.00          0          0
dm-3              0.00         0.00         0.00          0          0
dm-5              0.00         0.00         0.00          0          0
dm-6              0.00         0.00         0.00          0          0
dm-7              0.00         0.00         0.00          0          0
dm-8              0.00         0.00         0.00          0          0
dm-9              0.00         0.00         0.00          0          0

4.1 Other Theory About Multipath

When multipath generates a mapping, several device nodes pointing to the same link appear under /dev:

/dev/mapper/mpathn
/dev/mpath/mpathn
/dev/dm-n

But their sources are completely different:

/dev/mapper/mpathn is the multipath device virtualized by multipath, and it is the one we should use. The devices in /dev/mapper are created during the boot process and can safely be used to access the multipath storage, for example when creating logical volumes.

/dev/mpath/mpathn is created by the udev device manager purely for convenience, so that all multipath devices are visible in one directory; it merely points at the corresponding dm-n device. These nodes may not yet exist at the moment the system needs them during boot, so do not use them to create logical volumes or file systems, and do not mount them.

/dev/dm-n is for internal use by device-mapper; it should never be used directly or mounted.

To put it simply, use the device nodes under /dev/mapper/. Such a device can be partitioned with fdisk or turned into a PV.
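For example, the multipath aliases created above can be turned into LVM physical volumes or partitioned directly (a sketch, not from the original run):

# Always address the multipath alias, never the underlying sdX paths or /dev/dm-N.
pvcreate /dev/mapper/VMLHRStorage000    # use the whole device as an LVM PV
fdisk /dev/mapper/VMLHRStorage001       # or partition it; run kpartx -a on the device
                                        # afterwards so the partition mappings appear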

Well, that's the end of using OpenFiler to simulate ASM shared disks and multipath in a RAC storage configuration. 2016 is over; today is January 23, tomorrow is January 24, and the author is heading home for the Spring Festival, O(∩_∩)O~. I wish all netizens and friends good health, good luck, happy families, happy lives, and successful careers!

About Me

...............................................................................

● Author: xiaomaimiao, focused solely on database technology, and even more on applying it

● Articles are updated synchronously on itpub (http://blog.itpub.net/26736162), cnblogs (http://www.cnblogs.com/lhrbest), and the personal WeChat official account (xiaomaimiaolhr)

● itpub address of this article: http://blog.itpub.net/26736162/viewspace-2132858/

● cnblogs address of this article: http://www.cnblogs.com/lhrbest/p/6345157.html

● pdf version of this article and the author's cloud disk: http://blog.itpub.net/26736162/viewspace-1624453/

● QQ group: 230161599; WeChat group: private chat

● To contact me, add QQ friend 642808185 and state your reason

● Completed at Agricultural Bank of China between 08:00 on 2017-01-22 and 24:00 on 2017-01-23

● The content comes from the author's study notes, partly collected from the Internet; please forgive any infringement or inaccuracy

● All rights reserved. You are welcome to share this article; please keep the source when reposting

...............................................................................

Use the WeChat client to scan the QR code on the left below to follow the author's WeChat official account (xiaomaimiaolhr), and scan the QR code on the right to join the author's QQ group, to learn the most practical database technology.
