This article explains in detail how iSCSI works and how it is implemented in Openfiler. The content is shared for your reference; I hope you will have a solid understanding of the relevant knowledge after reading it.
Overview of iSCSI
iSCSI: Internet Small Computer System Interface.
The Internet Small Computer System Interface (iSCSI) is a TCP/IP-based protocol used to establish and manage connections between IP storage devices, hosts, and clients, and to create storage area networks (SANs). A SAN makes it possible to use the SCSI protocol over high-speed data transmission networks, with data transfer carried out at the block level between multiple storage networks.
The SCSI architecture is based on the client/server model, and its usual application environment is one where the devices are close to each other and connected by a SCSI bus. The main function of iSCSI is to encapsulate and transfer large amounts of data between a host system (the initiator) and a storage device (the target) over a TCP/IP network: iSCSI encapsulates SCSI commands for the IP network and runs on top of TCP.
iSCSI (Internet SCSI) is a standard ratified by the IETF (Internet Engineering Task Force) in 2003 for mapping SCSI block commands onto TCP/IP packets. SCSI (Small Computer System Interface) is a block data transfer protocol that is widely used in the storage industry and is the most basic standard protocol for storage devices. Fundamentally, the iSCSI protocol is a method of transmitting SCSI data blocks with low latency over an IP network: iSCSI uses Ethernet to carry SCSI commands, responses, and data, so it can build IP storage area networks on the Ethernet infrastructure we are already familiar with and use every day. In this way, iSCSI overcomes the limitations of direct-attached storage, allows storage resources to be shared across different servers, and lets storage capacity be expanded without downtime.
How iSCSI works
When an application on the iSCSI host issues a data read or write request, the operating system generates the corresponding SCSI commands. Each SCSI command is encapsulated into an iSCSI PDU at the iSCSI initiator layer and transmitted to the device side over TCP/IP. The iSCSI target layer on the device side unpacks the iSCSI PDU, extracts the SCSI command, and passes it to the SCSI device for execution. The device's response to the SCSI command is encapsulated into an iSCSI response PDU as it passes through the iSCSI target layer and is transmitted back over the TCP/IP network to the iSCSI initiator layer on the host. The iSCSI initiator parses the SCSI response out of the iSCSI response PDU and hands it to the operating system, which then responds to the application.
iSCSI initiator
In essence, an iSCSI initiator is a client device that connects to a service provided by a server and initiates requests for that service. If you use iSCSI to build a RAC, the iSCSI initiator software needs to be installed on each Oracle RAC node.
The iSCSI initiator can be implemented in either software or hardware. Software iSCSI initiators are available for most major operating system platforms; on Linux we can use the free Open-iSCSI software driver found in the iscsi-initiator-utils RPM. A software iSCSI initiator is typically used in conjunction with a standard network interface card (NIC), in most cases a Gigabit Ethernet card. A hardware initiator is an iSCSI HBA (or a TCP offload engine (TOE) card), which is essentially a dedicated Ethernet card whose SCSI ASIC can offload all of the work (TCP and SCSI command processing) from the system CPU. iSCSI HBAs can be purchased from many vendors, including Adaptec, Alacritech, Intel, and QLogic.
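Once the initiator package is installed (we do that later in this article), a quick hedged sanity check is to print the Open-iSCSI version; the output below is illustrative and will vary by distribution:
[root@racnode1 ~]# iscsiadm -V
iscsiadm version 2.0-871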
iSCSI target
The iSCSI target is the "server" component of the iSCSI network. It is usually a storage device that contains the information you need and responds to requests from one or more initiators. For the purposes of this article, the node openfiler1 will be the iSCSI target.
Openfiler
Openfiler is a browser-based network storage management tool based on Linux. Openfiler supports both file-level NAS and block-level SAN in a single network architecture, and supports the CIFS, NFS, HTTP/WebDAV, FTP, and iSCSI protocols. Openfiler is a storage management operating system built on the Linux 2.6 kernel and other open source projects such as Apache, Samba, LVM2, ext3, Linux NFS, and iSCSI Enterprise Target. It integrates all of these technologies into a small, easy-to-manage, and powerful web interface.
Configure openfiler
Installing Openfiler is similar to installing Linux; it is simple enough that it is not covered in detail here, and readers who are unfamiliar with it can consult the documentation themselves. The configuration below is based on a RAC setup: the goal is to carve up the storage space on the Openfiler server and present it to the racnode1 and racnode2 nodes as shared storage.
Browser: Firefox 3.6 (IE is not recommended for this setup; the reason will be explained below)
Client operating system: RHEL 5.4
iSCSI client: open-iscsi-2.0-871
To use Openfiler as an iSCSI storage server, we need to perform six main tasks: set up the iSCSI service, configure network access, identify and partition the physical storage, create a new volume group, create all logical volumes, and finally create a new iSCSI target for each logical volume.
Service
Enter https://192.168.2.195:446/ in the browser to open the Openfiler web interface, where 192.168.2.195 is the IP address of the Openfiler system and 446 is the service port. The default user name for Openfiler is openfiler and the password is password.
To control the service, we use Openfiler Storage Control Center and go to [Services] / [Manage Services]:
To enable the iSCSI service, click the "Enable" link after the "iSCSI target server" service name. After that, the "iSCSI target server" status should change to "Enabled".
The ietd program implements the user-level portion of the iSCSI Enterprise Target software used to build an iSCSI storage system on Linux. With the iSCSI target enabled, we should be able to log in to the Openfiler server through SSH and see that the iscsi-target service is running:
[root@openfiler1 ~]# service iscsi-target status
ietd (pid 14243) is running...
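As a hedged aside (these proc paths assume the iSCSI Enterprise Target implementation shipped with this Openfiler release), the target's state can also be inspected directly on the Openfiler server once targets and sessions exist:
[root@openfiler1 ~]# cat /proc/net/iet/volume
[root@openfiler1 ~]# cat /proc/net/iet/session
The first file lists the LUNs mapped to each target; the second lists the initiators currently logged in.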
Network access configuration
The next step is to configure network access in Openfiler, specifying the two Oracle RAC nodes (racnode1 and racnode2) that need to access the iSCSI volumes through the storage (private) network. Note that the iSCSI volumes will be created later in this section. Also note that this step does not actually grant the two Oracle RAC nodes the permissions required to access the iSCSI logical volumes. Granting permissions is done later in this section by updating the ACL of each new logical volume.
As in the previous section, we use Openfiler Storage Control Center and go to [System] / [Network Setup] to complete the network access configuration. Through the "Network Access Configuration" section at the bottom of the page, an administrator can define which networks and/or hosts are allowed to access the resources exported by the Openfiler appliance. For the purposes of this article, we want to add the two Oracle RAC nodes individually rather than allowing the entire 192.168.2.0 network to access Openfiler resources.
When entering each Oracle RAC node, note that the "Name" field is just a logical name used for reference only. Following the convention for entering nodes, I simply used the node name defined for the IP address. Next, when entering the actual node in the "Network/Host" field, always use its IP address, even if its hostname is already defined in the /etc/hosts file or in DNS. Finally, when entering an individual host on our Class C network, use the subnet mask 255.255.255.255.
Remember, it is important that you enter the IP address of the private network (eth2) for each RAC node in the cluster.
The following figure shows the result of adding two Oracle RAC nodes:
Physical storage
In this section, we will create the three iSCSI volumes that the two Oracle RAC nodes in the cluster will use as shared storage. This involves performing several steps on the internal 73GB 15K SCSI hard drive connected to the Openfiler server.
Storage devices, such as internal IDE/SATA/SCSI/SAS disks, storage arrays, external USB drives, external FireWire drives, or any other storage device, can connect to the Openfiler server and be used by clients. If these devices are found at the operating system level, you can use Openfiler Storage Control Center to set up and manage all of these storage devices.
In this example, we have a 73GB internal SCSI hard drive for our shared storage needs. On the Openfiler server, this drive appears as /dev/sdb (MAXTOR ATLAS15K2_73SCA). To see the drive and start the iSCSI volume creation process, go to [Volumes] / [Block Devices] in Openfiler Storage Control Center:
Partition the physical disk
The first step we need to perform is to create a primary partition on the /dev/sdb internal hard drive. Clicking the /dev/sdb link shows the "Edit" and "Create" options for editing and creating partitions, respectively. Since we will create a single primary partition that spans the entire disk, most of the options can be left at their default settings; the only modification is to change "Partition Type" from "Extended partition" to "Physical volume". Here are the values I specified to create the primary partition on /dev/sdb:
Mode: Primary
Partition type: Physical volume
Start cylinder: 1
End cylinder: 8924
The size will now show 68.36 GB. To accept this setting, click the Create button. This generates a new partition (/dev/sdb1) on our internal hard drive:
Figure 9: partitioning a physical volume
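If you prefer to double-check the result from the shell rather than the web interface, a hedged sanity check (assuming SSH access to the Openfiler server) is to list the partition table; the new /dev/sdb1 partition should appear, typically with partition id 8e (Linux LVM):
[root@openfiler1 ~]# fdisk -l /dev/sdb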
Volume group management
The next step is to create a volume group. We will create a volume group named racdbvg containing the newly created primary partition. Go to [Volumes] / [Volume Groups] in Openfiler Storage Control Center. There we would see any existing volume groups, or none, as in our case. In the Volume Group Management screen, enter the name of the new volume group (racdbvg), click the checkbox in front of /dev/sdb1 to select the partition, and finally click the "Add volume group" button. Afterwards, we see a list showing our newly created volume group named "racdbvg":
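Because Openfiler is built on LVM2, the same objects are visible from the command line; as a hedged cross-check using standard LVM2 tools on the Openfiler server:
[root@openfiler1 ~]# pvdisplay /dev/sdb1
[root@openfiler1 ~]# vgdisplay racdbvg
pvdisplay confirms the partition was initialized as a physical volume, and vgdisplay shows the new volume group and its free space.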
Logical volume
We can now create the three logical volumes in the newly created volume group (racdbvg). Go to [Volumes] / [Add Volume] in Openfiler Storage Control Center. There we see the newly created volume group (racdbvg) along with its block storage statistics. The bottom of the screen also provides the option to create a new volume in the selected volume group ("Create a volume in racdbvg"). Use this screen to create the following three logical (iSCSI) volumes. After each logical volume is created, the application returns to the "Manage Volumes" screen. You then need to click back to the "Add Volume" tab to create the next logical volume, until all three iSCSI volumes are created:
iSCSI / logical volumes
Volume Name    Volume Description          Required Space (MB)   Filesystem Type
racdb-crs1     racdb - ASM CRS Volume 1    2,208                 iSCSI
racdb-data1    racdb - ASM Data Volume 1   33,888                iSCSI
racdb-fra1     racdb - ASM FRA Volume 1    33,888                iSCSI
We have now created the three iSCSI disks and can present them to the iSCSI clients (racnode1 and racnode2) on the network. The "Manage Volumes" screen should look like this:
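As another hedged command-line cross-check on the Openfiler server (standard LVM2 tooling; the output below is illustrative, with sizes rounded from the MB values requested above):
[root@openfiler1 ~]# lvscan | grep racdbvg
  ACTIVE   '/dev/racdbvg/racdb-crs1'  [2.16 GB]  inherit
  ACTIVE   '/dev/racdbvg/racdb-data1' [33.09 GB] inherit
  ACTIVE   '/dev/racdbvg/racdb-fra1'  [33.09 GB] inherit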
iSCSI targets
We now have three iSCSI logical volumes. Before iSCSI clients can access them, however, an iSCSI target needs to be created for each of the three volumes. Each iSCSI logical volume will be mapped to a specific iSCSI target, and the two Oracle RAC nodes will be granted the appropriate network access to that target. For this article, there is a one-to-one mapping between iSCSI logical volumes and iSCSI targets.
The process of creating and configuring an iSCSI target consists of three steps: create a unique target IQN (essentially the universal name of the new iSCSI target), map one of the iSCSI logical volumes created in the previous section to the newly created iSCSI target, and finally grant the two Oracle RAC nodes access to the new iSCSI target. Note that this process must be performed once for each of the three iSCSI logical volumes created in the previous section.
For this article, the following table lists the new iSCSI target name (target IQN) and the iSCSI logical volume to which it will be mapped:
iSCSI target / logical volume mapping
Target IQN                               iSCSI Volume Name   Volume Description
iqn.2006-01.com.openfiler:racdb.crs1     racdb-crs1          racdb - ASM CRS Volume 1
iqn.2006-01.com.openfiler:racdb.data1    racdb-data1         racdb - ASM Data Volume 1
iqn.2006-01.com.openfiler:racdb.fra1     racdb-fra1          racdb - ASM FRA Volume 1
Now let's create the three new iSCSI targets, one for each iSCSI logical volume. The following example illustrates the three steps for creating the first target (Oracle Clusterware / racdb-crs1) by creating the iSCSI target iqn.2006-01.com.openfiler:racdb.crs1. This three-step process must then be repeated for each of the three new iSCSI targets listed in the table above.
Create a new target IQN
Go to [Volumes] / [iSCSI Targets] in Openfiler Storage Control Center and make sure the gray subtab "Target Configuration" is selected. A new iSCSI target can be created on this tab page. A default value is automatically generated as the name of the new iSCSI target (commonly referred to as the "target IQN"), for example "iqn.2006-01.com.openfiler:tsn.ae4683b67fd3":
I prefer to replace the last segment of the default target IQN with a more meaningful string. For the first iSCSI target (Oracle Clusterware / racdb-crs1), I modify the default target IQN by replacing the string "tsn.ae4683b67fd3" with "racdb.crs1", as shown in the following figure:
When you are satisfied with the new target IQN, click the "Add" button. This creates a new iSCSI target, and then a page appears where you can modify a series of settings for the new iSCSI target. For the purposes of this article, there is no need to change any settings for the new iSCSI target.
LUN mapping
After you create a new iSCSI target, the next step is to map the corresponding iSCSI logical volume to it. Under the "Target Configuration" subtab, verify that the correct iSCSI target is selected in the "Select iSCSI Target" section. If not, use the drop-down menu to select the correct iSCSI target and click the "Change" button.
Next, click the gray subtab named "LUN Mapping" (next to the "Target Configuration" subtab). Locate the appropriate iSCSI logical volume (/dev/racdbvg/racdb-crs1 in this case) and click the "Map" button. You do not need to change any settings on this page. After clicking the "Map" button for volume /dev/racdbvg/racdb-crs1, your screen should look like the following figure:
Network ACL
The iSCSI client needs to be granted the appropriate permissions before it can access the newly created iSCSI target. Previously, we configured network access for two hosts (Oracle RAC nodes) through Openfiler. These two nodes need to access the new iSCSI target through the storage (private) network. Now we need to grant these two Oracle RAC nodes access to the new iSCSI target.
Click the gray subtab named "Network ACL" (next to the "LUN Mapping" subtab). For the current iSCSI target, change the Access value for both hosts from "Deny" to "Allow", and then click the "Update" button.
Return to the "Create a new target IQN" step and perform these three tasks for the remaining two iSCSI logical volumes, substituting the values from the iSCSI target / logical volume mapping table.
Configure iSCSI volumes on the Oracle RAC nodes
Configure the iSCSI initiator on both Oracle RAC nodes in the cluster. Creating partitions, however, should only be performed on one node of the RAC cluster.
The iSCSI client can be any system (Linux, Unix, MS Windows, Apple Mac, etc.) that provides iSCSI support (driver). In our example, the clients are two Linux servers (racnode1 and racnode2) running Oracle Enterprise Linux 5.4.
In this section, we will configure the iSCSI software initiator on the two Oracle RAC nodes. Oracle Enterprise Linux 5.4 includes the Open-iSCSI software initiator, which is located in the iscsi-initiator-utils RPM. This is a change from earlier versions of Oracle Enterprise Linux (4.x), which included the Linux iscsi-sfnet software driver developed as part of the Linux-iSCSI project. All iSCSI management tasks, such as discovery and login, use the iscsiadm command-line interface included with Open-iSCSI.
The iSCSI software initiator will be configured to automatically log in to the network storage server (openfiler1) and discover the iSCSI volumes created in the previous section. We will then step through using udev to create a persistent local SCSI device name (for example, /dev/iscsi/crs1) for each iSCSI target name discovered. Having consistent local SCSI device names, and knowing which iSCSI target each maps to, helps to distinguish between the three volumes when configuring ASM. Before any of this, however, we must first install the iSCSI initiator software.
Install the iSCSI (initiator) service
"for Oracle Enterprise Linux 5.4, the Open-iSCSI iSCSI software initiator is not installed by default." The software is included in the iscsi-initiator-utils package on CD 1. To determine if the package is installed (not in most cases), execute the following command on both Oracle RAC nodes:
[root@racnode1 ~] # rpm-qa-- queryformat "% {NAME} -% {VERSION} -% {RELEASE} (% {ARCH})\ n" | grep iscsi-initiator-utils
If the iscsi-initiator-utils package is not installed, load CD 1 into each Oracle RAC node and execute the following commands:
[root@racnode1 ~]# mount -r /dev/cdrom /media/cdrom
[root@racnode1 ~]# cd /media/cdrom/Server
[root@racnode1 ~]# rpm -Uvh iscsi-initiator-utils-*
[root@racnode1 ~]# cd /
[root@racnode1 ~]# eject
Verify that the iscsi-initiator-utils package is now installed:
[root@racnode1 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep iscsi-initiator-utils
iscsi-initiator-utils-6.2.0.871-0.10.el5 (x86_64)
Configure the iSCSI (initiator) service
After verifying that the iscsi-initiator-utils package is installed on both Oracle RAC nodes, start the iscsid service and configure it to start automatically when the system boots. We will also configure the iscsi service to start automatically on boot and log in to the required iSCSI targets automatically.
[root@racnode1 ~]# service iscsid start
Turning off network shutdown. Starting iSCSI daemon: [ OK ]
[ OK ]
[root@racnode1 ~]# chkconfig iscsid on
[root@racnode1 ~]# chkconfig iscsi on
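As a hedged check (standard chkconfig behavior on RHEL/OEL 5; output illustrative), confirm that both services are registered for the default runlevels:
[root@racnode1 ~]# chkconfig --list | grep iscsi
iscsi           0:off   1:off   2:on    3:on    4:on    5:on    6:off
iscsid          0:off   1:off   2:on    3:on    4:on    5:on    6:off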
Now that the iSCSI service is started, use the iscsiadm command line interface to discover all available targets on the network storage server. This should be done on both Oracle RAC nodes to verify that the configuration is working properly:
[root@racnode1 ~]# iscsiadm -m discovery -t sendtargets -p openfiler1-priv
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.crs1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.fra1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.data1
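Discovery also persists a node record for each target (on RHEL/OEL 5 these typically live under /var/lib/iscsi/nodes; this path is a hedged assumption for this Open-iSCSI build). The records can be listed at any time with:
[root@racnode1 ~]# iscsiadm -m node
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.crs1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.fra1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.data1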
Manually log in to the iSCSI target
At this point, the iSCSI initiator service has been started, and each Oracle RAC node can discover the available targets on the network storage server. The next step is to manually log in to each available target, which can be done with the iscsiadm command-line interface and needs to be run on both Oracle RAC nodes. Note that I had to specify the IP address of the network storage server rather than its hostname (openfiler1-priv); I believe this is necessary because the discovery above reported the targets by IP address.
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 192.168.2.195 -l
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.data1 -p 192.168.2.195 -l
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.fra1 -p 192.168.2.195 -l
If the resource configuration on the Openfiler server is updated later, the corresponding session can be rescanned to pick up the changes with the following command:
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 192.168.2.195 -R
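After logging in on both nodes, a hedged verification (standard Open-iSCSI command; the session ids below are illustrative) is to list the active sessions:
[root@racnode1 ~]# iscsiadm -m session
tcp: [1] 192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.crs1
tcp: [2] 192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.data1
tcp: [3] 192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.fra1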
Configure automatic login
The next step is to ensure that the client automatically logs in to each of the targets listed above whenever the machine boots (or the iSCSI initiator service is started/restarted). As with the manual login process described above, execute the following commands on both Oracle RAC nodes:
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 192.168.2.195 --op update -n node.startup -v automatic
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.data1 -p 192.168.2.195 --op update -n node.startup -v automatic
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.fra1 -p 192.168.2.195 --op update -n node.startup -v automatic
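To confirm the update took effect, a hedged check is to print one of the node records and look at the node.startup field (Open-iSCSI prints the full record when no operation is given):
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 192.168.2.195 | grep node.startup
node.startup = automatic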
Create a permanent local SCSI device name
In this section, we will step through creating a persistent local SCSI device name for each iSCSI target name, using udev to accomplish the task. Having consistent local SCSI device names, and knowing which iSCSI target each maps to, helps to distinguish between the three volumes when configuring ASM. Although this is not strictly required (we will use ASMLib 2.0 for all volumes), it provides a self-documenting way to quickly determine the name and location of each iSCSI volume.
Whenever an Oracle RAC node boots and the iSCSI initiator service starts, it automatically logs in to each configured target in a random order and maps the targets to the next available local SCSI device names. For example, the target iqn.2006-01.com.openfiler:racdb.crs1 might map to /dev/sdb. I can determine the current mapping of all targets by looking at the /dev/disk/by-path directory:
[root@racnode1 ~]# (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs1-lun-0 -> ../../sdb
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.data1-lun-0 -> ../../sdd
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.fra1-lun-0 -> ../../sdc
Using the output of the above ls command, we can establish the following current mapping:
Current mapping of iSCSI target names to local SCSI device names
iSCSI Target Name                        SCSI Device Name
iqn.2006-01.com.openfiler:racdb.crs1     /dev/sdb
iqn.2006-01.com.openfiler:racdb.data1    /dev/sdd
iqn.2006-01.com.openfiler:racdb.fra1     /dev/sdc
However, this mapping may change every time the Oracle RAC node is rebooted. For example, after a reboot the iSCSI target iqn.2006-01.com.openfiler:racdb.crs1 might be mapped to the local SCSI device /dev/sdc. Since the iSCSI target mapping after a reboot cannot be predicted, it is unrealistic to rely on local SCSI device names.
What we need is a consistent device name that we can reference (for example, /dev/iscsi/crs1) and that will always point to the appropriate iSCSI target across reboots. This is why the dynamic device management tool udev was introduced. udev provides a dynamic device directory and uses a set of configurable rules to point to the actual devices via symbolic links. When udev receives a device event (for example, a client logging in to an iSCSI target), it matches the event against its configured rules to identify the device, based on the device attributes exposed in sysfs. Matching rules can supply additional device information, specify a device node name and multiple symbolic link names, and instruct udev to run other programs (for example, a shell script) as part of handling the device event.
The first step is to create a new rule file. The file will be named /etc/udev/rules.d/55-openiscsi.rules and contains a single line with the name=value pairs for the events we are interested in. It also defines a callout shell script (/etc/udev/scripts/iscsidev.sh) to handle those events.
Create the following rule file /etc/udev/rules.d/55-openiscsi.rules on both Oracle RAC nodes:
# /etc/udev/rules.d/55-openiscsi.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM="/etc/udev/scripts/iscsidev.sh %b", SYMLINK+="iscsi/%c/part%n"
Now we need to create the UNIX shell script that will be called when these events are received. First create a separate directory on both Oracle RAC nodes to store udev scripts:
[root@racnode1 ~]# mkdir -p /etc/udev/scripts
Next, create the UNIX shell script /etc/udev/scripts/iscsidev.sh on both Oracle RAC nodes:
#!/bin/sh

# FILE: /etc/udev/scripts/iscsidev.sh

BUS=${1}
HOST=${BUS%%:*}

[ -e /sys/class/iscsi_host ] || exit 1

file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"

target_name=$(cat ${file})

# This is not an open-scsi drive
if [ -z "${target_name}" ]; then
    exit 1
fi

# Check if QNAP drive
check_qnap_target_name=${target_name%%:*}
if [ $check_qnap_target_name = "iqn.2004-04.com.qnap" ]; then
    target_name=`echo "${target_name%.*}"`
fi

echo "${target_name##*.}"
After creating the UNIX shell script, make it executable:
[root@racnode1 ~]# chmod 755 /etc/udev/scripts/iscsidev.sh
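Before relying on udev, the callout script can be exercised by hand. This is a hedged example: the SCSI bus id 6:0:0:0 is hypothetical and will differ on your system, but it shows that the script prints the last dot-separated component of the matching session's target name, which becomes the %c substitution in the rule above:
[root@racnode1 ~]# /etc/udev/scripts/iscsidev.sh 6:0:0:0
crs1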
Now that udev is configured, restart the iSCSI service on both Oracle RAC nodes:
[root@racnode1 ~]# service iscsi stop
Logging out of session [sid: 6, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]
Logging out of session [sid: 7, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]
Logging out of session [sid: 8, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]
Logout of [sid: 6, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]: successful
Logout of [sid: 7, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]: successful
Logout of [sid: 8, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]: successful
Stopping iSCSI daemon: [ OK ]

[root@racnode1 ~]# service iscsi start
iscsid dead but pid file exists
Turning off network shutdown. Starting iSCSI daemon: [ OK ]
[ OK ]
Setting up iSCSI targets: Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]: successful [ OK ]
Next, let's see if our hard work has paid off:
[root@racnode1 ~]# ls -l /dev/iscsi/*
/dev/iscsi/crs1:
total 0
lrwxrwxrwx 1 root root 9 Nov  3 18:13 part -> ../../sdc

/dev/iscsi/data1:
total 0
lrwxrwxrwx 1 root root 9 Nov  3 18:13 part -> ../../sde

/dev/iscsi/fra1:
total 0
lrwxrwxrwx 1 root root 9 Nov  3 18:13 part -> ../../sdd
The listing above shows that udev is doing exactly what we expect! We now have a consistent set of local device names that can be used to reference the iSCSI targets. For example, we can safely assume that the device name /dev/iscsi/crs1/part will always reference the iSCSI target iqn.2006-01.com.openfiler:racdb.crs1. We now have a consistent mapping of iSCSI target names to local device names, as shown in the following table:
Mapping of iSCSI target names to local device names
iSCSI Target Name                        Local Device Name
iqn.2006-01.com.openfiler:racdb.crs1     /dev/iscsi/crs1/part
iqn.2006-01.com.openfiler:racdb.data1    /dev/iscsi/data1/part
iqn.2006-01.com.openfiler:racdb.fra1     /dev/iscsi/fra1/part
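With these persistent names in place, subsequent steps can refer to them instead of the unstable /dev/sdX names. As a hedged sketch of what comes next (and, as noted earlier, creating partitions should be performed on only one node of the RAC cluster):
[root@racnode1 ~]# fdisk /dev/iscsi/crs1/part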
That concludes our look at how iSCSI is implemented in Openfiler. I hope the content above is of some help to you and lets you learn something new. If you found the article useful, feel free to share it so more people can see it.