

Build an iSCSI storage system


Overview of NAS and SAN servers

NAS (Network Attached Storage):

A NAS (Network Attached Storage) server is a data storage server attached to the network. It transfers data over the standard TCP/IP network protocol and provides file sharing and data backup for computers running different operating systems such as Windows, Linux, and Mac OS on the network.

Advantages:

1. I/O load is offloaded from the front-end server to the back-end storage device

2. Easy to expand

Disadvantages:

1. The network can become a bottleneck, although this can now be addressed with 10 Gb fiber network cards.

SAN storage:

A Storage Area Network (SAN) is a high-speed network that provides data transfer between computers and storage systems. A storage device is one or more disk devices used to store computer data, usually a disk array. SAN storage traditionally uses Fibre Channel (FC) technology, connecting the storage array and the server hosts through FC switches to build a dedicated network for data storage.

Because a SAN is network-based, it is very scalable: storage space can be added to the SAN, and additional servers that use that storage can be attached.

The difference between NAS and SAN is in two aspects:

First, in terms of network architecture, the essential difference is:

NAS transmits data directly over TCP/IP. SAN transmits data with the SCSI or iSCSI protocol.

Second, in terms of the implementation method of reading and writing files, the essential difference lies in:

NAS uses NFS and CIFS to implement file sharing, which means NAS performs file-level reads and writes through the operating system.

In a SAN, the interface between the computer and the storage is a low-level block protocol, which addresses data by the "block address + offset" in the protocol header. The shared storage is independent of the operating system on the front end, so the operating system of the application server can be changed without affecting the storage.

Operation mode: C/S (client/server)

target: the storage server side; initiator: the client, the side that initiates the connection

Port: 3260

-

One: experimental topology

Two: experimental objectives

Practice: configure an IP SAN server

Practice: daily operation of an IP SAN server

Three: experimental environment

Server: target xuegod63 192.168.1.63

Client: initiator xuegod64 192.168.1.64

Four: experimental code

Practice: configure an IP SAN storage server

Analysis: configure xuegod63 as an IP SAN server, create the sda4 partition on xuegod63, and share it over IP SAN.

-

Configure the server side: xuegod63

1) install: scsi-target-utils

[root@xuegod63 ~]# yum install -y scsi-target-utils

2) prepare a disk partition: sda4 size 5G

[root@xuegod63 ~]# fdisk /dev/sda # create the sda4 partition

Command (m for help): p

Command (m for help): n

p

Selected partition 4

Last cylinder, +cylinders or +size{K,M,G} (1428-2610, default 2610): +5G

Command (m for help): w

[root@xuegod63 ~]# reboot
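A reboot is not strictly required here: the kernel can usually be asked to re-read the partition table directly. A minimal sketch, assuming the partprobe utility (from the parted package) is installed:

[root@xuegod63 ~]# partprobe /dev/sda # ask the kernel to re-read the partition table

[root@xuegod63 ~]# ls /dev/sda4 # the new partition should now be visible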

Configure target to share the sda4 partition

3) modify the configuration file

[root@xuegod63 ~]# vim /etc/tgt/targets.conf # add the following

Find the commented example paragraph below in the file, and append the new content after that paragraph:

#

76 # direct-store /dev/sdb # Becomes LUN 1

77 # direct-store /dev/sdc # Becomes LUN 2

78 # direct-store /dev/sdd # Becomes LUN 3

79 # write-cache off

80 # vendor_id MyCompany Inc.

81 #

The content to append is (the <target> ... </target> wrapper uses the IQN created for this lab):

<target iqn.2015-01.cn.xuegod.www:target_san1>

backing-store /dev/sda4

initiator-address 192.168.1.64

vendor_id xuegod

product_id target1

</target>

Note:

default-driver iscsi # in this configuration file everything else is commented out by default; the iscsi driver is used

# IQN (iSCSI qualified name) format: iqn.year-month.reversed-hostname:target-name

backing-store /dev/sda4 # can be a real partition or a file created with dd; it should not be smaller than 5 GB (the file system used later is GFS, whose journal alone takes 128 MB). A file-backed sketch follows these notes.

initiator-address 192.168.1.62 # allow this host to access the storage

initiator-address 192.168.1.64 # allow this host to access the storage

vendor_id xuegod # vendor ID, identifies the device (keep the string short)

product_id target1 # product ID

4) start the service

[root@xuegod63 ~]# service tgtd restart

[root@xuegod63 ~]# netstat -antup | grep 3260

tcp   0   0 0.0.0.0:3260   0.0.0.0:*   LISTEN   3130/tgtd

tcp   0   0 :::3260        :::*        LISTEN   3130/tgtd
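Optionally, reachability of port 3260 can be confirmed from the client before configuring the initiator. A small sanity check, assuming the nc package is installed on xuegod64 (not part of the original steps):

[root@xuegod64 ~]# nc -z -w 3 192.168.1.63 3260 && echo "target port reachable"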

5) check the status: tgt-admin --show

[root@xuegod63 ~]# tgt-admin --show

Account information:

ACL information: # which clients are allowed to access

192.168.1.64

Enable at boot:

[root@xuegod63 Desktop]# chkconfig tgtd on

-

Configure client: xuegod64

1) install the package: iscsi-initiator-utils

[root@xuegod64 ~]# rpm -ivh /mnt/Packages/iscsi-initiator-utils-6.2.0.872-34.el6.x86_64.rpm

2) start the client service:

[root@xuegod64 ~]# /etc/init.d/iscsid start # starting it alone has no visible effect yet

Note: the target storage must be discovered first; only after discovery does starting the client service take effect.

[root@xuegod64 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.63:3260

Starting iscsid: [OK]

192.168.1.63:3260,1 iqn.2015-01.cn.xuegod.www:target_san1

[root@xuegod64 ~]# /etc/init.d/iscsid status

iscsid (pid 2607) is running...

3) where the discovered target information is stored on the client:

[root@xuegod64 ~]# rpm -ivh /mnt/Packages/tree-1.5.3-2.el6.x86_64.rpm

[root@xuegod64 ~]# tree /var/lib/iscsi/

/var/lib/iscsi/

├── ifaces

├── isns

├── nodes

│ └── iqn.2015-01.cn.xuegod.www:target_san1

│ └── 192.168.1.63,3260,1

│ └── default

├── send_targets

│ └── 192.168.1.63,3260

│ ├── iqn.2015-01.cn.xuegod.www:target_san1,192.168.1.63,3260,1,default -> /var/lib/iscsi/nodes/iqn.2015-01.cn.xuegod.www:target_san1/192.168.1.63,3260,1

│ └── st_config

├── slp

└── static

4) restart:

[root@xuegod64 ~]# /etc/init.d/iscsid restart # start iscsid first

[root@xuegod64 ~]# /etc/init.d/iscsi restart # identify the device based on the information recorded in /var/lib/iscsi/

Stop:

[root@xuegod64 ~]# /etc/init.d/iscsi stop

[root@xuegod64 ~]# /etc/init.d/iscsid stop

5) start automatically at boot:

[root@xuegod64 ~]# chkconfig iscsi on

[root@xuegod64 ~]# chkconfig iscsid on

Check the default start order of the two services:

[root@xuegod64 ~]# grep chkconfig: /etc/init.d/iscsid

# chkconfig: 345 7 89

[root@xuegod64 ~]# grep chkconfig: /etc/init.d/iscsi

# chkconfig: 345 13 89

iscsid has start priority 7 and iscsi has priority 13, so at boot iscsid is started before iscsi.

6) check: a new hard disk appears:

[root@xuegod64 ~]# ll /dev/sdb

brw-rw---- 1 root disk 8, 16 Jul 30 19:11 /dev/sdb

-

Log out of and log back in to the storage device

Logout method 1

1: log out

[root@xuegod64 ~]# iscsiadm -m node -T iqn.2015-01.cn.xuegod.www:target_san1 -u

Logging out of session [sid: 1, target: iqn.2015-01.cn.xuegod.www:target_san1, portal: 192.168.1.63,3260]

Logout of [sid: 1, target: iqn.2015-01.cn.xuegod.www:target_san1, portal: 192.168.1.63,3260] successful.

[root@xuegod64 ~]# ls /dev/sdb

ls: cannot access /dev/sdb: No such file or directory

2: log in to the storage device

[root@xuegod64 ~]# iscsiadm -m node -T iqn.2015-01.cn.xuegod.www:target_san1 -l

Logging in to [iface: default, target: iqn.2015-01.cn.xuegod.www:target_san1, portal: 192.168.1.63,3260] (multiple)

Login to [iface: default, target: iqn.2015-01.cn.xuegod.www:target_san1, portal: 192.168.1.63,3260] successful.

[root@xuegod64 ~]# ls /dev/sdb

/dev/sdb

Logout method 2:

1: log out

[root@xuegod64 ~]# /etc/init.d/iscsi stop

Stopping iscsi: [OK]

[root@xuegod64 ~]# ls /dev/sdb

ls: cannot access /dev/sdb: No such file or directory

2: log in to the storage device

[root@xuegod64 ~]# /etc/init.d/iscsi restart

To exit completely (forget the recorded target):

[root@xuegod64 ~]# /etc/init.d/iscsi stop

[root@xuegod64 ~]# rm -rf /var/lib/iscsi/*
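Instead of wiping everything under /var/lib/iscsi/, a more targeted alternative (a sketch, not part of the original steps) is to let iscsiadm forget only this node record:

[root@xuegod64 ~]# iscsiadm -m node -T iqn.2015-01.cn.xuegod.www:target_san1 -o delete # remove only this target's record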

-

Use the discovered disk on xuegod64: partition, format, and mount

1: discover the storage device

[root@xuegod64 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.63:3260 # discover the storage device

2: start the client services

[root@xuegod64 ~]# /etc/init.d/iscsid restart

[root@xuegod64 ~]# /etc/init.d/iscsi restart

[root@xuegod64 ~]# ls /dev/sdb # sdb is now visible

/dev/sdb

3: partition, format, and mount.

[root@xuegod64 ~]# fdisk /dev/sdb # create partition sdb1

Command (m for help): n

p primary partition (1-4)

p

Partition number (1-4): 1

Last cylinder, +cylinders or +size{K,M,G} (1-1019, default 1019): # press Enter to use all available space

[root@xuegod64 ~]# ll /dev/sdb*

brw-rw---- 1 root disk 8, 16 Jul 30 21:44 /dev/sdb

brw-rw---- 1 root disk 8, 17 Jul 30 21:44 /dev/sdb1

[root@xuegod64 ~]# mkfs.ext4 /dev/sdb1

[root@xuegod64 ~]# mount /dev/sdb1 /opt
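If the mount should survive a reboot, the usual approach (a sketch, not shown in the original lab) is an /etc/fstab entry with the _netdev option, so mounting is delayed until the network and the iscsi service are up:

[root@xuegod64 ~]# blkid /dev/sdb1 # note the UUID of the new file system

[root@xuegod64 ~]# echo "UUID=<uuid-from-blkid> /opt ext4 _netdev 0 0" >> /etc/fstab # <uuid-from-blkid> is a placeholder

[root@xuegod64 ~]# mount -a # verify the entry mounts cleanly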

-

On the target server, add another storage client

1: add another allowed client to the target definition (initiator-address 192.168.1.62):

[root@xuegod63 ~]# vim /etc/tgt/targets.conf

2: restart the service

[root@xuegod63 ~]# /etc/init.d/tgtd restart

Stopping SCSI target daemon: initiators still connected [FAILED]

Starting SCSI target daemon: [FAILED] # error: initiators are still connected

Solution: have the client log out first.

[root@xuegod64 ~]# umount /opt/

[root@xuegod64 ~]# /etc/init.d/iscsi stop

After the client has logged out, restart the target service again:

[root@xuegod63 ~]# /etc/init.d/tgtd restart

Stopping SCSI target daemon: [OK]

Starting SCSI target daemon: [OK]
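A full restart is not always necessary: tgt-admin can also push configuration changes into the running daemon. A sketch (use --force with care, because it can interrupt sessions of connected initiators):

[root@xuegod63 ~]# tgt-admin --update ALL # apply /etc/tgt/targets.conf to the running tgtd

[root@xuegod63 ~]# tgt-admin --update ALL --force # force the update even if initiators are still connected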

3: test: mount the disk on both xuegod64 and xuegod62 and check whether data stays in sync

[root@xuegod64 ~]# /etc/init.d/iscsi start

[root@xuegod64 ~]# ls /dev/sdb*

/dev/sdb /dev/sdb1

[root@xuegod64 ~]# mount /dev/sdb1 /opt/

[root@xuegod64 ~]# cp /etc/passwd /opt/ # copy some data

4: test: check whether xuegod62 sees the data

[root@xuegod62 ~]# rpm -ivh /mnt/Packages/iscsi-initiator-utils-6.2.0.872-34.el6.x86_64.rpm

[root@xuegod62 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.63:3260

192.168.1.63:3260,1 iqn.2015-01.cn.xuegod.www:target_san1

[root@xuegod62 ~]# /etc/init.d/iscsi restart

[root@xuegod62 ~]# ls /dev/sdb*

/dev/sdb /dev/sdb1

[root@xuegod62 ~]# mount /dev/sdb1 /opt/

[root@xuegod62 ~]# ls /opt/ # you can see the data has been synchronized

lost+found passwd

5: test whether xuegod64 sees data written by xuegod62:

[root@xuegod62 ~]# cp /etc/hosts /opt

[root@xuegod62 ~]# ls /opt

hosts lost+found passwd

[root@xuegod64 ~]# ls /opt

lost+found passwd

# xuegod64 still sees only passwd; the new hosts file does not appear. There is no synchronization because ext4 is not a cluster file system and does not support simultaneous use by multiple clients. A cluster file system such as GFS can be used for synchronized access.
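For reference, a minimal sketch of what formatting the shared LUN with GFS2 would look like. This assumes a working Red Hat cluster stack (cman, clvmd) is already configured, which this lab does not set up, and the cluster name mycluster is only an illustration:

[root@xuegod64 ~]# yum install -y gfs2-utils

[root@xuegod64 ~]# mkfs.gfs2 -p lock_dlm -t mycluster:sharedfs -j 2 /dev/sdb1 # two journals, one per node

[root@xuegod64 ~]# mount -t gfs2 /dev/sdb1 /opt # mount on each cluster node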
