Common storcli/perccli scenarios: view help information, view the controller count, create a RAID volume group online, delete a volume group, modify VD properties, and set a hard disk to JBOD mode online.
View help information
storcli64 help
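The help output is quite long; a simple shell sketch for browsing it is to page it, for example:
storcli64 help | less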
View the number of controllers
storcli64 show ctrlcount
[root@SZ × × ×-2] # storcli64 show ctrlcount
CLI Version = 007.0415.0000.0000 Feb 13, 2018
Operating system = Linux 3.10.0-862.11.6.el7.x86_64
Status Code = 0
Status = Success
Description = None
Controller Count = 1
Note:
Controller Count = 1 means this server has only one controller, which corresponds to /c0; additional controllers would be numbered /c1, /c2, and so on.
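As a sketch, if a second controller were present, the same commands would simply target /c1 instead of /c0, for example:
storcli64 /c1/eall/sall show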
Create a RAID volume group online
storcli64 /c0/eall/sall show
[root@SZ × ×-2] # storcli64 /c0/eall/sall show
CLI Version = 007.0415.0000.0000 Feb 13, 2018
Operating system = Linux 3.10.0-862.11.6.el7.x86_64
Controller = 0
Status = Success
Description = Show Drive Information Succeeded.
Drive Information:
=
EID:Slt DID State DG       Size Intf Med SED PI SeSz Model       Sp Type
252:0    14 Onln   0 278.464 GB SAS  HDD N   N  512B ST9300603SS U  -
252:1    21 Onln   0 278.464 GB SAS  HDD N   N  512B MK3001GRRB  U  -
252:2    20 Onln   1 557.861 GB SAS  HDD N   N  512B MBF2600RC   U  -
252:3    17 Onln   1 557.861 GB SAS  HDD N   N  512B MBF2600RC   U  -
252:4    18 Onln   1 557.861 GB SAS  HDD N   N  512B MBF2600RC   U  -
252:5    22 Onln   1 557.861 GB SAS  HDD N   N  512B MBF2600RC   U  -
252:6    23 Onln   1 557.861 GB SAS  HDD N   N  512B MBF2600RC   U  -
252:7    24 UGood  - 557.861 GB SAS  HDD N   N  512B MBF2600RC   U  -
EID-Enclosure Device ID | Slt-Slot No. | DID-Device ID | DG-DriveGroup
DHS-Dedicated Hot Spare | UGood-Unconfigured Good | GHS-Global Hotspare
UBad-Unconfigured Bad | Onln-Online | Offln-Offline | Intf-Interface
Med-Media Type | SED-Self Encryptive Drive | PI-Protection Info
SeSz-Sector Size | Sp-Spun | U-Up | D-Down/PowerSave | T-Transition | F-Foreign
UGUnsp-Unsupported | UGShld-UnConfigured shielded | HSPShld-Hotspare shielded
CFShld-Configured shielded | Cpybck-CopyBack | CBShld-Copyback Shielded
Note:
Drive 252:7, i.e. the drive in slot 7, has just been inserted and has no RAID configuration yet (UGood state). We now create a RAID0 on it.
Create a RAID0 VD on 252:7
storcli64 /c0 add vd r0 size=all drives=252:7 wb direct strip=128
Note:
r0 means RAID0; for ceph we choose single-disk RAID0 by default. Other RAID levels such as r1 and r5 are also available.
size=all uses all of the available space for the VD.
drives=252:7 is the EID:Slt of the new disk. If multiple disks are combined into one VD, specify the EID:Slt of each of them (see the sketch after these notes).
wb stands for WriteBack mode; wt stands for WriteThrough mode.
direct means Direct IO: read operations are not cached in the RAID card's cache; the corresponding Cached IO mode caches hot data in the RAID card's cache.
strip=128 means a 128 KB stripe size. For a single-disk RAID0 it makes little difference; when multiple disks form the RAID0 it needs to be considered, and keeping it consistent with the other disks is enough. storcli64 /c0/vall show all shows the stripe size of the existing VDs.
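As a sketch (the slot numbers here are illustrative, not taken from this server), a RAID0 spanning two disks with the same stripe size could be created with:
storcli64 /c0 add vd r0 size=all drives=252:5-6 wb direct strip=128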
After adding the VD, you can see it with storcli64 /c0/vall show:
[root@SZ × ×-2] # storcli64 /c0/vall show
CLI Version = 007.0415.0000.0000 Feb 13, 2018
Operating system = Linux 3.10.0-862.11.6.el7.x86_64
Controller = 0
Status = Success
Description = None
Virtual Drives:
=
DG/VD TYPE  State Access Consist Cache Cac sCC       Size Name
0/0   RAID1 Optl  RW     No      RWBD  -   ON  278.464 GB
1/1   RAID5 Optl  RW     Yes     RWBD  -   ON  2.178 TB
2/2   RAID0 Optl  RW     Yes     RWBD  -   ON  557.861 GB
Cac=CacheCade | Rec=Recovery | OfLn=OffLine | Pdgd=Partially Degraded | Dgrd=Degraded
Optl=Optimal | RO=Read Only | RW=Read Write | HD=Hidden | TRANS=TransportReady | B=Blocked |
Consist=Consistent | R=Read Ahead Always | NR=No Read Ahead | WB=WriteBack |
AWB=Always WriteBack | WT=WriteThrough | C=Cached IO | D=Direct IO | sCC=Scheduled
Check Consistency
Note:
DG/VD 2/2 (the RAID0) is the newly added VD.
In the operating system, lsblk shows that the new device already exists:
[root@SZ × × ×-2] # lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda             8:0    0 278.5G  0 disk
├─sda1          8:1    0     1G  0 part /boot
└─sda2          8:2    0 277.5G  0 part
  ├─cl-root   253:0    0    50G  0 lvm  /
  ├─cl-swap   253:1    0     4G  0 lvm  [SWAP]
  └─cl-home   253:2    0 223.5G  0 lvm  /home
sdb             8:16   0   2.2T  0 disk
└─sdb1          8:17   0   2.2T  0 part /data
sdc             8:32   0 557.9G  0 disk
Note: sdc is the new disk.
Delete volume group
Get all the VD information through the command storcli64 /c0/vall show
[root@SZ × ×-2] # storcli64 /c0/vall show
CLI Version = 007.0415.0000.0000 Feb 13, 2018
Operating system = Linux 3.10.0-862.11.6.el7.x86_64
Controller = 0
Status = Success
Description = None
Virtual Drives:
=
DG/VD TYPE  State Access Consist Cache Cac sCC       Size Name
0/0   RAID1 Optl  RW     No      RWBD  -   ON  278.464 GB
1/1   RAID5 Optl  RW     Yes     RWBD  -   ON  2.178 TB
2/2   RAID0 Optl  RW     Yes     RWBD  -   ON  557.861 GB
Cac=CacheCade | Rec=Recovery | OfLn=OffLine | Pdgd=Partially Degraded | Dgrd=Degraded
Optl=Optimal | RO=Read Only | RW=Read Write | HD=Hidden | TRANS=TransportReady | B=Blocked |
Consist=Consistent | R=Read Ahead Always | NR=No Read Ahead | WB=WriteBack |
AWB=Always WriteBack | WT=WriteThrough | C=Cached IO | D=Direct IO | sCC=Scheduled
Check Consistency
If you want to delete VD 2 at this point, first confirm which system device corresponds to VD 2. Here it is sdc (and any partitions on it, such as sdc1); make sure the device is not mounted before deleting.
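A minimal pre-delete check, assuming the device is /dev/sdc as shown by lsblk above:
lsblk /dev/sdc      # confirm nothing under sdc shows a MOUNTPOINT
umount /dev/sdc1    # only needed if a partition such as sdc1 exists and is mounted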
Delete VD command: storcli64 /c0/v2 del force
After deletion, the VD is gone:
[root@SZ × ×-2] # storcli64 /c0/vall show
CLI Version = 007.0415.0000.0000 Feb 13, 2018
Operating system = Linux 3.10.0-862.11.6.el7.x86_64
Controller = 0
Status = Success
Description = None
Virtual Drives:
=
DG/VD TYPE  State Access Consist Cache Cac sCC       Size Name
0/0   RAID1 Optl  RW     No      RWBD  -   ON  278.464 GB
1/1   RAID5 Optl  RW     Yes     RWBD  -   ON  2.178 TB
Cac=CacheCade | Rec=Recovery | OfLn=OffLine | Pdgd=Partially Degraded | Dgrd=Degraded
Optl=Optimal | RO=Read Only | RW=Read Write | HD=Hidden | TRANS=TransportReady | B=Blocked |
Consist=Consistent | R=Read Ahead Always | NR=No Read Ahead | WB=WriteBack |
AWB=Always WriteBack | WT=WriteThrough | C=Cached IO | D=Direct IO | sCC=Scheduled
Check Consistency
Modify VD properties
View the properties of the VD
[root@SZ × ×-2] # storcli64 /c0/vall show
CLI Version = 007.0415.0000.0000 Feb 13, 2018
Operating system = Linux 3.10.0-862.11.6.el7.x86_64
Controller = 0
Status = Success
Description = None
Virtual Drives:
=
DG/VD TYPE  State Access Consist Cache Cac sCC       Size Name
0/0   RAID1 Optl  RW     No      RWBD  -   ON  278.464 GB
1/1   RAID5 Optl  RW     Yes     RWBD  -   ON  2.178 TB
2/2   RAID0 Optl  RW     Yes     RWBD  -   ON  557.861 GB
Cac=CacheCade | Rec=Recovery | OfLn=OffLine | Pdgd=Partially Degraded | Dgrd=Degraded
Optl=Optimal | RO=Read Only | RW=Read Write | HD=Hidden | TRANS=TransportReady | B=Blocked |
Consist=Consistent | R=Read Ahead Always | NR=No Read Ahead | WB=WriteBack |
AWB=Always WriteBack | WT=WriteThrough | C=Cached IO | D=Direct IO | sCC=Scheduled
Check Consistency
The Cache attribute of VD 2 is RWBD, which, per the legend under the output, means Read Ahead, WriteBack, Direct IO.
1. Change the cache policy of VD2 to WriteThrough mode:
storcli64 /c0/v2 set wrcache=wt
At this point, if you look at the status of v2, the Cache column will be RWTD.
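To verify the change, the VD can be displayed on its own, using the same /c0/v2 addressing as the set command:
storcli64 /c0/v2 show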
Note:
The help for set can be viewed with:
storcli64 /c0/v2 set help
[root@SZ × ×-2] # storcli64 /c0/v2 set help
Storage Command Line Tool Ver 007.0415.0000.0000 Feb 13, 2018
(C) Copyright 2018, AVAGO Technologies, All Rights Reserved.
storcli /cx/vx set ssdcaching=on|off
storcli /cx/vx set hidden=on|off
storcli /cx/vx set fshinting=
storcli /cx/vx set emulationType=0|1|2
storcli /cx/vx set cbsize=0|1|2 cbmode=0|1|2|3|4|7
storcli /cx/vx set wrcache=WT|WB|AWB
storcli /cx/vx set rdcache=RA|NoRA
storcli /cx/vx set iopolicy=Cached|Direct
storcli /cx/vx set accesspolicy=RW|RO|Blocked|RmvBlkd
storcli /cx/vx set pdcache=On|Off|Default
storcli /cx/vx set name=
storcli /cx/vx set HostAccess=ExclusiveAccess|SharedAccess
storcli /cx/vx set ds=Default|Auto|None|Max|MaxNoCache
storcli /cx/vx set autobgi=On|Off
storcli /cx/vx set pi=Off
storcli /cx/vx set bootdrive=
You can set various configuration items
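For example, the name option listed above can be used to label a VD (the name value here is purely illustrative):
storcli64 /c0/v2 set name=ceph_osd7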
2. Change VD 2's read-ahead cache policy to NR (No Read Ahead) mode:
storcli64 /c0/v2 set rdcache=NoRA
At this point, if you look at the status of v2, the Cache column will be NRWTD.
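To return to the original RWBD policy, the two settings can simply be reverted, e.g.:
storcli64 /c0/v2 set wrcache=wb
storcli64 /c0/v2 set rdcache=ra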
Set a hard disk to JBOD mode online
1. Verify that the RAID card supports JBOD mode and enable JBOD mode:
storcli64 /c0 show all | grep -i jbod
[root@SZ × ×-2] # storcli64 /c0 show all | grep -i jbod
Support JBOD = Yes
Support SecurityonJBOD = No
Support JBOD Write cache = No
Enable JBOD = No
Note:
Support JBOD = Yes means the RAID card supports JBOD mode.
However, Enable JBOD = No means JBOD mode is not currently enabled on the RAID card, so it needs to be enabled first.
[root@SZ × ×-2] # storcli64 /c0 set jbod=on
CLI Version = 007.0415.0000.0000 Feb 13, 2018
Operating system = Linux 3.10.0-862.11.6.el7.x86_64
Controller = 0
Status = Success
Description = None
Controller Properties:
=
-
Ctrl_Prop Value
-
JBOD ON
-
[root@SZ × ×-2] # storcli64 /c0 show all | grep -i jbod
Support JBOD = Yes
Support SecurityonJBOD = No
Support JBOD Write cache = No
Enable JBOD = Yes
Enable JBOD = Yes means JBOD mode is now enabled.
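Conversely, controller-level JBOD can be turned off again with the same option (a sketch, not run on this server):
storcli64 /c0 set jbod=off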
2. Set the specified device to JBOD mode:
storcli64 /c0/e252/s7 set jbod
[root@SZ × ×-2] # storcli64 /c0/e252/s7 set jbod
CLI Version = 007.0415.0000.0000 Feb 13, 2018
Operating system = Linux 3.10.0-862.11.6.el7.x86_64
Controller = 0
Status = Success
Description = Set Drive JBOD Succeeded.
[root@SZ × ×-2] # storcli64 /c0/e252/s7 show
CLI Version = 007.0415.0000.0000 Feb 13, 2018
Operating system = Linux 3.10.0-862.11.6.el7.x86_64
Controller = 0
Status = Success
Description = Show Drive Information Succeeded.
Drive Information:
=
EID:Slt DID State DG Size Intf Med SED PI SeSz Model Sp Type
252:7 24 JBOD - 557.861 GB SAS HDD N N 512B MBF2600RC U -
EID-Enclosure Device ID | Slt-Slot No. | DID-Device ID | DG-DriveGroup
DHS-Dedicated Hot Spare | UGood-Unconfigured Good | GHS-Global Hotspare
UBad-Unconfigured Bad | Onln-Online | Offln-Offline | Intf-Interface
Med-Media Type | SED-Self Encryptive Drive | PI-Protection Info
SeSz-Sector Size | Sp-Spun | U-Up | D-Down/PowerSave | T-Transition | F-Foreign
UGUnsp-Unsupported | UGShld-UnConfigured shielded | HSPShld-Hotspare shielded
CFShld-Configured shielded | Cpybck-CopyBack | CBShld-Copyback Shielded
Note:
At this point the device is already in jbod mode.
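A drive in JBOD mode is passed straight through to the operating system as a regular block device, so a quick sanity check (the device name will vary) is:
lsblk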
3. Change JBOD mode back to UGood (Unconfigured Good) mode:
If you want to undo the device's JBOD mode: storcli64 /c0/e252/s7 set good force
[root@SZ × ×-2] # storcli64 /c0/e252/s7 set good force
CLI Version = 007.0415.0000.0000 Feb 13, 2018
Operating system = Linux 3.10.0-862.11.6.el7.x86_64
Controller = 0
Status = Success
Description = Set Drive Good Succeeded.
[root@SZ × ×-2] # storcli64 /c0/e252/s7 show
CLI Version = 007.0415.0000.0000 Feb 13, 2018
Operating system = Linux 3.10.0-862.11.6.el7.x86_64
Controller = 0
Status = Success
Description = Show Drive Information Succeeded.
Drive Information:
=
EID:Slt DID State DG Size Intf Med SED PI SeSz Model Sp Type
252:7 24 UGood - 557.861 GB SAS HDD N N 512B MBF2600RC U -
EID-Enclosure Device ID | Slt-Slot No. | DID-Device ID | DG-DriveGroup
DHS-Dedicated Hot Spare | UGood-Unconfigured Good | GHS-Global Hotspare
UBad-Unconfigured Bad | Onln-Online | Offln-Offline | Intf-Interface
Med-Media Type | SED-Self Encryptive Drive | PI-Protection Info
SeSz-Sector Size | Sp-Spun | U-Up | D-Down/PowerSave | T-Transition | F-Foreign
UGUnsp-Unsupported | UGShld-UnConfigured shielded | HSPShld-Hotspare shielded
CFShld-Configured shielded | Cpybck-CopyBack | CBShld-Copyback Shielded
Note:
The device changes back to the UGood state and the raid volume group can be reconfigured.
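From the UGood state, the drive can be put back into a RAID volume group exactly as in the earlier section, for example:
storcli64 /c0 add vd r0 size=all drives=252:7 wb direct strip=128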