Oracle's official definition of ACFS:
Oracle Automatic Storage Management Cluster File System (Oracle ACFS) is a multi-platform, scalable file system, and storage management technology that extends Oracle Automatic Storage Management (Oracle ASM) functionality to support customer files maintained outside of Oracle Database. Oracle ACFS supports many database and application files, including executables, database trace files, database alert logs, application reports, BFILEs, and configuration files. Other supported files are video, audio, text, images, engineering drawings, and other general-purpose application file data.
The following introduces the use of ACFS in an 11g RAC environment. The operating system is OEL 6.5 and the database version is Oracle 11.2.0.4.
1. View the ACFS-related services
Check whether the acfs/advm drivers are loaded
[root@jxcw1 bin]# lsmod | grep oracle
[root@jxcw1 bin]#
[root@jxcw2 bin]# lsmod | grep oracle
[root@jxcw2 bin]#
View cluster resources
[root@jxcw1 ~]# su - grid -c crs_stat | grep acfs
[root@jxcw1 ~]#
[root@jxcw2 ~]# su - grid -c crs_stat | grep acfs
[root@jxcw2 ~]#
Use acfsload to load the driver
[root@jxcw1 bin]# ./acfsload start -s
ACFS-9459: ADVM/ACFS is not supported on this OS version: '3.8.13-16.2.1.el6uek.x86_64'
[root@jxcw1 bin]# uname -a
Linux jxcw1 3.8.13-16.2.1.el6uek.x86_64 #1 SMP Thu Nov 7 17:01:44 PST 2013 x86_64 x86_64 x86_64 GNU/Linux
[root@jxcw1 bin]#
As the error message shows, this combination of OEL 6.5 (UEK kernel 3.8.13) and the base 11.2.0.4 software is not supported. After applying patch set update 11.2.0.4.7, the driver can be loaded, but it is not loaded automatically at system startup. This is handled with an init script as follows:
[root@jxcw1 ~]# cat /etc/init.d/acfsload
#!/bin/sh
# chkconfig: 2345 30 21
# description: Load Oracle ASM volume driver on system startup
ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_HOME
$ORACLE_HOME/bin/acfsload start -s
[root@jxcw1 ~]# chkconfig --add acfsload
[root@jxcw1 ~]# chkconfig --list | grep acfsload
acfsload        0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@jxcw1 ~]#
[root@jxcw1 bin]# ./acfsload start -s
[root@jxcw1 bin]# lsmod | grep oracle
oracleacfs            877320  0
oracleadvm            221760  0
oracleoks             276880  2 oracleacfs,oracleadvm
[root@jxcw2 bin]# lsmod | grep oracle
oracleacfs            877192  0
oracleadvm            221504  0
oracleoks             277008  2 oracleacfs,oracleadvm
[root@jxcw2 bin] #
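For reference, the driver state can also be queried with the acfsdriverstate utility in the Grid Infrastructure home; a quick check, assuming the same Grid home bin directory used above:
[root@jxcw1 bin]# ./acfsdriverstate loaded
[root@jxcw1 bin]# ./acfsdriverstate version
[root@jxcw1 bin]# ./acfsdriverstate supported
Here loaded reports whether the ADVM/ACFS kernel modules are currently loaded, version reports the installed driver version, and supported reports whether the running kernel is a supported combination.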
Add the ora.registry.acfs resource
[root@jxcw1 ~]# /u01/app/11.2.0/grid/bin/crsctl add type ora.registry.acfs.type -basetype ora.local_resource.type -file /u01/app/11.2.0/grid/crs/template/registry.acfs.type
[root@jxcw1 ~]# /u01/app/11.2.0/grid/bin/crsctl add resource ora.registry.acfs -attr ACL=\'owner:root:rwx,pgrp:oinstall:r-x,other::r--\' -type ora.registry.acfs.type -f
[root@jxcw1 ~]#
Check to see if the addition is successful
[root@jxcw1 ~]# su - grid -c crs_stat | grep acfs
NAME=ora.registry.acfs
TYPE=ora.registry.acfs.type
[root@jxcw1 ~]#
[root@jxcw2 ~]# su - grid -c crs_stat | grep acfs
NAME=ora.registry.acfs
TYPE=ora.registry.acfs.type
[root@jxcw2 ~]#
View the status of added resources.
[grid@jxcw1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCH.dg
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.CRS.dg
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.DATA.dg
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.LISTENER.lsnr
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.asm
               ONLINE  ONLINE       jxcw1                    Started
               ONLINE  ONLINE       jxcw2                    Started
ora.gsd
               OFFLINE OFFLINE      jxcw1
               OFFLINE OFFLINE      jxcw2
ora.net1.network
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.ons
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.registry.acfs
               OFFLINE OFFLINE      jxcw1
               OFFLINE OFFLINE      jxcw2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       jxcw2
ora.cvu
      1        ONLINE  ONLINE       jxcw2
ora.jxcw.db
      1        ONLINE  ONLINE       jxcw1                    Open
      2        ONLINE  ONLINE       jxcw2                    Open
ora.jxcw1.vip
      1        ONLINE  ONLINE       jxcw1
ora.jxcw2.vip
      1        ONLINE  ONLINE       jxcw2
ora.oc4j
      1        ONLINE  ONLINE       jxcw2
ora.scan1.vip
      1        ONLINE  ONLINE       jxcw2
[grid@jxcw1 ~]$
The added ora.registry.acfs resource defaults to offline and needs to be started manually.
[grid@jxcw1 ~]$ crsctl start resource ora.registry.acfs -n jxcw1
CRS-2672: Attempting to start 'ora.registry.acfs' on 'jxcw1'
CRS-2676: Start of 'ora.registry.acfs' on 'jxcw1' succeeded
[grid@jxcw1 ~]$ crsctl start resource ora.registry.acfs -n jxcw2
CRS-2672: Attempting to start 'ora.registry.acfs' on 'jxcw2'
CRS-2676: Start of 'ora.registry.acfs' on 'jxcw2' succeeded
After starting the resource, the status check below shows that it is now online.
[grid@jxcw1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCH.dg
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.CRS.dg
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.DATA.dg
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.LISTENER.lsnr
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.asm
               ONLINE  ONLINE       jxcw1                    Started
               ONLINE  ONLINE       jxcw2                    Started
ora.gsd
               OFFLINE OFFLINE      jxcw1
               OFFLINE OFFLINE      jxcw2
ora.net1.network
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.ons
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.registry.acfs
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       jxcw2
ora.cvu
      1        ONLINE  ONLINE       jxcw2
ora.jxcw.db
      1        ONLINE  ONLINE       jxcw1                    Open
      2        ONLINE  ONLINE       jxcw2                    Open
ora.jxcw1.vip
      1        ONLINE  ONLINE       jxcw1
ora.jxcw2.vip
      1        ONLINE  ONLINE       jxcw2
ora.oc4j
      1        ONLINE  ONLINE       jxcw2
ora.scan1.vip
      1        ONLINE  ONLINE       jxcw2
[grid@jxcw1 ~]$
2. Use ASMCMD commands to create an ACFS
(1) Use the volcreate command to create a volume named testvol with a size of 30G
ASMCMD> volcreate -G data -s 30G testvol
(2) Use volinfo to view information about testvol
ASMCMD> volinfo -G data testvol
Diskgroup Name: DATA
Volume Name: TESTVOL
Volume Device: / dev/asm/testvol-347
State: ENABLED
Size (MB): 30720
Resize Unit (MB): 32
Redundancy: UNPROT
Stripe Columns: 4
Stripe Width (K): 128
Usage:
Mountpath:
ASMCMD >
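As an aside, the same ADVM volume could also be created with SQL in the ASM instance instead of ASMCMD; a minimal sketch, assuming the same disk group and size as above:
[grid@jxcw1 ~]$ sqlplus / as sysasm
SQL> alter diskgroup data add volume testvol size 30G;
SQL> alter diskgroup data enable volume testvol;
ALTER DISKGROUP ... ENABLE VOLUME / DISABLE VOLUME mirrors the asmcmd volenable/voldisable commands and is only needed if the volume is not already enabled, as the volinfo output above shows.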
(3) Use operating-system commands to check the device name; /dev/asm/testvol-347 is equivalent to a raw device.
[grid@jxcw1 ~]$ cd /dev/asm/
[grid@jxcw1 asm]$ ls -l
total 0
brwxrwx--- 1 root asmadmin 251, 177665 Oct 28 13:20 testvol-347
[grid@jxcw1 asm]$
(4) Use SQL to view volume information in the v$asm_volume view
[grid@jxcw1 asm]$ sqlplus / as sysasm
SQL*Plus: Release 11.2.0.4.0 Production on Fri Oct 28 13:22:33 2016
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
SQL> select volume_name, volume_device, size_mb from v$asm_volume;
VOLUME_NAME   VOLUME_DEVICE           SIZE_MB
TESTVOL       /dev/asm/testvol-347      30720
SQL>
(5) Format the volume and create the file system
[root@jxcw1 ~]# mkfs.acfs /dev/asm/testvol-347
mkfs.acfs: version                   = 11.2.0.4.0
mkfs.acfs: on-disk version           = 39.0
mkfs.acfs: volume                    = /dev/asm/testvol-347
mkfs.acfs: volume size               = 32212254720
mkfs.acfs: Format complete.
[root@jxcw1 ~]#
(6) Register the ACFS file system. Registration defines the mount point and only needs to be done once, on node 1.
[root@jxcw1 ~]# acfsutil registry -a /dev/asm/testvol-347 /u01/app/grid/acfsmounts/data_testvol/
acfsutil registry: mount point /u01/app/grid/acfsmounts/data_testvol successfully added to Oracle Registry
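For comparison, a file system that is not added to the ACFS mount registry could be mounted by hand as root on each node; a minimal sketch, assuming the device and mount point used above (with registration this is unnecessary, because the registry mounts it automatically):
[root@jxcw1 ~]# mkdir -p /u01/app/grid/acfsmounts/data_testvol
[root@jxcw1 ~]# mount -t acfs /dev/asm/testvol-347 /u01/app/grid/acfsmounts/data_testvol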
View file system information
[root@jxcw1 ~] # acfsutil info fs
/u01/app/grid/acfsmounts/data_testvol
ACFS Version: 11.2.0.4.0
Flags: MountPoint,Available
Mount time: Fri Oct 28 13:37:27 2016
Volumes: 1
Total size: 32212254720
Total free: 32068386816
Primary volume: /dev/asm/testvol-347
Label:
Flags: Primary,Available,ADVM
On-disk version: 39.0
Allocation unit: 4096
Major, minor: 251, 177665
Size: 32212254720
Free: 32068386816
ADVM diskgroup DATA
ADVM resize increment: 33554432
ADVM redundancy: unprotected
ADVM stripe columns: 4
ADVM stripe width: 131072
Number of snapshots: 0
Snapshot space usage: 0
Replication status: DISABLED
[root@jxcw1 ~] #
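The registry entry created above can also be listed on its own; acfsutil registry run without options prints the mount points recorded in the Oracle Registry (a quick check, assuming the registration above):
[root@jxcw1 ~]# acfsutil registry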
(7) After successful registration, the file system is mounted automatically. Checking both nodes:
[root@jxcw1 ~]# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root   532G   34G  476G   7% /
tmpfs                           16G  255M   16G   2% /dev/shm
/dev/sda1                      477M   55M  397M  13% /boot
/dev/asm/testvol-347            30G  138M   30G   1% /u01/app/grid/acfsmounts/data_testvol
[root@jxcw1 ~]#
[root@jxcw2 ~]# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root   533G   24G  487G   5% /
tmpfs                           16G  254M   16G   2% /dev/shm
/dev/sda1                      477M   55M  397M  13% /boot
/dev/asm/testvol-347            30G  138M   30G   1% /u01/app/grid/acfsmounts/data_testvol
[root@jxcw2 ~]#
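Since ACFS supports online resizing, the mounted file system could later be grown without unmounting it; a hedged sketch, assuming free space is available in the DATA disk group and using the mount point above:
[root@jxcw1 ~]# acfsutil size +5G /u01/app/grid/acfsmounts/data_testvol
The underlying ADVM volume is extended automatically along with the file system.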
The same status can also be seen in the graphical interface.
Check the status of cluster resources again
[grid@jxcw1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCH.dg
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.CRS.dg
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.DATA.dg
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.LISTENER.lsnr
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.asm
               ONLINE  ONLINE       jxcw1                    Started
               ONLINE  ONLINE       jxcw2                    Started
ora.gsd
               OFFLINE OFFLINE      jxcw1
               OFFLINE OFFLINE      jxcw2
ora.net1.network
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.ons
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.registry.acfs
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       jxcw2
ora.cvu
      1        ONLINE  ONLINE       jxcw2
ora.jxcw.db
      1        ONLINE  ONLINE       jxcw1                    Open
      2        ONLINE  ONLINE       jxcw2                    Open
ora.jxcw1.vip
      1        ONLINE  ONLINE       jxcw1
ora.jxcw2.vip
      1        ONLINE  ONLINE       jxcw2
ora.oc4j
      1        ONLINE  ONLINE       jxcw2
ora.scan1.vip
      1        ONLINE  ONLINE       jxcw2
[grid@jxcw1 ~]$
3. Create an ACFS with the ASMCA graphical interface
Switch to the grid user and run asmca to launch the graphical configuration tool.
After the GUI starts, the ASM Instances tab shows the ADVM driver status as Loaded. If the ACFS/ADVM drivers are not loaded, the status shows Installed and the Volumes and ASM Cluster File Systems tabs are grayed out and cannot be used. Click the Volumes tab to create a volume.
The figure shows the volume created earlier from the command line. Click Create to add another one.
Specify jxcwvol as the volume name and 25G as the size, then click OK.
When creation succeeds, click OK to close the dialog; the new volume is then displayed as follows:
Click the ASM Cluster File Systems tab.
Click Create, select the volume just created, choose the Database Home File System option, and specify the mount point shown in the figure. With this option selected, the file system is registered as a cluster resource and managed by the cluster.
Click OK to create the ACFS file system; ASMCA then shows the following commands to be executed:
Create ACFS Command:
/sbin/mkfs -t acfs /dev/asm/jxcwvol-347
Following commands should be run as privileged user:
/u01/app/11.2.0/grid/bin/srvctl add filesystem -d /dev/asm/jxcwvol-347 -g 'DATA' -v jxcwvol -m /u01/app/grid/acfsmounts/data_jxcwvol -u grid
/u01/app/11.2.0/grid/bin/srvctl start filesystem -d /dev/asm/jxcwvol-347
chown grid:oinstall /u01/app/grid/acfsmounts/data_jxcwvol
chmod 775 /u01/app/grid/acfsmounts/data_jxcwvol
When prompted, run the generated script as root on the node where the ASMCA graphical interface was started, as follows:
[root@jxcw1 ~]# /u01/app/grid/cfgtoollogs/asmca/scripts/acfs_script.sh
ACFS file system is running on jxcw1,jxcw2
[root@jxcw1 ~]#
Note: the content of the script is as follows:
[root@jxcw1 ~]# vi /u01/app/grid/cfgtoollogs/asmca/scripts/acfs_script.sh
#!/bin/sh
/u01/app/11.2.0/grid/bin/srvctl stop filesystem -d /dev/asm/jxcwvol-347
/u01/app/11.2.0/grid/bin/srvctl remove filesystem -d /dev/asm/jxcwvol-347
/u01/app/11.2.0/grid/bin/srvctl add filesystem -d /dev/asm/jxcwvol-347 -g 'DATA' -v JXCWVOL -m /u01/app/grid/acfsmounts/data_jxcwvol -u grid
if [ $? = "0" -o $? = "2" ]; then
    /u01/app/11.2.0/grid/bin/srvctl start filesystem -d /dev/asm/jxcwvol-347
    if [ $? = "0" ]; then
        chown grid:oinstall /u01/app/grid/acfsmounts/data_jxcwvol
        chmod 775 /u01/app/grid/acfsmounts/data_jxcwvol
        /u01/app/11.2.0/grid/bin/srvctl status filesystem -d /dev/asm/jxcwvol-347
        exit 0
    fi
    /u01/app/11.2.0/grid/bin/srvctl status filesystem -d /dev/asm/jxcwvol-347
fi
After the script completes successfully, check each node separately; the new ACFS file system is mounted on both.
[root@jxcw1 ~]# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root   532G   34G  476G   7% /
tmpfs                           16G  258M   16G   2% /dev/shm
/dev/sda1                      477M   55M  397M  13% /boot
/dev/asm/testvol-347            30G  138M   30G   1% /u01/app/grid/acfsmounts/data_testvol
/dev/asm/jxcwvol-347            25G  128M   25G   1% /u01/app/grid/acfsmounts/data_jxcwvol
[root@jxcw1 ~]#
[root@jxcw2 ~]# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root   533G   24G  487G   5% /
tmpfs                           16G  255M   16G   2% /dev/shm
/dev/sda1                      477M   55M  397M  13% /boot
/dev/asm/testvol-347            30G  138M   30G   1% /u01/app/grid/acfsmounts/data_testvol
/dev/asm/jxcwvol-347            25G  128M   25G   1% /u01/app/grid/acfsmounts/data_jxcwvol
[root@jxcw2 ~]#
Looking at the cluster resource status again, the newly created ACFS file system is added to the cluster as a resource.
[grid@jxcw1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCH.dg
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.CRS.dg
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.DATA.dg
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.LISTENER.lsnr
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.asm
               ONLINE  ONLINE       jxcw1                    Started
               ONLINE  ONLINE       jxcw2                    Started
ora.data.jxcwvol.acfs
               ONLINE  ONLINE       jxcw1                    mounted on /u01/app/grid/acfsmounts/data_jxcwvol
               ONLINE  ONLINE       jxcw2                    mounted on /u01/app/grid/acfsmounts/data_jxcwvol
ora.gsd
               OFFLINE OFFLINE      jxcw1
               OFFLINE OFFLINE      jxcw2
ora.net1.network
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.ons
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.registry.acfs
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       jxcw2
ora.cvu
      1        ONLINE  ONLINE       jxcw2
ora.jxcw.db
      1        ONLINE  ONLINE       jxcw1                    Open
      2        ONLINE  ONLINE       jxcw2                    Open
ora.jxcw1.vip
      1        ONLINE  ONLINE       jxcw1
ora.jxcw2.vip
      1        ONLINE  ONLINE       jxcw2
ora.oc4j
      1        ONLINE  ONLINE       jxcw2
ora.scan1.vip
      1        ONLINE  ONLINE       jxcw2
[grid@jxcw1 ~]$
Check the registration information again as follows:
[root@jxcw1 ~] # acfsutil info fs
/u01/app/grid/acfsmounts/data_testvol
ACFS Version: 11.2.0.4.0
Flags: MountPoint,Available
Mount time: Fri Oct 28 13:37:27 2016
Volumes: 1
Total size: 32212254720
Total free: 32068386816
Primary volume: /dev/asm/testvol-347
Label:
Flags: Primary,Available,ADVM
On-disk version: 39.0
Allocation unit: 4096
Major, minor: 251, 177665
Size: 32212254720
Free: 32068386816
ADVM diskgroup DATA
ADVM resize increment: 33554432
ADVM redundancy: unprotected
ADVM stripe columns: 4
ADVM stripe width: 131072
Number of snapshots: 0
Snapshot space usage: 0
Replication status: DISABLED
/u01/app/grid/acfsmounts/data_jxcwvol
ACFS Version: 11.2.0.4.0
Flags: MountPoint,Available
Mount time: Fri Oct 28 13:45:44 2016
Volumes: 1
Total size: 26843545600
Total free: 26710327296
Primary volume: /dev/asm/jxcwvol-347
Label:
Flags: Primary,Available,ADVM
On-disk version: 39.0
Allocation unit: 4096
Major, minor: 251, 177666
Size: 26843545600
Free: 26710327296
ADVM diskgroup DATA
ADVM resize increment: 33554432
ADVM redundancy: unprotected
ADVM stripe columns: 4
ADVM stripe width: 131072
Number of snapshots: 0
Snapshot space usage: 0
Replication status: DISABLED
[root@jxcw1 ~] #
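The "Number of snapshots" field above points to another ACFS capability: point-in-time snapshots, stored under the .ACFS/snaps directory of the mount point. A minimal sketch, using the testvol mount point above and a hypothetical snapshot name snap01:
[root@jxcw1 ~]# acfsutil snap create snap01 /u01/app/grid/acfsmounts/data_testvol
[root@jxcw1 ~]# acfsutil snap info /u01/app/grid/acfsmounts/data_testvol
[root@jxcw1 ~]# acfsutil snap delete snap01 /u01/app/grid/acfsmounts/data_testvol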
4. Delete ACFS file system
(1) View the current status
[root@jxcw1 ~]# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root   532G   34G  476G   7% /
tmpfs                           16G  256M   16G   2% /dev/shm
/dev/sda1                      477M   55M  397M  13% /boot
/dev/asm/jxcwvol-347            30G  138M   30G   1% /u01/app/grid/acfsmounts/data_jxcwvol
/dev/asm/testvol-347            15G  107M   15G   1% /u01/app/grid/acfsmounts/data_testvol
[root@jxcw1 ~]#
[root@jxcw2 ~]# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root   533G   24G  487G   5% /
tmpfs                           16G  254M   16G   2% /dev/shm
/dev/sda1                      477M   55M  397M  13% /boot
/dev/asm/jxcwvol-347            30G  138M   30G   1% /u01/app/grid/acfsmounts/data_jxcwvol
/dev/asm/testvol-347            15G  107M   15G   1% /u01/app/grid/acfsmounts/data_testvol
[root@jxcw2 ~]#
[root@jxcw1 ~]# su - grid
[grid@jxcw1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCH.dg
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.CRS.dg
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.DATA.dg
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.LISTENER.lsnr
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.asm
               ONLINE  ONLINE       jxcw1                    Started
               ONLINE  ONLINE       jxcw2                    Started
ora.data.jxcwvol.acfs
               ONLINE  ONLINE       jxcw1                    mounted on /u01/app/grid/acfsmounts/data_jxcwvol
               ONLINE  ONLINE       jxcw2                    mounted on /u01/app/grid/acfsmounts/data_jxcwvol
ora.gsd
               OFFLINE OFFLINE      jxcw1
               OFFLINE OFFLINE      jxcw2
ora.net1.network
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.ons
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.registry.acfs
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       jxcw1
ora.cvu
      1        ONLINE  ONLINE       jxcw2
ora.jxcw.db
      1        ONLINE  ONLINE       jxcw1                    Open
      2        ONLINE  ONLINE       jxcw2                    Open
ora.jxcw1.vip
      1        ONLINE  ONLINE       jxcw1
ora.jxcw2.vip
      1        ONLINE  ONLINE       jxcw2
ora.oc4j
      1        ONLINE  ONLINE       jxcw2
ora.scan1.vip
      1        ONLINE  ONLINE       jxcw1
[grid@jxcw1 ~]$
(2) Unmount the file systems
Node 1
[root@jxcw1 ~]# umount /dev/asm/testvol-347
[root@jxcw1 ~]# umount /dev/asm/jxcwvol-347
[root@jxcw1 ~]# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root   532G   34G  476G   7% /
tmpfs                           16G  257M   16G   2% /dev/shm
/dev/sda1                      477M   55M  397M  13% /boot
[root@jxcw1 ~]#
Node 2
[root@jxcw2 ~]# umount /dev/asm/testvol-347
[root@jxcw2 ~]# umount /dev/asm/jxcwvol-347
[root@jxcw2 ~]# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root   533G   24G  487G   5% /
tmpfs                           16G  254M   16G   2% /dev/shm
/dev/sda1                      477M   55M  397M  13% /boot
[root@jxcw2 ~]#
Check the resource status again
[grid@jxcw1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCH.dg
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.CRS.dg
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.DATA.dg
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.LISTENER.lsnr
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.asm
               ONLINE  ONLINE       jxcw1                    Started
               ONLINE  ONLINE       jxcw2                    Started
ora.data.jxcwvol.acfs
               OFFLINE OFFLINE      jxcw1                    admin unmounted /u01/app/grid/acfsmounts/data_jxcwvol
               OFFLINE OFFLINE      jxcw2                    admin unmounted /u01/app/grid/acfsmounts/data_jxcwvol
ora.gsd
               OFFLINE OFFLINE      jxcw1
               OFFLINE OFFLINE      jxcw2
ora.net1.network
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.ons
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.registry.acfs
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       jxcw2
ora.cvu
      1        ONLINE  ONLINE       jxcw1
ora.jxcw.db
      1        ONLINE  ONLINE       jxcw1                    Open
      2        ONLINE  ONLINE       jxcw2                    Open
ora.jxcw1.vip
      1        ONLINE  ONLINE       jxcw1
ora.jxcw2.vip
      1        ONLINE  ONLINE       jxcw2
ora.oc4j
      1        ONLINE  ONLINE       jxcw1
ora.scan1.vip
      1        ONLINE  ONLINE       jxcw2
[grid@jxcw1 ~]$
For the jxcwvol volume created through the graphical interface, the ora.data.jxcwvol.acfs resource goes offline automatically once the file system is unmounted, and the volume shows an unmounted status; testvol is associated with the ora.registry.acfs resource, which remains in a normal state.
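The state of the ASMCA-registered file system can also be confirmed with srvctl; a quick check, assuming the device name used above:
[grid@jxcw1 ~]$ srvctl status filesystem -d /dev/asm/jxcwvol-347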
(3) Stop the ora.registry.acfs resource
[grid@jxcw1 ~]$ crsctl stop resource ora.registry.acfs -n jxcw1
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'jxcw1'
CRS-2677: Stop of 'ora.registry.acfs' on 'jxcw1' succeeded
[grid@jxcw1 ~]$ crsctl stop resource ora.registry.acfs -n jxcw2
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'jxcw2'
CRS-2677: Stop of 'ora.registry.acfs' on 'jxcw2' succeeded
[grid@jxcw1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCH.dg
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.CRS.dg
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.DATA.dg
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.LISTENER.lsnr
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.asm
               ONLINE  ONLINE       jxcw1                    Started
               ONLINE  ONLINE       jxcw2                    Started
ora.data.jxcwvol.acfs
               OFFLINE OFFLINE      jxcw1                    admin unmounted /u01/app/grid/acfsmounts/data_jxcwvol
               OFFLINE OFFLINE      jxcw2                    admin unmounted /u01/app/grid/acfsmounts/data_jxcwvol
ora.gsd
               OFFLINE OFFLINE      jxcw1
               OFFLINE OFFLINE      jxcw2
ora.net1.network
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.ons
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.registry.acfs
               OFFLINE OFFLINE      jxcw1
               OFFLINE OFFLINE      jxcw2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       jxcw2
ora.cvu
      1        ONLINE  ONLINE       jxcw1
ora.jxcw.db
      1        ONLINE  ONLINE       jxcw1                    Open
      2        ONLINE  ONLINE       jxcw2                    Open
ora.jxcw1.vip
      1        ONLINE  ONLINE       jxcw1
ora.jxcw2.vip
      1        ONLINE  ONLINE       jxcw2
ora.oc4j
      1        ONLINE  ONLINE       jxcw1
ora.scan1.vip
      1        ONLINE  ONLINE       jxcw2
[grid@jxcw1 ~]$
(4) Delete registration information
Execute on only one node
[root@jxcw1 ~]# acfsutil registry -d /u01/app/grid/acfsmounts/data_testvol/
acfsutil registry: successfully removed ACFS mount point /u01/app/grid/acfsmounts/data_testvol from Oracle Registry
[root@jxcw1 ~]#
The ora.data.jxcwvol.acfs resource was already taken offline in the previous step, so it is normal for the following attempt to remove its registration information to report an error.
[root@jxcw1 ~]# acfsutil registry -d /u01/app/grid/acfsmounts/data_jxcwvol/
acfsutil registry: ACFS-03143: The specified mount point does not exist and therefore cannot be deleted.
[root@jxcw1 ~]#
(5) Delete the ACFS file systems
Deletion is performed on only one node
[root@jxcw1 ~]# acfsutil rmfs /dev/asm/testvol-347
[root@jxcw1 ~]# acfsutil rmfs /dev/asm/jxcwvol-347
(6) Disable the volumes
This needs to be executed on both nodes.
Disable on node 1
[grid@jxcw1 ~]$ asmcmd
ASMCMD> voldisable -G data testvol
ASMCMD> voldisable -G data jxcwvol
ASMCMD>
Disable on node 2
[grid@jxcw2 ~]$ asmcmd
ASMCMD> voldisable -G data testvol
ASMCMD> voldisable -G data jxcwvol
ASMCMD> exit
[grid@jxcw2 ~]$
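Optionally, the disabled state can be verified before deleting the volumes; volinfo -a lists all volumes (a quick check, run on either node):
[grid@jxcw1 ~]$ asmcmd volinfo -a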
(7) Delete the volumes
Executed on only one node.
[grid@jxcw1 ~]$ asmcmd
ASMCMD> ls
ASMCMD> voldelete -G data testvol
ASMCMD> voldelete -G data jxcwvol
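Equivalently, the volumes could be dropped with SQL in the ASM instance; a minimal sketch mirroring the voldelete commands above:
[grid@jxcw1 ~]$ sqlplus / as sysasm
SQL> alter diskgroup data drop volume testvol;
SQL> alter diskgroup data drop volume jxcwvol;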
(8) Delete resource object
[root@jxcw1 ~]# /u01/app/11.2.0/grid/bin/srvctl remove filesystem -d /dev/asm/jxcwvol-347
[grid@jxcw1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCH.dg
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.CRS.dg
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.DATA.dg
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.LISTENER.lsnr
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.asm
               ONLINE  ONLINE       jxcw1                    Started
               ONLINE  ONLINE       jxcw2                    Started
ora.gsd
               OFFLINE OFFLINE      jxcw1
               OFFLINE OFFLINE      jxcw2
ora.net1.network
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.ons
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
ora.registry.acfs
               ONLINE  ONLINE       jxcw1
               ONLINE  ONLINE       jxcw2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       jxcw2
ora.cvu
      1        ONLINE  ONLINE       jxcw2
ora.jxcw.db
      1        ONLINE  ONLINE       jxcw1                    Open
      2        ONLINE  ONLINE       jxcw2                    Open
ora.jxcw1.vip
      1        ONLINE  ONLINE       jxcw1
ora.jxcw2.vip
      1        ONLINE  ONLINE       jxcw2
ora.oc4j
      1        ONLINE  ONLINE       jxcw2
ora.scan1.vip
      1        ONLINE  ONLINE       jxcw2
[grid@jxcw1 ~]$