

Summary of New Features in Oracle 11g ASM

2025-01-22 Update From: SLTechnology News&Howtos



I. ASM Fast Mirror Resync

1.1 Behavior without fast mirror resync

Whenever ASM cannot complete a write to an extent allocated to a disk, it takes the disk offline, provided that at least one mirrored copy of that extent can still be written on another disk (assuming the corresponding disk group uses ASM redundancy).

With Oracle Database 10g, ASM assumes that an offline disk contains only stale data, so it no longer reads from such a disk. Shortly after the disk goes offline, ASM uses the redundant extent copies to re-create the extents allocated to that disk on the remaining disks in the disk group, and then drops the offline disk from the disk group. This is a relatively expensive operation and may take hours to complete.

If the disk failure is only temporary (for example, a failure of a cable, host bus adapter, or controller, or a power outage affecting the disk), the disk must be added back after the temporary failure is fixed. However, adding the dropped disk back to the disk group requires moving all of those extents back onto the disk, further increasing the cost.

1.2 ASM Fast Mirror Resync

1.2.1 Overview

ASM fast mirror resync significantly reduces the time required to resynchronize a disk after a transient failure. If a disk goes offline because of a temporary failure, ASM tracks the extents that are modified during the outage. After the failure is fixed, ASM quickly resynchronizes only the extents that were affected. This feature assumes that the contents of the affected ASM disk have not been damaged or modified.

When an ASM disk path fails, if you have set the DISK_REPAIR_TIME attribute for the corresponding disk group, the ASM disk goes offline but is not dropped. The value of this attribute determines the length of outage that ASM tolerates; if the outage is repaired within this window, the disk can still be resynchronized afterward.

Note: the tracking mechanism uses one bit per modified extent, which keeps it very efficient.

1.2.2 Setting up ASM fast mirror resync

This feature is set per disk group. You can enable it after creating a disk group by using the ALTER DISKGROUP command, with a statement similar to the following:

ALTER DISKGROUP data SET ATTRIBUTE 'disk_repair_time' = '2D4H30M';

After the disk is repaired, run the SQL statement ALTER DISKGROUP ... ONLINE DISK. This statement brings the repaired disk back online and re-enables writes so that no new writes are lost. It also starts a procedure that copies all extents marked as stale from their redundant copies. You cannot use the ONLINE statement on a disk that has already been dropped.

You can view the current property values by querying the V$ASM_ATTRIBUTE view. By querying the REPAIR_TIMER column of V$ASM_DISK or V$ASM_DISK_IOSTAT, you can determine the time remaining before ASM deletes an offline disk. In addition, a row corresponding to the disk resynchronization operation appears in V$ASM_OPERATION, where the OPERATION column is set to SYNC.
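As an illustration, the repair countdown and any running resync operation can be monitored with queries along these lines (a minimal sketch; the view and column names are from the text above):

```sql
-- Remaining repair window for disks currently offline
SELECT group_number, disk_number, mode_status, repair_timer
FROM   v$asm_disk
WHERE  mode_status = 'OFFLINE';

-- Progress of a running resynchronization
SELECT group_number, operation, state, est_minutes
FROM   v$asm_operation
WHERE  operation = 'SYNC';
```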

For preventive maintenance, you can also manually take an ASM disk offline using the SQL statement ALTER DISKGROUP OFFLINE DISK. With this command you can specify a timer that overrides the timer defined at the disk group level. When the maintenance is complete, use the ALTER DISKGROUP ONLINE DISK statement to bring the disk back online.
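A preventive-maintenance cycle might therefore look like the following sketch (the disk group name data and disk name data_0001 are illustrative):

```sql
-- Take one disk offline for maintenance, allowing a 5-hour repair window
-- that overrides the disk group's DISK_REPAIR_TIME
ALTER DISKGROUP data OFFLINE DISK data_0001 DROP AFTER 5h;

-- ... perform the hardware maintenance ...

-- Bring the disk back online; only stale extents are resynchronized
ALTER DISKGROUP data ONLINE DISK data_0001;
```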

If you cannot repair a failure group that is offline, you can use the ALTER DISKGROUP ... DROP DISKS IN FAILGROUP command with the FORCE option to ensure that the data previously stored on those disks is reconstructed from its redundant copies and stored on the other disks in the same disk group.

Note: the timer runs only while the disk group is mounted. Also, changing the value of DISK_REPAIR_TIME does not affect disks that are already offline. The default DISK_REPAIR_TIME of 3.6 hours should be adequate for most environments.

II. ASM Preferred Mirror Read

2.1 Overview

When you configure ASM failure groups in Oracle Database 10g, ASM always reads the primary copy of a mirrored extent. It may be more efficient for a node to read from the failure group extent closest to it, even if that is a secondary copy. This is especially true in extended cluster configurations, where nodes are spread across several sites and reading from a local copy of an extent can improve performance.

With Oracle Database 11g, you can configure preferred mirror reads by using the new initialization parameter ASM_PREFERRED_READ_FAILURE_GROUPS to specify a list of failure group names. The disks in these failure groups become the preferred read disks, so each node can read from its local disks. This improves efficiency and performance and reduces network traffic. The setting of this parameter is instance-specific.

2.2 Setting

To configure this feature, set the new ASM_PREFERRED_READ_FAILURE_GROUPS initialization parameter. This parameter takes a string containing a comma-separated list of failure group names, each prefixed with its disk group name and a "." character. The parameter is dynamic and can be modified at any time with the ALTER SYSTEM command; it is valid only in ASM instances. In an extended cluster, the failure groups specified in this parameter should contain only disks that are local to the corresponding instance.
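For example, in a two-site extended cluster, each ASM instance might be pointed at its local failure group as sketched below (the disk group name data, failure group names fg1/fg2, and instance SIDs are illustrative):

```sql
-- On site 1 (instance +ASM1): prefer the local failure group fg1 of disk group data
ALTER SYSTEM SET asm_preferred_read_failure_groups = 'data.fg1' SID = '+ASM1';

-- On site 2 (instance +ASM2): prefer the local failure group fg2
ALTER SYSTEM SET asm_preferred_read_failure_groups = 'data.fg2' SID = '+ASM2';
```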

A new single-character column, PREFERRED_READ, has been added to the V$ASM_DISK view. If the failure group to which a disk belongs is a preferred read failure group, the value of this column is Y.

To diagnose performance problems specific to ASM preferred read failure groups, use the V$ASM_DISK_IOSTAT view.

This view shows disk input/output (I/O) statistics for each ASM client. If you query this view from a database instance, only the rows for that instance are displayed.
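To verify which disks an instance treats as preferred reads, a query along these lines can be used (a sketch; it joins the views named above):

```sql
-- Disks flagged as preferred-read for this ASM instance
SELECT dg.name AS diskgroup,
       d.name  AS disk,
       d.failgroup,
       d.preferred_read
FROM   v$asm_disk d
JOIN   v$asm_diskgroup dg
       ON d.group_number = dg.group_number
WHERE  d.preferred_read = 'Y';
```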

2.3 Best practices

In practice, there are only a limited number of valid disk group configurations in an extended cluster. The valid disk group configuration takes into account both the performance and availability of the disk group in the extended cluster. Here are some possible examples:

For an extended cluster with two sites, a normal redundancy disk group should have only two failure groups, and all the disks that are local to one site should belong to the same failure group. Also, at most one failure group should be specified per instance as the preferred read failure group. If there are more than two failure groups, ASM may not mirror a virtual extent across both sites; moreover, if a site holding more than two failure groups goes down, the disk group goes down as well. If the disk group being created is a high redundancy disk group, at most two failure groups should be created at each site, on its local disks, and both local failure groups should be specified as the preferred read failure groups for the local instance.

For an extended cluster with three sites, a high redundancy disk group with three failure groups should be used. ASM then ensures that every virtual extent has a local mirror copy at each site, and the disk groups at all three sites are protected against a major disaster.

III. Scalability and performance enhancements

3.1 ASM scalability and performance enhancements

ASM variable size extents is an automatic feature that lets ASM improve memory efficiency while supporting larger file sizes.

In Oracle Database 11g, ASM supports variable extent sizes of 1, 8, and 64 allocation units (AU). ASM uses predetermined size thresholds: whenever a file grows past a specific threshold, the next extent size is used.

With this feature, fewer extent pointers are needed to describe a file, and less memory is required to manage the extent maps in the shared pool (which could be a limiting factor in configurations with very large files). Extent size can vary both across files and within a file.

The variable size extents feature also makes it practical to deploy Oracle databases of several hundred TB, or even several PB, on ASM.

Note: variable extents are managed completely automatically and require no manual administration.

However, if a large number of non-contiguous small extents are allocated and freed, and no other contiguous large extent is available, external fragmentation may occur. The defragmentation operation is integrated into the rebalance operation, so a DBA can always defragment a disk group by performing a rebalance.

This situation is extremely rare, though, because ASM also defragments automatically during extent allocation when a chunk of the required size is not available. This may lengthen some allocation operations.

Note: this feature also speeds up file opens, because it significantly reduces the amount of memory needed to store file extent maps.

ASM scalability and performance enhancements:

(1) Extent size grows automatically with file size.

(2) ASM supports variable extent sizes, which:

- increase the maximum possible file size

- reduce memory usage in the shared pool

(3) When significant fragmentation occurs, the only management task required is a manual rebalance.

ASM scalability in Oracle Database 11g

ASM enforces the following restrictions:

(1) 63 disk groups per storage system

(2) 10,000 ASM disks per storage system

(3) 4 PB maximum storage per ASM disk

(4) 40 EB maximum storage per storage system

(5) 1 million files per disk group

(6) the maximum file size depends on the redundancy type of the disk group used:

- External redundancy: 140 PB (currently larger than any possible database file size)

- Normal redundancy: 42 PB

- High redundancy: 15 PB

Note: in Oracle Database 10g, the maximum ASM file size for external redundancy is 35 TB.

IV. ASM Disk Groups

4.1 ASM disk group compatibility

There are two types of compatibility that apply to ASM disk groups:

(1) ASM compatibility: governs the persistent data structures that describe the disk group

(2) RDBMS compatibility: governs the capabilities of the clients (the users of the disk group)

The compatibility of each disk group can be controlled independently. This is required to support heterogeneous environments containing disk groups from both Oracle Database 10g and Oracle Database 11g. Both compatibility settings are attributes of each ASM disk group:

(1) RDBMS compatibility refers to the minimum compatible version of an RDBMS instance that is allowed to mount the disk group. This compatibility determines the format of the messages exchanged between ASM instances and database (RDBMS) instances. An ASM instance can support RDBMS clients running with different compatibility settings. The database compatibility version setting of each instance must be greater than or equal to the RDBMS compatibility of all disk groups used by that database. Database instances and ASM instances typically run from different Oracle home directories, which means they may run different software versions. When a database instance first connects to an ASM instance, the two instances negotiate the highest message version that both support. The database's compatibility parameter setting, the database's software version, and the disk group's RDBMS compatibility setting together determine whether the database instance can mount the given disk group.

(2) ASM compatibility refers to the persistent compatibility setting that controls the data structure format of ASM metadata on disk. The ASM compatibility level of a disk group must always be higher than or equal to the RDBMS compatibility level of the same disk group. ASM compatibility is only relevant to the format of ASM metadata. The format of the file contents depends on the database instance. For example, you can set the ASM compatibility of a disk group to 11.0 and the RDBMS compatibility of that disk group to 10.1. This means that the disk group can only be managed by ASM software with software version 11.0 or later, and can be used by any database client with software version greater than or equal to 10.1.

Disk group compatibility needs to be advanced only when a persistent disk structure or the messaging protocol changes. Note, however, that advancing disk group compatibility is an irreversible operation. You can set disk group compatibility with the CREATE DISKGROUP or ALTER DISKGROUP command.
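For example, the compatibility attributes can be set at creation time or raised later, as in this sketch (disk group name, failure group names, and disk paths are illustrative):

```sql
-- Create a disk group managed by 11g ASM but usable by 10.1+ database clients
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/raw/raw1'
  FAILGROUP fg2 DISK '/dev/raw/raw2'
  ATTRIBUTE 'compatible.asm'   = '11.1.0',
            'compatible.rdbms' = '10.1';

-- Raising compatibility later is possible but irreversible
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.asm' = '11.1.0';
```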

Note: in addition to disk group compatibility, the COMPATIBLE initialization parameter (the compatible version of the database) determines which features are enabled. It applies to a database instance or an ASM instance, depending on the INSTANCE_TYPE parameter. For example, setting this parameter to 10.1 disables any new features introduced in Oracle Database 11g (disk online/offline, variable extents, and so on).

4.2 ASM disk Group Properties

When you create or change an ASM disk group, you can use the new ATTRIBUTE clause of the CREATE DISKGROUP command or the ALTER DISKGROUP command to change its properties.

ASM disk group properties:

(1) ASM lets you specify a nondefault AU size when you create the disk group. The AU size can be 1 MB, 2 MB, 4 MB, 8 MB, 16 MB, 32 MB, or 64 MB.

(2) RDBMS compatibility.

(3) ASM compatibility.

(4) DISK_REPAIR_TIME can be specified in minutes (M), hours (H), or days (D).

If the unit is omitted, the default unit is H (hours). If the attribute itself is omitted, the default value is 3.6h. You can change this attribute with an ALTER DISKGROUP statement.

(5) redundant attributes can be specified for the specified template.

(6) you can specify striping attributes for the specified template.

Note: for each defined disk group, you can use the V$ASM_ATTRIBUTE fixed view to view all defined properties.
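A creation statement combining several of these attributes might look like the following sketch (the disk group name and disk paths are illustrative):

```sql
-- 4 MB allocation units plus an explicit repair window
CREATE DISKGROUP data NORMAL REDUNDANCY
  DISK '/dev/raw/raw1', '/dev/raw/raw2'
  ATTRIBUTE 'au_size'          = '4M',
            'disk_repair_time' = '7.5h';

-- Inspect the defined attributes afterwards
SELECT name, value FROM v$asm_attribute;
```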


4.3 enhanced disk group check

The CHECK disk group command has been simplified so that all metadata directories are checked by default. Use the CHECK command to verify the internal consistency of ASM disk group metadata.

ALTER DISKGROUP data CHECK;

ASM displays a summary of errors and writes details about the detected errors to the alert log.

In earlier releases, this clause could specify ALL, DISK, DISKS IN FAILGROUP, or FILE. These clauses have been deprecated because they are no longer needed. In the current release, the CHECK keyword performs the following:

(1) checks disk consistency (equivalent to CHECK DISK and CHECK DISKS IN FAILGROUP in previous releases)

(2) cross-checks all file extent maps and allocation tables for consistency (equivalent to CHECK FILE in previous releases)

(3) checks that the alias metadata directory and file directory are correctly linked

(4) checks that the alias directory tree is correctly linked

(5) checks the ASM metadata directories to ensure they contain no unreachable allocated blocks

Use the REPAIR | NOREPAIR clause to indicate whether ASM should attempt to repair errors found during the consistency check. The default is REPAIR. Use the NOREPAIR setting if you want to be alerted to inconsistencies but do not want ASM to take any automatic corrective action.
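A check that only reports problems, without repairing them, might be run like this (the disk group name data is illustrative):

```sql
-- Verify metadata consistency but take no corrective action;
-- error details are written to the ASM instance's alert log
ALTER DISKGROUP data CHECK NOREPAIR;
```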

Note: introducing additional checks into the disk group check slows down the entire disk group check operation.

4.4 restricted mount disk groups for fast rebalancing

A new mount mode, RESTRICTED, is available for mounting disk groups in Oracle Database 11g. When a disk group is mounted in RESTRICTED mode, clients cannot access the files in the disk group. Because the ASM instance knows that no clients exist, it does not attempt to message clients to lock and unlock extent maps, which improves the performance of rebalance operations.

A disk group mounted in RESTRICTED mode is mounted exclusively on a single node, and even ASM clients on that node cannot use the disk group.

With RESTRICTED mode, you can perform all maintenance tasks on a disk group in an ASM instance without external interaction.

At the end of the maintenance cycle, you must explicitly dismount the disk group and then remount it in normal mode.

The ALTER DISKGROUP diskgroupname MOUNT command has been extended to let ASM mount a disk group in RESTRICTED mode.

When you start an ASM instance with the RESTRICTED option, all disk groups defined in the ASM_DISKGROUPS parameter are mounted in RESTRICTED mode.

Restricted mount disk groups for rapid rebalancing:

(1) disk groups can only be mounted on a single instance.

(2) No database client or other ASM instance can gain access.

(3) rebalancing can continue without locking overhead.

Example:

(1) ALTER DISKGROUP data DISMOUNT

(2) ALTER DISKGROUP data MOUNT RESTRICTED

(3) maintenance tasks: add / remove disks.

(4) ALTER DISKGROUP data DISMOUNT

(5) ALTER DISKGROUP data MOUNT

4.5 forced mount of disk groups

This feature changes the behavior of ASM when mounting incomplete disk groups.

With Oracle Database 10g, the mount operation succeeds as long as enough failure groups are present to mount the disk group, even if some failure groups are missing or damaged. This behavior could cause ASM disks to be dropped automatically and re-added after repair, leading to a long-running rebalance operation.

With Oracle Database 11g, this operation fails unless the new FORCE option is specified when mounting a damaged disk group. This lets you correct configuration errors (such as an incorrect ASM_DISKSTRING setting) or resolve connectivity issues before attempting the mount again.

However, a disk group mounted with the FORCE option may have one or more disks taken offline if they are unavailable at mount time. Corrective action must be taken before DISK_REPAIR_TIME expires to restore those devices; if the disks are not brought back online, the system drops them from the disk group, and a costly rebalance is required to restore redundancy for all files in the disk group. In addition, while one or more devices are offline after a MOUNT FORCE, some or all files are not properly protected until the rebalance restores redundancy in the disk group.

The MOUNT command with the FORCE option is therefore useful when you know that some disks belonging to the disk group are unavailable. The mount succeeds if ASM finds enough disks to form a quorum.

MOUNT with the NOFORCE option is the default when no option is specified. In NOFORCE mode, all disks belonging to the disk group must be accessible for the mount to succeed.

Note: specifying the FORCE option unnecessarily can itself cause errors. There is also a special case in a cluster: if the ASM instance is not the first to mount the disk group, MOUNT FORCE fails with an error, because a disk that cannot be accessed locally may still be accessible by other instances.

Force mount disk group

- By default, MOUNT uses the NOFORCE option:

  - all disks must be available

- MOUNT with the FORCE option:

  - if a quorum exists, unavailable disks are taken offline

  - if all disks are available, the operation fails

ALTER DISKGROUP data MOUNT [FORCE | NOFORCE]

4.6 Force deletion of disk groups

Forcibly dropping a disk group marks the headers of the disks belonging to a disk group that cannot be mounted by the ASM instance as FORMER. However, the ASM instance first determines whether the disk group is in use by any other ASM instance on the same storage subsystem. If it is, and the disk group is in the same cluster or on the same node, the statement fails.

If the disk group is on a different cluster, the system performs a further check to determine whether an instance in the other cluster has the disk group mounted; if so, the statement fails. However, this latter check is not as reliable as the check for disk groups in the same cluster, so this clause should be used with caution.

Note: when you execute a DROP DISKGROUP command with the FORCE option, you must also specify the INCLUDING CONTENTS clause.

Force deletion of disk group

(1) allows users to drop disk groups that cannot be mounted

(2) fails if the disk group is mounted anywhere

DROP DISKGROUP data FORCE INCLUDING CONTENTS

V. Using the SYSASM role

This feature introduces a new role, SYSASM, dedicated to performing ASM administration tasks. Using the SYSASM role instead of the SYSDBA role improves security, because ASM administration is separated from database administration.

In Oracle Database 11g Release 1, the OS groups for SYSASM and SYSDBA are the same, and the default installation group for SYSASM is dba. In future releases, separate groups must be created, and SYSDBA users will be restricted in ASM instances.

You can also create a new SYSASM user in an ASM instance by combining the CREATE USER and GRANT SYSASM SQL statements. This is useful for remote ASM administration. These commands update the password file of each ASM instance without requiring the instance to be up and running. Similarly, you can revoke the SYSASM role from a user with the REVOKE command, and remove the user from the password file with the DROP USER command.
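A minimal sketch of such a setup (the user name asmadmin and its password are illustrative):

```sql
-- In the ASM instance: create a user and grant it the SYSASM role
CREATE USER asmadmin IDENTIFIED BY "StrongPwd#1";
GRANT SYSASM TO asmadmin;

-- Verify the password-file entry
SELECT username, sysasm FROM v$pwfile_users;

-- Undo, if needed
REVOKE SYSASM FROM asmadmin;
DROP USER asmadmin;
```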

The V$PWFILE_USERS view has a new column, SYSASM, which indicates (TRUE or FALSE) whether the user can connect with the SYSASM privilege.

Note: in Oracle Database 11g Release 1, if you log in to an ASM instance as SYSDBA, a warning is written to the corresponding alert.log file.

VI. Extensions to ASMCMD

(1) ASMCMD has been extended with a number of additional commands for managing ASM.

-- reproduced from OCP textbook
