System Administration Guide
Chapter 8, Administering virtual disks

Virtual disk types

Eight types of virtual disk are available, described in the sections that follow:

   simple disk
   concatenated disk
   striped array (RAID 0)
   mirrored disk (RAID 1)
   block-interleaved undistributed parity array (RAID 4)
   block-interleaved distributed parity array (RAID 5)
   striped, mirrored array (RAID 10)
   striped array of arrays (RAID 53)

This table summarizes their storage characteristics: 

Table 8-1 Available virtual disk characteristics

 ---------------------------------------------------------------------
 Rating       I/O performance          Resilience
 ---------------------------------------------------------------------
 Best         RAID 10                  mirror
              stripe                   RAID 10
                                       RAID 53
              RAID 5, 53               RAID 4, 5 (hot spare)
              RAID 4                   RAID 4, 5 (no hot spare)
              mirror
              concatenated             simple
 Worst        simple                   concatenated, stripe
 ---------------------------------------------------------------------


Simple disk

The simple virtual disk configuration lets you define all your non-root filesystem space as one virtual disk. You can define the pieces that make up the virtual disk by specifying the starting block ``offset'' and length of the piece (a piece is a group of contiguous blocks configured to a specific virtual disk). Simple virtual disks supersede physical disk divisions created using divvy(ADM). Simple virtual disks can also be configured to overlay existing physical divisions. See ``Converting filesystems to virtual disks''.

By overlaying a simple virtual disk definition on a standard disk division, you can make it easier to migrate your data to different virtual disk types. You can do this without interrupting system operation.

Simple virtual disks, like other virtual disks, are defined in the virtual disk configuration file /etc/dktab.

Here is an example of a simple virtual disk entry: 

   /dev/dsk/vdisk4 simple
       /dev/dsk/2s1 2035 10000
The first line gives the name of the virtual disk device file and the type of the disk (simple).

The second line gives the disk partition, the starting block allocated to the virtual disk, and the number of blocks allocated to the virtual disk (block counts are in 512-byte blocks). In the device notation /dev/dsk/DsP, D is the disk number and P is the partition number; disk numbers start from 0 and partition numbers start from 1. See ``Creating additional virtual disk nodes'' for more information on virtual disk node names.
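Because block counts are given in 512-byte blocks, the size of a piece is easy to check. This short sketch (the numbers are taken from the example entry above) converts a piece length to kilobytes and megabytes:

```python
# Convert a dktab piece length (in 512-byte blocks) to KB and MB.
BLOCK_SIZE = 512  # bytes per disk block

def piece_size(blocks):
    """Return (kilobytes, megabytes) for a piece of the given length."""
    kb = blocks * BLOCK_SIZE // 1024
    return kb, kb / 1024

# The simple virtual disk example: 10000 blocks starting at offset 2035.
kb, mb = piece_size(10000)
print(kb, round(mb, 2))  # 5000 KB, about 4.88 MB
```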


NOTE: Simple virtual disks can be configured to span an entire partition on the physical disk. However, the minimum offset is the size of the partition's reserved area.



Concatenated disk

A concatenated virtual disk is formed by adding two or more disk pieces (that may be physical disk divisions, previously defined virtual disks, or a mixture of both). This type of disk allows you to create a virtual disk that may be larger than any single physical disk in the system. The size of the concatenated disk is the sum of the sizes of all its component parts. 

Figure 8-1 Example concatenated disk

The concatenated disk is referred to as /dev/dsk/vdisk4 and is made up of three virtual disk pieces. The /etc/dktab file definition for the configuration is:

   /dev/dsk/vdisk4 concat  3
       /dev/dsk/1s1        2000  8000 
       /dev/dsk/2s1        2500  17500
       /dev/dsk/1s1        22000 8000
vdisk4 has a total capacity of 33,500 blocks (16,750KB). 
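The capacity arithmetic above can be verified with a short sketch (the piece lengths are taken from the dktab example):

```python
# A concatenated disk's capacity is the sum of its pieces' lengths.
pieces = [8000, 17500, 8000]       # lengths in 512-byte blocks, from the example
total_blocks = sum(pieces)
total_kb = total_blocks * 512 // 1024
print(total_blocks, total_kb)      # 33500 blocks, 16750 KB
```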

Striped array (RAID 0)

Disk striping distributes data blocks across pieces stored on multiple disks. A ``stripe'' is a set of clusters written in parallel to a set of pieces on different disks; the ``stripe width'' is the number of bytes written in parallel. Because the pieces are written to in parallel, all the pieces must be the same size. A striped disk can consist of 2 to 255 of the same-sized disk pieces.
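Conceptually, striping assigns clusters to pieces in round-robin order. This sketch shows how a logical block maps to a (piece, block-within-piece) pair; the function and layout are illustrative, not the actual driver's algorithm:

```python
# Round-robin mapping of a logical block onto a stripe of `npieces` pieces,
# each written in clusters of `cluster` blocks.
def stripe_map(logical_block, npieces, cluster):
    cluster_no, within = divmod(logical_block, cluster)
    piece = cluster_no % npieces             # clusters rotate across pieces
    piece_cluster = cluster_no // npieces    # which cluster on that piece
    return piece, piece_cluster * cluster + within

# With 5 pieces and an 8-block cluster (the vdisk5 example below),
# logical block 40 starts cluster 5, which wraps back to piece 0.
print(stripe_map(40, 5, 8))  # (0, 8)
```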

For information on optimizing application performance by varying the cluster size, see ``Planning your system layout with virtual disks''.

This striped configuration (RAID level 0) provides high write performance because there is no redundant information to update (redundancy is used to recover from lost or corrupt data). Without redundancy, any single disk failure will result in data loss. Non-redundant disk arrays are ideally suited to environments where performance and capacity, rather than reliability, are the primary concerns. 

Figure 8-2 Example striped array (RAID 0)

The /etc/dktab file definition for this configuration is:

   /dev/dsk/vdisk5 stripe 5 8
       /dev/dsk/1s1 2000 497968 
       /dev/dsk/2s1 2000 497968
       /dev/dsk/3s1 2000 497968 
       /dev/dsk/4s1 2000 497968
       /dev/dsk/5s1 2000 497968
The first line indicates that /dev/dsk/vdisk5 is striped across five disks, with 8 blocks (of 512 bytes) per cluster. The second and subsequent lines list the disk pieces belonging to the virtual disk, giving the starting block and the number of blocks for each piece. 

Mirrored disk (RAID 1)

Disk mirroring is the duplication of disk data onto a secondary disk. Both the primary and secondary disks are simultaneously online. Data written to the primary disk is simultaneously written to the secondary disk. Read requests are automatically directed to alternate disks to obtain the best performance.

When a mirrored disk is created, one disk is designated as the primary disk. Data from the primary disk is copied to the secondary disk until they mirror each other's content; thereafter, writes are directed to both disks in parallel.
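The write-to-both, read-from-either behavior can be sketched as follows; this is a toy illustration of the I/O policy, not the actual driver:

```python
# Toy sketch of mirrored I/O: every write goes to both copies,
# and reads alternate between them to balance the load.
class Mirror:
    def __init__(self):
        self.copies = [{}, {}]   # primary and secondary "disks"
        self.next_read = 0

    def write(self, block, data):
        for copy in self.copies:              # duplicate every write
            copy[block] = data

    def read(self, block):
        disk = self.next_read
        self.next_read = 1 - self.next_read   # alternate read requests
        return self.copies[disk][block]

m = Mirror()
m.write(0, b"data")
print(m.read(0) == m.read(0))  # True: both copies hold the same data
```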

Mirroring is ideally used in database applications where availability and transaction rate are more important than storage efficiency. Mirrors can also be configured across different buses.

An /etc/dktab file definition for the configuration is:

   /dev/dsk/vdisk4 mirror 2
       /dev/dsk/1s1 2000 798000
       /dev/dsk/2s1 2000 798000
   /dev/dsk/vdisk5 mirror 2
       /dev/dsk/3s1 2000 798000
       /dev/dsk/4s1 2000 798000


Block-interleaved undistributed parity array (RAID 4)

RAID 4 is based on data striping (as in RAID 0 configurations) with additional redundancy information stored on a separate disk piece. Data is striped across three or more disk pieces using a defined cluster size. The added reliability of RAID level 4 is obtained by generating parity across the striped data; the parity is written to a separate disk piece. 

Parity information is generated during disk writes and stored on an extra disk configured into the disk array for that purpose. In the event of any single disk failure, data is recreated for each block on the failed disk using the surviving data blocks and the parity block. The virtual disk will remain online, although performance will be reduced due to the need to recreate data from the failed disk. Except for some user warning messages on the console indicating the failure, the applications will not be affected.
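Parity here is the bytewise XOR of the data clusters in a stripe; a lost cluster is recovered by XORing the parity with the surviving clusters. A minimal sketch:

```python
from functools import reduce

def xor_blocks(blocks):
    """Bytewise XOR of equal-sized byte strings."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]   # four data clusters in a stripe
parity = xor_blocks(data)                      # written to the parity disk

# Simulate losing the third disk: recreate its cluster from parity + survivors.
survivors = data[:2] + data[3:]
recovered = xor_blocks(survivors + [parity])
print(recovered == data[2])  # True
```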

Because a RAID 4 virtual disk has only one parity disk, and because that disk must be updated on all write operations, overall disk performance may be reduced. 

Figure 8-3 Example striped disk (RAID level 4)

Figure 8-3 shows an example of a RAID 4 configuration with Disk 5 as the parity disk and a cluster size of 32KB. (Note that disk blocks are 512 bytes long, so a 32KB cluster size corresponds to 64 blocks.)

Here is an example of an /etc/dktab entry defining a RAID 4 configuration tuned for group reads and writes:

   /dev/dsk/vdisk7  array  5  64
       /dev/dsk/1s1   2000  97840
       /dev/dsk/2s1   2000  97840
       /dev/dsk/3s1   2000  97840
       /dev/dsk/4s1   2000  97840
       /dev/dsk/5s1   2000  97840 parity


Block-interleaved distributed parity array (RAID 5)

RAID 5 is based on data striping (as in RAID 0) and parity (as in RAID 4). The difference between RAID level 5 and RAID level 4 is that parity is striped across all disks. Because parity information is distributed, no one disk bears an excessive I/O load. For this reason, RAID 5 is preferred to RAID 4 for most I/O intensive applications. 
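One common way to distribute parity is to rotate the parity cluster across the disks, one stripe at a time. The placement rule below is illustrative; the actual driver's rotation may differ:

```python
# Rotating parity placement: for each stripe, a different disk holds parity.
def parity_disk(stripe_no, ndisks):
    return (ndisks - 1 - stripe_no) % ndisks

# With 5 disks, parity falls on disk 4, 3, 2, 1, 0, then wraps around.
print([parity_disk(s, 5) for s in range(6)])  # [4, 3, 2, 1, 0, 4]
```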

Figure 8-4 Example data and parity striped disk (RAID 5)

Here is an example of an entry for the RAID 5 configuration:

   /dev/dsk/vdisk11  array  5    16
       /dev/dsk/0s1  2000  6864
       /dev/dsk/1s1  2000  6864
       /dev/dsk/2s1  2000  6864
       /dev/dsk/3s1  2000  6864
       /dev/dsk/4s1  2000  6864


Striped, mirrored array (RAID 10)

Striped mirrors (or RAID 10) can be built by striping mirrored disks, which are usually duplexed. Duplexing is achieved by setting up the disks on two independent buses. 

Figure 8-5 Example striped mirrored disk

In this example, Disk 1 is mirrored to Disk 2, Disk 3 is mirrored to Disk 4, and so on. The four mirrored virtual disks are then configured as a single striped disk. Disks 1, 3, 5, and 7 are striped together and disks 2, 4, 6, and 8 mirror them under a single virtual disk node. This configuration will protect disks from:

   the failure of any single disk

   simultaneous failures of several disks, provided that no two failed disks form a mirrored pair

This configuration will not protect from simultaneous disk failures of Disk 1 and Disk 2 or Disk 3 and Disk 4, and so on. Striped-mirror configurations can result in performance improvements over standard mirror or standard striped configurations. 
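The failure-tolerance rule for this layout (the array survives unless both members of some mirrored pair fail) can be sketched as a check; the pairing follows the example's disk numbering:

```python
# Mirrored pairs from the example: (1,2), (3,4), (5,6), (7,8).
PAIRS = [(1, 2), (3, 4), (5, 6), (7, 8)]

def survives(failed):
    """A RAID 10 array survives unless both disks of some pair have failed."""
    failed = set(failed)
    return not any(a in failed and b in failed for a, b in PAIRS)

print(survives({1, 4, 6}))  # True: no mirrored pair is fully lost
print(survives({3, 4}))     # False: Disk 3 and Disk 4 are a mirrored pair
```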

Striped array of arrays (RAID 53)

Striped-arrays (or RAID 53) can be built by striping arrays. An example is shown in Figure 8-6, ``Example striped disk arrays (RAID 53)''. 

Figure 8-6 Example striped disk arrays (RAID 53)

In this example, Disk 1, Disk 2 and Disk 3 are configured as a RAID 5 array, Disk 4, Disk 5 and Disk 6 make up the next array, and so on. The four virtual disks are then configured as one striped disk.

This configuration will protect your disks from the failure of any single disk; because each component array maintains its own parity, one disk in each array can fail at the same time without data loss.