Storage Types in OpenStack



Storage Concepts:

Storage is present in many parts of the OpenStack cloud environment. It is important to understand the difference between ephemeral storage and persistent storage:

Ephemeral Storage – If you deploy only the OpenStack Compute (nova) service, your users do not have access to any form of persistent storage by default. The disks associated with virtual machines are ephemeral, that is, from the user's point of view they disappear when a virtual machine is terminated.

Persistent Storage – Persistent storage means that the storage resource outlives any other resource and is always available, regardless of the state of a running instance.

OpenStack clouds explicitly support three types of persistent storage: object storage, block storage, and file storage.

Object Storage:

Users access binary objects through a REST API; a brief upload sketch appears after the list below. If your intended users need to archive or manage large sets of data, you should provide them with the Object Storage service. Additional benefits include:

OpenStack can store your virtual machine (VM) images inside an Object Storage system, instead of storing the images on a file system.

Integration with OpenStack Identity (keystone), and works with the OpenStack dashboard.

Improved support for distributed deployments across multiple data centers through asynchronous, eventually consistent replication.
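
As a rough illustration of the REST API mentioned above, the following sketch uploads and lists an object with the openstacksdk Python library. The cloud name, container name, and object name are placeholder assumptions, not part of any particular deployment:

```python
# Minimal sketch: store and list binary objects via the Object Storage API.
# Assumes a clouds.yaml entry named "mycloud" with valid credentials.
import openstack

conn = openstack.connect(cloud="mycloud")

# Create a container, then upload a binary object into it.
conn.object_store.create_container(name="backups")
with open("db-dump.tar.gz", "rb") as f:
    conn.object_store.upload_object(
        container="backups",
        name="db-dump.tar.gz",
        data=f.read(),
    )

# Confirm the upload by listing the container's contents.
for obj in conn.object_store.objects("backups"):
    print(obj.name)
```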

You should consider using the OpenStack Object Storage service if you plan to distribute your storage cluster across multiple data centers, if you need unified accounts for your users for both compute and object storage, or if you want to manage your object storage with the OpenStack dashboard. For more information, see the Swift project page.

Block Storage:

Block storage is implemented in OpenStack by the Block Storage service (cinder). Because these volumes are persistent, they can be detached from one instance and attached to another instance with the data intact.
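
To make the detach/reattach behavior concrete, here is a hedged sketch using openstacksdk. The cloud and server names are placeholders, and the exact volume-attachment proxy signatures vary a little between SDK releases:

```python
# Minimal sketch: create a persistent volume and move it between instances.
# "mycloud" and "web-01" are placeholders; check your openstacksdk release
# for the exact create_volume_attachment() signature.
import openstack

conn = openstack.connect(cloud="mycloud")

# Create a 10 GB persistent volume and wait until cinder reports it ready.
volume = conn.block_storage.create_volume(name="data-vol", size=10)
conn.block_storage.wait_for_status(volume, status="available")

# Attach the volume to a running instance...
server = conn.compute.find_server("web-01")
attachment = conn.compute.create_volume_attachment(server, volume_id=volume.id)

# ...and later detach it; the data survives, and the volume can then be
# attached to a different instance.
conn.compute.delete_volume_attachment(attachment, server)
```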

The Block Storage service supports multiple back ends in the form of drivers. Your choice of back-end storage must be supported by a Block Storage driver.

Most Block Storage drivers allow the instance to have direct access to the block device of the underlying storage hardware, which helps increase overall read/write I/O performance. However, support for using files as volumes is also well established, with full support for NFS, GlusterFS, and others.

These drivers work a little differently than traditional Block Storage drivers. On an NFS or GlusterFS file system, a single file is created and then mapped as a virtual volume into the instance. This mapping and translation is similar to how OpenStack uses QEMU file-backed virtual machines stored in /var/lib/nova/instances.
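
The file-as-volume idea can be illustrated with qemu-img. This is a rough sketch of what such a driver does internally, not cinder's actual code; the mount point and volume file name are made up:

```python
# Rough illustration: on an NFS/GlusterFS back end, a "volume" is one file on
# the mounted share, which the hypervisor maps into the instance as a disk.
import subprocess

nfs_mount = "/var/lib/cinder/mnt/share1"  # placeholder mount point
volume_file = f"{nfs_mount}/volume-0001"  # placeholder volume file name

# Create a 10 GB raw image file that will back the virtual volume.
subprocess.run(
    ["qemu-img", "create", "-f", "raw", volume_file, "10G"],
    check=True,
)
```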

File Storage:

In a multi-tenant OpenStack cloud environment, the Shared File Systems (manila) service provides a set of services for managing shared file systems. The Shared File Systems service supports multiple back ends in the form of drivers and can be configured to provision shares from one or more back ends. Share servers are virtual machines that export file shares using different file system protocols such as NFS, CIFS, GlusterFS, or HDFS.

The Shared File Systems service provides persistent storage that can be mounted on a virtually unlimited number of client machines. A share can also be detached from one instance and attached to another without loss of data; during this process, the data is safe as long as the share itself is not changed or deleted.

Users interact with the Shared File Systems service by mounting remote file systems on their instances and then using those file systems to store and exchange files. The Shared File Systems service provides shares, which are remote, mountable file systems. A share can be mounted on, and accessed from, several hosts by several users at a time. You can also do the following (a brief sketch of creating a share appears after this list):

Create a share by specifying its size, shared file system protocol, and visibility level.

Create a share on a share server or in stand-alone mode, depending on the selected back-end mode, with or without a share network.

Specify access rules and security services for existing shares.

Combine multiple shares into groups to maintain data consistency within the group for consistent group operations.

Create a snapshot of a selected share or share group, either to store the existing shares consistently or to create new shares from the snapshot in a consistent way.

Create a share from a snapshot.

Set rate limits and quotas for specific shares and snapshots.

View usage of share resources.

Delete shares.   
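
As referenced above, the following sketch creates a share and grants access using the python-manilaclient library. The endpoint, credentials, and client network are placeholder assumptions, and exact client details may vary by release:

```python
# Minimal sketch: create an NFS share and allow a client network to mount it.
# All credentials and addresses below are placeholders.
from keystoneauth1 import loading, session
from manilaclient import client

loader = loading.get_plugin_loader("password")
auth = loader.load_from_options(
    auth_url="https://keystone.example.com:5000/v3",
    username="demo",
    password="secret",
    project_name="demo",
    user_domain_name="Default",
    project_domain_name="Default",
)
manila = client.Client("2", session=session.Session(auth=auth))

# Create a 1 GB NFS share, then add a read/write access rule for a subnet.
share = manila.shares.create(share_proto="NFS", size=1, name="my-share")
manila.shares.allow(
    share, access_type="ip", access="10.0.0.0/24", access_level="rw"
)
```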

Commodity Storage Technologies:

There are different commodity back-end storage technologies available. Depending on the needs of your cloud users, you can implement one or more of these technologies in different combinations.

Ceph:

Ceph is a scalable storage solution that replicates data on commodity storage nodes.

Ceph uses an object storage mechanism for data storage and exposes the data to end users through several types of storage interfaces: object storage, block storage, and a file system interface.

Ceph supports the same object storage API as swift and can be used as a back end for the Block Storage service (cinder) as well as backing storage for Image service (glance) images.

Ceph supports thin provisioning, implemented using copy-on-write. This can be useful when booting from a volume because a new volume can be provisioned very quickly. Ceph also supports keystone-based authentication (as of version 0.56), so it can be a transparent swap for the default OpenStack swift implementation.
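
The fast boot-from-volume behavior comes from copy-on-write cloning, sketched below with the python-rbd bindings. The pool, image, and snapshot names are placeholders, and the parent snapshot must already exist and be protected:

```python
# Minimal sketch: clone a new boot volume from a protected snapshot of a base
# image. No data is copied up front; blocks are allocated only when written.
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("volumes")  # placeholder pool name

try:
    # "base-image"/"base-snap" are placeholder parent names; "boot-vol" is
    # the thin clone the instance will boot from.
    rbd.RBD().clone(ioctx, "base-image", "base-snap", ioctx, "boot-vol")
finally:
    ioctx.close()
    cluster.shutdown()
```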

The benefits of Ceph include:

The administrator has finer control over data distribution and replication policies.
Consolidation of object storage and block storage.
Quick provisioning of boot instances from a volume using thin provisioning.
Support for the CephFS Distributed File System Interface.

Consider Ceph if you want to manage your object and block storage within a single system, or if you want to support fast boot-from-volume.

Gluster:

A distributed shared file system. As of Gluster version 3.3, you can use Gluster to consolidate object storage and file storage into a unified file and object storage solution called Gluster For OpenStack (GFO). GFO uses a customized version of swift that enables Gluster to be used as the back-end storage.

The main reason to use GFO rather than swift is if you also want to support a distributed file system, either to support shared-storage live migration or to provide it as a separate service to your end users. If you want to manage your object and file storage within a single system, you should consider GFO.

LVM:
The Logical Volume Manager (LVM) is a Linux-based system that provides an abstraction layer on top of physical disks to expose logical volumes to the operating system. The LVM back end implements block storage as LVM logical partitions.

On each host that will house block storage, an administrator must initially create a volume group dedicated to Block Storage volumes. Blocks are created from LVM logical volumes.
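
A minimal sketch of that one-time host preparation follows, assuming a spare disk at /dev/sdb and cinder's default volume group name:

```python
# Create a physical volume and a volume group dedicated to Block Storage.
# /dev/sdb is a placeholder device; "cinder-volumes" is cinder's default
# volume_group setting. Run as root.
import subprocess

device = "/dev/sdb"
subprocess.run(["pvcreate", device], check=True)
subprocess.run(["vgcreate", "cinder-volumes", device], check=True)
# The LVM driver then carves one logical volume out of this group for each
# Block Storage volume it creates.
```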

iSCSI:

Internet Small Computer Systems Interface (iSCSI) is a network protocol that operates on top of the Transmission Control Protocol (TCP) for linking data storage devices. It transports data between an iSCSI initiator on a server and an iSCSI target on a storage device.
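
The initiator/target exchange can be sketched with the standard open-iscsi tooling; the portal address below is a placeholder:

```python
# Minimal sketch of the initiator side: discover targets on a portal, then
# log in so the remote block device appears locally over TCP. Run as root.
import subprocess

portal = "192.0.2.10:3260"  # placeholder iSCSI target portal

subprocess.run(
    ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal],
    check=True,
)
subprocess.run(["iscsiadm", "-m", "node", "-p", portal, "--login"], check=True)
```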

iSCSI is suitable for cloud environments that use the Block Storage service to support applications or for file-sharing systems. Network connectivity can be achieved at lower cost than with other storage technologies because iSCSI does not require host bus adapters (HBAs) or storage-specific network devices.

NFS:

The Network File System (NFS) is a file system protocol that allows a user or administrator to mount a file system on a server. File clients can access mounted file systems through remote procedure calls (RPCs).
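
For example, a client mounts an exported file system like this (the server path and mount point are placeholders; requires root and the NFS client utilities):

```python
# Minimal sketch: mount a remote NFS export on a local directory.
import subprocess

subprocess.run(
    ["mount", "-t", "nfs", "nfs.example.com:/export/data", "/mnt/data"],
    check=True,
)
```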

The benefits of NFS are low implementation cost due to shared network interface cards and traditional network components, as well as a simpler setup and installation process.

For more information on configuring Block Storage to use NFS storage, see Configure an NFS storage back end in the OpenStack Administrator Guide.

Sheepdog:

Sheepdog is a userspace distributed storage system. Sheepdog scales to several hundred nodes and has powerful virtual disk management features such as snapshots, cloning, rollback, and thin provisioning.

It is essentially an object storage system that manages disks and aggregates disk space and performance intelligently and linearly at hyper scale on commodity hardware. On top of its object store, Sheepdog provides an elastic volume service and an HTTP service. Sheepdog requires a specific kernel version and can work nicely with file systems that support xattr.

ZFS:

The Solaris iSCSI driver for OpenStack Block Storage implements blocks as ZFS entities. ZFS is a file system that also has the functionality of a volume manager. This is unlike a Linux system, where there is a separation between the volume manager (LVM) and the file system (such as ext3, ext4, XFS, or Btrfs). ZFS has a number of advantages over ext4, including improved data-integrity checking.
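
The combined volume-manager/file-system role is easy to see on the command line; in the sketch below, the disks and pool name are placeholders:

```python
# Minimal sketch: one tool creates the pool over raw disks and carves a block
# volume (zvol) out of it, with no separate LVM layer. Run as root on a
# ZFS-capable system; /dev/sdb and /dev/sdc are placeholder disks.
import subprocess

subprocess.run(["zpool", "create", "tank", "/dev/sdb", "/dev/sdc"], check=True)
# Create a 10 GB block volume; it appears as /dev/zvol/tank/vol1.
subprocess.run(["zfs", "create", "-V", "10G", "tank/vol1"], check=True)
```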

The ZFS back end for OpenStack Block Storage supports only Solaris-based systems, such as illumos. While there is a Linux port of ZFS, it is not included in any of the standard Linux distributions, and it has not been tested with OpenStack Block Storage. As with LVM, ZFS does not provide replication across hosts on its own, so you need to add a replication solution on top of ZFS if your cloud needs to be able to handle storage-node failures.
