Friday, October 14, 2011

Storage Basics: Clustered File Systems

Source: http://www.enterprisestorageforum.com/sans/features/article.php/3834771/Storage-Basics-Clustered-File-Systems.htm


Many options exist for setting up clustered and highly available data storage, but figuring out what each option does will take a bit of research. Your choice of storage architecture as well as file system is critical, as most have severe limitations that require careful design workarounds.
In this article we will cover a few common physical storage configurations, as well as clustered and distributed file system options. Hopefully, this is a good starting point for researching the technology that will work best for your high availability storage needs.

Underlying Architectures

Some readers may wish to configure a cluster of servers that simply have concurrent access to the same file system, while others may want to replicate storage and provide both concurrent access and redundancy. There are two ways to give multiple servers access to the same disks: let them all see the same storage directly, or replicate it between them.
Shared-disk configurations are most common in the Fibre Channel SAN and iSCSI worlds. It is quite simple to configure storage systems so that multiple servers can see the same logical block device, or LUN, but without a clustered file system, chaos will ensue if they all try to use it at the same time. This problem is dealt with by using clustered file systems, which we will cover in a moment.
Generally speaking, shared disk setups have a single point of failure: the storage system. This is not always true, however, as "shared disk" is a confusing term with today's technology. SANs, NAS appliances and commodity hardware running Linux can all replicate the underlying disks in real time to another storage node, which provides a simulated shared disk environment. Since the underlying block devices are replicated, the nodes have access to the same data and both run a clustered file system, but this replication breaks the traditional shared disk definition.
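To make the replicated shared-disk idea concrete, here is a minimal sketch in the spirit of tools like DRBD, though it is not DRBD's actual protocol; the function name, wire format, and one-byte acknowledgement are invented for illustration:

    import struct

    BLOCK_SIZE = 4096

    def replicate_write(local_dev, peer_sock, block_num, data):
        """Apply one block write locally, then mirror it to the peer node
        before acknowledging. Synchronous replication like this keeps both
        nodes' block devices identical, simulating a shared disk."""
        assert len(data) == BLOCK_SIZE
        # 1. Write the block to the local backing device.
        local_dev.seek(block_num * BLOCK_SIZE)
        local_dev.write(data)
        local_dev.flush()
        # 2. Ship (block number, payload) to the replica over the network.
        peer_sock.sendall(struct.pack("!Q", block_num) + data)
        # 3. Only report success once the peer has acknowledged the write.
        return peer_sock.recv(1) == b"\x01"

Because a write only succeeds once the peer has acknowledged it, both nodes end up with identical block devices, which is what allows a clustered file system to treat them as shared storage.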
"Shared nothing," in contrast, was the original answer to shared disk single points of failure. Nodes with distinct storage would notify a master server with changes as each block was written. Nowadays, shared nothing architectures still exist in file systems like Hadoop, which purposely creates multiple copies of data across many nodes for both performance and redundancy. Also, clusters that employ replication between storage devices or nodes with their own storage are also said to be shared nothing.

Design Choices

As we noted, you cannot simply give multiple servers access to the same block device. You always hear about file system locking, so it may seem strange that normal file systems cannot handle this, right?
At the file level, the file system does lock files to protect data from conflicting changes. But at the operating system level, the file system drivers have full access to the underlying block device, upon which they are free to roam. Most file systems assume that they are given a block device, and that it is theirs and theirs alone.
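Here is a toy illustration of why that assumption matters, using an ordinary file to stand in for the shared block device; the file name and bitmap layout are invented for the demo. Two "nodes" cache the same allocation bitmap, each flips a different bit in its private copy, and the last one to write back silently erases the other's update:

    # An ordinary file stands in for the shared block device.
    DEV = "shared.img"          # hypothetical name, just for this demo
    BITMAP_LEN = 16             # a tiny "allocation bitmap" of metadata

    with open(DEV, "wb") as f:  # start with an all-zero bitmap
        f.write(bytes(BITMAP_LEN))

    def read_bitmap():
        with open(DEV, "rb") as f:
            return bytearray(f.read(BITMAP_LEN))

    def write_bitmap(bitmap):
        with open(DEV, "r+b") as f:
            f.write(bitmap)

    # Both "nodes" cache the same view of the metadata...
    view_a = read_bitmap()
    view_b = read_bitmap()

    # ...each marks a different block as allocated in its private copy...
    view_a[0] |= 0b00000001     # node A allocates block 0
    view_b[0] |= 0b00000010     # node B allocates block 1

    # ...and each writes its full copy back. The last writer wins:
    write_bitmap(view_a)
    write_bitmap(view_b)

    print(bin(read_bitmap()[0]))  # 0b10: node A's allocation has vanished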
To get around this, clustered file systems implement a mechanism for concurrency control. Some clustered file systems store metadata within a partition of the shared device, and others use a centralized metadata server. Both approaches allow all nodes in the cluster to share a consistent view of the file system's state, permitting safe concurrent access. The model with the central metadata server, however, is sub-optimal if your goal is high availability and eliminating single points of failure.
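As a rough sketch of the centralized model (a toy protocol, not any particular file system's): each node must obtain an exclusive lock on a file's metadata from the coordinator before modifying it, so no two nodes ever rewrite the same on-disk structures at once:

    import threading

    class MetadataServer:
        """Toy central coordinator: it grants each file's metadata lock
        to one node at a time, so every node sees a consistent file
        system state. Real clustered file systems use a distributed
        lock manager with leases and recovery; this only shows the
        basic idea."""

        def __init__(self):
            self._guard = threading.Lock()
            self._owners = {}      # path -> node currently holding the lock

        def acquire(self, node, path):
            with self._guard:
                if self._owners.get(path) not in (None, node):
                    return False   # another node holds it; caller must wait
                self._owners[path] = node
                return True

        def release(self, node, path):
            with self._guard:
                if self._owners.get(path) == node:
                    del self._owners[path]

    mds = MetadataServer()
    assert mds.acquire("node1", "/data/report")      # node1 gets the lock
    assert not mds.acquire("node2", "/data/report")  # node2 is refused
    mds.release("node1", "/data/report")
    assert mds.acquire("node2", "/data/report")      # now node2 may proceed

Note that the coordinator itself is the single point of failure mentioned above: if it goes down, no node can safely change anything.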
One other note: The clustered file system model requires swift action when a node does something wrong. If a node writes bad data or stops communicating its metadata changes for some reason, other nodes need to be able to "fence" off the offender. Fencing is accomplished in many ways, most often using lights-out management interfaces. Healthy nodes will Shoot The Other Node In The Head (STONITH), or yank its power, at the first sign of inconsistency to preserve the data.
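A minimal sketch of that fencing logic, assuming ipmitool is installed and each node's lights-out (BMC) address and credentials are known; the node names, timeout value, and bookkeeping are hypothetical:

    import subprocess
    import time

    HEARTBEAT_TIMEOUT = 10.0  # seconds of silence before a node is suspect
    last_heartbeat = {}       # node name -> timestamp of last heartbeat seen

    def fence(bmc_host, user, password):
        """Power the offending node off through its lights-out (IPMI)
        interface: the classic STONITH action."""
        subprocess.run(
            ["ipmitool", "-I", "lanplus", "-H", bmc_host,
             "-U", user, "-P", password, "chassis", "power", "off"],
            check=True,
        )

    def check_cluster(bmc_credentials):
        """Fence any node whose heartbeats have stopped, before letting
        the survivors recover its locks. A half-dead node that can still
        write to shared storage would otherwise corrupt the file system."""
        now = time.time()
        for node, seen in last_heartbeat.items():
            if now - seen > HEARTBEAT_TIMEOUT:
                host, user, password = bmc_credentials[node]
                fence(host, user, password)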

Clustered File Systems

GFS: Global File System.
GFS, available in Linux, is the most widely used clustered file system. Developed by Red Hat (NYSE: RHT), GFS allows concurrent access by all participating cluster nodes. Metadata is generally stored on a partition of the shared (or replicated) storage.
OCFS: Oracle (NASDAQ: ORCL) Clustered File System.
OCFS is conceptually very much like GFS, and OCFS2 is now available in Linux.
VMFS: VMware's (NYSE: VMW) Virtual Machine File System.
VMFS is the clustered file system that ESX Server uses to allow multiple servers access to the same shared storage. This makes virtual machine migration (to different servers) seamless, as the same storage is accessible at the source and destination. Journals are distributed, and there is no single point of failure between the ESX servers.
Lustre: Sun's (NASDAQ: JAVA) clustered, distributed file system.
Lustre is a distributed file system designed to work with very large clusters containing thousands of nodes. Lustre is available for Linux, but its applications outside high performance computing circles are limited.
Hadoop: a distributed file system, like Google (NASDAQ: GOOG) uses.
This is not a clustered file system, but rather a distributed one. We include Hadoop because of its rising popularity, and because a wide array of storage architecture designs can take advantage of it. By default, you will have three copies of your data on three different nodes. Changes are replicated to each copy, so in a sense it can be treated as a clustered file system. Hadoop does, however, have a single point of failure: the name node, which keeps track of all file system metadata.
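As a toy sketch of the name node's job (not Hadoop's actual rack-aware placement policy; the node names are invented), here is block placement with the default replication factor of three, which corresponds to Hadoop's dfs.replication setting:

    import random

    DATANODES = ["dn1", "dn2", "dn3", "dn4", "dn5"]  # hypothetical cluster
    REPLICATION = 3           # Hadoop's default dfs.replication value

    # The name node's in-memory map of block id -> nodes holding a copy.
    # This one map is the single point of failure described above: lose
    # the name node and the block locations are lost with it.
    block_locations = {}

    def place_block(block_id):
        """Pick three distinct data nodes to hold copies of a new block.
        Real HDFS uses a rack-aware policy; random choice shows the idea."""
        block_locations[block_id] = random.sample(DATANODES, REPLICATION)
        return block_locations[block_id]

    print(place_block("blk_0001"))   # e.g. ['dn4', 'dn1', 'dn5']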

Choices, Choices

Having too many choices is never a bad thing. Your implementation goals will dictate which clustered or distributed file system and storage architecture you choose. All of the mentioned file systems work very well, assuming they are used as intended.
Article courtesy of Enterprise Networking Planet
Tags: open source, Linux, file systems, clusters


By faizal   July 08 2010 06:59 PDT
Hey, where is the Google File System (GFS)?
By kurtenbach   July 21 2010 15:48 PDT
I think the Google File System was designed for very specific use cases. But another more general (or at least "general scratch purpose") cluster FS that is definitely missing here is the Fraunhofer Parallel File System (http://www.fhgfs.com). It was really easy to install at our site and delivered great performance right out of the box...
By bambara   April 05 2010 10:39 PDT
Hello, I'm working with Oracle RAC (Real Application Clusters) on Red Hat AS 4 and I want to share my storage between my 2 cluster nodes. Please send me an email if you know how I can share the hard disks. Thanks.
By BD   August 24 2009 23:50 PDT
Veritas' clustered VxFS is also missing.
By Chris H   August 19 2009 16:00 PDT
http://www.drbd.org/ seems worth a mention
By Glen   August 19 2009 09:41 PDT
Several are missing here, notably PVFS/PVFS2, IBM's GPFS and IBRIX... the latter two being commercial.
By Mark   August 19 2009 04:37 PDT
GlusterFS is missing from the list.
By coward   August 18 2009 22:19 PDT
IMHO, GlusterFS is the most cost-effective solution. Other than that, it is neat, light-weight, high-performance, administrator-friendly, etc. Sounds like I am from Z Research; actually I am just a GlusterFS user who switched from Lustre.
