The [417]braindump for CORNFS explains many things about this project.

CORNFS is an attempt at creating a distributed filesystem that mirrors N copies of files across a group of M servers. Everything in CORNFS is stored as a file. At any time, it is possible to reconstruct the entire filesystem via a simple overlay rsync from the remote filesystems - there is no "special database" to worry about.

Rather than mirroring at the volume or block level, CORNFS mirrors at the file level, tracking which servers a file is mirrored on. CORNFS works with locally cached copies of files and a central metadata state directory. Extended attributes are used to mark metadata state files with the information CORNFS uses to track the mirrors for a particular file, and to mark cached files as "dirty" (for copying back to the remote servers when a cached file is modified). As files are written, the servers with the most available disk space are used for new files (a braindead simple algorithm for the moment). When a cached file is modified, the file is copied back to its mirrors (or to new mirrors should a server be unavailable).

CORNFS keeps metadata centrally to maintain a sane filesystem state. Every remote server's metadata state is known by the central server, and the central server's metadata state is authoritative. Remote servers may go offline; when they come back online, any files that were updated while they were unavailable will have been removed from that server in the central metadata and will no longer be referred to (such "orphaned files" will need to be pruned periodically). As a last resort, the master's cached copy is authoritative: if the mirrors cannot be written to, the cache file will remain dirty and will not be expired.

This is a production release, in use by my employer today.
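To make the extended-attribute bookkeeping concrete, here is a minimal sketch of how state and cache files might be tagged. The helper names and the user.cornfs.* attribute names are my illustration here, not necessarily what cornfs.c itself uses:

/* Sketch: tracking mirrors and the dirty flag with extended
 * attributes. Helper and attribute names are hypothetical. */
#include <sys/types.h>
#include <sys/xattr.h>
#include <string.h>

/* Record which server holds the first mirror of this file. */
static int mark_mirror1(const char *state_path, const char *server)
{
    return setxattr(state_path, "user.cornfs.mirror1",
                    server, strlen(server), 0);
}

/* Flag a cached file as dirty, persistently. */
static int mark_dirty(const char *cache_path)
{
    return setxattr(cache_path, "user.cornfs.dirty", "1", 1, 0);
}

/* Clear the dirty flag once all mirrors have been written. */
static int mark_clean(const char *cache_path)
{
    return removexattr(cache_path, "user.cornfs.dirty");
}

/* A file is dirty if the attribute is present at all. */
static int is_dirty(const char *cache_path)
{
    char buf[2];
    return getxattr(cache_path, "user.cornfs.dirty", buf, sizeof(buf)) > 0;
}

Keeping the flag on disk as well as in memory means a killed daemon can rediscover unflushed files when it restarts.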
The history of development so far:

[418]cornfs.c v0.0.0.1
  The first (broken) release.

[419]cornfs.c v0.0.0.2
  A number of fixes make this version _usable_. There are most definitely corner cases that have not been dealt with yet, though it seems to suffer an rsync/rm well now.

[420]cornfs.c v0.0.0.3
  Adds partial read()s while the copy is underway during an open() (until I figure out how to spawn a pthread() for the copy, this does not really do much yet).

[421]cornfs.c v0.0.0.4
  Added pthread_mutex_lock(&corn_copy_lock) to copy_file. Added corn_magic and USE_MAGIC wrappers for magic file identification.

[422]cornfs.c v0.0.1.0
  The dynamic expiring cache code is now present. Added cache_inventory(), cache_insert(), cache_update(), cache_expire_to_limit(), cache_rename(), and cache_remove().

[423]cornfs.c v0.0.1.1
  Added an S_ISREG check to read() and write(). Any non-regular file read/write calls are now mapped correctly to state files. Also fixed cache_insert().

[424]cornfs.c v0.0.1.2
  Turned off debugging, removed the hardcoded size limit.

[425]cornfs.c v0.0.1.3
  Removed stat()ing of cached files, replacing it with cache_exists(), particularly in read() and write(). Moved as many dirty checks into corn_cache as possible: cache_mark_dirty(), cache_mark_clean(), cache_is_clean(). Fixed some more mallocs.

[426]cornfs.c v0.0.1.4
  Added a dirty check to cache_expire (do not expire something from the cache if it does not have a good mirror!!!).

[427]cornfs.c v0.0.2.0
  Added copy_file_thread, copy_file_wait, and copy_file_nowait. Copying is now threadable!

[428]cornfs.c v0.0.3.0
  Added fsck_thread and xmp_init/xmp_destroy. The fsck_handler_* functions are not done yet, but are ready to fill in.

[429]cornfs.c v0.0.3.1
  Relabeled all xmp_ functions to cornfs_. Reworked the open() function quite a bit: moved much of the copy-to-cache logic into download_to_cache(). Defined the corn_file_info struct, used to pass the open() file descriptor to read() and write(). Moved code from release() into upload_from_cache(), added to fsck_cache_handler().

[430]cornfs.c v0.0.3.2
  Filled in fsck_meta_handler() and fsck_state_handler(). Fixed some logic errors in fsck_import_handler(). The filesystem appears to fsck correctly now.

[431]cornfs-v0.0.5.0.tar.bz2
  Added control_file_read()/write() and corn_file_info structure updates to handle control file IO. Profiled the code and found cache_update()/cache_insert() to be the biggest culprits; made corn_cache a two-way linked list to remedy this. Split cornfs.c up into numerous .c source files to simplify coding; now in a tarball because of the above.

Or grab the [432]latest cornfs.tar.bz2 with everything you need.

Things to fix:
* Hardlinks don't work right.
* Add a userspace tool to monitor the live filesystem in action.
* Work toward [433]MetaFS-style searchable metadata. The libmagic stuff is just a beginning. I'm looking into [434]id3lib integration now. The storage backend for the searchable data will likely be a [435]BerkeleyDB database.

The easiest way to build this is to grab [436]fuse-2.5.2.tar.gz and extract it:

$ tar xvzf fuse-2.5.2.tar.gz

Then extract the [437]cornfs.tar.bz2 somewhere and build it:

$ cd /tmp
$ wget http://ian.blenke.com/projects/cornfs/cornfs.tar.bz2
$ tar xvjf cornfs.tar.bz2
$ make -C /tmp/cornfs

You should now have a "cornfs" runtime. If not, drop me an email.

The directory tree to make this usable is hardcoded into the runtime at the moment (constants toward the top of the source file):

$ mkdir -p /data/cornfs/cfgs/servers
$ echo /remote/path > /data/cornfs/cfgs/servers/SERVERNAME
$ mkdir -p /data/cornfs/metadata/state
$ mkdir -p /data/cornfs/metadata/cache
$ mkdir -p /data/cornfs/metadata/SERVERNAME
$ mkdir -p /data/cornfs/import/SERVERNAME

The only missing bits are mounting the import/SERVERNAME directories for each filesystem configured in cfgs/servers/. You can use [438]SHFS, NFS, [439]DAVFS2, or whatever the heck your linux kernel has support for. CORNFS strives to be filesystem agnostic.

$ cd /data/cornfs/cfgs/servers
$ for server in * ; do mkdir -p /data/cornfs/import/$server ; shfsmount $server:`cat $server` /data/cornfs/import/$server ; done

Now start the cornfs server with a reference path:

$ cd /tmp/cornfs
$ mkdir /mnt/cornfs
$ ./cornfs /mnt/cornfs -d

The "-d" flag adds FUSE debugging. The lower you set the DEBUG level when building cornfs, the more debugging info will appear. It's an enum, so that can easily be reversed (verbosity). By default, the DEBUG level isn't set at all; in that mode, all debugging is macroed away to oblivion to speed things up.
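A minimal sketch of that "macroed away" pattern, assuming DEBUG is a numeric define (the macro and level names are my illustration, not necessarily those in cornfs.c):

/* Sketch: compile with -DDEBUG=n to enable debugging; leave DEBUG
 * unset to compile every dprintf() away entirely. */
#include <stdio.h>

enum { DEBUG_ERROR, DEBUG_WARN, DEBUG_INFO, DEBUG_TRACE };

#ifdef DEBUG
/* A lower DEBUG value lets more messages through, matching the
 * "lower level, more output" behavior described above. */
#define dprintf(level, ...) \
    do { if ((level) >= DEBUG) fprintf(stderr, __VA_ARGS__); } while (0)
#else
#define dprintf(level, ...) do { } while (0)
#endif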
CornFS is being used in production with NFS over SSH instead of SHFS, for stability. If you plan on using CornFS in a production role, please let me know.

Enjoy.

Mon, 25 Jul 2005

[442]Braindump for CORNFS

Please excuse this brain dump. As ideas come up, I continue to edit this node. Eventually, some structure will be enforced.

Inspired by [443]SSHFS and [444]SHFS, what would it take to make a filesystem that spans a cluster of servers and exposes aggregate diskspace while still mirroring data?

Exposing a filesystem with [445]FUSE on a master node would be ideal, with some form of WebDAV network access (using something as simple as Apache mod_dav) for client access. (A minimal FUSE skeleton appears after the survey below.)

Most distributed filesystems have the idea of a "master" for metadata:

* [446]Google's Filesystem has a master model with distributed "chunk servers" for the data. Not OpenSource. Also not POSIX; it's a programming API interface, and you can't "mount" it AFAIK. They could probably throw a FUSE filesystem together in short order if they really wanted to.
* [447]HDFS (previously NDFS), the Hadoop (Nutch) Distributed Filesystem, is a Java knockoff of the Google Filesystem. As a backend for the Apache Lucene Nutch project, it is a programmatic API interface filesystem. While you can't mount it, writing a FUSE frontend wouldn't be hard.
* [448]PVFS v1 has one master, v2 has multiple masters, but there is no mirroring - it is meant for high-IO scientific clusters.
* [449]OpenAFS has many servers and mirrors at the volume level, but requires a complex kerberos infrastructure and much manual volume creation to balance the layout. There is only one read/write volume; the rest of the volume replicas are read-only. Don't think I'm not tempted by OpenAFS, it just doesn't solve the need we have at the moment (long story).
* [450]CODA (sometimes referred to as AFSv3) offers disconnected roaming, but mirrors at the server level - not at a volume level.
* [451]Lustre has a master model, but mirrors on a volume level.
* [452]Intermezzo was Peter J. Braam's predecessor to Lustre. Ideal for straight mirroring, not for distributing files throughout a cluster.
* Both [453]GFS and [454]OpenGFS use a DLM cluster arrangement with shared storage to present a shared filesystem. CLVM mirroring is very young (lvmcreate -m is undocumented at best, allocation is impossible to specify, and you can't have more than one mirror log volume yet). Boy was this fun to play with.
* [455]CXFS is SGI's Clustered XFS. Very similar to GFS, only cross-platform and very scalable.
* [456]OpenSSI's CFS is little more than network mirroring across whatever underlying filesystem to present a unified root image for the OpenSSI cluster. Not what we're looking for.
* [457]MFS and DFSA are from [458]Mosix / [459]Openmosix. MFS is the feature of openMosix that gives you access to remote filesystems as if those filesystems were locally mounted. With DFSA enabled, system calls will be executed on the remote node without migrating the process back to its home node.

There are others, but these are the "big boys" that I can think of.

There are a couple of distributed filesystems that run without a master server. This isn't trivial to implement:

* [460]GPFS is IBM's General Parallel File System. What it claims is downright nirvana. I've not had the time (or money) to play with it. Seriously, read this page. I want a copy. Not OpenSource. ;)
* [461]xFS is Berkeley's Serverless Network File Service. Basically, a log-based network-striped filesystem with metadata "map" servers that trade "write tokens" between each other to update files. Storage servers in the cluster might each have some space set aside for this purpose.
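As mentioned above, the plan is to expose the filesystem through FUSE on the master. For reference, a minimal read-only filesystem against the FUSE 2.5 API (the version in the build instructions above) looks roughly like this - a toy that serves one static file, not CORNFS itself:

/* Sketch: bare-bones FUSE 2.5 filesystem exposing one read-only file.
 * Build with: gcc -Wall skel.c `pkg-config fuse --cflags --libs` -o skel */
#define FUSE_USE_VERSION 25
#include <fuse.h>
#include <sys/stat.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>

static const char *hello_str = "Hello from a FUSE skeleton\n";
static const char *hello_path = "/hello";

static int skel_getattr(const char *path, struct stat *stbuf)
{
    memset(stbuf, 0, sizeof(*stbuf));
    if (strcmp(path, "/") == 0) {
        stbuf->st_mode = S_IFDIR | 0755;
        stbuf->st_nlink = 2;
        return 0;
    }
    if (strcmp(path, hello_path) == 0) {
        stbuf->st_mode = S_IFREG | 0444;
        stbuf->st_nlink = 1;
        stbuf->st_size = strlen(hello_str);
        return 0;
    }
    return -ENOENT;
}

static int skel_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
                        off_t offset, struct fuse_file_info *fi)
{
    if (strcmp(path, "/") != 0)
        return -ENOENT;
    filler(buf, ".", NULL, 0);
    filler(buf, "..", NULL, 0);
    filler(buf, hello_path + 1, NULL, 0);
    return 0;
}

static int skel_open(const char *path, struct fuse_file_info *fi)
{
    if (strcmp(path, hello_path) != 0)
        return -ENOENT;
    if ((fi->flags & O_ACCMODE) != O_RDONLY)
        return -EACCES;
    return 0;
}

static int skel_read(const char *path, char *buf, size_t size,
                     off_t offset, struct fuse_file_info *fi)
{
    size_t len = strlen(hello_str);
    if (offset >= (off_t)len)
        return 0;
    if (offset + size > len)
        size = len - offset;
    memcpy(buf, hello_str + offset, size);
    return size;
}

static struct fuse_operations skel_oper = {
    .getattr = skel_getattr,
    .readdir = skel_readdir,
    .open    = skel_open,
    .read    = skel_read,
};

int main(int argc, char *argv[])
{
    return fuse_main(argc, argv, &skel_oper);
}

CORNFS fills in the same kind of fuse_operations table, with the cache and mirror logic behind each callback.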
The easiest way would be to create and mount a loopback file filesystem with the space to be shared:

storage-node$ mkdir -p /data/cornfs/spool/ /data/cornfs/export/
storage-node$ dd if=/dev/zero of=/data/cornfs/spool/storage_fs bs=1M count=5k
storage-node$ mke2fs -F /data/cornfs/spool/storage_fs
storage-node$ mount -o loop /data/cornfs/spool/storage_fs /data/cornfs/export/storage

On the Master, each storage server's remote filesystem would be mounted based on the master's config (which is likewise modeled as a filesystem tree):

master-node$ mkdir -p /data/cornfs/cfgs/nodes
master-node$ cd /data/cornfs/cfgs/nodes
master-node$ echo /data/cornfs/export/storage > storage-node1
master-node$ echo /data/cornfs/export/storage > storage-node2
master-node$ mkdir -p /data/cornfs/import
master-node$ for node in * ; do mkdir -p /data/cornfs/import/$node ; shfsmount $node:`cat $node` /data/cornfs/import/$node ; done

The beauty of this is that [462]shfs caches files and works with pretty much any host you can ssh into (including Windows via [463]Cygwin). There are some shortcomings to shfs: "df -i" doesn't work, extended attributes aren't maintained, and it only works from linux kernels (were there only a Mac port ;)

Each file in the master tree will have a FILE pathname, including the filename. Ideally, each file would have at least two copies. For our purposes, I'll suggest that this filesystem should endeavor to track two mirrors for every file, and clean up any "extra" copies.

The Master itself should have a few trees for the metadata. This leaves us with a few directory trees:

/data/cornfs/metadata/state/FILE
- The FILE has the same owner, group, permissions, ctime/atime/mtime, and size as the actual FILE (as a sparse file).
- Extended attributes make great storage for things like the primary and secondary mirror server names (setxattr/getxattr).

/data/cornfs/import/SERVER/FILE
- Contains the actual file, if SERVER is one of the FILE's mirrors.

/data/cornfs/metadata/SERVER/FILE
- A sparse version of the above file, used as a sanity check and for regenerating a SERVER from scratch.
- This local metadata replica of a remote server is the master's opinion of what the server actually holds.
- If something does not exist in this copy, but exists on the server, it should be removed from that server.
- If something exists in this copy but not on the server, corruption has occurred.

/data/cornfs/metadata/cache/FILE
- A directory tree containing the past N days' worth of accessed FILEs (pruned via cron).
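A rough sketch of how such a sparse state file might be created; the function name and the user.cornfs.* attribute names are my own, assuming two mirror names are recorded per file:

/* Sketch: create a sparse metadata state file mirroring the real file's
 * size, mode, ownership, and times; names here are hypothetical. */
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/xattr.h>
#include <fcntl.h>
#include <unistd.h>
#include <utime.h>
#include <string.h>

static int create_state_file(const char *state_path, const struct stat *st,
                             const char *mirror1, const char *mirror2)
{
    struct utimbuf times;
    int fd = open(state_path, O_WRONLY | O_CREAT, st->st_mode & 07777);
    if (fd < 0)
        return -1;
    /* ftruncate() past EOF allocates no data blocks, so the state
     * file reports the real size while staying sparse. */
    if (ftruncate(fd, st->st_size) < 0) {
        close(fd);
        return -1;
    }
    close(fd);

    /* Mirror ownership and times from the actual file. */
    chown(state_path, st->st_uid, st->st_gid);
    times.actime = st->st_atime;
    times.modtime = st->st_mtime;
    utime(state_path, &times);

    /* Record the two mirror servers as extended attributes. */
    setxattr(state_path, "user.cornfs.mirror1", mirror1, strlen(mirror1), 0);
    setxattr(state_path, "user.cornfs.mirror2", mirror2, strlen(mirror2), 0);
    return 0;
}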
This ends up requiring more than twice the number of actual file inodes to represent the full filesystem on the master: one full copy of the entire metadata state, one copy spread across all of the servers' metadata state replicas on the master server, and some fraction of the filesystem in cache for frequent and/or recent file access.

The Master filesystem would be mounted somewhere handy to be filled, like /master:

master$ mkdir /master
master$ /opt/cornfs/current/bin/cornfs /master

Any new files created under /master would be written to the cache until the user closes the file. On file close, the Master needs to:

1. Lock the file in the metadata state tree so that no two close operations can occur in parallel. Run a "df" on all of the /data/cornfs/import/ filesystems to see which two have the most available space, then fork off a copy to those respective filesystems.
2. Create a /data/cornfs/metadata/state/ sparse file.
3. Tag the /data/cornfs/metadata/state/ file with a "mirror1" extended attribute when the first copy completes (setxattr). Update the /data/cornfs/metadata/SERVER/ file to mark that the copy was successful.
4. Tag the /data/cornfs/metadata/state/ file with a "mirror2" extended attribute when the second copy completes (setxattr). Update the /data/cornfs/metadata/SERVER/ file to mark that the copy was successful.

When release() is called for a file, if any write() calls were made on the file, it should have been flagged as "dirty" (in an associative array in memory, along with an extended attribute just in case the running daemon is killed). If a file is dirty, it needs to be written out to the mirrors on release(). If a file is clean, don't do anything at all! The file is handily in the cache for the next access.

When reading a file:

1. Check /data/cornfs/metadata/cache/ for the file. Open it if it exists.
2. If the file does not exist, one of the mirrors is selected for the file.
3. Copy the file to the cache. There is nothing wrong with allowing the client to read while the copy is underway, as long as it doesn't try to read more data than has been streamed from the mirror server so far (a seek or read() past the EOF as the cache file grows). In that case, the read or seek should block until the entire file is in the cache.
4. If no mirrors are accessible, an error is returned.

When moving a file/directory:

1. Move the state/ copy of the file, if it exists. If this fails for any reason, pass the error code up.
2. Move the cache/ copy of the file, if it exists.
3. Iterate through the local metadata/SERVER trees, moving the file, if it exists.
4. Iterate through the remote import/SERVER trees, moving the file, if it exists.

When unlinking (removing) a file/directory:

1. Remove the state/ copy of the file, if it exists. If this fails for any reason, pass the error code up.
2. Remove any cache/ copy of the file, if it exists.
3. Iterate through the local metadata/SERVER trees, removing the file/dir, if it exists.
4. Iterate through the remote import/SERVER trees, removing the file/dir, if it exists.

Changing permissions, access times, or ownership would really only affect the /data/cornfs/metadata/state/ sparse file. Most metadata information would use the state sparse file.

A "helper daemon" needs to run periodically to make sure that servers are accessible:

1. If a server becomes unreachable but has not yet timed out as "dead", read()s fail over to the other mirror (or fail if both mirrors are unreachable - such operations should probably trigger a mirror copy() as well), and write()s move the unreachable mirror of a file over to another reachable server.
2. If a server is totally inaccessible for a long enough period of time to be marked "dead", the helper daemon needs to refer to the /data/cornfs/metadata/SERVER/ tree and create a new mirrored copy of each file across the farm. In the process, the metadata/SERVER tree will be pruned.
3. A "sanity" script must be periodically run against each metadata/SERVER tree to see if a copy of a file exists on the server that does NOT exist in the metadata/SERVER tree. If so, that's an orphaned mirror, and it should be deleted. Orphans happen when the master's metadata state for a server says something shouldn't be there, but the server was down at the time when the mirror would have been removed.

As metadata state is updated, locking must be used to ensure atomic operations on the metadata tree. We would not want multiple updates to a file to occur out of order due to a delay in a copy operation to a server in the field.

Speed and availability should be consistently monitored, both to select faster-responding mirrors (if possible) and to note that nodes are unreachable for file operations, triggering a new mirror for a file with a broken mirror.

Symlinks, block/character devices, and other non-files are stored in the metadata state/ tree alongside the sparse files that represent the actual files being distributed. There is no "inode" construct per se outside of the metadata state/ tree. That is the "master metadata" that most filesystem operations use. Only when reading/writing, opening/closing, moving, or unlinking do the mounted server filesystems under import/ get involved to hold the data.

Making this a single instance store (ideal for backups) would require just a bit more logic to include an SHA1/MD5 hash encoded as a directory tree (broken up by octet into a path tree structure); something like:

/data/cornfs/metadata/state/SHA1/MD5/object
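A minimal sketch of that octet-splitting encoding, assuming the hex digest string is already computed (the function name and one-octet-per-directory layout are my own illustration):

/* Sketch: turn a hex digest into a path tree, one octet (two hex
 * characters) per directory level, e.g. "d4/1d/8c/d9/...". */
#include <stdio.h>
#include <string.h>

static void digest_to_path(const char *hexdigest, char *out, size_t outlen)
{
    size_t len = strlen(hexdigest);
    size_t pos = 0;
    size_t i;
    out[0] = '\0';
    for (i = 0; i + 1 < len && pos + 4 <= outlen; i += 2) {
        /* Each component is two hex characters, '/'-separated. */
        pos += snprintf(out + pos, outlen - pos, "%s%.2s",
                        i ? "/" : "", hexdigest + i);
    }
}

int main(void)
{
    char path[128];
    digest_to_path("d41d8cd98f00b204e9800998ecf8427e", path, sizeof(path));
    printf("%s\n", path); /* prints d4/1d/8c/d9/8f/00/... */
    return 0;
}

Splitting two characters per level keeps any single directory from accumulating millions of entries while staying cheap to compute.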
Another neat extension would be to build a "revision history" of documents in the filesystem:

1. On close(), if a file has changed, it should be archived.
2. Move the original version of the file into a revision/ metadata tree by hash ID.
3. Copy in the new version of the file from the cache to the mirrors.
4. Tag the state/ tree of the new file with an extended attribute naming the "previous revision"'s SHA1/MD5 hash in the revision/ metadata tree.

This would address files that change, but would not save us from directory trees that are removed. For this, we would want an archive/ metadata tree by datestamp:

1. On unlink(), create an archive/TIMESTAMP/ metadata tree and move the file there.

Moving files and/or directory trees around in state/ would maintain the extended attributes, effectively retaining the revisionist history FOR FREE! When files are moved, the mirrors must be moved as well. Reconstructing things from the revision/ and archive/ trees would be interesting, but well beyond the initial scope of this endeavor.

The quickest way to throw this together would be with the [464]Fuse.pm perl module. I'm actively writing code now. The eventual goal would be to write a thread-aware C version based on the above prototype, primarily for speed reasons.

More to come.. SOON..