Pool multiple filesystems into ONE directory/share w/ mhddfs

Postby xenoxaos » Fri Nov 09, 2012 11:30 pm

Probably like a few of you, I have a whole LOT of movies/media. A couple of years ago I converted every single DVD/BluRay I owned to an mkv. At first it was easy: I had a couple of decent-sized hard drives in my desktop computer. Put DVD rips on one drive, BluRay rips on another, and share them over the network. Then I got into the ARM world and moved storage from my desktop to USB drives plugged into a Dockstar. I ended up with six or seven hard drives of varying sizes plugged into it. Then I asked for an eSATA enclosure for xmas (which I got) and now have it stocked with 2TB drives.

I was looking for a way to pool all my drives with some fault tolerance. I ruled out RAID0 because if I lost one drive, I would lose EVERYTHING, and that wasn't acceptable. I could live with a single drive failure (I could always re-rip the discs if necessary). RAID1 was out too: I already have about 4.8TB of data on the enclosure. RAID5 looked better, and I had already started migrating stuff around to do that. Then I calculated that I would only have about 300GB free afterwards, and if I later wanted to swap a 2TB drive for a 4TB drive, I wouldn't be able to grow the array until I had replaced ALL of the drives with larger ones. BTRFS looked promising: you can add devices to a pool, remove a device from a pool, and if one drive dies you can still recover files that are stored wholly on the surviving drives. But movies are quite large (600M to 15G), so they would be almost guaranteed to end up spread over multiple drives.

tl;dr

Enter mhddfs. Mhddfs is a FUSE filesystem that joins multiple mount points into one place: it shows everything that's on them in a single location, and when you write to it, it puts the file on the first member with enough free space. Almost exactly what I was looking for. I used to have a script that went through my drives and made symlinks to them in one directory, so mhddfs is a clear upgrade.
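The pick-first-drive-with-room idea can be sketched in a few lines of shell. This is a toy model, not mhddfs itself (the real thing also honors a configurable move-size limit before spilling to the next drive), and the drive names and free-space numbers here are made up for illustration:

```shell
#!/bin/sh
# Toy model of mhddfs's write policy: given each member drive's free space
# (in GB, as NAME:FREE pairs, in mount order), a new file of size $1 lands
# on the FIRST drive with enough room.
pick_drive() {
    need=$1; shift
    for entry in "$@"; do
        drive=${entry%%:*}
        free=${entry##*:}
        if [ "$free" -ge "$need" ]; then
            echo "$drive"
            return 0
        fi
    done
    return 1   # no member has room
}

# A 10 GB file fits on the first drive listed; a 50 GB file skips the
# nearly-full members and lands on the first one with space left.
pick_drive 10 Rhodium:30 Thallium:41 Yttrium:302 Palladium:1700
pick_drive 50 Rhodium:30 Thallium:41 Yttrium:302 Palladium:1700
```

This is also why drives early in the list fill up first, as in the df output below.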

I've added mhddfs to our AUR repo. You can install it with:
[root@alarm ~]# pacman -S mhddfs

Then you can mount your drives like this (example from my device):
[root@alarm ~]# mhddfs /media/Rhodium/,/media/Yttrium/,/media/Palladium/,/media/Thallium/ /media/Pool/ -o allow_other

That will spit out:
mhddfs: directory '/media/Rhodium/' added to list
mhddfs: directory '/media/Yttrium/' added to list
mhddfs: directory '/media/Palladium/' added to list
mhddfs: directory '/media/Thallium/' added to list
mhddfs: mount to: /media/Pool/
mhddfs: move size limit 0%


and then if I run df -h I get something pretty cool looking!
[root@alarm Movies]# df -h
Filesystem                                                                         Size  Used Avail Use% Mounted on
rootfs                                                                              19G  5.0G   13G  29% /
/dev/root                                                                           19G  5.0G   13G  29% /
devtmpfs                                                                            60M     0   60M   0% /dev
run                                                                                 61M  280K   60M   1% /run
shm                                                                                 61M     0   61M   0% /dev/shm
/dev/sdb1                                                                          1.8T  196M  1.7T   1% /media/Palladium
/dev/sdc1                                                                          1.8T  1.7T   30G  99% /media/Rhodium
/dev/sdd1                                                                          1.8T  1.7T   41G  98% /media/Thallium
/dev/sde1                                                                          1.8T  1.5T  302G  83% /media/Yttrium
tmpfs                                                                               61M     0   61M   0% /tmp
/media/Rhodium/;/media/Yttrium/;/media/Palladium/;/media/Thallium/                 7.2T  4.8T  2.1T  70% /media/Pool


This makes managing groups of random sized disks MUCH easier! And all I have to do to add storage is plug in a new drive, format it, and remount the pool.
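If you want the pool to come back after a reboot, an /etc/fstab line like the following should work. The paths are the ones from the example above; the mlimit option, if your mhddfs build supports it, sets how much free space a member must have before writes move on to the next drive (this line is a sketch, check your mhddfs man page for the exact options):

```
# /etc/fstab -- member mount points must already be mounted first
mhddfs#/media/Rhodium,/media/Yttrium,/media/Palladium,/media/Thallium /media/Pool fuse defaults,allow_other,mlimit=4G 0 0
```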
Arch Linux ARM exists and continues to grow through community support, please donate today!
xenoxaos
Developer
 
Posts: 323
Joined: Thu Jan 06, 2011 1:45 am

Re: Pool multiple filesystems into ONE directory/share w/ mh

Postby sambul13 » Sun Nov 11, 2012 7:17 pm

Do I understand correctly that you suggest mhddfs as an ideal solution for a 2-to-8 drive bay HD enclosure with an eSATA port multiplier, instead of a less reliable or less flexible RAID? It's an interesting idea to test.

One issue I may have with it: once set up, will it still be possible to copy files to any disk in such an enclosure via eSATA or USB when it's directly connected to a Win7 PC? Is any software needed? I usually copy files from the PC directly to lessen heat buildup in the Dockstar's processor. I wanted to download torrents to a USB HD via the Dockstar with Transmission, but now consider that only a spare method due to the high processor load during lengthy torrent downloads and writes, even to EXT4-formatted HDs. :)

In short, would using mhddfs still allow going back to using each drive in such an enclosure separately, and will a single TV show's episodes end up mixed among multiple drives' directories when accessed from a Windows PC? Would sorting by some criteria, or gathering TV shows and movies on separate drives, be possible?
sambul13
 
Posts: 258
Joined: Sat Aug 18, 2012 10:32 pm

Re: Pool multiple filesystems into ONE directory/share w/ mh

Postby WarheadsSE » Sun Nov 11, 2012 7:51 pm

sambul13 wrote:I usually copy files from PC directly to lessen heat buildup of the Dockstar proc.

This is really not a valid reason. This is an embedded system with far higher heat tolerances than x86. You will get faster transfer rates directly attached to an x86 machine, but that also depends on it having a filesystem it knows how to write to.
Core Developer
Remember: Arch Linux ARM is entirely community donation supported!
WarheadsSE
Developer
 
Posts: 6660
Joined: Mon Oct 18, 2010 2:12 pm

Re: Pool multiple filesystems into ONE directory/share w/ mh

Postby xenoxaos » Sun Nov 11, 2012 9:23 pm

Each file lives whole on one disk, so you can pull out any drive and mount it on any Linux box; it will just have the portion of the files that were written to it. I didn't say it would increase fault tolerance, but it allows combining all of the drives into one location without decreasing fault tolerance like RAID0 does. All drives are mounted separately, then a FUSE overlay smushes them all together. RTFM.
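You can convince yourself of this by looping over the member mount points and seeing which one actually holds a given file. A sketch below simulates the members with temporary directories and a made-up file name; on a real setup the same loop would run over /media/Rhodium, /media/Yttrium, etc.:

```shell
#!/bin/sh
# Simulate two pool members with temp dirs standing in for real drives.
pool=$(mktemp -d)
mkdir -p "$pool/driveA/Movies" "$pool/driveB/Movies"
touch "$pool/driveB/Movies/example.mkv"   # hypothetical file name

# The file exists whole on exactly one member, never split across them.
found=""
for d in driveA driveB; do
    if [ -e "$pool/$d/Movies/example.mkv" ]; then
        found=$d
        echo "example.mkv lives whole on: $d"
    fi
done
rm -rf "$pool"
```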
Arch Linux ARM exists and continues to grow through community support, please donate today!
xenoxaos
Developer
 
Posts: 323
Joined: Thu Jan 06, 2011 1:45 am

Re: Pool multiple filesystems into ONE directory/share w/ mh

Postby sambul13 » Mon Nov 12, 2012 1:26 am

WarheadsSE wrote:this is an embedded system with far higher heat tolerances than x86.
I didn't run durability tests due to a severe shortage of Dockstar devices ;) , but there are reports on the web, including this forum, that continuously transferring a massive media collection to a large USB drive via the Dockstar over an extended period may result in its sudden death. I guess every owner should choose for themselves what to do with their device. Some cooling arrangements may help if set up efficiently.
sambul13
 
Posts: 258
Joined: Sat Aug 18, 2012 10:32 pm

Re: Pool multiple filesystems into ONE directory/share w/ mh

Postby WarheadsSE » Mon Nov 12, 2012 1:47 am

If you live in the tropics, I suppose. If you are baking a dockstar, you have larger issues.
Core Developer
Remember: Arch Linux ARM is entirely community donation supported!
WarheadsSE
Developer
 
Posts: 6660
Joined: Mon Oct 18, 2010 2:12 pm

Re: Pool multiple filesystems into ONE directory/share w/ mh

Postby bodhi » Mon Nov 12, 2012 6:09 am

Use a set of rubber feet to raise the Dockstar up a bit. You'll be surprised how much cooler it runs with just a little more ventilation (1/8" or 1/4") under the base. The stock Dockstar "feet" are really just sticky pads, with no ventilation below. All my plugs have feet :-)

Walmart or other superstores in your locale should have these, sold in sets of 20 or 40, very cheap.
bodhi
 
Posts: 224
Joined: Sat Aug 13, 2011 10:06 am

Re: Pool multiple filesystems into ONE directory/share w/ mh

Postby sambul13 » Mon Nov 12, 2012 4:47 pm

That's what I did with some older laptops, and indeed it does wonders. I run the Dockstar on its edge, since that allows attaching the cables from the top in a service closet, saving some space and improving airflow. :)

Anyway, mhddfs seems like a good idea to try with the eSATA port multiplier enclosure I'm getting. I haven't been able to find a SATA sharing switch or hub so far to switch the enclosure between the Dockstar and the PC, similar to a USB sharing switch; that would allow testing mhddfs with both.
sambul13
 
Posts: 258
Joined: Sat Aug 18, 2012 10:32 pm

Re: Pool multiple filesystems into ONE directory/share w/ mh

Postby LightC » Sun May 12, 2013 3:58 am

On all new builds of Arch Linux ARM v5, mhddfs seems to be broken for writes. I think it has something to do with the move to systemd. Has anyone else had any issues?

I describe the issue I'm having here: viewtopic.php?f=9&t=2438
LightC
 
Posts: 6
Joined: Mon Jan 10, 2011 6:54 pm

Re: Pool multiple filesystems into ONE directory/share w/ mh

Postby moonman » Wed Jun 05, 2013 5:13 am

Try this build; I just freshly compiled it. If the problem is still there, then it's a problem with the code itself.

pacman -U https://dl.dropboxusercontent.com/u/15043728/ArchLinuxArm/mhddfs/mhddfs-0.1.39-1-arm.pkg.tar.xz
Pogoplug V4 | GoFlex Home | Raspberry Pi B 512 | CuBox-i4 Pro | ClearFog | BeagleBone Black | Odroid U2 | Odroid C1 | Odroid XU4
-----------------------------------------------------------------------------------------------------------------------
[armv5] Updated U-Boot | |[armv5] How to install my.pogoplug.com service | [armv5] NAND Rescue System
moonman
Developer
 
Posts: 3074
Joined: Sat Jan 15, 2011 3:36 am
Location: Calgary, Canada

