Aniruddha put forth on 7/27/2010 12:03 PM:
> On Tue, Jul 27, 2010 at 6:19 PM, Stan Hoeppner <s...@hardwarefreak.com> wrote:
>> Volkan YAZICI put forth on 7/27/2010 8:22 AM:
>>> You are missing a very important point: durability across power
>>> failures. (Excuse me, but a majority of GNU/Linux users are not
>>> running on a UPS or anything similar.) And that's where XFS totally
>>> fails[1][2].
>>>
>>> [1] http://linux.derkeiler.com/Mailing-Lists/Debian/2008-11/msg00097.html
>>
>> ....
>>
>> a fantastic piece of FOSS into which many top-of-their-game kernel
>> engineers have put tens of thousands of man hours, striving to make it
>> the best it can be--and are wildly succeeding.
>
> That was very informative, thanks. You got me curious, and I will test
> XFS on my home system. To be honest, I am still a little wary of using
> XFS in a production environment. For years now I have heard stories of
> power failures with catastrophic results when using XFS. Is anyone
> using XFS in a mission-critical production environment? Does anyone
> have experience with that?
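First, a word on those power failure stories: the "zeroed files after a
crash" reports against XFS generally trace back to applications that
rewrite files in place and never call fsync(). No filesystem can
guarantee data the application never pushed to disk. As a rough
illustration (Python here purely for readability, and durable_write is
a made-up name for this sketch, not anything from the XFS tools), this
is the write-fsync-rename pattern that survives power loss on XFS or
any other journaling filesystem:

  import os

  def durable_write(path, data):
      # Illustrative sketch: write the new contents to a temp file,
      # force them to disk, then atomically rename over the original.
      # After a power cut you see either the old file or the new one,
      # never a truncated or zero-length file.
      tmp = path + ".tmp"
      with open(tmp, "wb") as f:
          f.write(data)
          f.flush()
          os.fsync(f.fileno())      # push file data and metadata to disk
      os.rename(tmp, path)          # atomic replace on POSIX filesystems
      dirfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
      try:
          os.fsync(dirfd)           # make the rename itself durable
      finally:
          os.close(dirfd)

  durable_write("settings.conf", b"important bytes\n")

Applications that write this way don't lose files on XFS, UPS or no
UPS. Now, as to who trusts XFS in production: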
How about, and this will probably shock many of you:

1. Kernel.org

All of the Linux source, including what becomes the Debian kernel and
the kernels of all other Linux distros, is served from XFS filesystems:

"A bit more than a year ago (as of October 2008) kernel.org, in an ever
increasing need to squeeze more performance out of its machines, made
the leap of migrating the primary mirror machines (mirrors.kernel.org)
to XFS. We cite a number of reasons, including: fscking 5.5T of disk is
long and painful, we were hitting various cache issues, and we were
seeking better performance out of our file system."

"After initial tests looked positive we made the jump, and have been
quite happy with the results. With an instant increase in performance
and throughput, as well as the worst xfs_check we've ever seen taking
10 minutes, we were quite happy. Subsequently we've moved all primary
mirroring file-systems to XFS, including www.kernel.org and
mirrors.kernel.org. With an average constant movement of about 400mbps
around the world, and with peaks into the 3.1gbps range serving
thousands of users simultaneously, it's been a file system that has
taken the brunt we can throw at it and held up spectacularly."

2. NASA Advanced Supercomputing Facility, NASA Ames Research Center

See my other post for details.

3. Industrial Light and Magic (ILM)

At one time ILM had one of the largest installed SGI SAN storage
systems on the planet, possibly _the_ largest, running XFS. It backed
their render farm(s). They don't currently have any render-system info
on their site that I can find. Given the number, size, and scope of
their animation projects, and the size to which their rendering farm
has grown, they may well have switched SAN vendors over the years. I
don't know whether they still use XFS, but I would think so, given that
they're working with multi-hundred-gigabyte files daily.

And many, many others. What you have to understand is that XFS has been
around a long, long time: 17 years across both IRIX and Linux. It's
older than EXT2. Back before cheap Intel/AMD clusters took over the
supercomputing marketplace, SGI MIPS IRIX systems with XFS owned
upwards of 30-40% of that market. XFS, in various platforms and
versions, has been in government labs, corporations, and academia for
over a decade. At one time Prof. Stephen Hawking had his own "personal"
32-CPU SGI Origin 3800 for running cosmology calculations to prove his
theories. It had XFS filesystems, as have all SGI systems since 1994.

Here's a list of organizations that have volunteered information to
xfs.org. It is by far not a complete list, and most of the major SGI
customers with XFS on huge SAN systems aren't listed. Note that NAS at
NASA Ames isn't listed either.

http://xfs.org/index.php/XFS_Companies

--
Stan