On 8/10/20 1:43 AM, Robert G (Doc) Savage via CentOS wrote:
> As if last weekend's UEFI debacle wasn't bad enough, it now seems the
> latest C8 kernel (4.18.0-193.14.2) is incompatible with the current
> ZFSOnLinux packages (0.8.4-1). When booted to the latest kernel, ZFS is
> inaccessible on my C8
On Mon, Aug 10, 2020 at 01:43:00AM -0500, Robert G (Doc) Savage via CentOS
wrote:
> As if last weekend's UEFI debacle wasn't bad enough, it now seems the
> latest C8 kernel (4.18.0-193.14.2) is incompatible with the current
> ZFSOnLinux packages (0.8.4-1). When booted to the latest kernel, ZFS is
On 10/08/2020 07:43, Robert G (Doc) Savage via CentOS wrote:
As if last weekend's UEFI debacle wasn't bad enough, it now seems the
latest C8 kernel (4.18.0-193.14.2) is incompatible with the current
ZFSOnLinux packages (0.8.4-1). When booted to the latest kernel, ZFS is
inaccessible on my C8 stor
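A quick sanity check after a kernel update, sketched here on the assumption that
the DKMS-built zfs packages are in use (kABI-tracking kmod users would instead
wait for a matching kmod build):

  dkms status        # zfs should be listed as built for the running kernel
  modprobe zfs       # fails if no module exists for this kernel
  zpool import -a    # re-import the pools once the module loads

If the module is missing, booting the previous kernel from the grub menu is the
usual stopgap until the packages catch up.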
mark wrote:
> mark wrote:
>>
>> testing zfs. I'd created a raidz2 zpool, ran a large backup onto it. Then I
>> pulled one drive (11-drive, one hot spare pool), and it resilvered with
>> the hot spare. zpool status -x shows me state: DEGRADED status: One or
>> more devices could not be used because the
try, zpool replace export1 sdb sdl
but it says the spare is already in use, so I'm not sure why the resilver
isn't already in progress. You might have to remove sdl from the spares
list before you can use it in a replace.
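A minimal sketch of the two usual ways out of that state, assuming the layout
described above (failed sdb, hot spare sdl already resilvered in its place):

  # promote the spare to a permanent member of the vdev; ZFS drops sdl
  # from the spares list automatically
  zpool detach export1 sdb

  # or keep sdl as a dedicated spare and resilver onto a replacement disk
  # instead (sdm is a placeholder for the new device)
  zpool replace export1 sdb sdm
  zpool status export1    # watch the resilver progress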
On Fri, Jun 14, 2019 at 9:03 AM mark wrote:
> Hi, folks,
>
>testing
mark wrote:
> Hi, folks,
>
>
> testing zfs. I'd created a raidz2 zpool, ran a large backup onto it. Then I
> pulled one drive (11-drive, one hot spare pool), and it resilvered with
> the hot spare. zpool status -x shows me state: DEGRADED
> status: One or more devices could not be used because the labe
On Thu, Dec 6, 2018 at 3:23 PM Jonathan Billings wrote:
>
> On Dec 6, 2018, at 17:45, david wrote:
> >
> > Folks
> >
> > I have two USB connected drives, configured as a mirrored-pair in ZFS.
> > It's been working fine UNTIL I updated CentOS
> > from 3.10.0-862.14.4.el7.x86_64
> > to 3.10.0-95
On Dec 6, 2018, at 17:45, david wrote:
>
> Folks
>
> I have two USB connected drives, configured as a mirrored-pair in ZFS. It's
> been working fine UNTIL I updated CentOS
> from 3.10.0-862.14.4.el7.x86_64
> to 3.10.0-957.1.3.el7.x86_64
>
> The import of the pools didn't happen at boot. Whe
On 03-07-2018 15:39 Nux! wrote:
Watch out, ZFS on Linux is not as good as on FreeBSD/Solaris.
Just recently there was an issue with data loss.
https://github.com/zfsonlinux/zfs/issues/7401
I know; I was on the very same github issue you linked.
While the bug was very unfortunate, ZFS remai
But besides this issue it was in my experience rock solid.
On Tue, Jul 3, 2018 at 3:40 PM Nux! wrote:
> Watch out, ZFS on Linux is not as good as on FreeBSD/Solaris.
> Just recently there was an issue with data loss.
> https://github.com/zfsonlinux/zfs/issues/7401
>
> hth
> Lucian
>
> --
> Sent
Watch out, ZFS on Linux is not as good as on FreeBSD/Solaris.
Just recently there was an issue with data loss.
https://github.com/zfsonlinux/zfs/issues/7401
hth
Lucian
--
Sent from the Delta quadrant using Borg technology!
Nux!
www.nux.ro
- Original Message -
> From: "Gionatan Danti"
>
I've been using ZFS on Linux with CentOS 6 for almost two years.
http://zfsonlinux.org/
I'm not using it to boot from it or for any vm stuff, but just for storage
disks.
Recently updated to ZOL version 0.7.9.
To install, I simply follow the instructions from this page:
https://github.com/zfsonlinux/zf
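For reference, the short version of those instructions (a sketch; the exact
zfs-release repository package depends on the CentOS release, see the linked
wiki page):

  # enable the zfs-release repository as described on the wiki, then:
  yum install zfs     # the DKMS packages build the module for the running kernel
  modprobe zfs
  zpool import -a     # or zpool create ... for a new pool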
On 25-06-2018 23:59 Yves Bellefeuille wrote:
I think the simplest solution would be to add this (which I haven't
tried):
http://zfsonlinux.org/
https://github.com/zfsonlinux/zfs/wiki/RHEL-and-CentOS
to the Available Repositories for CentOS wiki page:
https://wiki.centos.org/fr/AdditionalR
Gionatan Danti wrote:
> I searched the list but I did not find anything regarding native ZFS.
> Any feedback on the matter is welcomed. Thanks.
I think the simplest solution would be to add this (which I haven't
tried):
http://zfsonlinux.org/
https://github.com/zfsonlinux/zfs/wiki/RHEL-and-Cent
On Thu, October 30, 2014 21:47, david wrote:
> Folks
>
> I have a ZFS file system. It seems to be scrubbing too often. As I
> type, it's 5 hours into the process with 36 hours to go, and seems to
> be doing it several times a week on a slow drive.
>
> I cannot find an option to control the frequ
On 10/30/2014 8:46 PM, david wrote:
OK, I found it. Is this option documented somewhere? Are there other
frequency settings? like once-a-month?
I've only used ZFS on Solaris, where there are no automatic scrubs
unless you script your own via cron, and FreeNAS, where they are done at
interval
OK, I found it. Is this option documented somewhere? Are there
other frequency settings? like once-a-month?
At 08:15 PM 10/30/2014, you wrote:
On 10/30/2014 6:47 PM, david wrote:
I have a ZFS file system. It seems to be scrubbing too often. As
I type, it's 5 hours into the process with 3
On 10/30/2014 6:47 PM, david wrote:
I have a ZFS file system. It seems to be scrubbing too often. As I
type, it's 5 hours into the process with 36 hours to go, and seems to
be doing it several times a week on a slow drive.
I cannot find an option to control the frequency; crontab has no
ref
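On a stock ZFS on Linux install of that era there is no built-in scrub
scheduler; the frequency is whatever cron (or a cron snippet shipped by the
packages) asks for. A minimal sketch for a monthly scrub, with 'tank' as a
placeholder pool name:

  # /etc/cron.d/zfs-scrub
  0 3 1 * *  root  /sbin/zpool scrub tank

  zpool status tank     # check scrub progress
  zpool scrub -s tank   # stop a scrub that is already running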
On Tue, 2014-09-16 at 13:47 -0400, m.r...@5-cent.us wrote:
> Given the upcoming elections, I like Scottish Law Commission,
I don't think they will get independence from the clowns in London,
England, this time, but I do wish them every possible success !
--
Regards,
Paul.
England, EU.
Valeri Galtsev wrote:
> On Tue, September 16, 2014 12:47 pm, m.r...@5-cent.us wrote:
>> Warren Young wrote:
>>> On 9/15/2014 16:54, Paul Heinlein wrote:
On Mon, 15 Sep 2014, Valeri Galtsev wrote:
>>> that adding an SSD to a ZFS pool to accelerate it isn't free. Where he
>>> works, it eff
I would strongly suggest anyone interested in ZFS on Linux join the
zfs-discuss list. http://zfsonlinux.org/lists.html There is a fairly good
signal to noise ratio.
On 16 September 2014 20:17, Andrew Holway wrote:
> > Referendum.
>>
>> I sit, and type, corrected.
>>
>
> :)
>
>
> > Referendum.
>
> I sit, and type, corrected.
>
:)
Andrew Holway wrote:
>>
>> Given the upcoming elections,
>
> Referendum.
I sit, and type, corrected.
mark
On Tue, September 16, 2014 12:47 pm, m.r...@5-cent.us wrote:
> Warren Young wrote:
>> On 9/15/2014 16:54, Paul Heinlein wrote:
>>> On Mon, 15 Sep 2014, Valeri Galtsev wrote:
>>>
>>> 1. a throw-away line meant as a joke,
>>
>> I didn't take it as a joke so much as a comment that where he works,
>
>
> Given the upcoming elections,
Referendum.
Warren Young wrote:
> On 9/15/2014 16:54, Paul Heinlein wrote:
>> On Mon, 15 Sep 2014, Valeri Galtsev wrote:
>>
>> 1. a throw-away line meant as a joke,
>
> I didn't take it as a joke so much as a comment that where he works,
> high-end hardware is available for the asking. SLC is the most
> exp
On Tue, September 16, 2014 12:03 pm, Warren Young wrote:
> On 9/15/2014 16:54, Paul Heinlein wrote:
>> On Mon, 15 Sep 2014, Valeri Galtsev wrote:
>>
>> 1. a throw-away line meant as a joke,
>
> I didn't take it as a joke so much as a comment that where he works,
> high-end hardware is available
On 9/15/2014 16:54, Paul Heinlein wrote:
On Mon, 15 Sep 2014, Valeri Galtsev wrote:
1. a throw-away line meant as a joke,
I didn't take it as a joke so much as a comment that where he works,
high-end hardware is available for the asking. SLC is the most
expensive sort of SSD; if it's so r
On 2014-09-15 , kkel...@wombat.san-francisco.ca.us wrote:
So the ZoL folks want one more feature before calling it 1.0; otherwise
they believe it's production ready. Only your own testing can convince
you that it's truly production ready.
--keith
That's encouraging news, something I've bee
On Mon, September 15, 2014 18:54, Paul Heinlein wrote:
> On Mon, 15 Sep 2014, Valeri Galtsev wrote:
>
>> Am I the only one who is tempted to say: people, could you kindly
>> start deciphering your abbreviations. I know, I know, computer
>> science has used _that_ abbreviation for years. But we definit
On 2014-09-15, Warren Young wrote:
> On 9/15/2014 13:58, Eero Volotinen wrote:
>> zfs release zero dot something does not sound like production ready ?
>
> https://clusterhq.com/blog/state-zfs-on-linux/
That's a super (and timely!) post on ZFS. I saw in particular this
section.
"Upgrades requir
On Mon, 15 Sep 2014, Valeri Galtsev wrote:
On Mon, September 15, 2014 4:49 pm, Andrew Holway wrote:
The SSD and second CPU core are not free.
Where I come from, the streets are paved with SLC
Is it Salt Lake City that you are from? (that is so rich with
Second Level Cache... That is wha
On Mon, September 15, 2014 4:49 pm, Andrew Holway wrote:
>> The SSD and second CPU core are not free.
>>
>
> Where I come from, the streets are paved with SLC
Is it Salt Lake City that you are from? (that is so rich with Second
Level Cache... That is what you actually meant, I figure)
Am I the
> The SSD and second CPU core are not free.
>
Where I come from, the streets are paved with SLC
On 9/15/2014 14:48, Andrew Holway wrote:
Any comparison between ZFS and non-ZFS probably overlooks things like
fully-checksummed data (not just metadata) and redundant copies. ZFS will
always be slower than filesystems without these features. TANSTAAFL.
Not really true. It hugely depends
>
> Any comparison between ZFS and non-ZFS probably overlooks things like
> fully-checksummed data (not just metadata) and redundant copies. ZFS will
> always be slower than filesystems without these features. TANSTAAFL.
Not really true. It hugely depends on your workload. For example, if you
Eero Volotinen wrote:
> 2014-09-15 22:51 GMT+03:00 Steve Thompson :
>> On Mon, 15 Sep 2014, Fernando Cassia wrote:
>>
>> It´s called BTRFS.
>>> It´s supported by SUSE, Fujitsu, Oracle, among others.
>>
>> Yeah, but is it supported by the *US Government* ???
>
> zfs release zero dot something does
Steve Thompson wrote:
> On Mon, 15 Sep 2014, Fernando Cassia wrote:
>
>> It´s called BTRFS.
>> It´s supported by SUSE, Fujitsu, Oracle, among others.
>
> Yeah, but is it supported by the *US Government* ???
Hey, selinux is
On 9/15/2014 13:58, Eero Volotinen wrote:
zfs release zero dot something does not sound like production ready ?
https://clusterhq.com/blog/state-zfs-on-linux/
2014-09-15 22:51 GMT+03:00 Steve Thompson :
> On Mon, 15 Sep 2014, Fernando Cassia wrote:
>
> It´s called BTRFS.
>> It´s supported by SUSE, Fujitsu, Oracle, among others.
>>
>
> Yeah, but is it supported by the *US Government* ???
zfs release zero dot something does not sound like production re
On Mon, 15 Sep 2014, Fernando Cassia wrote:
It´s called BTRFS.
It´s supported by SUSE, Fujitsu, Oracle, among others.
Yeah, but is it supported by the *US Government* ???
Steve
On Mon, Sep 15, 2014 at 03:29:31PM -0300, Fernando Cassia wrote:
> On Mon, Sep 15, 2014 at 3:27 PM, Chris wrote:
>
> > Isn't fuse / zfs (partly?) in userspace?
>
>
> I believe there are two separate efforts to run ZFS on Linux. One uses FUSE,
> the other reimplemented ZFS as a loadable kernel mod
On Mon, Sep 15, 2014 at 3:18 AM, Miguel Medalha
wrote:
> Zfsonlinux does not work in user space, it is a kernel module. Just try
> it.
There´s a copy-on-write file system in the GPL Linux kernel, merged into
the mainline Linux kernel in January 2009.
http://www.phoronix.com/forums/showthread.p
On Mon, Sep 15, 2014 at 3:27 PM, Chris wrote:
> Isn't fuse / zfs (partly?) in userspace?
I believe there are two separate efforts to run ZFS on Linux. One uses FUSE,
the other reimplemented ZFS as a loadable kernel module.
FC
On Mon, Sep 15, 2014 at 3:20 PM, Les Mikesell wrote:
> Ummm, like you've walked on the moon
LOL. I will begin saying that "the US government backs JavaFX" then, just
because NASA uses it in some projects.
https://weblogs.java.net/blog/seanmiphillips/archive/2013/11/20/visualizing-nasa-groun
Les Mikesell wrote:
> On Mon, Sep 15, 2014 at 1:16 PM, Fernando Cassia
> wrote:
>> On Mon, Sep 15, 2014 at 4:37 AM, Andrew Holway
>> wrote:
>>>
>>> ZFS on Linux is backed by the US government as ZFS will be used as the
>>> primary filesystem to back the parallel distributed filesystem
>>> 'Lustre
On 09/15/2014 08:18 AM, Miguel Medalha wrote:
> That alone is meaningless. MDADM with which filesystem?
ext4
> Zfsonlinux does not work in user space, it is a kernel module. Just try it.
Isn't fuse / zfs (partly?) in userspace?
--
Gruß,
Christian
On Mon, Sep 15, 2014 at 1:16 PM, Fernando Cassia wrote:
> On Mon, Sep 15, 2014 at 4:37 AM, Andrew Holway
> wrote:
>
>>
>> ZFS on Linux is backed by the US government as ZFS will be used as the
>> primary filesystem to back the parallel distributed filesystem 'Lustre'.
>
>
> wow, the US government
On Mon, Sep 15, 2014 at 4:37 AM, Andrew Holway
wrote:
>
> ZFS on Linux is backed by the US government as ZFS will be used as the
> primary filesystem to back the parallel distributed filesystem 'Lustre'.
wow, the US government!!. *sarcasm implied*
FC
--
During times of Universal Deceit, telli
On 9/15/2014 01:37, Andrew Holway wrote:
To set expectations. Actually, the most recent release (0.6.3) of ZFS on
Linux is not that quick.
Compared to what? To ZFS on Solaris, ZFS on FreeBSD, or ext4 on Linux?
Any comparison between ZFS and non-ZFS probably overlooks things like
fully-checks
>
> > Maybe you can tune ZFS further, but I tried it in userspace (with FUSE)
> > and reading was almost 5 times slower than MDADM.
>
ZFS on Linux is backed by the US government as ZFS will be used as the
primary filesystem to back the parallel distributed filesystem 'Lustre'.
Lustre is used in
On 2014-09-15, Chris wrote:
> On 09/08/2014 09:00 PM, Andrew Holway wrote:
>> Try ZFS http://zfsonlinux.org/
>
> Maybe you can tune ZFS further, but I tried it in userspace (with FUSE)
> and reading was almost 5 times slower than MDADM.
Just running ZFS in the kernel is going to be a lot faster
> Maybe you can tune ZFS further, but I tried it in userspace (with FUSE) and
> reading was almost 5 times slower than MDADM.
That alone is meaningless. MDADM with which filesystem?
Zfsonlinux does not work in user space, it is a kernel module. Just try it.
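A quick way to tell which implementation a box is actually running (a sketch):

  lsmod | grep -w zfs   # the native port loads zfs/spl kernel modules
  pgrep zfs-fuse        # the older FUSE port runs a userspace daemon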
I had promised to weigh in on my experiences using ZFS in a production
environment. We've been testing it for a few months now, and confidence
is building. We started using it in production about a month ago
after months of non-production testing.
I'll append my thoughts in a cross-post from
On 1/6/2014 3:26 PM, Cliff Pratt wrote:
> Grub only needs to know about the filesystems that it uses to boot the
> system. Mounting of the other file systems including /var is the
> responsibility of the system that has been booted. I suspect that you have
> something else wrong if you can't boot w
Grub only needs to know about the filesystems that it uses to boot the
system. Mounting of the other file systems including /var is the
responsibility of the system that has been booted. I suspect that you have
something else wrong if you can't boot with /var/ on ZFS.
I may be wrong, but I don't t
On 11/30/2013 06:20 AM, Andrew Holway wrote:
> Hey,
>
> http://zfsonlinux.org/epel.html
>
> If you have a little time and resource please install and report back
> any problems you see.
>
Andrew,
I want to run /var on zfs, but when I try to move /var over it won't
boot thereafter, with errors ab
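One commonly suggested arrangement for a dataset that must be available early
in boot, sketched with a hypothetical dataset name (the ZFS import/mount
service or init script still has to run before anything needs /var):

  zfs set mountpoint=legacy tank/var
  echo 'tank/var  /var  zfs  defaults  0 0' >> /etc/fstab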
On 12/19/2013, 04:00 , li...@benjamindsmith.com wrote:
> BackupPC is a great product, and if I had known of it and/or it had been
> available when I started, I would likely have used it instead of cutting
> code. Now
> that we've got BackupBuddy working and integrated, we aren't going to be
> switching as it
On Wed, Dec 18, 2013 at 5:41 PM, Lists wrote:
> On 12/18/2013 03:04 PM, Les Mikesell wrote:
>> For the people who don't know, backuppc builds a directory tree for
>> each backup run where the full runs are complete and the incrementals
>> normally only contain the changed files. However, when you
On 12/18/2013 3:41 PM, Lists wrote:
> Should I read this as "BackupPC now has its own filesystem driver"? If
> so, wow. Or do you mean that there are command line tools to read/copy
> BackupPC save points?
web interface, primarily. you can restore any portion of any version of
any backup to the
On 12/18/2013 03:04 PM, Les Mikesell wrote:
> For the people who don't know, backuppc builds a directory tree for
> each backup run where the full runs are complete and the incrementals
> normally only contain the changed files. However, when you access the
> incremental backups through the web int
On Wed, Dec 18, 2013 at 3:13 PM, Lists wrote:
> >
> I would differentiate BackupBuddy in that there is no "incremental" and
> "full" distinction. All backups are "full" in the truest sense of the
> word,
For the people who don't know, backuppc builds a directory tree for
each backup run where the
On 12/18/2013 07:50 AM, Les Mikesell wrote:
> I've always considered backuppc to be one of those rare things that
> you set up once and it takes care of itself for years. If you have
> problems with it, someone on the backuppc mail list might be able to
> help. It does tend to be slower than nat
On Wed, Dec 18, 2013 at 9:13 AM, Chuck Munro wrote:
>
> Not presumptuous at all! I have not heard of backupbuddy (or dirvish),
> so I should investigate. Your description makes it sound somewhat like
> OS-X Time Machine, which I like a lot. I did try backuppc but it got a
> bit complex to manag
On 12/18/2013, 04:00 , li...@benjamindsmith.com wrote:
> I may be being presumptuous, and if so, I apologize in advance...
>
> It sounds to me like you might consider a disk-to-disk backup solution.
> I could suggest dirvish, BackupPC, or our own home-rolled rsync-based
> solution that works rath
On 12/14/2013 08:50 AM, Chuck Munro wrote:
> Hi Ben,
>
> Yes, the initial replication of a large filesystem is *very* time
> consuming! But it makes sleeping at night much easier. I did have to
> crank up the inotify kernel parameters by a significant amount.
>
> I did the initial replication usi
On 12/14/2013, 04:00 , li...@benjamindsmith.com wrote:
> We checked lsyncd out and it's most certainly a very interesting tool.
> I *will* be using it in the future!
>
> However, we found that it has some issues scaling up to really big file
> stores that we haven't seen (yet) with ZFS.
>
> For
On 12/04/2013 06:05 AM, John Doe wrote:
> Not sure if I already mentioned it but maybe have a look at:
> http://code.google.com/p/lsyncd/
We checked lsyncd out and it's most certainly a very interesting tool.
I *will* be using it in the future!
However, we found that it has some issues scalin
On 04.12.2013 14:05, n...@li.nux.ro wrote:
>> On 04.12.2013 14:05, John Doe wrote:
>>> From: Lists
>>>
>>>> Our next big test is to try out ZFS filesystem send/receive in lieu of
>>>> our current backup processes based on rsync. Rsync i
On 05.12.2013 22:46, Chuck Munro wrote:
>> On 04.12.2013 14:05, John Doe wrote:
>>> From: Lists
>> Our next big test is to try out ZFS filesystem send/receive in lieu of
>> our current backup processes based on rsync. Rsync is a fabulous tool,
>> but is begi
> On 04.12.2013 14:05, John Doe wrote:
>> From: Lists
>>
>>> Our next big test is to try out ZFS filesystem send/receive in lieu of
>>> our current backup processes based on rsync. Rsync is a fabulous tool,
>>> but is beginning to show performance/scalability issues dealing wi
On 04.12.2013 14:05, John Doe wrote:
> From: Lists
>
>> Our next big test is to try out ZFS filesystem send/receive in lieu of
>> our current backup processes based on rsync. Rsync is a fabulous tool,
>> but is beginning to show performance/scalability issues dealing with the
>> many
From: Lists
> Our next big test is to try out ZFS filesystem send/receive in lieu of
> our current backup processes based on rsync. Rsync is a fabulous tool,
> but is beginning to show performance/scalability issues dealing with the
> many millions of files being backed up, and we're hoping th
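A minimal sketch of what snapshot-based replication looks like, with dataset
and host names as placeholders; the second form sends only the delta between
two snapshots, which is where the win over rsync comes from:

  zfs snapshot tank/data@2013-12-04
  zfs send tank/data@2013-12-04 | ssh backuphost zfs receive backup/data

  zfs snapshot tank/data@2013-12-05
  zfs send -i tank/data@2013-12-04 tank/data@2013-12-05 | \
      ssh backuphost zfs receive backup/data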
On Sat, Nov 30, 2013 at 9:20 AM, Andrew Holway wrote:
> Hey,
>
> http://zfsonlinux.org/epel.html
>
> If you have a little time and resource please install and report back
> any problems you see.
>
> A filesystem or Volume sits within a zpool
> a zpool is made up of vdevs
> vdevs are made up of blo
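Sketched as commands, with placeholder device and dataset names:

  zpool create tank raidz2 sda sdb sdc sdd sde sdf  # one raidz2 vdev of block devices
  zfs create tank/home                              # a filesystem inside the pool
  zfs create -V 10G tank/vm01                       # a volume (zvol) inside the pool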
Andrew,
We've been testing ZFS since about 10/24, see my original post (and
replies) asking about its suitability "ZFS on Linux in production" on
this list. So far, it's been rather impressive. Enabling compression
better than halved the disk space utilization in a low/medium bandwidth
(mainly
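For anyone wanting to try the same thing: compression is a per-dataset (or
per-pool) property and only affects newly written data. A one-line sketch,
with 'tank' as a placeholder:

  zfs set compression=lz4 tank   # or compression=on for the default algorithm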
On 11/4/2013 3:21 PM, Nicolas Thierry-Mieg wrote:
> but why would this be much worse with ZFS than eg ext4?
because ZFS works considerably differently than extfs... it's a
copy-on-write system to start with.
--
john r pierce 37N 122W
somewhere on the middle
On 11/04/2013 08:01 PM, John R Pierce wrote:
> On 11/4/2013 10:43 AM, Les Mikesell wrote:
>> On Mon, Nov 4, 2013 at 12:15 PM, Markus Falb wrote:
>>> 3) NEVER let a zpool fill up above about 70% full, or the performance
>>> really goes downhill.
Why is it? It sounds cost inte
On 11/4/2013 10:43 AM, Les Mikesell wrote:
> On Mon, Nov 4, 2013 at 12:15 PM, Markus Falb wrote:
>>> 3) NEVER let a zpool fill up above about 70% full, or the performance
>>> really goes downhill.
>>
>> Why is it? It sounds cost intensive, if not ridiculous.
>> Disk space not to be used,
On Mon, Nov 4, 2013 at 12:15 PM, Markus Falb wrote:
>
>> 3) NEVER let a zpool fill up above about 70% full, or the performance
>> really goes downhill.
>
> Why is it? It sounds cost intensive, if not ridiculous.
> Disk space not to be used, forbidden land...
> Is the remaining 30% used by some ZFS in
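Whatever the exact threshold, keeping an eye on pool occupancy is a one-liner
(a sketch):

  zpool list -o name,size,allocated,free,capacity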
On 24.Okt.2013, at 22:59, John R Pierce wrote:
> On 10/24/2013 1:41 PM, Lists wrote:
>> Was wondering if anybody here could weigh in with real-life experience?
>> Performance/scalability?
>
> I've only used ZFS on Solaris and FreeBSD. Some general observations...
...
> 3) NEVER let a zpool
Greetings,
On Fri, Oct 25, 2013 at 3:57 AM, Keith Keller
wrote:
>
> I don't have my own, but I have heard of other shops which have had lots
> of success with ZFS on OpenSolaris and their variants.
And I know of a shop which could not recover a huge ZFS on freebsd and
had to opt for something li
To be honest, isn't it easier to install FreeBSD or Solaris on the server, where
ZFS is natively supported? I moved my own server to FreeBSD and didn't notice a
huge difference between Linux distros and FreeBSD. I have no idea about Solaris,
but it is probably a similar environment.
Sent from my
On 10/25/2013 11:14 AM, Chuck Munro wrote:
> To keep the two servers in sync I use 'lsyncd' which is essentially a
> front-end for rsync that cuts down thrashing and overhead dramatically
> by excluding the full filesystem scan and using inotify to figure out
> what to sync. This allows almost-rea
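For the curious, the simplest lsyncd invocation looks roughly like this, with
placeholder paths and host name (a production setup like the one described
above would more likely use a config file):

  lsyncd -rsyncssh /export/data backuphost /export/data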
On Sat, Oct 26, 2013 at 4:36 PM, Ray Van Dolson wrote:
> On Thu, Oct 24, 2013 at 01:59:15PM -0700, John R Pierce wrote:
> > On 10/24/2013 1:41 PM, Lists wrote:
> > > Was wondering if anybody here could weigh in with real-life experience?
> > > Performance/scalability?
> >
> > I've only used ZFS o
On Thu, Oct 24, 2013 at 01:59:15PM -0700, John R Pierce wrote:
> On 10/24/2013 1:41 PM, Lists wrote:
> > Was wondering if anybody here could weigh in with real-life experience?
> > Performance/scalability?
>
> I've only used ZFS on Solaris and FreeBSD. Some general observations...
>
> 1) you n
On Thu, Oct 24, 2013 at 01:41:17PM -0700, Lists wrote:
> We are a CentOS shop, and have the lucky, fortunate problem of having
> ever-increasing amounts of data to manage. EXT3/4 becomes tough to
> manage when you start climbing, especially when you have to upgrade, so
> we're contemplating swit
On 10/26/2013 06:40 AM, John R Pierce wrote:
>
> to see how it should have
been done, see IBM AIX's version of lvm. You grow a jfs file system,
> it automatically grows the underlying LV (logical volume), online,
> live.
lvm can do this with the --resizefs flag for lvextend, one command t
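For example, with placeholder VG/LV names:

  lvextend --resizefs -L +100G /dev/vg0/var   # grow the LV and its filesystem in one step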
On re-reading, I realized I didn't complete some of my thoughts:
On 10/25/2013 00:18, Warren Young wrote:
> ZFS is nicer in this regard, in that it lets you schedule the scrub
> operation. You can obviously schedule one for btrfs,
...with cron...
> but that doesn't take into account scrub time.
On 10/25/2013 1:26 PM, Warren Young wrote:
> On 10/25/2013 00:44, John R Pierce wrote:
>> current version of OpenZFS no longer relies on 'version numbers',
>> instead it has 'feature flags' for all post v28 features.
> This must be the zpool v5000 thing I saw while researching my previous
> post.
On 10/25/2013 11:33, Lists wrote:
>
> I'm just trying to find the best tool for the job.
Try everything. Seriously.
You won't know what you like, and what works *for you* until you have
some experience. Buy a Drobo for the home, replace one of your old file
servers with a FreeBSD ZFS box, ena
On 10/25/2013 00:44, John R Pierce wrote:
> current version of OpenZFS no longer relies on 'version numbers',
> instead it has 'feature flags' for all post v28 features.
This must be the zpool v5000 thing I saw while researching my previous
post. Apparently ZFSonLinux is doing the same thing, or
On Fri, Oct 25, 2013 at 1:40 PM, John R Pierce wrote:
> On 10/25/2013 10:33 AM, Lists wrote:
>> LVM2 complicates administration terribly.
>
> huh? it hugely simplifies it for me, when I have lots of drives. I just
> wish mdraid and lvm were better integrated. to see how it should have
> been don
On 10/25/2013, 05:00 , centos-requ...@centos.org wrote:
> We are a CentOS shop, and have the lucky, fortunate problem of having
> ever-increasing amounts of data to manage. EXT3/4 becomes tough to
> manage when you start climbing, especially when you have to upgrade, so
> we're contemplating switc
On 10/25/2013 10:33 AM, Lists wrote:
> LVM2 complicates administration terribly.
huh? it hugely simplifies it for me, when I have lots of drives. I just
wish mdraid and lvm were better integrated. to see how it should have
been done, see IBM AIX's version of lvm. You grow a jfs file system,
On 10/24/2013 11:18 PM, Warren Young wrote:
> - vdev, which is a virtual device, something like a software RAID. It is one
> or more disks, configured together, typically with some form of redundancy.
>
> - pool, which is one or more vdevs, which has a capacity equal to all of its
> vdevs added
On 10/24/2013 11:18 PM, Warren Young wrote:
> All of the ZFSes out there are crippled relative to what's shipping in
> Solaris now, because Oracle has stopped releasing code. There are nontrivial
> features in zpool v29+, which simply aren't in the free forks of older
> versions of the Sun code.
On Oct 24, 2013, at 8:01 PM, Lists wrote:
> Not sure enough of the vernacular
Yes, ZFS is complicated enough to have a specialized vocabulary.
I used two of these terms in my previous post:
- vdev, which is a virtual device, something like a software RAID. It is one
or more disks, configured
On 10/24/2013 05:29 PM, Warren Young wrote:
> On 10/24/2013 17:12, Lists wrote:
>> 2) The ability to make the partition bigger by adding drives with very
>> minimal/no downtime.
> Be careful: you may have been reading some ZFS hype that turns out not
> as rosy in reality. Ideally, ZFS would work like
On 10/24/2013 5:29 PM, Warren Young wrote:
> The least complicated*safe* way to add 1 TB to a pool is add*two* 1 TB
> disks to the system, create a ZFS mirror out of them, and add*that*
> vdev to the pool. That gets you 1 TB of redundant space, which is what
> you actually wanted. Just realiz
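In command form, the safe expansion described above is a single step, with
placeholder pool and device names:

  zpool add tank mirror sdg sdh   # adds a mirrored vdev; the pool grows by its size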
On 10/24/2013 5:31 PM, Warren Young wrote:
> To be fair, you want to treat XFS the same way.
>
> And it, too is "unstable" on 32-bit systems with anything but smallish
> filesystems, due to lack of RAM.
I thought it had stack requirements that 32 bit couldn't meet, and it
would simply crash, so i