On Sep 10, 2007, at 13:40, [EMAIL PROTECTED] wrote:
> I am not against refactoring solutions, but zfs quotas and the lack of
> user quotas in general either leave people trying to use zfs quotas in
> lieu of user quotas, suggesting weak end runs against the problem (a
> cron to calcu
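The "cron to calcu..." workaround above presumably means a scheduled job
that calculates per-user usage. A minimal sketch of that kind of end run,
assuming one home directory per user under /export/home and a purely
illustrative 10 GB limit:

    #!/bin/ksh
    # Advisory stand-in for real user quotas: report anyone over LIMIT_KB.
    LIMIT_KB=10485760   # 10 GB expressed in KB, to match du -sk

    for dir in /export/home/*; do
        user=$(basename "$dir")
        used=$(du -sk "$dir" | awk '{print $1}')
        if [ "$used" -gt "$LIMIT_KB" ]; then
            echo "over quota: $user uses ${used} KB" |
                mailx -s "quota report" root
        fi
    done

Run nightly from root's crontab, it only reports after the fact, which is
exactly the weakness being described.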
On 9/11/07, Gino <[EMAIL PROTECTED]> wrote:
> -ZFS performs badly with a lot of small files.
> (about 20 times slower than UFS with our millions-of-files rsync procedures)
We have seen just the opposite... we have a server with about
40 million files and only 4 TB of data. We have been benchm
Bill Sommerfeld wrote:
> On Tue, 2007-09-11 at 13:43 -0700, Gino wrote:
>> -ZFS+FC JBOD: a failed hard disk needs a reboot :(
>> (frankly unbelievable in 2007!)
>
> So, I've been using ZFS with some creaky old FC JBODs (A5200's) and old
> disks which have been failing regularly and haven't s
On Tue, 2007-09-11 at 13:43 -0700, Gino wrote:
> -ZFS+FC JBOD: a failed hard disk needs a reboot :(
> (frankly unbelievable in 2007!)
So, I've been using ZFS with some creaky old FC JBODs (A5200's) and old
disks which have been failing regularly and haven't seen that; the worst
I've seen run
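For contrast, the usual no-reboot sequence when a JBOD disk dies cleanly
(pool and device names below are only examples):

    zpool status -x               # identify the faulted device
    zpool offline tank c1t3d0     # optionally take it offline first
    # ...swap the drive physically...
    zpool replace tank c1t3d0     # resilver onto the replacement
    zpool status tank             # watch the resilver progress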
On Tue, Sep 11, 2007 at 01:31:17PM -0700, Bill Moore wrote:
> I would also suggest setting the recordsize property on the zvol when
> you create it to 4k, which is, I think, the native ext3 block size.
> If you don't do this and allow ZFS to use its 128k default blocksize,
> then a 4k write from e
On 11/09/2007, Mike DeMarco <[EMAIL PROTECTED]> wrote:
> > I've got 12Gb or so of db+web in a zone on a ZFS filesystem on a
> > mirrored zpool.
> > Noticed during some performance testing today that it's i/o bound but
> > using hardly any CPU, so I thought turning on compression would be
> >
> To put this in perspective, no system on the planet
> today handles all faults.
> I would even argue that building such a system is
> theoretically impossible.
no doubt about that ;)
> So the subset of faults which ZFS covers is different than the subset
> that UFS covers, and different
I would also suggest setting the recordsize property on the zvol when
you create it to 4k, which is, I think, the native ext3 block size.
If you don't do this and allow ZFS to use its 128k default blocksize,
then a 4k write from ext3 will turn into a 128k read/modify/write on the
ZFS side. This c
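Worth noting: on a zvol the property is strictly volblocksize rather than
recordsize (which applies to filesystems), and it can only be set at
creation time. A sketch, with pool and volume names assumed:

    # Match the zvol block size to ext3's 4k blocks up front;
    # volblocksize cannot be changed after creation.
    zfs create -V 20g -o volblocksize=4k tank/ext3vol
    zfs get volblocksize tank/ext3vol    # verify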
> I've got 12Gb or so of db+web in a zone on a ZFS filesystem on a
> mirrored zpool.
> Noticed during some performance testing today that it's i/o bound but
> using hardly any CPU, so I thought turning on compression would be
> a quick win.
If it is I/O bound, won't compression make it worse?
Joshua Goodall wrote:
> I've been seeing read and write performance pathologies with Linux
> ext3 over iSCSI to zvols, especially with small writes. Does running
> a journalled filesystem to a zvol turn the block storage into swiss
> cheese? I am considering serving ext3 journals (and possibly swap
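ext3 does support an external journal device, which is one way to try
serving the journal separately as suggested. A sketch with illustrative
Linux device names (the journal device's block size must match the
filesystem's):

    mke2fs -O journal_dev -b 4096 /dev/sdb1           # dedicated journal
    mke2fs -j -b 4096 -J device=/dev/sdb1 /dev/sda1   # ext3 that uses it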
On 9/11/07, Dick Davies <[EMAIL PROTECTED]> wrote:
>
> I've got 12Gb or so of db+web in a zone on a ZFS filesystem on a
> mirrored zpool.
> Noticed during some performance testing today that it's i/o bound but
> using hardly any CPU, so I thought turning on compression would be a
> quick win.
>
> I
> > Invalidating COW filesystem patents would of course be the best.
> > Unfortunately those lawsuits are usually not handled in the open and
> > in order to understand everything you would need to know about the
> > background interests of both parties.
>
> IANAL, but I was under the impressi
Stephen Usher <[EMAIL PROTECTED]> wrote:
> Joerg Schilling wrote:
> > I am not sure about the current state, but 2 years ago, Linux was only
> > able to run a few simple programs in 32 bit mode because the drivers
> > did not support 32 bit ioctl interfaces. This made e.g. a 32 bit
> > cdrecord on
Joerg Schilling wrote:
> I am not sure about the current state, but 2 years ago, Linux was only able to
> run a few simple programs in 32 bit mode because the drivers did not support
> 32 bit ioctl interfaces. This made e.g. a 32 bit cdrecord on 64 bit Linux
> impossible.
I've never had a 32bit bi
I've got 12Gb or so of db+web in a zone on a ZFS filesystem on a mirrored zpool.
Noticed during some performance testing today that its i/o bound but
using hardly
any CPU, so I thought turning on compression would be a quick win.
I know I'll have to copy files for existing data to be compressed, s
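Enabling compression is a one-liner, and as noted it only affects blocks
written after the property is set (dataset name assumed):

    zfs set compression=on tank/web     # lzjb by default
    zfs get compressratio tank/web      # check the payoff after rewriting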
Stephen Usher <[EMAIL PROTECTED]> wrote:
> Oliver Schinagl wrote:
> > not to start a flamewar or the like, but linux can run 32bit bins, just not
> > natively afaik, you need some sort of emu library. But since I use gentoo,
> > and pretty much everything is compiled from source anyway, I only ha
> My question is: Is there any interest in finishing RAID5/RAID6 for ZFS?
> If there is no chance it will be integrated into ZFS at some point, I
> won't bother finishing it.
Your work is as pure an example as any of what OpenSolaris should be about. I
think there should be no problem having a n
Oliver Schinagl wrote:
> not to start a flamewar or the like, but linux can run 32bit bins, just not
> natively afaik, you need some sort of emu library. But since I use gentoo, and
> pretty much everything is compiled from source anyway, I only have stupid
> closed src bins that would need to wo
Ian Collins wrote:
> Oliver Schinagl wrote:
>
>> [EMAIL PROTECTED] wrote:
>>
However, I found on the liveDVD/CD that Nexenta and BeleniX both don't
come in x86_64 flavors? Or does Solaris autodetect and auto-run in 64bit
mode at boot time?
> so you are saying that I can run both 32bit and 64bit code
> simultaneously, natively with the Solaris kernel? That's pretty damn cool
Correct. There are several reasons for an OS which generally comes in
binary distributions to do so:
- maintain complete binary compatibility with old
> once I boot it in 64bit mode, I'd have to run emulation libraries to run
> 32bit bins right? (As I'm a Solaris newbie, and only know a little about
> 64bit stuff from the Linux world, this is all I know)
No; you run the exact same binaries and libraries under 32 and 64 bit.
It's not emulation; it
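This is easy to see from a shell: the stock layout ships parallel 32- and
64-bit libraries side by side (paths below are the standard Solaris
locations):

    file /usr/bin/ls              # most of /usr/bin is 32-bit ELF
    ls -l /usr/lib/64             # symlink to the 64-bit libs (amd64 or sparcv9)
    file /usr/lib/64/libc.so.1    # 64-bit ELF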
Ian Collins wrote:
> Oliver Schinagl wrote:
>
>> Ian Collins wrote:
>>
>>> Oliver Schinagl wrote:
>>>
>>>> once I boot it in 64bit mode, I'd have to run emulation libraries to run
>>>> 32bit bins right?
>>> No. Solaris has both 32 and 64 bit librar
Oliver Schinagl wrote:
> Ian Collins wrote:
>
>> Oliver Schinagl wrote:
>>> once I boot it in 64bit mode, I'd have to run emulation libraries to run
>>> 32bit bins right?
>>>
>> No. Solaris has both 32 and 64 bit libraries.
>>
> so you are saying that I can run both 32bit and 64bit
Oliver Schinagl wrote:
> [EMAIL PROTECTED] wrote:
>
>>> However, I found on the liveDVD/CD that Nexenta and BeleniX both don't
>>> come in x86_64 flavors? Or does Solaris autodetect and auto-run in 64bit
>>> mode at boot time?
>>>
>>>
>> Solaris autodetects the CPU type and boots in
[EMAIL PROTECTED] wrote:
>> However, I found on the liveDVD/CD that Nexenta and BeleniX both don't
>> come in x86_64 flavors? Or does Solaris autodetect and auto-run in 64bit
>> mode at boot time?
>>
>
> Solaris autodetects the CPU type and boots in 64 bit mode on 64 bit
> CPUs and in 32 bit
> However, I found on the liveDVD/CD that Nexenta and BeleniX both don't
> come in x86_64 flavors? Or does Solaris autodetect and auto-run in 64bit
> mode at boot time?
Solaris autodetects the CPU type and boots in 64 bit mode on 64 bit
CPUs and in 32 bit mode on 32 bit CPUs.
Large parts of the run
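To check which mode a given machine actually booted in:

    isainfo -b     # prints 64 or 32
    isainfo -kv    # e.g. "64-bit amd64 kernel modules"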
Oliver Schinagl wrote:
> However, I found on the liveDVD/CD that Nexenta and BeleniX both don't
> come in x86_64 flavors? Or does Solaris autodetect and auto-run in 64bit
> mode at boot time?
Yes, Solaris autodetects, but note that some Live systems and install
images only do 32bit but will insta
Hi,
I'm still debating whether I should use ZFS or not and how. Here is my
scenario.
I want to run a server with a lot of storage that gets disks
added/upgraded from time to time to expand space. I'd want to store
large files on it, 15 MB - 5 GB per file, and they'd only need to be
accessible via N
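On the expansion point: a ZFS pool grows by adding whole top-level vdevs,
so something like the sketch below works incrementally (pool name and
devices assumed). Note that an existing raidz vdev cannot be widened
later, so vdev sizes need planning up front:

    zpool create tank mirror c1t0d0 c1t1d0    # initial pair
    # later, when more disks arrive:
    zpool add tank mirror c2t0d0 c2t1d0       # capacity grows immediately
    zpool list tank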
> As you can see, two independent ZFS blocks share one parity block.
> COW won't help you here, you would need to be sure that each ZFS
> transaction goes to a different (and free) RAID5 row.
>
> This is, I believe, the main reason why poor RAID5 wasn't used in the
> first place.
Exactly right. RAI
On Tue, Sep 11, 2007 at 08:16:02AM +0100, Robert Milkowski wrote:
> Are you overwriting old data? I hope you're not...
I am; I overwrite parity, and that is the whole point. That's why the
ZFS designers used RAIDZ instead of RAID5, I think.
> I don't think you should suffer from the above problem in ZFS due
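For reference, the parity update being described is the standard RAID5
small-write sequence, nothing ZFS-specific:

    P_new = P_old XOR D_old XOR D_new

Both D_new and P_new overwrite live blocks in place; lose power between
those two writes and the stripe's parity no longer matches its data (the
classic RAID5 write hole). RAIDZ avoids this by giving every transaction
its own full-width, variable-size stripe, so no live parity block is ever
overwritten.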
Hello Pawel,
Monday, September 10, 2007, 6:18:37 PM, you wrote:
PJD> On Mon, Sep 10, 2007 at 04:31:32PM +0100, Robert Milkowski wrote:
>> Hello Pawel,
>>
>> Excellent job!
>>
>> Now I guess it would be a good idea to get writes done properly,
>> even if it means making them slow (like