Hey all again,
Looking into a few other options. How about InfiniBand? It would give us more
bandwidth, but will it increase complexity and price? Any thoughts?
Cheers
Mark
I've been seeing read and write performance pathologies with Linux
ext3 over iSCSI to zvols, especially with small writes. Does running
a journalled filesystem on a zvol turn the block storage into Swiss
cheese? I am considering serving ext3 journals (and possibly swap
too) off a raw, hardware-mirrored device.
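For what it's worth, the external-journal setup might look roughly like this
on the Linux side (the device names /dev/md0 for the hardware mirror and
/dev/sdb1 for the iSCSI LUN are placeholders, not from the message above):

  # create the external journal on the mirrored raw device; its block
  # size has to match the filesystem that will use it
  mke2fs -b 4096 -O journal_dev /dev/md0

  # create the ext3 filesystem on the iSCSI LUN, pointing it at the
  # external journal
  mke2fs -b 4096 -j -J device=/dev/md0 /dev/sdb1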
Joerg Schilling wrote:
> David Hopwood <[EMAIL PROTECTED]> wrote:
>
>> Al Hopper wrote:
>>> So back to patent portfolios: yes there will be (public and private)
>>> posturing; yes there will be negotiations; and, ultimately, there will
> be a resolution. All of this won't affect ZFS or anyone running ZFS.
On 10 Sep 2007, at 16:41, Brian H. Nelson wrote:
Stephen Usher wrote:
Brian H. Nelson:
I'm sure it would be interesting for those on the list if you could
outline the gotchas so that the rest of us don't have to re-invent the
wheel... or at least not fall into the pitfalls.
Also, here's a
MC wrote:
> To expand on this:
>
>> The recommended use of whole disks is for drives with volatile
>> write caches where ZFS will enable the cache if it owns the whole disk.
>
> Does ZFS really never use disk cache when working with a disk slice?
This question doesn't make sense. ZFS doesn't
Hello all,
Is there a way to configure the zpool to "legacy_mount" and still have all
filesystems in that pool mounted automatically?
I will try to explain this better:
- Imagine that I have a ZFS pool with "1000" filesystems.
- I want to "control" the mount/unmount of that pool, so I configured the
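A hedged sketch of the two usual approaches, using a hypothetical pool
name "tank" (not taken from the truncated message above):

  # Option A: leave the mountpoints ZFS-managed and control the whole
  # pool at once (export unmounts everything in the pool, import mounts
  # it all again)
  zpool export tank
  zpool import tank

  # Option B: switch the pool to legacy mounting; the ~1000 children
  # inherit the setting, but each one then has to be mounted by hand
  # or via /etc/vfstab, e.g.:
  zfs set mountpoint=legacy tank
  mount -F zfs tank/fs0001 /export/fs0001

With legacy mountpoints ZFS will not mount anything automatically, which is
why option A tends to be far less painful with a large number of filesystems.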
To expand on this:
> The recommended use of whole disks is for drives with volatile write caches
> where ZFS will enable the cache if it owns the whole disk.
Does ZFS really never use disk cache when working with a disk slice? Is there
any way to force it to use the disk cache?
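To illustrate the distinction being discussed (device names are just examples):

  # whole disk: ZFS puts an EFI label on it and enables the drive's
  # volatile write cache
  zpool create tank c1t0d0

  # slice: ZFS leaves the write cache setting alone, since other slices
  # on the same disk may belong to another consumer
  zpool create tank c1t0d0s0

If the whole disk is dedicated to ZFS anyway, the write cache can still be
enabled by hand (format -e offers a cache menu on many drives), at your own risk.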
[EMAIL PROTECTED] wrote:
> All of these threads to this point have not answered the needs in any
> way close to a solution that user quotas allow.
I thought I did answer that... for some definition of "answer"...
>> The main gap for .edu sites is quotas which will likely be solv
On Mon, Sep 10, 2007 at 04:31:32PM +0100, Robert Milkowski wrote:
> Hello Pawel,
>
> Excellent job!
>
> Now I guess it would be a good idea to get writes done properly,
> even if it means making them slow (like with SVM). The end result
> would be: do you want fast writes/slow reads
[EMAIL PROTECTED] wrote on 09/10/2007 12:13:18 PM:
> [EMAIL PROTECTED] wrote:
> > Very true, you could even pay people to track down heavy users and
> > bonk them on the head. Why is everyone responding with alternate routes to
> > a simple need?
>
> For the simple reason that sometimes
[EMAIL PROTECTED] wrote:
> Very true, you could even pay people to track down heavy users and
> bonk them on the head. Why is everyone responding with alternate routes to
> a simple need?
For the simple reason that sometimes it is good to challenge existing
practice and try and find the
> Now I guess it would be a good idea to get writes done properly,
> even if it means making them slow (like with SVM). The end result
> would be: do you want fast writes/slow reads, go ahead with
> raid-z; if you need fast reads/slow writes, go with raid-5.
>
> btw: I'm just thin
[EMAIL PROTECTED] wrote on 09/10/2007 11:40:16 AM:
> Richard Elling wrote:
> > There is also a long tail situation here, which is how I approached the
> > problem at eng.Auburn.edu. 1% of the users will use > 90% of the space. For
> > them, I had special places. For everyone else, they were lump
Gino wrote:
>>> Richard, thank you for your detailed reply.
>>> Unfortunately another reason to stay with UFS in production ..
>>
>> IMHO, maturity is the primary reason to stick with UFS. To look at this
>> through the maturity lens, UFS is the great grandfather living on life su
Richard Elling wrote:
> There is also a long tail situation here, which is how I approached the
> problem at eng.Auburn.edu. 1% of the users will use > 90% of the space. For
> them, I had special places. For everyone else, they were lumped into
> large-ish
> buckets. A daily cron job easily ide
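As a rough illustration of the cron-job idea, assuming one ZFS filesystem
per user under a hypothetical tank/home (the message above does not say how
the buckets were actually laid out):

  # nightly report of the 20 biggest home filesystems, mailed to root
  zfs list -r -H -o used,name -s used tank/home | tail -20 | \
      mailx -s "top space users" root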
Mike Gerdts wrote:
> On 9/8/07, Richard Elling <[EMAIL PROTECTED]> wrote:
>> Changing the topic slightly, the strategic question is:
>> why are you providing disk space to students?
>
> For most programming and productivity (e.g. word processing, etc.)
> people will likely be better suited by havi
Stephen Usher wrote:
>
> Brian H. Nelson:
>
> I'm sure it would be interesting for those on the list if you could
> outline the gotchas so that the rest of us don't have to re-invent the
> wheel... or at least not fall into the pitfalls.
>
Also, here's a link to the ufs on zvol blog where I or
> I'm more worried about the availability of my data in the event of a
> controller failure. I plan on using 4-chan SATA controllers and
> creating multiple 4 disk RAIDZ vdevs. I want to use a single pool, but
> it looks like I can't as controller failure = ZERO access, although the
> same can be
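One way to keep a single pool and still survive a whole-controller failure
is to give each RAID-Z vdev exactly one disk from each controller; a sketch
with made-up device names, where c1 through c4 are the four controllers:

  # a single controller failure then costs each raidz vdev only one
  # disk, so the pool stays up, just degraded
  zpool create tank \
      raidz c1t0d0 c2t0d0 c3t0d0 c4t0d0 \
      raidz c1t1d0 c2t1d0 c3t1d0 c4t1d0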
Mike Gerdts wrote:
> The UFS on zvols option sounds intriguing to me, but I would guess
> that the following could be problems:
>
> 1) Double buffering: Will ZFS store data in the ARC while UFS uses
> traditional file system buffers?
>
This is probably an issue. You also have the journal+COW co
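For reference, the kind of setup being discussed might be built roughly like
this (pool, volume name and size are made up):

  zfs create -V 20g tank/ufsvol
  newfs /dev/zvol/rdsk/tank/ufsvol
  # forcedirectio bypasses the UFS page cache for file data, which may
  # take some of the sting out of the double-buffering concern
  mount -o forcedirectio /dev/zvol/dsk/tank/ufsvol /mnt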
Hello Pawel,
Excellent job!
Now I guess it would be a good idea to get writes done properly,
even if it means making them slow (like with SVM). The end result
would be: do you want fast writes/slow reads, go ahead with
raid-z; if you need fast reads/slow writes, go with raid-5.
Stephen Usher wrote:
> Brian H. Nelson:
>
> I'm sure it would be interesting for those on the list if you could
> outline the gotchas so that the rest of us don't have to re-invent the
> wheel... or at least not fall into the pitfalls.
>
I believe I ran into one or both of these bugs:
642999
> If I have a pool that is made up of 2 raidz vdevs, is all data striped across them?
> So if I somehow lose a vdev, I lose all my data?!
If your vdevs are RAID-Z's, it would take a rare coincidence to break
the pool (two disks failing in the same RAID-Z)...
But yeah, ZFS spreads blocks to
So,
If I have a pool that is made up of 2 raidz vdevs, is all data striped across them?
So if I somehow lose a vdev, I lose all my data?!
Hi.
I've a prototype RAID5 implementation for ZFS. It only works in
non-degraded state for now. The idea is to compare RAIDZ vs. RAID5
performance, as I suspected that RAIDZ, because of full-stripe
operations, doesn't work well for random reads issued by many processes
in parallel.
There is of co
dudekula mastan wrote:
> Hi All,
>
> At the time of zpool creation, the user controls the zpool mount point by
> using the "-m" option. Is there a way to change this mount point dynamically?
By dynamically I assume you mean after the pool has been created. If yes
then do this if your pool is called
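For reference, a sketch with a hypothetical pool name "tank" and target path
(not taken from the truncated reply above):

  # the mountpoint property can be changed on a live pool; ZFS remounts
  # the affected filesystems at the new location
  zfs set mountpoint=/export/newdata tank
  zfs get mountpoint tank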
Hi All,
At the time of zpool creation, the user controls the zpool mount point by using
the "-m" option. Is there a way to change this mount point dynamically?
Your help is appreciated.
Thanks & Regards
Masthan D
David Hopwood <[EMAIL PROTECTED]> wrote:
> Al Hopper wrote:
> > So back to patent portfolios: yes there will be (public and private)
> > posturing; yes there will be negotiations; and, ultimately, there will
> > be a resolution. All of this won't affect ZFS or anyone running ZFS.
>
> It matters
Bruce Shaw wrote:
> You should probably be doing a ZFS clone and backing that up.
Why? Clones are writeable and are thus changing. I don't think that
is good advice at all.
Snapshots don't change and are perfect for backups for that reason.
You don't need to clone to be able to continue writing.
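For what it's worth, a minimal sketch of the snapshot route (dataset,
snapshot and host names are made up):

  zfs snapshot tank/data@backup-20070910
  # stream the read-only snapshot to another machine (or redirect to a file)
  zfs send tank/data@backup-20070910 | ssh backuphost zfs receive backuppool/data
  # once the backup has been verified, the snapshot can be destroyed
  zfs destroy tank/data@backup-20070910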