Hello Thomas,
try setting txg_time via mdb to 60 - this should make ZFS "flush" every 60s.
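For reference, a rough sketch of that tweak (assuming txg_time is still the
tunable's name on your build - verify before poking a live kernel):

    # write decimal 60 to the txg sync interval, then read it back
    echo 'txg_time/W 0t60' | mdb -kw
    echo 'txg_time/D' | mdb -k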
--
Best regards,
Robert [EMAIL PROTECTED]
http://milek.blogspot.com
On Thu, Jul 13, 2006 at 11:42:21AM -0700, Richard Elling wrote:
> >Yes, and while it's not an immediate showstopper for me, I'll want to
> >know that expansion is coming imminently before I adopt RAID-Z.
>
> [in brainstorming mode, sans coffee so far this morning]
>
> Better yet, buy two disks,
On Thu, 2006-07-13 at 07:58, David Abrahams wrote:
> It seems, on the face of it, as though a *single* sensible answer
> might be impossible. But it also seems like it might be unnecessary.
I'm aware of at least one case where a customer wrote a "delete file at
head of queue; repeat until statvfs
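Roughly that sort of loop, sketched in shell (the /queue path and the 1 GB
threshold are made up for illustration):

    # delete the oldest file in the queue until at least 1 GB is free
    while [ `df -k /queue | awk 'NR==2 {print $4}'` -lt 1048576 ]; do
        rm "/queue/`ls -tr /queue | head -1`"
    done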
[EMAIL PROTECTED] said:
> There's no reason at all why you can't do this. The only thing preventing
> most file systems from taking advantage of "adjustable" replication is that
> they don't have the integrated volume management capabilities that ZFS does.
And in fact, Sun's own QFS can do this,
> Of course when it's time to upgrade you can always
> just call sun and get a Thumper on a "Try before you
> Buy" - and use it as a temporary storage space for
> your files while you re-do your raidz/raidz2 virtual
> device from scratch with an additional disk. zfs
> send/zfs receive here I come..
How could I monitor ZFS,
or the zpool activity?
I want to know if anything is going wrong.
If I could receive those warnings by email, it would be great :)
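A minimal sketch of one way this could be wired up (plain cron plus mailx;
the address is a placeholder):

    #!/bin/sh
    # run from cron; zpool status -x prints just "all pools are healthy"
    # when nothing is wrong
    STATUS=`zpool status -x`
    if [ "$STATUS" != "all pools are healthy" ]; then
        echo "$STATUS" | mailx -s "zpool warning on `hostname`" admin@example.com
    fi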
Martin
Dennis Clarke wrote:
whoa whoa ... just one bloody second .. whoa ..
That looks like a real nasty bug description there.
What are the details on that? Is this particular to a given system or
controller config or something like that, or are we talking global to Solaris
10 Update 2 everywhere?
Joseph Mocker wrote:
Today I attempted to upgrade to S10_U2 and migrate some mirrored UFS SVM
partitions to ZFS.
I used Live Upgrade to migrate from U1 to U2 and that went without a
hitch on my SunBlade 2000. And the initial conversion of one side of the
UFS mirrors to a ZFS pool and subseq
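The one-side conversion goes roughly like this, for the record (a sketch;
metadevice and disk names are made up):

    metadetach d10 d12            # pull one submirror out of the SVM mirror
    zpool create tank c1t1d0s7    # build a pool on the freed slice
    cd /export/home && find . | cpio -pdm /tank/home   # copy the data across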
Hello,
When files are created that are <= 512 bytes using a raidz pool, how are
full-stripe writes performed?
-thanks,
-Dave
There's no reason at all why you can't do this. The only thing preventing most
file systems from taking advantage of “adjustable” replication is that they
don’t have the integrated volume management capabilities that ZFS does.
ZFS already allows its storage pools to contain multiple types of blo
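One visible corollary of that integrated volume management: a single pool can
already mix top-level vdev types if you force it (a sketch, not a
recommendation):

    # -f overrides the mismatched-replication-level warning
    zpool create -f tank mirror c0t0d0 c0t1d0 raidz c1t0d0 c1t1d0 c1t2d0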
On Thu, 2006-07-13 at 11:42 -0700, Richard Elling wrote:
> [in brainstorming mode, sans coffee so far this morning]
>
> Better yet, buy two disks, say 500 GByte. Need more space, replace
> them with 750 GByte, because by then the price of the 750 GByte disks
> will be as low as the 250 GByte disk
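> The swap itself is just a pair of replaces (sketch; device names made up -
> the extra capacity shows up once both halves are bigger, the exact trigger
> varying by release):
>
>     zpool replace tank c0t0d0 c1t0d0   # resilver onto one 750 GByte disk
>     zpool replace tank c0t1d0 c1t1d0   # then the other side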
> Woo hoo! It looks like the resilver completed sometime overnight. The
> system appears to be running normally, (after one final reboot):
>
> [EMAIL PROTECTED]: zpool status
>   pool: storage
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME        ST
David Abrahams wrote:
David Dyer-Bennet <[EMAIL PROTECTED]> writes:
Adam Leventhal <[EMAIL PROTECTED]> writes:
I'm not sure I even agree with the notion that this is a real
problem (and if it is, I don't think is easily solved). Stripe
widths are a function of the expected failure rate and
comfortable with having 2 parity drives for 12 disks,
the thread-starting config of 4 disks per controller(?):
zpool create tank raidz2 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c2t1d0 c2t2d0
then later
zpool add tank raidz2 c2t3d0 c2t4d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0
as described, doubles one's IOPS,
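A quick way to watch the load spread across both raidz2 vdevs afterwards:

    zpool iostat -v tank 5   # per-vdev I/O statistics, sampled every 5 seconds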
Jeff Bonwick said:
> RAID-Z takes a different approach. We were designing a filesystem
> as well, so we could make the block pointers as semantically rich
> as we wanted. To that end, the block pointers in ZFS contain data
> layout information. One nice side effect of this is that we don't
> n
Infrant NAS box and using their X-RAID instead.
I've gone back to solaris from an Infrant box.
1) while the Infrant cpu is sparc, it's way, way slow.
a) the web UI takes 3-5 seconds per page
b) any local process, rsync, UPnP, SlimServer is cpu starved
2) like a netapp, its
Of course when it's time to upgrade you can always just call sun and get a
Thumper on a "Try before you Buy" - and use it as a temporary storage space for
your files while you re-do your raidz/raidz2 virtual device from scratch with
an additional disk. zfs send/zfs receive here I come.
Not
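That shuffle would look roughly like this (a sketch; the host, pool, and
dataset names are placeholders):

    zfs snapshot tank/data@migrate
    zfs send tank/data@migrate | ssh thumper zfs receive scratch/data
    # destroy and re-create the raidz2 with the extra disk, then pull it back:
    ssh thumper zfs send scratch/data@migrate | zfs receive tank/data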
David Dyer-Bennet <[EMAIL PROTECTED]> writes:
> Adam Leventhal <[EMAIL PROTECTED]> writes:
>
>> I'm not sure I even agree with the notion that this is a real
>> problem (and if it is, I don't think is easily solved). Stripe
>> widths are a function of the expected failure rate and fault domains
>>
Jeff Bonwick <[EMAIL PROTECTED]> writes:
> The main issues are administrative. ZFS is all about ease of use
> (when it's not busy being all about data integrity), so getting the
> interface to be simple and intuitive is important -- and not as
> simple as it sounds. If your free disk space might
On Thu, Jul 13, 2006 at 09:44:18AM -0500, Al Hopper wrote:
> On Thu, 13 Jul 2006, David Dyer-Bennet wrote:
>
> > Adam Leventhal <[EMAIL PROTECTED]> writes:
> >
> > > I'm not sure I even agree with the notion that this is a real
> > > problem (and if it is, I don't think is easily solved). Stripe
>
David Dyer-Bennet wrote:
It's easy to corrupt the volume, though -- just copy random data over
*two* disks of a RAIDZ volume. Okay, you have to either do the whole
volume, or get a little lucky to hit both copies of some piece of
information before you get corruption. Or pull two disks out of t
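The demo in question is essentially this (DESTRUCTIVE - a sketch only, with
made-up device names):

    # scribble random data over two members of a single-parity raidz;
    # it survives one of these, but not both
    dd if=/dev/urandom of=/dev/dsk/c1t1d0s0 bs=1024k count=100
    dd if=/dev/urandom of=/dev/dsk/c1t2d0s0 bs=1024k count=100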
On Thu, 13 Jul 2006, David Dyer-Bennet wrote:
> Adam Leventhal <[EMAIL PROTECTED]> writes:
>
> > I'm not sure I even agree with the notion that this is a real
> > problem (and if it is, I don't think is easily solved). Stripe
> > widths are a function of the expected failure rate and fault domains
Dennis Clarke wrote:
Today I attempted to upgrade to S10_U2 and migrate some mirrored UFS SVM
partitions to ZFS.
I used Live Upgrade to migrate from U1 to U2 and that went without a
hitch on my SunBlade 2000. And the initial conversion of one side of the
UFS mirrors to a ZFS pool and subsequent
"Dick Davies" <[EMAIL PROTECTED]> writes:
> On 13/07/06, Yacov Ben-Moshe <[EMAIL PROTECTED]> wrote:
> > How can I remove a device or a partition from a pool?
> > NOTE: The devices are not mirrored or raidz
>
> Then you can't - there isn't a 'zfs remove' command yet.
Yeah, I ran into that in my t
I am seeing the same behavior on my SunBlade 2500 while running firefox. I
think my disks are
quieter than yours though, because I don't really notice the difference that
much.
Adam Leventhal <[EMAIL PROTECTED]> writes:
> I'm not sure I even agree with the notion that this is a real
> problem (and if it is, I don't think is easily solved). Stripe
> widths are a function of the expected failure rate and fault domains
> of the system which tend to be static in nature. A co
Luke Scharf <[EMAIL PROTECTED]> writes:
> As for the claims, I don't buy that it's impossible to corrupt a ZFS
> volume. I've replicated the demo where the guy dd's /dev/urandom
> over part of the disk, and I believe that works -- but there are a
> lot of other ways to corrupt a filesystem in the
David Abrahams wrote:
I've seen people wondering if ZFS was a scam because the claims just
seemed too good to be true. Given that ZFS *is* really great, I don't
think it would hurt to prominently advertise limitations like this one;
it would probably benefit credibility considerably, and it's a r
On 7/13/06, Darren Reed <[EMAIL PROTECTED]> wrote:
When ZFS compression is enabled, although the man page doesn't
explicitly say this, my guess is that only new data that gets
written out is compressed - in keeping with the COW policy.
[ ... ]
Hmmm, well, I suppose the same problem might appl
On 7/13/06, Darren Reed <[EMAIL PROTECTED]> wrote:
When ZFS compression is enabled, although the man page doesn't
explicitly say this, my guess is that only new data that gets
written out is compressed - in keeping with the COW policy.
This is all well and good, if you enable compression when yo
If it was possible to implement raidz/raidz2 expansion it would be a big
feature in favor of ZFS. Most hardware RAID controllers have the ability to
expand a raid pool - some have to take the raid array offline, but the ones I
work with generally do it online, although you are forced to suffer t
When ZFS compression is enabled, although the man page doesn't
explicitly say this, my guess is that only new data that gets
written out is compressed - in keeping with the COW policy.
This is all well and good, if you enable compression when you
create the ZFS filesystem. If I enable compressio
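If that guess is right, old data stays uncompressed until rewritten, so
something like this sketch would be needed to squeeze existing files:

    zfs set compression=on tank/home   # only affects blocks written from now on
    # rewrite a file in place so its blocks take the compressed path
    cp -p somefile somefile.tmp && mv somefile.tmp somefile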
Hi,
after switching over to zfs from ufs for my ~/ at home, I am a little bit
disturbed by the noise the disks are making. To be more precise, I always have
thunderbird and firefox running on my desktop and either or both seem to be
writing to my ~/ at short intervals and ZFS flushes these tran
On 13/07/06, Yacov Ben-Moshe <[EMAIL PROTECTED]> wrote:
How can I remove a device or a partition from a pool?
NOTE: The devices are not mirrored or raidz
Then you can't - there isn't a 'zfs remove' command yet.
--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net
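What does exist today only helps with redundant vdevs (a sketch; device names
made up):

    zpool detach tank c0t1d0           # drop one side of a mirror
    zpool replace tank c0t1d0 c2t0d0   # swap a device for a different one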
How can I remove a device or a partition from a pool?
NOTE: The devices are not mirrored or raidz
Thanks
> Maybe this is a dumb question, but I've never written a
> filesystem - is there a fundamental reason why you cannot have
> some files mirrored, with others as raidz, and others with no
> resilience? This would allow a pool to initially exist on one
> disk, then gracefully change between different r
> > I guess that could be made to work, but then the data on
> > the disk becomes much (much much) more difficult to
> > interpret because you have some rows which are effectively
> > one width and others which are another (ad infinitum).
>
> How do rows come into it? I was just assuming that