I have a script which generates a file and then immediately uses 'du
-h' to obtain its size. With Solaris 10 I notice that this often
returns an incorrect value of '0' as if ZFS is lazy about reporting
actual disk use. Meanwhile, 'ls -l' does report the correct size.
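A minimal sketch of the symptom, with made-up file and dataset names; du catches
up once the pending transaction group is written out, so running sync (or simply
waiting a few seconds) should make it agree with ls:

  dd if=/dev/zero of=/tank/test/outfile bs=1024k count=100
  ls -l /tank/test/outfile   # logical size is correct immediately
  du -h /tank/test/outfile   # may show 0 until the blocks are actually allocated
  sync                       # push the pending transaction group to disk
  du -h /tank/test/outfile   # should now show the on-disk size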
Bob
Vincent Fox wrote:
> Let's say you are paranoid and have built a pool with 40+ disks in a Thumper.
>
> Is there a way to set metadata copies=3 manually?
>
> After having built RAIDZ2 sets with 7-9 disks and then pooled these together,
> it just seems like a little bit of extra insurance to increase metadata copies.
Mike Gerdts wrote:
> On Feb 15, 2008 2:31 PM, Dave <[EMAIL PROTECTED]> wrote:
>
>> This is exactly what I want - Thanks!
>>
>> This isn't in the man pages for zfs or zpool in b81. Any idea when this
>> feature was integrated?
>>
>
> Interesting... it is in b76. I checked several other rele
Hey, Richard -
I'm confused now.
My understanding was that any files created after the recordsize was set
would use that as the new maximum recordsize, but files already created
would continue to use the old recordsize.
Though I'm now a little hazy on what will happen when the new existing
fi
What about new blocks written to an existing file?
Perhaps we could make that clearer in the manpage too...
hm.
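To make this concrete, a small sketch (the dataset name is hypothetical): the
recordsize property only caps blocks written after it is changed, so files whose
blocks were already written at 128K keep those blocks.

  zfs get recordsize tank/data        # current cap, 128K by default
  zfs set recordsize=8k tank/data     # applies to blocks written from now on
  cp /var/tmp/bigfile /tank/data/new  # this newly written file uses 8K records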
Mattias Pantzare wrote:
>> >
>> > If you created them after, then no worries, but if I understand
>> > correctly, if the *file* was created with 128K recordsize, then it'll
>> > keep that forever...
Me again,
Thanks for all the previous help - my 10 disk RAIDZ2 is running mostly great.
Just ran into a problem though:
I have the RAIDz2 partition mounted to OS X via smb and I can upload OR
download data to it just fine; however, if I start an upload and then start a
download, the upload fails and s
Anyone have a pointer to a general ZFS health/monitoring module for
SunMC? There isn't one baked into SunMC proper which means I get to
write one myself if someone hasn't already done it.
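Not a SunMC module, but in the meantime a module (or cron job) could wrap
something as simple as the sketch below; it assumes the "all pools are healthy"
summary line that zpool status -x prints when nothing is wrong:

  #!/bin/sh
  # Print details and exit non-zero if any pool is degraded or faulted.
  out=`zpool status -x`
  if [ "$out" != "all pools are healthy" ]; then
      echo "$out"
      exit 1
  fi
  exit 0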
Thanks.
> >
> > If you created them after, then no worries, but if I understand
> > correctly, if the *file* was created with 128K recordsize, then it'll
> > keep that forever...
>
>
> Files have nothing to do with it. The recordsize is a file system
> parameter. It gets a little more complicated be
On Feb 15, 2008 2:31 PM, Dave <[EMAIL PROTECTED]> wrote:
> This is exactly what I want - Thanks!
>
> This isn't in the man pages for zfs or zpool in b81. Any idea when this
> feature was integrated?
Interesting... it is in b76. I checked several other releases both
before and after and they didn'
The segment size is amount of contiguous space that each drive contributes to a
single stripe.
So if you have a 5 drive RAID-5 set @ 128k segment size, a single stripe =
(5-1)*128k = 512k
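As a worked illustration with the 2540 numbers from this thread: twelve drives
in RAID-0 at a 128k segment size give a full stripe of 12 * 128k = 1536k, and an
11+1 RAID-5 gives (12-1) * 128k = 1408k of data per stripe. A 128K ZFS record
that lands on a segment boundary therefore fits entirely within one drive's
segment.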
BTW, Did you tweak the cache sync handling on the array?
-Joel
On Fri, 15 Feb 2008, Albert Chin wrote:
>
> http://groups.google.com/group/comp.unix.solaris/browse_frm/thread/59b43034602a7b7f/0b500afc4d62d434?lnk=st&q=#0b500afc4d62d434
This is really discouraging. Based on these newsgroup postings I am
thinking that the Sun StorageTek 2540 was not a good inv
Bob Friesenhahn wrote:
> On Fri, 15 Feb 2008, Luke Lonergan wrote:
>
>>> I only managed to get 200 MB/s write when I did RAID 0 across all
>>> drives using the 2540's RAID controller and with ZFS on top.
>>>
>> Ridiculously bad.
>>
>
> I agree. :-(
>
>
>>> While I agree that data
Nathan Kroenert wrote:
> And something I was told only recently - It makes a difference if you
> created the file *before* you set the recordsize property.
Actually, it has always been true for RAID-0, RAID-5, RAID-6.
If your I/O strides over two sets then you end up doing more I/O,
perhaps twice
On Fri, 15 Feb 2008, Luke Lonergan wrote:
>> I only managed to get 200 MB/s write when I did RAID 0 across all
>> drives using the 2540's RAID controller and with ZFS on top.
>
> Ridiculously bad.
I agree. :-(
>> While I agree that data is sent twice (actually up to 8X if striping
>> across four
Hi Bob,
On 2/15/08 12:13 PM, "Bob Friesenhahn" <[EMAIL PROTECTED]> wrote:
> I only managed to get 200 MB/s write when I did RAID 0 across all
> drives using the 2540's RAID controller and with ZFS on top.
Ridiculously bad.
You should max out both FC-AL links and get 800 MB/s.
> While I agree
On Fri, Feb 15, 2008 at 09:00:05PM +, Peter Tribble wrote:
> On Fri, Feb 15, 2008 at 8:50 PM, Bob Friesenhahn
> <[EMAIL PROTECTED]> wrote:
> > On Fri, 15 Feb 2008, Peter Tribble wrote:
> > >
> > > May not be relevant, but still worth checking - I have a 2530 (which ought
> > > to be the same, only SAS instead of FC), and got fairly poor performance
On Fri, 15 Feb 2008, Bob Friesenhahn wrote:
>
> Notice that the first six LUNs are active to one controller while the
> second six LUNs are active to the other controller. Based on this, I
> should rebuild my pool by splitting my mirrors across this boundary.
>
> I am really happy that ZFS makes s
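A sketch of the rebuilt layout (the LUN names are placeholders; the idea is to
pair one LUN that is active on each controller in every mirror):

  zpool create tank \
      mirror c4t0d0 c4t6d0 \
      mirror c4t1d0 c4t7d0 \
      mirror c4t2d0 c4t8d0
      # ...and so on for the remaining LUN pairs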
On Fri, 15 Feb 2008, Peter Tribble wrote:
> Each LUN is accessed through only one of the controllers (I presume the
> 2540 works the same way as the 2530 and 61X0 arrays). The paths are
> active/passive (if the active fails it will relocate to the other path).
> When I set mine up the first time i
On Fri, Feb 15, 2008 at 8:50 PM, Bob Friesenhahn
<[EMAIL PROTECTED]> wrote:
> On Fri, 15 Feb 2008, Peter Tribble wrote:
> >
> > May not be relevant, but still worth checking - I have a 2530 (which ought
> > to be the same, only SAS instead of FC), and got fairly poor performance
> > at first. T
[EMAIL PROTECTED] said:
> I also tried using O_DSYNC, which stops the pathological behaviour but makes
> things pretty slow - I only get a maximum of about 20MBytes/sec, which is
> obviously much less than the hardware can sustain.
I may misunderstand this situation, but while you're waiting for
On Fri, 15 Feb 2008, Peter Tribble wrote:
>
> May not be relevant, but still worth checking - I have a 2530 (which ought
> to be the same, only SAS instead of FC), and got fairly poor performance
> at first. Things improved significantly when I got the LUNs properly
> balanced across the controller
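For anyone wanting to check how their LUNs are spread, the MPxIO view is easy
to get (the device name below is a placeholder):

  mpathadm list lu                      # one entry per LUN, with path counts
  mpathadm show lu /dev/rdsk/c4t0d0s2   # per-path details, including which path is active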
On Fri, Feb 15, 2008 at 12:30 AM, Bob Friesenhahn
<[EMAIL PROTECTED]> wrote:
> Under Solaris 10 on a 4 core Sun Ultra 40 with 20GB RAM, I am setting
> up a Sun StorageTek 2540 with 12 300GB 15K RPM SAS drives and
> connected via load-shared 4Gbit FC links. This week I have tried many
> differen
On Fri, 15 Feb 2008, Luke Lonergan wrote:
I'm assuming you're measuring sequential write speed - posting the iozone
results would help guide the discussion.
Posted below. I am also including the output from mpathadm in case
there is something wrong with the load sharing.
For the configura
Hi Bob,
I'm assuming you're measuring sequential write speed - posting the iozone
results would help guide the discussion.
For the configuration you describe, you should definitely be able to sustain
200 MB/s write speed for a single file, single thread due to your use of
4Gbps Fibre Channel inte
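For reference, a single-threaded sequential write/read run along those lines
might look like the sketch below (the file path is made up, and the file size
should exceed RAM - 32g here given the 20GB of memory mentioned elsewhere in
the thread - so the ARC doesn't mask the array's speed):

  iozone -e -i 0 -i 1 -r 128k -s 32g -f /tank/iozone.tmp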
On Fri, 15 Feb 2008, Roch Bourbonnais wrote:
>>> What was the interlace on the LUN ?
>
> The question was about LUN interlace, not interface.
> 128K to 1M works better.
The "segment size" is set to 128K. The max the 2540 allows is 512K.
Unfortunately, the StorageTek 2540 and CAM documentation do
On Feb 15, 2008, at 18:24, Bob Friesenhahn wrote:
> On Fri, 15 Feb 2008, Roch Bourbonnais wrote:
>>>
>>> As mentioned before, the write rate peaked at 200MB/second using
>>> RAID-0 across 12 disks exported as one big LUN.
>>
>> What was the interlace on the LUN ?
>
The question was about LUN interlace, not interface.
On Fri, 15 Feb 2008, Roch Bourbonnais wrote:
>> The latter appears to be bug 6429855. But the underlying behaviour
>> doesn't really seem desirable; are there plans afoot to do any work on
>> ZFS write throttling to address this kind of thing?
>
> Throttling is being addressed.
>
> http://bug
On Fri, 15 Feb 2008, Roch Bourbonnais wrote:
>>
>> As mentioned before, the write rate peaked at 200MB/second using
>> RAID-0 across 12 disks exported as one big LUN.
>
> What was the interlace on the LUN ?
There are two 4Gbit FC interfaces on an Emulex LPe11002 card which are
supposedly acting
Oops, I forgot a step. I also upgraded the zpool in snv79b before I
tried the remove. It is now version 10.
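For the record, the sequence I would expect to work once the pool is on
new-enough bits (the pool and device names are placeholders):

  zpool upgrade tank          # bring the on-disk version up to the running bits
  zpool status tank           # the spare should be listed under 'spares'
  zpool remove tank c2t3d0    # hot spares (and cache devices) can be removed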
On 2/15/08, Christopher Gibbs <[EMAIL PROTECTED]> wrote:
> The pool was exported from snv_73 and the spare was disconnected from
> the system. The OS was upgraded to snv_79b (SXDE 1/08) and
The pool was exported from snv_73 and the spare was disconnected from
the system. The OS was upgraded to snv_79b (SXDE 1/08) and the pool
was re-imported.
I think this weekend I'll try connecting a different drive to that
controller and see if it will remove then.
Thanks for your help.
On 2/15/0
Let's say you are paranoid and have built a pool with 40+ disks in a Thumper.
Is there a way to set metadata copies=3 manually?
After having built RAIDZ2 sets with 7-9 disks and then pooled these together,
it just seems like a little bit of extra insurance to increase metadata copies.
I don't
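As far as I know there is no knob for metadata copies alone: metadata already
gets ditto blocks automatically, and (as I understand it) raising the copies
property on a dataset also bumps its metadata copies by one, capped at three.
A sketch, with a made-up dataset name:

  zfs set copies=2 tank/important   # new data gets 2 copies; its metadata gets 3
  zfs get copies tank/important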
Nathan Kroenert wrote:
> And something I was told only recently - It makes a difference if you
> created the file *before* you set the recordsize property.
>
> If you created them after, then no worries, but if I understand
> correctly, if the *file* was created with 128K recordsize, then it'll keep that forever...
Ross wrote:
> I thought that too, but actually, I'm not sure you can. You can stripe
> multiple mirror or raid sets with zpool create, but I don't see any
> documentation or examples for mirroring a raid set.
>
Split the USB disk in half, then mirror each IDE disk to a USB disk half.
> However, in this case even if you could, you might not want to.
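A sketch of that layout (the device names are made up: c1d0 and c2d0 for the
two 150GB IDE disks, c3t0d0s0 and c3t0d0s1 for two roughly 150GB slices carved
out of the 300GB USB disk with format):

  zpool create tank \
      mirror c1d0 c3t0d0s0 \
      mirror c2d0 c3t0d0s1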
On Thu, Feb 14, 2008 at 11:17 PM, Dave <[EMAIL PROTECTED]> wrote:
> I don't want Solaris to import any pools at bootup, even when there were
> pools imported at shutdown/at crash time. The process to prevent
> importing pools should be automatic and not require any human
> intervention. I want
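On bits that have the pool cachefile property (possibly the b76 feature
referenced elsewhere in this digest, though that is an assumption), one
approach is to keep the pool out of /etc/zfs/zpool.cache so boot never sees it
(the pool name is a placeholder):

  zpool set cachefile=none tank         # stop recording this pool in the boot-time cache
  zpool import -o cachefile=none tank   # or set it at import time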
On 2/15/08, Roch Bourbonnais <[EMAIL PROTECTED]> wrote:
>
> On Feb 15, 2008, at 11:38, Philip Beevers wrote:
>
[...]
> > Obviously this isn't good behaviour, but it's particularly unfortunate
> > given that this checkpoint is stuff that I don't want to retain in any
> > kind of cache anyway - i
I thought that too, but actually, I'm not sure you can. You can stripe
multiple mirror or raid sets with zpool create, but I don't see any
documentation or examples for mirroring a raid set.
However, in this case even if you could, you might not want to. Creating a
stripe that way will restri
Hi Roch,
Thanks for the response.
> Throttling is being addressed.
>
>
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6429205
>
>
> BTW, the new code will adjust write speed to disk speed very quickly.
> You will not see those ultra fast initial checkpoints. Is
> this a c
On Feb 15, 2008, at 11:38, Philip Beevers wrote:
> Hi everyone,
>
> This is my first post to zfs-discuss, so be gentle with me :-)
>
> I've been doing some testing with ZFS - in particular, in
> checkpointing
> the large, proprietary in-memory database which is a key part of the
> application I
On Feb 10, 2008, at 12:51, Robert Milkowski wrote:
> Hello Nathan,
>
> Thursday, February 7, 2008, 6:54:39 AM, you wrote:
>
> NK> For kicks, I disabled the ZIL: zil_disable/W0t1, and that made
> not a
> NK> pinch of difference. :)
>
> Have you exported and then imported the pool to get zil_disable
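For reference, the usual sequence (the pool name is a placeholder); the tunable
has generally needed a remount to take effect, hence the export/import question
above:

  echo 'zil_disable/W0t1' | mdb -kw        # flip the tunable in the running kernel
  zpool export tank && zpool import tank   # remount so the datasets pick it up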
Hi everyone,
This is my first post to zfs-discuss, so be gentle with me :-)
I've been doing some testing with ZFS - in particular, in checkpointing
the large, proprietary in-memory database which is a key part of the
application I work on. In doing this I've found what seems to be some
fairly unh
On Feb 15, 2008, at 03:34, Bob Friesenhahn wrote:
> On Thu, 14 Feb 2008, Tim wrote:
>>
>> If you're going for best single file write performance, why are you
>> doing
>> mirrors of the LUNs? Perhaps I'm misunderstanding why you went
>> from one
>> giant raid-0 to what is essentially a raid-1
On Feb 14, 2008, at 02:22, Marion Hakanson wrote:
> [EMAIL PROTECTED] said:
>> It's not that old. It's a Supermicro system with a 3ware 9650SE-8LP.
>> Open-E iSCSI-R3 DOM module. The system is plenty fast. I can pretty
>> handily pull 120MB/sec from it, and write at over 100MB/sec. It
>> f
Hi,
I'm trying to boot an HP DL360 G5 via iSCSI from a Solaris 10 u4 ZFS device,
but it's failing the login at boot:
POST messages from the dl360:
Starting iSCSI boot option rom initialization...
Connecting.connected.
Logging in...error - failing.
Interestingly (and correctly) the auth
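Not sure about the HBA side, but on the Solaris end it is worth double-checking
how the target is set up and what it reports (the dataset name is made up; this
assumes the boot LUN is a zvol shared via shareiscsi or iscsitadm):

  zfs get shareiscsi tank/hpboot   # is the zvol actually being shared?
  iscsitadm list target -v         # per-target details: name, connections, ACL list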
Using ZFS of course *g*
Hi everybody,
thanks for a very good source of information! I hope maybe you guys can help
out a little.
I have 3 disks: one 300GB USB and two 150GB IDE. I would like to get the most
space out of whatever configuration I apply. So I've been thinking (and
testing without success): is it at all