On Fri, May 28, 2010 at 00:56, Marc Bevand wrote:
> Giovanni Tirloni (sysdroid.com) writes:
>> The chassis has 4 columns of 6 disks. The 18 disks I was testing were
>> all on columns #1 #2 #3.
>
> Good, so this confirms my estimations. I know you said the current
> ~810 MB/s are amply sufficient for your needs. Spreading the 18 drives
> across all 4 port…
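For rough numbers: if throughput scales with the number of backplane columns,
~810 MB/s from three columns works out to roughly 270 MB/s per column, so
spreading the same 18 drives across all four columns could approach ~1080 MB/s.
That is only a back-of-envelope estimate, and it assumes the columns, not the
HBA or the PCIe link, are the bottleneck.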
Brandon High wrote:
On Thu, May 27, 2010 at 1:02 PM, Cassandra Pugh wrote:
> I was wondering if there is a special option to share out a set of nested
> directories? Currently if I share out a directory with /pool/mydir1/mydir2
> on a system, mydir1 shows up, and I can see mydir2, but nothing in mydir2. …
On 5/27/2010 9:30 PM, Reshekel Shedwitz wrote:
> Some tips:
>
> (1) Do a "zfs mount -a" and a "zfs share -a", just in case something didn't
> get shared out correctly (though that's supposed to happen automatically,
> I think).
>
> (2) The Solaris automounter (i.e. in a NIS environment) does not seem to
> automatically mount descendant filesystems (i.e. if the …
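A minimal sketch of tip (1) as a root shell session; the final line is an
extra sanity check, not part of the original tip, and uses the dataset name
from the question:

    # remount and reshare everything, in case something was missed at boot
    zfs mount -a
    zfs share -a

    # sanity check: confirm the datasets actually carry a share property
    zfs get -r sharenfs pool/mydir1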
On Thu, May 27, 2010 at 1:02 PM, Cassandra Pugh wrote:
> I was wondering if there is a special option to share out a set of nested
> directories? Currently if I share out a directory with
> /pool/mydir1/mydir2
> on a system, mydir1 shows up, and I can see mydir2, but nothing in
> mydir2.
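To make the setup concrete, a hedged sketch of what the question describes,
using the names from the post ("server" is a stand-in hostname). The client
side assumes NFSv3, where a mount does not cross into a child dataset's mount
point, which is exactly why mydir2 looks empty:

    # server: mydir1 and mydir2 are separate datasets, each shared
    zfs create pool/mydir1
    zfs create pool/mydir1/mydir2
    zfs set sharenfs=on pool/mydir1
    zfs set sharenfs=on pool/mydir1/mydir2

    # client (NFSv3): each dataset must be mounted explicitly, otherwise
    # mydir2 appears as an empty directory under the mydir1 mount
    mount -F nfs server:/pool/mydir1 /mnt/mydir1
    mount -F nfs server:/pool/mydir1/mydir2 /mnt/mydir1/mydir2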
On Thu, May 27, 2010 at 2:16 AM, Per Jorgensen wrote:
> is there a way i can get c9t8d0 out of the pool, or how do i get the pool
> back to optimal redundancy?

It's not possible to remove vdevs right now. When the mythical
bp_rewrite shows up, then you can.

For now, the only thing you can do …
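Presumably the cut-off sentence goes on to the usual mitigation: the stray
vdev cannot be removed, but a second disk can be attached to it so it at
least becomes a two-way mirror. A sketch, with c9t9d0 standing in for
whatever spare disk is available:

    # turn the lone top-level vdev into a mirror; it still can't be
    # removed, but it is no longer a single point of failure
    zpool attach blmpool c9t8d0 c9t9d0
    zpool status blmpool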
> > Hi all,
> >
> > Since Windows Server 2003 or so, Windows has had some versioning support,
> > usable from the client side by checking the properties of a file. Is it
> > somehow possible to use this functionality with ZFS snapshots?

http://blogs.sun.com/amw/entry/using_the_previous_versions_tab ;)
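The linked post has the details; in outline (a hedged sketch, dataset name
illustrative), the in-kernel CIFS service exposes ZFS snapshots to Windows
clients, so on the server side regular snapshots are all that is needed:

    # share the dataset over SMB and take periodic snapshots; the
    # Windows "Previous Versions" tab then lists the snapshots
    zfs set sharesmb=on tank/docs
    zfs snapshot tank/docs@friday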
On 27/05/2010 07:11, Jeff Bonwick said the following:
> You can set metaslab_gang_bang to (say) 8k to force lots of gang block
> allocations.

Bill, Jeff, thanks a lot! This helped to reproduce the issue and find the bug.
Just in case:
http://www.freebsd.org/cgi/query-pr.cgi?pr=bin/144214
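For anyone wanting to reproduce this on Solaris, one common way to set such a
tunable persistently is via /etc/system; this is an assumption about the usual
mechanism, not something stated in the thread (the FreeBSD report above would
use its own sysctl/loader machinery):

    # force gang-block behavior for allocations >= 8k (value in bytes);
    # /etc/system changes take effect at the next boot
    echo 'set zfs:metaslab_gang_bang = 8192' >> /etc/system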
On Wed, May 26, 2010 at 6:09 PM, Giovanni Tirloni wrote:
> On Wed, May 26, 2010 at 9:22 PM, Brandon High wrote:
>>
>> I'd wager it's the PCIe x4. That's about 1000 MB/s raw bandwidth, about
>> 800 MB/s after overhead.
>
> Makes perfect sense. I was calculating the bottlenecks using the
> full-duplex…
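For reference, the arithmetic behind those figures: PCIe 1.0 signals at
2.5 GT/s per lane, and 8b/10b encoding leaves 250 MB/s of data bandwidth per
lane per direction, so an x4 link carries 4 x 250 = 1000 MB/s raw each way.
Packet (TLP) and protocol overhead trims that to roughly 800 MB/s of usable
payload bandwidth. The link is full duplex, so that budget applies to each
direction independently - presumably the point the cut-off sentence was making.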
Cassandra,

Which Solaris release is this? This is working for me between a Solaris 10
server and an OpenSolaris client.

Nested mount points can be tricky, and I'm not sure if you are looking for
the mirror mount feature, which is not available in the Solaris 10 release,
where new directory conte…
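A small illustration of the distinction (hostnames and paths illustrative):
with the mirror-mount feature (NFSv4 on a recent OpenSolaris client), the
client crosses into child filesystems automatically; without it (e.g. a
Solaris 10 client), each child needs its own mount:

    # with mirror mounts: one mount, children appear on first access
    mount -F nfs -o vers=4 server:/pool/mydir1 /mnt
    ls /mnt/mydir2        # mounted on the fly by the client

    # without mirror mounts: the child must be mounted explicitly
    mount -F nfs server:/pool/mydir1/mydir2 /mnt/mydir2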
>> FWIW (even on a freshly booted system after a panic):
>> # lofiadm zyzzy.iso /dev/lofi/1
>> # mount -F hsfs /dev/lofi/1 /mnt
>> mount: /dev/lofi/1 is already mounted or /mnt is busy
>> # mount -O -F hsfs /dev/lofi/1 /mnt
>> # share /mnt
>> #
>>
>> If you unshare /mnt and then do this again…
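The snippet is cut off; based on the surrounding thread (a highly reproducible
panic when re-sharing a lofi-mounted image), the continuation is presumably
along these lines - a hedged reconstruction, not the original text:

    # after unsharing, repeating the mount/share cycle is what the
    # thread reports as the panic trigger on affected builds
    unshare /mnt
    umount /mnt
    mount -F hsfs /dev/lofi/1 /mnt
    share /mnt        # panic reported here on affected builds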
- "Cassandra Pugh" skrev:
I was wondering if there is a special option to share out a set of nested
directories? Currently if I share out a directory with /pool/mydir1/mydir2
on a system, mydir1 shows up, and I can see mydir2, but nothing in mydir2.
mydir1 and mydir2 are each a zfs fil
I share filesystems all the time this way, and have never had this problem.
My first guess would be a problem with NFS or directory permissions. You are
using NFS, right?

- Garrett

On 5/27/2010 1:02 PM, Cassandra Pugh wrote:
> I was wondering if there is a special option to share out a set of nested
> directories? Currently if I share out a directory with /pool/mydir1/mydir2
> on a system, mydir1 shows up, and I can see mydir2, but nothing in mydir2.
> mydir1 and mydir2 are each a zfs filesystem, each shared with …
Jan Kryl wrote:

> The bug (6798273) has been closed as incomplete with the following note:
>
>   "I cannot reproduce any issue with the given testcase on b137."
>
> So you should test this with b137 or a newer build. There have been some
> extensive changes going into the treeclimb_* functions, so the bug is
> probably fixed…
Hi Frank,

On 24/05/10 16:52 -0400, Frank Middleton wrote:
> Many many moons ago, I submitted a CR into bugs about a
> highly reproducible panic that occurs if you try to re-share
> a lofi-mounted image. That CR has AFAIK long since
> disappeared - I even forget what it was called.
>
> This …
On May 27, 2010, at 6:32 AM, Roy Sigurd Karlsbakk wrote:
> Hi all,
>
> Since Windows Server 2003 or so, Windows has had some versioning support,
> usable from the client side by checking the properties of a file. Is it
> somehow possible to use this functionality with ZFS snapshots?

Yes, there is some…
Hi all,

Since Windows Server 2003 or so, Windows has had some versioning support,
usable from the client side by checking the properties of a file. Is it
somehow possible to use this functionality with ZFS snapshots?
--
Vennlige hilsener / Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...
Edward Ned Harvey wrote:
>> From: sensille [mailto:sensi...@gmx.net]
>>
>> The only thing I'd like to point out is that ZFS doesn't do random
>> writes on a slog, but nearly linear writes. This might even be hurting
>> performance more than random writes, because you always hit the worst…
(resent because of received bounce)

Edward Ned Harvey wrote:
>> From: sensille [mailto:sensi...@gmx.net]
>
> So this brings me back to the question I indirectly asked in the middle
> of a much longer previous email - is there some way, in software, to
> detect the current position of the head? If not, …
On 05/27/10 09:16 PM, Per Jorgensen wrote:
> thanks for the quick responses, and yes the history shows just what you
> said :( is there a way i can get c9t8d0 out of the pool, or how do i get
> the pool back to optimal redundancy?

No, you will have to destroy the pool and start over. Or if that…
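A hedged sketch of the destroy-and-recreate path (names and layout
illustrative; assumes a second pool to replicate the data to first, since
destroy is irreversible):

    # replicate the whole pool elsewhere, then rebuild it with the
    # intended redundancy and send the data back afterwards
    zfs snapshot -r blmpool@migrate
    zfs send -R blmpool@migrate | zfs receive -F backup/blmpool
    zpool destroy blmpool
    zpool create blmpool raidz c9t2d0 c9t3d0 c9t4d0 c9t8d0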
On Thu, May 27, 2010 at 2:39 AM, Marc Bevand wrote:
> Hi,
>
> Brandon High (freaks.com) writes:
>>
>> I only looked at the Megaraid that he mentioned, which has a PCIe
>> 1.0 x4 interface, or 1000 MB/s.
>
> You mean an x8 interface (theoretically plugged into that x4 slot below...)
>
>> The board…
thanks for the quick responses, and yes the history shows just what you said :(
is there a way i can get c9t8d0 out of the pool, or how do i get the pool back
to optimal redundancy?
Neil Perrin wrote:
> Yes, I agree this seems very appealing. I have investigated and observed
> similar results: just allocating larger intent log blocks but only writing
> to, say, the first half of them shows the same effect. Despite the
> impressive results, we have not pursued this further, mainly…
On May 27, 2010, at 12:37 PM, Per Jorgensen wrote:
> I get the following output when i run a zpool status, but i am a little
> confused about why c9t8d0 is more "left-aligned" than the rest of the
> disks in the pool, what does it mean?

It means that it is another top-level vdev in your pool.
On 27 May, 2010 - Per Jorgensen sent me these 1,0K bytes:

> I get the following output when i run a zpool status, but i am a little
> confused about why c9t8d0 is more "left-aligned" than the rest of the
> disks in the pool, what does it mean?

Because someone forced it in without redundancy (or cre…
I get the following output when i run a zpool status, but i am a little
confused about why c9t8d0 is more "left-aligned" than the rest of the disks
in the pool. what does it mean?

$ zpool status blmpool
  pool: blmpool
 state: ONLINE
 scrub: none requested
config:
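The listing breaks off here in the archive. Based on the replies (c9t8d0
shows up as its own top-level vdev, at a shallower indent than the raidz
members), the configuration presumably looked something like this - a hedged
reconstruction, member disk names illustrative:

        NAME        STATE     READ WRITE CKSUM
        blmpool     ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c9t2d0  ONLINE       0     0     0
            c9t3d0  ONLINE       0     0     0
            c9t4d0  ONLINE       0     0     0
          c9t8d0    ONLINE       0     0     0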
Richard Elling wrote:
> On May 26, 2010, at 8:38 AM, Neil Perrin wrote:
>
>> On 05/26/10 07:10, sensille wrote:
>>> My idea goes as follows: don't write linearly. Track the rotation
>>> and write to the position the head will hit next. This might be done
>>> by a re-mapping layer or integrated into…