OK, I've been putting off this question for a while now, but it's been eating
at me, so I can't hold off any more. I have a nice 8 GB memory stick
I've formatted with the ZFS file system. It works great on all my Solaris
PCs, but refuses to work on my SPARC machine. So I've formatted it on
my SPARC
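A minimal sketch of the export/import steps a stick like this would normally go through when moving between machines, assuming a pool named "stick" (the pool name is a placeholder):

  # on the machine that last used it, before pulling the stick
  zpool export stick

  # on the SPARC box: list pools visible for import, then import it
  zpool import
  zpool import stick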
> > > On 11/7/07, can you guess? [EMAIL PROTECTED] wrote:
> > However, ZFS is not the *only* open-source approach
> > which may allow that to happen, so the real question
> > becomes just how it compares with equally inexpensive
> > current and potential alternatives (and that would
James C. McPherson wrote:
> Got an issue which is rather annoying to me - three of my
> ZFS caches are regularly using nearly 1/2 of the 1.09 GB of
> allocated kmem in my system
...[snip]
Following suggestions from Andre and Rich that this was
probably the ARC, I've implemented a 256 MB limit for my
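For reference, one common way to cap the ARC is the zfs_arc_max tunable in /etc/system; the value below (256 MB) is only an example, assumes a release where the tunable exists, and takes effect after a reboot:

  * /etc/system -- cap the ARC at 256 MB (0x10000000 bytes); value is an example
  set zfs:zfs_arc_max = 0x10000000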
> > On 11/7/07, can you guess? [EMAIL PROTECTED] wrote:
> However, ZFS is not the *only* open-source approach
> which may allow that to happen, so the real question
> becomes just how it compares with equally inexpensive
> current and potential alternatives (and that would
> make for an inter
I can't decide if this is a dumb question or not (so I'll try asking it).
We have two Solaris machines (Solaris 08/07): one (x86) with a load of disks
attached and one (SPARC) without. I've configured a volume on the disk server
and made it available via iSCSI. Connected to that on the SPARC a
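A rough sketch of that setup, assuming the shareiscsi property available in Solaris 10 8/07 and the bundled iscsiadm initiator (pool names, volume size, IP address and device name are all placeholders):

  # on the x86 disk server: create a zvol and export it over iSCSI
  zfs create -V 100G tank/iscsivol
  zfs set shareiscsi=on tank/iscsivol

  # on the SPARC initiator: discover the target and build a pool on it
  iscsiadm add discovery-address 192.168.1.10
  iscsiadm modify discovery --sendtargets enable
  devfsadm -i iscsi
  zpool create remotepool c2t600144F04A2B1E3Cd0   # device name is a placeholder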
It's currently planned for integration into Nevada in the
build 82 or 83 time frame.
Lori
Jerry K wrote:
> I haven't seen anything about this recently, or I have missed it.
>
> Can anyone share what the current status of ZFS boot partition on Sparc is?
>
> Thanks,
>
> Jerry K
I haven't seen anything about this recently, or I have missed it.
Can anyone share what the current status of ZFS boot partition on Sparc is?
Thanks,
Jerry K
It seems my script got lost while editing/posting the message.
I'll try again attaching...
- Andreas
Attachment: test-zfs-clone.sh (Bourne shell script)
Why didn't this command just fail?
># zpool add tank c4t0d0
>invalid vdev specification
>use '-f' to override the following errors:
>mismatched replication level: pool uses raidz and new vdev is disk
I did not use '-f' and yet my configuration was changed. That was unexpected
behaviour.
Thanks
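If it helps anyone reproduce or sanity-check this, zpool add accepts a -n dry-run flag that prints the resulting layout without committing anything (pool and device names taken from the example above):

  # preview what the add would do without changing the pool
  zpool add -n tank c4t0d0

  # confirm the pool's actual layout afterwards
  zpool status tank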
So, if your array is something big like an HP XP12000, you wouldn't just make a
zpool of one big LUN (LUSE volume), you'd split it in two and make a mirror
when creating the zpool?
If the array already has redundancy built in, you're suggesting adding another layer of
redundancy using ZFS on top of tha
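For what it's worth, the mirrored layout being asked about would be created roughly like this (the pool name and the two LUN device names are placeholders):

  # mirror two LUNs carved from the array instead of using one big LUSE volume
  zpool create tank mirror c4t0d0 c4t1d0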
I forgot to mention: we are running Solaris 10 Update 4 (08/07)...
- Andreas
Hello all,
while experimenting with "zfs send" and "zfs receive" mixed with cloning on the
receiver side, I found the following...
On server A there is a zpool with snapshots created on a regular basis via cron.
Server B gets updated by a zfs-send-ssh-zfs-receive command pipe.
Sometimes I want to do s
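A minimal sketch of that kind of pipe, assuming a dataset tank/data on server A and a pool named backup on server B (all names and snapshot dates are placeholders):

  # initial full send from server A to server B over ssh
  zfs snapshot tank/data@2007-11-06
  zfs send tank/data@2007-11-06 | ssh serverB zfs receive backup/data

  # later, incremental update; -F rolls back any local changes on the receiver
  zfs snapshot tank/data@2007-11-07
  zfs send -i tank/data@2007-11-06 tank/data@2007-11-07 | ssh serverB zfs receive -F backup/data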
> And some results (for OLTP workload):
>
> http://przemol.blogspot.com/2007/08/zfs-vs-vxfs-vs-ufs-on-scsi-array.html
While I was initially hardly surprised that ZFS offered only 11% - 15% of the
throughput of UFS or VxFS, a quick glance at Filebench's OLTP workload seems to
indicate that it
Your response here appears to refer to a different post in this thread.
> I never said I was a typical consumer.
Then it's unclear how your comment related to the material which you quoted
(and hence to which it was apparently responding).
> If you look around photo forums, you'll see an inte
Hi Ralf,
Thank you for the suggestion. About half of the disks are reporting
1968-1969 in the "Soft Errors" field. All disks are reporting 1968 in
the "Illegal Request" field. There don't appear to be any other
errors; all other counters are 0. The Illegal Request count seems a
little fishy...lik
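Assuming those counters came from the usual per-device error summary, this is where they show up (the device name is a placeholder):

  # per-device error counters, including Soft Errors and Illegal Request
  iostat -En
  # or for a single disk
  iostat -En c1t0d0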
Jason J. W. Williams wrote:
> Have any of y'all seen a condition where the ILOM considers a disk
> faulted (status is 3 instead of 1), but ZFS keeps writing to the disk
> and doesn't report any errors? I'm going to do a scrub tomorrow and
> see what comes back. I'm curious what caused the ILOM to f
Hey Guys,
Have any of y'all seen a condition where the ILOM considers a disk
faulted (status is 3 instead of 1), but ZFS keeps writing to the disk
and doesn't report any errors? I'm going to do a scrub tomorrow and
see what comes back. I'm curious what caused the ILOM to fault the
disk. Any advice
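For reference, the scrub-and-check sequence mentioned above is roughly (the pool name is a placeholder):

  # kick off a scrub, then watch for read/write/checksum errors
  zpool scrub tank
  zpool status -v tank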