Tim
On Wed, 30 Jul 2008, Tim Haley wrote:
> Ah, ignore my previous question. We believe we found the problem, and filed:
>
> 6731778 'ls *' in empty zfs snapshot directory returns EILSEQ vs. ENOENT we
> get in other empty directories
>
> Fix will likely go back today or tomorrow and be present
Peter Tribble writes:
> A question regarding zfs_nocacheflush:
>
> The Evil Tuning Guide says to only enable this if every device is
> protected by NVRAM.
>
> However, is it safe to enable zfs_nocacheflush when I also have
> local drives (the internal system drives) using ZFS, in particula
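For reference, zfs_nocacheflush is a global kernel tunable, so it applies to
every pool on the system, internal drives included, which is exactly why the
question matters. A rough sketch of how it is typically set, assuming
/etc/system for a persistent change or mdb for a running kernel:

    * /etc/system - takes effect at next boot
    set zfs:zfs_nocacheflush = 1

    # live kernel, not persistent across reboot
    echo zfs_nocacheflush/W0t1 | mdb -kw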
I'm not sure you're actually seeing the same problem there, Richard. It seems
that for you I/O is stopping on removal of the device, whereas for me I/O
continues for some considerable time. You are also able to obtain a result
from "zpool status" whereas that completely hangs for me.
To illu
Hello all
I have a weird problem with a snapshot... when I try to delete it, the kernel
panics. However, I can successfully create and then delete other
snapshots on the same file system. The OS version where I first noticed this
was snv_81, so I upgraded to snv_94 (via LU), but it doesn't help.
I've attached a
I have a Sun Blade 2500 running nv_88. I want to install nv_94 with a mirrored
zfs root filesystem. At the ok prompt, I entered boot cdrom -w so that I would
get the text installer and could select zfs as the root filesystem.
Unfortunately, I got the GUI installer and cannot select a zfs root.
Hi Ron,
Try again by using this syntax:
ok boot cdrom - text
Make sure you have reviewed the ZFS boot/install chapter in the ZFS
admin guide, here:
http://opensolaris.org/os/community/zfs/docs/
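If you do end up with the root pool on a single disk, the second half of the
mirror can be attached afterwards. A rough sketch for SPARC, assuming the pool
is named rpool and the disks are c0t0d0s0 and c0t1d0s0 (substitute your own
device names):

    zpool attach rpool c0t0d0s0 c0t1d0s0
    installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
        /dev/rdsk/c0t1d0s0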
Cindy
Ron Halstead wrote:
> I have a Sun Blade 2500 running nv_88. I want to install nv_94 with a
Hello,
We have an S10U5 server sharing up NFS shares from zfs. While using
the nfs mount as a log destination for syslog for 20 or so busy mail servers,
we have noticed that throughput becomes severely degraded after a short while. I have
tried disabling the zil, turning off cache flushing and
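For what it's worth, "disabling the zil" on these bits usually means the
/etc/system tunable below, which is only suitable for testing because it drops
synchronous write guarantees; on releases whose pool version supports separate
log devices, an NVRAM-backed slog is the safer fix. A sketch, assuming a pool
called tank and made-up device names:

    * /etc/system - testing only, loses sync-write guarantees on crash
    set zfs:zil_disable = 1

    # on newer bits: put the ZIL on a dedicated fast device instead
    zpool add tank log c3t0d0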
Stephen Stogner wrote:
> Hello,
> We have an S10U5 server sharing up NFS shares from zfs. While using
> the nfs mount as a log destination for syslog for 20 or so busy mail servers,
> we have noticed that throughput becomes severely degraded after a short while. I have
> tried disabling the zi
There's not much that CIFS can do as far as user quotas go
without filesystem support. I've CC'ed zfs-discuss; somebody
there might be able to provide you with something useful.
Afshin
Ross wrote:
> Not sure if this is the best place to ask about this. I know ZFS doesn't
> have user quotas, but is t
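The usual workaround while ZFS lacks per-user quotas is one filesystem per
user with a quota property set on it; a hedged sketch, with an invented
pool/dataset layout:

    zfs create -o quota=10g tank/home/alice
    zfs create -o quota=10g tank/home/bob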
Stephen Stogner wrote:
> Hello,
> We have an S10U5 server sharing up NFS shares from zfs. While using
> the nfs mount as a log destination for syslog for 20 or so busy mail servers,
> we have noticed that throughput becomes severely degraded after a short while. I have
> tried disabling the zi
True, we could have all the syslog data directed towards the host, but the
underlying issue remains the same with the performance hit. We have used nfs
shares for log hosts and mail hosts, and we are looking towards using a zfs
based mail store with nfs mounts from x mail servers, but if nfs/zf
Stephen Stogner wrote:
> True we could have all the syslog data be directed towards the host but the
> underlying issue remains the same with the performance hit. We have used nfs
> shares for log hosts and mail hosts and we are looking towards using a zfs
> based mail store with nfs mounts fr
On Thu, Jul 31, 2008 at 01:07:20PM -0500, Paul Fisher wrote:
> Stephen Stogner wrote:
> > True we could have all the syslog data be directed towards the host but the
> > underlying issue remains the same with the performance hit. We have used
> > nfs shares for log hosts and mail hosts and we ar
Thanks Cindy. My co-worker (whom I mentor) told me the proper way. It is
boot cdrom - w, not boot cdrom -w. He's right; he should be mentoring me. I'll try
your way later, nv94 is loading now...
--ron
Hey folks,
I guess this is an odd question to be asking here, but I could do with some
feedback from anybody who's actually using ZFS in anger.
I'm about to go live with ZFS in our company on a new fileserver, but I have
some real concerns about whether I can really trust ZFS to keep my data al
Hi Tim,
> Finally getting around to answering Nils's mail properly - only a month
> late!
Not a problem.
> Okay, after careful consideration, I don't think I'm going to add this
that's fine for me, but ...
> but in cases where you're powering down a laptop overnight,
> you don't want to just ta
Ross wrote:
> Hey folks,
>
> I guess this is an odd question to be asking here, but I could do with some
> feedback from anybody who's actually using ZFS in anger.
>
> I'm about to go live with ZFS in our company on a new fileserver, but I have
> some real concerns about whether I can really tr
Mark,
I filed two bugs for these issues, but they are not visible in the
OpenSolaris bug database yet:
6731639 More NFSv4 ACL changes for ls.1 (Nevada)
6731650 More NFSv4 ACL changes for acl.5 (Nevada)
The current ls.1 man page can be displayed on docs.sun.com, here:
http://docs.sun.com/app/docs
Hey Nils,
Nils Goroll wrote:
> but in cases where you're powering down a laptop overnight,
> you don't want to just take a load of snapshots after you power on for
> every missed cron job, you just want one
This is precisely what the at solution is doing: As there is only one
at job for each zfs sna
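Roughly, the idea looks like the sketch below (dataset name and schedule
invented): a single at(1) job is queued, so a pile of missed cron slots still
results in one snapshot rather than many.

    echo '/usr/sbin/zfs snapshot tank/home@`date +%Y%m%d-%H%M`' | at now + 5 minutes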
Stephen Stogner wrote:
> True we could have all the syslog data be directed towards the host but the
> underlying issue remains the same with the performance hit. We have used nfs
> shares for log hosts and mail hosts and we are looking towards using a zfs
> based mail store with nfs mounts fr
Since all the other components can be the same (ram, cpu, hdd, case, etc.), why
not spend $30 more for this?
http://www.newegg.com/Product/Product.aspx?Item=N82E16813128354
On Thu, Jul 31, 2008 at 16:25, Ross <[EMAIL PROTECTED]> wrote:
> The problems with zpool status hanging concern me, knowing that I can't hot
> plug drives is an issue, and the long resilver times bug is also a potential
> problem. I suspect I can work around the hot plug drive bug with a big
>
On Thu, 2008-07-31 at 13:25 -0700, Ross wrote:
> Hey folks,
>
> I guess this is an odd question to be asking here, but I could do with some
> feedback from anybody who's actually using ZFS in anger.
ZFS in anger? That's an interesting way of putting it :-)
> but I have some real concerns abo
> We haven't had any "real life" drive failures at work, but at home I
> took some old flaky IDE drives and put them in a pentium 3 box running
> Nevada.
Similar story here. Some IDE and SATA drive burps under Linux (and
please don't tell me how wonderful Reiser4 is - 'cause it's banned in
this
On Jul 31, 2008, at 2:56 PM, Bob Netherton wrote:
> On Thu, 2008-07-31 at 13:25 -0700, Ross wrote:
>> Hey folks,
>>
>> I guess this is an odd question to be asking here, but I could do
>> with some
>> feedback from anybody who's actually using ZFS in anger.
>
> ZFS in anger ? That's an intere
We have 50,000 users worth of mail spool on ZFS.
So we've been trusting it for production usage for THE most critical & visible
enterprise app.
Works fine. Our stores are ZFS RAID-10 built of LUNS from pairs of 3510FC.
Had an entire array go down once, the system kept going fine. Brought the
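In ZFS terms that "RAID-10" is simply a pool built from mirrored pairs, one
pair per pair of array LUNs; a rough sketch with invented device names standing
in for the 3510FC LUNs:

    zpool create mailpool \
        mirror c2t0d0 c3t0d0 \
        mirror c2t1d0 c3t1d0 \
        mirror c2t2d0 c3t2d0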
Enda O'Connor wrote:
>
> As for thumpers, once 138053-02 ( marvell88sx driver patch ) releases
> within the next two weeks ( assuming no issues found ), then the thumper
> platform running s10 updates will be up to date in terms of marvell88sx
> driver fixes, which fixes some pretty important
> "s" == Steve <[EMAIL PROTECTED]> writes:
s> http://www.newegg.com/Product/Product.aspx?Item=N82E16813128354
no ECC:
http://en.wikipedia.org/wiki/List_of_Intel_chipsets#Core_2_Chipsets
I must pose the question then:
is ECC required?
I am running non-ECC RAM right now on my machine (it's AMD and it would support
ECC, I'd just have to buy it online and wait for it)
but will it have any negative effects on ZFS integrity/checksumming if ECC RAM
is not used? Obviously it's nice t
Miles Nordin wrote:
>> "s" == Steve <[EMAIL PROTECTED]> writes:
>>
>
> s> http://www.newegg.com/Product/Product.aspx?Item=N82E16813128354
>
> no ECC:
>
> http://en.wikipedia.org/wiki/List_of_Intel_chipsets#Core_2_Chipsets
>
This MB will take these:
http://www.inte
> "r" == Ross <[EMAIL PROTECTED]> writes:
r> This is a big step for us, we're a 100% windows company and
r> I'm really going out on a limb by pushing Solaris.
I'm using it in anger. I'm angry at it, and can't afford anything
that's better.
Whatever I replaced ZFS with, I would ma
Ross wrote:
> Hey folks,
>
> I guess this is an odd question to be asking here, but I could do with some
> feedback from anybody who's actually using ZFS in anger.
>
I've been using ZFS for nearly 3 years now. It has been my (mirrored :-)
home directory for that time. I've never lost any of
We are having some issues copying the existing data on our Sol 11
snv_70b x4500 to the new Sol 10 5/08 x4500. With all the panics and
crashes, we have had no chance to completely copy a single filesystem.
(ETA for that is about 48 hours.)
What are the chances that I can zpool import all f
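If the disks themselves can move between the two x4500s, the usual sequence is
an export on the old box and an import on the new one; a sketch, assuming a
pool named tank and that the Sol 10 5/08 bits support the pool's on-disk
version (zpool upgrade -v lists what each release understands):

    # on the old (snv_70b) x4500
    zpool export tank

    # on the new (Sol 10 5/08) x4500, after moving the disks
    zpool import            # shows pools available for import
    zpool import tank       # add -f only if the pool was not cleanly exported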
>Going back to your USB remove test, if you protect that disk
>at the ZFS level, such as a mirror, then when the disk is removed
>it will be detected as removed and zpool status will show its
>state as "removed" and the pool as "degraded", but it will continue
>to function, as expected.
>-- richa
On Thu, Jul 31, 2008 at 11:03 PM, Ross <[EMAIL PROTECTED]> wrote:
> >Going back to your USB remove test, if you protect that disk
> >at the ZFS level, such as a mirror, then when the disk is removed
> >then it will be detected as removed and zfs status will show its
> >state as "removed" and the p
Hey Brent,
On the Sun hardware like the Thumper you do get a nice bright blue "ready to
remove" LED as soon as you issue the "cfgadm -c unconfigure xxx" command. On
other hardware it takes a little more care, I'm labelling our drive bays up
*very* carefully to ensure we always remove the rig
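The full swap, roughly, looks like the sketch below; the attachment point
sata1/3 is just an example (check cfgadm -al for the real one on your box) and
the pool/disk names are invented:

    cfgadm -al                        # find the bay's attachment point
    cfgadm -c unconfigure sata1/3     # blue "ready to remove" LED lights
    # physically swap the drive
    cfgadm -c configure sata1/3
    zpool replace tank c1t3d0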