On Sep 20, 2007, at 6:46 PM, Paul B. Henson wrote:
> On Thu, 20 Sep 2007, Gary Mills wrote:
>
>> You should consider a Netapp filer. It will do both NFS and CIFS,
>> supports disk quotas, and is highly reliable. We use one for 30,000
>> students and 3000 employees. Ours has never failed us.
>
Matthew Flanagan wrote:
> Mike,
>
> I followed your procedure for cloning zones and it worked well up until
> yesterday when I tried applying the S10U4 kernel patch 120011-14 and it
> wouldn't apply because I had my zones on zfs :(
>
> I'm still figuring out how to fix this other than moving all of my zones
> onto UFS.
Mike,
I followed your procedure for cloning zones and it worked well up until
yesterday when I tried applying the S10U4 kernel patch 120011-14 and it wouldn't
apply because I had my zones on zfs :(
I'm still figuring out how to fix this other than moving all of my zones onto
UFS.
Anyone got an
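One workaround worth looking at before falling back to UFS (I haven't tested
it against 120011-14, and the zone name and target path below are
placeholders) is to relocate the zonepath onto a UFS filesystem with
zoneadm move rather than rebuilding the zone:

  # halt the zone, move its zonepath to a UFS-backed directory, boot it again
  zoneadm -z myzone halt
  zoneadm -z myzone move /zones-ufs/myzone
  zoneadm -z myzone boot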
John-Paul Drawneek wrote:
> yep.
>
> but it said that the pools were up to date with the system on 3.
>
> zpool upgrade says the system just has version 3
>
> also patch 120272-12 has been pulled which 120011-14 depends on yay
Yeah, the listed reason -- "corrupts the snmpd.conf file causing
Paul B. Henson wrote:
> Is it comparable storage though? Does it use SATA drives similar to the
> x4500, or more expensive/higher performance FC drives? Is it one of the
> models that allows connecting dual clustered heads and failing over the
> storage between them?
>
> I agree the x4500 is a swee
yep.
but it said that the pools were up to date with the system on 3.
zpool upgrade says the system just has version 3
also patch 120272-12 has been pulled which 120011-14 depends on yay
On Thu, 20 Sep 2007, Chris Kirby wrote:
> We're adding a style of quota that only includes the bytes referenced by
> the active fs. Also, there will be a matching style for reservations.
>
> "some point in the future" is very soon (weeks). :-)
I don't think my management will let me run Solaris
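For the archives: as far as I know this is what eventually shipped as the
refquota and refreservation properties. A minimal sketch, with made-up
dataset names:

  # plain quota counts snapshot space against the user; refquota counts
  # only the bytes referenced by the live filesystem
  zfs set quota=10G tank/home/joe
  zfs set refquota=10G tank/home/joe
  zfs set refreservation=5G tank/home/joe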
On Thu, 20 Sep 2007, Tim Spriggs wrote:
> It's an IBM re-branded NetApp which we are using for NFS and
> iSCSI.
Ah, I see.
Is it comparable storage though? Does it use SATA drives similar to the
x4500, or more expensive/higher performance FC drives? Is it one of the
models that allows
Paul B. Henson wrote:
> On Thu, 20 Sep 2007, James F. Hranicky wrote:
>
>
>> and due to the fact that snapshots counted toward ZFS quota, I decided
>
>
> Yes, that does seem to remove a bit of their value for backup purposes. I
> think they're planning to rectify that at some point in the future
Paul B. Henson wrote:
> On Thu, 20 Sep 2007, Tim Spriggs wrote:
>
>
>> We are in a similar situation. It turns out that buying two thumpers is
>> cheaper per TB than buying more shelves for an IBM N7600. I don't know
>> about power/cooling considerations yet though.
>>
>
> It's really a com
On Thu, 20 Sep 2007, Dickon Hood wrote:
> On Thu, Sep 20, 2007 at 16:22:45 -0500, Gary Mills wrote:
>
> : You should consider a Netapp filer. It will do both NFS and CIFS,
> : supports disk quotas, and is highly reliable. We use one for 30,000
> : students and 3000 employees. Ours has never fai
On Thu, 20 Sep 2007, Gary Mills wrote:
> You should consider a Netapp filer. It will do both NFS and CIFS,
> supports disk quotas, and is highly reliable. We use one for 30,000
> students and 3000 employees. Ours has never failed us.
We had actually just finished evaluating Netapp before I sta
On Thu, 20 Sep 2007, Tim Spriggs wrote:
> We are in a similar situation. It turns out that buying two thumpers is
> cheaper per TB than buying more shelves for an IBM N7600. I don't know
> about power/cooling considerations yet though.
It's really a completely different class of storage though, r
On Thu, 20 Sep 2007, Andy Lubel wrote:
> Looks like it's completely scalable but your boot time may suffer the more
> you have. Just don't reboot :)
I'm not sure if it's accurate, but the SE we were meeting with claimed that
we could failover all of the filesystems to one half of the cluster, rebo
On Thu, 20 Sep 2007, James F. Hranicky wrote:
> This can be solved using an automounter as well.
Well, I'd say more "kludged around" than "solved" ;), but again unless
you've used DFS it might not seem that way.
It just seems rather involved, and relatively inefficient to continuously
be mountin
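For anyone who hasn't set one up, the automounter side is just a wildcard
map entry; a minimal sketch with invented server and path names:

  # /etc/auto_master
  /home   auto_home

  # /etc/auto_home -- one NFS mount per user, created on first access
  *   fileserver1:/export/home/&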
On Thu, Sep 20, 2007 at 16:22:45 -0500, Gary Mills wrote:
: You should consider a Netapp filer. It will do both NFS and CIFS,
: supports disk quotas, and is highly reliable. We use one for 30,000
: students and 3000 employees. Ours has never failed us.
And they might only lightly sue you for c
On Thu, Sep 20, 2007 at 12:49:29PM -0700, Paul B. Henson wrote:
> On Thu, 20 Sep 2007, Richard Elling wrote:
>
> > 50,000 directories aren't a problem, unless you also need 50,000 quotas
> > and hence 50,000 file systems. Such a large, single storage pool system
> > will be an outlier... signific
Andy Lubel wrote:
> On 9/20/07 3:49 PM, "Paul B. Henson" <[EMAIL PROTECTED]> wrote:
>
>
>> On Thu, 20 Sep 2007, Richard Elling wrote:
>>
>>
>> That would also be my preference, but if I were forced to use hardware
>> RAID, the additional loss of storage for ZFS redundancy would be painful.
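To make the overhead point concrete, here's a rough sketch (device names
invented) of the two layouts being compared -- ZFS-level redundancy across
raw disks versus a ZFS mirror layered on LUNs the array has already
RAID-protected:

  # JBOD: let ZFS supply the redundancy
  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

  # hardware RAID: the array already spent disks on parity, so mirroring
  # the LUNs in ZFS costs another half of the remaining capacity
  zpool create tank mirror c2t0d0 c2t1d0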
On 9/20/07 3:49 PM, "Paul B. Henson" <[EMAIL PROTECTED]> wrote:
> On Thu, 20 Sep 2007, Richard Elling wrote:
>
>> 50,000 directories aren't a problem, unless you also need 50,000 quotas
>> and hence 50,000 file systems. Such a large, single storage pool system
>> will be an outlier... significan
Paul B. Henson wrote:
> One issue I have is that our previous filesystem, DFS, completely spoiled
> me with its global namespace and location transparency. We had three fairly
> large servers, with the content evenly dispersed among them, but from the
> perspective of the client any user's files w
On Thu, 20 Sep 2007, Richard Elling wrote:
> 50,000 directories aren't a problem, unless you also need 50,000 quotas
> and hence 50,000 file systems. Such a large, single storage pool system
> will be an outlier... significantly beyond what we have real world
> experience with.
Yes, considering
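To put that in concrete terms, the model under discussion is literally one
filesystem (and one quota) per user, e.g. with an invented pool layout:

  # one ZFS filesystem per user account, quota set at creation time
  for u in $(cat userlist); do
      zfs create -o quota=2G tank/home/$u
  done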
a few comments below...
Paul B. Henson wrote:
> We are looking for a replacement enterprise file system to handle storage
> needs for our campus. For the past 10 years, we have been happily using DFS
> (the distributed file system component of DCE), but unfortunately IBM
> killed off that product
On Sep 15, 2007, at 12:55 PM, Victor Latushkin wrote:
> I'm proposing a new project for the ZFS community - Block Selection
> Policy and Space Map Enhancements.
+1.
I wonder if some of this could look into a dynamic policy. For
example, a policy that switches when the pool becomes "too full".
Did you upgrade your pools? "zpool upgrade -a"
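The sequence I'd expect is roughly (pool names are whatever zpool list shows):

  zpool upgrade        # shows the current on-disk version of each pool
  zpool upgrade -v     # shows the highest version the patched kernel supports
  zpool upgrade -a     # upgrades all pools to that version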
John-Paul Drawneek wrote:
> err, I installed the patch and am still on zfs 3?
>
> solaris 10 u3 with kernel patch 120011-14
>
>
err, I installed the patch and am still on zfs 3?
solaris 10 u3 with kernel patch 120011-14
Here is a different twist on your interesting scheme. First
start with writing 3 blocks and parity in a full stripe.

  Disk0   Disk1   Disk2   Disk3
  D0      D1      D2      P0,1,2

Next the application modifies D0 -> D0' and also writes other
data D3, D4. Now you have
D
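(With plain XOR parity, as in ordinary RAID-5 -- my assumption here, not
necessarily what the proposed scheme does -- updating D0 in place would
require

  P0,1,2' = P0,1,2 xor D0 xor D0'

i.e. reading back the old data and old parity before the new parity can be
written, which is exactly the read-modify-write cycle a full-stripe scheme
avoids.)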
On 9/20/07, Mark J Musante <[EMAIL PROTECTED]> wrote:
> I for one would like to see live upgrade support ZFS. Even with Snap
> Upgrade on the horizon (the page on the OpenSolaris site says 'March' but
> the current schedule is a sea of TBDs [see
> http://opensolaris.org/os/project/caiman/Snap_Upgra
On Wed, 19 Sep 2007, Mike Gerdts wrote:
> The rather consistent answer is that zoneadm clone will not do zfs until
> live upgrade does zfs. Since there is a new project in the works (Snap
> Upgrade) that is very much targeted at environments that use zfs, I
> would be surprised to see zfs support
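For anyone finding this in the archives, the clone step itself looks like
this (zone names are placeholders; the exported config needs its zonepath
and network settings edited by hand):

  zonecfg -z oldzone export > /tmp/newzone.cfg
  # edit /tmp/newzone.cfg: zonepath, IP address, etc.
  zonecfg -z newzone -f /tmp/newzone.cfg
  zoneadm -z newzone clone oldzone
  zoneadm -z newzone boot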
Hi Roch,
Roch - PAE wrote:
> [EMAIL PROTECTED] writes:
> > Roch - PAE wrote:
> > > [EMAIL PROTECTED] writes:
> > > > Jim Mauro wrote:
> > > > >
> > > > > Hey Max - Check out the on-disk specification document at
> > > > > http://opensolaris.org/os/community/zfs/docs/.
>
> > > > Ok. I
[EMAIL PROTECTED] writes:
> Roch - PAE wrote:
> > [EMAIL PROTECTED] writes:
> > > Jim Mauro wrote:
> > > >
> > > > Hey Max - Check out the on-disk specification document at
> > > > http://opensolaris.org/os/community/zfs/docs/.
> > > >
> > > > Page 32 illustration shows the rootbp po
On Sep 20, 2007, at 12:55 AM, Tore Johansson wrote:
> Hi,
>
> I am running Solaris 10 on UFS and the rest on ZFS. Now the Solaris
> disk has crashed.
> How can I recover the other ZFS disks?
> Can I reinstall solaris and recreate the zfs systems without data
> loss?
Zpool import is your friend.
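That is, after reinstalling Solaris onto a new boot disk, something along
these lines (pool name is whatever the scan reports):

  zpool import         # scans attached devices and lists importable pools
  zpool import tank    # imports the pool; add -f if it was not cleanly exported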