On Thu, Jun 5, 2008 at 9:26 PM, Tim <[EMAIL PROTECTED]> wrote:
>
>
> On Thu, Jun 5, 2008 at 11:12 PM, Joe Little <[EMAIL PROTECTED]> wrote:
>>
>> On Thu, Jun 5, 2008 at 8:16 PM, Tim <[EMAIL PROTECTED]> wrote:
>> >
>> >
>> > On Thu, Jun 5, 2008 at 9:17 PM, Peeyush Singh <[EMAIL PROTECTED]>
>> > wrot
On Fri, Jun 06, 2008 at 03:43:29PM -0700, eric kustarz wrote:
>
> On Jun 6, 2008, at 3:27 PM, Brian Hechinger wrote:
>
> > On Fri, Jun 06, 2008 at 02:58:09PM -0700, eric kustarz wrote:
> >>
> clients do not. Without per-filesystem mounts, 'df' on the client
> will not report correct da
Hello,
I have two disks with a partition mounted as swap and some unallocated space.
I would like to format the disk to create a partition from that unallocated
space.
This should be safe given I've done it several times on disks with UFS, but
I'm not too sure with ZFS. Is there any
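Before repartitioning a disk that swap or ZFS is already using, it is worth
confirming exactly which slices are in play, for example (device names below
are only examples, not from the post):

  zpool status                    # shows which disks/slices each pool uses
  prtvtoc /dev/rdsk/c1t0d0s2      # current label; the new slice must not
                                  # overlap the existing swap or ZFS slices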
Mattias Pantzare wrote:
> 2008/6/6 Richard Elling <[EMAIL PROTECTED]>:
>
>> Richard L. Hamilton wrote:
>>
A single /var/mail doesn't work well for 10,000 users
either. When you
start getting into that scale of service
provisioning, you might look at
how the big boy
On Fri, Jun 06, 2008 at 06:27:01PM -0400, Brian Hechinger wrote:
> On Fri, Jun 06, 2008 at 02:58:09PM -0700, eric kustarz wrote:
> >
> > >> clients do not. Without per-filesystem mounts, 'df' on the client
> > >> will not report correct data though.
> > >
> > > I expect that mirror mounts will be
On Jun 6, 2008, at 3:27 PM, Brian Hechinger wrote:
> On Fri, Jun 06, 2008 at 02:58:09PM -0700, eric kustarz wrote:
>>
clients do not. Without per-filesystem mounts, 'df' on the client
will not report correct data though.
>>>
>>> I expect that mirror mounts will be coming Linux's way to
On Fri, Jun 06, 2008 at 04:52:45PM -0500, Nicolas Williams wrote:
>
> Mirror mounts take care of the NFS problem (with NFSv4).
>
> NFSv3 automounters could be made more responsive to server-side changes
> in share lists, but hey, NFSv4 is the future.
So basically it's just a waiting game at this
On Fri, Jun 06, 2008 at 02:58:09PM -0700, eric kustarz wrote:
>
> >> clients do not. Without per-filesystem mounts, 'df' on the client
> >> will not report correct data though.
> >
> > I expect that mirror mounts will be coming Linux's way too.
>
> They should already have them:
> http://blogs.su
On Fri, Jun 06, 2008 at 02:58:09PM -0700, eric kustarz wrote:
> >I expect that mirror mounts will be coming Linux's way too.
>
> They should already have them:
> http://blogs.sun.com/erickustarz/en_US/entry/linux_support_for_mirror_mounts
Even better.
On Jun 6, 2008, at 2:50 PM, Nicolas Williams wrote:
> On Fri, Jun 06, 2008 at 10:42:45AM -0500, Bob Friesenhahn wrote:
>> On Fri, 6 Jun 2008, Brian Hechinger wrote:
>>
>>> On Thu, Jun 05, 2008 at 12:02:42PM -0400, Chris Siebenmann wrote:
- as separate filesystems, they have to be separa
On Fri, Jun 06, 2008 at 08:51:13PM +0200, Mattias Pantzare wrote:
> 2008/6/6 Richard Elling <[EMAIL PROTECTED]>:
> > I was going to post some history of scaling mail, but I blogged it instead.
> > http://blogs.sun.com/relling/entry/on_var_mail_and_quotas
>
> The problem with that argument is that
On Fri, Jun 06, 2008 at 07:37:18AM -0400, Brian Hechinger wrote:
> On Thu, Jun 05, 2008 at 12:02:42PM -0400, Chris Siebenmann wrote:
> >
> > - as separate filesystems, they have to be separately NFS mounted
>
> I think this is the one that gets under my skin. If there would be a
> way to "merge"
On Fri, Jun 06, 2008 at 10:42:45AM -0500, Bob Friesenhahn wrote:
> On Fri, 6 Jun 2008, Brian Hechinger wrote:
>
> > On Thu, Jun 05, 2008 at 12:02:42PM -0400, Chris Siebenmann wrote:
> >>
> >> - as separate filesystems, they have to be separately NFS mounted
> >
> > I think this is the one that get
On Fri, Jun 6, 2008 at 16:23, Tom Buskey <[EMAIL PROTECTED]> wrote:
> I have an AMD 939 MB w/ Nvidia on the motherboard and 4 500GB SATA II drives
> in a RAIDZ.
...
> I get 550 MB/s
I doubt this number a lot. That's almost 200 MB/s per disk
(550/(N-1) = 550/3 ≈ 183), and drives I've seen are usually more i
On Thu, Jun 5, 2008 at 2:11 PM, Erik Trimble <[EMAIL PROTECTED]> wrote:
>
> Quotas are great when, for administrative purposes, you want a large
> number of users on a single filesystem, but to restrict the amount of
> space for each. The primary place I can think of this being useful is
> /var/ma
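For comparison, a rough sketch of the two models under discussion; the dataset
names and sizes below are made up, not from the thread:

  # one dataset per user, each with its own quota
  zfs create tank/home/alice
  zfs set quota=10G tank/home/alice

  # a single shared filesystem like /var/mail: the quota caps the whole
  # spool, since ZFS offers no per-user quota inside one filesystem
  zfs create tank/mail
  zfs set quota=250G tank/mail
  zfs set mountpoint=/var/mail tank/mail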
On Fri, Jun 6, 2008 at 3:23 PM, Tom Buskey <[EMAIL PROTECTED]> wrote:
> >**pci or pci-x. Yes, you might see
> > *SOME* loss in speed from a pci interface, but
> > let's be honest, there aren't a whole lot of
> > users on this list that have the infrastructure to
> > use greater than 100MB/sec who
Hi Ricardo,
I'll try that.
Thanks (Obrigado)
Paulo Soeiro
On 6/5/08, Ricardo M. Correia <[EMAIL PROTECTED]> wrote:
>
> On Ter, 2008-06-03 at 23:33 +0100, Paulo Soeiro wrote:
>
> 6)Remove and attached the usb sticks:
>
> zpool status
> pool: myPool
> state: UNAVAIL
> status: One or more devices
>**pci or pci-x. Yes, you might see
> *SOME* loss in speed from a pci interface, but
> let's be honest, there aren't a whole lot of
> users on this list that have the infrastructure to
> use greater than 100MB/sec who are asking this sort
> of question. A PCI bus should have no issues
> pushing t
2008/6/6 Richard Elling <[EMAIL PROTECTED]>:
> Richard L. Hamilton wrote:
>>> A single /var/mail doesn't work well for 10,000 users
>>> either. When you
>>> start getting into that scale of service
>>> provisioning, you might look at
>>> how the big boys do it... Apple, Verizon, Google,
>>> Amazon
Richard L. Hamilton wrote:
>> A single /var/mail doesn't work well for 10,000 users
>> either. When you
>> start getting into that scale of service
>> provisioning, you might look at
>> how the big boys do it... Apple, Verizon, Google,
>> Amazon, etc. You
>> should also look at e-mail systems des
On Fri, Jun 6, 2008 at 9:29 AM, John Kunze <[EMAIL PROTECTED]> wrote:
> My organization is considering an RFP for MAID storage and we're
> wondering about potential conflicts between MAID and ZFS.
I had to look up MAID, first link Google gave me was
http://www.closetmaid.com/ which doesn't seem ri
Folks,
I am running into an issue with a quota enabled ZFS system. I tried to check
out the ZFS properties but could not figure out a workaround.
I have a file system /data/project/software which has 250G quota set. There
are no snapshots enabled for this system. When the quota is reached on this,
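With a quota question like this, a useful first step is to compare the quota
against what the dataset and its children actually consume, for example
(assuming the dataset name matches the mountpoint; adjust as needed):

  zfs get quota,used,available data/project/software
  zfs list -r -o name,used,avail,refer,quota data/project/software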
I think most MAID is sold as a (misguided IMHO) replacement for
Tape, not as a Tier 1 kind of storage. YMMV.
-- mark
John Kunze wrote:
> My organization is considering an RFP for MAID storage and we're
> wondering about potential conflicts between MAID and ZFS.
>
> We want MAID's power management
My organization is considering an RFP for MAID storage and we're
wondering about potential conflicts between MAID and ZFS.
We want MAID's power management benefits but are concerned
that what we understand to be ZFS's use of dynamic striping across
devices with filesystem metadata replication and
On Fri, 6 Jun 2008, Brian Hechinger wrote:
> On Thu, Jun 05, 2008 at 12:02:42PM -0400, Chris Siebenmann wrote:
>>
>> - as separate filesystems, they have to be separately NFS mounted
>
> I think this is the one that gets under my skin. If there would be a
> way to "merge" a filesystem into a pare
On Fri, Jun 6, 2008 at 10:41 PM, Brandon High <[EMAIL PROTECTED]> wrote:
> On Fri, Jun 6, 2008 at 12:23 AM, Aubrey Li <[EMAIL PROTECTED]> wrote:
>> Here, "zfs send tank/root > /mnt/root" doesn't work, "zfs send" can't accept
>> a directory as an output. So I use zfs send and zfs receive:
>
> Really
That was it!
hpux-is-old.com -> nearline.host NFS C GETATTR3 FH=F6B3
nearline.host -> hpux-is-old.com NFS R GETATTR3 OK
hpux-is-old.com -> nearline.host NFS C SETATTR3 FH=F6B3
nearline.host -> hpux-is-old.com NFS R SETATTR3 Update synch mismatch
hpux-is-old.com -> nearline.host NFS C GETATTR3 FH=F
On Thu, Jun 5, 2008 at 11:37 PM, Albert Lee
<[EMAIL PROTECTED]> wrote:
> Raw disk images are, uh, nice and all, but I don't think that was what
> Aubrey had in mind when asking zfs-discuss about a backup solution. This
> is 2008, not 1960.
But retro is in!
The point that I didn't really make is t
On Fri, Jun 6, 2008 at 12:23 AM, Aubrey Li <[EMAIL PROTECTED]> wrote:
> Here, "zfs send tank/root > /mnt/root" doesn't work, "zfs send" can't accept
> a directory as an output. So I use zfs send and zfs receive:
Really? zfs send just gives you a byte stream, and the shell redirects
it to the file
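A minimal sketch of the byte-stream point, with placeholder snapshot and file
names (not taken from the original post):

  # zfs send writes a stream to stdout, so the shell can redirect it to a file
  zfs snapshot tank/root@backup
  zfs send tank/root@backup > /mnt/root.zfs

  # the stream can be turned back into a filesystem later with zfs receive
  zfs receive tank/restored < /mnt/root.zfs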
Brian Hechinger wrote:
> On Thu, Jun 05, 2008 at 12:02:42PM -0400, Chris Siebenmann wrote:
>> - as separate filesystems, they have to be separately NFS mounted
>
> I think this is the one that gets under my skin. If there would be a
> way to "merge" a filesystem into a parent filesystem for the p
[...]
> > That's not to say that there might not be other
> problems with scaling to
> > thousands of filesystems. But you're certainly not
> the first one to test it.
> >
> > For cases where a single filesystem must contain
> files owned by
> > multiple users (/var/mail being one example), old
>
> I encountered an issue that people using OS-X systems
> as NFS clients
> need to be aware of. While not strictly a ZFS issue,
> it may be
> encountered most often by ZFS users since ZFS makes it
> easy to support
> and export per-user filesystems. The problem I
> encountered was when
> using
On Thu, Jun 05, 2008 at 12:02:42PM -0400, Chris Siebenmann wrote:
>
> - as separate filesystems, they have to be separately NFS mounted
I think this is the one that gets under my skin. If there would be a
way to "merge" a filesystem into a parent filesystem for the purposes
of NFS, that would be
On Thu, Jun 05, 2008 at 10:45:09PM -0700, Vincent Fox wrote:
> Way to drag my post into the mud there.
>
> Can we just move on?
Absolutely not! Just be glad you never had to create a swap file on an
NFS mount just to be able to build software on your machine! Yes, I really
did have to do that.
Richard L. Hamilton <...@smart.net> writes:
> But I suspect to some extent you get what you pay for; the throughput on the
> higher-end boards may well be a good bit higher.
Not really. Nowadays, even the cheapest controllers, processors & mobos are
EASILY capable of handling the platter-speed throug
If I read the man page right, you might only have to keep a minimum of two
on each side (maybe even just one on the receiving side), although I might be
tempted to keep an extra just in case; say near current, 24 hours old, and a
week old (space permitting for the larger interval of the last one).
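A sketch of why at least the previous snapshot has to stay around on both
sides (pool, host and snapshot names are placeholders):

  # initial full copy
  zfs snapshot tank/fs@day1
  zfs send tank/fs@day1 | ssh backuphost zfs receive backup/fs

  # next run: incremental from @day1 to @day2 -- both sides still hold @day1
  zfs snapshot tank/fs@day2
  zfs send -i tank/fs@day1 tank/fs@day2 | ssh backuphost zfs receive backup/fs

  # only once @day2 is safely on the receiver can @day1 be destroyed locally
  zfs destroy tank/fs@day1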
Buy a 2-port SATA II PCI-E x1 SiI3132 controller ($20). The solaris driver is
very stable.
Or, a solution I would personally prefer, don't use a 7th disk. Partition
each of your 6 disks with a small ~7-GB slice at the beginning and the rest of
the disk for ZFS. Install the OS in one of the sma
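A sketch of that layout, assuming the six disks show up as c1t0d0 through
c1t5d0, the OS lives in a small slice 0 and slice 7 holds the remainder
(device and slice numbers are illustrative only):

  # after labeling each disk with a small s0 and a large s7 in format(1M),
  # build the data pool from the large slices
  zpool create tank raidz c1t0d0s7 c1t1d0s7 c1t2d0s7 \
                          c1t3d0s7 c1t4d0s7 c1t5d0s7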
Or you could use Tim Foster's ZFS snapshot service
http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_now_with
/peter
On Jun 6, 2008, at 14:07, Tobias Exner wrote:
> Hi,
>
> I'm thinking about the following situation and I know there are some
> things I have to understand:
>
> I want to use
Hi Tobias,
I did this for a large lab we had last month; I have it set up
something like this.
zfs snapshot [EMAIL PROTECTED]
zfs send -i [EMAIL PROTECTED] [EMAIL PROTECTED] | ssh server2 zfs recv rep_pool
ssh server2 zfs destroy [EMAIL PROTECTED]
ssh server2 zfs rename [EMAIL PROTECTED] [EMAIL PROTECTED]
zfs
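Spelled out with hypothetical names in place of the redacted ones above
(tank/fs on the sender, rep_pool/fs on server2, snapshots @prev and @now),
one common way to run this kind of cycle is:

  # take a new snapshot and send only the delta since the previous run
  zfs snapshot tank/fs@now
  zfs send -i tank/fs@prev tank/fs@now | ssh server2 zfs recv -F rep_pool/fs

  # rotate snapshots on both sides so the next run has a common base
  ssh server2 zfs destroy rep_pool/fs@prev
  ssh server2 zfs rename rep_pool/fs@now rep_pool/fs@prev
  zfs destroy tank/fs@prev
  zfs rename tank/fs@now tank/fs@prev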
> On Thu, Jun 05, 2008 at 09:13:24PM -0600, Keith
> Bierman wrote:
> > On Jun 5, 2008, at 8:58 PM 6/5/, Brad Diggs
> wrote:
> > > Hi Keith,
> > >
> > > Sure you can truncate some files but that
> effectively corrupts
> > > the files in our case and would cause more harm
> than good. The
> > > onl
Hi Erik,
Thanks for your instruction, but let me dig into details.
On Thu, Jun 5, 2008 at 10:04 PM, Erik Trimble <[EMAIL PROTECTED]> wrote:
>
> Thus, you could do this:
>
> (1) Install system A
No problem, :-)
> (2) hook USB drive to A, and mount it at /mnt
I created a zfs pool, and mount it at
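For the record, that pool-on-USB step looks roughly like this, assuming the
stick appears as c2t0d0 (device, pool and snapshot names are placeholders):

  # create a pool on the USB device, mounted at /mnt
  zpool create -m /mnt usbpool c2t0d0

  # the send stream can then be written to a file on it
  zfs snapshot tank/root@backup
  zfs send tank/root@backup > /mnt/root.zfs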
I don't presently have any working x86 hardware, nor do I routinely work with
x86 hardware configurations.
But it's not hard to find previous discussion on the subject:
http://www.opensolaris.org/jive/thread.jspa?messageID=96790
for example...
Also, remember that SAS controllers can usually also
Hi,
I'm thinking about the following situation and I know there are some
things I have to understand:
I want to use two Sun servers with the same amount of storage capacity
on both of them, and I want to replicate the filesystem (ZFS)
incrementally twice a day from the first to the second