Hello Dave,
Tuesday, October 16, 2007, 9:17:30 PM, you wrote:
DJ> you mean c9n ? ;)
DJ> does anyone actually *use* compression ? i'd like to see a poll on how many
DJ> people are using (or would use) compression on production systems that are
DJ> larger than your little department catch-all dum
Hello Matthew,
Wednesday, October 17, 2007, 1:46:02 AM, you wrote:
MA> Richard Elling wrote:
>> Paul B. Henson wrote:
>>> On Fri, 12 Oct 2007, Paul B. Henson wrote:
>>>
I've read a number of threads and blog posts discussing zfs send/receive
and its applicability in such an implementation
> Is the mount using NFSv4? If so, there is likely a misguided
> mapping of the user/groups between the client and server.
>
> While not including BSD info, there is a little bit on
> NFSv4 user/group mappings at this blog:
> http://blogs.sun.com/nfsv4
It defaults to nfs ver. 3. As a sidenote sam
From: "Robert Milkowski" <[EMAIL PROTECTED]>
> LDAP servers with several dozen millions accounts?
> Why? First you get about 2:1 compression ratio with lzjb, and you also
> get better performance.
a busy ldap server certainly seems a good fit for compression but when i
said "large" i meant, as in
How painful is this going to be? Completely?
-brian
--
"Perl can be fast and elegant as much as J2EE can be fast and elegant.
In the hands of a skilled artisan, it can and does happen; it's just
that most of the shit out there is built by people who'd be better
suited to making sure that my burg
On 10/16/07, Claus Guttesen <[EMAIL PROTECTED]> wrote:
> I have created some zfs-partitions. First I create the
> home/user-partitions. Beneath that I create additional partitions.
> Then I have done a chown -R for that user. These partitions are shared
> using the sharenfs=on. The owner- and group-id is 1009.
> > I have created some zfs-partitions. First I create the
> > home/user-partitions. Beneath that I create additional partitions.
> > Then I have done a chown -R for that user. These partitions are shared
> > using the sharenfs=on. The owner- and group-id is 1009.
> >
> > These partitions are visible
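For reference, a sketch of the setup being described; the pool, dataset
and user names are placeholders, and the uid/gid 1009 is taken from the
post:

  zfs create tank/home
  zfs create tank/home/user1
  zfs create tank/home/user1/docs
  zfs set sharenfs=on tank/home/user1    # descendants inherit the property
  chown -R 1009:1009 /tank/home/user1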
hi
I am currently running a linux box as my fileserver at home.
It's got eight 250 gig sata2 drives connected to two sata pci controllers and
configured as one big raid5 with linux software raid.
Linux is (and solaris will be) installed on two separate mirrored disks.
I've been playing around w
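Assuming the plan is to rebuild roughly the same layout under ZFS, a
minimal sketch of a single raidz (raid5-like) pool over the eight disks;
the device names are examples only, and raidz2 would be the double-parity
alternative:

  zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 \
                          c3t0d0 c3t1d0 c3t2d0 c3t3d0
  zpool status tank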
On 10/17/07, Claus Guttesen <[EMAIL PROTECTED]> wrote:
> > Did you mount both the parent and all the children on the client ?
>
> No, I just assumed that the sub-partitions would inherit the same
> uid/gid as the parent. I have done a chown -R.
Ahhh, the issue is not permissions, but how the NFS server
sees the various directories.
> > > Did you mount both the parent and all the children on the client ?
> >
> > No, I just assumed that the sub-partitions would inherit the same
> > uid/gid as the parent. I have done a chown -R.
>
> Ahhh, the issue is not permissions, but how the NFS server
> sees the various directories.
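Each child filesystem is its own NFS export, so the client has to mount
the parent and every child separately. A hedged example with made-up
server and path names, using FreeBSD/Linux-style mount syntax:

  mount -t nfs server:/tank/home/user1      /home/user1
  mount -t nfs server:/tank/home/user1/docs /home/user1/docs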
Dave Johnson wrote:
> From: "Robert Milkowski" <[EMAIL PROTECTED]>
>
>> LDAP servers with several dozen millions accounts?
>> Why? First you get about 2:1 compression ratio with lzjb, and you also
>> get better performance.
>>
>
> a busy ldap server certainly seems a good fit for compression
We are using zfs compression across 5 zpools, about 45TB of data on
iSCSI storage. I/O is very fast, with small fractional CPU usage (seat
of the pants metrics here, sorry). We have one other large 10TB volume
for nearline Networker backups, and that one isn't compressed. We
already compre
Jonathan Loran wrote:
>
> We are using zfs compression across 5 zpools, about 45TB of data on
> iSCSI storage. I/O is very fast, with small fractional CPU usage (seat
> of the pants metrics here, sorry). We have one other large 10TB volume
> for nearline Networker backups, and that one isn't compressed.
Richard Elling wrote:
> Jonathan Loran wrote:
...
> Do not assume that a compressed file system will send compressed.
> IIRC, it does not.
Let's say, if it were possible to detect the remote compression support,
couldn't we send it compressed? With higher compression rates, wouldn't that
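One workaround, assuming the goal is just to save bandwidth rather than
to ship the on-disk compressed blocks as-is, is to compress the stream in
transit; dataset and host names below are placeholders:

  zfs send tank/fs@now | ssh -C otherhost zfs recv tank/fs
  zfs send tank/fs@now | gzip | ssh otherhost 'gunzip | zfs recv tank/fs'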
On Tue, 16 Oct 2007, Matthew Ahrens wrote:
> I know of customers who are using send|ssh|recv to replicate entire
> thumpers across the country, in production. I'm sure they'll speak up
> here if/when they find this thread...
Ah, that's who I'd like to hear from :)... Thanks for the secondhand
info.
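For anyone curious what that kind of replication looks like, a minimal
sketch (host, pool and snapshot names are invented); the first send is a
full copy, later ones ship only the changes:

  zfs snapshot tank/data@mon
  zfs send tank/data@mon | ssh backuphost zfs recv backup/data
  # next day, send only what changed since @mon
  zfs snapshot tank/data@tue
  zfs send -i tank/data@mon tank/data@tue | ssh backuphost zfs recv backup/data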
Jonathan Loran wrote:
> Richard Elling wrote:
>
>> Jonathan Loran wrote:
>>
> ...
>
>
>> Do not assume that a compressed file system will send compressed.
>> IIRC, it does not.
>>
> Let's say, if it were possible to detect the remote compression support,
> couldn't we send it compressed?
In the pre-ZFS era, we had observable parameters like the scan rate and the
anonymous page-in/-out counters to detect when a system was running short
of physical memory. With ZFS, it's difficult to use those parameters to
spot such situations. Does anyone have an idea what we could use instead?
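One thing people look at instead is the ARC, since that is where ZFS
caching pressure shows up. A couple of hedged examples; the kstat names
are as found on current OpenSolaris builds, and ::arc only works where
that mdb dcmd is present:

  kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:c_max
  echo ::arc | mdb -k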
being at $300 now - a friend of mine just adding another $100
I was presenting to a customer at the EBC yesterday, and one of the
people at the meeting said using df in ZFS really drives him crazy (no,
that's all the detail I have). Any ideas/suggestions?
--
David Runyon
Disk Sales Specialist
Sun Microsystems, Inc.
4040 Palm Drive
Santa Clara, CA 95054
I asked this recently, but haven't done anything else about it:
http://www.opensolaris.org/jive/thread.jspa?messageID=155583#155583
On 10/17/07, David Runyon <[EMAIL PROTECTED]> wrote:
> I was presenting to a customer at the EBC yesterday, and one of the
> people at the meeting said using df in ZFS really drives him crazy (no,
> that's all the detail I have). Any ideas/suggestions?
I suspect that this is related to the notion
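My guess at the usual complaint: every ZFS filesystem draws on the same
pool, so df shows per-filesystem numbers that don't add up the way
fixed-size partitions do. Comparing the views side by side (the path is
an example):

  zfs list           # per-dataset USED/AVAIL, all backed by one pool
  zpool list         # pool-wide size and free space
  df -h /tank/home   # what the customer was probably looking at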
Apparently with zfs boot, if the zpool is a version grub doesn't
recognize, it silently ignores any zfs entries in menu.lst and instead
boots the first entry it thinks it can handle. I ran into this myself
due to some boneheaded mistakes while doing a very manual zfs / install
at the summit
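Two quick checks that at least show the version mismatch on the ZFS side
(grub's own limit is separate); the name rpool is just a placeholder for
the boot pool:

  zpool get version rpool   # what version the boot pool actually is
  zpool upgrade -v          # the pool versions the installed bits understand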
Hey Guys,
It's not possible yet to fracture a snapshot or clone into a
self-standing filesystem, is it? Basically, I'd like to fracture a
snapshot/clone into its own FS so I can roll back past that snapshot in
the original filesystem and still keep that data.
Thank you in advance.
Best Regards,
Jaso
Sandro wrote:
> hi
>
> I am currently running a linux box as my fileserver at home.
> It's got eight 250 gig sata2 drives connected to two sata pci controllers and
> configured as one big raid5 with linux software raid.
> Linux is (and solaris will be) installed on two separate mirrored disks.
>
Ok...got a break from the 25xx release...
Trying to catch up so...sorry for the late response...
The 6120 firmware does not support the Cache Sync command at all...
You could try using a smaller blocksize setting on the array to attempt to
reduce the number of read/modify/writes that you will incur
Hey all -
Time for my silly question of the day, and before I bust out vi and
dtrace...
Is there a simple, existing way I can observe the read / write / IOPS on
a per-zvol basis?
If not, is there interest in having one?
Cheers!
Nathan.
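Nothing per-zvol out of the box that I know of; zpool iostat gets you to
the vdev level, and a DTrace one-liner on the zvol driver's entry points
can get closer. Treat the function names (zvol_read/zvol_write) and the
use of arg0 (the dev_t) as assumptions that may vary by build:

  zpool iostat -v tank 5
  dtrace -n 'fbt::zvol_read:entry,fbt::zvol_write:entry
      { @[probefunc, arg0] = count(); }
      tick-5sec { printa(@); trunc(@); }'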
I may not be understanding your usage case correctly, so bear with me.
Here is what I understand your request to be. Time is increasing from
left to right.
A -- B -- C -- D -- E
\
- F -- G
Where E and G are writable filesystems and the others are snapshots.
I think y
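If the goal is a fully self-standing copy (not a clone that still depends
on its origin snapshot), one way is a local send/receive followed by the
rollback; the dataset and snapshot names follow the diagram above but are
otherwise placeholders:

  zfs send tank/fs@C | zfs recv tank/fs_copy   # independent copy of the data at C
  zfs rollback -r tank/fs@B                    # -r destroys the newer snapshots C and D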
On Oct 17, 2007, at 11:25 AM, Claus Guttesen wrote:
>>>> Did you mount both the parent and all the children on the client ?
>>>
>>> No, I just assumed that the sub-partitions would inherit the same
>>> uid/gid as the parent. I have done a chown -R.
>>
>> Ahhh, the issue is not permissions