[please don't top-post, please remove CC's, please trim quotes. it's
really tedious to clean up your post to make it readable.]
Marc Nicholas writes:
> Brent Jones wrote:
>> Marc Nicholas wrote:
>>> Kjetil Torgrim Homme wrote:
his problem is "lazy" ZFS, notice how it gathers up data for
Thanks for the tip, but it was not that. The two hard drives were running under
RAID 1 on my Linux install, so the two drives had identical information on them
when I installed OpenSolaris. I disabled the hardware RAID support in my BIOS to
install OpenSolaris. Looking at the disk from the still
On Wed, Feb 10, 2010 at 10:48:57PM -0600, David Dyer-Bennet wrote:
> But I see how it could indeed be useful in
> theory to send just a *little* extra if you weren't sure quite what was
> needed but could guess pretty closely.
I think it's mostly for the benefit of retrying the same command,
On 2/10/2010 9:36 PM, Jason King wrote:
My experience (perhaps others will have different experiences) is that,
due to the added complexity and administrative overhead, ACLs are used
only when absolutely necessary -- i.e. you have something that due to
its nature must have very explicit and prec
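A minimal sketch of the kind of explicit grant being described, using the
Solaris chmod ACL syntax (the path and group name are made up for
illustration):
# chmod A+group:webteam:read_data/write_data/execute:file_inherit/dir_inherit:allow /export/www
# ls -dV /export/www
The first command appends an allow ACE for the group, inherited by new files
and directories; ls -V shows the resulting ACL.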
On 2/10/2010 7:21 PM, Daniel Carosone wrote:
On Wed, Feb 10, 2010 at 05:36:10PM -0600, David Dyer-Bennet wrote:
That's all about *ME* picking the suitable base snapshot, as I understand
it.
Correct.
I understood the recent reference to be suggesting that I didn't have
to, that z
Abdullah,
On Tue, Feb 09, 2010 at 02:12:24PM -0500, Abdullah Al-Dahlawi wrote:
> Greeting ALL
>
> I am wondering if it is possible to monitor the ZFS ARC cache hits using
> DTrace. In other words, would it be possible to know how many ARC cache hits
> have resulted from a particular application s
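For what it's worth, a rough sketch of one way to approach this (it assumes
the sdt arc-hit/arc-miss probes in the OpenSolaris ARC code, and attribution
by execname is only meaningful for the synchronous read path, since prefetch
and other async work fires in kernel context):
# dtrace -n 'sdt:::arc-hit { @hits[execname] = count(); } sdt:::arc-miss { @misses[execname] = count(); }'
The aggregate counters are also available without DTrace via kstat -n
arcstats, but those are system-wide, not per application.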
On Wed, 10 Feb 2010, Jason King wrote:
> I suspect that zfs is interpreting the group ACLs and adjusting the mode
> value accordingly to try to indicate the 'preserve owner/group on new
> file' semantics with the old permissions, however it sounds like it's not
> a symmetric operation -- if chgrp
On Wed, Feb 10, 2010 at 6:45 PM, Paul B. Henson wrote:
>
> We have an open bug which results in new directories created over NFSv4
> from a linux client having the wrong group ownership. While waiting for a
> patch to resolve the issue, we have a script running hourly on the server
> which finds d
CC'ed to security-discuss@opensolaris.org
-- richard
On Feb 10, 2010, at 4:45 PM, Paul B. Henson wrote:
>
> We have an open bug which results in new directories created over NFSv4
> from a linux client having the wrong group ownership. While waiting for a
> patch to resolve the issue, we have a
On Wed, Feb 10, 2010 at 05:36:10PM -0600, David Dyer-Bennet wrote:
> That's all about *ME* picking the suitable base snapshot, as I understand
> it.
Correct.
> I understood the recent reference to be suggesting that I didn't have
> to, that zfs would figure it out for me. Which still appears to
On Wed, Feb 10, 2010 at 4:05 PM, Brent Jones wrote:
> On Wed, Feb 10, 2010 at 3:12 PM, Marc Nicholas wrote:
>> How does lowering the flush interval help? If he can't ingress data
>> fast enough, faster flushing is a Bad Thing(tm).
>>
>> -marc
>>
>> On 2/10/10, Kjetil Torgrim Homme wrote:
>>> Bob
We have an open bug which results in new directories created over NFSv4
from a linux client having the wrong group ownership. While waiting for a
patch to resolve the issue, we have a script running hourly on the server
which finds directories owned by the wrong group and fixes them.
One of our u
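The shape of such a workaround, purely as a sketch (it assumes, just for
illustration, that everything under /export/web should belong to group
webusers; the real script presumably derives the correct group per
directory):
# find /export/web -type d ! -group webusers -exec chgrp webusers {} +
Run hourly from cron, this only touches directories whose group is already
wrong, so repeated runs stay cheap.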
Look at your BIOS settings and make sure you're booting from the HD that
has OpenSolaris.
Antonello
On 02/10/10 03:53 PM, Jeff Rogers wrote:
Just finished setting up a new DB server with the latest OpenSolaris release
from the LiveCD image. After spending the last few days learning about the n
This is a Windows box, not a DB that flushes every write.
The drives are capable of over 2000 IOPS (albeit with high latency, as
it's NCQ that gets you there), which would mean, even with sync flushes,
8-9MB/sec.
-marc
On 2/10/10, Brent Jones wrote:
> On Wed, Feb 10, 2010 at 3:12 PM, Marc Nicholas
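(For what it's worth, the 8-9 MB/s figure is consistent with assuming roughly
4 KiB per I/O: 2000 IOPS x 4 KiB ≈ 8 MB/s.)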
On Wed, Feb 10, 2010 at 3:12 PM, Marc Nicholas wrote:
> How does lowering the flush interval help? If he can't ingress data
> fast enough, faster flushing is a Bad Thing(tm).
>
> -marc
>
> On 2/10/10, Kjetil Torgrim Homme wrote:
>> Bob Friesenhahn writes:
>>> On Wed, 10 Feb 2010, Frank Cusack wr
Just finished setting up a new DB server with the latest OpenSolaris release
from the LiveCD image. After spending the last few days learning about the new
admin features and ZFS, I wanted to see if the second disk in my machine was
part of the rpool. I could not tell for sure, so I restarted
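For anyone in the same spot, a quicker check than rebooting: zpool status
lists every device that is part of the root pool, and format lists every disk
the system sees, so a disk that shows up in format but not in zpool status is
not in the pool.
# zpool status rpool
# format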
On Wed, February 10, 2010 16:51, Tim Cook wrote:
> On Wed, Feb 10, 2010 at 4:31 PM, David Dyer-Bennet wrote:
>
>>
>> On Wed, February 10, 2010 16:15, Tim Cook wrote:
>> > On Wed, Feb 10, 2010 at 3:38 PM, Terry Hull wrote:
>> >
>> >> Thanks for the info.
>> >>
>> >> If that last common snapshot g
How does lowering the flush interval help? If he can't ingress data
fast enough, faster flushing is a Bad Thing(tm).
-marc
On 2/10/10, Kjetil Torgrim Homme wrote:
> Bob Friesenhahn writes:
>> On Wed, 10 Feb 2010, Frank Cusack wrote:
>>
>> The other three commonly mentioned issues are:
>>
>> -
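For reference, the "flush interval" under discussion is presumably the
transaction group sync interval; on builds of that era it was commonly
lowered via the zfs_txg_timeout tunable in /etc/system, e.g. (treat the name
and the 1-second value as an illustration, not a recommendation):
# echo "set zfs:zfs_txg_timeout = 1" >> /etc/system
which takes effect at the next reboot.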
Bob Friesenhahn writes:
> On Wed, 10 Feb 2010, Frank Cusack wrote:
>
> The other three commonly mentioned issues are:
>
> - Disable the Nagle algorithm on the Windows clients.
for iSCSI? shouldn't be necessary.
> - Set the volume block size so that it matches the client filesystem
>block
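The block-size suggestion amounts to creating the zvol with a volblocksize
that matches the client filesystem's cluster size (it has to be set at
creation time). A sketch, assuming an NTFS client with the default 4 KB
clusters and a made-up volume name:
# zfs create -V 200G -b 4k tank/winvol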
Definitely use Comstar as Tim says.
At home I'm using 4x WD Caviar Blacks on an AMD Phenom X4 @ 1.Ghz and
only 2GB of RAM. I'm running snv_132. No HBA - onboard SB700 SATA
ports.
I can, with IOmeter, saturate GigE from my WinXP laptop via iSCSI.
Can you toss the RAID controller aside and use mothe
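A rough sketch of the COMSTAR route, with a made-up pool/volume name (the
GUID placeholder is whatever sbdadm prints when it creates the LU):
# zfs create -V 100G tank/lun0
# svcadm enable -r svc:/system/stmf:default
# sbdadm create-lu /dev/zvol/rdsk/tank/lun0
# stmfadm add-view <GUID-from-sbdadm-output>
# itadm create-target
# svcadm enable -r svc:/network/iscsi/target:default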
On Wed, 10 Feb 2010, Frank Cusack wrote:
On 2/10/10 2:06 PM -0800 Brian E. Imhoff wrote:
I then create a zpool using raidz2, using all 24 drives, 1 as a
hotspare: zpool create tank raidz2 c1t0d0 c1t1d0 [...] c1t22d0 spare
c1t23d0
Well there's one problem anyway. That's going to be horrib
On Wed, Feb 10, 2010 at 4:31 PM, David Dyer-Bennet wrote:
>
> On Wed, February 10, 2010 16:15, Tim Cook wrote:
> > On Wed, Feb 10, 2010 at 3:38 PM, Terry Hull wrote:
> >
> >> Thanks for the info.
> >>
> >> If that last common snapshot gets destroyed on the primary server, it is
> >> then a full
On Wed, Feb 10, 2010 at 4:06 PM, Brian E. Imhoff wrote:
> I am in the proof-of-concept phase of building a large ZFS/Solaris based
> SAN box, and am experiencing absolutely poor / unusable performance.
>
> Where to begin...
>
> The Hardware setup:
> Supermicro 4U 24 Drive Bay Chassis
> Supermicro
On Wed, February 10, 2010 16:28, Will Murnane wrote:
> On Wed, Feb 10, 2010 at 17:06, Brian E. Imhoff wrote:
>> I am in the proof-of-concept phase of building a large ZFS/Solaris based
>> SAN box, and am experiencing absolutely poor / unusable performance.
>>
>> I then create a zpool, using ra
On Wed, February 10, 2010 16:15, Tim Cook wrote:
> On Wed, Feb 10, 2010 at 3:38 PM, Terry Hull wrote:
>
>> Thanks for the info.
>>
>> If that last common snapshot gets destroyed on the primary server, it is
>> then a full replication back to the primary server. Is that correct?
>>
>> --
>> Terry
On 2/10/10 2:06 PM -0800 Brian E. Imhoff wrote:
I then create a zpool using raidz2, using all 24 drives, 1 as a
hotspare: zpool create tank raidz2 c1t0d0 c1t1d0 [...] c1t22d0 spare
c1t23d0
Well there's one problem anyway. That's going to be horribly slow no
matter what.
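To make the "horribly slow" point concrete: a raidz2 vdev delivers roughly
the random-read IOPS of a single disk, so one 23-wide vdev gives the whole
pool about one disk's worth of IOPS. A sketch of a layout that trades some
capacity for roughly three times the IOPS (device names taken from the quoted
command; the grouping is purely illustrative):
# zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
    raidz2 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0 c1t14d0 c1t15d0 \
    raidz2 c1t16d0 c1t17d0 c1t18d0 c1t19d0 c1t20d0 c1t21d0 c1t22d0 \
    spare c1t23d0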
On Wed, Feb 10, 2010 at 17:06, Brian E. Imhoff wrote:
> I am in the proof-of-concept phase of building a large ZFS/Solaris based SAN
> box, and am experiencing absolutely poor / unusable performance.
>
> I then create a zpool using raidz2, using all 24 drives, 1 as a hotspare:
> zpool create ta
On Wed, Feb 10, 2010 at 3:38 PM, Terry Hull wrote:
> Thanks for the info.
>
> If that last common snapshot gets destroyed on the primary server, it is
> then a full replication back to the primary server. Is that correct?
>
> --
> Terry
>
>
I think a better way of stating it is that it picks th
On Feb 10, 2010, at 1:38 PM, Terry Hull wrote:
> Thanks for the info.
>
> If that last common snapshot gets destroyed on the primary server, it is then
> a full replication back to the primary server. Is that correct?
If there are no common snapshots, then the first question is "how did we
I am in the proof-of-concept phase of building a large ZFS/Solaris based SAN
box, and am experiencing absolutely poor / unusable performance.
Where to begin...
The Hardware setup:
Supermicro 4U 24 Drive Bay Chassis
Supermicro X8DT3 Server Motherboard
2x Xeon E5520 Nehalem 2.26GHz Quad Core CPUs
4
Thanks for the info.
If that last common snapshot gets destroyed on the primary server, it is then a
full replication back to the primary server. Is that correct?
--
Terry
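For reference, the difference in practice, with made-up dataset and host
names:
# zfs send -i tank/data@base tank/data@today | ssh backuphost zfs recv tank/data
works only while both sides still have @base; once the last common snapshot
is gone, the fallback is a full stream:
# zfs send tank/data@today | ssh backuphost zfs recv -F tank/data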
On Wed, Feb 10, 2010 at 12:37:46PM -0500, rwali...@washdcmail.com wrote:
> I don't disagree with any of the facts you list, but I don't think the
> alternatives are fully described by "Sun vs. much cheaper retail parts."
>
> We face exactly this same decision with buying RAM for our servers
> (ma
Hello,
Immediately after a promote, the snapshot of the promoted clone has 1.25G used.
NAME           USED  AVAIL  REFER
q2/fs1        4.01G  9.86G  8.54G
q2/fs1@test1  1.25G      -  5.78G  -
Prior to the promote, the snapshot of the origin file system looke
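For context, the sequence being described looks roughly like this (names made
up):
# zfs clone pool/orig@test1 pool/newfs
# zfs promote pool/newfs
After the promote, pool/orig@test1 becomes pool/newfs@test1, and a snapshot's
USED column only counts space unique to that snapshot, so the number can
legitimately change when the promote re-parents the snapshot.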
Just saw this go by my twitter stream:
http://staff.science.uva.nl/~delaat/sne-2009-2010/p02/report.pdf
via @legeza
--
bda
cyberpunk is dead. long live cyberpunk.
On Feb 10, 2010, at 10:31 AM, Terry Hull wrote:
> First of all, I must apologize. I'm an OpenSolaris newbie so please don't
> be too hard on me.
[phasers on stun]
> Sorry if this has been beaten to death before, but I could not find it, so
> here goes. I'm wanting to be able to have two d
First of all, I must apologize. I'm an OpenSolaris newbie so please don't be
too hard on me.
Sorry if this has been beaten to death before, but I could not find it, so here
goes. I'm wanting to be able to have two disk servers that I replicate data
between using send / receive with snapsho
On Tue, Feb 09, 2010 at 11:16:44PM -0700, Eric D. Mudama wrote:
> >no one is selling disk brackets without disks. not Dell, not EMC, not
> >NetApp, not IBM, not HP, not Fujitsu, ...
>
> http://discountechnology.com/Products/SCSI-Hard-Drive-Caddies-Trays
I don't see why we have to hunt down rand
On Feb 9, 2010, at 1:55 PM, matthew patton wrote:
>> It might help people to understand how ridiculous they
>> sound going on and on
>> about buying a premium storage appliance without any
>> storage.
>
> Since I started this, let me explain to those who can't begin to understand
> why I propose
OK FORGET IT... I MUST BE VERY TIRED AND CONFUSED ;-(
I have an additional problem, which worries me.
I tried different ways of sending/receiving my data pool.
I took some snapshots, sent them, then destroyed them, using destroy -r.
AFAIK this should not have affected the filesystem's _current_ state, or am I
misled?
Now I succeeded in sending a snapsho
For those who've been suffering this problem and who have non-Sun
jbods, could you please let me know what model of jbod and cables
(including length thereof) you have in your configuration.
For those of you who have been running xVM without MSI support,
could you please confirm whether the devic
On Wed, Feb 10, 2010 at 9:06 AM, Ross Walker wrote:
> On Feb 9, 2010, at 1:55 PM, matthew patton wrote:
>
> The cheapest solution out there that isn't a Supermicro-like server
>> chassis, is DAS in the form of HP or Dell MD-series which top out at 15 or
>> 16 3.5" drives. I can only chain 3 units
On Feb 9, 2010, at 1:55 PM, matthew patton wrote:
The cheapest solution out there that isn't a Supermicro-like server
chassis, is DAS in the form of HP or Dell MD-series which top out at
15 or 16 3.5" drives. I can only chain 3 units per SAS port off a HBA
in either case.
The new Dell MD11
Actually I succeeded using:
# zfs create ezdata/data
# zfs send -RD data@prededup | zfs recv -duF ezdata/data
I still have to check the result, though.
Sorry if my question was confusing.
Yes, I'm wondering about the catch-22 resulting from the two errors: it means we
are not able to send/receive a pool's root filesystem without using -F.
The zpool list was just meant to show that it was a whole pool...
Bruno
On Tue, Feb 9, 2010 at 2:04 AM, Thomas Burgess wrote:
>
> On Mon, Feb 08, 2010 at 09:33:12PM -0500, Thomas Burgess wrote:
>> > This is a far cry from an apples to apples comparison though.
>>
>> As much as I'm no fan of Apple, it's a pity they dropped ZFS because
>> that would have brought consid
> amber ~ # zpool list data
> NAME   SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
> data   930G   295G  635G   31% 1.00x  ONLINE  -
>
> amber ~ # zfs send -RD data@prededup | zfs recv -d ezdata
> cannot receive new filesystem stream: destination 'ezdata' exists
> must specify -F to overwrit
Until zfs-crypto arrives, I am using a pool for sensitive data inside
several files encrypted via lofi crypto. The data is also valuable,
of course, so the pool is mirrored, with one file on each of several
pools (laptop rpool, and a couple of usb devices, not always
connected).
These backing fil
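A sketch of that arrangement, with made-up file names and sizes (lofiadm
prompts for a passphrase and prints the /dev/lofi/N device it attaches; the
device numbers below assume the two attaches came back as /dev/lofi/1 and
/dev/lofi/2):
# mkfile 4g /rpool/vault.img /stick/vault.img
# lofiadm -c aes-256-cbc -a /rpool/vault.img
# lofiadm -c aes-256-cbc -a /stick/vault.img
# zpool create vault mirror /dev/lofi/1 /dev/lofi/2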
matthew patton wrote:
> > It might help people to understand how ridiculous they
> > sound going on and on
> > about buying a premium storage appliance without any
> > storage.
>
> Since I started this, let me explain to those who can't begin to understand
> why I proposed something so "stupid".
"Eric D. Mudama" writes:
> On Tue, Feb 9 at 2:36, Kjetil Torgrim Homme wrote:
>> no one is selling disk brackets without disks. not Dell, not EMC,
>> not NetApp, not IBM, not HP, not Fujitsu, ...
>
> http://discountechnology.com/Products/SCSI-Hard-Drive-Caddies-Trays
very nice, thanks. unfort