> > So, at this point in time that seems pretty
> > discouraging for an everyday user, on Linux.
>
> nobody said that zfs-fuse is ready for an everyday
> user in its current state! ;)
That's what I found out, wanted to share, and get others' opinions on.
I did not complain. I thought it might wo
- I will try your test.
- But how does the ZFS cache affect my test?
- Can you send me the guide that says to change sd_max_throttle to 32 on
Solaris 10 or Solaris 8? There would be no more than 30 LUNs active on the
port, but not simultaneously. (A sketch of that setting follows these questions.)
- Do you know if a dual-port QLA2342 HBA can use both of its ports?
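For reference, a minimal sketch of the /etc/system change such a guide would
describe (the value and whether the sd or ssd driver applies are assumptions
that depend on the HBA stack in use, so treat this as illustrative only):

* /etc/system - limit outstanding commands queued per LUN (reboot required)
set sd:sd_max_throttle = 32
* on systems where the FC disks attach through the ssd driver instead:
set ssd:ssd_max_throttle = 32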
Hi,
I am trying to build a Live Distro for a particular tool using the OpenSolaris
LiveMedia kit. I am experiencing problems interpreting the
Build_live_dvd.conf.sample file included in the kit.
I have attached a file which gives the details of what I have tried and my
problems...
I request for h
Wee Yeh Tan wrote:
On 4/24/07, Richard Elling <[EMAIL PROTECTED]> wrote:
Wee Yeh Tan wrote:
> I didn't spot anything that reads it from /etc/system. Appreciate any
> pointers.
The beauty, and curse, of /etc/system is that modules do not need to create
an explicit reader.
Grr I suspecte
However, the MTTR is likely to be 1/8 the time
> Leon Koll wrote:
> > My guess is that Yaniv assumes that 8 pools with 62.5
> > million files each have a significantly lower chance of
> > being corrupted/causing data loss than 1 pool with 500
> > million files in it.
> > Do you agree with this?
>
> I do not agree with this statement. The probability
> is
On 4/24/07, Richard Elling <[EMAIL PROTECTED]> wrote:
Wee Yeh Tan wrote:
> I didn't spot anything that reads it from /etc/system. Appreciate any
> pointers.
The beauty, and curse, of /etc/system is that modules do not need to create
an explicit reader.
Grr I suspected after I replied that
Leon Koll wrote:
My guess is that Yaniv assumes that 8 pools with 62.5 million files each have a
significantly lower chance of being corrupted/causing data loss than 1 pool with
500 million files in it.
Do you agree with this?
I do not agree with this statement. The probability is the same,
regard
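(A quick illustrative calculation, not from the original thread, assuming each
file independently has some small corruption probability p: the chance that at
least one of N files is affected is 1 - (1-p)^N, and with N = 500 million total
files that value is the same whether the files live in one pool or in eight
pools of 62.5 million each; splitting only limits how much data sits behind any
single pool failure.)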
>
> On Apr 18, 2007, at 6:44 AM, Yaniv Aknin wrote:
>
> > Hello,
> >
> > I'd like to plan a storage solution for a system
> > currently in production.
> >
> > The system's storage is based on code which writes
> > many files to the file system, with overall storage
> > needs currently aroun
Hello, Roch
<...>
> Then SFS over ZFS is being investigated by others within
> Sun. I believe we have stuff in the pipe to make ZFS match
> or exceed UFS on small server level loads. So I think your
> complaint is being heard.
You're the first one who said this and I am glad I'm being he
On Apr 18, 2007, at 6:44 AM, Yaniv Aknin wrote:
Hello,
I'd like to plan a storage solution for a system currently in
production.
The system's storage is based on code which writes many files to
the file system, with overall storage needs currently around 40TB
and expected to reach hund
On Apr 23, 2007, at 10:56 AM, Andy Lubel wrote:
What I'm saying is ZFS doesn't play nice with NFS in all the
scenarios I could think of:
-Single second disk in a v210 (sun72g) write cache on and off =
~1/3 the performance of UFS when writing files using dd over an NFS
mount using the s
On Apr 20, 2007, at 7:54 AM, Robert Milkowski wrote:
Hello eric,
Friday, April 20, 2007, 4:01:46 PM, you wrote:
ek> On Apr 18, 2007, at 9:33 PM, Robert Milkowski wrote:
Hello Robert,
Thursday, April 19, 2007, 1:57:38 AM, you wrote:
RM> Hello nfs-discuss,
RM> Does anyone have a dtrace s
On Apr 20, 2007, at 1:02 PM, Anton B. Rang wrote:
So if someone has a real-world workload where having the ability to purposely
not cache user data would be a win, please let me know.
Multimedia streaming is an obvious one.
Assuming a single reader? Or multiple readers at the same spot?
> remember that solaris express can only be distributed by authorized parties.
Mmmyeah, I think we'll be fine. Sun is a capable organization and doesn't need
you or me to put a damper on the growth of OpenSolaris. If they have a problem
with something, they'll let us know.
Just waiting on you,
This opens up a whole new dialog now...
I believe FreeBSD has a lot better eSATA port multiplier support.
Would anyone here think it's a bad idea to get something like a
Highpoint card (http://eshop.macsales.com/item/Highpoint%20Technologies/RR2314/)
and chain 4x eSATA 4 or 5 drive enclosures off
> So, at this point in time that seems pretty discouraging for an everyday
> user, on Linux.
nobody said that zfs-fuse is ready for an everyday user in its current state!
;)
although it runs pretty stably for now, major issues still remain and,
especially, it's not yet optimized
On Mon, Apr 23, 2007 at 17:43:31 -0400, Torrey McMahon wrote:
: Dickon Hood wrote:
: >[snip]
: >I'm currently playing with ZFS on a T2000 with 24x500GB SATA discs in an
: >external array that presents as SCSI. After having much 'fun' with the
: >Solaris SCSI driver not handling LUNs >2TB
: That
Dickon Hood wrote:
[snip]
I'm currently playing with ZFS on a T2000 with 24x500GB SATA discs in an
external array that presents as SCSI. After having much 'fun' with the
Solaris SCSI driver not handling LUNs >2TB
That should work if you have the latest KJP and friends. (Actually, it
should h
Hello Robert,
Monday, April 23, 2007, 11:12:39 PM, you wrote:
RM> Hello Robert,
RM> Monday, April 23, 2007, 10:44:00 PM, you wrote:
RM>> Hello Peter,
RM>> Monday, April 23, 2007, 9:27:56 PM, you wrote:
PT>>> On 4/23/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
Relatively low traf
Robert Milkowski wrote:
Hello Darren,
Monday, April 23, 2007, 9:14:35 PM, you wrote:
DRSC> The environment that it is running in has less memory than I've used
DRSC> it on with Solaris before, so I went to look at how to tune the ARC,
DRSC> only to discover that it had already been capped to r
Hello Robert,
Monday, April 23, 2007, 10:44:00 PM, you wrote:
RM> Hello Peter,
RM> Monday, April 23, 2007, 9:27:56 PM, you wrote:
PT>> On 4/23/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
>>>
>>> Relatively low traffic to the pool but sync takes too long to complete
>>> and other operations
Hello Eric,
Monday, April 23, 2007, 7:13:26 PM, you wrote:
ES> On Mon, Apr 23, 2007 at 10:10:23AM -0700, Ron Halstead wrote:
>> What is the status of bug 6437054? The bug tracker still shows it open.
>>
>> Ron
ES> Do you mean:
ES> 6437054 vdev_cache: wise up or die
ES> This bug is still under
Hello Darren,
Monday, April 23, 2007, 9:14:35 PM, you wrote:
DRSC> The environment that it is running in has less memory than I've used
DRSC> it on with Solaris before, so I went to look at how to tune the ARC,
DRSC> only to discover that it had already been capped to roughly half the
DRSC> size
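For context, a sketch of one common way such a cap is applied (assuming a build
recent enough to carry the zfs_arc_max tunable; older builds required poking
arc.c_max with mdb instead, and the value below is an arbitrary example):

* /etc/system - cap the ARC at 1 GB (example value only)
set zfs:zfs_arc_max = 0x40000000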
FYI,
Sun is having a big, 25th Anniversary sale. X4500s are half price --
24 TBytes for $24k. ZFS runs really well on a X4500.
http://www.sun.com/emrkt/25sale/index.jsp?intcmp=tfa5101
I apologize to those not in the US or UK who can't take advantage
of the sale.
-- richard
On Mon, Apr 23, 2007 at 20:27:56 +0100, Peter Tribble wrote:
: On 4/23/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
: >Relatively low traffic to the pool but sync takes too long to complete
: >and other operations are also not that fast.
: >Disks are on 3510 array. zil_disable=1.
: >bash-3.00
Hello Robert,
Monday, April 23, 2007, 1:20:28 AM, you wrote:
RM> Hello zfs-discuss,
RM> bash-3.00# uname -a
RM> SunOS nfs-10-1.srv 5.10 Generic_125100-04 sun4u sparc SUNW,Sun-Fire-V440
RM> zil_disable set to 1
RM> Disks are over FCAL from 3510.
RM> bash-3.00# dtrace -n
RM> fbt::*SYNCHRONIZE*:e
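For reference, a sketch of how zil_disable was typically set at the time
(assuming the /etc/system route rather than mdb; an /etc/system change needs a
reboot to take effect):

* /etc/system - disable the ZIL (generally not recommended)
set zfs:zil_disable = 1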
Hello Peter,
Monday, April 23, 2007, 9:27:56 PM, you wrote:
PT> On 4/23/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
>>
>> Relatively low traffic to the pool but sync takes too long to complete
>> and other operations are also not that fast.
>>
>> Disks are on 3510 array. zil_disable=1.
>>
>>
Over the weekend I got ZFS up and running under FreeBSD and have
had much the same experience with it that I have with Solaris - it works
great out of the box and once configured, it is easy to forget about.
So far the only real difference is that anything you might tune via /etc/system
(or mdb) is don
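A sketch of the FreeBSD counterpart being alluded to (tunable name assumed from
the FreeBSD ZFS port of that era; the value is an example only):

# /boot/loader.conf - cap the ARC at boot, value in bytes
vfs.zfs.arc_max="1073741824"
# inspect the current value at runtime:
sysctl vfs.zfs.arc_max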
I'm sometimes seeing anomalously slow chown and chmod performance.
System: thumper running S10U3 with a single pool of 4 raidz2 vdevs.
I have a number of directory trees with something like 60 to 80 thousand
files in each. As part of the processing of this data, it is necessary to chown
and chmo
On 4/23/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
Relatively low traffic to the pool but sync takes too long to complete
and other operations are also not that fast.
Disks are on 3510 array. zil_disable=1.
bash-3.00# ptime sync
real     1:21.569
user        0.001
sys         0.027
He
On Mon, 23 Apr 2007, Eric Schrock wrote:
> On Mon, Apr 23, 2007 at 11:48:53AM -0700, Lyle Merdan wrote:
> > So If I send a snapshot of a filesystem to a receive command like this:
> > zfs send tank/[EMAIL PROTECTED] | zfs receive backup/jump
> >
> > In order to get compression turned on, am I corr
On Mon, 23 Apr 2007, Lyle Merdan wrote:
> So If I send a snapshot of a filesystem to a receive command like this:
> zfs send tank/[EMAIL PROTECTED] | zfs receive backup/jump
>
> In order to get compression turned on, am I correct in my thought that I
> need to start the send/receive and then in a
On Mon, Apr 23, 2007 at 11:48:53AM -0700, Lyle Merdan wrote:
> So If I send a snapshot of a filesystem to a receive command like this:
> zfs send tank/[EMAIL PROTECTED] | zfs receive backup/jump
>
> In order to get compression turned on, am I correct in my thought that
> I need to start the send/r
So if I send a snapshot of a filesystem to a receive command like this:
zfs send tank/[EMAIL PROTECTED] | zfs receive backup/jump
In order to get compression turned on, am I correct in thinking that I need to
start the send/receive and then, in a separate window, set the compression
property?
O
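A minimal sketch of the usual answer, with hypothetical dataset names (the
receive creates backup/jump, which inherits compression from its parent, so the
property can be set on the parent beforehand rather than racing the receive):

zfs set compression=on backup
zfs send tank/fs@snap | zfs receive backup/jump
zfs get compression backup/jump    # shows 'on', inherited from backup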
[EMAIL PROTECTED] said:
> And I did another performance test by copying a 512MB file into a zfs pool
> that was created from 1 LUN only, and the test result was the same - 12 sec!?
>
> NOTE: server V240, Solaris 10 (11/06), 2GB RAM, connected to HDS storage of
> type AMS500 with two HBAs of type QLogic QLA2342.
>
>
[EMAIL PROTECTED] said:
> bash-3.00# uname -a SunOS nfs-10-1.srv 5.10 Generic_125100-04 sun4u sparc
> SUNW,Sun-Fire-V440
>
> zil_disable set to 1 Disks are over FCAL from 3510.
>
> bash-3.00# dtrace -n fbt::*SYNCHRONIZE*:entry'{printf("%Y",walltimestamp);}'
> dtrace: description 'fbt::*SYNCHRONIZ
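Along the same lines, a small dtrace sketch (my own elaboration, not the quoted
one-liner) that also measures how long each cache-flush call takes:

dtrace -n '
fbt::*SYNCHRONIZE*:entry  { self->ts = timestamp; }
fbt::*SYNCHRONIZE*:return /self->ts/
{
        @["flush latency (ns)"] = quantize(timestamp - self->ts);
        self->ts = 0;
}'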
What I'm saying is ZFS doesn't play nice with NFS in all the scenarios I could
think of:
-Single second disk in a v210 (sun72g) write cache on and off = ~1/3 the
performance of UFS when writing files using dd over an NFS mount using the same
disk.
-2 raid 5 volumes composed of 6 spindles ea
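For context, a sketch of the kind of dd test being described (the path, block
size, and file size are assumptions, not the original command):

# write a ~1 GB file over the NFS mount of the filesystem under test
dd if=/dev/zero of=/mnt/nfs-test/ddfile bs=128k count=8192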
On Mon, Apr 23, 2007 at 10:10:23AM -0700, Ron Halstead wrote:
> What is the status of bug 6437054? The bug tracker still shows it open.
>
> Ron
Do you mean:
6437054 vdev_cache: wise up or die
This bug is still under investigation. A bunch of investigation has
been done, but no definitive acti
What is the status of bug 6437054? The bug tracker still shows it open.
Ron
On Mon, Apr 23, 2007 at 09:38:47AM -0700, Gino wrote:
>
> we had 5 corrupted zpools (on different servers and different SANs)!
> With Solaris up to S10U3 and Nevada up to snv59 we are able to easily
> corrupt a zpool just by disconnecting one or more of its LUNs a few
> times under high I/O load.
> > Is ZFS really supposed to be more reliable than UFS w/ logging, for
> > example, in a single disk, root file system scenario?
>
> Yes. The failure to cope with a failed write in an unreplicated pool
> affects the availability of the system (because we panic), but not the
> underlying reli
On Mon, Apr 23, 2007 at 08:49:35AM -0700, Ivan Wang wrote:
>
> Now this is scary. Looking at the descriptions, it is possible that
> we might lose data in zfs, and/or end up with a corrupted zpool that
> panics the kernel, if during a write operation zfs loses its connection
> to the underlying hardw
shay wrote:
Striping all of them may be OK.
Another 2 questions:
1. Is there any concatenation method in ZFS?
No. ZFS does dynamic striping. Some will argue that there is no real advantage
to concatenation that does not also exist in dynamic striping.
2. I tested the performance by copying a 512MB f
Wee Yeh Tan wrote:
I didn't spot anything that reads it from /etc/system. Appreciate any
pointers.
The beauty, and curse, of /etc/system is that modules do not need to create
an explicit reader.
-- richard
Striping all of them may be OK.
Another 2 questions:
1. Is there any concatenation method in ZFS?
2. I tested the performance by copying a 512MB file into a zfs pool created
from 2 LUNs with striping:
one LUN came from Storage-controller-0 and was connected through HBA-0,
the second LUN came from Storage-c
On 4/23/07, Manoj Joseph <[EMAIL PROTECTED]> wrote:
Wee Yeh Tan wrote:
> On 4/23/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
>> bash-3.00# mdb -k
>> Loading modules: [ unix krtld genunix dtrace specfs ufs sd pcisch md
>> ip sctp usba fcp fctl qlc ssd crypto lofs zfs random ptm cpc nfs ]
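As an aside, a sketch of checking such a tunable's live value from mdb (the
variable name here is only an example):

echo 'zil_disable/D' | mdb -k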
>> >
> On Tue, Apr 10, 2007 at 09:43:39PM -0700, Anton B.
> Rang wrote:
> >
> > That's only one cause of panics.
> >
> > At least two of gino's panics appear due to corrupted
> > space maps, for instance. I think there may also still
> > be a case where a failure to read metadata during a
> > transact
Hello shay,
Monday, April 23, 2007, 10:14:31 AM, you wrote:
s> I want to configure my zfs like this :
s> concatenation_stripe_pool :
s>   concatenation
s>     lun0_controller0
s>     lun1_controller0
s>   concatenation
s>     lun2_controller1
s>     lun3_controller1
s> 1. There is any
Hello Manoj,
Monday, April 23, 2007, 5:58:43 AM, you wrote:
MJ> Wee Yeh Tan wrote:
>> On 4/23/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
>>> bash-3.00# mdb -k
>>> Loading modules: [ unix krtld genunix dtrace specfs ufs sd pcisch md
>>> ip sctp usba fcp fctl qlc ssd crypto lofs zfs random pt
> > At least two of gino's panics appear due to corrupted space maps, for
> > instance. I think there may also still be a case where a failure to
> > read metadata during a transaction commit leads to a panic, too. Maybe
> > that one's been fixed, or maybe it will be handled by the above bu
On Mon, 2007-04-23 at 07:25 -0500, Rich Brown wrote:
> As it turns out, that has been proposed along with some other
> reorganization of communities/projects:
>
>http://mail.opensolaris.org/pipermail/ogb-discuss/2007-April/000289.html
>
> Thanks,
>
> Rich
>
Ah, ok. Nice to see this i
As it turns out, that has been proposed along with some other
reorganization of communities/projects:
http://mail.opensolaris.org/pipermail/ogb-discuss/2007-April/000289.html
Thanks,
Rich
Mark Phalan wrote:
On 21 Apr 2007, at 04:42, Rich Brown wrote:
...
Hi Frank,
I'm about
Gino wrote:
Apr 23 02:02:22 SERVER144 panic[cpu1]/thread=ff0017fa1c80:
Apr 23 02:02:22 SERVER144 genunix: [ID 809409 kern.notice] ZFS: I/O failure (write on
off 0: zio 9a5d4cc0 [L0 bplist] 4000L/4000P
DVA[0]=<0:770b24000:4000> DVA[1]=<0:dfa984000:4000> fletcher4 uncompressed LE
Albert Chin wrote:
On Sat, Apr 21, 2007 at 09:05:01AM +0200, Selim Daoud wrote:
isn't there another flag in /etc/system to force zfs not to send flush
requests to NVRAM?
I think it's zfs_nocacheflush=1, according to Matthew Ahrens in
http://blogs.digitar.com/jjww/?itemid=44.
s.
Apr 23 02:02:21 SERVER144 offline or reservation conflict
Apr 23 02:02:21 SERVER144 scsi: [ID 107833 kern.warning] WARNING:
/scsi_vhci/[EMAIL PROTECTED] (sd82):
Apr 23 02:02:21 SERVER144 i/o to invalid geometry
Apr 23 02:02:21 SERVER144 scsi: [ID 107833 kern.warning] WARNING:
/scsi_vhci/[
Leon Koll writes:
> Welcome to the club, Andy...
>
> I tried several times to attract the attention of the community to the
> dramatic performance degradation (about 3 times) of the NFS/ZFS vs. NFS/UFS
> combination - without any result:
> http://www.opensolaris.org/jive/thread.jspa?messa
Albert Chin writes:
> On Sat, Apr 21, 2007 at 09:05:01AM +0200, Selim Daoud wrote:
> > isn't there another flag in /etc/system to force zfs not to send flush
> > requests to NVRAM?
>
> I think it's zfs_nocacheflush=1, according to Matthew Ahrens in
> http://blogs.digitar.com/jjww/?itemid=44.
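For completeness, the /etc/system form that tunable would take (assuming it
lives in the zfs module, as the blog entry suggests; /etc/system changes need a
reboot):

* /etc/system - stop ZFS from issuing cache flushes to the array
set zfs:zfs_nocacheflush = 1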
I want to configure my zfs like this:
concatenation_stripe_pool:
  concatenation
    lun0_controller0
    lun1_controller0
  concatenation
    lun2_controller1
    lun3_controller1
1. Is there any option to implement it in ZFS?
2. Is there another way to get the same configuration?
than
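A sketch of the closest ZFS equivalent, with hypothetical device names: there is
no concatenation vdev, so the four LUNs are simply handed to the pool as
top-level vdevs and ZFS stripes writes dynamically across them:

zpool create concat_stripe_pool c2t0d0 c2t1d0 c3t0d0 c3t1d0
zpool status concat_stripe_pool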
I have a Storage-SAN of HDS (AMS500), and I want to do striping on LUNs from the
storage. I don't want any RAID-5 (because I already have it on the disks in the
HDS storage).
I only want to stripe across 2 LUNs (25GB each) that come from different
controllers and different fibre channel ports (dual HBA).
I tes