ok, see below...
On Oct 23, 2009, at 8:14 PM, Adam Cheal wrote:
Here is an example of the pool config we use:
# zpool status
pool: pool002
state: ONLINE
scrub: scrub stopped after 0h1m with 0 errors on Fri Oct 23 23:07:52 2009
config:
NAME STATE READ WRITE CKSUM
poo
On Tue, Oct 20 at 22:24, Frédéric VANNIERE wrote:
You can't use the Intel X25-E because it has a 32 or 64 MB volatile
cache that can neither be disabled nor flushed by ZFS.
I don't believe the above statement is correct.
According to anandtech who asked Intel:
http://www.anandtech.com/cpuchips
Here is an example of the pool config we use:
# zpool status
pool: pool002
state: ONLINE
scrub: scrub stopped after 0h1m with 0 errors on Fri Oct 23 23:07:52 2009
config:
        NAME        STATE     READ WRITE CKSUM
        pool002     ONLINE       0     0     0
          raidz2    ONLINE
Hi Joerg,
Thanks for this clarification. We understand that we can distribute the ZFS
binary under a non-GPL license, as long as it does not use GPL symbols.
Our plan regarding ZFS is to first port it to the Linux kernel and then make its
binary distributions available for various distributions
On Oct 23, 2009, at 5:32 PM, Tim Cook wrote:
On Fri, Oct 23, 2009 at 7:17 PM, Richard Elling wrote:
Tim has a valid point. By default, ZFS will queue 35 commands per
disk.
For 46 disks that is 1,610 concurrent I/Os. Historically, it has
proven to be
relatively easy to crater performance o
And therein lies the issue. The excessive load that causes the IO issues is
almost always generated locally from a scrub or a local recursive "ls" used to
warm up the SSD-based zpool cache with metadata. The regular network IO to the
box is minimal and is very read-centric; once we load the box
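(For context, the warm-up in question is just a metadata walk over the pool; a minimal sketch, assuming a pool named pool002 mounted at /pool002 with an L2ARC device attached; the secondarycache setting is optional and only an assumption about the setup:
# zfs set secondarycache=metadata pool002
# ls -lR /pool002 > /dev/null
The recursive ls touches every directory entry, which pulls the metadata through the ARC and lets it spill onto the SSD-based L2ARC.)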
On Tue, Oct 20 at 21:54, Bob Friesenhahn wrote:
On Tue, 20 Oct 2009, Richard Elling wrote:
Intel: X-25E read latency 75 microseconds
... but they don't say where it was measured or how big it was...
Probably measured using a logic analyzer and measuring the time from
the last bit of the r
On Fri, Oct 23, 2009 at 7:19 PM, Adam Leventhal wrote:
> On Fri, Oct 23, 2009 at 06:55:41PM -0500, Tim Cook wrote:
> > So, from what I gather, even though the documentation appears to state
> > otherwise, default checksums have been changed to SHA256. Making that
> > assumption, I have two quest
On Fri, Oct 23, 2009 at 06:55:41PM -0500, Tim Cook wrote:
> So, from what I gather, even though the documentation appears to state
> otherwise, default checksums have been changed to SHA256. Making that
> assumption, I have two questions.
That's false. The default checksum has changed from fletcher2 to fletcher4, not to SHA256.
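For reference, the checksum in effect can be inspected and changed per dataset; a minimal sketch with hypothetical pool/dataset names:
# zfs get checksum tank/fs
NAME     PROPERTY  VALUE  SOURCE
tank/fs  checksum  on     default
# zfs set checksum=sha256 tank/fs
A value of "on" simply means "use the current default algorithm"; sha256 is only used where it has been set explicitly like this.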
On Fri, Oct 23, 2009 at 7:17 PM, Richard Elling wrote:
>
> Tim has a valid point. By default, ZFS will queue 35 commands per disk.
> For 46 disks that is 1,610 concurrent I/Os. Historically, it has proven to
> be
> relatively easy to crater performance or cause problems with very, very,
> very ex
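If the per-disk queue depth is the suspect, it can be lowered; a hedged sketch using the zfs_vdev_max_pending tunable of that era (the value 10 is only an example, not a recommendation):
# echo zfs_vdev_max_pending/W0t10 | mdb -kw     (change it on the running system)
or persistently, via /etc/system and a reboot:
set zfs:zfs_vdev_max_pending = 10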
On Fri, Oct 23, 2009 at 7:17 PM, Adam Cheal wrote:
> LSI's sales literature on that card specs "128 devices" which I take with a
> few hearty grains of salt. I agree that with all 46 drives pumping out
> streamed data, the controller would be overworked BUT the drives will only
> deliver data as
What luupgrade do you use?
I uninstall the LU packages in the current build first, then install the new LU
packages from the version I am upgrading to.
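For what it's worth, the usual sequence is to remove the old LU packages and install the ones shipped with the target build before creating and upgrading the BE; the package names below are the standard LU ones, while the media path and BE name are only placeholders:
# pkgrm SUNWlucfg SUNWluu SUNWlur
# pkgadd -d /path/to/new/media/Solaris_11/Product SUNWlucfg SUNWlur SUNWluu
# lucreate -n newBE
# luupgrade -u -n newBE -s /path/to/new/media
# luactivate newBE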
On Oct 23, 2009, at 4:46 PM, Tim Cook wrote:
On Fri, Oct 23, 2009 at 6:32 PM, Adam Cheal
wrote:
I don't think there was any intention on Sun's part to ignore the
problem...obviously their target market wants a performance-oriented
box and the x4540 delivers that. Each 1068E controller chip
LSI's sales literature on that card specs "128 devices" which I take with a few
hearty grains of salt. I agree that with all 46 drives pumping out streamed
data, the controller would be overworked BUT the drives will only deliver data
as fast as the OS tells them to. Just because the speedometer
Hi,
Check that... I'm on the alias now...
Jon Aimone spake thusly, on or about 10/23/09 17:15:
Hi,
I have a functional OpenSolaris x64 system on which I need to physically
move the boot disk, meaning its physical device path will change and
probably its cXdX name.
When I do this the system fa
Hi,
I have a functional OpenSolaris x64 system on which I need to physically
move the boot disk, meaning its physical device path will change and
probably its cXdX name.
When I do this the system fails to boot. The error messages indicate
that it's still trying to read from the original path.
I
On 10/23/09 16:56, sean walmsley wrote:
Eric and Richard - thanks for your responses.
I tried both:
echo ::spa -c | mdb -k
zdb -C (not much of a man page for this one!)
and was able to match the POOL id from the log (hex 4fcdc2c9d60a5810) with both
outputs. As Richard pointed out, I needed t
Eric and Richard - thanks for your responses.
I tried both:
echo ::spa -c | mdb -k
zdb -C (not much of a man page for this one!)
and was able to match the POOL id from the log (hex 4fcdc2c9d60a5810) with both
outputs. As Richard pointed out, I needed to convert the hex value to decimal
to get
So, from what I gather, even though the documentation appears to state
otherwise, default checksums have been changed to SHA256. Making that
assumption, I have two questions.
First, is the default updated from fletcher2 to SHA256 automatically for a
pool that was created with an older version of
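One way to answer that for a particular pool is to check the pool version and whether checksum is locally set or still inherited from the default; a minimal sketch with a hypothetical pool name:
# zpool get version tank
# zpool upgrade -v                     (what each pool version adds)
# zfs get -r -o name,property,value,source checksum tank
A SOURCE of "default" means the dataset follows whatever the running bits treat as the default algorithm; only a "local" source pins it to a specific one.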
On Fri, Oct 23, 2009 at 6:32 PM, Adam Cheal wrote:
> I don't think there was any intention on Sun's part to ignore the
> problem...obviously their target market wants a performance-oriented box and
> the x4540 delivers that. Each 1068E controller chip supports 8 SAS PHY
> channels = 1 channel per
I don't think there was any intention on Sun's part to ignore the
problem...obviously their target market wants a performance-oriented box and
the x4540 delivers that. Each 1068E controller chip supports 8 SAS PHY channels
= 1 channel per drive = no contention for channels. The x4540 is a monste
Hi,
On Mon, Oct 19, 2009 at 05:03:18PM -0600, Cindy Swearingen wrote:
> Currently, the device naming changes in build 125 mean that you cannot
> use Solaris Live Upgrade to upgrade or patch a ZFS root dataset in a
> mirrored root pool.
> [...]
Just ran into this yesterday... The change to get thin
Anyone know if this means that this will actually show up in SNV soon, or
whether it will make 2010.02? (on disk dedup specifically)
On Fri, Oct 23, 2009 at 3:48 PM, Bruno Sousa wrote:
> Could the reason Sun's x4540 Thumper has 6 LSIs be some sort of "hidden"
> problem found by Sun where the HBA resets, and due to time-to-market pressure
> the "quick and dirty" solution was to spread the load over multiple HBAs
> instead of softw
Probably if you try to use any LU operation after you have upgraded to
build 125.
cs
On 10/23/09 16:18, Chris Du wrote:
Sorry, do you mean luupgrade from previous versions or from 125 to future
versions?
I luupgraded from 124 to 125 with a mirrored root pool and everything is working
fine.
On Oct 23, 2009, at 3:19 PM, Eric Schrock wrote:
On 10/23/09 15:05, Cindy Swearingen wrote:
I'm stumped too. Someone with more FM* experience needs to comment.
Looks like your errlog may have been rotated out of existence - see
if there is a .X or .gz version in /var/fm/fmd/errlog*. The
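If a rotated copy is still there, fmdump can be pointed at it directly (gunzip it first if it has been compressed); a small sketch:
# ls /var/fm/fmd/errlog*
# fmdump -eV /var/fm/fmd/errlog.0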
Sad to hear that Apple is apparently going in another direction.
http://www.macrumors.com/2009/10/23/apple-shuts-down-open-source-zfs-project/
-cheers, CSB
I haven't seen any mention of it in this forum yet, so FWIW you might be
interested in the details of ZFS deduplication mentioned in this recently-filed
case.
Case log: http://arc.opensolaris.org/caselog/PSARC/2009/571/
Discussion: http://www.opensolaris.org/jive/thread.jspa?threadID=115507
V
Sorry, do you mean luupgrade from previous versions or from 125 to future
versions?
I luupgraded from 124 to 125 with a mirrored root pool and everything is working
fine.
On 10/23/09 15:05, Cindy Swearingen wrote:
I'm stumped too. Someone with more FM* experience needs to comment.
Looks like your errlog may have been rotated out of existence - see if
there is a .X or .gz version in /var/fm/fmd/errlog*. The list.suspect
fault should be including a location fie
I'm stumped too. Someone with more FM* experience needs to comment.
Cindy
On 10/23/09 14:52, sean walmsley wrote:
Thanks for this information.
We have a weekly scrub schedule, but I ran another just to be sure :-) It
completed with 0 errors.
Running fmdump -eV gives:
TIME
Is there a CR yet for this?
Thanks
Karl
Cindy Swearingen wrote:
Hi everyone,
Currently, the device naming changes in build 125 mean that you cannot
use Solaris Live Upgrade to upgrade or patch a ZFS root dataset in a
mirrored root pool.
If you are considering this release for the ZFS log dev
Tommy McNeely wrote:
I have a system whose rpool has gone defunct. The rpool is made of a
single "disk" which is a raid5EE made of all 8 146G disks on the box.
The raid card is the Adaptec brand card. It was running nv_107, but it's
currently net booted to nv_121. I have already checked in the
On Oct 23, 2009, at 1:48 PM, Bruno Sousa wrote:
Could the reason Sun's x4540 Thumper has 6 LSIs be some sort of
"hidden" problem found by Sun where the HBA resets, and due to
time-to-market pressure the "quick and dirty" solution was to spread
the load over multiple HBAs instead of a software fix
Hi Cindy,
Thank you for the update, but it seems I can't see any information
specific to that bug.
I can only see bugs 6702538 and 6615564, but according to their
history, they were fixed quite some time ago.
Can you by any chance share the information about bug 6694909?
T
Thanks for this information.
We have a weekly scrub schedule, but I ran another just to be sure :-) It
completed with 0 errors.
Running fmdump -eV gives:
TIME CLASS
fmdump: /var/fm/fmd/errlog is empty
Dumping the faultlog (no -e) does give some output, but again there
Could the reason Sun's x4540 Thumper has 6 LSIs be some sort of "hidden"
problem found by Sun where the HBA resets, and due to time-to-market
pressure the "quick and dirty" solution was to spread the load over
multiple HBAs instead of a software fix?
Just my 2 cents..
Bruno
Adam Cheal wrote:
J
Adam Cheal wrote:
Just submitted the bug yesterday, under advice of James, so I don't have a number you can
refer to...the "change request" number is 6894775 if that helps or is
directly related to the future bugid.
From what I've seen/read this problem has been around for a while but only re
On Fri, Oct 23, 2009 at 3:05 PM, Travis Tabbal wrote:
> Hmm.. I expected people to jump on me yelling that it's a bad idea. :)
>
> How about this, can I remove a vdev from a pool if the pool still has
> enough space to hold the data? So could I add it in and mess with it for a
> while without los
Hmm.. I expected people to jump on me yelling that it's a bad idea. :)
How about this, can I remove a vdev from a pool if the pool still has enough
space to hold the data? So could I add it in and mess with it for a while
without losing anything? I would expect the system to resilver the data o
> - How can I effect OCE with ZFS? The traditional
> 'back up all the data somewhere, add a drive,
> re-establish the file system/pools/whatever, then
> copy the data back' is not going to work because
> there will be nowhere to temporarily 'put' the
> data.
Add devices to the pool. Preferably in
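In other words the pool grows in place as soon as a new top-level vdev is added; a minimal sketch with hypothetical device names:
# zpool add tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
# zpool list tank                      (shows the enlarged capacity immediately)
No data has to be copied off and back; existing data stays where it is and new writes spread across all vdevs.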
On Oct 23, 2009, at 12:42 PM, Tim Cook wrote:
On Fri, Oct 23, 2009 at 2:38 PM, Richard Elling wrote:
FYI,
The ZFS project on MacOS forge (zfs.macosforge.org) has provided the
following announcement:
ZFS Project Shutdown 2009-10-23
The ZFS project has been discontinued.
On Fri, Oct 23, 2009 at 2:38 PM, Richard Elling wrote:
> FYI,
> The ZFS project on MacOS forge (zfs.macosforge.org) has provided the
> following announcement:
>
>ZFS Project Shutdown 2009-10-23
>The ZFS project has been discontinued. The mailing list and
> repository wil
FYI,
The ZFS project on MacOS forge (zfs.macosforge.org) has provided the
following announcement:
ZFS Project Shutdown 2009-10-23
The ZFS project has been discontinued. The mailing list and
repository will
also be removed shortly.
The community is migrating t
Hi Karim,
All ZFS storage pools are going to use some amount of space for
metadata and in this example it looks like 3 GB. This is what
the difference between zpool list and zfs list is telling you.
No other way exists to calculate the space that is consumed by
metadata.
pool space (199 GB) minu
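For reference, the two views being compared; a minimal sketch with a hypothetical pool name:
# zpool list tank                      (pool-level accounting)
# zfs list -r tank                     (dataset-level accounting)
The gap between the two is largely pool metadata (and, for raidz pools, parity space), which is what the 3 GB here represents.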
Just submitted the bug yesterday, under advice of James, so I don't have a
number you can refer to...the "change request" number is 6894775 if that
helps or is directly related to the future bugid.
From what I've seen/read this problem has been around for a while but only rears
its ugly head
Hi Sean,
A better way probably exists but I use fmdump -eV to identify the
pool and the device information (vdev_path) that is listed like this:
# fmdump -eV | more
.
.
.
pool = test
pool_guid = 0x6de45047d7bde91d
pool_context = 0
pool_failmode = wait
"David Dyer-Bennet" wrote:
> The problem with this, I think, is that to be used by any significant
> number of users, the module has to be included in a distribution, not just
> distributed by itself. (And the different distributions have their own
> policies on what they will and won't consider
Bob Friesenhahn wrote:
> On Fri, 23 Oct 2009, Kyle McDonald wrote:
> >
> > Along these lines, it's always struck me that most of the restrictions of
> > the
> > GPL fall on the entity who distributes the 'work' in question.
>
> A careful reading of GPLv2 shows that restrictions only apply when
Kyle McDonald wrote:
> Arguably that line might even be shifted from the act of compiling it,
> to the act of actually loading (linking) it into the Kernel, so that
> distributing a compiled module might even work the same way. I'm not so
> sure about this though. Presumably compiling it befor
Sorry, running snv_123, indiana
On Fri, Oct 23, 2009 at 11:16 AM, Jeremy f wrote:
> What bug# is this under? I'm having what I believe is the same problem. Is
> it possible to just take the mpt driver from a prior build in the time
> being?
> The output below is from the load that the zpool scrub creates. T
What bug# is this under? I'm having what I believe is the same problem. Is
it possible to just take the mpt driver from a prior build in the time
being?
The output below is from the load that the zpool scrub creates. This is on a Dell T7400
workstation with an OEMed LSI 1068E. I updated the firmware to the newe
This morning we got a fault management message from one of our production
servers stating that a fault in one of our pools had been detected and fixed.
Looking into the error using fmdump gives:
fmdump -v -u 90ea244e-1ea9-4bd6-d2be-e4e7a021f006
TIME UUID
Bob Friesenhahn wrote:
On Fri, 23 Oct 2009, Kyle McDonald wrote:
Along these lines, it's always struck me that most of the
restrictions of the GPL fall on the entity who distributes the 'work'
in question.
A careful reading of GPLv2 shows that restrictions only apply when
distributing binar
On Fri, October 23, 2009 11:57, Kyle McDonald wrote:
>
> Along these lines, it's always struck me that most of the restrictions
> of the GPL fall on the entity who distributes the 'work' in question.
>
> I would think that distributing the source to a separate original work
> for a module, leaves t
On Fri, 23 Oct 2009, Kyle McDonald wrote:
Along these lines, it's always struck me that most of the restrictions of the
GPL fall on the entity who distributes the 'work' in question.
A careful reading of GPLv2 shows that restrictions only apply when
distributing binaries.
I would think that
On Fri, October 23, 2009 09:57, Robert wrote:
> A few months ago I happened upon ZFS and have been excitedly trying to
> learn all I can about it. There is much to admire about ZFS, so I would
> like to integrate it into my solution. The simple statement of
> requirements is: support for total of
Bob Friesenhahn wrote:
On Fri, 23 Oct 2009, Anand Mitra wrote:
One of the biggest questions around this effort would be “licensing”.
As far as our understanding goes; CDDL doesn’t restrict us from
modifying ZFS code and releasing it. However GPL and CDDL code cannot
be mixed, which implies that
Mike Bo wrote:
Once data resides within a pool, there should be an efficient method of moving
it from one ZFS file system to another. Think Link/Unlink vs. Copy/Remove.
Here's my scenario... When I originally created a 3TB pool, I didn't know the
best way to carve up the space, so I used a single
I also consider myself a "noob" when it comes to ZFS, but I already built
myself a ZFS filer and maybe I can
enlighten you by sharing my "advanced noob who read a lot about ZFS"
thoughts about ZFS
> A few examples of "duh" ?s
>- How can I effect OCE with ZFS? The traditional 'back up a
On Fri, 23 Oct 2009, Arne Jansen wrote:
3) Do you have any configuration hints for setting up a pool layout
which might help resilver performance? (aside from using hardware
RAID instead of RAIDZ)
Using fewer drives per vdev should certainly speed up resilver
performance. It sounds like your pool
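As a rough sketch of what that looks like in practice (device names are hypothetical), the same 22 disks could be laid out as three narrower raidz2 vdevs plus a hot spare instead of a single wide raidz3:
# zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
    raidz2 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0 \
    raidz2 c1t14d0 c1t15d0 c1t16d0 c1t17d0 c1t18d0 c1t19d0 c1t20d0 \
    spare c1t21d0
A resilver then only has to read the six surviving disks of one vdev instead of all 21 surviving disks of a wide raidz3.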
On Fri, 23 Oct 2009, Anand Mitra wrote:
One of the biggest questions around this effort would be “licensing”.
As far as our understanding goes; CDDL doesn’t restrict us from
modifying ZFS code and releasing it. However GPL and CDDL code cannot
be mixed, which implies that ZFS cannot be compiled
Hi,
I have a pool of 22 1T SATA disks in a RAIDZ3 configuration. It is filled with
files of an average size of 2MB. I filled it randomly to resemble the expected
workload in production use.
Problems arise when I try to scrub/resilver this pool. This operation takes the
better part of a week (!)
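While the scrub or resilver is running, its progress and the per-vdev throughput can be watched; a minimal sketch with a hypothetical pool name:
# zpool status tank                    (percent done and estimated time to go)
# zpool iostat -v tank 10              (per-vdev read/write bandwidth every 10 seconds)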
I am in the beginning stage of converting multiple two-drive NAS devices to a
more proper single-device storage solution for my home network.
Because I have a pretty good understanding of hardware-based storage solutions,
originally I was going to go with a traditional server-class motherboar
Our config is:
OpenSolaris snv_118 x64
1 x LSISAS3801E controller
2 x 23-disk JBOD (fully populated, 1TB 7.2k SATA drives)
Each of the two external ports on the LSI connects to a 23-disk JBOD. ZFS-wise
we use 1 zpool with 2 x 22-disk raidz2 vdevs (1 vdev per JBOD). Each zpool has
one ZFS filesyst
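When the timeouts/resets discussed in this thread hit, they tend to show up in per-device service times and error counters; a small sketch of how to watch for them (no specific output is implied here):
# iostat -xnze 10                      (extended stats, device names, error counts, nonzero lines only)
# fmdump -e | tail -20                 (any recent ereports logged by FMA)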
(sorry for the cross-post; I posted this in opensolaris-discuss, but I think it
belongs here)
I can no longer mount 1 of my 2 volumes.
They are both on ZFS. I can still mount my home, which is on rpool,
but cannot mount my data, which is on a raidz pool.
The settings are the same.
this is from AppleVolum
Hi,
snv_123, x64
zfs recv -F complains that it can't open a snapshot it has just destroyed itself, as it
was destroyed on the sending side. Other than complaining about it, it finishes
successfully.
Below is an example where I created a filesystem fs1 with three snapshots of it
called snap1, snap2, snap3.
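A minimal sketch of the kind of sequence being described, using the fs/snapshot names from the message; the pool name and the receive target are placeholders and this is a reconstruction, not the exact commands used:
# zfs create tank/fs1
# zfs snapshot tank/fs1@snap1
# zfs snapshot tank/fs1@snap2
# zfs snapshot tank/fs1@snap3
# zfs send -R tank/fs1@snap3 | zfs recv -F tank/fs1_copy      (full replication stream)
# zfs destroy tank/fs1@snap1                                  (snapshot removed on the sending side)
# zfs snapshot tank/fs1@snap4
# zfs send -R -I @snap3 tank/fs1@snap4 | zfs recv -F tank/fs1_copy
On the last receive, -F destroys tank/fs1_copy@snap1 because it no longer exists on the sending side, which is where the complaint about the just-destroyed snapshot appears even though the receive completes.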
Darren J Moffat wrote:
> > One of the biggest questions around this effort would be "licensing".
> > As far as our understanding goes; CDDL doesn't restrict us from
> > modifying ZFS code and releasing it. However GPL and CDDL code cannot
> > be mixed, which implies that ZFS cannot be compiled in
Anand Mitra wrote:
Hi All,
At KQ Infotech, we have always looked at challenging ourselves by
trying to scope out new technologies. Currently we are porting ZFS to
Linux and would like to share our progress and the challenges faced;
we would also like to know your thoughts/inputs regarding our ef
Hi All,
At KQ Infotech, we have always looked at challenging ourselves by
trying to scope out new technologies. Currently we are porting ZFS to
Linux and would like to share our progress and the challenges faced;
we would also like to know your thoughts/inputs regarding our efforts.
Though we are
2009/10/23 Gaëtan Lehmann :
>
> On 23 Oct 09, at 08:46, Stathis Kamperis wrote:
>
>> 2009/10/23 michael schuster :
>>>
>>> Stathis Kamperis wrote:
Salute.
I have a filesystem where I store various source repositories (cvs +
git). I have compression enabled (compression=on) and zfs get
Hi Adam,
How many disks and zpools/zfs's do you have behind that LSI?
I have a system with 22 disks and 4 zpools with around 30 zfs's and so
far it works like a charm, even during heavy load. The opensolaris
release is snv_101b .
Bruno
Adam Cheal wrote:
Cindy: How can I view the bug report you
Hi Cindy,
I have a couple of questions about this issue :
1. I have exactly the same LSI controller in another server running
opensolaris snv_101b, and so far no errors like these ones were
seen on the system
2. up to snv_118 I haven't seen any problems, only now with snv_125
3
On 23 Oct 09, at 08:46, Stathis Kamperis wrote:
2009/10/23 michael schuster :
Stathis Kamperis wrote:
Salute.
I have a filesystem where I store various source repositories (cvs +
git). I have compression enabled (compression=on) and zfs get compressratio
reports
1.46x. When I copy all the stuff to a
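For reference, the relevant properties can be read together; a minimal sketch with a hypothetical dataset name:
# zfs get compression,compressratio tank/src
Note that compressratio only reflects data written while compression was enabled, not anything that was already in the filesystem beforehand.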