Recently upgraded a system from b98 to b114. Also replaced two 400G
Seagate Barracuda 7200.8 SATA disks with two WD 750G RE3 SATA disks
in a 6-device raidz1 pool. Replacing the first 750G disk went OK. While
replacing the second 750G disk, I noticed CKSUM errors on the first
disk. Once the second dis
I've been running vanilla 2009.06 since its release. I'll definitely give it a
shot with the Live CD.
Also, I tried importing with only the five good disks physically attached and
got the same message.
- Kyle
On Mon, Sep 21, 2009 at 3:50 AM, Chris Murray wrote:
> That really sounds like a scenario tha
On Mon, Sep 21, 2009 at 3:37 AM, wrote:
>
> >
> >The disk has since been replaced, so now:
> >k...@localhost:~$ pfexec zpool import
> > pool: chronicle
> >id: 11592382930413748377
> > state: DEGRADED
> >status: One or more devices contains corrupted data.
> >action: The pool can be imported
I was able to get Netatalk built on OpenSolaris for my ZFS NAS at home.
Everything is running great so far, and I'm planning on using it on the 96TB
NAS I'm building for my office. It would be nice to have this supported out of
the box, but there are probably licensing issues involved.
On Sep 21, 2009, at 2:43 PM, Andrew Deason wrote:
On Mon, 21 Sep 2009 17:13:26 -0400
Richard Elling wrote:
OK, so the problem you are trying to solve is "how much stuff can I
place in the remaining free space?" I don't think this is knowable
for a dynamic file system like ZFS where metadata
On Mon, 21 Sep 2009 17:13:26 -0400
Richard Elling wrote:
> OK, so the problem you are trying to solve is "how much stuff can I
> place in the remaining free space?" I don't think this is knowable
> for a dynamic file system like ZFS where metadata is dynamically
> allocated.
Yes. And I acknowle
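For the record, the closest thing to an answer I know of is just to ask ZFS
for its "available" figure and treat it as an upper bound, since metadata,
copies and recordsize padding eat into it unpredictably (dataset name is a
placeholder):

  # parseable byte count of free space in the dataset
  zfs get -Hp -o value available tank/cache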
On Fri, Sep 18, 2009 at 01:51:52PM -0400, Steffen Weiberle wrote:
> I am trying to compile some deployment scenarios of ZFS.
>
> # of systems
One, our e-mail server for the entire campus.
> amount of storage
2 TB that's 58% used.
> application profile(s)
This is our Cyrus IMAP spool. In addi
On Sep 21, 2009, at 7:11 AM, Andrew Deason wrote:
On Sun, 20 Sep 2009 20:31:57 -0400
Richard Elling wrote:
If you are just building a cache, why not just make a file system and
put a reservation on it? Turn off auto snapshots and set other
features as per best practices for your workload? In
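To make that suggestion concrete, I assume something along these lines is
meant (dataset name, size and properties invented for illustration):

  # dedicated cache file system with a guaranteed floor and a ceiling
  zfs create -o reservation=10g -o quota=10g tank/cache
  # keep the Time Slider auto snapshots away from it
  zfs set com.sun:auto-snapshot=false tank/cache
  zfs set atime=off tank/cache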
On 21 September, 2009 - Chris Banal sent me these 4,4K bytes:
> It appears as though zfs reports the size of a directory to be one byte per
> file. Traditional file systems such as ufs or ext3 report the actual size of
> the data needed to store the directory.
Or rather, "the size needed at some
It appears as though zfs reports the size of a directory to be one byte per
file. Traditional file systems such as ufs or ext3 report the actual size of
the data needed to store the directory.
This causes some trouble with the default behavior of some NFS clients
(Linux), which decide to use a read
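A quick way to see the behaviour being described, if anyone wants to check
their own build (paths invented):

  # directory "size" on ZFS tracks the entry count, not bytes on disk
  mkdir /tank/fs/testdir
  touch /tank/fs/testdir/a /tank/fs/testdir/b /tank/fs/testdir/c
  ls -ld /tank/fs/testdir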
Hi,
I've got some strange problems with my server today.
When I boot b123, it stops at "reading zfs config". I've tried several
times to get past this point, but it seems to freeze there.
Then I tried single user mode, from GRUB, and it seems to get me a
little further.
After a few minutes however,
Tristan Ball wrote:
Hi Everyone,
I have a couple of systems running opensolaris b118, one of which sends
hourly snapshots to the other. This has been working well; however, as of
today, the receiving zfs process has started running extremely slowly,
and is running at 100% CPU on one core, comp
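If it helps to narrow it down, a generic way to see where that core is
spinning is to sample kernel stacks for a while during the receive
(needs DTrace privileges; 30 seconds and top 20 stacks are arbitrary):

  dtrace -n 'profile-997 { @[stack()] = count(); }
             tick-30s { trunc(@, 20); exit(0); }'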
Thinking more about this I'm confused about what you are seeing.
The function dsl_pool_zil_clean() will serialise separate calls to
zil_clean() within a pool. I don't expect you have >1037 pools on your laptop!
So I don't know what's going on. What is the typical call stack for those
zil_clean() t
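One way to answer that question, assuming fbt can see the function on your
build, is to aggregate the stacks that lead into zil_clean():

  # Ctrl-C to print the counts
  dtrace -n 'fbt::zil_clean:entry { @[stack()] = count(); }'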
On 09/21/09 12:04 PM, David Pacheco wrote:
This is
6839260 want zfs send with properties
Could it be amended to ask for an option on incrementals
/not/ to send properties? IMO non-recursive streams should
be able to be consistent. I wonder what the thinking was
to not send properties on a new
Hi Richard.
I think I'll update all our servers to the same version of zfs...
That will hopefully make sure that this doesn't happen again :-)
Darren and Richard: Thank you very much for your help !
Sascha
On Sep 21, 2009, at 4:34 AM, David Magda wrote:
On Sep 21, 2009, at 06:52, Chris Ridd wrote:
Does zpool destroy prompt "are you sure" in any way? Some admin
tools do (beadm destroy for example) but there's not a lot of
consistency.
No it doesn't, which I always found strange.
Personally
Frank Middleton wrote:
The problem with the regular stream is that most of the file
system properties (such as mountpoint) are not copied as they
are with a recursive stream. This may seem an advantage to some,
(e.g., if the remote mountpoint is already in use, the mountpoint
seems to default to
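For reference, the distinction being discussed looks like this in practice
(pool, dataset and host names are placeholders; -u keeps the received file
systems from being mounted straight away):

  # recursive stream: snapshots and properties such as mountpoint travel
  zfs send -R tank/home@snap | ssh backuphost zfs receive -u -d backup
  # plain stream: data only, properties are left to the receiving side
  zfs send tank/home@snap | ssh backuphost zfs receive -u backup/home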
On Sep 21, 2009, at 8:59 AM, Sascha wrote:
Hi Darren,
sorry that it took so long before I could answer.
The good thing:
I found out what went wrong.
What I did:
After resizing a disk on the storage, Solaris recognizes it
immediately.
Every time you resize a disk, the EVA storage updates the
Hi Darren,
sorry that it took so long before I could answer.
The good thing:
I found out what went wrong.
What I did:
After resizing a disk on the storage, Solaris recognizes it immediately.
Every time you resize a disk, the EVA storage updates the description, which
contains the size. So typing
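For anyone who hits the same thing: on builds new enough to have the
autoexpand support, the usual way to pick up a grown LUN is supposed to be
something like this (pool and device names are placeholders):

  zpool set autoexpand=on tank
  # or expand a single vdev explicitly after the LUN has grown
  zpool online -e tank c4t0d0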
Nils,
A zil_clean() is started for each dataset after every txg.
this includes snapshots (which is perhaps a bit inefficient).
Still, zil_clean() is fairly lightweight if there's nothing
to do (grab an uncontended lock; find nothing on a list;
drop the lock & exit).
Neil.
On 09/21/09 08:08, Ni
On Sun, 20 Sep 2009 20:31:57 -0400
Richard Elling wrote:
> If you are just building a cache, why not just make a file system and
> put a reservation on it? Turn off auto snapshots and set other
> features as per best practices for your workload? In other words,
> treat it like we
> treat dump spa
Hi All,
out of curiosity: Can anyone come up with a good idea about why my snv_111
laptop computer should run more than 1000 zil_clean threads?
ff0009a9dc60 fbc2c0300 tq:zil_clean
ff0009aa3c60 fbc2c0300 tq:zil_clean
ff0009aa9c60 f
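In case anyone wants to reproduce the count on their own machine, something
like this should do it (run as root):

  echo ::threadlist | mdb -k | grep -c zil_clean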
On Mon, Sep 21, 2009 at 13:34, David Magda wrote:
> On Sep 21, 2009, at 06:52, Chris Ridd wrote:
>
>> Does zpool destroy prompt "are you sure" in any way? Some admin tools do
>> (beadm destroy for example) but there's not a lot of consistency.
>
> No it doesn't, which I always found strange.
>
> P
>
>The disk has since been replaced, so now:
>k...@localhost:~$ pfexec zpool import
> pool: chronicle
>id: 11592382930413748377
> state: DEGRADED
>status: One or more devices contains corrupted data.
>action: The pool can be imported despite missing or damaged devices. The
>fault tol
On Sep 21, 2009, at 06:52, Chris Ridd wrote:
Does zpool destroy prompt "are you sure" in any way? Some admin
tools do (beadm destroy for example) but there's not a lot of
consistency.
No it doesn't, which I always found strange.
Personally I always thought you should be queried for a "zfs
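In the meantime the only guard I can think of is wrapping the command
yourself; purely illustrative, not an official feature:

  # small shell wrapper that asks before handing off to zpool destroy
  zpool_destroy() {
      printf 'Really destroy pool %s? (yes/no) ' "$1"
      read ans && [ "$ans" = "yes" ] && pfexec zpool destroy "$1"
  }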
Hi all, I have a RAID-Z2 setup with 6x 500GB SATA disks. I exported the
array to use it under a different system, but during or after the export one
of the disks failed:
k...@localhost:~$ pfexec zpool import
pool: chronicle
id: 11592382930413748377
state: DEGRADED
status: One or more devices ar
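Assuming the pool still imports in its DEGRADED state, the generic recovery
path would presumably be to import it by the numeric id shown above and then
replace the failed disk (both device names below are placeholders):

  pfexec zpool import 11592382930413748377
  pfexec zpool status -v chronicle
  pfexec zpool replace chronicle c7t5d0 c7t6d0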
On 20 Sep 2009, at 19:46, dick hoogendijk wrote:
On Sun, 2009-09-20 at 11:41 -0700, vattini giacomo wrote:
Hi there, I'm in a bad situation. Under Ubuntu I was trying to import
a Solaris zpool that is in /dev/sda1, while Ubuntu is in /dev/
sda5; not being able to mount the Solaris pool I dec
Is it possible to migrate data from iscsitgt to a COMSTAR iSCSI target? I guess
COMSTAR wants metadata at the beginning of the volume, and this makes things difficult?
Yours
Markus Kovero
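The COMSTAR side of it would presumably be something along these lines; the
open question is exactly whether the data written by iscsitgt lines up with
where the sbd layer wants its metadata (zvol path is a placeholder, and the
GUID comes from the create-lu output):

  pfexec sbdadm create-lu /dev/zvol/rdsk/tank/iscsivol
  pfexec stmfadm add-view <lu-guid>
  pfexec itadm create-target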