On 2/19/13 6:32 AM, Jim Klimov wrote:
On 2013-02-19 14:24, Konstantin Kuklin wrote:
zfs set canmount=off zroot/var/crash
I can't do this, because zfs list is empty
I'd argue that in your case it might be desirable to evacuate data and
reinstall the OS - just to be certain that ZFS on-disk struc
On Apr 28, 2011, at 5:04 PM, Stephan Budach wrote:
> On 28.04.11 11:51, Markus Kovero wrote:
>>> failed: space_map_load(sm, zfs_metaslab_ops, SM_FREE, smo,
>>> spa->spa_meta_objset) == 0, file ../zdb.c, line 571, function dump_metaslab
>>> Is this something I should worry about?
>>> uname -a
>>
On Apr 18, 2011, at 11:22 AM, jeff.liu wrote:
> Hello List,
>
> I am trying to fetch the data/hole info of a sparse file through the
> lseek(SEEK_HOLE/SEEK_DATA)
> stuff, the result of fpathconf(..., _PC_MIN_HOLE_SIZE) is ok, so I think this
> interface is supported
> on my testing ZFS, but SE
On Apr 8, 2011, at 2:38 PM, Karl Wagner wrote:
> One of them was simply an alternative way to do a "live CD" environment. As
> ZFS already does COW etc, it would avoid all the hassle you get in e.g.
> linux. You could have a ZFS vdev on the CD, then use a RAM disk as a second
> vdev.
IIRC this id
On Nov 24, 2010, at 7:52 PM, Karel Gardas wrote:
> Hello,
>
> during my attempts to update my workstation OS to the latest Solaris 11
> Express 2010.11 I've come to the point where the machine no longer booted.
> That was just after the last reboot of Sol11Express when everything was
> upd
On Nov 13, 2010, at 7:33 AM, Edward Ned Harvey wrote:
> Log devices are generally write-only. They are only read during boot, after
> an ungraceful crash. It is extremely difficult to get a significant number
> of GB used on the log device, because they are flushed out to primary storage
> s
On Nov 12, 2010, at 5:21 PM, Alexander Skwar wrote:
> Hm. Why are there no errors shown for the logs devices?
You need to crash your machine while log devices are in use, then you'll see
some reads on the next reboot. "In use" here means that the system is actively
writing to log devices at the tim
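A rough way to watch for this (the pool name below is only an example, not from this thread): run an interval iostat right after the post-crash boot and look for read activity against the log vdev, which indicates ZIL replay.

# zpool iostat -v tank 5    # per-vdev statistics every 5 seconds; reads on the log device during boot correspond to ZIL replay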
On Oct 8, 2010, at 10:25 AM, James C. McPherson wrote:
> On 8/10/10 03:28 PM, Anand Bhakthavatsala wrote:
> ...
>> --
>> From: James C. McPherson
>> To: Ramesh Babu
>>
>> On 7/10/10 03:46 PM, Ramesh Babu wrote:
>> > I
On Sep 30, 2010, at 11:00 PM, Ben Miller wrote:
> On 09/22/10 04:27 PM, Ben Miller wrote:
>> On 09/21/10 09:16 AM, Ben Miller wrote:
>
>>> I had tried a clear a few times with no luck. I just did a detach and that
>>> did remove the old disk and has now triggered another resilver which
>>> hopef
On Sep 27, 2010, at 8:30 PM, Scott Meilicke wrote:
> I am running nexenta CE 3.0.3.
>
> I have a file system that at some point in the last week went from a
> directory per 'ls -l' to a special character device. This results in not
> being able to get into the file system. Here is my file sy
On Sep 23, 2010, at 1:11 AM, Stephan Ferraro wrote:
>>
>> He had problems with ZFS. It turned out to be faulty
>> RAM. ZFS is so sensitive it detects and reports
>> problems to you. No other filesystem does that, so
>> you think ZFS is problematic and switch. But the
>> other filesystems is slowl
ou the IP and credentials to have ssh access.
> Please notify me when you don't need it anymore
>
> Valerio Piancastelli
> +39 348 8072760
> piancaste...@iclos.com
>
> ----- Original Message -----
> From: "Victor Latushkin"
> To: "Valerio Piancastelli"
On Sep 20, 2010, at 7:23 PM, Valerio Piancastelli wrote:
> Unfortunately not.
>
> When i do
>
> # /usr/bin/ls -lv /sas/mail-cts
> brwxrwxrwx 2 777 root 0, 0 Oct 18 2009 /volumes/store/nfs/ICLOS/prod/mail-cts
>
> it seems to be a block device:
Yes, it looks like we have a bad mode field value in the z
On Sep 19, 2010, at 12:08 AM, Stephan Ferraro wrote:
> Is there a way to fsck the spacemap?
> Does scrub help for this?
No, because the issues that you see are internal inconsistencies of unclear
nature.
Though as the actual issue varies from one instance to another, this is likely some
random corru
On Sep 19, 2010, at 12:11 AM, Stephan Ferraro wrote:
> This is new for me:
>
> $ zpool status
> pool: rpool
> state: ONLINE
> status: One or more devices has experienced an unrecoverable error. An
> attempt was made to correct the error. Applications are unaffected.
> action: Determine if t
On Sep 19, 2010, at 9:06 PM, Stephan Ferraro wrote:
> On 19.09.2010 at 18:59, Victor Latushkin wrote:
>
>> On Sep 19, 2010, at 12:08 AM, Stephan Ferraro wrote:
>>
>>> Is there a way to fsck the spacemap?
>>> Does scrub help for this?
>>
>&
On Sep 18, 2010, at 11:37 AM, Stephan Ferraro wrote:
> I'm really angry against ZFS:
Emotions rarely help to get to the root cause...
> My server no more reboots because the ZFS spacemap is again corrupt.
> I just replaced the whole spacemap by recreating a new zpool from scratch and
> copying
ems all data blocks are there.
Any ideas on how to recover from this situation?
Valerio Piancastelli
piancaste...@iclos.com
On Jul 9, 2010, at 4:27 AM, George wrote:
>> I think it is quite likely to be possible to get
>> readonly access to your data, but this requires
>> modified ZFS binaries. What is your pool version?
>> What build do you have installed on your system disk
>> or available as LiveCD?
For the record
hing back. In
future it should be easier as ZFS readonly import support is now integrated
into source code thanks to George Wilson's efforts.
regards
victor
>
> Thanks,
> Dmitry
>
>
> -Original Message-
> From: zfs-discuss-boun...@opensolaris.org
> [mailt
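For the archives, once a build with that readonly import support is installed, the recovery attempt can look roughly like this (the pool name and altroot below are placeholders, not taken from this thread):

# zpool import -o readonly=on -R /a tank    # import read-only under an altroot so nothing on the damaged pool gets modified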
On Aug 16, 2010, at 12:29 PM, Marc Emmerson wrote:
> Hi Victor,
> I just woke up and checked my server and the delete operation has completed,
> however I ran your command anyway and here is the output:
If all is well, then the requested information is no longer relevant ;-)
victor
>
> m...@serv
On Aug 15, 2010, at 11:30 PM, Marc Emmerson wrote:
> Hi all,
> I have a 10TB array (zpool = 2x 5 disk raidz1), I had dedup enabled on a
> couple of filesystems which I decided to delete last week, the first
> contained about 6GB of data and was deleted in about 30 minutes, the second
> (about
On Aug 4, 2010, at 12:23 AM, Darren Taylor wrote:
> Hi George,
>
> I think you are right. The log device looks to have suffered a complete loss;
> there is no data on the disk at all. The log device was an "acard" RAM drive
> (with battery backup), but somehow it has faulted, clearing all data.
On Jul 7, 2010, at 3:27 AM, Richard Elling wrote:
>
> On Jul 6, 2010, at 10:02 AM, Sam Fourman Jr. wrote:
>
>> Hello list,
>>
>> I posted this a few days ago on opensolaris-discuss@ list
>> I am posting here, because there may be too much noise on other lists
>>
>> I have been without this zfs
On Jul 8, 2010, at 11:15 AM, R. Eulenberg wrote:
>
> pstack 'pgrep zdb'/1
>
> and system answers:
>
> pstack: cannot examine pgrep zdb/1: no such process or core file
use ` instead of ' in the above command.
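That is, with backquotes doing command substitution (assuming a single zdb process is running):

# pstack `pgrep zdb`/1    # print the stack of LWP 1 of the running zdb process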
On Jun 28, 2010, at 11:27 PM, George wrote:
> Again this core dumps when I try to do "zpool clear storage2"
>
> Does anyone have any suggestions what would be the best course of action now?
Do you have any crashdumps saved? The first one is the most interesting one...
On Jul 4, 2010, at 1:33 AM, R. Eulenberg wrote:
> R. Eulenberg web.de> writes:
>
>>
op> I was setting up a new system (osol 2009.06
>>> and updating to
op> the latest version of osol/dev - snv_134 -
>>> with
op> deduplication) and then I tried to import my
>>> backup zpoo
On Jul 3, 2010, at 1:20 PM, George wrote:
>> Because of that I'm thinking that I should try
>> to change the hostid when booted from the CD to be
>> the same as the previously installed system to see if
>> that helps - unless that's likely to confuse it at
>> all...?
>
> I've now tried changing
On Jul 4, 2010, at 4:58 AM, Andrew Jones wrote:
> Victor,
>
> The zpool import succeeded on the next attempt following the crash that I
> reported to you by private e-mail!
From the threadlist it looked like the system was pretty low on memory, with stacks
of userland stuff swapped out, hence s
On Jul 6, 2010, at 6:30 PM, Brian Kolaci wrote:
> Well, I see no takers or even a hint...
>
> I've been playing with zdb to try to examine the pool, but I get:
>
> # zdb -b pool4_green
> zdb: can't open pool4_green: Bad exchange descriptor
>
> # zdb -d pool4_green
> zdb: can't open pool4_green
On Jul 1, 2010, at 10:28 AM, Andrew Jones wrote:
> Victor,
>
> I've reproduced the crash and have vmdump.0 and dump device files. How do I
> query the stack on crash for your analysis? What other analysis should I
> provide?
Output of 'echo "::threadlist -v" | mdb 0' can be a good start in th
On Jun 30, 2010, at 10:48 AM, George wrote:
>> I suggest you to try running 'zdb -bcsv storage2' and
>> show the result.
>
> r...@crypt:/tmp# zdb -bcsv storage2
> zdb: can't open storage2: No such device or address
>
> then I tried
>
> r...@crypt:/tmp# zdb -ebcsv storage2
> zdb: can't open sto
On Jun 29, 2010, at 1:30 AM, George wrote:
> I've attached the output of those commands. The machine is a v20z if that
> makes any difference.
The stack trace is similar to one from a bug that I do not recall right now, and it
indicates that there's likely a corruption in ZFS metadata.
I suggest you to
On Jun 29, 2010, at 8:30 PM, Andrew Jones wrote:
> Victor,
>
> The 'zpool import -f -F tank' failed at some point last night. The box was
> completely hung this morning; no core dump, no ability to SSH into the box to
> diagnose the problem. I had no choice but to reset, as I had no diagnostic
On Jun 28, 2010, at 11:27 PM, George wrote:
> I've tried removing the spare and putting back the faulty drive to give:
>
> pool: storage2
> state: FAULTED
> status: An intent log record could not be read.
>        Waiting for administrator intervention to fix the faulted pool.
> action: Either r
On Jun 28, 2010, at 9:32 PM, Andrew Jones wrote:
> Update: have given up on the zdb write mode repair effort, as least for now.
> Hoping for any guidance / direction anyone's willing to offer...
>
> Re-running 'zpool import -F -f tank' with some stack trace debug, as
> suggested in similar thr
On 28.06.10 16:16, Gabriele Bulfon wrote:
Yes...they're still running...but being aware that a power failure causing an
unexpected poweroff may make the pool unreadable is a pain
Pool integrity is not affected by this issue.
On 05.06.10 00:10, Ray Van Dolson wrote:
On Fri, Jun 04, 2010 at 01:03:32PM -0700, Brandon High wrote:
On Fri, Jun 4, 2010 at 12:37 PM, Ray Van Dolson wrote:
Makes sense. So, as someone else suggested, decreasing my block size
may improve the deduplication ratio.
It might. It might make your
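A rough way to experiment with that (the dataset and record size below are only examples; recordsize affects newly written data only, and the simulated DDT from zdb is an estimate rather than a guarantee):

# zfs set recordsize=8K tank/data    # smaller records can dedup at finer granularity, at the cost of a larger DDT
# zdb -S tank                        # simulate dedup across the pool and print an estimated dedup ratio
# zpool get dedupratio tank          # actual ratio for data already written with dedup=on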
On Jun 4, 2010, at 10:18 PM, Miles Nordin wrote:
>> "sl" == Sigbjørn Lie writes:
>
>sl> Excellent! I wish I would have known about these features when
>sl> I was attempting to recover my pool using 2009.06/snv111.
>
> the OP tried the -F feature. It doesn't work after you've lost
On Jun 4, 2010, at 5:01 PM, Sigbjørn Lie wrote:
>
> R. Eulenberg wrote:
>> Sorry for reviving this old thread.
>>
>> I even have this problem on my (productive) backup server. I lost my
>> system-hdd and my separate ZIL-device while the system crashed, and now I'm in
>> trouble. The old system
On Jun 3, 2010, at 3:16 AM, Erik Trimble wrote:
> Expanding a RAIDZ (which, really, is the only thing that can't do right now,
> w/r/t adding disks) requires the Block Pointer (BP) Rewrite functionality
> before it can get implemented.
Strictly speaking BP rewrite is not required to expand a RAI
On May 27, 2010, at 12:37 PM, Per Jorgensen wrote:
> I get the following output when I run a zpool status, but I am a little
> confused about why c9t8d0 is more "left-aligned" than the rest of the disks in the
> pool. What does it mean?
It means that it is another top-level vdev in your pool.
Ba
On May 17, 2010, at 5:29 PM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Geoff Nordli
>>
>> I was messing around with a ramdisk on a pool and I forgot to remove it
>> before I shut down the server. Now I am
On May 4, 2010, at 2:02 PM, Robert Milkowski wrote:
> On 16/02/2010 21:54, Jeff Bonwick wrote:
>>> People used fastfs for years in specific environments (hopefully
>>> understanding the risks), and disabling the ZIL is safer than fastfs.
>>> Seems like it would be a useful ZFS dataset parameter.
On May 2, 2010, at 8:47 AM, Steve Staples wrote:
> Hi there!
>
> I am new to the list, and to OpenSolaris, as well as ZFS.
>
> I am creating a zpool/zfs to use on my NAS server, and basically I want some
> redundancy for my files/media. What I am looking to do, is get a bunch of
> 2TB drives,
On Apr 29, 2010, at 2:20 AM, Freddie Cash wrote:
> On Wed, Apr 28, 2010 at 2:48 PM, Victor Latushkin
> wrote:
>
> 2. Run 'zdb -ddd storage' and provide section titles Dirty Time Logs
>
> See attached.
So you really do have enough redundancy to be able to handl
On Apr 29, 2010, at 3:03 AM, Edward Ned Harvey wrote:
>> From: Ragnar Sundblad [mailto:ra...@csc.kth.se]
>> Sent: Wednesday, April 28, 2010 3:49 PM
>>
>> What indicators do you have that ONTAP/WAFL has inode->name lookup
>> functionality?
>
> I don't have any such indicator, and if that's the
On Apr 28, 2010, at 8:00 PM, Freddie Cash wrote:
> Looks like I've hit this bug:
> http://bugs.opensolaris.org/view_bug.do?bug_id=6782540 However, none of the
> workarounds listed in that bug, or any of the related bugs, works. :(
>
> Going through the zfs-discuss and freebsd-fs archives, I s
On Apr 14, 2010, at 2:42 AM, Ragnar Sundblad wrote:
>
> On 12 apr 2010, at 19.10, Kyle McDonald wrote:
>
>> On 4/12/2010 9:10 AM, Willard Korfhage wrote:
>>> I upgraded to the latest firmware. When I rebooted the machine, the pool
>>> was back, with no errors. I was surprised.
>>>
>>> I will
On Apr 13, 2010, at 9:52 PM, Cyril Plisko wrote:
> Hello !
>
> I've had a laptop that crashed a number of times during last 24 hours
> with this stack:
>
> panic[cpu0]/thread=ff0007ab0c60:
> assertion failed: ddt_object_update(ddt, ntype, nclass, dde, tx) == 0,
> file: ../../common/fs/zfs/d
On Mar 29, 2010, at 1:57 AM, Jim wrote:
> Yes - but it does nothing. The drive remains FAULTED.
Try to detach one of the failed devices:
zpool detach tank 4407623704004485413
This problem is known and fixed in later builds:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6923585
AFAIK it is going to be included into b134a as well
Sent from my iPhone
On Mar 27, 2010, at 22:26, Russ Price wrote:
I have two 500 GB drives on my system that are attached to
On Mar 26, 2010, at 23:37, David Dyer-Bennet wrote:
On Fri, March 26, 2010 14:25, Malte Schirmacher wrote:
Bob Friesenhahn wrote:
Except that ZFS does not support RAID0. I don't know why you guys
persist with these absurd claims and continue to use wrong and
misleading terminology.
What
On Mar 25, 2010, at 22:10, Freddie Cash wrote:
On Thu, Mar 25, 2010 at 11:47 AM, Bruno Sousa
wrote:
What do you mean by "Using fewer than 4 disks in a raidz2 defeats
the purpose of raidz2, as you will always be in a degraded mode"?
Does it mean that having 2 vdevs with 3 disks it won't
Christian Hessmann wrote:
Victor,
Btw, they affect some files referenced by snapshots as
'zpool status -v' suggests:
>> tank/DVD:<0x9cd> tank/d...@2010025100:/Memento.m4v
>> tank/d...@2010025100:/Payback.m4v
>> tank/d...@2010025100:/TheManWhoWasntThere.m4v
In case of OpenSolari
Mark J Musante wrote:
It looks like you're running into a DTL issue. ZFS believes that ad16p2 has
some data on it that hasn't been copied off yet, and it's not considering the
fact that it's part of a raidz group and ad4p2.
There is a CR on this,
http://bugs.opensolaris.org/bugdatabase/view_bug
Ethan wrote:
On Thu, Feb 18, 2010 at 13:22, Victor Latushkin <victor.latush...@sun.com> wrote:
Ethan wrote:
So, current plan:
- export the pool.
- format c9t1d0 to have one slice being the entire disk.
- import. should be degraded, missing c9
Ethan wrote:
So, current plan:
- export the pool.
- format c9t1d0 to have one slice being the entire disk.
- import. should be degraded, missing c9t1d0p0.
- replace missing c9t1d0p0 with c9t1d0 (should this be c9t1d0s0? my
understanding is that zfs will treat the two about the same, since it
ad
Mark J Musante wrote:
On Thu, 11 Feb 2010, Cindy Swearingen wrote:
On 02/11/10 04:01, Marc Friesacher wrote:
fr...@vault:~# zpool import
pool: zedpool
id: 10232199590840258590
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:
zedpool
Lori Alt wrote:
Can you please send a complete list of the actions taken: The commands
you used to create the send stream, the commands used to receive the
stream. Also the output of `zfs list -t all` on both the sending and
receiving sides. If you were able to collect a core dump (it shou
John wrote:
I was able to solve it, but it actually worried me more than anything.
Basically, I had created the second pool using the mirror as a primary device.
So three disks but two full disk root mirrors.
Shouldn't zpool have detected an active pool and prevented this? The other LDOM
was
LevT wrote:
switched to another system, RAM 4Gb -> 16Gb
the importing process lasts about 18hrs now
the system is responsive
if developers want it I may provide ssh access
I have no critical data there, it is an acceptance test only
If it is still relevant, feel free to contact me offline to
Lutz Schumann wrote:
The on Disk Layout is shown here:
http://hub.opensolaris.org/bin/download/Community+Group+zfs/docs/ondiskformat0822.pdf
You can use the name-value pairs in the vdev label (I guess). Unfortunately I
do not know of any scripts.
you can try
zdb -l /dev/rdsk/cXtYdZs0
Jim Sloey wrote:
We have a production SunFire V240 that had a zfs mirror until this week. One of the drives (c1t3d0) in the mirror failed.
The system was shut down and the bad disk replaced without an export.
I don't know what happened next but by the time I got involved there was no
evidence tha
On Jan 7, 2010, at 23:47, Cindy Swearingen
wrote:
Hi Gunther,
Are these external USB disks?
You could determine what disk problems caused the errors by using the
fmdump -eV output.
From your output, the scrub is still in progress so maybe these errors
will clear up. Or, the objects no long
JD Trout wrote:
Hello,
I am running OpenSolaris 2009.06, and after a power outage OpenSolaris will no longer
boot past GRUB. Booting from the liveCD shows me the following:
r...@opensolaris:~# zpool import -f rpool
cannot import 'rpool': I/O error
r...@opensolaris:~# zpool import -f
pool: rpool
esponsive.
I'll post the script I'll be running here shortly after I write it.
Also, as far as using 'sync', I'm not sure what exactly I would do there.
On 16.12.09 16:03, Detlef Drewanz wrote:
I just realized that with b129 there is now a system process running for
each existing zpool, e.g. zpool-rpool with pid 5. What is the
purpose of this process?
Please read the materials for this ARC case:
System Duty Cycle Scheduling Class and ZFS IO
On Dec 15, 2009, at 5:50, Jack Kielsmeier wrote:
Thanks.
I've decided now to only post when:
1) I have my zfs pool back
or
2) I give up
I should note that there are periods of time where I can ping my
server (rarely), but most of the time not. I have not been able to
ssh into it, and the
On Dec 4, 2009, at 9:33, James Risner wrote:
It was created on AMD64 FreeBSD with 8.0RC2 (which was version 13 of
ZFS iirc.)
At some point I knocked it out (export) somehow, I don't remember
doing so intentionally. So I can't do commands like zpool replace
since there are no pools.
Ha
On Dec 5, 2009, at 0:52, Cindy Swearingen
wrote:
Hi Gary,
To answer your questions, the hardware read some data and ZFS detected
a problem with the checksums in this dataset and reported this
problem.
ZFS can do this regardless of ZFS redundancy.
I don't think a scrub will fix these perm
Peter Jeremy wrote:
I have a zpool on a JBOD SE3320 that I was using for data with Solaris
10 (the root/usr/var filesystems were all UFS). Unfortunately, we had
a bit of a mixup with SCSI cabling and I believe that we created a
SCSI target clash. The system was unloaded and nothing happened unt
On 13.11.09 16:09, Ross wrote:
Isn't dedupe in some ways the antithesis of setting copies > 1? We go to a
lot of trouble to create redundancy (n-way mirroring, raidz-n, copies=n,
etc) to make things as robust as possible and then we reduce redundancy
with dedupe and compression
But are we redu
roland wrote:
hello,
one of my colleagues has a problem with an application. The sysadmins
responsible for that server told him that it was the application's fault, but I
think they are wrong, and so does he.
From time to time, the app becomes unkillable, and when trying to list the contents of s
Jeremy Kitchen wrote:
On Nov 2, 2009, at 9:07 AM, Victor Latushkin wrote:
Enda O'Connor wrote:
it works at a pool wide level with the ability to exclude at a
dataset level, or the converse, if set to off at top level dataset
can then set lower level datasets to on, ie one can includ
Enda O'Connor wrote:
it works at a pool-wide level with the ability to exclude at a dataset
level, or the converse: if set to off at the top-level dataset, one can then set
lower-level datasets to on, i.e. one can include and exclude depending on
the datasets' contents.
so largefile will get deduped in t
On 02.11.09 18:38, Ross wrote:
Double WOHOO! Thanks Victor!
Thanks should go to Tim Haley, Jeff Bonwick and George Wilson ;-)
David Magda wrote:
Deduplication was committed last night by Mr. Bonwick:
Log message:
PSARC 2009/571 ZFS Deduplication Properties
6677093 zfs should have dedup capability
http://mail.opensolaris.org/pipermail/onnv-notify/2009-November/010683.html
And "PSARC 2009/479 zpool recovery suppor
Donald Murray, P.Eng. wrote:
Hi,
I've got an OpenSolaris 2009.06 box that will reliably panic whenever
I try to import one of my pools. What's the best practice for
recovering (before I resort to nuking the pool and restoring from
backup)?
Could you please post the panic stack backtrace?
There a
On 30.10.09 02:13, Scott Meilicke wrote:
Hi all,
I received my SSD, and wanted to test it out using fake zpools with files as
backing stores before attaching it to my production pool. However, when I
exported the test pool and imported, I get an error. Here is what I did:
I created a file to
On 26.10.09 14:25, Stathis Kamperis wrote:
Greetings to everyone.
I'm trying to retrieve the checksumming algorithm on a per-block basis
with zdb(1M). I know it's supposed to be run by Sun's support
engineers only & I take full responsibility for whatever damage I
cause to my machine by using it
Tommy McNeely wrote:
I have a system whose rpool has gone defunct. The rpool is made of a
single "disk" which is a raid5EE made of all 8 146G disks on the box.
The raid card is the Adaptec brand card. It was running nv_107, but it's
currently net booted to nv_121. I have already checked in the
On 21.10.09 23:23, Paul B. Henson wrote:
I've had a case open for a while (SR #66210171) regarding the inability to
import a pool whose log device failed while the pool was off line.
I was told this was CR #6343667,
CR 6343667 synopsis is "scrub/resilver has to start over when a snapshot is
t
On Oct 22, 2009, at 4:18, Ian Allison wrote:
Hi,
I've been looking at a raidz using opensolaris snv_111b and I've
come across something I don't quite understand. I have 5 disks
(fixed size disk images defined in virtualbox) in a raidz
configuration, with 1 disk marked as a spare. The dis
Stacy Maydew wrote:
I have an exported zpool that had several drives incur errors at the same time
and as a result became unusable. The pool was exported at the time the drives
had problems and now I can't find a way to either delete or import the pool.
I've tried relabeling the disks and usi
Marc Althoff wrote:
We have the same problem as of today. The pool was to be "renamed" with
zpool export; after an import it didn't come back online. An import -f results in a kernel
panic.
zpool status -v reports a degraded drive also.
I'll also try to supply some traces and logs.
Pl
On 11.10.09 12:59, Darren Taylor wrote:
I have searched the forums and google wide, but cannot find a fix for the issue
I'm currently experiencing. Long story short - I'm now at a point where I
cannot even import my zpool (zpool import -f tank) without causing a kernel
panic
I'm running OpenS
Erik Trimble wrote:
ZFS no longer has the issue where loss of a single device (even
intermittently) causes pool corruption. That's been fixed.
Erik, it does not help at all when you talk about some issue
being fixed but do not provide the corresponding CR number. It does not
allow intere
On 23.09.09 05:57, Shu Wu wrote:
Hi pals, I'm now looking into zfs source and have been puzzled about
128-bit. It's announced that ZFS is a 128-bit file system. But what
does 128-bit mean? Does that mean the addressing capability is 2^128?
But in the source, 'zp_size' (in 'struct znode_phys'),
On 05.10.09 23:07, Miles Nordin wrote:
"re" == Richard Elling writes:
re> As I said before, if the checksum matches, then the data is
re> checked for sequence number = previous + 1, the blk_birth ==
re> 0, and the size is correct. Since this data lives inside the
re> block, it
Victor Latushkin wrote:
Liam Slusser wrote:
Long story short, my cat jumped on my server at my house crashing two
drives at the same time. It was a 7-drive raidz (next time I'll do
raidz2).
Long story short - we've been able to get access to data in the pool.
This involved finding b
Osvald Ivarsson wrote:
On Fri, Oct 2, 2009 at 2:36 PM, Victor Latushkin
wrote:
Osvald Ivarsson wrote:
On Thu, Oct 1, 2009 at 7:40 PM, Victor Latushkin
wrote:
On 01.10.09 17:54, Osvald Ivarsson wrote:
I'm running OpenSolaris build svn_101b. I have 3 SATA disks connected to
my mother
Osvald Ivarsson wrote:
On Thu, Oct 1, 2009 at 7:40 PM, Victor Latushkin
wrote:
On 01.10.09 17:54, Osvald Ivarsson wrote:
I'm running OpenSolaris build svn_101b. I have 3 SATA disks connected to
my motherboard. The raid, a raidz, which is called "rescamp", has worked
good befor
On 01.10.09 17:54, Osvald Ivarsson wrote:
I'm running OpenSolaris build svn_101b. I have 3 SATA disks connected to my motherboard.
The raid, a raidz, which is called "rescamp", had worked well until a
power failure yesterday. I'm now unable to import the pool. I can't export the raid,
s
Carson Gaspar wrote:
zpool online media c7t0d0
j...@opensolaris:~# zpool online media c7t0d0
cannot open 'media': no such pool
Already tried that ;-)
D'oh! Of course, I should have been paying attention to the fact that the
pool wasn't imported.
My
On 30.09.09 14:30, Nicolas Szalay wrote:
On Wednesday, 30 September 2009 at 11:43 +0200, Nicolas Szalay wrote:
Hello all,
I have a critical ZFS problem, quick history
[snip]
A little addition: zdb -l /dev/rdsk/c7t0d0 sees the metadata
What does zdb -l /dev/rdsk/c7t0d0s0 show?
Victor
Isn't
On 29.09.09 03:58, Albert Chin wrote:
snv114# zfs get
used,reservation,volsize,refreservation,usedbydataset,usedbyrefreservation
tww/opt/vms/images/vios/mello-0.img
NAME PROPERTY VALUE SOURCE
tww/opt/vms/images/vios/mello-0.img used
On 28.09.09 22:01, Richard Elling wrote:
On Sep 28, 2009, at 10:31 AM, Victor Latushkin wrote:
Richard Elling wrote:
On Sep 28, 2009, at 3:42 PM, Albert Chin wrote:
On Mon, Sep 28, 2009 at 12:09:03PM -0500, Bob Friesenhahn wrote:
On Mon, 28 Sep 2009, Richard Elling wrote:
Scrub could be