> I suggest you try running 'zdb -bcsv storage2' and
> show the result.
r...@crypt:/tmp# zdb -bcsv storage2
zdb: can't open storage2: No such device or address
Then I tried:
r...@crypt:/tmp# zdb -ebcsv storage2
zdb: can't open storage2: File exists
George
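A side note on zdb here: the -e flag, already used above, is meant for pools that
are not currently imported, and -p lets it search a specific device directory in
case the default search path is the issue. A rough sketch, assuming the disks
live under /dev/dsk:
# zpool import
(with no arguments, this lists the pools visible for import and their numeric GUIDs)
# zdb -e -p /dev/dsk -bcsv storage2
(points zdb at the exported pool by name; the pool GUID can be used in place of
the name)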
>
> On Jun 29, 2010, at 8:30 PM, Andrew Jones wrote:
>
> > Victor,
> >
> > The 'zpool import -f -F tank' failed at some point
> last night. The box was completely hung this morning;
> no core dump, no ability to SSH into the box to
> diagnose the problem. I had no choice but to reset,
> as I had
On Jun 29, 2010, at 1:30 AM, George wrote:
> I've attached the output of those commands. The machine is a v20z if that
> makes any difference.
The stack trace is similar to one from a bug that I do not recall right now, and it
indicates that there is likely corruption in the ZFS metadata.
I suggest you try running 'zdb -bcsv storage2' and show the result.
On Jun 29, 2010, at 8:30 PM, Andrew Jones wrote:
> Victor,
>
> The 'zpool import -f -F tank' failed at some point last night. The box was
> completely hung this morning; no core dump, no ability to SSH into the box to
> diagnose the problem. I had no choice but to reset, as I had no diagnostic
Have you tried creating tank/bkp without the -s option? I believe I read
somewhere that the -s option can lead to poor performance on larger
volumes (which doesn't make sense to me). Also, are you using a ZIL/log
device?
Josh Simon
On 06/29/2010 09:33 AM, Effrem Norwood wrote:
Hi All,
I crea
I reviewed the zpool clear syntax (looking at my own docs) and had
forgotten that a one-device pool probably doesn't need the device
specified. For pools with many devices, you might want to clear the
errors on just a particular device.
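Concretely, with the pool and device names from this thread, the two forms below
should behave the same for a one-device pool; the device argument only matters
when you want to clear a single disk out of a larger pool:
# zpool clear external
# zpool clear external c0t0d0p0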
USB sticks for pools are problematic. It would be good
Hi Brian,
Because the pool is still online and the metadata is redundant, maybe
these errors were caused by a brief hiccup from the USB device's
physical connection. You might try:
# zpool clear external c0t0d0p0
Then, run a scrub:
# zpool scrub external
If the above fails, then please identi
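Once the scrub is running, its progress, and whether the errors clear afterwards,
can be checked with the same status command used elsewhere in this thread:
# zpool status -v external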
Hi
There was some mix-up while switching drives, plus an unexpected reboot, so I
suddenly have a drive in my pool that is only partly resilvered. zpool status shows
the pool is fine, but after a scrub, it shows the drive faulted. I've been told
on #opensolaris that making a new pool on the drive and t
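Before rebuilding anything, it may be worth seeing exactly what the pool reports
and whether clearing the fault and re-scrubbing brings the disk back; this is only
a sketch, with placeholder pool and disk names:
# zpool status -v mypool
# zpool clear mypool c1t2d0
# zpool scrub mypool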
I have a box of 6 new Sun J4200/J4400 SATA HDD caddies/mountings (250GB SATA
drives removed) that I am selling on eBay. Do a search on eBay for the listing
title
"Sun Storage J4200/J4400 Array-SATA Hard Drive Caddy x 6"
or follow this link
http://cgi.ebay.com/ws/eBayISAPI.dll?ViewItem&item=11055319086
evik wrote:
Reading this list for a while has made it clear that zfs send is not a
backup solution. It can be used for cloning the filesystem to a backup
array if you are consuming the stream with zfs receive, so you get
notified immediately about errors. Even one bitflip will render the
stream unusa
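In other words, something along these lines, with the stream consumed by zfs
receive on the far end rather than stored in a file (host and dataset names are
only examples):
# zfs snapshot tank/data@backup
# zfs send tank/data@backup | ssh backuphost zfs receive -F backuppool/data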
Hi All,
I created a zvol with the following options on an X4500 with 16GB of ram:
zfs create -s -b 64K -V 250T tank/bkp
I then enabled dedup and compression and exported it to Windows Server 2008 as
iSCSI via COMSTAR. There it was formatted with a 64K cluster size, which is the
NTFS default for
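For anyone following along, the settings that matter here can be read back from
the zvol with something like the following (the property list is just a
suggestion):
# zfs get volsize,volblocksize,refreservation,compression,dedup tank/bkp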
Victor,
The 'zpool import -f -F tank' failed at some point last night. The box was
completely hung this morning; no core dump, no ability to SSH into the box to
diagnose the problem. I had no choice but to reset, as I had no diagnostic
ability. I don't know if there would be anything in the log
Hi Brian,
You might try running a scrub on this pool.
Is this an external USB device?
Thanks,
Cindy
On 06/29/10 09:16, Brian Leonard wrote:
Hi,
I have a zpool which is currently reporting that the ":<0x13>" file
is corrupt:
bleon...@opensolaris:~$ pfexec zpool status -xv external
pool:
Hi,
I have a zpool which is currently reporting that the ":<0x13>" file
is corrupt:
bleon...@opensolaris:~$ pfexec zpool status -xv external
pool: external
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
a
> I'm not sure I didn't have dedup enabled. I might
> have.
> As it happens, the system rebooted and is now in
> single user mode.
> I'm trying another import. Most services are not
> running, which should free RAM.
>
> If it crashes again, I'll try the live CD while I see
> about more RAM.
Succ
Okay, at this point I would suspect that the spare is having a problem
too, or that there is some other hardware problem.
Do you have another disk that you can try as a replacement to c6t1d0?
If so, you can try to detach the spare like this:
# zpool detach tank c7t1d0
Then, physically replace c6t1d0. You review t
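After the physical swap, the usual follow-up would be along these lines, assuming
the new disk comes up under the same device name:
# zpool replace tank c6t1d0
# zpool status -v tank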
>
> if, for example, the network pipe is bigger than one unsplit stream
> of zfs send | zfs recv, then splitting it into multiple streams should
> optimize the network bandwidth, shouldn't it?
>
Well, I guess so. But I wonder what the bottleneck is here. If it is the
rate at which zfs
On 6/28/2010 10:30 PM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Tristram Scott
>>
>> If you would like to try it out, download the package from:
>> http://www.qu
You may wish to look at this thread:
http://opensolaris.org/jive/thread.jspa?threadID=128046&tstart=0 (last
post):
start quote from thread
Hi everybody,
after looking into the current ON source code (b134), in
/usr/src/lib/libstmf/common/store.c, I don't think that this bug
does st
On Tue, Jun 29, 2010 at 8:17 AM, Tristram Scott
wrote:
>>
>> It would be nice if I could pipe the zfs send stream to split and then
>> send those split streams over the network to a remote system. It would
>> help send it over to the remote system quicker. Can your tool do that?
>>
>>
Another related question -
I have a second enclosure with blank disks which I would like to use to take a
copy of the existing zpool as a precaution before attempting any fixes. The
disks in this enclosure are larger than those in the enclosure with the problem.
What would be the best way to do this?
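Roughly speaking, the answer depends on whether the pool can still be imported:
if it can, a new pool on the blank disks plus a recursive zfs send/receive of a
snapshot would do it; if it cannot, the copy has to happen below ZFS as a raw
disk-by-disk copy. A sketch of the latter, with placeholder device names:
# dd if=/dev/rdsk/c1t0d0s0 of=/dev/rdsk/c2t0d0s0 bs=1024k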
Ouch...!
Thanks for the heads-up, and is there any workaround for this?
Can I, for instance:
1. Create the iSCSI block on the primary server with the command: zfs
create -p -s -V 10G vol0/iscsi/LUN_10GB
2. Use sbdadm to make the LUN available with the command: sbdadm
create-lu /dev/z
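For reference, the usual pairing looks roughly like the following; the zvol device
path and the view step are assumptions, since the command above is cut off:
# zfs create -p -s -V 10G vol0/iscsi/LUN_10GB
# sbdadm create-lu /dev/zvol/rdsk/vol0/iscsi/LUN_10GB
# stmfadm add-view <GUID printed by sbdadm create-lu>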
>
> It would be nice if I could pipe the zfs send stream to split and then
> send those split streams over the network to a remote system. It would
> help send it over to the remote system quicker. Can your tool do that?
>
> something like this
>
>s | ->
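Without any special tool, the plain-Unix version of that idea would look roughly
like the sketch below (sizes and paths are made up); note that it only splits the
transfer, the send itself is still a single stream:
# zfs send tank/fs@snap | split -b 1000m - /spool/stream.
Copy the /spool/stream.* pieces over in parallel, then on the receiving side:
# cat /spool/stream.* | zfs receive backup/fs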
On Tue, 2010-06-29 at 08:58 +0200, Bruno Sousa wrote:
> Hmm...that easy? ;)
>
> Thanks for the tip, I will see if that works out.
>
> Bruno
Be aware of the Important Note in
http://wikis.sun.com/display/OpenSolarisInfo/Backing+Up+and+Restoring+a+COMSTAR+Configuration
regarding Backing Up and Re
Edward Ned Harvey wrote:
> Due to recent experiences, and discussion on this list, my colleague and
> I performed some tests:
>
> Using Solaris 10, fully upgraded. (zpool version 15 is the latest there, which
> does not have the log device removal that was introduced in zpool version 19.)
> In any way possible, you lose an un
Hmm...that easy? ;)
Thanks for the tip, I will see if that works out.
Bruno
On 29-6-2010 2:29, Mike Devlin wrote:
> I haven't tried it yet, but supposedly this will back up/restore the
> COMSTAR config:
>
> $ svccfg export -a stmf > comstar.bak.${DATE}
>
> If you ever need to restore the configur
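Presumably the restore side is the matching import, done with the stmf service
offline; that is a guess, and the Important Note on the COMSTAR backup/restore
wiki page mentioned in this thread should be checked before relying on it:
$ svcadm disable stmf
$ svccfg import comstar.bak.${DATE}
$ svcadm enable stmf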