Good call, Sašo. Sigh... I guess I'll wait to hear from HP on supported IT-mode
HBAs in their D2000s or other JBODs.
On Tue, Jan 8, 2013 at 11:40 AM, Sašo Kiselkov wrote:
> On 01/08/2013 04:27 PM, mark wrote:
> >> On Jul 2, 2012, at 7:57 PM, Richard Elling wrote:
> >>
>
her level compared to HP's pre-sales
department... not that they're bad, but in this realm you're the man :)
Thanks,
Mark
On 11/19/12 1:14 PM, Jim Klimov wrote:
On 2012-11-19 20:58, Mark Shellenbaum wrote:
There is probably nothing wrong with the snapshots. This is a bug in
ZFS diff. The ZPL parent pointer is only guaranteed to be correct for
directory objects. What you probably have is a file that was hard
On 11/16/12 17:15, Peter Jeremy wrote:
I have been tracking down a problem with "zfs diff" that reveals
itself variously as a hang (unkillable process), panic or error,
depending on the ZFS kernel version but seems to be caused by
corruption within the pool. I am using FreeBSD but the issue look
Otherwise does anyone have any other tips for monitoring usage? I
wonder how they have it all working in Fishworks gear as some of the
analytics demos show you being able to drill down through file
activity in real time.
Any advice or suggestions greatly appreciated.
Cheers,
Mark
: Permanent errors have been detected in the following files:
rpool/filemover:<0x1>
# zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
rpool            6.64T      0  29.9K  /rpool
rpool/filemover  6.64T   323G  6.32T  -
Thanks
Mark
,root=192.168.1.52:192.168.1.51:192.168.1.53
local
-----Original Message-----
From: Jim Klimov [mailto:jimkli...@cos.ru]
Sent: Wednesday, February 29, 2012 1:44 PM
To: Mark Wolek
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Problem with ESX NFS store on ZFS
2012-02-29 21:15, Mark
thing?
Thanks
Mark
You can see the original ARC case here:
http://arc.opensolaris.org/caselog/PSARC/2009/557/20091013_lori.alt
On 8 Dec 2011, at 16:41, Ian Collins wrote:
> On 12/ 9/11 12:39 AM, Darren J Moffat wrote:
>> On 12/07/11 20:48, Mertol Ozyoney wrote:
>>> Unfortunately the answer is no. Neither L1 nor L2
ying to get me to do. Do I have to do:
zfs create datastore/zones/zonemaster
before I can create a zone in that path? That's not in the documentation,
so I didn't want to do anything until someone can point out my error for
me. Thanks for your help!
--
Mark
n...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Neil Perrin
Sent: Friday, October 28, 2011 11:38 AM
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Log disk with all ssd pool?
On 10/28/11 00:54, Neil Perrin wrote:
On 10/28/11 00:04, Mark Wolek wrote:
Still kick
ing a drive to "sorry, no more writes allowed" scenarios.
Thanks
Mark
Why don't you see which byte differs, and how it does?
Maybe that would suggest the "failure mode". Is it the
same byte data in all affected files, for instance?
Mark
Sent from my iPhone
On Oct 22, 2011, at 2:08 PM, Robert Watzlavick wrote:
> On Oct 22, 2011, at 13:14,
't lie, but it's good to have it anyways, and is critical for
> personal systems such as laptops.
IIRC, fsck was seldom needed at my former site once UFS journalling became
available. Sweet update.
Mark
On 27 Sep 2011, at 18:29, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Tony MacDoodle
>>
>>
>> Now:
>> mirror-0 ONLINE 0 0 0
>> c1t2d0 ONLINE 0 0 0
>> c1t
On Tue, 6 Sep 2011, Tyler Benster wrote:
It seems quite likely that all of the data is intact, and that something
different is preventing me from accessing the pool. What can I do to
recover the pool? I have downloaded the Solaris 11 express livecd if
that would be of any use.
Try running zd
Hi Doug,
The "vms" pool was created in a non-redundant way, so there is no way to
get the data off of it unless you can put back the original c0t3d0 disk.
If you can still plug in the disk, you can always do a zpool replace on it
afterwards.
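A minimal sketch of that path (the replacement disk name c0t4d0 below is made
up; only the standard zpool commands are assumed):
# zpool import vms                  # after physically reattaching the original c0t3d0
# zpool status -v vms               # confirm the pool comes back ONLINE and the data is readable
# zpool replace vms c0t3d0 c0t4d0   # later, migrate onto a new disk
# zpool status -v vms               # wait for the resilver to finish before pulling c0t3d0 again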
If not, you'll need to restore from backup, pref
Shouldn't the choice of RAID type also be based on the I/O requirements?
Anyway, with RAID-10, even a second failed disk is not catastrophic, so long
as it is not the counterpart of the first failed disk, no matter the number
of disks. (With 2-way mirrors.)
But that's why we do backups, ri
minor quibble: compressratio uses a lowercase x for the description text
whereas the new prop uses an uppercase X
On 6 Jun 2011, at 21:10, Eric Schrock wrote:
> Webrev has been updated:
>
> http://dev1.illumos.org/~eschrock/cr/zfs-refratio/
>
> - Eric
>
> --
> Eric Schrock
> Delphix
>
> 2
Yeah, this is a known problem. The DTL on the toplevel shows an outage, and is
preventing the removal of the spare even though removing the spare won't make
the outage worse.
Unfortunately, for OpenSolaris anyway, there is no workaround.
You could try doing a full scrub, replacing any disks th
On 6/1/11 12:51 AM, lance wilson wrote:
The problem is that NFS clients that connect to my Solaris 11 Express server
are not inheriting the ACLs that are set for the share. They create files that
don't have any ACL assigned to them, just the normal Unix file permissions. Can
someone please pr
nect your external eSATA enclosures to these. You'll
get two eSATA ports without needing to use any PCI slots
and I believe that if you use the very bottom pci slot opening
you won't even block any of the actual pci slots from future use.
-Mark D.
On 05/ 6/11 12:04 PM, Rich Teer wrote:
Hi a
On Apr 8, 2011, at 11:19 PM, Ian Collins wrote:
> On 04/ 9/11 03:53 PM, Mark Sandrock wrote:
>> I'm not arguing. If it were up to me,
>> we'd still be selling those boxes.
>
> Maybe you could whisper in the right ear?
I wish. I'd have a long list if I could
On Apr 8, 2011, at 9:39 PM, Ian Collins wrote:
> On 04/ 9/11 03:20 AM, Mark Sandrock wrote:
>> On Apr 8, 2011, at 7:50 AM, Evaldas Auryla wrote:
>>> On 04/ 8/11 01:14 PM, Ian Collins wrote:
>>>>> You have built-in storage failover with an AR cluster;
>>
production
> line. Before, ZFS geeks had a choice: buy the 7000 series if you wanted quick "out
> of the box" storage with a nice GUI, or build your own storage with the x4540 line,
> which by the way has a brilliant engineering design. That choice is gone now.
Okay, so what is the g
On Apr 8, 2011, at 3:29 AM, Ian Collins wrote:
> On 04/ 8/11 08:08 PM, Mark Sandrock wrote:
>> On Apr 8, 2011, at 2:37 AM, Ian Collins wrote:
>>
>>> On 04/ 8/11 06:30 PM, Erik Trimble wrote:
>>>> On 4/7/2011 10:25 AM, Chris Banal wrote:
>>>>&
stuff.
>>
>> http://www.oracle.com/us/products/servers-storage/storage/unified-storage/index.html
>>
>>
> Which is not a lot of use to those of us who use X4540s for what they were
> intended: storage appliances.
Can you elaborate briefly on what exactly th
ely removing the affected drive in the
software, before physically replacing it?
I'm also not sure at exactly which juncture to do a 'zpool clear' and 'zpool
scrub'?
I'd appreciate any guidance - thanks in advance,
Mark
Mark Mahabir
Systems Manager, X-Ray an
rs.
If you plan to generate a lot of data, why use the root pool? You can put the
/home
and /proj filesystems (/export/...) on a separate pool, thus off-loading the
root pool.
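For illustration only (the pool name, devices, and dataset names below are
invented), that split might look like:
# zpool create datapool mirror c2t0d0 c2t1d0
# zfs create -o mountpoint=/export/home datapool/home
# zfs create -o mountpoint=/export/proj datapool/proj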
My two cents,
Mark
and raidz)
>
> That's all there is to it. To split, or not to split.
I'd just put /export/home on this second set of drives, as a striped mirror.
Same as I would have done in the "old days" under SDS. :-)
Mark
The fix for 6991788 would probably let the 40 MB drive work, but it would
depend on the asize of the pool.
On Fri, 4 Mar 2011, Cindy Swearingen wrote:
Hi Robert,
We integrated some fixes that allowed you to replace disks of equivalent
sizes, but 40 MB is probably beyond that window.
Yes, yo
arc_meta_limit = 3836 MB
arc_meta_max = 3951 MB
Is it normal for arc_meta_used == arc_meta_limit?
Does this explain the hang?
Thanks,
Mark
trouble.
This environment is completely available to mess with (no data at risk), so
I'm willing to try any option you guys would recommend.
Thanks!
--
Mark
On Feb 2, 2011, at 8:10 PM, Eric D. Mudama wrote:
> All other
> things being equal, the 15k and the 7200 drive, which share
> electronics, will have the same max transfer rate at the OD.
Is that true? So the only difference is in the access ti
IIRC, we would notify the user community that the FSes were going to hang
briefly.
Locking the FSes is the best way to quiesce them when users are worldwide, IMO.
Mark
On Jan 31, 2011, at 9:45 AM, Torrey McMahon wrote:
> A matter of seconds is a long time for a running Oracle
Why do you say fssnap has the same problem?
If it write locks the file system, it is only for a matter of seconds, as I
recall.
Years ago, I used it on a daily basis to do ufsdumps of large fs'es.
Mark
On Jan 30, 2011, at 5:41 PM, Torrey McMahon wrote:
> On 1/30/2011 5:26 PM, Joerg S
It well may be that different methods are optimal for different use cases.
Mechanical disk vs. SSD; mirrored vs. raidz[123]; sparse vs. populated; etc.
It would be interesting to read more in this area, if papers are available.
I'll have to take a look. ... Or does someone have pointers?
On Dec 20, 2010, at 2:05 PM, Erik Trimble wrote:
> On 12/20/2010 11:56 AM, Mark Sandrock wrote:
>> Erik,
>>
>> just a hypothetical what-if ...
>>
>> In the case of resilvering on a mirrored disk, why not take a snapshot, and
>> then
>>
s for busy filesystems.)
I'm supposing that a block-level snapshot is not doable -- or is it?
Mark
On Dec 20, 2010, at 1:27 PM, Erik Trimble wrote:
> On 12/20/2010 9:20 AM, Saxon, Will wrote:
>>> -Original Message-
>>> From: zfs-discuss-boun...@opensolar
stem type.
Mark
On Mon, 6 Dec 2010, Curtis Schiewek wrote:
Hi Mark,
I've tried running "zpool attach media ad24 ad12" (ad12 being the new
disk) and I get no response. I tried leaving the command run for an
extended period of time and nothing happens.
What version of solaris
On 5 Dec 2010, at 16:06, Roy Sigurd Karlsbakk wrote:
>> Hot spares are dedicated spares in the ZFS world. Until you replace
>> the actual bad drives, you will be running in a degraded state. The
>> idea is that spares are only used in an emergency. You are degraded
>> until your spares are
y clean up ad24 & ad18
for you.
On Fri, Dec 3, 2010 at 1:38 PM, Mark J Musante wrote:
On Fri, 3 Dec 2010, Curtis Schiewek wrote:
NAME        STATE     READ WRITE CKSUM
media       DEGRADED     0     0     0
  raidz1    ONLINE       0     0     0
On Fri, 3 Dec 2010, Curtis Schiewek wrote:
NAME        STATE     READ WRITE CKSUM
media       DEGRADED     0     0     0
  raidz1    ONLINE       0     0     0
    ad8     ONLINE       0     0     0
    ad10    ONLINE       0     0     0
maybe
get in touch with their support and see if you can use something
similar.
Cheers,
Mark
Edward,
I recently installed a 7410 cluster, which had added Fiber Channel HBAs.
I know the site also has Blade 6000s running VMware, but no idea if they
were planning to run fiber to those blades (or even had the option to do so).
But perhaps FC would be an option for you?
Mark
On Nov 12
On Nov 2, 2010, at 12:10 AM, Ian Collins wrote:
> On 11/ 2/10 08:33 AM, Mark Sandrock wrote:
>>
>>
>> I'm working with someone who replaced a failed 1TB drive (50% utilized),
>> on an X4540 running OS build 134, and I think something must be wrong.
>>
&
On Wed, 10 Nov 2010, Darren J Moffat wrote:
On 10/11/2010 11:18, sridhar surampudi wrote:
I was wondering how zpool split works or is implemented.
Or are you really asking about the implementation details ? If you want
to know how it is implemented then you need to read the source code.
Also
sue.
I'd search the archives, but they don't seem searchable. Or am I wrong about
that?
Thanks.
Mark (subscription pending)
You should only see a "HOLE" in your config if you removed a slog after having
added more stripes. Nothing to do with bad sectors.
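As a rough sketch of how such a hole arises (the pool and device names are
hypothetical):
# zpool add tank log c9t0d0             # add a slog
# zpool add tank mirror c9t1d0 c9t2d0   # then add another top-level vdev
# zpool remove tank c9t0d0              # removing the slog now leaves a gap in the vdev array
# zdb -C tank                           # the HOLE entry is the placeholder for that gap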
On 14 Oct 2010, at 06:27, Matt Keenan wrote:
> Hi,
>
> Can someone shed some light on what this ZPOOL_CONFIG is exactly.
> At a guess is it a bad sector of the dis
On Thu, 30 Sep 2010, Darren J Moffat wrote:
* It can be applied recursively down a ZFS hierarchy
True.
* It will unshare the filesystems first
Actually, because we use the zfs command to do the unmount, we end up
doing the unshare on the filesystem first.
See the opensolaris code for de
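A minimal illustration of the practical difference (the dataset name is
hypothetical):
# zfs unmount tank/export/home    # unshares the dataset first (if shared), then unmounts it
# umount /export/home             # does not do the unshare step for you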
On Thu, 30 Sep 2010, Linder, Doug wrote:
Michael Schuster [mailto:michael.schus...@oracle.com] wrote:
Mark, I think that wasn't the question, rather, "what's the difference
between 'zfs u[n]mount' and '/usr/bin/umount'?"
Yes, that was the question
On Thu, 30 Sep 2010, Linder, Doug wrote:
Is there any technical difference between using "zfs unmount" to unmount
a ZFS filesystem versus the standard unix "umount" command? I always
use "zfs unmount" but some of my colleagues still just use umount. Is
there any reason to use one over the ot
On Mon, 20 Sep 2010, Valerio Piancastelli wrote:
Yes, it is mounted
r...@disk-00:/volumes/store# zfs get mounted sas/mail-cts
NAME          PROPERTY  VALUE  SOURCE
sas/mail-cts  mounted   yes    -
OK - so the next question would be where the data is. I assume when you
say you "cannot access" t
On Mon, 20 Sep 2010, Valerio Piancastelli wrote:
After a crash i cannot access one of my datasets anymore.
ls -v cts
brwxrwxrwx+ 2 root root 0, 0 Oct 18 2009 cts
zfs list sas/mail-cts
NAME USED AVAIL REFER MOUNTPOINT
sas/mail-cts 149G 250G 149G /sas/mail-cts
a
Hi Steve,
Couple of options.
Create a new boot environment on the SSD, and this will copy the data over.
Or
zfs send -R rp...@backup | zfs recv altpool
I'd use the alt boot environment, rather than the send and receive.
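Very roughly, the two options (the BE and pool names below are placeholders):
# beadm create -p ssdpool newbe                    # new boot environment on the SSD pool
or
# zfs snapshot -r rpool@backup
# zfs send -R rpool@backup | zfs recv -F altpool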
Cheers,
-Mark.
On 19/09/2010, at 5:37 PM, Steve Arkley
m, the experts there seem to recommend NOT having TLER
enabled when using ZFS, as ZFS can be configured for its own timeouts, etc.;
the main reason to use TLER is when using those drives with hardware
RAID cards, which will kick a drive out of the array if it takes longer
than 10 seconds to respond.
Can anyone
Did you run installgrub before rebooting?
On Tue, 7 Sep 2010, Piotr Jasiukajtis wrote:
Hi,
After upgrade from snv_138 to snv_142 or snv_145 I'm unable to boot the system.
Here is what I get.
Any idea why it's not able to import rpool?
I saw this issue also on older builds on a different mac
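For reference, the usual invocation after an upgrade looks something like this
(the slice below is only an example; use the actual boot disk):
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0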
On Thu, 2 Sep 2010, Dominik Hoffmann wrote:
I think, I just destroyed the information on the old raidz members by doing
zpool create BackupRAID raidz /dev/disk0s2 /dev/disk1s2 /dev/disk2s2
It should have warned you that two of the disks were already formatted
with a zfs pool. Did it not do
What does 'zpool import' show? If that's empty, what about 'zpool import
-d /dev'?
On Wed, 1 Sep 2010, Benjamin Brumaire wrote:
your point has only a rhetorical meaning.
I'm not sure what you mean by that. I was asking specifically about your
situation. You want to run labelfix on /dev/rdsk/c0d1s4 - what happened
to that slice that requires a labelfix? Is there something
On Mon, 30 Aug 2010, Benjamin Brumaire wrote:
As this feature didn't make it into zfs it would be nice to have it
again.
Better to spend time fixing the problem that requires a 'labelfix' as a
workaround, surely. What's causing the need to fix vdev labels?
On Mon, 30 Aug 2010, Jeff Bacon wrote:
All of this would be ok... except THOSE ARE THE ONLY DEVICES THAT WERE
PART OF THE POOL. How can it be missing a device that didn't exist?
The device(s) in question are probably the logs you refer to here:
I can't obviously use b134 to import the pool
It does, it's on a pair of large APCs.
Right now we're using NFS for our ESX servers. The only iSCSI LUNs I have are
mounted inside a couple of Windows VMs. I'd have to migrate all our VMs to
iSCSI, which I'm willing to do if it would help and not cause other issues.
So far the 7210 Applia
Hey, thanks for the replies everyone.
Sadly most of those options will not work: since we are using a Sun Unified
Storage 7210, the only option is to buy the Sun SSDs for it, which is about
$15k USD for a pair. We also don't have the ability to shut off the ZIL or any
of the other options that o
On Fri, 27 Aug 2010, Rainer Orth wrote:
zpool status thinks rpool is on c1t0d0s3, while format (and the kernel)
correctly believe it's c11t0d0(s3) instead.
Any suggestions?
Try removing the symlinks or using 'devfsadm -C' as suggested here:
https://defect.opensolaris.org/bz/show_bug.cgi?id=14
We are using a 7210, 44 disks I believe, 11 stripes of RAIDz sets. When I
installed I selected the best bang for the buck on the speed vs capacity chart.
We run about 30 VMs on it, across 3 ESX 4 servers. Right now, it's all running
NFS, and it sucks... sooo slow.
iSCSI was no better.
I a
SSD's I have been testing to work properly.
On the other hand, I could just use the spare 7210 Appliance boot disk I have
lying about.
Mark.
You need to let the resilver complete before you can detach the spare. This is
a known problem, CR 6909724.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6909724
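In practice that means something like the following (the pool and spare device
names are invented):
# zpool status -v tank      # wait here until the resilver reports completion
# zpool detach tank c4t5d0  # detaching the spare should then succeed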
On 18 Aug 2010, at 14:02, Dr. Martin Mundschenk wrote:
> Hi!
>
> I had trouble with my raidz in the way, that some o
On 16 Aug 2010, at 22:30, Robert Hartzell wrote:
>
> cd /mnt ; ls
> bertha export var
> ls bertha
> boot etc
>
> where is the rest of the file systems and data?
By default, root filesystems are not mounted. Try doing a "zfs mount
bertha/ROOT/snv_134".
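A rough sequence, assuming the pool was imported with an altroot of /mnt:
# zpool import -R /mnt bertha
# zfs mount bertha/ROOT/snv_134
# ls /mnt    # the rest of the root filesystem should now be visible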
s encouraging, so I exported it and booted from the original 132 boot
drive.
Well, it came back, and at 1:00 AM I was able to get back to the original issue
I was chasing.
So, don't give up hope when all hope appears to be lost.
Mark.
Still an OpenSolaris fan keen to help the commu
ormat the
drive for you.
With UFS-based installs, JumpStart will format the disk per the
JumpStart profile, of course. It's not clear how to drive this with JET.
Any experiences?
Thanks,
mark
On Mon, 16 Aug 2010, Matthias Appel wrote:
Can anybody tell me how to get rid of c1t3d0 and heal my zpool?
Can you do a "zpool detach performance c1t3d0/o"? If that works, then
"zpool replace performance c1t3d0 c1t0d0" should replace the bad disk with
the new hot spare. Once the resilver c
I keep the pool version information up-to-date here:
http://blogs.sun.com/mmusante/entry/a_zfs_taxonomy
On Sun, 15 Aug 2010, Haudy Kazemi wrote:
Hello,
This is a consolidated list of ZFS pool and filesystem versions, along with
the builds and systems they are found in. It is based on multip
for what eventually becomes a commercial
Enterprise offering. OpenSolaris was the Solaris equivalent.
Losing the free bleeding-edge testing community will no doubt impact
Solaris code quality.
It is now even more likely Solaris will revert to it'
oking
into a USB boot/raidz root combination for 1U storage.
I ran Red Hat 9 with updated packages for quite a few years.
As long as the kernel is stable, and you can work through the hurdles, it can
still do the job.
Mark.
On Wed, 11 Aug 2010, seth keith wrote:
NAME        STATE     READ WRITE CKSUM
brick       DEGRADED     0     0     0
  raidz1    DEGRADED     0     0     0
    c13d0   ONLINE       0     0     0
    c4d0
On Wed, 11 Aug 2010, Seth Keith wrote:
When I do a zdb -l /dev/rdsk/ I get the same output for all my
drives in the pool, but I don't think it looks right:
# zdb -l /dev/rdsk/c4d0
What about /dev/rdsk/c4d0s0?
On Tue, 10 Aug 2010, seth keith wrote:
# zpool status
pool: brick
state: UNAVAIL
status: One or more devices could not be used because the label is missing
or invalid. There are insufficient replicas for the pool to continue
functioning.
action: Destroy and re-create the pool fro
On Tue, 10 Aug 2010, seth keith wrote:
first off I don't have the exact failure messages here, and I did not take good
notes of the failures, so I will do the best I can. Please try and give me
advice anyway.
I have a 7 drive raidz1 pool with 500G drives, and I wanted to replace them all
wit
You can also use the "zpool split" command and save yourself having to do the
zfs send|zfs recv step - all the data will be preserved.
"zpool split rpool preserve" does essentially everything up to and including
the "zpool export preserve" commands you listed in your original email. Just
don'
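Condensed, with the new pool name from the original mail and an arbitrary
altroot:
# zpool split rpool preserve     # the new 'preserve' pool is left exported
# zpool import -R /a preserve    # later, import it under an altroot to get at the data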
You can use 'zpool history -l syspool' to show the username of the person
who created the dataset. The history is in a ring buffer, so if too many
pool operations have happened since the dataset was created, the
information is lost.
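For example (pool name as in the question; the grep filter is only
illustrative):
# zpool history -l syspool | grep 'zfs create'
The long format appends an annotation along the lines of [user ... on host ...]
to each command, which is where the creator's username shows up.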
On Wed, 4 Aug 2010, Peter Taps wrote:
Folks,
In my app
I'm trying to understand how snapshots work in terms of how I can use them for
recovering and/or duplicating virtual machines, and how I should set up my file
system.
I want to use OpenSolaris as a storage platform with NFS/ZFS for some
development VMs; that is, the VMs use the OpenSolaris box
On Wed, 28 Jul 2010, Gary Gendel wrote:
Right now I have a machine with a mirrored boot setup. The SAS drives are 43Gs
and the root pool is getting full.
I do a backup of the pool nightly, so I feel confident that I don't need to
mirror the drive and can break the mirror and expand the pool
Hello, first time posting. I've been working with zfs on and off with limited
*nix experience for a year or so now, and have read a lot of things by a lot of
you I'm sure. Still tons I don't understand/know I'm sure.
We've been having awful IO latencies on our 7210 running about 40 VM's spread
On Thu, 15 Jul 2010, Tim Castle wrote:
j...@opensolaris:~# zpool import -d /dev
...shows nothing after 20 minutes
OK, then one other thing to try is to create a new directory, e.g. /mydev,
and create in it symbolic links to only those drives that are part of your
pool.
Based on your label
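Spelled out, with placeholder device names (substitute the disks that actually
belong to the pool):
# mkdir /mydev
# ln -s /dev/dsk/c4d0s0 /mydev/c4d0s0
# ln -s /dev/dsk/c5d0s0 /mydev/c5d0s0
# zpool import -d /mydev files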
What does 'zpool import -d /dev' show?
On Wed, 14 Jul 2010, Tim Castle wrote:
My raidz1 (ZFSv6) had a power failure, and a disk failure. Now:
j...@opensolaris:~# zpool import
pool: files
id: 3459234681059189202
state: UNAVAIL
status: One or mor
I had an interesting dilemma recently and I'm wondering if anyone here can
illuminate on why this happened.
I have a number of pools, including the root pool, in on-board disks on the
server. I also have one pool on a SAN disk, outside the system. Last night the
SAN crashed, and shortly thereaf
On Tue, 6 Jul 2010, Roy Sigurd Karlsbakk wrote:
what I'm saying is that there are several posts in here where the only
solution is to boot onto a live cd and then do an import, due to
metadata corruption. This should be doable from the installed system
Ah, I understand now.
A couple of thing
On Tue, 6 Jul 2010, Roy Sigurd Karlsbakk wrote:
Hi all
With several messages in here about troublesome zpools, would there be a
good reason to be able to fsck a pool? As in, check the whole thing
instead of having to boot into live CDs and whatnot?
You can do this with "zpool scrub". It vi
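That is, roughly (the pool name is invented):
# zpool scrub tank
# zpool status -v tank    # shows scrub progress and lists any files with unrecoverable errors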
On Fri, 2 Jul 2010, Julie LaMothe wrote:
Cindy - this discusses how to rename the rpool temporarily. Is there a
way to do it permanently and will it break anything? I have to rename a
root pool because of a typo. This is on a Solaris SPARC environment.
Please help!
The only difference
I'm new with ZFS, but I have had good success using it with raw physical disks.
One of my systems has access to an iSCSI storage target. The underlying
physical array is in a proprietary disk storage device from Promise. So the
question is, when building a OpenSolaris host to store its data on a
I'm guessing that the VirtualBox VM is ignoring write cache flushes. See this
for more info:
http://forums.virtualbox.org/viewtopic.php?f=8&t=13661
On 12 Jun, 2010, at 5.30, zfsnoob4 wrote:
> Thanks, that works. But it only when I do a proper export first.
>
> If I export the pool then I can
IHAC who has an X4500 (x86 box) with a ZFS root filesystem. They installed
patches today, the latest Solaris 10 x86 recommended patch cluster, and the
patching seemed to complete successfully. Then when they tried to reboot the
box, the machine would not boot. They get the following error:
N
Can you find the devices in /dev/rdsk? I see there is a path in /pseudo at
least, but the zpool import command only looks in /dev. One thing you can try
is doing this:
# mkdir /tmpdev
# ln -s /pseudo/vpat...@1:1 /tmpdev/vpath1a
And then see if 'zpool import -d /tmpdev' finds the pool.
On 2
On 28 May, 2010, at 17.21, Vadim Comanescu wrote:
> In a stripe zpool configuration (no redundancy) is a certain disk regarded as
> an individual vdev or do all the disks in the stripe represent a single vdev
> ? In a raidz configuration I'm aware that every single group of raidz disks is
> reg
On Mon, 24 May 2010, h wrote:
But... wait... that can't be.
I disconnected the 1TB drives and plugged in the 2TBs before doing the replace
command. No information could be written to the 1TBs at all since they were
physically offline.
Do the labels still exist? What does 'zdb -l /dev/rds
On Mon, 24 May 2010, h wrote:
I had 6 disks in a raidz1 pool that I replaced, going from 1TB drives to 2TB
drives. I have installed the older 1TB drives in another system and
would like to import the old pool to access some files I accidentally
deleted from the new pool.
Did you use the 'zpool
On Thu, 20 May 2010, Edward Ned Harvey wrote:
Also, since you've got "s0" on there, it means you've got some
partitions on that drive. You could manually wipe all that out via
format, but the above is pretty brainless and reliable.
The "s0" on the old disk is a bug in the way we're formattin
On Wed, 19 May 2010, John Andrunas wrote:
ff001f45e830 unix:die+dd ()
ff001f45e940 unix:trap+177b ()
ff001f45e950 unix:cmntrap+e6 ()
ff001f45ea50 zfs:ddt_phys_decref+c ()
ff001f45ea80 zfs:zio_ddt_free+55 ()
ff001f45eab0 zfs:zio_execute+8d ()
ff001f45eb50 genunix:taskq