Paul B. Henson wrote:
> I was playing with SXCE to get a feel for the soon to be released U6.
> Performance-wise, I'm hoping U6 will be better; hopefully some new code in
> SXCE was introduced that hasn't quite been optimized yet... Last date I
> heard was Nov 10; if I'm lucky I'll be able to start
I recently tried to import a b97 pool into a b98 upgrade of that OS,
and it failed because of some bug. So maybe try to eliminate that kind of
problem by making sure to use the version that you know worked in the past.
Maybe you already did this.
>
> Folks, I have a zpool with a
> rai
Eric Schrock wrote:
> These are the symptoms of a shrinking device in a RAID-Z pool. You can
> try to run the attached script during the import to see if this is the
> case. There's a bug filed on this, but I don't have it handy.
It's:
6753869 labeling/shrinking a disk in raid-z vdev makes pool un-importable
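One quick check for that condition (a sketch; the device name is only an example): compare the asize recorded in the vdev label with the size the device reports now. If the current device is smaller than the label's asize, it has shrunk under the pool.
# zdb -l /dev/rdsk/c0t1d0s0 | grep asize
# prtvtoc /dev/rdsk/c0t1d0s0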
These are the symptoms of a shrinking device in a RAID-Z pool. You can
try to run the attached script during the import to see if this is the
case. There's a bug filed on this, but I don't have it handy.
- Eric
On Sun, Oct 26, 2008 at 05:18:25PM -0700, Terry Heatlie wrote:
> Folks,
> I have a zpoo
Hi Matt
Unfortunately, I'm having problems uncompressing that zip file.
I tried with 7-Zip, and WinZip reports this:
skipping _1_20081027010354.cap: this file was compressed using an unknown
compression method.
Please visit www.winzip.com/wz54.htm for more information.
The compression m
Hi Tano
Please check out my post on the storage-forum for another idea
to try which may give further clues:
http://mail.opensolaris.org/pipermail/storage-discuss/2008-October/006458.html
Best Regards
Nigel Smith
Around 30 to 40% it really starts to slow down, but no disconnection or timeout
yet. The speed is unacceptable, so I will continue with the notion
that something is wrong in the TCP stack/iSCSI.
Following the snoop logs, it shows that the window size on the iSCSI end is 0,
and the o
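A rough way to confirm the zero-window condition straight from the wire (interface name, target address and the standard iSCSI port 3260 are assumptions here):
# snoop -d e1000g0 -o /tmp/iscsi.cap host 192.168.1.50 and port 3260
# snoop -i /tmp/iscsi.cap -V | grep 'Win='
Repeated segments showing Win=0 from the initiator side would support the idea that the TCP receive window, not the disks, is throttling the transfer.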
Hi Miles
I think you make some very good points in your comments.
It would be nice to get some positive feedback on these from Sun.
And my thought, also on (quickly) looking at that bug & ARC case, was:
doesn't this also need to be factored into the SATA framework?
I really miss not having 'smartct
I did some testing of a couple of x4500 servers to see how well they scaled
in terms of number of filesystems. Both were running snv_97 with the
following config:
# zpool create -f export mirror c0t0d0 c1t0d0 mirror c4t0d0 c6t0d0 mirror
c0t1d0 c1t1d0 mirror c4t1d0 c5t1d0 mirror c6t1d0 c7t1d0 mirr
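As a sketch of the kind of loop such a test might use to measure how filesystem count scales (the count and naming are made up):
i=1
while [ $i -le 1000 ]; do
    zfs create export/fs$i
    i=`expr $i + 1`
done
Timing batches of creates (for example with ptime) shows how the per-filesystem cost grows as the number of filesystems increases.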
Hi Kristof
Please could you post back to this forum the output from
# zdb -l /dev/rdsk/...
... for each of the storage devices in your pool,
while it is in a working condition on Server1.
(Maybe best as an attachment)
Then do the same again with the pool on Server2.
What is the reported 'status
Hi Terry
Please could you post back to this forum the output from
# zdb -l /dev/rdsk/...
... for each of the 5 drives in your raidz2.
(maybe best as an attachment)
Are you seeing labels with the error 'failed to unpack'?
What is the reported 'status' of your zpool?
(You have not provided a 'zpo
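A small sketch for collecting those label dumps into files you can attach (the device names are placeholders; substitute the five drives in the raidz2):
for d in c1t0d0s0 c1t1d0s0 c1t2d0s0 c1t3d0s0 c1t4d0s0; do
    zdb -l /dev/rdsk/$d > /tmp/label-$d.txt
done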
So I found some more information and have been at it diligently.
Checking my hardware BIOS, I see that Dell likes to share a lot of its IRQs with other
peripherals.
Back in the old days, when we were limited to just 15 IRQs, it was imperative
that certain critical hardware had its own IRQ. It may seem to
> "k" == kristof <[EMAIL PROTECTED]> writes:
k> iscsitadm create target /dev/rdsk/c5t1d0s0 disk1
k> zpool create -f box3 mirror c5t1d0s0
k> c8t600144F048FFCC00E081B33B9800d0
I wonder if ZFS is silently making an EFI label inside slice 0. I've
never used the Solaris iSCSI
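One hedged way to check that theory from the initiator side (device name taken from the commands quoted above):
# prtvtoc /dev/rdsk/c5t1d0s0
# zdb -l /dev/rdsk/c5t1d0s0 | grep whole_disk
prtvtoc shows whether the slice still carries the label you expect, and whole_disk=1 in the ZFS label would suggest ZFS treated the device as a whole disk and relabeled it with EFI.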
Hi Miles, thanks for your reply.
Sorry for my bad English; it's not my first language.
OK, I will try to explain again.
My goal is to have a zpool that can move between 2 nodes (via zpool
export/import).
So I start by creating 3 equal slices on Server A:
I create a 450 GB slice 0 on disk:
c5
Dick,
Well, not at the same time. :-)
If you are running a recent SXCE release and you have a mirrored ZFS
root pool with two disks, for example, you can boot off either disk,
as described in the ZFS Admin Guide, pages 81-85, here:
http://opensolaris.org/os/community/zfs/docs/
If you create a m
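On x86, the step that usually needs a reminder is making the second disk in the mirror bootable; a sketch (the disk name is an example, and SPARC uses installboot instead, as the guide describes):
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0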
I've found that SFU NFS is pretty poor in general. I set up Samba on the host
system. Let the client stay native & have the server adapt.
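For what it's worth, a minimal smb.conf share along those lines (share name and path are only examples):
[export]
    path = /export
    read only = no
    guest ok = no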
> "k" == kristof <[EMAIL PROTECTED]> writes:
k> I have created a zpool on server1 with 3 mirrors. Every mirror
k> consists of: 1 local disk slice + 1 iSCSI disk from server2.
k> I export the pool on server1, and try to import the pool on
k> server2
I can't follow your sit
> "ns" == Nigel Smith <[EMAIL PROTECTED]> writes:
ns> make a note of your hard drive and partition sizes now, while
ns> you have a working system.
keeping a human-readable backup of all your disklabels somewhere safe
has helped me a few times. For me it was mostly moving disks among
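A sketch of how to keep such a backup with stock tools (device and path are examples): prtvtoc writes the label in a form that fmthard -s can feed straight back to the disk later.
# prtvtoc /dev/rdsk/c0t0d0s2 > /safe/place/c0t0d0.vtoc
# fmthard -s /safe/place/c0t0d0.vtoc /dev/rdsk/c0t0d0s2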
I have created a zpool on server1 with 3 mirrors. Every mirror consists of: 1
local disk slice + 1 iSCSI disk from server2.
I export the pool on server1 and try to import the pool on server2.
Without connecting over iSCSI to the local targets, no zpool is seen.
After connecting over iSCSI to my
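A sketch of the order of operations presumably needed on server2 before the import can see all three mirrors (the discovery address is a placeholder):
# iscsiadm add discovery-address 192.168.1.1:3260
# iscsiadm modify discovery --sendtargets enable
# devfsadm -i iscsi
# zpool import            (lists importable pools)
# zpool import <poolname>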
...check out that link that Eugene provided.
It was a GigaByte GA-G31M-S2L motherboard.
http://www.gigabyte.com.tw/Products/Motherboard/Products_Spec.aspx?ProductID=2693
Some more info on 'Host Protected Area' (HPA), relating to OpenSolaris here:
http://opensolaris.org/os/community/arc/caselog/200
Hi Eugene
I'm delighted to hear you got your files back!
I've seen a few posts to this forum where people have
made some change to the hardware and then found
that the ZFS pool has gone. And often you never
hear any more from them, so you assume they could
not recover it.
Thanks for reporting b
Armin Ollig wrote:
> Hi Victor,
>
> it was initially created from
> c4t600D02300088824BC4228807d0s0, then destroyed and recreated
> from /dev/did/dsk/d12s0. You are right: It still shows up the old
> dev.
Your pool is cached in zpool.cache on both hosts (with the old device path):
bash-
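As a sketch of the kind of check that transcript would show (pool name is a placeholder): the cached configuration can be inspected on each host with zdb, and exporting the pool is what removes its entry from the cache file.
# zdb -C
# zpool export <poolname>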
Hi Victor,
it was initially created from c4t600D02300088824BC4228807d0s0, then
destroyed and recreated from /dev/did/dsk/d12s0. You are right: It still shows
up the old dev.
There seems to be a problem with device reservation and the cluster framework,
since I get a kernel panic once
I'm running a scrub and I'm running "zpool status" every 5 minutes.
This happens:
pool: export
state: ONLINE
scrub: scrub in progress for 1h16m, 44.91% done, 1h34m to go
config:
NAME        STATE     READ WRITE CKSUM
export      ONLINE       0     0     0
  c0d0s7
Armin Ollig wrote:
> Hi Venku and all others,
>
> thanks for your suggestions. I wrote a script to do some IO from both
> hosts (in non-cluster-mode) to the FC-LUNs in question and check the
> md5sums of all files afterwards. As expected, there was no corruption.
>
>
> After recreating the clust
Hi Richard,
Thank you very much for your quick response.
SB
Richard Elling wrote:
> Simon Bonilla wrote:
>>
>> Hi Team,
>>
>> We have a customer who wants to implement the following architecture:
>>
>> - Solaris 10
>>
>> - Sun Cluster 3.2
>>
>> - Oracle RAC
>>
>
> Oracle does not support RAC on
Simon Bonilla wrote:
>
> Hi Team,
>
> We have a customer who wants to implement the following architecture:
>
> - Solaris 10
>
> - Sun Cluster 3.2
>
> - Oracle RAC
>
Oracle does not support RAC on ZFS, nor will ZFS work as a
shared, distributed file system. If you want a file system, then
QFS is
Hi Venku and all others,
thanks for your suggestions.
I wrote a script to do some IO from both hosts (in non-cluster-mode) to the
FC-LUNs in question and check the md5sums of all files afterwards. As expected,
there was no corruption.
After recreating the cluster-resource and a few failovers
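A sketch of that kind of check, with made-up paths and counts; the Solaris digest(1) command stands in for md5sum:
#!/bin/sh
# write files onto the shared LUN and record their MD5 sums
DIR=/mnt/fclun/iotest
mkdir -p $DIR
i=1
while [ $i -le 50 ]; do
    dd if=/dev/urandom of=$DIR/file$i bs=1024k count=10 2>/dev/null
    i=`expr $i + 1`
done
cd $DIR && digest -v -a md5 file* > /var/tmp/md5.`hostname`
# run the same loop from the other host, then diff the two md5 lists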
Cyril ROUHASSIA wrote:
>
> Dear all,
>
> From an implementation point of view, I really do not understand where
> the list of all datasets lies. I know how to get from the uberblock
> to the "active dataset" (through the MOS and so on), but once there, how to
> get to other datasets located
>A damned new motherboard BIOS silently cut 2 megabytes down from
>the drive so that ZFS went insane
Can you tell us which BIOS/Motherboard we should avoid?
Casper
Hi everyone, sorry for the late reply.
First of all, I've got all my files back. Cheers! :-)
Next, I'd like to thank Nigel Smith; you are the best!
And, if anyone is interested, here is the end of the story and I'll try not to
make it too long.
As one can see at http://www.freebsd.org/cgi/quer
Hi Team,
We have a customer who wants to implement the following architecture:
- Solaris 10
- Sun Cluster 3.2
- Oracle RAC
The customer wants to use a file system instead of raw partitions.
Question:
- Is ZFS supported in Sun Cluster 3.2?
Sincerely,
SB
Yup, I'd agree with that too.
If the desktop guys want snapshotting to be as simple as possible, could ZFS be
configured so that this property is set on creation by default?
That means that it's something that just works for the average user; more
advanced users can create pools that don't auto
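A sketch of what that could look like, assuming the -O option to zpool create is available in the build in question (pool and device names are made up):
# zpool create -O com.sun:auto-snapshot=true tank mirror c1t0d0 c1t1d0
which would be equivalent to setting the property right after creation:
# zfs set com.sun:auto-snapshot=true tank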
If I have 16 SATA 7200 rpm disks and 8 SAS 15k rpm disks in the same
JBOD array, what would be the optimal zpool configuration?
Should I create 2 zpools, one with the SATA disks and one with the SAS
disks? Or
If they are in the same zpool (let's say mirrored (same type of disks) and
then striped), wh
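One common answer is to keep the two drive types in separate pools, so the slow disks never gate the fast ones; a sketch with hypothetical device names:
# zpool create fastpool mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0 \
      mirror c2t4d0 c2t5d0 mirror c2t6d0 c2t7d0
# zpool create bulkpool raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0 \
      raidz2 c3t8d0 c3t9d0 c3t10d0 c3t11d0 c3t12d0 c3t13d0 c3t14d0 c3t15d0
Mixing both speeds as vdevs of one pool works, but the pool then tends to run at the pace of its slowest vdevs.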
Tim Foster wrote:
Chris Gerhard wrote:
Not quite. I want to make the default, for all pools imported or not,
to not do this, and then turn it on where it makes sense and won't do
harm.
Aah I see. That's the complete opposite of what the desktop folk wanted
then - you want to opt-in, instead o
Chris Gerhard wrote:
> Tim Foster wrote:
>
>>
>> Yep, you can do that. It uses ZFS user properties and respects
>> inheritance, so you can do:
>>
>> # zfs set com.sun:auto-snapshot=false rpool
>> # zfs set com.sun:auto-snapshot=true rpool/snapshot/this
>> # zfs set com.sun:auto-snapshot=false rpo
>> > This smells of name resolution delays somewhere.
>
>[...]
>> I think you've misunderstood something here, perhaps in the way I've tried
>> to explain it.
>
>No, I was just offering a hunch. Writing files into a directory checks
>access permissions for that directory, and that involves name s
Dear all,
From an implementation point of view, I really do not understand where
the list of all datasets lies. I know how to get from the uberblock to
the "active dataset" (through the MOS and so on), but once there, how to get to
other datasets located in the pool?
Thanks for your answer,
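Not an answer to the on-disk question itself, but as a practical sketch, zdb will walk that structure and list every dataset in a pool (the pool name is an example):
# zdb -d tank
# zdb -dddd tank
A single -d prints one line per dataset; repeating the flag increases the detail down to individual objects.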
> > This smells of name resolution delays somewhere.
[...]
> I think you've misunderstood something here, perhaps in the way I've tried
> to explain it.
No, I was just offering a hunch. Writing files into a directory checks
access permissions for that directory, and that involves name services.