All this reminds me: how much work (if any) has been done on the
"asynchronous" mirroring option? That is, for supporting mirrors with
radically different access times? (Useful for supporting a mirror
across a WAN, where you have hundred(s)-millisecond latency to the other
side of the mirror …
Bob Friesenhahn wrote:
On Fri, 18 Sep 2009, David Magda wrote:
If you care to keep your pool up and alive as much as possible, then
mirroring across SAN devices is recommended.
One suggestion I heard was to get a LUN that's twice the size, and
set "copies=2". This way you have some redundancy for incorrect checksums …
On Fri, 18 Sep 2009, David Magda wrote:
If you care to keep your pool up and alive as much as possible, then
mirroring across SAN devices is recommended.
One suggestion I heard was to get a LUN that's twice the size, and set
"copies=2". This way you have some redundancy for incorrect checksums …
On Sep 18, 2009, at 16:52, Bob Friesenhahn wrote:
If you care to keep your pool up and alive as much as possible, then
mirroring across SAN devices is recommended.
One suggestion I heard was to get a LUN that's twice the size, and set
"copies=2". This way you have some redundancy for incorrect checksums …
Hi Chris,
Unless we can figure out the best way to provide this info, please ask
about specific features and we'll tell you.
One convoluted way is that a CR that integrates a ZFS feature
identifies the Nevada integration build and the Solaris 10 release,
but not all CRs provide this info. You can …
Dave,
I've searched opensolaris.org and our internal bug database.
I don't see that anyone else has reported this problem.
I asked someone from the OSOL install team and this behavior
is a mystery.
If you destroyed the phantom pools before you reinstalled,
then they probably returned from the i…
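If it helps to narrow this down, zpool import can show where the phantom pools are coming from; a sketch (the pool name is hypothetical):

  zpool import              # lists pools whose labels are still visible on disk
  zpool import -D           # lists destroyed pools that could still come back
  zpool import -f phantom   # hypothetical: re-import a phantom pool ...
  zpool destroy phantom     # ... and destroy it so it stays gone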
Scott Lawson wrote:
A Sun Directory environment generally isn't very IO intensive, except
during massive data reloads or indexing operations. Other than that it
is an ideal candidate for ZFS
and its rather nice ARC cache. Memory is cheap on a lot of boxes and
it will make read-only type file sys…
Andrew Deason wrote:
On Fri, 18 Sep 2009 16:38:28 -0400
Robert Milkowski wrote:
No. We need to be able to tell how close to full we are, for
determining when to start/stop removing things from the cache
before we can add new items to the cache again.
but having a dedicated dataset …
Lloyd H. Gill wrote:
Hello folks,
I am sure this topic has been asked, but I am new to this list. I have
read a ton of docs on the web, but wanted to get some opinions from
you all. Also, if someone has a digest of the last time this was
discussed, you can just send that to me. In any case …
On Fri, 18 Sep 2009 16:38:28 -0400
Robert Milkowski wrote:
> > No. We need to be able to tell how close to full we are, for
> > determining when to start/stop removing things from the cache
> > before we can add new items to the cache again.
> >
>
> but having a dedicated dataset will let you …
On Wed, 2009-09-16 at 14:19 -0700, Richard Elling wrote:
> Actually, I had a ton of data on resilvering which shows mirrors and
> raidz equivalently bottlenecked on the media write bandwidth. However,
> there are other cases which are IOPS bound (or CR bound :-) which
> cover some of the postings …
Richard Elling wrote:
On Sep 18, 2009, at 10:06 AM, Chris Banal wrote:
Since most zfs features/fixes are reported in snv_XXX terms, is
there some sort of way to figure out which versions of Solaris 10
have the equivalent features/fixes?
There is no automated nor easy way to do this. Not all features are backported …
Hi,
see comments inline:
Lloyd H. Gill wrote:
Hello folks,
I am sure this topic has been asked, but I am new to this list. I have
read a ton of docs on the web, but wanted to get some opinions from
you all. Also, if someone has a digest of the last time this was
discussed, you can just send that to me …
On Fri, 18 Sep 2009, Lloyd H. Gill wrote:
The Sun docs seem to indicate it is possible, but not a recommended course. I
realize there are some advantages, such as snapshots, etc. But the h/w RAID
will handle most disk problems, basically reducing the great capabilities
of ZFS, one of the big reasons to deploy …
Andrew Deason wrote:
On Thu, 17 Sep 2009 18:40:49 -0400
Robert Milkowski wrote:
if you would create a dedicated dataset for your cache and set quota
on it then instead of tracking a disk space usage for each file you
could easily check how much disk space is being used in the dataset.
Would it suffice for you …
Hello folks,
I am sure this topic has been asked, but I am new to this list. I have read
a ton of docs on the web, but wanted to get some opinions from you all.
Also, if someone has a digest of the last time this was discussed, you can
just send that to me. In any case, I am reading a lot of mixed …
Cindy Swearingen wrote:
Michael,
Get some rest. :-)
Then see if you can import your root pool while booted from the LiveCD.
that's what I tried - I'm never even shown "rpool", I probably wouldn't
have mentioned localpool at all if I had ;-)
After you get to that point, you might search the …
Michael,
Get some rest. :-)
Then see if you can import your root pool while booted from the LiveCD.
After you get to that point, you might search the indiana-discuss
archive for tips on
resolving the pkg-image-update no grub menu problem.
Cindy
On 09/18/09 12:08, michael schuster wrote:
Ci…
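For the LiveCD step Cindy describes, a minimal sketch of the import (the -R altroot keeps the pool's filesystems from mounting over the live environment, and -f overrides the "pool in use by another system" check):

  zpool import                   # rpool should appear here if the label is intact
  zpool import -f -R /mnt rpool  # import under an alternate root for inspection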
On Sep 18, 2009, at 10:06 AM, Chris Banal wrote:
Since most zfs features/fixes are reported in snv_XXX terms, is
there some sort of way to figure out which versions of Solaris 10
have the equivalent features/fixes?
There is no automated nor easy way to do this. Not all features are
backported …
On Sep 18, 2009, at 7:36 AM, Andrew Deason wrote:
On Thu, 17 Sep 2009 18:40:49 -0400
Robert Milkowski wrote:
if you would create a dedicated dataset for your cache and set quota
on it then instead of tracking a disk space usage for each file you
could easily check how much disk space is being used in the dataset …
On 9/18/2009 1:51 PM, Steffen Weiberle wrote:
I am trying to compile some deployment scenarios of ZFS.
# of systems
do zfs root count? or only big pools?
amount of storage
raw or after parity?
--
Jeremy Kister
http://jeremy.kister.net./
I just did a fresh reinstall of OpenSolaris and I'm again seeing
the phenomenon described in
http://article.gmane.org/gmane.os.solaris.opensolaris.zfs/26259
which I posted many months ago and got no reply to.
Can someone *please* help me figure out what's going on here?
Thanks in Advance,
--
Dave
Cindy Swearingen wrote:
Michael,
ZFS handles EFI labels just fine, but you need an SMI label on the disk
that you are booting from.
Are you saying that localtank is your root pool?
no... (I was on the plane yesterday, I'm still jet-lagged), I should have
realised that that's strange.
I b…
I am trying to compile some deployment scenarios of ZFS.
If you are running ZFS in production, would you be willing to provide
(publicly or privately)?
# of systems
amount of storage
application profile(s)
type of workload (low, high; random, sequential; read-only, read-write,
write-only)
st…
Michael,
ZFS handles EFI labels just fine, but you need an SMI label on the disk
that you are booting from.
Are you saying that localtank is your root pool?
I believe the OSOL install creates a root pool called rpool. I don't
remember if it's configurable.
Changing labels or partitions from …
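If the boot disk does turn out to carry an EFI label, the usual way to put an SMI label on it is format in expert mode. A sketch with a hypothetical disk name; relabeling can wipe the partition table, so only do this on a disk you are about to reinstall:

  # format -e c8t0d0            # hypothetical boot disk
  format> label
    [0] SMI Label
    [1] EFI Label
  Specify Label type[1]: 0
  format> quit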
Since most zfs features/fixes are reported in snv_XXX terms, is there some
sort of way to figure out which versions of Solaris 10 have the equivalent
features/fixes?
Thanks,
Chris
michael schuster wrote:
All,
this morning, I did "pkg image-update" from 118 to 123 (internal repo),
and upon reboot all I got was the grub prompt - no menu, nothing.
I found a 2009.06 CD, and when I boot that and run "zpool import", I
get told
localtank UNAVAIL insufficient replicas …
On Fri, 18 Sep 2009 12:48:34 -0400
Richard Elling wrote:
> The transactional nature of ZFS may work against you here.
> Until the data is committed to disk, it is unclear how much space
> it will consume. Compression clouds the crystal ball further.
...but not impossible. I'm just looking for a …
All,
this morning, I did "pkg image-update" from 118 to 123 (internal repo), and
upon reboot all I got was the grub prompt - no menu, nothing.
I found a 2009.06 CD, and when I boot that and run "zpool import", I
get told
localtank UNAVAIL insufficient replicas
c8t1d0 …
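One recovery that often cured the bare grub prompt after an image-update in builds of that era was rewriting the grub stages onto the boot disk from the LiveCD. A sketch, assuming the root pool imports as rpool and sits on a hypothetical slice c8t1d0s0, with the pool mounted under /mnt:

  zpool import -f -R /mnt rpool
  installgrub /mnt/boot/grub/stage1 /mnt/boot/grub/stage2 /dev/rdsk/c8t1d0s0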
Thanks James! I look forward to these - we could really use dedup in my org.
Blake
On Thu, Sep 17, 2009 at 6:02 PM, James C. McPherson wrote:
> On Thu, 17 Sep 2009 11:50:17 -0500
> Tim Cook wrote:
>
>> On Thu, Sep 17, 2009 at 5:27 AM, Thomas Burgess wrote:
>>
>> > I think you're right, a…
On Thu, 17 Sep 2009 18:40:49 -0400
Robert Milkowski wrote:
> if you would create a dedicated dataset for your cache and set quota
> on it then instead of tracking a disk space usage for each file you
> could easily check how much disk space is being used in the dataset.
> Would it suffice for you …
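A sketch of that approach, with hypothetical names; the dataset's own accounting replaces per-file bookkeeping, and the quota gives a hard ceiling to measure against:

  zfs create -o quota=10G tank/afscache         # dedicated cache dataset
  zfs get -Hp -o value used tank/afscache       # bytes in use right now
  zfs get -Hp -o value available tank/afscache  # bytes left under the quota

One caveat from elsewhere in the thread: "used" only moves as transactions commit, so the numbers can lag slightly behind in-flight writes.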
I have exactly these symptoms on 3 thumpers now.
2 x x4540s and 1 x x4500
Rebooting/power cycling doesn't even bring them back. The only thing I found
is that if I boot from the osol.2009.06 CD, I can see all the drives.
I had to reinstall the OS on one box.
I've only just recently upgraded them …
On Thu, Sep 17, 2009 at 11:41 AM, Adam Leventhal wrote:
> RAID-3 bit-interleaved parity (basically not used)
There was a hardware RAID chipset that used RAID-3. Netcell Revolution,
I think it was called.
It looked interesting and I thought about grabbing one at the time but
never got around …