On 2009/09/01, at 22:15, kurosan wrote:
Hi kurosan,
I ran into the same issue, but it probably won't work as-is.
Check 'zfs get all' for the dataset mounted at that path:
you will see that 'mountpoint' is set to 'legacy',
so you will have to set 'sharenfs=on' on it again and try.
Hi,
thanks for the reply... I've had time only today to retry.
On 02/09/2009, at 9:54 AM, Adam Leventhal wrote:
After investigating this problem a bit I'd suggest avoiding deploying
RAID-Z until this issue is resolved. I anticipate having it fixed in
build 124.
Thanks for the status update on this Adam.
cheers,
James
Hi James,
After investigating this problem a bit I'd suggest avoiding deploying
RAID-Z until this issue is resolved. I anticipate having it fixed in
build 124.
Apologies for the inconvenience.
Adam
On Aug 28, 2009, at 8:20 PM, James Lever wrote:
On 28/08/2009, at 3:23 AM, Adam Leventhal
FYI,
Western Digital shipping high-speed 2TB hard drive
http://news.cnet.com/8301-17938_105-10322886-1.html?tag=newsEditorsPicksArea.0
I'm not sure how many people think 7,200 rpm is "high speed"
but, hey, it is better than 5,900 rpm :-)
-- richard
On Sep 1, 2009, at 1:28 PM, Jason wrote:
I guess I should come at it from the other side:
If you have 1 iscsi target box and it goes down, you're dead in the
water.
Yep.
If you have 2 iscsi target boxes that replicate and one dies, you
are OK but you then have to have a 2:1 total storage
You are completely off your rocker :)
No, just kidding. Assuming the virtual front-end servers are running on
different hosts, and you are doing some sort of raid, you should be fine.
Performance may be poor due to the inexpensive targets on the back end, but you
probably know that. A while bac
I guess I should come at it from the other side:
If you have 1 iscsi target box and it goes down, you're dead in the water.
If you have 2 iscsi target boxes that replicate and one dies, you are OK but
you then have to have a 2:1 total storage to usable ratio (excluding expensive
shared disks).
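A rough sketch of the two-box case from the initiator side, mirroring one LUN
from each target box so either box can die; the addresses and device names
below are hypothetical:

# Discover one LUN from each of the two target boxes (addresses made up)
iscsiadm add discovery-address 192.168.10.11:3260
iscsiadm add discovery-address 192.168.10.12:3260
iscsiadm modify discovery --sendtargets enable
devfsadm -i iscsi                    # create device nodes for the new LUNs
# Mirror one device from each box, so losing a whole box only degrades the pool
zpool create vmpool mirror c2t1d0 c3t1d0

That is exactly the 2:1 total-to-usable ratio mentioned above; the trade is
capacity for being able to lose an entire target box.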
On Sep 1, 2009, at 12:17 PM, Jason wrote:
True, though an enclosure for shared disks is expensive. This isn't
for production but for me to explore what I can do with x86/x64
hardware. The idea being that I can just throw up another x86/x64
box to add more storage. Has anyone tried anythi
On Tue, Sep 1, 2009 at 2:17 PM, Jason wrote:
> True, though an enclosure for shared disks is expensive. This isn't for
> production but for me to explore what I can do with x86/x64 hardware. The
> idea being that I can just throw up another x86/x64 box to add more storage.
> Has anyone tried a
True, though an enclosure for shared disks is expensive. This isn't for
production but for me to explore what I can do with x86/x64 hardware. The idea
being that I can just throw up another x86/x64 box to add more storage. Has
anyone tried anything similar?
On Tue, 1 Sep 2009, Jpd wrote:
Thanks.
Any idea how to work out which one it is?
I can't find smartmontools in IPS, so what other ways are there?
You could try using a script like this one to find pokey disks:
#!/bin/ksh
# Date: Mon, 14 Apr 2008 15:49:41 -0700
# From: Jeff Bonwick
# To: Henrik Hjort
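A rough sketch of one such approach (not the script referenced above): time a
fixed-size raw read from each device so a slow disk stands out. The pool name
and device paths are assumptions, and it assumes whole-disk vdevs (names like
c1t0d0):

#!/bin/ksh
# Time a 100 MB raw read from every disk in the pool; a pokey disk stands out.
pool=tank                                   # assumed pool name
for disk in $(zpool status $pool | awk '$1 ~ /^c[0-9]/ {print $1}'); do
        printf "%s: " "$disk"
        # ptime and dd both report on stderr; merge and keep the elapsed time
        ptime dd if=/dev/rdsk/${disk}s0 of=/dev/null bs=1024k count=100 2>&1 | grep real
done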
On Sep 1, 2009, at 11:45 AM, Jason wrote:
So aside from the NFS debate, would this 2 tier approach work? I am
a bit fuzzy on how I would get the RAIDZ2 redundancy but still
present the volume to the VMware host as a raw device. Is that
possible or is my understanding wrong? Also could it
So aside from the NFS debate, would this 2 tier approach work? I am a bit
fuzzy on how I would get the RAIDZ2 redundancy but still present the volume to
the VMware host as a raw device. Is that possible or is my understanding
wrong? Also could it be defined as a clustered resource?
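On getting RAIDZ2 redundancy underneath a raw device: one sketch is to carve a
zvol out of a RAIDZ2 pool and export it as an iSCSI LUN, so the VMware host
just sees a block device while ZFS provides the redundancy underneath. Pool,
disk, and volume names are hypothetical, and this uses the older shareiscsi
mechanism rather than COMSTAR:

# Six hypothetical disks in a double-parity set
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
zfs create -V 500G tank/esx-lun0       # the zvol inherits the pool's RAIDZ2 protection
zfs set shareiscsi=on tank/esx-lun0    # export the zvol as an iSCSI LUN (needs the iscsitgt service)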
On Tue, 1 Sep 2009, John-Paul Drawneek wrote:
I did not migrate my disks.
I now have 2 pools - rpool is at 60% and is still dog slow.
Also, scrubbing the rpool causes the box to lock up.
This sounds like a hardware problem and not something related to
fragmentation. Probably you have a slow/
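A couple of quick checks that usually expose a sick or slow disk (device names
will differ):

iostat -En | grep Errors        # per-device soft/hard/transport error counters
iostat -xn 5                    # watch for one device with much higher asvc_t / %b than its peers
pfexec fmdump -e                # any recent disk ereports logged by FMA?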
Hi all;
I'm currently working on a small "cookbook" to showcase the backup and
restore capabilities of ZFS using snapshots.
I chose to back up the data directory of a MySQL 5.1 server for the
example, using several backup/restore scenarios. The simplest
is to simply snapshot the file
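A minimal sketch of that simplest scenario; the dataset, snapshot, and SMF
service names are assumptions:

# Take a point-in-time snapshot of the dataset holding the MySQL data directory
zfs snapshot tank/mysql/data@backup-20090901

# Restore by rolling the dataset back to that snapshot
# (stop mysqld first so the files are not in use)
svcadm disable mysql
zfs rollback tank/mysql/data@backup-20090901
svcadm enable mysql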
I did not migrate my disks.
I now have 2 pools - rpool is at 60% and is still dog slow.
Also, scrubbing the rpool causes the box to lock up.
> Hi kurosan,
> I ran into the same issue, but it probably won't work as-is.
> Check 'zfs get all' for the dataset mounted at that path:
> you will see that 'mountpoint' is set to 'legacy',
> so you will have to set 'sharenfs=on' on it again and try.
Hi,
thanks for the reply... I've had time only today to retry.
I've re-enabled zfs sharenfs=on but n
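For reference, the kind of check and fix being discussed; the dataset name is
hypothetical:

zfs get mountpoint,sharenfs tank/export/home
# With mountpoint=legacy, ZFS does not mount or share the filesystem itself,
# so sharenfs=on has no effect; either share it the legacy way via /etc/dfs/dfstab
# or give it a ZFS-managed mountpoint and let sharenfs take over:
zfs set mountpoint=/export/home tank/export/home
zfs set sharenfs=on tank/export/home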
Thanks for all the answers. I've now cleared out the old BEs and upgraded the
pools, and everything just works as expected.
/Per
Per Öberg wrote:
When I check
--
# pfexec zpool status rpool
pool: rpool
state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
>When I check
>--
># pfexec zpool status rpool
> pool: rpool
> state: ONLINE
>status: The pool is formatted using an older on-disk format. The pool can
>still be used, but some features are unavailable.
>action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
>po
When I check
--
# pfexec zpool status rpool
pool: rpool
state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
pool will no longer be accessible on older software versions.
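For anyone hitting the same status message, the upgrade itself is a one-liner;
just make sure no older boot environment still needs to import the pool first
(shown against rpool, as in the output above):

pfexec zpool upgrade -v       # list pool versions and what each one adds
pfexec zpool upgrade rpool    # upgrade this pool to the newest version the OS supports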
14408718082181993222 + 4867536591080553814 - 2^64 + 4015976099930560107 =
4845486699483555527
there was an overflow in between that I overlooked.
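Spelled out step by step (2^64 = 18446744073709551616):

14408718082181993222 + 4867536591080553814 = 19276254673262547036
19276254673262547036 - 2^64 = 829510599552995420
829510599552995420 + 4015976099930560107 = 4845486699483555527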
pak
On Sat, Aug 29, 2009 at 10:09:00AM +1200, Ian Collins wrote:
> I have a case open for this problem on Solaris 10u7.
Interesting. One of our thumpers was previously running snv_112 and
experiencing these issues. Switching to 10u7 has cured it and it's been
stable now for several months.
> The case
Thanks for the answers.
Lori Alt wrote:
On 08/31/09 08:30, Henrik Bjornstrom - Sun Microsystems wrote:
Hi !
Has anyone given an answer to this that I have missed? I have a
customer who has the same question, and I want to give him a correct
answer.
/Henrik
Ketan wrote:
I created a snap
Jorgen Lundman wrote:
The MV8 is a Marvell-based chipset, and it appears there are no
Solaris drivers for it. There doesn't appear to be any movement from
Sun or Marvell to provide any either.
Do you mean specifically Marvell 6480 drivers? I use both DAC-SATA-MV8
and AOC-SAT2-MV8, which use
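If it helps, a quick way to see whether any driver has bound to a given
controller (the grep pattern is only a guess at how the node is named):

prtconf -D | grep -i marvell             # a supported Marvell part shows its bound driver here
prtconf | grep -i 'driver not attached'  # controllers with no driver at all are flagged like this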