Aha, found it! It was this thread, also started by Carsten :)
http://www.opensolaris.org/jive/thread.jspa?threadID=78921&tstart=45
Guys, this looks to me like the second time we've had something like this
reported on the forums for an x4500, again with the first vdev having much
lower load than the other two, despite being created at the same time.
I can't find the thread to check, can anybody else remember it?
Hi,
A good rough estimate would be the total of the space
that is displayed under the "USED" column of "zfs list" for those snapshots.
Here is an example :
-- snip --
[EMAIL PROTECTED] zfs list -r tank
NAME       USED  AVAIL  REFER  MOUNTPOINT
tank      24.6M  38.9M    19K  /tank
tank/fs1
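To see those USED figures for snapshots alone, a minimal sketch (the pool name "tank" is just carried over from the example above):
-- snip --
# list snapshots with the space each one uniquely holds; summing the USED
# column gives the rough estimate described above
zfs list -r -t snapshot -o name,used tank
-- snip --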
On Tue, Dec 2, 2008 at 6:13 PM, Lori Alt <[EMAIL PROTECTED]> wrote:
> On 12/02/08 10:24, Mike Gerdts wrote:
> I follow you up to here. But why do the next steps?
>
> > zonecfg -z $zone
> > remove fs dir=/var
> >
> > zfs set mountpoint=/zones/$zone/root/var rpool/zones/$zone/var
It's not strictly
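For anyone skimming the digest, the quoted steps as one shell fragment (a sketch only; the zone name is a placeholder and the dataset layout is assumed from the quoted paths):
-- snip --
zone=myzone          # hypothetical zone name
# drop the zone's fs resource for /var...
zonecfg -z $zone "remove fs dir=/var"
# ...and let ZFS mount the zone's /var dataset directly at the old location
zfs set mountpoint=/zones/$zone/root/var rpool/zones/$zone/var
-- snip --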
It's something we've considered here as well.
Hello none,
Thursday, November 6, 2008, 7:55:42 PM, you wrote:
n> Hi Milek,
n> Thanks for your reply.
n> What I really need is a way to tell how much space will be freed
n> for any particular set of snapshots that I delete.
n> So I would like to query zfs,
n> "if I delete these snapshots
n> sto
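For what it's worth, later ZFS releases grew a dry-run option that answers exactly this question; a sketch, assuming a build whose zfs destroy supports -n/-v and using hypothetical snapshot names:
-- snip --
# -n: don't actually destroy anything, -v: report the space that would be freed
zfs destroy -nv tank/fs@snap1
# a whole range of snapshots can be checked at once with the % syntax
zfs destroy -nv tank/fs@snap1%snap3
-- snip --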
Would any of this have to do with the system being a T2000? Would ZFS
resilvering be affected by single-threadedness, the slowish US-T1 clock
speed, or the lack of strong FPU performance?
On 12/1/08, Alan Rubin <[EMAIL PROTECTED]> wrote:
> We will be considering it in the new year, but that will not happe
On 12/02/08 11:04, Brian Wilson wrote:
- Original Message -
From: Lori Alt <[EMAIL PROTECTED]>
Date: Tuesday, December 2, 2008 11:19 am
Subject: Re: [zfs-discuss] Separate /var
To: Gary Mills <[EMAIL PROTECTED]>
Cc: zfs-discuss@opensolaris.org
On 12/02/08 09:00, Gary Mills wrote:
On 12/02/08 10:24, Mike Gerdts wrote:
On Tue, Dec 2, 2008 at 11:17 AM, Lori Alt <[EMAIL PROTECTED]> wrote:
I did pre-create the file system. Also, I tried omitting "special" and
zonecfg complains.
I think that there might need to be some changes
to zonecfg and the zone installation code to
Hi,
Eeemmm, I think it's safe to say your zpool and its data are gone forever.
Use the Samsung disk checker boot CD, and see if it can fix your faulty disk.
Then connect all 3 drives to your system and use raidz. Your data will then be
well protected.
Brian,
Hello Mattias,
Saturday, November 15, 2008, 12:24:05 AM, you wrote:
MP> On Sat, Nov 15, 2008 at 00:46, Richard Elling <[EMAIL PROTECTED]> wrote:
>> Adam Leventhal wrote:
>>>
>>> On Fri, Nov 14, 2008 at 10:48:25PM +0100, Mattias Pantzare wrote:
>>>
That is _not_ active-active, that is ac
On 2-Dec-08, at 3:35 PM, Miles Nordin wrote:
>> "r" == Ross <[EMAIL PROTECTED]> writes:
>
> r> style before I got half way through your post :) [...status
> r> problems...] could be a case of oversimplifying things.
> ...
> And yes, this is a religious argument. Just because it sp
OK,
In the end I managed to install OpenSolaris snv_101b on an HP blade on a Smart
Array drive directly from the install CD. Everything is fine. The problems I
experienced with hangs on boot on snv_99+ are related to the Qlogic driver, but
this is a different story.
Simon
Francois Dion wrote:
> >>"Francois Dion" wrote:
> >> Source is local to rsync, copying from a zfs file system,
> >> destination is remote over a dsl connection. Takes forever to just
> >> go through the unchanged files. Going the other way is not a
> >> problem, it takes a fraction of the t
>
> I don't want to steer you wrong under the circumstances,
> so I think we need more information.
>
> First, is the failure the same as in the earlier part of this
> thread? I.e., when you boot, do you get a failure like this?
>
> Warning: Fcode sequence resulted in a net stack depth
I don't want to steer you wrong under the circumstances,
so I think we need more information.
First, is the failure the same as in the earlier part of this
thread? I.e., when you boot, do you get a failure like this?
Warning: Fcode sequence resulted in a net stack depth change of 1
Evaluating
The SupportTech responding to case #66153822 so far
has only suggested "boot from cdrom and patchrm 137137-09"
which tells me I'm dealing with a level-1 binder monkey.
It's the idle node of a cluster holding 10K email accounts
so I'm proceeding cautiously. It is unfortunate the admin doing
the ori
On Tue, Dec 02, 2008 at 12:50:08PM -0600, Tim wrote:
> On Tue, Dec 2, 2008 at 11:42 AM, Brian Hechinger <[EMAIL PROTECTED]> wrote:
>
> I believe the issue you're running into is the failmode you currently have
> set. Take a look at this:
> http://prefetch.net/blog/index.php/2008/03/01/configuring
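A sketch of checking and changing that property (the pool name "tank" is an assumption; failmode takes wait, continue, or panic):
-- snip --
zpool get failmode tank
# "wait" (the default) blocks I/O until the device comes back;
# "continue" returns EIO to new writes instead of hanging the pool
zpool set failmode=continue tank
-- snip --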
Vincent Fox wrote:
> Reviving this thread.
>
> We have a Solaris 10u4 system recently patched with 137137-09.
> Unfortunately the patch was applied from multi-user mode; I wonder if this
> may have been the original poster's problem as well? Anyhow, we are now stuck
> with an unbootable system as well.
> "r" == Ross <[EMAIL PROTECTED]> writes:
r> style before I got half way through your post :) [...status
r> problems...] could be a case of oversimplifying things.
yeah I was a bit inappropriate, but my frustration comes from the
(partly paranoid) imagining of how the idea ``we nee
On Tue, 2 Dec 2008, Carsten Aulbert wrote:
>
> Hmm, since I only started with Solaris this year, is there a way to
> identify a "slow" disk? In principle these should all be identical
> Hitachi Deathstar^WDeskstar drives and should only have the standard
> deviation during production.
Look at the
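The reply is cut off above; one common way to hunt for a slow disk (an assumption about where this was heading, not a quote) is to watch per-device service times:
-- snip --
# extended per-device statistics every 5 seconds with descriptive device names;
# a drive whose asvc_t and %b stay well above its siblings under the same load
# is the usual suspect
iostat -xn 5
-- snip --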
Reviving this thread.
We have a Solaris 10u4 system recently patched with 137137-09.
Unfortunately the patch was applied from multi-user mode; I wonder if this
may have been the original poster's problem as well? Anyhow, we are now stuck
with an unbootable system as well.
I have submitted a case to Su
On Tue, 2 Dec 2008, Carsten Aulbert wrote:
>
> No I think a single disk would be much less performant, however I'm a
> bit disappointed by the overall performance of the boxes and just now we
> have users where they experience extremely slow performance.
If all of the disks in the vdev need to be
Bob Friesenhahn wrote:
> You may have one or more "slow" disk drives which slow down the whole
> vdev due to long wait times. If you can identify those slow disk drives
> and replace them, then overall performance is likely to improve.
>
> The problem is that under severe load, the vdev with the
Hi Miles,
Miles Nordin wrote:
>> "ca" == Carsten Aulbert <[EMAIL PROTECTED]> writes:
>
> ca> (a) Why the first vdev does not get an equal share
> ca> of the load
>
> I don't know. but, if you don't add all the vdev's before writing
> anything, there's no magic to make them balance t
> "ca" == Carsten Aulbert <[EMAIL PROTECTED]> writes:
ca> (a) Why the first vdev does not get an equal share
ca> of the load
I don't know. but, if you don't add all the vdev's before writing
anything, there's no magic to make them balance themselves out. Stuff
stays where it's writt
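A quick way to see the kind of per-vdev imbalance being discussed (the pool name is an assumption):
-- snip --
# capacity and I/O broken out per vdev and per disk; an under- or over-used
# vdev stands out in the alloc and operations columns
zpool iostat -v tank 5
-- snip --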
On Tue, 2 Dec 2008, Carsten Aulbert wrote:
>
> Questions:
> (a) Why the first vdev does not get an equal share of the load
You may have one or more "slow" disk drives which slow down the whole
vdev due to long wait times. If you can identify those slow disk
drives and replace them, then overall
Hi Miles,
It's probably a bad sign that although that post came through as anonymous in
my e-mail, I recognised your style before I got half way through your post :)
I agree, the zpool status being out of date is weird, I'll dig out the bug
number for that at some point as I'm sure I've mention
On 12/02/08 11:29, dick hoogendijk wrote:
Lori Alt wrote:
On 12/02/08 03:21, jan damborsky wrote:
Hi Dick,
I am redirecting your question to zfs-discuss
mailing list, where people are more knowledgeable
about this problem and your question could be
better answered.
Best regards,
Jan
Hi all,
We are running pretty large vdevs, since the initial testing showed that
our setup was not too far off the optimum. However, under real-world
load we see some quite weird behaviour:
The system itself is a X4500 with 500 GB drives and right now the system
seems to be under heavy load, e
On Tue, Dec 2, 2008 at 11:42 AM, Brian Hechinger <[EMAIL PROTECTED]> wrote:
> I was not in front of the machine; I had remote hands working with me, so I
> apologize in advance for any lack of detail I'm about to give.
>
> The server in question is running snv_81 booting ZFS Root using Tim's
> sc
I was not in front of the machine; I had remote hands working with me, so I
apologize in advance for any lack of detail I'm about to give.
The server in question is running snv_81 booting ZFS Root using Tim's scripts to
"convert" it over to ZFS Root.
My server in colo stopped responding. I had
> "rs" == Ross Smith <[EMAIL PROTECTED]> writes:
rs> 4. zpool status still reports out of date information.
I know people are going to skim this message and not hear this.
They'll say ``well of course zpool status says ONLINE while the pool
is hung. ZFS is patiently waiting. It doesn't
Lori Alt wrote:
> On 12/02/08 03:21, jan damborsky wrote:
>> Hi Dick,
>>
>> I am redirecting your question to zfs-discuss
>> mailing list, where people are more knowledgeable
>> about this problem and your question could be
>> better answered.
>>
>> Best regards,
>> Jan
>>
>>
>> dick hoogendijk wr
- Original Message -
From: Lori Alt <[EMAIL PROTECTED]>
Date: Tuesday, December 2, 2008 11:19 am
Subject: Re: [zfs-discuss] Separate /var
To: Gary Mills <[EMAIL PROTECTED]>
Cc: zfs-discuss@opensolaris.org
> On 12/02/08 09:00, Gary Mills wrote:
> > On Mon, Dec 01, 2008 at 04:45:16PM -0700
It seems that my devices still carry settings from several pools :-(
zdb -l /dev/rdsk/c0t5d0
tells me
LABEL 0
failed to unpack label 0
LABEL 1
On Tue, 2 Dec 2008, Toby Thain wrote:
>
> Even that is probably more frequent than necessary. I'm sure somebody
> has done the MTTDL math. IIRC, the big win is doing any scrubbing at
> all. The difference between scrubbing every 2 weeks and every 2
> months may be negligible. (IANAMathematician tho
On Tue, Dec 2, 2008 at 11:17 AM, Lori Alt <[EMAIL PROTECTED]> wrote:
> I did pre-create the file system. Also, I tried omitting "special" and
> zonecfg complains.
>
> I think that there might need to be some changes
> to zonecfg and the zone installation code to get separate
> /var datasets in non
On 12/02/08 09:00, Gary Mills wrote:
On Mon, Dec 01, 2008 at 04:45:16PM -0700, Lori Alt wrote:
On 11/27/08 17:18, Gary Mills wrote:
On Fri, Nov 28, 2008 at 11:19:14AM +1300, Ian Collins wrote:
On Fri 28/11/08 10:53 , Gary Mills [EMAIL PROTECTED] sent:
On Fri, Nov 28, 2008 at 07:39:43AM +11
On 12/02/08 03:21, jan damborsky wrote:
> Hi Dick,
>
> I am redirecting your question to zfs-discuss
> mailing list, where people are more knowledgeable
> about this problem and your question could be
> better answered.
>
> Best regards,
> Jan
>
>
> dick hoogendijk wrote:
>
>> I have s10u6 insta
Hi Richard,
Thanks, I'll give that a try. I think I just had a kernel dump while
trying to boot this system back up, though; I don't think it likes it
if the iSCSI targets aren't available during boot. Again, that rings
a bell, so I'll go see if that's another known bug.
Changing that setting on
On 2-Dec-08, at 8:24 AM, Glaser, David wrote:
> Ok, thanks for all the responses. I'll probably do every other week
> scrubs, as this is the backup data (so doesn't need to be checked
> constantly).
Even that is probably more frequent than necessary. I'm sure somebody
has done the MTTDL ma
On Tue, Dec 2, 2008 at 10:15, Paul Weaver <[EMAIL PROTECTED]> wrote:
> So you've got a zpool across 46 (48?) of the disks?
>
> When I was looking into our thumpers everyone seemed to think a raidz
> over more than 10 disks was a hideous idea.
A vdev that size is bad, a pool that size composed of
On Mon, Dec 01, 2008 at 04:45:16PM -0700, Lori Alt wrote:
>On 11/27/08 17:18, Gary Mills wrote:
> On Fri, Nov 28, 2008 at 11:19:14AM +1300, Ian Collins wrote:
> On Fri 28/11/08 10:53 , Gary Mills [EMAIL PROTECTED] sent:
> On Fri, Nov 28, 2008 at 07:39:43AM +1100, Edward Irvine wrote:
>
> I'm c
Thanks for your suggestions couper88, but this did not help :-/.
I tried the latest live CD of 2008.11
and got new information:
a zpool import now shows me:
[EMAIL PROTECTED]:~# zpool import
  pool: tank
    id: 1717390511944489
 state: UNAVAIL
status: One or more devices contains corrupted da
So you've got a zpool across 46 (48?) of the disks?
When I was looking into our thumpers everyone seemed to think a raidz over
more than 10 disks was a hideous idea.
--
Paul Weaver
Systems Development Engineer
News Production Facilities, BBC News
Work: 020 822 58109
Room 1244 Television
How are the two sides different? If you run something like 'openssl md5' or
'md5sum' on both sides, is it much faster on one side?
Does one machine have a lot more memory/ARC and allow it to skip the physical
reads? Is the dataset compressed on one side?
Ok, thanks for all the responses. I'll probably do every other week scrubs, as
this is the backup data (so doesn't need to be checked constantly). I'm a
little concerned about the time involved to do 33TB (after the 48TB has been
RAIDed fully) when it is fully populated with filesystems and snap
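Since scrubs don't schedule themselves, a minimal sketch of driving one from cron (the pool name and timing are assumptions; a strict every-other-week schedule needs an extra date check outside plain cron):
-- snip --
# root crontab entry: scrub the backup pool every Sunday at 02:00
0 2 * * 0 /usr/sbin/zpool scrub tank
-- snip --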
> I have a Thumper (ok, actually 3) with each having one large pool,
multiple
> filesystems and many snapshots. They are holding rsync copies of
multiple
> clients, being synced every night (using snapshots to keep
'incremental'
> backups).
>
> I'm wondering how often (if ever) I should do scr
>>"Francois Dion" wrote:
>> Source is local to rsync, copying from a zfs file system,
>> destination is remote over a dsl connection. Takes forever to just
>> go through the unchanged files. Going the other way is not a
>> problem, it takes a fraction of the time. Anybody seen that?
>> Sugg
Hi,
Has anyone implemented hardware RAID 1/5 on the Sun X4150/X4450 class of
servers?
Also, is there any comparison between ZFS and H/W RAID?
I would like to know the experience (good/bad) and the pros/cons.
Regards,
Vikash
Incidentally, while I've reported this again as an RFE, I still haven't seen a
CR number for it. Could somebody from Sun please check whether it's been filed?
thanks,
Ross
t. johnson wrote:
>>> One would expect so, yes. But the usefulness of this is limited to the
>>> cases where the entire working set will fit into an SSD cache.
>>>
>> Not entirely out of the question. SSDs can be purchased today
>> with more than 500 GBytes in a 2.5" form factor. One or more of
>>
Hi,
Attach both original drives to the system; the faulty one may only have had a
few checksum errors.
zpool status -v should hopefully show your data pool, provided you have not
started to replace the faulty drive yet. If it doesn't see the pool, zpool
export then zpool import and hope
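A rough sketch of the steps being suggested (the pool name "tank" is a placeholder):
-- snip --
zpool status -v        # should list the data pool if it is still imported
zpool export tank      # if it shows up but is unhealthy, export it...
zpool import tank      # ...and import it again so the labels are re-read
-- snip --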
Hey folks,
I've just followed up on this, testing iSCSI with a raided pool, and
it still appears to be struggling when a device goes offline.
>>> I don't see how this could work except for mirrored pools. Would that
>>> carry enough market to be worthwhile?
>>> -- richard
>>>
>>
>> I have to adm
>>
>> One would expect so, yes. But the usefulness of this is limited to the cases
>> where the entire working set will fit into an SSD cache.
>>
>
> Not entirely out of the question. SSDs can be purchased today
> with more than 500 GBytes in a 2.5" form factor. One or more of
> these would make a
Hi,
I have a system connected to an external DAS (SCSI) array, using ZFS. The
array has an NVRAM write cache, but it honours SCSI cache flush commands by
flushing the NVRAM to disk. The array has no way to disable this behaviour. A
well-known behav
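The note breaks off here; for context, a sketch of the tunable usually brought up in these threads when an array has a battery-backed cache (an assumption about where the author was going, not a quote):
-- snip --
# /etc/system entry: stop ZFS from issuing SCSI cache flushes; only safe when
# the write cache is non-volatile (battery-backed NVRAM); reboot to apply
set zfs:zfs_nocacheflush = 1
-- snip --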
Hi Dick,
I am redirecting your question to zfs-discuss
mailing list, where people are more knowledgeable
about this problem and your question could be
better answered.
Best regards,
Jan
dick hoogendijk wrote:
> I have s10u6 installed on my server.
> zfs list (partly):
> NAME