tcook,
Thanks for your response. Well, I don't imagine there would be a lot of
requests from enterprise customers with deep pockets. My impression has been
that OS is targeting the little guy, though, and as such, this would really be
a welcome feature.
On Tue, Dec 16, 2008 at 7:10 PM, Daniel wrote:
> I'm using zfs not to have access to a fail-safe backed-up system, but to
> easily manage my file system. I would like to be able, as I buy new
> hard drives, to simply replace the old ones. I'm very
> environmentally conscious, so I don'
On Tue, 16 Dec 2008, Reed Gregory wrote:
> 8 hardware RAID-5 groups (5 drives each) and 2 SAN hot spares.
> A raidz of these 8 RAID groups. ~14 TB usable.
>
> I did read in a FAQ that doing double redundancy is not recommended
> since parity would have to be calculated twice. I was wondering
>
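For readers following along, a rough sketch of what the layout quoted above would look like at the zpool level; the eight LUN device names are hypothetical stand-ins for the hardware RAID-5 groups:
  zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0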
(CC-ing both lists, since I posted the question to both under a different
subject)
sim wrote:
> Ok,
>
> In my situation the QLogic HBA cannot get in sync on the second FC port (I can
> see it on the switch).
> If you just want to bypass loading of the qlc driver, add this in GRUB before
> booting the system:
>
> -B disable-qlc=true
New ZFS user here.
NexSAN SATABeast with 42 500 GB SATA drives, dual-channel 4 Gb Fibre Channel.
Fibre network attached to Solaris 10 running on an HP blade. Possibly will add a
second Solaris blade for failover.
Now I am looking for reliability with decent disk space totals. I would prefer
not
Thanks for the responses.
Richard,
Yes, zpool status returns an error:
# zpool status -xv
pool: zpool1
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the de
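For context, a hedged sketch of the usual follow-up for a pool in this state; the pool name zpool1 comes from the output above, and whether clearing the counters is appropriate depends on what the FMA telemetry shows:
  fmdump -eV | tail      (inspect the most recent FMA error reports)
  zpool clear zpool1     (reset the error counters once the device is deemed healthy)
  zpool scrub zpool1     (re-verify all data on the pool)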
Hi again Cindy,
Well, I got the two new 1.5 TB disks, but I ran into a snag:
> a...@diotima:~# zpool attach rpool c3t0d0s0 c3t1d0
> cannot label 'c3t1d0': EFI labeled devices are not supported on root pools.
The Solaris 10 System Administration Guide: Devices and File Systems gives some
pertine
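For what it's worth, a hedged sketch of the usual workaround, assuming the new disk can be relabeled; relabeling destroys whatever is on c3t1d0, and the single-slice layout is an assumption:
  format -e c3t1d0       (write an SMI/VTOC label and create an s0 slice spanning the disk)
  zpool attach rpool c3t0d0s0 c3t1d0s0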
I'm using zfs not to have access to a fail-safe backed-up system, but to easily
manage my file system. I would like to be able, as I buy new hard drives, to
simply replace the old ones. I'm very environmentally conscious, so I don't want
to leave old drives in there to consume power a
cindy.swearin...@sun.com wrote:
> Hi Seymour,
>
> I didn't get a chance to reproduce this until today and I noticed
> that originally you used zfs destroy to remove the unwanted BE (b99).
>
> I tested these steps by using beadm destroy with the auto snapshots
> running and didn't see the proble
Glaser, David wrote:
> Hi all,
>
> A few weeks ago I was inquiring of the group on how often to do zfs
> scrubs of pools on our x4500's. Figures that the first time I try
> to do a monthly scrub of our pools, we get one of the three machines
> to throw an error. On one of the machines, there's o
Niall Power wrote:
> Hi all,
>
> A while back, I posted here about the issues ZFS has with USB hotplugging
> of ZFS formatted media when we were trying to plan an external media backup
> solution for time-slider:
> http://www.opensolaris.org/jive/thread.jspa?messageID=299501
>
> As well as the U
I'd first resolve the OBP and HBA fcode issues, then I'd verify that
you are starting from a cold-reset system. "Fast Data Access MMU Miss"
is a notorious problem on SF280R and is very often associated with
attempting to reboot after a warm cycle of the system.
We instituted a cold cycle of
Hi Seymour,
I didn't get a chance to reproduce this until today and I noticed
that originally you used zfs destroy to remove the unwanted BE (b99).
I tested these steps by using beadm destroy with the auto snapshots
running and didn't see the problems listed below. I think your
eventual beadm des
On Dec 15, 2008, at 01:13, Ahmed Kamal wrote:
> Hi,
>
> I have been doing some basic performance tests, and I am getting a
> big hit when I run UFS over a zvol, instead of directly using zfs.
> Any hints or explanations are very welcome. Here's the scenario. The
> machine has 30G RAM, and two
Ok,
In my situation the QLogic HBA cannot get in sync on the second FC port (I can
see it on the switch).
If you just want to bypass loading of the qlc driver, add this in GRUB before
booting the system:
-B disable-qlc=true
(in the line starting with kernel ...)
On my server, everything works fine without the qlc driver.
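In case it helps, a sketch of what the full kernel line in menu.lst might look like with that option appended; the kernel path and the existing $ZFS-BOOTFS argument are just the stock OpenSolaris defaults and may differ on your system:
  kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS,disable-qlc=true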
I was wondering: if I have a vdev set up and I present it to another box via
iSCSI, is there any way to grow that vdev?
For example, when I do this:
zfs create -V 100G mypool6/v1
zfs set shareiscsi=on mypool6/v1
can I then expand the 100G volume to, let's say, 150G?
I do not care about the file system on the
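A hedged sketch of the zvol side of this; growing the volume does not by itself grow whatever file system the iSCSI initiator has put on top of it:
  zfs set volsize=150G mypool6/v1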
On Tue, Dec 16, 2008 at 1:53 PM, Miles Nordin wrote:
> > "np" == Niall Power writes:
>
>np> So I'd like to ask if this is an appropriate use of ZFS mirror
>np> functionality?
>
> I like it a lot.
>
> I tried to set up something like that ad-hoc using a firewire disk on
> an Ultra10 a
There is a ZFS source tour:
http://www.opensolaris.org/os/community/zfs/source/
--
Prabahar.
On Dec 14, 2008, at 10:25 PM, kavita wrote:
> Is there documentation available for the zfs source code?
>
> "np" == Niall Power writes:
np> So I'd like to ask if this is an appropriate use of ZFS mirror
np> functionality?
I like it a lot.
I tried to set up something like that ad-hoc using a firewire disk on
an Ultra10 at first, and then, just as you thought, tried using one
firewire dis
Glaser, David wrote:
> Hi all,
[snipped]
> So, is there a way to see if it is a bad disk, or just zfs being a pain?
> Should I reset the checksum error counter and re-run the scrub?
You could try using smartctl to query the disk directly, although I
don't recall if it works on the x4500. Normal
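A hedged sketch of such a check; the device path is a hypothetical placeholder, and on the x4500's Marvell SATA controller smartctl may need a suitable -d option or may not work at all. iostat -En is a Solaris-native fallback that reports per-device hard/soft/transport error counters:
  smartctl -a /dev/rdsk/c0t0d0s0
  iostat -En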
Miles Nordin wrote:
>> "nw" == Nicolas Williams writes:
>
>
> nw> You're not required to go with one-filesystem-per-user though!
>
> It was pitched as an architectural advantage, but never fully
> delivered, and worse, used to justify removing traditional Unix
> quotas. Consequently, qu
> "nw" == Nicolas Williams writes:
nw> For NFSv4 clients that support mirror mounts it's not a problem
nw> at all.
No, 3000 - 10,000 users is common for a large campus, and according to
posters here, sometimes that many users actually can fit into the
bandwidth of a single pool. But
On Tue, Dec 16, 2008 at 12:05 PM, Glaser, David wrote:
> Hi all,
>
>
>
> A few weeks ago I was inquiring of the group on how often to do zfs scrubs
> of pools on our x4500's. Figures that the first time I try to do a monthly
> scrub of our pools, we get one of the three machines to throw an erro
Hi all,
A few weeks ago I was inquiring of the group on how often to do zfs scrubs of
pools on our x4500's. Figures that the first time I try to do a monthly scrub
of our pools, we get one of the three machines to throw an error. On one of the
machines, there's one disk that has registered one
On Tue, Dec 16, 2008 at 12:07:52PM +, Ross Smith wrote:
> It sounds to me like there are several potentially valid filesystem
> uberblocks available, am I understanding this right?
>
> 1. There are four copies of the current uberblock. Any one of these
> should be enough to load your pool wit
Andrew Gabriel wrote:
>
> Different USB memory sticks vary enormously in speed.
> The speed is often not described on the packaging, so it's often not
> possible to know how fast one is until after you've bought it and
> tried it.
>
This was tested with an external laptop hard disk inside a USB enc
Niall Power wrote:
>> Yes to both I believe, while the USB device is
>> attached your system will run slower, and it will run
>> considerably slower while replicating data.
>> Hopefully USB 3 or eSATA drives would address this
>> to some extent.
>
> I think I've confirmed this is the case, at lea
>
> Yes to both I believe, while the USB device is
> attached your system will run slower, and it will run
> considerably slower while replicating data.
> Hopefully USB 3 or eSATA drives would address this
> to some extent.
I think I've confirmed this is the case, at least in the configuration
I
Well done, Nathan, thank you for taking on the additional effort to write it all up.
I'd start by upgrading the FCode on the QLogic adapter as well as the OBP on
the server.
http://filedownloads.qlogic.com/Files/TempDownlods/20340/qla23xxFcode2.12.tar.Z
I'd also double-check your LUN security on the storage array. Seems to me
you might not have it configured properly, a
Stanislav Filippov wrote:
> Hi, Milek!
> I'm trying to get a Sun Fire 280R to boot a ZFS rpool, where 1 of the 2
> slices is on the SAN.
>
> here is a short listing of devices:
>
> {0} ok show-devs
>
> /p...@8,60/SUNW,q...@4
> /p...@8,60/SUNW,q...@4/f...@0,0
> /p...@8,60/SUNW,q...@4/f...@0
Hi, Milek!
I'm trying to get a Sun Fire 280R to boot a ZFS rpool, where 1 of the 2 slices
is on the SAN.
here is a short listing of devices:
{0} ok show-devs
/p...@8,60/SUNW,q...@4
/p...@8,60/SUNW,q...@4/f...@0,0
/p...@8,60/SUNW,q...@4/f...@0,0/disk
/p...@8,70/QLGC,q...@3
/p...@8,700
> One other question I have about using mirrors is
> potential performance implications.
> In a common scenario the user might be using the main
> S(ATA) attached disk and
> a USB external disk as a mirror configuration. Could
> the slower disk become a
> bottleneck because of its lower I/O read/
Hi Volker,
> Yes, by all means. I am doing something very similar
> on my T1000, but
> I have two separate one-disk pools and copy to the
> backup pool using
> rsync. I would very much like to replace this with
> automatic resilvering.
>
> One prerequisite for wide adoption would be to fix
> th
It sounds to me like there are several potentially valid filesystem
uberblocks available, am I understanding this right?
1. There are four copies of the current uberblock. Any one of these
should be enough to load your pool with no data loss.
2. There are also a few (would love to know how many)
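For anyone who wants to look at these directly, zdb can print them; a sketch, assuming a pool named tank and a hypothetical device path:
  zdb -u tank                 (print the active uberblock of an imported pool)
  zdb -l /dev/dsk/c0t0d0s0    (dump the four labels, which hold the uberblock arrays, on one device)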
> Does 1. really need to be fixed?
>
I'm not suggesting that it's currently broken; I'm just asking if it would be
reasonable to special-case our usage a little bit in order to avoid causing
unnecessary alarm to users. This will be seen as a fit-and-finish/polish issue.
If it's easy to address that then
On Tue, Dec 16, 2008 at 1:43 PM, wrote:
>
>
> >When current uber-block A is detected to point to corrupted on-disk data,
> >how would "zpool import" (or any other tool for that matter) quickly and
> >safely know, once it found an older uber-block "B", that it points to a
> >set of block
>When current uber-block A is detected to point to corrupted on-disk data,
>how would "zpool import" (or any other tool for that matter) quickly and
>safely know, once it found an older uber-block "B", that it points to a
>set of blocks which does not include any blocks that have since been
On Tue, Dec 16, 2008 at 11:39 AM, Ross wrote:
> I know Eric mentioned the possibility of zpool import doing more of this
> kind of thing, and he said that its current inability to do this will be
> fixed, but I don't know if it's an official project, RFE or bug. Can
> anybody shed some light on
I know Eric mentioned the possibility of zpool import doing more of this kind
of thing, and he said that its current inability to do this will be fixed, but
I don't know if it's an official project, RFE or bug. Can anybody shed some
light on this?
See Jeff's post on Oct 10, and Eric's follow
PS: One thing that really would be a useful extension of ZFS for this would be
the ability to mirror a raid-z volume. I know it's not in the spec; does
anybody know if this is even vaguely possible or whether there's an RFE for
this kind of functionality?
Does 1. really need to be fixed?
I ask this since I imagine there will be some resistance from the ZFS team to
essentially breaking the spec for the sake of not confusing some users.
I would argue that anybody who knows enough to run "zpool status" is also
capable of learning what a mirror is a
On Mon, Dec 15, 2008 at 7:57 PM, Thanos McAtos wrote:
> Hello all.
>
> I'm doing a course project to evaluate recovery time of RAID-Z.
>
> One of my tests is to examine the impact of aging on recovery speed.
>
> I've used PostMark to stress the file-system but I didn't observe any
> noticeable sl