Since ZFS already has error correction, would drives that limit the time spent
attempting to recover from errors, such as WD RE or Seagate ES drives, be
necessary? Would it be safe to use standard hard drives without the Time
Limited Error Recovery feature in a RAIDZ array?
On Thu, 18 Jan 2007, . wrote:
> Looking around, there still is not a good "these cards/motherboards" work
> list. The HCL is hardly ever updated, and it's far more geared towards
> business use than hobbyist/home use. So bearing all of that in mind I
> will need the following things:
> 1. At least
Toby Thain:
>
> On 18-Jan-07, at 9:55 PM, Jason J. W. Williams wrote:
>
> > Hi Frank,
> >
> > What do they [not] support?
>
> Hotplug.
And NCQ. And SMART.
-frank
On 1/18/07, Christophe Dupré <[EMAIL PROTECTED]> wrote:
I've been looking for the patches to get the latest ZFS bits for S10U2,
like kernel patch 108833-30, but I can't find them on sunsolve.sun.com.
The latest seems to be 108833-24.
Is there any other location I should look for the patches ?
If y
On 1/18/07, . <[EMAIL PROTECTED]> wrote:
Looking around, there still is not a good "these cards/motherboards" work list.
The HCL is hardly ever updated, and it's far more geared towards business use than
hobbyist/home use.
Yes, this is true. This list is the best resource I have found so far,
a
> Jeremy Teo wrote:
> > On the issue of the ability to remove a device from
> a zpool, how
> > useful/pressing is this feature? Or is this more
> along the line of
> > "nice to have"?
>
> This is a pretty high priority. We are working on
> it.
Good news! Where is the discussion on the best appr
I get that part. I think I asked that question before (although not as
directly) - basically you're talking about the ability to shrink volumes
and/or disable/change the mirroring/redundancy options if there is
space available to account for it.
If this was allowed, this would also allow for a conv
2007/1/18, . <[EMAIL PROTECTED]>:
2. What consumer-level SATA II chipsets work? Four onboard ports are fine for now
since I can always add a card later. I will need at least four ports to start.
PCI-E cards are highly preferred since PCI-X is expensive and going to become
rarer (mark my words).
S
Couldn't this be considered a compatibility list that we can trust for
OpenSolaris and ZFS?
http://www.sun.com/io_technologies/
I've been looking at it for the past few days. I am looking for eSATA
support options - more details below.
Only 2 devices on the list show support for eSATA, both are
On January 18, 2007 6:27:14 PM -0800 "." <[EMAIL PROTECTED]> wrote:
Looking around, there still is not a good "these cards/motherboards" work
list.
You must have just missed the "What SATA controllers are people using
for ZFS?" thread. Not a list, but you can probably find similar components
to
So after toying around with some stuff a few months back I got bogged down and
set this project aside for a while. Time to revisit.
Looking around, there still is not a good "these cards/motherboards" work list.
The HCL is hardly ever updated, and it's far more geared towards business use
than
Mike,
I think you are missing the point. What we are talking about is
removing a drive from a zpool, that is, reducing the zpool's total
capacity by a drive. Say you have four 100GB drives
configured as a striped mirror, for 200GB of usable capacity. We're
discussing the case where if the
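For illustration, that striped-mirror layout could be built roughly like this
(pool and device names here are hypothetical, not taken from the thread):
# zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
# zpool list tank
(zpool list should report roughly 200GB usable across the two mirror pairs;
what is being discussed is removing one of those mirror pairs afterwards.)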
What is the technical difference between forcing a removal and an
actual failure?
Isn't it the same process, except one is manually triggered? I would
assume the same resilvering process happens when a usable drive is put
back in...
On 1/18/07, Wee Yeh Tan <[EMAIL PROTECTED]> wrote:
Not quite.
On 1/19/07, mike <[EMAIL PROTECTED]> wrote:
Would this be the same as failing a drive on purpose to remove it?
I was under the impression that was supported, but I wasn't sure if
shrinking a ZFS pool would work though.
Not quite. I suspect you are thinking about drive replacement rather
than
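Roughly speaking, and with hypothetical pool/device names, replacement looks
like this and leaves capacity unchanged:
# zpool replace tank c0t2d0 c0t4d0
(swap a failed or unwanted disk for a new one; the pool keeps the same width)
# zpool detach tank c0t1d0
(drop one side of a mirror; again, no change in the pool's capacity)
Shrinking the pool by removing a whole top-level vdev is the part that, per
the rest of this thread, is still being worked on.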
Hi Toby,
Thanks for the links. That's interesting. I assume this goes forward
to the M2s. Glad hot-swap isn't a requirement where we use them.
Best Regards,
Jason
On 1/18/07, Toby Thain <[EMAIL PROTECTED]> wrote:
On 18-Jan-07, at 9:55 PM, Jason J. W. Williams wrote:
> Hi Frank,
>
> What do t
On 18-Jan-07, at 9:55 PM, Jason J. W. Williams wrote:
Hi Frank,
What do they [not] support?
Hotplug.
See, inter alia,
http://groups.google.com/group/comp.unix.solaris/msg/56e9e341607aa984
http://groups.google.com/group/comp.unix.solaris/msg/9c0afc2668207d36
--Toby
We've had some various s
Please don't top-post. It's annoying.
On January 18, 2007 4:55:35 PM -0700 "Jason J. W. Williams"
<[EMAIL PROTECTED]> wrote:
On 1/18/07, Frank Cusack <[EMAIL PROTECTED]> wrote:
On January 18, 2007 4:45:49 PM -0700 "Jason J. W. Williams"
<[EMAIL PROTECTED]> wrote:
> Sun doesn't support the X21
For background on what this is, see:
http://www.opensolaris.org/jive/message.jspa?messageID=24416#24416
http://www.opensolaris.org/jive/message.jspa?messageID=25200#25200
=
zfs-discuss 01/01 - 01/15
=
Size of all threads during per
Hi Frank,
What do they support? We've had various service issues with the
NICs on the original X2100...which they gave us some flak over because
we were running Gentoo. Once we proved it on Solaris 10 Update 2 (at
the time) they got on board with the problem.
Best Regards,
Jason
On 1/18/07, F
On January 18, 2007 4:45:49 PM -0700 "Jason J. W. Williams"
<[EMAIL PROTECTED]> wrote:
Sun doesn't support the X2100 SATA controller on Solaris 10? That's
just bizarre.
Not only that, their marketing is misleading (at best) on the issue.
-frank
Hi Frank,
Sun doesn't support the X2100 SATA controller on Solaris 10? That's
just bizarre.
-J
On 1/18/07, Frank Cusack <[EMAIL PROTECTED]> wrote:
THANK YOU Naveen, Al Hopper, others, for sinking yourselves into the
shit world of PC hardware and [in]compatibility and coming up with
well qualif
THANK YOU Naveen, Al Hopper, others, for sinking yourselves into the
shit world of PC hardware and [in]compatibility and coming up with
well qualified white box solutions for S10.
I strongly prefer to buy Sun kit, but I am done waiting for Sun to support
the SATA controller on the x2100.
-frank
Of course, I meant 118833, not 108833... :-(
Christophe Dupré wrote:
> I've been looking for the patches to get the latest ZFS bits for S10U2,
> like kernel patch 108833-30, but I can't find them on sunsolve.sun.com.
> The latest seems to be 108833-24.
>
> Is there any other location I should look for
I've been looking for the patches to get the latest ZFS bits for S10U2,
like kernel patch 108833-30, but I can't find them on sunsolve.sun.com.
The latest seems to be 108833-24.
Is there any other location I should look for the patches ?
--
Christophe Dupré
Senior Unix and Network Administrator
(514
On Thu, Jan 18, 2007 at 11:37:26PM +0100, Henk Langeveld wrote:
> When ZFS was first announced, one argument was how ZFS complexity and
> code size were actually significantly less than, for instance, UFS+SVM.
>
> Over a year has passed, and I wonder how code size has grown since, with
> all of the
When ZFS was first announced, one argument was how ZFS complexity and
code size were actually significantly less than, for instance, UFS+SVM.
Over a year has passed, and I wonder how code size has grown since, with
all of the features that have been added.
Has anyone kept track of this? Would it
Celso wrote:
> Both removing disks from a zpool and modifying raidz arrays would be very
> useful.
Add my vote for this.
Both removing disks from a zpool and modifying raidz arrays would be very
useful.
I would also still love to have ditto data blocks. Is there any progress on
this?
Celso.
Would this be the same as failing a drive on purpose to remove it?
I was under the impression that was supported, but I wasn't sure if
shrinking a ZFS pool would work though.
On 1/18/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> > This is a pretty high priority. We are working on it.
[EMAIL PROTECTED] wrote on 01/18/2007 01:29:23 PM:
> On Thu, 2007-01-18 at 10:51 -0800, Matthew Ahrens wrote:
> > Jeremy Teo wrote:
> > > On the issue of the ability to remove a device from a zpool, how
> > > useful/pressing is this feature? Or is this more along the line of
> > > "nice to h
On Thu, 2007-01-18 at 10:51 -0800, Matthew Ahrens wrote:
> Jeremy Teo wrote:
> > On the issue of the ability to remove a device from a zpool, how
> > useful/pressing is this feature? Or is this more along the line of
> > "nice to have"?
>
> This is a pretty high priority. We are working on it.
>
Rats, didn't proof accurately. For "UFS", I meant NFS.
Rainer
Sorry, I should have qualified that "effective" better. I was specifically
speaking in terms of Solaris and price. For companies without a SAN (especially
using Linux), something like a NetApp Filer using UFS is the way to go, I
realize. If you're running Solaris, the cost of QFS becomes a major
Jeremy Teo wrote:
On the issue of the ability to remove a device from a zpool, how
useful/pressing is this feature? Or is this more along the line of
"nice to have"?
This is a pretty high priority. We are working on it.
--matt
Rainer Heilke wrote:
If you plan on RAC, then ASM makes good sense. It is
unclear (to me anyway)
if ASM over a zvol is better than ASM over a raw LUN.
Hmm. I thought ASM was really the _only_ effective way to do RAC,
but then, I'm not a DBA (and don't want to be ;-) We'll be just
using raw
Karen Chau wrote:
> How do you reconfigure ZFS on the server after an OS upgrade? I have a
> ZFS pool on a 6130 storage array.
> After upgrade the data on the storage array is still intact, but ZFS
> configuration is gone due to new OS.
>
> Do I use the same commands/procedure to recreate the zpo
How do you reconfigure ZFS on the server after an OS upgrade? I have a
ZFS pool on a 6130 storage array.
After upgrade the data on the storage array is still intact, but ZFS
configuration is gone due to new OS.
Do I use the same commands/procedure to recreate the zpool, i.e.
# zpool create cana
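Assuming the on-disk data really is intact, recreating the pool with zpool
create would destroy it; what usually works after an OS reinstall is an
import (the pool name below is just a placeholder, not the one truncated
above):
# zpool import
(scans the attached devices and lists any pools available for import)
# zpool import mypool
(imports the pool by name; -f may be needed if it was not exported cleanly
from the old OS)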
Rainer,
Have you considered looking for a patch? If you have the supported
version(s) of Solaris (which it sounds like you do), this may already be
available in a patch.
Bev.
Rainer Heilke wrote:
Thanks for the detailed explanation of the bug. This makes it clearer to us as
to what's happeni
> If you plan on RAC, then ASM makes good sense. It is
> unclear (to me anyway)
> if ASM over a zvol is better than ASM over a raw LUN.
Hmm. I thought ASM was really the _only_ effective way to do RAC, but then, I'm
not a DBA (and don't want to be ;-) We'll be just using raw LUNs. While the
z
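For reference, a zvol that ASM could sit on is just a block/raw device under
/dev/zvol; a minimal sketch with hypothetical names:
# zfs create -V 100g tank/asmvol01
(the volume then shows up as /dev/zvol/dsk/tank/asmvol01 and, for raw access,
/dev/zvol/rdsk/tank/asmvol01)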
Thanks for the detailed explanation of the bug. This makes it clearer to us as
to what's happening, and why (which is something I _always_ appreciate!).
Unfortunately, U4 doesn't buy us anything for our current problem.
Rainer
> > This problem was fixed in snv_48 last September and will be
> > in S10_U4.
U4 doesn't help us any. We need the fix now. :-( By the time U4 is out, we may
even be finished with (or certainly well on our way through) our RAC/ASM
migration, and this whole issue will be moot.
Rainer
> Bag-o-tricks-r-us, I suggest the following in such a case:
>
> - Two ZFS pools
> - One for production
> - One for Education
The DBAs are very resistant to splitting our whole environments. There are
nine on the test/devl server! So, we're going to put the DB files and redo logs
on separate
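For reference, the two-pool split suggested above would look something like
this (pool, device, and dataset names are hypothetical):
# zpool create prodpool mirror c2t0d0 c2t1d0
# zpool create edupool mirror c3t0d0 c3t1d0
# zfs create prodpool/oradata
# zfs create prodpool/redo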
I can vouch for this situation. I had to go through a long maintenance to
accomplish the following:
- 50 x 64GB drives in a zpool; needed to separate 15 of them out due to
performance issues. There was no need to increase storage capacity.
Because I couldn't yank 15 drives from the existing
On 18/01/2007, at 9:55 PM, Jeremy Teo wrote:
On the issue of the ability to remove a device from a zpool, how
useful/pressing is this feature? Or is this more along the line of
"nice to have"?
Assuming we're talking about removing a top-level vdev..
I introduce new sysadmins to ZFS on a weekly
On 15/01/07, Rick McNeal <[EMAIL PROTECTED]> wrote:
On Jan 15, 2007, at 8:34 AM, Dick Davies wrote:
> Hi, are there currently any plans to make an iSCSI target created by
> setting shareiscsi=on on a zvol
> bindable to a single interface (setting tpgt or acls)?
We're working on some more i
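For context, the zvol-backed target being discussed is set up roughly like
this (names and size are hypothetical); binding it to a single interface via
a tpgt or ACL is the part being asked about:
# zfs create -V 10g tank/iscsivol
# zfs set shareiscsi=on tank/iscsivol
# iscsitadm list target
(the zvol should now appear as a target under the Solaris iSCSI target daemon)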
On 18/01/07, Jeremy Teo <[EMAIL PROTECTED]> wrote:
On the issue of the ability to remove a device from a zpool, how
useful/pressing is this feature? Or is this more along the line of
"nice to have"?
It's very useful if you accidentally create a concat rather than a mirror
of an existing zpool. Ot
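The usual way that happens is mixing up two subcommands; a sketch with
hypothetical names:
# zpool attach tank c0t0d0 c1t0d0
(attaches the new disk as a mirror of the existing one, which is what was
intended)
# zpool add tank c1t0d0
(adds the disk as a new top-level vdev, i.e. a concat/stripe, which cannot
currently be undone)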
On Thu, Jan 18, 2007 at 06:55:39PM +0800, Jeremy Teo wrote:
> On the issue of the ability to remove a device from a zpool, how
> useful/pressing is this feature? Or is this more along the line of
> "nice to have"?
If you think "remove a device from a zpool" = "to shrink a pool" then
it is really u
On the issue of the ability to remove a device from a zpool, how
useful/pressing is this feature? Or is this more along the line of
"nice to have"?
--
Regards,
Jeremy
Jason J. W. Williams writes:
> Hi Anantha,
>
> I was curious why segregating at the FS level would provide adequate
> I/O isolation? Since all FS are on the same pool, I assumed flogging a
> FS would flog the pool and negatively affect all the other FS on that
> pool?
>
> Best Regards,
If some aspect of the load is writing large amounts of data
into the pool (through the memory cache, as opposed to the
ZIL) and that leads to a frozen system, I think that a
possible contributor should be:
6429205: each zpool needs to monitor its throughput and throttle heavy
wri
Hi,
Was wondering if anyone had experience working with VxVM volumes in a
zpool. We are using VxVM 5.0 on a Solaris 10 11/06 box. The volume is on a
SAN, with two FC HBAs connected to a fabric.
The setup works, but we observe a very strange message on bootup. The
bootup screen is attached at
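For anyone trying to reproduce this, the setup in question is roughly as
follows (disk group, volume, and pool names are hypothetical):
# vxassist -g datadg make zfsvol 100g
# zpool create tank /dev/vx/dsk/datadg/zfsvol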