- Original Message -
> First, this is under FreeBSD, but it isn't specific to that OS, and it
> involves some technical details beyond normal use, so I'm trying my
> luck here.
>
> I have a pool (around version 14) with a corrupted log device that's
> irrecoverable. I found a tool called l
On 18 Oct 2010, at 17:44, Habony, Zsolt wrote:
> Thank you all for the comments.
>
> You should imagine a datacenter with
> - standards that are not entirely up to me.
> - a SAN serving many OSes, one of them Solaris (and not the majority).
So you get LUNs from the storage team and there is no
What would the performance impact be of splitting up a 64 GB SSD into four
partitions of 16 GB each versus having the entire SSD dedicated to each
pool?
Scenario A:
2 TB Mirror w/ 16 GB read cache partition
2 TB Mirror w/ 16 GB read cache partition
2 TB Mirror w/ 16 GB read cache partition
2 TB Mi
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
>
> Last I checked, you lose the pool if you lose the slog on zpool
> versions < 19. I don't think there is a trivial way around this.
You should plan for this to be true
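If you want to check where a given pool sits relative to that cutoff, something
like the following works (the pool name is just an example):

    zpool get version tank   # shows the pool's current on-disk version
    zpool upgrade -v         # lists all versions the software supports;
                             # "Log device removal" shows up as version 19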
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Gil Vidals
>
> What would the performance impact be of splitting up a 64 GB SSD into
> four partitions of 16 GB each versus having the entire SSD dedicated to
> each pool?
This is a common que
Hi,
I have two questions:
1) Is there any way of renaming a zpool without export/import?
2) If I take a hardware snapshot of the devices under a zpool (where the
snapshot device will be an exact copy including metadata, i.e. the zpool and
its associated file systems), is there any way to rename the zpool name of the snap
Hi.
I have a pool with 3 raidz1 vdevs (5*1.5TB + 5*1.5TB + 5*1TB), and I
want to create 6-disk raidz2 vdevs instead. I've bought 12 2TB drives,
and I already have additional 1.5TB and 1TB drives. My cabinet can
only hold 24 drives (connected to an LSI SAS controller, and a
Supermicro SAS backplane
On Tue, 19 Oct 2010, Gil Vidals wrote:
What would the performance impact be of splitting up a 64 GB SSD
into four partitions of 16 GB each versus having the entire SSD
dedicated to each pool?
Ignore Edward Ned Harvey's response because he answered the wrong
question.
For a L2ARC device, th
On Tue, 19 Oct 2010, Trond Michelsen wrote:
Anyway - I'm wondering what is the best way to migrate the data in
this system? I'm assuming that upgrading a raidz1 vdev to raidz2 is
not possible, and I have to create a new pool, zfs send all the
datasets and destroy the old pool. Is that correct?
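If it comes to that, a minimal sketch of the send/receive migration, with
hypothetical pool and snapshot names:

    zfs snapshot -r tank@migrate                        # recursive snapshot of every dataset
    zfs send -R tank@migrate | zfs receive -F newtank   # replicate the whole hierarchy
    # after verifying the copy, the old pool can be destroyed and its disks reused
    zpool destroy tank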
Hi all
I have this server with some 50TB of disk space. It originally had 30TB on WD
Greens, was filled quite full, and another storage chassis was added. Now the
space problem is gone, fine, but what about speed? Three of the VDEVs are quite
full, as indicated below. VDEV #3 (the one with the spare act
Obviously, I meant VDEVs, not LVOLs... It's been a long day...
- Original Message -
> Hi all
>
> I have this server with some 50TB of disk space. It originally had 30TB
> on WD Greens, was filled quite full, and another storage chassis was
> added. Now the space problem is gone, fine, but what abo
We tried this in our environment and found that it didn't work out. The more
partitions we used, the slower it went. We decided just to use the entire SSD
as a read cache and it worked fine. It still has the TRIM issue, of course, until
the next version.
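For reference, a rough sketch of the two layouts being compared, using
hypothetical pool and device names:

    # Scenario A: one slice of a shared SSD as L2ARC per pool
    zpool add tank1 cache c4t0d0s0
    zpool add tank2 cache c4t0d0s1
    # Scenario B: a whole SSD dedicated to each pool
    zpool add tank1 cache c4t0d0
    zpool add tank2 cache c5t0d0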
Nicolas Williams [mailto:nicolas.willi...@oracle.com] wrote:
> It's the sticky bit. Nowadays it's only useful on directories, and
> really it's generally only used with 777 permissions. The chmod(1)
Thanks. It doesn't seem harmful. But it does make me wonder why it's showing
up on my newly-c
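For reference, /tmp is the classic case, and the bit is easy to inspect or set
by hand (the directory name below is just an example):

    ls -ld /tmp                  # typically drwxrwxrwt - the trailing 't' is the sticky bit
    chmod 1777 /export/scratch   # set it, with full permissions, on a shared directory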
On 19 October, 2010 - Linder, Doug sent me these 1,2K bytes:
> Nicolas Williams [mailto:nicolas.willi...@oracle.com] wrote:
>
> > It's the sticky bit. Nowadays it's only useful on directories, and
> > really it's generally only used with 777 permissions. The chmod(1)
>
> Thanks. It doesn't se
Based on the answers I received, I will stick to an SSD device fully
dedicated to each pool. This means I will have four SSDs and four pools.
This seems acceptable to me as it keeps things simpler and if one SSD
(L2ARC) fails, the others are still working correctly.
Thank you.
Gil Vidals
On Tue
- Original Message -
Based on the answers I received, I will stick to an SSD device fully dedicated
to each pool. This means I will have four SSDs and four pools. This seems
acceptable to me as it keeps things simpler and if one SSD (L2ARC) fails, the
others are still working correctl
On Mon, Oct 18, 2010 at 8:18 PM, Simon Breden wrote:
> So are we all agreed then, that a vdev failure will cause pool loss ?
> --
unless you use copies=2 or 3, in which case your data is still safe
for those datasets that have this option set.
--
- Tuomas
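For what it's worth, copies is a per-dataset property and only affects blocks
written after it is set; a hypothetical example:

    zfs set copies=2 tank/important   # keep two copies of each block in this dataset
    zfs get copies tank/important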
On Mon, Oct 18, 2010 at 4:55 PM, Edward Ned Harvey wrote:
> Thank you, but, the original question was whether a scrub would identify
> just corrupt blocks, or if it would be able to map corrupt blocks to a list
> of corrupt files.
>
Just in case this wasn't already clear.
After scrub sees read o
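Concretely, the per-file detail shows up in the verbose status output once a
scrub has run (the pool name is just an example):

    zpool scrub tank
    zpool status -v tank   # -v lists the affected files under
                           # "Permanent errors have been detected in the following files:"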
Hi Sridhar,
The answer to the first question is definitely no:
No way exists to change a pool name without exporting and importing
the pool. I thought we had an open CR that covered renaming pools but I
can't find it.
The underlying pool devices contain pool information and no easy way
exists t
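For completeness, the supported rename path goes through export/import,
roughly (pool names are hypothetical):

    zpool export tank
    zpool import tank newname   # re-import the same pool under a new name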
Tuomas:
My understanding is that the "copies" functionality doesn't guarantee that
the extra copies will be kept on a different vdev. So that isn't entirely
true. Unfortunately.
On 20 October 2010 07:33, Tuomas Leikola wrote:
> On Mon, Oct 18, 2010 at 8:18 PM, Simon Breden wrote:
> > So are we
On 10/19/10 14:33, Tuomas Leikola wrote:
On Mon, Oct 18, 2010 at 8:18 PM, Simon Breden wrote:
So are we all agreed then, that a vdev failure will cause pool loss ?
--
unless you use copies=2 or 3, in which case your data is still safe
for those datasets that have this option set.
This adv
I have a Solaris 10 U8 box (142901-14) running as an NFS server with
a 23-disk zpool behind it (three RAIDZ2 vdevs).
We have a single Intel X-25E SSD operating as an slog ZIL device
attached to a SATA port on this machine's motherboard.
The rest of the drives are in a hot-swap enclosure.
Infrequ
On Oct 19, 2010, at 4:33 PM, Tuomas Leikola wrote:
> On Mon, Oct 18, 2010 at 8:18 PM, Simon Breden wrote:
>> So are we all agreed then, that a vdev failure will cause pool loss ?
>> --
>
> unless you use copies=2 or 3, in which case your data is still safe
> for those datasets that have this op
A bit over a year ago I posted about a problem I was having with live
upgrade on a system with lots of file systems mounted:
http://opensolaris.org/jive/thread.jspa?messageID=411137
An official Sun support call was basically just closed with no
resolution. I was quite fortunate that Jens El
Sorry, I couldn't find this anywhere yet. For deduping it is best to have the
lookup table in RAM, but I wasn't sure how much RAM is recommended.
::Assuming 128KB Block Sizes, and 100% unique data:
1TB*1024*1024*1024/128 = 8388608 Blocks
::Each Block needs 8 byte pointer?
8388608*8 = 67108864 b
On Tue, 19 Oct 2010, Cindy Swearingen wrote:
unless you use copies=2 or 3, in which case your data is still safe
for those datasets that have this option set.
This advice is a little too optimistic. Increasing the copies property
value on datasets might help in some failure scenarios, but prob
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Trond Michelsen
>
> Hi.
I think everything you said sounds perfectly right.
As for estimating the time required to "zfs send" ... I don't know how badly
"zfs send" gets hurt by the on-disk or
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
>
> Ignore Edward Ned Harvey's response because he answered the wrong
> question.
Indeed.
Although, now that I go back and actually read the question correctly, I
wonder why n
On 2010-Oct-20 08:36:30 +0800, Never Best wrote:
>Sorry I couldn't find this anywhere yet. For deduping it is best to
>have the lookup table in RAM, but I wasn't too sure how much RAM is
>suggested?
*Lots*
>::Assuming 128KB Block Sizes, and 100% unique data:
>1TB*1024*1024*1024/128 = 8388608 Bl
Ouch. I was thinking a DDT entry basically just needs an 8-byte pointer to
wherever the data is located on disk, with an O(1) hash table for lookup, and
maybe some redundancy/error-correction data. Maybe that should get optimized;
a lightweight version for NB ;).
I guess it is doing more tha
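For a rough sense of scale: the per-entry figure usually quoted on this list is
a few hundred bytes rather than 8, so redoing the arithmetic with an assumed
~320 bytes per DDT entry:

    1TB / 128KB = 8388608 unique blocks
    8388608 * ~320 bytes = ~2.5 GB of DDT per TB of unique data

and that table competes with everything else for ARC space, hence the "*Lots*".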