On Mon, Apr 26, 2010 at 6:21 AM, Dave Pooser wrote:
> I have one storage server with 24 drives, spread across three controllers
> and split into three RAIDz2 pools. Unfortunately, I have no idea which bay
> holds which drive. Fortunately, this server is used for secondary storage so
> I can take i
> Then perhaps you should do zpool import -R / pool
> *after* you attach EBS.
> That way Solaris won't automatically try to import
> the pool and your
> scripts will do it once disks are available.
zpool import doesn't work as there was no previous export.
I'm trying to solve the case where the
Hi Tim,
thanks for sharing your dedup experience. Especially for Virtualization, having
a good pool of experience will help a lot of people.
So you see a dedup ratio of 1.29 for two installations of Windows Server 2008 on
the same ZFS backing store, if I understand you correctly.
What dedup rat
On 26/04/2010 09:27, Phillip Oldham wrote:
Then perhaps you should do zpool import -R / pool
*after* you attach EBS.
That way Solaris won't automatically try to import
the pool and your
scripts will do it once disks are available.
zpool import doesn't work as there was no previous export.
>
> On Jan 5, 2010, at 4:38 PM, Bob Friesenhahn wrote:
>> On Mon, 4 Jan 2010, Tony Russell wrote:
>>> I am under the impression that dedupe is still only in OpenSolaris
>>> and that support for dedupe is limited or non existent. Is this true? I would
> You don't have to do exports as I suggested to use
> 'zpool import -R / pool'
> (notice -R).
I tried this after your suggestion (including the -R switch) but it failed,
saying the pool I was trying to import didn't exist.
> If you do so that a pool won't be added to
> zpool.cache and therefore
> af
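For reference, a minimal sketch of the temporary import being described (the pool name "mypool" is an assumption):

  zpool import -R / mypool   # import under an alternate root; with -R the pool is not
                             # recorded in /etc/zfs/zpool.cache, so it will not be
                             # imported automatically at the next boot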
- "Dave Pooser" skrev:
> I'm building another 24-bay rackmount storage server, and I'm
> considering
> what drives to put in the bays. My chassis is a Supermicro SC846A, so
> the
> backplane supports SAS or SATA; my controllers are LSI3081E, again
> supporting SAS or SATA.
>
> Looking at dri
Hi,
I'm trying to let zfs users create and destroy snapshots in their zfs
filesystems.
So rpool/vm has the permissions:
osol137 19:07 ~: zfs allow rpool/vm
Permissions on rpool/vm -
Permission sets:
@virtual
clone,create,destroy,mount,prom
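For context, a permission set like @virtual above is defined and then delegated roughly as follows (a sketch; the exact permission list and the user name are assumptions):

  zfs allow -s @virtual clone,create,destroy,mount,promote,snapshot rpool/vm
  zfs allow -u someuser @virtual rpool/vm   # grant the set to a (hypothetical) user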
> From: Richard Elling [mailto:richard.ell...@gmail.com]
> Sent: Sunday, April 25, 2010 2:12 PM
>
> > E did exist. Inode 12345 existed, but it had a different name at the
> time
>
> OK, I'll believe you.
>
> How about this?
>
> mv a/E/c a/c
> mv a/E a/c
> mv a/c a/E
The thin
> From: Ian Collins [mailto:i...@ianshome.com]
> Sent: Sunday, April 25, 2010 5:09 PM
> To: Edward Ned Harvey
> Cc: 'Robert Milkowski'; zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] ZFS Pool, what happen when disk failure
>
> On 04/26/10 12:08 AM, Edward Ned Harvey wrote:
>
> [why do y
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Dave Pooser
>
> (lots of small writes/reads), how much benefit will I see from the SAS
> interface?
In some cases, SAS outperforms SATA. I don't know what circumstances those
are.
I think th
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Travis Tabbal
>
> I have a few old drives here that I thought might help me a little,
> though not at much as a nice SSD, for those uses. I'd like to speed up
> NFS writes, and there have been
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Travis Tabbal
Oh, one more thing. Your subject says "ZIL/L2ARC" and your message says "I
want to speed up NFS writes."
ZIL (log) is used for writes.
L2ARC (cache) is used for reads.
I'd reco
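For reference, dedicated log and cache devices are attached roughly like this (a sketch; pool and device names are assumptions):

  zpool add tank log c7t0d0     # separate ZIL (slog) device - helps synchronous writes, e.g. NFS
  zpool add tank cache c7t1d0   # L2ARC device - helps reads once the cache has warmed up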
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
>
> About SAS vs SATA, I'd guess you won't be able to see any change at
> all. The bottleneck is the drives, not the interface to them.
That doesn't agree with my understa
On 26/04/2010 11:14, Phillip Oldham wrote:
You don't have to do exports as I suggested to use
'zpool import -R / pool'
(notice -R).
I tried this after your suggestion (including the -R switch) but it failed,
saying the pool I was trying to import didn't exist.
which means it couldn't discov
A little while ago I found a pretty helpful video on YouTube
(http://www.youtube.com/watch?v=tpzsSptzmyA) on how to completely "migrate" from one hard drive to another.
> If your clients are mounting "async" don't bother.
> If the clients are
> mounting async, then all the writes are done
> asynchronously, fully
> accelerated, and never any data written to ZIL log.
I've tried async, things run well until you get to the end of the job, then the
process hangs unt
> > From: zfs-discuss-boun...@opensolaris.org
> [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Travis Tabbal
>
> Oh, one more thing. Your subject says "ZIL/L2ARC"
> and your message says "I
> want to speed up NFS writes."
>
> ZIL (log) is used for writes.
> L2ARC (cache) is used
On Apr 26, 2010, at 5:02 AM, Edward Ned Harvey wrote:
>> From: Richard Elling [mailto:richard.ell...@gmail.com]
>> Sent: Sunday, April 25, 2010 2:12 PM
>>
>>> E did exist. Inode 12345 existed, but it had a different name at the
>> time
>>
>> OK, I'll believe you.
>>
>> How about this?
>>
>>
On Apr 25, 2010, at 10:02 PM, Dave Pooser wrote:
> I'm building another 24-bay rackmount storage server, and I'm considering
> what drives to put in the bays. My chassis is a Supermicro SC846A, so the
> backplane supports SAS or SATA; my controllers are LSI3081E, again
> supporting SAS or SATA.
>
Hi,
The setup was this:
Fresh installation of 2008 R2 -> server backup with the backup feature -> move
vhd to zfs -> install active directory role -> backup again -> move vhd to same
share
I am kinda confused over the change of dedup ratio from changing the record
size, since it should ded
Hi Vlad,
The create-time permissions do not provide the correct permissions for
destroying descendent datasets, such as clones.
See example 9-5 in this section that describes how to use zfs allow -d
option to grant permissions on descendent datasets:
http://docs.sun.com/app/docs/doc/819-5461/ge
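A minimal sketch of the -d form being described (user and dataset names are assumptions):

  zfs allow -d someuser destroy,mount rpool/vm   # permissions apply to descendent datasets
                                                 # (e.g. clones) only, not to rpool/vm itself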
luxadm(1m) has a led_blink subcommand you might find useful.
-- richard
On Apr 25, 2010, at 10:21 PM, Dave Pooser wrote:
> I have one storage server with 24 drives, spread across three controllers
> and split into three RAIDz2 pools. Unfortunately, I have no idea which bay
> holds which drive. F
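A sketch of the luxadm approach (the device path is an assumption, and led_blink generally needs an enclosure with SES support):

  luxadm led_blink /dev/rdsk/c1t8d0s0   # blink the locate LED for the bay holding this disk
  luxadm led_off   /dev/rdsk/c1t8d0s0   # turn it back off afterwards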
Yes, it is helpful in that it reviews all the steps needed to get the
replacement disk labeled properly for a root pool and is identical
to what we provide in the ZFS docs.
The part that is not quite accurate is the reasoning behind having to relabel
the replacement disk with the format utility.
If
SAS: full duplex
SATA: half duplex
SAS: dual port
SATA: single port (some enterprise SATA has dual port)
SAS: 2 active channel - 2 concurrent write, or 2 read, or 1 write and 1 read
SATA: 1 active channel - 1 read or 1 write
SAS: Full error detection and recovery on both read and write
SATA: err
- "Tonmaus" skrev:
> I wonder if this is the right place to ask, as the Filesystem in User
> Space implementation is a separate project. In Solaris, ZFS runs in the
> kernel. FUSE implementations are slow, no doubt. Same goes for other
> FUSE implementations, such as for NTFS.
The classic answers
On Sun, Apr 25, 2010 at 10:02 PM, Dave Pooser wrote:
> Assuming I'm going to be using three 8-drive RAIDz2 configurations, and
> further assuming this server will be used for backing up home directories
> (lots of small writes/reads), how much benefit will I see from the SAS
> interface?
SAS driv
- "Brandon High" skrev:
> SAS drives are generally intended to be used in a multi-drive / RAID
> environment, and are delivered with TLER / CCTL / ERC enabled to
> prevent them from falling out of arrays when they hit a read error.
>
> SAS drives will generally have a longer warranty than de
- "Neil Simpson" skrev:
> I'm pretty sure Solaris 10 update 9 will have zpool version 22 so WILL
> have dedup.
Interesting - where did you get this information?
roy
On 4/26/10 10:10 AM, "Richard Elling" wrote:
> SAS shines with multiple connections to one or more hosts. Hence, SAS
> is quite popular when implementing HA clusters.
So that would be how one builds something like the active/active controller
failover in standalone RAID boxes. Is there a good r
On Mon, 26 Apr 2010, Roy Sigurd Karlsbakk wrote:
SAS drives will generally have a longer warranty than desktop drives.
With 2TB drives priced at €150 or lower, I somehow think paying for
drive lifetime is far more expensive than getting a few more drives
and adding redundancy
This really depe
I found the VHD specification here:
http://download.microsoft.com/download/f/f/e/ffef50a5-07dd-4cf8-aaa3-442c0673a029/Virtual%20Hard%20Disk%20Format%20Spec_10_18_06.doc
I am not sure if I understand it right, but it seems like data on disk gets
"compressed" into the vhd (no empty space), so even
On Mon, Apr 26, 2010 at 9:43 AM, Roy Sigurd Karlsbakk
wrote:
> The zfs fuse project will give you most of the nice zfs stuff, but it
> probably won't give you the same performance. I don't think opensolaris has
> been compared to FUSE ZFS, but it might be interesting to see that.
AFAIK zfs-fus
On 25 apr 2010, at 20.12, Richard Elling wrote:
> On Apr 25, 2010, at 5:45 AM, Edward Ned Harvey wrote:
>
>>> From: Richard Elling [mailto:richard.ell...@gmail.com]
>>> Sent: Saturday, April 24, 2010 7:42 PM
>>>
>>> Next,
>>> mv /a/e /a/E
>>> ls -l a/e/.snapshot/snaptime
>>>
>>> ENOENT
Hi Cindy,
> The create-time permissions do not provide the correct permissions for
> destroying descendent datasets, such as clones.
>
> See example 9-5 in this section that describes how to use zfs allow -d
> option to grant permissions on descendent datasets:
>
> http://docs.sun.com/app/docs/d
On Fri, Apr 16, 2010 at 4:41 PM, Brandon High wrote:
> When I set up my opensolaris system at home, I just grabbed a 160 GB
> drive that I had sitting around to use for the rpool.
Just to follow up, after testing in Virtualbox, my initial plan is
very close to what worked. This is what I did:
1.
> This really depends on if you are willing to pay in advance, or pay
> after the failure. Even with redundancy, the cost of a failure may be
>
> high due to loss of array performance and system administration time.
>
> Array performance may go into the toilet during resilvers, depending
> on
On Mon, Apr 26, 2010 at 01:32:33PM -0500, Dave Pooser wrote:
> On 4/26/10 10:10 AM, "Richard Elling" wrote:
>
> > SAS shines with multiple connections to one or more hosts. Hence, SAS
> > is quite popular when implementing HA clusters.
>
> So that would be how one builds something like the acti
Hello list,
a pool shows some strange status:
  volume: zfs01vol
   state: ONLINE
   scrub: scrub completed after 1h21m with 0 errors on Sat Apr 24 04:22:38 2010
  config:

        NAME        STATE     READ WRITE CKSUM
        zfs01vol    ONLINE       0     0     0
          mirror    ONLINE
On 04/27/10 09:41 AM, Lutz Schumann wrote:
Hello list,
a pool shows some strange status:
volume: zfs01vol
state: ONLINE
scrub: scrub completed after 1h21m with 0 errors on Sat Apr 24 04:22:38
        mirror     ONLINE       0     0     0
          c2t12d0  ONLINE       0
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
>
> With 2TB drives priced at €150 or lower, I somehow think paying for
> drive lifetime is far more expensive than getting a few more drives and
> adding redundancy
If you h
Hi Lutz,
You can try the following commands to see what happened:
1. Someone else might have replaced the disk with a spare, which would be
recorded in the output of this command:
# zpool history -l zfs01vol
2. If the disk had some transient outage then maybe the spare kicked
in. Use the following command to see if so
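A sketch of that kind of check (the specific commands below are assumptions, not necessarily the ones the message goes on to name):

  zpool history -l zfs01vol   # long format shows who ran replace/attach operations, and when
  zpool status -v zfs01vol    # shows whether a hot spare is currently active in the pool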
I went through with it and it worked fine. So, I could successfully move my ZFS
device to the beginning of the new disk.
Could this be a future enhancement for ZFS? Like providing a 'zfs move fs1/
fs2/', which would do the job without actually copying anything?
On Mon, Apr 26, 2010 at 8:51 AM, tim Kries wrote:
> I am kinda confused over the change of dedup ratio from changing the record
> size, since it should dedup 256-bit blocks.
Dedup works on blocks of either recordsize or volblocksize. The
checksum is made per block written, and those checksum
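For reference, the relevant knobs and the resulting ratio can be checked roughly like this (a sketch; dataset and pool names are assumptions):

  zfs get recordsize,dedup tank/vm   # block size dedup operates on, and whether dedup is enabled
  zpool list tank                    # the DEDUP column shows the pool-wide dedup ratio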
On Mon, Apr 26, 2010 at 8:01 AM, Travis Tabbal wrote:
> At the end of my OP I mentioned that I was interested in L2ARC for dedupe. It
> sounds like the DDT can get bigger than RAM and slow things to a crawl. Not
> that I expect a lot from using an HDD for that, but I thought it might help.
> I'
On 26/04/10 03:02 PM, Dave Pooser wrote:
I'm building another 24-bay rackmount storage server, and I'm considering
what drives to put in the bays. My chassis is a Supermicro SC846A, so the
backplane supports SAS or SATA; my controllers are LSI3081E, again
supporting SAS or SATA.
Looking at drive
Hello.
If anybody has used an SSD for rpool for more than half a year, can you post SMART
information about the HostWrites attribute?
I want to see how SSDs wear when used as system disks.
On Mon, Apr 26, 2010 at 10:02:42AM -0700, Chris Du wrote:
> SAS: full duplex
> SATA: half duplex
>
> SAS: dual port
> SATA: single port (some enterprise SATA has dual port)
>
> SAS: 2 active channel - 2 concurrent write, or 2 read, or 1 write and 1 read
> SATA: 1 active channel - 1 read or 1 writ
On 04/26/10 11:54 PM, Yuri Vorobyev wrote:
Hello.
If anybody has used an SSD for rpool for more than half a year, can you post SMART
information about the HostWrites attribute?
I want to see how SSDs wear when used as system disks.
I'd be happy to; exactly what commands shall I run?
Paul
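One possibility, assuming smartmontools is installed and the rpool SSD is c0t0d0 (a hypothetical device name; the -d type may need adjusting for your controller):

  smartctl -A -d sat /dev/rdsk/c0t0d0s0   # dumps the vendor SMART attribute table,
                                          # including a host-writes counter where the drive reports one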