On Fri, Jul 9, 2010 at 5:18 PM, Brandon High wrote:
> I think that DDT entries are a little bigger than what you're using. The
> size seems to range between 150 and 250 bytes depending on how it's
> calculated, call it 200b each. Your 128G dataset would require closer to
> 200M (+/- 25%) for the
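(As a rough sketch of that arithmetic, assuming ~200 bytes per DDT entry and the
default 128K recordsize; the numbers are ballpark, not exact:)

# 128 GiB / 128 KiB per block = 1,048,576 blocks
# 1,048,576 entries * ~200 bytes each ~= 200 MiB of dedup table
echo "1048576 * 200 / 1024 / 1024" | bc    # prints 200 (MiB)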
Hello, I'm trying to figure out why I'm getting about 10MB/s scrubs, on a pool
where I can easily get 100MB/s. It's 4x 1TB SATA2 (nv_sata), raidz. Athlon64
with 8GB RAM.
Here's the output while I "cat" an 8GB file to /dev/null
r...@solaris:~# zpool iostat 20
capacity operatio
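(When chasing a slow scrub it can help to watch the scrub progress line next to
the per-vdev numbers; a sketch, with "tank" as a placeholder pool name:)

zpool status tank        # scrub line shows percent done and estimated time to go
zpool iostat -v tank 20  # per-vdev bandwidth, sampled every 20 seconds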
>-Original Message-
>From: Erik Trimble
>Sent: Friday, July 09, 2010 6:45 PM
>Subject: Re: [zfs-discuss] block align SSD for use as a l2arc cache
>
>On 7/9/2010 5:55 PM, Geoff Nordli wrote:
>I have an Intel X25-M 80GB SSD.
>
>For optimum performance, I need to block align the S
On 07/09/10 19:40, Erik Trimble wrote:
On 7/9/2010 5:18 PM, Brandon High wrote:
On Fri, Jul 9, 2010 at 5:00 PM, Edward Ned Harvey
<solar...@nedharvey.com> wrote:
The default ZFS block size is 128K. If you have a filesystem
with 128G used, that means you are consuming 1,048,576
On 7/9/2010 5:55 PM, Geoff Nordli wrote:
I have an Intel X25-M 80GB SSD.
For optimum performance, I need to block align the SSD device, but I
am not sure exactly how I should do it.
If I run the format -> fdisk it allows me to partition based on a
cylinder, but I don't think that is suffici
On 7/9/2010 5:18 PM, Brandon High wrote:
On Fri, Jul 9, 2010 at 5:00 PM, Edward Ned Harvey
<solar...@nedharvey.com> wrote:
The default ZFS block size is 128K. If you have a filesystem with
128G used, that means you are consuming 1,048,576 blocks, each of
which must be checks
On 7/9/2010 2:55 PM, Peter Taps wrote:
Folks,
I would appreciate it if you can create a separate thread for Mac Mini.
Back to the original subject.
NetApp has deep pockets. A few companies have already backed out of zfs as they
cannot afford to go through a lawsuit. I am in a stealth startup
I have an Intel X25-M 80GB SSD.
For optimum performance, I need to block align the SSD device, but I am not
sure exactly how I should do it.
If I run the format -> fdisk it allows me to partition based on a cylinder,
but I don't think that is sufficient.
Can someone tell me how
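(One way to sidestep the alignment question, as a sketch rather than a definitive
answer: hand ZFS the whole SSD instead of an fdisk slice. Given a whole disk, ZFS
should put down an EFI label whose first slice starts at sector 256, which is
already aligned for the X25-M. With "tank" and the device name as placeholders:)

zpool add tank cache c2t1d0    # whole disk, no slice suffix, used as L2ARC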
On Fri, Jul 9, 2010 at 5:00 PM, Edward Ned Harvey wrote:
> The default ZFS block size is 128K. If you have a filesystem with 128G
> used, that means you are consuming 1,048,576 blocks, each of which must be
> checksummed. ZFS uses adler32 and sha256, which means 4 bytes and 32 bytes
> ... 36 byt
Whenever somebody asks the question, "How much memory do I need to dedup an
X-terabyte filesystem," the standard answer is "as much as you can afford to
buy." This is true and correct, but I don't believe it's the best we can
do. Because "as much as you can buy" is a true assessment for memory in
*
On Fri, Jul 9, 2010 at 6:49 PM, BJ Quinn wrote:
> I have a couple of systems running 2009.06 that hang on relatively large zfs
> send/recv jobs. With the -v option, I see the snapshots coming across, and
> at some point the process just pauses, IO and CPU usage go to zero, and it
> takes a har
On 07/10/10 09:49 AM, BJ Quinn wrote:
I have a couple of systems running 2009.06 that hang on relatively large zfs
send/recv jobs. With the -v option, I see the snapshots coming across, and at
some point the process just pauses, IO and CPU usage go to zero, and it takes a
hard reboot to get b
This thread from Marc Bevand and his blog linked therein might have some useful
alternative suggestions.
http://opensolaris.org/jive/thread.jspa?messageID=480925
I've bookmarked it because it's quite a handy summary and I hope he keeps
updating it with new info
First off, you need to test 3.0.3 if you're using dedup. Earlier
versions had an unduly large number of issues when used with dedup.
Hopefully with 3.0.3 we've got the bulk of the problems resolved. ;-)
Secondly, from your stack backtrace, yes, it appears ips is implicated.
If I had source for i
Hi,
I have been trying out the latest NexentaCore and NexentaStor Community
ed. builds (they have the driver I need built in) on the hardware I have
with this controller.
The only difference between the 2 machines is that the 'Core' machine
has 16GB of RAM and the 'Stor' one has 12GB.
On both
Folks,
I would appreciate it if you can create a separate thread for Mac Mini.
Back to the original subject.
NetApp has deep pockets. A few companies have already backed out of zfs as they
cannot afford to go through a lawsuit. I am in a stealth startup company and we
rely on zfs for our appli
I have a couple of systems running 2009.06 that hang on relatively large zfs
send/recv jobs. With the -v option, I see the snapshots coming across, and at
some point the process just pauses, IO and CPU usage go to zero, and it takes a
hard reboot to get back to normal. The same script running
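(Not a fix for the hang itself, but a workaround some people use on 2009.06 is to
put a buffer between the two ends so neither side stalls the other. A sketch,
assuming the third-party mbuffer tool is installed and using placeholder dataset
names:)

zfs send -v tank/fs@snap | mbuffer -s 128k -m 1G | zfs receive -F backup/fs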
On 07/10/10 08:10 AM, zfsnoob4 wrote:
I'm not trying to fix anything in particular, I'm just curious. In case I
roll back a filesystem and then realize I wanted a file from the original file
system (before rollback).
I read the section on clones here:
http://docs.sun.com/app/docs/doc/819-5461/
+1
I badly need this.
On 09/07/2010, at 19.40, Roy Sigurd Karlsbakk wrote:
> Does anyone know where in the pipeline BP rewrite is, or how long this
> pipeline is?
>
> You could move the data elsewhere using zfs send and recv, destroy the
> original datasets and then recreate them. This would str
On 9 Jul 2010, at 20:38, Garrett D'Amore wrote:
> On Fri, 2010-07-09 at 15:02 -0400, Miles Nordin wrote:
>>> "ab" == Alex Blewitt writes:
>>
>>ab> All Mac Minis have FireWire - the new ones have FW800.
>>
>> I tried attaching just two disks to a ZFS host using firewire, and it
>> worked
This is a hypothetical question that could actually happen:
Suppose a root pool is a mirror of c0t0d0s0 and c0t1d0s0
and for some reason c0t0d0s0 goes off line, but comes back
on line after a shutdown. The primary boot disk would then
be c0t0d0s0 which would have much older data than c0t1d0s0.
U
I'm not trying to fix anything in particular, I'm just curious. In case I
roll back a filesystem and then realize I wanted a file from the original file
system (before rollback).
I read the section on clones here:
http://docs.sun.com/app/docs/doc/819-5461/gavvx?a=view
but I'm still not sure wha
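(For the "I want a file back from before the rollback" case: as long as a snapshot
containing that file still exists, you can read it directly, since snapshots are
browsable read-only under .zfs; a clone gives you a writable copy. A sketch with
placeholder dataset and snapshot names:)

ls /tank/fs/.zfs/snapshot/yesterday/                   # browse files as of that snapshot
cp /tank/fs/.zfs/snapshot/yesterday/somefile /tank/fs/ # pull a single file back
zfs clone tank/fs@yesterday tank/fs_restore            # or make a writable clone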
Agreed! I'm not sure why Addonics is selling them, given the history of
problems. At any rate, I'm glad that I didn't pay anything for the three
that I have.
On Fri, Jul 9, 2010 at 1:56 PM, Brandon High wrote:
> On Fri, Jul 9, 2010 at 2:40 AM, Vladimir Kotal wrote:
>
>> Could you be more spe
On Fri, Jul 9, 2010 at 2:40 AM, Vladimir Kotal wrote:
> Could you be more specific about the problems with 88SE9123, especially
> with SATA ? I am in the process of setting up a system with AD2SA6GPX1 HBA
> based on this chipset (at least according to the product pages [*]).
>
http://lmgtfy.com/
On Fri, Jul 9, 2010 at 8:04 AM, Tony MacDoodle wrote:
> datapool/pluto refreservation 70G local
>
> This means that every snapshot will require 70G of free space?
>
No.
Could you provide the information requested?
-B
--
Brandon High : bh...@freaks.com
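(For what it's worth, a quick way to see where the space on that dataset actually
goes, on a build that has the usedby* properties:)

zfs list -o space datapool/pluto
zfs get refreservation,usedbyrefreservation,usedbysnapshots datapool/pluto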
On Fri, 2010-07-09 at 15:02 -0400, Miles Nordin wrote:
> > "ab" == Alex Blewitt writes:
>
> ab> All Mac Minis have FireWire - the new ones have FW800.
>
> I tried attaching just two disks to a ZFS host using firewire, and it
> worked very badly for me. I found:
>
> 1. The solaris fire
> "ab" == Alex Blewitt writes:
ab> All Mac Minis have FireWire - the new ones have FW800.
I tried attaching just two disks to a ZFS host using firewire, and it
worked very badly for me. I found:
1. The solaris firewire stack isn't as good as the Mac OS one.
2. Solaris is very obnoxi
I was going to suggest the export/import step next. :-)
I'm glad you were able to resolve it.
We are working on making spare behavior more robust.
In the meantime, my advice is keep life simple and do not share spares,
logs, caches, or even disks between pools.
Thanks,
Cindy
On 07/09/10 12
Cindy,
[IDGSUN02:/] root# cat /etc/release
Solaris 10 10/08 s10x_u6wos_07b X86
Copyright 2008 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 27 October 2008
But as noted
Ok, so after removing the spares marked as AVAIL and re-adding them, I put
myself back in the "you're effed, dude" boat. What I should have done at that
point is a zpool export/import, which would have resolved it.
So what I did was recreate the steps that got me into the stat
Hi Ryan,
Which Solaris release is this?
Thanks,
Cindy
On 07/09/10 10:38, Ryan Schwartz wrote:
Hi Cindy,
Not sure exactly when the drives went into this state, but it is likely that it
happened when I added a second pool, added the same spares to the second pool,
then later destroyed the se
Does anyone know where in the pipeline BP rewrite is, or how long this pipeline is?
You could move the data elsewhere using zfs send and recv, destroy the original
datasets and then recreate them. This would stripe the data across the vdevs.
Of course, when BP-rewrite becomes available it should
Hi Cindy,
Not sure exactly when the drives went into this state, but it is likely that it
happened when I added a second pool, added the same spares to the second pool,
then later destroyed the second pool. There have been no controller or any
other hardware changes to this system - it is all o
You could move the data elsewhere using zfs send and recv, destroy the
original datasets and then recreate them. This would stripe the data across
the vdevs. Of course, when BP-rewrite becomes available it should be
possible to simply redistribute blocks amongst the various vdevs without
having t
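(A sketch of that shuffle, with placeholder pool and dataset names; -R carries the
descendant datasets and snapshots along, and the scratch pool needs enough room
for a full copy:)

zfs snapshot -r tank/data@migrate
zfs send -R tank/data@migrate | zfs receive -d scratchpool   # copy off
zfs destroy -r tank/data
zfs send -R scratchpool/data@migrate | zfs receive -d tank   # copy back, now striped over all vdevs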
On Thu, Jul 08, 2010 at 08:42:33PM -0700, Garrett D'Amore wrote:
> On Fri, 2010-07-09 at 10:23 +1000, Peter Jeremy wrote:
> > In theory, collisions happen. In practice, given a cryptographic hash,
> > if you can find two different blocks or files that produce the same
> > output, please publicise
My advice would be to NOT use the AD2SA6GPX1 HBA for building an opensolaris
storage box. Although the AHCI driver will load, the drives are not visible
to the OS, and device configuration fails according to 'cfgadm -al'. I have
a couple of them that are now residing in a linux box as I was unabl
> From: Rich Teer [mailto:rich.t...@rite-group.com]
> Sent: Thursday, July 08, 2010 7:43 PM
>
> Yep. Provided it supported ZFS, a Mac Mini makes for a compelling SOHO
> server. The lack of ZFS is the main thing holding me back here...
I don't really want to go into much detail here (it's a zfs
On Jul 8, 2010, at 4:37 PM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Philippe Schwarz
>>
>> 3Ware cards
>>
>> Any drawback (except that without BBU, I've got a problem in case of power
>> loss) in enabling the
I use ZFS (on FreeBSD) for my home NAS. I started on 4 drives then added 4 and
have now added another 4, bringing the total up to 12 drives on 3 raidzs in 1
pool.
I was just wondering if there was any advantage or disadvantage to spreading
the data across the 3 raidz, as two are currently full
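(New writes are striped across all vdevs roughly in proportion to their free
space, but blocks already on disk stay where they are until rewritten. You can
see how lopsided the vdevs are with, for example, "pool" being a placeholder name:)

zpool iostat -v pool 10    # per-vdev alloc/free capacity plus current read/write load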
So it seems I used the right command. The output is very cryptic to me and I
cannot get any useful information out of it. I uploaded the output to a file hoster.
http://ifile.it/vzwn50s/Output.txt
I hope you can tell me what it means.
Regards
ron
On 07/ 9/10 09:58 AM, Brandon High wrote:
On Fri, Jul 9, 2010 at 12:42 AM, James Van Artsdalen
wrote:
If these 6 Gb/s controllers are based on the Marvell part I would test them
thoroughly before deployment - those chips have been problematic.
The Marvell 88SE9123 was the troublemaker, and
On Thu, Jul 1, 2010 at 1:33 AM, Lutz Schumann
wrote:
>
> Does anyone know why the dedup factor is wrong? Any insights on what has
> actually been written (compressed metadata, deduped metadata, etc.)
> would be greatly appreciated.
>
Metadata and ditto blocks. Even with dedup, zfs will write m
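(If you want to see what the dedup table actually holds versus what the pool
reports, zdb can dump DDT statistics; a sketch, with a placeholder pool name:)

zdb -DD tank               # DDT histogram: unique vs. duplicated blocks, in-core and on-disk sizes
zpool get dedupratio tank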
On Tue, Jul 6, 2010 at 10:05 AM, Roy Sigurd Karlsbakk wrote:
> The pool will remain available, but you will have data corruption. The
> simple way to avoid this, is to use a raidz2, where the chances are far
> lower for data corruption.
>
It's also possible to replace a drive while the failed / f
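(i.e., something along the lines of the following, with placeholder device names,
while the pool stays online:)

zpool replace tank c7t3d0 c7t5d0   # swap the failing disk for the new one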
On 9 Jul 2010, at 08:55, James Van Artsdalen wrote:
>> On Thu, 8 Jul 2010, Edward Ned Harvey wrote:
>> Yep. Provided it supported ZFS, a Mac Mini makes for
>> a compelling SOHO server.
>
> Warning: a Mac Mini does not have eSATA ports for external storage. It's
> dangerous to use USB for exte
On Thu, Jul 8, 2010 at 5:43 PM, Tony MacDoodle wrote:
> Any ideas???
Do you have a reservation set on the dataset? Can you post the output
of 'zfs list -o space' and 'zfs get all datapool/mars'?
-B
--
Brandon High : bh...@freaks.com
On Fri, Jul 9, 2010 at 12:42 AM, James Van Artsdalen
wrote:
> If these 6 Gb/s controllers are based on the Marvell part I would test them
> thoroughly before deployment - those chips have been problematic.
The Marvell 88SE9123 was the troublemaker, and it's not available
anymore. The 88SE9120, 8
> On Thu, 8 Jul 2010, Edward Ned Harvey wrote:
> Yep. Provided it supported ZFS, a Mac Mini makes for
> a compelling SOHO server.
Warning: a Mac Mini does not have eSATA ports for external storage. It's
dangerous to use USB for external storage since many (most? all?) USB->SATA
chips discard S
If these 6 Gb/s controllers are based on the Marvell part I would test them
thoroughly before deployment - those chips have been problematic.
A PCI-e SSD card is likely much faster than any SATA SSD.