On 06/14/10 19:35, Erik Trimble wrote:
On 6/14/2010 12:10 PM, Neil Perrin wrote:
On 06/14/10 12:29, Bob Friesenhahn wrote:
On Mon, 14 Jun 2010, Roy Sigurd Karlsbakk wrote:
It is good to keep in mind that only small writes go to the dedicated
slog. Large writes go to the main store. A succession o
On Jun 14, 2010, at 6:35 PM, Erik Trimble wrote:
> On 6/14/2010 12:10 PM, Neil Perrin wrote:
>> On 06/14/10 12:29, Bob Friesenhahn wrote:
>>> On Mon, 14 Jun 2010, Roy Sigurd Karlsbakk wrote:
>>>
> It is good to keep in mind that only small writes go to the dedicated
> slog. Large writes to
Richard Elling wrote:
On Jun 14, 2010, at 2:12 PM, Roy Sigurd Karlsbakk wrote:
Hi all
It seems zfs scrub is taking a big bite out of I/O when running. During a scrub,
sync I/O, such as NFS and iSCSI, is mostly useless. Attaching an SLOG and some
L2ARC helps this, but still, the problem remains
On 6/14/2010 12:10 PM, Neil Perrin wrote:
On 06/14/10 12:29, Bob Friesenhahn wrote:
On Mon, 14 Jun 2010, Roy Sigurd Karlsbakk wrote:
It is good to keep in mind that only small writes go to the dedicated
slog. Large writes go to the main store. A succession of that many small
writes (to fill RAM/2)
On 2010-Jun-11 17:41:38 +0800, Joerg Schilling
wrote:
>PP.S.: Did you know that FreeBSD has _included_ the GPLd Reiserfs in the FreeBSD
>kernel for a while now, and that nobody has complained about this? See e.g.:
>
>http://svn.freebsd.org/base/stable/8/sys/gnu/fs/reiserfs/
That is completely irrelevant
On Jun 14, 2010, at 2:12 PM, Roy Sigurd Karlsbakk wrote:
> Hi all
>
> It seems zfs scrub is taking a big bite out of I/O when running. During a
> scrub, sync I/O, such as NFS and iSCSI, is mostly useless. Attaching an SLOG
> and some L2ARC helps this, but still, the problem remains in that the scr
> Hello all,
>
> I've been running OpenSolaris on my personal
> fileserver for about a year and a half, and it's been
> rock solid except for having to upgrade from 2009.06
> to a dev version to fix some network driver issues.
> About a month ago, the motherboard on this computer
> died, and I upg
On 14/06/2010 22:12, Roy Sigurd Karlsbakk wrote:
Hi all
It seems zfs scrub is taking a big bite out of I/O when running. During a scrub,
sync I/O, such as NFS and iSCSI, is mostly useless. Attaching an SLOG and some
L2ARC helps this, but still, the problem remains in that the scrub is given
ful
Hi all
It seems zfs scrub is taking a big bite out of I/O when running. During a scrub,
sync I/O, such as NFS and iSCSI, is mostly useless. Attaching an SLOG and some
L2ARC helps this, but still, the problem remains in that the scrub is given
full priority.
Is this problem known to the developer
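For reference, builds that include the rewritten scrub/resilver code carry kernel
tunables intended to throttle scrub I/O when other I/O is queued; whether a given
build actually has them is an assumption here, so check before relying on them.
A minimal sketch with mdb:

  # Confirm the tunable exists and read its current value (ticks of delay
  # inserted between scrub I/Os when the pool is otherwise busy).
  echo 'zfs_scrub_delay/D' | mdb -k
  # Increase the delay so sync I/O (NFS/iSCSI) gets more of the disks.
  echo 'zfs_scrub_delay/W 0t8' | mdb -kw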
On Mon, Jun 14, 2010 at 1:35 PM, Brandon High wrote:
> How much memory do you have, and how big is the DDT? You can get the
> DDT size with 'zdb -DD'. The total count is the sum of duplicate and
> unique entries. Each entry uses ~250 bytes, so the count
> divided by 4 is a (very rough)
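To make that arithmetic concrete (numbers invented for illustration): if the
histograms printed by 'zdb -DD' add up to 4 million entries, then at roughly
250 bytes per entry the table needs about 4,000,000 x 250 B, i.e. around 1 GB,
to stay resident in RAM/L2ARC; the entry count divided by 4 gives an estimate
in KB.

  # Print dedup table (DDT) statistics for pool 'tank'; sum the unique and
  # duplicate entry counts to get the total number of DDT entries.
  # zdb -DD tank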
On Sun, Jun 13, 2010 at 6:58 PM, Matthew Anderson
wrote:
> The problem didn’t seem to occur with only a small amount of data on the LUN
> (<50GB) and happened more frequently as the LUN filled up. I’ve since moved
> all data to non-dedup LUNs and I haven’t seen a dropout for over a month.
How mu
On 06/14/10 12:29, Bob Friesenhahn wrote:
On Mon, 14 Jun 2010, Roy Sigurd Karlsbakk wrote:
It is good to keep in mind that only small writes go to the dedicated
slog. Large writes go to the main store. A succession of that many small
writes (to fill RAM/2) is highly unlikely. Also, that the zil is
On 04/10/10 09:28, Edward Ned Harvey wrote:
- If synchronous writes are large (>32K) and block aligned, then the blocks are
written directly to the pool and a small record
written to the log. Later, when the txg commits, the blocks are just linked
into the txg. However, this processing
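Related to the above: on builds that have the logbias property (an assumption
about the build in question), a dataset can be pushed onto this "write the data
blocks to the pool, log only a small record" path regardless of write size,
which keeps large synchronous streams from flooding a small slog. A sketch,
using a made-up dataset name:

  # Show the current setting (default is 'latency', i.e. use the slog).
  zfs get logbias tank/vmstore
  # Prefer throughput: data blocks go to the main pool; the slog is left
  # for small, latency-sensitive writes from other datasets.
  zfs set logbias=throughput tank/vmstore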
I've been referred here from the zfs-fuse newsgroup. I have a
(non-redundant) pool which is reporting errors that I don't quite understand:
# zpool status -v
pool: green
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications ma
On Mon, 14 Jun 2010, Roy Sigurd Karlsbakk wrote:
It is good to keep in mind that only small writes go to the dedicated
slog. Large writes go to the main store. A succession of that many small
writes (to fill RAM/2) is highly unlikely. Also, that the zil is not
read back unless the system is improper
Hi All,
I currently use b134 and COMSTAR to deploy SRP targets for virtual machine
storage (VMware ESXi4) and have run into some unusual behaviour when dedup is
enabled for a particular LUN. The target seems to lock up (ESX reports it as
unavailable) when writing large amounts or overwriting dat
- Original Message -
> On Mon, 14 Jun 2010, Roy Sigurd Karlsbakk wrote:
>
> >> There is absolutely no sense in having slog devices larger than
> >> the main memory, because it will never be used, right?
> >> ZFS will rather flush the txg to disk than read back from
> >> zil? So there i
Hi Giovanni,
My Monday morning guess is that the disk/partition/slices are not
optimal for the installation.
Can you provide the partition table of the disk that you are attempting
to install to? Use format-->disk-->partition-->print.
You want to put all the disk space in c*t*d*s0. See this sect
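Spelled out, assuming the install target shows up as c0t1d0 (adjust to the disk
in question):

  # format                  (select c0t1d0 from the disk menu)
  format> partition
  partition> print          (shows the slice table; all space should be in slice 0)
  partition> quit
  format> quit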
On Mon, 14 Jun 2010, Roy Sigurd Karlsbakk wrote:
There is absolutely no sense in having slog devices larger than
the main memory, because it will never be used, right?
ZFS will rather flush the txg to disk than read back from
zil? So there is a guideline to have enough slog to hold about 10
On Jun 13, 2010, at 2:14 PM, Jan Hellevik
wrote:
Well, for me it was a cure. Nothing else I tried got the pool back.
As far as I can tell, the way to get it back should be to use
symlinks to the fdisk partitions on my SSD, but that did not work
for me. Using -V got the pool back. What is
Roy Sigurd Karlsbakk wrote:
>> There is absolutely no sense in having slog devices larger than
>> the main memory, because it will never be used, right?
>> ZFS will rather flush the txg to disk than read back from
>> zil? So there is a guideline to have enough slog to hold about 10
>> seconds o
Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Arne Jansen
>>
>> There is absolutely no sense in having slog devices larger than
>> the main memory, because it will never be used, right?
>
> Also: A TXG is guara
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Arne Jansen
>
> There is absolutely no sense in having slog devices larger than
> the main memory, because it will never be used, right?
Also: A TXG is guaranteed to flush within 30 sec. Le
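A back-of-the-envelope version of that bound: the slog only has to hold what can
arrive between txg commits, so

  slog size (upper bound) ≈ max sync write rate x txg interval

With invented numbers: a saturated 10 GbE link (~1.2 GB/s) against the 30-second
worst case gives ~36 GB, while a 1 GbE NFS load (~120 MB/s) with the usual 5-10
second commits needs well under 2 GB, which is why slogs in the tens of GB are
already generous.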
>>
> To add such a device, you would do:
> 'zpool add tank mycachedevice'
>
>
Hi
Correct me if I'm wrong, but I believe the correct command should be:
'zpool add tank cache mycachedevice'
If you don't use the "cache" keyword, the device will be added as a regular
top-level vdev.
Remi
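A quick way to double-check which kind of vdev you ended up with, using a
made-up device name:

  # zpool add tank cache c4t0d0
  # zpool status tank        (the device should appear under a separate
                              'cache' heading, not among the data vdevs)

The distinction matters because a device added as a plain top-level vdev becomes
part of the pool's data layout and, at least on current builds, cannot simply be
removed again, whereas cache devices can be added and removed freely.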
>>
> You are severely RAM limited. In order to do dedup, ZFS has to maintain
> a catalog of every single block it writes and the checksum for that
> block. This is called the Dedup Table (DDT for short).
>
> So, during the copy, ZFS has to (a) read a block from the old
> filesystem, (b) check the
> There is absolutely no sense in having slog devices larger than
> the main memory, because it will never be used, right?
> ZFS will rather flush the txg to disk than read back from
> zil? So there is a guideline to have enough slog to hold about 10
> seconds of zil, but the absolute maximum v
On Mon, Jun 14, 2010 at 4:41 AM, Arne Jansen wrote:
> Hi,
>
> I know it's been discussed here more than once, and I read the
> Evil tuning guide, but I didn't find a definitive statement:
>
> There is absolutely no sense in having slog devices larger than
> the main memory, because it will neve
Hi,
I know it's been discussed here more than once, and I read the
Evil tuning guide, but I didn't find a definitive statement:
There is absolutely no sense in having slog devices larger than
the main memory, because it will never be used, right?
ZFS will rather flush the txg to disk than readi
Hello
I have this problem on my system as well. I lost my backup server when the
system HD and the ZIL device crashed. After setting up a new system (osol 2009.06
and updating to the latest osol/dev version with zpool dedup) I tried to import my
backup pool, but I can't. The system tells me there isn't an
Just FYI.
The error was that I created the ZFS filesystem at the wrong place in the pool:
rpool/a/b/c
rpool/new
I mounted "new" in a directory of rpool's "c". Seems like this hierarchical
mounting does not work the way I thought. ;)
Marcelo Leal wrote:
> Hello there,
> I think you should share it with the list if you can; it seems like
> interesting work. ZFS has some issues with snapshot and spa_sync performance
> for snapshot deletion.
I'm a bit reluctant to post it to the list where it can still be found
years from n