On Mon, Jan 10, 2011 at 06:30:39PM +0100, Attila Nagy wrote:
> >why and we can't ask him now, I'm afraid. I just sent an e-mail to
>
> What happened to him?
Oops, I was thinking of something else.
http://valleywag.gawker.com/383763/freebsd-developer-kip-macy-arrested-for-tormenting-tenants
Marcu
On 12/16/2010 01:44 PM, Martin Matuska wrote:
Hi everyone,
following the announcement of Pawel Jakub Dawidek (p...@freebsd.org) I am
providing a ZFSv28 testing patch for 8-STABLE.
Link to the patch:
http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20101215.patch.xz
Link to mfsBSD ISO files for testing (i386 and amd64):
On 01/10/2011 09:57 AM, Pawel Jakub Dawidek wrote:
On Sun, Jan 09, 2011 at 12:52:56PM +0100, Attila Nagy wrote:
[...]
I've finally found the time to read the v28 patch and figured out the
problem: vfs.zfs.l2arc_noprefetch was changed to 1, so it doesn't use
the prefetched data on the L2ARC devices.
On 01/10/2011 10:02 AM, Pawel Jakub Dawidek wrote:
On Sun, Jan 09, 2011 at 12:49:27PM +0100, Attila Nagy wrote:
No, it's not related. One of the disks in the RAIDZ2 pool went bad:
(da4:arcmsr0:0:4:0): READ(6). CDB: 8 0 2 10 10 0
(da4:arcmsr0:0:4:0): CAM status: SCSI Status Error
(da4:arcmsr0:0:4:0): SCSI status: Check Condition
On Sat, Dec 18, 2010 at 10:00:11AM +0100, Krzysztof Dajka wrote:
> Hi,
> I applied patch against evening 2010-12-16 STABLE. I did what Martin asked:
>
> On Thu, Dec 16, 2010 at 1:44 PM, Martin Matuska wrote:
> > # cd /usr/src
> > # fetch
> > http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20101215.patch.xz
On Sun, Jan 09, 2011 at 12:52:56PM +0100, Attila Nagy wrote:
[...]
> I've finally found the time to read the v28 patch and figured out the
> problem: vfs.zfs.l2arc_noprefetch was changed to 1, so it doesn't use
> the prefetched data on the L2ARC devices.
> This is a major hit in my case. Enabling
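For anyone who wants the old behaviour back while testing, a minimal sketch (assuming the tunable keeps the name vfs.zfs.l2arc_noprefetch under the v28 patch, as it did in earlier versions):
# sysctl vfs.zfs.l2arc_noprefetch=0                          # serve prefetched blocks from L2ARC again (runtime)
# echo 'vfs.zfs.l2arc_noprefetch=0' >> /boot/loader.conf     # or persist it across reboots
If the runtime sysctl turns out not to be writable in this build, the loader.conf route is the fallback.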
On Sun, Jan 09, 2011 at 12:49:27PM +0100, Attila Nagy wrote:
> No, it's not related. One of the disks in the RAIDZ2 pool went bad:
> (da4:arcmsr0:0:4:0): READ(6). CDB: 8 0 2 10 10 0
> (da4:arcmsr0:0:4:0): CAM status: SCSI Status Error
> (da4:arcmsr0:0:4:0): SCSI status: Check Condition
> (da4:arcms
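A quick way to confirm it really is the drive and not the pool logic (a sketch; smartctl assumes sysutils/smartmontools is installed, and da4 is the device from the errors above):
# zpool status -v               # which vdev ZFS marks DEGRADED/FAULTED, plus per-device error counters
# camcontrol tur da4 -v         # ask the drive directly whether it is ready
# smartctl -a /dev/da4          # SMART health, reallocated and pending sector counts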
On Sun, Jan 09, 2011 at 01:42:13PM +0100, Attila Nagy wrote:
> On 01/09/2011 01:18 PM, Jeremy Chadwick wrote:
> >On Sun, Jan 09, 2011 at 12:49:27PM +0100, Attila Nagy wrote:
> >> On 01/09/2011 10:00 AM, Attila Nagy wrote:
> >>>On 12/16/2010 01:44 PM, Martin Matuska wrote:
> Hi everyone,
> >>>
Once upon a time, this was a known problem with the arcmsr driver not
correctly interacting with ZFS, resulting in this behavior.
Since I'm presuming that the arcmsr driver update which was intended
to fix this behavior (in my case, at least) is in your nightly build,
it's probably worth pinging t
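If in doubt about which arcmsr revision a given build carries, the probe line in the boot messages is the easiest check (just a sketch; the exact wording differs between driver releases):
# dmesg | grep -i arcmsr        # the attach line normally includes the driver version string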
On 01/09/2011 01:18 PM, Jeremy Chadwick wrote:
On Sun, Jan 09, 2011 at 12:49:27PM +0100, Attila Nagy wrote:
On 01/09/2011 10:00 AM, Attila Nagy wrote:
On 12/16/2010 01:44 PM, Martin Matuska wrote:
Hi everyone,
following the announcement of Pawel Jakub Dawidek (p...@freebsd.org) I am
providing a ZFSv28 testing patch for 8-STABLE.
On Sun, Jan 09, 2011 at 12:49:27PM +0100, Attila Nagy wrote:
> On 01/09/2011 10:00 AM, Attila Nagy wrote:
> > On 12/16/2010 01:44 PM, Martin Matuska wrote:
> >>Hi everyone,
> >>
> >>following the announcement of Pawel Jakub Dawidek (p...@freebsd.org) I am
> >>providing a ZFSv28 testing patch for 8-STABLE.
On 01/01/2011 08:09 PM, Artem Belevich wrote:
On Sat, Jan 1, 2011 at 10:18 AM, Attila Nagy wrote:
What I see:
- increased CPU load
- decreased L2 ARC hit rate, decreased SSD (ad[46]), therefore increased
hard disk load (IOPS graph)
...
Any ideas on what could cause these? I haven't upgraded
On 01/09/2011 10:00 AM, Attila Nagy wrote:
On 12/16/2010 01:44 PM, Martin Matuska wrote:
Hi everyone,
following the announcement of Pawel Jakub Dawidek (p...@freebsd.org) I am
providing a ZFSv28 testing patch for 8-STABLE.
Link to the patch:
http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20101215.patch.xz
On 12/16/2010 01:44 PM, Martin Matuska wrote:
Hi everyone,
following the announcement of Pawel Jakub Dawidek (p...@freebsd.org) I am
providing a ZFSv28 testing patch for 8-STABLE.
Link to the patch:
http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20101215.patch.xz
I've got an I
On 01/03/2011 10:35 PM, Bob Friesenhahn wrote:
After four days, the L2 hit rate is still hovering around 10-20
percent (was between 60-90), so I think it's clearly a regression in
the ZFSv28 patch...
And the massive growth in CPU usage can also very nicely be seen...
I've updated the graph
After four days, the L2 hit rate is still hovering around 10-20 percent (was
between 60-90), so I think it's clearly a regression in the ZFSv28 patch...
And the massive growth in CPU usage can also very nicely be seen...
I've updated the graphs at (switch time can be checked on the zfs-mem gr
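For reference, the hit rate shown in such graphs can be recomputed from the counters ZFS exports (a sketch; these are the kstat names FreeBSD's ZFS publishes via sysctl):
# sysctl kstat.zfs.misc.arcstats.l2_hits kstat.zfs.misc.arcstats.l2_misses
The hit rate is l2_hits / (l2_hits + l2_misses), taken as the delta between two samples rather than from the absolute counters.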
On 01/01/2011 08:09 PM, Artem Belevich wrote:
On Sat, Jan 1, 2011 at 10:18 AM, Attila Nagy wrote:
What I see:
- increased CPU load
- decreased L2 ARC hit rate, decreased SSD (ad[46]), therefore increased
hard disk load (IOPS graph)
...
Any ideas on what could cause these? I haven't upgraded
On 01/02/2011 03:45, Attila Nagy wrote:
> On 01/02/2011 05:06 AM, J. Hellenthal wrote:
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA1
>>
>> On 01/01/2011 13:18, Attila Nagy wrote:
>>> On 12/16/2010 01:44 PM, Martin Matuska wrote:
Link to the patch:
http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20101215.patch.xz
On 01/02/2011 05:06 AM, J. Hellenthal wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 01/01/2011 13:18, Attila Nagy wrote:
On 12/16/2010 01:44 PM, Martin Matuska wrote:
Link to the patch:
http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20101215.patch.xz
I've used
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 01/01/2011 13:18, Attila Nagy wrote:
> On 12/16/2010 01:44 PM, Martin Matuska wrote:
>> Link to the patch:
>>
>> http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20101215.patch.xz
>>
>>
>>
> I've used this:
> http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20101223-nopython.patch.xz
On 01/01/2011 08:09 PM, Artem Belevich wrote:
On Sat, Jan 1, 2011 at 10:18 AM, Attila Nagy wrote:
What I see:
- increased CPU load
- decreased L2 ARC hit rate, decreased SSD (ad[46]), therefore increased
hard disk load (IOPS graph)
...
Any ideas on what could cause these? I haven't upgraded
On Sat, Jan 1, 2011 at 10:18 AM, Attila Nagy wrote:
> What I see:
> - increased CPU load
> - decreased L2 ARC hit rate, decreased SSD (ad[46]), therefore increased
> hard disk load (IOPS graph)
>
...
> Any ideas on what could cause these? I haven't upgraded the pool version and
> nothing was chang
On 12/16/2010 01:44 PM, Martin Matuska wrote:
Link to the patch:
http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20101215.patch.xz
I've used this:
http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20101223-nopython.patch.xz
on a server with amd64, 8 G RAM, acting as
On Wednesday, 29 December 2010, jhell wrote:
>
> Another note too, I think I read that you mentioned using the L2ARC and
> slog device on the same disk. You simply shouldn't do this; it could
> be contributing to the real cause, and there is absolutely no gain in
> either sanity or performance a
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 12/28/2010 18:20, Martin Matuska wrote:
> Please don't consider these patches as production-ready.
> What we want to do is find and resolve as many bugs as possible.
I completely agree with Martin here. If you're running it then you're
willing to lose
Hi
On Wednesday, 29 December 2010, Martin Matuska wrote:
> Please don't consider these patches as production-ready.
> What we want to do is find and resolve as many bugs as possible.
>
> To help us fix these bugs, a way to reproduce the bug from a clean start
> (e.g. in virtualbox) would be great
Please don't consider these patches as production-ready.
What we want to do is find and resolve as many bugs as possible.
To help us fix these bugs, a way to reproduce the bug from a clean start
(e.g. in virtualbox) would be great and speed up finding the cause for
the problem.
Your problem looks
On Tue, Dec 28, 2010 at 9:39 AM, Jean-Yves Avenard
> Now, I haven't tried using cache and log from a different disk. The
> motherboard on the server has 8 SATA ports, and I have no free port to
> add another disk. So my only option to have both a log and cache
> device in my zfs pool, is to use two
On Tue, Dec 28, 2010 at 8:58 AM, Jean-Yves Avenard wrote:
> On 28 December 2010 08:56, Freddie Cash wrote:
>
>> Is that a typo, or the actual command you used? You have an extra "s"
>> in there. Should be "log" and not "logs". However, I don't think
>> that command is correct either.
>>
>> I believe you want to use the "detach" command, not "remove".
On 29 December 2010 03:15, Jean-Yves Avenard wrote:
> # zpool import
> load: 0.00 cmd: zpool 405 [spa_namespace_lock] 15.11r 0.00u 0.03s 0% 2556k
> load: 0.00 cmd: zpool 405 [spa_namespace_lock] 15.94r 0.00u 0.03s 0% 2556k
> load: 0.00 cmd: zpool 405 [spa_namespace_lock] 16.57r 0.00u 0.03s 0%
On 28 December 2010 08:56, Freddie Cash wrote:
> Is that a typo, or the actual command you used? You have an extra "s"
> in there. Should be "log" and not "logs". However, I don't think
> that command is correct either.
>
> I believe you want to use the "detach" command, not "remove".
>
> # zp
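For the archives, the usual forms are roughly these (a sketch assuming a single, non-mirrored log device ada1s1 in a pool named pool):
# zpool add pool log ada1s1       # "log", not "logs", when adding a single slog device
# zpool remove pool ada1s1        # slog and cache devices are taken out with "remove"
# zpool detach pool ada1s1        # "detach" only applies to one half of a mirror (including a mirrored log)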
Hi
On 27 December 2010 16:04, jhell wrote:
> 1) Set vfs.zfs.recover=1 at the loader prompt (OK set vfs.zfs.recover=1)
> 2) Boot into single user mode without opensolaris.ko and zfs.ko loaded
> 3) ( mount -w / ) to make sure you can remove and also write new
> zpool.cache as needed.
> 3) Remove /boot/zfs/zpool.cache
Well
Today I added the log device:
zpool add pool log /dev/ada1s1 (an 8 GB slice on an Intel X25 SSD).
then added the cache (32GB)
zpool add pool cache /dev/ada1s2
So far so good.
zpool status -> all good.
Reboot : it hangs
booted in single user mode, zpool status:
ZFS filesystem version 5
ZF
Hi
On Tuesday, 28 December 2010, Freddie Cash wrote:
> On Sun, Dec 26, 2010 at 4:43 PM, Jean-Yves Avenard
> wrote:
>> On 27 December 2010 09:55, Jean-Yves Avenard wrote:
>>> Hi there.
>>>
>>> I used stable-8-zfsv28-20101223-nopython.patch.xz from
>>> http://people.freebsd.org/~mm/patches/zfs/v
On Sun, Dec 26, 2010 at 4:43 PM, Jean-Yves Avenard wrote:
> On 27 December 2010 09:55, Jean-Yves Avenard wrote:
>> Hi there.
>>
>> I used stable-8-zfsv28-20101223-nopython.patch.xz from
>> http://people.freebsd.org/~mm/patches/zfs/v28/
>
> I did the following:
>
> # zpool status
> pool: pool
>
Hi
On 27 December 2010 16:04, jhell wrote:
>
> Before anything else can you: (in FreeBSD)
>
> 1) Set vfs.zfs.recover=1 at the loader prompt (OK set vfs.zfs.recover=1)
> 2) Boot into single user mode without opensolaris.ko and zfs.ko loaded
> 3) ( mount -w / ) to make sure you can remove and also
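Put in one place, the sequence jhell describes looks roughly like this (a sketch; the zpool.cache path is the one Martin mentions elsewhere in the thread, and "pool" is just an example name):
OK set vfs.zfs.recover=1          # at the loader prompt
OK unset zfs_load                 # keep zfs.ko/opensolaris.ko from loading at boot, if enabled in loader.conf
OK boot -s
# mount -w /
# rm /boot/zfs/zpool.cache
# kldload zfs                     # pulls in opensolaris.ko as a dependency
# zpool import -f pool            # a fresh zpool.cache is written on import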
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 12/26/2010 23:17, Jean-Yves Avenard wrote:
> Responding to myself again :P
>
> On 27 December 2010 13:28, Jean-Yves Avenard wrote:
>> tried to force a zpool import
>>
>> got a kernel panic:
>> panic: solaris assert: weight >= space && weight <= 2
Responding to myself again :P
On 27 December 2010 13:28, Jean-Yves Avenard wrote:
> tried to force a zpool import
>
> got a kernel panic:
> panic: solaris assert: weight >= space && weight <= 2 * space, file:
> /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/metaslab.c,
tried to force a zpool import
got a kernel panic:
panic: solaris assert: weight >= space && weight <= 2 * space, file:
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/metaslab.c,
line: 793
cpuid = 5
KDB: stack backtrace
#0: 0xff805f64be at kdb_backtrace
#1 .. pa
Rebooting in single-user mode.
zpool status pool
or zpool scrub pool
hangs just the same ... and there's no disk activity either ...
Will download a liveCD of OpenIndiana, hopefully it will show me what's wrong :(
Jean-Yves
On 27 December 2010 09:55, Jean-Yves Avenard wrote:
> Hi there.
>
> I used stable-8-zfsv28-20101223-nopython.patch.xz from
> http://people.freebsd.org/~mm/patches/zfs/v28/
I did the following:
# zpool status
pool: pool
state: ONLINE
scan: none requested
config:
NAME        STATE
Hi there.
I used stable-8-zfsv28-20101223-nopython.patch.xz from
http://people.freebsd.org/~mm/patches/zfs/v28/
simply because it was the most recent at this location.
Is this the one to use?
Just asking cause the file server I installed it on has stopped
responding this morning and doing a rem
Hi Martin, List,
Patched up to ZFSv28 20101218 and it is working as expected. Great job!
There seems to be some assertion errors that are left to be fixed yet
with the following examples:
Panic String: solaris assert: vd->vdev_stat.vs_alloc == 0 (0x18a000 ==
0x0),
file:/usr/src/sys/modules/z
Thanks, I'm going to check it out!
On 23 Dec 2010, at 9:58, Martin Matuska wrote:
> I have updated the py-zfs port right now so it should work with v28,
> too. The problem was a non-existing solaris.misc module, I had to patch
> and remove references to this module.
>
> Cheers,
> mm
>
Regards,
I have updated the py-zfs port right now so it should work with v28,
too. The problem was a non-existing solaris.misc module, I had to patch
and remove references to this module.
Cheers,
mm
On 23.12.2010 09:27, Martin Matuska wrote:
> Hi,
>
> On 16 Dec 2010, at 13:44, Martin M
Hi,
On 16 Dec 2010, at 13:44, Martin Matuska wrote:
> Hi everyone,
>
> following the announcement of Pawel Jakub Dawidek (p...@freebsd.org) I am
> providing a ZFSv28 testing patch for 8-STABLE.
Where can I find an updated py-zfs so that zfs (un)allow/userspace/groupspace
can be tested?
Regar
Ok,
On 16 Dec 2010, at 13:44, Martin Matuska wrote:
> Please test, test, test. Chances are this is the last patchset before
> v28 going to HEAD (finally) and after a reasonable testing period into
> 8-STABLE.
> Especially test new changes, like boot support and sendfile(2) support.
> Also be sure
On Sat, Dec 18, 2010 at 7:30 PM, Martin Matuska wrote:
> The information about pools is stored in /boot/zfs/zpool.cache
> If this file doesn't contain correct information, your system pools will
> not be discovered.
>
> In v28, importing a pool with the "altroot" option does not touch the
> cache
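In other words, with v28 an import under an altroot has to be told explicitly to record the pool if you want it found at the next boot (a sketch; "tank" and /mnt are example names):
# zpool import -R /mnt tank                                          # altroot import, cache file left untouched
# zpool import -R /mnt -o cachefile=/boot/zfs/zpool.cache tank       # also record the pool in the cache file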
Hi,
I applied patch against evening 2010-12-16 STABLE. I did what Martin asked:
On Thu, Dec 16, 2010 at 1:44 PM, Martin Matuska wrote:
> # cd /usr/src
> # fetch
> http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20101215.patch.xz
> # xz -d stable-8-zfsv28-20101215.patch.xz
>
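The rest of the procedure is cut off above; for anyone following along, the usual way such a patch is applied and built on 8-STABLE looks roughly like this (a sketch, not necessarily Martin's exact instructions):
# patch -E -p0 < stable-8-zfsv28-20101215.patch
# make buildworld buildkernel
# make installkernel
# shutdown -r now                 # boot the new kernel first (single user is the conservative choice)
# make installworld
# reboot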
On 17.12.2010 12:12, Romain Garbage wrote:
following the announcement of Pawel Jakub Dawidek (p...@freebsd.org) I am
providing a ZFSv28 testing patch for 8-STABLE.
Link to the patch:
http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20101215.patch.xz
Link to mfsBSD ISO files for testing (i386 and amd64):
2010/12/16 Martin Matuska :
> Hi everyone,
>
> following the announcement of Pawel Jakub Dawidek (p...@freebsd.org) I am
> providing a ZFSv28 testing patch for 8-STABLE.
>
> Link to the patch:
>
> http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20101215.patch.xz
>
> Link to mfsBSD ISO
Hi everyone,
following the announcement of Pawel Jakub Dawidek (p...@freebsd.org) I am
providing a ZFSv28 testing patch for 8-STABLE.
Link to the patch:
http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20101215.patch.xz
Link to mfsBSD ISO files for testing (i386 and amd64):