On May 17, 2011, at 1:29 PM, Jeremy Chadwick wrote:
> * ZFS send | ssh zfs recv results in ZFS subsystem hanging; 8.1-RELEASE;
> February 2011:
> http://lists.freebsd.org/pipermail/freebsd-fs/2011-February/010602.html
I found a reproducible deadlock condition actually. If you keep some I/O
ac
On May 20, 2011, at 9:33 AM, Luke Marsden wrote:
>> If you wish to reproduce it, try creating a dataset for /usr/obj,
>> running make buildworld on it, replicating at, say, 30 or 60 second
>> intervals, and keep several scripts (or rsync) reading the target
>> dataset files and just copying them
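A rough sketch of that reproduction recipe — assuming a pool named "tank" and a
receiving host "replica", both names made up here — could look like this on the
sending side:

  #!/bin/sh
  # Snapshot and replicate the /usr/obj dataset every 30 seconds while a
  # buildworld and several readers keep the received files busy.
  DS=tank/usr.obj
  LAST=""
  while true; do
      SNAP="$DS@rep-$(date +%s)"
      zfs snapshot "$SNAP"
      if [ -z "$LAST" ]; then
          zfs send "$SNAP" | ssh replica zfs recv -F "$DS"
      else
          zfs send -i "$LAST" "$SNAP" | ssh replica zfs recv -F "$DS"
      fi
      LAST="$SNAP"
      sleep 30
  done

while on the receiving machine something like
"while true; do rsync -a /tank/usr.obj/ /tmp/copy/; done" keeps reading the
target dataset, as described above.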
Hello,
I'm running a server with FreeBSD 7-STABLE as of August 8, Apache 2.2
with mpm/worker and threads support, and PHP 5.2.6.
Everything works like a charm, but I see that Apache is leaking
processes that get stuck in umtxn state.
This graph shows it pretty well (I upgraded the system
On Aug 11, 2008, at 12:31 PM, Kris Kennaway wrote:
Borja Marcos wrote:
Hello,
I'm running a server with FreeBSD 7-STABLE as of August 8, Apache
2.2 with mpm/worker and threads support, and PHP 5.2.6.
This trace doesn't show anything really. You need to recompile the
bin
On Aug 12, 2008, at 12:12 AM, Ivan Voras wrote:
Borja Marcos wrote:
Hello,
I'm running a server with FreeBSD 7-STABLE as of August 8, Apache
2.2 with mpm/worker and threads support, and PHP 5.2.6.
Everything works like a charm, but I see that Apache is leaking
processes that get stu
On Aug 12, 2008, at 12:28 PM, Jeremy Chadwick wrote:
Please be sure to report back with the outcome (in a few days, or
whenever suits you) -- I've seen a report of similar oddities (threads
locking up) on the suPHP mailing list, when using Apache with the
worker
MPM. No one stated what state
On Aug 13, 2008, at 3:18 PM, Kris Kennaway wrote:
Borja Marcos wrote:
((Sorry for the long dump))
(gdb) bt
#0 0x3827cfe7 in __error () from /lib/libthr.so.3
#1 0x3827cd4a in __error () from /lib/libthr.so.3
#2 0x08702120 in ?? ()
As you can see the debugging symbols are still not
On Aug 13, 2008, at 3:33 PM, Kris Kennaway wrote:
Hmm. Weird. I compiled the port having WITH_DEBUG defined (as I saw
in the Makefile) and indeed the gcc invocations have the -g flag
set. What is strange is the error gdb issued, offering a coredump,
etc.
It is likely that the binaries are
On Aug 13, 2008, at 5:24 PM, Tom Evans wrote:
On Wed, 2008-08-13 at 16:56 +0200, Borja Marcos wrote:
Personally, I find PHP far too troublesome to run threaded. These
days,
I use an event MPM based front-end apache 2.2, which reverse proxies
to
either a prefork MPM apache 2.2 with mod_
Hello,
The attached graphs are from a server running FreeBSD 7.1-i386 (now)
with the typical Apache2+MySQL with forums, Joomla...
I just cannot explain this. Disk I/O bandwidth was suffering a lot,
and after the update the disks are almost idle.
Any ideas? I cannot imagine a change betwe
On Jan 30, 2009, at 10:12 AM, Borja Marcos wrote:
Hello,
The attached graphs are from a server running FreeBSD 7.1-i386 (now)
with the typical Apache2+MySQL with forums, Joomla...
I see that the attachments didn't make it.
Disk I/O bandwidth averaged 40 - 60 % before the u
On Jan 31, 2009, at 7:27 PM, Robert Watson wrote:
There are basically three ways to go about exploring this, none
particularly good:
(1) Do a more formal before and after analysis of performance on the
box,
perhaps using tools like kernel profiling, hwpmc, dtrace, etc.
Machine in prod
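For the record, a couple of hedged examples of how such a before/after sample
could be taken with the tools named above (event names and durations are only
illustrative):

  # Sample on-CPU kernel stacks for 30 seconds with DTrace
  dtrace -x stackframes=100 -n 'profile-997 /arg0/ { @[stack()] = count(); } tick-30s { exit(0); }'

  # Or take a system-wide hwpmc sample and post-process it
  kldload hwpmc
  pmcstat -S instructions -O /tmp/sample.out sleep 30
  pmcstat -R /tmp/sample.out -G /tmp/callgraph.txt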
On Nov 22, 2009, at 12:34 AM, Randy Bush wrote:
>
>> Try running FreeBSD 7-Stable to get the latest ZFS version which on
>> FreeBSD is 13
>
> that is what i am running. RELENG_7
I've been following ZFS on FreeBSD for a long time, and it really seems to be stable
on 8.0/amd64.
Even Sun Microsystems s
On Nov 23, 2009, at 10:01 AM, Jeremy Chadwick wrote:
> On Mon, Nov 23, 2009 at 09:41:43AM +0100, Borja Marcos wrote:
>> On Nov 22, 2009, at 12:34 AM, Randy Bush wrote:
>>>
>>>> Try running FreeBSD 7-Stable to get the latest ZFS version which on
>>>>
On Mar 9, 2010, at 1:58 PM, Pawel Jakub Dawidek wrote:
>>> What kind of hardware do you have there? There is 3-way deadlock I've a
>>> fix for which would be hard to trigger on single or dual core machines.
>>>
>>> Feel free to try the fix:
>>>
>>> http://people.freebsd.org/~pjd/patches/zfs
On Mar 9, 2010, at 1:29 PM, Pawel Jakub Dawidek wrote:
> On Tue, Mar 09, 2010 at 10:15:53AM +0100, Stefan Bethke wrote:
>> Over the past couple of months, I've more or less regularly observed
>> machines having more and more processes stuck in the zfs wchan. The
>> processes never recover from
On Mar 9, 2010, at 3:18 PM, Borja Marcos wrote:
>
> On Mar 9, 2010, at 1:58 PM, Pawel Jakub Dawidek wrote:
>
>>>> What kind of hardware do you have there? There is 3-way deadlock I've a
>>>> fix for which would be hard to trigger on single or dual core mac
On Mar 10, 2010, at 12:02 PM, Pawel Jakub Dawidek wrote:
> On Wed, Mar 10, 2010 at 10:24:49AM +0100, Borja Marcos wrote:
>> Tested. Same deadlock remains.
>
> Ok, to track this down I need the following:
>
> Uncomment 'CFLAGS+=-DDEBUG=1' line in sys/mod
On Mar 11, 2010, at 8:45 AM, Alexander Leidinger wrote:
> Quoting Pawel Jakub Dawidek (from Wed, 10 Mar 2010
> 18:31:43 +0100):
>
> There is a 4th possibility, if you can rule out everything else: bugs in the
> CPU. I stumbled upon this with ZFS (but UFS was exposing the problem much
> faste
On Mar 11, 2010, at 3:08 PM, Alexander Leidinger wrote:
>>> Borja, can you confirm that the CPU is correctly announced in FreeBSD (just
>>> look at "dmesg | grep CPU:" output, if it tells you it is a AMD or Intel
>>> XXX CPU it is correctly detected by the BIOS)?
>>
>> A CPU bug? Weird. Very.
On Mar 5, 2013, at 11:09 PM, Jeremy Chadwick wrote:
>>> - Disks are GPT and are *partitioned, and ZFS refers to the partitions
>>> not the raw disk -- this matters (honest, it really does; the ZFS
>>> code handles things differently with raw disks)
>>
>> Not on FreeBSD as far as I can see.
>
> M
Same as its brothers/sisters, it's optimized for 4 KB blocks.
/*
* OCZ Vertex 4 SSDs
* 4k optimized
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "OCZ_VERTEX4*", "*"},
/*quirks*/DA_Q_4K
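Incidentally, a quick way to check whether the quirk actually took effect on a
given drive (the device name here is just an example) is to look at the stripe
size reported by GEOM:

  diskinfo -v /dev/da0 | grep -E 'sectorsize|stripesize'

which should show a 4096-byte stripesize once the entry matches.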
Borja.
On Jul 9, 2013, at 11:32 AM, Borja Marcos wrote:
>{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "OCZ_VERTEX4*", "*"},
Correction: I used an underscore by mistake.
OCZ-VERTEX4
> On 11 Dec 2018, at 20:01, Rodney W. Grimes
> wrote:
>
> Glen,
> It is just a bit shy of 25 years and 1 month that I shipped
> the 1.0 Release. It's been a long road, but we are here now!
Great job!
I remember when I used my first FreeBSD release (2.0.5) in 1995. After trying
Xenix
> On 5 Feb 2019, at 23:49, Karl Denninger wrote:
>
> BTW under 12.0-STABLE (built this afternoon after the advisories came
> out, with the patches) it's MUCH worse. I get the same device resets
> BUT it's followed by an immediate panic which I cannot dump as it
> generates a page-fault (superv
> On 6 Feb 2019, at 16:34, Karl Denninger wrote:
>
> On 2/6/2019 09:18, Borja Marcos wrote:
>>>> Number of Hardware Resets has incremented. There are no other errors
>>>> shown:
>> What is _exactly_ that value? Is it related to the number of resets s
Hello,
I have a couple of questions, I'm using ZFS on FreeBSD 7.1/amd64.
To avoid issues with sharing the disks with ZFS and UFS, I am using a
USB pendrive onto which I copy the /boot directory.
My first problem is: the presence of the /boot/zfs/zpool.cache file is
critical. Without it the
Hello,
I was wondering if there are plans to document and keep the ZFS user
library as a reasonably stable API.
I have been writing an automatic replication program, and it's ugly
and clumsy to do it by calling a user program. I would much prefer to
use an API; that would make it much eas
On Jun 18, 2009, at 11:35 PM, David Magda wrote:
Is there something specific you're looking to do? The file system
layer of ZFS (the "ZPL") is in flux, but there may be other
components (e.g., DMU) that may be more stable (the Lustre folks are
coding against it in user land). See pages 7 a
Sep 28 19:47:46 kernel: lock order reversal:
Sep 28 19:47:46 kernel: 1st 0xff0002a9a308 ufs (ufs) @ /usr/src/sys/kern/vfs_mount.c:1200
Sep 28 19:47:46 kernel: 2nd 0xff0002a63a58 devfs (devfs) @ /usr/src/sys/ufs/ffs/ffs_vfsops.c:1194
Sep 28 19:47:46 kernel: KDB: stack backtrace:
Se
Hello,
I have observed a deadlock condition when using ZFS. We are making
heavy use of zfs send/zfs receive to keep a replica of a dataset on
a remote machine. It can be done at one-minute intervals. Maybe we're
doing a somewhat atypical use of ZFS, but, well, it seems to be a great
so
On Sep 29, 2009, at 10:29 AM, Borja Marcos wrote:
Hello,
I have observed a deadlock condition when using ZFS. We are making
heavy use of zfs send/zfs receive to keep a replica of a dataset
on a remote machine. It can be done at one-minute intervals. Maybe
we're doing a so
On Sep 29, 2009, at 10:29 AM, Borja Marcos wrote:
I have observed a deadlock condition when using ZFS. We are making
heavy use of zfs send/zfs receive to keep a replica of a dataset
on a remote machine. It can be done at one-minute intervals. Maybe
we're doing a somewhat atypical
panic: mtx_lock() of destroyed mutex @ /usr/src/sys/kern/vfs_subr.c:2467
cpuid = 1
I was doing a zfs destroy -r of a dataset. The dataset has had many
snapshot receives done.
# uname -a
FreeBSD 8.0-RC1 FreeBSD 8.0-RC1 #1: Tue Oct 13 14:11:08 CEST 2009
root@:/usr/obj/usr/src/sys/DEBUG
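For context, the sequence that led to the panic was roughly the following
(pool and dataset names are hypothetical; the real dataset had accumulated many
received snapshots over time):

  zfs snapshot tank/src@s0
  zfs send tank/src@s0 | zfs recv tank/replica
  i=1
  while [ $i -lt 500 ]; do
      zfs snapshot tank/src@s$i
      zfs send -i tank/src@s$((i-1)) tank/src@s$i | zfs recv -F tank/replica
      i=$((i+1))
  done
  zfs destroy -r tank/replica    # the panic above fired during this step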
On Nov 14, 2015, at 3:31 PM, Gary Palmer wrote:
> You can do things in /boot/loader.conf to hard-code bus and drive
> assignments.
>
> e.g.
>
> hint.da.0.at="scbus0"
> hint.da.0.target="19"
> hint.da.0.unit="0"
> hint.da.1.at="scbus0"
> hint.da.1.target="18"
> hint.da.1.unit="0"
Beware, the
> On 18 Feb 2016, at 01:24, Marius Strobl wrote:
>
>
> Could those of you experiencing these hangs with ZFS please test
> whether instead of reverting all of r292895, a kernel built with
> just the merge of r291244 undone via the following patch gets
> rid of that problem - especially on amd64
> On 07 Mar 2016, at 15:28, Jim Harris wrote:
> (Moving to freebsd-stable. NVMe is not associated with the SCSI stack at
> all.)
Oops, my apologies. I was assuming that, being storage stuff, -scsi was a good
list.
> Can you please file a bug report on this?
Sure, doing some simple te
> On 05 May 2016, at 16:39, Warner Losh wrote:
>
>> What do you think? In some cases it’s clear that TRIM can do more harm than
>> good.
>
> I think it’s best we not overreact.
I agree. But with this issue the system is almost unusable for now.
> This particular case is caused by the nvd driv
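One way to confirm that TRIM itself is the culprit — assuming the legacy
vfs.zfs.trim tunables present on 10.x/11.x — is to disable it temporarily and
compare, watching delete operations and the TRIM counters:

  # /boot/loader.conf (test only)
  vfs.zfs.trim.enabled=0

  # at runtime, watch BIO_DELETE traffic and the TRIM statistics
  gstat -d
  sysctl kstat.zfs.misc.zio_trim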
> On 17 May 2016, at 11:09, Steven Hartland wrote:
>
>> I understand that, but I don’t think it’s good that ZFS depends blindly on
>> a driver feature such
>> as that. Of course, it’s great to exploit it.
>>
>> I have also noticed that ZFS has a good throttling mechanism for write
>> operat
> On 22 Jun 2016, at 04:08, Jason Zhang wrote:
>
> Mark,
>
> Thanks
>
> We have the same RAID settings on both FreeBSD and CentOS, including the cache setting.
> In FreeBSD, I enabled the write cache but the performance is the same.
>
> We don’t use ZFS or UFS, and test the performance on the RAW G
Hi :)
Still experimenting with NVMe drives and FreeBSD, and I have run into problems,
I think.
I’ve got a server with 10 Intel DC P3500 NVMe drives. Right now, running
11-BETA2.
I have updated the firmware in the drives to the latest version (8DV10174)
using the Data Center Tools.
And I’ve fo
> On 28 Jul 2016, at 19:25, Jim Harris wrote:
>
> Yes, you should worry.
>
> Normally we could use the dump_debug sysctls to help debug this - these
> sysctls will dump the NVMe I/O submission and completion queues. But in
> this case the LBA data is in the payload, not the NVMe submission ent
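For reference, the dump_debug sysctls mentioned above live under the
controller's sysctl tree in nvme(4); the exact node names vary by version, so
it is worth listing them first. A hedged example for the first controller:

  sysctl -aN | grep dump_debug
  sysctl dev.nvme.0.adminq.dump_debug=1
  sysctl dev.nvme.0.ioq0.dump_debug=1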
> On 01 Aug 2016, at 08:45, O. Hartmann wrote:
>
> On Wed, 22 Jun 2016 08:58:08 +0200
> Borja Marcos wrote:
>
>> There is an option you can use (I do it all the time!) to make the card
>> behave as a plain HBA so that the disks are handled by the “da” driver
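The option referred to isn't quoted in full above; for a card driven by mfi(4),
one known way to get that behaviour (a sketch, not necessarily the exact knob
meant here) is:

  # /boot/loader.conf
  hw.mfi.allow_cam_disk_passthrough=1
  mfip_load="YES"

after which the physical disks attach to CAM and show up as da(4) devices.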
> On 01 Aug 2016, at 15:12, O. Hartmann wrote:
>
> First, thanks for responding so quickly.
>
>> - The third option is to make the driver expose the SAS devices like an HBA
>> would do, so that they are visible to the CAM layer, and disks are handled by
>> the stock “da” driver, which is the ide
> On 29 Jul 2016, at 17:44, Jim Harris wrote:
>
>
>
> On Fri, Jul 29, 2016 at 1:10 AM, Borja Marcos wrote:
>
> > On 28 Jul 2016, at 19:25, Jim Harris wrote:
> >
> > Yes, you should worry.
> >
> > Normally we could use the dump_debug sysctls
you suggested you (Borja Marcos) did with the Dell salesman), where in
> reality each has its own advantages and disadvantages.
I know, but this is not the case. But it’s quite frustrating to try to order a
server with an HBA rather than a RAID controller and receive an answer such as
“the HBA op
Hi
I apologise for being late on this, but I just noticed. The new vt console
driver has a very important
change in behavior, replacing the ancient “BIOS” text mode with a graphic VGA
mode.
I don’t know how many people rely on BIOS serial redirection for consoles,
but at least
HP’s iLO syst
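For anyone hitting this, the workaround mentioned later in this digest
(hw.vga.textmode=1) is a loader tunable; a minimal sketch, with a fallback to
the old syscons driver as an alternative:

  # /boot/loader.conf
  hw.vga.textmode="1"    # keep vt(4) in text mode so BIOS/iLO redirection works
  #kern.vty="sc"         # or fall back to syscons entirely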
> On 12 Sep 2016, at 17:23, Jim Harris wrote:
>
> There is an updated DCT 3.0.2 at:
> https://downloadcenter.intel.com/download/26221/Intel-SSD-Data-Center-Tool
> which has a fix for this issue.
>
> Borja has already downloaded this update and confirmed it looks good so
> far. Posting the up
Hi,
I have noticed that the GENERIC kernel in 11-STABLE includes the PCI_HP option,
and the
hotplug bits seem to be present in the kernel, but I don’t see any userland
support for it.
Is it reasonably complete and, if so, am I missing something?
Thanks!
Borja.
> On 27 Sep 2016, at 15:48, Jan Henrik Sylvester wrote:
>
> On 09/27/2016 12:16, Borja Marcos wrote:
>> I have noticed that the GENERIC kernel in 11-STABLE includes the PCI_HP
>> option, and the
>> hotplug bits seem to be present in the kernel, but I don’t see any
> On 27 Sep 2016, at 17:51, Eric van Gyzen wrote:
>
>
> To my knowledge, all the necessary PCIe-layer code is present. However,
> that's just one layer: Many drivers will likely need changes in order
> to cope with surprise removal of their devices.
Thank you very much, that’s what I needed
> On 17 Oct 2016, at 02:44, Rostislav Krasny wrote:
>
> Hi,
>
> First of all I faced an old problem that I reported here a year ago:
> http://comments.gmane.org/gmane.os.freebsd.stable/96598
> Completely new USB flash drive flashed by the
> FreeBSD-11.0-RELEASE-i386-mini-memstick.img file kills
> On 25 Jan 2017, at 11:15, Kurt Jaeger wrote:
>
> I had some cases in the past where xterm was hanging, too -- but
> not with *that* high rate of problems.
Hmm that doesn’t sound too good. Potentially exploitable bug?
Borja.
Hi,
Since I’ve updated a machine to 11.1-STABLE I am seeing a rather unusual growth
of Wired memory.
Any hints on what might have changed from 11-RELEASE to 11.1-RELEASE and
11.1-STABLE?
Thanks!
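To see where the wired memory goes, a few snapshots over time usually narrow it
down; a sketch (the ZFS sysctls only apply if ZFS is in use on the box):

  # repeat periodically and diff the outputs
  vmstat -z > /tmp/zones.$(date +%s)
  vmstat -m > /tmp/malloc.$(date +%s)
  sysctl vm.stats.vm.v_wire_count vfs.zfs.arc_max kstat.zfs.misc.arcstats.size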
> On 11 Sep 2017, at 11:09, Borja Marcos wrote:
>
>
> Hi,
>
> Since I’ve updated a machine to 11.1-STABLE I am seeing a rather unusual
> growth of Wired memory.
>
> Any hints on what might have changed from 11-RELEASE to 11.1-RELEASE and
> 11.1-STABLE?
Evil
> On 11 Sep 2017, at 11:25, Borja Marcos wrote:
>
>> Since I’ve updated a machine to 11.1-STABLE I am seeing a rather unusual
>> growth of Wired memory.
>>
>> Any hints on what might have changed from 11-RELEASE to 11.1-RELEASE and
>> 11.1-STABLE?
vmstat
> On 13 Sep 2017, at 17:56, Dan Nelson via freebsd-stable
> wrote:
>
> 2017-09-12 1:27 GMT-05:00 Borja Marcos :
>>
>>
>>> On 11 Sep 2017, at 11:25, Borja Marcos wrote:
>>>
>>>> Since I’ve updated a machine to 11.1-STABLE I a
> On 10 Dec 2017, at 09:47, Eugene M. Zheganin wrote:
>
> Hi,
>
> would be really nice if the 11.2 and subsequent versions would come with the
> hw.vga.textmode=1 as the default in the installation media. Because you know,
> there's a problem with some vendors (like HP) whose servers are inc
> On 5 Apr 2018, at 17:00, Warner Losh wrote:
>
> I'm working on trim shaping in -current right now. It's focused on NVMe,
> but since I'm doing the bulk of it in cam_iosched.c, it will eventually be
> available for ada and da. The notion is to measure how long the TRIMs take,
> and only send th
> On 6 Apr 2018, at 10:41, Steven Hartland wrote:
>
> That is very hw and use case dependent.
>
> The reason we originally sponsored the project to add TRIM to ZFS was that in
> our case without TRIM the performance got so bad that we had to secure erase
> disks every couple of weeks as they
> On 6 Apr 2018, at 10:56, Borja Marcos wrote:
>
> P.S: Attaching the graphs that were lost.
And, silly me, repeating the same mistakes over and over.
http://frobula.crabdance.com:8001/publicfiles/OneBonnie.png
http://frobula.crabdance.com:8001/publicfiles/TwoBonniesTimes
Hello,
I am trying to use several Emulex OneConnect cards and the driver fails to
attach them.
oce0: mem
0x92c0-0x92c03fff,0x92bc-0x92bd,0x92be-0x92bf irq 38 at
device 0.7 on pci2
oce0: oce_mq_create failed - cmd status: 2
oce0: MQ create failed
device_attach: oce0 atta
Hi :)
I am setting up an Elasticsearch cluster using FreeBSD 12-STABLE. The servers have
64 GB of memory and I am running ZFS.
I was puzzled when despite having limited vfs.zfs.arc_max to 32 GB and
assigning a 16 GB heap (locked) to Elasticsearch,
and with around 10 GB of free memory, I saw the
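For reference, the ARC cap described above would be set with something like
this (value in bytes; 32 GB shown, adjust as needed):

  # /boot/loader.conf
  vfs.zfs.arc_max="34359738368"

and the current ARC size can be compared against it with
sysctl kstat.zfs.misc.arcstats.size.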
> On 30 Apr 2019, at 15:30, Michelle Sullivan wrote:
>
>> I'm sorry, but that may well be what nailed you.
>>
>> ECC is not just about the random cosmic ray. It also saves your bacon
>> when there are power glitches.
>
> No. Sorry no. If the data is only half to disk, ECC isn't going to sav
> On 1 May 2019, at 04:26, Michelle Sullivan wrote:
>
>mfid8 ONLINE 0 0 0
Anyway I think this is a mistake (mfid). I know, HBA makers have been insisting
on having their firmware get in the middle,
which is a bad thing.
The right way to use disks is to give ZFS ac
> On 3 May 2019, at 11:55, Pete French wrote:
>
>
>
> On 03/05/2019 08:09, Borja Marcos via freebsd-stable wrote:
>
>> The right way to use disks is to give ZFS access to the plain CAM devices,
>> not through some so-called JBOD on a RAID
>> controller
> On 8 May 2019, at 05:09, Walter Parker wrote:
> Would a disk rescue program for ZFS be a good idea? Sure. Should the lack
> of a disk recovery program stop you from using ZFS? No. If you think so, I
> suggest that you have your data integrity priorities in the wrong order
> (focusing on small,
> On 9 May 2019, at 00:55, Michelle Sullivan wrote:
>
>
>
> This is true, but I am of the thought, in alignment with the ZFS devs, that this
> might not be a good idea... if zfs can’t work it out already, the best thing
> to do will probably be to get everything off it and reformat…
That’s true, I