@ CPU_CLK_UNHALTED_CORE [271718 samples]
22.48% [61081] lock_delay @ /boot/kernel/kernel
 99.72% [60908] __mtx_lock_sleep
  67.69% [41230] zone_fetch_slab
   100.0% [41230] zone_import
    100.0% [41230] zone_alloc_item
     99.99% [41226] uma_zalloc_arg
tcsh is called by sshd to invoke scp: `tcsh -c scp -f Расписание.pdf`
At this point no LC_* variables are set.
tcsh reads .cshrc, which sets LC_CTYPE=ru_RU.UTF-8 and LC_COLLATE=ru_RU.UTF-8.
After this, the invocation of scp is incorrect:
7ab0 20 2d 66 20 c3 90 c2 a0 c3 90 c2 b0 c3 91 c2 81 | -f ...
On Thu, Oct 20, 2016 at 08:54:05AM -0600, Alan Somers wrote:
> On Wed, Oct 19, 2016 at 11:10 AM, Slawa Olhovchenkov wrote:
> > tcsh called by sshd for invocation of scp: `tcsh -c scp -f Расписание.pdf`
> > At this time no any LC_* is set.
> > tcsh read .cshrc and set
On Fri, Oct 21, 2016 at 11:02:57AM +0100, Steven Hartland wrote:
> > Mem: 21M Active, 646M Inact, 931M Wired, 2311M Free
> > ARC: 73M Total, 3396K MFU, 21M MRU, 545K Anon, 1292K Header, 47M Other
> > Swap: 4096M Total, 4096M Free
> >
> > PID USERNAME PRI NICE SIZE RES STATE C TIME
On Fri, Oct 21, 2016 at 04:51:36PM +0500, Eugene M. Zheganin wrote:
> Hi.
>
> On 21.10.2016 15:20, Slawa Olhovchenkov wrote:
> >
> > ZFS prefetch affects performance depending on workload (independent of RAM
> > size): for some workloads it wins, for some workloads it loses (for
On Fri, Oct 21, 2016 at 01:47:08PM +0100, Pete French wrote:
> > In bad case metadata of every file will be placed in random place of disk.
> > ls need access to metadata of every file before start of output listing.
>
> Umm, are we not talking about an issue where the directory no longer contains
% gdb ./edge_stat
GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is abso
7111, IIRC.
1. gdb7111 is badly integrated with 11 and up (it doesn't see kernel debug
symbols)
2. the debugger included in the base system can't handle core dumps.
> On Thu, Dec 8, 2016 at 06:53 Slawa Olhovchenkov wrote:
>
> > % gdb ./edge_stat
> >
> > GNU gdb 6.1.1 [FreeBSD]
> >
On Thu, Dec 08, 2016 at 04:52:35PM +, K. Macy wrote:
> kgdb7111 is what you use for kernel. It works fine for me.
kgdb7111 doesn't find the .debug files under /usr/lib/debug/;
gdb finds them.
> On Thu, Dec 8, 2016 at 08:29 Slawa Olhovchenkov wrote:
>
> > On Thu, Dec 08, 2016 at 04:01
On Thu, Dec 08, 2016 at 07:56:03PM +0200, Andriy Gapon wrote:
> On 08/12/2016 18:57, Slawa Olhovchenkov wrote:
> > kgdb7111 don't find .debug under /usr/lib/debug/
> > gdb found it.
>
> $ gdb7111 bhyve /var/coredumps/bhyve.0.0.core
> GNU gdb (GDB) 7.11.1 [GDB v7.11.
On Fri, Dec 16, 2016 at 06:08:34PM +0100, Fernando Herrero Carrón wrote:
> Hi everyone,
>
> A few months ago I got myself a new box and I have been happily running
> FreeBSD on it ever since. I noticed that the boot was not as fast as I had
> expected and I've realized that, while my disk is GPT
On Fri, Dec 16, 2016 at 11:43:18AM -0600, Eric van Gyzen wrote:
> On 12/16/2016 11:39, Slawa Olhovchenkov wrote:
> > On Fri, Dec 16, 2016 at 06:08:34PM +0100, Fernando Herrero Carrón wrote:
> >
> >> Hi everyone,
> >>
> >> A few months ago I got myself
On Sat, Dec 17, 2016 at 05:12:13PM +1100, Ian Smith wrote:
> On Fri, 16 Dec 2016 18:08:34 +0100, Fernando Herrero Carrón wrote:
> > Hi everyone,
>
> Hi,
>
> you've had plenty of helpful responses, but nobody has commented on:
>
> > My only reason for wanting to boot with UEFI is faster boot,
I have stable/11 and an E5 v4.
I don't see cpufreq support via sysctl:
# sysctl dev.cpu.0
dev.cpu.0.cx_method: C1/hlt
dev.cpu.0.cx_usage_counters: 61755
dev.cpu.0.cx_usage: 100.00% last 1us
dev.cpu.0.cx_lowest: C2
dev.cpu.0.cx_supported: C1/1/1
dev.cpu.0.%parent: acpi0
dev.cpu.0.%pnpinfo: _HID=AC
On Sun, Jan 15, 2017 at 10:40:42AM -0600, Dan Mack wrote:
> I have a system which builds world, kernel, install, boot, installworld,
> reboot several times per week. I just noticed that my build times
> increased from about (just cherry picking a couple build logs):
>
> Starting build of F
On Wed, Jan 18, 2017 at 02:48:19PM +0500, Eugene M. Zheganin wrote:
> Hi.
>
> Could someone recommend a decent 40Gbit adapter that are proven to be
> working under FreeBSD ? The intended purpose - iSCSI traffic, not much
> pps, but rates definitely above 10G. I've tried Supermicro-manufactured
>
I got a panic on recent stable:
Fatal trap 18: integer divide fault while in kernel mode
cpuid = 3; apic id = 06
instruction pointer = 0x20:0x81453230
stack pointer = 0x28:0xfe3e56f46480
frame pointer = 0x28:0xfe3e56f464a0
code segment = base 0x0
On Wed, Feb 01, 2017 at 06:52:01AM -0700, Jakub Lach wrote:
> Yes, HDD and card reader was USB mounted.
>
> This time, I've copied about 12G from 38G from internal SSD (UFS2) to
> HDD via USB (FAT32), then system panicked with CAM errors.
I have a similar issue on a laptop with a broken USB controller.
On Wed, Feb 01, 2017 at 07:25:18AM -0700, Jakub Lach wrote:
> I would think so, if only I would not clone the disk/system via the same USB
> port mere weeks ago.
> Moreover, sysutils/f3 fully writes and validates (checksums) 30G+ memory
> cards via the same port without problems.
In my case contr
On Wed, Feb 22, 2017 at 11:47:42PM +0300, Lev Serebryakov wrote:
> Hello Freebsd-stable,
>
> Now if you build zfs.ko with -O0, it panics on boot.
>
> If you use the default optimization level, a lot of fbt DTrace probes are
> missing.
Is this related to http://llvm.org/bugs/show_bug.cgi?id=
On Tue, Mar 07, 2017 at 10:19:35AM +0800, Erich Dollansky wrote:
> Hi,
>
> I wonder about the slow speed of my machine while top shows ample
> inactive memory:
>
> last pid: 85287; load averages: 2.56, 2.44, 1.68
> up 6+10:24:45 10:13:36 191 processes: 5 running, 186 sleeping
> CPU 0: 47.1%
On Wed, Mar 08, 2017 at 09:00:34AM +0500, Eugene M. Zheganin wrote:
> Hi.
>
> Some have probably seen this already -
> http://lists.dragonflybsd.org/pipermail/users/2017-March/313254.html
>
> So, could anyone explain why FreeBSD was owned that much. Test is split
> into two parts, one is ngin
I see lock contention caused by aio reads (of the same file segment from
multiple processes simultaneously):
07.74% [26756] lock_delay @ /boot/kernel/kernel
 92.21% [24671] __mtx_lock_sleep
  52.14% [12864] vm_page_enqueue
   100.0% [12864] vm_fault_hold
    87.71% [11283] vm
I have a strange issue on stable/10:
# devinfo -v
nexus0
apic0
ram0
acpi0
[...]
pcib0 pnpinfo _HID=PNP0A08 _UID=0 at handle=\_SB_.PCI0
pci0
hostb0 pnpinfo vendor=0x8086 device=0xd130 subvendor=0x1014
subdevice=0x03ce class=0x06 at slot=0 function=0 dbsf=pci0:0:0:0
> re the refcount can be inc/dec (obviously >1, i.e. not in
> a state where you can dec to 0) via atomics, without grabbing a lock.
> That'll make this particular use case much faster.
>
> (dfbsd does this.)
I can try your patch.
> -a
>
>
> On 21 March 2017 at 0
On Thu, Jul 27, 2017 at 04:29:52PM +0300, Alexander Motin wrote:
> Hi Mike,
>
> On 27.07.2017 16:21, Mike Tancsa wrote:
> > I noticed quite a few MFCs to RELENG_11 around zfs yesterday and today.
> > First off, thank you for all these fixes/enhancements! Of the some 60
> > MFCs, are there any
On Tue, Aug 08, 2017 at 10:31:33AM +0200, Hans Petter Selasky wrote:
> Here is the conclusion:
>
> The following code is going in an infinite loop:
>
>
> > for (;;) {
> >         TW_RLOCK(V_tw_lock);
> >         tw = TAILQ_FIRST(&V_twq_2msl);
> >         if (tw =
On Tue, Aug 08, 2017 at 01:49:08PM +0200, Hans Petter Selasky wrote:
> On 08/08/17 13:33, Slawa Olhovchenkov wrote:
> > TW_RUNLOCK(V_tw_lock);
> > and
> > if (INP_INFO_TRY_WLOCK(&V_tcbinfo)) {
> >
> > `inp` can be invalidated, freed and this pointer may be
On Mon, Mar 19, 2018 at 03:53:03PM +0100, Patrick M. Hausen wrote:
> Hi all,
>
> any ideas why a current RELENG_11_1 system with ixl(4)
> onboard interfaces might not negotiate with a switch that
> has only fast ethernet?
>
> status: no carrier on the host
> li
I upgraded the system to the latest -STABLE and now see a kernel crash when:
- loading virtualbox modules built on 11.1-RELEASE-p6
- loading the nvidia module built on 11.1-RELEASE-p6 and starting xdm
Is this expected? I mean loading modules built on 11.1-RELEASE
on any 11.1-STABLE.
On Wed, Mar 28, 2018 at 03:39:46PM +0200, Gregory Byshenk wrote:
> On Wed, Mar 28, 2018 at 04:09:04PM +0300, Slawa Olhovchenkov wrote:
> > I am upgrade system to latest -STABLE and now see kernel crash:
> >
> > - loading virtualbox modules build on 11.1-RELEASE-p6
> &
On Wed, Mar 28, 2018 at 05:13:48PM +0200, Gregory Byshenk wrote:
> On Wed, Mar 28, 2018 at 05:35:51PM +0300, Slawa Olhovchenkov wrote:
> > On Wed, Mar 28, 2018 at 03:39:46PM +0200, Gregory Byshenk wrote:
> > >
> > > Did you rebuild your virtualbox and nvidia modules fo
On Wed, Mar 28, 2018 at 10:29:10AM -0500, Eric van Gyzen wrote:
> On 03/28/2018 08:09, Slawa Olhovchenkov wrote:
> > I am upgrade system to latest -STABLE and now see kernel crash:
> >
> > - loading virtualbox modules build on 11.1-RELEASE-p6
> > - loading nvidia modu
On Wed, Mar 28, 2018 at 11:25:08PM -0700, Kevin Oberman wrote:
> > > > r325665 is the previous point and is good.
> > > > r331615 crashed.
> > > > Can I use some script for bisect?
> > >
> > > I'm not aware of a script for this. The only tool I've used is "git
> > > bisect", which is very handy if you
# vmstat -m|grep temp
Type   InUse  MemUse              HighUse  Requests  Size(s)
temp      60  18014398509481829K        -  32350974  16,32,64,128,256,512,1024,2048,4096,8192,16384,32768,65536
Is this normal?
SVN rev: r328463
On Thu, Jun 07, 2018 at 07:04:29PM +0930, Shane Ambler wrote:
> On 07/06/2018 16:09, Peter Jeremy wrote:
> > I've noticed that 11-stable/amd64 has been wiring seemingly excessive
> > amounts of RAM for some time (the problem goes back at least 6 months).
> > This extends to getting ENOMEM errors f
On Wed, Jun 20, 2018 at 07:37:20PM +0200, Miroslav Lachman wrote:
> > %busy comes from the devstat layer. It's defined as the percent of the
> > time over the polling interval in which at least one transaction was
> > awaiting completion by the lower layers. It's an imperfect measure of
> > how