On 12/01/2017 12:42, Dustin Wenz wrote:
> Here's the top -uS output from a test this morning:
> 
> last pid: 57375;  load averages:  8.29,  7.02,  4.05    up 38+22:19:14  11:28:25
> 68 processes:  2 running, 65 sleeping, 1 waiting
> CPU:  0.1% user,  0.0% nice, 40.4% system,  0.4% interrupt, 59.1% idle
> Mem: 2188K Active, 4K Inact, 62G Wired, 449M Free
> ARC: 7947M Total, 58M MFU, 3364M MRU, 1000M Anon, 2620M Header, 904M Other
>      4070M Compressed, 4658M Uncompressed, 1.14:1 Ratio
> Swap: 112G Total, 78M Used, 112G Free, 4K In, 12K Out
> 
>   PID    UID    THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
>    11      0     24 155 ki31     0K   384K RUN     0    ??? 1446.82% idle
>     0      0    644 -16    -     0K 10304K swapin 21 554:59 492.45% kernel
> 57333      0     30  20    0 17445M  1325M kqread  9  16:38 357.42% bhyve
>    15      0     10  -8    -     0K   192K arc_re 20  80:54  81.55% zfskern
>     5      0      6 -16    -     0K    96K -       5  12:35  11.50% cam
>    12      0     53 -60    -     0K   848K WAIT   21  74:35   9.40% intr
> 41094      0     30  20    0 17445M 14587M kqread 17 301:29   0.39% bhyve
> 
> Dec  1 11:29:31 <kern.err> service014 kernel: pid 57333 (bhyve), uid 0, was 
> killed: out of swap space
> Dec  1 11:29:31 <kern.err> service014 kernel: pid 69549 (bhyve), uid 0, was 
> killed: out of swap space
> Dec  1 11:29:31 <kern.err> service014 kernel: pid 41094 (bhyve), uid 0, was 
> killed: out of swap space
> 
> 
> This was with three VMs running, but only one of them was doing any IO. Note 
> that the whole machine hung for about 60 seconds before the VMs were shut 
> down and memory recovered. That's why the top output is over a minute older 
> than the kill messages (top had stopped refreshing).
> 
> What I'm suspicious of is that almost all of the physical memory is wired. If 
> that is bhyve memory, why did it not page out?
> 
> 
>       - .Dustin
> 
> 
>> On Nov 30, 2017, at 5:15 PM, Dustin Wenz <dustinw...@ebureau.com> wrote:
>>
>> I'm using chyves on FreeBSD 11.1 RELEASE to manage a few VMs (guest OS is 
>> also FreeBSD 11.1). Their sole purpose is to house some medium-sized 
>> Postgres databases (100-200GB). The host system has 64GB of real memory and 
>> 112GB of swap. I have configured each guest to only use 16GB of memory, yet 
>> while doing my initial database imports in the VMs, bhyve will quickly grow 
>> to use all available system memory and then be killed by the kernel:
>>
>>      kernel: swap_pager: I/O error - pageout failed; blkno 1735,size 4096, 
>> error 12
>>      kernel: swap_pager: I/O error - pageout failed; blkno 1610,size 4096, 
>> error 12
>>      kernel: swap_pager: I/O error - pageout failed; blkno 1763,size 4096, 
>> error 12
>>      kernel: pid 41123 (bhyve), uid 0, was killed: out of swap space
>>
>> The OOM condition seems related to doing moderate IO within the VM, though 
>> nothing within the VM itself shows high memory usage. This is the chyves 
>> config for one of them:
>>
>>      bargs                      -A -H -P -S
>>      bhyve_disk_type            virtio-blk
>>      bhyve_net_type             virtio-net
>>      bhyveload_flags
>>      chyves_guest_version       0300
>>      cpu                        4
>>      creation                   Created on Mon Oct 23 16:17:04 CDT 2017 by 
>> chyves v0.2.0 2016/09/11 using __create()
>>      loader                     bhyveload
>>      net_ifaces                 tap51
>>      os                         default
>>      ram                        16G
>>      rcboot                     0
>>      revert_to_snapshot
>>      revert_to_snapshot_method  off
>>      serial                     nmdm51
>>      template                   no
>>      uuid                       8495a130-b837-11e7-b092-0025909a8b56
>>
>>
>> I've also tried using different bhyve_disk_types, with no improvement. How 
>> is it that bhyve can use far more memory than I'm specifying?
>>
>>      - .Dustin
> 

'Wired' memory specifically means memory that cannot be paged out. It is
not bhyve; it is ZFS.

Please lower your vfs.zfs.arc_max to about 10-12 GB instead of the 60+ GB
it is at now.

You might also want to double your vfs.zfs.arc_free_target to make ZFS
give up memory more easily.
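For example, something like the following (a sketch only; the byte value for
a 12 GB cap is illustrative, and the arc_free_target number shown is a
placeholder for twice whatever your system currently reports):

```shell
# Cap the ARC at 12 GB (value in bytes). On FreeBSD 11 this sysctl can be
# changed at runtime, though already-wired ARC memory shrinks only gradually:
sysctl vfs.zfs.arc_max=12884901888

# Inspect the current free target (in pages), then set roughly double it.
# The number below is illustrative; substitute 2x the value your host prints:
sysctl vfs.zfs.arc_free_target
sysctl vfs.zfs.arc_free_target=173740

# To make the ARC cap persistent across reboots, add to /boot/loader.conf:
#   vfs.zfs.arc_max="12884901888"
```

With three 16 GB guests on a 64 GB host, a cap in that range leaves room for
the VMs' wired guest memory plus the host itself.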

-- 
Allan Jude
_______________________________________________
freebsd-virtualization@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-virtualization