On Tuesday, August 09, 2016 09:13:31 AM james wrote:
On 08/09/2016 07:42 AM, Michael Mol wrote:
> On Monday, August 08, 2016 10:45:09 PM Alan McKinnon wrote:
>> On 08/08/2016 19:20, Michael Mol wrote:
>>> On Monday, August 08, 2016 06:52:15 PM Alan McKinnon wrote:
>>>> On 08/08/2016 17:02, Michael Mol wrote:
>>> snip <<<
>>
>> KMail is the lost child of KDE for many months now, I reckon this
>> situation is just going to get worse and worse. I know for myself my
>> mail problems ceased the day I dumped KMail4 for claws and/or
>> thunderbird
>
> That's really, really sad.
>
> I used Thunderbird for years, but I eventually had to stop when it would,
> averaging once a month (though sometimes not for a couple months,
> sometimes a couple times a week) explode in memory consumption and drive
> the entire system unresponsively into swap.
>
> I've tried claws from time to time due to other annoyances with
> Thunderbird, but I kept switching back. Not because I liked Tbird, but
> (IIRC) because of stability issues I had with claws.
>
> Even with the bugs it has, Kontact and Akonadi have been the most reliable
> mail client I've used in the last year. When it gives me problems, I know
> why, and I can address it. (Running a heavily tuned MySQLd instance
> behind Akonadi, for example...)
>
> I wish someone would pay me to fix this stuff; I'd be able to spend the
> time on it.
Perhaps an experiment. Locate some folks who know how to promote
'crowd funding'. Then propose a project like this, targeted at businesses
and users, so everyone can pitch in. In fact, quite a few beloved open
source projects could benefit if the idea of crowd funding took hold
in open source software. Perhaps one of the foundations deeply involved in
the open source movement would get behind the idea?
KDE is very popular, so the concept or something similar might just have
legs, even if it only funds a series of grad students or young
programmers to maintain good FOSS projects?
A wonderful thought. I rather expect KDE is already doing this, but if
not, they ought to. (I'm sure someone who commits code to KDE reads this
list...)
Certainly wouldn't cover someone like me who has a family to support,
but still.
As a side note, I put 32 GB of RAM in my system and at times it is still
laggy with little processor load, while htop shows less than 30% RAM usage.
What tools do you use to track down memory-management issues?
I use Zabbix extensively at work, and have the Zabbix agent on my
workstation reporting back various supported metrics. There's a great
deal you can use (and--my favorite--abuse) Zabbix for, especially once
you understand how it thinks.
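As a concrete (and purely illustrative) example of that kind of abuse, you
can feed the kernel's own writeback counters into Zabbix with a
UserParameter in zabbix_agentd.conf; the key name below is made up, and the
awk one-liner just pulls the Dirty line out of /proc/meminfo and converts
kB to bytes:

  # /etc/zabbix/zabbix_agentd.conf -- illustrative item, key name is arbitrary
  UserParameter=custom.vm.dirty_bytes,awk '/^Dirty:/ {print $2 * 1024}' /proc/meminfo

Graph that alongside the vm.dirty_* settings below and you can see exactly
when writeback pressure is building.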
vm.dirty_background_bytes ensures that any data (i.e. from mmap or
fwrite, not from swapping) waiting to be written to disk *starts*
getting written to disk once you've got at least the configured amount
(1MB) of data waiting. (If you've got a disk controller with
battery-backed or flash-backed write cache, you might consider
increasing this to some significant fraction of your write cache. I.e.
if you've got a 1GB FBWC with 768MB of that dedicated to write cache,
you might set this to 512MB or so. Depending on your workload. I/O
tuning is for those of us who enjoy the dark arts.)
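If you want to poke at this at runtime, something along these lines works
(as root); the 1MB value just mirrors the figure above:

  # show the current value; 0 means the vm.dirty_background_ratio
  # counterpart is in effect instead
  sysctl vm.dirty_background_bytes
  # start background writeback once ~1MB of dirty data has accumulated
  sysctl -w vm.dirty_background_bytes=1048576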
vm.dirty_bytes says that once you've got the configured amount (10MB) of
data waiting to be written to disk, then no more asynchronous I/O is permitted
until you have no more data waiting; all outstanding writes must be
finished first. (My rule of thumb is to have this between 2-10 times the
value of vm.dirty_background_bytes. Though I'm really trying to avoid it
being high enough that it could take more than 50ms to transfer to disk;
that way, any stalls that do happen are almost imperceptible.)
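To put numbers on that rule of thumb, the figures above would look roughly
like this in /etc/sysctl.conf (or a drop-in under /etc/sysctl.d/); treat it
as a sketch to adapt, not a recommendation for your hardware:

  # start background writeback once ~1MB is dirty
  vm.dirty_background_bytes = 1048576
  # force blocking writeback once ~10MB is dirty (10x the background value)
  vm.dirty_bytes = 10485760

At, say, 200MB/s of sequential throughput, 10MB drains in about 50ms, which
is how that stall budget works out.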
You want vm.dirty_background_bytes to be high enough that your hardware
doesn't spend its time powered on if it doesn't have to be, and so that
your hardware can transfer data in large, efficient, streamable chunks.
You want vm.dirty_bytes to be sufficiently higher than
vm.dirty_background_bytes that your hardware has enough time to spin up and
transfer data before you put the hammer down and say, "all right, nobody
else gets to queue writes until all the waiting data has reached disk."
You want vm.dirty_bytes *low* enough that when you *do* have to put that
hammer down, it doesn't interfere with your perceptions of a responsive
system. (And in a server context, you want it low enough that things
can't time out--or be pushed into timing out--waiting for it. Treat your
own attention as something that times out while waiting for things to
respond, and the same principle applies...)
Now, vm.swappiness? That's a weighting factor for how quickly the kernel
should try moving memory to swap to be able to speedily respond to new
allocations. Me, I prefer the kernel to not preemptively move
lesser-used data to swap, because that's going to be a few hundred
megabytes worth of data all associated with one application, and it'll
be a real drag when I switch back to the application I haven't used for
half an hour. So I set vm.swappiness to 0, to tell the kernel to only
move data to swap if it has no other alternative while trying to satisfy
a new memory allocation request.
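For anyone who wants to try the same thing, it's a single line in
/etc/sysctl.conf (plus 'sysctl -w vm.swappiness=0' to apply it without a
reboot); as always, whether it actually helps depends on your workload:

  # only move data to swap when there's no other way to satisfy an allocation
  vm.swappiness = 0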