On Tue, 10 Nov 2015 08:50:50 +0100 Christian Kivalo <ml+dove...@valo.at> wrote:
> Hi,
>
> On 2015-11-10 01:44, mancyb...@gmail.com wrote:
> > Hello dear list,
> > I've recently discovered 'doveadm stats' and I'm trying to use
> > "doveadm stats dump user" and "doveadm stats dump session"
> > to understand which pop/imap users put the most stress on the hard
> > disks.
> >
> > My problem is that some users refuse to delete their emails from the
> > server, so they keep 20GB of maildir files there. The webmail
> > (roundcube) takes forever to open the inbox, the imap searches take
> > forever, and meanwhile all the other users wait.
> > (I already tried roundcube + memcache(d), but it didn't help.)
>
> What is forever in your context?
> I'm using roundcube and a folder with about 78k mails opens in < 1 sec
> unsorted. A folder with about 37k messages from a mailing list, with
> thread sort, takes < 3 sec. My roundcube shows 200 messages per page by
> default.
> On a side note, are you using an imap proxy for roundcube? It doesn't
> help with your dovecot problem, but it speeds up roundcube.
>
> To speed up imap searches I can recommend implementing fts-solr with
> dovecot (or maybe fts-elasticsearch; I want to try that, but solr
> works...). That will speed up your searches once the mailboxes are
> indexed.
>
> > So my problem is not the storage usage itself:
> > I don't care if a user gets tons of emails with big attachments;
> > my problem is when a user opens / searches an imap folder with more
> > than 10K mails and iostat utilization goes to 100% for minutes.
>
> Dovecot should be very quick to open even folders with a huge number of
> files, thanks to its indexes.
>
> I'm unable to reproduce any significant numbers in iostat when accessing
> large mail folders with roundcube.
>
> What's your configuration, filesystem, ...?
>
> > So I've enabled dovecot's stats and I'm enjoying "doveadm stats top",
> > "stats-top.pl" and "doveadm stats dump user/session".
> > But talking about "doveadm stats dump user" and its output fields:
> >
> > user reset_timestamp last_update num_logins num_cmds
> > user_cpu sys_cpu min_faults maj_faults vol_cs invol_cs
> > disk_input disk_output read_count read_bytes
> > write_count write_bytes mail_lookup_path mail_lookup_attr
> > mail_read_count mail_read_bytes mail_cache_hits
> >
> > I'm not sure which of those fields can help me,
> > and I can't find any relevant documentation.
> >
> > So here are my questions:
> >
> > 1. is there documentation for those fields and for 'doveadm stats'
> > in general?
> > 2. what's the difference between disk_output, read_bytes, read_count
> > and mail_read_bytes?
> > 3. which of those fields is, in your opinion, most representative of
> > the workload that gives me problems?
> > 4. which settings do I need to store 1 week's worth of stats?
> >
> > I'm currently using the 'standard' values:
> >
> > stats_refresh = 30 secs
> > stats_track_cmds = yes
> > stats_memory_limit = 16 M
> > stats_command_min_time = 1 mins
> > stats_domain_min_time = 12 hours
> > stats_ip_min_time = 12 hours
> > stats_session_min_time = 15 mins
> > stats_user_min_time = 1 hours
> >
> > Can you please tell me the correct parameters to store 1 week of
> > stats?
>
> For stats somebody else has to jump in; I have only enabled the plugin
> to see what I could get out of it, but haven't made any real use of it.
>
> Please share your doveconf -n output.
>
> > Thank you,
> > Mike
>
> regards
> christian

By 'forever' I mean more than 1 minute.

So there is no documentation / manual for 'doveadm stats'?
Do I have to read the source to know which field does what?
I mean the output fields of "doveadm stats dump user":

user reset_timestamp last_update num_logins num_cmds
user_cpu sys_cpu min_faults maj_faults vol_cs invol_cs
disk_input disk_output read_count read_bytes
write_count write_bytes mail_lookup_path mail_lookup_attr
mail_read_count mail_read_bytes mail_cache_hits

What's the difference between disk_output, read_bytes, read_count
and mail_read_bytes?
(Sorry to restate the same question, just making sure about it.)

Thank you,
Mike
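
P.S. Regarding the fts-solr suggestion: is the setup below roughly the
right starting point? This is only my reading of the wiki, not tested,
and the Solr URL is just a placeholder for whatever instance I end up
running:

mail_plugins = $mail_plugins fts fts_solr

plugin {
  fts = solr
  fts_solr = url=http://localhost:8983/solr/
}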
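
Also, about question 4 (keeping 1 week of stats): this is what I was
planning to try, but I don't know whether raising the *_min_time values
is enough or whether stats_memory_limit is the real limit, so please
correct me if these values make no sense:

stats_memory_limit = 256 M
stats_domain_min_time = 1 weeks
stats_ip_min_time = 1 weeks
stats_session_min_time = 1 weeks
stats_user_min_time = 1 weeks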
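
And in the meantime, this is how I'm trying to spot the heaviest writers
in the dump. It assumes the output of "doveadm stats dump user" is
TAB-separated with the header on the first line, and that disk_output is
the 13th column of the header above, which is exactly the part I'm not
sure about:

doveadm stats dump user \
  | awk -F'\t' 'NR > 1 { printf "%s\t%s\n", $13, $1 }' \
  | sort -rn \
  | head -20

That should print disk_output and the user name, highest first.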