Stan Hoeppner wrote:
I should point out, though, that parallel SCSI (U320) is pretty much a
dead technology at this point. AFAIK, no vendor has shipped a new
parallel SCSI disk line (only warranty replacements) for a number of
years now. It has been superseded by Serial Attached SCSI (SAS).
Patrick Westenberg put forth on 12/15/2010 7:28 AM:
> Won't 15k U320 SCSI disks also be faster than average SATA disks for the
> indexes?
Yes. A faster spindle speed allows for greater random IOPS. Index file
reads/writes are random IOPS, and become more random with greater
concurrency, i.e. more simultaneous users hitting their mailboxes.
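(As a rough illustration, using generic drive figures rather than anything measured in this thread: a 15k rpm drive averages about 2 ms of rotational latency (60,000 ms / 15,000 rpm / 2) plus roughly 3.5 ms of seek, i.e. ~5.5 ms per random I/O, or around 180 IOPS; a 7,200 rpm SATA drive at ~4.2 ms latency plus ~8.5 ms seek lands nearer 75-80 IOPS, so the 15k spindle gives a bit over twice the random I/O.)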
On 15/12/10 14:28, Patrick Westenberg wrote:
Won't 15k U320 SCSI disks also be faster than average SATA disks for
the indexes?
I am using two RAID 5 arrays of 8 SAS 15k rpm disks each for mailboxes & indexes.
I am evaluating migrating the indexes to a single RAID 1+0 array of 8 SAS 15k rpm disks.
Regards
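(For comparison purposes only, generic RAID arithmetic rather than measurements from this SAN: a small random write costs 4 disk I/Os on RAID 5 (read data, read parity, write data, write parity) but only 2 on RAID 1+0 (one write per mirror side). With 8 spindles at ~180 IOPS each, that is roughly 8 x 180 / 4 = 360 random write IOPS for the RAID 5 set versus 8 x 180 / 2 = 720 for RAID 1+0, so moving the write-heavy indexes to the RAID 1+0 set should roughly double their random write headroom.)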
Won't 15k U320 SCSI disks also be faster than average SATA disks for the
indexes?
Javier de Miguel Rodríguez put forth on 12/14/2010 6:15 AM:
> I attach a screenshot of the performance of the LeftHand: average 15
> MB/s, 1,700 IOPS. The highest load (today) is ~62 MB/s, with a whopping 9,000
> IOPS, much above the theoretical IOPS of 2 RAID 5 arrays of 8 disks each (SAS 15k),
Javier de Miguel Rodríguez put forth on 12/13/2010 3:26 AM:
> Can you give me (off-list if you desire) more info about your setup?
> I am interested in the number and type of spindles you are using. We are
> using LeftHand because of their real time replication capabilities,
> something very
Javier de Miguel Rodríguez put forth on 12/13/2010 1:26 AM:
> Sadly, Red Hat Enterprise Linux 5 does not natively support XFS. I
> can install it via CentOSPlus, but we need Red Hat support if something
> goes VERY wrong. Red Hat Enterprise Linux 6 supports XFS (and gives me
> Dovecot 2.0),
On 12/12/2010 00:49, Stan Hoeppner wrote:
>
> Since Javier is looking for ways to decrease I/O load on the SAN, not
> necessarily increase Dovecot performance, I think putting the index
> files on a ramdisk is the best thing to try first. It may not be a silver
> bullet. If he's still got spare memory
On Sat, 2010-12-11 at 23:05 +0700, a...@test123.ru wrote:
> Does anybody know concrete answers? Let's consider IMAP and LDA and forget
> POP3.
>
> 1. Is migration to Dovecot 2.0 a good idea if I want to decrease I/O?
That alone makes no difference.
> 2. Can mdbox help decrease IO?
Hopefully! No
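For anyone following along, switching a mailbox to mdbox under Dovecot 2.0 is mostly a mail_location change; this is only a sketch (the path is an example, and existing maildirs would still need to be converted, e.g. with dsync, which isn't shown):

  # dovecot.conf (2.0): store mail in mdbox, multiple messages per file
  mail_location = mdbox:~/mdbox

Because mdbox packs many messages into larger files and treats its index as the authoritative record, the per-message create/rename/unlink churn of maildir goes away, which is where the I/O saving would come from.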
On Mon, 2010-12-13 at 10:26 +0100, Javier de Miguel Rodríguez wrote:
> We can throw more hardware at this; let's see if we get better results
> using memory-based indexes (via a ramdisk). Zlib compression on indexes
> should be great for this.
Isn't it possible for Linux to compress the ramdisk?
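A minimal sketch of the ramdisk approach being discussed, assuming a Linux host with spare RAM; the mount point, size and fstab entry are made-up examples, and the save-to-disk-on-shutdown / restore-on-boot step mentioned in the thread still has to be added around it:

  # /etc/fstab: a 2 GB tmpfs for Dovecot index files (tmpfs lives in RAM/swap)
  tmpfs  /var/dovecot-indexes  tmpfs  size=2g,mode=0755  0 0

  # dovecot.conf: leave the mail on the SAN, point only the indexes at the ramdisk
  mail_location = maildir:~/Maildir:INDEX=/var/dovecot-indexes/%u

Note that stock tmpfs is not compressed, which I suspect is what the question above is getting at; compressing the RAM-backed indexes would be a separate experiment.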
On 13/12/10 10:16, Brad Davidson wrote:
On Dec 12, 2010, at 23:26, Javier de Miguel Rodríguez wrote:
My SAN(s) (HP LeftHand Networks) do not support SSD, though. But I have
several LeftHand nodes, some of them with raid5, others with raid 1+0.
Maildirs+indexes are now in raid5, maybe I can separate the indexes to raid 1+0.
On Dec 12, 2010, at 23:26, Javier de Miguel Rodríguez wrote:
>
>My SAN(s) (HP LeftHand Networks) do not support SSD, though. But I have
> several LeftHand nodes, some of them with raid5, others with raid 1+0.
> Maildirs+indexes are now in raid5, maybe I can separate the indexes to raid 1+0.
Thank you for your responses, Stan; I reply to you below.
For that many users I'm guessing you can't physically stuff enough RAM
into the machines in your ESX cluster to use a ramdisk for the index
files, and if you could, you probably couldn't, or wouldn't want to,
afford the DIMMs required to me
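(To put a hypothetical number on that, using an assumed per-user figure rather than anything from the thread: with 25,000+ users, even a modest 10 MB of index data per user works out to roughly 250 GB that would have to live in the ESX hosts' RAM on top of everything else, and Eric's 110 MB largest-user figure elsewhere in the thread suggests heavy users would push it much higher.)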
Javier de Miguel Rodríguez put forth on 12/12/2010 1:26 PM:
>
> Thank you very much for all the responses in this thread. Now I have
> more questions:
>
> - I have "slow" I/O (about 3,500-4,000 IOPS, measured via
> imapsync). If I enable zlib compression in my maildirs, that should
> lower the number of IOPS
Eric Rostetter put forth on 12/12/2010 9:08 PM:
> Quoting Stan Hoeppner:
>
>> Also, due to the potential size of the index files (mine alone are 276
>> MB on an 877 MB mbox), you'll need to do some additional research to see
>> if this is a possibility for you.
>
> That's rather high based on my users...
Quoting Javier de Miguel Rodríguez:
- I understand that indexes should go to the fastest storage I
own. Somebody talked about storing them in a ramdisk and then backing
them up to disk on shutdown. I have several questions about that:
- In my setup I have 25,000+ users, alm
Quoting Stan Hoeppner:
Also, due to the potential size of the index files (mine alone are 276
MB on an 877 MB mbox), you'll need to do some additional research to see
if this is a possibility for you.
That's rather high based on my users... My largest user has 110M of indexes.
The next highest
Thank you very much for all the responses in this thread. Now I have
more questions:
- I have "slow" I/O (about 3,500-4,000 IOPS, measured via
imapsync). If I enable zlib compression in my maildirs, that should
lower the number of IOPS (less to read, less to write, fewer IOPS, more
CPU).
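For reference, the compression being discussed is Dovecot's zlib plugin. A minimal sketch for Dovecot 2.0 (in 1.2 the plugin can read compressed mails, but as far as I recall writing them compressed at delivery time means gzipping the files yourself, so treat this as a 2.0 example):

  # dovecot.conf (2.0): read compressed mails and save new mail gzipped
  mail_plugins = $mail_plugins zlib
  plugin {
    zlib_save = gz
    zlib_save_level = 6
  }

Compression trades CPU for I/O exactly as described above: fewer bytes read and written per message, at the cost of compressing on save and decompressing on every fetch.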
On 12.12.2010, at 9.39, Stan Hoeppner wrote:
> mail_location = maildir:~/Maildir:INDEX=MEMORY
>
> The ":INDEX=MEMORY" disables writing the index files to disk, and as the
> name implies, I believe, simply keeps indexes in memory.
I think maybe I shouldn't have called it INDEX=MEMORY, but rather m
Patrick Westenberg put forth on 12/11/2010 5:12 AM:
> Stan Hoeppner schrieb:
>
>> So, either:
>>
>> 2. Move indexes to memory
>
> What steps have to be done and what will the configuration look like
> to have your indexes in memory?
Regarding Dovecot 1.2.x, for maildir, I believe it would be something like this:
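The line in question shows up further down in this digest, quoted back by Timo; for 1.2.x with maildir it was:

  mail_location = maildir:~/Maildir:INDEX=MEMORY

Per Stan's description in that quote, :INDEX=MEMORY disables writing the index files to disk and simply keeps the indexes in memory for the session (Timo's follow-up suggests the name is a little misleading, so the exact semantics are worth double-checking).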
Eric Rostetter put forth on 12/11/2010 9:48 AM:
> Well, it is true I know nothing about VMware/ESX. I know in my virtual
> machine setups, I _can_ give the virtual instances access to devices which
> are not used by other virtual instances. This is what I would do. Yes,
> it is still virtualized
Hi,
I am running a fair amount of stored e-mail on maildirs (10 GB+) in 846
folders that get a fair amount of searching, with 20+ users accessing them,
mostly via IMAP and a few POP3 accounts. I am running these on a Linode Xen
server and have yet to hit any hard limits of "bare metal". User and V
Quoting a...@test123.ru:
Guys. Who is interested in obvious reasoning?
The same people who are interested in vague questions?
Let me restate the original, concrete question. I am also interested.
We can "exchange" CPU & RAM to minimize disk i/o.
Should we change to Dovecot 2.0?
Maybe mdbox can help us?
Guys. Who is interested in obvious reasoning? More memory, bare metal, depends
on your needs, blah-blah-blah. Let me restate the original, concrete question. I am
also interested.
> We can "exchange" CPU & RAM to minimize disk i/o.
> Should we change to Dovecot 2.0?
> Maybe mdbox can help us?
> Maybe e
Quoting Stan Hoeppner:
Eric, you missed up above that he's running Dovecot on an ESX cluster, so
SSDs or any hardware dedicated to Dovecot isn't possible for the OP.
Well, it is true I know nothing about VMware/ESX. I know in my virtual
machine setups, I _can_ give the virtual instances access to devices which
are not used by other virtual instances.
Eric Rostetter put forth on 12/10/2010 10:11 PM:
> Quoting javierdemig...@us.es:
>
>> in our VMware ESX cluster. We want to minimize disk I/O; what config
>> options should we use? We can "exchange" CPU & RAM to minimize disk I/O.
>
> Depends on what you are doing -- POP3, IMAP, both, deliver or some other LDA?
Quoting javierdemig...@us.es:
in our VMware ESX cluster. We want to minimize disk I/O; what config
options should we use? We can "exchange" CPU & RAM to minimize disk I/O.
Depends on what you are doing -- POP3, IMAP, both, deliver or some other
LDA? Do you care if the indexes are lost on reboot?
Hello
We are using Dovecot 1.2.x. In our setup we will have 1,200
concurrent IMAP users (maildirs), and we have 2x RAID 5 arrays of SAS 15k disks
mounted via iSCSI. The Dovecot server (RHEL 5 x64) is a virtual machine
in our VMware ESX cluster. We want to minimize disk I/O; what config
options should we use?