Sven Hartge wrote:
> I am currently in the planning stage for a "new and improved" mail
> system at my university.
OK, executive summary of the design ideas so far:
- deployment of X (starting with 4, but easily scalable) virtual servers
on VMware ESX
- storage will be backed by an RDM on our
El 09/01/12 14:50, Phil Turmel escribió:
I've been following this thread with great interest, but have no advice to offer.
The content is entirely appropriate, and appreciated. Don't be embarrassed
by your enthusiasm, Stan.
+1
On 1/9/2012 7:48 AM, Sven Hartge wrote:
> It seems my initial idea was not so bad after all ;)
Yeah, but you didn't know how "not so bad" it really was until you had
me analyze it, flesh it out, and confirm it. ;)
> Now I "just" need o
> built a little test setup, put some dummy users on it an
Timo Sirainen wrote:
> On 9.1.2012, at 22.13, Sven Hartge wrote:
>> Timo Sirainen wrote:
>>> On 9.1.2012, at 21.45, Sven Hartge wrote:
>> | location = imapc:~/imapc-shared
What is the syntax of this location? What does "imapc-shared" do in
this case?
>>
>>> It's the direc
On 9.1.2012, at 22.13, Sven Hartge wrote:
> Timo Sirainen wrote:
>> On 9.1.2012, at 21.45, Sven Hartge wrote:
>
> | location = imapc:~/imapc-shared
>>>
>>> What is the syntax of this location? What does "imapc-shared" do in this
>>> case?
>
>> It's the directory for index files. The back
Timo Sirainen wrote:
> On 9.1.2012, at 21.45, Sven Hartge wrote:
| location = imapc:~/imapc-shared
>>
>> What is the syntax of this location? What does "imapc-shared" do in this
>> case?
> It's the directory for index files. The backend IMAP server is used as
> a rather dummy storage, so
On 9.1.2012, at 21.45, Sven Hartge wrote:
>>> | location = imapc:~/imapc-shared
>
> What is the syntax of this location? What does "imapc-shared" do in this
> case?
It's the directory for index files. The backend IMAP server is used as a rather
dummy storage, so if for example you do a FETCH
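For readers following the configuration details: the location string above combines the imapc backend with a local directory. A minimal sketch of how such a shared namespace could look (only the location line comes from the thread; the type, prefix and list/subscriptions values are illustrative assumptions):

  # Namespace for the folders served from the dedicated shared-folder server.
  # ~/imapc-shared holds only Dovecot's index files; the messages themselves
  # stay on the remote IMAP server defined by the imapc_* settings.
  namespace {
    type = public
    separator = .
    prefix = Shared.
    location = imapc:~/imapc-shared
    list = yes
    subscriptions = no
  }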
Timo Sirainen wrote:
> On 9.1.2012, at 21.31, Sven Hartge wrote:
>> ,
>> | # User's private mail location
>> | mail_location = mdbox:~/mdbox
>> |
>> | # When creating any namespaces, you must also have a private namespace:
>> | namespace {
>> | type = private
>> | separator = .
>> | pre
On 9.1.2012, at 21.31, Sven Hartge wrote:
> ,
> | # User's private mail location
> | mail_location = mdbox:~/mdbox
> |
> | # When creating any namespaces, you must also have a private namespace:
> | namespace {
> | type = private
> | separator = .
> | prefix = INBOX.
> | #location defa
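The quoted block above is cut off; for orientation, a complete minimal private namespace in Dovecot 2.x typically looks like the sketch below (the inbox = yes line is an assumption based on common configurations, not taken from the thread):

  # User's private mail location
  mail_location = mdbox:~/mdbox

  namespace {
    type = private
    separator = .
    prefix = INBOX.
    # location defaults to mail_location when left empty
    inbox = yes
  }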
Timo Sirainen wrote:
> On 9.1.2012, at 20.47, Sven Hartge wrote:
Can "mmap_disable = yes" and the other NFS options be set per
namespace or only globally?
>>
>>> Currently only globally.
>>
>> Ah, too bad.
>>
>> Back to the drawing board then.
> mmap_disable=yes works pretty well ev
On 9.1.2012, at 21.16, Timo Sirainen wrote:
> passdb {
> type = static
> args = user=shareduser
Of course you should also require a password:
args = user=shareduser pass=master-user-password
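Assembled as a self-contained sketch (Dovecot 2.x spells the key "driver"; the user and password values are the placeholders from the thread):

  # A static passdb answers every lookup with the same fixed user; the
  # pass field adds the required master-user password, as suggested above.
  passdb {
    driver = static
    args = user=shareduser pass=master-user-password
  }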
On 9.1.2012, at 20.47, Sven Hartge wrote:
>>> Can "mmap_disable = yes" and the other NFS options be set per
>>> namespace or only globally?
>
>> Currently only globally.
>
> Ah, too bad.
>
> Back to the drawing board then.
mmap_disable=yes works pretty well even if you're only using it for loc
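For context, these are global dovecot.conf settings; a typical NFS-oriented combination looks like the sketch below, and none of it can currently be scoped to a single namespace:

  # Settings commonly used when mail or index files live on NFS.
  mmap_disable = yes        # don't mmap index files
  mail_fsync = always       # fsync after writes so changes reach the server
  mail_nfs_storage = yes    # flush NFS caches for mail files when needed
  mail_nfs_index = yes      # same for index files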
Timo Sirainen wrote:
> On 9.1.2012, at 20.25, Sven Hartge wrote:
>> Timo Sirainen wrote:
>>> On 8.1.2012, at 0.20, Sven Hartge wrote:
Right now, I am pondering using an additional server with just
the shared folders on it and using NFS (or a cluster FS) to mount
the shared f
On 9.1.2012, at 20.25, Sven Hartge wrote:
> Timo Sirainen wrote:
>> On 8.1.2012, at 0.20, Sven Hartge wrote:
>
>>> Right now, I am pondering using an additional server with just
>>> the shared folders on it and using NFS (or a cluster FS) to mount the
>>> shared folder filesystem to each ba
Timo Sirainen wrote:
> On 8.1.2012, at 0.20, Sven Hartge wrote:
>> Right now, I am pondering using an additional server with just
>> the shared folders on it and using NFS (or a cluster FS) to mount the
>> shared folder filesystem to each backend storage server, so each user
>> has potential
On 9.1.2012, at 17.14, Charles Marcus wrote:
> On 2012-01-09 9:51 AM, Timo Sirainen wrote:
>> The "proper" solution for this that I've been thinking about would be
>> to use v2.1's imapc backend with master users. So that when user A
>> wants to access user B's shared folder, Dovecot connects to
Stan Hoeppner wrote:
> The more I think about your planned architecture the more it reminds
> me of a "shared nothing" database cluster--even a relatively small one
> can outrun a well tuned mainframe, especially doing decision
> support/data mining workloads (TPC-H).
> As long as you're prepare
On 1/9/2012 8:08 AM, Sven Hartge wrote:
> Stan Hoeppner wrote:
> The quota for students is 1GiB here. If I provide each of my 4 nodes
> with 500GiB of storage space, this gives me 2TiB now, which should be
sufficient. If a node fills, I increase its storage space. Only if it
> fills too fast,
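As an aside, the 1 GiB student quota itself would be enforced through Dovecot's quota plugin; a minimal sketch, with the dict file location being an assumed example:

  # Enable the quota plugin and cap each user at 1 GiB.
  mail_plugins = $mail_plugins quota
  protocol imap {
    mail_plugins = $mail_plugins imap_quota
  }
  plugin {
    quota = dict:User quota::file:%h/dovecot-quota
    quota_rule = *:storage=1G
  }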
On 2012-01-09 9:51 AM, Timo Sirainen wrote:
The "proper" solution for this that I've been thinking about would be
to use v2.1's imapc backend with master users. So that when user A
wants to access user B's shared folder, Dovecot connects to B's IMAP
server using master user login, and accesses t
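A rough sketch of the imapc client settings such a scheme would rely on (host name, owner and master credentials are invented placeholders; the thread does not show concrete values):

  # How the frontend reaches the backend holding the shared mailboxes:
  # authenticate with a master user while acting on behalf of the mailbox
  # owner (shown here as a literal placeholder).
  imapc_host = backend.example.edu
  imapc_port = 143
  imapc_user = userB
  imapc_master_user = masteruser
  imapc_password = master-user-secret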
Timo Sirainen wrote:
> On 8.1.2012, at 0.20, Sven Hartge wrote:
>> Right now, I am pondering using an additional server with just
>> the shared folders on it and using NFS (or a cluster FS) to mount the
>> shared folder filesystem to each backend storage server, so each user
>> has potential
Too much text in the rest of this thread so I haven't read it, but:
On 8.1.2012, at 0.20, Sven Hartge wrote:
> Right now, I am pondering using an additional server with just the
> shared folders on it and using NFS (or a cluster FS) to mount the shared
> folder filesystem to each backend sto
Stan Hoeppner wrote:
> On 1/8/2012 2:15 PM, Sven Hartge wrote:
>> Wouldn't such a setup be the "Best of Both Worlds"? Having the main
>> traffic going to local disks (which are RDMs) and also being able to provide
>> shared folders to every user who needs them without the need to move
>> those users
Stan Hoeppner wrote:
> On 1/8/2012 3:07 PM, Sven Hartge wrote:
>> Ah, I forgot: I _already_ have the mechanisms in place to statically
>> redirect/route accesses for users to different backends, since some
>> of the users are already redirected to a different mailsystem at
>> another location of
On 01/09/2012 08:38 AM, Stan Hoeppner wrote:
> On 1/8/2012 3:07 PM, Sven Hartge wrote:
[...]
>> (Are my words making any sense? I got the feeling I'm writing German with
>> English words and nobody is really understanding anything ...)
>
> You're making perfect sense, and frankly, if not for the
Stan Hoeppner wrote:
> On 1/8/2012 9:39 AM, Sven Hartge wrote:
>> Memory size. I am a bit hesitant to deploy a VM with 16GB of RAM. My
>> cluster nodes each have 48GB, so no problem on this side though.
> Shouldn't be a problem if you're going to spread the load over 2 to 4
> cluster nodes. 16
On 1/8/2012 3:07 PM, Sven Hartge wrote:
> Ah, I forgot: I _already_ have the mechanisms in place to statically
> redirect/route accesses for users to different backends, since some of
> the users are already redirected to a different mailsystem at another
> location of my university.
I assume you
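The thread does not show how that existing redirect works; one way to express static per-user routing in Dovecot itself is through proxy fields returned by a passdb, sketched below with invented file and host names (real deployments more often keep such a table in SQL or LDAP):

  # Hypothetical routing table: listed users are proxied to another backend.
  passdb {
    driver = passwd-file
    args = /etc/dovecot/route-users
  }

  # /etc/dovecot/route-users, passwd-file format with the passdb extra
  # fields proxy and host in the last column:
  # someuser:{PLAIN}secret::::::proxy=y host=backend2.example.edu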
On 1/8/2012 2:15 PM, Sven Hartge wrote:
> Wouldn't such a setup be the "Best of Both Worlds"? Having the main
> traffic going to local disks (which are RDMs) and also being able to provide
> shared folders to every user who needs them without the need to move
> those users onto one server?
The only p
On 1/8/2012 9:39 AM, Sven Hartge wrote:
> Memory size. I am a bit hesitant to deploy a VM with 16GB of RAM. My
> cluster nodes each have 48GB, so no problem on this side though.
Shouldn't be a problem if you're going to spread the load over 2 to 4
cluster nodes. 16/2 = 8GB per VM, 16/4 = 4GB pe
Sven Hartge wrote:
> Sven Hartge wrote:
>> Stan Hoeppner wrote:
>>> If an individual VMware node doesn't have sufficient RAM you could build a
>>> VM-based Dovecot cluster, run these two VMs on separate nodes, and thin
>>> out the other VMs allowed to run on these nodes. Since you can't
>>> dire
Sven Hartge wrote:
> Stan Hoeppner wrote:
>> If an individual VMware node doesn't have sufficient RAM you could build a
>> VM-based Dovecot cluster, run these two VMs on separate nodes, and thin
>> out the other VMs allowed to run on these nodes. Since you can't
>> directly share XFS, build a tin
Stan Hoeppner wrote:
> On 1/7/2012 7:55 PM, Sven Hartge wrote:
>> Stan Hoeppner wrote:
>>
>>> It's highly likely your problems can be solved without the drastic
>>> architecture change, and new problems it will introduce, that you
>>> describe below.
>>
>> The main reason is I need to replace t
On 1/7/2012 7:55 PM, Sven Hartge wrote:
> Stan Hoeppner wrote:
>
>> It's highly likely your problems can be solved without the drastic
>> architecture change, and new problems it will introduce, that you
>> describe below.
>
> The main reason is I need to replace the hardware as its service
> co
Stan Hoeppner wrote:
> It's highly likely your problems can be solved without the drastic
> architecture change, and new problems it will introduce, that you
> describe below.
The main reason is I need to replace the hardware as its service
contract ends this year and I am not able to extend it
On 1/7/2012 4:20 PM, Sven Hartge wrote:
> Hi *,
>
> I am currently in the planning stage for a "new and improved" mail
> system at my university.
>
> Right now, everything is on one big backend server but this is causing
> me increasing amounts of pain, beginning with the time a full backup
> tak
Hi *,
I am currently in the planning stage for a "new and improved" mail
system at my university.
Right now, everything is on one big backend server but this is causing
me increasing amounts of pain, beginning with the time a full backup
takes.
So naturally, I want to split this big server into