On 10.7.2012, at 8.31, Frank Bonnet wrote:
> Would it be possible to close this thread from Dovecot mailing-list ?
Yeah, enough with this thread.
On 10.07.2012 05:59, Stan Hoeppner wrote:
>>> That's simply not true Reindl.
>>>
>>> SATA drives are being used very widely in production today, and
>>> outnumber SAS deployments by a very wide margin.
>>
>> for SOHO with no public services, yes
>
> Google has more public facing services, se
On 10/07/2012 08:13, Robert Schetterer wrote:
On 09.07.2012 21:41, Reindl Harald wrote:
in these environments you find nearly zero SATA
hardly anyone is doing bare-metal installs these days,
when hardware-supported virtualization has nearly zero overhead
Hi Harald, that is simply not true
On 09.07.2012 21:41, Reindl Harald wrote:
> in these environments you find nearly zero SATA
> hardly anyone is doing bare-metal installs these days,
> when hardware-supported virtualization has nearly zero overhead
Hi Harald, that is simply not true,
I have thousands of mailbox users on
On 09.07.2012 21:29, Stan Hoeppner wrote:
> SAS is found today almost exclusively in high volume transactional
> servers such as mail spools, mail stores, databases, VM image storage,
> and applications that need higher reliability, such as medical imaging
> systems, etc.
and SAS may not be fast
On Mon, 2012-07-09 at 12:10 +0200, Reindl Harald wrote:
> what you do not understand is that a proper SAN is NOT
> a complex setup; it is in many cases a simpler one,
> because you have TWO controllers, disks with DUAL channels,
> and a proper RAID level in ONE device
>
> to build all this redundancy
On 7/9/2012 2:41 PM, Reindl Harald wrote:
>
>
> On 09.07.2012 21:29, Stan Hoeppner wrote:
>> On 7/9/2012 3:17 AM, Reindl Harald wrote:
>>>
>>>
>>> On 09.07.2012 07:48, Wojciech Puchar wrote:
> disagreed with my statement, then agreed with it. Apparently you didn't
> realize you did s
On 09.07.2012 22:42, s...@privat.dk wrote:
Moi.
Hi there.
Wouldn't it be possible to either stop this madness of silly people
trying to teach other mailing-list users this storage nonsense (religion),
or to tell us how to unsubscribe asap?
My inbox is filling up with what is, to me and maybe others,
off-topic nonsense.
Regards
Solo
- Original message -
> From: Reindl Harald
> To: dovecot@dovecot.org
> Date: Mon, 09 Jul 2012 21:41
> Subject: Re: [Dovecot] Howto add another disk storage
>
>
>
> On 09.07.2012 21:29, Stan Hoeppner wrote:
> > On 7/
On 09.07.2012 21:29, Stan Hoeppner wrote:
> On 7/9/2012 3:17 AM, Reindl Harald wrote:
>>
>>
>> On 09.07.2012 07:48, Wojciech Puchar wrote:
disagreed with my statement, then agreed with it. Apparently you didn't
realize you did so. Would you please clarify what I stated that is
>>>
On 7/9/2012 3:17 AM, Reindl Harald wrote:
>
>
> On 09.07.2012 07:48, Wojciech Puchar wrote:
>>> disagreed with my statement, then agreed with it. Apparently you didn't
>>> realize you did so. Would you please clarify what I stated that is
>>> "simply not true"? You comment WRT SSD doesn't pr
performance
as long as the following are your OFF-LIST answers you
better shut up here!
Original Message
Subject: Re: [Dovecot] Howto add another disk storage
Date: Mon, 9 Jul 2012 11:43:11 +0200 (CEST)
From: Wojciech Puchar
To: Reindl Harald
repeat it twice. repeat in 10 tim
please stop this bullshit, especially OFF-LIST
Fortunately you do not decide about it.
i do not sell them and i am not uneducated
The wording they use ("everyone do this", "because it is enterprise") just
proves that most of them are people that
should not even touch a computer
look in t
On 09.07.2012 12:14, Wojciech Puchar wrote:
> there are lots of skilled engineers here on that forum.
>
> But certainly not the ones that babble about storage, SANs etc.. just because
> they sell them, have profits from
> selling them or are just that uneducated.
please stop this bullshit, e
Many people just want to be proud, or want to make things expensive so their
clients are proud. But it's not always like that.
You go on a bit about "pride in complexity" . . What you fail to understand is
that many highly intelligent, experienced, very able engineers build systems that are
what you do not understand is that a proper SAN is NOT
a complex setup; it is in many cases a simpler one,
because you have TWO controllers, disks with DUAL channels,
and a proper RAID level in ONE device
to build all this redundancy on your own is a much
more complex software setup, and you can be pret
Seems some people have never heard of "keep it simple, stupid" or
"less is more" ... sounds like a few people here are falsely propping
up their worth to their employers, making unnecessary BS to justify
their own existence.
My experience of over 20 years of this industry easily shows that t
On 09.07.2012 11:41, Wojciech Puchar wrote:
>>> using SATA for any production-storage
>>>
>>
>> Hi, sorry SATA is running fine here,
>
> as well here.
>
> Many people just want to be proud, or want to make things expensive so their
> clients are proud. But it's not always
> like that.
oh ye
On 9 Jul 2012, at 10:41, Wojciech Puchar wrote:
> Many people just want to be proud, or want to make things expensive so their
> clients are proud. But it's not always like that.
You go on a bit about "pride in complexity" . . What you fail to understand is
that many highly intelligent, experie
ouch - that said and your offlist discussion why SAN storages
are crap for you gives a picture - nobody, really nobody is
using SATA for any production-storage
In the view of a moron, you are "everyone".
using SATA for any production-storage
Hi, sorry SATA is running fine here,
as well here.
Many people just want to be proud, or want to make things expensive so
their clients are proud. But it's not always like that.
On 09.07.2012 10:17, Reindl Harald wrote:
>
>
> On 09.07.2012 07:48, Wojciech Puchar wrote:
>>> disagreed with my statement, then agreed with it. Apparently you didn't
>>> realize you did so. Would you please clarify what I stated that is
>>> "simply not true"? You comment WRT SSD doesn't
On 09.07.2012 07:48, Wojciech Puchar wrote:
>> disagreed with my statement, then agreed with it. Apparently you didn't
>> realize you did so. Would you please clarify what I stated that is
>> "simply not true"? You comment WRT SSD doesn't prove anything I said to
>> be untrue. Quite the con
disagreed with my statement, then agreed with it. Apparently you didn't
realize you did so. Would you please clarify what I stated that is
"simply not true"? You comment WRT SSD doesn't prove anything I said to
be untrue. Quite the contrary, you reinforced my statements.
Actually the only sto
On 7/8/2012 5:16 PM, Matthias-Christian Ott wrote:
> On 2012-07-08 23:29, Stan Hoeppner wrote:
>> On 7/8/2012 8:27 AM, Patrick Domack wrote:
>>>
>>> Quoting Wojciech Puchar :
>>>
> I think there are optimal situations where any configuration looks
> good . . How often can a real-world disk
Quoting Wojciech Puchar :
is random seek latency. And the faster the spindle, the lower the
latency. Thus 15k Seagate SAS drives are excellent candidates for mail
store duty, as are any 10k or 15k drives.
definitely not when counting by $/IOPS; it looks even worse by $/GB,
which is more imp
is random seek latency. And the faster the spindle, the lower the
latency. Thus 15k Seagate SAS drives are excellent candidates for mail
store duty, as are any 10k or 15k drives.
definitely not when counting by $/IOPS; it looks even worse by $/GB, which
is more important unless you make <1GB ma
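The seek-plus-rotation argument above is easy to put into numbers. A back-of-envelope sketch (the average seek times below are illustrative assumptions, not vendor specs):

```python
# Rough random-IOPS estimate for a rotating drive: one random read costs
# an average seek plus, on average, half a revolution of the platter.
def random_iops(rpm: float, avg_seek_ms: float) -> float:
    rotational_latency_ms = 60000.0 / rpm / 2  # half a revolution, in ms
    return 1000.0 / (avg_seek_ms + rotational_latency_ms)

# Illustrative seek figures (assumed, not measured):
for name, rpm, seek_ms in [("7.2k SATA", 7200, 8.5),
                           ("10k SAS", 10000, 4.5),
                           ("15k SAS", 15000, 3.5)]:
    print(f"{name}: ~{random_iops(rpm, seek_ms):.0f} IOPS")
```

On these assumed figures a 15k drive delivers roughly 180 random IOPS against roughly 80 for a 7.2k drive, which is exactly the gap the $/IOPS versus $/GB argument is about.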
On 7/8/2012 8:27 AM, Patrick Domack wrote:
>
> Quoting Wojciech Puchar :
>
>>> I think there are optimal situations where any configuration looks
>>> good . . How often can a real-world disk actually deliver the 6Gbs
>>> when only a minority of disk reads are long sequential runs on the
>>> platt
On 2012-07-08 5:55 AM, Reindl Harald wrote:
nobody is using 100 MBit for a SAN
And no one who is using a SAN is using 100Mb on the LAN either. In fact,
I'd say that even 99.9% of all LANs - even small (wired) home LANs are Gb...
--
Best regards,
Charles
Quoting Wojciech Puchar :
I think there are optimal situations where any configuration looks
good . . How often can a real-world disk actually deliver the 6Gbs
when only a minority of disk reads are long sequential runs on the
platters?
no hard drive can saturate 1.5Gb/s
There are
there is more to it than the connection speed
6Gb/s does not help you much as long as the physical disk cannot
write at this speed, and more concurrent writes make
this worse - so there are many things, like the big battery-backed
caches on a SAN, which are important for OVERALL performance
with cache as big a
On 08.07.2012 09:27, Steve Litt wrote:
> On Sat, 07 Jul 2012 11:36:02 +0200, Reindl Harald said:
>> to believe under really high load a local storage
>> is faster at the end is bullshit!
>
> Can one even argue on one side or the other without knowing the speed
> of the network, and how much co
I think there are optimal situations where any configuration looks good . . How
often can a real-world disk actually deliver the 6Gbs when only a minority of
disk reads are long sequential runs on the platters?
no hard drive can saturate 1.5Gb/s
On 8 Jul 2012, at 08:36, Steve Litt wrote:
>> Can one even argue on one side or the other without knowing the speed
>> of the network, and how much contention is on that network?
>>
>> My experience is that with a 100Mbs network, local is faster, although
>> I've never had a SAN, so to speak, on
On Sun, 8 Jul 2012 03:27:55 -0400, Steve Litt said:
> On Sat, 07 Jul 2012 11:36:02 +0200, Reindl Harald said:
> >
> >
> > On 07.07.2012 11:23, Wojciech Puchar wrote:
> > >>> Fine. i understand that. What i am suggesting is not making
> > >>> large LUNs. you get the best performance with directl
On Sat, 07 Jul 2012 11:36:02 +0200, Reindl Harald said:
>
>
> On 07.07.2012 11:23, Wojciech Puchar wrote:
> >>> Fine. i understand that. What i am suggesting is not making large
> >>> LUNs. you get the best performance with directly attaching disks
> >>> to your machine.
> >>
> >> That's simply
On 05/07/2012 11:33, Charles Marcus wrote:
On 2012-07-05 5:45 AM, Kaya Saman wrote:
FreeBSD 8.2 x64 running on VMware
Hi Kaya,
Do you (or anyone else) know of any decent VMWare images (appliance)
of current version of FreeBSD? I've been debating on switching from
Gentoo to FreeBSD for a wh
On 07.07.2012 11:43, Wojciech Puchar wrote:
>> SAN storage with 1 GB dedicated buffer cache and a
>> DEDICATED 1400 MHz CPU which is optimized for only
>> one task: disk performance
>>
>> your local storage has to fight for CPU and memory all
>> the time with other applications (caching etc.)
>>
SAN storage with 1 GB dedicated buffer cache and a
DEDICATED 1400 MHz CPU which is optimized for only
one task: disk performance
your local storage has to fight for CPU and memory all
the time with other applications (caching etc.)
to believe under really high load a local storage
is faster at the
On Sat, 2012-07-07 at 11:23 +0200, Wojciech Puchar wrote:
> >> Fine. i understand that. What i am suggesting is not making large LUNs.
> >> you get the best performance with directly attaching disks to your machine.
> >
> > That's simply not true. 99% of block latency is rotational. iSCSI
> It's
On 07.07.2012 11:23, Wojciech Puchar wrote:
>>> Fine. i understand that. What i am suggesting is not making large LUNs.
>>> you get the best performance with directly attaching disks to your machine.
>>
>> That's simply not true. 99% of block latency is rotational. iSCSI
> It's not about iSCS
On 7 Jul 2012, at 07:37, Stan Hoeppner wrote:
> 99% of block latency is rotational.
So true... I spend my entire life trying to convince customers to add heaps and
heaps of RAM to *nix servers to make them faster and not be swayed by talk of
faster CPUs . . Sheeesh! . . Come to think of it, I'
Fine. i understand that. What i am suggesting is not making large LUNs.
you get the best performance with directly attaching disks to your machine.
That's simply not true. 99% of block latency is rotational. iSCSI
It's not about iSCSI latency and overhead.
It's about other things i just don'
On 7/6/2012 2:16 AM, Wojciech Puchar wrote:
>>
>> You wouldn't partition the large LUN. You'd simply directly format it
>> with XFS. Laying a partition table on it would introduce the real
>
> Fine. i understand that. What i am suggesting is not making large LUNs.
> you get the best performance
On 6 July 2012 12:41, Wojciech Puchar wrote:
>>
>> do you really think it is a good idea to trash someone else's comments
>> (without contributing anything at all I might add) based on pure
>> ass-u-me-ptions of yours that have no basis in reality?
>
>
> Do you hate yourself for not being able to u
do you really think it is a good idea to trash someone else's comments
(without contributing anything at all I might add) based on pure
ass-u-me-ptions of yours that have no basis in reality?
Do you hate yourself for not being able to understand a normal response and
so getting aggressive agai
On 06.07.2012 12:01, Charles Marcus wrote:
> Again - just please stay silent if you don't have anything positive to
> contribute - and yes, you often do actually
> contribute positive things, and you definitely have some knowledge to share,
> but again, your tone and manner are
> almost alway
On 2012-07-06 5:46 AM, Reindl Harald wrote:
where do you see anything offending in my reply?
Your tone is almost always offending, Reindl - and you quite often throw
in a good dose of very offending cursing to boot (admittedly not this
time though)... basically, I just don't like your genera
On 06.07.2012 11:26, Charles Marcus wrote:
> On 2012-07-05 6:37 AM, Reindl Harald wrote:
>> do you really think it is a good idea to start with a pre-installed
>> FREE operating system instead doing a fresh install?
>
> do you really think it is a good idea to trash someone else's comments
>
On 2012-07-05 6:37 AM, Reindl Harald wrote:
On 05.07.2012 12:33, Charles Marcus wrote:
On 2012-07-05 5:45 AM, Kaya Saman wrote:
FreeBSD 8.2 x64 running on VMware
Do you (or anyone else) know of any decent VMWare images
(appliance) of current version of FreeBSD? I've been debating on
swit
You wouldn't partition the large LUN. You'd simply directly format it
with XFS. Laying a partition table on it would introduce the real
Fine. i understand that. What i am suggesting is not making large LUNs.
you get the best performance with directly attaching disks to your
machine.
On 7/5/2012 6:36 AM, Wojciech Puchar wrote:
>>
>> At 16TB+ scale with maildir you should be using XFS on kernel 3.x, not
>> EXT4. Your performance will be significantly better, as in 30% or much
>
> why do you want to make a 16TB partition in the first place?
You wouldn't partition the large LUN. You'd
At 16TB+ scale with maildir you should be using XFS on kernel 3.x, not
EXT4. Your performance will be significantly better, as in 30% or much
why do you want to make a 16TB partition in the first place?
I am, however, trying to do clean installs on FreeBSD where I **can**
get away with it.
right.
Ok this may sound incredibly sad so don't sue me for it, but for my
OpenSource work at home I have switched over from 15+ Linux servers
down to 1x FreeBSD system running Jails.
quite a common ca
~ James.
Thank you, this is a valid suggestion, especially since it could be
done directly with some SQL magic in dovecot config file.
I will consider this option !
as I always run dovecot using the standard unix auth/password mechanism and
the mail user is always a unix user, it is rather trivial
this is actually offtopic from the OP however, feel free to PM with
any questions you have :-)
However, in response I didn't use any images, just the simple FreeBSD
8.2 AMD64 ISO, and installed from there.
and it will work on many VM systems. And works best without any VM
overlay.
On Thu, Jul 5, 2012 at 11:33 AM, Charles Marcus
wrote:
> On 2012-07-05 5:45 AM, Kaya Saman wrote:
>>
>> FreeBSD 8.2 x64 running on VMware
>
>
> Hi Kaya,
>
> Do you (or anyone else) know of any decent VMWare images (appliance) of
> current version of FreeBSD? I've been debating on switching from G
On 05.07.2012 12:33, Charles Marcus wrote:
> On 2012-07-05 5:45 AM, Kaya Saman wrote:
>> FreeBSD 8.2 x64 running on VMware
>
> Hi Kaya,
>
> Do you (or anyone else) know of any decent VMWare images (appliance) of
> current version of FreeBSD? I've been
> debating on switching from Gentoo to
On 2012-07-05 5:45 AM, Kaya Saman wrote:
FreeBSD 8.2 x64 running on VMware
Hi Kaya,
Do you (or anyone else) know of any decent VMWare images (appliance) of
current version of FreeBSD? I've been debating on switching from Gentoo
to FreeBSD for a while now, and would love to find a ready made
It absolutely kills me every time I see a mail server admin display
almost total lack of knowledge of his/her storage back end, or the
inability to describe it technically, in an email...
You should get used to this. Welcome to the 21st century!
The rule is
amount of real knowledge*official paper c
On Thu, Jul 5, 2012 at 11:01 AM, J E Lyon
wrote:
> On 5 Jul 2012, at 10:55, Kaya Saman wrote:
>
>> That's why I'm not even thinking of migrating the mission critical
>> stuff running on CentOS 5 to even CentOS 6 yet.
>
> I'm in an identical position there -- and in fact, I think it's time to get
On Thu, Jul 5, 2012 at 12:46 PM, J E Lyon
wrote:
>
> When I first saw you mention hashing, I misread it as some sort of hash-table
> approach to large directories that I wasn't aware of, or something . . And
> now I've read the Dovecot documentation, I see what you're talking about!
>
> Why are
On Thu, Jul 5, 2012 at 12:35 PM, Stan Hoeppner wrote:
> On 7/5/2012 2:44 AM, Adrian M wrote:
>
>> Hi Stan,
>> I know how to add drives to the storage and how to grow the existing
>> filesystem, but such big filesystems are somehow new to mainstream
>> linux. Yes, I know some university out there a
On 5 Jul 2012, at 10:55, Kaya Saman wrote:
> That's why I'm not even thinking of migrating the mission critical
> stuff running on CentOS 5 to even CentOS 6 yet.
I'm in an identical position there -- and in fact, I think it's time to get
some virtualised hosting of CentOS 6 servers, once I decid
On Thu, Jul 5, 2012 at 10:48 AM, J E Lyon
wrote:
> On 5 Jul 2012, at 10:45, Kaya Saman wrote:
>
>> But then one must think, do I really want to switch OS?
>
> I heard a rumour that switching OS is sometimes harder than adding a
> mountpoint :)
>
> J.
It can be!
That's why I'm not even thinking
On 5 Jul 2012, at 10:45, Kaya Saman wrote:
> But then one must think, do I really want to switch OS?
I heard a rumour that switching OS is sometimes harder than adding a mountpoint
:)
J.
On 5 Jul 2012, at 08:44, Adrian M wrote:
> Hi Stan,
> I know how to add drives to the storage and how to grow the existing
> filesystem, but such big filesystems are somehow new to mainstream
> linux. Yes, I know some universities out there already have petabyte
> filesystems, but right now stable
On Thu, Jul 5, 2012 at 10:35 AM, Stan Hoeppner wrote:
> On 7/5/2012 2:44 AM, Adrian M wrote:
>
>> Hi Stan,
>> I know how to add drives to the storage and how to grow the existing
>> filesystem, but such big filesystems are somehow new to mainstream
>> linux. Yes, I know some university out there a
On 7/5/2012 2:44 AM, Adrian M wrote:
> Hi Stan,
> I know how to add drives to the storage and how to grow the existing
> filesystem, but such big filesystems are somehow new to mainstream
> linux. Yes, I know some universities out there already have petabyte
> filesystems, but right now stable lin
On 5.7.2012, at 10.44, Adrian M wrote:
> All this is telling me that it is safer to have two or three smaller
> filesystems than a big one. Dovecot has a nice feature for this
> "Directory hashing" http://wiki.dovecot.org/MailLocation/
>
> What I don't know is a nice way to migrate from a single dire
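For readers landing here from the archive: the directory-hashing feature on that wiki page boils down to a mail_location pattern. A hypothetical sketch (the %2Mu modifier, meaning the first two hex digits of the MD5 of the username, is my reading of wiki.dovecot.org/Variables; verify against your Dovecot version):

```
# Hypothetical dovecot.conf fragment: hash users into up to 256
# subdirectories instead of one flat /var/mail directory.
mail_location = maildir:/var/mail/%2Mu/%u/Maildir
# "adrian" would land in /var/mail/<2 MD5 hex digits>/adrian/Maildir
```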
On Thu, Jul 5, 2012 at 7:37 AM, Stan Hoeppner wrote:
> On 7/4/2012 4:09 PM, Adrian Minta wrote:
>> On 07/04/12 23:22, J E Lyon wrote:
>>> On 4 Jul 2012, at 21:01, Adrian Minta wrote:
>>>
What is the best strategy to add another storage to an existing
virtual mail system ?
Move some
On 7/4/2012 4:09 PM, Adrian Minta wrote:
> On 07/04/12 23:22, J E Lyon wrote:
>> On 4 Jul 2012, at 21:01, Adrian Minta wrote:
>>
>>> What is the best strategy to add another storage to an existing
>>> virtual mail system ?
>>> Move some domains to the new storage and create symlinks ?
>>> Switch to
On 4 Jul 2012, at 22:09, Adrian Minta wrote:
> On 07/04/12 23:22, J E Lyon wrote:
>> On 4 Jul 2012, at 21:01, Adrian Minta wrote:
>>
>>> What is the best strategy to add another storage to an existing virtual
>>> mail system ?
>>> Move some domains to the new storage and create symlinks ?
>>> Sw
On 07/04/12 23:22, J E Lyon wrote:
On 4 Jul 2012, at 21:01, Adrian Minta wrote:
What is the best strategy to add another storage to an existing virtual mail
system ?
Move some domains to the new storage and create symlinks ?
Switch to dovecot hashing? But in this case what is the easiest wa
On 4 Jul 2012, at 21:01, Adrian Minta wrote:
> What is the best strategy to add another storage to an existing virtual mail
> system ?
> Move some domains to the new storage and create symlinks ?
> Switch to dovecot hashing? But in this case what is the easiest way to
> migrate ?
>
> Thanks
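The symlink option asked about above can be sketched concretely. This is a hedged illustration, not a blessed procedure: the paths and the two-pass copy are assumptions, demonstrated here on temporary directories (in production you would rsync the real spool once while mail flows, then stop delivery for the domain and do a final pass before swapping):

```python
# Hypothetical sketch of the "move + symlink" migration, demonstrated on
# temp dirs (real paths would be e.g. /var/vmail/<domain>).
import os, shutil, tempfile

root = tempfile.mkdtemp()
old = os.path.join(root, "old-storage", "example.com")
new = os.path.join(root, "new-storage", "example.com")
os.makedirs(os.path.join(old, "user1", "Maildir", "cur"))
with open(os.path.join(old, "user1", "Maildir", "cur", "1"), "w") as f:
    f.write("a message")

# 1. copy the domain to the new volume (production: rsync -a, first while
#    mail keeps flowing, then a final pass with delivery stopped)
shutil.copytree(old, new)

# 2. swap the old directory for a symlink, so every path in dovecot.conf
#    and the userdb still resolves without any config change
os.rename(old, old + ".migrated")
os.symlink(new, old)

print(open(os.path.join(old, "user1", "Maildir", "cur", "1")).read())
```

The symlink swap is arguably the least invasive of the options listed, since no Dovecot configuration or userdb entry has to change; the old directory is kept as `.migrated` until the move is verified.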