Stephen John Smoogen wrote:
On 7 September 2017 at 16:07, Alexander Dalloz wrote:
On 07.09.2017 at 20:07, hw wrote:
Gordon Messmer wrote:
On 09/07/2017 08:11 AM, Stephen John Smoogen wrote:
This was always
problematic because DNS hostnames and email addresses in the RFC
standards were case insensitive
Alexander Dalloz wrote:
On 07.09.2017 at 20:07, hw wrote:
Gordon Messmer wrote:
On 09/07/2017 08:11 AM, Stephen John Smoogen wrote:
This was always
problematic because DNS hostnames and email addresses in the RFC
standards were case insensitive
Not quite. SMTP is required to treat the "local part" of addresses as case sensitive.
Mark Haney wrote:
On 09/07/2017 01:57 PM, hw wrote:
Hi,
is there anything that speaks against putting a cyrus mail spool onto a
btrfs subvolume?
I might be the lone voice on this, but I refuse to use btrfs for anything, much
less a mail spool. I used it in production on DB and Web servers a
PS:
What kind of storage solutions do people use for cyrus mail spools? Apparently
you can not use remote storage, at least not NFS. That even makes it difficult
to use a VM due to limitations of available disk space.
I'm reluctant to use btrfs, but there doesn't seem to be any reasonable
alternative.
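For anyone who ends up trying it anyway, carving out a dedicated subvolume is quick (a sketch; /data, /dev/sdb1, and the mount point are assumptions, /var/spool/imap being the usual Cyrus spool location):

  # create a subvolume on an existing btrfs filesystem mounted at /data
  btrfs subvolume create /data/imap-spool
  # mount the subvolume at the Cyrus spool location
  mount -o subvol=imap-spool /dev/sdb1 /var/spool/imap

A mail spool is a many-small-file, fsync-heavy workload, so disabling copy-on-write for the spool directory (chattr +C while it is still empty) is sometimes suggested, at the cost of losing btrfs checksumming for that data.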
I am trying to get wireless working on CentOS 7.3 with an Intel Wireless 3165.
ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether b8:ae:ed:77:b3:
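Before anything else it is worth checking whether the kernel sees the card at all; the 3165 only appears once the iwlwifi driver and its firmware load. A first diagnostic pass (a sketch; package names and outputs vary by minor release):

  # is the adapter on the PCI bus, and which driver claims it?
  lspci -nnk | grep -iA3 network
  # did iwlwifi load and find its firmware?
  dmesg | grep -i iwlwifi
  # list wireless interfaces the kernel knows about, then scan
  iw dev
  nmcli device status
  nmcli device wifi list

If no wlan interface shows up in ip link, missing firmware (the linux-firmware package) or a too-old kernel are the usual suspects.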
I think it depends on who you ask. Facebook and Netflix are using it
extensively in production:
https://www.linux.com/news/learn/intro-to-linux/how-facebook-uses-linux-and-btrfs-interview-chris-mason
Though they have the in-house kernel engineering resources to
troubleshoot problems. When I see q
I hate top posting, but since you've got two items I want to comment on,
I'll suck it up for now.
Having SSDs alone will give you great performance regardless of
filesystem. BTRFS isn't going to impact I/O any more significantly
than, say, XFS. It does have serious stability/data integrity issues.
Matty wrote:
I think it depends on who you ask. Facebook and Netflix are using it
extensively in production:
https://www.linux.com/news/learn/intro-to-linux/how-facebook-uses-linux-and-btrfs-interview-chris-mason
Though they have the in-house kernel engineering resources to
troubleshoot problems.
Mark Haney wrote:
I hate top posting, but since you've got two items I want to comment on, I'll
suck it up for now.
I do, too, yet sometimes it's reasonable. I also hate it when the lines
are too long :)
Having SSDs alone will give you great performance regardless of filesystem.
It depends.
On Thu, September 7, 2017 14:07, hw wrote:
> Gordon Messmer wrote:
>> On 09/07/2017 08:11 AM, Stephen John Smoogen wrote:
>>> This was always problematic because DNS hostnames and
>>> email addresses in the RFC standards were case insensitive
>>
>>
>> Not quite. SMTP is required to treat the "local part" of addresses as case sensitive.
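In practice this means two addresses differing only in the case of the local part may or may not be the same mailbox; only the receiving server decides. A quick way to probe a given server (a sketch; swaks assumed installed, server and addresses are placeholders):

  # deliver test messages to differently-cased local parts and compare responses
  swaks --server mail.example.com --to User@example.com
  swaks --server mail.example.com --to user@example.com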
hw wrote:
> Mark Haney wrote:
>> BTRFS isn't going to impact I/O any more significantly than, say, XFS.
>
> But mdadm does, the impact is severe. I know there are people saying
> otherwise, but I've seen the impact myself, and I definitely don't want
> it on that particular server because it would l
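For what it's worth, the overhead is easy to watch while the array is under load (a sketch; requires the sysstat package, /dev/md0 is a placeholder):

  # per-device utilization and await times, refreshed every 5 seconds
  iostat -x 5
  # current state and sync activity of the md arrays
  cat /proc/mdstat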
m.r...@5-cent.us wrote:
hw wrote:
Mark Haney wrote:
BTRFS isn't going to impact I/O any more significantly than, say, XFS.
But mdadm does, the impact is severe. I know there are people saying
otherwise, but I've seen the impact myself, and I definitely don't want
it on that particular server
On 09/08/2017 09:49 AM, hw wrote:
Mark Haney wrote:
I hate top posting, but since you've got two items I want to comment
on, I'll suck it up for now.
I do, too, yet sometimes it's reasonable. I also hate it when the lines
are too long :)
I'm afraid you'll have to live with it a bit longer.
On Fri, September 8, 2017 9:48 am, hw wrote:
> m.r...@5-cent.us wrote:
>> hw wrote:
>>> Mark Haney wrote:
>>
>>>> BTRFS isn't going to impact I/O any more significantly than, say, XFS.
>>>
>>> But mdadm does, the impact is severe. I know there are people saying
>>> otherwise, but I've seen the impact
On 8 September 2017 at 11:00, Valeri Galtsev wrote:
>
> On Fri, September 8, 2017 9:48 am, hw wrote:
>> m.r...@5-cent.us wrote:
>>> hw wrote:
Mark Haney wrote:
>>>
> BTRFS isn't going to impact I/O any more significantly than, say, XFS.
But mdadm does, the impact is severe. I know there are people saying otherwise.
On Fri, September 8, 2017 11:07 am, Stephen John Smoogen wrote:
> On 8 September 2017 at 11:00, Valeri Galtsev
> wrote:
>>
>> On Fri, September 8, 2017 9:48 am, hw wrote:
>>> m.r...@5-cent.us wrote:
hw wrote:
> Mark Haney wrote:
>> BTRFS isn't going to impact I/O any more significantly than, say, XFS.
On 8 September 2017 at 12:13, Valeri Galtsev wrote:
>
> On Fri, September 8, 2017 11:07 am, Stephen John Smoogen wrote:
>> On 8 September 2017 at 11:00, Valeri Galtsev
>> wrote:
>>>
>>> On Fri, September 8, 2017 9:48 am, hw wrote:
m.r...@5-cent.us wrote:
> hw wrote:
>> Mark Haney wrote:
Mark Haney wrote:
On 09/08/2017 09:49 AM, hw wrote:
Mark Haney wrote:
I hate top posting, but since you've got two items I want to comment on, I'll
suck it up for now.
I do, too, yet sometimes it's reasonable. I also hate it when the lines
are too long :)
I'm afraid you'll have to live with it a bit longer.
Mark Haney wrote:
> On 09/08/2017 09:49 AM, hw wrote:
>> Mark Haney wrote:
>>
>> It depends, i. e. I can't tell how these SSDs would behave if large
>> amounts of data were written to and/or read from them over extended
>> periods of time, because I haven't tested that. That isn't the
>> application
Valeri Galtsev wrote:
On Fri, September 8, 2017 9:48 am, hw wrote:
m.r...@5-cent.us wrote:
hw wrote:
Mark Haney wrote:
BTRFS isn't going to impact I/O any more significantly than, say, XFS.
But mdadm does, the impact is severe. I know there are people saying
otherwise, but I've seen the impact
hw wrote:
> Mark Haney wrote:
>> On 09/08/2017 09:49 AM, hw wrote:
>>> Mark Haney wrote:
> Probably with the very expensive SSDs suited for this ...
>>>
>>> That's because I do not store data on a single disk, without
>>> redundancy, and the SSDs I have are not suitable for hardware RAID.
That's
m.r...@5-cent.us wrote:
Mark Haney wrote:
On 09/08/2017 09:49 AM, hw wrote:
Mark Haney wrote:
It depends, i. e. I can't tell how these SSDs would behave if large
amounts of data were written to and/or read from them over extended
periods of time, because I haven't tested that. That isn't the application.
On 09/08/2017 01:31 PM, hw wrote:
Mark Haney wrote:
I/O is not heavy in that sense; that's why I said that's not the
application.
There is I/O which, as tests have shown, benefits greatly from low
latency, which is where the idea to use SSDs for the relevant data
came from. This I/O on
Mark Haney wrote:
> On 09/08/2017 01:31 PM, hw wrote:
>> Mark Haney wrote:
>>
>> Probably with the very expensive SSDs suited for this ...
> Possibly, but that's somewhat irrelevant. I've taken off-the-shelf SSDs
> and hardware RAID'd them. If they work for the hell I put them through
> (process
On 09/07/2017 12:57 PM, hw wrote:
>
> Hi,
>
> is there anything that speaks against putting a cyrus mail spool onto a
> btrfs subvolume?
This is what Red Hat says about btrfs:
The Btrfs file system has been in Technology Preview state since the
initial release of Red Hat Enterprise Linux 6. Red Hat will not be moving
Btrfs to a fully supported feature and it will be removed in a future
major release of Red Hat Enterprise Linux.
On Fri, September 8, 2017 12:56 pm, hw wrote:
> Valeri Galtsev wrote:
>>
>> On Fri, September 8, 2017 9:48 am, hw wrote:
>>> m.r...@5-cent.us wrote:
hw wrote:
> Mark Haney wrote:
>> BTRFS isn't going to impact I/O any more significantly than, say,
>> XFS.
>
> But mdadm does, the impact is severe.
On 9/8/2017 12:52 PM, Valeri Galtsev wrote:
Thanks. That seems to clear the fog a little bit. I would still like to
hear manufacturers/models here. My choices would be Areca or LSI (bought
out by Intel, so former LSI chipset and microcode/firmware), and as the
SSD a Samsung Evo SATA III. Does anyone who uses
On Fri, Sep 8, 2017 at 2:52 PM, Valeri Galtsev
wrote:
>
> manufacturers/models here. My choices would be Areca or LSI (bought out
> by Intel, so former LSI chipset and microcode/firmware), and as the SSD
> a Samsung
>
Intel only purchased the networking component of LSI, Axxia, from Avago.
The RAID division
On 09/08/2017 11:06 AM, hw wrote:
Run a test and replace a software RAID5 with a hardware RAID5. Even with
only 4 disks, you will see an overall performance gain. I'm guessing that
the SATA controllers they put onto the mainboards are not designed to
handle all the data --- which gets multiplied
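If anyone wants numbers rather than impressions, the comparison is easy to script with fio (a sketch; DESTRUCTIVE to the target device, so point it at a scratch array, /dev/md0 is a placeholder):

  # 4k random writes, direct I/O, queue depth 32, 60 seconds
  fio --name=raid5test --filename=/dev/md0 --rw=randwrite --bs=4k \
      --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 \
      --time_based --group_reporting

Running the identical job against the md device and the hardware RAID volume gives directly comparable IOPS and latency figures.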
On Fri, September 8, 2017 3:06 pm, John R Pierce wrote:
> On 9/8/2017 12:52 PM, Valeri Galtsev wrote:
>> Thanks. That seems to clear the fog a little bit. I would still like to
>> hear manufacturers/models here. My choices would be Areca or LSI (bought
>> out by Intel, so former LSI chipset and microcode/firmware)
On 9/8/2017 2:36 PM, Valeri Galtsev wrote:
With all due respect, John, this is the same as the hard drive cache not
being backed up power-wise in case of power loss. And hard drives all lie
about write operations being completed before the data is actually on the
platters. So we can claim the same: hard drives
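The drive-side write cache can at least be inspected and turned off (a sketch; /dev/sda is a placeholder, SATA drives assumed):

  # query the volatile write cache setting
  hdparm -W /dev/sda
  # disable it, trading write throughput for safety on power loss
  hdparm -W0 /dev/sda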
I've been trying to install CentOS 7 AltArch ppc64le onto a new Power 8
system and I want to configure mdraid for the volumes. I can get
everything working if I install to a single disk, but when I configure
for RAID 1, the system fails to boot.
So, is mdraid1 supported for booting a Power 8 system?
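On ppc64/ppc64le the bootloader lives in a PReP Boot partition, which cannot itself be an md member, so the usual layout keeps a small PReP partition per disk and mirrors everything else. A kickstart sketch (untested; disk names, sizes, and mount points are assumptions):

  # PReP boot partition for the bootloader (outside the RAID)
  part prepboot --fstype=prepboot --size=8 --ondisk=sda
  # mirrored /boot
  part raid.01 --size=1024 --ondisk=sda
  part raid.02 --size=1024 --ondisk=sdb
  raid /boot --level=1 --device=boot raid.01 raid.02
  # mirrored root
  part raid.11 --size=1 --grow --ondisk=sda
  part raid.12 --size=1 --grow --ondisk=sdb
  raid / --level=1 --device=root --fstype=xfs raid.11 raid.12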