On Thu, Jul 19, 2012 at 12:06 PM, Lamar Owen wrote:
> On Wednesday, July 18, 2012 03:31:53 PM Les Mikesell wrote:
>> Sure, everything can break and most will sometime, but does this
>> happen often enough that you'd want to slow down all of your network
>> disk writes by an order of magnitude on the odd chance that some app
>> really cares about a random …
On Wednesday, July 18, 2012 03:31:53 PM Les Mikesell wrote:
> Sure, everything can break and most will sometime, but does this
> happen often enough that you'd want to slow down all of your network
> disk writes by an order of magnitude on the odd chance that some app
> really cares about a random …
On Wed, Jul 18, 2012 at 1:31 PM, Lamar Owen wrote:
> On Tuesday, July 17, 2012 12:28:00 PM Les Mikesell wrote:
>> But the thing with the spinning disks is the thing that will go down.
>> Not much reason for a network to break - at least since people stopped
>> using thin coax.
>
> Just a few days ago I watched a facility's switched network go basically 'do…
On 07/19/2012 06:31 AM, Lamar Owen wrote:
> On Tuesday, July 17, 2012 12:28:00 PM Les Mikesell wrote:
>> But the thing with the spinning disks is the thing that will go down.
>> Not much reason for a network to break - at least since people stopped
>> using thin coax.
> Just a few days ago I watched a facility's switched network go basically 'do…
On Tuesday, July 17, 2012 12:28:00 PM Les Mikesell wrote:
> But the thing with the spinning disks is the thing that will go down.
> Not much reason for a network to break - at least since people stopped
> using thin coax.
Just a few days ago I watched a facility's switched network go basically 'do…
On Tue, Jul 17, 2012 at 8:27 AM, wrote:
>>>> I always wondered why the default for nfs was ever sync in the first
>>>> place. Why shouldn't it be the same as local use of the filesystem?
>>>> The few things that care should be doing fsync's at the right places
>>>> anyway.
>>>
>>> Well, the re…
Les Mikesell wrote:
> On Tue, Jul 17, 2012 at 4:33 AM, Johnny Hughes wrote:
>>> I always wondered why the default for nfs was ever sync in the first
>>> place. Why shouldn't it be the same as local use of the filesystem?
>>> The few things that care should be doing fsync's at the right places
>>> anyway. …
On Tue, Jul 17, 2012 at 4:33 AM, Johnny Hughes wrote:
>> I always wondered why the default for nfs was ever sync in the first
>> place. Why shouldn't it be the same as local use of the filesystem?
>> The few things that care should be doing fsync's at the right places
>> anyway.
>>
>
> Well, the re…
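
For concreteness, the "fsync's at the right places" pattern being discussed
looks roughly like this in C - a minimal sketch with a hypothetical filename
and payload, not code taken from the thread:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        const char rec[] = "one record that must survive a crash\n";
        int fd = open("journal.dat", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (write(fd, rec, sizeof rec - 1) != (ssize_t)(sizeof rec - 1)) {
            perror("write");            /* short write treated as failure */
            close(fd);
            return 1;
        }
        /* write() may only have reached the page cache (or, on an async
         * NFS export, the server's memory).  fsync() blocks until the
         * data is on stable storage, so only this one point in the app
         * pays the sync cost rather than every write. */
        if (fsync(fd) != 0) {
            perror("fsync");
            close(fd);
            return 1;
        }
        close(fd);
        return 0;
    }
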
On 07/13/2012 07:40 AM, Les Mikesell wrote:
> On Fri, Jul 13, 2012 at 7:12 AM, mark wrote:
>> *After* I test further, I think it's up to my manager and our users to
>> decide if it's worth it to go with less secure - this is a real issue,
>> since some of their jobs run days, and one or two weeks, on an HBS* or a
>> good sized cluster. (We're s…
On Wed, July 11, 2012 00:21, Kahlil Hodgson wrote:
>
> If you are just using the Red Hat bugzilla that might be your problem.
> I've heard a rumour that Red Hat doesn't really monitor that channel,
> giving preference to issues raised through their customer portal. That
> does make _some_ commercial sense …
On Fri, Jul 13, 2012 at 7:12 AM, mark wrote:
>
> *After* I test further, I think it's up to my manager and our users to
> decide if it's worth it to go with less secure - this is a real issue,
> since some of their jobs run days, and one or two weeks, on an HBS* or a
> good sized cluster. (We're s…
On 07/12/12 06:41, Colin Simpson wrote:
> I have tried the async option and that reverts to being as fast as
> previously.
>
> So I guess the choice is use the less safe async and get file creation
> being quick or live with the slowdown until a potentially new protocol
> extension appears to help with this.
On 11.07.2012 00:58, Gé Weijers wrote:
> It may not be a bug, it may be that RHEL 6.x implements I/O barriers
> correctly, which slows things down but keeps you from losing data.
Which is of course no excuse for not even responding to a support
request. "It's not a bug, it's a feature" may not …
I have tried the async option and that reverts to being as fast as
previously.
So I guess the choice is use the less safe async and get file creation
being quick or live with the slowdown until a potentially new protocol
extension appears to help with this.
Colin
On Wed, 2012-07-11 at 15:16 -0…
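
For reference, the sync/async knob being debated lives in /etc/exports on
the NFS server; a minimal sketch (the export path and client range here are
hypothetical):

    # sync - the default since nfs-utils 1.0.0: the server must commit
    # each write to stable storage before replying; safe but slow
    /export/scratch  192.168.0.0/24(rw,sync,no_subtree_check)

    # async - the server replies before data reaches disk; fast file
    # creation, but anything still buffered is lost if the server dies
    /export/scratch  192.168.0.0/24(rw,async,no_subtree_check)

Running 'exportfs -ra' on the server applies the edited options.
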
----- Original Message -----
> On Wed, Jul 11, 2012 at 11:29 AM, Colin Simpson wrote:
> >
> > But think yourself lucky, BTRFS on Fedora 16 was much worse. This was
> > the time it took me to untar a vlc tarball.
> >
> > F16 to RHEL5 - 0m 28.170s
> > F16 to F16 ext4 - 4m 12.450s
> > F16 to F16 btrfs - 14m 31.252s …
Gé Weijers wrote:
> This is likely to be a bug in RHEL5 rather than one in RHEL6. RHEL5
> (kernel 2.6.18) does not always guarantee that the disk cache is
> flushed before 'fsync' returns. This is especially true if you use
> software RAID and/or LVM. You may be able to get the old performance
> back by disabling I/O barriers …
This is likely to be a bug in RHEL5 rather than one in RHEL6. RHEL5
(kernel 2.6.18) does not always guarantee that the disk cache is
flushed before 'fsync' returns. This is especially true if you use
software RAID and/or LVM. You may be able to get the old performance
back by disabling I/O barriers …
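
For context, "disabling I/O barriers" is a mount option on the affected
filesystem; an ext4 fstab sketch follows (device and mountpoint are
hypothetical, and this is only reasonable with a battery-backed or
non-volatile write cache):

    # ext4 mounts with barriers on by default on EL6; barrier=0 skips
    # the cache-flush ordering, restoring RHEL5-era speed at the risk
    # of corruption on power loss
    /dev/vg0/scratch  /scratch  ext4  defaults,barrier=0  0 0

The same thing can be tried live with 'mount -o remount,barrier=0 /scratch'.
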
On Wed, Jul 11, 2012 at 11:29 AM, Colin Simpson wrote:
>
> But think yourself lucky, BTRFS on Fedora 16 was much worse. This was
> the time it took me to untar a vlc tarball.
>
> F16 to RHEL5 - 0m 28.170s
> F16 to F16 ext4 - 4m 12.450s
> F16 to F16 btrfs - 14m 31.252s
>
> A quick test seems to sa…
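
Those figures read like a simple wall-clock untar onto each mount; a sketch
of the method (paths and tarball version are guesses, not from the thread):

    cd /mnt/f16-btrfs                 # repeat for each target fs/server
    time tar xjf ~/vlc-2.0.1.tar.bz2  # compare the 'real' times
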
We have this issue.
I have a support call open with Red Hat about it. Bug reports only
really get actioned if you open a support call and point at the bug
report.
I also have this issue though much much worse on Fedora (using BTRFS),
which will surely have to be fixed before BTRFS becomes the default …
On 11/07/12 00:18, m.r...@5-cent.us wrote:
>> For any redhatters on the list, I'm going to be reopening this bug today.
>>
>> I am also VERY unhappy with Red Hat. I filed the bug months ago, and it was
>> *never* assigned - no one apparently even looked at it. It's a
>> show-stopper for us, since it …
It may not be a bug, it may be that RHEL 6.x implements I/O barriers
correctly, which slows things down but keeps you from losing data.
On Tue, Jul 10, 2012 at 7:18 AM, wrote:
> Thought I'd post this here, too - I emailed it to the redhat list, and
> that's pretty moribund, while I've seen r…