On Mon, 18 Apr 2016 11:46:18 -0700 Gregory Farnum wrote:
> On Sun, Apr 17, 2016 at 9:05 PM, Christian Balzer wrote:
> [snip quoted thread]
On Sun, Apr 17, 2016 at 9:05 PM, Christian Balzer wrote:
> [snip quoted thread]
Hello,
On Fri, 15 Apr 2016 08:20:45 +0200 Michael Metz-Martini | SpeedPartner
GmbH wrote:
> [snip quoted thread]
Hi,
On 15.04.2016 at 07:43, Christian Balzer wrote:
> On Fri, 15 Apr 2016 07:02:13 +0200 Michael Metz-Martini | SpeedPartner
> GmbH wrote:
>> On 15.04.2016 at 03:07, Christian Balzer wrote:
We thought this was a good idea so that we can set the replication
size differently for doc_root and raw-data if we like.
Hello,
On Fri, 15 Apr 2016 07:02:13 +0200 Michael Metz-Martini | SpeedPartner
GmbH wrote:
> [snip quoted thread]
Hi,
On 15.04.2016 at 03:07, Christian Balzer wrote:
>> We thought this was a good idea so that we can set the replication
>> size differently for doc_root and raw-data if we like. Seems this was a
>> bad idea for all objects.
> I'm not sure how you managed to get into that state or if it's a bug
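The per-directory replication scheme being described maps onto CephFS file
layouts. A minimal sketch of how such a setup is typically built, assuming
pools named doc_root and raw-data that are already attached to the filesystem
as data pools (pool and mount-point names here are illustrative, not taken
from the thread):

    # give each pool its own replication factor
    ceph osd pool set doc_root size 3
    ceph osd pool set raw-data size 2
    # pin each directory tree to its pool via the layout xattr
    setfattr -n ceph.dir.layout.pool -v doc_root /mnt/cephfs/doc_root
    setfattr -n ceph.dir.layout.pool -v raw-data /mnt/cephfs/raw-data

Only files created after the layout change land in the new pool; existing
objects stay where they were, which is one way to end up in the mixed state
described above.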
On Thu, 14 Apr 2016 19:39:01 +0200 Michael Metz-Martini | SpeedPartner
GmbH wrote:
> Hi,
>
> On 14.04.2016 at 03:32, Christian Balzer wrote:
[massive snip]
Thanks for that tree/du output, it matches what I expected.
You'd think XFS wouldn't be that intimidated by directories of that size.
It doesn't seem like it would be wise to run such systems on top of rbd.
-Sam
On Thu, Apr 14, 2016 at 11:05 AM, Jianjian Huo wrote:
> On Wed, Apr 13, 2016 at 6:06 AM, Sage Weil wrote:
>> On Tue, 12 Apr 2016, Jan Schermer wrote:
>>> Who needs to have exactly the same data in two separate objects
Hi,
On 14.04.2016 at 03:32, Christian Balzer wrote:
> On Wed, 13 Apr 2016 14:51:58 +0200 Michael Metz-Martini | SpeedPartner GmbH
> wrote:
>> On 13.04.2016 at 04:29, Christian Balzer wrote:
>>> On Tue, 12 Apr 2016 09:00:19 +0200 Michael Metz-Martini | SpeedPartner GmbH
>>> wrote:
On 11.04
Hello,
[reduced to ceph-users]
On Thu, 14 Apr 2016 11:43:07 +0200 Steffen Weißgerber wrote:
> >>> Christian Balzer wrote on Tuesday, 12 April 2016 at 01:39:
> > Hello,
> Hi,
> > I'm officially only allowed to do (preventative) maintenance during
> > weekend nights
Hello,
[reducing MLs to ceph-user]
On Wed, 13 Apr 2016 14:51:58 +0200 Michael Metz-Martini | SpeedPartner
GmbH wrote:
> Hi,
>
> On 13.04.2016 at 04:29, Christian Balzer wrote:
> > On Tue, 12 Apr 2016 09:00:19 +0200 Michael Metz-Martini | SpeedPartner
> > GmbH wrote:
> >> On 11.04.2016 at 23:3
On Wed, 13 Apr 2016 08:30:52 -0400 (EDT) Sage Weil wrote:
> On Wed, 13 Apr 2016, Christian Balzer wrote:
> > > > Recently we discovered an issue with the long object name handling
> > > > that is not fixable without rewriting a significant chunk of
> > > > FileStore's filename handling. (There is
Hello,
On 11/04/2016 23:39, Sage Weil wrote:
> [...] Is this reasonable? [...]
Warning: I'm just a Ceph user, and definitely not an expert.
1. Personally, if you look at the documentation and read the mailing list
and/or IRC a little, it seems _clear_ to me that ext4 is not recommended even if the
On Wed, 13 Apr 2016, Jan Schermer wrote:
> I apologise, I probably should have dialed down a bit.
> I'd like to personally apologise to Sage, for being so patient with my
> ranting.
No worries :)
> I just hope you don't forget about the measly RBD users like me (I'd
> guesstimate a silent 90%+
On Tue, 12 Apr 2016, Jan Schermer wrote:
> Who needs to have exactly the same data in two separate objects
> (replicas)? Ceph needs it because "consistency"?, but the app (VM
> filesystem) is fine with whatever version because the flush didn't
> happen (if it did the contents would be the same).
Hi,
On 13.04.2016 at 04:29, Christian Balzer wrote:
> On Tue, 12 Apr 2016 09:00:19 +0200 Michael Metz-Martini | SpeedPartner
> GmbH wrote:
>> On 11.04.2016 at 23:39, Sage Weil wrote:
>>> ext4 has never been recommended, but we did test it. After Jewel is
>>> out, we would like to explicitly recommend *against* ext4 and stop testing it.
On Wed, 13 Apr 2016, Christian Balzer wrote:
> > > Recently we discovered an issue with the long object name handling
> > > that is not fixable without rewriting a significant chunk of
> > > FileStore's filename handling. (There is a limit in the amount of
> > > xattr data ext4 can store in the inode
Hello,
On Tue, 12 Apr 2016 09:56:32 -0400 (EDT) Sage Weil wrote:
> Hi all,
>
> I've posted a pull request that updates any mention of ext4 in the docs:
>
> https://github.com/ceph/ceph/pull/8556
>
> In particular, I would appreciate any feedback on
>
>
> https://github.com/ceph/
Hello,
On Tue, 12 Apr 2016 09:00:19 +0200 Michael Metz-Martini | SpeedPartner
GmbH wrote:
> Hi,
>
> Am 11.04.2016 um 23:39 schrieb Sage Weil:
> > ext4 has never been recommended, but we did test it. After Jewel is
> > out, we would like to explicitly recommend *against* ext4 and stop
> > testing it.
Hello,
On Tue, 12 Apr 2016 09:56:13 +0200 Udo Lembke wrote:
> Hi Sage,
Not Sage, but since he hasn't piped up yet...
> we run ext4 only on our 8-node cluster with 110 OSDs and are quite happy
> with ext4.
> We started with xfs but the latency was much higher compared to ext4...
>
Welcome to th
> The cluster that we are deploying has several hardware choices which go a long
> way to improve this performance as well. Coupled with the coming Bluestore,
> the future looks bright.
>
>> -Original Message-
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Thank you for the votes of confidence, everybody. :)
It would be good if we could keep this thread focused on who is harmed
by retiring ext4 as a tested configuration at what speed, and break
out other threads for other issues. (I'm about to do that for one of
them!)
-Greg
Hi Jan,
I can answer your question very quickly: We.
We need that!
We need and want a stable, self-healing, scalable, robust, reliable
storage system which can talk to our infrastructure in different languages.
I have full understanding that people who are using an infrastructure
which is goin
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>> Sage Weil
>> Sent: 12 April 2016 21:48
>> To: Jan Schermer
>> Cc: ceph-devel ; ceph-users <ceph-us...@ceph.com>; ceph-maintain...@ceph.com
>> Subject: Re: [ceph-users] Deprecating ext4 support
On 12/04/2016 22:33, Jan Schermer wrote:
> I don't think it's apples and oranges.
> If I export two files via losetup over iSCSI and make a raid1 swraid out of
> them in guest VM, I bet it will still be faster than ceph with bluestore.
> And yet it will provide the same guarantees and do the same
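For concreteness, the thought experiment Jan describes can be sketched as
follows, with the iSCSI hop omitted; paths and device names are assumptions,
and this illustrates the comparison rather than reproducing his setup:

    # two backing files, exposed as block devices
    truncate -s 10G /srv/replica0.img /srv/replica1.img
    losetup /dev/loop0 /srv/replica0.img
    losetup /dev/loop1 /srv/replica1.img
    # mirror them with in-guest md RAID1, then put a filesystem on top
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/loop0 /dev/loop1
    mkfs.ext4 /dev/md0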
On Tue, 12 Apr 2016, Jan Schermer wrote:
> Still the answer to most of your points from me is "but who needs that?"
> Who needs to have exactly the same data in two separate objects
> (replicas)? Ceph needs it because "consistency"?, but the app (VM
> filesystem) is fine with whatever version be
Still the answer to most of your points from me is "but who needs that?"
Who needs to have exactly the same data in two separate objects (replicas)?
Ceph needs it because "consistency"?, but the app (VM filesystem) is fine with
whatever version because the flush didn't happen (if it did the contents would be the same).
Okay, I'll bite.
On Tue, 12 Apr 2016, Jan Schermer wrote:
> > Local kernel file systems maintain their own internal consistency, but
> > they only provide what consistency promises the POSIX interface
> > does--which is almost nothing.
>
> ... which is exactly what everyone expects
> ... which
On 12/04/2016 21:19, Jan Schermer wrote:
>
>> On 12 Apr 2016, at 20:00, Sage Weil wrote:
>>
>> On Tue, 12 Apr 2016, Jan Schermer wrote:
>>> I'd like to raise these points, then
>>>
>>> 1) some people (like me) will never ever use XFS if they have a choice
>>> given no choice, we will not use some
On Tue, 12 Apr 2016, Jan Schermer wrote:
> I'd like to raise these points, then
>
> 1) some people (like me) will never ever use XFS if they have a choice
> given no choice, we will not use something that depends on XFS
Huh?
> 3) doesn't the majority of Ceph users only care about RBD?
Well, half user
> On 12 Apr 2016, at 20:00, Sage Weil wrote:
> [snip quoted thread]
On Tue, 12 Apr 2016, Jan Schermer wrote:
> I'd like to raise these points, then
>
> 1) some people (like me) will never ever use XFS if they have a choice
> given no choice, we will not use something that depends on XFS
>
> 2) choice is always good
Okay!
> 3) doesn't the majority of Ceph users only
Hi all,
I've posted a pull request that updates any mention of ext4 in the docs:
https://github.com/ceph/ceph/pull/8556
In particular, I would appreciate any feedback on
https://github.com/ceph/ceph/pull/8556/commits/49604303124a2b546e66d6e130ad4fa296602b01
both on substance a
Hello!
On Mon, Apr 11, 2016 at 05:39:37PM -0400, sage wrote:
> Hi,
> ext4 has never been recommended, but we did test it. After Jewel is out,
> we would like explicitly recommend *against* ext4 and stop testing it.
1. Does filestore_xattr_use_omap fix issues with ext4? So, can I continue usin
Hi Sage,
we run ext4 only on our 8-node cluster with 110 OSDs and are quite happy
with ext4.
We started with xfs but the latency was much higher compared to ext4...
But we use RBD only with "short" filenames like
rbd_data.335986e2ae8944a.000761e1.
If we can switch from Jewel to K* and
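Udo's distinction is the crux of the ext4 question: RBD data objects keep
short, fixed-format names, while RGW and CephFS can generate names long enough
to trip ext4's limits. A quick, hedged way to check what a given pool actually
contains, using the standard rados CLI (the pool name is an assumption):

    # sample a few object names and find the longest one in the pool
    rados -p rbd ls | head -5
    rados -p rbd ls | awk '{ if (length($0) > max) max = length($0) } END { print max }'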
I'd like to raise these points, then
1) some people (like me) will never ever use XFS if they have a choice
given no choice, we will not use something that depends on XFS
2) choice is always good
3) doesn't the majority of Ceph users only care about RBD?
(Angry rant coming)
Even our last performanc
Hi,
On 11.04.2016 at 23:39, Sage Weil wrote:
> ext4 has never been recommended, but we did test it. After Jewel is out,
> we would like to explicitly recommend *against* ext4 and stop testing it.
Hmmm. We're currently migrating away from xfs as we had some strange
performance issues which were res
Hello,
On Mon, 11 Apr 2016 21:12:14 -0400 (EDT) Sage Weil wrote:
> On Tue, 12 Apr 2016, Christian Balzer wrote:
> >
> > Hello,
> >
> > What a lovely missive to start off my working day...
> >
> > On Mon, 11 Apr 2016 17:39:37 -0400 (EDT) Sage Weil wrote:
> >
> > > Hi,
> > >
> > > ext4 has ne
r"
Cc: ceph-de...@vger.kernel.org, ceph-us...@ceph.com, ceph-maintain...@ceph.com
Sent: Tuesday, April 12, 2016 10:12:14 AM
Subject: Re: [ceph-users] Deprecating ext4 support
On Tue, 12 Apr 2016, Christian Balzer wrote:
>
> Hello,
>
> What a lovely missive to start off my working day...
&
On Tue, 12 Apr 2016, Christian Balzer wrote:
>
> Hello,
>
> What a lovely missive to start off my working day...
>
> On Mon, 11 Apr 2016 17:39:37 -0400 (EDT) Sage Weil wrote:
>
> > Hi,
> >
> > ext4 has never been recommended, but we did test it.
> Patently wrong, as Shinobu just pointed out.
>
On Mon, Apr 11, 2016 at 06:49:09PM -0400, Shinobu Kinjo wrote:
> Just to clarify to prevent any confusion.
>
> Honestly I've never used ext4 as the underlying filesystem for a Ceph cluster,
> but according to the wiki [1], ext4 is recommended -;
>
> [1] https://en.wikipedia.org/wiki/Ceph_%28software%
On 12/04/2016 01:40, Lindsay Mathieson wrote:
> On 12/04/2016 9:09 AM, Lionel Bouton wrote:
>> * If the journal is not on a separate partition (SSD), it should
>> definitely be re-created NoCoW to avoid unnecessary fragmentation. From
>> memory : stop OSD, touch journal.new, chattr +C journal.ne
On 12/04/2016 9:09 AM, Lionel Bouton wrote:
> * If the journal is not on a separate partition (SSD), it should
> definitely be re-created NoCoW to avoid unnecessary fragmentation. From
> memory: stop OSD, touch journal.new, chattr +C journal.new, dd
> if=journal of=journal.new (your dd options here for
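Spelled out, Lionel's recipe looks roughly like this; a minimal sketch
assuming OSD id 0, the default filestore journal path, and systemd units,
all of which should be verified against the actual deployment before touching
a live OSD:

    sudo systemctl stop ceph-osd@0
    cd /var/lib/ceph/osd/ceph-0
    sudo touch journal.new
    sudo chattr +C journal.new     # btrfs honours NoCoW only on empty files
    sudo dd if=journal of=journal.new bs=4M conv=fsync
    sudo mv journal.new journal
    sudo systemctl start ceph-osd@0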
Hello,
What a lovely missive to start off my working day...
On Mon, 11 Apr 2016 17:39:37 -0400 (EDT) Sage Weil wrote:
> Hi,
>
> ext4 has never been recommended, but we did test it.
Patently wrong, as Shinobu just pointed out.
Ext4 never was (especially recently) flogged as much as XFS, but it a
Hi,
On 11/04/2016 23:57, Mark Nelson wrote:
> [...]
> To add to this on the performance side, we stopped doing regular
> performance testing on ext4 (and btrfs) sometime back around when ICE
> was released to focus specifically on filestore behavior on xfs.
> There were some cases at the time
To: "Sage Weil", ceph-de...@vger.kernel.org,
ceph-us...@ceph.com, ceph-maintain...@ceph.com, ceph-annou...@ceph.com
Sent: Tuesday, April 12, 2016 6:57:16 AM
Subject: Re: [ceph-users] Deprecating ext4 support
On 04/11/2016 04:44 PM, Sage Weil wrote:
> On Mon, 11 Apr 2016, Sage W
Hi,
ext4 has never been recommended, but we did test it. After Jewel is out,
we would like to explicitly recommend *against* ext4 and stop testing it.
Why:
Recently we discovered an issue with the long object name handling that is
not fixable without rewriting a significant chunk of FileStore's filename
handling.
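For reference, the escape hatch that later shipped for ext4 users was a pair
of OSD name-length caps; a minimal ceph.conf sketch along the lines of the
Jewel-era documentation (option names and values as remembered from those
docs, so treat them as assumptions and check your release):

    [osd]
    # cap object name lengths so FileStore on ext4 stays within what the
    # filesystem can store; only safe for short-name workloads such as RBD
    osd max object name len = 256
    osd max object namespace len = 64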
Hi!
How about these findings?
https://events.linuxfoundation.org/sites/events/files/slides/AFL%20filesystem%20fuzzing%2C%20Vault%202016.pdf
Ext4 seems to be the file system that held up best... (although xfs
also survived quite long...)
Regards
Michael
On 2016-04-11 23:44, Sage Weil wrote:
> On
On Mon, 11 Apr 2016, Sage Weil wrote:
> Hi,
>
> ext4 has never been recommended, but we did test it. After Jewel is out,
> we would like to explicitly recommend *against* ext4 and stop testing it.
I should clarify that this is a proposal and solicitation of feedback--we
haven't made any decisions
On 04/11/2016 04:44 PM, Sage Weil wrote:
> On Mon, 11 Apr 2016, Sage Weil wrote:
>> Hi,
>> ext4 has never been recommended, but we did test it. After Jewel is out,
>> we would like to explicitly recommend *against* ext4 and stop testing it.
> I should clarify that this is a proposal and solicitation of feed
RIP Ceph.
> On 11 Apr 2016, at 23:42, Allen Samuels wrote:
>
> RIP ext4.
>
>
> Allen Samuels
> Software Architect, Fellow, Systems and Software Solutions
>
> 2880 Junction Avenue, San Jose, CA 95134
> T: +1 408 801 7030 | M: +1 408 780 6416
> allen.samu...@sandisk.com
>
>
>> -Original
RIP ext4.
Allen Samuels
Software Architect, Fellow, Systems and Software Solutions
2880 Junction Avenue, San Jose, CA 95134
T: +1 408 801 7030 | M: +1 408 780 6416
allen.samu...@sandisk.com
> -Original Message-
> From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
> ow...@vger.k