Thanks, Sage Weil.
Writing a filesystem is a serious matter; we should make that clear, including the coding style.
There are other places we should fix.
Thanks
At 2014-09-29 12:10:52, "Sage Weil" wrote:
>On Mon, 29 Sep 2014, yuelongguang wrote:
hi, Sage Weil.
you mean if I use cache tiering, clie
Hi Wido,
> On 26 Sep 2014, at 23:14, Wido den Hollander wrote:
>
> On 26-09-14 17:16, Dan Van Der Ster wrote:
>> Hi,
>> Apologies for this trivial question, but what is the correct procedure to
>> replace a failed OSD that uses a shared journal device?
>>
>> Suppose you have 5 spinning disks (
On 26/09/14 17:16, Dan Van Der Ster wrote:
> Hi,
> Apologies for this trivial question, but what is the correct procedure to
> replace a failed OSD that uses a shared journal device?
>
> I’m just curious, for such a routine operation, what are most admins doing in
> this case?
>
I think ceph-o
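For reference, a minimal sketch of the manual steps I would expect here, assuming the failed OSD is osd.12 and the device names are placeholders (new data disk /dev/sdX, existing journal partition /dev/sdY2 on the shared SSD):

  # remove the dead OSD from the cluster
  ceph osd out 12
  ceph osd crush remove osd.12
  ceph auth del osd.12
  ceph osd rm 12

  # recreate it on the replacement disk, reusing the journal partition
  ceph-disk prepare /dev/sdX /dev/sdY2
  ceph-disk activate /dev/sdX1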
Hi,
> On 29 Sep 2014, at 10:01, Daniel Swarbrick
> wrote:
>
> On 26/09/14 17:16, Dan Van Der Ster wrote:
>> Hi,
>> Apologies for this trivial question, but what is the correct procedure to
>> replace a failed OSD that uses a shared journal device?
>>
>> I’m just curious, for such a routine op
Dear ceph users,
we have been managing ceph clusters for 1 year now. Our setup is typically
made of Supermicro servers with SATA OSD drives and journals on SSD.
Those SSDs are all failing one after the other after one year :(
We used Samsung 850 Pro (120GB) with two setups (small nodes with 2 SSDs,
2
Hi Dan,
At least looking at upstream to get journals and partitions persistently
working, this requires gpt partitions, and being able to add a GPT
partition UUID to work perfectly with minimal modification.
I am not sure of the status of this on RHEL6. The latest Fedora and
OpenSUSE support this bu
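As a concrete illustration of what that looks like on a GPT-labelled journal SSD (device name, partition number and OSD id are only placeholders):

  # udev exposes stable names for GPT partitions via their partition UUIDs
  ls -l /dev/disk/by-partuuid/

  # inspect the GUIDs of an existing journal partition
  sgdisk -i 2 /dev/sdY

  # an OSD prepared by ceph-disk normally points its journal at such a path
  ls -l /var/lib/ceph/osd/ceph-12/journal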
Hi Owen,
> On 29 Sep 2014, at 10:33, Owen Synge wrote:
>
> Hi Dan,
>
> At least looking at upstream to get journals and partitions persistently
> working, this requires gpt partitions, and being able to add a GPT
> partition UUID to work perfectly with minimal modification.
>
> I am not sure t
Hello,
On Mon, 29 Sep 2014 10:31:03 +0200 Emmanuel Lacour wrote:
>
> Dear ceph users,
>
>
> we have been managing ceph clusters for 1 year now. Our setup is typically
> made of Supermicro servers with SATA OSD drives and journals on SSD.
>
> Those SSDs are all failing one after the other after one
Hi Emmanuel,
This is interesting, because we’ve had sales guys telling us that those Samsung
drives are definitely the best for a Ceph journal O_o !
The conventional wisdom has been to use the Intel DC S3700 because of its
massive durability.
Anyway, I’m curious what the SMART counters say o
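In case it helps, a quick way to pull those counters (device name is a placeholder; the attribute names vary by vendor and firmware):

  # full SMART dump for the journal SSD
  smartctl -a /dev/sdX

  # on Samsung drives the interesting attributes are usually
  # Wear_Leveling_Count and Total_LBAs_Written
  smartctl -A /dev/sdX | egrep -i 'wear|lba'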
Hi Alexandre,
No problem, I hope this saves you some pain.
It's probably worth going for a larger journal, around 20 GB, if you
wish to play with tuning of "filestore max sync interval"; it could have some
interesting results.
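Something along these lines in ceph.conf, with the values meant only as a starting point to tune from:

  [osd]
      osd journal size = 20480          # 20 GB journal; the value is in MB
      filestore max sync interval = 30  # seconds; the default is 5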
Also, you probably already know this; however, most of us when startin
On Mon, Sep 29, 2014 at 05:57:12PM +0900, Christian Balzer wrote:
>
> Given your SSDs, are they failing after more than 150TB have been written?
between 30 and 40 TB ...
>
> > Thought, statistics gives 60GB (option 2) to 100 GB (option 1) writes
> > per day on SSD on a not really over loaded cl
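(As a rough cross-check of those figures:

  60-100 GB/day x 365 days ~= 22-36 TB/year

which is consistent with the 30-40 TB above, and well below the ~150 TB asked about.)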
On Mon, Sep 29, 2014 at 08:58:38AM +, Dan Van Der Ster wrote:
> Hi Emmanuel,
> This is interesting, because we’ve had sales guys telling us that those
> Samsung drives are definitely the best for a Ceph journal O_o !
> The conventional wisdom has been to use the Intel DC S3700 because of its
> On 29 Sep 2014, at 10:47, Dan Van Der Ster wrote:
>
> Hi Owen,
>
>> On 29 Sep 2014, at 10:33, Owen Synge wrote:
>>
>> Hi Dan,
>>
>> At least looking at upstream to get journals and partitions persistently
>> working, this requires gpt partitions, and being able to add a GPT
>> partition UU
Hi,
I saw the following commit in dumpling:
commit b5dafe1c0f7ecf7c3a25d0be5dfddcbe3d07e69e
Author: Sage Weil
Date: Wed Jun 18 11:02:58 2014 -0700
osd: allow io priority to be set for the disk_tp
The disk_tp covers scrubbing, pg deletion, and snap trimming
I've experienced high load
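Assuming the options added by that commit are the osd_disk_thread_ioprio_* settings, they can be tried like this (only effective when the OSD data disk uses the CFQ I/O scheduler):

  [osd]
      osd disk thread ioprio class = idle
      osd disk thread ioprio priority = 7

  # or injected at runtime without restarting the OSDs
  ceph tell osd.* injectargs '--osd_disk_thread_ioprio_class idle --osd_disk_thread_ioprio_priority 7'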
On Mon, 29 Sep 2014, Dan Van Der Ster wrote:
> Hi Owen,
>
> > On 29 Sep 2014, at 10:33, Owen Synge wrote:
> >
> > Hi Dan,
> >
> > At least looking at upstream to get journals and partitions persistently
> > working, this requires gpt partitions, and being able to add a GPT
> > partition UUID to
On Mon, 29 Sep 2014 11:15:21 +0200 Emmanuel Lacour wrote:
> On Mon, Sep 29, 2014 at 05:57:12PM +0900, Christian Balzer wrote:
> >
> > Given your SSDs, are they failing after more than 150TB have been
> > written?
>
> between 30 and 40 TB ...
>
That's low. One wonders what is going on here, Sams
On Mon, 29 Sep 2014 09:04:51 + Quenten Grasso wrote:
> Hi Alexandre,
>
> No problem, I hope this saves you some pain
>
> It's probably worth going for a larger journal, around 20 GB, if
> you wish to play with tuning of "filestore max sync interval"; it could
> have some interesting re
The issue hasn't popped up since I upgraded the kernel, so whatever I was
experiencing seems to have been addressed.
On Tue, Sep 9, 2014 at 12:13 PM, James Devine wrote:
> The issue isn't so much mounting the ceph client as it is the mounted ceph
> client becoming unusable requiring a remount.
What are the limits on compatibility between versions of the 'rbd' command and
versions of the rbd kernel driver?
I have a project that requires running 'rbd' on machines that have a fairly
new kernel (3.10) and really old versions of libc and other libs (based on
CentOS 5.10). It would be really nifty
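For what it's worth, the two sides can be checked independently; a small sketch (pool and image names are placeholders):

  # userspace side: which ceph release the rbd CLI / librbd comes from
  rbd --version

  # kernel side: the rbd module shipped with the running kernel
  uname -r
  modinfo rbd

  # any features of the image must also be understood by that kernel client
  rbd map mypool/myimage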
Hello ceph users,
We have a federated gateway configured to replicate between two zones.
Replication seems to be working smoothly between the master and slave zone,
however I have a recurring error in the replication log with the following
info:
INFO:radosgw_agent.worker:17573 is processin
On Mon, Sep 29, 2014 at 10:44 AM, Lyn Mitchell wrote:
>
>
> Hello ceph users,
>
>
>
> We have a federated gateway configured to replicate between two zones.
> Replication seems to be working smoothly between the master and slave zone,
> however I have a recurring error in the replication log with
Sorry, it seems that I missed this.
You can test it via ./ceph_test_librbd_fsx and by running "fsxtest" in
a VM with the librbd backend.
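For example (a sketch only: the pool and image names are placeholders, and I am assuming the fsx test takes them as positional arguments):

  # enable fiemap on the OSDs first (it is off by default)
  [osd]
      filestore fiemap = true

  # then exercise it through librbd
  ./ceph_test_librbd_fsx mypool testimage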
On Fri, Sep 26, 2014 at 4:07 PM, Stefan Priebe - Profihost AG
wrote:
> Hi,
> On 26.09.2014 at 10:02, Haomai Wang wrote:
>> If the user enables the fiemap feature on the osd side,