Hey
Right now multipath is not supported. There is an issue when:
- The client sends a write to A. A blocks the write.
- The client times out, so it sends the same write to B. B writes.
- The client sends another write to B. B writes.
- A unlocks and overwrites the second B write with old information.
It can end up corrupting data.
Hello,
I want to ask about a problem I'm having. There are some OSDs that don't have
any load (indicated by no ops on those OSDs).
I have attached the ceph osd status result: https://pastebin.com/fFLcCbpk
Look at OSDs 17, 61 and 72. There is no load or operation happening on those
OSDs. How can I fix this?
Thanks
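A few generic commands can help narrow down why particular OSDs see no ops; a minimal diagnostic sketch, assuming the OSD ids from the pastebin above (whether they are idle because no PGs map to them is an assumption):

  # Show utilization, CRUSH weight and PG count per OSD
  ceph osd df tree
  # List the PGs mapped to one of the idle OSDs, e.g. osd.17
  ceph pg ls-by-osd 17

If an OSD holds no PGs at all, its CRUSH weight and its placement in the CRUSH tree are the first things to check.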
Hi
I followed this guide, http://docs.ceph.com/docs/master/rbd/iscsi-target-cli/
to install Ceph iSCSI on CentOS 7.4 with a 4.x kernel. But why doesn't Ceph
support this on CentOS? In the document, they wrote:
*Requirements:*
- A running Ceph Luminous or later storage cluster
- RHEL/CentOS 7.
Christian Balzer wrote:
Your exact system configuration (HW, drives, controller, settings, etc.)
would be interesting, as I can think of plenty of scenarios on how to corrupt
things that normally shouldn't be affected by such actions.
Oh, the hardware in question is consumer grade and not new. Some old
On 01/11/2017 18:04, Chris Jones wrote:
Greg,
Thanks so much for the reply!
We are not clear on why ZFS is behaving poorly under some circumstances
on getxattr system calls, but that appears to be the case.
Since the last update we have discovered that back-to-back booting of
the OSD yields
Hello,
I have enabled bluestore compression. How can I get some statistics just to
see if compression is really working?
Thanks,
Mario
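One way to see whether BlueStore compression is actually doing anything is to look at the per-OSD perf counters; a minimal sketch, assuming the admin socket is available on the OSD host and that the counter names match this release:

  # Dump perf counters for one OSD and filter the compression-related ones
  ceph daemon osd.0 perf dump | grep -i compress
  # Counters such as bluestore_compressed, bluestore_compressed_allocated and
  # bluestore_compressed_original (if present) show how much data ended up compressed.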
In that thread, I really like how Wido puts it. He sets aside code paths,
bugs, etc. In reference to size=3 min_size=1 he says,
"Losing two disks at the same time is something which doesn't happen that
much, but if it happens you don't want to modify any data on the only copy
which y
Here's some good reading for you.
https://www.spinics.net/lists/ceph-users/msg32895.html
I really like how Wido puts it: "Losing two disks at the same time is
something which doesn't happen that much, but if it happens you don't want
to modify any data on the only copy which you still have left."
I'm currently running 12.2.0. How should I go about applying the patch?
Should I upgrade to 12.2.1, apply the changes, and then recompile?
I really appreciate the patch.
Thanks
On Wed, Nov 1, 2017 at 11:10 AM, David Zafman wrote:
>
> Jon,
>
> If you are able please test my tentative fix for
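One hedged way to test a pending fix like this is to put the pull request on top of the release you run and rebuild from source; a rough sketch, assuming PR 18673 applies cleanly to the v12.2.1 tag:

  git clone https://github.com/ceph/ceph.git && cd ceph
  git checkout -b test-pr-18673 v12.2.1
  git submodule update --init --recursive
  # Fetch the PR via GitHub's generic pull ref and cherry-pick its commit(s);
  # check on GitHub how many commits (N) the PR contains.
  git fetch origin pull/18673/head:pr-18673
  git cherry-pick pr-18673~N..pr-18673
  ./install-deps.sh && ./do_cmake.sh && cd build && make -j$(nproc)

This follows the approach in the question above: move to 12.2.1, apply the change, and recompile.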
I don't know. I've seen several cases where people have inconsistent pgs
that they can't recover from and they didn't lose any disks. The most
common thread between them is min_size=1. My postulated scenario might not
be the actual path in the code that leads to it, but something does... and
min
On Wed, Nov 1, 2017 at 11:27 AM Denes Dolhay wrote:
> Hello,
> I have a trick question for Mr. Turner's scenario:
> Let's assume size=2, min_size=1
> -We are looking at pg "A" acting [1, 2]
> -osd 1 goes down, OK
> -osd 1 comes back up, backfill of pg "A" commences from osd 2 to osd 1, OK
> -osd
Hello,
I have a trick question for Mr. Turner's scenario:
Let's assume size=2, min_size=1
-We are looking at pg "A" acting [1, 2]
-osd 1 goes down, OK
-osd 1 comes back up, backfill of pg "A" commences from osd 2 to osd 1, OK
-osd 2 goes down (and therefore pg "A" 's backfill to osd 1 is
incompl
Jon,
If you are able, please test my tentative fix for this issue, which
is in https://github.com/ceph/ceph/pull/18673
Thanks
David
On 10/30/17 1:13 AM, Jon Light wrote:
Hello,
I have three OSDs that are crashing on start with a FAILED
assert(p.same_interval_since) error. I ran across
RAID may make it likely that disk failures aren't going to be the cause of
your data loss, but none of my examples referred to hardware failure. They
were about the daemon and the code having issues that cause OSDs to restart,
or to stop responding long enough to be marked down. Data loss in this case isn't
talking ab
I have ownership of the directory /user/kwolter on the CephFS server and I
am mounting to ~/ceph, which I also own.
On Wed, Nov 1, 2017 at 2:04 PM, Gregory Farnum wrote:
> Which directory do you have ownership of? Keep in mind your local
> filesystem permissions do not get applied to the remote
On Thu, Oct 26, 2017 at 12:44:01PM -0200, Leonardo Vaz wrote:
> Hey Cephers,
>
> This is just a friendly reminder that the next Ceph Developer Monthly
> meeting is coming up:
>
> http://wiki.ceph.com/Planning
>
> If you have work that you're doing that is feature work, significant
> backports,
Which directory do you have ownership of? Keep in mind your local
filesystem permissions do not get applied to the remote CephFS mount...
On Wed, Nov 1, 2017 at 11:03 AM Keane Wolter wrote:
> I am mounting a directory under /user which I am the owner of with the
> permissions of 700. If I remove
I am mounting a directory under /user which I am the owner of with the
permissions of 700. If I remove the uid=100026 option, I have no issues. I
start having issues as soon as the uid restrictions are in place.
On Wed, Nov 1, 2017 at 1:05 PM, Gregory Farnum wrote:
> Well, obviously UID 100026 n
I have read your post, then read the thread you suggested; very interesting.
Then I read your post again and understood it better.
The most important thing is that even with min_size=1, writes are
acknowledged only after Ceph has written size=2 copies.
In the thread above there is:
As David already said, when all
Well, obviously UID 100026 needs to have the normal POSIX permissions to
write to the /user path, which it probably won't until after you've done
something as root to make it so...
On Wed, Nov 1, 2017 at 9:57 AM Keane Wolter wrote:
> Acting as UID 100026, I am able to successfully run ceph-fuse
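A minimal sketch of the kind of one-time root step Greg is describing, assuming the filesystem is also mounted somewhere as root and that /user/kwolter is the path the unprivileged UID needs (both are assumptions from this thread):

  # As root, on a client where CephFS is mounted (the mount point is hypothetical)
  mkdir -p /mnt/cephfs/user/kwolter
  chown 100026:100026 /mnt/cephfs/user/kwolter
  # After this, UID 100026 should be able to write inside its own directory
  # through its ceph-fuse mount.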
Greg,
Thanks so much for the reply!
We are not clear on why ZFS is behaving poorly under some circumstances on
getxattr system calls, but that appears to be the case.
Since the last update we have discovered that back-to-back booting of the OSD
yields very fast boot time, and very fast getxatt
Acting as UID 100026, I am able to successfully run ceph-fuse and mount the
filesystem. However, as soon as I try to write a file as UID 100026, I get
permission denied, but I am able to write to disk as root without issue. I
am looking for the inverse of this. I want to write changes to disk as UI
Hello everyone,
I would like to implement an object-size based pool-placement policy to
be used with the S3 API and I'd like to ask for some insights.
In particular, I would like to automatically store objects (via the S3 API)
into different pools based on their size, e.g. <64K objects to an SSD
Okay, so just to be clear you *haven't* run pg repair yet?
These PG copies look wildly different, but maybe I'm misunderstanding
something about the output.
I would run the repair first and see if that makes things happy. If you're
running on Bluestore, it will *not* break anything or "repair" wi
It looks like you're running with size = 2 and min_size = 1 (the min_size
is a guess; the size is based on how many OSDs belong to your problem
PGs). Here's some good reading for you.
https://www.spinics.net/lists/ceph-users/msg32895.html
Basically the gist is that when running with size = 2 yo
I disagree.
We have the following settings...
osd pool default size = 3
osd pool default min size = 1
There's some maths that needs to be done for 'osd pool default size'. A
setting of 3 and 1 allows for 2 disks to fail ... at the same time ...
without a loss of data. This is standard storage
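For reference, the per-pool values (as opposed to the defaults quoted above) can be inspected and changed at runtime; a small sketch with <pool> as a placeholder name:

  ceph osd pool get <pool> size
  ceph osd pool get <pool> min_size
  # min_size 2 stops the pool from accepting writes when only one copy is left
  ceph osd pool set <pool> min_size 2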
Hi Sage,
This is the mempool dump of my osd.1
ceph daemon osd.0 dump_mempools
{
    "bloom_filter": {
        "items": 0,
        "bytes": 0
    },
    "bluestore_alloc": {
        "items": 10301352,
        "bytes": 10301352
    },
    "bluestore_cache_data": {
        "items": 0,
        "bytes
Hi David,
What is your min_size in the cache pool? If your min_size is 2, then the
cluster would block requests to that pool due to it having too few copies
available.
this is a little embarrassing, but it seems it was the min_size indeed.
I had changed this setting a couple of weeks ago, bu
PPS - or min_size 1 in production
On Wed, Nov 1, 2017 at 10:08 AM David Turner wrote:
> What is your min_size in the cache pool? If your min_size is 2, then the
> cluster would block requests to that pool due to it having too few copies
> available.
>
> PS - Please don't consider using rep_size
What is your min_size in the cache pool? If your min_size is 2, then the
cluster would block requests to that pool due to it having too few copies
available.
PS - Please don't consider using rep_size 2 in production.
On Wed, Nov 1, 2017 at 5:14 AM Eugen Block wrote:
> Hi experts,
>
> we have u
I experienced this as well on a tiny Ceph test cluster...
HW spec - 3x
Intel i7-4770K quad core
32GB M.2 SSD
8GB memory
Dell PERC H200
6 x 3TB Seagate
CentOS 7.x
Ceph 12.x
I also run 3 memory-hungry procs on the Ceph nodes. Obviously there is a
memory problem here. Here are the steps I took avo
I was able to work around this problem by creating the initial cluster with a
single monitor.
On Tuesday, October 31, 2017, 10:42:54 AM CDT, Tyn Li
wrote:
Hello,
I am having trouble setting up a cluster using Ceph Luminous (version 12.2.1)
on Debian 9, kernel 4.9.51. I was able to c
Did you encounter an issue with the steps documented here [1]?
[1] http://docs.ceph.com/docs/master/rbd/iscsi-initiator-win/
On Wed, Nov 1, 2017 at 5:59 AM, GiangCoi Mr wrote:
> Hi all.
>
> I'm configuring Ceph RDB to expose iSCSI gateway. I am using 3 Ceph-node
> (CentOS 7.4 + Ceph Luminous). I
On Wed, 1 Nov 2017, shadow_lin wrote:
> Hi Sage,
> We have tried compiling the latest ceph source code from GitHub.
> The build is ceph version 12.2.1-249-g42172a4
> (42172a443183ffe6b36e85770e53fe678db293bf) luminous (stable).
> The memory problem seems better, but the memory usage of the osd is still k
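One knob that often comes up when BlueStore OSD memory keeps growing is the cache size; a minimal ceph.conf sketch, with values that are purely illustrative and not the fix discussed in this thread:

  [osd]
  # Per-OSD BlueStore cache (Luminous defaults are roughly 1 GiB for HDD, 3 GiB for SSD)
  bluestore_cache_size_hdd = 536870912
  bluestore_cache_size_ssd = 1073741824

  # The effective value can be checked on a running OSD with:
  #   ceph daemon osd.0 config get bluestore_cache_size_hdd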
Hello list,
in the past we used the E5-1650v4 for our SSD-based ceph nodes, which
worked fine.
The new Xeon generation doesn't seem to have a replacement. The closest one,
which is still 0.2GHz slower, is the Intel Xeon Gold 6128, but the price
is 3 times as high.
So the question is: is there any bene
Hi all.
I'm configuring Ceph RBD to expose an iSCSI gateway. I am using 3 Ceph nodes
(CentOS 7.4 + Ceph Luminous). I want to configure an iSCSI gateway on the 3
Ceph nodes so that Windows Server 2016 can connect via multipath iSCSI. How
can I configure this? Please help me to configure it. Thanks
Regards,
Giang
Hi experts,
we have upgraded our cluster to Luminous successfully, no big issues
so far. We are also testing a cache tier with only 2 SSDs (we know it's
not recommended), and there's one issue to be resolved:
Every time we have to restart the cache OSDs, we get slow requests with
impacts on ou
Hello,
On Wed, 1 Nov 2017 09:30:06 +0100 Michael wrote:
> Hello everyone,
>
> I've conducted some crash tests (unplugging drives, the machine,
Your exact system configuration (HW, drives, controller, settings, etc.)
would be interesting, as I can think of plenty of scenarios on how to corrupt
thin
Hello everyone,
I've conducted some crash tests (unplugging drives, the machine,
terminating and restarting ceph systemd services) with Ceph 12.2.0 on
Ubuntu and quite easily managed to corrupt what appears to be rocksdb's
log replay on a bluestore OSD:
# ceph-bluestore-tool fsck --path /va
I haven't seen much talk about direct integration with oVirt. Obviously it
kind of comes down to oVirt being interested in participating. But, is the
only hold-up getting development time toward an integration or is there
some kind of friction between the dev teams?
We use Ceph Kraken with oVirt
Sure, here it is (ceph -s):
  cluster:
    id:     8bc45d9a-ef50-4038-8e1b-1f25ac46c945
    health: HEALTH_ERR
            100 scrub errors
            Possible data damage: 56 pgs inconsistent

  services:
    mon: 3 daemons, quorum 0,1,pve3
    mgr: pve3(active)
    osd: 3 osds: 3 up, 3 in

  data:
    pools: