On 03/12/2017 07:54 PM, Florian Haas wrote:
Florian,
For others following this thread who still have the hammer→jewel upgrade
ahead: there is a ceph.conf option you can use here; no need to fiddle
with the upstart scripts.
setuser match path = /var/lib/ceph/$type/$cluster-$id
Ah, I did not k
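For reference, a minimal sketch of where this directive lives (assuming a standard deployment; adjust the section and paths to your layout):

    [global]
    # keep a daemon running as the owner of its data directory
    # (root, for pre-Jewel data) instead of switching to the ceph user
    setuser match path = /var/lib/ceph/$type/$cluster-$id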
On 03/13/2017 02:02 PM, Christoph Adomeit wrote:
Christoph,
Thanks for the detailed upgrade report.
We have another scenario: we have already upgraded to jewel 10.2.6, but
we are still running all our monitors and osd daemons as root using the
setuser match path directive.
What would be the r
Hello, I'm trying to deploy a ceph filestore cluster with LVM using
ceph-ansible playbook. I've been fixing a couple of code blocks in
ceph-ansible and ceph-disk/main.py and made some progress but now I'm stuck
again; 'ceph-disk activate osd' fails.
Please let me just show you the error message
I'd love some help with this; it'd be much appreciated.
Best Wishes,
Nicholas.
On Tue, Mar 14, 2017 at 4:51 PM Gunwoo Gim wrote:
> Hello, I'm trying to deploy a ceph filestore cluster with LVM using
> ceph-ansible playbook. I've been fixing a couple of code blocks in
> ceph-ansible and ceph-disk
Is this Jewel? Do you have some udev rules or anything that changes the
owner on the journal device (eg. /dev/sdx or /dev/nvme0n1p1) to ceph:ceph?
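A quick way to check (journal device path assumed; substitute your own):

    ls -l /dev/nvme0n1p1
    # a correctly installed Jewel/Kraken udev rule set should show ceph:ceph here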
On 03/14/17 08:53, Gunwoo Gim wrote:
> I'd love some help with this; it'd be much appreciated.
>
> Best Wishes,
> Nicholas.
>
> On Tue, Mar 14, 2017 at 4
On 17-03-14 00:08, John Spray wrote:
On Mon, Mar 13, 2017 at 8:15 PM, Andras Pataki wrote:
Dear Cephers,
We're using the ceph file system with the fuse client, and lately some of
our processes are getting stuck seemingly waiting for fuse operations. At
the same time, the cluster is healthy, n
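For anyone debugging similar hangs, the usual first diagnostics (daemon and socket names assumed) are to compare what the MDS and the client each think is outstanding:

    # on the MDS host: list client sessions
    ceph daemon mds.<name> session ls
    # on the client host, via the ceph-fuse admin socket:
    ceph daemon /var/run/ceph/ceph-client.admin.asok mds_requests
    ceph daemon /var/run/ceph/ceph-client.admin.asok objecter_requests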
Dear cephers,
I met a problem when using ceph-fuse with quota enabled.
My ceph version is:
ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
I have two ceph-fuse processes on two different hosts (node1 and node2)
One ceph-fuse is mounted with root directory on /mnt/c
Hi,
This sounds familiar: http://tracker.ceph.com/issues/17939
I found that you can get the updated quota on node2 by touching the
base dir. In your case:
touch /shares/share0
-- Dan
On Tue, Mar 14, 2017 at 10:52 AM, yu2xiangyang wrote:
> Dear cephers,
> I met a problem when using ce
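For anyone reproducing this: CephFS quotas live in extended attributes, so both setting and checking them are xattr operations (paths and size assumed):

    # set a 100 GB quota on the shared directory
    setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/cephfs/shares/share0
    # read it back
    getfattr -n ceph.quota.max_bytes /mnt/cephfs/shares/share0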
Hi all
I need help with moving all data from one pool to another.
pool1: EC pool with a replicated cache tier pool (call it pool1a)
pool2: replicated pool
need to move data from pool1 -> pool2
any help/procedures would be helpful
Kind regards
Paul
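One commonly cited sequence (a sketch, not an official procedure: flush and detach the cache tier first, then copy the base pool; note that rados cppool does not preserve snapshots and has caveats with EC pools, so test on scratch data first):

    ceph osd tier cache-mode pool1a forward --yes-i-really-mean-it
    rados -p pool1a cache-flush-evict-all
    ceph osd tier remove-overlay pool1
    ceph osd tier remove pool1 pool1a
    rados cppool pool1 pool2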
Hi,
>> My question is how much total CEPH storage does this allow me? Only 2.3TB?
>> or does the way CEPH duplicates data enable more than 1/3 of the storage?
> 3 means 3, so 2.3TB. Note that Ceph is sparse, so that can help quite a bit.
To expand on this, you probably want to keep some margins a
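As a rough worked example (assuming the ~7TB raw capacity implied above): usable = 7TB / 3 replicas ≈ 2.3TB, and in practice you would plan to fill only ~70-80% of that so a failed disk or node can be re-replicated before any OSD hits the full ratio.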
Thanks John,
I think that has resolved the problems.
Dan
On 03/04/2017 09:08 AM, John Spray wrote:
On Fri, Mar 3, 2017 at 9:48 PM, Daniel Davidson wrote:
ceph daemonperf mds.ceph-0
-mds-- --mds_server-- ---objecter--- -mds_cache- ---mds_log
rlat inos caps|hsr hcs hcr |w
Hi John,
I've checked the MDS session list, and the fuse client does appear on
that with 'state' as 'open'. So both the fuse client and the MDS agree
on an open connection.
Attached is the log of the ceph fuse client at debug level 20. The MDS
got restarted at 9:44:20, and it went through
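For anyone wanting to capture comparable logs, client-side debugging can be turned up in the [client] section of ceph.conf (a sketch; the option names are standard, the level is what was used here):

    [client]
    debug client = 20
    debug ms = 1
    log file = /var/log/ceph/ceph-client.$name.log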
I did find the journal configuration entries and they indeed did help for
this test, thanks
Configuration was:
journal_max_write_entries=100
journal_queue_max_ops=300
journal_queue_max_bytes=33554432
journal_max_write_bytes=10485760
Configuration after update:
journal_max_write_entries=1
journ
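For anyone experimenting with similar values, they can be injected at runtime without a restart (a sketch using the values above; they can also go in the [osd] section of ceph.conf to persist):

    ceph tell osd.* injectargs '--journal_max_write_entries 100 --journal_queue_max_ops 300'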
Thank you very much, Peter.
I'm sorry for not clarifying the version number; it's kraken and
11.2.0-1xenial.
I guess the udev rules in this file are supposed to change them:
/lib/udev/rules.d/95-ceph-osd.rules
...but the rules' filters don't seem to match the DEVTYPE part of the
prepared par
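A hypothetical workaround rule along these lines (using the DM_LV_NAME variable that the stock LVM udev rules export; the file name, the LV-name match, and even whether your journals are LVs at all are assumptions, so check with udevadm info first):

    # /etc/udev/rules.d/99-ceph-lvm.rules (hypothetical)
    ACTION=="add|change", KERNEL=="dm-*", ENV{DM_LV_NAME}=="*journal*", OWNER="ceph", GROUP="ceph", MODE="0660"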
Hi all,
Even with debug_osd 0/0 as well as every other debug_ setting at 0/0 I
still get logs like those pasted below in
/var/log/ceph/ceph-osd..log when the relevant situation arises
(release 11.2.0).
Any idea what toggle switches these off? I went through and set
every single debug_ setting t
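A guess worth trying on 11.2.0: some of the noisier bluestore messages come from subsystems with their own toggles rather than debug_osd, e.g.:

    ceph tell osd.* injectargs '--debug_rocksdb 0/0 --debug_bluefs 0/0 --debug_bdev 0/0'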
On Tue, Mar 14, 2017 at 2:10 PM, Andras Pataki wrote:
> Hi John,
>
> I've checked the MDS session list, and the fuse client does appear on that
> with 'state' as 'open'. So both the fuse client and the MDS agree on an
> open connection.
>
> Attached is the log of the ceph fuse client at debug lev
Hi,
We initially upgraded from Hammer to Jewel while keeping the ownership
unchanged, by adding "setuser match path =
/var/lib/ceph/$type/$cluster-$id" in ceph.conf
Later, we used the following steps to change from running as root to
running as ceph.
On the storage nodes, we ran the following
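(The commonly used per-OSD sequence looks roughly like the sketch below; this is illustrative, not necessarily the exact steps used here:)

    ceph osd set noout
    systemctl stop ceph-osd@<id>
    chown -R ceph:ceph /var/lib/ceph/osd/ceph-<id>
    systemctl start ceph-osd@<id>
    ceph osd unset noout    # once every OSD has been converted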
Thanks for decoding the logs; now I see what to look for. Can you
point me to any documentation that explains a bit more on the logic
(about capabilities, Fb/Fw, how the communication between the client and
the MDS works, etc.)?
I've tried running the client and the MDS at log level 20,
Hi,
I'm going to set up a small cluster (5 nodes with 3 MONs, 2 - 4 HDDs per
node) to test whether ceph at such a small scale performs well enough to
put into a production environment (or whether it performs well only if
there are tens of OSDs, etc.).
Are there any "do's" and "don'ts" in matt
Hello,
your subject line has little relevance to your rather broad questions.
On Tue, 14 Mar 2017 23:45:26 +0100 Michał Chybowski wrote:
> Hi,
>
> I'm going to set up a small cluster (5 nodes with 3 MONs, 2 - 4 HDDs per
> node) to test whether ceph at such a small scale performs well
> e
Greg, thanks for the reply.
True that I can't provide enough information to know what happened, since
the pool is gone.
But based on your experience, may I take some of your time and ask for the
top 5 causes that could explain what happened to that pool (or any pool
Hi There,
I see NOTIFY_TIMEOUT is defined as 5 seconds and is used by notify2().
This value is passed from the client to the OSD side. Why do we define this
value, and how do the client and OSD use it in the watch/notify mechanism?
Thanks,
Zhongyan
On Mon, Mar 13, 2017 at 6:09 AM, Florian Haas wrote:
> On Mon, Mar 13, 2017 at 11:00 AM, Dan van der Ster wrote:
>>> I'm sorry, I may have worded that in a manner that's easy to
>>> misunderstand. I generally *never* suggest that people use CFQ on
>>> reasonably decent I/O hardware, and thus h
This is a very generic question. Perchance, are you referring to
librbd's use of watch/notify? If so, we picked 5 seconds because that
should be more than enough time for a client to ACK the message. In
cases where it matters, we will re-send the notification if it times
out due to a non-responsive
Hello,
I have tried to recover the pg using the following steps:
Preparation:
1. set noout
2. stop osd.2
3. use ceph-objectstore-tool to export from osd.2
4. start osd.2
5. repeat steps 2-4 on osd 35, 28, 63 (I've done these hoping to be able to use
one of those exports to recover the PG)
First a
Decide which copy you want to keep and export that with ceph-objectstore-tool
Delete all copies on all OSDs with ceph-objectstore-tool (not by
deleting the directory on the disk).
Use force_create_pg to recreate the pg empty.
Use ceph-objectstore-tool to do a rados import on the exported pg copy
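A sketch of those four steps with ceph-objectstore-tool (the pgid, OSD id and paths are hypothetical, and the OSD must be stopped while the tool runs):

    systemctl stop ceph-osd@2
    # export the copy you decided to keep
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-2 \
        --journal-path /var/lib/ceph/osd/ceph-2/journal \
        --op export --pgid 1.28 --file /root/pg1.28.export
    # remove the pg from every OSD that holds a copy (repeat per OSD)
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-2 \
        --journal-path /var/lib/ceph/osd/ceph-2/journal \
        --op remove --pgid 1.28
    # recreate the pg empty, then import the kept copy on one OSD
    ceph pg force_create_pg 1.28
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-2 \
        --journal-path /var/lib/ceph/osd/ceph-2/journal \
        --op import --file /root/pg1.28.export
    systemctl start ceph-osd@2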
Hello,
Can we get any update on this problem?
Thanks
On Thu, Mar 2, 2017 at 2:16 PM, nokia ceph wrote:
> Hello,
>
> Env:- v11.2.0 - bluestore - EC 3 + 1
>
> We are getting below entries both in /var/log/messages and osd logs. May I
> know what the impact of the below message is, and as these
We currently run a commodity cluster that supports a few petabytes of data.
Each node in the cluster has 4 drives, currently mounted as /0 through /3. We
have been researching alternatives for managing the storage, Ceph being one
possibility, iRODS being another. For preservation purposes, we wo