Hi all,
I am not sure if this is the correct mailing list. Correct me if I am wrong.
I failed to add a pool at http://ceph.com/pgcalc/ because of a JavaScript
error:
(index):345 Uncaught TypeError: $(...).dialog is not a function
at addPool (http://ceph.com/pgcalc/:345:31)
at HTMLButtonE
Hello all,
I have an issue with radosgw-admin regionmap update. It doesn't update the map.
With zone configured like this:
radosgw-admin zone get
{
"id": "fc12ac44-e27e-44e3-9b13-347162d3c1d2",
"name": "oak-1",
"domain_root": "oak-1.rgw.data.root",
"control_pool": "oak-1.rgw.control"
Hi all,
I'm trying to use *path restriction* on CephFS, running a Ceph Jewel (ceph
version 10.2.5) cluster.
For this I'm using the command specified in the official docs (
http://docs.ceph.com/docs/jewel/cephfs/client-auth/):
ceph auth get-or-create client.boris mon 'allow r' mds 'allow r, allow r
Hi,
On 01/11/2017 11:02 AM, Boris Mattijssen wrote:
Hi all,
I'm trying to use path restriction on CephFS, running a Ceph Jewel
(ceph version 10.2.5) cluster.
For this I'm using the command specified in the official docs
(http://docs.ceph.com/docs/jewel/cephfs/client-auth/):
ceph auth get-or
Your current problem has nothing to do with clients and neither does
choose_total_tries.
Try setting just this value to 100 and see if your situation improves.
Ultimately you need to take a good look at your cluster configuration
and how your crush map is configured to deal with that configuratio
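For reference, a quick way to check the value currently in effect before changing anything (a minimal sketch; needs admin credentials on the cluster):

  ceph osd crush show-tunables

The output lists choose_total_tries alongside the other CRUSH tunables, so the same command can be used to confirm the change after the new map has been injected.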
Hello,
We from Nokia are validating bluestore on a 3-node cluster with EC 2+1.
While upgrading our cluster from Kraken 11.0.2 to 11.1.1 with bluestore,
more than half of the OSDs in the cluster went down.
$ceph -s
cluster cb55baa8-d5a5-442e-9aae-3fd83553824e
health HEALTH_ERR
Hi
I'm the author of the mentioned thread on ceph-devel.
The second to last reply in that thread
(http://marc.info/?l=ceph-devel&m=148396739308208&w=2 ) mentions what I
suspected was the cause:
Improper balance of the entire cluster (2 new nodes had double the capacity of
the original cluste
Hi Brukhard,
Thanks for your answer. I've tried two things now:
* ceph auth get-or-create client.boris mon 'allow r' mds 'allow r path=/,
allow rw path=/boris' osd 'allow rw pool=cephfs_data'. This is according to
your suggestion. I am however now still able to mount the root path and
read all con
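For anyone following along, one way to check whether the restriction is actually enforced is to mount as the restricted client and try operations inside and outside the allowed path. A minimal sketch under the caps quoted above; the mount point and keyring path are illustrative:

  ceph-fuse -n client.boris -k /etc/ceph/ceph.client.boris.keyring /mnt/cephfs
  ls /mnt/cephfs              # read of / should work (mds 'allow r path=/')
  touch /mnt/cephfs/outside   # write outside /boris should be denied
  touch /mnt/cephfs/boris/ok  # write under /boris should succeed

ceph-fuse on Jewel should enforce these MDS path caps; as the rest of the thread points out, the 3.x kernel client does not.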
Ok, thank you. I thought I had to set ceph to a tunables profile. If I’m
right, then I just have to export the current crush map, edit it, and import it
again, like:
ceph osd getcrushmap -o /tmp/crush
crushtool -i /tmp/crush --set-choose-total-tries 100 -o /tmp/crush.new
ceph osd setcrushmap -i
Hi,
On 01/11/2017 12:39 PM, Boris Mattijssen wrote:
Hi Brukhard,
Thanks for your answer. I've tried two things now:
* ceph auth get-or-create client.boris mon 'allow r' mds 'allow r
path=/, allow rw path=/boris' osd 'allow rw pool=cephfs_data'. This is
according to your suggestion. I am howe
On Wed, Jan 11, 2017 at 11:39 AM, Boris Mattijssen
wrote:
> Hi Brukhard,
>
> Thanks for your answer. I've tried two things now:
> * ceph auth get-or-create client.boris mon 'allow r' mds 'allow r path=/,
> allow rw path=/boris' osd 'allow rw pool=cephfs_data'. This is according to
> your suggestio
Please refer to Jens's message.
Regards,
On Wed, Jan 11, 2017 at 8:53 PM, Marcus Müller wrote:
> Ok, thank you. I thought I have to set ceph to a tunables profile. If I’m
> right, then I just have to export the current crush map, edit it and import
> it again, like:
>
> ceph osd getcrushmap -o
Hello,
On Tue, Jan 10, 2017 at 11:11 PM, Nick Fisk wrote:
>> -Original Message-
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>> Daznis
>> Sent: 09 January 2017 12:54
>> To: ceph-users
>> Subject: [ceph-users] Ceph cache tier removal.
>>
>> Hello,
>>
>>
>
> On 11 January 2017 at 12:24, Jayaram R wrote:
>
>
> Hello,
>
>
>
> We from Nokia are validating bluestore on 3 node cluster with EC 2+1
>
>
>
> While upgrading our cluster from Kraken 11.0.2 to 11.1.1 with bluestore,
> more than half of the OSDs in the cluster went down.
>
Yes
Yes, but all I want to know is whether my way of changing the tunables is
right or not?
> On 11.01.2017 at 13:11, Shinobu Kinjo wrote:
>
> Please refer to Jens's message.
>
> Regards,
>
>> On Wed, Jan 11, 2017 at 8:53 PM, Marcus Müller
>> wrote:
>> Ok, thank you. I thought I have to set
Ah right, I was using the kernel client on kernel 3.x
Thanks for the answer. I'll try updating tomorrow and will let you know if
it works!
Cheers,
Boris
On Wed, Jan 11, 2017 at 1:03 PM John Spray wrote:
> On Wed, Jan 11, 2017 at 11:39 AM, Boris Mattijssen
> wrote:
> > Hi Brukhard,
> >
> >
Hi Marcus
Please refer to the documentation:
http://docs.ceph.com/docs/master/rados/operations/crush-map/#editing-a-crush-map
I believe your suggestion only modifies the in-memory map and you never get a
changed version written in the outfile, but it could easily be tested by
decompiling the new
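The round trip described here can be sketched as follows; the file names are illustrative, and the tunable line is edited by hand in the decompiled text:

  ceph osd getcrushmap -o /tmp/crush
  crushtool -d /tmp/crush -o /tmp/crush.txt
  # edit the "tunable choose_total_tries ..." line in /tmp/crush.txt
  crushtool -c /tmp/crush.txt -o /tmp/crush.new
  crushtool -d /tmp/crush.new -o /tmp/crush.check.txt   # decompile again to verify
  ceph osd setcrushmap -i /tmp/crush.new

Grepping /tmp/crush.check.txt for choose_total_tries before injecting the map confirms whether the change actually made it into the outfile.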
On 11-1-2017 08:06, Adrian Saul wrote:
>
> I would concur having spent a lot of time on ZFS on Solaris.
>
> ZIL will reduce the fragmentation problem a lot (because it is not
> doing intent logging into the filesystem itself which fragments the
> block allocations) and write response will be a lo
Hi list,
I'm having trouble with slow requests; they have a noticeable impact
on the performance. I'd like to find out what the root cause is, and I
guess there are a lot of possible causes. But I'll just describe what
I'm seeing and hopefully someone can give advice.
I just counted the occur
Hi,
just for clarity:
Did you parse the slow request messages and use the effective OSD in the
statistics? Some messages may refer to other OSDs, e.g. "waiting for sub
op on OSD X,Y". The reporting OSD is not the root cause in that case,
but one of the mentioned OSDs (and I'm currently not a
I would like to propose that starting with the Luminous release of Ceph,
RBD will no longer support the creation of v1 image format images via the
rbd CLI and librbd.
We previously made the v2 image format the default and deprecated the v1
format under the Jewel release. It is important to note th
Hi,
I simply grepped for "slow request" in ceph.log. What exactly do you
mean by "effective OSD"?
If I have this log line:
2017-01-11 [...] osd.16 [...] cluster [WRN] slow request 32.868141
seconds old, received at 2017-01-11 [...]
ack+ondisk+write+known_if_redirected e12440) currently wa
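A rough way to do the attribution suggested above, counting both the reporting OSDs and the OSDs named as sub-op targets, might look like this; the log path and exact message wording are assumptions and can vary between releases:

  # OSDs reporting slow requests
  grep 'slow request' /var/log/ceph/ceph.log | grep -oE 'osd\.[0-9]+' | sort | uniq -c | sort -rn

  # OSDs being waited on for sub-ops (the "effective" OSDs)
  grep -oE 'waiting for subops from [0-9,]+' /var/log/ceph/ceph.log | \
    sed 's/.*from //' | tr ',' '\n' | sort -n | uniq -c | sort -rn

If the second list is dominated by a few OSDs that rarely appear in the first, those are better candidates for the root cause than the reporting OSDs.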
Hello,
As the subject says - are there any users/consumers of the librados C API? I'm asking because we're researching whether this PR:
https://github.com/ceph/ceph/pull/12216 will actually be beneficial for a larger group of users. This PR adds a bunch of new APIs that perform
object writes without interm
It would be fine to not support the v1 image format at all.
But it would probably be friendlier to users to provide a more
understandable message when they face a feature mismatch instead of just
displaying:
* rbd: map failed: (6) No such device or address
For instance, show the following some
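In the meantime, one way for users to tell the two situations apart before attempting rbd map is to check the image's format and feature bits up front; pool and image names here are illustrative:

  rbd info rbd/myimage | egrep 'format|features'

A format 1 image reports "format: 1", while a format 2 image lists features (layering, exclusive-lock, ...) that an older kernel client may not support.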
John,
This morning I compared the logs from yesterday and I see a noticeable
increase in messages like these:
2017-01-11 09:00:03.032521 7f70f15c1700 10 mgr handle_mgr_digest 575
2017-01-11 09:00:03.032523 7f70f15c1700 10 mgr handle_mgr_digest 441
2017-01-11 09:00:03.032529 7f70f15c1700 10 mgr n
On Wed, Jan 11, 2017 at 5:09 PM, Jason Dillaman wrote:
> I would like to propose that starting with the Luminous release of Ceph, RBD
> will no longer support the creation of v1 image format images via the rbd
> CLI and librbd.
>
> We previously made the v2 image format the default and deprecated
So I was attempting to add an OSD to my ceph cluster (running Jewel 10.2.5),
using ceph-deploy (1.5.35), on Ubuntu.
I have 2 OSDs on this node and am attempting to add a third.
I created the first two OSDs with on-disk journals, then later moved the journals to
partitions on the NVMe system disk (Intel P3600
+1
I'd be happy to tweak the internals of librbd to support pass-through
of C buffers all the way to librados. librbd clients like QEMU use the
C API and this currently results in several extra copies (in librbd
and librados).
On Wed, Jan 11, 2017 at 11:44 AM, Piotr Dałek wrote:
> Hello,
>
> As
On Wed, Jan 11, 2017 at 6:01 PM, Shinobu Kinjo wrote:
> It would be fine to not support v1 image format at all.
>
> But it would be probably friendly for users to provide them with more
> understandable message when they face feature mismatch instead of just
> displaying:
>
> * rbd: map failed: (
Jason: librbd itself uses the librados C++ api though, right?
-Sam
On Wed, Jan 11, 2017 at 9:37 AM, Jason Dillaman wrote:
> +1
>
> I'd be happy to tweak the internals of librbd to support pass-through
> of C buffers all the way to librados. librbd clients like QEMU use the
> C API and this curren
It does internally -- which requires the extra copy from C array to a
bufferlist. I had a PR for wrapping the C array into a bufferlist (w/o
the copy), but Sage pointed out a potential issue with such
implementations (which might still be an issue w/ this PR).
[1]
https://github.com/yuyuyu101/cep
On Thu, Jan 12, 2017 at 2:41 AM, Ilya Dryomov wrote:
> On Wed, Jan 11, 2017 at 6:01 PM, Shinobu Kinjo wrote:
>> It would be fine to not support v1 image format at all.
>>
>> But it would be probably friendly for users to provide them with more
>> understandable message when they face feature mism
On Wed, 11 Jan 2017, Jason Dillaman wrote:
> +1
>
> I'd be happy to tweak the internals of librbd to support pass-through
> of C buffers all the way to librados. librbd clients like QEMU use the
> C API and this currently results in several extra copies (in librbd
> and librados).
+1 from me too.
On Wed, Jan 11, 2017 at 1:01 PM, Sage Weil wrote:
> Jason, where does librbd fall?
Option (2) won't help for users like QEMU unless we can tie the
reference counting back into the AioCompletion (i.e. delay firing
until all references to the memory are released).
--
Jason
OK, I changed the setting and it seems to work as expected. I have to say thank
you to all you guys!
I was just not sure how to do this properly.
> On 11.01.2017 at 13:11, Shinobu Kinjo wrote:
>
> Please refer to Jens's message.
>
> Regards,
>
> On Wed, Jan 11, 2017 at 8:53 PM, Marcus Mül
On 1/11/17, 10:31 AM, "ceph-users on behalf of Reed Dier"
wrote:
>>2017-01-03 12:10:23.514577 7f1d821f2800 0 ceph version 10.2.5
>>(c461ee19ecbc0c5c330aca20f7392c9a00730367), process ceph-osd, pid 19754
>> 2017-01-03 12:10:23.517465 7f1d821f2800 1
>>filestore(/var/lib/ceph/tmp/mnt.WaQmjK) mkfs
Interesting, I feel silly having not checked ownership of the dev device.
Will chown before the next deploy and report back for the sake of possibly helping
someone else down the line.
Thanks,
Reed
> On Jan 11, 2017, at 3:07 PM, Stillwell, Bryan J
> wrote:
>
> On 1/11/17, 10:31 AM, "ceph-users on be
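The chown step mentioned above is presumably something along these lines; the partition path is illustrative and should be whichever journal partition is handed to ceph-deploy:

  sudo chown ceph:ceph /dev/nvme0n1p4

On Jewel the OSD daemons run as the ceph user, so a manually created journal partition still owned by root can make the prepare/mkfs step fail.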
On Thu, Jan 12, 2017 at 2:19 AM, Eugen Block wrote:
> Hi,
>
> I simply grepped for "slow request" in ceph.log. What exactly do you mean by
> "effective OSD"?
>
> If I have this log line:
> 2017-01-11 [...] osd.16 [...] cluster [WRN] slow request 32.868141 seconds
> old, received at 2017-01-11 [...
We are going to set up a test cluster with Kraken using CentOS 7, and we would
obviously like to stay as close as possible to using their repositories.
If we need to install the 4.1.4 kernel or later, is there a Ceph-recommended
repository to choose? Like, for instance, the elrepo
4.9ml/4.4lt?
http:
On Wed, Jan 11, 2017 at 3:27 PM, Marc Roos wrote:
> We are going to setup a test cluster with kraken using CentOS7. And
> obviously like to stay as close as possible to using their repositories.
Ilya has backported the latest kernel code to CentOS 7.3's kernel, so
I'd recommend the version in the
Hello John,
Apologies for the error. We will be working to correct it, but in the
interim, you can use http://linuxkidd.com/ceph/pgcalc.html
Thanks,
Michael J. Kidd
Sr. Software Maintenance Engineer
Red Hat Ceph Storage
+1 919-442-8878
On Wed, Jan 11, 2017 at 12:03 AM, 林自均 wrote:
> Hi all,
On 12/20/2016 08:48 AM, Wido den Hollander wrote:
> I wouldn't call it a fix. In 2016 (almost 2017) IPv6 should be enabled, no
> questions asked.
Tell that to every single one of my internet service providers.
No, please, do tell them. I'm tired of it too.
Hi Michael,
Thanks for your link!
However, when I use your clone of pgcalc, the newly created pool
doesn't follow my values in the "Add Pool" dialog. For example, no matter
what I fill in for "Pool Name", I always get "newPool" as the name.
By the way, where can I find the git repository of pgca
Hello,
On Wed, 11 Jan 2017 11:09:46 -0500 Jason Dillaman wrote:
> I would like to propose that starting with the Luminous release of Ceph,
> RBD will no longer support the creation of v1 image format images via the
> rbd CLI and librbd.
>
> We previously made the v2 image format the default and
Hello John,
Thanks for the bug report. Unfortunately, I'm not able to reproduce the
error. I tested from both Firefox and Chrome on Linux. Can you let me
know what OS/browser you're using? Also, I've not tested any non-'en-US'
characters, so I can't attest to how it will behave with other alp
On Thu, Jan 12, 2017 at 12:28 PM, Christian Balzer wrote:
>
> Hello,
>
> On Wed, 11 Jan 2017 11:09:46 -0500 Jason Dillaman wrote:
>
>> I would like to propose that starting with the Luminous release of Ceph,
>> RBD will no longer support the creation of v1 image format images via the
>> rbd CLI an
On Wed, Jan 11, 2017 at 10:43 PM, Shinobu Kinjo wrote:
> +2
> * Reduce manual operations as much as possible.
> * A recovery tool in case we break something that would not
> appear to us initially.
I definitely agree that this is an overdue tool and we have an
upstream feature ticket for t
Thanks Wido for the information. I hope that from 11.1.0 it will be possible to upgrade
to the intermediate releases of Kraken and to upcoming releases.
Thanks,
Muthu
On 11 January 2017 at 19:12, Wido den Hollander wrote:
>
> > On 11 January 2017 at 12:24, Jayaram R wrote:
> >
> >
> > Hello,
> >
> >
> >
> > We
Hi Michael,
Sorry, I can't reproduce it anymore. I used Chrome and didn't use any
non-alphabet character. I must have done something else wrong.
Thanks for the quick reply.
Best,
John Lin
Michael Kidd wrote on Thursday, January 12, 2017 at 11:33 AM:
> Hello John,
> Thanks for the bug report. Unfortunately, I'm