Hello,
do you know why this happens? I followed the official
documentation.
$ sudo rbd map foo --name client.admin
rbd: add failed: (5) Input/output error
the OS kernel,
$ uname -a
Linux ceph.yygamedev.com 3.2.0-23-generic #36-Ubuntu SMP Tue Apr 10
20:39:51 UTC 2012 x86_64 x86
On Thu, Oct 29, 2015 at 2:21 PM, Gregory Farnum wrote:
> On Wed, Oct 28, 2015 at 8:38 PM, Yan, Zheng wrote:
>> On Thu, Oct 29, 2015 at 1:10 AM, Burkhard Linke
>>> I tried to dig into the ceph-fuse code, but I was unable to find the
>>> fragment that is responsible for flushing the data from the p
On Thu, 29 Oct 2015, Yan, Zheng wrote:
> On Thu, Oct 29, 2015 at 2:21 PM, Gregory Farnum wrote:
> > On Wed, Oct 28, 2015 at 8:38 PM, Yan, Zheng wrote:
> >> On Thu, Oct 29, 2015 at 1:10 AM, Burkhard Linke
> >>> I tried to dig into the ceph-fuse code, but I was unable to find the
> >>> fragment tha
... and this is the core dump output while executing the "rbd diff" command:
http://paste.openstack.org/show/477604/
Regards,
Giuseppe
2015-10-28 16:46 GMT+01:00 Giuseppe Civitella:
> Hi all,
>
> I'm trying to get the real disk usage of a Cinder volume by converting these
> bash commands to Python
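For reference, the bash side of that calculation is commonly done by summing the extent
lengths that "rbd diff" reports; the pool and volume names below are placeholders:

$ rbd diff volumes/volume-XXXX | awk '{ used += $2 } END { print used / 1024 / 1024 " MB" }'

The second column of each diff row is the extent length in bytes, so the sum approximates
the space actually allocated to the volume.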
The only way I can think of to do that is to create a new CRUSH rule that selects
that specific OSD with min_size = max_size = 1, then create a pool
with size = 1 using that CRUSH rule.
Then you can use that pool as you'd use any other pool.
I haven't tested it, however it should work.
On Thu, Oct 29, 20
Hi,
On 10/29/2015 09:54 AM, Luis Periquito wrote:
The only way I can think of to do that is to create a new CRUSH rule that selects
that specific OSD with min_size = max_size = 1, then create a pool
with size = 1 using that CRUSH rule.
Then you can use that pool as you'd use any other pool.
I haven
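For anyone who wants to try this, here is a hedged sketch of the CRUSH-map edit Luis
describes; the bucket name, rule id, pool name and PG counts are placeholders to adapt:

$ ceph osd getcrushmap -o crushmap.bin
$ crushtool -d crushmap.bin -o crushmap.txt
# add a rule along these lines to crushmap.txt (the bucket should contain only the target OSD):
#   rule single-osd {
#       ruleset 10
#       type replicated
#       min_size 1
#       max_size 1
#       step take <bucket-with-target-osd>
#       step choose firstn 1 type osd
#       step emit
#   }
$ crushtool -c crushmap.txt -o crushmap.new
$ ceph osd setcrushmap -i crushmap.new
$ ceph osd pool create single-osd-pool 64 64 replicated
$ ceph osd pool set single-osd-pool crush_ruleset 10
$ ceph osd pool set single-osd-pool size 1
$ ceph osd pool set single-osd-pool min_size 1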
Hi,
On 10/29/2015 09:30 AM, Sage Weil wrote:
On Thu, 29 Oct 2015, Yan, Zheng wrote:
On Thu, Oct 29, 2015 at 2:21 PM, Gregory Farnum wrote:
On Wed, Oct 28, 2015 at 8:38 PM, Yan, Zheng wrote:
On Thu, Oct 29, 2015 at 1:10 AM, Burkhard Linke
I tried to dig into the ceph-fuse code, but I was un
Thanks, Gurjar.
I have loaded the rbd module, but got no luck.
This is what dmesg shows:
[119192.384770] libceph: mon0 172.17.6.176:6789 feature set mismatch, my
2 < server's 42040002, missing 4204
[119192.388744] libceph: mon0 172.17.6.176:6789 missing required
protocol features
[119202.400782] libce
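For what it's worth, "feature set mismatch" means the old 3.2 kernel client lacks feature
bits the cluster requires; upgrading the client kernel is the clean fix. As a hedged
workaround, and only after checking the kernel feature-set documentation, the cluster's
requirements can sometimes be relaxed, e.g.:

$ ceph osd crush tunables legacy          # drops newer CRUSH tunable feature bits
$ ceph osd pool set rbd hashpspool false  # 'rbd' is the default pool name; adjust as needed

Note that both changes can trigger data movement, and the hashpspool change may require a
confirmation flag on some releases.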
On Thu, Oct 29, 2015 at 8:13 AM, Wah Peng wrote:
> Hello,
>
> do you know why this happens? I followed the official
> documentation.
>
> $ sudo rbd map foo --name client.admin
>
> rbd: add failed: (5) Input/output error
>
>
> the OS kernel,
>
> $ uname -a
> Linux ceph.yygamedev.com 3.2
$ ceph -v
ceph version 0.80.10 (ea6c958c38df1216bf95c927f143d8b13c4a9e70)
thanks.
On 2015/10/29 Thursday 18:23, Ilya Dryomov wrote:
What's your ceph version and what does dmesg say? 3.2 is *way* too
old, you are probably missing more than one required feature bit. See
http://docs.ceph.com/docs/ma
On Thu, Oct 29, 2015 at 11:22 AM, Wah Peng wrote:
> Thanks, Gurjar.
> I have loaded the rbd module, but got no luck.
> This is what dmesg shows:
>
> [119192.384770] libceph: mon0 172.17.6.176:6789 feature set mismatch, my 2 <
> server's 42040002, missing 4204
> [119192.388744] libceph: mon0 172.17.6.176:
It sounds like you ran into this issue [1]. It's been fixed in upstream master
and infernalis branches, but the backport is still awaiting release on hammer.
[1] http://tracker.ceph.com/issues/12885
--
Jason Dillaman
- Original Message -
> From: "Giuseppe Civitella"
> To: "ceph-
I'm following the tutorial at
http://docs.ceph.com/docs/v0.79/start/quick-ceph-deploy/ to deploy a
monitor using
% ceph-deploy mon create-initial
But I got the following errors:
...
[ceph-node1][INFO ] Running command: ceph --cluster=ceph --admin-daemon
/var/run/ceph/ceph-mon.ceph-node1.asok mo
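For reference, the bootstrap sequence from that tutorial, run from the admin node with
"ceph-node1" as in the quoted output, is roughly:

$ ceph-deploy new ceph-node1
$ ceph-deploy install ceph-node1
$ ceph-deploy mon create-initial

If mon create-initial stalls on that admin-socket command, a mismatch between the node's
actual hostname and mon_initial_members in ceph.conf is a common cause worth checking.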
Hi,
Please search Google; the answer is out there.
As I remember:
1. With a low kernel version, rbd does not support some CRUSH features. Check
/var/log/messages.
2. sudo rbd map foo --name client.admin -p {pool_name}
3. Also specify -p {pool_name} when you create the image (see the example below).
Thanks!
-
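A minimal sketch of points 2 and 3 with an explicit pool; "rbd" and "foo" are placeholders:

$ sudo rbd create foo --size 4096 -p rbd
$ sudo rbd map foo -p rbd --name client.admin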
On 29 October 2015 at 19:24, Burkhard Linke <
burkhard.li...@computational.bio.uni-giessen.de> wrote:
> # ceph tell osd.1 bench
> {
> "bytes_written": 1073741824,
> "blocksize": 4194304,
> "bytes_per_sec": 117403227.00
> }
>
> It might help you to figure out whether individual OSDs
On Wed, Oct 28, 2015 at 7:54 PM, Matt Taylor wrote:
> I still see rsync errors due to permissions on the remote side:
>
Thanks for the heads-up; I bet another upload rsync process got
interrupted there.
I've run the following to remove all the oddly-named RPM files:
for f in $(locate *.rpm.*
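For the record, a rough equivalent using find, which avoids relying on a possibly stale
locate database; the repository path is a placeholder:

$ find /path/to/repo -name '*.rpm.*' -print   # review the matches first
$ find /path/to/repo -name '*.rpm.*' -delete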
Hi,
we have multiple Ceph clusters. One is used as the backend for an OpenStack
installation for developers; that is where we test Ceph upgrades before we upgrade the
prod Ceph clusters. The Ceph cluster is 4 nodes with 12 OSDs each, running
Ubuntu Trusty with the latest 3.13 kernel.
This time when upgrading fro
Hi Wido and all the community.
We caught a very idiotic issue in our CloudStack installation, which is related
to Ceph and possibly to the java-rados lib.
So, the agent crashes constantly (which causes a very big problem for
us...).
When the agent crashes, it crashes the JVM. And there is no event in the logs at all.
We enab
Hi,
I am having a strange problem with our development cluster. When I run rbd
export it just hangs. I have been running ceph for a long time and haven't
encountered this kind of issue. Any ideas as to what is going on?
rbd -p locks export seco101ira -
I am running
Centos 6.6 x86 64
cep
Hi,
I am having a strange problem with our development cluster. When I run rbd
export it just hangs. I have been running ceph for a long time and haven't
encountered this kind of issue. Any ideas as to what is going on?
rbd -p locks export seco101ira -
I am running
Centos 6.6 x86 64
ceph
You can also extend that command line to specify specific block and
total sizes. Check the help text. :)
-Greg
On Thursday, October 29, 2015, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
>
> On 29 October 2015 at 19:24, Burkhard Linke <
> burkhard.li...@computational.bio.uni-giessen.de
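If it helps, the extended form Greg mentions looks roughly like this; to my recollection
the arguments are total bytes then block size, so do check the help text on your version:

$ ceph tell osd.1 bench 1073741824 4194304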
I don't see the read request hitting the wire, so I am thinking your client
cannot talk to the primary PG for the 'rb.0.16cf.238e1f29.' object.
Try adding "debug objecter = 20" to your configuration to get more details.
--
Jason Dillaman
- Original Message -
> From: "Joe
rbd -p locks export seco101ira -
2015-10-29 13:13:49.487822 7f5c2cb3b7c0 1 librados: starting msgr at :/0
2015-10-29 13:13:49.487838 7f5c2cb3b7c0 1 librados: starting objecter
2015-10-29 13:13:49.487971 7f5c2cb3b7c0 1 -- :/0 messenger.start
2015-10-29 13:13:49.488027 7f5c2cb3b7c0 1 librados: se
Sorry, the information is in the headers. So I think the valid follow-up question
is why this information is in the headers and not in the body
of the request. I think this is a bug, but maybe I am not aware of a
subtlety. It would seem this JSON comes from this line [0].
[0] -
https://github.com/
On Thu, Oct 29, 2015 at 11:29 AM, Derek Yarnell wrote:
> Sorry, the information is in the headers. So I think the valid follow-up question
> is why this information is in the headers and not in the body
> of the request. I think this is a bug, but maybe I am not aware of a
> subtlety. It would se
Periodically I am also getting these while waiting:
2015-10-29 13:41:09.528674 7f5c24fd6700 10 client.7368.objecter tick
2015-10-29 13:41:14.528779 7f5c24fd6700 10 client.7368.objecter tick
2015-10-29 13:41:19.528907 7f5c24fd6700 10 client.7368.objecter tick
2015-10-29 13:41:22.515725 7f5c260d9700 1
Additional trace:
#0 0x7f30f9891cc9 in __GI_raise (sig=sig@entry=6) at
../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1 0x7f30f98950d8 in __GI_abort () at abort.c:89
#2 0x7f30f87b36b5 in __gnu_cxx::__verbose_terminate_handler() () from
/usr/lib/x86_64-linux-gnu/libstdc++.so.6
#3 0x000
From all we analyzed, it looks like it is this issue:
http://tracker.ceph.com/issues/13045
PR: https://github.com/ceph/ceph/pull/6097
Can anyone help us confirm this? :)
2015-10-29 23:13 GMT+02:00 Voloshanenko Igor :
> Additional trace:
>
> #0 0x7f30f9891cc9 in __GI_raise (sig=sig@entry=6
More info, output of dmesg:
[259956.804942] libceph: osd7 10.134.128.42:6806 socket closed (con state OPEN)
[260752.788609] libceph: osd1 10.134.128.43:6800 socket closed (con state OPEN)
[260757.908206] libceph: osd2 10.134.128.43:6803 socket closed (con state OPEN)
[260763.181751] libceph: osd3 1
On Thu, Oct 29, 2015 at 4:30 PM, Sage Weil wrote:
> On Thu, 29 Oct 2015, Yan, Zheng wrote:
>> On Thu, Oct 29, 2015 at 2:21 PM, Gregory Farnum wrote:
>> > On Wed, Oct 28, 2015 at 8:38 PM, Yan, Zheng wrote:
>> >> On Thu, Oct 29, 2015 at 1:10 AM, Burkhard Linke
>> >>> I tried to dig into the ceph-f
Hi everyone,
After installing hammer-0.94.5 on Debian, I wanted to trace librbd with LTTng, but
after the following steps I got nothing:
2036 mkdir -p traces
2037 lttng create -o traces librbd
2038 lttng enable-event -u 'librbd:*'
2039 lttng add-context -u -t pthread_id
2040 lttng start
20
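One thing worth checking, though this is an assumption on my part for 0.94.5: librbd's
LTTng tracepoints are gated behind a client config option, so with default settings the
session can record nothing even though the events are enabled. Something along these
lines, with an rbd workload run while the session is active, may help:

[client]
    rbd tracing = true

# ... run an rbd/librbd workload while the session is recording, then:
$ lttng stop
$ lttng view | head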