On Tue, Dec 20, 2016 at 11:32 PM, Aakanksha Pudipeddi
wrote:
> I am trying to set up Kraken from source and I get an import error when using
> the ceph command:
>
>
>
> Traceback (most recent call last):
>
> File "/home/ssd/src/vanilla-ceph/ceph-install/bin/ceph", line 112, in
>
>
> from ceph_
> On 21 December 2016 at 2:39, Christian Balzer wrote:
>
>
>
> Hello,
>
> I just (manually) added 1 OSD each to my 2 cache-tier nodes.
> The plan was/is to actually do the data migration on the least busy day
> in Japan, New Year's (the actual holiday is January 2nd this year).
>
> So I
Hi, everyone.
Sometimes I need to know the IP addresses of the Ceph clients connected at a
given time. Is there any way to list those IP addresses in a Ceph cluster? I'm
using Ceph RBD with KVM servers.
Thank you :-)
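One way to get at this on the RBD side, assuming the KVM guests keep their images open, is to look at the image watchers; the pool and image names below are placeholders:
```
# Show which clients (IP:port) currently have an RBD image open
rbd status rbd/vm-disk-1

# Lower-level equivalent: list the watchers on the image's header object
# (the header object name depends on the image id; <image-id> is a placeholder)
rados -p rbd listwatchers rbd_header.<image-id>
```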
Hi,
I'm currently investigating a case where a Ceph cluster ended up with
inconsistent clone information.
Here's what I did to reproduce it quickly (a rough sketch of the corresponding
commands follows the list):
* Created new cluster (tested in hammer 0.94.6 and jewel 10.2.3)
* Created two pools: test and rbd
* Created base image in pool test, created snapshot
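A rough sketch of the corresponding commands (sizes, pg counts and names are placeholders, not the exact script used):
```
ceph osd pool create test 64                        # pool for the base image
ceph osd pool create rbd 64                         # second pool (may already exist)
rbd create test/base --size 1024 --image-format 2   # format 2 so it can be snapshotted/cloned
rbd snap create test/base@snap1                     # snapshot of the base image
```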
Hi Andras,
I am not the most experienced user, but I guess you could have a look at this
object on each OSD related to the PG, compare them, and delete the differing
object. I assume you have size = 3.
Then run pg repair again.
But be careful: IIRC the replicas will be recovered from the primary PG.
HTH
Hi,
I use this Ansible installation:
https://github.com/harobed/poc-ceph-ansible/tree/master/vagrant-3mons-3osd
I have:
* 3 osd
* 3 mons
```
root@ceph-test-1:/home/vagrant# ceph version
ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
```
```
bash-4.2# rbd --version
ceph version
```
No problem with Debian:
```
root@ceph-client-2:/mnt/image2# rbd --version
ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
root@ceph-client-2:/mnt/image2# uname --all
Linux ceph-client-2 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt9-2 (2015-04-13)
x86_64 GNU/Linux
```
I need to upgrade r
Hi,
I have this issue:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-December/015216.html
Question: can I use rbd 0.80.7 with a Ceph cluster running version 10.2.5?
Why do I use this old version? Because I use the Atomic Project
http://www.projectatomic.io/
Best regards,
Stéphane
--
Stéphane Klein
Hi all,
On top of our Ceph cluster, one application uses the RADOS gateway / S3.
This application does not use the multipart S3 API; instead it splits files
into chunks of the desired size (for example 1 MB), since it has to work on
top of several types of storage.
In every test, the application hangs when uploading
Yes, size = 3, and I have checked that all three replicas are the same
zero-length object on disk. I think some metadata is mismatching what the
OSD log refers to as "object info size", but I'm not sure what to do about it.
pg repair does not fix it. In fact, the file this object c
Thanks ceph@jack and Alexandre for the reassurance!
C.
On 12/20/2016 08:37 PM, Alexandre DERUMIER wrote:
I have upgraded 3 Jewel clusters on Jessie to the latest 10.2.5; works fine.
----- Original message -----
From: "Chad William Seys"
To: "ceph-users"
Sent: Tuesday, 20 December 2016 17:31:49
Subject: [ceph
I was under the impression that when a client talks to the cluster, it
grabs the OSD map and runs the CRUSH algorithm to determine where it
stores the object. Does the RGW server do this for clients? If I had 12
clients all talking through one gateway, would that server have to pass all
of the
Hi Gerald,
For the S3 and Swift case, the clients are not accessing the Ceph cluster. They
are S3 and Swift clients and only talk to the RGW over HTTP. The RGW is the
Ceph client that does all the interaction with the Ceph cluster.
Best
JC
> On Dec 21, 2016, at 07:27, Gerald Spencer wrot
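To make that concrete: any authenticated RADOS client can compute an object's placement locally from its copy of the OSD map plus CRUSH, without asking any intermediary. A quick illustration (pool and object names are just examples):
```
# Prints the PG and the acting OSD set the object maps to, computed client-side
ceph osd map rbd some-object
```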
I have configured:
```
ceph osd crush tunables firefly
```
on the cluster. After that, same error :(
2016-12-21 15:23 GMT+01:00 Stéphane Klein :
> No problem with Debian:
>
> ```
> root@ceph-client-2:/mnt/image2# rbd --version
> ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
> root@
You are unfortunately the second person today to hit an issue where
"rbd remove" incorrectly proceeds when it hits a corner-case error.
First things first: when you configured your new user, you needed to
give it "rx" permissions on the parent image's pool. If you attempted
the clone operation usin
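For reference, granting that looks roughly like the following; the client name and pool names are placeholders:
```
# Give the clone user read/execute caps on the parent image's pool in addition
# to its normal caps on the destination pool
ceph auth caps client.cloneuser \
  mon 'allow r' \
  osd 'allow rwx pool=destpool, allow rx pool=parentpool'
```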
Hi,
http://ceph.com/resources/mailing-list-irc/
points to:
http://dir.gmane.org/gmane.comp.file-systems.ceph.user
If I try to search the list, it points to:
http://search.gmane.org/?query=test&group=gmane.comp.file-systems.ceph.user
and the search.gmane.org DNS name does not exist.
So
On Wed, Dec 21, 2016 at 5:50 PM, Stéphane Klein
wrote:
> I have configured:
>
> ```
> ceph osd crush tunables firefly
> ```
If it gets to rm, then it's probably not tunables. Are you running
these commands by hand?
Anything in dmesg?
Thanks,
Ilya
Same error with an rbd image created with --image-format 1.
2016-12-21 14:51 GMT+01:00 Stéphane Klein :
> Hi,
>
> I use this Ansible installation: https://github.com/harobed/
> poc-ceph-ansible/tree/master/vagrant-3mons-3osd
>
> I have:
>
> * 3 osd
> * 3 mons
>
> ```
> root@ceph-test-1:/home/vagrant#
2016-12-21 18:47 GMT+01:00 Ilya Dryomov :
> On Wed, Dec 21, 2016 at 5:50 PM, Stéphane Klein
> wrote:
> > I have configured:
> >
> > ```
> > ceph osd crush tunables firefly
> > ```
>
> If it gets to rm, then it's probably not tunables. Are you running
> these commands by hand?
>
Yes, I have exec
Hi,
searching here:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/
ends in:
ht://Dig error
htsearch detected an error. Please report this to the webmaster of this
site by sending an e-mail to: mail...@listserver-dap.dreamhost.com The
error message is:
Unable to read word database file
'
I mean setting up a Ceph cluster after compiling from source and running make
install. I usually use the long form to set up the cluster. The mon setup is
fine, but when I create an OSD using ceph osd create, or even check the status
using ceph -s after the monitor is set up, I get this error. The PATH, LD_LIBRA
Hi,
I'm looking for a way to set up a read-only cache tier, but objects updated
in the backend store must be evicted from the cache (if present there) and
promoted to the cache again on the next read miss. This
would ensure the cache never contains stale objects. Which cache mode should
On Wed, Dec 21, 2016 at 6:58 PM, Stéphane Klein
wrote:
>
>
> 2016-12-21 18:47 GMT+01:00 Ilya Dryomov :
>>
>> On Wed, Dec 21, 2016 at 5:50 PM, Stéphane Klein
>> wrote:
>> > I have configured:
>> >
>> > ```
>> > ceph osd crush tunables firefly
>> > ```
>>
>> If it gets to rm, then it's probably not
Can you share the exact steps you took to build the cluster?
On Thu, Dec 22, 2016 at 3:39 AM, Aakanksha Pudipeddi
wrote:
> I mean setting up a Ceph cluster after compiling from source and running make
> install. I usually use the long form to set up the cluster. The mon setup is
> fine, but when I create an OSD u
Hi,
One of our OSDs has gone into a state where it throws an assert and dies
shortly after being started.
The following assert is being thrown:
https://github.com/ceph/ceph/blob/v10.2.5/src/osd/PGLog.cc#L1036-L1047
--- begin dump of recent events ---
0> 2016-12-21 17:05:57.975799
2016-12-21 19:51 GMT+01:00 Ilya Dryomov :
> On Wed, Dec 21, 2016 at 6:58 PM, Stéphane Klein
> wrote:
> >>
> > 2016-12-21 18:47 GMT+01:00 Ilya Dryomov :
> >>
> >> On Wed, Dec 21, 2016 at 5:50 PM, Stéphane Klein
> >> wrote:
> >> > I have configured:
> >> >
> >> > ```
> >> > ceph osd crush tunables
On Wed, Dec 21, 2016 at 9:42 PM, Stéphane Klein
wrote:
>
>
> 2016-12-21 19:51 GMT+01:00 Ilya Dryomov :
>>
>> On Wed, Dec 21, 2016 at 6:58 PM, Stéphane Klein
>> wrote:
>> >>
>> > 2016-12-21 18:47 GMT+01:00 Ilya Dryomov :
>> >>
>> >> On Wed, Dec 21, 2016 at 5:50 PM, Stéphane Klein
>> >> wrote:
>>
[moving to ceph-users ...]
You should be able to use the rados CLI to list all the objects in
your pool, excluding all objects associated with known, valid image
ids:
rados ls -p rbd | grep -vE "($(rados -p rbd ls | grep rbd_header |
grep -o "\.[0-9a-f]*" | sed -e :a -e '$!N; s/\n/|/; ta' -e
's/\
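A sketch of the same idea with the pipeline written out, assuming a pool named "rbd" and format-2 object naming (rbd_header.<id> / rbd_data.<id>.<extent>):
```
# Collect the ids of images that still have a header object, then list data
# objects whose id is not in that set (i.e. likely leftovers)
valid_ids=$(rados -p rbd ls | sed -n 's/^rbd_header\.//p' | paste -sd '|' -)
rados -p rbd ls | grep '^rbd_data\.' | grep -vE "^rbd_data\.(${valid_ids})\."
```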
Sure, to build I did:
1. ./do_cmake.sh
2. cd build
3. make && sudo make install
To set up:
#5. prepare keys
ceph-authtool --create-keyring ./ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
ceph-authtool --create-keyring ./ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --
Sorry, the line breaks seem to be messed up. Here is the setup script:
#5. prepare keys
ceph-authtool --create-keyring ./ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
ceph-authtool --create-keyring ./ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --c
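For comparison, a minimal long-form monitor bootstrap looks roughly like this; the hostname, IP, and paths are placeholders and are not taken from the script above:
```
ceph-authtool --create-keyring /tmp/ceph.mon.keyring \
  --gen-key -n mon. --cap mon 'allow *'
ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring \
  --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *'
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
monmaptool --create --add node1 192.168.0.10 --fsid "$(uuidgen)" /tmp/monmap
ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
```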
> Not sure what's going on here. Using firefly version of the rbd CLI
> tool isn't recommended of course, but doesn't seem to be _the_ problem.
> Can you try some other distro with an equally old ceph - ubuntu trusty
> perhaps?
Same error with:
* Ubuntu trusty
root@ceph-client-3:/home/vagrant#
On Wed, Dec 21, 2016 at 10:55 PM, Stéphane Klein
wrote:
>
>> Not sure what's going on here. Using firefly version of the rbd CLI
>> tool isn't recommended of course, but doesn't seem to be _the_ problem.
>> Can you try some other distro with an equally old ceph - ubuntu trusty
>> perhaps?
>
>
> S
2016-12-21 23:06 GMT+01:00 Ilya Dryomov :
> What's the output of "cat /proc/$(pidof rm)/stack"?
>
root@ceph-client-3:/home/vagrant# cat /proc/2315/stack
[] sleep_on_page+0xe/0x20
[] wait_on_page_bit+0x7f/0x90
[] truncate_inode_pages_range+0x2fe/0x5a0
[] truncate_inode_pages+0x15/0x20
[] ext4_evict
On Wed, Dec 21, 2016 at 11:10 PM, Stéphane Klein
wrote:
>
> 2016-12-21 23:06 GMT+01:00 Ilya Dryomov :
>>
>> What's the output of "cat /proc/$(pidof rm)/stack?
>
>
> root@ceph-client-3:/home/vagrant# cat /proc/2315/stack
> [] sleep_on_page+0xe/0x20
> [] wait_on_page_bit+0x7f/0x90
> [] truncate_inod
2016-12-21 23:33 GMT+01:00 Ilya Dryomov :
> On Wed, Dec 21, 2016 at 11:10 PM, Stéphane Klein
> wrote:
> >
> > 2016-12-21 23:06 GMT+01:00 Ilya Dryomov :
> >>
> >> What's the output of "cat /proc/$(pidof rm)/stack?
> >
> >
> > root@ceph-client-3:/home/vagrant# cat /proc/2315/stack
> > [] sleep_on_p
On Wed, Dec 21, 2016 at 6:39 PM, Aakanksha Pudipeddi
wrote:
> I mean setting up a Ceph cluster after compiling from source and running make
> install. I usually use the long form to set up the cluster. The mon setup is
> fine, but when I create an OSD using ceph osd create, or even check the status
> using cep
On Wed, Dec 21, 2016 at 11:36 PM, Stéphane Klein
wrote:
>
>
> 2016-12-21 23:33 GMT+01:00 Ilya Dryomov :
>>
>> On Wed, Dec 21, 2016 at 11:10 PM, Stéphane Klein
>> wrote:
>> >
>> > 2016-12-21 23:06 GMT+01:00 Ilya Dryomov :
>> >>
>> >> What's the output of "cat /proc/$(pidof rm)/stack?
>> >
>> >
>>
2016-12-21 23:33 GMT+01:00 Ilya Dryomov :
> What if you boot ceph-client-3 with >512M memory, say 2G?
>
Success!
Thanks
Hi John,
Thanks for your response. Here is what I am setting them to:
I am installing all binaries in the folder:
~/src/vanilla-ceph/ceph-install. So the folder contains the following
subfolders:
ssd@msl-lab-ads01:~/src/vanilla-ceph/ceph-install$ ls
bin etc include lib libexec sbin share
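With a non-default prefix like that, the shell environment usually has to be pointed at it explicitly. A minimal sketch, assuming Python 2.7 and that the bindings landed under the prefix (the exact subdirectory varies by distro):
```
export CEPH_PREFIX=$HOME/src/vanilla-ceph/ceph-install
export PATH=$CEPH_PREFIX/bin:$PATH
export LD_LIBRARY_PATH=$CEPH_PREFIX/lib:$LD_LIBRARY_PATH
# ceph_argparse and friends must be importable by the ceph CLI wrapper
export PYTHONPATH=$CEPH_PREFIX/lib/python2.7/site-packages:$PYTHONPATH
```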
What output do you get from the following?
$ strace -eopen ceph 2>&1|grep ceph_argparse
On Thu, Dec 22, 2016 at 8:55 AM, Aakanksha Pudipeddi
wrote:
> Hi John,
>
> Thanks for your response. Here is what I am setting them to:
>
> I am installing all binaries in the folder:
> ~/src/vanilla-ceph/cep
Hello Brad,
I manually deleted the .py and .pyc files under /usr/lib/python2.7/dist-packages
and that seems to have worked. The ceph command does not complain now.
But I just noticed during the installation that there was an error installing
some Python packages:
TEST FAILED:
/home/ssd/sr
The output from a working run of ceph is not very helpful.
On Thu, Dec 22, 2016 at 9:26 AM, Aakanksha Pudipeddi
wrote:
> Hello Brad,
>
> I manually deleted the py and pyc files under
> /usr/lib/python2.7/dist-packages and that seems to have worked. The ceph
> command does not complain right now
I understand. We'll leave it at that and I will get back to you if I see
further issues.
Thanks again for all the help!
-Original Message-
From: Brad Hubbard [mailto:bhubb...@redhat.com]
Sent: Wednesday, December 21, 2016 3:33 PM
To: Aakanksha Pudipeddi
Cc: John Spray; ceph-users
Subjec
Hello,
On Thu, 22 Dec 2016 01:47:36 +0700 Lazuardi Nasution wrote:
> Hi,
>
> I'm looking for a way to set up a read-only cache tier, but objects updated
> in the backend store must be evicted from the cache (if present there) and
> promoted to the cache again on the next rea
Hello,
On Wed, 21 Dec 2016 11:33:48 +0100 (CET) Wido den Hollander wrote:
>
> > On 21 December 2016 at 2:39, Christian Balzer wrote:
> >
> >
> >
> > Hello,
> >
> > I just (manually) added 1 OSD each to my 2 cache-tier nodes.
> > The plan was/is to actually do the data-migration at the le
Hi Christian,
Thank you for your explanation. Based on your suggestion, I have switched to
writeback cache mode. But currently write ops perform much better than read
ops. I mean "dd if=/dev/zero of=/dev/rb0" performs much better than "dd
if=/dev/rb0 of=/dev/null". Do you know what's wrong here?
Best regards,
Hello,
On Thu, 22 Dec 2016 09:12:44 +0700 Lazuardi Nasution wrote:
> Hi Christian,
>
> Thank you for your explanation. Based on your suggestion, I have switched to
> writeback cache mode. But currently write ops perform much better than read
> ops. I mean "dd if=/dev/zero of=/dev/rb0" performs much better th
Hi Christian,
Actual test commands are below.
write: dd if=/dev/zero of=/dev/rb0 bs=4096
read: dd if=/dev/rb0 of=/dev/null bs=4096
I will check ceph readahead too.
Best regards,
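If /dev/rb0 is a krbd-mapped device, sequential reads are very sensitive to block-layer readahead; a hedged example of checking and raising it (device name and value are illustrative):
```
blockdev --getra /dev/rbd0        # current readahead, in 512-byte sectors
blockdev --setra 4096 /dev/rbd0   # raise it to ~2 MB
# For librbd clients (e.g. QEMU), the "rbd readahead max bytes" and
# "rbd readahead trigger requests" options in ceph.conf play a similar role.
```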
On Dec 22, 2016 09:40, "Christian Balzer" wrote:
>
> Hello,
>
> On Thu, 22 Dec 2016 09:12:44 +0700 Lazuardi Nasuti
2016-12-21 23:39 GMT+01:00 Stéphane Klein :
>
>
> 2016-12-21 23:33 GMT+01:00 Ilya Dryomov :
>
>> What if you boot ceph-client-3 with >512M memory, say 2G?
>>
>
> Success !
>
Would it be possible to add a warning message to rbd when memory is too low?