Hi all:
We use Ceph RBD with OpenStack. Recently some dirty data appeared in
my cinder-volume database, such as volumes stuck in the error-deleting status, so
we need to delete these volumes manually.
But when I try to delete the volume on the Ceph node, Ceph gives me this error:
[root@ceph-node3 ~]# rbd -p
> Can you paste me the whole output of the install? I am curious why/how you
> are getting el7 and el6 packages.
priority=1 is required in the /etc/yum.repos.d/ceph.repo entries.
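For reference, a sketch of what such an entry might look like (the baseurl and release name are illustrative; the key point is the priority=1 line, which requires the yum priorities plugin to take effect):

```ini
[ceph]
name=Ceph packages for $basearch
# illustrative baseurl; substitute your release and distro path
baseurl=http://ceph.com/rpm-firefly/el6/$basearch
enabled=1
priority=1
gpgcheck=1
```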
--
Kyle
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Hi,
I'm doing a fresh install of Ceph 0.83 (built from source) on an Ubuntu 14.04 VM
using ceph-deploy 1.59. Everything goes well until the OSD creation,
which fails to start with a journal open error. The steps are shown
below (ceph is the deploy target host):
(ceph1) $ uname -a
Linux ceph1 3.13.0
I forgot to register before posting, so I'm reposting.
I think I have a split issue, or at least I can't seem to get rid of these objects.
How can I tell Ceph to forget the objects and revert?
How this happened: due to the Python 2.7.8/Ceph bug, a whole rack
of Ceph went down (it had Ubuntu 14.10 and th
And you're using cloud-init in these cases, or are you executing
growrootfs via some other means?
If you're using cloud-init, you should see some useful messages in
/var/log/cloud-init.log (particularly on Debian/Ubuntu; I've found
CentOS's logs to not be as helpful).
Also, if you're using cloud-i
Hey cephers,
Just wanted to let folks know that as a way of saying thank you for 10
years of contributions and growth on the Ceph project we'll be
shipping a free limited edition 10th anniversary t-shirt to anyone who
has contributed to the project (and wants one). All you have to do to
get your
Adding ceph-users, back to the discussion.
Can you tell me if `ceph-deploy admin cephosd02` was what worked or if
it was the scp'ing of keys?
On Wed, Aug 6, 2014 at 12:36 PM, German Anders wrote:
> It worked!!! :) Thanks a lot, Alfredo. I also want to ask if you know how I can
> remove an OSD server
On 08/06/2014 03:43 AM, Luis Periquito wrote:
Hi,
In the last few days I've had some issues with the radosgw in which all
requests would just stop being served.
After some investigation, I narrowed it down to a single slow OSD. I just
restarted that OSD and everything would go back to working. Every
On Wed, Aug 6, 2014 at 11:23 AM, German Anders wrote:
> Hi to all,
> I'm having some issues while trying to deploy an OSD with btrfs:
>
> ceph@cephdeploy01:~/ceph-deploy$ ceph-deploy disk activate --fs-type btrfs
> cephosd02:sdd1:/dev/sde1
> [ceph_deploy.cli][INFO ] Invoked (1.4.0): /usr/bin
Hi to all,
I'm having some issues while trying to deploy an OSD with btrfs:
ceph@cephdeploy01:~/ceph-deploy$ ceph-deploy disk activate --fs-type
btrfs cephosd02:sdd1:/dev/sde1
[ceph_deploy.cli][INFO ] Invoked (1.4.0): /usr/bin/ceph-deploy disk
activate --fs-type btrfs cephosd02:sdd1:/dev/s
On Wed, 2014-08-06 at 08:05 -0700, Sage Weil wrote:
> BTW, do we still need to use something != virtio in order for
> trim/discard?
This was also my first concern when virtio was suggested. We were using
IDE primarily so we could take advantage of discard. The VMs we will be
supporting are more
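For what it's worth, discard can work without IDE if the guest disk goes through virtio-scsi rather than virtio-blk. A sketch of the relevant libvirt domain XML (the disk source and names are illustrative; the pieces that matter are discard='unmap' and the virtio-scsi controller):

```xml
<controller type='scsi' model='virtio-scsi'/>
<disk type='network' device='disk'>
  <!-- discard='unmap' passes guest TRIM through to the backing store -->
  <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
  <source protocol='rbd' name='volumes/volume-0001'/>
  <!-- bus='scsi' attaches via the virtio-scsi controller above -->
  <target dev='sda' bus='scsi'/>
</disk>
```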
On Wed, 6 Aug 2014 08:05:33 -0700 (PDT) Sage Weil wrote:
> On Wed, 6 Aug 2014, Mark Nelson wrote:
> > On 08/05/2014 06:19 PM, Mark Kirkwood wrote:
> > > On 05/08/14 23:44, Mark Nelson wrote:
> > > > On 08/05/2014 02:48 AM, Mark Kirkwood wrote:
> > > > > On 05/08/14 03:52, Tregaron Bayly wrote:
> >
On Wed, 6 Aug 2014, Mark Nelson wrote:
> On 08/05/2014 06:19 PM, Mark Kirkwood wrote:
> > On 05/08/14 23:44, Mark Nelson wrote:
> > > On 08/05/2014 02:48 AM, Mark Kirkwood wrote:
> > > > On 05/08/14 03:52, Tregaron Bayly wrote:
> > > > > Does anyone have any insight on how we can tune librbd to per
On 08/05/2014 06:19 PM, Mark Kirkwood wrote:
On 05/08/14 23:44, Mark Nelson wrote:
On 08/05/2014 02:48 AM, Mark Kirkwood wrote:
On 05/08/14 03:52, Tregaron Bayly wrote:
Does anyone have any insight on how we can tune librbd to perform closer
to the level of the rbd kernel module?
In our lab w
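One common gap between librbd and krbd in releases of this era is client-side caching, which librbd leaves off unless enabled. A sketch of the ceph.conf knobs involved (values are illustrative, not recommendations; tune for your workload):

```ini
[client]
# enable librbd's in-memory writeback cache
rbd cache = true
# total cache size per image, in bytes (32 MB here)
rbd cache size = 33554432
# dirty bytes allowed before writeback throttles
rbd cache max dirty = 25165824
# stay writethrough until the guest issues its first flush (safer)
rbd cache writethrough until flush = true
```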
Anybody know why this error occurs, and a solution?
[ceph@tm1cldcphal01 ~]$ ceph --version
ceph version 0.80.1 (a38fe1169b6d2ac98b427334c12d7cf81f809b74)
[ceph@tm1cldcphal01 ~]$ ceph --status
2014-08-06 08:55:13.168770 7f5527929700 0 librados: client.admin
authentication error (95) Operation not
On Wed, 6 Aug 2014 09:19:57 -0400 Chris Kitzmiller wrote:
> On Aug 5, 2014, at 12:43 PM, Mark Nelson wrote:
> > On 08/05/2014 08:42 AM, Mariusz Gronczewski wrote:
> >> On Mon, 04 Aug 2014 15:32:50 -0500, Mark Nelson
> >> wrote:
> >>> On 08/04/2014 03:28 PM, Chris Kitzmiller wrote:
> On Aug 1
You can use the
ceph osd perf
command to get recent queue latency stats for all OSDs. With a bit
of sorting this should quickly tell you if any OSDs are going
significantly slower than the others.
We'd like to automate this in Calamari or perhaps even in the monitor, but
it is not immediate
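For a quick manual check, the plain-text output sorts easily. The printf below stands in for `ceph osd perf | tail -n +2` with made-up numbers; the columns (osd, fs_commit_latency(ms), fs_apply_latency(ms)) follow the Firefly-era output format:

```shell
# Sort OSDs by fs_apply_latency (3rd column), slowest first.
printf '0 2 3\n1 40 120\n2 5 9\n' | sort -k3,3nr | head -n 3
# prints:
# 1 40 120
# 2 5 9
# 0 2 3
```

On a live cluster, an OSD whose latency sits far above the rest at the top of this list is the restart candidate.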
On Tue, 5 Aug 2014, Craig Lewis wrote:
> There currently isn't a backup tool for CephFS. CephFS is a POSIX
> filesystem, so your normal tools should work. It's a really large POSIX
> filesystem though, so normal tools may not scale well.
Note that CephFS does have one feature that should make ef
Any idea what may be the issue here?
[ceph@tm1cldcphal01 ~]$ ceph --status
2014-08-06 07:53:21.767255 7fe31fd1e700 -1 monclient(hunting): ERROR: missing
keyring, cannot use cephx for authentication
2014-08-06 07:53:21.767263 7fe31fd1e700 0 librados: client.admin
initialization error (2) No such
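A "missing keyring" from monclient usually means the client can't read the admin keyring at its default location. A minimal check, assuming stock paths (the fixes in the comments are the usual ones, adjust host and paths for your setup):

```shell
# The ceph CLI looks for the admin keyring at this default path; if it is
# absent, cephx authentication cannot even begin.
if [ ! -e /etc/ceph/ceph.client.admin.keyring ]; then
    echo "admin keyring missing on this node"
    # e.g. push it from the deploy node:  ceph-deploy admin tm1cldcphal01
    # or point the CLI at a copy:  ceph --keyring /path/to/keyring --status
fi
```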
On 06/08/14 13:07, debian Only wrote:
> Thanks for your reply.
> I have found and tested a way myself, and now I'll share it with others:
>
>
> >Begin>>> On Debian >>>
> root@ceph01-vm:~# modprobe brd rd_nr=1 rd_size=4194304 max_part=0
> root@ceph01-vm:~# mkdir /mnt/ramdisk
> root@ceph01-vm:~# mkfs.btrfs
On Aug 5, 2014, at 12:43 PM, Mark Nelson wrote:
> On 08/05/2014 08:42 AM, Mariusz Gronczewski wrote:
>> On Mon, 04 Aug 2014 15:32:50 -0500, Mark Nelson
>> wrote:
>>> On 08/04/2014 03:28 PM, Chris Kitzmiller wrote:
On Aug 1, 2014, at 1:31 PM, Mariusz Gronczewski wrote:
> I got weird stall
On Tue, Aug 5, 2014 at 10:47 PM, O'Reilly, Dan wrote:
> Final update: after a good deal of messing about, I did finally get this to
> work. Many thanks for the help
Would you mind sharing what changed so this would end up working? Just
want to make sure that it is not something on ceph-depl
Thanks for your reply.
I have found and tested a way myself, and now I'll share it with others:
>Begin>>> On Debian >>>
root@ceph01-vm:~# modprobe brd rd_nr=1 rd_size=4194304 max_part=0
root@ceph01-vm:~# mkdir /mnt/ramdisk
root@ceph01-vm:~# mkfs.btrfs /dev/ram0
WARNING! - Btrfs Btrfs v0.19 IS EXPERIMEN
I am confused about how files are stored in Ceph.
I did two tests. Where is the file, or the object for the file?
① rados put Python.msi Python.msi -p data
② rbd -p testpool create fio_test --size 2048
Does the rados command in ① mean using Ceph as object storage?
Does the rbd command in ② mean using Ceph as block stor
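As a side note on the difference: command ① stores Python.msi as a single RADOS object in pool data, while command ② creates an image that librbd stripes across many fixed-size RADOS objects (4 MB each by default). Assuming that default, the arithmetic for image ②:

```shell
# Number of 4 MB backing objects for a 2048 MB RBD image (default object size).
image_mb=2048
object_mb=4
echo $(( image_mb / object_mb ))   # prints 512
```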
On 06.08.2014 09:25, Christian Balzer wrote:
> On Wed, 06 Aug 2014 09:18:13 +0200 Tijn Buijs wrote:
>
>> Hello Pratik,
>>
>> Thanks for this tip. It was the golden one :). I just deleted all my VMs
>> again and started over with (again) CentOS 6.5 and 1 OSD disk per data
>> VM of 20 GB dynamical
Hi Wido,
As the backing disk is running a deep scrub, it's constantly 100% busy; no
errors though...
I'm running everything on XFS.
I had a similar feeling that it was the OSD slowing down those requests. Which
would be the affected pool? ".rgw"?
thanks,
On 6 August 2014 10:08, Wido den Hollander
On 08/06/2014 10:43 AM, Luis Periquito wrote:
Hi,
In the last few days I've had some issues with the radosgw in which all
requests would just stop being served.
After some investigation, I narrowed it down to a single slow OSD. I just
restarted that OSD and everything would go back to working. Every
Hi,
I did a test with 'rados -p ecdata bench 10 write' on an EC pool
with a replicated cache pool over it (Ceph 0.83).
The benchmark wrote about 12TB of data. After the 10-second run,
rados started to delete its benchmark files.
But only about 2.5TB got deleted, then rados returned. I
Hi,
In the last few days I've had some issues with the radosgw in which all
requests would just stop being served.
After some investigation, I narrowed it down to a single slow OSD. I just restarted
that OSD and everything would go back to working. Every single time there
was a deep scrub running on th
Hi,
1) I have flavors like 1 vCPU, 2GB memory, 20GB root disk. No swap + no
ephemeral disk. Then I just create an instance via Horizon, choosing an image +
a flavor.
2) OpenStack itself runs on Ubuntu 12.04.4 LTS; for the instances I have some
Ubuntu 12.04/14.04s, Debians and CentOS'.
3) In t
On Wed, 06 Aug 2014 09:18:13 +0200 Tijn Buijs wrote:
> Hello Pratik,
>
> Thanks for this tip. It was the golden one :). I just deleted all my VMs
> again and started over with (again) CentOS 6.5 and 1 OSD disk per data
> VM of 20 GB dynamically allocated. And this time everything worked
> corr
Hello Pratik,
Thanks for this tip. It was the golden one :). I just deleted all my VMs
again and started over with (again) CentOS 6.5 and 1 OSD disk per data
VM of 20 GB dynamically allocated. And this time everything worked
correctly like they mentioned in the documentation :). I went on my w