te an osd.X number??
The "ceph osd create" command could be extended to accept an OSD ID as a
second optional argument (the first is already used for the uuid).
ceph osd create
The command would succeed only if the ID were not in use.
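To illustrate the proposed rule, here is a minimal sketch (Python, purely
illustrative; the function and the in-memory id set are hypothetical, not
the actual monitor code):

```python
def create_osd(in_use, requested_id=None):
    """Allocate an OSD id. With an explicit id, fail if it is already
    taken (the 'succeed only if the ID is not in use' rule); otherwise
    pick the lowest free id, as `ceph osd create` does today."""
    if requested_id is not None:
        if requested_id in in_use:
            raise ValueError("EEXIST: osd.%d already exists" % requested_id)
        in_use.add(requested_id)
        return requested_id
    osd_id = 0
    while osd_id in in_use:
        osd_id += 1
    in_use.add(osd_id)
    return osd_id
```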
Ron, would this work for you?
I have a patch as a
On Sun, Feb 15, 2015 at 06:24:45PM -0800, Sage Weil wrote:
> On Sun, 15 Feb 2015, Gregory Farnum wrote:
> > On Sun, Feb 15, 2015 at 5:39 PM, Sage Weil wrote:
> > > On Sun, 15 Feb 2015, Mykola Golub wrote:
> > >> https://github.com/trociny/ceph/compare/wip-osd
On Sun, Feb 15, 2015 at 5:39 PM, Sage Weil wrote:
> On Sun, 15 Feb 2015, Mykola Golub wrote:
>> The "ceph osd create" could be extended to have OSD ID as a second
>> optional argument (the first is already used for uuid).
>>
>> ceph osd create
>>
ceph-objectstore-tool, which adds a mark-complete operation,
as suggested by Sam in http://tracker.ceph.com/issues/10098
https://github.com/ceph/ceph/pull/5031
It has not been reviewed yet and is not well tested though, because I
don't know a simple way to get an incompl
osd",
"type_id": 0,
"status": "up",
"reweight": 1.00,
"primary_affinity": 0.75,
"crush_weight": 1.00,
"depth": 2},
{ "id": 2,
"n
Done. https://github.com/ceph/ceph/pull/3254
--
Mykola Golub
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
cation is defined in ceph.conf, in the
[client] section, by the "admin socket" parameter.
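For reference, such a setting might look like the following (the exact
socket path is illustrative, not a required value; $name and $pid are
expanded by ceph):

  [client]
      admin socket = /var/run/ceph/$name.$pid.asok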
--
Mykola Golub
on success
aio_write completion returned the number of bytes written. I might be
confused by this test that checks for r >= 0:
https://github.com/ceph/ceph/blob/master/src/test/librbd/test_librbd.cc#L1254
Now, looking at it again, it is certainly not true and my pat
`ceph_radostool` (people use such tools only when
facing extraordinary situations, so they are more careful and expect
limitations).
--
Mykola Golub
date immutable features
> >>>
> >>>I read in the guide that I should have set |rbd_default_features|
> >>>in the config.
> >>>
> >>>What can I do now to enable all the jewel features on all
> >>>images?
> >>>Can I enable all the jewel features, or is there any issue
> >>>with old kernels?
> >>>
> >>>
> >>>Thanks,
> >>>Max
> >>>
--
Mykola Golub
.conf to
avoid name collisions).
Do you see any other errors?
What is the output of `ps auxww | grep rbd-nbd`?
As a first step you could try to export the images to a file using `rbd
export`, see if it succeeds, and possibly investigate the content.
--
Mykola Golub
On Sun, Jun 25, 2017 at 11:28:37PM +0200, Massimiliano Cuttini wrote:
>
> Il 25/06/2017 21:52, Mykola Golub ha scritto:
> >On Sun, Jun 25, 2017 at 06:58:37PM +0200, Massimiliano Cuttini wrote:
> >>I can see the error even when I simply run list-mapped:
> >>
eed logs for that period to understand
what happened.
> >Don't you observe sporadic crashes/restarts of rbd-nbd processes? You
> >can associate an nbd device with its rbd-nbd process (and rbd volume)
> >by looking at /sys/block/nbd*/pid and ps output.
> I really don't kno
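The /sys/block lookup mentioned above can be scripted; a small sketch
(Python; the sysfs_root parameter exists only so the function can be
exercised outside a real system):

```python
import glob
import os

def nbd_pids(sysfs_root="/sys/block"):
    """Return {device: pid} for every nbd device that has a backing
    process (for rbd-nbd, the pid of the rbd-nbd daemon serving it)."""
    pids = {}
    for pid_file in glob.glob(os.path.join(sysfs_root, "nbd*", "pid")):
        dev = os.path.basename(os.path.dirname(pid_file))
        with open(pid_file) as f:
            pids[dev] = int(f.read().strip())
    return pids
```

Cross-reference the pids with ps output to see which rbd volume each
device serves.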
On Tue, Jun 27, 2017 at 07:17:22PM -0400, Daniel K wrote:
> rbd-nbd isn't good as it stops at 16 block devices (/dev/nbd0-15)
modprobe nbd nbds_max=1024
Or, if the nbd module is loaded by rbd-nbd itself, use its --nbds_max
command line option.
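To make the module parameter persist across reloads, a modprobe option
file can be used (the path and filename below are just a convention):

  # /etc/modprobe.d/nbd.conf
  options nbd nbds_max=1024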
--
Mykola Golub
e.g. with the
help of the lsof utility. Or add something like the line below

  log file = /tmp/ceph.$name.$pid.log

to ceph.conf before starting qemu, and look for /tmp/ceph.*.log
--
Mykola Golub
>
> [1] http://tracker.ceph.com/issues/36089
>
> --
> Jason
; std::less, std::allocator > const&,
> >> MapCacher::Transaction >> std::char_traits, std::allocator >,
> >> ceph::buffer::list>*)+0x8e9) [0x55eef3894fe9]
> >> 7: (get_attrs(ObjectStore*, coll_t, ghobject_t,
> >> ObjectS
https://github.com/torvalds/linux/commit/29eaadc0364943b6352e8994158febcb699c9f9b
--
Mykola Golub
and remount of the filesystem is required.
Does your rbd-nbd include this fix [1], targeted for v12.2.3?
[1] http://tracker.ceph.com/issues/22172
--
Mykola Golub
for cls errors? You will probably need to restart an osd to get some
fresh ones at osd startup.
--
Mykola Golub
ist returned by this
command:
ceph-conf --name osd.0 -D | grep osd_class_load_list
contains rbd.
--
Mykola Golub
rn code of 13 for check of
> host 'localhost' was out of bounds.
>
> --
Could you please post the full ceph-osd log somewhere?
/var/log/ceph/ceph-osd.0.log
> but hang at the command: "rbd create libvirt-pool/dimage --size 10240 "
So i
rn mentioned by others about ext4: it might want
to flush the journal if it is not clean, even when mounting ro. I
expect the mount will just fail in this case because the image is
mapped ro, but you might want to investigate how to improve this.
--
Mykola Golub
On Fri, Jan 18, 2019 at 11:06:54AM -0600, Mark Nelson wrote:
> IE even though you guys set bluestore_cache_size to 1GB, it is being
> overridden by bluestore_cache_size_ssd.
Isn't it vice versa [1]?
[1]
https://github.com/ceph/ceph/blob/luminous/src/os/bluestore/BlueStore.cc#L3976
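The precedence at the linked line boils down to something like the
following sketch (illustrative Python, not the actual BlueStore code):
an explicitly set bluestore_cache_size wins, and only when it is 0 does
BlueStore fall back to the hdd/ssd-specific value.

```python
def effective_cache_size(cache_size, cache_size_hdd, cache_size_ssd,
                         rotational):
    # A non-zero bluestore_cache_size overrides the per-media defaults.
    if cache_size:
        return cache_size
    # Otherwise pick the default matching the device type.
    return cache_size_hdd if rotational else cache_size_ssd
```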
the metadata?
Yes.
--
Mykola Golub
w "rbd migration prepare" after the source VM closes the
image, but before the destination VM opens it.
--
Mykola Golub
On Fri, Feb 22, 2019 at 02:43:36PM +0200, koukou73gr wrote:
> On 2019-02-20 17:38, Mykola Golub wrote:
>
> > Note, even if rbd supported live (without any downtime) migration, you
> > would still need to restart the client after the upgrade to a new
> > librb
not support the "journaling" feature, which is
necessary for mirroring. You can access such images only with librbd
(e.g. by mapping with the rbd-nbd driver, or via qemu).
--
Mykola Golub
to find all its logs? `lsof |grep 'rbd-mirror.*log'`
may be useful for this.
BTW, what rbd-mirror version are you running?
--
Mykola Golub
> For a 50 GB volume, the local image gets created, but it couldn't
> create a mirror image
"Connection timed out" errors suggest you have a connectivity issue
between the sites?
--
Mykola Golub