Hello,
On Mon, Dec 02, 2019 at 08:17:49AM +0100, GBS Servers wrote:
> Hi, I have a problem creating a new OSD:
>
> stdin: ceph --cluster ceph --name client.bootstrap-osd --keyring
> /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new
> b9e52bda-7f05-44e0-a69b-1d47755343cf
> Dec 2 08:09:03 ser
Hi Lars,
I've also seen interim space usage bursts during my experiments, up to 2x
the max level size when the topmost RocksDB level is L3 (i.e. 25 GB
max). So I think 2x (which results in 60-64 GB for the DB) is a good rule
of thumb when your DB is expected to be small or medium sized. Not sure this
mu
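For reference, a rough calculation assuming the default RocksDB level
settings (max_bytes_for_level_base = 256 MB, level multiplier 10, i.e. not
tuned): the levels work out to roughly L1 = 256 MB, L2 = 2.5 GB and
L3 = 25 GB, so about 28 GB of data once L3 is the topmost level. Allowing
roughly 2x that for compaction headroom is where the 60-64 GB figure comes
from.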
How do I check?
Thanks.
Mon, 2 Dec 2019 at 10:38 Alwin Antreich wrote:
> Hello,
>
> On Mon, Dec 02, 2019 at 08:17:49AM +0100, GBS Servers wrote:
> > Hi, I have a problem creating a new OSD:
> >
>
> > stdin: ceph --cluster ceph --name client.bootstrap-osd --keyring
> > /var/lib/ceph/bootstra
On Mon, Dec 02, 2019 at 11:57:34AM +0100, GBS Servers wrote:
> How do I check?
>
> Thanks.
>
> Mon, 2 Dec 2019 at 10:38 Alwin Antreich wrote:
>
> > Hello,
> >
> > On Mon, Dec 02, 2019 at 08:17:49AM +0100, GBS Servers wrote:
> > > Hi, I have a problem creating a new OSD:
> > >
> >
> > > stdi
On 19/11/2019 22:42, Florian Haas wrote:
> On 19/11/2019 22:34, Jason Dillaman wrote:
>>> Oh totally, I wasn't arguing it was a bad idea for it to do what it
>>> does! I just got confused by the fact that our mon logs showed what
>>> looked like a (failed) attempt to blacklist an entire client IP a
Hi. I have a problem with ceph-fuse in LXC containers:
[root@centos01 ceph]# ceph-fuse -m 192.168.1.101:6789 /mnt/cephfs/
2019-12-02 18:00:09.923237 7f4890f17f00 -1 init, newargv = 0x55fabbfa49c0
newargc=11
ceph-fuse[1623]: starting ceph client
ceph-fuse[1623]: ceph mount failed with (22) Invalid
Hi Team,
We would like to create multiple snapshots inside a Ceph cluster,
initiating the requests from a librados client, and came across this rados API:
rados_ioctx_selfmanaged_snap_set_write_ctx
Can someone give us sample code on how to use this API?
Thanks,
Muthu
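A minimal sketch of how the self-managed snapshot calls fit together in the
librados C API. The pool name "testpool" and object name "obj" are
placeholders and error handling is trimmed; adjust for your cluster. Build
with: cc snap_demo.c -lrados

/* Minimal sketch: self-managed snapshots with the librados C API. */
#include <rados/librados.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t io;
    rados_snap_t snaps[1];
    int ret;

    rados_create(&cluster, NULL);          /* NULL id -> client.admin */
    rados_conf_read_file(cluster, NULL);   /* default ceph.conf locations */
    ret = rados_connect(cluster);
    if (ret < 0) { fprintf(stderr, "connect: %d\n", ret); return 1; }

    ret = rados_ioctx_create(cluster, "testpool", &io);
    if (ret < 0) { fprintf(stderr, "ioctx: %d\n", ret); rados_shutdown(cluster); return 1; }

    /* Some initial object data, written before the snapshot is taken. */
    rados_write_full(io, "obj", "old data", strlen("old data"));

    /* Allocate a new self-managed snapshot id from the pool. */
    ret = rados_ioctx_selfmanaged_snap_create(io, &snaps[0]);
    if (ret < 0) { fprintf(stderr, "snap_create: %d\n", ret); }

    /*
     * Set the snapshot context used for writes through this ioctx:
     * 'seq' is the newest snap id, and the array lists the existing
     * snap ids in descending order (only one here).
     */
    ret = rados_ioctx_selfmanaged_snap_set_write_ctx(io, snaps[0], snaps, 1);
    if (ret < 0) { fprintf(stderr, "set_write_ctx: %d\n", ret); }

    /* Writes now preserve the pre-snapshot object data on the OSDs. */
    rados_write_full(io, "obj", "new data", strlen("new data"));

    /* Roll the object back to the snapshot, then delete the snapshot. */
    rados_ioctx_selfmanaged_snap_rollback(io, "obj", snaps[0]);
    rados_ioctx_selfmanaged_snap_remove(io, snaps[0]);

    rados_ioctx_destroy(io);
    rados_shutdown(cluster);
    return 0;
}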
OK, I have a new problem in an LXC container:
[root@centos02 ~]# ceph-fuse -m 192.168.1.101:6789 /mnt/cephfs/
2019-12-02 19:20:14.831971 7f5906ac9f00 -1 init, newargv = 0x5594fc9fca80
newargc=11
ceph-fuse[1148]: starting ceph client
fuse: device not found, try 'modprobe fuse' first
ceph-fuse[1148]:
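"fuse: device not found" inside a container usually means /dev/fuse is not
visible to it: the fuse module has to be loaded on the host (the container
cannot modprobe it) and the device node has to be passed through. A sketch
for a plain LXC container config (cgroup v1 keys; adjust for your LXC
version):

# on the host
modprobe fuse

# in the container config, e.g. /var/lib/lxc/<name>/config
lxc.cgroup.devices.allow = c 10:229 rwm
lxc.mount.entry = /dev/fuse dev/fuse none bind,create=file 0 0

Restart the container and retry ceph-fuse. If this is a Proxmox-managed
container, enabling the container's fuse feature should achieve the same.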