Why is this info not available???
https://access.redhat.com/solutions/4009431
This normally means you have some form of partition data on the RBD disk.
If you use -vvv on the pv command it should show you the reason. And yes,
Red Hat solutions require an active support subscription.
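Not from the original thread, just a sketch of the usual way to check this (the device path is an example):

pvcreate -vvv /dev/rbd0      # verbose output shows why LVM rejects the device
blkid /dev/rbd0              # shows any existing filesystem/partition signature
wipefs /dev/rbd0             # lists signatures without erasing them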
On Mon, 27 Apr 2020 17:43:02 +0800 Marc Roos wrote:
Why is this info not available???
It is a new image. -vvv says "Unrecognised LVM device type 252". Could
this be related to rbd features not being enabled/disabled?
-Original Message-
Cc: ceph-users
Subject: Re: [ceph-users] Device /dev/rbd0 excluded by a filter.
This normally means you have some form of partition data on the RBD disk.
Hello Sailaja,
Do you still have this problem?
Have you checked the CRUSH rule for your pools to see if the data
distribution rule is met?
Regards, Joachim
___
Clyso GmbH
Homepage: https://www.clyso.com
On 24.04.2020 at 16:02, Sailaja Yedugundla wrote:
I am
A quick Google search shows this:
You need to change your /etc/lvm/lvm.conf device name filter to include it. It may
be that your LVM filter is not allowing rbdX-type disks to be used.
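A hedged example of what such a filter in /etc/lvm/lvm.conf could look like (the exact pattern is an assumption, adjust to your devices):

devices {
    # accept rbd devices explicitly, then everything else
    filter = [ "a|^/dev/rbd.*|", "a|.*/|" ]
}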
On Mon, 27 Apr 2020 17:49:53 +0800 Marc Roos wrote:
It is a new image. -vvv says "Unrecognised LVM device type 252"
I think it is. I already changed the lvm.conf with:
preferred_names=["^/dev/mpath/","^/dev/mapper/mpath","^/dev/[hs]d","^/dev/rbd"]
But this is still commented out:
# filter = [ "a|.*/|" ]
-Original Message-
Cc: ceph-users
Subject: RE: [ceph-users] Device /dev/rbd0 excluded by a filter.
Had to add this to lvm.conf:
types = [ "rbd", 252 ]
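For context, a sketch of where that line sits in /etc/lvm/lvm.conf (per the comments in lvm.conf, each pair is a block device type name from /proc/devices plus a maximum partition count):

devices {
    # allow LVM to consider rbd block devices
    types = [ "rbd", 252 ]
}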
-Original Message-
Cc: ceph-users
Subject: [ceph-users] Re: Device /dev/rbd0 excluded by a filter.
I think it is. I already changed the lvm.conf with:
preferred_names=["^/dev/mpath/","^/dev/mapper/mpath","^/dev/[hs]d","^/dev/rbd"]
But t
Hi Joachim,
Thanks for your response. I am very new to Ceph. I am not sure about the
CRUSH rule. I just followed the cephadm deployment instructions and did not
make any changes. It looks like the radosgw service is not running properly.
When I run the command,
radosgw-admin zone get --rgw-z
I guess this is not good for an SSD (Samsung SM863)? Or do I need to divide
14.8 by 40?
rbd perf image iostat
NAME              WR    RD   WR_BYTES   RD_BYTES  WR_LAT    RD_LAT
rbd.ssd/vps-test  40/s  0/s  5.0 MiB/s  0 B/s     14.84 ms  0.00 ns
On Mon, Apr 27, 2020 at 7:38 AM Marc Roos wrote:
>
> I guess this is not good for an SSD (Samsung SM863)? Or do I need to divide
> 14.8 by 40?
>
The 14.8 ms number is the average latency coming from the OSDs, so no need
to divide the number by anything. What is the size of your writes? At 40
writes
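(Rough arithmetic, not from the original thread: 5.0 MiB/s divided by 40 writes/s is about 128 KiB per write, and the 14.84 ms in the WR_LAT column is already an average per write operation, so there is nothing to divide by 40.)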
Just left a comment at https://tracker.ceph.com/issues/44509
Generally, bdev-new-db performs no migration; RocksDB might eventually do
that, but there is no guarantee it moves everything.
One should use bluefs-bdev-migrate to do actual migration.
And I think that's the root cause for the above ticket.
No, I don't think so. But you can try again after applying
bluefs-bdev-migrate
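For reference, a hedged sketch of how that migration is usually invoked with ceph-bluestore-tool (OSD id and paths are placeholders; stop the OSD first and check the documentation for your release):

systemctl stop ceph-osd@0
# move BlueFS/RocksDB data that spilled onto the slow device over to the DB device
ceph-bluestore-tool bluefs-bdev-migrate \
    --path /var/lib/ceph/osd/ceph-0 \
    --devs-source /var/lib/ceph/osd/ceph-0/block \
    --dev-target /var/lib/ceph/osd/ceph-0/block.db
systemctl start ceph-osd@0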
On 4/24/2020 9:13 PM, Stefan Priebe - Profihost AG wrote:
Hi Igor,
could it be due to the fact that there are those 64 KB of spilled-over
metadata I can't get rid of?
Stefan
On 24.04.20 at 13:08, Igor Fedotov wrote:
Hi Stefan,
So,
It looks like my problem got resolved by itself, of course right after I sent
the email to this group that there was a problem.
However, I did notice the following, which coincides with your observations:
I have the PG autoscaler on, but currently there isn’t too much (write)
activity in th
RBD is never a workable solution unless you want to pay the cost of
double-replication in both HDFS and Ceph.
I think the right approach is thinking about other implementations of the
FileSystem interface, like s3a and localfs.
s3a is straightforward: Ceph RGW provides an S3 interface, and s3a is stab
> The local filesystem approach is a bit tricky. We just tried a POC mounting
> CephFS into every Hadoop node and configuring Hadoop to use LocalFS with
> replica = 1, which ends up with each piece of data written only once into
> CephFS, and CephFS takes care of the data durability.
Can you tell a bit more about this?
we
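Regarding the s3a suggestion above: a minimal sketch of pointing Hadoop at a Ceph RGW endpoint (endpoint, bucket and credentials are placeholders, not from the thread):

# core-site.xml (or per-job -D options) would carry something like:
#   fs.s3a.endpoint          = http://rgw.example.com:7480
#   fs.s3a.path.style.access = true
#   fs.s3a.access.key / fs.s3a.secret.key = credentials of an RGW S3 user
hadoop fs -ls s3a://my-bucket/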
Hi;
I've built two iSCSI gateways for our (small) Ceph cluster. The cluster is a
Nautilus installation, 4 nodes with 9x4 TB each, and it's working fine. We
mainly use it via the S3 object storage interface, but I've also deployed
some RBD block devices and a CephFS filesystem.
Now I'm trying to conn
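Not from the thread, just a sketch of the usual open-iscsi initiator side for connecting to such gateways (IP and IQN are placeholders; the target/disk/client setup itself is done on the gateways, e.g. with gwcli):

iscsiadm -m discovery -t sendtargets -p 192.168.1.10                # discover targets on one gateway
iscsiadm -m node -T iqn.2003-01.com.example.iscsi-gw:ceph-igw -l    # log in
# with both gateways logged in, multipath -ll should show a single multipath device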
On 4/27/20 10:43 AM, Simone Lazzaris wrote:
> Hi;
>
> I've built two iSCSI gateways for our (small) Ceph cluster. The cluster is a
> Nautilus installation, 4 nodes with 9x4 TB each, and it's working fine. We
> mainly use it via the S3 object storage interface, but I've also deployed
> some RBD bloc
I am trying to manually create a radosgw instance for a small development
installation. I was able to muddle through and get a working mon, mgr, and
osd (x2), but the docs for radosgw are based on ceph-deploy, which is not
part of the Octopus release.
The host systems are all lxc/lxd containers wit
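In case it helps, a hedged sketch of running radosgw by hand without ceph-deploy/cephadm (names, port and paths are assumptions, adjust to your cluster):

# create a key for the gateway daemon
ceph auth get-or-create client.rgw.gw1 mon 'allow rw' osd 'allow rwx' \
    -o /etc/ceph/ceph.client.rgw.gw1.keyring

# ceph.conf section for the daemon
[client.rgw.gw1]
    rgw_frontends = beast port=7480

# run it in the foreground to see what it complains about
radosgw -f --cluster ceph --name client.rgw.gw1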
So I'm still stuck on this bug, which is stopping me from adding any new service
(OSD / MON / MDS, etc.).
From my understanding, if I enable cephx I should be able to get around this,
but is there a particular way I should tell cephadm to set cephx enabled on
the cluster before I reboot a
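Not an answer to the cephadm-specific part, just a sketch of the classic way cephx is enabled cluster-wide (these auth options do exist; whether cephadm needs anything beyond this I can't say, and daemons/clients have to be restarted to pick it up):

ceph config set global auth_cluster_required cephx
ceph config set global auth_service_required cephx
ceph config set global auth_client_required cephx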
Hi all,
*** Short version ***
Is there a way to repair a rocksdb from errors "Encountered error while
reading data from compression dictionary block Corruption: block
checksum mismatch" and "_open_db erroring opening db" ?
*** Long version ***
We operate a nautilus ceph cluster (with 100 dis
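For the short version, the tools I would expect to be pointed at first, as a hedged sketch (the OSD must be stopped, the path is an example, and there is no guarantee either can fix a genuine checksum corruption):

ceph-bluestore-tool fsck --deep --path /var/lib/ceph/osd/ceph-12
ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-12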
On Thu, Apr 23, 2020 at 11:05 PM wrote:
>
> Hi
>
> We have a 3-year-old Hadoop cluster - up for refresh - so it is time
> to evaluate options. The "only" use case is running an HBase installation
> which is important for us, and migrating out of HBase would be a hassle.
>
> Our Ceph usage has expan
Hi all!
I exported an RBD image both with the export v2 format and without it.
The difference is that the image exported with the v2 format is much smaller
than the one exported without it.
Can anybody tell me why the size difference is so huge?
Thanks very much!
root@controller:/mnt#
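For reference, the two invocations being compared are presumably something like the following (pool, image and file names are placeholders; format 2 uses a different layout that can also carry snapshots, whether that explains the whole difference I can't say):

rbd export rbd/myimage image-v1.raw                      # default export format
rbd export --export-format 2 rbd/myimage image-v2.bin    # export v2 format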
Hi,
is there a way to have Ceph synchronize a specific bucket across the available
datacenters?
I've only found the multi-site setup, but that synchronizes the complete
cluster, which amounts to a failover solution. For me it's just one bucket.
Thank you
Hi all
I am afraid that there is even more trash available - running
rgw-orphan-list does not find everything. For example, I still have broken
multiparts: when I do s3cmd multipart I get a list of
"pending/interrupted multiparts". When I try to cancel such a multipart
I get a 404.
Does anyone have a meth
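For what it's worth, the s3cmd side of that usually looks like the sketch below (bucket, object key and upload id are placeholders); whether the 404 means the multipart metadata is already gone on the RGW side I can't say:

s3cmd multipart s3://mybucket                            # list pending/interrupted uploads
s3cmd abortmp s3://mybucket/path/to/object UPLOAD_ID     # abort one using key + upload id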