Hello everyone,
We have some OSDs in our Ceph cluster.
Some OSDs' usage is more than 77%, while another OSD's usage is 39% on the
same host.
I wonder why the OSDs' usage differs so much, and how can I fix it?
ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS TYPE NAME
-2
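Uneven PG placement is the usual cause of this. A sketch of two common
remedies, assuming a Luminous-or-later cluster (upmap mode additionally
needs all clients at Luminous or newer):

  # let the balancer even out PG counts per OSD
  ceph osd set-require-min-compat-client luminous
  ceph balancer mode upmap
  ceph balancer on

  # or as a one-shot: reweight OSDs above 120% of average utilization
  ceph osd reweight-by-utilization 120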
Hi
Have been thinking a bit about rocksdb and EC pools:
Since a RADOS object written to an EC(k+m) pool is split into several
smaller pieces, each OSD will receive many more, smaller objects than it
would receive in a replicated setup.
This must mean that the rocksdb will
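To make the shard math concrete: with EC(k=4, m=2) every RADOS object is
stored as six shards, each a quarter of the object's size, so each OSD holds
many more (and smaller) objects than it would for replicated data. A minimal
sketch of such a pool, with illustrative profile and pool names:

  # hypothetical names; 128 is an illustrative pg count
  ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
  ceph osd pool create ecpool 128 128 erasure ec42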
On 24/06/2019 11:25, jinguk.k...@ungleich.ch wrote:
Hello everyone,
We have some OSDs in our Ceph cluster.
Some OSDs' usage is more than 77%, while another OSD's usage is 39% on
the same host.
I wonder why the OSDs' usage differs so much, and how can I fix it?
ID CLASS WEIGHT REWEI
Hi Team,
We have 9 OSDs, and when we run ceph osd df it shows TOTAL SIZE: 31 TiB,
USE: 13 TiB, AVAIL: 18 TiB, %USE: 42.49. When checked on a client machine
it shows Size: 14T, USE: 6.5T, AVAIL: 6.6T; around 3 TB seems to be
missing. We are using replication size 2. Any on
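For what it's worth, those numbers appear consistent once the replication
factor is counted: with size 2, every byte of client data consumes two bytes
of raw capacity, so the 13 TiB of raw USE corresponds to 13 / 2 = 6.5 TiB of
client-visible data, exactly the USE the client reports, and the client-side
Size of ~14T is roughly the 31 TiB raw total divided by 2, less overhead.
Nothing is missing; ceph osd df reports raw capacity while the client sees
replicated capacity.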
Hi everyone,
We have used Ceph successfully here for several years now, and recently,
CephFS.
On the same CephFS server, I notice a big difference between a fuse mount
and a kernel mount (the kernel mount is 10 times faster). That makes sense
to me (an additional fuse library versus a direct ac
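For anyone reproducing the comparison, a minimal sketch of the two mount
types (monitor address, credentials, and mount points are placeholders):

  # kernel client
  mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs-kernel \
      -o name=admin,secretfile=/etc/ceph/admin.secret

  # FUSE client
  ceph-fuse -n client.admin /mnt/cephfs-fuse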
I have used the gentle reweight script many times in the past. But more
recently, I expanded one cluster from 334 to 1114 OSDs by just changing
the crush weight, 100 OSDs at a time. Once all PGs from those 100 were
stable and backfilling, I added another hundred. I stopped at 500 and let
the backfill f
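As a sketch of that staged approach, assuming the new OSDs were created
with crush weight 0 and a target weight of 7.3 (IDs and weight are
illustrative):

  # bring the next batch of 100 OSDs up to full crush weight
  for id in $(seq 334 433); do
      ceph osd crush reweight osd.$id 7.3
  done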
Hi all,
Some bluestore OSDs in our Luminous test cluster have started becoming
unresponsive and booting very slowly.
These OSDs have been used for stress testing for hardware destined for our
production cluster, so have had a number of pools on them with many, many
objects in the past. All
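A manual rocksdb compaction often helps OSDs in this state, since slow
boots after massive deletions tend to come from iterating over tombstones;
a sketch, with OSD id and data path as placeholders:

  # online, via the admin socket
  ceph daemon osd.12 compact

  # or offline, with the OSD stopped
  ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-12 compact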
On Mon, Jun 24, 2019 at 9:06 AM Thomas Byrne - UKRI STFC wrote:
> Hi all,
> Some bluestore OSDs in our Luminous test cluster have started becoming
> unresponsive and booting very slowly.
> These OSDs have been used for stress testing for hardware destined for our
> production clust
On Sun, Jun 23, 2019 at 4:27 PM Alex Litvak wrote:
> Hello everyone,
> I encounter this with a nautilus client but not with mimic. Removing the
> admin socket entry from the config on the client makes no difference.
> Error:
> rbd ls -p one
> 2019-06-23 12:58:29.344 7ff2710b0700 -1 set_mon_vals failed to
On Mon, 2019-06-24 at 15:51 +0200, Hervé Ballans wrote:
> Hi everyone,
> We have used Ceph successfully here for several years now, and recently,
> CephFS.
> On the same CephFS server, I notice a big difference between a fuse mount
> and a kernel mount (the kernel mount is 10 times faster).
Hi, Konstantin.
Thanks for the reply.
I know about stale instances and that they remain from a prior version.
I am asking about the bucket's "marker". I have a bucket "clx" and I can see
its current marker in the stale-instances list.
As I understand it, the stale-instances list should contain only previous
marker IDs. Fro
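For reference, the commands involved (bucket name taken from the message; my
understanding is that stale-instances rm should not be run on multisite
setups):

  # stale instance ids left behind by resharding
  radosgw-admin reshard stale-instances list

  # the live marker/bucket_id for the bucket
  radosgw-admin metadata get bucket:clx

  # clean up the stale instances
  radosgw-admin reshard stale-instances rm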
It's aborting incomplete multipart uploads that were left around. First it
will clean up cruft like that, and then it should start actually deleting
the objects visible in stats. That's my understanding of it, anyway. I'm in
the middle of cleaning up some buckets right now doing this same thing.
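To see or abort those incomplete uploads yourself, the S3 API exposes them;
a sketch with the AWS CLI (bucket name and endpoint are placeholders):

  # list incomplete multipart uploads
  aws s3api list-multipart-uploads --bucket mybucket \
      --endpoint-url http://rgw.example.com

  # abort one of them
  aws s3api abort-multipart-upload --bucket mybucket \
      --key bigfile --upload-id <UploadId> \
      --endpoint-url http://rgw.example.com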
Jason,
Here you go:
WHO     MASK  LEVEL     OPTION           VALUE                          RO
client        advanced  admin_socket     /var/run/ceph/$name.$pid.asok  *
global        advanced  cluster_network  10.0.42.0/23                   *
global        advanced  debug_asok
On Mon, Jun 24, 2019 at 2:05 PM Alex Litvak wrote:
> Jason,
> Here you go:
> WHO     MASK  LEVEL     OPTION        VALUE                          RO
> client        advanced  admin_socket  /var/run/ceph/$name.$pid.asok  *
This is the offending config option that is
Thank you David!
I will give it a whirl and see if running it long enough will do it.
On Mon, Jun 24, 2019 at 12:49 PM David Turner wrote:
>
> It's aborting incomplete multipart uploads that were left around. First it
> will clean up the cruft like that and then it should start actually deleti
Hello!
I set up a lab with 2 separate clusters, each one with one zone. The tests
went fine: if I put a file in a bucket in one zone, I could see it in the
other.
My question is whether it's possible to have more control over this sync. I
want every sync to be disabled by default, but if it's desire
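Per-bucket control does exist: sync can be toggled per bucket, so one
approach (a sketch; bucket name illustrative) is to disable it where it is
not wanted and enable it selectively:

  radosgw-admin bucket sync disable --bucket=mybucket
  radosgw-admin bucket sync enable --bucket=mybucket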
Is it safe to have the RBD cache enabled on all the gateways in the latest
ceph 14.2+ and ceph-iscsi 3.0 setup? Assuming clients are using multipath as
outlined here: http://docs.ceph.com/docs/nautilus/rbd/iscsi-initiators/ Thanks.
Respectfully,
Wes Dillingham
wdilling...@godaddy.com
Site Reliabili
No.
tcmu-runner disables the cache automatically, overriding your ceph.conf
setting.
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
On Mon, Jun 24, 2019 at 9:43 PM W
Jason,
What are you suggesting we do? Remove this line from the config database
and keep it in the config files instead?
On 6/24/2019 1:12 PM, Jason Dillaman wrote:
On Mon, Jun 24, 2019 at 2:05 PM Alex Litvak wrote:
Jason,
Here you go:
WHO  MASK  LEVEL  OPTION  VALU
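If the idea is to drop the option from the mon config database and carry it
in the local ceph.conf instead, a minimal sketch:

  # remove the offending option from the config database
  ceph config rm client admin_socket

  # and keep it in /etc/ceph/ceph.conf on the client:
  # [client]
  #     admin socket = /var/run/ceph/$name.$pid.asok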
On Mon, Jun 24, 2019 at 4:05 PM Paul Emmerich wrote:
> No.
> tcmu-runner disables the cache automatically, overriding your ceph.conf
> setting.
Correct. For safety purposes, we don't want to support a writeback
cache when failover between different gateways is possible.
> Paul
Why did you select these specific sizes? Are there any tests/research
on it?
Best Regards,
Rafał Wądołowski
On 24.06.2019 13:05, Konstantin Shalygin wrote:
>> Hi
>> Have been thinking a bit about rocksdb and EC pools:
>> Since a RADOS object written to an EC(k+m) pool is split into seve
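If the sizes being asked about are the commonly quoted ~3/30/300 GB DB
volumes, the arithmetic behind them (assuming the default bluestore rocksdb
options) is roughly:

  max_bytes_for_level_base       = 256 MB   # size of L1
  max_bytes_for_level_multiplier = 10       # L2 = 2.56 GB, L3 = 25.6 GB

A DB partition only avoids spilling a level to slow storage when it can hold
that level in full, so the next useful size after ~3 GB (L1+L2) is
256 MB + 2.56 GB + 25.6 GB ≈ 28.4 GB, i.e. ~30 GB (L1+L2+L3).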
Thanks for the reply.
By the way, one of my customers wants to get the objects based on the
last-modified-date field. How can we achieve this?
On Thu, Jun 13, 2019 at 7:09 PM Paul Emmerich
wrote:
> There's no (useful) internal ordering of these entries, so there isn't a
> more efficient way than getting eve
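Given that, the practical route is a full listing sorted client-side, for
example with the AWS CLI against RGW (bucket and endpoint are placeholders):

  aws s3api list-objects-v2 --bucket mybucket \
      --endpoint-url http://rgw.example.com \
      --query 'sort_by(Contents, &LastModified)[].[LastModified, Key]' \
      --output text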
Hi
You could look into the radosgw elasticsearch sync module, and use that
to find the objects' last-modified times.
http://docs.ceph.com/docs/master/radosgw/elastic-sync-module/
/Torben
On 25.06.2019 08:19, M Ranga Swami Reddy wrote:
Thanks for the reply.
By the way, one of my customers wants to get the obj
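As a sketch of wiring that up (zone name, endpoint, and tier-config values
are illustrative; the full multisite setup is on the docs page above):

  radosgw-admin zone modify --rgw-zone=es-zone \
      --tier-type=elasticsearch \
      --tier-config=endpoint=http://localhost:9200,num_shards=10,num_replicas=1
  radosgw-admin period update --commit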