I have only 366 MB of metadata stored in an SSD pool, with 16 TB (10 million
objects) of filesystem data (HDD pools).
The active MDS is using 13 GB of memory.
Some stats from the active MDS server:
[@c01 ~]# ceph daemonperf mds.a
---mds --mds_cache--- --mds_log-- -mds_
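For context, a sketch of how to compare that 13 GB RSS with what the MDS itself reports (assuming the admin socket for mds.a is reachable on this host):
# configured cache target and current cache usage
ceph daemon mds.a config get mds_cache_memory_limit
ceph daemon mds.a cache status
# memory counters as seen by the daemon
ceph daemon mds.a perf dump mds_mem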
How did you retrieve which OSD number to restart?
Just for future reference, in case I run into a similar situation: if a
client hangs on an OSD node, can this be resolved by restarting
the OSD that it is reading from?
-----Original Message-----
From: Dan van der Ster [mailto:d...@vanderste
On the stuck client:
cat /sys/kernel/debug/ceph/*/osdc
REQUESTS 0 homeless 0
LINGER REQUESTS
BACKOFFS
REQUESTS 1 homeless 0
245540 osd100 1.9443e2a5 1.2a5 [100,1,75]/100 [100,1,75]/100 e74658 fsvolumens_393f2dcc-6b09-44d7-8d20-0e84b072ed26/2000b2f5905.0001 0x400024 1 write
LINGER REQUESTS
B
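The stuck request above is waiting on osd100 (the acting primary in [100,1,75]/100). A sketch of turning that into a daemon restart, assuming systemd-managed OSDs:
# list in-flight requests on the stuck client; the osdNNN column is the primary serving each request
cat /sys/kernel/debug/ceph/*/osdc
# find the host that osd.100 runs on
ceph osd find 100
# on that host, restart just that daemon
systemctl restart ceph-osd@100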
Hi,
we need to migrate a ceph pool used for gnocchi to another cluster in
another datacenter. Gnocchi uses the python rados or cradox module to
access the Ceph cluster. The pool is dedicated to gnocchi only. The
source pool is backed by HDD OSDs while the target pool is SSD-only. As
there are
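One way this is sometimes approached (a sketch only; the pool name gnocchi is an assumption, and rados export/import behaviour should be verified on the releases involved):
# on a host with access to the source cluster
rados -p gnocchi export /tmp/gnocchi.dump
# transfer the dump, then on a host with access to the target cluster
rados -p gnocchi import /tmp/gnocchi.dump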
Hello,
I am aware that when compression is enabled in BlueStore, it will only
compress new data.
However, if I had compression enabled for a period of time, is it then
possible to disable compression so that data that was already compressed
continues to be decompressed on read as normal, but any new data is not
Hi,
The ceph-volume@.service units on an Ubuntu 18.04.2 system
run indefinitely and do not finish.
Only after we create this override config does the system boot again:
# /etc/systemd/system/ceph-volume@.service.d/override.conf
[Unit]
After=network-online.target local-fs.target time-sync.target ceph-mo
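For anyone reproducing this, a sketch of how such a drop-in is created and picked up (the exact After= targets are whatever your override contains):
systemctl edit ceph-volume@.service   # creates /etc/systemd/system/ceph-volume@.service.d/override.conf
systemctl daemon-reload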
Hi Ashley,
The general rule is that the compression switch does not affect existing data
but controls how future write requests are processed.
You can enable/disable compression at any time.
Once disabled, no more compression happens, and data that has already been
compressed remains in that state until removal
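For anyone looking for the knobs involved, a sketch of toggling this per pool (the pool name mypool is a placeholder):
# enable compression on a pool
ceph osd pool set mypool compression_mode aggressive
ceph osd pool set mypool compression_algorithm snappy
# disable it again; already-compressed objects stay compressed until rewritten or removed
ceph osd pool set mypool compression_mode none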
On Thu, May 2, 2019 at 5:27 AM Robert Sander
wrote:
>
> Hi,
>
> The ceph-volume@.service units on an Ubuntu 18.04.2 system
> run indefinitely and do not finish.
>
> Only after we create this override config does the system boot again:
>
> # /etc/systemd/system/ceph-volume@.service.d/override.conf
> [Unit
Hi,
On 02.05.19 13:40, Alfredo Deza wrote:
> Can you give a bit more details on the environment? How dense is the
> server? That the unit retries is fine; I was hoping at some point it
> would see things ready and start activating (it does retry
> indefinitely at the moment).
It is a machine wi
On Thu, May 2, 2019 at 8:28 AM Robert Sander
wrote:
>
> Hi,
>
> On 02.05.19 13:40, Alfredo Deza wrote:
>
> > Can you give a bit more details on the environment? How dense is the
> > server? That the unit retries is fine; I was hoping at some point it
> > would see things ready and start activatin
Based on past experience with this issue in other projects, I would
propose this:
1. By default (rgw frontends=beast), we should bind to both IPv4 and
IPv6, if available.
2. Just specifying port (rgw frontends=beast port=8000) should apply to
both IPv4 and IPv6, if available.
3. If the use
After discussing with Casey, I'd like to propose some clarifications to
this.
First, we treat EAFNOSUPPORT as a non-fatal error: any other
bind error is fatal, but for that one we warn and continue.
Second, we treat "port=<port>" as expanding to "endpoint=0.0.0.0:<port>,
endpoint=[::]:<port>".
Then, w
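In ceph.conf terms that expansion would roughly correspond to spelling the endpoints out by hand (a sketch; the section name and port 8000 are placeholders):
[client.rgw.gateway1]
rgw frontends = beast endpoint=0.0.0.0:8000 endpoint=[::]:8000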
Daniel Gryniewicz writes:
> After discussing with Casey, I'd like to propose some clarifications to
> this.
>
> First, we treat EAFNOSUPPORT as a non-fatal error: any other
> bind error is fatal, but for that one we warn and continue.
>
> Second, we treat "port=" as expanding to "endpoin
On Mon, 29 Apr 2019, Alexander Y. Fomichev wrote:
> Hi,
>
> I just upgraded from mimic to nautilus(14.2.0) and stumbled upon a strange
> "feature".
> I tried to increase pg_num for a pool. There was no errors but also no
> visible effect:
>
> # ceph osd pool get foo_pool01 pg_num
> pg_num: 256
>
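In nautilus, pg_num changes are applied gradually by the mgr, so the value reported by pool get can lag the requested target. A sketch of what is worth checking after a mimic upgrade (pool name as above):
ceph osd pool get foo_pool01 pg_num
ceph osd pool ls detail | grep foo_pool01   # shows pg_num_target / pgp_num_target
ceph osd dump | grep require_osd_release    # pg_num changes may be held back until this reports nautilus
ceph osd require-osd-release nautilus       # only once every OSD in the cluster runs nautilus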
Just to follow up on this:
I ended up enabling the balancer module in upmap mode.
This did resolve the short-term issue and evened things out a
bit... but things are still far from uniform.
It seems like the balancer is an ongoing process that continues
to run over time... so maybe
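For reference, the sequence to turn it on looks roughly like this (upmap additionally requires all clients to be luminous or newer):
ceph osd set-require-min-compat-client luminous
ceph mgr module enable balancer
ceph balancer mode upmap
ceph balancer on
ceph balancer status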
On Thu, 2 May 2019 at 05:02, Mark Nelson wrote:
[...]
> FWIW, if you still have an OSD up with tcmalloc, it's probably worth
> looking at the heap stats to see how much memory tcmalloc thinks it's
> allocated vs how much RSS memory is being used by the process. It's
> quite possible that there is
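A sketch of pulling those numbers from a live OSD (osd.0 is a placeholder):
ceph tell osd.0 heap stats        # tcmalloc's own view of allocated vs. reserved memory
ceph daemon osd.0 dump_mempools   # Ceph's internal mempool accounting, for comparison
ceph tell osd.0 heap release      # ask tcmalloc to return freed pages to the OS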
Thanks so much for your help!
On Mon, Apr 29, 2019 at 6:49 PM Gregory Farnum wrote:
> Yes, check out the file layout options:
> http://docs.ceph.com/docs/master/cephfs/file-layouts/
>
> On Mon, Apr 29, 2019 at 3:32 PM Daniel Williams
> wrote:
> >
> > Is the 4MB configurable?
> >
> > On Mon, Apr
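For anyone following that link, a sketch of what adjusting the layout looks like in practice (path and object size are placeholders; the layout only applies to files created after it is set):
setfattr -n ceph.dir.layout.object_size -v 8388608 /mnt/cephfs/mydir
getfattr -n ceph.dir.layout /mnt/cephfs/mydir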
Hello
I am trying to figure out a way to restrict access to S3 buckets. Is it
possible to create a RadosGW user that can only access specific bucket(s)?
Thanks,
Vlad
Hi Vlad,
If a user creates a bucket then only that user can see the bucket
unless an S3 ACL is applied giving additional permissions, but I'd
guess you are asking a more complex question than that.
If you are looking to apply some kind of policy overriding whatever
ACL a user might apply to a
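As one building block, a minimal sketch of a bucket policy granting a single RGW user access to one bucket (the user id and bucket name are placeholders; it can be attached with s3cmd setpolicy or an SDK):
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": ["arn:aws:iam:::user/vlad"]},
    "Action": ["s3:ListBucket", "s3:GetObject"],
    "Resource": ["arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*"]
  }]
}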
On 5/2/19 11:46 AM, Igor Podlesny wrote:
On Thu, 2 May 2019 at 05:02, Mark Nelson wrote:
[...]
FWIW, if you still have an OSD up with tcmalloc, it's probably worth
looking at the heap stats to see how much memory tcmalloc thinks it's
allocated vs how much RSS memory is being used by the proce
On Fri, 3 May 2019 at 01:29, Mark Nelson wrote:
> On 5/2/19 11:46 AM, Igor Podlesny wrote:
> > On Thu, 2 May 2019 at 05:02, Mark Nelson wrote:
> > [...]
> >> FWIW, if you still have an OSD up with tcmalloc, it's probably worth
> >> looking at the heap stats to see how much memory tcmalloc thinks
Hello,
I'm trying to write a tool to index all keys in all buckets stored in radosgw.
I've created a user with the following caps:
"caps": [
{
"type": "buckets",
"perm": "read"
},
{
"type": "metadata",
"perm": "read"
On 5/2/19 1:51 PM, Igor Podlesny wrote:
On Fri, 3 May 2019 at 01:29, Mark Nelson wrote:
On 5/2/19 11:46 AM, Igor Podlesny wrote:
On Thu, 2 May 2019 at 05:02, Mark Nelson wrote:
[...]
FWIW, if you still have an OSD up with tcmalloc, it's probably worth
looking at the heap stats to see how mu
On Fri, 3 May 2019 at 05:12, Mark Nelson wrote:
[...]
> > -- https://www.kernel.org/doc/Documentation/vm/transhuge.txt
>
> Why are you quoting the description for the madvise setting when that's
> clearly not what was set in the case I just showed you?
Similarly why(?) are you telling us it must
hi,
I never noticed the Debian /etc/default/ceph :-)
=
# Increase tcmalloc cache size
TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728
That is what is active now.
Huge pages:
# cat /sys/kernel/mm/transparent_hugepage/enabled
always [madvise] never
# dpkg -S /usr/lib/x8
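If the goal is to rule transparent hugepages out, a sketch of switching the mode at runtime (this only affects mappings made after the change):
echo never > /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/enabled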