There are a couple of things I would look into:
* Any packet loss whatsoever – especially on your cluster / private
replication network
* Test against an R3 (3-way replicated) pool to see whether EC on RBD with overwrites is the culprit
* Check which processes are in the “R” state during periods of high iowait (see the sketch below)
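For that last check, here is a minimal Python sketch (my own illustration, not part of the original advice) that snapshots "R"-state processes by reading the standard /proc/<pid>/stat format, where the field after the command name is the process state:

import os

def running_processes():
    # PIDs show up in /proc as purely numeric directory names
    for pid in filter(str.isdigit, os.listdir('/proc')):
        try:
            with open('/proc/%s/stat' % pid) as f:
                # stat format: pid (comm) state ...; comm may contain
                # spaces, so split on the last closing parenthesis first
                comm, rest = f.read().rsplit(')', 1)
                state = rest.split()[0]
        except (IOError, OSError):
            continue  # process exited while we were scanning
        if state == 'R':
            yield pid, comm.split('(', 1)[1]

for pid, comm in running_processes():
    print(pid, comm)

Run it a few times while iowait is high; processes that repeatedly show up in "R" are a useful hint.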
This stuff is logged under the 'civetweb' subsystem, so it can be turned
off with 'debug_civetweb = 0'. 'debug_rgw' can be configured separately.
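In ceph.conf that could look like the following (the section name is just an example for one RGW instance; the 0/0 and 1/5 values are the usual log-level/memory-level pairs):

[client.rgw.gateway1]
debug_civetweb = 0/0
debug_rgw = 1/5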
On 11/28/18 1:03 AM, zyn赵亚楠 wrote:
Hi there,
I have a question about rgw/civetweb log settings.
Currently, rgw/civetweb prints 3 lines of logs with log level 1 for each request.
On 14/08/2018 15:57, Emmanuel Lacour wrote:
> On 13/08/2018 at 16:58, Jason Dillaman wrote:
>>
>> See [1] for ways to tweak the bluestore cache sizes. I believe that by
>> default, bluestore will not cache any data but instead will only
>> attempt to cache its key/value store and metadata.
>
> I
On 11/28/18 8:36 AM, Florian Haas wrote:
> On 14/08/2018 15:57, Emmanuel Lacour wrote:
>> On 13/08/2018 at 16:58, Jason Dillaman wrote:
>>> See [1] for ways to tweak the bluestore cache sizes. I believe that by
>>> default, bluestore will not cache any data but instead will only
>>> attempt to cache its key/value store and metadata.
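As a concrete illustration of the knobs referenced in [1] (the values below are placeholders, not recommendations):

[osd]
bluestore_cache_size_hdd = 1073741824   # 1 GiB total cache per HDD OSD
bluestore_cache_size_ssd = 3221225472   # 3 GiB total cache per SSD OSD
bluestore_cache_kv_ratio = 0.5          # share given to the key/value store
bluestore_cache_meta_ratio = 0.5        # share given to onode metadata

With the kv and meta ratios summing to 1.0, nothing is left over for caching object data, which matches the default behaviour described above.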
On 28/11/2018 15:52, Mark Nelson wrote:
>> Shifting over a discussion from IRC and taking the liberty to resurrect
>> an old thread, as I just ran into the same (?) issue. I see
>> *significantly* reduced performance on RBD reads, compared to writes
>> with the same parameters. "rbd bench --io-type
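For reference, the comparison in question is between invocations along these lines (pool/image names are placeholders):

rbd bench --io-type read --io-size 4096 --io-threads 16 --io-total 1G --io-pattern rand rbd/testimage
rbd bench --io-type write --io-size 4096 --io-threads 16 --io-total 1G --io-pattern rand rbd/testimage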
On 27/11/2018 20:28, Maxime Guyot wrote:
> Hi,
>
> I'm running into an issue with the RadosGW Swift API when the S3 bucket
> versioning is enabled. It looks like it silently drops any metadata sent
> with the "X-Object-Meta-foo" header (see example below).
> This is observed on a Luminous 12.2.8 cluster.
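The truncated example presumably looked something like this (my reconstruction; container and object names are assumed):

swift post test obj1 -H "X-Object-Meta-foo: bar"
swift stat test obj1     # the Meta Foo entry is silently missing once S3 versioning is on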
Hi Florian,
You assumed correctly: the "test" container (private) was created with
"openstack container create test"; I then use the S3 API to
enable/disable object versioning on it.
I use the following Python snippet to enable/disable S3 bucket versioning:
import boto, boto.s3, boto.s3.connection
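Spelled out in full, such a snippet presumably looks like the following (a hedged reconstruction using the legacy boto2 API; endpoint and credentials are placeholders):

import boto, boto.s3, boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
    host='rgw.example.com',
    is_secure=False,
    # path-style addressing, as commonly used against RadosGW
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)
bucket = conn.get_bucket('test')
bucket.configure_versioning(True)    # pass False to disable again
print(bucket.get_versioning_status())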
Hello,
I’m trying to find a way to determine real/physical/raw storage capacity usage
when storing a similar set of objects in different pools, for example a 3-way
replicated pool vs. a 4+2 erasure coded pool, and in particular how this ratio
changes from small (where Bluestore block size matters) to large objects.
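To make that concrete with a back-of-the-envelope example (assuming the pre-Pacific bluestore_min_alloc_size_hdd default of 64 KiB): a 16 KiB object in a 4+2 EC pool is split into four 4 KiB data chunks plus two coding chunks, and each of the six chunks is rounded up to a full 64 KiB allocation unit, so it consumes 6 x 64 KiB = 384 KiB of raw space (24x). The same object in a 3-way replicated pool consumes 3 x 64 KiB = 192 KiB (12x). For objects of many megabytes, both pools converge to their nominal ratios of 1.5x and 3.0x respectively.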
You can get all the details from the admin socket of the OSDs:
ceph daemon osd.X perf dump
(must be run on the server the OSD is running on)
Examples of relevant metrics are bluestore_allocated/bluestore_stored and
the bluefs section for metadata.
Running 'perf schema' might give some details on the meaning of these counters.
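For example (the jq filtering is my own convenience, not required):

ceph daemon osd.0 perf dump | jq '.bluestore.bluestore_allocated, .bluestore.bluestore_stored'
ceph daemon osd.0 perf schema | jq '.bluestore.bluestore_stored'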
Hi Jody,
yes, this is a known issue.
Indeed, 'ceph df detail' currently reports raw space usage in the GLOBAL
section and 'logical' usage in the POOLS one, and the logical numbers have some flaws.
There is a pending PR targeted to Nautilus to fix that:
https://github.com/ceph/ceph/pull/19454
If you want to
Cody :
>
> > And this exact problem was one of the reasons why we migrated
> > everything to PXE boot where the OS runs from RAM.
>
> Hi Paul,
>
> I totally agree with and admire your diskless approach. If I may ask,
> what kind of OS image do you use? 1GB footprint sounds really small.
It's based
Hello,
I’m trying to find a way to use async+dpdk as the network stack on ceph 13.2.0. After
compiling ceph with dpdk and mounting hugepages, I got a segmentation fault
like this:
./bin/ceph-mon -i a -c ceph.conf
EAL: Detected 48 lcore(s)
EAL: No free hugepages reported in hugepages-1048576KB
E
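For what it's worth, reserving and mounting hugepages usually looks like the following (2 MiB pages here; the counts are illustrative, and the EAL message above suggests no 1 GiB pages were reserved at all):

echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge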
Hey,
After rebooting a server that hosts the MGR Dashboard I am now unable to
get the dashboard module to run.
Upon restarting the mgr service I see the following:
ImportError: No module named ordered_dict
Nov 29 07:13:14 ceph-m01 ceph-mgr[12486]: [29/Nov/2018:07:13:14] ENGINE
Serving on http:/
Hi,
I have a ceph 12.2.8 cluster on filestore with rather large omap dirs
(avg size is about 150G). Recently slow requests became a problem, so
after some digging I decided to convert omap from leveldb to rocksdb.
Conversion went fine and the slow request rate went down to an acceptable
level. Unfo
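For anyone following along: the backend that filestore opens for omap is controlled by a ceph.conf option, so a conversion like this pairs copying the omap data with something like the following (illustrative; set it before restarting the converted OSDs):

[osd]
filestore_omap_backend = rocksdb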