Hi,
I have a question about the intent of the Ceph setmaxosd command. From the
source code, it appears to exist as a way to limit the number of OSDs in the
Ceph cluster. Can it be used to shrink the number of OSDs in the cluster
without gracefully shutting down OSDs and letting recovery/remapping complete?
Page reclamation in Linux is NUMA-aware, so page reclamation is not an issue.
You will see performance improvements only if all the components of a given IO
complete on a single core. This is hard to achieve in Ceph, as a single IO
goes through multiple thread switches and the threads are not bound to
specific cores.
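As an aside, core affinity itself is easy to express on Linux; the hard part in Ceph is that one IO hops across several threads. A minimal, Linux-only sketch of pinning the calling thread to a single core with Python's os.sched_setaffinity (core 0 is just an example):

```python
import os

# Remember the original affinity mask so it can be restored.
original = os.sched_getaffinity(0)   # 0 means "the calling process/thread"

# Pin the calling thread to CPU core 0 only.
os.sched_setaffinity(0, {0})
print(os.sched_getaffinity(0))       # {0}

# Restore the original mask.
os.sched_setaffinity(0, original)
```

For the single-core completion the mail describes, every thread that touches a given IO would need a mask like this on the same core, which Ceph's threading model does not arrange.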
Check the states of PGs using "ceph pg dump", and for every PG that is not
"active+clean", issue "ceph pg map <pgid>" to get its mapped OSDs. Check
the state of those OSDs by looking at their logs under /var/log/ceph/.
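The loop above can be scripted against the JSON output of "ceph pg dump --format json". A minimal sketch; the sample data below is made up, and the exact field layout varies between Ceph versions, so check your own dump first:

```python
import json

# Made-up sample resembling the "pg_stats" portion of `ceph pg dump --format json`.
sample = json.loads("""
{"pg_stats": [
  {"pgid": "1.0", "state": "active+clean",    "acting": [0, 1, 2]},
  {"pgid": "1.1", "state": "active+degraded", "acting": [1, 2]},
  {"pgid": "1.2", "state": "down+peering",    "acting": [2]}
]}
""")

def unhealthy_pgs(dump):
    """Return (pgid, state, acting OSDs) for every PG that is not active+clean."""
    return [(pg["pgid"], pg["state"], pg["acting"])
            for pg in dump["pg_stats"]
            if pg["state"] != "active+clean"]

for pgid, state, acting in unhealthy_pgs(sample):
    # For each of these, inspect /var/log/ceph/ on the acting OSD hosts.
    print(pgid, state, acting)
```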
Regards,
Anand
On Mon, May 23, 2016 at 6:53 AM, Ken Peng wrote:
> Hi,
>
> # ceph -s
>
For performance, civetweb is better, as the FastCGI module used with
Apache is single-threaded. But Apache does have fancy features that
civetweb lacks. If you are looking for performance alone, go with
civetweb.
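For reference, switching RGW to civetweb is a one-line frontend setting in ceph.conf; the instance name and port below are examples and should match your deployment:

```ini
[client.rgw.gateway1]
# "gateway1" is a hypothetical instance name; adjust to your deployment.
rgw frontends = civetweb port=7480
```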
Regards,
Anand
On Mon, May 23, 2016 at 12:43 PM, fridifree wrote:
> Hi ev
I think you are looking for inotify/fanotify-style events for Ceph. Usually
these are implemented for local file systems. Ceph being a networked file
system, this will not be easy to implement and will involve network traffic
to generate the events.
Not sure it is in the plan, though.
Regards,
Anand
On Wed,
Correct. This is guaranteed.
Regards,
Anand
On Fri, Jun 24, 2016 at 10:37 AM, min fang wrote:
> Hi, as I understand it, at the PG level IOs are executed sequentially,
> such as in the following cases:
>
> Case 1:
> Write A, Write B, Write C to the same data area in a PG --> A Committed,
> then
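The guarantee above can be pictured with a toy model (plain Python, not Ceph code): writes targeting the same PG go through one queue and commit strictly in submission order.

```python
from collections import defaultdict

class ToyPG:
    """Toy model: each PG applies writes one at a time, in submission order."""
    def __init__(self):
        self.log = []     # order in which writes committed
        self.data = {}    # object name -> last value written

    def write(self, obj, value):
        # In the model, commit order equals submission order within a PG.
        self.data[obj] = value
        self.log.append((obj, value))

pgs = defaultdict(ToyPG)

# Case 1 from the mail: writes A, B, C to the same data area in one PG.
pgs["1.0"].write("obj", "A")
pgs["1.0"].write("obj", "B")
pgs["1.0"].write("obj", "C")

print(pgs["1.0"].data["obj"])   # C: the last submitted write wins
print(pgs["1.0"].log)           # commit order matches submission order
```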
Hi,
When a GET Bucket ACL REST call is issued with X-Auth-Token set, the call
fails. This is because the bucket in question has no CORS settings. Is there
a way to set CORS on the S3 bucket with REST APIs? I know a way using boto
that works; I am looking for the REST API for setting CORS.
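In the standard S3 REST API, CORS is set by PUTting a CORSConfiguration XML document to the bucket's ?cors subresource; whether a given RGW version accepts that call should be verified. A sketch that only builds the XML payload (origin and methods are examples):

```python
import xml.etree.ElementTree as ET

def cors_payload(origin, methods):
    """Build a minimal CORSConfiguration body for PUT /<bucket>?cors."""
    root = ET.Element("CORSConfiguration")
    rule = ET.SubElement(root, "CORSRule")
    ET.SubElement(rule, "AllowedOrigin").text = origin
    for m in methods:
        ET.SubElement(rule, "AllowedMethod").text = m
    ET.SubElement(rule, "AllowedHeader").text = "*"
    return ET.tostring(root, encoding="unicode")

payload = cors_payload("https://example.com", ["GET", "PUT"])
print(payload)
# The request itself would be PUT /<bucket>?cors with this body
# plus the usual auth headers, sent with any HTTP client.
```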
Regards,
Anand
These are known problems.
Are you doing mkfs.xfs on an SSD? If so, please check the SSD's data sheet
for whether UNMAP (TRIM/discard) is supported. To avoid issuing discards
during mkfs, use mkfs.xfs -K.
Regards,
Anand
On Thu, Jul 7, 2016 at 5:23 PM, Nick Fisk wrote:
> Hi All,
>
>
>
> Does anybody else see a massive (ie 10x) perform
Merges happen either due to movement of objects from CRUSH recalculation
(when the cluster grows or shrinks for various reasons) or due to deletion
of objects.
Splits happen when portions of objects/volumes that were previously sparse
get populated. Each RADOS object is a 4MB chunk by default, and volumes
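For a sense of scale, the 4MB default chunking means object counts grow quickly with volume size; a quick back-of-the-envelope calculation (the 600 GiB figure is just an example):

```python
OBJECT_SIZE = 4 * 1024 * 1024            # default RADOS object size: 4 MiB

def object_count(volume_bytes, object_size=OBJECT_SIZE):
    """Number of RADOS objects backing a fully-provisioned volume."""
    return -(-volume_bytes // object_size)   # ceiling division

# A hypothetical 600 GiB RBD image:
vol = 600 * 1024**3
print(object_count(vol))   # 153600 objects
```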
Use "qemu-img convert" to convert from one format to another, e.g.
qemu-img convert -f qcow2 -O raw input.qcow2 output.raw
Regards,
Anand
On Mon, Jul 11, 2016 at 9:37 PM, Gaurav Goyal
wrote:
> Thanks!
>
> I need to create a VM having qcow2 image file as 6.7 GB but raw image as
> 600GB which is too big.
> Is there a way that i need not to convert qcow2 file