> Date: Fri, 28 Aug 2015 12:07:39 +0100
> From: gfar...@redhat.com
> To: vickey.singh22...@gmail.com
> CC: ceph-users@lists.ceph.com; ceph-us...@ceph.com; ceph-de...@vger.kernel.org
> Subject: Re: [ceph-users] Opensource plugin for pulling out cluster recove
On 08/28/2015 05:37 PM, John Spray wrote:
> On Fri, Aug 28, 2015 at 3:53 PM, Tony Nelson wrote:
>> I recently built a 3-node Proxmox cluster for my office. I’d like to get HA
>> set up, and the Proxmox book recommends Ceph. I’ve been reading the
>> documentation and watching videos, and I think I
Hi Patrick,
On Thu, Aug 27, 2015 at 12:00 PM, Patrick McGarry
wrote:
> Just a reminder that our Performance Ceph Tech Talk with Mark Nelson
> will be starting in 1 hour.
>
> If you are unable to attend, there will be a recording posted on the
> Ceph YouTube channel and linked from the page at:
>
Hi,
Three OSDs (B, C, D) are on the cluster network, with 3x replication and one
monitor (A).
If OSD B drops off the cluster network (but can still communicate with mon A),
then OSDs C and D will report to A that B is down, and mon A will mark B down.
At that point, does the ceph-osd process on B
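The mark-down behaviour described above is governed by a few monitor/OSD
options; a quick way to inspect them on a running cluster is sketched below
(the mon name `A` and osd id `1` are illustrative, taken from the scenario):

```shell
# How many distinct OSDs must report a peer dead before the mon marks it down
ceph daemon mon.A config get mon_osd_min_down_reporters

# How long peers wait without a heartbeat before reporting an OSD down
ceph daemon osd.1 config get osd_heartbeat_grace

# Watch cluster events live to see C and D report B, and the mon mark it down
ceph -w
```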
Dear all,
During a cluster reconfiguration (changing the CRUSH tunables from legacy
to TUNABLES2) with large data movement, several OSDs got overloaded
and had to be restarted. Once the OSDs stabilized, I found a number of PGs
marked stale, even though all the OSDs where this data used to be located show
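For a situation like this, a usual first step is to find which PGs are stale
and where they last lived. A minimal sketch (the pg id below is a placeholder
to be replaced with one from the dump output):

```shell
# List PGs stuck stale and the OSDs they were last mapped to
ceph pg dump_stuck stale

# Confirm which tunables profile is actually in effect now
ceph osd crush show-tunables

# Query one stale PG in detail (replace 2.3f with a real pg id from the dump)
ceph pg 2.3f query
```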
I'm running a 3-node cluster with Ceph (it's a Deis cluster, so the Ceph
daemons are containerized). There are 3 OSDs and 3 mons. After rebooting all
nodes one by one, all monitors are up, but only two of the three OSDs are up.
The 'down' OSD is actually running but is never marked up/in.
All three mons are reachable
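A sketch of the usual checks for an OSD daemon that is running but never
marked up (the osd id `2` and the port range are assumptions for
illustration):

```shell
ceph osd tree                 # confirm which OSD the cluster considers down
ceph daemon osd.2 status      # the admin socket answers if the daemon is alive
# If the daemon responds but stays down, its heartbeats may not be reaching
# its peers: with containerized OSDs, make sure the OSD ports (6800-7300 by
# default) are published and reachable between the nodes.
```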
Hello,
I have a question about Ceph read IOPS.
I'm not sure how Ceph handles read requests:
- Does only the primary OSD serve read requests, or do all replica OSDs?
- Can I increase read IOPS by adding OSD nodes to the cluster? If not, is
there another way to do it?
Thanks and regards.
Hi,
I'm trying to install Ceph for the first time, following the quick
installation guide. I'm getting the error below; can someone please help?
ceph-deploy install --release=firefly ceph-vm-mon1
[ceph_deploy.conf][DEBUG ] found configuration file at:
/home/cloud-user/.cephdeploy.conf
[ceph
Yes, reads are served from the primary OSDs.
Adding OSD nodes should definitely increase your performance, but first you
need to check whether you are getting the desired performance (or are maxed
out) with the existing cluster.
Please give some more information about your cluster and let us know
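To check whether the existing cluster is already maxed out, a quick read
benchmark can be run with rados bench (the pool name `testpool` is a
placeholder; run against a throwaway pool, not production data):

```shell
# Populate benchmark objects first and keep them around for the read phase
rados bench -p testpool 30 write --no-cleanup

# Sequential read benchmark against those objects; reports read IOPS/bandwidth
rados bench -p testpool 30 seq

# Remove the benchmark objects afterwards
rados -p testpool cleanup
```

Because reads only hit primaries, and CRUSH spreads primaries across all
OSDs, aggregate read IOPS scales roughly with the number of OSDs, not with
the replica count.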
I'm not the OP, but in my particular case, GC is proceeding normally
(since 94.2, I think) -- I just have millions of older objects
(months old) which will not go away.
(see my other post --
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-August/003967.html
)
-Ben
On Fri, Aug 28, 2015