Regards,
Suresh
On Mon, 14 Aug 2017 19:54:25 +0530 John Spray <jsp...@redhat.com> wrote:
On Mon, Aug 14, 2017 at 3:13 PM, psuresh <psur...@zohocorp.com> wrote:
Hi,
Suddenly I've faced cache pressure on my production Ceph cluster. The MDS is
running on an 8 GB VM.
After restarting the mds service, health came back to OK. What is the issue?
root@ceph-admin:~# ceph -s
    cluster 2074b31b-7965-4244-8390-a64f3b038f3e
    health HEALTH_WARN
           mds0: C
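
In case it is useful: the "failing to respond to cache pressure" warning that usually shows up here generally means the clients are holding more inodes/caps than the MDS wants to keep cached. A couple of read-only checks (run these on the MDS host itself, since "ceph daemon" talks to the local admin socket; the daemon name mds.ceph-mds1 below is just a placeholder):

    ceph daemon mds.ceph-mds1 perf dump      # look at the mds inode/caps counters
    ceph daemon mds.ceph-mds1 session ls     # shows num_caps held by each client

On pre-Luminous releases the MDS cache is sized by inode count (mds_cache_size, default 100000); on Luminous and later it is sized in bytes (mds_cache_memory_limit). With 8 GB of RAM on that VM there is normally headroom to raise it, e.g. (an illustrative value only, not a recommendation):

    ceph tell mds.ceph-mds1 injectargs '--mds_cache_size=500000'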

Hi,
When I run "ceph-deploy install lceph-mon2" from the admin node I'm getting the
following error. Any clue?
[cide-lceph-mon2][DEBUG ] connected to host: cide-lceph-mon2
[cide-lceph-mon2][DEBUG ] detect platform information from remote host
[cide-lceph-mon2][DEBUG ] detect machine type
[cep
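
Without seeing the rest of the output it's hard to say, but the first things usually worth checking are passwordless SSH/sudo to the target and whether the target can reach the Ceph package repositories. A minimal sketch (the release name is only an example, use whatever your cluster runs):

    ssh cide-lceph-mon2 sudo true                         # should succeed without any password prompt
    ceph-deploy install --release jewel cide-lceph-mon2   # pins the release explicitly

Pasting the full output after the "detect machine type" line would make the failure much easier to pin down.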

...OSDs. If you had 6 OSDs, then the goal would be to have somewhere between
200-400 PGs total to maintain the same 100-200 PGs per OSD.
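
For what it's worth, the arithmetic behind that guideline (assuming the usual replicated size of 3) is:

    PGs per OSD = sum over all pools of (pg_num x replica size) / number of OSDs

So a single pool with pg_num 128 and size 3 on 3 OSDs already puts 128 x 3 / 3 = 128 PGs on every OSD, near the top of the 100-200 band, and each additional pool (rbd, cephfs metadata, ...) adds its own PGs on top of that.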
On Thu, May 4, 2017 at 10:24 AM psuresh <psur...@zohocorp.com> wrote:
Hi,
I'm running 3 OSDs in my test setup. I have created a pool with 128 PGs, as per
the Ceph documentation.
But I'm getting a "too many PGs" warning. Can anyone clarify why I'm getting
this warning?
Each OSD has a 240 GB disk.
cluster 9d325da2-3d87-4b6b-8cca-e52a4b65aa08
h
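
If it helps, the quickest way to see where the PGs are coming from:

    ceph osd pool ls detail    # pg_num and replica size for every pool
    ceph osd df                # the PGS column shows how many PGs each OSD holds

Also note that on the releases current at the time of this thread pg_num can be raised on an existing pool but not lowered, so if the total really is too high the realistic options are adding OSDs or recreating the over-sized pool with a smaller pg_num.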