Hi Peter,
On 02/15/18 @ 19:44, Jan Peters wrote:
> I want to evaluate ceph with bluestore, so I need some hardware/configuration
> advice from you.
>
> My Setup should be:
>
> 3 Nodes Cluster, on each with:
>
> - Intel Xeon Gold 5118 (Skylake-SP), 12 cores / 2.30 GHz
> - 64 GB RAM
> - 6 x 4 TB SAS, 7.2k rpm
On 02/16/18 @ 18:21, Nico Schottelius wrote:
> on a test cluster I issued a few seconds ago:
>
> ceph auth caps client.admin mgr 'allow *'
>
> instead of what I really wanted to do
>
> ceph auth caps client.admin mgr 'allow *' mon 'allow *' osd 'allow *' \
> mds allow
>
> Now any access to the cluster with client.admin is denied.
[osd] allow *
client.bootstrap-mds
root@ceph-mon1:/# cat /var/lib/ceph/mon/ceph-ceph-mon1/keyring
[mon.]
key = AQD1y3RapVDCNxAAmInc8D3OPZKuTVeUcNsPug==
caps mon = "allow *"
> Michel Raabe writes:
> > On 02/16/18 @ 18:21, Nico Schottelius wrote:
> >> on a test
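For the record: 'ceph auth caps' replaces the entire cap set, which is why
client.admin lost its mon/osd/mds access here. The caps can usually be restored
by authenticating as mon. with the monitor's keyring; a sketch reusing the
keyring path shown above (adjust the caps to whatever you actually need):

ceph -n mon. --keyring /var/lib/ceph/mon/ceph-ceph-mon1/keyring \
    auth caps client.admin mon 'allow *' osd 'allow *' mds 'allow' mgr 'allow *'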
> On 2. Feb 2019, at 01:25, Carlos Mogas da Silva wrote:
>
>> On 01/02/2019 22:40, Alan Johnson wrote:
>> Confirm that no pools are created by default with Mimic.
>
> I can confirm that. Mimic deploy doesn't create any pools.
https://ceph.com/community/new-luminous-pool-tags/
Yes and that’s
On Monday, February 18, 2019 16:44 CET, David Turner wrote:
> Has anyone else come across this issue before? Our current theory is that
> Bluestore is accessing the disk in a way that is triggering a bug in the
> older firmware version that isn't triggered by more traditional
> filesystems. We
On 20.05.19 13:04, Lars Täuber wrote:
Mon, 20 May 2019 10:52:14, Eugen Block ==> ceph-users@lists.ceph.com:
Hi, have you tried 'ceph health detail'?
No I hadn't. Thanks for the hint.
You can also try
$ rados lspools
$ ceph osd pool ls
and verify that against the PGs:
$ ceph pg ls --fo
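If it helps, a sketch of that cross-check (the pool name is a placeholder):

$ ceph osd pool ls detail
$ ceph pg ls-by-pool <poolname>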
Hi Mike,
On 30.05.19 02:00, Mike Cave wrote:
I’d like as little friction for the cluster as possible, as it is in
heavy use right now.
I’m running mimic (13.2.5) on CentOS.
Any suggestions on best practices for this?
You can limit the recovery, for example (sketch below):
* max backfills
* recovery max active
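A rough example on mimic (the values are placeholders, not recommendations):

$ ceph config set osd osd_max_backfills 1
$ ceph config set osd osd_recovery_max_active 1

or injected into the running OSDs directly:

$ ceph tell 'osd.*' injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'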
Hi Brett!
FYI, it was fixed last month:
https://github.com/ceph/ceph/commit/425c5358fed9376939cff8a922c3ce1186d6b9e2
HTH,
Michel
Hi,
On 15.07.19 22:42, dhils...@performair.com wrote:
Paul;
If I understand you correctly:
I will have 2 clusters, each named "ceph" (internally).
As such, each will have a configuration file at: /etc/ceph/ceph.conf
I would copy the other cluster's configuration file to something like:
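A sketch of the usual pattern, with "remote" purely as a placeholder name (the
source host is made up as well): keep the second cluster's conf and admin
keyring under that name and point the CLI at them.

$ scp mon1.other:/etc/ceph/ceph.conf /etc/ceph/remote.conf
$ scp mon1.other:/etc/ceph/ceph.client.admin.keyring \
      /etc/ceph/remote.client.admin.keyring
$ ceph --cluster remote status
(the last command is shorthand for: ceph --conf /etc/ceph/remote.conf
--keyring /etc/ceph/remote.client.admin.keyring status)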
Hi Muthu,
On 16.09.19 11:30, nokia ceph wrote:
Hi Team,
In ceph 14.2.2, ceph dashboard does not have set-ssl-certificate.
We are trying to enable the ceph dashboard, but when using the SSL
certificate and key it is not working.
cn5.chn5au1c1.cdn ~# ceph dashboard set-ssl-certificate -i das
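In case it helps: the older config-key route from the mimic dashboard docs
might still work as a fallback; a sketch (the file names are placeholders):

$ ceph config-key set mgr/dashboard/crt -i dashboard.crt
$ ceph config-key set mgr/dashboard/key -i dashboard.key
$ ceph mgr module disable dashboard
$ ceph mgr module enable dashboard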
On 10/1/19 8:20 AM, Lars Täuber wrote:
> Mon, 30 Sep 2019 15:21:18 +0200
> Janne Johansson ==> Lars Täuber :
>>>
>>> I don't remember where I read it, but it was said that the cluster
>>> migrates its complete traffic over to the public network when the cluster
>>> network goes down. So th
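For reference, the two networks involved are the ones defined in ceph.conf,
roughly like this (the subnets are placeholders):

[global]
    public network = 192.168.1.0/24
    cluster network = 192.168.2.0/24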