My problem was that commands like ceph -s fail to connect
and therefore I couldn't extract the monmap.
I could get it from the running pid though, and I've used it
along with the documentation and the example of what a monmap
looks like in order to create a new one and inject it into the
second monitor.
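For the archives, the documented monmaptool / ceph-mon route looks roughly like
the sketch below - the monitor names and IPs are the ones quoted later in this
thread (6789 is the default mon port), the fsid is a placeholder, and the init
commands vary by distro, so treat it as an illustration rather than the exact
steps used here:

monmaptool --create --clobber --fsid <cluster-fsid> \
    --add fu 192.168.1.100:6789 \
    --add rai 192.168.1.101:6789 \
    --add jin 192.168.1.102:6789 /tmp/monmap
monmaptool --print /tmp/monmap        # sanity-check the new map
service ceph stop mon.rai             # the target monitor must not be running
ceph-mon -i rai --inject-monmap /tmp/monmap
service ceph start mon.rai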
It seems that both PUBLIC_NETWORK and CLUSTER_NETWORK
have to be defined in order for this to work.
Otherwise, if only PUBLIC_NETWORK is defined, a certain (quite vast,
though) amount of traffic still uses the other interface.
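For what it's worth, with the values quoted further down this ends up looking
roughly like the following in ceph.conf (both set to the same subnet, since
there is only one internal network here - a sketch, not a verified config):

[global]
mon_initial_members = fu,rai,jin
mon_host            = 192.168.1.100,192.168.1.101,192.168.1.102
# client <-> OSD/MON traffic
public_network      = 192.168.1.0/24
# OSD <-> OSD replication traffic; defined explicitly, since leaving it out
# was seen to push some traffic onto the 15.12.* interface
cluster_network     = 192.168.1.0/24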
All the best,
George
In that case - yes...put everything on 1 card - or if both cards are 1G (or
the same speed for that matter...) - then you might want to block all external
traffic except e.g. SSH and WEB, but allow ALL traffic between all CEPH
OSDs... so you can still use that network for "public/client" traffic - not
sure
Andrija,
I have two cards!
One on 15.12.* and one on 192.*
Obviously the 15.12.* is the external network (the real public IP address,
e.g. used to access the node via SSH).
That's why I am saying that my public network for CEPH is the 192.* one -
should I use that for the cluster network as well?
Georgios,
no need to put ANYTHING if you don't plan to split client-to-OSD vs
OSD-to-OSD replication over 2 different Network Cards/Networks - for performance
reasons.
if you have only 1 network - simply DON'T configure networks at all inside
your ceph.conf file...
if you have 2 x 1G cards in servers,
Andrija,
Thanks for your help!
In my case I just have one 192.* network, so should I put that for
both?
Besides monitors do I have to list OSDs as well?
Thanks again!
Best,
George
This is how I did it, and then restart each OSD one by one, but monitor
with ceph -s; when ceph is healthy, proceed with the next OSD restart...
Make sure the networks are fine on physical nodes, that you can ping in
between...
[global]
x
x
x
x
x
x
#
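The restart-and-verify loop described above would look roughly like this
(a sketch - the OSD ids and the sysvinit-style service commands are
assumptions, adjust to your init system):

service ceph restart osd.0    # restart a single OSD
ceph -s                       # wait until the cluster is healthy again
service ceph restart osd.1    # only then move on to the next one
ceph -s
# ...and so on for the remaining OSDs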
I thought that it was easy but apparently it's not!
I have the following in my conf file
mon_host = 192.168.1.100,192.168.1.101,192.168.1.102
public_network = 192.168.1.0/24
mon_initial_members = fu,rai,jin
but still the 15.12.6.21 link is being saturated
Any ideas why???
Should I put c
changing PG number - causes a LOT of data rebalancing (in my case it was 80%) -
which I learned the hard way...
On 14 March 2015 at 18:49, Gabri Mate
wrote:
I had the same issue a few days ago. I was increasing the pg_num of one
pool from 512 to 1024 and all the VMs in that pool stopped. I came to
the conclusion that doubling the pg_num caused such a high load in ceph
that the VMs were blocked. The next time I will test with small
increments.
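A hedged sketch of what those small increments could look like (the pool name
'rbd' and the step size are made up for illustration):

ceph osd pool get rbd pg_num          # see where the pool currently is
ceph osd pool set rbd pg_num 576      # raise pg_num by a small step instead of doubling
ceph osd pool set rbd pgp_num 576     # then raise pgp_num to match
ceph -s                               # let the cluster settle before the next step
# ...repeat until the target (e.g. 1024) is reached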
On 12:3
Andrija,
thanks a lot for the useful info!
I would also like to thank "Kingrat" at the IRC channel for his useful
advice!
I was under the wrong impression that public is the one used for RADOS.
So I thought that public=external=internet and therefore I used that
one in my conf.
I underst
Public network is client-to-OSD traffic - and if you have NOT explicitly
defined a cluster network, then OSD-to-OSD replication also takes place over
the same network.
Otherwise, you can define public and cluster (private) networks - so OSD
replication will happen over dedicated NICs (cluster network) a
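As an illustration of the split setup (the 10.0.0.0/24 cluster subnet is just a
placeholder for whatever the dedicated replication NICs use):

[global]
# client <-> OSD/MON traffic
public_network  = 192.168.1.0/24
# OSD <-> OSD replication/heartbeat traffic over the dedicated NICs
cluster_network = 10.0.0.0/24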
Hi all!!
What is the meaning of public_network in ceph.conf?
Is it the network that the OSDs use to talk and transfer data over?
I have two nodes with two IP addresses each. One for internal network
192.168.1.0/24
and one external 15.12.6.*
I see the following in my logs:
osd.0 is down since ep
>>And at this moment, some of the VM stored on this pool were stopped (on
>>some hosts, not all, it depends, no logic)
Do you use librbd or krbd for these VMs?
Is the guest OS crashed? Or is the qemu process killed? (which seems really
strange)
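One hedged way to tell the two apart on a host - the commands are standard,
but the exact setup here is of course unknown:

ps aux | grep qemu | grep 'rbd:'   # librbd: qemu opens the image via rbd:pool/image
rbd showmapped                     # krbd: kernel-mapped images appear as /dev/rbdX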
- Original Message -
From: "Florent Bautista"
Hello all!
I have been working on Ceph object storage architecture for the last few months.
I am unable to find a document which describes how the Ceph object storage
APIs (Swift/S3 APIs) are mapped to the Ceph storage cluster APIs (librados APIs)
to store the data in the Ceph storage cluster.
As the document
Good evening all,
Just had another quick look at this with some further logging on and thought
I'd post the results in case anyone can keep me moving in the right direction.
Long story short, some OSDs just don't appear to come up, failing one
after another. Dealing with one in isolation,
Thanks,
Is there any option to fix the bucket index automatically?
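For reference, radosgw-admin has a bucket check subcommand with a --fix flag;
a hedged sketch with a placeholder bucket name:

radosgw-admin bucket check --bucket=mybucket         # report index inconsistencies
radosgw-admin bucket check --bucket=mybucket --fix   # attempt to repair the index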
--
Regards
2015-03-14 4:49 GMT+01:00 Yehuda Sadeh-Weinraub :
>
>
> - Original Message -
>> From: "Dominik Mostowiec"
>> To: ceph-users@lists.ceph.com
>> Sent: Friday, March 13, 2015 4:50:18 PM
>> Subject: [ceph-users] not exist