Thanks to all! I might have found the reason.
It looks like it is related to the bug below.
https://bugs.launchpad.net/nova/+bug/1773449
At 2018-12-04 23:42:15, "Ouyang Xu" wrote:
Hi linghucongsong:
I have run into this issue before; you can try to fix it as below:
1. use rbd
c. 2018 at 09:49, linghucongsong wrote:
Hi all!
I have a Ceph test environment that uses Ceph with OpenStack; some VMs run on
OpenStack. It is just a test environment.
My ceph version is 12.2.4. Yesterday I rebooted all the Ceph hosts, and before
that I did not shut down the VMs on OpenStack.
When all the hosts boot up and the ceph
-09-20 11:29:28, "Konstantin Shalygin" wrote:
>On 09/20/2018 10:09 AM, linghucongsong wrote:
>> By the way I use keepalive+lvs to loadbalance and ha.
>
>This is good. But in that case I wonder why fastcgi+nginx, instead
>
Thank you Shalygin for sharing.
I have found the reason: in L the fastcgi is disabled by default. I have
re-enabled fastcgi and it works well now.
By the way, I use keepalived+LVS for load balancing and HA.
Thanks again!
At 2018-09-18 18:36:46, "Konstantin Shalygin" wrote:
Thank you, Gregory, for providing it.
By the way, where can I get the PDFs for these talks?
Thanks again!
At 2018-09-06 07:03:32, "Gregory Farnum" wrote:
>Hey all,
>Just wanted to let you know that all the talks from Mountpoint.io are
>now available on YouTube. These are reasonably high-qu
In Jewel I used the config below and RGW worked well with nginx. But with
Luminous, nginx does not seem to work with RGW any more.
2018/08/31 16:38:25 [error] ... 10.11.3.57, request: "GET / HTTP/1.1", upstream:
"fastcgi://unix:/var/run/ceph/ceph-client.rgw.ceph-11.asok:", host:
"10.11.3.57:7480"
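For reference, a minimal nginx front end for radosgw over fastcgi might look like the sketch below. The socket path is taken from the error log; the listen port and server_name are placeholders, not from the thread. Note that on the Ceph side the fastcgi front end has to be enabled explicitly (e.g. via rgw_frontends in ceph.conf), matching the observation elsewhere in this thread that fastcgi is off by default in newer releases.

```nginx
# Hypothetical nginx server block fronting radosgw via its fastcgi socket.
server {
    listen 7480;                      # placeholder port
    server_name rgw.example.com;      # placeholder name

    location / {
        include             fastcgi_params;
        # Socket path as seen in the error log above.
        fastcgi_pass        unix:/var/run/ceph/ceph-client.rgw.ceph-11.asok;
        # radosgw needs the Authorization header forwarded for S3 auth.
        fastcgi_pass_header Authorization;
    }
}
```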
Welcome!
At 2018-08-29 09:13:24, "Sage Weil" wrote:
>Hi everyone,
>
>Please help me welcome Mike Perez, the new Ceph community manager!
>
>Mike has a long history with Ceph: he started at DreamHost working on
>OpenStack and Ceph back in the early days, including work on the original
>RBD
n we were working on some bugs with the
Ceph support team previously.
On Wed, Mar 14, 2018 at 5:38 AM linghucongsong wrote:
What is the purpose of showing the removed snaps? The removed snaps seem to be
of no use to the user. We use rbd export and import to back up images from one
Ceph cluster to another. The incremental image backup depends on the snap, and
we will remove the snap after the backup, so it will sh
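The backup flow described above can be sketched with rbd export-diff / import-diff. Pool, image, and snapshot names here are hypothetical, and "backup" stands for the target cluster's name as passed to --cluster; this needs live clusters on both sides.

```shell
# Initial full copy: snapshot the source image, ship it to the backup
# cluster, and create the matching base snapshot there.
rbd snap create rbd/vm-disk@base
rbd export rbd/vm-disk@base - | rbd --cluster backup import - rbd/vm-disk
rbd --cluster backup snap create rbd/vm-disk@base

# Incremental run: export only the delta since the previous snapshot and
# apply it on the backup cluster. The previous snapshot must still exist
# on both sides, so it can only be removed once a newer snapshot exists
# to diff from -- which is the dependency described above.
rbd snap create rbd/vm-disk@incr1
rbd export-diff --from-snap base rbd/vm-disk@incr1 - | \
    rbd --cluster backup import-diff - rbd/vm-disk
```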
Hi, all!
I just use Ceph RBD for OpenStack.
My ceph version is 10.2.7.
I find it surprising that among the objects saved on the OSDs, in some PGs the
objects are 8M and in some PGs the objects are 4M. Can someone tell me why?
Thanks!
root@node04:/var/lib/ceph/osd/ceph-3/current/1.6e_head/DIR
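A likely explanation (an assumption, not confirmed in the thread): RBD stripes each image into objects of 2^order bytes, with the order fixed per image at creation time and reported by rbd info. Images created with different orders therefore leave different object sizes on the OSDs:

```shell
# RBD object size is 2^order bytes; 'rbd info <image>' reports the order.
# The default order 22 gives 4 MiB objects; an image created with
# '--order 23' gives 8 MiB objects, which would explain seeing both 4M
# and 8M objects across PGs.
echo $(( (1 << 22) / 1024 / 1024 ))   # 4  (MiB per object at order 22)
echo $(( (1 << 23) / 1024 / 1024 ))   # 8  (MiB per object at order 23)
```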
Set the osd flags noout and nodown.
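A sketch of that procedure for a planned reboot (to be run from an admin node against a live cluster):

```shell
# Before the planned reboot: stop the monitors from marking OSDs "out"
# (which would trigger rebalancing) or "down" while the host restarts.
ceph osd set noout
ceph osd set nodown

# ... reboot the OSD host ...

# Once the OSDs have rejoined, clear the flags so normal failure
# handling resumes.
ceph osd unset noout
ceph osd unset nodown
```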
At 2017-08-03 18:29:47, "Hans van den Bogert" wrote:
Hi all,
One thing which has bothered since the beginning of using ceph is that a reboot
of a single OSD causes a HEALTH_ERR state for the cluster for at least a couple
of seconds.
In the case of planned reb
root ssds {
id -9 # do not change unnecessarily
# weight 0.000
alg straw
hash 0 # rjenkins1
}
The ssds root is empty!
rule ssdpool {
ruleset 1
type replicated
min_size 1
max_size 10
step take ssds
step chooseleaf firstn
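Because the ssds root contains no OSDs, "step take ssds" selects nothing and CRUSH cannot map PGs for any pool using the ssdpool rule. One way to populate it could look like this sketch (bucket name, OSD id, and weight are hypothetical):

```shell
# Create a host bucket under the empty ssds root, then place an
# SSD-backed OSD into it with an explicit CRUSH weight (conventionally
# about 1 per TB of capacity).
ceph osd crush add-bucket node01-ssd host
ceph osd crush move node01-ssd root=ssds
ceph osd crush set osd.4 0.5 root=ssds host=node01-ssd
```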
17:57:11, "Nikola Ciprich" wrote:
>On Fri, Jul 28, 2017 at 05:52:29PM +0800, linghucongsong wrote:
>>
>>
>>
>> You have two crush rule? One is ssd the other is hdd?
>yes, exactly..
>
>>
>> Can you show ceph osd dump|grep pool
>>
Do you have two CRUSH rules, one for SSD and the other for HDD?
Can you show ceph osd dump | grep pool
ceph osd crush dump
At 2017-07-28 17:47:48, "Nikola Ciprich" wrote:
>
>On Fri, Jul 28, 2017 at 05:43:14PM +0800, linghucongsong wrote:
>>
>>
>> It look like
It looks like the OSDs in your cluster are not all the same size.
Can you show the ceph osd df output?
At 2017-07-28 17:24:29, "Nikola Ciprich" wrote:
>I forgot to add that OSD daemons really seem to be idle, no disk
>activity, no CPU usage.. it just looks to me like some kind of
>deadlock, as they