Gregory,
I greatly appreciate your assistance. I recompiled Ceph with the -ssl and
nss USE flags set, which is the opposite of what I was using. I am now
able to export from our pools without signature check failures. Thank
you for pointing me in the right direction.
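For the record, on Gentoo that change looks roughly like this (a sketch; it assumes the
sys-cluster/ceph atom and a package.use directory, so adjust to your layout):

echo "sys-cluster/ceph -ssl nss" >> /etc/portage/package.use/ceph
emerge --ask --oneshot sys-cluster/ceph   # rebuild Ceph with nss instead of ssl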
Cary
-Dynamic
On Fri, Feb 16
issue with Ceph Luminous, as we were not having these problems
with Jewel.
Cary
-Dynamic
On Thu, Feb 1, 2018 at 7:04 PM, Cary wrote:
> Hello,
>
> I did not do anything special that I know of. I was just exporting an
> image from Openstack. We have recently upgraded from Jewel 10.2.
: CENSORED
caps: [mon] allow *
I believe this is causing the virtual machines we have running to
crash. Any advice would be appreciated. Please let me know if I need
to provide any other details. Thank you,
Cary
-Dynamic
On Mon, Jan 29, 2018 at 7:53 PM, Gregory Farnum wrote:
> On Fri, Ja
172.21.32.2:6807/153106
conn(0x7fc8bc020870 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH
pgs=26018 cs=1 l=1).process Signature check failed
Does anyone know what could cause this, and what I can do to fix it?
Thank you,
Cary
-Dynamic
cd /etc/init.d/
ln -s ceph ceph-osd.12               # OpenRC: symlink the generic ceph init script for osd.12
/etc/init.d/ceph-osd.12 start        # start the OSD
rc-update add ceph-osd.12 default    # start it automatically at boot
Cary
On Fri, Dec 29, 2017 at 8:47 AM, 赵赵贺东 wrote:
> Hello Cary!
> It’s really a big surprise for me to receive your reply!
> Sincere thanks to you!
> I know it’s a fake
You could add a file named /usr/sbin/systemctl and add:
exit 0
to it.
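For example, a minimal stub (a sketch; assumes /bin/sh exists and that nothing else on the
box needs a real systemctl):

cat > /usr/sbin/systemctl <<'EOF'
#!/bin/sh
# no-op systemctl so the systemd calls made during OSD activation succeed on a non-systemd system
exit 0
EOF
chmod +x /usr/sbin/systemctl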
Cary
On Dec 28, 2017, at 18:45, 赵赵贺东 wrote:
Hello ceph-users!
I am a Ceph user from China.
Our company deploys Ceph on ARM Ubuntu 14.04.
The Ceph version is Luminous 12.2.2.
When I try to activate an OSD with ceph-volume, I get
00 1665G 1146G 518G 68.86 0.94 325
> 25 1.62650 1.0 1665G 1033G 632G 62.02 0.85 309
> 26 1.62650 1.0 1665G 1234G 431G 74.11 1.01 334
> 27 1.62650 1.0 1665G 1342G 322G 80.62 1.10 352
> TOTAL 46635G 34135G 12500G 73.20
> MI
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-October/013646.html
On Tue, Dec 26, 2017 at 6:07 AM, Cary wrote:
> Are you using hardlinks in cephfs?
>
>
> On Tue, Dec 26, 2017 at 3:42 AM, 周 威 wrote:
>> The output of ceph osd df
Could you post the output of “ceph osd df”?
On Dec 25, 2017, at 19:46, 周 威 wrote:
Hi all:
Ceph version:
ceph version 10.2.9 (2ee413f77150c0f375ff6f10edd6c8f9c7d060d0)
Ceph df:
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
46635G 12500G 34135G 73.19
rm d
rm:
n my 4.6TB is the same for all of them, but they have different %USE. So I could lower the
weight of the OSDs with more data, and Ceph will balance the cluster.
I am not too sure why this happens.
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-March/008623.html
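For example (the OSD ID is taken from the listing above and the weight is just illustrative;
pick values that fit your cluster):

ceph osd reweight 27 0.95    # nudge PGs off the fullest OSD onto emptier ones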
Cary
-Dynamic
On Tue,
James,
If your replication factor is 3, then for every 1 GB of data added, your GB avail
will decrease by 3 GB.
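For example (illustrative numbers): writing 100 GB of new data into a pool with size 3 stores
three copies, so about 300 GB of raw capacity is consumed and "GB avail" drops by roughly 300 GB.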
Cary
-Dynamic
On Mon, Dec 18, 2017 at 6:18 PM, James Okken wrote:
> Thanks David.
> Thanks again Cary.
>
> If I have
> 682 GB used, 12998 GB / 13680 GB avail,
> then I still need
A possible option. They do not recommend using cppool.
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-July/011460.html
**COMPLETELY UNTESTED AND DANGEROUS**
stop all MDS daemons
delete your filesystem (but leave the pools)
use "rados export" and "rados import" to do a full copy of the
Karun,
Could you paste in the output from "ceph health detail"? Which OSD
was just added?
Cary
-Dynamic
On Sun, Dec 17, 2017 at 4:59 AM, Karun Josy wrote:
> Any help would be appreciated!
>
> Karun Josy
>
> On Sat, Dec 16, 2017 at 11:04 PM, Karun Josy wrote:
>>
recovering. If possible, wait until the cluster is in a healthy state
first.
Cary
-Dynamic
On Sat, Dec 16, 2017 at 2:05 PM, Karun Josy wrote:
> Hi Cary,
>
> No, I didn't try to repair it.
> I am comparatively new to Ceph. Is it okay to try to repair it?
> Or should I take any precauti
Karun,
Did you attempt a "ceph pg repair <pg id>"? Replace <pg id> with the ID
of the PG that needs repair, 3.4.
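i.e., for the PG in question here:

ceph pg repair 3.4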
Cary
-D123
On Sat, Dec 16, 2017 at 8:24 AM, Karun Josy wrote:
> Hello,
>
> I added 1 disk to the cluster and after rebalancing, it shows 1 PG is in
> remapped state. How can I c
ailable, 3 total, i.e.
usage: 19465 GB used, 60113 GB / 79578 GB avail
We choose to use Openstack with Ceph in this decade and do the other
things, not because they are easy, but because they are hard...;-p
Cary
-Dynamic
On Fri, Dec 15, 2017 at 10:12 PM, David Turner wrote:
> In conjunct
James,
Those errors are normal. Ceph creates the missing files. You can
check "/var/lib/ceph/osd/ceph-6", before and after you run those
commands to see what files are added there.
Make sure you get the replication factor set.
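For example ("rbd" here stands in for whichever pool you actually use):

ls /var/lib/ceph/osd/ceph-6     # compare the contents before and after running those commands
ceph osd pool set rbd size 3    # replication factor: keep 3 copies of every object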
Cary
-Dynamic
On Fri, Dec 15, 2017 at 6:11 PM, J
changed to a lower %?
Cary
-Dynamic
On Thu, Dec 14, 2017 at 10:52 PM, James Okken wrote:
> Thanks Cary!
>
> Your directions worked on my first server (once I found the missing carriage
> return in your list of commands; the email must have messed it up).
>
> For anyone else:
> ch
You should now be able to start the drive. You can watch the data move
to the drive with "ceph -w". Once data has migrated to the drive,
start the next one.
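For example:

ceph -w        # stream cluster events and watch recovery/backfill progress
ceph osd df    # confirm data is landing on the new OSD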
Cary
-Dynamic
On Thu, Dec 14, 2017 at 5:34 PM, James Okken wrote:
> Hi all,
>
> Please let me know if I am missing steps or using the wrong s
"release": "luminous",
"num": 8
}
},
"client": {
"group": {
"features": "0x1ffddff8eea4fffb",
"release": "luminous",
"num": 3
Is there any way I can get these OSDs to join the cluster now, or recover
my data?
Cary