I installed jewel el7 via yum on CentOS 7.1, but it seems no systemd
scripts are available. However, I do see there's a folder named 'systemd' in
the source, so maybe we forgot to build them into the package?
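(In case anyone hits the same thing: if the unit files really are missing from the package, a minimal workaround sketch is to copy them from the source tree's systemd/ directory into place and enable them by hand. The unit names below are the standard ones and osd id 0 is just an example:)

    # assumes ceph-osd@.service, ceph-mon@.service and ceph.target were copied
    # to /usr/lib/systemd/system/
    systemctl daemon-reload
    systemctl enable ceph-osd@0
    systemctl start ceph-osd@0
    systemctl status ceph-osd@0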
Hi, I need help deploying jewel OSDs on CentOS 7.
Following the guide, I managed to start the OSD daemons, but all of them
are down according to `ceph -s`: 15/15 in osds are down
There are no errors in /var/log/ceph/ceph-osd.1.log; it just stopped at these lines
and never made progress:
2016-05-09 01:32
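(For anyone debugging a similar state, a hedged sketch of the first things worth checking; the OSD id is just an example:)

    ceph osd tree                 # how the monitors see each OSD (up/down, weight, host)
    ceph osd stat                 # summary of up/in counts
    systemctl status ceph-osd@1   # is the daemon actually running on the OSD host?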
Hi all,
To observe what happens to a ceph-fuse mount when the network is down, we
blocked network connections to all three monitors with iptables. If we
restore the network immediately (within minutes), the blocked I/O requests
resume and everything goes back to normal.
But if we continue
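(For reference, a minimal sketch of the kind of rule used to cut the client off from a monitor; the address is hypothetical, and 6789 is the default monitor port:)

    # block traffic from this client to one monitor
    iptables -A OUTPUT -p tcp -d 192.168.1.11 --dport 6789 -j DROP
    # restore it later by deleting the same rule
    iptables -D OUTPUT -p tcp -d 192.168.1.11 --dport 6789 -j DROP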
Thanks! I'll check it out.
On 24 Nov 2017 at 17:58, "Yan, Zheng" wrote:
> On Fri, Nov 24, 2017 at 4:59 PM, Zhang Qiang
> wrote:
> > Hi all,
> >
> > To observe what will happen to ceph-fuse mount if the network is down, we
> > blocked
> > network c
Hi,
One of several OSDs on the same machine has crashed several times within a
few days. It's always that one; the other OSDs are all fine. Below is the
dumped message; since it's too long, I only pasted the head and tail of the
recent events. If it's necessary to inspect the full log, please see
https://g
Thanks Wang, looks like so, not Ceph to blame :)
On 25 October 2016 at 09:59, Haomai Wang wrote:
> could you check dmesg? I think there exists disk EIO error
>
> On Tue, Oct 25, 2016 at 9:58 AM, Zhang Qiang
> wrote:
>
>> Hi,
>>
>> One of several OSDs on the s
Hi,
Ceph version 10.2.3. After a power outage, I tried to start the MDS
daemons, but they were stuck replaying journals forever. I had no idea why
they were taking so long, because this is just a small cluster for testing
purposes with only hundreds of MB of data. I restarted them, and the
error below was e
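(A hedged sketch of what can help narrow down a stuck replay; mds.a is a hypothetical daemon name:)

    ceph mds stat                         # which rank/state each MDS is in
    ceph daemon mds.a perf dump           # on the MDS host, via the admin socket
    cephfs-journal-tool journal inspect   # check the journal for damage (with the MDS stopped)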
Hi,
I'm using ceph-fuse 10.2.3 on CentOS 7.3.1611. ceph-fuse always
segfaults after running for some time.
*** Caught signal (Segmentation fault) **
in thread 7f455d832700 thread_name:ceph-fuse
ceph version 10.2.3 (ecc23778eb545d8dd55e2e4735b53cc93f92e65b)
1: (()+0x2a442a) [0x7f457208e42a]
2
se.
>
> On Mon, Apr 2, 2018 at 12:56 AM, Zhang Qiang wrote:
>> Hi,
>>
>> I'm using ceph-fuse 10.2.3 on CentOS 7.3.1611. ceph-fuse always
>> segfaults after running for some time.
>>
>> *** Caught signal (Segmentation fault) **
>> i
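(For anyone reproducing this, a hedged sketch of how to get symbol names into the backtrace on CentOS; package names may vary by repo:)

    debuginfo-install ceph-fuse    # pulls the matching debuginfo package so frames resolve
    ulimit -c unlimited            # allow a core dump so the crash can be inspected with gdb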
Hi all,
I'm new to Luminous. When I use ceph-volume create to add a new
filestore OSD, it tells me that the journal's header magic is not
good, but the journal device is a new LV. How do I make it write the new
OSD's header to the journal?
And it seems this error message will not affect the cre
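(For reference, the OSD was created with something roughly like the sketch below; the VG/LV names here are hypothetical. The header-magic message appears even though the journal LV is brand new:)

    ceph-volume lvm create --filestore \
        --data ceph-vg/osd-data \
        --journal ceph-vg/osd-journal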
Hi,
Is it normal that I deleted files from CephFS and Ceph still hadn't
deleted the backing objects a day later? Only after I restarted the MDS
daemon did it start to release the storage space.
I noticed the doc (http://docs.ceph.com/docs/mimic/dev/delayed-delete/)
says the file is marked as deleted on the
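(If it's useful, one hedged way to watch whether the MDS is actually purging deleted files is to look at the stray counters over the admin socket; mds.a is a hypothetical daemon name:)

    ceph daemon mds.a perf dump | grep -i stray    # num_strays should drop as objects are purged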
Hi all,
I have 20 OSDs and 1 pool, and, as recommended by the doc
(http://docs.ceph.com/docs/master/rados/operations/placement-groups/), I
configured pg_num and pgp_num to 4096, size 2, min size 1.
But ceph -s shows:
HEALTH_WARN
534 pgs degraded
551 pgs stuck unclean
534 pgs undersized
too many
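(For reference, the rule of thumb on that page works out to roughly (20 OSDs x 100) / size 2 = 1000 PGs, rounded up to the next power of two = 1024, so 4096 is about four times the suggested total for this cluster.)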
Hi Reddy,
It's over a thousand lines, I pasted it on gist:
https://gist.github.com/dotSlashLu/22623b4cefa06a46e0d4
On Tue, 22 Mar 2016 at 18:15 M Ranga Swami Reddy
wrote:
> Hi,
> Can you please share the "ceph health detail" output?
>
> Thanks
> Swami
>
> On
his.
>
> I don't know any other way.
>
> Good luck !
>
> --
> Mit freundlichen Gruessen / Best regards
>
> Oliver Dzombic
> IP-Interactive
>
> mailto:i...@ip-interactive.de
>
> Anschrift:
>
> IP Interactive UG ( haftungsbeschraenkt )
> Zum Sonnenberg 1-3
>
Oliver, Goncalo,
Sorry to disturb again, but recreating the pool with a smaller pg_num
didn't seem to work, now all 666 pgs are degraded + undersized.
New status:
cluster d2a69513-ad8e-4b25-8f10-69c4041d624d
health HEALTH_WARN
666 pgs degraded
82 pgs stuck unclean
3 Mar 2016 at 19:10 Zhang Qiang wrote:
> Oliver, Goncalo,
>
> Sorry to disturb again, but recreating the pool with a smaller pg_num
> didn't seem to work, now all 666 pgs are degraded + undersized.
>
> New status:
> cluster d2a69513-ad8e-4b25-8f10-69c4041
nt
> hosts (I do not really know why that is happening). If you are using the
> default crush rules, ceph is not able to replicate objects (even with size
> 2) across two different hosts, because all your OSDs are on just one host.
>
> Cheers
> Goncalo
>
> ---
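(If all OSDs really do sit on a single host, a hedged sketch of the usual workaround is a CRUSH rule that distributes replicas across OSDs instead of hosts; the rule name is hypothetical and the rule id has to be looked up after creation:)

    ceph osd crush rule create-simple replicate-by-osd default osd
    ceph osd crush rule dump                      # find the new rule's id
    ceph osd pool set <pool> crush_ruleset <id>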
sd.0; both up and in. I'd recommend trying to get rid of the
> one listed on host 148_96 and see if it clears the issues.
>
>
>
> On Tue, Mar 22, 2016 at 6:28 AM, Zhang Qiang
> wrote:
>
>> Hi Reddy,
>> It's over a thousand lines, I pasted it on gist:
>>
Hi all,
According to fio, with a 4k block size, the sequential write performance of my
ceph-fuse mount is only about 20+ MB/s; at most about 200 Mb/s of the 1 Gb/s
full-duplex NIC's outgoing bandwidth was used. But with a 1M block size the
performance could reach as high as 1000 MB/s, approaching the limit of t
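(For reference, the two runs were roughly of the shape sketched below; the mount point and sizes are placeholders:)

    # 4k sequential write against the ceph-fuse mount
    fio --name=seq4k --directory=/mnt/cephfs --rw=write --bs=4k --size=1G
    # same run with 1M blocks
    fio --name=seq1m --directory=/mnt/cephfs --rw=write --bs=1M --size=1G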
80: 1000 pgs, 2 pools, 14925 MB data, 3851 objects
37827 MB used, 20837 GB / 21991 GB avail
1000 active+clean
On Fri, 25 Mar 2016 at 16:44 Christian Balzer wrote:
>
> Hello,
>
> On Fri, 25 Mar 2016 08:11:27 + Zhang Qiang wrote:
>
> > Hi all,
> >
&