Yes, I can find the ceph logrotate configuration file in the /etc/logrotate.d directory.
Also, I found something weird:
drwxr-xr-x 2 root root 4.0K Dec 3 14:54 ./
drwxrwxr-x 19 root syslog 4.0K Dec 3 13:33 ../
-rw--- 1 root root 0 Dec 2 06:25 ceph.audit.log
-rw--- 1 root root 85K No
You can set up logrotate however you want; I'm not sure what the default is for
your distro.
Usually logrotate doesn't touch files that are smaller than some size even if
they are old. It will also not delete logs for OSDs that no longer exist.
Ceph itself has nothing to do with log rotation, logro
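For reference, a minimal sketch of what an /etc/logrotate.d/ceph file can look like (an illustration only, not the exact file your distro ships; the killall signal line in particular is an assumption based on common packaged versions):

# /etc/logrotate.d/ceph -- minimal sketch, adjust to your setup
/var/log/ceph/*.log {
    rotate 7
    daily
    compress
    sharedscripts
    postrotate
        killall -q -1 ceph-mon ceph-osd ceph-mds || true
    endscript
    missingok
    notifempty
}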
I'm following this presentation from the Mirantis team:
http://www.slideshare.net/mirantis/ceph-talk-vancouver-20
They calculate CEPH IOPS = Disk IOPS * HDD Quantity * 0.88 (4-8k random
read proportion)
And VM IOPS = CEPH IOPS / VM Quantity
But if I use a replication factor of 3, would VM IOPS be divided by
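A quick worked example of the formula above, using purely hypothetical numbers (100 IOPS per 7.2k HDD, 30 HDDs, 20 VMs, replica size 3); note that replication mainly penalizes writes, since each client write has to land on every replica:

awk 'BEGIN {
  disk_iops = 100; hdds = 30; vms = 20; replicas = 3      # hypothetical inputs
  ceph_iops = disk_iops * hdds * 0.88                     # aggregate read IOPS ~= 2640
  print "per-VM read IOPS  ~", ceph_iops / vms            # ~132, reads are served by the primary only
  print "per-VM write IOPS ~", ceph_iops / vms / replicas # ~44, every write hits all 3 replicas
}'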
Hi all,
I upgraded from hammer to infernalis today, and even though I had a hard time
doing so (mainly my fault, because I did not read the release notes carefully),
I finally got my cluster running in a healthy state.
But when I try to list my disks with "ceph-disk list" I get the following
Trac
Hi Felix,
This is a bug; I filed an issue for you at http://tracker.ceph.com/issues/13970
Cheers
On 03/12/2015 10:56, Stolte, Felix wrote:
> Hi all,
>
>
>
> i upgraded from hammer to infernalis today and even so I had a hard time
> doing so I finally got my cluster running in a healthy state
When installing ceph infernalis (v9.2.0), it requires the package
selinux-policy-base-3.13.1-23.el7_1.18.noarch.rpm. I tried searching for it on Google
but got nothing. Does anyone know how to get it?
A couple of things to check:
1. Can you create just a normal non-cached pool and test performance, to
rule out any funnies going on there?
2. Can you also run something like iostat during the benchmarks and see if
it looks like all your disks are getting saturated?
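For point 2, something along these lines works (iostat comes from the sysstat package; the 2-second interval is arbitrary):

iostat -xmt 2
# watch the %util and await columns while the benchmark runs; sustained ~100% util
# means the disks themselves are saturated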
I would suggest you forget about 15k disks; there probably isn't much point in
using them vs SSDs nowadays. For 10k disks, if cost is a key factor I would
maybe look at the WD Raptor disks.
In terms of numbers of disks, it's very hard to calculate with the numbers you
have provided. That simple
OK! One more question. Do you know why ceph has two ways of outputting logs (dout
and clog)? I find dout more helpful than clog. Did ceph use clog first,
with dout added in a later version?
-
wukongming ID: 12019
Tel:0571-86760239
Dept:2014 UIS2 ONEStor
I did some more tests:
fio on a raw RBD volume (4K, numjobs=32, QD=1) gives me around 3000 IOPS.
I also tuned the xfs mount options on the client (I realized I hadn't done that
already), and with
"largeio,inode64,swalloc,logbufs=8,logbsize=256k,attr2,auto,nodev,noatime,nodiratime"
I get better performance:
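Something like the following fio invocation matches the parameters mentioned (4K, 32 jobs, QD=1); the device path, randread and libaio engine are my assumptions, not something stated above:

fio --name=rbd-4k --filename=/dev/rbd0 --rw=randread --bs=4k --numjobs=32 \
    --iodepth=1 --direct=1 --ioengine=libaio --runtime=60 --time_based --group_reporting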
This is the clean way to handle this. But you can also use udev to do this
at boot. From what I found on the mailing list and got working, before
using GUIDs:
cat > /etc/udev/rules.d/89-ceph-journal.rules << EOF
KERNEL=="sda?", SUBSYSTEM=="block", OWNER="ceph", GROUP="disk", MODE="0660"
KERNEL=="sdb?", SUBSYSTEM=="block", OWNER="ceph", GROUP="disk", MODE="0660"
EOF
What distribution and version/release are you trying to install it on?
On a CentOS 7 box I see it is available:
$ sudo yum provides selinux-policy-base
...
selinux-policy-minimum-3.13.1-23.el7.noarch : SELinux minimum base policy
Repo: base
Matched from:
Provides: selinux-policy-base =
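If it shows up in "yum provides" like that, installing by the virtual provide name should pull in a concrete policy package (a sketch; which version you end up with depends on the repos, e.g. updates, that are enabled):

sudo yum install selinux-policy-base
# yum resolves the selinux-policy-base provide to one of the minimum/targeted/mls packages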
Hi, All:
I've got a question about a priority. We defined
osd_client_op_priority = 63, while CEPH_MSG_PRIO_LOW = 64.
We understand there are multiple kinds of IO to consider, but why not define
osd_client_op_priority > 64, so that client IO is handled at the highest
priority?
Hi Loic,
thanx for the quick reply and filing the issue.
Regards
On Thu, 3 Dec 2015, Wukongming wrote:
> OK! One more question. Do you know why ceph has 2 ways outputting
> logs(dout && clog). Cause I find dout is more helpful than clog, Did
> ceph use clog first, and dout added for new version?
clog is the 'cluster log', which gets aggregated into a single lo
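In practical terms (the paths below are the usual defaults and may differ on your install):

ceph -w    # follows the cluster log (clog) live from any node with a client keyring
# the aggregated clog is also written on the monitors, typically /var/log/ceph/ceph.log,
# while dout() output lands in each daemon's own file, e.g. /var/log/ceph/ceph-osd.0.log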
Hi,
On 03/12/2015 12:12, Florent B wrote:
> It seems that if some OSD are using journal devices, ceph user needs to
> be a member of "disk" group on Debian. Can someone confirm this ?
Yes, I confirm... if you are talking about the journal partitions of OSDs.
Another solution: via a udev rule, se
Hi,
for testing I would like to create some OSDs in the hammer release with
journal size 0.
I included this in ceph.conf:
[osd]
osd journal size = 0
Then I zapped the disk in question and tried:
'ceph-deploy disk zap o1:sda'
Thank you for your advice on how to prepare an OSD without a journal /
jo
Reweighting the OSD to 0.0 or setting the osd out (but not terminating
the process) should allow it to backfill the PGs to a new OSD. I would
try the reweight first (and in a test environment).
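A sketch of the two options, using osd.12 purely as a placeholder id:

ceph osd reweight 12 0.0   # drain PGs off osd.12 while the daemon keeps running
ceph osd out 12            # or: mark osd.12 out without stopping the process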
-
Robert LeBlanc
On 3 Dec 2015 8:56 p.m., "Florent B" wrote:
>
> By the way, when system boots, "ceph" service is starting everything
> fine. So "ceph-osd@" service is disabled => how to restart an OSD ?!
>
AFAIK, ceph now has 2 services:
1. Mount the device
2. Start the OSD
Also, a service can be disabled, but this does not m
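On infernalis with systemd, restarting a single OSD usually looks like this (the id 3 is a placeholder; unit names differ if you are still on the sysvinit wrapper):

sudo systemctl restart ceph-osd@3
sudo systemctl enable ceph-osd@3   # re-enable it if the unit was left disabled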
On 3 Dec 2015 9:35 p.m., "Robert LeBlanc" wrote:
>
> Reweighting the OSD to 0.0 or setting the osd out (but not terminating
> the process) should allow it to backfill the PGs to a new OSD. I would
> try the reweight first (and in a test environ
On Fri, 2015-11-27 at 10:00 +0100, Laurent GUERBY wrote:
> >
> > Hi, from given numbers one can conclude that you are facing some kind
> > of XFS preallocation bug, because ((raw space divided by number of
> > files)) is four times lower than the ((raw space divided by 4MB
> > blocks)). At a glanc
I think OSDs are automatically mounted at boot via udev rules, and that the
ceph service does not handle the mounting part.
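That matches what the package ships: a udev rule keyed on the ceph partition type GUIDs which calls ceph-disk to activate and mount the OSD. The exact path below may vary by distro and release:

cat /lib/udev/rules.d/95-ceph-osd.rules
# matches the ceph OSD/journal partition GUIDs and runs ceph-disk to activate them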
On Thu, Dec 3, 2015 at 7:40 PM, Florent B wrote:
> Hi,
>
> On 12/03/2015 07:36 PM, Timofey Titovets wrote:
>
>
> On 3 Dec 2015 8:56 p.m., "Florent B" <
> flor...@coppint.com>
Lol, it's open source, guys
https://github.com/ceph/ceph/tree/master/systemd
ceph-disk@
2015-12-03 21:59 GMT+03:00 Florent B :
> "ceph" service does mount :
>
> systemctl status ceph -l
> ● ceph.service - LSB: Start Ceph distributed file system daemons at boot
> time
>Loaded: loaded (/etc/init.d
echo add >/sys/block/sdX/sdXY/uevent
The easiest way to make it mount automagically
Jan
> On 03 Dec 2015, at 20:31, Timofey Titovets wrote:
>
> Lol, it's opensource guys
> https://github.com/ceph/ceph/tree/master/systemd
> ceph-disk@
>
> 2015-12-03 21:59 GMT+03:00 Florent B :
>> "ceph" servic
Some users have already asked on the list about this problem on Debian.
You can fix it by:
ln -sv
Or by:
systemctl edit --full ceph-disk@.service
Just choose the way
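Spelled out, the two workarounds look roughly like this on Debian, assuming flock really lives at /usr/bin/flock as reported below:

ln -sv /usr/bin/flock /bin/flock           # symlink so the unit's hard-coded path exists
# or edit the unit and point it at the right flock path:
systemctl edit --full ceph-disk@.service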
2015-12-03 23:00 GMT+03:00 Florent B :
> Ok and /bin/flock is supposed to exist on all systems ? Don't have it on
> Debian... flock is at /usr
I would be a lot more conservative in terms of what a spinning drive can
do. The Mirantis presentation has pretty high expectations of a spinning
drive, as they're somewhat ignoring latency (until the last few slides).
Look at the max latencies for anything above 1 QD on a spinning drive.
If you
Hi,
On 03/12/2015 21:00, Florent B wrote:
> Ok and /bin/flock is supposed to exist on all systems ? Don't have it on
> Debian... flock is at /usr/bin/flock
I filed a bug for this: http://tracker.ceph.com/issues/13975
Cheers
>
> My problem is that "ceph" service is doing everything, and all ot
We were able to prevent the blacklist operations, and now the cluster is
much happier; however, the OSDs have not started cleaning up old osd maps
after 48 hours. Is there anything we can do to poke them into cleaning
those up?
On Wed, Dec 2, 2015 at 11:25 AM, Gregory Far
This was sent to the ceph-maintainers list; answering here:
On 11/25/2015 02:54 AM, Alaâ Chatti wrote:
> Hello,
>
> I used to install qemu-ceph on centos 6 machine from
> http://ceph.com/packages/, but the link has been removed, and there is
> no alternative in the documentation. Would you please
On Thu, Dec 3, 2015 at 5:53 PM, Dan Mick wrote:
> This was sent to the ceph-maintainers list; answering here:
>
> On 11/25/2015 02:54 AM, Alaâ Chatti wrote:
>> Hello,
>>
>> I used to install qemu-ceph on centos 6 machine from
>> http://ceph.com/packages/, but the link has been removed, and there i
I have a file which is untouchable: ls -i gives an error, stat gives an
error. It shows ??? for all fields except the name.
How do I clean this up?
I'm on Ubuntu 15.10, running 0.94.5:
# ceph -v
ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43)
the node that accessed the file then caused
Hello Cephers,
I am unable to create the initial monitor during ceph cluster deployment. I do
not know what changed, since the same recipe used to work until very recently.
These are the steps I used:
ceph-deploy new -- works
dpkg -i -R -- works
ceph-deploy mon create-initial -- fails
Log:
[ceph
Hi Adrien...
Thanks for the pointer. It effectively solved our issue.
Cheers
G.
On 12/04/2015 12:53 AM, Adrien Gillard wrote:
This is the clean way to handle this. But you can also use udev to do
this at boot. From what I found on the mailing list and made working
before using GUID :
cat >
Hi Haomai,
The question I asked above is a bit tough, but do you know the answer?
-
wukongming ID: 12019
Tel:0571-86760239
Dept:2014 UIS2 ONEStor
-----Original Message-----
From: wukongming 12019 (RD)
Sent: 3 December 2015 22:15
To: ceph-de...@vger.kernel.org; ceph-user
In SimpleMessenger, a client op like OSD_OP is dispatched by
ms_fast_dispatch and is not queued in the PrioritizedQueue in the Messenger.
2015-12-03 22:14 GMT+08:00 Wukongming :
> Hi, All:
> I 've got a question about a priority. We defined
> osd_client_op_priority = 63. CEPH_MSG_PRIO_LOW = 64.
>