5 o 20:09 użytkownik Lionel Bouton
> <mailto:lionel-subscript...@bouton.name>> napisał:
>
> On 06/22/15 17:21, Erik Logtenberg wrote:
> > I have the journals on a separate disk too. How do you disable the
> > snapshotting on the OSD?
> http://ceph
I have the journals on a separate disk too. How do you disable the
snapshotting on the OSD?
Thanks,
Erik.
On 22-06-15 12:27, Krzysztof Nowicki wrote:
> AFAIK the snapshots are useful when the journal sits inside the OSD
> filesystem. In case the journal is on a separate filesystem/device then
>
I believe this may be the same issue I reported some time ago, which is
as of yet unsolved.
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg19770.html
I used strace to figure out that the OSD's were doing an incredible
amount of getxattr, setxattr and removexattr calls, for no apparent
>> What does this do?
>>
>> - leveldb_compression: false (default: true)
>> - leveldb_block/cache/write_buffer_size (all bigger than default)
>
> I take it you're running these commands on a monitor (from I think the
> Dumpling timeframe, or maybe even Firefly)? These are hitting specific
> settin
Hi,
I ran a config diff, like this:
ceph --admin-daemon (...).asok config diff
There are the obvious things like the fsid and IP-ranges, but two
settings stand out:
- internal_safe_to_start_threads: true (default: false)
What does this do?
- leveldb_compression: false (default: true)
- leveld
Hi,
Can anyone explain what the mount options nodcache and nofsc are for,
and especially why you would want to turn these options on/off (what are
the pros and cons either way?)
Thanks,
Erik.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http:/
Hi,
Two days ago I added a new osd to one of my ceph machines, because one
of the existing osd's got rather full. There was quite a difference in
disk space usage between osd's, but I understand this is kind of just
how ceph works. It spreads data over osd's but not perfectly even.
Now check out
objects are flushed. What data exactly are you seeing that's leading
> you to believe writes are happening against these drives? What is the
> exact CephFS and cache pool configuration?
> -Greg
>
> On Mon, Mar 16, 2015 at 2:36 PM, Erik Logtenberg wrote:
>> Hi,
>>
&
hen
I'm just reading big files from cephfs.
So apparently the osd's are doing some non-trivial amount of writing on
their own behalf. What could it be?
Thanks,
Erik.
On 03/16/2015 10:26 PM, Erik Logtenberg wrote:
> Hi,
>
> I am getting relatively bad performance from cephfs.
Hi,
I am getting relatively bad performance from cephfs. I use a replicated
cache pool on ssd in front of an erasure coded pool on rotating media.
When reading big files (streaming video), I see a lot of disk i/o,
especially writes. I have no clue what could cause these writes. The
writes are goi
root=ssd"
[osd.1]
host = ceph-01
[osd.2]
host = ceph-01
You see all osd's are linked to the right hostname. But the ssd osd is
then explicitly set to go into the right crush location too.
Kind regards,
Erik.
On 12/30/2014 11:11 PM, Lindsay Mathieson wrote:
> On Tue, 30 Dec 2014
with your physical layout.
Kind regards,
Erik.
On 12/30/2014 10:18 PM, Lindsay Mathieson wrote:
> On Tue, 30 Dec 2014 04:18:07 PM Erik Logtenberg wrote:
>> As you can see, I have four hosts: ceph-01 ... ceph-04, but eight
>> host entries. This works great.
>
>
> you
>
> Hi Erik,
>
> I have tiering working on a couple test clusters. It seems to be
> working with Ceph v0.90 when I set:
>
> ceph osd pool set POOL hit_set_type bloom
> ceph osd pool set POOL hit_set_count 1
> ceph osd pool set POOL hit_set_period 3600
> ceph osd pool set POOL cache_target_d
Hi Lindsay,
Actually you just setup two entries for each host in your crush map. One
for hdd's and one for ssd's. My osd's look like this:
# idweight type name up/down reweight
-6 1.8 root ssd
-7 0.45host ceph-01-ssd
0 0.45osd.0 up
Hi,
I use a cache tier on SSD's in front of the data pool on HDD's.
I don't understand the logic behind the flushing of the cache however.
If I start writing data to the pool, it all ends up in the cache pool at
first. So far so good, this was what I expected. However ceph never
starts actually f
Whoops, I accidently sent my mail before it was finished. Anyway I have
some more testing to do, especially with converting between
erasure/replicated pools. But it looks promising.
Thanks,
Erik.
On 23-12-14 16:57, Erik Logtenberg wrote:
> Hi,
>
> Every now and then someone ask
Hi,
Every now and then someone asks if it's possible to convert a pool to a
different type (replicated vs erasure / change the amount of pg's /
etc), but this is not supported. The advised approach is usually to just
create a new pool and somehow copy all data manually to this new pool,
removing t
If you are like me, you have the journals for your OSD's with rotating
media stored separately on an SSD. If you are even more like me, you
happen to use Intel 530 SSD's in some of your hosts. If so, please do
check your S.M.A.R.T. statistics regularly, because these SSD's really
can't cope with Ce
Hi,
I would like to mount a cephfs share from fstab, but it doesn't
completely work.
First of all, I followed the documentation [1], which resulted in the
following line in fstab:
ceph-01:6789:/ /mnt/cephfs/ ceph
name=testhost,secretfile=/root/testhost.key,noacl 0 2
Yes, this works when I manua
Hi,
I noticed that the docs [1] on adding and removing an MDS are not yet
written...
[1] https://ceph.com/docs/master/rados/deployment/ceph-deploy-mds/
I would like to do exactly that, however. I have an MDS on one machine,
but I'd like a faster machine to take over instead. In fact, It would be
I think I might be running into the same issue. I'm using Giant though.
A lot of slow writes. My thoughts went to: the OSD's get too much work
to do (commodity hardware), so I'll have to do some performance tuning
to limit parallellism a bit. And indeed, limiting the amount of threads
for different
I know that it is possible to run CephFS with a cache tier on the data
pool in Giant, because that's what I do. However when I configured it, I
was on the previous release. When I upgraded to Giant, everything just
kept working.
By the way when I set it up, I used the following commmands:
ceph os
Hi,
Every time I start any OSD, it always logs that it tried to remove two
btrfs snapshots but failed:
2014-11-15 22:31:08.251600 7f1730f71700 -1
filestore(/var/lib/ceph/osd/ceph-5) unable to destroy snap
'snap_3020746' got (2) No such file or directory
2014-11-15 22:31:09.661161 7f1730f71700 -1
ike 5 minutes before the problems start.
>
> Is there anything interesting in the kernel logs? OOM killers, or
> memory deadlocks?
>
>
>
> On Sat, Nov 8, 2014 at 11:19 AM, Erik Logtenberg <mailto:e...@logtenberg.eu>> wrote:
>
> Hi,
>
> I have som
I have no experience with the DELL SAS controller, but usually the
advantage of using a simple controller (instead of a RAID card) is that
you can use full SMART directly.
$ sudo smartctl -a /dev/sda
=== START OF INFORMATION SECTION ===
Device Model: INTEL SSDSA2BW300G3H
Serial Number:PEP
Oops, my apologies if the 3MB logfile that I sent to this list
yesterday was annoying to anybody. I didn't realize that the
combination "low bandwith / high mobile tariffs" and "email client
that automatically downloads all attachments" was still a thing.
Apparently it is.
Next time I'll upload a l
Hi,
My MDS is very slow, and it logs stuff like this:
2014-11-07 23:38:41.154939 7f8180a31700 0 log_channel(default) log
[WRN] : 2 slow requests, 1 included below; oldest blocked for >
187.777061 secs
2014-11-07 23:38:41.154956 7f8180a31700 0 log_channel(default) log
[WRN] : slow request 121.32
Hi,
There is a small bug in the Fedora package for ceph-0.87. Two days ago,
Boris Ranto built the first 0.87 package, for Fedora 22 (rawhide) [1].
[1] http://koji.fedoraproject.org/koji/buildinfo?buildID=589731
This build was a succes, so I took that package and built it for Fedora
20 (which is
10/30/2014 05:13 PM, John Spray wrote:
> There are a couple of open tickets about bogus (negative) stats on PGs:
> http://tracker.ceph.com/issues/5884
> http://tracker.ceph.com/issues/7737
>
> Cheers,
> John
>
> On Thu, Oct 30, 2014 at 12:38 PM, Erik Logtenberg wrote:
>> Yesterday I removed two OSD's, to replace them with new disks. Ceph was
>> not able to completely reach all active+clean state, but some degraded
>> objects remain. However, the amount of degraded objects is negative
>> (-82), see below:
>>
>
> So why didn't it reach that state?
Well, I dunno,
Hi,
Yesterday I removed two OSD's, to replace them with new disks. Ceph was
not able to completely reach all active+clean state, but some degraded
objects remain. However, the amount of degraded objects is negative
(-82), see below:
2014-10-30 13:31:32.862083 mon.0 [INF] pgmap v209175: 768 pgs: 7
I would like to add that removing log files (/var/log/ceph is also
removed on uninstall) is also a bad thing.
My suggestion would be to simply drop the whole %postun trigger, since
it does only these two very questionable things.
Thanks,
Erik.
On 10/22/2014 09:16 PM, Dmitry Borodaenko wrote:
>
t; According to my understanding, the weight of a host is the sum of all osd
> weights on this host. So you just reweight any osd on this host, the
> weight of this host is reweighed.
>
> Thanks
> LeiDong
>
> On 10/20/14, 7:11 AM, "Erik Logtenberg" wrote:
>
>> Hi,
&
Hi,
Simple question: how do I reweight a host in crushmap?
I can use "ceph osd crush reweight" to reweight an osd, but I would like
to change the weight of a host instead.
I tried exporting the crushmap, but I noticed that the weights of all
hosts are commented out, like so:
# weight 5.
>>
>> I haven't done the actual calculations, but given some % chance of disk
>> failure, I would assume that losing x out of y disks has roughly the
>> same chance as losing 2*x out of 2*y disks over the same period.
>>
>> That's also why you generally want to limit RAID5 arrays to maybe 6
>> disk
Now, there are certain combinations of K and M that appear to have more
or less the same result. Do any of these combinations have pro's and
con's that I should consider and/or are there best practices for
choosing the right K/M-parameters?
>>
>> Loic might have a better an
Hi,
With EC pools in Ceph you are free to choose any K and M parameters you
like. The documentation explains what K and M do, so far so good.
Now, there are certain combinations of K and M that appear to have more
or less the same result. Do any of these combinations have pro's and
con's that I s
Hi,
Be sure to check this out:
http://ceph.com/community/ceph-calamari-goes-open-source/
Erik.
On 11-08-14 08:50, Irek Fasikhov wrote:
> Hi.
>
> I use ZABBIX with the following script:
> [ceph@ceph08 ~]$ cat /etc/zabbix/external/ceph
> #!/usr/bin/python
>
> import sys
> import os
> import c
> Yeah, Ceph will never voluntarily reduce the redundancy. I believe
> splitting the "degraded" state into separate "wrongly placed" and
> "degraded" (reduced redundancy) states is currently on the menu for
> the Giant release, but it's not been done yet.
That would greatly improve the accuracy o
Hi,
RHEL7 repository works just as well. CentOS 7 is effectively a copy of
RHEL7 anyway. Packages for CentOS 7 wouldn't actually be any different.
Erik.
On 07/10/2014 06:14 AM, Alexandre DERUMIER wrote:
> Hi,
>
> I would like to known if a centos7 respository will be available soon ?
>
> Or ca
Hi,
If you add an OSD to an existing cluster, ceph will move some existing
data around so the new OSD gets its respective share of usage right away.
Now I noticed that during this moving around, ceph reports the relevant
PG's as degraded. I can more or less understand the logic here: if a
piece o
Hi,
I have some osd's on hdd's and some on ssd's, just like the example in
these docs:
http://ceph.com/docs/firefly/rados/operations/crush-map/
Now I'd like to place an erasure encoded pool on the hdd's and a
replicated (cache) pool on the ssd's. In order to do that, I have to
split the crush ma
;s. Simply mounting with acl's enabled was enough to
cause the issue apparently.
So, do you have enough information to possibly fix it, or is there any
way that I can provide additional information?
Thanks,
Erik.
On 06/30/2014 05:13 AM, Yan, Zheng wrote:
> On Mon, Jun 30, 2014 at 4:25 A
Ahhh now -that- is some useful information, thanks!
On 06/30/2014 07:57 PM, Alfredo Deza wrote:
> On Mon, Jun 30, 2014 at 9:33 AM, Erik Logtenberg wrote:
>> Ah, okay I missed that.
>>
>> So, what distributions/versions are supported then? I see that the FC20
>> part o
e are building for FC19 anymore.
>
> There are some dependencies that could not be met for Ceph in FC19 so we
> decided to stop trying to get builds out for that.
>
> On Sun, Jun 29, 2014 at 2:52 PM, Erik Logtenberg wrote:
>> Nice work! When will the new rpm's be re
Nice work! When will the new rpm's be released on
http://ceph.com/rpm/fc19/x86_64/ ?
Thanks,
Erik.
On 06/27/2014 10:55 PM, Sage Weil wrote:
> This is the second post-firefly development release. It includes a range
> of bug fixes and some usability improvements. There are some MDS
> debuggi
jun 00:09 hoi2
So now it's group and world writable on both hosts.
Kind regards,
Erik.
On 06/19/2014 11:37 PM, Erik Logtenberg wrote:
> I am using the kernel client.
>
> kernel: 3.14.4-100.fc19.x86_64
> ceph: ceph-0.80.1-0.fc19.x86_64
>
> Actually, I seem to be able to re
Hi Loic,
That is a nice idea. And if I then use newfs against that replicated
cache pool, it'll work reliably?
Kind regards,
Erik.
On 06/19/2014 11:09 PM, Loic Dachary wrote:
>
>
> On 19/06/2014 22:51, Wido den Hollander wrote:
>>
>>
>>
>>
&g
Hi Ilya,
Do you happen to know when this fix will be released?
Is upgrading to a newer kernel (client side) still a solution/workaround
too? If yes, which kernel version is required?
Kind regards,
Erik.
> The "if there are any erasure code pools in the cluster, kernel clients
> (both krbd and
. Exactly the same
results.
Kind regards,
Erik.
On 06/16/2014 02:32 PM, Yan, Zheng wrote:
> were you using ceph-fuse or kernel client? ceph version and kernel
> version? how reliably you can reproduce this problem?
>
> Regards
> Yan, Zheng
>
> On Sun, Jun 15, 2014 at 4:42 AM, E
Hi,
Are erasure coded pools suitable for use with MDS?
I tried to give it a go by creating two new pools like so:
# ceph osd pool create ecdata 128 128 erasure
# ceph osd pool create ecmetadata 128 128 erasure
Then looked up their id's:
# ceph osd lspools
..., 6 ecdata,7 ecmetadata
# ceph mds
r.ceph.com/issues/8599 but not yet released).
>
> B) the jerasure plugin fails to load for some reason and the mon
> logs should tell us why
>
> Cheers
>
> On 15/06/2014 09:47, Loic Dachary wrote:
>> Hi Erik,
>>
>> Did you upgrade the cluster or is it a new
ou upgrade the cluster or is it a new cluster ? Could you
> please ls -l /usr/lib64/ceph/erasure-code ? If you're connected on
> irc.oftc.net#ceph today feel free to ping me ( loicd ).
>
> Cheers
>
> On 14/06/2014 23:25, Erik Logtenberg wrote:
>> Hi,
>>
Hi,
I'm trying to set up an erasure coded pool, as described in the Ceph docs:
http://ceph.com/docs/firefly/dev/erasure-coded-pool/
Unfortunately, creating a pool like that gives me the following error:
# ceph osd pool create ecpool 12 12 erasure
Error EINVAL: cannot determine the erasure code
Hi,
So... I wrote some files into that directory to test performance, and
now I notice that both hosts see the permissions the right way, like
they were when I first created the directory.
What is going on here? ..
Erik.
On 06/14/2014 10:32 PM, Erik Logtenberg wrote:
> Hi,
>
> I r
Hi,
I ran into a weird issue with cephfs today. I create a directory like this:
# mkdir bla
# ls -al
drwxr-xr-x 1 root root0 14 jun 22:22 bla
Now on another host, with the same cephfs mounted, I see different
permissions:
# ls -al
drwxrwxrwx 1 root root 0 14 jun 22:22 bla
Weird, huh?
Bac
Hi,
In march 2013 Greg wrote an excellent blog posting regarding the (then)
current status of MDS/CephFS and the plans for going forward with
development.
http://ceph.com/dev-notes/cephfs-mds-status-discussion/
Since then, I understand progress has been slow, and Greg confirmed that
he didn't wa
uot;
every now and then. But I don't see any cause for this. Apart from the
few benchmarks I ran, there is no activity whatsoever.
Erik.
On 08/02/2013 01:34 PM, Mark Nelson wrote:
> Hi Erik,
>
> Is your mon still running properly?
>
> Mark
>
> On 08/01/2013 05:06 PM, E
t; Logging might well help.
>
> http://ceph.com/docs/master/rados/troubleshooting/log-and-debug/
>
>
>
> On 07/31/2013 03:51 PM, Erik Logtenberg wrote:
>> Hi,
>>
>> I just added a second node to my ceph test platform. The first node has
>> a mon and three
Hi,
I just added a second node to my ceph test platform. The first node has
a mon and three osd's, the second node only has three osd's. Adding the
osd's was pretty painless, and ceph distributed the data from the first
node evenly over both nodes so everything seems to be fine. The monitor
also t
o be fixed
> there instead of adding a workaround to the ceph spec.
>
> Regards,
>
> Danny
>
> Am 30.07.2013 09:42, schrieb Erik Logtenberg:
>> Hi,
>>
>> Fedora, in this case Fedora 19, x86_64.
>>
>> Kind regards,
>>
>> Erik.
>
sure Ceph builds on Fedora.
Signed-off-by: Erik Logtenberg
---
--- ceph.spec-orig 2013-07-30 00:24:54.70500 +0200
+++ ceph.spec 2013-07-30 01:20:23.59300 +0200
@@ -42,6 +42,8 @@
BuildRequires: libxml2-devel
BuildRequires: libuuid-devel
BuildRequires: leveldb-devel > 1.2
+BuildRequi
l pick up the correct packages needed
> to build ceph.
>
> Which distro do you use?
>
> Danny
>
> Am 30.07.2013 01:33, schrieb Patrick McGarry:
>> ------ Forwarded message --
>> From: Erik Logtenberg
>> Date: Mon, Jul 29, 2013 at 7:07
Hi,
The spec file used for building rpm's misses a build time dependency on
snappy-devel. Please see attached patch to fix.
Kind regards,
Erik.
--- ceph.spec-orig 2013-07-30 00:24:54.70500 +0200
+++ ceph.spec 2013-07-30 00:25:34.19900 +0200
@@ -42,6 +42,7 @@
BuildRequires: libxml2-deve
> * osd: pg log (re)writes are not vastly more efficient (faster peering)
>(Sam Just)
Do you really mean "are not"? I'd think "are now" would make sense (?)
- Erik.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/list
65 matches
Mail list logo