Hi,
What would make the decision easier: if we knew that we could easily
revert the
> "ceph osd crush tunables optimal"
once it has begun rebalancing data?
Meaning: if we notice that the impact is too high, or that it will take too long,
could we simply say
> "ceph osd crush tunables hammer"
again?
Thanks Jake, for your extensive reply. :-)
MJ
On 3-10-2017 15:21, Jake Young wrote:
On Tue, Oct 3, 2017 at 8:38 AM lists <li...@merit.unu.edu> wrote:
Hi,
What would make the decision easier: if we knew that we could easily
revert the
> "ceph o
Hi,
Yesterday I chowned our /var/lib/ceph to ceph, to completely finalize our
jewel migration, and noticed something interesting.
After I brought back up the OSDs I just chowned, the system had some
recovery to do. During that recovery, the system went to HEALTH_ERR for
a short moment:
See be
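For anyone following along, the usual per-OSD sequence looks roughly like this (osd.10 is just an example id):
  systemctl stop ceph-osd@10
  chown -R ceph:ceph /var/lib/ceph/osd/ceph-10
  systemctl start ceph-osd@10
Doing it one OSD at a time, ideally with noout set, keeps the recovery window per OSD small.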
ok, thanks for the feedback Piotr and Dan!
MJ
On 4-10-2017 9:38, Dan van der Ster wrote:
Since Jewel (AFAIR), when (re)starting OSDs, pg status is reset to "never
contacted", resulting in "pgs are stuck inactive for more than 300 seconds"
being reported until osds regain connections between the
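If anyone wants to check what their cluster considers "stuck", the threshold is a mon option readable from the admin socket (mon.ceph1 is a placeholder for your mon name):
  ceph daemon mon.ceph1 config get mon_pg_stuck_threshold
  ceph health detail        # shows which pgs are currently counted as stuck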
Hi,
On our three-node 24 OSDs ceph 10.2.10 cluster, we have started seeing
slow requests on a specific OSD, during the two-hour nightly xfs_fsr
run from 05:00 - 07:00. This started after we applied the meltdown patches.
The specific osd.10 also has the highest space utilization of all OSD
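A couple of commands that help narrow this kind of thing down (osd.10 as in the mail, everything else standard):
  ceph osd df tree                         # per-OSD utilisation and pg counts
  ceph daemon osd.10 dump_historic_ops     # the slowest recent ops, with per-event timings
  ceph daemon osd.10 dump_ops_in_flight    # ops currently blocked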
Hi Wes,
On 15-1-2018 20:32, Wes Dillingham wrote:
I don't hear a lot of people discuss using xfs_fsr on OSDs and going over
the mailing list history it seems to have been brought up very
infrequently and never as a suggestion for regular maintenance. Perhaps
it's not needed.
True, it's just some
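If it helps anyone judge whether the nightly run is worth it: fragmentation can be measured read-only, and xfs_fsr can be bounded in time. A sketch; device and mountpoint are placeholders:
  xfs_db -c frag -r /dev/sdc1                  # report the fragmentation factor, read-only
  xfs_fsr -t 7200 /var/lib/ceph/osd/ceph-10    # reorganise for at most two hours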
Hi Wes,
On 15-1-2018 20:57, Wes Dillingham wrote:
My understanding is that the exact same objects would move back to the
OSD if weight went 1 -> 0 -> 1 given the same Cluster state and same
object names, CRUSH is deterministic so that would be the almost certain
result.
Ok, thanks! So this
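For completeness, the 1 -> 0 -> 1 round trip from the mail would look like this (osd.10 is a placeholder):
  ceph osd reweight osd.10 0     # drain the OSD
  ceph osd reweight osd.10 1     # the same pgs map back, CRUSH being deterministic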
Hi all,
We're pretty new to ceph, but loving it so far.
We have a three-node cluster, four 4TB OSDs per node, journal (5GB) on
SSD, 10G ethernet cluster network, 64GB ram on the nodes, total 12 OSDs.
We noticed the following output when using ceph bench:
root@ceph1:~# rados bench -p scbench
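In case anyone wants to reproduce, the usual rados bench sequence is along these lines (pool name scbench as above; the pg count is just a guess for 12 OSDs):
  ceph osd pool create scbench 128 128
  rados bench -p scbench 60 write --no-cleanup
  rados bench -p scbench 60 seq
  rados bench -p scbench 60 rand
  rados -p scbench cleanup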
Hi Christian,
Thanks for your reply.
What SSD model (be precise)?
Samsung 480GB PM863 SSD
Only one SSD?
Yes. With a 5GB partition based journal for each osd.
During the 0 MB/sec, there is NO increased cpu usage: it is usually
around 15 - 20% for the four ceph-osd processes.
Watch your n
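A standard way to check whether the journal SSD itself is the bottleneck is a single-job synchronous 4k write test with fio, against a scratch file or spare partition on the SSD (never the live journal partitions; the path below is a placeholder):
  fio --name=journal-test --filename=/mnt/ssd/fio.tmp --size=1G \
      --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 \
      --runtime=60 --time_based
A low number here would point at the journal device rather than the OSDs.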
Hi Pardhiv,
Thanks for sharing!
MJ
On 11-6-2018 22:30, Pardhiv Karri wrote:
Hi MJ,
Here are the links to the script and config file. Modify the config file
as you wish, values in config file can be modified while the script
execution is in progress. The script can be run from any monitor or
Hi John, list,
On 1-4-2017 16:18, John Petrini wrote:
Just ntp.
Just to follow-up on this: we have yet experienced a clock skew since we
started using chrony. Just three days ago, I know, but still...
Perhaps you should try it too, and report if it (seems to) work better
for you as well.
Hi Dan,
did you mean "we have not yet..."?
Yes! That's what I meant.
Chrony does a much better job than NTP, at least here :-)
MJ
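For anyone who wants to compare before and after, both the cluster view and the chrony view are easy to check (command names only, nothing cluster-specific):
  ceph time-sync-status     # the monitors' view of clock skew
  chronyc tracking          # chrony's own offset estimate
  chronyc sources -v        # and the sources it is using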
I was building a small test cluster and noticed a difference with trying
to rbd map depending on whether the cluster was built using fedora or
CentOS.
When I used CentOS osds, and tried to rbd map from arch linux or fedora,
I would get "rbd: add failed: (34) Numerical result out of range". It
se
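When a map fails like that, the kernel log usually says more than the rbd tool does. A generic debugging sketch, not specific to this error; pool/image names are made up:
  rbd map rbd/testimg
  dmesg | tail             # the kernel client usually logs the real reason here
  rbd info rbd/testimg     # worth comparing image features/order between the two builds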
I remember reading somewhere that the kernel ceph clients (rbd/fs) could
not run on the same host as the OSD. I tried finding where I saw that,
and could only come up with some irc chat logs.
The issue stated there is that there can be some kind of deadlock. Is
this true, and if so, would you ha
I've just tried setting up the radosgw on centos6 according to
http://ceph.com/docs/master/radosgw/config/
There didn't seem to be an init script in the rpm I installed, so I
copied the one from here:
https://raw.githubusercontent.com/ceph/ceph/31b0823deb53a8300856db3c104c0e16d05e79f7/src/init
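For comparison, the gateway section in ceph.conf from that documentation era looked roughly like this (host and instance name are placeholders):
  [client.radosgw.gateway]
  host = gateway-host
  keyring = /etc/ceph/ceph.client.radosgw.keyring
  rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
  log file = /var/log/ceph/client.radosgw.gateway.log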
On 2014-06-17 07:30, John Wilkins wrote:
You followed this installation guide:
http://ceph.com/docs/master/install/install-ceph-gateway/
And then you followed this http://ceph.com/docs/master/radosgw/config/
configuration guide, and then you executed:
sudo /etc/init.d/ceph-radosgw start
On 2014-06-16 13:16, lists+c...@deksai.com wrote:
I've just tried setting up the radosgw on centos6 according to
http://ceph.com/docs/master/radosgw/config/
While I can run the admin commands just fine to create users etc.,
making a simple wget request to the domain I set up returns a 50
Guess I'll try again. I gave this another shot, following the
documentation, and still end up with basically a fork bomb rather than
the nice ListAllMyBucketsResult output that the docs say I should get.
Everything else about the cluster works fine, and I see others
talking about the gateway
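One thing worth double-checking with the fork-bomb symptom: the FastCGI wrapper script from the docs is supposed to exec radosgw, roughly like this (instance name is whatever you configured):
  #!/bin/sh
  exec /usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway
Without the exec, or with the web server repeatedly respawning the wrapper, you can end up with an ever-growing pile of radosgw processes instead of one.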
I have been following ceph for a long time. I have yet to put it into
service, and I keep coming back as btrfs improves and ceph reaches
higher version numbers.
I am now trying ceph 0.93 and kernel 4.0-rc1.
Q1) Is it still considered that btrfs is not robust enough, and that
xfs should be used in
ecede the failures.
I can see that this is not an osd failure, but a resource limit issue.
I completely acknowledge that I must now RTFM, but I will ask whether
anybody can give any guidance, based on experience, with respect to
this issue.
Thank you again to all for the previous prompt and inva
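Two resource limits that commonly bite OSD hosts, in case they help as a starting point (the values are just the commonly suggested ones, not gospel):
  # in ceph.conf, [global] - file descriptor limit for the daemons
  max open files = 131072
  # sysctl - thread/pid headroom on hosts with many OSDs
  kernel.pid_max = 4194303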
I have been experimenting with Ceph, and have some OSDs with drives
containing XFS filesystems which I want to change to BTRFS.
(I started with BTRFS, then started again from scratch with XFS
[currently recommended] in order to eliminate that as a potential cause
of some issues, now with further ex
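The usual way to switch an OSD's filesystem is to drain and recreate it rather than convert it in place. A sketch for osd.3 (id and device are placeholders; adjust the stop command to your init system):
  ceph osd out 3
  # wait until recovery finishes and the cluster is HEALTH_OK again
  service ceph stop osd.3
  ceph osd crush remove osd.3
  ceph auth del osd.3
  ceph osd rm 3
  ceph-disk prepare --fs-type btrfs /dev/sdd   # recreate on the reformatted drive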
Hi,
On my cluster I tried to clear all objects from a pool. I used the
command "rados -p bench ls | xargs rados -p bench rm". (rados -p bench
cleanup doesn't clean everything, because there was a lot of other
testing going on here).
Now 'rados -p bench ls' returns a list of objects, which do
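Two things that may help here: quoting-safe deletion, and the purge command that removes everything in a pool regardless of namespace (pool name bench as in the mail):
  rados -p bench ls --all                               # list across all namespaces
  rados -p bench ls | while read obj; do rados -p bench rm "$obj"; done
  rados purge bench --yes-i-really-really-mean-it       # remove all objects in the pool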
Hi,
In a test cluster (3 nodes, 24 OSDs) I'm testing the ceph iscsi gateway
(with http://docs.ceph.com/docs/master/rbd/iscsi-targets/). For a client
I used a separate server, everything runs CentOS 7.5. The iscsi gateways
are located on 2 of the existing nodes in the cluster.
How does iscsi
Hi,
On a small cluster (3 nodes) I frequently have slow requests. When
dumping the inflight ops from the hanging OSD, it seems it doesn't get a
'response' for one of the subops. The events always look like:
"events": [
{
"time": "201
On Tue, Jun 26, 2018 at 6:06 PM Frank de Bot (lists)
<li...@searchy.net> wrote:
Hi,
In my test setup I have a ceph iscsi gateway (configured as in
http://docs.ceph.com/docs/luminous/rbd/iscsi-overview/ )
I would like to use this with a FreeBSD (11.1) initiat
Hey cephers,
Just wanted to briefly announce the release of a radosgw CLI tool that solves
some of our team's minor annoyances. Called radula, a nod to the patron animal,
this utility acts a lot like s3cmd with some tweaks to meet the expectations of
our researchers.
https://pypi.python.org/py
John Spray wrote:
> On Fri, Sep 28, 2018 at 2:25 PM Frank (lists) wrote:
>>
>> Hi,
>>
>> On my cluster I tried to clear all objects from a pool. I used the
>> command "rados -p bench ls | xargs rados -p bench rm". (rados -p bench
>> cleanup doe
Frank (lists) wrote:
> Hi,
>
> On a small cluster (3 nodes) I frequently have slow requests. When
> dumping the inflight ops from the hanging OSD, it seems it doesn't get a
> 'response' for one of the subops. The events always look like:
>
I've done some
Hi,
In my test setup I have a ceph iscsi gateway (configured as in
http://docs.ceph.com/docs/luminous/rbd/iscsi-overview/ )
I would like to use this with a FreeBSD (11.1) initiator, but I fail to
make a working setup in FreeBSD. Is it known if the FreeBSD initiator
(with gmultipath) can work with
> I don't run FreeBSD, but any particular issue you are seeing?
>
> On Tue, Jun 26, 2018 at 6:06 PM Frank de Bot (lists) <li...@searchy.net> wrote:
>
> Hi,
>
> In my test setup I have a ceph iscsi gateway (configured as in
> http://docs