can set this directly in ceph.conf,
though, right? This is the advice I've seen before. Is a restart of
ceph-mds sufficient to make that work, or would something need to be
recreated?
--
David Champion • d...@uchicago.edu • University of Chicago
Enrico Fermi Institute • Computation In
date, but to try not to deal with now. Maybe the
bind mount of /dev/rbd/rbd will do for now.
--
David Champion • d...@uchicago.edu • University of Chicago
Enrico Fermi Institute • Computation Institute • USATLAS Midwest Tier 2
OSG Connect • CI Connect
Hello —
I’ve been watching the design and features of Ceph with great eagerness,
especially compared to the current distributed file systems I use. One of the
pains with VM workloads is when writes stall for more than a few seconds:
virtual machines that think they are communicating with a r
more or less PGs at
any given time.
David Zafman
Senior Developer
http://www.inktank.com
On Apr 24, 2014, at 8:09 AM, Chad Seys wrote:
> Hi All,
> What does osd_recovery_max_single_start do? I could not find a description
> of it.
>
>
On Apr 24, 2014, at 10:09 AM, Chad Seys wrote:
> Hi David,
> Thanks for the reply.
> I'm a little confused by OSD versus PGs in the description of the two
> options osd_recovery_max_single_start and osd_recovery_max_active.
An OSD manages all the PGs in its object store
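For illustration only (the values here are arbitrary; tune them for your
hardware), both settings can be changed at runtime with injectargs, or set
under [osd] in ceph.conf:

  ceph tell osd.* injectargs '--osd-recovery-max-active 1 --osd-recovery-max-single-start 1'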
ious Ceph daemons already operate in a clustered
manner.
(The documentation at
http://ceph.com/docs/firefly/architecture/#scalability-and-high-availability
may be helpful if you are not already familiar with this.)
Kind regards,
David
--
David McBride
Unix Specialist, University Information
RabbitMQ
cluster as an intermediary, and Grafana as a query/graphing tool.
Apart from causing some stress on the disk-subsystem attempting to
write-out all those metrics, this has been working out quite well...
Cheers,
David
--
David McBride
Unix Specialist, University Information Services
It isn’t clear to me what could cause a loop there. Just to be sure you don’t
have a filesystem corruption please try to run a “find” or “ls -R” on the
filestore root directory to be sure it completes.
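Something along these lines should be enough (the OSD path is just an example;
use your actual filestore root):

  find /var/lib/ceph/osd/ceph-0/current -type f > /dev/null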
Can you send the log you generated? Also, what version of Ceph are you running?
David
t the name of the object in the log. Start
an OSD and after it crashes examine the log. Then remove the extraneous
object using ceph-objectstore-tool. You'll have to repeat this process
if there are more of these.
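Roughly like this (the paths and the object spec are placeholders; take the
exact object from the log):

  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-N \
      --journal-path /var/lib/ceph/osd/ceph-N/journal \
      '<object-spec-from-log>' remove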
David
On 9/3/15 2:22 AM, Gregory Farnum wrote:
On Thu, Sep 3, 2015 at
"max":0,"pool":3,"namespace":"","max":0}]
To remove it, cut and paste your output line with snapid 9197 inside
single quotes like this:
$ ceph-objectstore-tool --data-path xx --journal-path xx
'["3.f9",{"oid":"r
"snap": 2,
"size": 452,
"overlap": "[]"
},
{
"snap": 3,
"size": 452,
"overlap": "[]"
},
{
"snap": 4,
ool 3) and create a new one
5. Restore RBD images from backup using new pool (make sure you have
disk space as the pool delete removes objects asynchronously)
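For steps 4 and 5 the commands would look roughly like this (pool and image
names are only examples):

  ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
  ceph osd pool create rbd 128
  rbd import /backup/myimage.img rbd/myimage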
David
On 9/3/15 8:15 PM, Chris Taylor wrote:
On 09/03/2015 02:44 PM, David Zafman wrote:
Chris,
WARNING: Do this at your own risk. Yo
Chris,
I was wondering if you still had /tmp/snap.out laying around, could you
send it to me? The way the dump-to-JSON code works, if "clones" is
empty it doesn't show me what the two other structures look like.
David
On 9/5/15 3:24 PM, Chris Taylor wrote:
# ceph-dencod
ealand)
mirror. Is there an intention to re-enable this on either
download.ceph.com, or the EU equivalent?
--
David Clarke
http://pastebin.com/BUm61Bbf
On our cluster that hosts mainly RBDs, we had an OSD fail, and the OSD was
replaced. During the rebalance with the new OSD, another OSD failed. That
OSD was replaced during the continuing rebalance. Now that the dust has
settled, most of our RBDs are hanging on PGs and
://www.hastexo.com/resources/hints-and-kinks/which-osd-stores-specific-rados-object
--David
On Fri, Sep 25, 2015 at 10:11 AM, Jan Schermer wrote:
> Ouch
> 1) I should have read it completely
> 2) I should have tested it :)
> Sorry about that...
>
> You could get the name prefix for each
limit this find to only those
PGs in question, which from what you have described is only 1. So figure
out which OSDs are active for the PG, and run the find in the subdir for
the placement group on one of those. It should run really fast unless you
have tons of tiny objects in the PG.
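Roughly (the PG id, OSD path, and RBD prefix are placeholders):

  ceph pg map <pgid>    # shows the up/acting OSDs for that PG
  find /var/lib/ceph/osd/ceph-N/current/<pgid>_head -name '<rbd_prefix>*'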
--
David Burley
mance
downgrades on production setup.
Best regards,
David.
--
David Bayle
System Administrator
GloboTech Communications
Phone: 1-514-907-0050
Toll Free: 1-(888)-GTCOMM1
Fax: 1-(514)-907-0750
supp...@gtcomm.net
http://www.gtcomm.net
ny current ceph download site
download.ceph.com::ceph seems to work for us.
--
David Clarke
On Thu, Sep 3, 2015 at 3:49 PM, Gurvinder Singh
wrote:
>> The density would be higher than the 36 drive units but lower than the
>> 72 drive units (though with shorter rack depth afaik).
> You mean the 1U solution with 12 disk is longer in length than 72 disk
> 4U version ?
This is a bit old and
On Wed, Sep 30, 2015 at 8:19 AM, Mark Nelson wrote:
> FWIW, I've mentioned to Supermicro that I would *really* love a version of the
> 5018A-AR12L that replaced the Atom with an embedded Xeon-D 1540. :)
Is even that enough? (It's a serious question; due to our insatiable
need for IOPs rather tha
Because we have a good thing going, we are still running Firefly on
all of our clusters, including our largest, all-SSD cluster.
If I understand right, newer versions of Ceph make much better use of
SSDs and give overall much higher performance on the same equipment.
However, the imp
On Tue, Sep 29, 2015 at 7:32 AM, Jiri Kanicky wrote:
> Thank you for your reply. In this case I am considering to create separate
> partitions for each disk on the SSD drive. Would be good to know what is the
> performance difference, because creating partitions is kind of a waste of
> space.
It ma
ight. So organizations in our position
just don't use that stuff. As a relative outsider for whom the Ceph
source code is effectively a foreign language, it's just *really* hard
to tell if Hammer in general is in that same "still baking" category.
Thanks!
On Wed, Sep 30, 201
avoid mixing O_DIRECT and normal I/O to the same
file, and especially to overlapping byte regions in the same file. Even
when the filesystem correctly handles the coherency issues in this
situation, overall I/O throughput is likely to be slower than using
either mode alone."
David
On 10
In the Ceph docs, at:
http://docs.ceph.com/docs/master/rados/deployment/ceph-deploy-osd/
It says (under "Prepare OSDs"):
"Note: When running multiple Ceph OSD daemons on a single node, and
sharing a partitioned journal with each OSD daemon, you should consider
the entire node the minimum failure d
On Mon, Oct 19, 2015 at 7:09 PM, John Wilkins wrote:
> The classic case is when you are just trying Ceph out on a laptop (e.g.,
> using file directories for OSDs, setting the replica size to 2, and setting
> osd_crush_chooseleaf_type to 0).
Sure, but the text isn’t really applicable in that situa
:
https://github.com/ceph/ceph/blob/master/src/ceph-disk#L83-L94
I assume this will work with CBT since it worked with Ceph OSDs.
--David
On Wed, Oct 21, 2015 at 5:34 PM, Artie Ziff wrote:
> My inquiry may be a fundamental Linux thing and/or requiring basic
> Ceph guidance.
>
> Accordi
to repair
# I'm on the primary OSD and the file below has been repaired correctly.
~# cat
/var/lib/ceph/osd/ceph-1/current/3.30_head/100.__head_F0B56F30__3
123456
As you can see, the repair command has worked well.
Maybe my little test is too trivial?
Hmm, maybe David has some ide
kages files, so it looks like something has gone awry with the
repository build scripts.
A direct download is available at:
http://download.ceph.com/debian-hammer/pool/main/c/ceph-deploy/ceph-deploy_1.5.28trusty_all.deb
That version does not include /usr/share/man/man8/ceph-deploy.8.gz, and
so does not
rsion based on a v0.94.4 source tree. I don't know if you can
just copy files with names like
"current/meta/osdmap.8__0_FD6E4D61__none" (map for epoch 8) between OSDs.
David
On 10/21/15 8:54 PM, James O'Neill wrote:
I have an OSD that didn't come up after a reboot. I was
I was focused on fixing the OSD, but you need to determine if some
misconfiguration or hardware issue caused a filesystem corruption.
David
On 10/22/15 3:08 PM, David Zafman wrote:
There is a corruption of the osdmaps on this particular OSD. You need to
determine which maps are bad probably
u enable straw2.
>
>
> I believe straw2 only requires monitor support -- unlike the tunables
> involved in executing CRUSH, straw2 is just about how the OSD/bucket
> weights get converted into a sort of "internal" straw weight. That's
> done on the monitors and enc
in Hammer, as
well as fixing interoperability issues that are required before an
upgrade to Infernalis. That is, all users of earlier version of Hammer
or any version of Firefly will first need to upgrade to hammer v0.94.4
or later before upgrading to Infernalis (or future releases)."
[0
              0K 19. Nov 10:32 .
drwxr-xr-x. 28   0   0 4,0K 19. Nov 11:14 ..
drwxr-x---.  2 167 167    6 10. Nov 13:06 bootstrap-mds
drwxr-x---.  2 167 167   25 19. Nov 10:48 bootstrap-osd
drwxr-x---.  2 167 167    6 10. Nov 13:06 bootstrap-rgw
drwxr-x---.  2 167 167    6 10. Nov 13:06 mds
drwxr-x---.  2 167
I fixed the issue and opened a ticket on the ceph-deploy bug tracker
http://tracker.ceph.com/issues/13833
tl;dr:
change permission of the ssd journal partition with
chown ceph:ceph /dev/sdd1
On 19.11.2015 11:38, David Riedl wrote:
Hi everyone.
I updated one of my hammer osd nodes to infernalis
Regards
David
On 19.11.2015 14:02, Mykola Dvornik wrote:
cat /etc/udev/rules.d/89-ceph-journal.rules
KERNEL=="sdd?" SUBSYSTEM=="block" OWNER="ceph" GROUP="disk" MODE="0660"
On 19 November 2015 at 13:54, Mykola <mykola.dvor...@gmail.com>
it's nerve-wracking.
Regards
David
On 19.11.2015 14:29, Mykola Dvornik wrote:
I am also using centos7.x. /usr/lib/udev/rules.d/ should be fine. If
not, one can always symlink to /etc/udev/rules.d/.
On 19 November 2015 at 14:13, David Riedl <david.ri...@wingcon.com> wrote:
/operations/crush-map/
CEPH architecture overview (including CRUSH)
http://docs.ceph.com/docs/master/architecture/
- David
On 23.11.2015 15:44, louis wrote:
Hi, if I submit read or write I/O in a sequence from a Ceph client,
will that sequence be kept on the OSD side? Thanks
Sent from NetEase Mail Master
as it sends the messages to the monitor.
David
On 12/3/15 4:36 AM, Wukongming wrote:
OK! One more question. Do you know why Ceph has two ways of outputting logs (dout
and clog)? I find dout more helpful than clog. Did Ceph use clog first, with dout
added for new
kernel doesn't currently ship with CephFS support.
Regards, David
There is potential for locking due to hung processes or the like when you have OSDs
on Mons. My test cluster has OSDs on Mons and I haven't run into it, but I
have heard of this happening on this mailing list. I don't think you would
ever hear a recommendation for this in a production environme
>>On 16-05-18 14:23, Sage Weil wrote:
>> Currently, after an OSD has been down for 5 minutes, we mark the OSD
>> "out", which redistributes the data to other OSDs in the cluster. If the
>> OSD comes back up, it marks the OSD back in (with the same reweight value,
>> usually 1.0).
>>
>> The good thi
You can also mount the rbd with the discard option. It works the same way as
you would mount an ssd to free up the space when you delete things. I use the
discard option on my ext4 rbds on Ubuntu and it frees up the used Ceph space
immediately.
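For example (device and mount point are placeholders):

  mount -o discard /dev/rbd0 /mnt/volume
  fstrim /mnt/volume    # or trim on demand instead of mounting with discard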
Sent from my iPhone
On May 19, 2016, at 12:30 PM,
Hi Andrey,
I usually use s3cmd for CLI and Sree for GUI.
2016-05-26 5:11 GMT+08:00 Andrey Ptashnik :
> Team,
>
> I wanted to ask if some of you are using CLI or GUI based S3
> browsers/clients with Ceph and what are the best ones?
>
> Regards,
>
> Andrey Ptashnik
>
nswer to my problem, since I want to
utilize all interfaces and have redundancy at the same time.
So, back to my Question:
What 48 port gigabit switch is the best replacement for that type of
configuration? 10GB is not an option.
Regards
e Cumulus based offerings
(Penguin computing, etc).
Thanks, I'll look into it. Never heard of that protocol before.
Regards
David
4. As Ceph has lots of connections on lots of IPs and ports, LACP or the
Linux ALB mode should work really well to balance connections.
Linux ALB mode looks promising. Does that work with two switches? Each
server has 4 ports which are split and connected to each switch.
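For reference, a balance-alb bond with ifupdown looks roughly like this
(interface names and addresses are placeholders):

  auto bond0
  iface bond0 inet static
      address 192.168.1.10
      netmask 255.255.255.0
      bond-slaves enp1s0 enp2s0 enp3s0 enp4s0
      bond-mode balance-alb
      bond-miimon 100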
Hi Eric,
Can the new release, version 1.4, be used with Ceph Jewel?
2016-05-31 15:05 GMT+08:00 eric mourgaya :
> hi guys,
>
> Inkscope 1.4 is released.
> You can find the rpms and debian packages at
> https://github.com/inkscope/inkscope-packaging.
> This release adds a monitor panel using coll
First, please check that your Ceph cluster is HEALTH_OK, and then check that
you have the caps to create users.
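For example:

  ceph health
  ceph auth get client.admin    # check the caps of the key you are using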
2016-05-31 16:11 GMT+08:00 Khang Nguyễn Nhật
:
> Thank, Wasserman!
> I followed the instructions here:
> http://docs.ceph.com/docs/master/radosgw/multisite/
> Step 1: radosgw-admin realm cre
Best practices in general say to do them separately. If something doesn't work,
is it the new kernel, some package that is different on 16.04, Jewel, etc.? The
fewer things in that list, the easier it is to track down the issue and fix it.
As far as order goes, hammer 0.94.5 wasn't built with 16.04 in
things, then it isn't an issue for you. The point
stands, though, that upgrading for the sake of upgrading has a chance to
introduce regressions into your environment.
From: Uwe Mesecke [u...@mesecke.net]
Sent: Monday, June 06, 2016 8:01 AM
To: Davi
If you want to watch what a disk is doing in real time, use iostat on the
journal device. If you want to see its patterns at all times of the day, use
sar. Neither of these is a Ceph-specific command; they are just Linux tools that
can watch your disk utilization, speeds, etc. (among other things
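For example (sdd standing in for your journal device):

  iostat -x sdd 5    # extended per-device stats every 5 seconds
  sar -d -p 1 5      # live per-device sampling; historical reports need the sysstat collector enabled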
--
Kind regards
David Riedl
WINGcon GmbH Wireless New Generation - Consulting & Solutions
Phone: +49 (0) 7543 9661 - 26
E-Mail: david.ri...@wingcon.com
Web: http://www.wingcon.com
Sitz
When you increase your PGs you're already going to be moving around all of your
data. Doing a full doubling of your PGs from 64 -> 128 -> 256 -> ... -> 2048
over and over and letting it backfill to healthy every time is a lot of extra
data movement that isn't needed.
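For illustration, a single jump looks like this (the pool name is just an
example, and pgp_num has to follow pg_num):

  ceph osd pool set rbd pg_num 2048
  ceph osd pool set rbd pgp_num 2048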
I would recommend setting
The mon stores will remain "too big" until after
backfilling onto osd.53 finishes, but once the data stops moving around and all
of your osds are up and in, the mon stores will compact in no time.
I hope this helps. Ask questions if you have any, and never run a comma
and norecover flags just stop data from moving since you're
adding the osd back in with the same weight. Less useless data movement.
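Roughly:

  ceph osd set noout
  ceph osd set norecover
  # re-add the OSD with the same id and weight, then:
  ceph osd unset norecover
  ceph osd unset noout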
From: Salwasser, Zac [zsalw...@akamai.com]
Sent: Thursday, July 21, 2016 2:11 PM
To: David Turner; ceph-users@lists.ceph.com
could not find any follow up posts with details.
Does anyone have any more details on these internal methods and how to call
them?
Cheers,
David
the local filesystem but I want to ensure that the system will work
with BlueStore and any future changes so would love to be able to query this
e.g. via librados.
Cheers,
David
> -----Original Message-----
> From: Samuel Just [mailto:sj...@redhat.com]
> Sent: 27 July 2016 17:45
> To
David Turner | Cloud Operations Engineer | StorageCraft Technology Corporation
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2760 | Mobile: 385.224.2943
If you
t it looks like from here.
David Turner | Cloud Operations Engineer | StorageCraft Technology Corporation
380 Data Drive Suite 300 | Draper | Utah | 84020
Office
r the backfilling is done, you
can just remove the old osds from the cluster and no more backfilling will
happen.
David Turner | Cloud Operations Engineer | StorageCraft Technology Corpor
best thing that you gain from doing
it this way is that you can remove multiple nodes/osds at the same time without
having degraded objects and especially without losing objects.
Da
rovide the output of
the following commands and try to clarify your question.
ceph status
ceph osd tree
I hope this helps.
> Hello David,
>
> Can you help me with steps/Procedure to uninstall Ceph storage from openstack
> environment?
>
>
> Regards
> Gaurav Goyal
to replace the OS drive of a storage node.
David Turner | Cloud Operations Engineer | StorageCraft Technology Corporation
380 Data Drive Suite 300 | D
emove the osd and uploading the same crush
map after you add it back in with the same id) then the only data movement will
be onto the re-added osd and nothing else.
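The crush map round-trip is just this (the filename is arbitrary):

  ceph osd getcrushmap -o crushmap.bin    # before removing the osd
  ceph osd setcrushmap -i crushmap.bin    # after adding it back with the same id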
David Tur
deep-scrub finds problems, it doesn't fix them. Try:
ceph osd repair
David Zafman
Senior Developer
david.zaf...@inktank.com
On Feb 27, 2013, at 12:43 AM, Jun Jun8 Liu wrote:
> Hi all
> I did a test about deep scrub. Version is "cep
Try first doing something like this first.
rados bench -p data 300 write --no-cleanup
David Zafman
Senior Developer
http://www.inktank.com
On Mar 12, 2013, at 1:46 PM, Scott Kinder wrote:
> When I try and do a rados bench, I see the following error:
>
> # rados bench -p dat
names of the preceding "_object#"
rados -p data cleanup benchmark_data_ubuntu_#
rados -p data rm benchmark_last_metadata
David Zafman
Senior Developer
http://www.inktank.com
On Mar 12, 2013, at 2:11 PM, Scott Kinder wrote:
> A follow-up question. How do I cleanup the written
ceph osd set noscrub
ceph osd set nodeep-scrub
David Zafman
Senior Developer
http://www.inktank.com
On Apr 16, 2013, at 12:30 AM, kakito wrote:
> Hi Martin B Nielsen,
>
> Thank you for your quick answer :)
>
> I am running with replication set to 1. Because my server used RAID 6,
> d
I don't believe that there would be a perceptible increase in data usage. The
next release, called Cuttlefish, is less than a week away, so you might
wait for that.
Product questions should go to one of our mailing lists, not directly to
developers.
David Zafman
Senior Developer
I filed tracker bug 4822 and have wip-4822 with a fix. My manual testing shows
that it works. I'm building a teuthology test.
Given that your osd tree has a single rack, it should always mark OSDs down after 5
minutes by default.
David Zafman
Senior Developer
http://www.inktank.com
On A
According to "osdmap e504: 4 osds: 2 up, 2 in" you have 2 of 4 osds that are
down and out. That may be the issue.
David Zafman
Senior Developer
http://www.inktank.com
On May 8, 2013, at 12:05 AM, James Harper wrote:
> I've just upgraded my ceph install to cuttlefish (was
What version of ceph are you running?
David Zafman
Senior Developer
http://www.inktank.com
On May 20, 2013, at 9:14 AM, John Nielsen wrote:
> Some scrub errors showed up on our cluster last week. We had some issues with
> host stability a couple weeks ago; my guess is that error
I can't reproduce this on v0.61-2. Could the disks for osd.13 & osd.22 be
unwritable?
In your case it looks like the 3rd replica is probably the bad one, since
osd.13 and osd.22 are the same. You probably want to manually repair the 3rd
replica.
David Zafman
Senior Devel
1b
instructing pg 19.1b on osd.13 to repair
David Zafman
Senior Developer
http://www.inktank.com
On May 21, 2013, at 3:39 PM, John Nielsen wrote:
> I've checked, all the disks are fine and the cluster is healthy except for
> the inconsistent objects.
>
> How would I go about
replicas.
David Zafman
Senior Developer
http://www.inktank.com
On May 25, 2013, at 12:33 PM, Mike Lowe wrote:
> Does anybody know exactly what ceph repair does? Could you list out briefly
> the steps it takes? I unfortunately need to use it for an inconsist
It looks like the enclosure failure caused data corruption. Otherwise, your
OSD should have come back online as it would after a power failure.
David Zafman
Senior Developer
http://www.inktank.com
On May 26, 2013, at 9:09 AM, Andrey Korolyov wrote:
> Hello,
>
> Today a l
Cristian and everyone else have expertly responded to the SSD capabilities,
pros, and cons so I'll ignore that. I believe you were saying that it was
risky to swap out your existing journals to a new journal device. That is
actually a very simple operation that can be scripted to only take minutes
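As a rough sketch (N is the OSD id, and the init commands depend on your
distro):

  ceph osd set noout
  systemctl stop ceph-osd@N          # or: service ceph stop osd.N
  ceph-osd -i N --flush-journal
  # repoint the journal symlink in /var/lib/ceph/osd/ceph-N at the new partition
  ceph-osd -i N --mkjournal
  systemctl start ceph-osd@N
  ceph osd unset noout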
I wouldn't see this as problematic at all. As long as you're watching the
disk utilizations and durability, those are the only factors that would
eventually tell you that they are busy enough.
On Thu, Jun 22, 2017, 1:36 AM Ashley Merrick wrote:
> Hello,
>
>
> Currently have a pool of SSD's runni
Did you previously edit the init scripts to look in your custom location?
Those could have been overwritten. As was mentioned, Jewel changed what
user the daemon runs as, but you said that you tested running the daemon
manually under the ceph user? Was this without sudo? It used to run as root
unde
What is the output of the following command? If a directory has no quota,
it should respond "0" as the quota.
# getfattr -n ceph.quota.max_bytes /mnt/cephfs/foo
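For reference, a quota is set the same way with setfattr (the size here is
just an example):

  setfattr -n ceph.quota.max_bytes -v 100000000 /mnt/cephfs/foo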
I tested this in my home cluster that uses ceph-fuse to mount cephfs under
the david user (hence no need for sudo). I'
I don't really have anything to add to this conversation, but I see emails
like this in the ML all the time. Have you looked through the archives?
Everything that's been told to you and everything you're continuing to ask
have been covered many many times.
http://lists.ceph.com/pipermail/ceph-use
If you have no control over what kernel the clients are going to use, then
I wouldn't even consider using the kernel driver for the clients. For me,
I would do anything to maintain the ability to use the object map which
would require the 4.9 kernel to use with the kernel driver. Because of
this
I doubt the ceph version from 10.2.5 to 10.2.7 makes that big of a
difference. Read through the release notes since 10.2.5 to see if it
mentions anything about cephfs quotas.
On Fri, Jun 23, 2017 at 12:30 PM Stéphane Klein
wrote:
> 2017-06-23 17:59 GMT+02:00 David Turner :
>
>>
# ceph health detail | grep 'ops are blocked'
# ceph osd blocked-by
My guess is that you have an OSD that is in a funky state blocking the
requests and the peering. Let me know what the output of those commands
are.
Also what are the replica sizes of your 2 pools? It shows that only 1 OSD
was l
All of the features you are talking about likely require the exclusive-lock
which requires the 4.9 linux kernel. You cannot map any RBDs that have
these features enabled with any kernel older than that.
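You can check what an existing image has enabled with something like this
(the image name is only an example):

  rbd info rbd/myimage | grep features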
The features you can enable are layering, exclusive-lock, object-map, and
fast-diff. You cann
dson
wrote:
> Thanks for the response:
>
> [root@ceph-control ~]# ceph health detail | grep 'ops are blocked'
> 100 ops are blocked > 134218 sec on osd.13
> [root@ceph-control ~]# ceph osd blocked-by
> osd num_blocked
>
> A problem with osd.13?
>
> Dan
"virtualized driver" that use
> ceph under a less-featured-standardized driver or if kernel and nbd differ
> only (assuming it's compared with last kernel) just for speed reason.
>
>
> Thanks Turner for any further info!
> Max
>
>
>
> Il 23/06/2017 18:21, Dav
What is your use case? That matters the most.
On Fri, Jun 23, 2017 at 4:31 PM David Turner wrote:
> I've never used nbd-rbd; I would use rbd-fuse. Its version should match
> your cluster's running version as it's a package compiled with each ceph
> release.
>
ure.
> Is it?
>
>
>
>
> Il 23/06/2017 22:25, David Turner ha scritto:
>
> All of the features you are talking about likely require the
> exclusive-lock which requires the 4.9 linux kernel. You cannot map any
> RBDs that have these features enabled with any kernel older
22 14:57:18.140672 7f3c6e03d700 0 log_channel(cluster) log [WRN]
> : 3 slow requests, 1 included below; oldest blocked for > 57.464203 secs
> 2017-06-22 14:57:18.140683 7f3c6e03d700 0 log_channel(cluster) log [WRN]
> : slow request 30.554865 seconds old, received at 2017-06-22
> 14:
annot restore anymore.
> As stated here: https://bugzilla.redhat.com/show_bug.cgi?id=1326645
> This is the right behaviour and redhat will not fix this.
>
> So don't downgrade your features or you'll have to export-import all the
> images.
> I'm getting crazy.
server? My guess is that something in there is likely the culprit.
On Fri, Jun 23, 2017 at 6:26 PM Stéphane Klein
wrote:
>
> 2017-06-23 20:44 GMT+02:00 David Turner :
>
>> I doubt the ceph version from 10.2.5 to 10.2.7 makes that big of a
>> difference. Read through th
Snapshots are not a free action. Creating them is nearly free, but
deleting objects in Ceph is an n^2 operation. Being on Hammer you do not
have access to the object map feature on RBDs which drastically reduces the
n^2 problem by keeping track of which objects it actually needs to delete
Just so you're aware of why that's the case, the line
step chooseleaf firstn 0 type host
in your crush map under the rules section says "host". If you changed that
to "osd", then your replicas would be unique per OSD instead of per
server. If you had a larger cluster and changed it to "rack" an
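Changing that is a matter of decompiling, editing, and recompiling the crush
map, roughly like this (filenames are arbitrary):

  ceph osd getcrushmap -o map.bin
  crushtool -d map.bin -o map.txt
  # edit: step chooseleaf firstn 0 type host  ->  type osd (or rack)
  crushtool -c map.txt -o map-new.bin
  ceph osd setcrushmap -i map-new.bin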
I don't know specifics on Kubernetes or creating multiple keyrings for
servers, so I'll leave those for someone else. I will say that if you are
kernel mapping your RBDs, then the first tenant to do so will lock the RBD
and no other tenant can map it. This is built into Ceph. The original
tenant
What is the output of `lsblk`?
On Mon, Jun 26, 2017 at 4:32 PM Papp Rudolf Péter wrote:
> Dear cephers,
>
> Could someone show me a URL where I can find out how Ceph calculates the
> available space?
>
> I've installed a small ceph (Kraken) environment with bluestore OSDs.
> The servers contain 2
The output of `sudo df -h` would also be helpful. Sudo/root is generally
required because the OSD folders are only readable by the Ceph user.
On Mon, Jun 26, 2017 at 4:37 PM David Turner wrote:
> What is the output of `lsblk`?
>
> On Mon, Jun 26, 2017 at 4:32 PM Papp Rudolf Pét
And the `sudo df -h`? Also a `ceph df` might be helpful to see what's
going on.
On Mon, Jun 26, 2017 at 4:41 PM Papp Rudolf Péter wrote:
> Hi David!
>
> lsblk:
>
> NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
> sda        8:0    0 931,5G  0 disk
> ├─sda1     8:1