n the default 4MB chunk size be
handled? Should they be padded somehow?
3) If any objects were completely missing and therefore unavailable to
this process, how should they be handled? I assume we need to offset/pad
to compensate.
--
Thanks,
Mike Dawson
Co-Founder & Director of Cloud Arc
lls 3
--osd-recovery-max-active 3'
If I see slow requests, I drop them down.
The biggest downside to setting either to 1 seems to be the long tail
issue detailed in:
http://tracker.ceph.com/issues/9566
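For reference, a sketch of how I drop them on the fly when that happens
(assuming injectargs is available in your release):
# ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
then raise them back to 3 once the slow requests clear.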
Thanks,
Mike Dawson
On 6/3/2015 6:44 PM, Sage Weil wrote:
On Mon, 1 Jun 2015, Greg
on as
your. Your results may vary.
- Mike Dawson
On 10/30/2014 4:50 PM, Erik Logtenberg wrote:
Thanks for pointing that out. Unfortunately, those tickets contain only
a description of the problem, but no solution or workaround. One was
opened 8 months ago and the other more than a year ago. No
this very issue
earlier this year, but got pulled in another direction before completing
the work. I'd like to bring a production cluster deployed with mkcephfs
out of the stone ages, so your work will be very useful to me.
Thanks again,
Mike Dawson
://ceph.com/debian-dumpling/pool/main/c/ceph/libcephfs1_0.67.11-1precise_amd64.deb
404 Not Found
Based on the timestamps of the files that made it, it looks like the
process to publish the packages isn't still in progress, but rather
failed yesterday.
Thanks,
Mike Dawson
On 9/25/2014 11:
C, 0.67.11 does not include the proposed changes to address #9487 or
#9503, right?
Thanks,
Mike Dawson
* osd: fix mount/remount sync race (#9144 Sage Weil)
Getting Ceph
* Git at git://github.com/ceph/ceph.git
* Tarball at http://ceph.com/download/ceph-0.67.11.tar.gz
* For pac
up, but scaling up
will also help with the latency problems.
On Thu, Aug 28, 2014 at 10:38 AM, Mike Dawson <mike.daw...@cloudapt.com> wrote:
We use 3x replication and have drives that have relatively high
steady-state IOPS. Therefore, we tend to prioritize client-side IO
mor
On 8/28/2014 11:17 AM, Loic Dachary wrote:
On 28/08/2014 16:29, Mike Dawson wrote:
On 8/28/2014 12:23 AM, Christian Balzer wrote:
On Wed, 27 Aug 2014 13:04:48 +0200 Loic Dachary wrote:
On 27/08/2014 04:34, Christian Balzer wrote:
Hello,
On Tue, 26 Aug 2014 20:21:39 +0200 Loic Dachary
On 8/28/2014 12:23 AM, Christian Balzer wrote:
On Wed, 27 Aug 2014 13:04:48 +0200 Loic Dachary wrote:
On 27/08/2014 04:34, Christian Balzer wrote:
Hello,
On Tue, 26 Aug 2014 20:21:39 +0200 Loic Dachary wrote:
Hi Craig,
I assume the reason for the 48 hours recovery time is to keep the co
is each individual object
inside the PG locked as it is processed? Some of my PGs will be in
deep-scrub for minutes at a time.
0: http://ceph.com/docs/master/dev/osd_internals/scrub/
Thanks,
Mike Dawson
On 6/9/2014 6:22 PM, Craig Lewis wrote:
I've correlated a large deep scrubbing
Great work Inktank / Red Hat! An open source Calamari will be a great
benefit to the community!
Cheers,
Mike Dawson
On 5/30/2014 6:04 PM, Patrick McGarry wrote:
Hey cephers,
Sorry to push this announcement so late on a Friday but...
Calamari has arrived!
The source code bits have been
- node1: 10.2.1.1/24
- node2: 10.2.1.2/24
- public-leaf2: 10.2.2.0/24
ceph.conf would be:
cluster_network = 10.1.0.0/16
public_network = 10.2.0.0/16
- Mike Dawson
On 5/28/2014 1:01 PM, Travis Rhoden wrote:
Hi folks,
Does anybody know if there are any issues ru
any 'ceph'
related mounts.
Thanks,
Sharmila
On Wed, May 21, 2014 at 8:34 PM, Mike Dawson <mike.daw...@cloudapt.com> wrote:
Perhaps:
# mount | grep ceph
- Mike Dawson
On 5/21/2014 11:00 AM, Sharmila Govind wrote:
Hi,
I am new t
Perhaps:
# mount | grep ceph
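If the mount output is noisy, checking df against the default osd data
paths should show the backing device for each OSD as well (assuming the
default data locations):
# df -h /var/lib/ceph/osd/ceph-*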
- Mike Dawson
On 5/21/2014 11:00 AM, Sharmila Govind wrote:
Hi,
I am new to Ceph. I have a storage node with 2 OSDs. I am trying to
figure out which physical device/partition each of the OSDs is
attached to. Is there a command that can be executed in the
19
105 2014-05-20
I have set noscrub and nodeep-scrub, as well as noout and nodown off and
on while I performed various maintenance, but that hasn't (apparently)
impeded the regular schedule.
With what frequency are you setting the nodeep-scrub flag?
-Aaron
On Tue, May 20, 2014 at 5:21 P
needed a deep-scrub the longest.
Thanks,
Mike Dawson
Greg/Loic,
I can confirm that "logrotate --force /etc/logrotate.d/ceph" removes the
monitor admin socket on my boxes running 0.80.1 just like the
description in Issue 7188 [0].
0: http://tracker.ceph.com/issues/7188
Should that bug be reopened?
Thanks,
Mike Dawson
On 5/13/20
mors that it
may be open sourced at some point in the future.
Cheers,
Mike Dawson
On 5/13/2014 12:33 PM, Adrian Banasiak wrote:
Thanks for the suggestion about the admin daemon, but it looks single-OSD
oriented. I have used perf dump on the mon socket and it outputs some
interesting data for monitoring
Upstart to control daemons. I never see this issue on
Ubuntu / Dumpling / sysvinit.
Has anyone else seen this issue or know the likely cause?
--
Thanks,
Mike Dawson
Co-Founder & Director of Cloud Architecture
Cloudapt LLC
6330 East 75th Street, Suite 170
Indianapolis, IN 4
set the primary affinity:
# ceph osd primary-affinity osd.0 1
I have not scaled up my testing, but it looks like this has the
potential to work well in preventing unnecessary read starvation in
certain situations.
0: http://tracker.ceph.com/issues/8323#note-1
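For the inverse case (a sketch of what I was testing, to steer reads away
from a slow or recovering OSD), something like:
# ceph osd primary-affinity osd.0 0
should keep osd.0 from being selected as primary wherever CRUSH can avoid it.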
Cheers,
Mike Dawson
On 5/8/20
vel, but rather stays low seemingly for days at a time, until the
next onslaught. If driven by the max scrub interval, shouldn't it jump
quickly back up?
Is there a way to find the last scrub time for a given PG via the CLI to
know for sure?
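(A sketch of what I ended up trying, in case it helps anyone: grepping a
single PG out of pg dump, or querying it directly, e.g.
# ceph pg dump | grep ^2.3f
# ceph pg 2.3f query | grep scrub_stamp
where 2.3f is just a placeholder pgid; the query output should include
last_scrub_stamp and last_deep_scrub_stamp.)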
Thanks,
Mike Dawson
On 5/7/2014 10:59 PM
ng deep-scrub permanently?
0: http://www.mikedawson.com/deep-scrub-issue1.jpg
1: http://www.mikedawson.com/deep-scrub-issue2.jpg
Thanks,
Mike Dawson
rom being marked down (with or without proper cause), but that
tends to cause me more trouble than it's worth.
Thanks,
Mike Dawson
Co-Founder & Director of Cloud Architecture
Cloudapt LLC
6330 East 75th Street, Suite 170
Indianapolis, IN 46250
On 5/7/2014 1:28 PM, Craig Lewis wrote:
The 5 OSDs
he cost of setting primary affinity is low enough, perhaps this
strategy could be automated by the ceph daemons.
Thanks,
Mike Dawson
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
Victor,
This is a verified issue reported earlier today:
http://tracker.ceph.com/issues/8260
Cheers,
Mike
On 4/30/2014 3:10 PM, Victor Bayon wrote:
Hi all,
I am following the "quick-ceph-deploy" tutorial [1] and I am getting an
error when running the "ceph-deploy osd activate" and I am gett
Hi Greg,
On 4/19/2014 2:20 PM, Greg Poirier wrote:
We have a cluster in a sub-optimal configuration with data and journal
colocated on OSDs (that coincidentally are spinning disks).
During recovery/backfill, the entire cluster suffers degraded
performance because of the IO storm that backfills
Thanks Dan!
Thanks,
Mike Dawson
On 4/17/2014 4:06 AM, Dan van der Ster wrote:
Mike Dawson wrote:
Dan,
Could you describe how you harvested and analyzed this data? Even
better, could you share the code?
Cheers,
Mike
First enable debug_filestore=10, then you'll see logs like this:
20
Dan,
Could you describe how you harvested and analyzed this data? Even
better, could you share the code?
Cheers,
Mike
On 4/16/2014 11:08 AM, Dan van der Ster wrote:
Dear ceph-users,
I've recently started looking through our FileStore logs to better
understand the VM/RBD IO patterns, and not
Hello,
I have a production cluster that was deployed with mkcephfs around the
Bobtail release. Quite a bit has changed in regards to ceph.conf
conventions, ceph-deploy, symlinks to journal partitions, udev magic,
and upstart.
Is there any path to migrate these OSDs up to the new style setup?
Adam,
I believe you need the command 'ceph osd create' prior to 'ceph-osd -i X
--mkfs --mkkey' for each OSD you add.
http://ceph.com/docs/master/rados/operations/add-or-rm-osds/#adding-an-osd-manual
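(For completeness, a rough sketch of the manual sequence from that doc;
substitute the real id, weight, and hostname for X, 1.0, and <hostname>:
# ceph osd create                  # prints the new id, call it X
# mkdir -p /var/lib/ceph/osd/ceph-X
# ceph-osd -i X --mkfs --mkkey
# ceph auth add osd.X osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-X/keyring
# ceph osd crush add osd.X 1.0 host=<hostname>
then start the osd daemon.)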
Cheers,
Mike
On 4/5/2014 7:37 PM, Adam Clark wrote:
HI all,
I am trying to setup a Ceph c
li/lio config is a simple default config without any
tuning or big configurations.
Mit freundlichen Grüßen / Best Regards,
Uwe Grohnwaldt
- Original Message -
> From: "Timofey"
> To: "Mike Dawson"
> Cc: ceph-users@lists.ceph.com
> Sent: Tuesday, 17.
I think my wording was a bit misleading in my last message. Instead of
"no re-balancing will happen", I should have said that no OSDs will be
marked out of the cluster with the noout flag set.
- Mike
On 12/21/2013 2:06 PM, Mike Dawson wrote:
It is also useful to mention that you c
It is also useful to mention that you can set the noout flag when
maintenance of any given length needs to exceed the 'mon osd down out
interval'.
$ ceph osd set noout
** no re-balancing will happen **
$ ceph osd unset noout
** normal re-balancing rules will resume **
- M
Christian,
I think you are going to suffer the effects of spindle contention with
this type of setup. Based on your email and my assumptions, I will use
the following inputs:
- 4 OSDs, each backed by a 12-disk RAID 6 set
- 75iops for each 7200rpm 3TB drive
- RAID 6 write penalty of 6
- OSD Jo
can then inject new settings to running daemons with injectargs:
# ceph tell osd.* injectargs '--osd_max_backfills 10'
Or, you can add those to ceph.conf and restart the daemons.
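(The persistent ceph.conf equivalent would look roughly like this, in the
[osd] section:
osd max backfills = 10)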
Cheers,
Mike Dawson
On 12/5/2013 9:54 AM, Jonas Andersson wrote:
I mean, I have OSD's and MON'
Robert,
Do you have rbd writeback cache enabled on these volumes? That could
certainly explain the higher than expected write performance. Any chance
you could re-test with rbd writeback on vs. off?
Thanks,
Mike Dawson
On 12/3/2013 10:37 AM, Robert van Leeuwen wrote:
Hi Mike,
I am using
have? Any RAID involved under your OSDs?
Thanks,
Mike Dawson
On 12/3/2013 1:31 AM, Robert van Leeuwen wrote:
On 2 dec. 2013, at 18:26, "Brian Andrus" wrote:
Setting your pg_num and pgp_num to say... 1024 would A) increase data
granularity, B) likely lend no noticeable i
ss racks rather than hosts if your cluster will be large
enough.
- Don't set the "ceph osd set nodown" flag on your cluster, as it will
prevent osds from being marked as down automatically if unavailable,
substantially diminishing the HA capabilities.
Cheers,
Mike Dawson
On 1
/ceph-devel@vger.kernel.org/msg16168.html
4) Once you get an RBD admin socket, query it like:
ceph --admin-daemon /var/run/ceph/rbd-29050.asok config show | grep rbd
Cheers,
Mike Dawson
On 11/25/2013 11:12 AM, Gregory Farnum wrote:
On Mon, Nov 25, 2013 at 5:58 AM, Mark Nelson wrote:
On
Thanks,
Mike Dawson
Co-Founder & Director of Cloud Architecture
Cloudapt LLC
6330 East 75th Street, Suite 170
Indianapolis, IN 46250
On 11/7/2013 2:12 PM, Kyle Bader wrote:
Once I know a drive has had a head failure, do I trust that the rest of the
drive isn't going to go at an inc
https://github.com/gregsfortytwo/fsync-tester
Thanks,
Mike Dawson
On 11/6/2013 4:18 PM, Dinu Vlad wrote:
ST240FN0021 connected via a SAS2x36 to a LSI 9207-8i.
By "fixed" - you mean replaced the SSDs?
Thanks,
Dinu
On Nov 6, 2013, at 10:25 PM, Mike Dawson wrote:
We just fixed a pe
We just fixed a performance issue on our cluster related to spikes of
high latency on some of our SSDs used for osd journals. In our case, the
slow SSDs showed spikes of 100x higher latency than expected.
What SSDs were you using that were so slow?
Cheers,
Mike
On 11/6/2013 12:39 PM, Dinu Vla
I also have time I could spend. Thanks for getting this started Loic!
Thanks,
Mike Dawson
On 11/6/2013 12:35 PM, Loic Dachary wrote:
Hi Ceph,
I would like to open a discussion about organizing a Ceph User Committee. We
briefly discussed the idea with Ross Turk, Patrick McGarry and Sage Weil
Narendra,
This is an issue. You really want your cluster to be HEALTH_OK with all
PGs active+clean. Some exceptions apply (like scrub / deep-scrub).
What do 'ceph health detail' and 'ceph osd tree' show?
Thanks,
Mike Dawson
Co-Founder & Director of Cloud Architectur
Aaron,
Don't mistake valid for advisable.
For documentation purposes, three monitors is the advisable initial
configuration for multi-node ceph clusters. If there is a valid need for
more than three monitors, it is advisable to add them two at a time to
maintain an odd number of total monitor
Vernon,
You can use the rbd command bench-write documented here:
http://ceph.com/docs/next/man/8/rbd/#commands
The command might look something like:
rbd --pool test-pool bench-write --io-size 4096 --io-threads 16
--io-total 1GB test-image
Some other interesting flags are --rbd-cache, --no
were you seeing on the cluster during
the periods where things got laggy due to backfills, etc?
Last, did you attempt to throttle using ceph config setting in the old
setup? Do you need to throttle in your current setup?
Thanks,
Mike Dawson
On 10/24/2013 10:40 AM, Kurt Bauer wrote:
Hi,
we
For the time being, you can install the Raring debs on Saucy without issue.
echo deb http://ceph.com/debian-dumpling/ raring main | sudo tee
/etc/apt/sources.list.d/ceph.list
I'd also like to register a +1 request for official builds targeted at
Saucy.
Cheers,
Mike
On 10/22/2013 11:42 AM,
Andrija,
You can use a single pool and the proper CRUSH rule
step chooseleaf firstn 0 type host
to accomplish your goal.
http://ceph.com/docs/master/rados/operations/crush-map/
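For reference, a minimal rule sketch built around that step (the name and
ruleset number are placeholders; compare against the doc above for your
version):
rule replicated-by-host {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}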
Cheers,
Mike Dawson
On 10/16/2013 5:16 PM, Andrija Panic wrote:
Hi,
I have 2 x 2TB disks, in 3 servers, so
/01Planning/02Blueprints/Emperor/Erasure_coded_storage_backend_%28step_2%29
Initial release is scheduled for Ceph's Firefly release in February 2014.
Thanks,
Mike Dawson
Co-Founder & Director of Cloud Architecture
Cloudapt LLC
On 10/3/2013 2:44 PM, Aronesty, Erik wrote:
Does Ceph really h
tep chooseleaf" lines inside the rule for
each pool. Under certain configurations, I believe the placement that
you describe is in fact the expected behavior.
Thanks,
Mike Dawson
Co-Founder, Cloudapt LLC
On 10/1/2013 10:46 AM, Chen, Ching-Cheng (KFRM 1) wrote:
Found a weird behavior (or l
acker.ceph.com/issues/6278
[2] http://tracker.ceph.com/issues/6333
I think this family of issues speak to the need for Ceph to have more
visibility into the underlying storage's limitations (especially spindle
contention) when performing known expensive maintenance operations.
Thanks,
Mike Dawson
On
returns.
Thanks,
Mike Dawson
Co-Founder & Director of Cloud Architecture
Cloudapt LLC
On 9/18/2013 1:27 PM, Gruher, Joseph R wrote:
-Original Message-
From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
boun...@lists.ceph.com] On Behalf Of Mike Dawson
you need to unders
partition for each journal and leave the rest of the SSD
unallocated (it will be used for wear-leveling). If you use
high-endurance SSDs, you could certainly consider smaller drives as long
as they maintain sufficient performance characteristics.
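(A sketch of what that partitioning might look like for three 10GB
journals on one SSD; the device name and sizes are only an example:
# parted -s /dev/sdX mklabel gpt
# parted -s /dev/sdX mkpart journal-1 1MiB 10GiB
# parted -s /dev/sdX mkpart journal-2 10GiB 20GiB
# parted -s /dev/sdX mkpart journal-3 20GiB 30GiB
leaving the rest of the device unpartitioned.)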
Thanks,
Mike Dawson
Co-Founder & Director
cause. To re-enable
scrub and deep-scrub:
# ceph osd unset noscrub
# ceph osd unset nodeep-scrub
Because you seem to only have two OSDs, you may also be saturating your
disks even without scrub or deep-scrub.
http://tracker.ceph.com/issues/6278
Cheers,
Mike Dawson
On 9/16/2013 12:30 PM
aps: [osd] allow rwx pool=images, allow class-read object_prefix
rbd_children
client.volumes
key: AQAnAy9ScPB4IRAAtxD/V1rDciqFiT9AMPPr+A==
caps: [mon] allow r
caps: [osd] allow class-read object_prefix rbd_children, allow rwx
pool=volumes
Thanks
Darren
On 10 September
Darren,
I can confirm Copy on Write (show_image_direct_url = True) does work in
Grizzly.
It sounds like you are close. To check permissions, run 'ceph auth
list', and reply with "client.images" and "client.volumes" (or whatever
keys you use in Glance and Cinde
perf2 appear very
promising.
Thanks for your work! I'll report back tomorrow if I have any new results.
Thanks,
Mike Dawson
Co-Founder & Director of Cloud Architecture
Cloudapt LLC
6330 East 75th Street, Suite 170
Indianapolis, IN 46250
On 8/29/2013 2:52 PM, Oliver Daudey wrote:
Hey Mark
Jumping in pretty late on this thread, but I can confirm much higher CPU
load on ceph-osd using 0.67.1 compared to 0.61.7 under a write-heavy RBD
workload. Under my workload, it seems like it might be 2x-5x higher CPU
load per process.
Thanks,
Mike Dawson
On 8/22/2013 4:41 AM, Oliver Daudey
-acb4-2d32b4e0d3be
mon_initial_members = ubuntu3
mon_host = 10.147.41.3
#auth_supported = cephx
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd_journal_size = 1024
filestore_xattr_use_omap = true
-Original Message-
From: Mike Dawson [mailto:mik
Looks like you didn't get osd.0 deployed properly. Can you show:
- ls /var/lib/ceph/osd/ceph-0
- cat /etc/ceph/ceph.conf
Thanks,
Mike Dawson
Co-Founder & Director of Cloud Architecture
Cloudapt LLC
6330 East 75th Street, Suite 170
Indianapolis, IN 46250
On 8/8/2013 9:13 AM, Sur
Steffan,
It works for me. I have:
user@node:/etc/ceph# cat /etc/glance/glance-api.conf | grep rbd
default_store = rbd
# glance.store.rbd.Store,
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = images
rbd_store_pool = images
rbd_store_chunk_size = 4
Thanks,
Mike Dawson
SDs)
- rbd cache = true and cache=writeback
- qemu 1.4.0 1.4.0+dfsg-1expubuntu4
- Ubuntu Raring with 3.8.0-25-generic
This issue is reproducible in my environment, and I'm willing to run any
wip branch you need. What else can I provide to help?
Thanks,
Mike Dawson
On 8/5/2013 3:48 AM, Stefan
On 8/5/2013 12:51 PM, Brian Candler wrote:
On 05/08/2013 17:15, Mike Dawson wrote:
Short answer: Ceph generally is used with multiple OSDs per node. One
OSD per storage drive with no RAID is the most common setup. At 24- or
36-drives per chassis, there are several potential bottlenecks to
/Erasure_encoding_as_a_storage_backend
[4]:
http://wiki.ceph.com/01Planning/02Blueprints/Dumpling/Erasure_encoding_as_a_storage_backend
[5]: http://www.inktank.com/about-inktank/roadmap/
Cheers,
Mike Dawson
On 8/5/2013 9:50 AM, Brian Candler wrote:
I am looking at evaluating ceph for use with
debug rbd = 20, debug ms = 1, and debug objectcacher = 30 that would be
great"
We'll do that over the weekend. If you could as well, we'd love the help!
[1] http://www.gammacode.com/kvm/wedged-with-timestamps.txt
[2] http://www.gammacode.com/kvm/not-wedged.txt
Thanks,
Mike Daws
are required anymore
though. See some history here:
http://tracker.ceph.com/issues/4895
Thanks,
Mike Dawson
Co-Founder & Director of Cloud Architecture
Cloudapt LLC
6330 East 75th Street, Suite 170
Indianapolis, IN 46250
On 8/1/2013 6:52 PM, Jeppesen, Nelson wrote:
My Mon store.db has been a
On 7/31/2013 3:34 PM, Greg Poirier wrote:
On Wed, Jul 31, 2013 at 12:19 PM, Mike Dawson <mike.daw...@cloudapt.com> wrote:
Due to the speed of releases in the Ceph project, I feel having
separate physical hardware is the safer way to go, especially in
light of your ment
production services.
A separate non-production cluster will allow you to test and validate
new versions (including point releases within a stable series) before
you attempt to upgrade your production cluster.
Cheers,
Mike Dawson
Co-Founder & Director of Cloud Architecture
Cloudapt LLC
6330
# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok version
{"version":"0.61.7"}
Also, I use 'service ceph restart' on Ubuntu 13.04 running a mkcephfs
deployment. It may be different when using ceph-deploy.
Thanks,
Mike Dawson
Co-Founder & Director of
You can specify the uuid in the secret.xml file like:
bdf77f5d-bf0b-1053-5f56-cd76b32520dc
client.volumes secret
Then use that same uuid on all machines in cinder.conf:
rbd_secret_uuid=bdf77f5d-bf0b-1053-5f56-cd76b32520dc
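(For anyone following along, a minimal secret.xml sketch using that same
uuid; the usage name is just a label:
<secret ephemeral='no' private='no'>
  <uuid>bdf77f5d-bf0b-1053-5f56-cd76b32520dc</uuid>
  <usage type='ceph'>
    <name>client.volumes secret</name>
  </usage>
</secret>
then on each compute node:
# virsh secret-define --file secret.xml
# virsh secret-set-value --secret bdf77f5d-bf0b-1053-5f56-cd76b32520dc --base64 $(ceph auth get-key client.volumes))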
Also, the column you are referring to in the Open
, Darryl Bond wrote:
Thanks for your prompt response.
Given that my mon.c /var/lib/ceph/mon/ceph-c is currently populated,
should I delete its contents after removing the monitor and before
re-adding it?
Darryl
On 06/26/13 12:50, Mike Dawson wrote:
Darryl,
I've seen this issue a few ti
Darryl,
I've seen this issue a few times recently. I believe Joao was looking
into it at one point, but I don't know if it has been resolved (Any news
Joao?). Others have run into it too. Look closely at:
http://tracker.ceph.com/issues/4999
http://irclogs.ceph.widodh.nl/index.php?date=2013-06
Behind a registration form, but iirc, this is likely what you are
looking for:
http://www.inktank.com/resource/dreamcompute-architecture-blueprint/
- Mike
On 5/31/2013 3:26 AM, Gandalf Corvotempesta wrote:
In reference architecture PDF, downloadable from your website, there was
some reference
Jeff,
Perhaps these?
http://tracker.ceph.com/issues/4834
http://ceph.com/packages/qemu-kvm/
- Mike
On 6/4/2013 8:16 AM, Jeff Bachtel wrote:
Hijacking (because it's related): a couple weeks ago on IRC it was
indicated a repo with these (or updated) qemu builds for CentOS should
be coming soon
.
Thanks for the correction Sylvain.
- Mike
On 5/21/2013 8:57 AM, Mike Dawson wrote:
Sylvain,
I can confirm I see a similar traffic pattern.
Any time I have lots of writes going to my cluster (like heavy writes
from RBD or remapping/backfilling after losing an OSD), I see all sorts
of monitor
Sylvain,
I can confirm I see a similar traffic pattern.
Any time I have lots of writes going to my cluster (like heavy writes
from RBD or remapping/backfilling after losing an OSD), I see all sorts
of monitor issues.
If my monitor leveldb store.db directories grow past some unknown point
(m
Anyone running 0.61.1,
Watch out for high disk usage due to a file likely located at
/var/log/ceph/ceph-mon.<id>.tdump. This file contains debugging
for monitor transactions. This debugging was added in the past week or
so to track down another anomaly. It is not necessary (or useful unless
you a
kfs option.
#devs = {path-to-device}
[osd.1]
host = ceph
#devs = {path-to-device}
[mds.a]
host = ceph
On Wed, May 1, 2013 at 12:14 PM, Mike Dawson <mike.daw...@scholarstack.com> wrote:
Wyatt,
Please post your ceph.conf.
- mike
On 5/1/2013 12:06 P
Wyatt,
Please post your ceph.conf.
- mike
On 5/1/2013 12:06 PM, Wyatt Gorman wrote:
Hi everyone,
I'm setting up a test ceph cluster and am having trouble getting it
running (great for testing, huh?). I went through the installation on
Debian squeeze, had to modify the mkcephfs script a bit be
Sage,
I confirm this issue. The requested info is listed below.
*Note that due to the pre-Cuttlefish monitor sync issues, this
deployment has been running three monitors (mon.b and mon.c working
properly in quorum. mon.a stuck forever synchronizing).
For the past two hours, no OSD processes
Mike,
I use a process like:
crushtool -c new-crushmap.txt -o new-crushmap && ceph osd setcrushmap -i
new-crushmap
I did not attempt to validate your crush map. If that command fails, I
would scrutinize your crushmap for validity/correctness.
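(If you need the starting text file, a sketch of pulling and decompiling
the live map first:
# ceph osd getcrushmap -o current-crushmap
# crushtool -d current-crushmap -o new-crushmap.txt
then edit, compile, and inject as above.)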
Once you have the new crushmap injected, you can
Mandell,
Not sure if you can start with a partition to see which OSD it belongs
to, but you can start with the OSDs to see what journal partition
belongs to it:
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep
osd_journal | grep -v size
- Mike
On 4/24/2013 9:05 PM, Man
On 4/19/2013 11:43 AM, Gregory Farnum wrote:
On Thu, Apr 18, 2013 at 7:59 PM, Mike Dawson wrote:
Greg,
Looks like Sage has a fix for this problem. In case it matters, I have seen
a few cases that conflict with your notes in this thread and the bug report.
I have seen the bug exclusively on
Greg,
Looks like Sage has a fix for this problem. In case it matters, I have
seen a few cases that conflict with your notes in this thread and the
bug report.
I have seen the bug exclusively on new Ceph installs (without upgrading
from bobtail), so it is not isolated to upgrades.
Further,
Matthew,
I have seen the same behavior on 0.59. Ran through some troubleshooting
with Dan and Joao on March 21st and 22nd, but I haven't looked at it
since then.
If you look at running processes, I believe you'll see an instance of
ceph-create-keys start each time you start a Monitor. So, if