I have 2 Dell systems with PERC H710 RAID cards. Those are very good high-end
cards, but they do not support JBOD.
They support RAID 0, 1, 5, 6, 10, 50 and 60.
lspci shows them as: LSI Logic / Symbios Logic MegaRAID SAS 2208
[Thunderbolt] (rev 05)
The firmware Dell uses on the card does not support jbod.
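The usual workaround on cards like this is to export every disk as a single-drive RAID-0 virtual disk, so each spindle still ends up as its own OSD. A rough, untested sketch with MegaCli (the enclosure:slot IDs are placeholders, and this assumes MegaCli talks to the Dell firmware):
  # find the enclosure:slot IDs of the drives
  MegaCli -PDList -aALL | egrep 'Enclosure Device ID|Slot Number'
  # create one single-drive RAID-0 virtual disk per spindle (IDs are examples)
  MegaCli -CfgLdAdd -r0 [32:0] -a0
  MegaCli -CfgLdAdd -r0 [32:1] -a0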
to:ceph-users-boun...@lists.ceph.com] *On Behalf
> Of *Robert Fantini
> *Sent:* Wednesday, July 16, 2014 1:55 PM
> *To:* ceph-users@lists.ceph.com
> *Subject:* [ceph-users] PERC H710 raid card
>
>
>
> I've 2 dell systems with PERC H710 raid cards. Those are very good end
Hi,
the next Ceph MeetUp in Berlin, Germany, happens on July 28.
http://www.meetup.com/Ceph-Berlin/events/195107422/
Regards
--
Robert Sander
Heinlein Support GmbH
Linux: Akademie - Support - Hosting
http://www.heinlein-support.de
Tel: 030-405051-43
Fax: 030-405051-19
Mandatory disclosures per §35a
Hello.
In this set up:
PowerEdge R720
Raid: Perc H710 eight-port, 6Gb/s
OSD drives: qty 4: Seagate Constellation ES.3 ST2000NM0023 2TB 7200 RPM
128MB Cache SAS 6Gb/s
Would it make sense to use these good SAS drives in RAID-1 for the journal?
Western Digital XE WD3001BKHG 300GB 10000 RPM 32MB Cache
Hello Christian.
Our current setup has 4 OSDs per node. When a drive fails, the
cluster is almost unusable for data entry. I want to change our setup
so that this never happens under any circumstances. We used DRBD for 8 years,
and our main concern is high availability. 1200bps Modem spe
I've a question regarding advice from these threads:
https://mail.google.com/mail/u/0/#label/ceph/1476b93097673ad7?compose=1476ec7fef10fd01
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg11011.html
Our current setup has 4 OSDs per node. When a drive fails, the
cluster is almos
ying DRBD where it makes more sense
> (IOPS/speed), while migrating everything else to Ceph.
>
> Anyway, lets look at your mail:
>
> On Fri, 25 Jul 2014 14:33:56 -0400 Robert Fantini wrote:
>
> > I've a question regarding advice from these threads:
> >
> https:/
I have 3 hosts that I want to use to test a new setup...
Currently they have 3-4 OSDs each.
Could you suggest a fast way to remove all the OSDs?
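A hedged sketch of the usual teardown, looped over the OSD ids on one host (the ids and the init command are assumptions; adjust for your distro):
  for id in 0 1 2 3; do                      # substitute the OSD ids on this host
      ceph osd out $id
      /etc/init.d/ceph stop osd.$id          # or: service ceph stop osd.$id
      ceph osd crush remove osd.$id
      ceph auth del osd.$id
      ceph osd rm $id
  done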
On Mon, Jul 28, 2014 at 3:49 AM, Christian Balzer wrote:
>
> Hello,
>
> On Sun, 27 Jul 2014 18:20:43 -0400 Robert Fant
>
> On Mon, 28 Jul 2014 04:19:16 -0400 Robert Fantini wrote:
>
> > I have 3 hosts that i want to use to test new setup...
> >
> > Currently they have 3-4 OSD's each.
> >
> How did you create the current cluster?
>
> ceph-deploy or something withing
r will
> not serve those requests if quorum is not in place.
>
> -Joao
>
>
>
>> On 28/07/2014 12:22, Joao Eduardo Luis wrote:
>>
>>> On 07/28/2014 08:49 AM, Christian Balzer wrote:
>>>
>>>>
>>>> Hello,
>>>>
>>
: any other ideas on how to increase availability are welcome.
On Mon, Jul 28, 2014 at 12:29 PM, Christian Balzer wrote:
> On Mon, 28 Jul 2014 11:22:38 +0100 Joao Eduardo Luis wrote:
>
> > On 07/28/2014 08:49 AM, Christian Balzer wrote:
> > >
> > > Hello,
>
uired to allow a single room to operate.
>
> There's no way you can do a 3/2 MON split that doesn't risk the two nodes
> being up and unable to serve data while the three are down so you'd need to
> find a way to make it a 2/2/1 split instead.
>
> -Michael
>
>
anced?
If not, I'll stick with 2 in each room until I understand how to configure things.
On Mon, Jul 28, 2014 at 9:19 PM, Christian Balzer wrote:
>
> On Mon, 28 Jul 2014 18:11:33 -0400 Robert Fantini wrote:
>
> > "target replication level of 3"
> > " with a mi
wrote:
>
> Hello,
>
> On Tue, 29 Jul 2014 06:33:14 -0400 Robert Fantini wrote:
>
> > Christian -
> > Thank you for the answer, I'll get around to reading 'Crush Maps ' a
> > few times , it is important to have a good understanding of ceph part
m at the same time like 2 out of 3).
I've read through the online manual, so now I'm looking for personal
perspectives that you may have.
Thanks,
Robert LeBlanc
This may be a better question for Federico. I've pulled the systemd stuff
from git and I have it working, but only if I have the volumes listed in
fstab. Is this the intended way that systemd will function for now or am I
missing a step? I'm pretty new to systemd.
Thanks,
Robert LeBlan
OK, I don't think the udev rules are on my machines. I built the cluster
manually and not with ceph-deploy. I must have missed adding the rules in
the manual or the Packages from Debian (Jessie) did not create them.
Robert LeBlanc
On Mon, Aug 18, 2014 at 5:49 PM, Sage Weil wrote:
> On
ht, a
udev-trigger should mount and activate the OSD, and I won't have to
manually run the init.d script?
Thanks,
Robert LeBlanc
On Tue, Aug 19, 2014 at 9:21 AM, Sage Weil wrote:
> On Tue, 19 Aug 2014, Robert LeBlanc wrote:
> > OK, I don't think the udev rules are on my machi
is if the cluster (2+1) is HEALTHY,
does the write return after 2 of the OSDs (itself and one replica) complete
the write or only after all three have completed the write? We are planning
to try to do some testing on this as well if a clear answer can't be found.
Thank you,
Robert LeBlan
Thanks, your responses have been helpful.
On Tue, Aug 19, 2014 at 1:48 PM, Gregory Farnum wrote:
> On Tue, Aug 19, 2014 at 11:18 AM, Robert LeBlanc
> wrote:
> > Greg, thanks for the reply, please see in-line.
> >
> >
> > On Tue, Aug 19, 2014 at 11:
hing there.
Robert LeBlanc
On Fri, Aug 22, 2014 at 12:41 PM, Andrei Mikhailovsky
wrote:
> Hello guys,
>
> I am planning to perform regular rbd pool off-site backup with rbd export
> and export-diff. I've got a small ceph firefly cluster with an active
> writeback cache pool made o
ndrei
>
>
> ----- Original Message -
> From: "Robert LeBlanc"
> To: "Andrei Mikhailovsky"
> Cc: ceph-users@lists.ceph.com
> Sent: Friday, 22 August, 2014 8:21:08 PM
> Subject: Re: [ceph-users] pool with cache pool and rbd export
>
>
> My understan
I believe the scrubbing happens at the pool level; when the backend pool is
scrubbed, it is independent of the cache pool. It would be nice to get some
definitive answers from someone who knows a lot more.
Robert LeBlanc
On Fri, Aug 22, 2014 at 3:16 PM, Andrei Mikhailovsky
wrote:
> Does t
s our reasoning sound in this regard?
Thanks,
Robert LeBlanc
On Wed, Aug 27, 2014 at 4:15 PM, Sage Weil wrote:
> On Wed, 27 Aug 2014, Robert LeBlanc wrote:
> > I'm looking for a way to prioritize the heartbeat traffic higher than the
> > storage and replication traffic. I would like to keep the ceph.conf as
> > simple as
Interesting concept. What if this was extended to an external message bus
system like RabbitMQ, ZeroMQ, etc?
Robert LeBlanc
Sent from a mobile device please excuse any typos.
On Aug 27, 2014 7:34 PM, "Matt W. Benjamin" wrote:
> Hi,
>
> I wasn't thinking of an interface
How many PGs do you have in your pool? This should be about 100/OSD. If it
is too low, you could get an imbalance. I don't know the consequence of
changing it on such a full cluster. The default values are only good for
small test environments.
Robert LeBlanc
Sent from a mobile device p
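As a rough sketch of checking and bumping the count (pool name and numbers are only examples; pg_num can be raised but never lowered, so go up in small steps on a full cluster):
  ceph osd pool get rbd pg_num
  # aim for roughly (100 * number_of_OSDs) / replica_count, rounded to a power of two
  ceph osd pool set rbd pg_num 1024
  ceph osd pool set rbd pgp_num 1024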
://www.oszimt.de/.
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Mandatory disclosures per §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Managing Director: Peer Heinlein -- Registered office
According to http://ceph.com/docs/master/rados/operations/crush-map/, you
should be able to construct a clever use of 'step take' and 'step choose'
rules in your CRUSH map to force one copy to a particular bucket and allow
the other two copies to be chosen elsewhere. I was looking for a way to
have
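Something along these lines is what that doc suggests; the bucket names (room1, default) are made up, and it is worth checking with crushtool --test that the second take cannot double up on a room1 host:
  rule one_copy_in_room1 {
          ruleset 2
          type replicated
          min_size 2
          max_size 3
          step take room1                          # force the first copy into this bucket
          step chooseleaf firstn 1 type host
          step emit
          step take default                        # remaining copies chosen elsewhere
          step chooseleaf firstn -1 type host
          step emit
  }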
ill be the
best option, but it can still use some performance tweaking with small
reads before it will be really viable for us.
Robert LeBlanc
On Thu, Sep 4, 2014 at 10:21 AM, Dan Van Der Ster wrote:
> Dear Cephalopods,
>
> In a few weeks we will receive a batch of 200GB Intel DC S3700’s
;t want to make any big changes until we have a better idea of what the
future looks like. I think the Enterprise versions of Ceph (n-1 or n-2)
will be a bit too old from where we want to be, which I'm sure will work
wonderfully on Red Hat, but how will n.1, n.2 or n.3 run?
Robert LeBlanc
On T
yet. Do you know if you can use an md RAID1 as a cache
> dev? And is the graceful failover from wb to writethrough actually working
> without data loss?
>
> Also, write behind sure would help the filestore, since I'm pretty sure
> the same 4k blocks are being overwritten many t
gh.
Are the patches you talk about just backports from later kernels or
something different?
Robert LeBlanc
On Thu, Sep 4, 2014 at 1:13 PM, Stefan Priebe wrote:
> Hi Dan, hi Robert,
>
> Am 04.09.2014 21:09, schrieb Dan van der Ster:
>
> Thanks again for all of your input.
We are still in the middle of testing things, but so far we have had more
improvement with SSD journals than the OSD cached with bcache (five OSDs
fronted by one SSD). We still have yet to test if adding a bcache layer in
addition to the SSD journals provides any additional improvements.
Robert
Sorry this is delayed, catching up. I believe this was talked about in
the last Ceph summit. I think this was the blueprint.
https://wiki.ceph.com/Planning/Blueprints/Hammer/Towards_Ceph_Cold_Storage
On Wed, Jan 14, 2015 at 9:35 AM, Martin Millnert wrote:
> Hello list,
>
> I'm currently trying to
We have had good luck with letting udev do its thing on CentOS 7.
On Wed, Feb 18, 2015 at 7:46 PM, Anthony Alba wrote:
> Hi Cephers,
>
> What is your "best practice" for starting up OSDs?
>
> I am trying to determine the most robust technique on CentOS 7 where I
> have too much choice:
>
> udev/g
Hi,
I would like to invite you to our next MeetUp in Berlin on March 23:
http://www.meetup.com/Ceph-Berlin/events/219958751/
Stephan Seitz will talk about HA-iSCSI with Ceph.
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel
We use ceph-disk without any issues on CentOS 7. If you want to do a
manual deployment, verify you aren't missing any steps in
http://ceph.com/docs/master/install/manual-deployment/#long-form.
On Tue, Feb 24, 2015 at 5:46 PM, Barclay Jameson
wrote:
> I have tried to install ceph using ceph-deploy
would like to try it to offer some feedback on your
question.
Thanks,
Robert LeBlanc
On Wed, Feb 25, 2015 at 12:31 PM, Sage Weil wrote:
> Hey,
>
> We are considering switching to civetweb (the embedded/standalone rgw web
> server) as the primary supported RGW frontend instead of
I think that your problem lies with systemd (even though you are using
SysV syntax, systemd is really doing the work). Systemd does not like
multiple arguments and I think this is why it is failing. There is
supposed to be some work done to get systemd working ok, but I think
it has the limitation
Cool, I'll see if we have some cycles to look at it.
On Wed, Feb 25, 2015 at 2:49 PM, Sage Weil wrote:
> On Wed, 25 Feb 2015, Robert LeBlanc wrote:
>> We tried to get radosgw working with Apache + mod_fastcgi, but due to
>> the changes in radosgw, Apache, mode_*cgi, etc a
s.
Thanks,
Robert LeBlanc
teresting part is that "ceph-disk activate" apparently does it
> correctly. Even after reboot, the services start as they should.
>
> On Wed, Feb 25, 2015 at 3:54 PM, Robert LeBlanc
> wrote:
>>
>> I think that your problem lies with systemd (even though you are u
Thanks, we were able to get it up and running very quickly. If it
performs well, I don't see any reason to use Apache+fast_cgi. I don't
have any problems just focusing on civetweb.
On Wed, Feb 25, 2015 at 2:49 PM, Sage Weil wrote:
> On Wed, 25 Feb 2015, Robert LeBlanc wrote:
>&
Thu, Feb 26, 2015 at 11:39 AM, Deneau, Tom wrote:
> Robert --
>
> We are still having trouble with this.
>
> Can you share your [client.radosgw.gateway] section of ceph.conf and
> were there any other special things to be aware of?
>
> -- Tom
>
> -Original Mess
+1 for proxy. Keep civetweb lean and mean, and if people need
"extras", let the proxy handle them. Proxies are easy to set up, and a
simple example could be included in the documentation.
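Something as small as this might do for the docs; the port and section name are assumptions, and the proxy half is just a plain reverse proxy in front of civetweb (nginx shown, but anything similar works):
  # ceph.conf: run radosgw on civetweb
  [client.radosgw.gateway]
      rgw frontends = civetweb port=7480

  # nginx: handle SSL, auth, logging or other "extras" and pass through
  server {
      listen 80;
      location / {
          proxy_pass http://127.0.0.1:7480;
          proxy_set_header Host $host;
      }
  }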
On Thu, Feb 26, 2015 at 11:43 AM, Wido den Hollander wrote:
>
>
>> On 26 Feb. 2015, at 18:22, Sage Weil
Does deleting/reformatting the old osds improve the performance?
On Fri, Feb 27, 2015 at 6:02 AM, Corin Langosch
wrote:
> Hi guys,
>
> I'm using ceph for a long time now, since bobtail. I always upgraded every
> few weeks/ months to the latest stable
> release. Of course I also removed some osds
Also sending to the devel list to see if they have some insight.
On Wed, Feb 25, 2015 at 3:01 PM, Robert LeBlanc wrote:
> I tried finding an answer to this on Google, but couldn't find it.
>
> Since BTRFS can parallel the journal with the write, does it make
> sense to have t
I would be inclined to shut down both OSDs in a node, let the cluster
recover. Once it is recovered, shut down the next two, let it recover.
Repeat until all the OSDs are taken out of the cluster. Then I would
set nobackfill and norecover. Then remove the hosts/disks from the
CRUSH then unset nobac
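Roughly, as an untested sketch of that last part (the actual CRUSH removals are elided):
  # once the last pair of OSDs has been emptied and the cluster is healthy again
  ceph osd set nobackfill
  ceph osd set norecover
  # ... remove the drained hosts/disks from the CRUSH map here ...
  ceph osd unset nobackfill
  ceph osd unset norecover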
If I remember right, someone has done this on a live cluster without
any issues. I seem to remember that it had a fallback mechanism if the
OSDs couldn't be reached on the cluster network to contact them on the
public network. You could test it pretty easily without much impact.
Take one OSD that h
> Thanks for the tip of course!
> Andrija
>
> On 3 March 2015 at 18:34, Robert LeBlanc wrote:
>>
>> I would be inclined to shut down both OSDs in a node, let the cluster
>> recover. Once it is recovered, shut down the next two, let it recover.
>> Repeat until all t
>> that are stopped (and cluster resynced after that)?
>>
>> Thx again for the help
>>
>> On 4 March 2015 at 17:44, Robert LeBlanc wrote:
>>>
>>> If I remember right, someone has done this on a live cluster without
>>> any issues. I seem to reme
I can't help much on the MDS front, but here is some answers and my
view on some of it.
On Wed, Mar 4, 2015 at 1:27 PM, Datatone Lists wrote:
> I have been following ceph for a long time. I have yet to put it into
> service, and I keep coming back as btrfs improves and ceph reaches
> higher versi
David,
You will need to raise the limit of open files in the Linux system. Check
/etc/security/limits.conf. It is explained somewhere in the docs, and the
autostart scripts 'fix' the issue for most people. When I did a manual
deploy for the same reasons you are, I ran into this too.
Robe
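For reference, a minimal sketch of the two places involved (the numbers are only examples):
  # /etc/security/limits.conf
  *    soft    nofile    131072
  *    hard    nofile    131072

  # ceph.conf, [global] section -- what the init scripts apply for the daemons
  max open files = 131072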
:53 PM, Andrija Panic
wrote:
> Hi Robert,
>
> it seems I have not listened well on your advice - I set osd to out,
> instead of stoping it - and now instead of some ~ 3% of degraded objects,
> now there is 0.000% of degraded, and arround 6% misplaced - and rebalancing
> is happe
I see that Jian Wen has done work on this for 0.94. I tried looking through
the code to see if I can figure out how to configure this new option, but
it all went over my head pretty quick.
Can I get a brief summary on how to set the priority of heartbeat packets
or where to look in the code to fig
Hidden HTML ... trying again...
-- Forwarded message --
From: Robert LeBlanc
Date: Fri, Mar 6, 2015 at 5:20 PM
Subject: Re: [ceph-users] Prioritize Heartbeat packets
To: "ceph-users@lists.ceph.com" ,
ceph-devel
I see that Jian Wen has done work on this for 0.94. I tri
e commit, this ought to do the trick:
>
> osd heartbeat use min delay socket = true
>
> On 07/03/15 01:20, Robert LeBlanc wrote:
>>
>> I see that Jian Wen has done work on this for 0.94. I tried looking
>> through the code to see if I can figure out how to configure th
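In ceph.conf terms that presumably boils down to just this (untested here):
  [osd]
      osd heartbeat use min delay socket = true   # lets heartbeat packets be prioritized ahead of storage/replication traffic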
, etc), storage traffic (no
marking), and replication (scavenger class). We are interested to see
how things pan out.
Thanks,
Robert
On Mon, Mar 9, 2015 at 8:58 PM, Jian Wen wrote:
> Only OSD calls set_socket_priority().
> See https://github.com/ceph/ceph/pull/3353
>
> On Tue, Mar 10
ed? I
don't remember if you said you checked it.
Robert LeBlanc
Sent from a mobile device please excuse any typos.
On Mar 11, 2015 8:08 PM, "Jesus Chavez (jeschave)"
wrote:
> Thanks Steffen I have followed everything not sure what is going on, the
> mon keyring and client adm
anual)
and is outlined as follows:
1. Copy the Monitor key and Monitor map from a running Monitor to the
new monitor.
2. Create a monitor directory on the new monitor.
3. Add the new monitor to the Monitor map.
4. Start the new monitor.
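A hedged command sketch of those four steps (the monitor id "mon3", IP and paths are placeholders):
  # 1. copy the mon keyring and current monmap from a running monitor
  ceph auth get mon. -o /tmp/mon.keyring
  ceph mon getmap -o /tmp/monmap
  # 2. create and initialize the data directory on the new monitor
  ceph-mon -i mon3 --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
  # 3. add the new monitor to the monmap
  ceph mon add mon3 192.168.0.13:6789
  # 4. start it
  ceph-mon -i mon3 --public-addr 192.168.0.13:6789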
On Thu, Mar 12, 2015 at 9:58 AM, Jesus
5538883255
>
> CCIE - 44433
>
> On Mar 12, 2015, at 10:06 AM, Robert LeBlanc wrote:
>
> Add the new monitor to the Monitor map.
>
>
tion Information.
>
>
>
>
> On Mar 12, 2015, at 10:33 AM, Jesus Chavez (jeschave)
> wrote:
>
> Great :) so just 1 point more, step 4 in adding monitors (Add the
> new monitor to the Monitor map.) this command actually runs in the new
> monitor right?
>
> Thank you so much!
>
>
> Jesus Chavez
> SYSTEMS ENGINEER-C.SALES
>
> jesch...@cisco.com
> Phone: +52 55 5267 3146
> Mobile: +51 1 5538883255
>
> CCIE - 44433
>
> On Mar 12, 2015, at 10:06 AM, Robert LeBlanc wrote:
>
> Add the new monitor to the Monitor map.
>
>
>
The primary OSD for an object is responsible for the replication. In a
healthy cluster the workflow is as such:
1. Client looks up primary OSD in CRUSH map
2. Client sends object to be written to primary OSD
3. Primary OSD looks up replication OSD(s) in its CRUSH map
4. Primary OSD con
I'm not sure why you are having such a hard time. I added monitors (and
removed them) on CentOS 7 by following what I had. The thing that kept
tripping me up was firewalld. Once I either shut it off or created a
service for Ceph, it worked fine.
What is in /var/log/ceph/ceph-mon.tauro.log when
We all get burned by the firewall at one time or another. Hence the name
'fire'wall! :) I'm glad you got it working.
On Thu, Mar 12, 2015 at 2:53 PM, Jesus Chavez (jeschave) wrote:
> This is awkard Robert all this time was the firewall :( I cant believe I
> spent 2 days
Two monitors don't work very well and really don't buy you anything. I
would either add another monitor or remove one. Paxos is most effective
with an odd number of monitors.
I don't know about the problem you are experiencing and how to help you. An
even number of monitors shoul
Having two monitors should not be causing the problem you are seeing like
you say. What is in /var/log/ceph/ceph.mon.*.log?
Robert LeBlanc
Sent from a mobile device please excuse any typos.
On Mar 12, 2015 7:39 PM, "Georgios Dimitrakakis"
wrote:
> Hi Robert!
>
> Thanks for
n run ceph-disk activate. Ceph-disk is just a script so you can open it
up and take a look.
So I guess it depends on which "automatically" you want to happen.
Robert LeBlanc
Sent from a mobile device please excuse any typos.
On Mar 12, 2015 9:54 PM, "Jesus Chavez (jeschave)"
wrote:
>
That is correct; you make a tradeoff between space, performance and
resiliency. By reducing replication from 3 to 2, you will get more space
and likely more performance (less overhead from third copy), but it comes
at the expense of being able to recover your data when there are multiple
failures.
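If you do drop to 2 copies, it is just pool settings (the pool name is an example); with size 2 you probably also want min_size 1 so a single failure does not block writes:
  ceph osd pool set rbd size 2
  ceph osd pool set rbd min_size 1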
We have a test cluster with IB. We have both networks over IPoIB on the
same IP subnet though (no cluster network configuration).
On Tue, Mar 17, 2015 at 12:02 PM, German Anders
wrote:
> Hi All,
>
> Does anyone have Ceph implemented with Infiniband for Cluster and
> Public network?
>
> Th
s). Keep a
look out for progress on XIO on the mailing list to see when native IB
support will be in Ceph.
On Tue, Mar 17, 2015 at 12:13 PM, German Anders
wrote:
> Hi Robert,
>
> How are you? Thanks a lot for the quick response. I would like to
> know if you could share some
Udev already provides some of this for you. Look in /dev/disk/by-*.
You can reference drives by UUID, id or path (for
SAS/SCSI/FC/iSCSI/etc) which will provide some consistency across
reboots and hardware changes.
On Thu, Mar 19, 2015 at 1:10 PM, Colin Corr wrote:
> Greetings Cephers,
>
> I have
cally finds the volume with udev, mounts it
in the correct location and accesses the journal on the right disk.
It also may be a limitation on the version of ceph-deploy/ceph-disk
you are using.
On Thu, Mar 19, 2015 at 5:54 PM, Colin Corr wrote:
> On 03/19/2015 12:27 PM, Robert LeBlanc wrote:
You can create CRUSH rulesets and then assign pools to different rulesets.
http://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds
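Roughly (the ruleset ids and pool names below are made up): build one ruleset per root in the CRUSH map, then point each pool at the ruleset it should use:
  # assuming the CRUSH map defines ruleset 0 (x86 root) and ruleset 1 (ARM root)
  ceph osd pool set x86pool crush_ruleset 0
  ceph osd pool set armpool crush_ruleset 1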
On Thu, Mar 19, 2015 at 7:28 PM, Garg, Pankaj
wrote:
> Hi,
>
>
>
> I have a Ceph cluster with both ARM and x86 based server
Removing the OSD from the CRUSH map and deleting the auth key is how you
force remove an OSD. The OSD can no longer participate in the cluster, even
if it does come back to life. All clients forget about the OSD when the new
CRUSH map is distributed.
On Fri, Mar 20, 2015 at 11:19 AM, Jesus Chavez
We tested bcache and abandoned it for two reasons.
1. Didn't give us any better performance than journals on SSD.
2. We had lots of corruption of the OSDs and were rebuilding them
frequently.
Since removing them, the OSDs have been much more stable.
On Fri, Mar 20, 2015 at 4:03 AM, Nick
The weight can be based on anything: size, speed, capability, some random
value, etc. The important thing is that it makes sense to you and that you
are consistent.
Ceph by default (ceph-disk and I believe ceph-deploy) takes the approach of
using size. So if you use a different weighting scheme, yo
> This seems to be a fairly consistent problem for new users.
>
> The create-or-move is adjusting the crush weight, not the osd weight.
> Perhaps the init script should set the defaultweight to 0.01 if it's <= 0?
>
> It seems like there's a downside to this, but I don't
> jesch...@cisco.com
> Phone: +52 55 5267 3146
> Mobile: +51 1 5538883255
>
> CCIE - 44433
>
> On Mar 20, 2015, at 2:21 PM, Robert LeBlanc wrote:
>
> Removing the OSD from the CRUSH map and deleting the
Yes, at this point, I'd export the CRUSH, edit it and import it back in.
What version are you running?
Robert LeBlanc
Sent from a mobile device please excuse any typos.
On Mar 20, 2015 4:28 PM, "Jesus Chavez (jeschave)"
wrote:
> thats what you sayd?
>
> [root@capri
a on it we were able to format 40 OSDs
in under 30 minutes (we formatted a whole host at a time because we knew
that was safe) with a few little online scripts.
Short answer is don't be afraid to do it this way.
Robert LeBlanc
Sent from a mobile device please excuse any typos.
On Mar 2
I don't have a fresh cluster on hand to double check, but the default is to
select a different host for each replica. You can adjust that to fit your
needs; we are using cabinet as the selection criterion so that we can lose
an entire cabinet of storage and still function.
In order to store multipl
I was trying to decompile and edit the CRUSH map to adjust the CRUSH
rules. My first attempt created a map that would decompile, but I
could not recompile the CRUSH map even if I didn't modify it. When trying to
download the CRUSH map fresh, now the decompile fails.
[root@nodezz ~]# ceph osd getmap -o map.c
For some reason it doesn't like the rack definition. I can move things
around, like putting root before it and it always chokes on the first
rack definition no matter which one it is.
On Mon, Mar 23, 2015 at 12:53 PM, Robert LeBlanc wrote:
> I was trying to decompile and edit the C
which we are on). Saving for posterity's sake. Thanks Sage!
On Mon, Mar 23, 2015 at 1:09 PM, Robert LeBlanc wrote:
> Ok, so the decompile error is because I didn't download the CRUSH map
> (found that out using hexdump), but I still can't compile an
> unmodified CRUSH
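For the archives, the round trip that should work (file names are arbitrary; the earlier decompile failure came from grabbing the OSDMap with "ceph osd getmap" instead of the CRUSH map):
  ceph osd getcrushmap -o crush.bin
  crushtool -d crush.bin -o crush.txt
  # edit crush.txt ...
  crushtool -c crush.txt -o crush.new
  ceph osd setcrushmap -i crush.new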
You just need to change your rule from
step chooseleaf firstn 0 type osd
to
step chooseleaf firstn 0 type host
There will be data movement as it will want to move about half the
objects to the new host. There will be data generation as you move
from size 1 to size 2. As far as I know a deep scr
might be most important to you. We mess with
"osd max backfills". You may want to look at "osd recovery max
active", "osd recovery op priority" to name a few. You can adjust the
idle load of the cluster to perform deep scrubs, etc.
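For example (values are only illustrative; they can be injected at runtime and put in ceph.conf to persist):
  ceph tell osd.* injectargs '--osd-max-backfills 1'
  ceph tell osd.* injectargs '--osd-recovery-max-active 1'
  ceph tell osd.* injectargs '--osd-recovery-op-priority 1'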
On Mon, Mar 23, 2015 at 5:10 PM, Dim
I'm trying to create a CRUSH ruleset and I'm using crushtool to test
the rules, but it doesn't seem to be mapping things correctly. I have two
roots, one for spindles and another for SSD. I have two rules, one for
each root. The output of crushtool on rule 0 shows objects being
mapped to SSD OSDs when
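The sort of invocation being used here, in case it helps spot the mistake (the file name and rule numbers are just examples):
  crushtool -i crush.bin --test --rule 0 --num-rep 3 --show-mappings | head
  crushtool -i crush.bin --test --rule 1 --num-rep 3 --show-statistics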
although we haven't had show-stopping issues with BTRFS, we are still
going to start on XFS. Our plan is to build a cluster as a target for our
backup system and we will put BTRFS on that to prove it in a production
setting.
Robert LeBlanc
Sent from a mobile device please excuse any typos.
On M
Is there an enumerated list of issues with snapshots on cache pools?
We currently have snapshots on a cache tier and haven't seen any
issues (development cluster). I just want to know what we should be
looking for.
On Tue, Mar 24, 2015 at 9:21 AM, Stéphane DUGRAVOT
wrote:
>
>
> __
Mar 23, 2015 at 6:08 PM, Robert LeBlanc wrote:
> I'm trying to create a CRUSH ruleset and I'm using crushtool to test
> the rules, but it doesn't seem to be mapping things correctly. I have two
> roots, one for spindles and another for SSD. I have two rules, one for
> ea
http://tracker.ceph.com/issues/11224
On Tue, Mar 24, 2015 at 12:11 PM, Gregory Farnum wrote:
> On Tue, Mar 24, 2015 at 10:48 AM, Robert LeBlanc wrote:
>> I'm not sure why crushtool --test --simulate doesn't match what the
>> cluster actually does, but the cluster seems
It doesn't look like your OSD is mounted. What do you have when you run
mount? How did you create your OSDs?
Robert LeBlanc
Sent from a mobile device please excuse any typos.
On Mar 25, 2015 1:31 AM, "oyym...@gmail.com" wrote:
> Hi,Jesus
> I encountered similar problem.
&g
I don't know much about ceph-deploy, but I know that ceph-disk has
problems "automatically" adding an SSD OSD when there are journals of
other disks already on it. I've had to partition the disk ahead of
time and pass in the partitions to make ceph-disk work.
Also, unless you are sure that the de
wrote:
> On Wed, Mar 25, 2015 at 6:06 PM, Robert LeBlanc wrote:
>> I don't know much about ceph-deploy, but I know that ceph-disk has
>> problems "automatically" adding an SSD OSD when there are journals of
>> other disks already on it. I've had to partition
down, create a new snapshot on the new
pool, point the VM to that and then flatten the RBD.
Robert LeBlanc
Sent from a mobile device please excuse any typos.
On Mar 26, 2015 5:23 PM, "Steffen W Sørensen" wrote:
>
> On 26/03/2015, at 23.13, Gregory Farnum wrote:
>
> The procedure
86_64
libcephfs1-0.93-0.el7.centos.x86_64
ceph-0.93-0.el7.centos.x86_64
ceph-deploy-1.5.22-0.noarch
[ulhglive-root@mon1 systemd]# for i in $(rpm -qa | grep ceph); do rpm
-ql $i | grep -i --color=always systemd; done
[nothing returned]
Thanks,
Robert Le
ount of
time causing the OSDs to overrun a journal or something (I know that
Ceph journals pgmap changes and such). I'm concerned that this could
be very detrimental in a production environment. There doesn't seem to
be a way to recover from this.
Any thoughts?
Thanks,
Robert LeBlanc