Hi,
I have a 3-node (2 OSDs per node) Ceph cluster, running fine, not much data,
network also fine:
Ceph 0.72.2.
When I issue the "ceph status" command, I randomly get HEALTH_OK, and
immediately after that, when repeating the command, I get HEALTH_WARN.
Example given below - these commands were issued with
Gregory Farnum wrote:
> Try running "ceph health detail" on each of the monitors. Your disk space
> thresholds probably aren't configured correctly or something.
> -Greg
>
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Tue, Jun 17, 2014 at
As stupid as I could do it...
After lowering the mon data threshold from 20% to 15%, it seems I forgot
to restart the MON service on this one node...
I apologize for bugging you, and thanks again everybody.
Andrija
On 18 June 2014 09:49, Andrija Panic wrote:
> Hi Gregory,
>
> indeed - I s
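For anyone hitting the same HEALTH_OK/HEALTH_WARN flapping later: the warning
comes from whichever mon is low on disk space, and the threshold can be checked
per monitor over the admin socket. A minimal sketch - the option name is the one
in the current docs, so double-check it for your version:
ceph health detail
ceph --admin-daemon /var/run/ceph/ceph-mon.<id>.asok config get mon_data_avail_warn
ceph --admin-daemon /var/run/ceph/ceph-mon.<id>.asok config set mon_data_avail_warn 15
The runtime change is lost on restart, so it still has to be persisted in
ceph.conf - and, as above, every mon has to be restarted to pick that up.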
Thanks Greg, seems like I'm going to update soon...
Thanks again,
Andrija
On 18 June 2014 14:06, Gregory Farnum wrote:
> The lack of warnings in ceph -w for this issue is a bug in Emperor.
> It's resolved in Firefly.
> -Greg
>
> On Wed, Jun 18, 2014 at 3:49 A
Hi,
I have an existing Ceph cluster of 3 nodes, version 0.72.2.
I'm in the process of installing Ceph on a 4th node, but now the Ceph version is
0.80.1.
Will this cause problems, running mixed Ceph versions ?
I intend to upgrade Ceph on the existing 3 nodes anyway.
Recommended steps ?
Thanks
--
Andrija Pani
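The rolling-upgrade order that is usually recommended - a sketch only, check the
release notes for your exact versions - is monitors first, then OSDs, one daemon
at a time while watching "ceph -s":
# on each monitor node, one at a time
yum update ceph
service ceph restart mon.<id>        # <id> is usually the short hostname
ceph -s                              # wait for all mons back in quorum
# then on each OSD node, one OSD at a time
service ceph restart osd.0
ceph -s                              # wait for active+clean before the next one
Mixed versions are generally fine for the duration of such a rolling upgrade.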
Thanks a lot Wido, will do...
Andrija
On 3 July 2014 13:12, Wido den Hollander wrote:
> On 07/03/2014 10:59 AM, Andrija Panic wrote:
>
>> Hi Wido, thanks for answers - I have mons and OSD on each host...
>> server1: mon + 2 OSDs, same for server2 and server3.
>>
>
Wido,
one final question:
since I compiled libvirt 1.2.3 using ceph-devel 0.72 - do I need to
recompile libvirt again now with ceph-devel 0.80 ?
Perhaps not a smart question, but I need to make sure I don't screw something up...
Thanks for your time,
Andrija
On 3 July 2014 14:27, Andrija Panic
Thanks again a lot.
On 3 July 2014 15:20, Wido den Hollander wrote:
> On 07/03/2014 03:07 PM, Andrija Panic wrote:
>
>> Wido,
>> one final question:
>> since I compiled libvirt1.2.3 usinfg ceph-devel 0.72 - do I need to
>> recompile libvirt again now with ceph-
Hi,
Sorry to bother, but I have an urgent situation: I upgraded Ceph from 0.72 to
0.80 (CentOS 6.5), and now none of my CloudStack hosts can connect.
I did a basic "yum update ceph" on the first MON leader, and all Ceph
services on that host were restarted - then did the same on the other Ceph nodes
(I have 1
Any suggestion on the need to recompile libvirt ? I got info from Wido that
libvirt does NOT need to be recompiled.
Best
On 13 July 2014 08:35, Mark Kirkwood wrote:
> On 13/07/14 17:07, Andrija Panic wrote:
>
>> Hi,
>>
>> Sorry to bother, but I have urgent situatio
your time for my issue...
Best.
Andrija
On 13 July 2014 10:20, Mark Kirkwood wrote:
> On 13/07/14 19:15, Mark Kirkwood wrote:
>
>> On 13/07/14 18:38, Andrija Panic wrote:
>>
>
> Any suggestion on need to recompile libvirt ? I got info from Wido, that
>>> libvi
Hi,
after doing the Ceph upgrade (0.72.2 to 0.80.3) I issued "ceph osd crush
tunables optimal", and after only a few minutes I added 2 more OSDs to
the Ceph cluster...
So these 2 changes were more or less done at the same time - rebalancing
because of tunables optimal, and rebalancing becau
rting daemons automatically ?
Since it makes sense to have all MONs updated first, and then the OSDs (and
perhaps after that the MDS, if using it...)
Upgraded to the 0.80.3 release btw.
Thanks for your help again.
Andrija
On 3 July 2014 15:21, Andrija Panic wrote:
> Thanks again a lot.
>
>
> On
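If anyone lands in the same kind of massive reshuffle: the usual damage control
is to throttle recovery/backfill on all OSDs at runtime, or pause it entirely
while you decide what to do. A sketch, with deliberately conservative values:
ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1 --osd_recovery_op_priority 1'
ceph osd set nobackfill      # optional: pause data movement completely
ceph osd set norecover
ceph osd unset nobackfill    # resume when ready
ceph osd unset norecover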
ommand should move not more than 10% of
> your data. However, in both of our cases our status was just over 30%
> degraded and it took a good part of 9 hours to complete the data
> reshuffling.
>
>
> Any comments from the ceph team or other ceph gurus on:
>
> 1. What ha
rioritization, though, and removing sources of
> overhead related to rebalancing... and it's clearly not perfect yet. :/
>
> sage
>
>
> On Sun, 13 Jul 2014, Andrija Panic wrote:
>
> > Hi,
> > after seting ceph upgrade (0.72.2 to 0.80.3) I have issued "ceph osd
&
Udo, I had all VMs completely non-operational - so don't set "optimal" for
now...
On 14 July 2014 20:48, Udo Lembke wrote:
> Hi,
> which values are all changed with "ceph osd crush tunables optimal"?
>
> Is it perhaps possible to change some parameter the weekends before the
> upgrade is running,
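To Udo's question about what "optimal" actually changes: you can dump the
tunables before and after and compare them - a sketch, field names vary a bit
per release:
ceph osd crush show-tunables
# typical fields: choose_local_tries, choose_local_fallback_tries,
# choose_total_tries, chooseleaf_descend_once, chooseleaf_vary_r, profile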
Hi Sage,
can anyone confirm whether there is still a "bug" inside the RPMs that does an
automatic Ceph service restart after updating packages ?
We are instructed to first update/restart MONs, and after that the OSDs - but
that is impossible if we have MON+OSDs on the same host...since Ceph is
automatically restar
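Until the packaging is fixed, a common workaround on a combined MON+OSD box -
a sketch, assuming sysvinit on CentOS 6 - is to set noout first, so the restarts
the RPMs trigger don't also kick off rebalancing:
ceph osd set noout
yum update ceph        # daemons on this host may get restarted by the package scripts
ceph -s                # wait until everything is back up and PGs are active+clean
ceph osd unset noout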
Hi,
we just got some new clients, and have suffered a very big degradation in
Ceph performance for some reason (we are using CloudStack).
I'm wondering if there is a way to monitor op/s or similar usage per connected
client, so we can isolate the heavy client ?
Also, what is the general best practic
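What is available out of the box is cluster-wide and per-OSD/per-pool counters
rather than per-client stats - a sketch, counter names vary by version:
ceph -w          # the "client io" line shows aggregate op/s
rados df         # per-pool read/write op totals
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump | less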
Thanks Wido, yes I'm aware of CloudStack in that sense, but would prefer
some precise OP/s per ceph Image at least...
Will check CloudStack then...
Thx
On 8 August 2014 13:53, Wido den Hollander wrote:
> On 08/08/2014 01:51 PM, Andrija Panic wrote:
>
>> Hi,
>>
>>
op/s - could not find
anything with google...
Thanks again Wido.
Andrija
On 8 August 2014 14:07, Wido den Hollander wrote:
> On 08/08/2014 02:02 PM, Andrija Panic wrote:
>
>> Thanks Wido, yes I'm aware of CloudStack in that sense, but would prefer
>> some precise OP/s per ce
since without a throttle just a few active VMs can eat up the
> entire iops capacity of the cluster.
>
> Cheers, Dan
>
> -- Dan van der Ster || Data & Storage Services || CERN IT Department --
>
>
> On 08 Aug 2014, at 13:51, Andrija Panic wrote:
>
> Hi,
>
>
Thanks again, and btw, besides being Friday I'm also on vacation - so double
the joy of troubleshooting performance problems :)))
Thx :)
On 8 August 2014 16:01, Dan Van Der Ster wrote:
> Hi,
>
> On 08 Aug 2014, at 15:55, Andrija Panic wrote:
>
> Hi Dan,
>
> t
o I recommend you set a limit. Set it
> to 750 write IOps for example. That way one Instance can't kill the whole
> cluster, but it still has enough I/O to run. (usually).
>
> Wido
>
>
>> Cheers, Dan
>>
>> -- Dan van der Ster || Data & Storage Services || CERN
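For reference, outside of CloudStack the same per-disk limit can be applied
directly with libvirt - a sketch; in CloudStack you would normally set this via
the disk/compute offering instead, and the domain/device names here are made up:
virsh blkdeviotune <domain> vda --write-iops-sec 750 --live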
:
Writes per RBD:
Writes per object:
Writes per length:
.
.
.
On 8 August 2014 16:01, Dan Van Der Ster wrote:
> Hi,
>
> On 08 Aug 2014, at 15:55, Andrija Panic wrote:
>
> Hi Dan,
>
> thank you very much for the script, will check it out...no throttling so
> far, but I
to fix this... ?
Thanks,
Andrija
On 11 August 2014 12:46, Andrija Panic wrote:
> Hi Dan,
>
> the script provided seems to not work on my ceph cluster :(
> This is ceph version 0.80.3
>
> I get empty results, on both debug level 10 and the maximum level of 20...
>
>
ceph/ceph-scripts/blob/master/tools/rbd-io-stats.pl
> Cheers, Dan
>
> -- Dan van der Ster || Data & Storage Services || CERN IT Department --
>
>
> On 11 Aug 2014, at 12:48, Andrija Panic wrote:
>
> I apologize, clicked the Send button too fast...
>
> Anyway, I c
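In case someone hits the same empty output: the script only aggregates what the
OSDs actually log, so the relevant debug level has to be raised on the OSDs and
the resulting /var/log/ceph/ceph-osd.*.log files fed to it. Which subsystem and
level it expects is documented in the script itself - treat the value below as
an illustration, not gospel:
ceph tell osd.* injectargs '--debug_filestore 10'
# ...collect logs for a while, run the script against them, then turn it back down:
ceph tell osd.* injectargs '--debug_filestore 0/5'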
Hi people,
I had one OSD crash, so the rebalancing happened - all fine (some 3% of the
data was moved around and rebalanced), and my previous
recovery/backfill throttling was applied fine, so we didn't have an unusable
cluster.
Now I used the procedure to remove this crashed OSD completely from
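For the record, the usual sequence to purge a dead OSD completely - X is the
OSD id; steps that were already done are harmless to repeat:
ceph osd out osd.X
service ceph stop osd.X      # if the daemon is somehow still running
ceph osd crush remove osd.X
ceph auth del osd.X
ceph osd rm X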
, when my
cluster completely collapsed during data rebalancing...
I don't see any option to contribute to the documentation ?
Best
On 2 March 2015 at 16:07, Wido den Hollander wrote:
> On 03/02/2015 03:56 PM, Andrija Panic wrote:
> > Hi people,
> >
> > I had one OSD
Hi guys,
I yesterday removed 1 OSD from the cluster (out of 42 OSDs), and it caused over
37% of the data to rebalance - let's say this is fine (this is when I
removed it from the CRUSH map).
I'm wondering - I had previously set some throttling mechanism, but during the
first 1h of rebalancing, my rate of reco
gt; example:
> [root@ceph08 ceph]# ceph --admin-daemon /var/run/ceph/ceph-osd.94.asok
> config show | grep osd_recovery_delay_start
> "osd_recovery_delay_start": "10"
>
> 2015-03-03 13:13 GMT+03:00 Andrija Panic :
>
>> HI Guys,
>>
>> I yesterd
potentially go with 7 x the same
number of misplaced objects...?
Any thoughts ?
Thanks
On 3 March 2015 at 12:14, Andrija Panic wrote:
> Thanks Irek.
>
> Does this mean that after peering for each PG, there will be a delay of
> 10sec, meaning that every once in a while, I will have 10sec of
would be low.
> In your case, the correct option is to remove the entire node, rather than
> each disk individually
>
> 2015-03-03 14:27 GMT+03:00 Andrija Panic :
>
>> Another question - I mentioned here 37% of objects being moved arround -
>> this is MISPLACED object (degraded
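One way to do the "whole node at once" variant Irek describes - a sketch, the
OSD ids are just examples - is to zero the CRUSH weights of all of that node's
OSDs in one go, sit through the single resulting rebalance, and only purge them
afterwards:
for id in 36 37 38 39 40 41; do ceph osd crush reweight osd.$id 0; done
# wait for HEALTH_OK, then for each id:
#   ceph osd out $id ; ceph auth del osd.$id ; ceph osd crush remove osd.$id ; ceph osd rm $id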
ugging you, I really appreciate your help.
>>>
>>> Thanks
>>>
>>> On 3 March 2015 at 12:58, Irek Fasikhov wrote:
>>>
>>>> A large percentage of the rebuild of the cluster map (But low
>>>> percentage degradation). If you had not
al place. If you are
> still adding new nodes, when nobackfill and norecover is set, you can
> add them in so that the one big relocate fills the new drives too.
>
> On Tue, Mar 3, 2015 at 5:58 AM, Andrija Panic
> wrote:
> > Thx Irek. Number of replicas is 3.
> >
&g
Hi,
I'm running a live cluster with only a public network (so no explicit network
configuration in the ceph.conf file).
I'm wondering what is the procedure to implement dedicated
Replication/Private and Public network.
I've read the manual, know how to do it in ceph.conf, but I'm wondering
since this
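The ceph.conf part of the split is just two lines in [global], pushed to every
node before the rolling restarts discussed below - the subnets are made-up
examples, not a recommendation:
[global]
public network = 192.168.1.0/24
cluster network = 10.10.10.0/24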
t, then you are good to go by
> restarting one OSD at a time.
>
> On Wed, Mar 4, 2015 at 4:17 AM, Andrija Panic
> wrote:
> > Hi,
> >
> > I'm having a live cluster with only public network (so no explicit
> network
> > configuraion in the ceph.conf fi
t OSD will be done
> over the public network.
>
> So you can push a new configuration to all OSDs and restart them one by
> one.
>
> Make sure the network is ofcourse up and running and it should work.
>
> > On Wed, Mar 4, 2015 at 4:17 AM, Andrija Panic
> w
5 at 17:48, Andrija Panic wrote:
> That was my thought, yes - I found this blog that confirms what you are
> saying I guess:
> http://www.sebastien-han.fr/blog/2012/07/29/tip-ceph-public-slash-private-network-configuration/
> I will do that... Thx
>
> I guess it doesnt matter, s
tch which version of Ceph you are running, but I think
> there was some priority work done in firefly to help make backfills
> lower priority. I think it has gotten better in later versions.
>
> On Wed, Mar 4, 2015 at 1:35 AM, Andrija Panic
> wrote:
> > Thank you Rober - I
Thx again - I really appreciate the help guys !
On 4 March 2015 at 17:51, Robert LeBlanc wrote:
> If the data have been replicated to new OSDs, it will be able to
> function properly even them them down or only on the public network.
>
> On Wed, Mar 4, 2015 at 9:49 AM, Andrija Pa
for some reason...it just stayed degraded... so
this is the reason why I started the OSD back up, and then set it to out...)
Thanks
On 4 March 2015 at 17:54, Andrija Panic wrote:
> Hi Robert,
>
> I already have this stuff set. CEph is 0.87.0 now...
>
> Thanks, will schedule this f
do one data movement, always keep all copies of your objects online and
> minimize the impact of the data movement by leveraging both your old and
> new hardware at the same time.
>
> If you try this, please report back on your experience. I'm might try it
> in my lab, but I
Hi there,
just wanted to share some benchmark experience with RBD caching, which I
have just (partially) implemented. These are not nicely formatted results,
just raw numbers to understand the difference.
*INFRASTRUCTURE:
- 3 hosts with: 12 x 4TB drives, 6 Journals on 1 SSD, 6 journals on
se
The image on Ceph is RAW format - should be all fine...so the VM will be using
that RAW format.
On 12 March 2015 at 09:03, Azad Aliyar wrote:
> Community please explain the 2nd warning on this page:
>
> http://ceph.com/docs/master/rbd/rbd-openstack/
>
> "Important Ceph doesn’t support QCOW2 for hosting a virtual
Hi all,
I have set nodeep-scrub and noscrub while I had small/slow hardware for the
cluster.
It has been off for a while now.
Now we have upgraded the hardware/networking/SSDs and I would like to
activate scrubbing again - i.e. unset these flags.
Since I now have 3 servers with 12 OSDs each (SSD based Journals) -
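The flags themselves are trivial to clear; spreading the catch-up scrubs out is
the only real work - a sketch:
ceph osd unset noscrub
ceph osd unset nodeep-scrub
# or kick individual PGs yourself to control the pace:
ceph pg scrub <pgid>
ceph pg deep-scrub <pgid>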
Thanks Wido - I will do that.
On 13 March 2015 at 09:46, Wido den Hollander wrote:
>
>
> On 13-03-15 09:42, Andrija Panic wrote:
> > Hi all,
> >
> > I have set nodeep-scrub and noscrub while I had small/slow hardware for
> > the cluster.
> > It has been
Nice - so I just realized I need to manually scrub 1216 placement groups :)
On 13 March 2015 at 10:16, Andrija Panic wrote:
> Thanks Wido - I will do that.
>
> On 13 March 2015 at 09:46, Wido den Hollander wrote:
>
>>
>>
>> On 13-03-15 09:42, Andrija Panic w
Will do, of course :)
THx Wido for quick help, as always !
On 13 March 2015 at 12:04, Wido den Hollander wrote:
>
>
> On 13-03-15 12:00, Andrija Panic wrote:
> > Nice - so I just realized I need to manually scrub 1216 placements
> groups :)
> >
>
> With manual I
ide limits.
>
>
> On 3/13/15 10:46, Wido den Hollander wrote:
>
>
> On 13-03-15 09:42, Andrija Panic wrote:
>
> Hi all,
>
> I have set nodeep-scrub and noscrub while I had small/slow hardware for
> the cluster.
> It has been off for a while now.
>
> Now w
Hm...nice. Thx guys
On 13 March 2015 at 12:33, Henrik Korkuc wrote:
> I think settings apply to both kinds of scrubs
>
>
> On 3/13/15 13:31, Andrija Panic wrote:
>
> Interesting...thx for that Henrik.
>
> BTW, my placement groups are around 1800 objects (ceph pg
Check firewall - I hit this issue over and over again...
On 13 March 2015 at 22:25, Georgios Dimitrakakis
wrote:
> On an already available cluster I 've tried to add a new monitor!
>
> I have used ceph-deploy mon create {NODE}
>
> where {NODE}=the name of the node
>
> and then I restarted the /e
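The usual firewall culprit when adding a mon is port 6789 (mons) and 6800+
(OSDs) being blocked somewhere - a quick sanity check from the new node, the
commands are just the obvious ones:
iptables -L -n | grep 6789
telnet <existing-mon-ip> 6789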
Georgios,
you need to be on the "deployment server" and cd into the folder that you used
originally while deploying Ceph - in this folder you should already have
ceph.conf, the client.admin keyring and other stuff - which is required to
connect to the cluster...and provision new MONs or OSDs, etc.
Message:
[ce
The public network carries client-to-OSD traffic - and if you have NOT explicitly
defined a cluster network, then OSD-to-OSD replication also takes place over the
same network.
Otherwise, you can define public and cluster(private) network - so OSD
replication will happen over dedicated NICs (cluster network) a
Changing the PG number causes a LOT of data rebalancing (in my case it was 80%),
which I learned the hard way...
On 14 March 2015 at 18:49, Gabri Mate
wrote:
> I had the same issue a few days ago. I was increasing the pg_num of one
> pool from 512 to 1024 and all the VMs in that pool stopped. I came to
This is how I did it, and then restart each OSD one by one, but monitor
with ceph -s; when ceph is healthy, proceed with the next OSD restart...
Make sure the networks are fine on the physical nodes, and that you can ping
between them...
[global]
x
x
x
x
x
x
#
### RE
Georgios,
no need to put ANYTHING in if you don't plan to split client-to-OSD vs
OSD-to-OSD replication onto 2 different network cards/networks - for performance
reasons.
If you have only 1 network - simply DON'T configure networks at all inside
your ceph.conf file...
if you have 2 x 1G cards in servers,
In that case - yes...put everything on 1 card - or if both cards are 1G (or
the same speed for that matter...) - then you might want to block all external
traffic except i.e. SSH, WEB, but allow ALL traffic between all Ceph
OSDs... so you can still use that network for "public/client" traffic - not
sure
Actually, good question - is RBD caching possible at all with Windows
guests, if using the latest VirtIO drivers ?
Linux caching (write caching, writeback) is working fine with newer virt-io
drivers...
Thanks
On 18 March 2015 at 10:39, Alexandre DERUMIER wrote:
> Hi,
>
> I don't known how rb
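For completeness: the cache lives on the hypervisor side, so the guest OS mostly
matters for whether flushes are passed down. The client bit is a ceph.conf
sketch - these are the usual options, verify the defaults for your version -
plus cache='writeback' on the disk definition in the libvirt XML:
[client]
rbd cache = true
rbd cache writethrough until flush = true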
Hi guys,
I have 1 SSD that hosted 6 OSDs' journals and it has died, so 6 OSDs are down and
Ceph has rebalanced, etc.
Now I have the new SSD in, and I will partition it etc. - but I would like to
know how to proceed now with the journal recreation for those 6 OSDs that
are down.
Should I flush the journal (where
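For reference, the journal-recreation path - only really safe if the OSDs went
down cleanly, which is why the consensus below is to simply re-add them - looks
roughly like this for each OSD id N:
# point /var/lib/ceph/osd/ceph-N/journal at the new partition (symlink, or "osd journal" in ceph.conf)
ceph-osd -i N --mkjournal
service ceph start osd.N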
Thx guys, that's what I will be doing in the end.
Cheers
On Apr 17, 2015 6:24 PM, "Robert LeBlanc" wrote:
> Delete and re-add all six OSDs.
>
> On Fri, Apr 17, 2015 at 3:36 AM, Andrija Panic
> wrote:
>
>> Hi guys,
>>
>> I have 1 SSD that hosted 6
015 at 18:31, Andrija Panic
> wrote:
>
>> Thx guys, thats what I will be doing at the end.
>>
>> Cheers
>> On Apr 17, 2015 6:24 PM, "Robert LeBlanc" wrote:
>>
>>> Delete and re-add all six OSDs.
>>>
>>> On Fri, A
17 Apr 2015, at 18:49, Andrija Panic wrote:
>
> 12 osds down - I expect less work with removing and adding osd?
> On Apr 17, 2015 6:35 PM, "Krzysztof Nowicki" <
> krzysztof.a.nowi...@gmail.com> wrote:
>
>> Why not just wipe out the OSD filesystem, run ceph-osd --mkfs
SSD?
>
> /Josef
> On 17 Apr 2015 20:05, "Andrija Panic" wrote:
>
>> SSD died that hosted journals for 6 OSDs - 2 x SSD died, so 12 OSDs are
>> down, and rebalancing is about finish... after which I need to fix the OSDs.
>>
>> On 17 April 2015 at 19:01, Josef Joha
ow. So far so good. I'll be keeping a closer look at them.
>
> Fri, 17 Apr 2015, 21:07, Andrija Panic
> wrote:
>
>> nah...Samsung 850 PRO 128GB - dead after 3 months - 2 of these died...
>> wearing level is 96%, so only 4% wasted... (yes I know these are not
Hi all,
when I run:
ceph-deploy osd create SERVER:sdi:/dev/sdb5
(sdi = previously ZAP-ed 4TB drive)
(sdb5 = previously manually created empty partition with fdisk)
Is ceph-deploy going to create the journal properly on sdb5 (something similar
to: ceph-osd -i $ID --mkjournal), or do I need to do so
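Either way, it is easy to verify afterwards which journal each OSD ended up
using - paths assume the default locations:
ls -l /var/lib/ceph/osd/ceph-*/journal
ceph --admin-daemon /var/run/ceph/ceph-osd.<N>.asok config get osd_journal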
y. The OSD would
> not start if the journal was not created.
>
> On Fri, Apr 17, 2015 at 2:43 PM, Andrija Panic
> wrote:
>
>> Hi all,
>>
>> when I run:
>>
>> ceph-deploy osd create SERVER:sdi:/dev/sdb5
>>
>> (sdi = previously ZAP-ed 4TB dri
wrote:
>
> > On 17/04/2015, at 21.07, Andrija Panic wrote:
> >
> > nahSamsun 850 PRO 128GB - dead after 3months - 2 of these died...
> wearing level is 96%, so only 4% wasted... (yes I know these are not
> enterprise,etc… )
> Damn… but maybe your surname says it a
ere as well.
> On 18 Apr 2015 09:42, "Steffen W Sørensen" wrote:
>
>>
>> > On 17/04/2015, at 21.07, Andrija Panic wrote:
>> >
>> > nahSamsun 850 PRO 128GB - dead after 3months - 2 of these died...
>> wearing level is 96%, so only 4% was
are less inclined to suffer this fate.
>
> Regards
>
> Mark
>
> On 18/04/15 22:23, Andrija Panic wrote:
>
>> these 2 drives are on the regular SATA (onboard) controller, and besides
>> this, there are 12 x 4TB on the front of the servers - normal backplane on
>>
yes I know, but too late now, I'm afraid :)
On 18 April 2015 at 14:18, Josef Johansson wrote:
> Have you looked into the samsung 845 dc? They are not that expensive last
> time I checked.
>
> /Josef
> On 18 Apr 2015 13:15, "Andrija Panic" wrote:
>
>>
Hi,
small update:
in 3 months we lost 5 out of 6 Samsung 128GB 850 PROs (just a few days
between each SSD death) - can't believe it - NOT due to wearing out... I
really hope we just got a defective series from the supplier...
Regards
On 18 April 2015 at 14:24, Andrija Panic wrote:
> yes I know,
Well, seems like they are on satellite :)
On 6 May 2015 at 02:58, Matthew Monaco wrote:
> On 05/05/2015 08:55 AM, Andrija Panic wrote:
> > Hi,
> >
> > small update:
> >
> > in 3 months - we lost 5 out of 6 Samsung 128Gb 850 PROs (just few days in
> > betwe
Guys,
I'm Igor's colleague, working a bit on Ceph together with Igor.
This is a production cluster, and we are becoming more desperate as time
goes by.
I'm not sure if this is the appropriate place to seek commercial support, but
anyhow, I'll do it...
If anyone feels like it and has some experience i
This was related to the caching layer, which doesn't support snapshotting
per the docs...for the sake of closing the thread.
On 17 August 2015 at 21:15, Voloshanenko Igor
wrote:
> Hi all, can you please help me with unexplained situation...
>
> All snapshot inside ceph broken...
>
> So, as example, we ha
Make sure you test whatever you decide. We just learned this the hard way
with the Samsung 850 Pro, which is total crap, more than you could imagine...
Andrija
On Aug 25, 2015 11:25 AM, "Jan Schermer" wrote:
> I would recommend Samsung 845 DC PRO (not EVO, not just PRO).
> Very cheap, better than I
performance was acceptable...now we are upgrading to Intel S3500...
Best
any details on that ?
On Tue, 25 Aug 2015 11:42:47 +0200, Andrija Panic
wrote:
> Make sure you test what ever you decide. We just learned this the hard way
> with samsung 850 pro, which is total crap, more than you
And should I mention that in another Ceph installation we had Samsung 850
Pro 128GB SSDs and all 6 of them died in a 2-month period - they simply
disappeared from the system, so not wear-out...
Never again will we buy Samsung :)
On Aug 25, 2015 11:57 AM, "Andrija Panic" wrote:
> First read ple
So we chose the S3500 240G. Yes, it's cheaper than the S3700 (about 2x), and
> not so durable for writes, but we think it's better to replace 1 SSD per
> year than to pay double the price now.
>
> 2015-08-25 12:59 GMT+03:00 Andrija Panic :
>
>> And should I mention that in a
Hi,
I have 2 x 2TB disks in 3 servers, so a total of 6 disks... I have deployed
a total of 6 OSDs.
ie:
host1 = osd.0 and osd.1
host2 = osd.2 and osd.3
host4 = osd.4 and osd.5
Now, since I will have a total of 3 replicas (original + 2), I want
my replica placement to be such that I don't end
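With the default CRUSH map this is normally already the case, because the
default rule separates replicas by host - a quick way to double-check; pool and
rule names are the defaults and may differ on your cluster:
ceph osd tree                 # each host bucket should hold its 2 OSDs
ceph osd crush rule dump      # look for "chooseleaf ... type host" in the rule your pool uses
ceph osd pool get rbd size    # replica count of the pool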
cs/master/rados/operations/crush-map/>
>
>
> Cheers,
> Mike Dawson
>
>
>
> On 10/16/2013 5:16 PM, Andrija Panic wrote:
>
>> Hi,
>>
>> I have 2 x 2TB disks, in 3 servers, so total of 6 disks... I have
>> deployed total of 6 OSDs.
>> ie:
>> host
Hi,
I'm trying to add Ceph as Primary Storage, but my libvirt 0.10.2 (CentOS
6.5) complains:
- internal error missing backend for pool type 8
Is it possible that libvirt 0.10.2 (shipped with CentOS 6.5) was not
compiled with RBD support ?
I can't find how to check this...
I'm able
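A crude but effective check of whether a given libvirtd build knows about RBD
at all - "pool type 8" in that error corresponds to the rbd pool type, which
the stock EL6 build lacks:
ldd /usr/sbin/libvirtd | grep -E 'rbd|rados'
strings /usr/sbin/libvirtd | grep -i rbd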
Thank you very much Wido,
any suggestion on compiling libvirt with RBD support (I already found a way), or
perhaps using some prebuilt packages that you would recommend ?
Best
On 28 April 2014 13:25, Wido den Hollander wrote:
> On 04/28/2014 12:49 PM, Andrija Panic wrote:
>
>> Hi,
>>
&g
Thanks Dan :)
On 28 April 2014 15:02, Dan van der Ster wrote:
>
> On 28/04/14 14:54, Wido den Hollander wrote:
>
>> On 04/28/2014 02:15 PM, Andrija Panic wrote:
>>
>>> Thank you very much Wido,
>>> any suggestion on compiling libvirt with support (I alrea
Dan, is this maybe just RBD support for the KVM package? (I already have
RBD-enabled qemu, qemu-img etc. from the ceph.com site.)
I need just libvirt with RBD support.
Thanks
On 28 April 2014 15:05, Andrija Panic wrote:
> Thanks Dan :)
>
>
> On 28 April 2014 15:02, Dan van der Ster wrote:
&
Hi,
I was wondering why the OSDs would not start at boot time; this happens on 1
server (2 OSDs).
If I check with "chkconfig ceph --list", I can see that it should start;
that is, the MON on this server does really start, but the OSDs do not.
I can normally start them with: service ceph start osd.X
This
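The usual suspects for OSDs not coming up at boot on EL6 - a sketch of checks,
not a diagnosis: the sysvinit script only starts daemons it can find for this
host, and ceph-deploy/udev style OSDs rely on their partitions being detected:
grep -A2 '\[osd\.' /etc/ceph/ceph.conf    # sysvinit-style OSD sections with "host = <this host>"?
ceph-disk list                            # are the OSD data/journal partitions recognized and mounted?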
Hi.
I was wondering what would be the correct way to migrate system VMs
(storage, console, VR) from local storage to Ceph.
I'm on CS 4.2.1 and will soon be updating to 4.3...
Is it enough to just change the global setting system.vm.use.local.storage =
true to FALSE, and then destroy the system VMs (cloudstac
Thank you very much Wido, that's exactly what I was looking for.
Thanks
On 4 May 2014 18:30, Wido den Hollander wrote:
> On 05/02/2014 04:06 PM, Andrija Panic wrote:
>
>> Hi.
>>
>> I was wondering what would be correct way to migrate system VMs
>> (storage,con
Will try creating the tag inside the CS database, since GUI/cloudmonkey editing of
an existing offering is NOT possible...
On 5 May 2014 16:04, Brian Rak wrote:
> This would be a better question for the Cloudstack community.
>
>
> On 5/2/2014 10:06 AM, Andrija Panic wrote:
>
> Hi.
&g
Any suggestion, please ?
Thanks,
Andrija
On 5 May 2014 16:11, Andrija Panic wrote:
> Will try creating tag inside CS database, since GUI/cloudmoneky editing of
> existing offer is NOT possible...
>
>
>
> On 5 May 2014 16:04, Brian Rak wrote:
>
>> This would be a be
Good question - I'm also interested. Do you want to move the journal to a
dedicated disk/partition, i.e. on an SSD, or just replace a (failed) disk with a
new/bigger one ?
I was thinking (for moving the journal to a dedicated disk) about changing the
symbolic link or similar, on /var/lib/ceph/osd/osd-x/journal... ?
Regar
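The commonly used sequence for moving a journal to a new partition - a sketch,
assuming the OSD can be stopped cleanly; <part> is a placeholder:
service ceph stop osd.N
ceph-osd -i N --flush-journal
ln -sf /dev/disk/by-partuuid/<part> /var/lib/ceph/osd/ceph-N/journal
ceph-osd -i N --mkjournal
service ceph start osd.N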
re is more elegant than this manual steps...
Cheers
On 6 May 2014 12:52, Gandalf Corvotempesta
wrote:
> 2014-05-06 12:39 GMT+02:00 Andrija Panic :
> > Good question - I'm also interested. Do you want to movejournal to
> dedicated
> > disk/partition i.e. on SSD or just repl
> On 05/05/2014 11:40 PM, Andrija Panic wrote:
>
>> Hi Wido,
>>
>> thanks again for inputs.
>>
>> Everything is fine, except for the Software Router - it doesn't seem to
>> get created on CEPH, no matter what I try.
>>
>>
> There is a s
Mapping an RBD image to 2 or more servers is the same as a shared storage
device (SAN) - so from there on, you could do any clustering you want,
based on what Wido said...
On 7 May 2014 12:43, Andrei Mikhailovsky wrote:
>
> Wido, would this work if I were to run nfs over two or more servers with
Hi,
just to share my issue with the qemu-img provided by Ceph (Red Hat made the
problem, not Ceph):
the newest qemu-img - qemu-img-0.12.1.2-2.415.el6.3ceph.x86_64.rpm - was built
from RHEL 6.5 source code, where Red Hat removed the "-s" parameter, so
snapshotting in CloudStack up to 4.2.1 does not work, I gu
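For context, the call CloudStack makes to back up a snapshot is along these
lines, and it fails on the rebuilt RHEL package because "-s" was dropped -
paths and names here are illustrative only:
qemu-img convert -s <snapshot-name> -O qcow2 /path/to/volume.qcow2 /path/to/backup.qcow2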