Fran Barrera writes:
>
> Hi all,
> I have a problem installing ceph jewel with ceph-deploy (1.5.33) on ubuntu
> 14.04.4 (openstack instance).
>
> This is my setup:
>
>
> ceph-admin
>
> ceph-mon
> ceph-osd-1
> ceph-osd-2
>
>
> I've followed these steps from the ceph-admin node:
>
> I have the u
NOTE - this is an all-spinning-HDD
cluster w/ 7200 rpm disks!
~~shane
On 8/4/15, 2:36 PM, "ceph-users on behalf of Bob Ababurko"
<ceph-users-boun...@lists.ceph.com on behalf of b...@ababurko.net> wrote:
I have my first ceph cluster up and running
Can you give an overview of your Ceph cluster and performance numbers/testing
results from that configuration w/ VMware consuming Ceph via NFS?? Just out of
curiosity ...
~~shane
...I'd like to get involved in giving back to the Ceph community,
and will start with documentation - and hope to submit patches myself soon
for that very issue ...
~~shane
If you have your hosts' MTUs set to
9000 - and your switches to 1500 - you'll see this exact behavior...
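A quick way to confirm is something like this (interface name and target host
are just examples - adjust to your nodes):

# check the configured MTU on each node's cluster-facing interface
ip link show eth0 | grep mtu

# send a jumbo-sized ping with the don't-fragment bit set;
# 8972 bytes = 9000 minus 28 bytes of IP/ICMP headers - if any hop
# in the path is still at 1500, this will fail
ping -M do -s 8972 -c 3 <osd-host>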
Hopefully that helps some ...
~~shane
On 7/17/15, 8:57 AM, "ceph-users on behalf of J David"
wrote:
>On Fri, Jul 17, 2015 at 11:15 AM, Quentin Hartman
> wrote:
>> That looks a lot lik
>...the documentation is lacking here but if you've ever used a
>third-party FS with Hadoop I don't think it should be too challenging.
>I'm hoping we get better documentation written up soonish.
Ok - I'll give it a whirl in our dev/test environment ...
S3a extensions within the RadosGW S3 API
implementation?
Plus - it seems like it's considered a "bad idea" to back Hadoop via S3 (and
indirectly Ceph via RGW) [3]; though not sure if the architectural differences
between Amazon's S3 implementation and the far superior Ceph make it m
...storage for various platforms with large storage needs (eg
software and package repo/mirrors, etc...).
Thanks in advance for any input, thoughts, or pointers ...
~~shane
[1] http://ceph.com/docs/master/cephfs/hadoop/
Which brings me to a question ...
Are there any good documents out there that detail (preferably via a flow
chart/diagram or similar) how the various failure/recovery scenarios cause
"change" or "impact" to the cluster? I've seen very little in regards to
this ...
...and then compare to faster, sexier hardware. We do have a
lot of use cases for slower object services, where super high IOPS and
performance aren't critical; that's our starting point for testing.
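For those baseline numbers, something like rados bench is an easy starting
point (the pool name below is just an example):

# 60-second write test against a scratch pool, keeping the objects around
rados bench -p testpool 60 write --no-cleanup
# sequential and random read passes against the same objects
rados bench -p testpool 60 seq
rados bench -p testpool 60 rand
# remove the benchmark objects when finished
rados -p testpool cleanup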
~~shane
...0xd platforms, with 12 spinning (4TB 7200 rpm)
data disks, and 2x 10k 600 GB mirrored OS disks. Memory is 128 GB, with dual
6-core HT CPUs.
~~shane
On 7/1/15, 5:24 PM, "German Anders" <gand...@despegar.com> wrote:
I'm interested in such a configuration - can you share ...
...logy) ... so an 8 TB drive loss isn't too big of an
issue. Now that assumes that replication actually works well in that size
cluster. We're still sussing out this part of the PoC engagement.
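One simple way to sanity-check the replication/recovery behavior (pool name
and OSD id below are just examples): confirm the replica count, then mark an
OSD out and watch the cluster backfill.

# check the pool's replica count
ceph osd pool get rbd size
# take one OSD out of data placement and watch recovery/backfill happen
ceph osd out 5
ceph -w
# once you've seen enough, bring it back in
ceph osd in 5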
~~shane
On 7/1/15, 5:05 PM, "ceph-users on behalf of German Anders" wrote:
Bruce - I ran into problems w/ ceph-disk on the same version too - then switched
to Hammer (0.94) ... that worked for me. I didn't track the issue down.
Any reason you are deploying an older version?
On 6/26/15, 2:09 PM, "ceph-users on behalf of Bruce McFarland" wrote:
For a high perf cluster - absolutely agree ... but I would suggest that
running the MONs as VMs has its own performance challenges to carefully
manage as well. If you are on oversubscribed hypervisors, you may end up
with the same exact issues with perf impacting the MONs. For a very small
non...
...more cores/HTs than OSD disks, you
probably don't have a huge CPU issue to worry about (...probably...).
~~shane
On 6/25/15, 9:23 AM, "ceph-users on behalf of Quentin Hartman"
<ceph-users-boun...@lists.ceph.com on behalf of qhart...@direwolfdigital.com> wrote:
...addresses? What's the issue that prevents the MON from
listening on multiple IPs? Is the name hashed w/ the IP for internal auth
or referencing of some sort?
I'm new to Ceph - so just learning the ins-n-outs of how we can architect
this. Thanks!!
~~shane
...the {cluster}.conf (eg /etc/ceph/ceph.conf) settings with a
stanza like:
[osd]
debug osd = 20/20
debug journal = 20/20
debug monc = 20/20
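You can also bump these on running daemons without a restart via injectargs -
something like this should work (the OSD id is just an example):

# raise logging on a single OSD at runtime
ceph tell osd.0 injectargs '--debug-osd 20/20 --debug-journal 20/20 --debug-monc 20/20'
# or on all OSDs at once
ceph tell osd.* injectargs '--debug-osd 20/20'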
~~shane
On 6/21/15, 8:22 AM, "ceph-users on behalf of Cristian Falcas"
<ceph-users-boun...@lists.ceph.com> wrote:
...the email archives ...
Thanks!
~~shane
On 6/19/15, 9:08 AM, "ceph-users on behalf of Mark Nelson"
wrote:
>>>>
>>>> Would the above change the performance of 530s to be more like 520s?
>>>
>>> I need to comment that it's *really* not a good
I know as we go to more "real production" workloads, we'll want/need
to change this for performance reasons - eg the Journal on SSDs ...
Any pointers on where I missed this info in the documentation would be
helpful too ... I've been all over the ceph.com/docs/ site ...
... with modern kernel
versions ... without crossing the line into the "bleeding and painful" edge
versions ... ?
Thank you ...
~~shane
..."clean". Ensure that your VMs are all successfully running NTP or peering with
each other to keep in sync. NOTE that a lot of VM implementations will suffer
significant clock drift (even within just a few hours of running) ... this can
be a pain in the behind to deal with...
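A quick way to check for that, for example:

# the MONs will flag clock skew here once it gets bad enough
ceph health detail | grep -i clock
# and on each node, verify NTP actually has peers it is syncing against
ntpq -p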
~~shane
On
...version control your webserver
content - and you can easily "roll back" to a previous version if you need to.
You can create a "dev" branch and make changes to it, host it on a test web
server ... once approved, push the changes to the "master" branch and trigger
the refresh ...
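Roughly, the flow looks like this (branch names are just the convention
described above, commit message is a placeholder):

# work on a dev branch and point the test webserver at it
git checkout -b dev
git add .
git commit -m "update site content"
git push origin dev

# once approved, fold it into master - the production webserver
# then pulls/refreshes from master
git checkout master
git merge dev
git push origin master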