Hi All,
This would be interesting for us (at least temporarily). Do you think it would
be better to run the mon as a VM on the OSD host, or natively?
Greetings
André
- On 11 Feb 2015 at 20:56, pixelfairy pixelfa...@gmail.com wrote:
> i believe combining mon+osd, up to whatever magic number of monitors you want, is common in small(ish) clusters.
Hi,
I'm currently running a large MongoDB cluster, around 2 TB (sharding +
replication).
And I have a lot of problems with Mongo replication (replicas going out of sync and
needing to be fully re-replicated again and again between my Mongo replica sets).
So I thought about using RBD to replicate the storage and keep onl
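(For what it's worth, a minimal sketch of the RBD side of that idea; the pool name, image name and size below are made up for illustration:)

  ceph osd pool create mongo 128                  # dedicated pool, 128 PGs as an example
  rbd create mongo/mongodata --size 2097152       # ~2 TB image (size is given in MB)
  rbd map mongo/mongodata                         # kernel client; or use librbd from qemu/kvm
  mkfs.xfs /dev/rbd0 && mount /dev/rbd0 /var/lib/mongo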
Hi Sumit,
A couple questions:
What brand/model SSD?
What brand/model HDD?
Also, how are they connected to the controller/motherboard? Are they sharing a bus
(i.e. a SATA expander)?
RAM?
Also look at the output of "iostat -x" or similar: are the SSDs hitting 100%
utilisation?
I suspect that
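(For reference, something like the following on each OSD node will show that; the device names are placeholders:)

  iostat -x sdb sdc sdd 5
  # watch the %util column: a journal SSD sitting near 100% while the HDDs are
  # mostly idle usually means the SSD, or the bus it shares, is the bottleneck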
Hello!
We use Ceph + OpenStack in our private cloud. Recently we upgraded our
CentOS 6.5 based cluster from Ceph Emperor to Ceph Firefly.
At first we used the Red Hat EPEL yum repo to upgrade; that Ceph version is
0.80.5. We upgraded the monitors first, then the OSDs, and the clients last. When we completed this
upgrade, we
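(Roughly, that order on a sysvinit-based CentOS 6 node would look like the following; the daemon names are placeholders and this is only a sketch, not the exact commands used:)

  yum update ceph                    # on each node, after pointing yum at the new repo
  service ceph restart mon.node1     # monitors first, one at a time
  service ceph restart osd.0         # then each OSD
  # finally upgrade/restart the librbd/librados clients (e.g. qemu instances)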
Hi Ceph experts,
I have a small Ceph architecture related question.
Blogs and documents suggest that Ceph performs much better if we put the
journal on an SSD.
I have built a Ceph cluster with 30 HDDs + 6 SSDs across 6 OSD nodes: 5 HDDs + 1
SSD on each node, and each SSD has 5 partitions journaling 5 OSDs.
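(A minimal sketch of that layout on one node, assuming the HDDs are /dev/sdb-/dev/sdf and the SSD is /dev/sdg with partitions 1-5; purely illustrative:)

  ceph-deploy osd create node1:sdb:/dev/sdg1 node1:sdc:/dev/sdg2 \
      node1:sdd:/dev/sdg3 node1:sde:/dev/sdg4 node1:sdf:/dev/sdg5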
i believe combining mon+osd, up to whatever magic number of monitors you
want, is common in small(ish) clusters. i also have a 3 node ceph cluster
at home doing mon+osd, but not client; only rbd is served to the vm hosts.
no problem even with my abuses (yanking disks out, shutting down nodes, etc.)
Thanks for reporting, Nick - I've seen the same thing and thought I was
just crazy.
Chris
On Wed, Feb 11, 2015 at 6:48 AM, Nick Fisk wrote:
> Hi David,
>
>
>
> I have had a few weird issues when shutting down a node, although I can
> replicate it by doing a “stop ceph-all” as well. It seems that
I saw a similar warning - it turns out it's only an issue if you're using
the kernel driver. If you're using VMs and accessing it through the library (e.g.
qemu/kvm) you should be ok...
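(Loosely, the difference between the two access paths; the pool and image names are just examples:)

  rbd map rbd/myimage                     # kernel rbd driver, exposes /dev/rbd0
  qemu ... -drive file=rbd:rbd/myimage,format=raw,cache=writeback
                                          # userspace librbd, no kernel module involved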
On Tue, Feb 10, 2015 at 10:06 AM, David Graham wrote:
> Hello, I'm giving thought to a minimal footprint scenario with full
> r
Hey cephers,
The Ceph Day program for this year is already shaping up to be a great
one! Our first two events have been solidified (with several more
getting close), and now we just need awesome speakers to share what
they have been doing with Ceph. Currently we are accepting speakers
for the foll
On Wed, Feb 11, 2015 at 5:30 AM, Dennis Kramer (DT) wrote:
> After setting the debug level to 2, I can see:
> 2015-02-11 13:36:31.922262 7f0b38294700 2 mds.0.cache check_memory_usage
> total 58516068, rss 57508660, heap 32676, malloc 1227560 mmap 0, baseline
> 39848, buffers 0, max 67108864, 8656
On 10.02.2015 at 09:08, Mark Kirkwood wrote:
On 10/02/15 20:40, Thomas Güttler wrote:
Hi,
does the lack of a battery-backed cache in Ceph introduce any
disadvantages?
We use PostgreSQL and our servers have a UPS.
But I want to survive a power outage, although it is unlikely. But "hope
is not
Hi David,
I have had a few weird issues when shutting down a node, although I can
replicate it by doing a “stop ceph-all” as well. It seems that OSD failure
detection takes a lot longer when a monitor goes down at the same time,
sometimes I have seen the whole cluster grind to a halt for sev
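(For context, these are the ceph.conf knobs that govern how quickly OSD failures are noticed; the values shown are only illustrative, not recommendations, and defaults vary by release:)

  [osd]
  osd heartbeat grace = 20           # how long peers wait before reporting an OSD down
  [mon]
  mon osd min down reporters = 1     # distinct OSDs that must report a peer down
  mon osd min down reports = 3       # total reports required before marking it down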
On Wed, 11 Feb 2015, Wido den Hollander wrote:
On 11-02-15 12:57, Dennis Kramer (DT) wrote:
On Fri, 7 Nov 2014, Gregory Farnum wrote:
Did you upgrade your clients along with the MDS? This warning
indicates the
MDS asked the clients to boot some inodes out of cache and they have
taken
too lon
Hi Florent,
On 11/02/2015 12:20, Florent B wrote:
> Hi every one,
>
> My question is simple: are erasure coded pools in Giant considered
> stable enough to be used in production? (or is it a feature in
> development, like CephFS).
They are considered stable and usable in production.
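(For anyone following along, a minimal sketch of creating one; the profile name, pool name, PG count and k/m values are just examples:)

  ceph osd erasure-code-profile set myprofile k=2 m=1 ruleset-failure-domain=host
  ceph osd pool create ecpool 128 128 erasure myprofile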
> And
On 11-02-15 12:57, Dennis Kramer (DT) wrote:
> On Fri, 7 Nov 2014, Gregory Farnum wrote:
>
>> Did you upgrade your clients along with the MDS? This warning
>> indicates the
>> MDS asked the clients to boot some inodes out of cache and they have
>> taken
>> too long to do so.
>> It might also just
On Fri, 7 Nov 2014, Gregory Farnum wrote:
Did you upgrade your clients along with the MDS? This warning indicates the
MDS asked the clients to boot some inodes out of cache and they have taken
too long to do so.
It might also just mean that you're actively using more inodes at any given
time th
On 11 February 2015 at 20:43, John Spray wrote:
> Namespaces in CephFS would become useful in conjunction with limiting
> client authorization by sub-mount -- that way subdirectories could be
> assigned a layout with a particular namespace, and a client could be
> limited to that namespace on the
Namespaces in CephFS would become useful in conjunction with limiting
client authorization by sub-mount -- that way subdirectories could be
assigned a layout with a particular namespace, and a client could be
limited to that namespace on the OSD side and that path on the MDS
side. So I guess we'd
Thank you, Vickie, and thanks to the Ceph community for showing continued
support.
Best of luck to all!
> On Feb 11, 2015, at 3:58 AM, Vickie ch wrote:
>
> Hi
> The weight reflects the capacity of the disk.
> For example, the weight of a 100 GB OSD disk is 0.100 (100 GB / 1 TB).
>
>
> Best wis
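(In other words, the CRUSH weight is simply the disk size expressed in TB, so a 4 TB disk would get a weight of about 4.000. If a weight ever needs adjusting by hand, it can be done with something like the following, where osd.3 and the value are just placeholders:)

  ceph osd crush reweight osd.3 0.100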