Any idea would be appreciated. :)
Zhenshi Zhou wrote on Thu, Dec 13, 2018 at 6:01 PM:
> Hi all,
>
> I'm running a Luminous cluster with tens of OSDs and
> the cluster runs well. As the data grows, Ceph becomes
> more and more important.
>
> What worries me is that many services will go down if the
> cluster is out
Hi everybody,
We've been running a 50 TB cluster with 3 MON services on the same nodes as the OSDs.
We are planning to upgrade to 200 TB, and I have 2 questions:
1. Should we move the MON services to dedicated hosts?
2. From your experience, at what cluster size should we consider putting
the MONs on dedicated hosts?
> We've been running a 50 TB cluster with 3 MON services on the same nodes as the OSDs.
> We are planning to upgrade to 200 TB, and I have 2 questions:
> 1. Should we move the MON services to dedicated hosts?
If you think about stability, simplicity and redundancy - yes.
> 2. From your experience, at what size of
Hello,
we do not see a problem with a small cluster having 3 MONs on OSD hosts.
However, we do suggest using 5 MONs.
Nearly every one of our customers does this without a problem! Please just
make sure to have enough CPU/RAM/disk available.
So:
1. No, not necessary - only if you want to spend more money tha
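(Not part of the thread - an illustrative aside.) If the MONs stay co-located
with the OSDs, it can also help to watch quorum from your own tooling. Below is
a minimal sketch against the librados C API; it assumes rados_mon_command() is
available, that /etc/ceph/ceph.conf is readable, and the client id "admin" is
only an example.

/* Sketch: query monitor quorum via librados (build with: gcc -lrados). */
#include <stdio.h>
#include <rados/librados.h>

int main(void)
{
    rados_t cluster;
    /* Same JSON the CLI sends for `ceph quorum_status` */
    const char *cmd[] = { "{\"prefix\": \"quorum_status\", \"format\": \"json\"}" };
    char *outbuf = NULL, *outs = NULL;
    size_t outbuf_len = 0, outs_len = 0;

    if (rados_create(&cluster, "admin") < 0)              /* client.admin */
        return 1;
    rados_conf_read_file(cluster, "/etc/ceph/ceph.conf");
    if (rados_connect(cluster) < 0)
        return 1;

    if (rados_mon_command(cluster, cmd, 1, NULL, 0,
                          &outbuf, &outbuf_len, &outs, &outs_len) == 0)
        printf("%.*s\n", (int)outbuf_len, outbuf);        /* members, leader, ... */

    if (outbuf)
        rados_buffer_free(outbuf);
    if (outs)
        rados_buffer_free(outs);
    rados_shutdown(cluster);
    return 0;
}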
Hi all,
Bringing up this old thread with a couple questions:
1. Did anyone ever follow up on the 2nd part of this thread? -- is
there any way to cache keystone EC2 credentials?
2. A question for Valery: could you please explain exactly how you
added the EC2 credentials to the local backend (your
Hi,
On 12/17/18 11:42 AM, Dan van der Ster wrote:
> Hi all,
> Bringing up this old thread with a couple questions:
> 1. Did anyone ever follow up on the 2nd part of this thread? -- is
> there any way to cache keystone EC2 credentials?
I don't think this is possible. The AWS signature algorithms inv
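(Illustration only, not part of the original reply, which is cut off above.)
The gist, as I read it, is that an AWS-style signature is an HMAC chain keyed
on the user's secret key, so the gateway cannot validate a request on its own
without holding that secret - it has to ask Keystone. A rough sketch of the
SigV4 signing-key derivation, assuming OpenSSL's HMAC() (build with -lcrypto);
names and values are illustrative:

#include <stdio.h>
#include <string.h>
#include <openssl/hmac.h>

/* HMAC-SHA256(key, msg) -> 32-byte digest in out */
static void hmac256(const unsigned char *key, int keylen,
                    const char *msg, unsigned char out[32])
{
    unsigned int len = 0;
    HMAC(EVP_sha256(), key, keylen,
         (const unsigned char *)msg, strlen(msg), out, &len);
}

/* Derive the SigV4 signing key: successive HMACs over the date, region,
 * service and the literal "aws4_request", seeded with "AWS4" + secret.
 * The request signature is then HMAC(signing_key, string_to_sign). */
void sigv4_signing_key(const char *secret, const char *date,    /* "20181217" */
                       const char *region, const char *service, /* "s3" */
                       unsigned char out[32])
{
    unsigned char k[32];
    char seed[128];

    snprintf(seed, sizeof(seed), "AWS4%s", secret);
    hmac256((const unsigned char *)seed, (int)strlen(seed), date, k);
    hmac256(k, 32, region, k);    /* HMAC copies the key first, so reuse is fine */
    hmac256(k, 32, service, k);
    hmac256(k, 32, "aws4_request", out);
}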
Hi everyone, I'm running a Luminous 12.2.5 cluster with 6 hosts on
Ubuntu 16.04 - 12 HDDs for data each, plus 2 SSD metadata OSDs (three
nodes have an additional SSD I added to have more space to rebalance the
metadata). Currently, the cluster is used mainly as radosgw storage,
with 28 TB of data
Hi all,
Ceph meetings are canceled until January 7th to observe various
upcoming holidays.
You can stay up to date with meetings by subscribing to the Ceph
community calendar via:
Google calendar:
https://calendar.google.com/calendar/b/1?cid=OXRzOWM3bHQ3dTF2aWMyaWp2dnFxbGZwbzBAZ3JvdXAuY2FsZW5kYX
Hello,
We are working to integrate the S3 protocol into our web applications. The
objective is to stop storing documents in the database or on the filesystem
and use S3 buckets instead.
We already gave Ceph with the RADOS gateway a try on physical nodes; it is
working well.
But we are also on Azure, and we can'
Hi All
I have a Ceph cluster which has been working without issues for about 2
years now; it was upgraded about 6 months ago to 10.2.11
root@blade3:/var/lib/ceph/mon# ceph status
2018-12-18 10:42:39.242217 7ff770471700 0 -- 10.1.5.203:0/1608630285 >>
10.1.5.207:6789/0 pipe(0x7ff768000c80 sd=4 :0
Hmm, I wonder why the list is saying my email is forged - I wonder what I have
wrong.
My email is sent via an outbound spam filter, but I was sure I had the
SPF set correctly.
Mike
On 18/12/18 10:53 am, Mike O'Connor wrote:
> Hi All
>
> I have a Ceph cluster which has been working without issues for
On Thu, Dec 13, 2018 at 5:01 AM Zhenshi Zhou wrote:
>
> Hi all,
>
> I'm running a Luminous cluster with tens of OSDs and
> the cluster runs well. As the data grows, Ceph becomes
> more and more important.
>
> What worries me is that many services will go down if the
> cluster is out, for instance, th
Hi everyone,
I noticed that the libcephfs API defines its own "struct ceph_statx" instead of
using "struct stat". Why not use "struct stat" directly? I think that would be
easier to understand and more convenient for callers.
struct ceph_statx {
uint32_t stx_mask;
uint32_
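(An illustrative aside, not part of the original mail.) One practical
difference is the stx_mask/want mechanism and fields that struct stat cannot
carry, such as the creation time (btime). A small sketch against libcephfs,
assuming the usual ceph_mount()/ceph_statx() calls and an illustrative path:

/* Sketch: stat a CephFS path via libcephfs (build with -lcephfs).
 * ceph_statx() lets the caller say which fields it wants, and the returned
 * stx_mask says which fields are actually valid. */
#include <stdio.h>
#include <cephfs/libcephfs.h>

int main(void)
{
    struct ceph_mount_info *cmount;
    struct ceph_statx stx;

    if (ceph_create(&cmount, NULL) < 0)
        return 1;
    ceph_conf_read_file(cmount, NULL);      /* default ceph.conf locations */
    if (ceph_mount(cmount, "/") < 0)
        return 1;

    /* Ask only for the basic stats plus the creation time. */
    if (ceph_statx(cmount, "/some/file", &stx,
                   CEPH_STATX_BASIC_STATS | CEPH_STATX_BTIME, 0) == 0) {
        if (stx.stx_mask & CEPH_STATX_SIZE)
            printf("size:  %llu\n", (unsigned long long)stx.stx_size);
        if (stx.stx_mask & CEPH_STATX_BTIME)
            printf("btime: %lld\n", (long long)stx.stx_btime.tv_sec);
    }

    ceph_unmount(cmount);
    ceph_release(cmount);
    return 0;
}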
I've added DKIM to my server - will this help?
On 18/12/18 11:04 am, Mike O'Connor wrote:
> Hmm, I wonder why the list is saying my email is forged - I wonder what I have
> wrong.
>
> My email is sent via an outbound spam filter, but I was sure I had the
> SPF set correctly.
>
> Mike
>
> On 18/12/18 10:53 am, M
On Tue, Dec 18, 2018 at 10:23 AM Mike O'Connor wrote:
>
> Hi All
>
> I have a Ceph cluster which has been working without issues for about 2
> years now; it was upgraded about 6 months ago to 10.2.11
>
> root@blade3:/var/lib/ceph/mon# ceph status
> 2018-12-18 10:42:39.242217 7ff770471700 0 -- 10.1
That's kind of unrelated to Ceph, but since you have written two mails already,
and I believe it is caused by the mailing list software for ceph-users:
your original mail distributed via the list ("[ceph-users] Ceph 10.2.11 -
Status not working") did
*not* have the forged warning.
Only the subseque
Hi Alex,
We are using Ceph mostly as a shared filesystem with CephFS, as well as
some RBD images for Docker volumes and a little S3 data. I have been
looking for a plan that can back up data at the RADOS level. But maybe I should
back up the data separately.
Thanks for the reply.
Alex Gorbachev wrote in December 2018
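(Not from the thread.) If the backups do end up being done per application
rather than at the RADOS level, snapshots are the usual building block for the
RBD part. A rough sketch against librados/librbd (build with -lrados -lrbd);
the pool, image and snapshot names are purely illustrative:

#include <rados/librados.h>
#include <rbd/librbd.h>

/* Create a snapshot of one RBD image to anchor a backup run. */
int snapshot_image(const char *pool, const char *image_name,
                   const char *snap_name)   /* e.g. "backup-2018-12-18" */
{
    rados_t cluster;
    rados_ioctx_t ioctx;
    rbd_image_t image;
    int r = -1;

    if (rados_create(&cluster, "admin") < 0)
        return -1;
    rados_conf_read_file(cluster, NULL);
    if (rados_connect(cluster) < 0)
        goto out_cluster;
    if (rados_ioctx_create(cluster, pool, &ioctx) < 0)
        goto out_cluster;
    if (rbd_open(ioctx, image_name, &image, NULL) < 0)
        goto out_ioctx;

    r = rbd_snap_create(image, snap_name);

    rbd_close(image);
out_ioctx:
    rados_ioctx_destroy(ioctx);
out_cluster:
    rados_shutdown(cluster);
    return r;
}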
Hi all, I have a Ceph cluster running Luminous 12.2.5.
In the cluster, we configured CephFS with two MDS servers:
ceph-mds-test04 is active and ceph-mds-test05 is standby.
Here is the MDS configuration:
[mds]
mds_cache_size = 100
mds_cache_memory_limit = 42949672960
mds_standby_replay
On 2018-12-17 20:16, Brad Hubbard wrote:
> On Tue, Dec 18, 2018 at 10:23 AM Mike O'Connor wrote:
>> Hi All
>> I have a Ceph cluster which has been working without issues for about 2
>> years now; it was upgraded about 6 months ago to 10.2.11
>> root@blade3:/var/lib/ceph/mon# ceph status
>> 2018-12-18 10:4
Hi Oliver, Peter,
Thanks - about an hour after my second email I sat back, thought about
it some more and realised this was the case.
I've also fixed the Ceph issue: a simple set of issues compounded into the
ceph-mons not working correctly.
1. We had a power failure 7 days ago, which for some reas
Thanks Mr Konstantin and Martin,
So with a 200 TB cluster, the most affordable choice is adding MONs to the OSD
hosts and preparing enough CPU and RAM for the MON services, plus storage for
the LevelDB store.
On Mon, Dec 17, 2018 at 16:55 Martin Verges <
martin.ver...@croit.io> wrote:
> Hello,
>
> we do