Hi Alexandre,
What is the behavior of mongo when a shard is unavailable for some reason
(crash or network partition) ? If shard3 is on the wrong side of a network
partition and uses RBD, it will hang. Is it something that mongo will
gracefully handle ?
I have no experience in this but I'm curious.
>>What is the behavior of mongo when a shard is unavailable for some reason
>>(crash or network partition) ? If shard3 is on the wrong side of a network
>>partition and uses RBD, it will hang. Is it something that mongo will
>>gracefully handle ?
If one shard is down, I think the cluster is
I have this problem too. Help!
-- Original --
From: "??"
Date: Thursday, February 12, 2015 11:14
To: "ceph-users@lists.ceph.com"
Subject: [ceph-users] Upgrade 0.80.5 to 0.80.8 --the VM's read request become too slow
Hello! We use Ceph+Opens
>>Hi,
>>Can you test with disabling rbd_cache ?
>>I remember of a bug detected in giant, not sure it's also the case for firefly
This was the tracker:
http://tracker.ceph.com/issues/9513
But it has been solved and backported to firefly.
Also, can you test 0.80.6 and 0.80.7 ?
- Mai
Hello!
If I use a cache tier pool in writeback mode, is it a good idea to turn
off the journal on the OSDs?
I think in this situation the journal can help if you hit a rebalance
procedure on the "cold" storage. In other situations the journal is
useless, I think.
Any comments?
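(For context, a minimal sketch of the cache-tier commands this setup assumes; the pool names are hypothetical, not from the poster's cluster:)
    # attach a cache pool to a backing pool and put it in writeback mode
    ceph osd tier add cold-storage hot-cache
    ceph osd tier cache-mode hot-cache writeback
    ceph osd tier set-overlay cold-storage hot-cache
    # the OSD journal itself is configured per OSD (e.g. "osd journal" in ceph.conf),
    # independently of any cache tier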
Hi,
Have you also tested the 0.80.6 and 0.80.7 librbd ?
It could be useful to search the commits in git.
(I'm not sure that all changes are in the release notes)
- Original Message -
From: "杨万元"
To: "ceph-users"
Sent: Thursday, February 12, 2015 04:14:15
Subject: [ceph-users] Upgrade 0.80.5 to 0.80.8 --t
Hi.
Hmm ... I was wondering why I have such a low read speed on another cluster
P.S. ceph 0.80.8
2015-02-12 14:33 GMT+03:00 Alexandre DERUMIER :
> >>Hi,
> >>Can you test with disabling rbd_cache ?
>
> >>I remember of a bug detected in giant, not sure it's also the case for
> firefly
>
> This
Hi all,
Cluster : 540 OSDs , Cache tier and EC pool
ceph version 0.87
cluster c2a97a2f-fdc7-4eb5-82ef-70c52f2eceb1
health HEALTH_WARN 10 pgs peering; 21 pgs stale; 2 pgs stuck inactive;
2 pgs stuck unclean; 287 requests are blocked > 32 sec; recovery 24/6707031
objects degraded (0.000%); to
Hi All,
Having a few problems removing cephfs file systems.
I want to remove my current pools (they were used for test data) - wiping all current
data - and start a fresh file system on my current cluster.
I have looked over the documentation but I can't find anything on this. I have
an object store
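(Not from the original mail, just a hedged sketch of the pool side of a wipe-and-recreate, assuming example pool names and that the MDS side is handled as described later in this thread:)
    # delete the old cephfs pools (names are examples)
    ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
    ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
    # recreate them and build a fresh filesystem on top
    ceph osd pool create cephfs_metadata 128
    ceph osd pool create cephfs_data 128
    ceph fs new cephfs cephfs_metadata cephfs_data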
Hello,
it's about RADOS Gateway.
S3 clients get a 400 error uploading files larger than or equal to 2GB. For
example tcpdump extracts:
Uploading less than 2GB files:
>Client:
PUT /tzzzvb/file.txt HTTP/1.1
User-Agent: CloudBerryLab.Base.HttpUtil.Client 4.0.6
(http://www.cloudberrylab.com/)
x-amz-meta
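(A hedged aside, not from the thread: one common way around single-PUT size limits is multipart upload; with s3cmd, for example, something roughly like:)
    # upload in multipart chunks instead of one 2GB+ PUT (chunk size is illustrative)
    s3cmd put --multipart-chunk-size-mb=100 bigfile.bin s3://tzzzvb/bigfile.bin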
Hi all,
I'm running a small Ceph cluster with 4 OSD nodes, which serves as a
storage backend for a set of KVM virtual machines. The VMs use RBD for disk
storage. On the VM side I'm using virtio-scsi instead of virtio-blk in
order to gain DISCARD support.
Each OSD node is running on a separate mac
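(For readers following along, a rough sketch of the qemu options this setup implies; the image and id names are illustrative, not the poster's actual configuration:)
    # virtio-scsi controller plus an RBD-backed disk with DISCARD passed through
    qemu-system-x86_64 ... \
      -device virtio-scsi-pci,id=scsi0 \
      -drive file=rbd:rbd/vm-disk-1,format=raw,if=none,id=drive0,cache=writeback,discard=unmap \
      -device scsi-hd,bus=scsi0.0,drive=drive0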
Hello Community Members
I am happy to introduce the first book on Ceph with the title “Learning Ceph”.
Many folks from the publishing house, together with technical reviewers and me,
spent several months getting this book compiled and published.
Finally the book is up for sale on , I hope you wou
Hello!
I am trying to collect some info about Ceph performance on a cluster. The question
is whether I can collect all metrics from the cluster, or if the only way to do it
is to ask all the nodes with ceph perf dump commands.
Or maybe there are better ways to understand which operations
Ceph spends its time on?
And
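(A hedged note for the archives: the per-daemon counters are exposed via each daemon's admin socket, so cluster-wide collection does mean asking every daemon; the daemon names below are examples:)
    # dump one OSD's internal performance counters
    ceph daemon osd.0 perf dump
    # equivalent form using the socket path directly
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump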
- Original Message -
> From: baijia...@126.com
> To: "ceph-users"
> Sent: Wednesday, February 4, 2015 5:47:03 PM
> Subject: [ceph-users] RGW put file question
>
> when I put file failed, and run the function "
> RGWRados::cls_obj_complete_cancel",
> why we use CLS_RGW_OP_ADD not use CLS
Hi Howard,
By default each OSD is weighted based on its capacity automatically, so the
smaller OSDs will receive less data than the bigger ones.
Be careful though in this case to properly monitor the utilization rate of all the
OSDs in your cluster so that none of them reaches the full ratio
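(A hedged aside, not from the original mail: the weights and fill levels can be inspected and adjusted with the standard commands, e.g.:)
    # CRUSH tree with per-OSD weights
    ceph osd tree
    # overall and per-pool utilization
    ceph df
    # lower the CRUSH weight of an over-full OSD (id and weight are illustrative)
    ceph osd crush reweight osd.12 0.8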
Hi everybody,
Thank you for reading my question. My ceph cluster is 5 mon, 1 mds, 3 osd.
When the ceph cluster has run for one day or a few days, I can't cp some files from
ceph. I use mount.ceph for the client. The cp command stays a zombie for a long, long
time! When I restart the mds and cp again, it wo
On 02/08/2015 10:41 PM, Scott Laird wrote:
Does anyone have a good recommendation for per-OSD memory for EC? My EC
test blew up in my face when my OSDs suddenly spiked to 10+ GB per OSD
process as soon as any reconstruction was needed. Which (of course)
caused OSDs to OOM, which meant more reco
ok, I'll test it tomorrow , thank you.
-- Original --
From: "Irek Fasikhov"
Date: Thu, Feb 12, 2015 09:29 PM
To: "Alexandre DERUMIER"
Cc: "killingwolf"; "ceph-users"
Subject: Re: [ceph-users] re: Upgrade 0.80.5 to 0.80.8 --the VM's read
request become
>>To my surprise however these slow requests caused aborts from the block
>>device on the VM side, which ended up corrupting files
This is very strange, you shouldn't have corruption.
Do you use writeback ? If yes, have you disabled barriers on your filesystem ?
(What is the qemu version ? gue
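(A hedged sketch of the checks being asked for here; the commands are generic, not from the poster's setup:)
    # qemu version on the hypervisor
    qemu-system-x86_64 --version
    # inside the guest: were barriers explicitly disabled on any filesystem?
    grep -E 'nobarrier|barrier=0' /proc/mounts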
On Fri, Feb 6, 2015 at 12:16 PM, Krzysztof Nowicki
wrote:
> Hi all,
>
> I'm running a small Ceph cluster with 4 OSD nodes, which serves as a storage
> backend for a set of KVM virtual machines. The VMs use RBD for disk storage.
> On the VM side I'm using virtio-scsi instead of virtio-blk in order
Trying to build calamari rpm+deb packages following this guide:
http://karan-mj.blogspot.fi/2014/09/ceph-calamari-survival-guide.html
The server packages build fine, but the client build fails at:
dashboard manage admin login, due to:
yo < 1.1.0 seems to be needed to build the clients,
but I can't find this with
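(Not an answer from the thread, just a hedged guess at a workaround: npm can pin yo below 1.1.0 before building the client packages.)
    # install a yo release older than 1.1.0 (the version range is illustrative)
    npm install -g "yo@<1.1.0"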
What version of Ceph are you running? It has varied a bit.
But I think you want to just turn off the MDS and run the "fail"
command — deactivate is actually the command for removing a logical
MDS from the cluster, and you can't do that for a lone MDS because
there's nobody to pass off the data to
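(For the archives, a hedged sketch of what that procedure looks like on a 0.87-era cluster; the filesystem name is an example:)
    # stop the ceph-mds daemon on its host, then mark the rank failed
    ceph mds fail 0
    # remove the filesystem definition so the pools can be deleted or reused
    ceph fs rm cephfs --yes-i-really-mean-it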
I am running 0.87. In the end I just wiped the cluster and started again - it
was quicker.
Warren
-Original Message-
From: Gregory Farnum [mailto:g...@gregs42.com]
Sent: 12 February 2015 16:25
To: Jeffs, Warren (STFC,RAL,ISIS)
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph
Oh, hah, your initial email had a very delayed message
delivery...probably got stuck in the moderation queue. :)
On Thu, Feb 12, 2015 at 8:26 AM, wrote:
> I am running 0.87, In the end I just wiped the cluster and started again - it
> was quicker.
>
> Warren
>
> -Original Message-
> Fro
Hi all,
Trying to do this:
ceph -k ceph.client.admin.keyring auth add client.radosgw.gateway -i
ceph.client.radosgw.keyring
Getting this error:
Error EINVAL: entity client.radosgw.gateway exists but key does not match
What can this be??
Thanks!
Beanos
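(Not part of the original mail: the usual way past "exists but key does not match" is to remove the stale entity first, or reuse the key the cluster already has; a rough sketch:)
    # drop the stale entry, then re-add it with your generated keyring
    ceph auth del client.radosgw.gateway
    ceph -k ceph.client.admin.keyring auth add client.radosgw.gateway -i ceph.client.radosgw.keyring
    # or fetch the existing key and put that into the gateway keyring instead
    ceph auth get client.radosgw.gateway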
On 02/10/2015 07:54 PM, Blair Bethwaite wrote:
Just came across this in the docs:
"Currently (i.e., firefly), namespaces are only useful for
applications written on top of librados. Ceph clients such as block
device, object storage and file system do not currently support this
feature."
Then fou
Steffen Winther writes:
>
> Trying to build calamari rpm+deb packages following this guide:
> http://karan-mj.blogspot.fi/2014/09/ceph-calamari-survival-guide.html
>
> Server packages works fine, but fails in clients for:
> dashboard manage admin login due to:
>
> yo < 1.1.0 seems needed to b
On 12/02/15 23:18, Alexandre DERUMIER wrote:
>>What is the behavior of mongo when a shard is unavailable for some reason (crash or
>>network partition) ? If shard3 is on the wrong side of a network partition and uses
>>RBD, it will hang. Is it something that mongo will gracefully handle ?
If one
My particular interest is for a less dynamic environment, so manual
key distribution is not a problem. Re. OpenStack, it's probably good
enough to have the Cinder host creating them as needed (presumably
stored in its DB) and just send the secret keys over the message bus
to compute hosts as needed
Hi, all developers and users,
there are 5 mons in our ceph cluster:
epoch 7
fsid 0dfd2bd5-1896-4712-916b-ec02dcc7b049
last_changed 2015-02-13 09:11:45.758839
created 0.00
0: 10.117.16.17:6789/0 mon.b
1: 10.118.32.7:6789/0 mon.c
HEALTH_WARN 2 mons down, quorum 0,1,2 b,c,d
mon.e (rank 3) addr
Hi,
all developers and users,
when I add a new mon to the current mon cluster, it fails with 2 mons out of quorum.
there are 5 mons in our ceph cluster:
epoch 7
fsid 0dfd2bd5-1896-4712-916b-ec02dcc7b049
last_changed 2015-02-13 09:11:45.758839
created 0.00
0: 10.117.16.17:6789/0 mon.b
1: 10.118.3
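(A hedged aside for anyone hitting the same thing; these are the standard quorum checks, the mon name is an example:)
    # monitor summary and current quorum
    ceph mon stat
    # detailed state from one monitor's admin socket (run on that mon's host)
    ceph daemon mon.e mon_status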
Hi Chris,
Please find my answers below in blue.
On Thu, Feb 12, 2015 at 12:42 PM, Chris Hoy Poy wrote:
> Hi Sumit,
>
> A couple questions:
>
> What brand/model SSD?
>
Samsung 480GB SSD (PM853T), rated at 90K random-write IOPS (4K, 368 MB/s)
>
> What brand/model HDD?
>
64GB memory, 300GB SAS HDD (seagate
Thanks very much for your advice.
Yes, as you said, disabling rbd_cache will improve the read requests, but if
I disable rbd_cache, the randwrite requests will get worse. So this method
maybe cannot solve my problem, can it?
In addition, I also tested the 0.80.6 and 0.80.7 librbd; they are as good
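(For reference, a hedged sketch of where this client-side switch lives; the section and option are the standard ones, the two values are just the cases being compared:)
    # ceph.conf on the client / hypervisor side
    [client]
        rbd cache = false   # better for this read-heavy case
        # rbd cache = true  # keeps the writeback cache that helps randwrite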
Wow, Cong
BTW, I found that the link to the sample copy is a 404.
2015-02-06 6:53 GMT+08:00 Karan Singh :
> Hello Community Members
>
> I am happy to introduce the first book on Ceph with the title “*Learning
> Ceph*”.
>
> Me and many folks from the publishing house together with technical
> reviewers spe
On Tue, Feb 10, 2015 at 9:26 PM, kenmasida <981163...@qq.com> wrote:
>
> hi, everybody
>Thang you for reading my question. my ceph cluster is 5 mon, 1 mds , 3
> osd . When ceph cluster runned one day or some days, I can't cp some file
> from ceph. I use mount.ceph for client . The cp'com
I get the following error on standard Debian Wheezy
# wget https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
--2015-02-13 07:19:04-- https://ceph.com/git/?p=ceph.git
Resolving ceph.com (ceph.com)... 208.113.241.137, 2607:f298:4:147::b05:fe2a
Connecting to ceph.com (ceph.com)|208.11
Hi,
I think the root-CA (COMODO RSA Certification Authority) is not available on
your Linux host? Using Google Chrome, connecting to https://ceph.com/ works fine.
regards
Danny
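(A hedged follow-up sketch, not from the original mail: on Wheezy the CA bundle comes from the ca-certificates package, and the URL also needs quoting so the shell does not split it at the semicolons.)
    # refresh the system CA bundle
    apt-get update && apt-get install --reinstall ca-certificates
    # quote the URL so ';' stays part of it
    wget 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' -O release.asc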
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Dietmar Maur
On Thu Feb 12 2015 at 16:23:38, Andrey Korolyov wrote:
On Fri, Feb 6, 2015 at 12:16 PM, Krzysztof Nowicki
> wrote:
> > Hi all,
> >
> > I'm running a small Ceph cluster with 4 OSD nodes, which serves as a
> storage
> > backend for a set of KVM virtual machines. The VMs use RBD for disk
>