Hello,
just to throw some hard numbers into the ring, I've (very much STRESS)
tested readproxy vs. readforward with more or less expected results.
New Jewel cluster, 3 cache-tier nodes (5 OSD SSDs each), 3 HDD nodes,
IPoIB network.
Notably 2x E5-2623 v3 @ 3.00GHz in the cache-tiers.
2 VMs (on d
Hi community!
I'm wondering: what are the actual use cases for cache tiering?
Can I expect a performance improvement in a scenario where I use Ceph
with RBD for hosting VMs?
The current pool includes 15 OSDs on 10K SAS drives, with an SSD journal for
every 5 OSDs.
Thanks
Ruslan
Hi,
We've been having this ongoing problem with threads timing out on the
OSDs. Typically we'll see the OSD become unresponsive for about a minute,
as threads from other OSDs time out. The timeouts don't seem to be
correlated to high load. We turned up the logs to 10/10 for part of a day
to ca
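For reference, turning the logging up to 10/10 looks roughly like this in
ceph.conf (the subsystems picked here are an assumption; adjust to whichever
ones you are actually chasing):
[osd]
# Assumed example: raise log level / in-memory level to 10/10
# for the OSD and filestore subsystems while debugging the timeouts.
debug osd = 10/10
debug filestore = 10/10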
Since your app is an Apache/PHP app, is it possible for you to reconfigure
the app to use an S3 module rather than a POSIX file open()? Then with Ceph,
drop CephFS and configure the Civetweb S3 gateway. You can have
"active-active" endpoints with round-robin DNS or an F5 or something. You
would also have
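A minimal Civetweb gateway stanza would be along these lines (the instance
name and port are placeholders, not a recommendation):
[client.rgw.gateway1]
# Hypothetical gateway instance; one such section per gateway host.
rgw frontends = civetweb port=7480
Two or more such gateways behind round-robin DNS or an F5 give the
"active-active" endpoints mentioned above.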
Then you want separate partitions for each OSD journal. If you have 4 HDD
OSDs using this SSD as their journal, you should have 4x 5GB partitions on
the SSD.
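Each OSD can then be pointed at its own partition, e.g. via 'osd journal' in
ceph.conf (the partition label below is hypothetical; ceph-disk normally
handles this with a symlink instead):
[osd.0]
# Hypothetical label; repeat one section per OSD, each pointing
# at its own 5GB journal partition on the shared SSD.
osd journal = /dev/disk/by-partlabel/journal-osd0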
On Mon, Jun 12, 2017 at 12:07 PM Deepak Naidu wrote:
> Thanks for the note, yes I know them all. It will be shared among multiple
> 3-4 HDD
Thanks for the note, yes I know them all. It will be shared among multiple
(3-4) HDD OSD disks.
--
Deepak
On Jun 12, 2017, at 7:07 AM, David Turner <drakonst...@gmail.com> wrote:
Why do you want a 70GB journal? You linked to the documentation, so I'm
assuming that you followed the formu
2017-06-12 16:10 GMT+02:00 David Turner :
> I have an incredibly light-weight cephfs configuration. I set up an MDS
> on each mon (3 total), and have 9TB of data in cephfs. This data only has
> 1 client that reads a few files at a time. I haven't noticed any downtime
> when it fails over to a s
I have an incredibly light-weight cephfs configuration. I set up an MDS on
each mon (3 total), and have 9TB of data in cephfs. This data only has 1
client that reads a few files at a time. I haven't noticed any downtime
when it fails over to a standby MDS. So it definitely depends on your
workl
Why do you want a 70GB journal? You linked to the documentation, so I'm
assuming that you followed the formula stated to figure out how big your
journal should be... "osd journal size = {2 * (expected throughput *
filestore max sync interval)}". I've never heard of a cluster that
requires such a
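Worked through with assumed numbers: a journal device sustaining ~500 MB/s
and the default filestore max sync interval of 5 seconds gives
2 * (500 MB/s * 5 s) = 5000 MB, i.e. roughly the 5GB partitions discussed
above rather than 70GB:
[osd]
# Assumes ~500 MB/s expected throughput and the default 5 s sync interval:
# 2 * (500 * 5) = 5000 MB
osd journal size = 5000
filestore max sync interval = 5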
We use the following in our ceph.conf for MDS failover. We're running one
active and one standby. Last time it failed over, there was about 2 minutes
of downtime before the mounts started responding again, but it did recover
gracefully.
[mds]
max_mds = 1
mds_standby_for_rank = 0
mds_standby_replay =
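For context, a one-active/one-standby-replay layout usually ends up looking
something like this (the daemon name and values here are assumptions):
[mds.b]
# Hypothetical standby daemon: follow rank 0 and replay its journal
# so failover is quicker.
mds_standby_for_rank = 0
mds_standby_replay = true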
Adding ceph-devel as this now involves two bugs that are IMO critical,
one resulting in data loss, the other in data not getting removed
properly.
2017-06-07 9:23 GMT+00:00 Jens Rosenboom :
> 2017-06-01 18:52 GMT+00:00 Gregory Farnum :
>>
>>
>> On Thu, Jun 1, 2017 at 2:03 AM Jens Rosenboom wrote:
2017-06-12 10:49 GMT+02:00 Burkhard Linke <burkhard.li...@computational.bio.uni-giessen.de>:
> Hi,
>
>
> On 06/12/2017 10:31 AM, Daniel Carrasco wrote:
>
>> Hello,
>>
>> I'm very new on Ceph, so maybe this question is a noob question.
>>
>> We have an architecture that have some web servers (ngin
Hi,
On 06/12/2017 10:31 AM, Daniel Carrasco wrote:
Hello,
I'm very new on Ceph, so maybe this question is a noob question.
We have an architecture that have some web servers (nginx, php...)
with a common File Server through NFS. Of course that is a SPOF, so we
want to create a multi FS to a
Hello,
I'm very new to Ceph, so maybe this is a noob question.
We have an architecture with some web servers (nginx, PHP...) that share a
common file server through NFS. Of course that is a SPOF, so we want to
create a multi-FS setup to avoid future problems.
We've already tested GlusterFS, bu
Hello Eric,
You are probably hitting the git commits listed on this thread:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-April/017731.html
If this is the same behaviour, your options are:
a) set all FQDNs inside the array of hostnames of your zonegroup(s)
or
b) remove 'rgw dns nam
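For option (b), that means dropping or commenting out the per-gateway DNS
name so RGW stops matching requests against a single hostname; the instance
name and hostname below are placeholders:
[client.rgw.gateway1]
# Option (b): remove / comment out this line.
# Option (a) instead adds every FQDN to the zonegroup's "hostnames" array.
# rgw dns name = s3.example.com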