d that if I go directly (Internet <--> Radosgw_Server_1) I do not
have any issues with special characters.
Any idea what I am missing? Perhaps something needs changing on the proxy
server?
Cheers
Andrei
- Original Message -----
From: "Yehuda Sadeh"
To: "Andr
Hi guys,
Was wondering if 0.80.2 is coming any time soon? I am planning an upgrade from
Emperor and was wondering if I should wait for 0.80.2 to come out if the
release date is pretty soon. Otherwise, I will go for the 0.80.1.
Cheers
Andrei
Just thought to save some time )))
- Original Message -
From: "Wido den Hollander"
To: ceph-users@lists.ceph.com
Sent: Thursday, 3 July, 2014 12:11:07 PM
Subject: Re: [ceph-users] release date for 0.80.2
On 07/03/2014 10:27 AM, Andrei Mikhailovsky wrote:
> Hi gu
nginx and also have a problem with 100-Continue.
Only Apache 2.x works fine.
BR,
Michael
I haven't tried SSL yet. We currently don't have a wildcard certificate
for this, so it hasn't been a concern (and in our current use case, all the files
are publ
Hi Andrija,
I've got at least two more stories of a similar nature. One is from a friend running
a ceph cluster and one is from me. Both of our clusters are pretty small. My
cluster has only two osd servers with 8 osds each, 3 mons. I have an ssd
journal per 4 osds. My friend has a cluster of 3 mons
Quenten,
It has been noted before and I've seen a thread on the mailing list about it.
In the long term, I've not noticed a great increase in ram. By that I mean that
initially, right after doing the upgrade from emperor to firefly and restarting
the osd servers I did notice about 20-25% more r
h mon or osd server.
Cheers
Andrei
--
Andrei Mikhailovsky
Director
Arhont Information Security
Web: http://www.arhont.com
http://www.wi-foo.com
Tel: +44 (0)870 4431337
Fax: +44 (0)208 429 3111
PGP: Key ID - 0x2B3438DE
PGP: Server - keyserver.pgp.com
DISCLAIMER
The information cont
Drew, I would not use iscsi with kvm. Instead, I would use the built-in rbd
support.
However, you would use something like nfs/iscsi if you were to connect other
hypervisors to the ceph backend. Having failover capabilities is important here ))
Andrei
--
Andrei Mikhailovsky
Director
Arhont
Quenten,
We've got two monitors sitting on the osd servers and one on a different
server.
Andrei
--
Andrei Mikhailovsky
Director
Arhont Information Security
Web: http://www.arhont.com
http://www.wi-foo.com
Tel: +44 (0)870 4431337
Fax: +44 (0)208 429 3111
PGP: Key ID - 0x2B3
ng, emperor and now firefly releases.
Because of this I've set the noout flag on my cluster and have to keep an eye on
the osds for manual intervention, which is far from ideal (((.
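For reference, the flag I'm referring to is set and cleared like this (run from
any node with an admin keyring):
ceph osd set noout      # stop the cluster marking down osds out automatically
ceph osd unset noout    # restore normal behaviour once the osds are dealt with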
Andrei
--
Andrei Mikhailovsky
Director
Arhont Information Security
Web: http://www.arhont.com
http:/
Comments inline
- Original Message -
From: "Sage Weil"
To: "Quenten Grasso"
Cc: ceph-users@lists.ceph.com
Sent: Thursday, 17 July, 2014 4:44:45 PM
Subject: Re: [ceph-users] ceph osd crush tunables optimal AND add new OSD at
the same time
On Thu, 17 Jul 2014, Quenten Grasso wrot
Hello guys,
I have noticed the following message/error after upgrading to firefly. Does
anyone know what needs doing to correct it?
Thanks
Andrei
[ 25.911055] libceph: mon1 192.168.168.201:6789 feature set mismatch, my 40002
< server's 20002040002, missing 2000200
[ 25
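From what I've read so far, this kind of mismatch usually means the kernel
client is too old for the feature bits the cluster now advertises; I'm assuming
the bits here correspond to the newer crush tunables, in which case the usual
workarounds are either a newer kernel or relaxing the tunables, e.g.:
ceph osd crush tunables legacy    # or: ceph osd crush tunables bobtail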
Thanks guys,
I am trying 3.15 kernel to see how it works.
Andrei
--
Andrei Mikhailovsky
Director
Arhont Information Security
Web: http://www.arhont.com
http://www.wi-foo.com
Tel: +44 (0)870 4431337
Fax: +44 (0)208 429 3111
PGP: Key ID - 0x2B3438DE
PGP: Server - keyserver.pgp.com
Ricardo,
Thought to share my testing results.
I've been using IPoIB with ceph for quite some time now. I've got QDR
osd/mon/client servers to serve rbd images to kvm hypervisor. I've done some
performance testing using both rados and guest vm benchmarks while running the
last three stable ve
Hello guys,
Was wondering if anyone has tried using the Crucial MX100 ssds either for osd
journals or a cache pool? It seems like a good cost-effective alternative to the
more expensive drives, and read/write performance is very good as well.
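Before committing to them, I'd probably run a quick O_DSYNC write test against a
spare drive, since journal writes are synchronous; something along these lines
(the device path is just an example, and the test overwrites whatever is on it):
dd if=/dev/zero of=/dev/sdX bs=4k count=100000 oflag=direct,dsync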
Thanks
--
Andrei Mikhailovsky
Director
Arhont
Thanks for your comments.
Andrei
--
Andrei Mikhailovsky
Director
Arhont Information Security
Web: http://www.arhont.com
http://www.wi-foo.com
Tel: +44 (0)870 4431337
Fax: +44 (0)208 429 3111
PGP: Key ID - 0x2B3438DE
PGP: Server - keyserver.pgp.com
DISCLAIMER
The information
Hello guys,
I was hoping to get some answers on how ceph would behave when I install SSDs
on the hypervisor level and use them as a cache pool. Let's say I've got 10 kvm
hypervisors and I install one 512GB ssd on each server. I then create a cache
pool for my storage cluster using these ssds. M
Anyone have an idea of how it works?
Thanks
- Original Message -
From: "Andrei Mikhailovsky"
To: ceph-users@lists.ceph.com
Sent: Monday, 4 August, 2014 10:10:03 AM
Subject: [ceph-users] cache pools on hypervisor servers
Hello guys,
I was hoping to get some answ
Robert, thanks for your reply, please see my comments inline
- Original Message -
> From: "Robert van Leeuwen"
> To: "Andrei Mikhailovsky" , ceph-users@lists.ceph.com
> Sent: Wednesday, 13 August, 2014 6:57:57 AM
> Subject: RE: cache pools on hypervisor
-
> From: "Robert van Leeuwen"
> To: "Andrei Mikhailovsky"
> Cc: ceph-users@lists.ceph.com
> Sent: Thursday, 14 August, 2014 9:31:24 AM
> Subject: RE: cache pools on hypervisor servers
> > Personally I am not worried too much about the hypervisor
Thanks a lot for your input. I will proceed with putting the cache pool on the
storage layer instead.
Andrei
- Original Message -
> From: "Sage Weil"
> To: "Andrei Mikhailovsky"
> Cc: "Robert van Leeuwen" ,
> ceph-users@lists.ceph.com
>
Hugo,
I would look at setting up a cache pool made of 4-6 ssds to start with. So, if
you have 6 osd servers, stick at least 1 ssd disk in each server for the cache
pool. It should greatly reduce the stress on the osds from writing a large number of
small files. Your cluster should become more responsiv
Hello guys,
I am planning to perform regular rbd pool off-site backup with rbd export and
export-diff. I've got a small ceph firefly cluster with an active writeback
cache pool made of a couple of osds. I've got the following question which I hope
the ceph community could answer:
Will this rbd e
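For reference, the workflow I have in mind is roughly the following (pool, image
and snapshot names are just examples):
rbd snap create rbd/vm-disk@base                                   # initial snapshot
rbd export rbd/vm-disk@base vm-disk.full                           # full export, shipped off-site once
rbd snap create rbd/vm-disk@daily1                                 # later snapshot
rbd export-diff --from-snap base rbd/vm-disk@daily1 vm-disk.diff1  # incremental delta since the base snapshot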
So it looks like using rbd export / import will negatively affect the client
performance, which is unfortunate. Is this really the case? Any plans on
changing this behavior in future versions of ceph?
Cheers
Andrei
- Original Message -
From: "Robert LeBlanc"
To: "Andr
Does that also mean that scrubbing and deep-scrubbing also squishes data out of
the cache pool? Could someone from the ceph community confirm this?
Thanks
- Original Message -
From: "Robert LeBlanc"
To: "Andrei Mikhailovsky"
Cc: ceph-users@lists.ceph.com
Sent: Fri
considered as hot data
by the pool as it has been recently changed. So, I do not expect the delta
exports to squeeze too much data out of the cache pool. That is, if I've
understood correctly how cache pools work.
Andrei
- Original Message -
From: "Sage Weil"
To: "Andrei Mi
) will go
through cache?
Andrei
- Original Message -
From: "Sage Weil"
To: "Andrei Mikhailovsky"
Cc: "Robert LeBlanc" , ceph-users@lists.ceph.com
Sent: Friday, 22 August, 2014 10:34:24 PM
Subject: Re: [ceph-users] pool with cache pool and rbd expor
Hello guys,
I am planning to do rbd images off-site backup with rbd export-diff and I was
wondering if ceph has checksumming functionality so that I can compare source
and destination files for consistency? If so, how do I retrieve the checksum
values from the ceph cluster?
Thanks
Andrei
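As far as I can tell there is no per-image checksum stored by rbd itself, so my
assumption is that the checksum has to be computed over the exported stream on
both ends, e.g.:
rbd export rbd/vm-disk - | md5sum    # checksum of the source image via an export to stdout
md5sum vm-disk.img                   # checksum of the off-site copy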
Hello guys,
Is it possible to export an rbd image while preserving the clone structure? So,
if I've got a single base rbd image and 10 vm images that were cloned from the
original one, would the rbd export preserve this structure on the destination
pool, or would it waste space and create 10 ind
- Original Message -
From: "Wido den Hollander"
To: ceph-users@lists.ceph.com
Sent: Monday, 25 August, 2014 10:31:14 AM
Subject: Re: [ceph-users] ceph rbd image checksums
On 08/24/2014 08:27 PM, Andrei Mikhailovsky wrote:
> Hello guys,
>
> I am planning to do rbd images of
Off the top of my head, it is recommended to use 3 mons in production. Also,
for the 22 osds your number of PGs looks a bit low, you should look at that.
"The performance of the cluster is poor" - this is too vague. What is your
current performance, what benchmarks have you tried, what is your dat
Hello
I am seeing this message every 900 seconds on the osd servers. My dmesg output
is all filled with:
[256627.683702] libceph: osd3 192.168.168.200:6821 socket closed (con state
OPEN)
[256627.687663] libceph: osd6 192.168.168.200:6841 socket closed (con state
OPEN)
Looking at the ceph-osd
Hi,
I am running a few tests for exporting volumes with rbd export and noticing
very poor performance. It takes almost 3 hours to export 100GB volume. Servers
are pretty idle during the export.
The cluster itself performs much faster than this. How can I increase the
speed of rbd export?
Th
Thanks!
I thought it was something serious.
Andrei
- Original Message -
From: "Gregory Farnum"
To: "Andrei Mikhailovsky"
Cc: "ceph-users"
Sent: Tuesday, 26 August, 2014 9:00:06 PM
Subject: Re: [ceph-users] Two osds are spaming dmesg every 900 seconds
Hello guys,
I was wondering if someone could point me towards a step-by-step guide on
setting up a cache pool. I've seen
http://ceph.com/docs/firefly/dev/cache-pool/. However, it makes no mention of
the first steps that one needs to take.
For instance, I've got my ssd di
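For context, the rough sequence as I understand it so far (pool names and
thresholds are placeholders, and the cache pool is assumed to sit on the ssds
via its own crush rule):
ceph osd pool create cache-pool 512 512
ceph osd tier add rbd cache-pool
ceph osd tier cache-mode cache-pool writeback
ceph osd tier set-overlay rbd cache-pool
ceph osd pool set cache-pool hit_set_type bloom
ceph osd pool set cache-pool target_max_bytes 400000000000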
at deal of improvements introduced to accommodate the high IO of the
SSDs. Does that apply to the improvements of the cache tier as well?
Cheers
Andrei
- Original Message -
From: "Vladislav Gorbunov"
To: "Andrei Mikhailovsky"
Cc: ceph-users@lists.ceph.com
Sent: Thursday
Hello guys,
I was wondering if there is a benefit to using a journal-less btrfs file system
on the cache pool osds? Would it speed up writes to the cache tier? Are
btrfs and ceph getting close to production level?
Cheers
Andrei
Hello guys,
Was wondering if it is a good idea to enable TRIM (mount option discard) on the
ssd disks which are used for either the cache pool or osd journals?
For performance, is it better to enable it or to run fstrim from cron every once
in a while?
Thanks
Andrei
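If the cron route is the way to go, I'm thinking of something as simple as this
(osd mount points are examples; journals on raw partitions obviously wouldn't
need it):
#!/bin/sh
# /etc/cron.weekly/fstrim - trim the ssd-backed osd filesystems once a week
fstrim -v /var/lib/ceph/osd/ceph-0
fstrim -v /var/lib/ceph/osd/ceph-1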
Keith,
You should consider doing regular rbd volume snapshots and keeping them for N
hours/days/months, depending on your needs.
Cheers
Andrei
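A minimal sketch of what I mean, assuming an image called vm-disk in the rbd
pool and date-based snapshot names:
rbd snap create rbd/vm-disk@$(date +%Y%m%d)    # take today's snapshot
rbd snap ls rbd/vm-disk                        # see what is currently kept
rbd snap rm rbd/vm-disk@20140801               # expire an old one once it falls outside the retention window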
- Original Message -
From: "Keith Phua"
To: ceph-users@lists.ceph.com
Cc: y...@nus.edu.sg, cheechi...@nus.edu.sg, eng...@nus.edu.
;Ilya Dryomov"
To: "Keith Phua"
Cc: "Andrei Mikhailovsky" , ceph-users@lists.ceph.com,
y...@nus.edu.sg, cheechi...@nus.edu.sg, eng...@nus.edu.sg
Sent: Wednesday, 10 September, 2014 11:51:04 AM
Subject: Re: [ceph-users] Best practices on Filesystem recovery on RBD b
Hello guys,
I am experimenting with a cache pool and running some tests to see how adding
the cache pool improves the overall performance of our small cluster.
While testing I've noticed that the cache pool seems to be writing
too much to the cache pool ssds. Not sure what the issue
might explain the behaviour that I am experiencing?
Cheers
Andrei
- Original Message -
From: "Xiaoxi Chen"
To: "Andrei Mikhailovsky" , "ceph-users"
Sent: Thursday, 11 September, 2014 2:00:31 AM
Subject: RE: Cache Pool writing too much on ssds, poor p
Irek,
Have you changed the ceph.conf file to adjust the recovery priority?
Options like these might help with prioritising repair/rebuild IO against the
client IO:
osd_recovery_max_chunk = 8388608
osd_recovery_op_priority = 2
osd_max_backfills = 1
osd_recovery_max_active = 1
osd_recovery_th
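These can also be pushed to a running cluster without restarting the osds, e.g.:
ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 2'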
to be careful about.
For more info, see the following thread:
http://www.spinics.net/lists/ceph-devel/msg20189.html
Mark
On 09/10/2014 07:51 AM, Andrei Mikhailovsky wrote:
> Hello guys,
>
> I am experimenting with a cache pool and running some tests to see how
> adding the cache pool improves th
Hello guys,
I've been trying to map an rbd disk to run some testing and I've noticed that
while I can successfully read from the rbd image mapped to /dev/rbdX, I am
failing to reliably write to it. Sometimes write tests work perfectly well,
especially if I am using large block sizes. But often
someone help me with debugging the issue and getting to the root cause?
Thanks
Andrei
- Original Message -
From: "Andrei Mikhailovsky"
To: ceph-users@lists.ceph.com
Sent: Sunday, 14 September, 2014 12:04:15 AM
Subject: [ceph-users] writing to rbd mapped device produces
ge1-ib kernel: [ 1200.472523] [] ?
flush_kthread_worker+0xb0/0xb0
Cheers
- Original Message -
From: "Andrei Mikhailovsky"
To: ceph-users@lists.ceph.com
Sent: Sunday, 14 September, 2014 11:34:07 AM
Subject: Re: [ceph-users] writing to rbd mapped device produces hang tasks
Hi
To answer my own question, I think I am hitting bug 8818 -
http://tracker.ceph.com/issues/8818 . The solution seems to be to upgrade to
the latest 3.17 kernel branch.
Cheers
- Original Message -
From: "Andrei Mikhailovsky"
To: ceph-users@lists.ceph.com
Sent: Sunday, 14
Hello guys,
Was wondering if anyone uses or has done some testing with bcache or
enhanceio caching in front of ceph osds?
I've got a small cluster of 2 osd servers, 16 osds in total and 4 ssds for
journals. I've recently purchased four additional ssds to be used for ceph
cache pool, but i'
Hi
Does anyone know how to check the basic cache pool stats, i.e. information on
how well the cache layer is working over a recent or historic time frame?
Things like the cache hit ratio would be very helpful as well.
Thanks
Andrei
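What I've been doing in the meantime, in the absence of a dedicated hit-ratio
counter that I know of, is comparing per-pool traffic on the cache pool against
the backing pool over an interval, e.g.:
ceph df detail            # per-pool read/write totals, including the cache pool
rados df -p cache-pool    # object and IO counts for the cache pool itself (pool name is an example)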
- Original Message -
> From: "Mark Nelson"
> To: ceph-users@lists.ceph.com
> Sent: Monday, 15 September, 2014 1:13:01 AM
> Subject: Re: [ceph-users] Bcache / Enhanceio with osds
> On 09/14/2014 05:11 PM, Andrei Mikhailovsky wrote:
> > Hello guys,
> >
Hello guys,
I was wondering if there have been any updates on getting XenServer ready for
ceph? I've seen a howto that was written well over a year ago (I think) for a
PoC integration of XenServer and Ceph. However, I've not seen any developments
lately. It would be cool to see other hypervisors
fronted by one SSD). We still have yet to test if adding a
> > bcache
> > layer in addition to the SSD journals provides any additional
> > improvements.
> >
> > Robert LeBlanc
> >
> > On Sun, Sep 14, 2014 at 6:13 PM, Mark Nelson
> > > <mail
Luis,
You may want to take a look at the rbd export/import and export-diff/import-diff
functionality. This could be used to copy data to another cluster or offsite.
S3 has regions, which you could use for async replication.
Not sure how cephfs works for backups.
Andrei
- Original Messa
Hi cephers,
I've got three questions:
1. Does anyone have an estimate of the release date of the next stable ceph
branch?
2. Will the new stable release have improvements in the following areas: a)
working with ssd disks; b) cache tier
3. Will the new stable release introduce support f
- Original Message -
> I'm not sure what you mean about improvements for SSD disks, but the
> OSD should be generally a bit faster. There are several cache tier
> improvements included that should improve performance on most
> workloads that others can speak about in more detail than I.
W
Not sure what exact Samsung models you have, but I've got the 840 Pro and it
sucks big time. It is slow and unreliable and grinds to a standstill over a
period of time due to the trimming issue, even after I've left something like
50% of the disk unreserved.
Unlike the Intel disks (even the consumer
Yeah, guys, thanks! I got it a few days ago and have done a few chapters already.
Well done!
Andrei
- Original Message -
> From: "Wido den Hollander"
> To: ceph-users@lists.ceph.com
> Sent: Friday, 13 February, 2015 5:38:47 PM
> Subject: Re: [ceph-users] Introducing "Learning Ceph" :
Mark, many thanks for your effort and ceph performance tests. This puts things
in perspective.
Looking at the results, I was a bit concerned that the IOPS performance in
neither release comes even marginally close to the capabilities of the
underlying ssd device. Even the fastest PCI ssds have
Martin,
I have been using Samsung 840 Pro for journals for about 2 years now and have
just replaced all my Samsung drives with Intel. We have found a lot of
performance issues with the 840 Pro (we are using the 128GB). In particular, a very strange
behaviour with using 4 partitions (with 50% underprovisio
I would not use a single ssd for 5 osds. I would recommend 3-4 osds max per
ssd or you will get a bottleneck on the ssd side.
I've had a reasonable experience with Intel 520 ssds (which are not produced
anymore). I've found the Samsung 840 Pro to be horrible!
Otherwise, it seems that everyo
e -
> From: "Tony Harris"
> To: "Andrei Mikhailovsky"
> Cc: ceph-users@lists.ceph.com, "Christian Balzer"
> Sent: Sunday, 1 March, 2015 8:49:56 PM
> Subject: Re: [ceph-users] SSD selection
> Ok, any size suggestion? Can I get a 120 and be ok
In long-term use I also had some issues with flashcache and enhanceio. I've
noticed frequent slow requests.
Andrei
- Original Message -
> From: "Robert LeBlanc"
> To: "Nick Fisk"
> Cc: ceph-users@lists.ceph.com
> Sent: Friday, 20 March, 2015 8:14:16 PM
> Subject: Re: [ceph-users]
Hi,
Am I the only person noticing disappointing results from the preliminary RDMA
testing, or am I reading the numbers wrong?
Yes, it's true that on a very small cluster you do see a great improvement in
rdma, but in real life rdma is used in large infrastructure projects, not on a
few serve
Somnath,
Sounds very promising! I can't wait to try it on my cluster as I am currently
using IPoIB instead of the native rdma.
Cheers
Andrei
- Original Message -
> From: "Somnath Roy"
> To: "Andrei Mikhailovsky" , "Andrey Korolyov"
Mike, yeah, I wouldn't switch to rdma until it is fully supported in a stable
release )))
Andrei
- Original Message -
> From: "Andrei Mikhailovsky"
> To: "Somnath Roy"
> Cc: ceph-users@lists.ceph.com, "ceph-devel"
>
> Sent: Wednes
d see if it makes a
difference.
Thanks for your feedback
Andrei
- Original Message -
> From: "LOPEZ Jean-Charles"
> To: "Andrei Mikhailovsky"
> Cc: "LOPEZ Jean-Charles" ,
> ceph-users@lists.ceph.com
> Sent: Saturday, 11 April, 2015 7:5
otherwise, I
will need to revert back to the default settings, as the cluster as it
currently stands is not functional.
Andrei
- Original Message -
> From: "LOPEZ Jean-Charles"
> To: "Andrei Mikhailovsky"
> Cc: "LOPEZ Jean-Charles" ,
> ceph-user
o not want to have more than 1 or 2
scrub/deep-scrubs running at the same time on my cluster. How do I implement
this?
Thanks
Andrei
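The only knob I'm aware of is per-osd rather than cluster-wide, so it caps
concurrent scrubs per osd, not in total:
osd max scrubs = 1
or at runtime:
ceph tell osd.* injectargs '--osd-max-scrubs 1'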
- Original Message -
> From: "Andrei Mikhailovsky"
> To: "LOPEZ Jean-Charles"
> Cc: ceph-users@lists.ceph.com
> Se
on a cluster basis rather than
on an osd basis.
Andrei
- Original Message -
> From: "Jean-Charles Lopez"
> To: "Andrei Mikhailovsky"
> Cc: ceph-users@lists.ceph.com
> Sent: Sunday, 12 April, 2015 5:17:10 PM
> Subject: Re: [ceph-users] deep scr
Hi
I have been testing the Samsung 840 Pro (128GB) for quite some time and I can
also confirm that this drive is unsuitable for osd journals. The performance and
latency that I get from these drives (according to ceph osd perf) are between
10 and 15 times worse than the Intel 520. The Inte
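For anyone wanting to reproduce the comparison, the figures I'm quoting come
from:
ceph osd perf    # fs_commit_latency(ms) and fs_apply_latency(ms) per osd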
Anthony,
I doubt the manufacturer reported 315MB/s for a 4K block size. Most likely
they've used 1M or 4M as the block size to achieve the 300MB/s+ speeds.
Andrei
- Original Message -
> From: "Alexandre DERUMIER"
> To: "Anthony Levesque"
> Cc: "ceph-users"
> Sent: Saturday, 25 April,
Piotr,
You may also investigate whether a cache tier made of a couple of ssds could help
you. Not sure how the data is used in your company, but if you have a bunch of
hot data that moves around from one vm to another it might greatly speed up the
rsync. On the other hand, if a lot of rsync data i
Hi guys,
I also use a combination of Intel 520 and 530 for my journals and have noticed
that the latency and speed of the 520s are better than those of the 530s.
Could someone please confirm that doing the following at startup will stop the
dsync on the relevant drives?
# echo temporary write through > /s
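For completeness, the sysfs attribute I mean is, as far as I understand it, the
per-device scsi_disk cache_type knob, i.e. something like:
echo "temporary write through" > /sys/class/scsi_disk/0:0:0:0/cache_type    # the 0:0:0:0 address is just an example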
19 June, 2015 3:59:55 PM
> Subject: Re: [ceph-users] rbd performance issue - can't find bottleneck
>
>
>
> On 06/19/2015 09:54 AM, Andrei Mikhailovsky wrote:
> > Hi guys,
> >
> > I also use a combination of intel 520 and 530 for my journals and have
> >
sense to get a small battery
protected raid card in front of the 520s and 530s to protect against these
types of scenarios?
Cheers
- Original Message -
> From: "Mark Nelson"
> To: "Andrei Mikhailovsky"
> Cc: ceph-users@lists.ceph.com
> Sent: Friday, 19 Jun
Hi,
I seem to be missing the latest Hammer release 0.94.2 in the repo for Ubuntu
precise. I can see the packages for trusty, but precise still shows 0.94.1. Is
this an omission, or did you stop supporting precise? Or has perhaps something
odd happened with my precise servers?
Cheers
Andrei
Thanks Mate, I was under the same impression.
Could someone at Inktank please help us with this problem? Is this intentional
or has it simply been an error?
Thanks
Andrei
--
Andrei Mikhailovsky
Director
Arhont Information Security
Web: http://www.arhont.com
http://www.wi-foo.com
Hi Nick,
I've played with Flashcache and EnhanceIO, but I decided not to use them in
production in the end. The reason was that both increased the number of
slow requests that I had on the cluster, and I also noticed a somewhat
higher level of iowait on the vms. At that time, I d
I can confirm that I am having similar issues with ubuntu vm guests using fio
with bs=4k direct=1 numjobs=4 iodepth=16. Occasionally I see hang tasks,
occasionally the guest vm stops responding without leaving anything in the logs,
and sometimes I see a kernel panic on the console. I typically leave th
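Spelled out, those parameters correspond to an fio run along these lines (the
workload type, size, runtime and target file are just examples around the flags
mentioned):
fio --name=randwrite --rw=randwrite --bs=4k --direct=1 --numjobs=4 --iodepth=16 \
    --ioengine=libaio --size=4G --runtime=300 --filename=/root/fio.test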
Hi
I've also tested 4k performance and found similar results with fio and iozone
tests as well as simple dd. I've noticed that my IO rate doesn't go above 2k-3k
IOPS in the virtual machines. I've got two servers with ssd journals but
spindles for the osds. I've previously tried to use nfs + zfs on th
Hello guys,
I am doing a test ACS setup to see how we can use Ceph for both Primary and
Secondary storage services. I have now successfully added both Primary (cluster
wide) and Secondary storage. However, I've noticed that my SSVM and CPVM are
not being created, so digging in the logs reveale
To answer myself - there was a problem with my api secret key which rados
generated. It had the "/" escaped, which for some reason CloudStack couldn't
understand. Removing the escape (\) character solved the problem.
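If anyone else hits this, an alternative to stripping the backslash by hand is
to keep regenerating the secret until it contains no "/" at all, e.g. (the uid
is a placeholder):
radosgw-admin key create --uid=cloudstack --key-type=s3 --gen-secret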
Andrei
- Original Message -
From: "Andre
Ilya,
Was wondering if you've had a chance to look into the performance issues with
rbd and the patched kernel? I've downloaded 3.16.3 and am running some dd tests,
which were producing hang tasks in the past. I've noticed that I can't get past
20MB/s on the rbd mounted volume. I am sure I was hittin
I also had the hang tasks issues with 3.13.0-35-generic.
- Original Message -
> From: "German Anders"
> To: "Micha Krause"
> Cc: ceph-users@lists.ceph.com
> Sent: Wednesday, 24 September, 2014 4:35:15 PM
> Subject: Re: [ceph-users] Frequent Crashes on rbd to nfs gateway
> Server
> 3.13
Guys,
Have done some testing with 3.16.3-031603-generic downloaded from the Ubuntu
utopic branch. The hang task problem is gone when using large block sizes
(tested with 1M and 4M) and I could no longer reproduce the hang tasks while
doing 100 dd tests in a for loop.
However, I can confirm that
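For reference, the loop itself is nothing more sophisticated than this (device
path is an example; block size and count varied between runs):
for i in $(seq 1 100); do
    dd if=/dev/zero of=/dev/rbd0 bs=4M count=1000 oflag=direct
done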
ughts on why I am getting 250KB/s instead of the expected 100MB/s+ with large
block sizes?
How do I investigate what's causing this crappy performance?
Cheers
Andrei
- Original Message -
> From: "Andrei Mikhailovsky"
> To: "Micha Krause"
> Cc: ceph-us
tasks when doing dd testing? Have you tried 4K
block sizes and running it for some time, like I have done?
Thanks
Andrei
- Original Message -
> From: "Ilya Dryomov"
> To: "Andrei Mikhailovsky"
> Cc: "Micha Krause" , ceph-users@lists.ceph.com
Hello Cephers,
I am having some issues with two osds, which are either flapping or just
crashing without recovering. I've got a log file of 100MB or so for these
osds, generated in a couple of hours, if anyone is interested. I
am running firefly with the latest updates on Ubuntu
Timur,
As far as I know, the latest master has a number of improvements for ssd disks.
If you check the mailing list discussion from a couple of weeks back, you can
see that the latest stable firefly is not that well optimised for ssd drives
and IO is limited. However changes are being made to
Greg, are they going to be a part of the next stable release?
Cheers
- Original Message -
> From: "Gregory Farnum"
> To: "Andrei Mikhailovsky"
> Cc: "Timur Nurlygayanov" , "ceph-users"
>
> Sent: Wednesday, 1 October, 2014 3:
Hello Cephers,
I am a bit lost on the best ways of using ssds and hdds for a ceph cluster which
uses rbd + kvm for guest vms.
At the moment I've got 2 osd servers which currently have 8 hdd osds (max 16
bays) each and 4 ssd disks. Currently, I am using 2 ssds for osd journals and
I've got 2x512
From: "Christian Balzer"
> To: ceph-users@lists.ceph.com
> Sent: Friday, 3 October, 2014 2:06:48 AM
> Subject: Re: [ceph-users] ceph, ssds, hdds, journals and caching
> On Thu, 2 Oct 2014 21:54:54 +0100 (BST) Andrei Mikhailovsky wrote:
> > Hello Cephers,
> &g
That is what I am afraid of!
- Original Message -
> From: "Vladislav Gorbunov"
> To: "Andrei Mikhailovsky"
> Cc: "Christian Balzer" , ceph-users@lists.ceph.com
> Sent: Friday, 3 October, 2014 12:04:37 PM
> Subject: Re: [ceph-users] ce
> While I doubt you're hitting any particular bottlenecks on your
> storage
> servers I don't think Zabbix (very limited experience with it so I
> might
> be wrong) monitors everything, nor does it so at sufficiently high
> freqency to show what is going on during a peak or fio test from a
> client
> Read the above link again, carefully. ^o^
> In in it I state that:
> a) despite reading such in old posts, setting read_ahead on the OSD
> nodes
> has no or even negative effects. Inside the VM, it is very helpful:
> b) the read speed increased about 10 times, from 35MB/s to 380MB/s
Christian,
Tuan,
I had a similar behaviour when I connected the cache pool tier. I resolved
the issues by restarting all my osds. If your case is the same, try it and see
if it works. If not, I guess the guys here and on the ceph irc might be able to
help you.
Cheers
Andrei
- Original Message
Hello cephers,
I've been testing flashcache and enhanceio block device caching for the osds
and I've noticed I have started getting slow requests. The caching type
that I use is read-only, so all writes bypass the caching ssds and go directly
to the osds, just like it used to be before i