On 08/29/2014 11:22 PM, J David wrote:
So an even number N of monitors doesn't give you any better fault
resilience than N-1 monitors. And the more monitors you have, the
more traffic there is between them. So when N is even, N monitors
consume more resources and provide no extra benefit compared to N-1.
On Fri, Aug 29, 2014 at 12:52 AM, pragya jain wrote:
> #2: why is an odd number of monitors recommended for a production cluster,
> and not an even number?
Because to achieve a quorum, you must always have participation of
more than 50% of the monitors. Not 50%. More than 50%. With an even
number of monitors 2N, a quorum needs N+1 of them, so you can survive
N-1 failures: exactly the same as with 2N-1 monitors.
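The arithmetic behind this can be sketched in a few lines (illustrative
Python, not Ceph code):

```python
# Fault tolerance of a quorum that requires strictly more than 50%
# participation, for monitor counts 1..6.

def majority(n):
    """Smallest number of monitors that is strictly more than half of n."""
    return n // 2 + 1

def tolerable_failures(n):
    """How many monitors can fail while a quorum is still possible."""
    return n - majority(n)

for n in range(1, 7):
    print(n, tolerable_failures(n))
```

Note that 3 and 4 monitors both tolerate exactly one failure, and 5 and 6
both tolerate exactly two, which is the point being made above.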
On Thu, Aug 28, 2014 at 9:52 PM, pragya jain wrote:
> I have some basic question about monitor and paxos relationship:
>
> As the documents say, a Ceph monitor contains the cluster map; if there is any
> change in the state of the cluster, the change is updated in the cluster
> map. Monitors use Paxos to agree on changes to the map
Hello,
----- Original Message -----
> From: "Chad Seys"
> To: ceph-users@lists.ceph.com
> Sent: Friday, August 29, 2014 18:53:19
> Subject: [ceph-users] script for commissioning a node with multiple osds,
> added to cluster as a whole
>
> Hi All,
> Does anyone have a script or sequence of commands
Hmm, so you've got PGs which are out-of-date on disk (by virtue of being an
older snapshot?) but still have records of them being newer in the OSD
journal?
That's a new failure mode for me, and I don't think we have any tools
designed for solving it. If you can *back up the disk* before doing this, I
t
Hi,
You could use ceph-deploy.
There won't be any difference in the total amount of data being moved.
On 29.08.2014 at 18:53, Chad Seys wrote:
Hi All,
Does anyone have a script or sequence of commands to prepare all
drives on a
single computer for use by ceph, and then start up all OSDs on the computer at one time?
Hi All,
Does anyone have a script or sequence of commands to prepare all drives on a
single computer for use by ceph, and then start up all OSDs on the computer at
one time?
I feel this would be faster and cause less network traffic than adding one
drive at a time, which is what the current script does.
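For illustration, a minimal sketch of such a sequence using ceph-deploy.
The host name and device list are hypothetical, and the script only prints
the commands it would run (change the run function to actually execute them);
check your ceph-deploy version for the exact prepare/activate syntax:

```shell
#!/bin/sh
# Hypothetical values: adjust HOST and DEVICES for your hardware.
HOST=node1
DEVICES="sdb sdc sdd"

# Dry run: echo each command instead of executing it.
# Change the body to "$@" to run the commands for real.
run() { echo "$@"; }

for dev in $DEVICES; do
    # Prepare each data disk as an OSD; --zap-disk wipes any old partitions.
    run ceph-deploy osd prepare --zap-disk "$HOST:/dev/$dev"
done

# Then activate all the prepared OSDs on that host at once
# (first data partition of each prepared disk).
for dev in $DEVICES; do
    run ceph-deploy osd activate "$HOST:/dev/${dev}1"
done
```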
Hi Somnath,
we're in the process of evaluating SanDisk SSDs for ceph (fs and journal on each).
8 OSDs / SSDs per host, Xeon E3 1650.
Which one can you recommend?
Greets,
Stefan
Excuse my typo sent from my mobile phone.
> On 29.08.2014 at 18:33, Somnath Roy wrote:
>
> Somnath
Sebastien,
No timeline yet.
Couple of issues here.
1. I don't think FileStore is a solution for SSDs because of its double write:
write amplification (WA) will always be >= 2. So far we can't find a way to
replace the Ceph journal for the filesystem backend.
2. The existing key-value backends like LevelDB/RocksDB also are not
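The double-write point can be made concrete with a little arithmetic
(illustrative only; it ignores filesystem metadata and journal padding,
which only push WA higher):

```python
# FileStore writes every client byte twice per replica:
# once to the journal, once to the data (filestore) partition.

def filestore_write_amp(client_bytes, replication=1):
    """Device bytes written divided by client bytes written."""
    journal_bytes = client_bytes * replication
    data_bytes = client_bytes * replication
    return (journal_bytes + data_bytes) / client_bytes

print(filestore_write_amp(4096))      # single replica: WA = 2.0
print(filestore_write_amp(4096, 3))   # 3x replication: WA = 6.0
```

This is why WA >= 2 is a floor for FileStore regardless of how fast the
SSD is: the journal write cannot be skipped with a filesystem backend.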
I am running active/standby and it didn't swap over to the standby. If I
shut down the active server it swaps to the standby fine, though. When there
were issues, disk access would back up on the webstats servers, and a cat of
/sys/kernel/debug/ceph/*/mdsc would show a list of entries, whereas normally
it is empty.
Results are with HT enabled. We have yet to measure the performance with HT
disabled.
No, I didn't measure I/O time/utilization.
-----Original Message-----
From: Andrey Korolyov [mailto:and...@xdel.ru]
Sent: Friday, August 29, 2014 1:03 AM
To: Somnath Roy
Cc: Haomai Wang; ceph-users@lists.ceph.com
Hi Mark,
Yeah. The application defines portals which are active threaded, then the
transport layer is servicing the portals with EPOLL.
Matt
- "Mark Nelson" wrote:
> Excellent, I've been meaning to check into how the TCP transport is
> going. Are you using a hybrid threadpool/epoll approach?
Excellent, I've been meaning to check into how the TCP transport is
going. Are you using a hybrid threadpool/epoll approach? That I
suspect would be very effective at reducing context switching,
especially compared to what we do now.
Mark
On 08/28/2014 10:40 PM, Matt W. Benjamin wrote:
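For context, the hybrid pattern being discussed can be sketched in a few
lines of Python (illustrative only, not the Ceph messenger code): one event
loop multiplexes socket readiness with epoll, and ready sockets are handed
off to a small thread pool, so idle connections cost no threads.

```python
import selectors
import socket
import threading
from concurrent.futures import ThreadPoolExecutor

# DefaultSelector uses epoll on Linux; the loop thread only waits for
# readiness, while actual reads run on pool threads.
sel = selectors.DefaultSelector()
pool = ThreadPoolExecutor(max_workers=2)
results = []
done = threading.Event()

def handle(conn):
    # Runs on a pool thread, off the event loop.
    results.append(conn.recv(1024))
    done.set()

a, b = socket.socketpair()
sel.register(b, selectors.EVENT_READ)
a.send(b"ping")

for key, _mask in sel.select(timeout=1):
    pool.submit(handle, key.fileobj)

done.wait(timeout=1)
pool.shutdown()
print(results)
```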
Hi,
@Dan: thanks for sharing your config; with all your flags I don't seem to get
more than 3.4K IOPS, and they even seem to slow me down :( This is really weird.
Yes, I already tried to run two simultaneous processes and got only half of the
3.4K for each of them.
@Kasper: thanks for these results, I believe
On 08/29/2014 06:10 AM, Dan Van Der Ster wrote:
Hi Sebastien,
Here’s my recipe for max IOPS on a _testing_ instance with SSDs:
osd op threads = 2
With SSDs, in the past I've seen that increasing the osd op thread count can
help random reads.
osd disk threads = 2
journal max write bytes = 1048576
Hi Sébastien,
On Thu, Aug 28, 2014 at 06:11:37PM +0200, Sebastien Han wrote:
> Hey all,
(...)
> We have been able to reproduce this on 3 distinct platforms with some
> deviations (because of the hardware) but the behaviour is the same.
> Any thoughts will be highly appreciated, only getting 3.2K
Hi Sebastien,
Here’s my recipe for max IOPS on a _testing_ instance with SSDs:
osd op threads = 2
osd disk threads = 2
journal max write bytes = 1048576
journal queue max bytes = 1048576
journal max write entries = 1
journal queue max ops = 5
filestore op threads = 2
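Collected into ceph.conf form, the recipe above would look something like
this (a sketch: the values are copied from the message, but placing them
under an [osd] section is my assumption):

```ini
[osd]
osd op threads = 2
osd disk threads = 2
journal max write bytes = 1048576
journal queue max bytes = 1048576
journal max write entries = 1
journal queue max ops = 5
filestore op threads = 2
```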
Hi all,
From the radosgw-admin command:
# radosgw-admin object rm --object=my_test_file.txt --bucket=test_buck
# radosgw-admin object unlink --object=my_test_file.txt --bucket=test_buck
Both of the above did "delete" the object from the bucket, and both returned
the right result.
What is the difference between them?
Thanks a lot for the answers, even if we drifted from the main subject a little
bit.
Thanks Somnath for sharing this; when can we expect any code that might
improve _write_ performance?
@Mark thanks for trying this :)
Unfortunately using nobarrier and another dedicated SSD for the journal (plus
y
Hi,
We have set up a preliminary Chinese version of Calamari at
https://github.com/xiaoxianxia/calamari-clients-cn. The major work done
is translating the English text on the web interface into Chinese;
we did not change the localization infrastructure. Any help
in this direction is appreciated.
On Fri, Aug 29, 2014 at 4:03 PM, Andrey Korolyov wrote:
> On Fri, Aug 29, 2014 at 10:37 AM, Somnath Roy
> wrote:
> > Thanks Haomai !
> >
> > Here is some of the data from my setup.
On Fri, Aug 29, 2014 at 10:37 AM, Somnath Roy wrote:
> Thanks Haomai !
>
> Here is some of the data from my setup.
guys:
There's a Ceph cluster running, and its nodes are connected with 10Gb cable.
We defined fio's bs=4k, and the object size of the rbd is 4MB.
The client node is connected to the cluster via a 1000Mb cable. Finally, the
IOPS is 27 when we keep fio's latency under 20ms. 4MB*27 = 108MB/s, nearly
the line rate of the 1000Mb link.
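Assuming the point is that the 1Gb client link is the bottleneck, the
arithmetic checks out (illustrative Python, not tied to any Ceph API):

```python
# Compare observed throughput against the client link's line rate.
link_mbit = 1000                    # client link: 1000 Mb/s
link_MBps = link_mbit / 8           # = 125 MB/s theoretical maximum
object_MB = 4                       # rbd object size
iops = 27                           # observed object reads per second
throughput_MBps = object_MB * iops  # = 108 MB/s

# 108 MB/s is about 86% of the 125 MB/s line rate, so after protocol
# overhead the 1Gb client link, not the cluster, is the likely limit.
print(throughput_MBps, link_MBps)
```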