PM, Sumit Gaur wrote:
> > Hello,
> > Can anybody let me know if the Ceph team is working on porting librbd to
> > OpenSolaris, as it did for librados?
>
> Nope, this isn't on anybody's roadmap.
> -Greg
>
Hello,
Can anybody let me know if the Ceph team is working on porting librbd to
OpenSolaris, as it did for librados?
Thanks
sumit
Hi Experts,
I need quick advice on deploying a Ceph cluster on AWS EC2 VMs.
1) I have two separate AWS accounts; I am trying to create the Ceph cluster
under one account, create the ceph-client under the other account, and
connect them:
(EC2 Account A and VMs + ceph client) public IP ---> (EC2 Account B + c
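For what it's worth, a minimal sketch of what the client side usually needs;
the IPs, placeholders in angle brackets and the layout below are illustrative,
not taken from the setup above:

  # /etc/ceph/ceph.conf on the client VM, pointing at monitor addresses the
  # client can actually reach (elastic IPs, VPC peering, or a VPN)
  [global]
  fsid = <cluster fsid>
  mon host = <mon1-public-ip>,<mon2-public-ip>,<mon3-public-ip>
  auth cluster required = cephx
  auth service required = cephx
  auth client required = cephx

  # copy a keyring (client.admin or a dedicated client key) from the cluster
  # to /etc/ceph/ on the client, then sanity-check the connection:
  ceph -s

Two EC2-specific gotchas: the security groups have to allow TCP 6789 (monitors)
and 6800-7300 (OSDs) from the client, and the client must be able to reach the
OSDs at the addresses the cluster advertises, which across two accounts usually
means VPC peering or public/elastic IPs on every node, not just the monitors.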
Hi,
I have a basic architecture-related question. I know Calamari collects
system usage data (via the Diamond collector) using performance counters. I
need to know whether all the performance data that Calamari shows stays in
memory, or whether it uses files to store it.
Thanks
sumit
Hi Irek,
I am using v0.80.5 Firefly
<http://ceph.com/docs/master/release-notes/#v0-80-5-firefly>
-sumit
On Fri, Feb 13, 2015 at 1:30 PM, Irek Fasikhov wrote:
> Hi.
> What version?
>
> 2015-02-13 6:04 GMT+03:00 Sumit Gaur :
>
>> Hi Chris,
>> Please find my answer
mmon), 5x ~130
> megabytes/second gets very close to most SATA bus limits. If it's a shared
> bus, you may hit that limit even earlier (since all that data is now
> being written out twice over the bus).
>
> cheers;
> \Chris
>
>
> --
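To make Chris's arithmetic explicit (rough numbers, not measurements, assuming
the 5-journals-per-SSD layout and typical SATA link rates):

  5 journals x ~130 MB/s       ≈ 650 MB/s landing on the single journal SSD
  journal write + data write   ≈ 2x that traffic if HDDs and SSD share a bus/expander
  SATA 3 Gb/s                  ≈ ~300 MB/s usable; SATA 6 Gb/s ≈ ~550-600 MB/s usable

So the shared SSD, or a shared SATA/SAS path, can saturate before the five
spinning disks themselves do.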
Hi Ceph-Experts,
I have a small Ceph architecture-related question.
Blogs and documentation suggest that Ceph performs much better if the
journal is on an SSD.
I have built the cluster with 30 HDDs + 6 SSDs across 6 OSD nodes: 5 HDDs + 1
SSD per node, with each SSD carrying 5 partitions to journal the 5 OSDs. I am
confused here by this behaviour.
Thanks
sumit
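For reference, that layout is usually created roughly like this with the
ceph-deploy of the Firefly era; the host and device names here are made up:

  # one journal partition per OSD on the shared SSD (/dev/sdf in this sketch)
  ceph-deploy osd create osd-node1:/dev/sdb:/dev/sdf1
  ceph-deploy osd create osd-node1:/dev/sdc:/dev/sdf2
  ceph-deploy osd create osd-node1:/dev/sdd:/dev/sdf3
  ceph-deploy osd create osd-node1:/dev/sde:/dev/sdf4
  ceph-deploy osd create osd-node1:/dev/sdg:/dev/sdf5

  # and a journal size in ceph.conf that matches the SSD partitions, e.g.
  [osd]
  osd journal size = 10240    # MB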
On Mon, Feb 9, 2015 at 11:36 AM, Gregory Farnum wrote:
> On Sun, Feb 8, 2015 at 6:00 PM, Sumit Gaur wrote:
> > Hi
> > I have installed 6 node ceph cluster and doing a performance bench mark
> for
> > the same using Nova V
Hi
I have installed a 6-node ceph cluster and am doing a performance benchmark for
the same using Nova VMs. What I have observed is that FIO random write reports
around 250 MBps for a 1M block size with 4096 PGs, and *650 MBps for a 1M block
size with 2048 PGs*. Can somebody let me know if I am missing a
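As an aside, roughly how such a run tends to be set up, in case someone wants
to reproduce the numbers; the pool name, device path and sizes below are
placeholders:

  # pool with an explicit placement-group count (pg_num and pgp_num)
  ceph osd pool create bench 4096 4096

  # inside the Nova VM, against its RBD-backed disk
  fio --name=randwrite-1m --filename=/dev/vdb --direct=1 --ioengine=libaio \
      --rw=randwrite --bs=1M --iodepth=32 --runtime=60 --time_based \
      --group_reporting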
n somebody explain
> this behaviour ?
>
> This is because rbd_cache merges/coalesces IOs into bigger IOs, so it
> only helps with a sequential workload.
>
> You'll send fewer but bigger IOs to Ceph, so less CPU, ...
>
>
> ----- Original Message -----
> From: "Su
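A quick way to see that coalescing effect from inside a VM with rbd cache
enabled; the job names and device path are made up:

  # small sequential writes: librbd's cache (in QEMU, below the guest) can
  # merge adjacent 4k writes into bigger requests before they hit the OSDs
  fio --name=seq4k --filename=/dev/vdb --direct=1 --ioengine=libaio \
      --rw=write --bs=4k --iodepth=32 --runtime=60 --time_based
  # small random writes: neighbouring blocks are rarely adjacent, so there is
  # little to merge and per-IO overhead dominates
  fio --name=rand4k --filename=/dev/vdb --direct=1 --ioengine=libaio \
      --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based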
Hope this is helpful.
>
> Thanks & Regards
> Somnath
>
> *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf
> Of *Sumit Gaur
> *Sent:* Sunday, February 01, 2015 6:55 PM
> *To:* Florent MONTHEL
> *Cc:* ceph-users@lists.c
Hi All,
What I saw after enabling the RBD cache is that it works as expected, meaning
sequential writes get better MBps than random writes. Can somebody explain
this behaviour? Is the RBD cache setting a must for a Ceph cluster to behave
normally?
Thanks
sumit
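For reference, the client-side knobs involved look roughly like this; the
values are the usual defaults of that era quoted from memory, and the
nova.conf line assumes the libvirt driver, so treat this as a sketch:

  [client]
  rbd cache = true
  rbd cache size = 33554432                   # 32 MB
  rbd cache max dirty = 25165824              # 24 MB; 0 means write-through
  rbd cache writethrough until flush = true   # stay safe until the guest flushes

  # for Nova-attached disks the QEMU cache mode must allow writeback, e.g.
  # in nova.conf:
  [libvirt]
  disk_cachemodes = "network=writeback"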
On Mon, Feb 2, 2015 at 9:59 AM, Sumit Gaur wrote:
> Sent from my iPad
>
> > On 1 Feb 2015, at 15:50, Sumit Gaur wrote:
> >
> > Hi
> > I have installed a 6-node ceph cluster, and to my surprise, when I ran rados
> bench I saw that random writes show higher performance numbers than sequential
> writes. This is the opposite of
Hi
I have installed a 6-node ceph cluster, and to my surprise, when I ran rados
bench I saw that random writes show higher performance numbers than sequential
writes. This is the opposite of normal disk behaviour. Can somebody let me know
if I am missing any Ceph architecture point here?
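For comparison, the kind of invocation usually behind such numbers; the pool
name, sizes and durations are placeholders (--no-cleanup keeps the objects so
the read phase has something to read):

  # 4 MB object writes for 60 seconds with 16 concurrent ops
  rados bench -p rbd 60 write -b 4194304 -t 16 --no-cleanup
  # sequential reads of the objects written above
  rados bench -p rbd 60 seq -t 16

As far as I recall, rados bench's write phase always creates fresh objects and
its seq/rand modes are read benchmarks, so a true sequential-vs-random *write*
comparison usually comes from fio against an RBD image rather than from rados
bench itself.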