By the way,
in the link that John sent, I believe there is a typo.
At the very beginning of the "Open Required Ports" section, the port
range says 6800:7810, whereas below it is
mentioned as 6800:7100.
I think that the former is a typo, based on previous documentation where
the ports were declared t
Thanks for spotting the inconsistency.
The 7100 number is out of date; the upper port bound has been 7300 for
some time. The 7810 number does indeed look like a simple typo.
The 6810 number is an example rather than the upper bound -- the body
of the text explains that it is up to the administra
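As a quick way to double-check which ports in the documented 6800:7300 range actually have listeners on an OSD node before adjusting firewall rules, a minimal sketch (probing localhost is an assumption; adjust the host as needed):

    # Probe TCP ports 6800-7300 on this host and report which ones answer.
    import socket

    def listening(port, host="127.0.0.1"):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.2)
            return s.connect_ex((host, port)) == 0  # 0 means a listener accepted

    print("ports with listeners:", [p for p in range(6800, 7301) if listening(p)])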
Amazing piece of work, Karan; this is something that has been missing for a
long time. Thanks for filling the gap.
I got my book today and have just finished reading a couple of pages; an
excellent introduction to Ceph.
Thanks again, it's well worth purchasing this book.
Best Regards
Vicky
On Fri, Feb 6, 2015 at
Hi List,
First tutorial on mapping/unmapping RBD devices in an OpenSVC service:
http://www.flox-arts.net/article29/monter-un-disque-ceph-dans-service-opensvc-step-1
Sorry, it's in French.
Next step: Christophe Varoqui has just integrated Ceph into the core OpenSVC code
with snapshot & clone management; I will
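For readers who don't want to work through the French article, the core of such a service boils down to mapping the RBD image on start and unmapping it on stop; a rough sketch (not the actual OpenSVC resource code; the pool and image names are placeholders):

    # Map an RBD image on service start, unmap it on service stop.
    import subprocess

    POOL, IMAGE = "rbd", "svc1-disk"   # placeholder names

    def start():
        # "rbd map" prints the block device it created, e.g. /dev/rbd0
        dev = subprocess.check_output(["rbd", "map", "%s/%s" % (POOL, IMAGE)])
        return dev.decode().strip()

    def stop(dev):
        subprocess.check_call(["rbd", "unmap", dev])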
Hello Ceph team,
Can anyone provide details or confirm the following?
Is "ct_target_max_mem_mb" the cache tier target pool's maximum memory in MB,
i.e. the maximum amount of memory the cache pool can use?
Additional details would be appreciated.
Regards,
_benaquino
Hi
I have installed a 6-node Ceph cluster and am running a performance benchmark
against it using Nova VMs. What I have observed is that FIO random write reports
around 250 MB/s for a 1M block size with 4096 PGs, and *650 MB/s for a 1M block
size with 2048 PGs*. Can somebody let me know if I am missing a
On Sun, Feb 8, 2015 at 6:00 PM, Sumit Gaur wrote:
> Hi
> I have installed a 6-node Ceph cluster and am running a performance benchmark
> against it using Nova VMs. What I have observed is that FIO random write reports
> around 250 MB/s for a 1M block size with 4096 PGs, and 650 MB/s for a 1M block size
> and PG
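As a sanity check on the PG numbers being compared, the usual rule of thumb is roughly 100 PGs per OSD divided by the replica count, rounded up to a power of two; a small sketch (the OSD and replica counts are assumptions for a 6-node cluster, not figures from the post):

    # Rule-of-thumb PG sizing; 36 OSDs (6 nodes x 6 OSDs) and 3 replicas
    # are illustrative assumptions.
    osds = 36
    replicas = 3
    target = osds * 100 // replicas              # ~1200 PGs across all pools
    pg_num = 1 << (target - 1).bit_length()      # round up to a power of two
    print(pg_num)                                # -> 2048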
Dear cephers,
My cluster (0.87) hit an odd incident.
It occurred when I marked the default crush rule "replicated_ruleset"
and set a new rule called "new_rule1".
The content of "new_rule1" is just like "replicated_ruleset"; the only
difference is the ruleset number.
After applying the new map to crush th
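For anyone wanting to reproduce a setup like this, one way to build a rule equivalent to the default replicated rule except for its ruleset id, and point a pool at it, is sketched below (the pool name "rbd" and ruleset id 1 are assumptions):

    import subprocess

    # Create a simple replicated rule equivalent to the default one.
    subprocess.check_call(["ceph", "osd", "crush", "rule", "create-simple",
                           "new_rule1", "default", "host"])
    # Point an existing pool at the new ruleset (id 1 assumed).
    subprocess.check_call(["ceph", "osd", "pool", "set", "rbd",
                           "crush_ruleset", "1"])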
Does anyone have a good recommendation for per-OSD memory for EC? My EC
test blew up in my face when my OSDs suddenly spiked to 10+ GB per OSD
process as soon as any reconstruction was needed, which (of course) caused
OSDs to OOM, which meant more reconstruction, which fairly immediately led
to a
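How expensive reconstruction gets is partly driven by the erasure-code profile, since decoding an object has to pull in k of its k+m shards; for reference, a minimal sketch of listing and creating a profile (the profile name and k/m values are illustrative, not a recommendation):

    import subprocess

    # List existing erasure-code profiles, then create an example one.
    print(subprocess.check_output(
        ["ceph", "osd", "erasure-code-profile", "ls"]).decode())
    subprocess.check_call(["ceph", "osd", "erasure-code-profile", "set",
                           "myprofile", "k=4", "m=2"])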
Hello!
*** Shameless plug: Sage, I'm working with Dirk Grunwald on this cluster; I
believe some of the members of your thesis committee were students of his =)
We have a modest cluster at CU Boulder and are frequently plagued by "requests
are blocked" issues. I'd greatly appreciate any insight or
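When the warning appears, the first things worth capturing are which OSDs the blocked requests sit on and what those ops are waiting for; a minimal sketch (osd.3 is a placeholder, and the admin socket command has to run on the node hosting that OSD):

    import subprocess

    # Health detail names the OSDs with blocked requests.
    print(subprocess.check_output(["ceph", "health", "detail"]).decode())
    # The admin socket shows the in-flight ops on a specific OSD.
    print(subprocess.check_output(
        ["ceph", "daemon", "osd.3", "dump_ops_in_flight"]).decode())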
Hi,
I'm currently using the crush tunables "optimal" value.
If I upgrade from firefly to hammer, will the optimal value be upgraded to
the optimal values for hammer?
If so, do my clients (qemu-librbd) also need to be upgraded to hammer to support
the new hammer features?
If yes,
I am thinking of:
- change c
On Mon, 9 Feb 2015, Alexandre DERUMIER wrote:
> Hi,
>
> I'm currently using the crush tunables "optimal" value.
>
> If I upgrade from firefly to hammer, will the optimal value be upgraded
> to the optimal values for hammer?
The tunables won't change on upgrade, and optimal on firefly != optimal on
hamm
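To see which tunables profile a cluster is actually running before and after the upgrade, the sketch below just wraps the standard show-tunables command (the Python wrapper is only for illustration):

    import subprocess

    # Dumps the CRUSH tunables currently in effect, including the profile name;
    # running this before and after the upgrade confirms nothing changed.
    print(subprocess.check_output(
        ["ceph", "osd", "crush", "show-tunables"]).decode())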
Ah OK, great!
I was just a bit worried about the upgrade.
Thanks for your response, Sage!
----- Original Message -----
From: "Sage Weil"
To: "aderumier"
Cc: "ceph-users"
Sent: Monday, 9 February 2015 07:11:46
Subject: Re: [ceph-users] crush tunables : optimal : upgrade from firefly to
hammer behaviour ?
Hi Stefan,
>> Just FYI, I tested the current hammer git master and got crashes / re-peering PGs
>> after just 5 hours of benchmarking. So be careful.
Yes, sure, I won't upgrade until Hammer is released (and maybe wait for the
first point release, to be sure).
Thanks for your reply.
----- Original Message -----