Hi, Mario!
Can you give more information about your cluster? Number of nodes,
OSDs per node, HDD models etc?
In general, you can use SSDs as OSD journals
(http://irq0.org/articles/ceph/journal), but it will give you a
performance boost on relatively small, bursty workloads. If you'll just
mix H
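As a rough illustration only (device names and sizes below are made up, not taken from this thread), an SSD journal is usually wired in when the OSD is created, or via ceph.conf for hand-built OSDs:

# journal on a separate SSD partition at OSD creation time
ceph-deploy osd prepare node1:/dev/sdb:/dev/sdc1
# or, for a manually created OSD, in ceph.conf:
# [osd.0]
# osd journal = /dev/disk/by-partlabel/osd0-journal
# osd journal size = 10240    ; in MB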
Hi all,
we have a ceph cluster with 7 OSD nodes (Debian Jessie, because of the
patched tcmalloc, with ceph 0.94) which we are expanding with
one further node.
For this node we use puppet with Debian 7.8, because ceph 0.92.2 doesn't
install on Jessie (upgrading to 0.94.1 worked on the
other nodes, but 0.94.2 looks not
On Monday, July 13, 2015 at 11:31 +0100, Gregory Farnum wrote:
> On Mon, Jul 13, 2015 at 11:25 AM, Kostis Fardelas <
> dante1...@gmail.com> wrote:
> > Hello,
> > it seems that new packages for firefly have been uploaded to repo.
> > However, I can't find any details in Ceph Release notes. There i
On Tue, 21 Jul 2015, Olivier Bonvalet wrote:
> On Monday, July 13, 2015 at 11:31 +0100, Gregory Farnum wrote:
> > On Mon, Jul 13, 2015 at 11:25 AM, Kostis Fardelas <
> > dante1...@gmail.com> wrote:
> > > Hello,
> > > it seems that new packages for firefly have been uploaded to repo.
> > > However
On Tuesday, July 21, 2015 at 07:06 -0700, Sage Weil wrote:
> On Tue, 21 Jul 2015, Olivier Bonvalet wrote:
> > On Monday, July 13, 2015 at 11:31 +0100, Gregory Farnum wrote:
> > > On Mon, Jul 13, 2015 at 11:25 AM, Kostis Fardelas <
> > > dante1...@gmail.com> wrote:
> > > > Hello,
> > > > it seems
Hi Lakshmi,
Is your issue solved? Can you please let me know if you solved this, because I am
also having the same issue.
Thanks & Regards,
Naga Venkata
This is a bugfix release for Firefly.
We recommend that all Firefly users upgrade at their convenience.
Notable Changes
---------------
* rgw: check for timestamp for s3 keystone auth (#10062, Abhishek Lekshmanan)
* mon: PGMonitor: several stats output error fixes (#10257, Joao Eduardo Luis)
* o
Hey cephers,
Just a reminder that the Ceph Tech Talk on CephFS that was scheduled
for last month (and cancelled due to technical difficulties) has been
rescheduled for this month's talk. It will be happening next Thurs at
17:00 UTC (1p EST) on our Blue Jeans conferencing system. If you have
any qu
Hello Cephers,
I am using CephFS, and running some benchmarks using fio.
After increasing the object_size to 33554432, I run some read and write
tests with different block sizes; when I get to a block size of 64m
and beyond, Ceph does not finish the operation (I tried letting it run for
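In case it helps to reproduce, the object size change and the large-block runs described above would typically look something like this (the mount point and fio parameters are assumptions on my side, not taken from the report):

# 32 MiB object size on an empty directory; applies to files created in it
setfattr -n ceph.dir.layout.object_size -v 33554432 /mnt/cephfs/bench
# large-block sequential write test
fio --name=seqwrite --directory=/mnt/cephfs/bench --rw=write --bs=64m --size=4g --ioengine=libaio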
Hello all,
I'm trying to add a new data pool to CephFS, as we need some longer
term archival storage.
ceph mds add_data_pool archive
Error EINVAL: can't use pool 'archive' as it's an erasure-code pool
Here are the steps taken to create the pools for this new datapool:
ceph osd pool create arccac
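For context, the approach generally documented at the time for using an erasure-coded pool behind CephFS was to put a replicated cache tier in front of it before adding the pool; a rough sketch with assumed pool names and PG counts (not the poster's exact commands, which are cut off above):

ceph osd pool create archive 256 256 erasure
ceph osd pool create archive-cache 256 256 replicated
ceph osd tier add archive archive-cache
ceph osd tier cache-mode archive-cache writeback
ceph osd tier set-overlay archive archive-cache
ceph mds add_data_pool archive    # the step the EINVAL above comes from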
(as per IRC) Yep, that's a bug alright. http://tracker.ceph.com/issues/12426
I expect we'll backport to hammer once fixed.
John
On 21/07/15 22:39, Adam Tygart wrote:
Hello all,
I'm trying to add a new data pool to CephFS, as we need some longer
term archival storage.
ceph mds add_data_pool
On 21/07/15 21:54, Hadi Montakhabi wrote:
Hello Cephers,
I am using CephFS, and running some benchmarks using fio.
After increasing the object_size to 33554432, I run some read and write
tests with different block sizes; when I get to a block size of 64m
and beyond, Ceph does not f
On Tue, Jul 21, 2015 at 6:09 PM, Patrick McGarry wrote:
> Hey cephers,
>
> Just a reminder that the Ceph Tech Talk on CephFS that was scheduled
> for last month (and cancelled due to technical difficulties) has been
> rescheduled for this month's talk. It will be happening next Thurs at
> 17:00 UT
Hi Johannes,
Thanks for your reply.
I am new to this and have no idea how to make the configuration or where to
start, based on the 4 options mentioned.
I hope you can expound on it further if possible.
Best regards,
Mario
On Tue, Jul 21, 2015 at 2:44 PM, Johannes Formann wrote:
> Hi,
>
> > Can
Hi Mark
I get something like 600 write IOPS on the EC pool and 800 write IOPS on the
replicated (size 3) pool with rados bench.
With radosgw I get 30/40 write IOPS with COSBench (1 radosgw, the same with
2) and the servers are sleeping:
- 0.005 core for the radosgw process
- 0.01 core for the osd process
I don't know
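For reference, write numbers like those above usually come from an invocation along these lines; the pool name, runtime and thread count here are assumptions, not taken from this thread:

# 60-second write benchmark, 16 concurrent ops, keep objects for the read pass
rados bench -p testpool 60 write -t 16 --no-cleanup
# sequential read pass over the objects written above
rados bench -p testpool 60 seq -t 16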
Hello Mario,
in the end your workload defines which option(s) can be considered. They are
different trade-offs between read/write performance and price that depend
on your workload, e.g.:
- distribution of reads/writes
- size of IO requests (4k IO operations or 4MB...)
- „locality“ of the I
Hi Frederic,
When you have a Ceph cluster with 1 node you don't experience the network and
communication overhead of the distributed model.
With 2 nodes and EC 4+1 you will have communication between the 2 nodes, but you
will keep some communication internal (2 chunks on the first node and 3 chunks on the
second node).
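To actually get that 2+3 chunk split across two nodes, the erasure-code profile has to place chunks per OSD rather than per host, since five chunks cannot be spread over only two hosts otherwise; a minimal sketch, with the profile and pool names assumed:

# 4 data + 1 coding chunks, placed per OSD instead of per host
ceph osd erasure-code-profile set ec41 k=4 m=1 ruleset-failure-domain=osd
ceph osd pool create ecpool 256 256 erasure ec41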
Hi cephers,
I am using CephFS on AWS as persistent shared storage.
Last night I upgraded from Firefly to Hammer (v0.94.2), and now I cannot enable the MDS
service.
Here is the log of MDS service:
2015-07-22 02:25:08.284564 7f8417d2c7c0 0 ceph version 0.94.2
(5fb85614ca8f354284c713a2f9c610860720bbf3), proc
Never mind, I know the root cause.
Thanks,
Houwa Cheung
On Wed, Jul 22, 2015 at 12:22 PM, Hou Wa Cheung wrote:
> Hi cephers,
>
> I am using CephFS on AWS as persistent shared storage.
>
> Last night I upgraded from Firefly to Hammer (v0.94.2), and now I cannot enable
> the MDS service.
>
> Here is the log
Hi Cephers.
I'm looking for a way to optimize the scrubbing process. In our
environment this process has a big impact on performance. For monitoring
disks we are using Monitorix. While scrubbing is running, 'Disk I/O activity (R+W)'
shows 20-60 reads+writes per second. After disabling scrub and deep-scru
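For what it's worth, disabling and throttling scrubs is usually done with the cluster flags and a few [osd] options; the values below are only illustrative, not a recommendation from this thread:

# stop new scrubs cluster-wide (re-enable later with 'ceph osd unset ...')
ceph osd set noscrub
ceph osd set nodeep-scrub
# ceph.conf [osd] knobs to limit scrub impact
# osd max scrubs = 1
# osd scrub load threshold = 0.5
# osd scrub sleep = 0.1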