Hi Mark,
Yes, enough PGs, and no errors in the Apache logs.
We identified a bottleneck on the bucket index, with huge IOPS on one OSD (all
the IOPS land on only 1 bucket).
With bucket index sharding configured (32 shards), write IOPS are now 5x better
(after a bucket delete/create). But we don't yet reach Firefly performance.
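(For anyone hitting the same bucket index bottleneck: a minimal ceph.conf
sketch, assuming the Hammer-era option name; the section header is
illustrative, and the setting only applies to buckets created after the
change, hence the delete/create step above.)

    [client.radosgw.gateway]
    rgw override bucket index max shards = 32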
I guess you only need to add "osd objectstore = keyvaluestore" and
"enable experimental unrecoverable data corrupting features =
keyvaluestore".
And you need to know that KeyValueStore is an experimental backend; it's
not recommended to deploy it in a production environment!
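Put together as a ceph.conf fragment, a sketch based on the two options
quoted above would look like:

    [osd]
    osd objectstore = keyvaluestore
    enable experimental unrecoverable data corrupting features = keyvaluestore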
On Thu, Jul 23, 2015 at 7:13 AM, S
Hi,
I'm rather new to Ceph, and I was trying to launch a test cluster on the
Hammer release with KeyValueStore instead of FileStore as the default OSD
backend. I am deploying my cluster using ceph-deploy. Can someone who has
already done this please share the changes they have made for this? I
Ah, I see that --max-backlog must be expressed in bytes/sec,
in spite of what the --help message says.
-- Tom
> -----Original Message-----
> From: Deneau, Tom
> Sent: Wednesday, July 22, 2015 5:09 PM
> To: 'ceph-users@lists.ceph.com'
> Subject: load-gen throughput numbers
>
> If I run rados load
If I run rados load-gen with the following parameters:
--num-objects 50
--max-ops 16
--min-object-size 4M
--max-object-size 4M
--min-op-len 4M
--max-op-len 4M
--percent 100
--target-throughput 2000
So every object is 4M in size and all the ops are reads of the entire 4M.
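(Assembled into a single invocation, the test would look roughly like the
sketch below; the pool name is illustrative, and I'm assuming load-gen
accepts the usual -p flag.)

    rados -p testpool load-gen --num-objects 50 --max-ops 16 \
        --min-object-size 4M --max-object-size 4M \
        --min-op-len 4M --max-op-len 4M \
        --percent 100 --target-throughput 2000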
I
On 22/07/2015 21:17, Lincoln Bryant wrote:
> Hi Hadi,
>
> AFAIK, you can’t safely mount RBD as R/W on multiple machines. You
> could re-export the RBD as NFS, but that’ll introduce a bottleneck and
> probably tank your performance gains over CephFS.
>
> For what it’s worth, some of our RBDs are
Annoying that we don't know what caused the replica's stat structure to get out
of sync. Let us know if you see it recur. What were those pools used for?
-Sam
----- Original Message -----
From: "Dan van der Ster"
To: "Samuel Just"
Cc: ceph-users@lists.ceph.com
Sent: Wednesday, July 22, 2015 1
Workaround... We're now building a huge compute cluster: 140 diskless compute
nodes, all pulling a lot of computing data from storage concurrently.
Users who submit jobs to the cluster also need access to the same storage
location (to check progress & results).
We've built a Ceph cluster:
Cool, writing some objects to the affected PGs has stopped the
consistent/inconsistent cycle. I'll keep an eye on them but this seems
to have fixed the problem.
Thanks!!
Dan
On Wed, Jul 22, 2015 at 6:07 PM, Samuel Just wrote:
> Looks like it's just a stat error. The primary appears to have the c
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
RBD can be safely mounted on multiple machines at once, but the file
system has to be designed for such scenarios. File systems like ext,
xfs, btrfs, etc are only designed to be accessed by a single system.
Clustered file systems like OCFS, GFS, etc
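(As a rough sketch of that setup; the image name and mount point are made
up, and the OCFS2 cluster stack must already be configured on every node,
so this is not a complete recipe.)

    # on each client node:
    rbd map rbd/shared-img
    # format ONCE, from a single node, with a cluster-aware filesystem:
    mkfs.ocfs2 -L shared /dev/rbd0
    # then mount on every node that has the image mapped:
    mount /dev/rbd0 /mnt/shared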
Hi Hadi,
AFAIK, you can’t safely mount RBD as R/W on multiple machines. You could
re-export the RBD as NFS, but that’ll introduce a bottleneck and probably tank
your performance gains over CephFS.
For what it’s worth, some of our RBDs are mapped to multiple machines, mounted
read-write on one
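(For reference, the NFS re-export approach is roughly the following; the
export path and network are illustrative.)

    # /etc/exports on the one host that maps and mounts the RBD read-write:
    /mnt/rbd  192.168.0.0/24(rw,sync,no_subtree_check)
    # then re-export:
    exportfs -ra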
Hello Cephers,
I've been experimenting with CephFS and RBD for some time now.
From what I have seen so far, RBD outperforms CephFS by far. However, there
is a catch!
RBD can only be mounted on one client at a time!
Now, assuming that we have multiple clients running some MPI code (and
doing some dis
Hi Gregory,
Thanks for your replies.
Let's take the 2-host setup (3 MONs + 3 idle MDSs on the same hosts):
2 Dell R510 servers, CentOS 7.0.1406, dual Xeon 5620 (8
cores+hyperthreading), 16GB RAM, 2 or 1x10gbits/s Ethernet (same results with
and without private 10gbits network), PERC H700
Looks like it's just a stat error. The primary appears to have the correct
stats, but the replica for some reason doesn't (it thinks there's an extra
object). I bet it clears itself if you perform a write on the PG, since
the primary will send over its stats. We'd need information from
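(A sketch of forcing such a write if no natural one comes along; the pool
name, PG id, and object names are hypothetical, and ceph osd map is used to
probe for an object that hashes into the affected PG.)

    POOL=mypool; PGID=36.10d
    for i in $(seq 1 10000); do
        if ceph osd map "$POOL" "probe-$i" | grep -qF "($PGID)"; then
            echo fix > /tmp/probe
            rados -p "$POOL" put "probe-$i" /tmp/probe  # write lands in $PGID
            break
        fi
    done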
This cluster is serving RBD storage for OpenStack, and today all the I/O
just stopped.
After looking at the boxes, ceph-mon was using 17 GB of RAM, and this was on
*all* the mons. Restarting the main one just made it work again (I
restarted the other ones because they were using a lot of RAM).
This h
Hi Ceph community,
Env: hammer 0.94.2, Scientific Linux 6.6, kernel 2.6.32-431.5.1.el6.x86_64
We wanted to post here before the tracker to see if someone else has
had this problem.
We have a few PGs (different pools) which get marked inconsistent when
we stop the primary OSD. The problem is stra
1. Is the layout default, apart from the change to object_size?
It is default. The only changes I make are object_size and stripe_unit. I set
both to the same value (i.e. stripe_count is 1 in all cases); see the setfattr
sketch below.
2. What version are the client and server?
ceph version 0.94.1
3.
Not really... are you usi
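(For context on the layout changes discussed in (1), a sketch of setting
those fields through the CephFS virtual xattrs; the path and 4 MiB value
are illustrative, and the layout can only be changed while the file is
still empty.)

    touch /mnt/cephfs/testfile
    setfattr -n ceph.file.layout.object_size -v 4194304 /mnt/cephfs/testfile
    setfattr -n ceph.file.layout.stripe_unit -v 4194304 /mnt/cephfs/testfile
    getfattr -n ceph.file.layout /mnt/cephfs/testfile  # stripe_count stays 1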
On Sat, Jul 18, 2015 at 10:25 PM, Nick Fisk wrote:
> Hi All,
>
> I’m doing some testing on the new High/Low speed cache tiering flushing and
> I’m trying to get my head round the effect that changing these 2 settings
> have on the flushing speed. When setting the osd_agent_max_ops to 1, I can
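(For anyone reproducing this: both knobs can be changed at runtime with
injectargs, roughly as below; I'm assuming osd_agent_max_low_ops is the
companion low-speed setting, so treat the names as a sketch.)

    ceph tell osd.* injectargs '--osd_agent_max_ops 1'
    ceph tell osd.* injectargs '--osd_agent_max_low_ops 1'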
We might also be able to help you improve or better understand your
results if you can tell us exactly what tests you're conducting that
are giving you these numbers.
-Greg
On Wed, Jul 22, 2015 at 4:44 AM, Florent MONTHEL wrote:
> Hi Frederic,
>
> When you have a Ceph cluster with 1 node you don’t
Ok,
So good news that RADOS appears to be doing well. I'd say next is to
follow some of the recommendations here:
http://ceph.com/docs/master/radosgw/troubleshooting/
If you examine the objecter_requests and perfcounters during your
cosbench write test, it might help explain where the reque
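(Concretely, both can be pulled from the radosgw admin socket while the
cosbench test runs; the socket path below is illustrative and varies by
setup.)

    ceph --admin-daemon /var/run/ceph/ceph-client.radosgw.gateway.asok objecter_requests
    ceph --admin-daemon /var/run/ceph/ceph-client.radosgw.gateway.asok perf dump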
We have been error-free for almost 3 weeks now. The following settings on
all OSD nodes were changed:
vm.swappiness=1
vm.min_free_kbytes=262144
My discussion on XFS list is here:
http://www.spinics.net/lists/xfs/msg33645.html
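(To make those settings survive a reboot, something like the following; the
file name is illustrative.)

    # /etc/sysctl.d/90-ceph-osd.conf
    vm.swappiness = 1
    vm.min_free_kbytes = 262144
    # apply immediately:
    sysctl -p /etc/sysctl.d/90-ceph-osd.conf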
Thanks,
Alex
On Fri, Jul 3, 2015 at 6:27 AM, Jan Schermer wrote:
>
On 15/07/15 10:55, Jelle de Jong wrote:
> On 13/07/15 15:40, Jelle de Jong wrote:
>> I was testing a ceph cluster with osd_pool_default_size = 2 and while
>> rebuilding the OSD on one ceph node, a disk in another node started
>> getting read errors, and ceph kept taking the OSD down, and instead of
I just filed a ticket after trying ceph-objectstore-tool:
http://tracker.ceph.com/issues/12428
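(For context, the kind of invocation attempted looks roughly like this; the
OSD must be stopped first, the data/journal paths and OSD id are
illustrative, and the PG id is the one from this thread.)

    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-N \
        --journal-path /var/lib/ceph/osd/ceph-N/journal \
        --op remove --pgid 36.10d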
On Fri, Jul 17, 2015 at 3:36 PM, Dan van der Ster wrote:
> A bit of progress: rm'ing everything from inside current/36.10d_head/
> actually let the OSD start and continue deleting other PGs.
>
> Cheers,