Hello.
For our CCTV stream-storage project we decided to use a Ceph cluster with
an EC pool.
The input requirements are not scary: max 15 Gbit/s of incoming traffic from
the CCTV cameras, 30 days of retention, 99% write operations, and the cluster
must be able to grow without downtime.
Our current view of the architecture is:
* 6 J
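(A rough sizing aside, assuming the full 15 Gbit/s is sustained: 15 Gbit/s ≈ 1.9 GB/s ≈ 162 TB per day, so 30 days of retention comes to roughly 4.9 PB of usable capacity, multiplied by (k+m)/k of the chosen EC profile for raw capacity.)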
Hi all,
Is there a way to see what an MDS is actually doing? We are testing
metadata operations, but in the ceph status output we only see about 50
ops/s: client io 90791 kB/s rd, 54 op/s
Our active ceph-mds is using a lot of CPU and 25 GB of memory, so I guess
it is doing a lot of operations fr
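One way to get more detail is the MDS admin socket (a sketch, run on the MDS host; the daemon name mds.a below is hypothetical and the available commands vary a bit by release):
    ceph daemon mds.a perf dump            # internal MDS performance counters
    ceph daemon mds.a session ls           # connected clients and their request load
    ceph daemon mds.a dump_ops_in_flight   # metadata operations currently in progress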
I'm currently upgrading to Infernalis and the chown stage is taking a long
time on my OSD nodes. I've come up with this little one-liner to run the
chowns in parallel:
find /var/lib/ceph/osd -maxdepth 1 -mindepth 1 -print | xargs -P12 -n1 chown -R ceph:ceph
NOTE: You still need to make sure
I would just disable barriers and re-enable them afterwards (plus a sync); it
should be a breeze then.
Jan
> On 10 Nov 2015, at 12:58, Nick Fisk wrote:
>
> I'm currently upgrading to Infernalis and the chown stage is taking a long
> time on my OSD nodes. I've come up with this little one liner to run the
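For reference, a minimal sketch of what Jan's suggestion could look like (assuming an XFS- or ext4-backed OSD at a hypothetical /var/lib/ceph/osd/ceph-0; whether barriers can be toggled on remount depends on the filesystem and kernel):
    mount -o remount,nobarrier /var/lib/ceph/osd/ceph-0   # disable write barriers
    # ... run the chown ...
    sync                                                  # flush everything out
    mount -o remount,barrier /var/lib/ceph/osd/ceph-0     # re-enable barriers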
I'm looking at iostat and most of the IO is read, so I think it would still
take a while if it were still single-threaded.
Device:   rrqm/s  wrqm/s  r/s   w/s   rkB/s  wkB/s  avgrq-sz  avgqu-sz  await  r_await  w_await  svctm  %util
sda       0.00    0.50    0.00  5.50
Interesting. I have all the inodes in cache on my nodes so I expect the
bottleneck to be filesystem metadata -> journal writes. Unless something else
is going on in here ;-)
Jan
> On 10 Nov 2015, at 13:19, Nick Fisk wrote:
>
> I’m looking at iostat and most of the IO is read, so I think it wo
Hello.
We have CephFS deployed over Ceph cluster (0.94.5).
We experience constant MDS restarts under a high-IOPS workload (e.g.
rsyncing lots of small mailboxes from another storage system to CephFS using
the ceph-fuse client). First, the cluster health goes to HEALTH_WARN with
the following disclaime
On 10/11/15 02:07, c...@dolphin-it.de wrote:
Hello,
I filed a new ticket:
http://tracker.ceph.com/issues/13739
Regards,
Kevin
[ceph-users] Problem with infernalis el7 package (10-Nov-2015 1:57)
From: Bob R
To: ceph-users@lists.ceph.com
Hello,
We've got two problems trying to update our
Because our problem was not related to ceph-deploy, I created a new ticket:
http://tracker.ceph.com/issues/13746
On 10/11/15 16:53, Kenneth Waegeman wrote:
On 10/11/15 02:07, c...@dolphin-it.de wrote:
Hello,
I filed a new ticket:
http://tracker.ceph.com/issues/13739
Regards,
Kevin
[ceph-u
I am in the process of upgrading a cluster with mixed 0.94.2/0.94.3 to
0.94.5 this morning and am seeing identical crashes. While doing a rolling
upgrade across the mons, after the 3rd of 3 mons was restarted on 0.94.5,
all 3 crashed simultaneously, identically
to what you are
Mike - unless things have changed in the latest version(s) of Ceph, I do *not*
believe CRUSH will be successful in creating a valid PG map if the 'n' value is
10 (k+m), your host count is 6, and your failure domain is set to host. You'll
need to increase your host count to match or exceed 'n', ch
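For example (a hedged sketch with hypothetical names and PG counts, using the hammer-era option names), a profile whose n = k+m fits the six hosts, plus a crushtool check that CRUSH can actually fill every slot for the chosen n:
    ceph osd erasure-code-profile set cctv_ec k=4 m=2 ruleset-failure-domain=host
    ceph osd pool create cctv 2048 2048 erasure cctv_ec
    ceph osd getcrushmap -o crushmap.bin
    crushtool -i crushmap.bin --test --rule 1 --num-rep 6 --show-bad-mappings   # rule id is hypothetical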
On Tue, Nov 10, 2015 at 11:46 AM, Kenneth Waegeman
wrote:
> Hi all,
>
> Is there a way to see what an MDS is actually doing? We are testing metadata
> operations, but in the ceph status output only see about 50 ops/s : client
> io 90791 kB/s rd, 54 op/s
> Our active ceph-mds is using a lot of cpu
I am on trusty also but my /var/lib/ceph/mon lives on an xfs filesystem.
My mons seem to have stabilized now after upgrading the last of the
OSDs to 0.94.5. No crashes in the last 20 minutes whereas they were
crashing every 1-2 minutes in a rolling fashion the entire time I was
upgrading OSDs.
On
Hi Vasily,
Did you see anything interesting in the logs? I do not really know
where else to look. Everything seems OK to me.
Any help would be very much appreciated.
2015-11-06 15:29 GMT+01:00 Iban Cabrillo :
> Hi Vasily,
> Of course,
> from cinder-volume.log
>
> 2015-11-06 12:28:52.865
Can you dump the metadata ops in flight on each ceph-fuse when it hangs?
ceph daemon mds_requests
-Greg
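For completeness, a sketch of how that is typically invoked against the ceph-fuse admin socket; the socket path below is hypothetical and normally includes the client's PID:
    ceph daemon /var/run/ceph/ceph-client.admin.12345.asok mds_requests   # requests still waiting on the MDS
    ceph daemon /var/run/ceph/ceph-client.admin.12345.asok mds_sessions   # MDS sessions the client holds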
On Mon, Nov 9, 2015 at 8:06 AM, Burkhard Linke
wrote:
> Hi,
>
> On 11/09/2015 04:03 PM, Gregory Farnum wrote:
>>
>> On Mon, Nov 9, 2015 at 6:57 AM, Burkhard Linke
>> wrote:
>>>
>>> Hi,
>>>
On Tue, Nov 10, 2015 at 6:32 AM, Oleksandr Natalenko
wrote:
> Hello.
>
> We have CephFS deployed over Ceph cluster (0.94.5).
>
> We experience constant MDS restarting under high IOPS workload (e.g.
> rsyncing lots of small mailboxes from another storage to CephFS using
> ceph-fuse client). First,
On Mon, Nov 9, 2015 at 8:16 PM, Wido den Hollander wrote:
> On 11/09/2015 05:27 PM, Vickey Singh wrote:
> > Hello Ceph Geeks
> >
> > Need your comments with my understanding on straw2.
> >
> >- Is Straw2 better than straw ?
>
> It is not per se better than straw(1).
>
> straw2 distributes data
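As a sketch of how a bucket is switched from straw to straw2 (this assumes all clients and daemons are hammer or newer, since older clients cannot decode straw2, and the change will move some data):
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit crushmap.txt: change "alg straw" to "alg straw2" in the buckets to convert
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new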
Hi, I'm having issues activating my OSDs. I have provided the output of the
fault. I can see that the error message says the connection is timing out;
however, I am struggling to understand why, as I have followed each stage
within the quick start guide. For example, I can ping node1
(which
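A few hedged first checks for that kind of timeout (hypothetical host name, default ports assumed: 6789 for monitors, 6800-7300 for OSDs):
    nc -zv node1 6789            # can this host reach the monitor?
    firewall-cmd --list-all      # on EL7, confirm the ceph ports are open
    iptables -L -n               # or inspect the raw firewall rules directly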
On Sun, Nov 8, 2015 at 10:41 PM, Alexandre DERUMIER wrote:
> Hi,
>
> debian repository seem to miss librbd1 package for debian jessie
>
> http://download.ceph.com/debian-infernalis/pool/main/c/ceph/
>
> (ubuntu trusty librbd1 is present)
This is now fixed and should be available now.
>
>
> -
Yeah, this was our bad. As indicated in
http://tracker.ceph.com/issues/13746 , Alfredo rebuilt CentOS 7
infernalis packages so that they don't have this dependency, re-signed
them, and re-uploaded them to the same location. Please clear your yum
cache (`yum makecache`) and try again.
On Tue, Nov 1
On Mon, Nov 9, 2015 at 5:20 PM, Jason Altorf wrote:
> On Tue, Nov 10, 2015 at 7:34 AM, Ken Dreyer wrote:
>> It is not a known problem. Mind filing a ticket @
>> http://tracker.ceph.com/ so we can track the fix for this?
>>
>> On Mon, Nov 9, 2015 at 1:35 PM, c...@dolphin-it.de
>> wrote:
>>>
>>>
On Mon, Nov 9, 2015 at 6:03 PM, Bob R wrote:
> We've got two problems trying to update our cluster to infernalis-
This was our bad. As indicated in http://tracker.ceph.com/issues/13746
, Alfredo rebuilt CentOS 7 infernalis packages, re-signed them, and
re-uploaded them to the same location on do
On Fri, Nov 6, 2015 at 4:05 PM, c...@dolphin-it.de wrote:
> Error: Package: 1:cups-client-1.6.3-17.el7.x86_64 (core-0)
>Requires: cups-libs(x86-64) = 1:1.6.3-17.el7
>Installed: 1:cups-libs-1.6.3-17.el7_1.1.x86_64 (@updates)
>cups-libs(x86-64) = 1:1.6.3-17.el
Hello,
I followed these steps trying to roll back a snapshot, but it seems to fail;
the files are lost.
root@ceph3:/mnt/ceph-block-device# rbd snap create rbd/bar@snap1
root@ceph3:/mnt/ceph-block-device# rbd snap ls rbd/bar
SNAPID NAME SIZE
2 snap1 10240 MB
root@ceph3:/mnt/ceph-block-device
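For what it's worth, a rollback is normally done with the filesystem unmounted and the image unmapped first, otherwise the still-mounted filesystem keeps serving stale cached data. A minimal sketch, assuming the kernel RBD client and a hypothetical /dev/rbd0 device:
    umount /mnt/ceph-block-device
    rbd unmap /dev/rbd0
    rbd snap rollback rbd/bar@snap1
    rbd map rbd/bar
    mount /dev/rbd0 /mnt/ceph-block-device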
On Fri, Oct 30, 2015 at 7:20 PM, Artie Ziff wrote:
> I'm looking forward to learning...
> Why the different SHA1 values in two places that reference v0.94.3?
95cefea9fd9ab740263bf8bb4796fd864d9afe2b is the commit where we bump
the version number in the debian packaging.
b2503b0e15c0b13f480f083506
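If it helps to see locally what each identifier points at, the following (run in a ceph.git checkout) distinguishes the annotated tag object from the commit it tags:
    git rev-parse v0.94.3              # SHA1 of the tag object itself
    git rev-parse 'v0.94.3^{commit}'   # SHA1 of the commit the tag points to
    git log --oneline -1 v0.94.3       # that commit's subject line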
On Fri, Oct 30, 2015 at 2:02 AM, Andrey Shevel wrote:
> ceph-release-1-1.el7.noarch.rp FAILED
> http://download.ceph.com/rpm-giant/el7/noarch/ceph-release-1-1.el7.noarch.rpm:
> [Errno 14] HTTP Error 404 - Not Found
> ] 0.0 B/s |0 B --:--:-- ETA
> Trying other mirror.
>
>
> Error down
Hello,
On Tue, 10 Nov 2015 13:29:31 +0300 Mike Almateia wrote:
> Hello.
>
> For our CCTV stream-storage project we decided to use a Ceph cluster with
> an EC pool.
> The input requirements are not scary: max 15 Gbit/s of incoming traffic from CCTV,
> 30 days of retention,
> 99% write operations, a cluster must ha
Thanks for sharing this. I modified it slightly to stop and start the OSDs
on the fly rather than having all OSDs needlessly stopped during the chown,
i.e.:
chown ceph:ceph /var/lib/ceph /var/lib/ceph/* && find /var/lib/ceph/osd
-maxdepth 1 -mindepth 1 -print | xargs -P12 -n1 -I '{}' bash -c 'echo
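A hedged reconstruction of the general idea (not the poster's verbatim command; it assumes upstart-managed OSDs on Ubuntu and the standard /var/lib/ceph/osd/ceph-<id> layout) might look like:
    chown ceph:ceph /var/lib/ceph /var/lib/ceph/*
    find /var/lib/ceph/osd -maxdepth 1 -mindepth 1 -print | \
      xargs -P12 -n1 -I '{}' bash -c '
        id="${1##*-}"              # OSD id from the directory name
        stop ceph-osd id="$id"     # stop only this OSD
        chown -R ceph:ceph "$1"    # chown its data directory
        start ceph-osd id="$id"    # and bring it straight back
      ' _ '{}'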
Hi Guys,
I'm in the process of configuring a Ceph cluster and am getting some
less-than-ideal performance, and I need some help figuring it out!
This cluster will only really be used for backup storage for Veeam, so I don't
need a crazy amount of I/O, but good sequential writes would be ideal.
At t
On a small cluster I've gotten great sequential performance by using btrfs
on the OSDs, a journal file (max sync interval ~180 s), and the option
filestore journal parallel = true
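For reference, the corresponding ceph.conf settings might look roughly like this (a sketch, not a recommendation; btrfs-backed filestore came with its own stability caveats at the time):
    [osd]
    osd mkfs type = btrfs
    filestore journal parallel = true    # write journal and data in parallel (btrfs only)
    filestore max sync interval = 180    # seconds between filestore syncs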
2015-11-11 10:12 GMT+03:00 Ben Town :
> Hi Guys,
>
>
>
> I’m in the process of configuring a ceph cluster and am getting some less
On 10.11.2015 19:40, Paul Evans wrote:
> Mike - unless things have changed in the latest version(s) of Ceph, I do *not*
> believe CRUSH will be successful in creating a valid PG map if the 'n' value
> is 10 (k+m), your host count is 6, and your failure domain is set to host.
> You'll need to increas
10.11.2015 22:38, Gregory Farnum wrote:
Which requests are they? Are these MDS operations or OSD ones?
Those requests appeared in the ceph -w output and are as follows:
https://gist.github.com/5045336f6fb7d532138f
Is it correct that these are blocked OSD operations? osd.3 is one of
data poo
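To see what osd.3 is actually stuck on, the standard admin socket commands (run on the host carrying osd.3) are usually the quickest route:
    ceph health detail                     # lists the blocked requests and the OSDs involved
    ceph daemon osd.3 dump_ops_in_flight   # operations currently in progress on osd.3
    ceph daemon osd.3 dump_historic_ops    # recently completed slow operations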
Hello,
On Wed, 11 Nov 2015 07:12:56 + Ben Town wrote:
> Hi Guys,
>
> I'm in the process of configuring a ceph cluster and am getting some
> less than ideal performance and need some help figuring it out!
>
> This cluster will only really be used for backup storage for Veeam so I
> don't ne