Hi Somnath,
Yes, we will analyze whether there is any bottleneck. Is there any
useful command we can use to analyze this bottleneck?
>> 1. What is your backend cluster configuration like how many OSDs,
PGs/pool, HW details etc
We are using 2 OSDs and no PGs/pools have been created; it is the default.
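(Not from the original message - a minimal sketch of commands commonly used to get a first look at cluster layout and per-disk load; adjust device names to your hardware:)

    ceph -s                       # overall health and current IO rates
    ceph osd tree                 # OSD/host layout
    ceph osd df                   # per-OSD utilisation (Hammer and later)
    ceph osd dump | grep pool     # pool size and pg_num settings
    iostat -mx /dev/sdX 1 20      # per-device throughput, latency and queue depth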
> We've since merged something
> that stripes over several small xattrs so that we can keep things inline,
> but it hasn't been backported to hammer yet. See
> c6cdb4081e366f471b372102905a1192910ab2da.
Hi Sage:
You wrote "yet" - should we earmark it for hammer backport?
Nathan
Cache on top of the data drives (not the journal) will not help in most cases;
those writes are already buffered in the OS. So unless your OS is very light
on memory and is flushing constantly, it will have no effect - it just adds
overhead when a flush comes. I haven't tested this extensively with C
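(Not from the original message - a quick way to check whether the on-disk volatile write cache is enabled on a given drive, assuming hdparm is available and the device is a SATA disk; the device name is a placeholder:)

    hdparm -W /dev/sdX        # query the current write-cache setting
    hdparm -W0 /dev/sdX       # disable the drive's write cache
    hdparm -W1 /dev/sdX       # re-enable it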
Hi,
After upgrading to 0.94.2 yesterday on our test cluster, we've had 3
PGs go inconsistent.
First, immediately after we updated the OSDs PG 34.10d went inconsistent:
2015-06-16 13:42:19.086170 osd.52 137.138.39.211:6806/926964 2 :
cluster [ERR] 34.10d scrub stat mismatch, got 4/5 objects, 0/0
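(Editorial note, not part of the original report - the usual first steps for an inconsistent PG on Hammer, shown only as a sketch; repair should be run only once the cause of the mismatch is understood:)

    ceph health detail            # lists the inconsistent PGs
    ceph pg deep-scrub 34.10d     # re-run the deep scrub on the affected PG
    ceph pg repair 34.10d         # ask the primary to repair the PG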
On Thu, Jun 11, 2015 at 7:34 PM, Sage Weil wrote:
> * ceph-objectstore-tool should be in the ceph server package (#11376, Ken
> Dreyer)
We had a little trouble yum updating from 0.94.1 to 0.94.2:
file /usr/bin/ceph-objectstore-tool from install of
ceph-1:0.94.2-0.el6.x86_64 conflicts with file
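(Not from the original message - a hedged first step when yum reports such a conflict is to check which installed package currently owns the file; running the update as a single transaction covering the related packages, e.g. yum update 'ceph*', sometimes avoids it:)

    rpm -qf /usr/bin/ceph-objectstore-tool    # which installed package owns the file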
I have done some quick tests with FUSE too: it seems to me that, both with
the old and with the new kernel, FUSE is approx. five times slower than the
kernel driver for both reading files and getting stats.
I don't know whether it is just me or if it is expected.
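(Not from the original message - a rough sketch of how such a comparison can be run, assuming an existing CephFS and an admin keyring; the mount points, monitor host and the metadata-heavy test are illustrative only:)

    # FUSE client
    ceph-fuse -m mon-host:6789 /mnt/cephfs-fuse
    # kernel client
    mount -t ceph mon-host:6789:/ /mnt/cephfs-kernel -o name=admin,secretfile=/etc/ceph/admin.secret
    # same metadata-heavy workload against both mounts
    time ls -lR /mnt/cephfs-fuse   > /dev/null
    time ls -lR /mnt/cephfs-kernel > /dev/null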
On Wed, Jun 17, 2015 at 2:56 AM, Franc
On Wed, Jun 17, 2015 at 8:56 AM, Dan van der Ster wrote:
> Hi,
>
> After upgrading to 0.94.2 yesterday on our test cluster, we've had 3
> PGs go inconsistent.
>
> First, immediately after we updated the OSDs PG 34.10d went inconsistent:
>
> 2015-06-16 13:42:19.086170 osd.52 137.138.39.211:6806/926
On Wed, Jun 17, 2015 at 10:52 AM, Gregory Farnum wrote:
> On Wed, Jun 17, 2015 at 8:56 AM, Dan van der Ster wrote:
>> Hi,
>>
>> After upgrading to 0.94.2 yesterday on our test cluster, we've had 3
>> PGs go inconsistent.
>>
>> First, immediately after we updated the OSDs PG 34.10d went inconsiste
Hi,
We've been doing some testing of ceph hammer (0.94.2), but the
performance is very slow and we can't find what's causing the problem.
Initially we started with four nodes and 10 OSDs in total.
The drives we used were enterprise SATA drives, and on top of that
we used SSD drives as
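(Not from the original message - a minimal rados bench sketch for establishing a baseline, assuming a pool named rbd exists; the pool name, runtime and thread count are placeholders. --no-cleanup leaves the written objects in place so the read test has something to read:)

    rados bench -p rbd 60 write -t 16 --no-cleanup   # sequential writes
    rados bench -p rbd 60 seq -t 16                  # sequential reads of the objects just written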
Hi,
Does anybody know how much data gets written by the monitors? I was using
some cheaper SSDs for the monitors and was wondering why they had already
written 80 TB after 8 months.
Stefan
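(Not from the original message - a hedged sketch of how to measure the write rate on a monitor's disk; the device name is a placeholder, and the exact SMART attribute that tracks lifetime writes varies by SSD vendor:)

    iostat -mx /dev/sdX 60 5                    # MB/s written, sampled over 60-second intervals
    smartctl -a /dev/sdX | grep -i -E 'Total_LBAs_Written|Host_Writes'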
On Wed, Jun 17, 2015 at 10:18 AM, Stefan Priebe - Profihost AG wrote:
> Hi,
>
> Does anybody know how many data gets written from the monitors? I was using
> some cheaper ssds for monitors and was wondering why they had already written
> 80 TB after 8 month.
3.8MB/s? That's a little more than I
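(Editorial note: 80 TB over 8 months works out to roughly 80×10^12 B / (8 × 30 × 86400 s) ≈ 3.9 MB/s of sustained writes, which is consistent with the figure above.)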
I have ~60 TiB written for the past few months (4? 6?) and 40TiB on one drive
added about 3 months ago.
Jan
> On 17 Jun 2015, at 11:18, Stefan Priebe - Profihost AG wrote:
>
> Hi,
>
> Does anybody know how many data gets written from the monitors? I was using
> some cheaper ssds for moni
On Wed, Jun 17, 2015 at 1:02 PM, Nathan Cutler wrote:
>> We've since merged something
>> that stripes over several small xattrs so that we can keep things inline,
>> but it hasn't been backported to hammer yet. See
>> c6cdb4081e366f471b372102905a1192910ab2da.
>
> Hi Sage:
>
> You wrote "yet" - sh
From my cluster of ~200 OSDs, 36K PGs:
# iostat -mx 1 20 /dev/md2 | grep md2
(columns: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util)
md2    0.00  0.00  0.10  2064.41  0.01  8.05  8.00  0.00  0.00  0.00  0.00
md2    0.00  0.00  0.00   338.00  0.00  1.32  7.98  0.00  0.00  0.00  0.00
md2
Hi all, is there any way to rename a pool by ID (pool number)?
I have one pool with an empty name; it is not used and I just want to delete
it, but I can't, because a pool name is required.
ceph osd lspools
0 data,1 metadata,2 rbd,12 ,16 libvirt,
I want to rename this one: pool #12
Thanks,
Pavel
Pavel,
unfortunately there isn't a way to rename a pool using its ID, as I
learned the hard way myself when I faced the exact same issue a few
months ago.
It would be a good idea for the developers to also include a way to
manipulate (rename, delete, etc.) pools using the ID which is d
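(Not from the original reply - at a minimum, the nameless pool can be positively identified from the OSD map before any further action; a sketch:)

    ceph osd lspools                  # shows "12 ," for the pool with the empty name
    ceph osd dump | grep '^pool 12 '  # full settings for pool id 12

Whether the CLI accepts an explicitly quoted empty string for the name (e.g. ceph osd pool rename "" newname) seems to be version-dependent and, per the posts above, did not work in this case.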
On 06/17/2015 04:10 AM, Jacek Jarosiewicz wrote:
Hi,
We've been doing some testing of ceph hammer (0.94.2), but the
performance is very slow and we can't find what's causing the problem.
Initially we've started with four nodes with 10 osd's total.
The drives we've used were SATA enterprise driv
Hi,
can you post your ceph.conf ?
Which tools do you use for benchmark ?
which block size, iodepth, number of client/rbd volume do you use ?
Is it with krbd kernel driver ?
(I have seen some bad performance with kernel 3.16, but at much higher rate
(100k iops)
Is it with ethernet switches ? o
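(Not from the original message - a typical fio invocation for this kind of RBD test, assuming fio was built with the rbd engine and a test image named testimg exists in pool rbd; all names and values are placeholders:)

    fio --name=rbd-randwrite --ioengine=rbd --clientname=admin \
        --pool=rbd --rbdname=testimg \
        --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 \
        --direct=1 --runtime=60 --time_based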
Is it possible to access Ceph from Spark as it is mentioned here for Openstack
Swift?
https://spark.apache.org/docs/latest/storage-openstack-swift.html
Thanks for help.
Milan Sladky
On 06/17/2015 03:34 PM, Mark Nelson wrote:
On 06/17/2015 04:10 AM, Jacek Jarosiewicz wrote:
Hi,
[ cut ]
~60MB/s seq writes
~100MB/s seq reads
~2-3k iops random reads
Is this per SSD or aggregate?
aggregate (if I understand You correctly). This is what I see when I run
tests on client
On 06/17/2015 03:38 PM, Alexandre DERUMIER wrote:
Hi,
can you post your ceph.conf ?
sure:
[global]
fsid = e96fdc70-4f9c-4c12-aae8-63dd7c64c876
mon initial members = cf01,cf02
mon host = 10.4.10.211,10.4.10.212
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
On 06/17/2015 09:03 AM, Jacek Jarosiewicz wrote:
On 06/17/2015 03:34 PM, Mark Nelson wrote:
On 06/17/2015 04:10 AM, Jacek Jarosiewicz wrote:
Hi,
[ cut ]
~60MB/s seq writes
~100MB/s seq reads
~2-3k iops random reads
Is this per SSD or aggregate?
aggregate (if I understand You correct
On Wed, 17 Jun 2015 16:03:17 +0200 Jacek Jarosiewicz wrote:
> On 06/17/2015 03:34 PM, Mark Nelson wrote:
> > On 06/17/2015 04:10 AM, Jacek Jarosiewicz wrote:
> >> Hi,
> >>
>
> [ cut ]
>
> >>
> >> ~60MB/s seq writes
> >> ~100MB/s seq reads
> >> ~2-3k iops random reads
> >
> > Is this per SSD or a
On Wed, Jun 17, 2015 at 2:58 PM, Milan Sladky wrote:
> Is it possible to access Ceph from Spark as it is mentioned here for
> Openstack Swift?
>
> https://spark.apache.org/docs/latest/storage-openstack-swift.html
Depends on what you're trying to do. It's possible that the Swift
bindings described
On Wed, 17 Jun 2015, Nathan Cutler wrote:
> > We've since merged something
> > that stripes over several small xattrs so that we can keep things inline,
> > but it hasn't been backported to hammer yet. See
> > c6cdb4081e366f471b372102905a1192910ab2da.
>
> Hi Sage:
>
> You wrote "yet" - should
Hi Milan,
We've done some tests here and our hadoop can talk to RGW successfully
with this SwiftFS plugin. But we haven't tried Spark yet. One thing is
the data locality feature: it actually requires some special
configuration of the Swift proxy-server, so RGW is not able to achieve the
data locality
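(Not from the original message - a rough sketch of the hadoop-openstack (SwiftFS) client-side settings involved when pointing the plugin at an RGW endpoint; the provider name "ceph", the URL, port and credentials are placeholders, and the exact auth properties depend on the plugin version and on how RGW auth is configured:)

    fs.swift.impl                    org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem
    fs.swift.service.ceph.auth.url   http://rgw-host:7480/auth/1.0
    fs.swift.service.ceph.username   testuser:swift
    fs.swift.service.ceph.password   <swift secret key>
    fs.swift.service.ceph.public     true

These go into core-site.xml (or the equivalent Spark/Hadoop configuration), after which paths of the form swift://container.ceph/... can be read.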
Hi,
I have 5 OSD servers, with a total of 45 OSDs in my cluster. I am trying out
erasure coding with different K and M values.
I seem to always get warnings about degraded and undersized PGs whenever I
create a profile and create a pool based on that profile.
I have profiles with K and M value
Hi,
On 17/06/2015 18:04, Garg, Pankaj wrote:
> Hi,
>
>
>
> I have 5 OSD servers, with total of 45 OSDS in my clusters. I am trying out
> Erasure Coding with different K and m values.
>
> I seem to always get Warnings about : Degraded and Undersized PGs, whenever I
> create a profile and cre
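(Editorial note, not part of the reply above: with the default failure domain of "host", an EC pool needs at least K+M hosts, so on 5 servers any profile with K+M > 5 will leave PGs undersized. A Hammer-era sketch, with the profile and pool names as placeholders:)

    ceph osd erasure-code-profile set myprofile k=3 m=2 ruleset-failure-domain=host
    ceph osd pool create ecpool 128 128 erasure myprofile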
Just to follow up, I started from scratch, and I think the key was to run
ceph-deploy purge (nodes), ceph-deploy purgedata (nodes) and finally
ceph-deploy forgetkeys.
Thanks for the replies Alex and Alex!
Mike C
Ok - I know this post has the potential to spread to unsavory corners of
discussion about "the best linux distro" ... blah blah blah ... please, don't
let it go there ... !
I'm seeking some input from people who have been running larger Ceph clusters
... on the order of 100s of physical servers
Okay. You didn't mention anything about your rbd client host config or the CPU
cores of the OSD/rbd systems. Some thoughts on what you can do:
1. Considering the pretty lean CPU config you have, I would say check the CPU
usage of both the OSD and rbd nodes first. If it is already saturated, you are out
of luck
There's often a great deal of discussion about which SSDs to use for
journals, and why some of the cheaper SSDs end up being more expensive
in the long run. The recent blog post at Algoria, though not Ceph
specific, provides a good illustration of exactly how insidious
kernel/SSD interactions can be.
Hi!
I am seeing our monitor logs filled over and over with lines like these:
2015-06-17 20:29:53.621353 7f41e48b1700 1 mon.node02@1(peon).log
v26344529 check_sub sending message to client.? 10.102.4.11:0/1006716
with 1 entries (version 26344529)
2015-06-17 20:29:53.621448 7f41e48b1700 1 mon.
This is presently written from log level 1 onwards :-)
So, only log level 0 will not log this..
Try 'debug_mon = 0/0' in the conf file.
Now, I don't have enough knowledge on that part to say whether it is important
enough to log at log level 1, sorry :-(
Thanks & Regards
Somnath
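(Not from the original reply - the setting can also be changed at runtime without restarting the monitors; a sketch assuming an admin keyring is available on the node:)

    ceph tell mon.* injectargs '--debug_mon 0/0'

To make it persistent, put "debug mon = 0/0" under the [mon] section of ceph.conf.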
On 2015-06-17 18:52:51 +, Somnath Roy said:
This is presently written from log level 1 onwards :-)
So, only log level 0 will not log this..
Try, 'debug_mon = 0/0' in the conf file..
Yeah, once I had sent the mail I realized that "1" in the log line was
the level. Had overlooked that before.
<< However, I'd rather not set the level to 0/0, as that would disable all
logging from the MONs
I don't think so. All the error scenarios and stack traces (in case of a crash)
are supposed to be logged at log level 0. But, generally, we need the highest
log level (say 20) to get all the informa
Sorry Prabu, I forgot to mention that the settings shown in bold in the conf file
need to be tweaked based on your HW configuration (cpu, disk etc.) and number of
OSDs; otherwise it may hit you back badly.
Thanks & Regards
Somnath
From: Somnath Roy
Sent: Wednesday, June 17, 2015 11:25 AM
To: 'gjprabu'
Cc: Kama
I've been working on automating a lot of our ceph admin tasks lately and am
pretty pleased with how the puppet-ceph module has worked for installing
packages, managing ceph.conf, and creating the mon nodes. However, I don't
like the idea of puppet managing the OSDs. Since we also use ansible in m
1) Flags available in ceph osd set are
pause|noup|nodown|noout|noin|nobackfill|norecover|noscrub|nodeep-scrub|notieragent
I know or can guess most of them (the docs are a “bit” lacking)
But with "ceph osd set nodown” I have no idea what it should be used for - to
keep hammering a faulty OSD?
2
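(Not from the original question - for reference, these flags are set and cleared as shown below; "noout" during planned maintenance is the most common use, while "nodown" keeps OSDs from being marked down, e.g. to stop flapping during network problems:)

    ceph osd set noout       # don't mark out OSDs that go down (planned maintenance)
    ceph osd unset noout
    ceph osd set nodown      # don't mark OSDs down even if they stop responding
    ceph osd unset nodown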
Hey gang,
Some options are just not documented well…
What’s up with:
osd_scrub_chunk_min
osd_scrub_chunk_max
osd_scrub_sleep?
===
Tu Holmes
tu.hol...@gmail.com
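(Editorial note, not part of the original mail - the current values can at least be read back from a running OSD via the admin socket; a sketch assuming osd.0 runs on the local host. Roughly, osd_scrub_chunk_min/max bound how many objects are scrubbed per chunk, and osd_scrub_sleep is the pause between chunks:)

    ceph daemon osd.0 config show | grep scrub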
Hi all,
I want to use the swift client to connect to a ceph cluster. I have done S3
tests on this cluster before.
So I followed the guide to create a subuser and used the swift client to test
it, but I always get a "404 Not Found" error.
How can I create the "auth" page? Any help will be appreciated.
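(Not from the original message - the usual Hammer-era sequence for creating Swift credentials and testing them, as in the RGW quick-start docs; the user name, host and key are placeholders, and a 404 on /auth often points at the rgw frontend/rewrite configuration rather than the credentials:)

    radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full
    radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret
    swift -A http://rgw-host/auth/1.0 -U testuser:swift -K '<swift secret key>' list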
Thanks for the answer.
I made some tests: first I left dwc=enabled and caching on the journal drive
disabled. Latency grew from 20ms to 90ms on this drive. Next I enabled the cache
on the journal drive and disabled all cache on the data drives. Latency on the
data drives grew from 30-50ms to 1500-2000ms.
Test mad