And now the log from creating the bucket with the X-Container-Read header
(without the HTTP- prefix) is attached.
-----Original Message-----
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On behalf of Alvaro Izquierdo Jimeno
Sent: Tuesday, July 9, 2013 8:55
To: Ye
I looked at the code and it seems that the HTTP-X-Container-Read is
only expected when updating the object / container metadata.
Therefore, try to do a POST operation after the container's creation
with that specific field.
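For anyone following along, a minimal sketch of that sequence through the Swift API (the endpoint, token, and grantee below are placeholders, not values from this setup):

# create the container first, then set the read ACL with a separate POST,
# which is where radosgw picks up X-Container-Read
curl -i -X PUT  -H "X-Auth-Token: $TOKEN" http://gw.example.com/swift/v1/bucket1
curl -i -X POST -H "X-Auth-Token: $TOKEN" \
     -H "X-Container-Read: otheruser" \
     http://gw.example.com/swift/v1/bucket1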
On Tue, Jul 9, 2013 at 12:04 AM, Alvaro Izquierdo Jimeno
wrote:
> And now
Thanks Yehuda,
With the POST and the X-Container-Read header (without HTTP-), it is running
perfectly.
Many thanks!
Álvaro.
-----Original Message-----
From: Yehuda Sadeh [mailto:yeh...@inktank.com]
Sent: Tuesday, July 9, 2013 9:25
To: Alvaro Izquierdo Jimeno
CC: ceph-users@lists.ceph.c
Hi all,
While running some benchmarks with the internal rados benchmarker I noticed
something really strange. First of all, this is the line I used to run it:
$ sudo rados -p 07:59:54_performance bench 300 write -b 4194304 -t 1 --no-cleanup
So I want to test an IO with a concurrency of 1. I had a look
Hi all,
I've set up a new Ceph cluster for testing and it doesn't seem to be
working out-of-the-box. If I check the status it tells me that of the 3
defined OSDs, only 1 is in:
health HEALTH_WARN 392 pgs degraded; 392 pgs stuck unclean
>monmap e1: 3 mons at {controller-01=
> 10.20.3.110:6
On 07/09/2013 03:20 AM, Sebastien Han wrote:
Hi all,
While running some benchmarks with the internal rados benchmarker I
noticed something really strange. First of all, this is the line I used
to run it:
$ sudo rados -p 07:59:54_performance bench 300 write -b 4194304 -t 1
--no-cleanup
So I wan
Thanks Shain/ceph-users - this seems to have worked and I now have a running
gateway:
# ps -ef | grep rados
apache    3056     1  0 03:56 ?        00:00:10 /usr/bin/radosgw -n
client.radosgw.gateway
Unfortunately still unable to connect to the gateway though:
# swift -V 1.0 -A http://10
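(For reference, a complete invocation usually looks something like the one below; the host, user, and key are placeholders rather than the ones from this gateway.)

# placeholders only -- substitute the real gateway URL and the user's swift subuser/key
swift -V 1.0 -A http://gw.example.com/auth -U testuser:swift -K '<secret_key>' stat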
Hi Mark,
Yes, write-back caching is enabled since we have a BBU. See the current cache
policy of the controller: WriteBack, ReadAheadNone and Direct.
FYI, both journal and filestore are stored on the same disks, thus sd*1 is the
journal and sd*2 is the filestore.
In order to give you a little bit more a
On Mon, Jul 8, 2013 at 11:45 PM, Mihály Árva-Tóth
wrote:
> Hello,
>
> Is there any limit or recommendation for storing objects in one container?
> (rados) When I store one thousand or 100 million objects, will performance
> be affected?
Nope, no limit. RADOS doesn't index contents or anything, so the
On Tue, Jul 9, 2013 at 3:08 AM, Tom Verdaat wrote:
> Hi all,
>
> I've set up a new Ceph cluster for testing and it doesn't seem to be working
> out-of-the-box. If I check the status it tells me that of the 3 defined
> OSDs, only 1 is in:
>
>>health HEALTH_WARN 392 pgs degraded; 392 pgs stuck
On 07/09/2013 06:47 AM, Sebastien Han wrote:
Hi Mark,
Yes, write-back caching is enabled since we have a BBU. See the current
cache policy of the controller: WriteBack, ReadAheadNone and Direct.
FYI, both journal and filestore are stored on the same disks, thus sd*1
is the journal and sd*2 is the
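(For readers reproducing that layout, the matching ceph.conf stanza usually looks roughly like the sketch below; the device names are invented, not taken from the thread.)

[osd.0]
    osd journal = /dev/sdb1
    ; /dev/sdb2 holds the filestore and is mounted at /var/lib/ceph/osd/ceph-0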
Hi,
I'm using RedHat 6.4.
Attached are two files: one with the log output from GET bucket1 from ytenant and
the other with the log output from GET object1 from ytenant (both with a 401
response).
When I get the bucket (after the PUT request with the X-Container-Read header) from
xtenant, I can see
< HTT
That was it!
Sorry the 10.20.4.x NICs weren't configured correctly on those two nodes.
I'll admit this one was definitely my mistake.
Thanks for pointing it out.
Tom
2013/7/9 Gregory Farnum
> On Tue, Jul 9, 2013 at 3:08 AM, Tom Verdaat wrote:
> > Hi all,
> >
> > I've set up a new Ceph clu
2013/7/9 Gregory Farnum
> On Mon, Jul 8, 2013 at 11:45 PM, Mihály Árva-Tóth
> wrote:
> > Hello,
> >
> > Is there any limit or recommendation for storing objects in one container?
> > (rados) When I store one thousand or 100 million objects, will performance
> > be affected?
>
> Nope, no limit. RAD
Hi Guys,
Just wanted to let everyone know that we've released part 1 of a series
of performance articles that looks at Cuttlefish vs Bobtail on our
Supermicro test chassis. We'll be looking at both RADOS bench and RBD
performance with a variety of IO sizes, IO patterns, concurrency levels,
f
I've tried to use a Samba shared directory as an OSD in Ceph,
following these steps:
1) mount -t cifs //192.168.0.13/public /var/lib/ceph/osd/ceph-4 -o
username=root,user_xattr
2) configure the osd.4 in ceph.conf
[osd]
journal dio = false
journal aio = false
[osd.4]
host = lab14
3) ceph-osd -i 4 --mkfs --
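(A quick sanity check, sketched with the mount point from step 1, that the CIFS mount actually exposes the user xattrs the OSD filestore relies on:)

setfattr -n user.test -v hello /var/lib/ceph/osd/ceph-4 && \
    getfattr -n user.test /var/lib/ceph/osd/ceph-4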
On Tue, Jul 9, 2013 at 8:54 AM, huangjun wrote:
> I've tried to use a Samba shared directory as an OSD in Ceph,
> following these steps:
> 1) mount -t cifs //192.168.0.13/public /var/lib/ceph/osd/ceph-4 -o
> username=root,user_xattr
> 2) configure the osd.4 in ceph.conf
> [osd]
> journal dio
I tried this mainly because we have a NAS and want to use it as OSD nodes, but
without breaking the NAS.
Sent from my Xiaomi phone
Gregory Farnum wrote:
>On Tue, Jul 9, 2013 at 8:54 AM, huangjun wrote:
>> I've tried to use a Samba shared directory as an OSD in Ceph,
>> following these steps:
>> 1) mount -t cifs //192.168.0.13/
On Mon, Jul 8, 2013 at 8:51 AM, Bright wrote:
> Hello Guys:
>
> I am working with Ceph nowadays and I want to set up a system which
>
> includes a web page to create the Ceph object storage user.
>
> So, I tried to use the Admin Ops API to fulfill my needs. However, if I use
>
> GET
I applied the changes from the fs/ceph directory in the -topo branch and
now it looks better - the map tasks are running on the same nodes as the
splits they're processing. Good stuff!
On Mon, Jul 8, 2013 at 9:18 PM, Noah Watkins wrote:
> You might want to create a new branch and cherry-pick the
Awesome! Thanks for testing that out. I'll be merging that branch
soon, and will let you know when the new jar is published.
-Noah
On Tue, Jul 9, 2013 at 9:53 AM, ker can wrote:
> I applied the changes from the fs/ceph directory in the -topo branch and
> now it looks better - the map tasks are
Our last development release before dumpling is here! The main
improvements here are with monitor performance and OSD pg log rewrites to
speed up peering.
In other news, the dumpling feature freeze is upon us. The next month
we will be focusing entirely on stabilization and testing. There w
hi Noah,
while we're still on the Hadoop topic ... I was also trying out the
TestDFSIO tests, Ceph vs. Hadoop. The read tests on Ceph take about 1.5x
the HDFS time. The write tests are worse, about 2.5x the time on HDFS,
but I guess we have additional journaling overheads for the writes on ce
On Tue, Jul 9, 2013 at 12:35 PM, ker can wrote:
> hi Noah,
>
> while we're still on the Hadoop topic ... I was also trying out the
> TestDFSIO tests, Ceph vs. Hadoop. The read tests on Ceph take about 1.5x
> the HDFS time. The write tests are worse, about 2.5x the time on HDFS,
> but I guess
For this particular test I turned off replication for both HDFS and Ceph,
so there is just one copy of the data lying around.
hadoop@vega7250:~$ ceph osd dump | grep rep
pool 0 'data' rep size 1 min_size 1 crush_ruleset 0 object_hash rjenkins
pg_num 960 pgp_num 960 last_change 26 owner 0 crash_rep
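(For anyone reproducing the single-copy setup, it is presumably done with something like the commands below, using the 'data' pool from the dump above; treat this as a sketch rather than the exact steps used here.)

ceph osd pool set data size 1
ceph osd pool set data min_size 1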
Any updates on this? My production cluster has been running on one monitor for
a while and I'm a little nervous.
Can I expect a fix in 0.61.5? Thank you.
> (Re-adding the list for future reference)
>
> Wolfgang, from your log file:
>
> 2013-06-25 14:58:39.739392 7fa329698780 -1 common/
On Tue, 9 Jul 2013, Jeppesen, Nelson wrote:
>
> Any updates on this? My production cluster has been running on one monitor f
> or a while and I'm a little nervous.
>
>
>
> Can I expect a fix in 0.61.5? Thank you.
This is fixed in the current cuttlefish branch and will be in the next
0.61.x u
by the way ... here's the log of the write.
13/07/09 05:52:56 INFO fs.TestDFSIO: - TestDFSIO - : write (HDFS)
13/07/09 05:52:56 INFO fs.TestDFSIO:            Date & time: Tue Jul 09 05:52:56 PDT 2013
13/07/09 05:52:56 INFO fs.TestDFSIO:        Number of files: 300
13/07/09 05:52:56 INFO fs
Hi,
I'm planning a new cluster on a 10GbE network.
Each storage node will have a maximum of 12 SATA disks and 2 SSDs as journals.
What do you suggest as the journal size for each OSD? Is 5GB enough?
Should I just consider the SATA write speed when calculating the journal
size, or also the network speed?
In this slide deck, on slide #14, there is some stuff about being able to
embed code in the ceph-osd daemon via a plugin API. Are there links to some
examples of how to do that?
http://indico.cern.ch/getFile.py/access?contribId=9&sessionId=1&resId=1&materialId=slides&confId=246453
thx
kc
On Wed, Jul 10, 2013 at 1:16 AM, Gandalf Corvotempesta
wrote:
> Hi,
> I'm planning a new cluster on a 10GbE network.
> Each storage node will have a maximum of 12 SATA disks and 2 SSDs as journals.
>
> What do you suggest as the journal size for each OSD? Is 5GB enough?
> Should I just consider SATA wr
On Tue, Jul 9, 2013 at 2:37 PM, ker can wrote:
> In this slide deck, on slide #14, there is some stuff about being able to
> embed code in the ceph-osd daemon via a plugin API. Are there links to some
> examples of how to do that?
>
> http://indico.cern.ch/getFile.py/access?contribId=9&sessionId=1
Hrm! Were you using 4MB of data with rados put? Also, I don't know how much extra latency running "rados put" would add from start to finish. Is it slower than RADOS bench when you loop it? It may not show much concurrency if the writes on the OSDs are finishing quickly and waiting on the next o
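(For the loop comparison, something along these lines would do; the pool name and object prefix are made up for the sketch.)

# write one hundred 4 MB objects sequentially with 'rados put' and time the whole run
dd if=/dev/zero of=/tmp/obj4m bs=4M count=1
time for i in $(seq 1 100); do
    rados -p testpool put "put_bench_$i" /tmp/obj4m
done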
> Is the JNI interface still an issue or have we moved past that?
We haven't done much performance tuning with Hadoop, but I suspect
that the JNI interface is not a bottleneck.
My very first thought about what might be causing slow read
performance is the read-ahead settings we use vs Hadoop. Ha
Greg pointed out the read-ahead client options. I would suggest
fiddling with these settings. If things improve, we can put automatic
configuration of these settings into the Hadoop client itself. At the
very least, we should be able to see if it is the read-ahead that is
causing performance proble
Makes sense. I can try playing around with these settings. When you're
saying client, would this be libcephfs.so?
On Tue, Jul 9, 2013 at 5:35 PM, Noah Watkins wrote:
> Greg pointed out the read-ahead client options. I would suggest
> fiddling with these settings. If things improve, we
Thank you for the response.
You are talking about the median expected writes, but should I consider the single
disk write speed or the network speed? A single disk is 100MB/s, so
100*30=3000MB of journal for each OSD? Or should I consider the network
speed, which is 1.25GB/s?
Why 30 seconds? default flush fr
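(As a rough sizing sketch, not specific to this cluster: the usual rule of thumb sizes the journal from the expected throughput times the filestore sync interval, and the per-OSD share of the network tells you whether disk or network is the limiting factor. The numbers below are examples only.)

# per-OSD share of a 10GbE link spread across 12 OSDs:
echo $(( 1250 / 12 ))     # ~104 MB/s, roughly the same as a single SATA disk
# journal size ~= 2 * expected throughput (MB/s) * filestore max sync interval (s)
echo $(( 2 * 100 * 5 ))   # 1000 MB per OSD journal with a 5 s sync interval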
Yes, the libcephfs client. You should be able to adjust the settings
without changing any code. The settings should be adjustable either by
setting the config options in ceph.conf, or using the
"ceph.conf.options" settings in Hadoop's core-site.xml.
On Tue, Jul 9, 2013 at 4:26 PM, ker can wrote: