Hi again,
I realized that the problem is caused by the space in our room name '0513
R-0050'. If I change the space to a dash, it compiles.
The strange thing is that neither ceph osd crush add-bucket nor ceph osd crush
set complains about the space in a bucket name. And I didn't find a way to
es
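For anyone else who hits this, a minimal sketch of the dash workaround (the bucket name comes from above; the "room" type and the root=default location are assumptions about the hierarchy):
# Create the room bucket with a dash instead of a space, then place it
# in the CRUSH hierarchy.
ceph osd crush add-bucket 0513-R-0050 room
ceph osd crush move 0513-R-0050 root=default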
Hi There,
I'm test driving Hadoop with CephFS as the storage layer. I was running the
Terasort benchmark and I noticed a lot of network IO activity when
compared to an HDFS storage layer setup. (It's a half-terabyte sort
workload over two data nodes.)
Digging into the job tracker logs a little,
That explanation makes quite a lot of sense — unfortunately the crush
parser isn't very intelligent right now.
Could you put a ticket in the tracker (ceph.com/tracker) describing
this issue? :)
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Mon, Jul 8, 2013 at 12:45 PM, Da
On Mon, 8 Jul 2013, Gregory Farnum wrote:
> That explanation makes quite a lot of sense — unfortunately the crush
> parser isn't very intelligent right now.
>
> Could you put a ticket in the tracker (ceph.com/tracker) describing
> this issue? :)
Already there,
http://tracker.ceph.com/iss
On 2013-07-08 9:56 PM, "Sage Weil" wrote:
>
> On Mon, 8 Jul 2013, Gregory Farnum wrote:
> > That explanation makes quite a lot of sense — unfortunately the crush
> > parser isn't very intelligent right now.
> >
> > Could you put a ticket in the tracker (ceph.com/tracker) describing
> > this issue?
Hi KC,
The locality information is now collected and available to Hadoop
through the CephFS API, so fixing this is certainly possible. However,
there has not been extensive testing. I think the tasks that need to
be completed are (1) make sure that `CephFileSystem` is encoding the
correct block lo
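A rough way to sanity-check the raw locality data from the Ceph side (a sketch, not from this thread; it assumes the default "data" pool, a CephFS mount at /mnt/ceph, and an illustrative file path — CephFS stores a file's blocks as RADOS objects named <inode-in-hex>.<block-index>):
# Find the file's inode and express it in hex.
ino=$(stat -c %i /mnt/ceph/terasort/part-00000)
hexino=$(printf '%x' "$ino")
# Ask the cluster which PG and OSDs hold the first block of the file.
ceph osd map data "${hexino}.00000000"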
Hello,
I am testing ceph using ubuntu raring with ceph version 0.61.4
(1669132fcfc27d0c0b5e5bb93ade59d147e23404) on 3 virtualbox nodes.
What is this HEALTH_WARN indicating?
# ceph -s
health HEALTH_WARN
monmap e3: 3 mons at {node1=192.168.56.191:6789/0,node2=192.168.56.192:6789/0,node3=192
Run "ceph health detail" and it should give you more information.
(I'd guess an osd or mon has a full hard disk)
Cheers
Mike
On 8 July 2013 21:16, Jordi Llonch wrote:
> Hello,
>
> I am testing ceph using ubuntu raring with ceph version 0.61.4
> (1669132fcfc27d0c0b5e5bb93ade59d147e23404) on 3 vir
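For reference, the quick checks that usually pin down a bare HEALTH_WARN on a small test cluster (a sketch; /var/lib/ceph is the default data path):
# Exactly what is triggering the warning.
ceph health detail
# Cluster-wide and per-pool space usage.
ceph df
# Free space on each node; a nearly full disk on a mon or osd is a
# common cause on small VirtualBox VMs.
df -h /var/lib/ceph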
It is that command... however, if you are following the docs (like I did), then
you will see that your key is NOT '/etc/ceph/ceph.keyring'; it is
'/etc/ceph/ceph.client.admin.keyring'. So try:
sudo ceph -k /etc/ceph/ceph.client.admin.keyring auth add
client.radosgw.gateway -i /etc/ceph/keyring
Yes, all of the code needed to get the locality information should be
present in the version of the jar file you referenced. We have tested
to make sure the right data is available, but have not extensively
tested that it is being used correctly by core Hadoop (e.g. that it is
being correctly propagate
Shain - you are correct: there is no entry in 'ceph auth list'. How do I
register the key? I have the contents needed:
# ls /etc/ceph
ceph.bootstrap-mds.keyring ceph.conf keyring.radosgw.gateway
ceph.bootstrap-osd.keyring ceph.log
ceph.client.admin.keyring client.fedora.keyring
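A sketch of registering it from the keyring file shown in that listing (this mirrors the command earlier in the thread; adjust the -i path if your gateway key lives elsewhere):
# Import the gateway key into the cluster's auth database, authenticating
# with the admin keyring, then confirm it is listed.
ceph -k /etc/ceph/ceph.client.admin.keyring \
    auth add client.radosgw.gateway -i /etc/ceph/keyring.radosgw.gateway
ceph auth list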
Hi dear list :)
I have a small doubt about these two options, as the documentation
states this:
osd client op priority
Description: The priority set for client operations. It is relative to
osd recovery op priority.
Default: 63
osd recovery op priority
Description: The priority
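For reference, these are plain OSD config options; a sketch of where they are set (the values are examples only, not recommendations):
# In ceph.conf, under [osd]:
#   osd client op priority = 63
#   osd recovery op priority = 5
# Or changed at runtime on a single OSD without a restart:
ceph tell osd.0 injectargs '--osd_client_op_priority 63 --osd_recovery_op_priority 5'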
Hi Noah, okay, I think the current version may have a problem; I haven't
figured out where yet. I'm looking at the log messages and how the data blocks
are distributed among the OSDs.
So, the job tracker log had, for example, this output for the map task for
the first split/block 0 – which it's executing on
KC,
Thanks a lot for checking that out. I just went to investigate, and
the work we have done on the locality/topology-aware features is
sitting in a branch and has not been merged into the tree that is
used to produce the JAR file you are using. I will get that cleaned up
and merged soon, and
On Mon, Jul 8, 2013 at 6:13 PM, Mikaël Cluseau wrote:
> Hi dear list :)
>
> I have a small doubt about these two options, as the documentation states
> this :
>
> osd client op priority
>
> Description: The priority set for client operations. It is relative to
> osd recovery op priority.
> Defa
FYI, here is the patch as it currently stands:
https://github.com/ceph/hadoop-common/compare/cephfs;branch-1.0...cephfs;branch-1.0-topo
I have not tested it recently, but it looks like it should be close to
correct. Feel free to test it out--I won't be able to get to it until
tomorrow or Wednesday.
Yep, I'm running cuttlefish ... I'll try building out of that branch and
let you know how that goes.
-KC
On Mon, Jul 8, 2013 at 9:06 PM, Noah Watkins wrote:
> FYI, here is the patch as it currently stands:
>
>
> https://github.com/ceph/hadoop-common/compare/cephfs;branch-1.0...cephfs;branch-1.0
You might want to create a new branch and cherry-pick the
topology-relevant commits (I think there are one or two) from the -topo
branch into cephfs/branch-1.0. I'm not sure what the -topo branch might
be missing as far as bug fixes and such.
On Mon, Jul 8, 2013 at 7:11 PM, ker can wrote:
> Yep, I'm run
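A rough sequence for that (repo and branch names come from the links above; <commit> is a placeholder for whichever topology commits show up in the log):
git clone https://github.com/ceph/hadoop-common.git
cd hadoop-common
# See what the -topo branch adds on top of branch-1.0.
git log --oneline origin/cephfs/branch-1.0..origin/cephfs/branch-1.0-topo
# Start a clean branch from cephfs/branch-1.0 and pick the topology
# commits over.
git checkout -b cephfs-topo-test origin/cephfs/branch-1.0
git cherry-pick <commit>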
On Mon, Jul 8, 2013 at 8:08 PM, Mikaël Cluseau wrote:
>
> Hi Greg,
>
> thank you for your (fast) answer.
Please keep all messages on the list. :)
I just realized you were talking about increased latencies during
scrubbing; the options you reference are for data recovery, not
scrubbing. However,
Hi Greg,
thank you for your (fast) answer.
Since we're going more in-depth, I must say:
* we're running 2 Gentoo GNU/Linux servers doing both storage and
virtualization (I know this is not recommended but we mostly have a
low load and virtually no writes outside of ceph)
* sys-cluster
On 09/07/2013 14:41, Gregory Farnum wrote:
On Mon, Jul 8, 2013 at 8:08 PM, Mikaël Cluseau wrote:
Hi Greg,
thank you for your (fast) answer.
Please keep all messages on the list. :)
oops, reply-to isn't set by default here ^^
I just realized you were talking about increased latencies dur
On 09/07/2013 14:57, Mikaël Cluseau wrote:
I think I'll go for the second option because the problematic load
spikes seem to have a period of 24h + epsilon...
Seems good: the load drops below the 1.0 line, ceph starts to scrub,
the scrub is fast and the load goes back above 1.0, there's a pause,
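For reference, the scrub-scheduling knobs involved here are roughly these (a sketch; the values are illustrative, not recommendations):
# ceph.conf, [osd] section:
#   osd scrub load threshold = 0.5    # don't start scrubs while loadavg is above this
#   osd scrub min interval = 86400    # seconds between scrubs when load allows
#   osd scrub max interval = 604800   # force a scrub at least this often, load or not
# Or adjusted live on a single OSD:
ceph tell osd.0 injectargs '--osd_scrub_load_threshold 0.5'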
Any idea?
Thanks a lot,
Álvaro.
-Original message-
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On behalf of Alvaro Izquierdo Jimeno
Sent: Friday, July 5, 2013 11:58
To: Yehuda Sadeh
CC: ceph-users@lists.ceph.com
Subject: Re: [ceph-user
From what I can tell, this should be enough. I'll need to see more
concrete logs to figure out what went wrong though.
Yehuda
On Mon, Jul 8, 2013 at 10:47 PM, Alvaro Izquierdo Jimeno
wrote:
> Any idea?
>
> Thanks a lot,
> Álvaro.
>
> -Original message-
> From: ceph-users-boun...@lists.cep
Can you try using 'HTTP-X-Container-Read' instead?
On Mon, Jul 8, 2013 at 11:31 PM, Alvaro Izquierdo Jimeno
wrote:
> Hi,
>
> I'm using RedHat 6.4.
> Attached two files: one with the log output from GET bucket1 from ytenant and
> the other with the log output from GET object1 from ytenant (both w
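For concreteness, a curl sketch of what is being tried (the token, endpoint and the ytenant:yuser ACL value are placeholders; the header name follows the workaround suggested above rather than the standard Swift X-Container-Read):
# Set a read ACL on bucket1 for the other tenant's user via the Swift API.
curl -i -X POST \
    -H "X-Auth-Token: $TOKEN" \
    -H "HTTP-X-Container-Read: ytenant:yuser" \
    "$STORAGE_URL/bucket1"
# Check whether the ACL shows up on the container metadata.
curl -I -H "X-Auth-Token: $TOKEN" "$STORAGE_URL/bucket1"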
Hello,
Is there any limit on, or recommendation for, the number of objects stored
in one container (rados)? Will performance be affected if I store one
thousand versus 100 million objects?
Thank you,
Mihaly
The same result.
Attached is the log from creating the bucket with the header 'HTTP-X-Container-Read'.
Response of a HEAD on bucket1:
< HTTP/1.1 204
< Date: Tue, 09 Jul 2013 06:53:55 GMT
< Server: Apache/2.2.15 (Red Hat)
< X-Container-Object-Count: 1
< X-Container-Bytes-Used: 6163
< X-Container-Bytes-Us
Hello Guys:
I am working with ceph nowadays and I want to set up a system that
includes a web page to create ceph object storage users.
So, I tried to use the Admin Ops API to fulfill my needs. However, if I use
GET /admin/usage?format=json HTTP/1.1
Host: ceph-server
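One thing worth checking: the Admin Ops requests have to be signed by a radosgw user that carries admin caps. A sketch of creating one (the uid and display name are arbitrary placeholders):
# Create a dedicated user for the management web page and give it the
# caps the Admin Ops endpoints check for.
radosgw-admin user create --uid=admin-api --display-name="Admin API user"
radosgw-admin caps add --uid=admin-api \
    --caps="usage=read,write; users=read,write; buckets=read,write"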
Hi,
We are just deploying a new cluster (0.61.4) and noticed this:
[root@andy01 ~]# ceph osd getcrushmap -o crush.map
got crush map from osdmap epoch 2166
[root@andy01 ~]# crushtool -d crush.map -o crush.txt
[root@andy01 ~]# crushtool -c crush.txt -o crush2.map
crush.txt:640 error: parse error at
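For completeness, the rest of the round trip once the line flagged by the parse error is fixed (a sketch continuing the commands above; in this thread the culprit turned out to be a bucket name containing a space):
# Edit crush.txt, then recompile and push the corrected map back in.
crushtool -c crush.txt -o crush2.map
crushtool -d crush2.map -o /dev/null    # quick check that it decompiles cleanly
ceph osd setcrushmap -i crush2.map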
There is nothing in the radosgw logs. This led me to believe it is not running.
Should there be a daemon constantly running? I assume so, but I can't see one.
The radosgw start script runs without error:
# bash -xv /etc/rc.d/init.d/ceph-radosgw start
...
...
+ runuser -s /bin/bash apache -
What is the output of 'ceph auth list'?
There should be an entry similar to this one:
client.radosgw.gateway
key: AQB6H9NR6IcMJBAAZOuGdrKPjLXfkEXmNoOirw==
caps: [mds] allow
caps: [mon] allow rw
caps: [osd] allow rwx
If it does not exist you will need to create it.
Shain
On 07
Check the logs. There was an error in the doc. It now requires monitor
write permissions to create the pools for radosgw. I fixed it last
Wednesday, I believe.
On Mon, Jul 8, 2013 at 9:24 AM, Shain Miley wrote:
> What is the output of 'ceph auth list'?
>
> There should be an entry similar to this
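So a gateway key created from the old doc can be fixed in place rather than recreated; a sketch (the caps mirror the entry quoted above, including the monitor write access radosgw needs to create its pools):
# Replace the existing key's capabilities, then verify.
ceph auth caps client.radosgw.gateway mds 'allow' mon 'allow rw' osd 'allow rwx'
ceph auth get client.radosgw.gateway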
No. Creating a new container and copying the data over is the only way, I
believe.
On Fri, Jul 5, 2013 at 4:32 AM, Mihály Árva-Tóth <
mihaly.arva-t...@virtual-call-center.eu> wrote:
> Hello,
>
> Is there any method to rename a container? (via swift or S3 API tools)
>
> Thank you,
> Mihaly
>
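A sketch of the copy-over approach through the Swift API (the token, endpoint and names are placeholders, and it assumes the gateway supports server-side copies; each object still has to be copied individually):
# Create the destination container.
curl -X PUT -H "X-Auth-Token: $TOKEN" "$STORAGE_URL/newcontainer"
# Server-side copy of one object into it; loop over a container listing
# for the rest, then delete the old container.
curl -X PUT -H "X-Auth-Token: $TOKEN" \
    -H "X-Copy-From: /oldcontainer/object1" \
    -H "Content-Length: 0" \
    "$STORAGE_URL/newcontainer/object1"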