As the doc
http://ceph.com/docs/master/rados/operations/placement-groups/
I get this formula:
    Total PGs = (OSDs * 100) / Replicas
My ceph cluster has 1 pool, 5 OSDs, and 2 replicas, so I have set pg_num
to 250.
One day, if I need to add a pool (2 pools), what
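The formula in the message above can be sketched as code. Note the final rounding up to a power of two is my addition, following the common recommendation in the placement-groups docs, and is not part of the original message:

```python
def total_pgs(osds, replicas, target_pgs_per_osd=100):
    """Total PGs per the formula (OSDs * 100) / Replicas,
    rounded up to the next power of two."""
    raw = osds * target_pgs_per_osd / replicas
    power = 1
    while power < raw:
        power *= 2
    return power

# 5 OSDs, 2 replicas -> raw 250, rounded up to 256
print(total_pgs(5, 2))
```

With the poster's numbers this suggests 256 rather than 250, since pg_num is usually rounded to a power of two.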
On 2014/3/19 15:11, alan.zhang wrote:
> As the doc http://ceph.com/docs/master/rados/operations/placement-groups/
> I get this formula:
>     Total PGs = (OSDs * 100) / Replicas
> In my ceph cluster have 1 pool,
This formula recommends keeping the number of PGs assigned to each OSD
below 100.
Hi,
I've just upgraded a test cluster to Emporer, and one of my S3 buckets
seems to have broken.
S3 access is returning a 500 code (UnknownError).
Running bucket stats, it's missing from the list.
Trying to do it explicitly:
radosgw-admin bucket stats --bucket=productimages
2014-03-19 10:06:17.8
Hi,
I have ceph 0.72.2 running on debian wheezy with cloudera 5.0 beta 2
hadoop. I have installed the ceph hadoop binding with hadoop 2.x
support. I am able to run the command such as
# hadoop fs -ls /
# hdfs dfs -touchz /test
But when I start the mapreduce job history server I am getting the er
Hi, Karol
Here is something that I can share. We are running Ceph as an Exchange
Backend via iSCSI. We currently host about 2000 mailboxes, which is about
7 TB of data overall. Our configuration is:
- Proxy Node (with tgt daemon) x 2
- Ceph Monitor x 3 (virtual machines)
- Ceph OSD x 50 (SATA 7200rpm
You are right, but I still don't know why the objects in .rgw.buckets are not
overwritten.
If the object name is produced from the ino and ono, why does the same file
(bigger than 4 MB) give a different result?
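As a hedged sketch of the ino/ono naming scheme mentioned above: this mirrors CephFS-style RADOS object naming, where a file striped into 4 MB chunks maps to objects named from the inode and object number. It is illustration only; RGW uses its own naming for .rgw.buckets objects:

```python
# Sketch (assumption): CephFS-style object names "<ino hex>.<ono, 8 hex digits>".
def object_names(ino, size_bytes, stripe=4 * 1024 * 1024):
    """List the RADOS object names a striped file would map to."""
    n_objects = max(1, -(-size_bytes // stripe))  # ceiling division
    return ["%x.%08x" % (ino, ono) for ono in range(n_objects)]

# A 9 MB file with inode 0x10000000000 spans three 4 MB objects:
print(object_names(0x10000000000, 9 * 1024 * 1024))
```

With a fixed stripe size, the same inode and length always yield the same names, which is why identical files mapping to different objects would be surprising under this scheme.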
Thanks & Regards
Li JiaMin
So I've done some more digging, and running the radosgw in debug mode I
found some messages from osd.3 saying IOError, when it was trying to get
.rgw:productimages.
I took that OSD down, and everything started working.
My question now is, why didn't that OSD suicide when it hit an IOError,
instead
On Wed, Mar 19, 2014 at 4:28 AM, Gurvinder Singh
wrote:
> Hi,
>
> I have ceph 0.72.2 running on debian wheezy with cloudera 5.0 beta 2
> hadoop. I have installed the ceph hadoop binding with hadoop 2.x
> support. I am able to run the command such as
From github.com/noahdesu/cephfs-hadoop patched
Cisco is searching for an experienced DevOps engineer to work as part of a team
characterizing the stability, scale and performance of a large distributed
cloud architecture. This position focuses on locating the bottlenecks in the
architecture and developing test suites to add to CI/CD efforts
On 03/19/2014 03:04 PM, Alejandro Bonilla wrote:
> Hi Gurvinder,
>
> This setup sounds interesting. Which guide did you follow?
>
There wasn't any specific guide to follow. But the documentation from
ceph wiki http://ceph.com/docs/master/cephfs/hadoop/ has helped. I can
write a step by step post
On 03/19/2014 04:50 PM, Noah Watkins wrote:
> Since `hadoop -fs ls /` seems to work on your local node, can you
> verify that (1) it is in fact listing the contents of CephFS, and (2)
> that on your worker nodes where the error is occurring that the
> relevant dependencies (naming the Ceph hadoop bindings) are installed
> and in the classpath?
On 03/19/2014 05:18 PM, Noah Watkins wrote:
> Err, obviously switching things out for Ceph rather than Gluster.
>
> On Wed, Mar 19, 2014 at 9:18 AM, Noah Watkins
> wrote:
>> Looks like this is a configuration issue that has popped up with other
>> 3rd party file systems in Hadoop 2.x with YARN.
On Ubuntu 13.10 with Ceph 0.72, after manually putting in the patch from
http://tracker.ceph.com/issues/6966
I was able to create my dmcrypt OSD with:
ceph-deploy disk zap tca14:/dev/cciss/c0d1
ceph-deploy --verbose osd create --dmcrypt tca14:/dev/cciss/c0d1
Looking at the mount points with df
Is there a way to set cache control headers for objects served by the rados
gateway? In Apache one would modify the .htaccess file to set the required
cache control headers, but I wonder how one would do this with rgw when using
it as a CDN origin.
-Steve
On 03/19/2014 05:58 PM, Noah Watkins wrote:
> That certainly is odd. Does it work if you list both old and new
> properties (perhaps the CLI tools are looking at an older property..
> but that seems unlikely)? Sorry I don't have more answers, I haven't
> yet deployed Hadoop 2.x..
Another strange t
On 03/19/2014 03:51 PM, Noah Watkins wrote:
> On Wed, Mar 19, 2014 at 4:28 AM, Gurvinder Singh
> wrote:
>> Hi,
>>
>> I have ceph 0.72.2 running on debian wheezy with cloudera 5.0 beta 2
>> hadoop. I have installed the ceph hadoop binding with hadoop 2.x
>> support. I am able to run the command suc
That certainly is odd. Does it work if you list both old and new
properties (perhaps the CLI tools are looking at an older property..
but that seems unlikely)? Sorry I don't have more answers, I haven't
yet deployed Hadoop 2.x..
On Wed, Mar 19, 2014 at 9:30 AM, Gurvinder Singh
wrote:
> On 03/19/
Calling all potential Google Summer of Code participants!
This is just a friendly reminder that there are only two days
remaining in the submission window to be considered for this year's
summer of code. If you, or someone you know, is still planning on
submitting a proposal to work on the Ceph project
Err, obviously switching things out for Ceph rather than Gluster.
On Wed, Mar 19, 2014 at 9:18 AM, Noah Watkins wrote:
> Looks like this is a configuration issue that has popped up with other
> 3rd party file systems in Hadoop 2.x with YARN.
>
>
> http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201306.mbox/%3c1023550423.3137743.1371825668412.javamail.r...@redhat.com%3E
Since `hadoop -fs ls /` seems to work on your local node, can you
verify that (1) it is in fact listing the contents of CephFS, and (2)
that on your worker nodes where the error is occurring that the
relevant dependencies (naming the Ceph hadoop bindings) are installed
and in the classpath?
The err
Looks like this is a configuration issue that has popped up with other
3rd party file systems in Hadoop 2.x with YARN.
http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201306.mbox/%3c1023550423.3137743.1371825668412.javamail.r...@redhat.com%3E
Says use this:
fs.AbstractFileSystem.
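The property name above is truncated in this message and cannot be recovered here. For illustration only, a core-site.xml entry of that shape might look like the following; the property suffix and the class name are assumptions, not taken from the thread:

```xml
<!-- Hypothetical example: register an AbstractFileSystem implementation
     for the "ceph" URI scheme so YARN/MapReduce can resolve ceph:// paths.
     The class name below is an assumption; check the cephfs-hadoop
     README for the actual binding class. -->
<property>
  <name>fs.AbstractFileSystem.ceph.impl</name>
  <value>org.apache.hadoop.fs.ceph.CephFs</value>
</property>
```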
My understanding, from dealing with replication, is RadosGW is
copy-on-write. Overwriting an object is a delete and create, and the old
data gets garbage collected later.
I'm guessing a bit, but that's what I believe from Greg's comment about
RGW replication:
http://permalink.gmane.org/gmane.comp.
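As a generic illustration of the overwrite-as-delete-and-create pattern described above (a toy model, not RGW's actual code):

```python
class CowStore:
    """Toy illustration of copy-on-write overwrite semantics
    (assumption: simplified model, not RGW's implementation).
    An overwrite creates a new object and queues the old data
    for later garbage collection."""
    def __init__(self):
        self._version = 0
        self.objects = {}   # name -> (version, data)
        self.gc_queue = []  # superseded (version, data) pairs awaiting GC

    def put(self, name, data):
        old = self.objects.get(name)
        if old is not None:
            self.gc_queue.append(old)  # old data lingers until GC runs
        self._version += 1
        self.objects[name] = (self._version, data)

    def gc(self):
        """Reclaim superseded data; returns how many entries were freed."""
        reclaimed = len(self.gc_queue)
        self.gc_queue.clear()
        return reclaimed

store = CowStore()
store.put("obj", b"v1")
store.put("obj", b"v2")   # delete-and-create: b"v1" moves to the GC queue
print(store.objects["obj"][1], store.gc())  # prints: b'v2' 1
```

The point is that readers holding the old data are never confused by an in-place mutation; the old bytes survive until GC runs.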
Craig Lewis writes:
>
>
> I've looked into this a bit, and the
> best I've come up with is to snapshot all of the RGW pools. I
> asked a similar question before:
http://comments.gmane.org/gmane.comp.file-systems.ceph.user/855
> I am planning to have a 2nd cluster for dis
I'm planning to add RGW Snapshots to Giant:
https://wiki.ceph.com/Planning/Blueprints/Giant/rgw%3A_Snapshots .
I'm still getting my development environment setup, so I don't have
anything on github yet. If you're interested in testing, my repo is
github.com/clewis/ceph
*Craig Lewis*
Senior
The RGW objects are [for most cases] immutable. Therefore in order to
provide read and write consistency we keep most of the data in
immutable rados objects, and do the operations on the mutable 'head'
part of the object atomically. So this allows us to do stuff like
having one user read data of th
Exactly what errors did you see, from which log? In general the OSD
does suicide on filesystem errors.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Wed, Mar 19, 2014 at 4:06 AM, Mike Bryant wrote:
> So I've done some more digging, and running the radosgw in debug mode I
I'm afraid the exact error got lost out of scrollback before I thought to
save it.
I was running radosgw in debug mode (-d --debug-rgw=20 --debug-ms=20), and
it was along the lines of this:
2014-03-19 7faeb7fff700 20 get_obj_state: rctx=0x7fae78002df0
obj=.rgw:productimages state=0x7fae7800a368 s-
I haven't worked with Hadoop in a while, but from the error it sounds
like the map reduce server needs another config option set specifying
which filesystem to work with. I don't think those instructions you
linked to are tested with hadoop 2.
-Greg
Software Engineer #42 @ http://inktank.com | http
Hello,
I want to bind an OSD to a virtual network interface (let's call it
eth0:0). The setting is the following. There are two net interfaces:
eth0 --> 10.0.0.100
eth0:0 --> 10.0.0.200
"/etc/hostname" contains "testnode" and in "/etc/hosts" I have:
10.0.0.100 testnode
10.0.0.200 testnode0
Bo
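One way to pin a daemon to a specific address is the per-daemon `public addr` option in ceph.conf. A minimal sketch, assuming the OSD in question is osd.0 and the alias address from the message above:

```ini
# Hypothetical ceph.conf fragment: bind osd.0 to the alias address.
# The OSD id and the addresses are assumptions for illustration.
[osd.0]
    public addr = 10.0.0.200
    cluster addr = 10.0.0.200
```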
Kyle,
Thanks for your prompt reply. I have been doing some further reading and
planning after receiving your valuable input.
>> 1. Is it possible to install Ceph and Ceph monitors on the the XCP
>> (XEN) Dom0 or would we need to install it on the DomU containing the
>> Openstack com
Hi, adding OSDs will trigger data migration, but only for a small part of
all the data. You can look at the CRUSH algorithm to learn more.
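As a rough back-of-the-envelope sketch (my approximation, not from the message): with CRUSH, the fraction of data that rebalances is roughly proportional to the capacity being added:

```python
# Approximation (assumption): uniform OSD weights, data moved when
# adding OSDs is roughly the new share of total capacity.
def migration_fraction(existing_osds, new_osds):
    return new_osds / (existing_osds + new_osds)

# Adding 1 OSD to a 5-OSD cluster moves about 1/6 of the data.
print(migration_fraction(5, 1))
```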
On 2014/3/10 10:25, You, Rong wrote:
Hi guys,
I need to add an extra server, which hosts several OSDs, to a
running ceph cluster. While adding the OSDs, ceph would not a
Hi Yehuda & Craig Lewis,
Thank you very much for your explanations. Does the head object have some info
about how to find the objects constituting the file, which is striped?
I am interested in the structure of the head object, but I can't see its
content.
Thanks & Regards
Li JiaMin