Hi ceph-users,
I recently deployed a Ceph cluster using the *ceph-deploy* utility on
RHEL6.4, and along the way I came across a couple of issues / questions which I
would like to ask for your help with.
1. ceph-deploy does not help to install dependencies (snappy leveldb gdisk
python-argparse gper
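For reference, the legible dependencies from that (truncated) list can be pulled in by hand; a minimal sketch for RHEL6.4, assuming EPEL is enabled for the packages that are not in the base repositories:

  sudo yum install -y snappy leveldb gdisk python-argparse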
Hi all
Do gateway instances mean multiple processes of a gateway user for a Ceph cluster?
Although they are configured independently in the configuration file, can they
be configured with zones across different regions?
lixuehui
On 2013-09-27 at 15:30:21, Guang wrote:
> Hi ceph-users,
> I recently deployed a Ceph cluster using the *ceph-deploy* utility
> on RHEL6.4, and along the way I came across a couple of issues /
> questions which I would like to ask for your help with.
>
> 1. ceph-deploy does not help to
Sorry for replying only now, I did not get to try it earlier…
On Thu, 19 Sep 2013 08:43:11 -0500, Mark Nelson wrote:
On 09/19/2013 08:36 AM, Niklas Goerke wrote:
[…]
My Setup:
* Two Hosts with 45 Disks each --> 90 OSDs
* Only one newly created pool with 4500 PGs and a Replica Size of 2
-->
s
On Fri, Sep 27, 2013 at 3:30 AM, Guang wrote:
> Hi ceph-users,
> I recently deployed a Ceph cluster using the *ceph-deploy* utility, on
> RHEL6.4, and along the way I came across a couple of issues / questions which
> I would like to ask for your help with.
>
> 1. ceph-deploy does not help to install
> You can also create additional data pools and map directories to them, but
> this probably isn't what you need (yet).
Is there a link to a web page where you can read how to map a directory to a
pool? (I googled ceph map directory to pool ... and got this post)
On Fri, Sep 27, 2013 at 7:10 AM, Aronesty, Erik
wrote:
> > You can also create additional data pools and map directories to them, but
> > this probably isn't what you need (yet).
>
> Is there a link to a web page where you can read how to map a directory to a
> pool? (I googled ceph map dire
I see it's the undocumented "ceph.dir.layout.pool"
Something like:
setfattr -n ceph.dir.layout.pool -v mynewpool <empty-dir>
On an empty dir should work. I'd like one directory to be more heavily
mirrored so that a) objects are more likely to be on a less busy server b)
availability increas
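A sketch of the full sequence implied here, with the pool name, replica count, and directory path as placeholders (the pool id for add_data_pool can be read from ceph osd lspools):

  ceph osd pool create mynewpool 256
  ceph osd pool set mynewpool size 3
  ceph mds add_data_pool <pool-id>
  setfattr -n ceph.dir.layout.pool -v mynewpool /mnt/ceph/mydir

Raising the pool's size is what gives that one directory the heavier replication described above.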
Hello everyone,
I'm running a Cuttlefish cluster that hosts a lot of RBDs. I recently
removed a snapshot of a large one (12TB, via rbd snap rm), and I noticed
that all of the clients had markedly decreased performance. Looking
at iostat on the OSD nodes showed most disks pegged at 100% util.
I know
On Wed, Sep 25, 2013 at 2:07 PM, Gruher, Joseph R
wrote:
> Hi all-
>
>
>
> I am following the object storage quick start guide. I have a cluster with
> two OSDs and have followed the steps on both. Both are failing to start
> radosgw but each in a different manner. All the previous steps in the
Hi Corin!
On 09/24/2013 11:37 AM, Corin Langosch wrote:
Hi there,
do snapshots have an impact on write performance? I assume that on each write
all snapshots have to be updated (COW), so the more snapshots exist, the
worse write performance will get?
I'll be honest, I haven't tested it so I'm not s
On Fri, Sep 27, 2013 at 1:10 AM, lixuehui wrote:
> Hi all
> Do gateway instances mean multiple processes of a gateway user for a Ceph
> cluster? Although they are configured independently in the configuration
> file, can they be configured with zones across different regions?
Not sure I follow your questi
[cc ceph-devel]
Travis,
RBD doesn't behave well when Ceph maintenance operations create spindle
contention (i.e. 100% util from iostat). More about that below.
Do you run XFS under your OSDs? If so, can you check for extent
fragmentation? Should be something like:
xfs_db -c frag -r /dev/s
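For reference, a sketch of that check plus an online defragmentation pass; the device and OSD mount point below are placeholders:

  xfs_db -c frag -r /dev/sdb1
  xfs_fsr -v /var/lib/ceph/osd/ceph-0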
Hi Somnath,
With SSDs, you almost certainly are going to be running into bottlenecks
on the RGW side... Maybe even fastcgi or apache depending on the machine
and how things are configured. Unfortunately this is probably one of
the more complex performance optimization scenarios in the Ceph wo
Yes, I understand that.
I tried with a thread pool size of 300 (default 100, I believe). I am in the
process of running perf on radosgw as well as on the OSDs for profiling.
BTW, let me know if there is any particular Ceph component you want me to focus on.
Thanks & Regards
Somnath
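For anyone following the thread, that knob lives in ceph.conf under the gateway's client section; a sketch, with the section name depending on how the gateway instance is named:

  [client.radosgw.gateway]
      rgw thread pool size = 300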
Likely on the radosgw side you are going to see the top consumers be
malloc/free/memcpy/memcmp. If you have kernel 3.9 or newer compiled
with libunwind, you might get better callgraphs in perf which could be
helpful.
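One way to collect such a profile against a running radosgw, as a sketch (the 60-second window is arbitrary):

  perf record -g -p $(pidof radosgw) -- sleep 60
  perf report --stdio | head -50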
Mark
On 09/27/2013 01:56 PM, Somnath Roy wrote:
Yes, I understand that..
I
Hi Mike,
Thanks for the info. I had seen some of the previous reports of
reduced performance during various recovery tasks (and certainly
experienced them) but you summarized them all quite nicely.
Yes, I'm running XFS on the OSDs. I checked fragmentation on a few of
my OSDs -- all came back ~3
Hi,
I'm trying to set up my first cluster (I have never manually bootstrapped a
cluster before).
Is ceph-deploy osd activate/prepare supposed to write to the master ceph.conf
file, specific entries for each OSD along the lines of
http://ceph.com/docs/master/rados/configuration/osd-config-ref/ ?
I app
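For comparison, the per-OSD entries that page describes look roughly like the sketch below (hostnames are placeholders); OSDs prepared by ceph-deploy are normally activated from their GPT partition labels and do not strictly need such sections:

  [osd.0]
      host = ceph-node1
  [osd.1]
      host = ceph-node2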
Hi,
I probably did something wrong setting up my cluster with 0.67.3. I
previously built a cluster with 0.61 and everything went well, even after
an upgrade to 0.67.3. Now I built a fresh 0.67.3 cluster and when I try to
mount CephFS:
aaron@seven ~ $ sudo mount -t ceph 10.42.6.21:/ /mnt/ceph
moun
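If the new cluster has cephx enabled (the default in 0.67), the kernel client needs credentials on the mount line; a sketch, with the secret file path as a placeholder:

  sudo mount -t ceph 10.42.6.21:/ /mnt/ceph -o name=admin,secretfile=/etc/ceph/admin.secret

where the secret file contains only the output of ceph auth get-key client.admin.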
I created a radosgw user and a swift subuser and attempted to generate a key for
the swift user, using the commands below. However, the swift key was empty when
the command completed. What did I miss?
root@controller21:/etc# radosgw-admin user create --uid=rados
--display-name=rados --email=n...@
Hi Tim,
Try adding --gen-key to your create command (you should be able to create
a key for the subuser you already created).
Thanks,
Matt
On 9/27/13 4:35 PM, "Snider, Tim" wrote:
>I created a radosgw user and swift subuser and attempted to generate a
>key for the swift user. Using the command
On Fri, Sep 27, 2013 at 2:12 PM, Aaron Ten Clay wrote:
> Hi,
>
> I probably did something wrong setting up my cluster with 0.67.3. I
> previously built a cluster with 0.61 and everything went well, even after an
> upgrade to 0.67.3. Now I built a fresh 0.67.3 cluster and when I try to
> mount Ceph
On Fri, Sep 27, 2013 at 2:44 PM, Gregory Farnum wrote:
> What is the output of ceph -s? It could be something underneath the
> filesystem.
>
> root@chekov:~# ceph -s
cluster 18b7cba7-ccc3-4945-bb39-99450be81c98
health HEALTH_OK
monmap e3: 3 mons at {chekov=10.42.6.29:6789/0,laforge=10.42
Thanks, that worked - you were close.
This is another documentation issue: the --gen-secret parameter requirement
isn't mentioned on http://ceph.com/docs/next/radosgw/config/ .
Enabling Swift Access
Allowing access to the object store with Swift (OpenStack
Object Storage) compatibl
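Putting the thread together, a sketch of the subuser and key commands with --gen-secret, reusing the uid from the original message:

  radosgw-admin subuser create --uid=rados --subuser=rados:swift --access=full
  radosgw-admin key create --subuser=rados:swift --key-type=swift --gen-secret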
Hello!
Does RADOS Gateway support or integrate with OpenStack (Grizzly)
authentication (Keystone PKI)?
Can RADOS Gateway use PKI tokens to verify user tokens without
explicit calls to Keystone?
Thanks!
Amit
Amit Vijairania | 978.319.3684
--*--
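For reference, a sketch of the Keystone-related gateway settings from that era (values are placeholders, and this does not by itself answer the PKI-token question):

  [client.radosgw.gateway]
      rgw keystone url = http://keystone-host:35357
      rgw keystone admin token = ADMIN_TOKEN
      rgw keystone accepted roles = Member, admin
      rgw s3 auth use keystone = true
      nss db path = /var/ceph/nss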
Thanks Sage for helping me understand Ceph much more deeply!
I recently have a few more questions, as follows:
1. As we know, ceph -s gives a summary of the system's state; is there any
tool to monitor the details of the data flow when the CRUSH map is changed?
2. In my understanding, the mapping between
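Not a full answer to question 1, but a sketch of commands commonly used to watch data movement after a CRUSH change (the pool and object names are placeholders):

  ceph -w
  ceph osd map rbd some-object
  ceph pg dump | grep -E 'backfill|recovering'

ceph -w streams cluster events, ceph osd map shows which PG and OSDs one object maps to, and the pg dump filter lists PGs currently moving data.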