.mobstor.gq1.yahoo.com][INFO ] /dev/sda :
[repl101.mobstor.gq1.yahoo.com][INFO ] /dev/sda1 other, ext4, mounted on /boot
[repl101.mobstor.gq1.yahoo.com][INFO ] /dev/sda2 other, LVM2_member
Thanks,
Guang
_thread_pool_size": "100",
Is this expected?
4. cephx authentication. After reading through the cephx introduction, I got
the feeling that cephx is for client-to-cluster authentication, so each
librados user will need to create a new key. However, this page
http://ceph.com/docs/master/rados/operations/authentication/#enabling-cephx
confused me: why should we create keys for mon and osd, and how does that fit
into the authentication diagram? BTW, I found the keyrings under
/var/lib/ceph/{role}/ for each role; are they being used when one role talks
to another?
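For anyone following along, a minimal sketch of how those keys can be inspected
and how a dedicated client key might be created; the client name and pool below
are placeholders, not something from this thread:

  # list every entity (mon., osd.N, client.*) and its capabilities
  ceph auth list

  # the per-daemon keyrings mentioned above, e.g. for osd.0
  cat /var/lib/ceph/osd/ceph-0/keyring

  # create a key for a hypothetical librados application
  ceph auth get-or-create client.librados-app \
      mon 'allow r' osd 'allow rw pool=mypool' \
      -o /etc/ceph/ceph.client.librados-app.keyring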
Thanks,
Guang
et.gateway_bootstrap.HostNotFound: web2
Has anyone come across the same issue? It looks like I mis-configured the
network environment?
Thanks,
Guang
actually used ceph-deploy to install ceph onto
the web2 remote host…
Thanks,
Guang
Date: Wed, 25 Sep 2013 10:29:14 +0200
From: Wolfgang Hennerbichler
To:
Subject: Re: [ceph-users] Ceph deployment issue in physical hosts
Message-ID: <52429eda.8070...@risc-software.at>
Content-Type: text
her investigate.
Thanks all for the help!
Guang
On Sep 25, 2013, at 8:38 PM, Alfredo Deza wrote:
> On Wed, Sep 25, 2013 at 5:08 AM, Guang wrote:
>> Thanks Wolfgang.
>>
>> -bash-4.1$ ping web2
>> PING web2 (10.193.244.209) 56(84) bytes of data.
>> 64 bytes from web2 (
Hi ceph-users,
I recently deployed a ceph cluster with the *ceph-deploy* utility on RHEL 6.4;
along the way I came across a couple of issues / questions which I would like
to ask for your help with.
1. ceph-deploy does not help to install dependencies (snappy leveldb gdisk
python-argparse gper
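The dependency list above is cut off, but here is a sketch of installing those
packages by hand on RHEL 6.4 before running ceph-deploy (assuming EPEL is
enabled; gperftools-libs is my guess at the truncated item):

  sudo yum install -y snappy leveldb gdisk python-argparse gperftools-libs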
Hi ceph-users,
After walking through the operations document, I still have several questions
about operating / monitoring ceph and need your help. Thanks!
1. Does ceph provide a built-in monitoring mechanism for Rados and RadosGW?
Taking Rados for example, is it possible to monitor the
optimal any more and there is no
chance to correct it?
Thanks,
Guang
For the second issue, I found the answer here:
http://ceph.com/docs/master/rados/operations/monitoring-osd-pg/#finding-an-object-location.
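For reference, a minimal sketch of the commands that page describes for
locating an object; the pool and object names are placeholders:

  # write a test object, then ask where it maps to
  rados -p data put test-object /tmp/test.txt
  ceph osd map data test-object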
Thanks,
Guang
On Oct 8, 2013, at 8:43 PM, Guang wrote:
> Hi ceph-users,
> After walking through the operations document, I still have several que
Hi,
Can someone share your experience with monitoring the Ceph cluster? How is it
going with the work mentioned here:
http://wiki.ceph.com/01Planning/02Blueprints/Dumpling/ceph_stats_and_monitoring_tools
Thanks,
Guang
Thanks Mike.
Is there any documentation for that?
Thanks,
Guang
On Oct 9, 2013, at 9:58 PM, Mike Lowe wrote:
> You can add PGs, the process is called splitting. I don't think PG merging,
> the reduction in the number of PGs, is ready yet.
>
> On Oct 8, 2013, at 11:58
1.1PB to 1.2PB or move to 2PB directly?
Thanks,
Guang
On Oct 10, 2013, at 11:10 AM, Michael Lowe wrote:
> There used to be, can't find it right now. Something like 'ceph osd set
> pg_num ' then 'ceph osd set pgp_num ' to actually move your data
> into the new pg
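For what it is worth, the documented form of those commands operates on a pool
rather than the OSD map directly; a sketch, with the pool name and PG counts as
placeholders:

  # raise the PG count for a pool, then let data move into the new PGs
  ceph osd pool set .rgw.buckets pg_num 4096
  ceph osd pool set .rgw.buckets pgp_num 4096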
of httpd processes?
Besides, it looks like a documentation issue on this page:
http://ceph.com/docs/master/install/rpm/: it recommends using
FastCgiExternalServer, however, it does not mention that the radosgw
application should be manually launched (maybe that is just because I am new to fastcgi).
Than
Hi ceph-users,
I am trying out the new ceph-deploy utility on RHEL 6.4 and I came across a new
issue:
-bash-4.1$ ceph-deploy --version
1.2.7
-bash-4.1$ ceph-deploy disk zap server:/dev/sdb
[ceph_deploy.cli][INFO ] Invoked (1.2.7): /usr/bin/ceph-deploy disk zap
server:/dev/sdb
[ceph_deploy.osd][
-bash-4.1$ which sgdisk
/usr/sbin/sgdisk
Which path does ceph-deploy use?
Thanks,
Guang
On Oct 15, 2013, at 11:15 PM, Alfredo Deza wrote:
> On Tue, Oct 15, 2013 at 10:52 AM, Guang wrote:
>> Hi ceph-users,
>> I am trying with the new ceph-deploy utility on RHEL6.4 and I came
ead.so.0() [0x3208a07851]
14: (clone()+0x6d) [0x32086e890d]
NOTE: a copy of the executable, or `objdump -rdS ` is needed to
interpret this.
Has anyone else come across the same issue? Or am I missing anything when
adding a new OSD?
Thanks,
Guang
I just found the trick.
When I use the default CRUSH map, which uses the straw bucket type, things are
good. However, for the error I posted below, the map uses the tree bucket type.
Is that related?
Thanks,
Guang
On Oct 30, 2013, at 6:52 PM, Guang wrote:
> Hi all,
> Today I tried to add a new OS
very much.
Thanks,
Guang
Date: Thu, 10 Oct 2013 05:15:27 -0700
From: Kyle Bader
To: "ceph-users@lists.ceph.com"
Subject: Re: [ceph-users] Expanding ceph cluster by adding more OSDs
Message-ID:
Content-Type: text/plain; charset="utf-8"
I've contracted and expand
Hi ceph-users and ceph-devel,
Merry Christmas and Happy New Year!
We have a ceph cluster with radosgw; our customer is using the S3 API to access
the cluster.
The basic information of the cluster is:
bash-4.1$ ceph -s
cluster b9cb3ea9-e1de-48b4-9e86-6921e2c537d2
health HEALTH_ERR 1 pgs inconsis
Hi ceph-users,
After reading through the GC-related code, I am thinking of using a much larger
value for "rgw gc max objs" (like 997), and I don't see any side effect if we
increase this value. Did I miss anything?
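A sketch of how that might look in ceph.conf on the gateway host; the section
name is the common convention rather than something from this thread, and the
prime value simply mirrors the message:

  [client.radosgw.gateway]
      # spread GC work across more shards than the default
      rgw gc max objs = 997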
Thanks,
Guang
Begin forwarded message:
> From: redm..
"fsid": "b9cb3ea9-e1de-48b4-9e86-6921e2c537d2",
"modified": "0.00",
"created": "0.00",
"mons": [
{ "rank": 0,
"name": "osd152",
"addr"
5}) = 0 (Timeout)
[pid 75873] select(0, NULL, NULL, NULL, {0, 5}) = 0 (Timeout)
[pid 75873] select(0, NULL, NULL, NULL, {0, 5}) = 0 (Timeout)
[pid 75873] select(0, NULL, NULL, NULL, {0, 5}) = 0 (Timeout)
Thanks,
Guang
On Jan 15, 2014, at 5:54 AM, Guang wrote:
> Thanks Sage
Thanks Sage.
I just captured part of the log (it was growing fast); the process did not hang,
but I saw the same pattern repeatedly. Should I increase the log level and send
it over email (it reproduces constantly)?
Thanks,
Guang
On Jan 18, 2014, at 12:05 AM, Sage Weil wrote:
> On Fri, 17
+ceph-users.
Does anybody have similar experience with scrubbing / deep-scrubbing?
Thanks,
Guang
On Jan 29, 2014, at 10:35 AM, Guang wrote:
> Glad to see there are some discussion around scrubbing / deep-scrubbing.
>
> We are experiencing the same that scrubbing could affect late
Hello,
Most recently, when looking at a PG's folder splitting, I found that there was
only one sub folder in the top 3 / 4 levels, and 16 sub folders start appearing
from level 6; what is the design consideration behind this?
For example, if the PG root folder is ‘3.1905_head’, in the first le
Got it. Thanks Greg for the response!
Thanks,
Guang
On Feb 26, 2014, at 11:51 AM, Gregory Farnum wrote:
> On Tue, Feb 25, 2014 at 7:13 PM, Guang wrote:
>> Hello,
>> Most recently when looking at PG's folder splitting, I found that there was
>> only one sub folder i
Hello ceph-users,
We are trying out RBD and I would like to ask whether RBD works on RedHat
6.4 (with kernel version 2.6.32), and what modules / packages / drivers I need
to install to use RBD?
Thanks,
Guang
Hi all,
In order to deal with the PG unevenness problem [1, 2], which further leads to
uneven disk usage, I recently developed a simple script which does *osd crush*
reweight right after creating the pool that holds the most significant data
(e.g. .rgw.buckets); we had good experience tuning the di
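A minimal sketch of the kind of per-OSD adjustment such a script would issue;
the OSD id and weight are placeholders, and the real script presumably derives
the weight from the observed PG count per OSD:

  # slightly lower the CRUSH weight of an over-subscribed OSD
  ceph osd crush reweight osd.12 1.75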
Hi all,
One goal of the storage system is to achieve certain durability SLAs, so we
replicate data with multiple copies and check consistency on a regular basis
(e.g. scrubbing); however, replication increases cost (a tradeoff between
cost and durability), and cluster-wide consistency check
ask how to check if “there are other copies of its content” for
a given monitor instance?
Thanks,
Guang
radosgw for
1, I would like to check if anyone has experience with tape backup?
Thanks,
Guang
On May 15, 2014, at 6:06 PM, Cao, Buddy wrote:
> Hi,
>
> One of the osds in my cluster goes down for no reason. I saw the error message
> in the log below and restarted the osd, but after several hours the problem
> came back again. Could you help?
>
> “Too many open files not handled on operation 24
a thread for each connection, so kerblam.
> (And it actually requires a couple such connections because we have
> separate heartbeat, cluster data, and client data systems.)
Hi Greg,
Is there any plan to refactor the messenger component to reduce the number of
threads? For example, use eve
On May 28, 2014, at 5:31 AM, Gregory Farnum wrote:
> On Sun, May 25, 2014 at 6:24 PM, Guang Yang wrote:
>> On May 21, 2014, at 1:33 AM, Gregory Farnum wrote:
>>
>>> This failure means the messenger subsystem is trying to create a
>>> thread and is getting an
large directories.
I would like to check what your experience has been with this on your Ceph
cluster if you are using XFS. Thanks.
[1] http://www.scs.stanford.edu/nyu/02fa/sched/xfs.pdf
Thanks,
Guang
this limitation.
Thanks,
Guang
On Jun 30, 2014, at 2:54 PM, baijia...@126.com wrote:
>
> hello, everyone!
>
> when I use rest-bench to test RGW performance, and the cmd is:
> ./rest-bench --access-key=ak --secret=sk --bucket=bucket_name --seconds=600
> -t 200 -b 524288
n was hanging there waiting for other ops to finish
their work.
>
> thanks
> baijia...@126.com
>
> From: Guang Yang
> Sent: 2014-06-30 14:57
> To: baijiaruo
> Cc: ceph-users
> Subject: Re: [ceph-users] Ask a performance question for the RGW
> Hello,
> There is a k
Thanks,
Guang
Thanks,
Guang
Hi ceph-users,
This is Guang and I am pretty new to ceph, glad to meet you guys in the
community!
After walking through some of the Ceph documentation, I have a couple of questions:
1. Is there any comparison between Ceph and AWS S3, in terms of the ability
to handle different workloads (from KB to
Hi ceph-users,
I would like to check whether there is any manual / set of steps that would let
me deploy ceph on RHEL?
Thanks,
Guang
Thanks Mark.
What are the design considerations behind breaking large files into 4 MB chunks
rather than storing the large file directly?
Thanks,
Guang
From: Mark Kirkwood
To: Guang Yang
Cc: "ceph-users@lists.ceph.com"
Sent: Monday, August 19, 2013 5:18
Thanks Greg.
Some comments inline...
On Sunday, August 18, 2013, Guang Yang wrote:
Hi ceph-users,
>This is Guang and I am pretty new to ceph, glad to meet you guys in the
>community!
>
>
>After walking through some documents of Ceph, I have a couple of questions:
> 1. Is th
Then that makes total sense to me.
Thanks,
Guang
From: Mark Kirkwood
To: Guang Yang
Cc: "ceph-users@lists.ceph.com"
Sent: Tuesday, August 20, 2013 1:19 PM
Subject: Re: [ceph-users] Usage pattern and design of Ceph
On 20/08/13 13:27, Guang
Thanks Greg.
>>The typical case is going to depend quite a lot on your scale.
[Guang] I am thinking of a scale of billions of objects with sizes from several
KB to several MB; my concern is the cache efficiency for such a use case.
That said, I'm not sure why you'd want to
Thanks all for the recommendations. I worked around it by modifying ceph-deploy
to use the full path to sgdisk.
Thanks,
Guang
On Oct 16, 2013, at 10:47 PM, Alfredo Deza wrote:
> On Tue, Oct 15, 2013 at 9:19 PM, Guang wrote:
>> -bash-4.1$ which sgdisk
>> /usr/sbin/sgdisk
>>
>
PGs, and for
the large cluster, the pool has 4 PGs (as I will further scale the
cluster, I chose a much larger PG count).
Does my test result make sense? Like when the PG number and OSD count increase,
the latency might drop?
Thanks,
Guang
:13 AM, Guang Yang wrote:
> Dear ceph-users,
Hi!
> Recently I deployed a ceph cluster with RadosGW, from a small one (24 OSDs)
> to a much bigger one (330 OSDs).
>
> When using rados bench to test the small cluster (24 OSDs), it showed the
> average latency was around 3ms (objec
Hi Kyle and Greg,
I will get back to you with more details tomorrow, thanks for the response.
Thanks,
Guang
On Oct 22, 2013, at 9:37 AM, Kyle Bader wrote:
> Besides what Mark and Greg said it could be due to additional hops through
> network devices. What network devices are you using, what is the n
e OSDs, the cluster will need to maintain
far more connections between OSDs, which potentially slows things down?
3. Anything else I might miss?
Thanks all for the constant help.
Guang
On Oct 22, 2013, at 10:22 PM, Guang Yang wrote:
> Hi Kyle and Greg,
> I will get back to you with more details tomo
Thanks Mark.
I cannot connect to my hosts right now; I will do the check and get back to you
tomorrow.
Thanks,
Guang
On Oct 24, 2013, at 9:47 PM, Mark Nelson wrote:
> On 10/24/2013 08:31 AM, Guang Yang wrote:
>> Hi Mark, Greg and Kyle,
>> Sorry to response this late, and thanks for providing the
Hello ceph-users,
I am a little bit confused by these two options. I understand that crush
reweight determines the weight of the OSD in the CRUSH map, so it impacts I/O
and utilization; however, I am confused by the osd reweight option: is
that something that controls the I/O distribution acros
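For readers hitting the same confusion, a short sketch of the two commands side
by side; the OSD id and values are placeholders:

  # permanent CRUSH weight, conventionally sized to the disk capacity in TB
  ceph osd crush reweight osd.7 3.64

  # temporary override between 0.0 and 1.0, applied on top of the CRUSH weight
  ceph osd reweight 7 0.8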
Thanks Wido, my comments inline...
>Date: Mon, 30 Dec 2013 14:04:35 +0100
>From: Wido den Hollander
>To: ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] Ceph cluster performance degrade (radosgw)
> after running some time
>On 12/30/2013 12:45 PM, Guang wrote:
> H
Thanks Mark, my comments inline...
Date: Mon, 30 Dec 2013 07:36:56 -0600
From: Mark Nelson
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph cluster performance degrade (radosgw)
after running some time
On 12/30/2013 05:45 AM, Guang wrote:
> Hi ceph-users and ceph-devel,
>
Thanks all for the help.
We finally identified that the root cause of the issue was lock contention
during folder splitting, and here is the tracking ticket (thanks Inktank for
the fix!): http://tracker.ceph.com/issues/7207
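For context, the folder splitting in question is governed by a pair of
filestore options; a sketch of where they live in ceph.conf, with what I
believe were the default values at the time:

  [osd]
      # a PG directory splits into subdirectories once it holds roughly
      # (merge threshold) * 16 * (split multiple) files
      filestore merge threshold = 10
      filestore split multiple = 2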
Thanks,
Guang
On Tuesday, December 31, 2013 8:22 AM, Guang
Hi ceph-users,
We are using Ceph (radosgw) to store user-generated images; as GET latency is
critical for us, I recently did some investigation of the GET path to
understand where the time is spent.
I first confirmed that the latency came from the OSD (read op), so we
instrumented the code to trac
Hello all,
Recently I have been working on Ceph performance analysis on our cluster; our
OSD hardware looks like:
11 SATA disks, 4 TB each, 7200 RPM
48 GB RAM
When breaking down the latency, we found that half of it (the average latency
is around 60 milliseconds via radosgw) comes from file loo
turn of
run and cleanup on that basis; if you still want to do a slow linear search for
cleanup, be sure to remove the benchmark_last_metadata object before you kick
off the cleanup.
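A sketch of that sequence; the pool name is a placeholder:

  # write benchmark objects and keep them around
  rados bench -p testpool 60 write --no-cleanup

  # remove the metadata object first, then run the (slower) cleanup scan
  rados -p testpool rm benchmark_last_metadata
  rados -p testpool cleanup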
Let me know if that helps.
Thanks,
Guang
On May 20, 2014, at 6:45 AM, matt.lat...@hgst.com wrote:
>
>
Hi,
I've been trying to use the block device recently. I have a running cluster
with 2 machines and 3 OSDs.
On a client machine, let's say A, I created an rbd image using `rbd create`,
then formatted it, mounted it and wrote something to it; everything was working
fine.
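A sketch of that sequence; the pool and image names are placeholders, and the
rbd kernel client must be available on the host:

  # create, map, format and mount a 10 GB image
  rbd create mypool/test-image --size 10240
  rbd map mypool/test-image          # typically shows up as /dev/rbd0
  mkfs.ext4 /dev/rbd0
  mkdir -p /mnt/test-image
  mount /dev/rbd0 /mnt/test-image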
However, problem occurred when I tr
ve such
> mechanisms (with limits, read the docs).
>
> On May 1, 2013, at 11:19 AM, Yudong Guang
> wrote:
>
> > Hi,
> >
> > I've been trying to use block device recently. I have a running cluster
> with 2 machines and 3 OSDs.
> >
> > On a client ma
Thank you, Gandalf and Igor. I intuitively think that building one cluster on
top of another is not appropriate. Maybe I should give RadosGW a try first.
On Thu, May 2, 2013 at 3:00 AM, Igor Laskovy wrote:
> Or maybe, for hosting purposes, it is easier to implement RadosGW.
>
--
Yudong
action immediately.
> Could we delay queuing the transactions until all PGs on the host are
> peered?
Thanks,
Guang
On Mon, Jan 4, 2016 at 7:21 PM, Sage Weil wrote:
> On Mon, 4 Jan 2016, Guang Yang wrote:
>> Hi Cephers,
>> Happy New Year! I have a question regarding the long PG peering..
>>
>> Over the last several days I have been looking into the *long peering*
>> problem when