Tried on kernel 2.6.32 and ceph 0.61 provided by EPEL
Ceph cluster built with v0.67
No problem at all.
IO on the RBD device stalled for around 30 seconds when an OSD failed.
-Original Message-
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of John-Pau
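That ~30-second stall is consistent with the default failure-detection window: peers report an OSD down only after missing heartbeats for a grace period, and the monitors then re-map I/O. A ceph.conf sketch of the relevant knobs (values shown are the Dumpling-era defaults, stated as an assumption, not a tuning recommendation):

    [global]
        # seconds without heartbeats before peers report an OSD down (default 20)
        osd heartbeat grace = 20
        # reports needed before the monitors mark the OSD down (default 1)
        mon osd min down reporters = 1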
After upgrading from Cuttlefish to Dumpling I am no longer able to obtain user
information from the rados gateway.
radosgw-admin user info
could not fetch user info: no user info saved
radosgw-admin user create --uid=bob --display-name="bob"
could not create user: unable to create user, unable t
On Thu, Sep 26, 2013 at 7:42 AM, Mike O'Toole wrote:
> After upgrading from Cuttlefish to Dumpling I am no longer able to obtain
> user information from the rados gateway.
>
> radosgw-admin user info
> could not fetch user info: no user info saved
>
>
> radosgw-admin user create --uid=bob --displa
I know this is late, but you are probably running the -virtual kernel.
rbd is most definitely included with the -generic kernel. To add it
to the -virtual kernel you need to install linux-image-extra-virtual, I
believe.
Dave.
On 09/15/2013 02:53 PM,
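A minimal way to verify and fix this on an Ubuntu guest (assuming the -virtual kernel Dave describes; package name per Ubuntu 12.04+):

    sudo apt-get install linux-image-extra-virtual   # ships the extra modules, including rbd.ko
    sudo modprobe rbd                                # load the kernel RBD driver
    lsmod | grep rbd                                 # confirm the module is present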
I'm having the same issue after upgrading from 0.67.2 to 0.67.3
radosgw-admin user info
could not fetch user info: no user info saved
radosgw-admin user info --debug-rgw=2 --debug-ms=1 --log-to-stderr
2013-09-26 17:54:17.785067 7fdcda941780 1 -- :/0 messenger.start
2013-09-26 17:54:17.785560 7fdcda94
Are you talking about testing ceph rbd on VMs? It works perfectly fine for
me. You'll need to make your VM truly PV-driven; the kernel version is then
supposed to show something like 3.8.0-29-generic #42-Ubuntu. If it shows
-virtual, it can hardly be a true PV-driven VM.
On 9/26/13 8:30 AM, "Dave Chi
Are you sure you ran the same radosgw-admin command? The log shows a
different error message. Can you re-run it with --debug-objecter=20 as well
(besides the --debug-ms=1 and --debug-rgw=20)?
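Putting those flags together, the re-run would look something like this (--uid=bob is assumed from earlier in the thread):

    radosgw-admin user info --uid=bob \
        --debug-rgw=20 --debug-ms=1 --debug-objecter=20 --log-to-stderr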
On Thu, Sep 26, 2013 at 9:09 AM, Mike O'Toole wrote:
> radosgw-admin zone get
> { "domain_root": "
Dear all,
I am fairly new to ceph and just in the process of testing it using
several virtual machines.
I then tried to create a block device on a client and fumbled with the
settings for about an hour or two until the command line
rbd --id dovecot create home --size=1024
finally succeeded. The k
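For anyone else fumbling with the same settings: the step that usually has to precede that command is creating a keyring with suitable caps for the client. A sketch (the pool name and caps are assumptions, not taken from the thread):

    # create a cephx identity allowed to use the rbd pool (run with admin keys)
    ceph auth get-or-create client.dovecot mon 'allow r' osd 'allow rwx pool=rbd' \
        -o /etc/ceph/ceph.client.dovecot.keyring
    # after which the create works under that identity
    rbd --id dovecot create home --size=1024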
The osd returns some unexpected error:
2013-09-26 13:25:17.651552 7ff13cd0b700 1 -- 10.10.2.55:0/1005496 <==
osd.2 10.10.2.200:6800/4790 5 osd_op_reply(25 bob
[call,getxattrs,stat] ack = -5 (Input/output error)) v4 186+0+0
(4137529543 0 0) 0x7ff120002490 con 0xa6a380
Is there a matchin
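The ack = -5 is EIO coming back from osd.2 for an op on the object bob. One way to narrow it down is to poke at that object directly with rados (the .users.uid pool name is an assumption, based on where rgw keeps user metadata in Dumpling):

    rados -p .users.uid ls            # user-metadata objects; 'bob' should be listed
    rados -p .users.uid stat bob      # stat the object the failing op targeted
    ceph osd map .users.uid bob       # show which PG and OSDs serve it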
On Wed, Sep 25, 2013 at 8:44 PM, Sage Weil wrote:
> On Wed, 25 Sep 2013, Aaron Ten Clay wrote:
> > Hi all,
> >
> > Does anyone know how to specify which pool the mds and CephFS data will
> be
> > stored in?
> >
> > After creating a new cluster, the pools "data", "metadata", and "rbd" all
> > exis
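For reference, the Dumpling-era answer runs along these lines (pool names and PG counts are placeholders; mds newfs is destructive, hence the safety flag):

    ceph osd pool create cephfs_data 128          # pool to hold file data
    ceph osd pool create cephfs_metadata 128      # pool to hold mds metadata
    # point the filesystem at the new pools, by pool ID (this wipes any existing fs!)
    ceph mds newfs <metadata-pool-id> <data-pool-id> --yes-i-really-mean-it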
Hi Mark,
FYI, I tried with the wip-6286-dumpling release and the results are the same for
me. The radosgw throughput is around 6x slower than the single rados bench
output!
Any other suggestion ?
Thanks & Regards
Somnath
-Original Message-
From: Somnath Roy
Sent: Friday, September 20, 20
It's kind of annoying, but it may be worth setting up a 2nd RGW server
and seeing if having two copies of the benchmark going at the same time
on two separate RGW servers increases aggregate throughput.
Also, it may be worth tracking down latencies with messenger debugging
enabled, but I'm afr
Mark,
I did set up 3 radosgw servers on 3 server nodes and then tested with 3
swift-bench clients hitting the 3 radosgw instances at the same time. I saw the
aggregate throughput scaling linearly. But, as individual radosgw performance is
very low, we need to put lots of radosgw/apache server combination
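For concreteness, each swift-bench client in a test like this is invoked roughly as follows (endpoint, credentials, object size, and counts are placeholders, not taken from the thread):

    swift-bench -A http://rgw1.example.com/auth/v1.0 -U account:user -K secret \
        -c 64 -s 4096 -n 10000 -g 10000   # 64 concurrent, 4 KB objects, 10k PUTs then 10k GETs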
Ah, that's very good to know!
And RGW CPU usage you said was low?
Mark
On 09/26/2013 05:40 PM, Somnath Roy wrote:
Mark,
I did set up 3 radosgw servers on 3 server nodes and then tested with 3
swift-bench clients hitting the 3 radosgw instances at the same time. I saw the
aggregate throughput scaling linearly
Nope... With one client hitting the radosgw, the daemon CPU usage goes up to
400-450%, i.e. about 4 cores on average. In the one-client scenario, the server node
(hosting radosgw + OSDs) is ~80% idle, and the bulk of the 20% usage is
consumed by radosgw.
Thanks & Regards
Somnath
-Orig
Mark,
One more thing: all my tests are with the rgw cache enabled; with the cache
disabled, performance is around 3x slower.
Thanks & Regards
Somnath
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Somnath Roy
Sent: Thursday, Se
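The cache knobs in question sit in the gateway's section of ceph.conf; a sketch (the section name and LRU size are illustrative, not from the thread):

    [client.radosgw.gateway]
        rgw cache enabled = true       # metadata cache on (the default)
        rgw cache lru size = 10000     # entries kept in the LRU (default 10000)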
You give the relative performance, but what are the actual numbers
you're seeing? How many GETs per second, and how many PUTs per second
do you see?
On Thu, Sep 26, 2013 at 4:00 PM, Somnath Roy wrote:
> Mark,
> One more thing, all my test is with rgw cache enabled , disabling the cache
> the
Hi Yehuda,
With my 3-node cluster (30 OSDs in total, all on SSDs), I am getting an average of
~3000 GETs/s from a single swift-bench client hitting a single radosgw instance.
PUTs are ~1000/s. BTW, I am not able to generate a very big load yet, and as the
server has ~140G RAM, all the GET requests are served