Hello List,
Using the radosgw works fine, as long as the amount of data doesn't get
too big.
I have created one bucket that holds many small files, separated into
different "directories". But whenever I try to acess the bucket, I only
run into some timeout. The timeout is at around 30 - 100
Thank you very much - I think I've solved the whole thing. It wasn't in
radosgw.
The solution was:
- increase the timeout in the Apache config
- when using haproxy, also increase the timeouts there!
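For illustration, the kind of settings involved (the values here are just
examples, not recommendations):

Apache (apache2.conf or the radosgw vhost):
  Timeout 600

haproxy (haproxy.cfg, defaults or the relevant frontend/backend):
  timeout connect 10s
  timeout client  10m
  timeout server  10m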
Georg
Hi,
I'm using ceph 0.61.7.
When using ceph-fuse, I couldn't find a way to mount only one pool.
Is there a way to mount a pool - or is it simply not supported?
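For reference, ceph-fuse mounts a CephFS filesystem rather than a pool; the
closest thing seems to be mounting a subdirectory with -r. A sketch (monitor
address and paths are made up):
# ceph-fuse -m mon1.example.com:6789 /mnt/cephfs
# ceph-fuse -m mon1.example.com:6789 -r /some/subdir /mnt/subdir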
Kind Regards,
Georg
On 13.08.2013 12:09, Dzianis Kahanovich wrote:
Georg Höllrigl wrote:
I'm using ceph 0.61.7.
When using ceph-fuse, I couldn't find a way to mount only one pool.
Is there a way to mount a pool - or is it simply not supported?
Does this mean "mount as fs"?
Hello,
I'm still evaluating ceph - now a test cluster with the 0.67 dumpling.
I've created the setup with ceph-deploy from GIT.
I've recreated a bunch of OSDs to give them another journal.
There was already some test data on these OSDs.
I've already recreated the missing PGs with "ceph pg force_
Check the logs and see if you can get a sense
for why one of your monitors is down. Some of the other devs will
probably be around later that might know if there are any known issues
with recreating the OSDs and missing PGs.
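For example (nothing here is specific to this cluster), the monitor state
can be checked with:
# ceph -s
# ceph mon stat
# ceph quorum_status
and the per-monitor logs usually live in /var/log/ceph/ceph-mon.<id>.log on
the monitor hosts.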
Mark
n initial setup) when the
cluster is unstable.
If so, clear out the metadata pool and check the docs for "newfs".
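For reference, in the dumpling era that command looked roughly like the
following; the pool IDs are placeholders and the command wipes the
filesystem metadata, so treat this as a sketch rather than a recipe:
# ceph osd lspools
# ceph mds newfs <metadata-pool-id> <data-pool-id> --yes-i-really-mean-it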
-Greg
On Monday, August 19, 2013, Georg Höllrigl wrote:
Hello List,
The trouble with fixing this cluster continues... I get output like
this now:
# ceph healt
Hello,
I'm horribly failing at creating a bucket on radosgw at ceph 0.67.2
running on Ubuntu 12.04.
Right now I feel frustrated with radosgw-admin for being inconsistent
in its options. It's possible to list the buckets and also to delete
them, but not to create one!
No matter what I tried -
On 23.08.2013 16:24, Yehuda Sadeh wrote:
On Fri, Aug 23, 2013 at 1:47 AM, Tobias Brunner wrote:
Hi,
I'm trying to use radosgw with s3cmd:
# s3cmd ls
# s3cmd mb s3://bucket-1
ERROR: S3 error: 405 (MethodNotAllowed):
So there seems to be something missing regarding buckets. How can I
creat
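One common cause of a 405 on bucket creation in this era is
virtual-host-style bucket addressing that the gateway's DNS name doesn't
match. The relevant client and gateway settings look roughly like this
(hostnames are placeholders):

~/.s3cfg (s3cmd):
  host_base = rgw.example.com
  host_bucket = %(bucket)s.rgw.example.com

ceph.conf, [client.radosgw.gateway] section, plus a matching wildcard DNS
entry:
  rgw dns name = rgw.example.com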
The whole GIT seems unreachable. Does anybody know what's going on?
On 30.09.2013 17:33, Mike O'Toole wrote:
I have had the same issues.
Hello List,
I'm having trouble with our radosgw. The "backup" bucket was mounted via
s3fs. But now I'm only getting a "transport endpoint not connected" when
trying to access the directory.
The strange thing is that different buckets show different behaviours in
the web browser:
https://example.com/
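For what it's worth, "transport endpoint not connected" usually means the
FUSE mount itself has died; the usual recovery (mount point and bucket name
are examples) is to unmount and remount:
# fusermount -u /mnt/backup
# s3fs backup /mnt/backup -o passwd_file=/etc/passwd-s3fs -o url=https://rgw.example.com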
Hello,
Using Ceph MDS with one active and one standby server - a day ago one of
the MDSes crashed and I restarted it.
Tonight it crashed again, and a few hours later the second MDS also crashed.
#ceph -v
ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60)
At the moment cephfs is dead -
a5e1700 0 log [ERR] : dir 1c468b6.1c468b6 object missing on disk; some
files may be lost
At the moment, I have only one MDS running - but clients (mainly using
fuse) can't connect.
Regards,
Georg
On 16.04.2014 16:27, Gregory Farnum wrote:
What's the backtrace from the M
On 17.04.2014 09:45, Georg Höllrigl wrote:
Hello Greg,
I've searched, but I don't see any backtraces... I've tried to get some
more info out of the logs. I really hope there is something interesting
in it:
It all started two days ago with an authentication error:
2014-04-14
Looks like you enabled directory fragments, which is buggy in ceph version 0.72.
Regards
Yan, Zheng
If it's enabled, it wasn't intentional. So how would I disable it?
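If I remember the old knob correctly (worth double-checking against the
0.72 docs), directory fragmentation is governed by the mds bal frag option.
Something along these lines shows the running value and keeps it off after
an MDS restart (socket path and section are assumptions):
# ceph --admin-daemon /var/run/ceph/ceph-mds.<name>.asok config show | grep frag

ceph.conf, [mds] section:
  mds bal frag = false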
Regards,
Georg
And that's exactly what it sounds like — the MDS isn't finding objects
that are supposed to be in the RADOS cluster.
I'm not sure what I should think about that. The MDS shouldn't access data
from RADOS, and vice versa?
Anyway, glad it fixed itself, but it sounds like you've got some
infrastruc
Hello,
We've a fresh cluster setup - with Ubuntu 14.04 and Ceph Firefly. By now
I've tried this multiple times - but the result stays the same and shows
me lots of trouble (the cluster is empty, no client has accessed it):
#ceph -s
cluster b04fc583-9e71-48b7-a741-92f4dff4cfef
health
On 08.05.2014 19:11, Craig Lewis wrote:
What does `ceph osd tree` output?
You probably have replication size=3, and Ceph won't place replica PGs on
the same host... 'ceph osd dump' will let you know if this is the case. If
it is, either reduce size to 2, add another host, or edit your CRUSH rules
to allow replica PGs on the same host.
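For example (pool name is a placeholder), checking and adjusting that looks
like:
# ceph osd dump | grep 'replicated size'
# ceph osd pool set <pool> size 2
# ceph osd crush rule dump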
Cheers
Mark
it worked.
I didn't have time to look into replicating the issue.
Cheers,
Martin
Hello,
System Ubuntu 14.04
Ceph 0.80
I'm getting either a 405 Method Not Allowed or a 403 Permission Denied
from Radosgw.
Here is what I get from radosgw:
HTTP/1.1 405 Method Not Allowed
Date: Tue, 13 May 2014 12:21:43 GMT
Server: Apache
Accept-Ranges: bytes
Content-Length: 82
Content-Type:
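To narrow down which side is rejecting the request, turning up the gateway
logging and making sure the DNS name matches the vhost is usually the first
step; a sketch of the relevant ceph.conf section (names and paths are
examples):

[client.radosgw.gateway]
  rgw dns name = rgw.example.com
  debug rgw = 20
  debug ms = 1
  log file = /var/log/ceph/radosgw.log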
message if you miss anything!
Kind Regards,
Georg
Hello List,
I see a pool without a name:
ceph> osd lspools
0 data,1 metadata,2 rbd,3 .rgw.root,4 .rgw.control,5 .rgw,6 .rgw.gc,7
.users.uid,8 openstack-images,9 openstack-volumes,10
openstack-backups,11 .users,12 .users.swift,13 .users.email,14 .log,15
.rgw.buckets,16 .rgw.buckets.index,17 .u
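Assuming the nameless entry really is a pool with an empty-string name
(that's a guess from the listing), something like this might confirm or
clean it up - though whether the tools accept "" varies, so treat it as a
sketch:
# rados lspools
# ceph osd pool rename "" junk-pool
# rados rmpool "" "" --yes-i-really-really-mean-it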