Hi all,
We're setting up radosgw against a large 8/3 EC pool (across 24 nodes)
with a modest 4-node cache tier in front (those 4 nodes each have 20x
10k SAS drives and 4x Intel DC S3700 journals). With the cache tiering
in place, we're not sure what the best setup is for all the various
peripheral rgw pools.
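For reference, wiring a cache tier onto the data pool is just a handful of
commands; a minimal sketch, with .rgw.buckets and rgw-cache used purely as
placeholder pool names:

    ceph osd tier add .rgw.buckets rgw-cache
    ceph osd tier cache-mode rgw-cache writeback
    ceph osd tier set-overlay .rgw.buckets rgw-cache
    ceph osd pool set rgw-cache hit_set_type bloom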
It seems like we are hitting the qemu file descriptor limits. Thanks for
pointing that out.
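For anyone hitting the same thing, raising the per-process open-file limit is
usually the fix; a minimal sketch with illustrative values (if libvirt manages
the guests, its qemu.conf max_files setting may also need bumping):

    # /etc/security/limits.conf
    *    soft    nofile    65536
    *    hard    nofile    65536
    # verify for a running qemu process (PID is illustrative)
    grep 'open files' /proc/12345/limits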
Thanks,
Jeyaganesh.
On 7/15/15, 2:12 AM, "Jan Schermer" wrote:
>We are getting the same log message as you, but not too often, and from
>what I gathered it is normal to see that.
>Not sure how often you are seei
Hi,
>
> I have an issue where I cannot delete files or folders from Buckets, no
> issues when copying data over. Whenever I try to delete something I get:
>
> Internal error 500, here is a sample from the radosgw log:
>
>
> 2015-07-12 17:51:33.216750 7f5daaf65700 15 calculated
> digest=4/aScqOXX
Hi,
I have an issue where I cannot delete files or folders from Buckets, no
issues when copying data over. Whenever I try to delete something I get:
Internal error 500, here is a sample from the radosgw log:
2015-07-12 17:51:33.216750 7f5daaf65700 15 calculated
digest=4/aScqOXY8O45BFQds0
On 07/15/2015 05:12 AM, Ken Dreyer wrote:
> On 07/14/2015 04:14 PM, Wido den Hollander wrote:
>> Hi,
>>
>> Currently tracker.ceph.com doesn't have SSL enabled.
>>
>> Every time I log in I'm sending my password in plain text, which I'd
>> rather not do.
>>
>> Can we get SSL enabled on tracker.ceph.com?
For me setting recovery_delay_start helps during the OSD bootup _sometimes_,
but it clearly does something different than what’s in the docs.
Docs say:
After peering completes, Ceph will delay for the specified number of seconds
before starting to recover objects.
However, what I see is greatl
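For anyone trying to reproduce this, the option lives in the [osd] section;
the value below is only illustrative:

    [osd]
    osd recovery delay start = 20    # seconds; the default is 0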
Hello,
I'm experimenting with Ceph for caching; it's configured with size=1 (so
no redundancy/replication) and exported via CephFS to clients. Now I'm
wondering what happens if an SSD dies and all of its data is lost? I'm
seeing files stored in 4MB chunks in PGs; do we know if a whole file is
saved
On Thu, Jul 16, 2015 at 11:58 AM, Vedran Furač wrote:
> Hello,
>
> I'm experimenting with ceph for caching, it's configured with size=1 (so
> no redundancy/replication) and exported via cephfs to clients, now I'm
> wondering what happens if an SSD dies and all of its data is lost? I'm
> seeing fil
Depends on how you use it - as a RADOS object store? Or via radosgw? Or as RBD
images? CephFS?
Basically all RADOS objects in the PGs on that OSD will be lost.
That means all RBD images using those PGs will be lost (or have holes in them
if you force it to, but basically you should consider them los
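To see where a given chunk lands: CephFS stripes each file into 4MB RADOS
objects named <inode-hex>.<index>, and 'ceph osd map' reports the placement.
A sketch with placeholder pool and object names:

    # pool and object names below are illustrative
    ceph osd map cephfs_data 10000000005.00000000
    # prints the PG and the acting OSD(s) for that object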
On 16/07/15 11:58, Vedran Furač wrote:
I'm experimenting with ceph for caching, it's configured with size=1 (so
no redundancy/replication) and exported via cephfs to clients, now I'm
wondering what happens if an SSD dies and all of its data is lost? I'm
seeing files being in 4MB chunks in PGs,
Hi
I have tested this a bit with different ceph-fuse versions and Linux
distros, and it seems to be a mount issue in CentOS 7. The problem is that
mount first tries to resolve the = from the fstab fs_spec field against
the blkid block device attributes; of course that tag is not there,
so you always get an erro
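For context, the documented fuse.ceph fstab syntax puts key=value pairs in the
fs_spec field, which is exactly what CentOS 7's mount tries to look up as a
blkid tag; a typical line (paths illustrative) looks like:

    id=admin,conf=/etc/ceph/ceph.conf  /mnt/cephfs  fuse.ceph  defaults,_netdev  0 0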
I've spotted an issue with radosgw on Giant and I'm hoping someone can shed
some light on whether it's known or has been fixed. Google and #ceph weren't
particularly useful, sadly.
Long story short, when probing S3's admin API to get user quotas it returns
a malformed HTTP response, causing HAproxy to
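For anyone trying to reproduce it, the probe is presumably something like the
admin ops user-quota call, roughly as below (host and uid are placeholders,
and the request has to be signed with an admin user's S3 credentials):

    curl -s -i "http://rgw.example.com/admin/user?quota&uid=someuser&quota-type=user"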
On Wed, Jul 15, 2015 at 10:50 PM, John Spray wrote:
>
>
> On 15/07/15 16:57, Shane Gibson wrote:
>>
>>
>>
>> We are in the (very) early stages of considering testing Ceph as the
>> backing store for Hadoop - as opposed to HDFS. I've seen a few very vague
>> references to doing that, but haven't found any conc
On 16-07-15 15:52, Simon Murray wrote:
> I've spotted an issue with radosgw on Giant and I'm hoping someone can
> shed any light on if it's known or been fixed. Google and #ceph weren't
> particularly useful sadly.
>
> Long story short when probing S3's admin API to get user quotas it's
> retur
Hi list
My cluster includes 4 nodes: 1 mon node and 3 OSD nodes (4 SATA drives per
node), 12 OSDs in total. The Ceph version is 0.72. Each OSD node has a 1 Gbps
NIC; the mon node has 2x 1 Gbps NICs. tgt runs on the mon node and the client
is VMware. I upload (copy) a 500 GB file in VMware. HW acceleration in VMware
had been turned off as Nick sugges
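For context, the tgt export being tested would be defined along these lines,
assuming tgt was built with the rbd backing store (target IQN and image name
are placeholders):

    # /etc/tgt/conf.d/ceph-rbd.conf
    <target iqn.2015-07.com.example:rbd>
        driver iscsi
        bs-type rbd
        backing-store rbd/vmware-lun0
    </target>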
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> maoqi1982
> Sent: 16 July 2015 15:30
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] wmware tgt librbd performance very bad
>
> Hi list
> my cluster include 4 nodes , 1 mon ,3 osd
I originally built a single-node cluster, and added
'osd_crush_chooseleaf_type = 0 # 0 is for one node cluster' to ceph.conf
(which is now commented out).
I've now added a 2nd node; where can I set this value to 1? I see in the
CRUSH map that the OSDs are under 'host' buckets and don't see any
ref
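As far as I know, osd_crush_chooseleaf_type only takes effect when the initial
CRUSH map is created; on an existing cluster the equivalent change is made by
editing the CRUSH rule itself. A sketch:

    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    # in the rule, change:  step chooseleaf firstn 0 type osd
    #                  to:  step chooseleaf firstn 0 type host
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new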
This development release features more of the OSD work queue
unification, randomized osd scrub times, a huge pile of librbd fixes,
more MDS repair and snapshot fixes, and a significant amount of work
on the tests and build infrastructure.
Notable Changes
---------------
* buffer: some cleanup (Mi
False alarm, things seem to be fine now.
-- Tom
> -----Original Message-----
> From: Deneau, Tom
> Sent: Wednesday, July 15, 2015 1:11 PM
> To: ceph-users@lists.ceph.com
> Subject: Any workaround for ImportError: No module named ceph_argparse?
>
> I just installed 9.0.2 on Trusty using ceph-depl
Some context: I have a small cluster running Ubuntu 14.04 and Giant (now
Hammer). I ran some updates and everything was fine. I rebooted a node and a
drive must have failed, as it no longer shows up.
I use --dmcrypt with ceph-deploy and 5 OSDs per SSD journal. To do this I
created the SSD partit
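For reference, carving the journal SSD up usually looks something like this
(device names and sizes are illustrative):

    # one journal partition per OSD, e.g. 5 x 20 GB on the SSD
    sgdisk --new=1:0:+20G --change-name=1:'ceph journal' /dev/sdab
    sgdisk --new=2:0:+20G --change-name=2:'ceph journal' /dev/sdab
    # ...and so on, then pass host:data-disk:journal-partition to ceph-deploy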
On 7/16/15, 6:55 AM, "Gregory Farnum" wrote:
>
>Yep! The Hadoop workload is a fairly simple one that is unlikely to
>break anything in CephFS. We run a limited set of Hadoop tests on it
>every week and provide bindings to set it up; I think the
>documentation is a bit lacking here but if you've
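For anyone following along, the bindings Greg mentions are wired up through
core-site.xml; a rough sketch with placeholder monitor address and paths:

    <property>
      <name>fs.default.name</name>
      <value>ceph://mon-host:6789/</value>
    </property>
    <property>
      <name>fs.ceph.impl</name>
      <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
    </property>
    <property>
      <name>ceph.conf.file</name>
      <value>/etc/ceph/ceph.conf</value>
    </property>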
Here is the output I am getting when I try ceph-deploy:
lacadmin@kh10-9:~/GDC$ ceph-deploy osd --dmcrypt create
kh10-7:sde:/dev/sdab1
[ceph_deploy.conf][DEBUG ] found configuration file at:
/home/lacadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.21): /usr/local/bin/ceph-deploy
Dear All...
Maybe the issue I am about to report has already been found by some of you.
If so, sorry for the duplication.
I spent a lot of time understanding why my mons were not in good
condition. The ceph service was up and running, but I noticed the following
process hanging... A clean installatio
The basics are: 14.04, Giant, and Apache with the Ceph version of FastCGI.
I'll spin up a test system in OpenStack with civetweb and see if it misbehaves
the same way or if I need to narrow it down further. Chances are, if you
haven't heard of it, I'll need to crack out g++ and get my hands dirty.
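If it helps while testing, pointing the test rgw at civetweb instead of
Apache/FastCGI is a one-line ceph.conf change (section name and port below are
illustrative):

    [client.radosgw.gateway]
    rgw frontends = civetweb port=7480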