* On 26 Feb 2014, Gregory Farnum wrote:
> >> > q1. CephFS has a tunable for max file size, currently set to 1TB. If
> >> > I want to change this, what needs to be done or redone? Do I have to
> >> > rebuild, or can I just change the param, restart services, and be off?
> >>
> >> What version are
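For reference, the max-file-size limit is a runtime MDS setting rather than part of the on-disk format, so no rebuild should be needed. A hedged sketch — the 2 TiB value is an example, and the `ceph mds set` spelling is the 0.72-era syntax (newer releases use `ceph fs set <fsname> max_file_size <bytes>`):

```shell
# Compute the byte value for a hypothetical 2 TiB limit
NEW_MAX=$((2 * 1024 ** 4))
echo "$NEW_MAX"   # 2199023255552

# Then apply it on the cluster (era-appropriate syntax, assumed;
# later releases use: ceph fs set <fsname> max_file_size <bytes>)
#   ceph mds set max_file_size "$NEW_MAX"
```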
Thanks, Greg, for the response.
* On 26 Feb 2014, Gregory Farnum wrote:
> >
> > 1. Place the 8m files in a disk image. Mount the disk image (read-only)
> > to provide access to the 8m files, and allow copying the disk image to
> > accelerate read of the entire dataset.
> >
> > 2. Put the 8m file
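Option 1 above can be sketched as follows; all paths and sizes are illustrative, not from the thread (the demo image is tiny, a real one would be ~1.7 TB to hold the 1.6 TB tree):

```shell
# Build a single sparse disk image once, then share it read-only
# from CephFS so each application server loop-mounts one file
# instead of walking 8M small files.
truncate -s 64M demo.img        # illustrative size
stat -c %s demo.img             # apparent size in bytes: 67108864

# The remaining steps need root and a populated image:
#   mkfs.ext4 -F demo.img
#   mount -o loop demo.img /mnt/build && cp -a /cephfs/dataset/. /mnt/build/
#   umount /mnt/build
#   # on each application server:
#   mount -o loop,ro /cephfs/demo.img /mnt/files
```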
On Wed, Feb 26, 2014 at 10:37 AM, David Champion wrote:
> Thanks, Greg, for the response.
>
> * On 26 Feb 2014, Gregory Farnum wrote:
>> >
>> > 1. Place the 8m files in a disk image. Mount the disk image (read-only)
>> > to provide access to the 8m files, and allow copying the disk image to
>> >
On Wed, Feb 26, 2014 at 6:10 AM, David Champion wrote:
> I have a 1.6 TB collection of 8 million files in CephFS, distributed up
> to 8-10 directories deep. (Never mind why - this design decision is out
> of my hands and not in scope.) I need to expose this data on multiple
> application servers.
Your OSDs aren't supposed to be listed in the config file, but they
should show up under /var/lib/ceph. Probably your OSD disks aren't
being mounted for some reason (that would be the bug). Try mounting
them and seeing what blocked the mount.
-Greg
Software Engineer #42 @ http://inktank.com
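The check Greg describes can be sketched like this; the device name and OSD ids are examples, not taken from the thread:

```shell
# Each OSD should have a populated data directory here:
ls /var/lib/ceph/osd/ 2>/dev/null || echo "no OSD dirs found"

# Are the OSD data disks actually mounted?
mount | grep /var/lib/ceph/osd || echo "no OSD disks mounted"

# If a directory exists but is empty, mount its disk by hand and
# read the error (example device):
#   mount /dev/sdb1 /var/lib/ceph/osd/ceph-0
#   dmesg | tail
```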
[Re-adding the list because I failed to last time.]
Interesting! I don't think I've seen local nodes get their usage wrong
like that before, but there are a lot of storage systems I don't have
much experience with. The aggregate usage stats across a Ceph cluster
are derived from the local output o
That worked!
FYI (for others, in case it helps): I think the key things that fixed the
problem were a) setting tunables to optimal and then b) resetting all the
weights back to 1 (per Gregory's suggestion below)
On Tue, Feb 25, 2014 at 1:03 PM, Gregory Farnum wrote:
> With the reweight-by-utilizati
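The two steps can be expressed as commands; a hedged sketch, since the thread doesn't spell out whether "weights" means the reweight-by-utilization override or the CRUSH weight. Because `reweight-by-utilization` adjusts the override, resetting that is shown, with the CRUSH variant commented:

```shell
# (a) set CRUSH tunables to optimal
ceph osd crush tunables optimal

# (b) reset every OSD's reweight override back to 1
for id in $(ceph osd ls); do
    ceph osd reweight "$id" 1.0
done

# If CRUSH weights were meant instead:
#   ceph osd crush reweight "osd.$id" 1.0
```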
Larry, is anything wrong if I run the rados gateway on my client machine
rather than on one of the OSD nodes or the monitor node?
On Wed, Feb 26, 2014 at 8:05 PM, Larry Liu wrote:
> Srinivasa, my radosgw runs on one of my 3 node ceph servers. I don't think your
> 'setting up rados gateway on a separate host outside t
Srinivasa, my radosgw runs on one of my 3 node ceph servers. I don't think your
'setting up rados gateway on a separate host outside the cluster' is the right
practice. And I can see it running:
$ ps -ef|grep -i rado
root 11004 1 0 Feb19 ? 00:17:57 /usr/bin/radosgw -n client.radosgw.gatewa
I have a 1.6 TB collection of 8 million files in CephFS, distributed up
to 8-10 directories deep. (Never mind why - this design decision is out
of my hands and not in scope.) I need to expose this data on multiple
application servers. For the sake of argument, let's say I'm exposing
files from Ce
Hi Yehuda/Joseph/anyone,
My ceph object storage cluster setup is node1 (OSD1), node2 (OSD2) and
monitor (MON).
From any client I am able to access the cluster if I copy ceph.conf and
ceph.client.keyring. Ceph HEALTH is OK and active+clean.
My requirement is to run swift APIs initially and later i
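Once radosgw is up with a Swift-flavored subuser, Swift API access can be smoke-tested from any client with the standard swift CLI. A sketch — the user name, host, and key are placeholders:

```shell
# On the gateway host, create a Swift subuser and key (placeholder names):
#   radosgw-admin user create --uid=testuser --display-name="Test User"
#   radosgw-admin subuser create --uid=testuser --subuser=testuser:swift \
#       --access=full --key-type=swift --gen-secret

# From any client with python-swiftclient installed:
#   swift -A http://radosgw.example.com/auth/1.0 \
#       -U testuser:swift -K '<swift secret key>' list
```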
All my ceph servers (3 nodes) are running Ubuntu 13.04 (Raring), which was
recommended by Ceph & Inktank. I didn't do anything special except following
ceph.com's documentation carefully.
Swift client pkg, yes, apt-get install.
From: Srinivasa Rao Ragolu <srag...@mvista.com>
Date: Tuesday,
My configuration is: two osd servers, one admin node, three monitors;
all running 0.72.2.
I had to switch off one of the OSD servers. The good news is: as
expected, all clients survived and continued to work with the
cluster, and the cluster entered a "health warn" state (one monitor
down, 5 of