A follow-up question: how do I clean up the written data after I finish my
benchmarks? I notice there is an object cleanup command, though I'm unclear
on how to use it.
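For what it's worth, a sketch of the usual cleanup step (the exact arguments the
cleanup subcommand accepts vary between rados versions, so check rados --help;
the pool name here is just the one used in this thread):

# rados -p data cleanup

On versions where cleanup takes a prefix, something like --prefix benchmark_data
restricts removal to the benchmark objects.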
On Tue, Mar 12, 2013 at 2:59 PM, Scott Kinder wrote:
That did the trick, thanks David.
On Tue, Mar 12, 2013 at 2:48 PM, David Zafman wrote:
>
> Try doing something like this first.
>
> rados bench -p data 300 write --no-cleanup
>
> David Zafman
> Senior Developer
> http://www.inktank.com
When I try to run a rados bench, I see the following error:
# rados bench -p data 300 seq
Must write data before running a read benchmark!
error during benchmark: -2
error 2: (2) No such file or directory
There have been objects written to the data pool. What's required to get the
read bench test to work?
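For reference, the minimal sequence that avoids this error, assembled from the
commands already shown in this thread (write first and keep the objects, then
run the read benchmark):

# rados bench -p data 300 write --no-cleanup
# rados bench -p data 300 seq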
Turns out, this was a configuration error on my part. I had no MDS defined.
Fixed that, now I can mount ceph on remote hosts.
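A rough sketch of the two pieces involved, for anyone hitting the same thing;
the MDS section name and host below are illustrative rather than copied from
the pastebin, and the monitor IP is the one mentioned elsewhere in this thread:

[mds.a]
host = ceph-node1

# mount -t ceph 10.122.32.21:6789:/ /mnt/ceph

With cephx enabled you would also need to pass a name and secret to mount, but
auth is disabled in this setup.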
On Mon, Mar 11, 2013 at 10:54 AM, Scott Kinder wrote:
> So on another server, which never mounted the ceph FS from 10.122.32.21,
> it still outputs the mount error.
On Mon, Mar 11, 2013 at 10:37 AM, Scott Kinder wrote:
Here's my ceph.conf file.
http://pastebin.com/JgE9YuHd
On Mon, Mar 11, 2013 at 10:32 AM, Scott Kinder wrote:
> I do not, that server is offline. Though at one point, I did have a
> monitor running at that IP, and had mounted the ceph FS from that server on
> the server which is
2013 at 10:19 AM, Sam Lang wrote:
> On Mon, Mar 11, 2013 at 11:05 AM, Scott Kinder
> wrote:
> > Auth is disabled, John. I tried mounting the ceph FS on an Ubuntu server,
> > and I see the following error in /var/log/syslog:
> >
> > libceph: mon0 10.122.32.21:6
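As an aside, "auth is disabled" in a ceph.conf of this era usually looks
something like the lines below; this is a generic sketch, not taken from the
pastebin above:

[global]
auth cluster required = none
auth service required = none
auth client required = none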
> the client with proper chmod permissions?
>
> On Sun, Mar 10, 2013 at 6:11 AM, Wido den Hollander wrote:
> > On 03/09/2013 06:55 PM, Scott Kinder wrote:
I'm running ceph 0.56.3 on CentOS, and when I try to mount ceph as a file
system on other servers, the process just waits interminably. I'm not
seeing any relevant entries in syslog on the hosts trying to mount the file
system, nor am I seeing any entries in the ceph monitor logs. Any ideas on
how to troubleshoot this?
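For anyone debugging a similar hang, two generic checks that can help narrow
this kind of thing down (nothing here is specific to this setup):

# ceph -s
# dmesg | tail

ceph -s on a cluster node shows overall health and whether an MDS is up at all,
and dmesg on the client usually shows the kernel ceph/libceph messages that
never make it into the mount command's output.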
> -Greg
>
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Tuesday, March 5, 2013 at 8:44 AM, Scott Kinder wrote:
When is ceph 0.57 going to be available from the ceph.com PPA? I checked,
and all releases under http://ceph.com/debian/dists/ seem to still be
0.56.3. Or am I missing something?
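For reference, one way to see which version the repository is actually serving
on a client that has the ceph.com sources configured; the sources line below is
the commonly documented form, and "precise" is just an example release codename:

deb http://ceph.com/debian/ precise main

# apt-get update && apt-cache policy ceph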
Excellent, thanks for the clarification.
On Fri, Mar 1, 2013 at 10:20 AM, Gregory Farnum wrote:
> On Fri, Mar 1, 2013 at 8:17 AM, Scott Kinder wrote:
> > In my ceph.conf file, I set the options under the [osd] section:
> >
> > osd pool default pg num = 133
> > osd pool default pgp num = 133
pg_num 2624 pgp_num 2624 last_change 1 owner 0
pool 2 'rbd' rep size 2 crush_ruleset 2 object_hash rjenkins pg_num 2624 pgp_num 2624 last_change 1 owner 0
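For context, pool lines in this format look like the output of ceph osd dump,
which is a quick way to confirm the pg_num actually in effect for each pool:

# ceph osd dump | grep ^pool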
On Fri, Mar 1, 2013 at 9:17 AM, Scott Kinder wrote:
In my ceph.conf file, I set the options under the [osd] section:
osd pool default pg num = 133
osd pool default pgp num = 133
And yet, after running mkcephfs, when I do a ceph -s it shows:
pgmap v23972: 7872 pgs: 7872 active+clean;
I should also mention I have 40 OSDs, with a replica level of
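Greg's full explanation is cut off above, so as a general note rather than a
restatement of it: regardless of what the default options end up applying to, a
pool can always be created with an explicit placement-group count, for example
(the pool name here is just a placeholder):

# ceph osd pool create mypool 133 133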