Hi there,
I believe the Ceph quick start gives us two OSDs; even though it's not
recommended, I want to increase that number.
My question is: will it end badly if I end up with 10 OSDs per host? And to
add them, I have to edit the configuration file, right? Finally, if I
finished the ins
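For reference, a minimal sketch of how extra OSDs are declared in a
mkcephfs-style ceph.conf; the hostnames and device paths here are hypothetical:

[osd.0]
    host = node1
    devs = /dev/sdb
[osd.1]
    host = node1
    devs = /dev/sdc

...and so on, one [osd.N] section per disk, up to osd.9 for ten OSDs on one host.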
Hi,
I've collected an OSD log with these parameters:
debug osd = 20
debug ms = 1
debug filestore = 20
You can download it from here:
https://docs.google.com/file/d/0B1lZcgrNMBAJVjBqa1lJRndxc2M/edit?usp=sharing
I have also captured a video to show the behavior in real time:
http://youtu.be/708AI8P
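For anyone reproducing this: those three settings normally go in the [osd]
section of ceph.conf and take effect after a daemon restart:

[osd]
    debug osd = 20
    debug ms = 1
    debug filestore = 20

They can also be injected into a running daemon; the exact syntax varies by
version, but something along the lines of the following should work:

ceph tell osd.0 injectargs '--debug-osd 20 --debug-ms 1 --debug-filestore 20'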
On Tue, Mar 5, 2013 at 3:44 AM, Gandalf Corvotempesta wrote:
> Hello,
> is it possible to get a list of operations made on a bucket (for example,
> put file, delete file, and so on) through a radosgw call?
Not at the moment. If enabled, the ops log can be accessed through
radosgw-admin.
Yehuda
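A sketch of the two pieces involved, assuming a typical gateway section name
in ceph.conf:

[client.radosgw.gateway]
    rgw enable ops log = true

# radosgw-admin log list
# radosgw-admin log show --object=<log-object-name>

log list enumerates the stored log objects, and log show prints one of them.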
2013/3/5 Yehuda Sadeh :
> Not at the moment. If enabled, the ops log can be accessed through
> radosgw-admin.
Logs are enabled in my configuration, but I'm unable to get them from
radosgw-admin:
# radosgw-admin log show --bucket=mybucket --date=2013-02-01
object or (at least one of date, bucket, bucket-id) were not specified
On Mon, Mar 4, 2013 at 10:00 PM, Nick Bartos wrote:
> When trying to create a swift container with a name of only 1 or 2
> characters, I get a 400 error. When choosing a container name with at
> least 3 characters, this does not happen. Is this an arbitrary limit
> that can be easily changed?
>
> M
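For anyone reproducing this, a quick test with the standard python-swiftclient;
the auth URL and credentials are placeholders:

# swift -A http://gateway:8080/auth/1.0 -U test:tester -K secret post ab
  (fails with 400 Bad Request)
# swift -A http://gateway:8080/auth/1.0 -U test:tester -K secret post abc
  (succeeds)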
On Tue, Mar 5, 2013 at 6:48 AM, Gandalf Corvotempesta wrote:
> 2013/3/5 Yehuda Sadeh :
>> Not at the moment. If enabled, the ops log can be accessed through
>> radosgw-admin.
>
> Logs are enabled in my configuration, but I'm unable to get them from
> radosgw-admin:
>
> # radosgw-admin log show --bucket=mybucket --date=2013-02-01
2013/3/5 Yehuda Sadeh :
> You're missing the bucket-id param. You can get it via:
>
> # radosgw-admin bucket stats --bucket=mybucket
Thank you.
I think that you should change this message:
"object or (at least one of date, bucket, bucket-id) were not specified"
because as far as I understand it,
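Putting the two messages together, the working sequence looks roughly like
this; the id value comes from the stats output:

# radosgw-admin bucket stats --bucket=mybucket
  (note the "id" field in the output)
# radosgw-admin log show --bucket=mybucket --date=2013-02-01 --bucket-id=<id>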
On Tue, Mar 5, 2013 at 7:10 AM, Gandalf Corvotempesta wrote:
> 2013/3/5 Yehuda Sadeh :
>> You're missing the bucket-id param. You can get it via:
>>
>> # radosgw-admin bucket stats --bucket=mybucket
>
> Thank you.
> I think that you should change this message:
>
> "object or (at least one of date,
Forwarding to Ceph-users
Ian R. Colle
Ceph Program Manager
Inktank
Twitter: ircolle
LinkedIn: www.linkedin.com/in/ircolle
Begin forwarded message:
> From: Andrew Hume
> Date: March 5, 2013, 3:55:35 MST
> To: ceph-de...@vger.kernel.org
> Subject: rados get failure
>
> I have a new ceph install
When is ceph 0.57 going to be available from the ceph.com PPA? I checked,
and all releases under http://ceph.com/debian/dists/ seem to still be
0.56.3. Or am I missing something?
According to
http://ceph.com/docs/master/rados/configuration/ceph-conf/#logs-debugging
there are in-memory logs. How are those accessed? I tried specifying
'debug rgw 0/20', but when an error occurs (such as "-1 Initialization
timeout, failed to initialize"), no additional verbosity is put in the log.
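The two numbers there are the file log level and the in-memory log level; as
far as I know, the in-memory log is only dumped into the regular log file when
the daemon hits a fatal event such as an assert. To get level-20 output
written to the file directly, the setting would need to be:

debug rgw = 20/20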
Try http://ceph.com/debian-testing/dists/
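That is, point apt at the testing repository instead of the stable one,
roughly:

echo deb http://ceph.com/debian-testing/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt-get update && sudo apt-get install ceph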
On Mar 5, 2013, at 11:44 AM, Scott Kinder wrote:
> When is ceph 0.57 going to be available from the ceph.com PPA? I checked, and
> all releases under http://ceph.com/debian/dists/ seem to still be 0.56.3. Or
> am I missing something?
I believe the debian folder only includes stable releases; .57 is a dev
release. See http://ceph.com/docs/master/install/debian/ for more! :)
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Tuesday, March 5, 2013 at 8:44 AM, Scott Kinder wrote:
> When is ceph 0.57 going t
This is a companion discussion to the blog post at
http://ceph.com/dev-notes/cephfs-mds-status-discussion/ — go read that!
The short and slightly alternate version: I spent most of about two weeks
working on bugs related to snapshots in the MDS, and we started realizing that
we could probably d
Ah, okay. I did not realize it was a dev release. Thanks for the
clarification.
On Tue, Mar 5, 2013 at 10:01 AM, Greg Farnum wrote:
> I believe the debian folder only includes stable releases; .57 is a dev
> release. See http://ceph.com/docs/master/install/debian/ for more! :)
> -Greg
>
> Softw
The only two features I'd deem necessary for our workload would be
stable distributed metadata / MDS and a working fsck equivalent.
Snapshots would be great once the feature is deemed stable, as would
nfs / cifs reexport, and quota support eventually would be nice to
have. Anything else is gravy.
On 03/05/2013 12:27 PM, Dino Yancey wrote:
> The only two features I'd deem necessary for our workload would be
> stable distributed metadata / MDS and a working fsck equivalent.
> Snapshots would be great once the feature is deemed stable, as would
> nfs / cifs reexport, and quota support eventual
It's been two weeks and v0.58 is baked. Notable changes since v0.57
include:
* mon: rearchitected to utilize single instance of paxos and a key/value
store (Joao Luis)
* librbd: fixed some locking issues with flatten (Josh Durgin)
* rbd: udevadm settle on map/unmap to avoid various races (D
On Tue, Mar 05, 2013 at 12:27:04PM -0600, Dino Yancey wrote:
> The only two features I'd deem necessary for our workload would be
> stable distributed metadata / MDS and a working fsck equivalent.
> Snapshots would be great once the feature is deemed stable, as would
We have the same needs here.
I'm currently running CentOS on 3.6.9 and haven't updated it only because
of my own laziness. I'd be happy to provide .config files for this.
On 03/05/2013 01:38 PM, Dimitri Maziuk wrote:
> On 03/05/2013 12:27 PM, Dino Yancey wrote:
>> The only two features I'd deem necessary for our workload wou
Please keep discussion on the list where it starts. I've added
ceph-users back to the Cc list.
You have a key error because you didn't install the key for the packages
you're installing. See http://ceph.com/docs/master/install/debian/ and
all the instructions about keys (there are several, so
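The step that usually gets missed is importing the release key before running
apt-get update; the install docs give roughly this command:

wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -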
Hi,
right now I have a bunch of OSD hosts (servers) which have just 4 disks
each. All of them use SSDs right now.
So I have a lot of free hard disk slots in the chassis. My idea was to
create a second Ceph system using these free slots. Is this possible? Or
should I just extend the first one with
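Running a second, independent cluster on the same hosts is possible in
principle: it needs its own configuration file with a distinct fsid, monitors
listening on different ports, and separate data paths, and every command has
to be pointed at that file with -c. A rough sketch, all names and ports
hypothetical:

[global]
    fsid = <a fresh uuid>
[mon.a]
    host = node1
    mon addr = 10.0.0.1:6790

# ceph -c /etc/ceph/second.conf health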
Andrew, this isn't by chance a 32-bit build or kernel, is it?
I just filled 32GB with random data, then truncated out to 2TB, and am
reading with no errors so far. I don't know that that proves anything;
the lack of detail makes it hard to know.
On 03/05/2013 07:14 AM, Ian Colle wrote:
> Forwa
On 03/05/2013 02:13 PM, Steven Presser wrote:
> I'm currently running centos on 3.6.9 and only haven't updated it
> because of my own laziness. I'd be happy to provide .config files for
> this.
I only have about 60 hosts here and 3 dozen more downstairs, plus a
bunch of programming projects. Gene
On 03/05/2013 03:25 PM, Dimitri Maziuk wrote:
> On 03/05/2013 02:13 PM, Steven Presser wrote:
>> I'm currently running centos on 3.6.9 and only haven't updated it
>> because of my own laziness. I'd be happy to provide .config files for
>> this.
I mean, thanks for the offer, but the issue isn't th
Andrew: what version of Ceph? It seems like
234becd3447a679a919af458440bc31c8bd6b84f may well address this issue;
it's in v0.57.
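To check whether a given fix is in the version you run, the commit can be
looked up in a ceph.git checkout, e.g.:

git tag --contains 234becd3447a679a919af458440bc31c8bd6b84f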
On 03/05/2013 07:14 AM, Ian Colle wrote:
> Forwarding to Ceph-users
> Ian R. Colle
> Ceph Program Manager
> Inktank
> Twitter: ircolle
> LinkedIn: www.linkedin.com/in/ircolle
I am using 0.56.3.
It is likely a 32-bit issue.
My architecture is x86_64 and I just get the normal RPM.
But looking at the code (rados.cc), the size parameter to do_get
is passed as an int (it should be a uint64).
Thanks for looking at this.
On Mar 5, 2013, at 2:56 PM, Dan Mick wrote:
> Andrew:
If you mean op_size, in the current code that's just the per-read/write
size; the offset is uint64_t, and tracks the entire object. op_size is
4M by default.
However, now that I look in v0.56.3, there was only one read, and its
size was put into an int 'ret', and that will surely break past 2GB.
On Tue, Mar 5, 2013 at 12:44 PM, Kevin Decherf wrote:
>
> On Tue, Mar 05, 2013 at 12:27:04PM -0600, Dino Yancey wrote:
> > The only two features I'd deem necessary for our workload would be
> > stable distributed metadata / MDS and a working fsck equivalent.
> > Snapshots would be great once the f
Hello,
I've been working to deploy Ceph recently, have been conducting some
performance tests, and have run into some problems. I've created an rbd
image, mapped it on a client, and put a file system on it. I've tried
both ext4 and xfs, with the same results. After a while
the
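For context, the setup described is roughly the following; image name and
size are illustrative:

# rbd create test --size 10240
# rbd map test
# mkfs.xfs /dev/rbd0
# mount /dev/rbd0 /mnt

(the device node may appear as /dev/rbd0 or /dev/rbd/rbd/test depending on
version)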
On Tuesday, March 5, 2013 at 5:53 AM, Marco Aroldi wrote:
> Hi,
> I've collected an OSD log with these parameters:
>
> debug osd = 20
> debug ms = 1
> debug filestore = 20
>
> You can download it from here:
> https://docs.google.com/file/d/0B1lZcgrNMBAJVjBqa1lJRndxc2M/edit?usp=sharing
>
> I ha
As an extra request, it would be great if people explained a little
about their use-case for the filesystem so we can better understand
how the requested features map to the types of workloads people are
trying to run.
Thanks
Neil
On Tue, Mar 5, 2013 at 9:03 AM, Greg Farnum wrote:
> This is a companion
What kernel version are you using on your client?
On Tue, Mar 5, 2013 at 4:48 PM, tra26 wrote:
> Hello,
>
> I've been working to deploy ceph recently and have been conducting some
> performance tests and have ran into some problems. I've created an rbd
> image, mapped it on a client and put a fil
All systems are running: 3.2.0-38-generic.
-Trevor
On 2013-03-06 00:40, Dino Yancey wrote:
> What kernel version are you using on your client?
> On Tue, Mar 5, 2013 at 4:48 PM, tra26 wrote:
>> Hello,
>> I've been working to deploy ceph recently and have been conducting some
>> performance tests and hav