By digging a bit more, I found part of the answer:
http://wiki.ceph.com/Planning/Blueprints/Firefly/rgw%3A_object_versioning
Are there any future plans for Swift?
Thanks
--
Cédric
On 23/04/2014 21:55, Cedric Lemarchand wrote:
> Hi Cephers,
>
> I would like to know if the Swift object
On Sat, Apr 26, 2014 at 7:16 PM, rAn rAnn wrote:
> Hi, I'm on a site with no access to the internet and I'm trying to install
> Ceph.
> During the installation it tries to download files from the internet and
> then I get an error.
> I tried to download the files and make my own repository; also I hav
Thanks all.
I'm trying to deploy from node1 (the admin node) to the new node via the
command "ceph-deploy install node2".
I have copied the two main repositories (noarch and x86_64) to my secure
site and I have encountered the following warnings and errors:
[node2][warnin] http://ceph.com/rmp-emperor
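In case it helps, ceph-deploy can usually be pointed at a local mirror instead of ceph.com. A minimal sketch, assuming a hypothetical internal web server at http://repo.local serving the copied repositories and release key, and a ceph-deploy version that supports --repo-url/--gpg-url:

  # point the install at the internal mirror instead of ceph.com
  ceph-deploy install --repo-url http://repo.local/rpm-emperor/el6 \
                      --gpg-url http://repo.local/release.asc node2

The el6 path is only an example; it should match whatever distro directory was actually mirrored.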
Hi rAn,
On 27/04/2014 13:13, rAn rAnn wrote:
>
> Thanks all.
> I'm trying to deploy from node1 (the admin node) to the new node via the
> command "ceph-deploy install node2".
> I have copied the two main repositories (noarch and x86_64) to my
> secure site and I have encountered the following war
Cédric,
See http://tracker.ceph.com/issues/8221
The S3 and Swift APIs handle versioning very differently, so we'll
implement S3 in the Giant time frame and consider how to handle Swift once
that's completed.
Ian Colle
Director of Engineering
Inktank
On Sunday, April 27, 2014, Cedric Lemarchand
I don't know how to count objects, but it's a test cluster;
I have no more than 50 small files.
2014-04-27 22:33 GMT+02:00 Andrey Korolyov :
> What # of objects do you have? After all, such a large footprint can be
> just a bug in your build if you do not have an extremely high object
> count (>~1e8) or an
For the record, ``rados df'' will give an object count. Would you mind
sending out your ceph.conf? I cannot imagine what parameter may raise
memory consumption so dramatically, so the config at a glance may reveal
some detail. Also, a core dump should be extremely useful (though it's
better to pass the fl
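For reference, a quick sketch of the commands involved here; the pool name "rbd" is only a placeholder:

  # per-pool and total object counts
  rados df
  # or count the objects in one pool directly
  rados -p rbd ls | wc -l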
2014-04-27 23:20 GMT+02:00 Andrey Korolyov :
> For the record, ``rados df'' will give an object count. Would you mind
> sending out your ceph.conf? I cannot imagine what parameter may raise
> memory consumption so dramatically, so the config at a glance may reveal
> some detail. Also, a core dump should b
What # of objects do you have? After all, such a large footprint can be
just a bug in your build if you do not have an extremely high object
count (>~1e8) or any extraordinary configuration parameter.
On Mon, Apr 28, 2014 at 12:26 AM, Gandalf Corvotempesta
wrote:
> So, are you suggesting to lower the pg
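A rough way to see the actual per-OSD memory while this is happening, using nothing Ceph-specific, is:

  # resident set size (KB) of every running ceph-osd process
  ps -C ceph-osd -o pid,rss,cmd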
Thanks Yehuda.
If you had to make a technology choice between S3 and Swift, with equal
features and considering stability and robustness, which would it be?
I ask because I think you have a complete and precise vision of S3 and
Swift, which I haven't, "yet" ;-)
Cheers!
On 27/04/2014 17:04, Yeh
On Mon, Apr 28, 2014 at 1:26 AM, Gandalf Corvotempesta
wrote:
> 2014-04-27 23:20 GMT+02:00 Andrey Korolyov :
>> For the record, ``rados df'' will give an object count. Would you mind
>> sending out your ceph.conf? I cannot imagine what parameter may raise
>> memory consumption so dramatically, so
So, are you suggesting to lower the PG count?
Actually I'm using the suggested number of OSDs*100/replicas,
and I have just 2 OSDs per server.
2014-04-24 19:34 GMT+02:00 Andrey Korolyov :
> On 04/24/2014 08:14 PM, Gandalf Corvotempesta wrote:
>> During a recovery, I'm hitting oom-killer for ceph-o
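To make that formula concrete, a worked sketch with hypothetical numbers (pool name and counts are placeholders; note that at the time pg_num could only be increased on an existing pool, not decreased):

  # e.g. 6 OSDs, replica size 2: 6 * 100 / 2 = 300, usually rounded up to 512
  ceph osd pool get rbd pg_num        # check what a pool currently uses
  ceph osd pool create testpool 512   # pg count is chosen at creation time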
Not implementing the Swift object versioning API doesn't mean that the
S3 implementation is not going to be available through Swift. It means
that we don't implement the unique Swift behavior that diverges from
S3. Specifically, I'm pointing at the rollback-like behavior on object
removal that, as
We discussed it internally a few days ago, and even created some
tickets for future work. The Swift object versioning has some
differences from the S3 one, and our plan at the moment is to have the
S3 one working first and only then do Swift.
Yehuda
On Sun, Apr 27, 2014 at 3:51 AM, Cedric Lemarchand wr
Hi Craig,
Good day to you, and thank you for your enquiry.
As per your suggestion, I have created a 3rd partition on the SSDs and ran
the dd test directly against the device, and the result is very slow.
root@ceph-osd-08:/mnt# dd bs=1M count=128 if=/dev/zero of=/dev/sdg3
conv=fdatasync oflag=d
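For comparison, the kind of dd invocation commonly used to mimic journal-style writes; the device path is the one from the message above, while the oflag value is my assumption since the original line is cut off:

  # 128 x 1M sequential writes, bypassing the page cache and syncing each write
  dd bs=1M count=128 if=/dev/zero of=/dev/sdg3 oflag=direct,dsync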
Dear all,
I have multiple OSDs per node (normally 4), and I realised that on every
node only one OSD writes logs under /var/log/ceph; the rest of the log
files are empty.
root@ceph-osd-07:/var/log/ceph# ls -la *.log
-rw-r--r-- 1 root root 0 Apr 28 06:50 ceph-client.admin.log
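A quick way to check where each running OSD thinks it is logging; osd.0 is just an example id and the admin socket path below is the default one:

  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config get log_file
  # or, on recent versions
  ceph daemon osd.0 config get log_file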