On Wed, Sep 4, 2013 at 11:56 PM, Sage Weil wrote:
> On Wed, 4 Sep 2013, Alphe Salas Michels wrote:
> > Hi again,
as I was doomed to fully wipe my cluster once again, I updated to
ceph-deploy 1.2.3
and all went smoothly along my ceph-deploy process.
> >
> > until I create the mds an
>>>What happens if you do
>>>ceph-disk -v activate /dev/sdaa1
>>>on ceph001?
Hi. My issue has not been solved. When I execute ceph-disk -v activate
/dev/sdaa - all is ok:
ceph-disk -v activate /dev/sdaa
DEBUG:ceph-disk:Mounting /dev/sdaa on /var/lib/ceph/tmp/mnt.yQuXIa with options
noatime
mount
Hello,
I'm horribly failing at creating a bucket on radosgw at ceph 0.67.2
running on ubuntu 12.04.
Right now I feel frustrated with radosgw-admin for being inconsistent
in its options: it's possible to list the buckets and also to delete
them, but not to create them!
No matter what I tried -
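For what it's worth, bucket creation normally goes through the S3 API rather
than radosgw-admin. A minimal sketch with boto; the endpoint, credentials, and
bucket name here are illustrative, not from the thread:

    import boto
    import boto.s3.connection

    # Connect to the radosgw endpoint (placeholder host and keys).
    conn = boto.connect_s3(
        aws_access_key_id='ACCESS_KEY',
        aws_secret_access_key='SECRET_KEY',
        host='rgw.example.com',
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )
    bucket = conn.create_bucket('bucket-1')   # path-style PUT /bucket-1
    print([b.name for b in conn.get_all_buckets()])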
That's correct. We created 65k buckets, using two hex characters as the
naming convention, then stored the files in each container based on their
first two characters in the file name. The end result was 20-50 files per
bucket. Once all of the buckets were created and files were being loaded,
we
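A sketch of the sharding scheme described above, with illustrative file names
(assuming hex-prefixed names, as in the thread):

    # Route each file into a bucket named after a prefix of its file name.
    def bucket_for(filename, chars=2):
        return filename[:chars].lower()

    print(bucket_for('a3f9c0.jpg'))     # -> 'a3'   (16**2 = 256 possible buckets)
    print(bucket_for('a3f9c0.jpg', 4))  # -> 'a3f9' (16**4 = 65536 possible buckets)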
On 09/05/2013 09:19 AM, Bill Omer wrote:
That's correct. We created 65k buckets, using two hex characters as the
naming convention, then stored the files in each container based on
their first two characters in the file name. The end result was 20-50
files per bucket. Once all of the buckets we
I have read about support for image format 2 in the 3.9 kernel.
Do the 3.9/3.10 kernels support rbd format 2 images now (I need to connect to
images cloned from snapshots)?
--
Blog: www.rekby.ru
Hello Gaylord,
I do not think there is anything implemented to do this. Perhaps it
could be useful, for example with the command "rbd info".
For now, I did not find any other way than using "rbd showmapped" on
each host.
Laurent Barbe
On 05/09/2013 01:18, Gaylord Holder wrote:
Is it p
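A quick sketch of the workaround Laurent describes, asking each host which rbd
images it has mapped (host names are illustrative; assumes passwordless ssh):

    import subprocess

    # Collect 'rbd showmapped' output from every host in the cluster.
    for host in ('node1', 'node2', 'node3'):
        out = subprocess.check_output(['ssh', host, 'rbd', 'showmapped'])
        print(host)
        print(out.decode())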
On Thu, 5 Sep 2013, Pavel Timoschenkov wrote:
> >>>What happens if you do
> >>>ceph-disk -v activate /dev/sdaa1
> >>>on ceph001?
>
> Hi. My issue has not been solved. When I execute ceph-disk -v activate
> /dev/sdaa - all is ok:
> ceph-disk -v activate /dev/sdaa
Try
ceph-disk -v activate /dev/
Hi all,
as a Ceph newbie I have another question that has probably been solved long ago.
I have my test cluster consisting of two OSDs that also host MONs,
plus one to five MONs.
Now I want to reboot all instances, simulating a power failure.
So I shut down the extra MONs,
then shut down the first OSD/MON
Hi Bernhard,
I have my test cluster consisting of two OSDs that also host MONs, plus
one to five MONs.
Are you saying that you have a total of 7 mons?
down the at last, not the other MON though (since - surprise - they
are in this test scenario just virtual instances residing on some
ceph rbds)
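The reason the mon count matters here: the monitors only serve requests while
a strict majority of them is up. A hedged back-of-envelope illustration:

    # Quorum needs a strict majority of the monitors in the monmap.
    def tolerable_failures(total_mons):
        quorum = total_mons // 2 + 1
        return total_mons - quorum

    for n in (3, 5, 7):
        print(n, 'mons: quorum is', n // 2 + 1,
              '- survives', tolerable_failures(n), 'down')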
On Thu, Sep 5, 2013 at 11:42 AM, Bernhard Glomm
wrote:
>
> Hi all,
>
> as a Ceph newbie I have another question that has probably been solved long ago.
> I have my test cluster consisting of two OSDs that also host MONs,
> plus one to five MONs.
> Now I want to reboot all instances, simulating a power failure.
Hello Timofey,
Yes, it works in kernel 3.9 and 3.10.
Laurent Barbe
On 05/09/2013 17:21, Timofey Koolin wrote:
I have read about support for image format 2 in the 3.9 kernel.
Do the 3.9/3.10 kernels support rbd format 2 images now (I need to connect to
images cloned from snapshots)?
--
Blog: www.rekby.ru
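A minimal sketch of the format 2 clone workflow with the python-rbd bindings;
the pool and image names are illustrative and a readable ceph.conf is assumed.
The resulting 'child' image is what a 3.9+ kernel client can then map with
'rbd map':

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')
    try:
        # old_format=False creates a format 2 image, required for cloning.
        rbd.RBD().create(ioctx, 'parent', 1024 ** 3, old_format=False,
                         features=rbd.RBD_FEATURE_LAYERING)
        image = rbd.Image(ioctx, 'parent')
        image.create_snap('base')
        image.protect_snap('base')   # clones require a protected snapshot
        image.close()
        rbd.RBD().clone(ioctx, 'parent', 'base', ioctx, 'child',
                        features=rbd.RBD_FEATURE_LAYERING)
    finally:
        ioctx.close()
        cluster.shutdown()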
Wouldn't using only the first two characters in the file name result
in less than 65k buckets being used?
For example if the file names contained 0-9 and a-f, that would only
be 256 buckets (16*16). Or if they contained 0-9, a-z, and A-Z, that
would only be 3,844 buckets (62 * 62).
Bryan
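Spelling out that arithmetic (a hedged illustration, not from the thread):

    # Buckets reachable with a two-character prefix, per alphabet.
    for alphabet, size in (('hex (0-9, a-f)', 16), ('0-9, a-z, A-Z', 62)):
        print(alphabet, '->', size ** 2, 'buckets')
    # 16**2 = 256 and 62**2 = 3844, far short of 65k either way.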
On Th
On 03/09/2013 14:56, Joao Eduardo Luis wrote:
On 09/03/2013 02:02 AM, 이주헌 wrote:
Hi all.
I have 1 MDS and 3 OSDs. I installed them via ceph-deploy. (dumpling
0.67.2 version)
At first, it worked perfectly. But after I rebooted one of the OSDs, ceph-mon
launched on port 6800 instead of 6789.
This has been
Mark,
Yesterday I blew away all the objects and restarted my test using
multiple buckets, and things are definitely better!
After ~20 hours I've already uploaded ~3.5 million objects, which is
much better than the ~1.5 million I did over ~96 hours this past
weekend. Unfortunately it seems that t
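Back-of-envelope rates implied by those numbers, assuming roughly steady
throughput over each run:

    # ~3.5M objects in ~20h vs ~1.5M objects in ~96h.
    print(3500000 / (20 * 3600.0))  # ~48.6 objects/s with many buckets
    print(1500000 / (96 * 3600.0))  # ~4.3 objects/s in the earlier run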
I'm using all defaults created with ceph-deploy
I will try the rgw cache setting. Do you have any other recommendations?
On Thu, Sep 5, 2013 at 1:14 PM, Yehuda Sadeh wrote:
> On Thu, Sep 5, 2013 at 9:49 AM, Sage Weil wrote:
> > On Thu, 5 Sep 2013, Bill Omer wrote:
> >> Thats correct. We cre
On Thu, Sep 5, 2013 at 9:49 AM, Sage Weil wrote:
> On Thu, 5 Sep 2013, Bill Omer wrote:
>> That's correct. We created 65k buckets, using two hex characters as the
>> naming convention, then stored the files in each container based on their
>> first two characters in the file name. The end result
On Thu, Sep 5, 2013 at 9:31 AM, Alfredo Deza wrote:
> On Thu, Sep 5, 2013 at 11:42 AM, Bernhard Glomm
> wrote:
>>
>> Hi all,
>>
>> as a Ceph newbie I have another question that has probably been solved long ago.
>> I have my test cluster consisting of two OSDs that also host MONs,
>> plus one to five MONs.
>
based on your numbers, you were at something like an average of 186
objects per bucket at the 20 hour mark? I wonder how this trend
compares to what you'd see with a single bucket.
With that many buckets you should have indexes well spread across all of
the OSDs. It'd be interesting to know
Thanks, Neil! Does anyone have a working doc on how to generate a secret for a
CentOS6.4 tech preview machine to access an RBD cluster?
From: Neil Levine <neil.lev...@inktank.com>
Date: Thursday, August 29, 2013 5:01 PM
To: Larry Liu <larry@disney.com>
Cc: "ceph-users@lists.ceph.com
On Thu, 5 Sep 2013, Bill Omer wrote:
> That's correct. We created 65k buckets, using two hex characters as the
> naming convention, then stored the files in each container based on their
> first two characters in the file name. The end result was 20-50 files per
> bucket. Once all of the buckets
Sorry, I meant to say the first four characters, for a total of 65536
buckets
On Thu, Sep 5, 2013 at 12:30 PM, Bryan Stillwell wrote:
> Wouldn't using only the first two characters in the file name result
> in less than 65k buckets being used?
>
> For example if the file names contained 0-9 and
I need to restart the upload process again because all the objects
have a content-type of 'binary/octet-stream' instead of 'image/jpeg',
'image/png', etc. I plan on enabling monitoring this time so we can
see if there are any signs of what might be going on. Did you want me
to increase the number
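One way to avoid ending up with the wrong type on the next run, sketched with
boto; the endpoint, credentials, bucket, and file names are illustrative:

    import mimetypes
    import boto
    import boto.s3.connection

    # Placeholder radosgw connection, as in the earlier sketch.
    conn = boto.connect_s3(
        aws_access_key_id='ACCESS_KEY',
        aws_secret_access_key='SECRET_KEY',
        host='rgw.example.com',
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )
    bucket = conn.get_bucket('a3')
    filename = 'a3f9c0.jpg'
    ctype = mimetypes.guess_type(filename)[0] or 'binary/octet-stream'
    key = bucket.new_key(filename)
    # Set Content-Type explicitly rather than relying on client defaults.
    key.set_contents_from_filename(filename, headers={'Content-Type': ctype})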
On Thu, Sep 5, 2013 at 12:27 AM, Nigel Williams
wrote:
> I notice under the HOSTNAME RESOLUTION section the use of 'host -4
> {hostname}' as a required test, however, in all my trial deployments
> so far, none would pass as this command is a direct DNS query, and
> instead I usually just add entries t
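For comparison: 'host -4' queries DNS directly, while most tooling resolves
names through the system resolver, which consults /etc/hosts first (per
nsswitch.conf). A quick resolver check; the hostname is a placeholder:

    import socket

    # Uses the system resolver, so /etc/hosts entries are honored.
    print(socket.gethostbyname('ceph001'))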
Larry,
If you're talking about how to do that with libvirt and QEMU on
CentOS6.4, you might look at
http://openstack.redhat.com/Using_Ceph_for_Block_Storage_with_RDO. You
just don't need to install and configure OpenStack, obviously. You do
need to get the upstream version of QEMU from the Ceph re
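For the CentOS secret question above, the usual route (as in the RDO doc) is a
libvirt secret carrying the cephx key. A hedged sketch with the libvirt python
bindings; the usage name and key are placeholders, not from the thread:

    import base64
    import libvirt

    SECRET_XML = """
    <secret ephemeral='no' private='no'>
      <usage type='ceph'>
        <name>client.libvirt secret</name>
      </usage>
    </secret>
    """

    conn = libvirt.open('qemu:///system')
    secret = conn.secretDefineXML(SECRET_XML)
    # The cephx key (from 'ceph auth get-key') is base64; libvirt stores raw bytes.
    secret.setValue(base64.b64decode('cGxhY2Vob2xkZXIta2V5'), 0)
    print(secret.UUIDString())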
Let me follow up on that and get back to you. There has been a
significant amount of work on ceph-deploy since that was written.
On Wed, Sep 4, 2013 at 9:27 PM, Nigel Williams
wrote:
> I notice under the HOSTNAME RESOLUTION section the use of 'host -4
> {hostname}' as a required test, however, in all
Hi
Installed the latest Ceph and I'm having a permissions issue; I don't
know where to start looking.
My Config:
(2) osd data nodes
(1) monitor node
(1) mds node
(1) admin node
(1) deploy node
(1) client node (not configured)
All on vmware
I collected all keyrings
I pushed the config file to
On 06/09/13 11:07, Gary Mazzaferro wrote:
Hi
Installed the latest Ceph and I'm having a permissions issue; I don't
know where to start looking.
My Config:
(2) osd data nodes
(1) monitor node
(1) mds node
(1) admin node
(1) deploy node
(1) client node (not configured)
All on vmware
I collect
On Thu, Sep 5, 2013 at 12:38 PM, Gregory Farnum wrote:
> On Thu, Sep 5, 2013 at 9:31 AM, Alfredo Deza wrote:
>> On Thu, Sep 5, 2013 at 11:42 AM, Bernhard Glomm
>> wrote:
>>>
>>> Hi all,
>>>
>>> as a Ceph newbie I have another question that has probably been solved long ago.
>>> I have my test cluster co
I am trying to deploy Ceph following the instructions at this link.
http://ceph.com/docs/master/start/quick-ceph-deploy/
I get the error below. Can someone let me know whether this is something I am
doing wrong or a problem with the script?
[abc@abc-ld ~]$ ceph-deploy install abc-ld
[ceph_deploy.ins
On 23.08.2013 16:24, Yehuda Sadeh wrote:
On Fri, Aug 23, 2013 at 1:47 AM, Tobias Brunner wrote:
Hi,
I'm trying to use radosgw with s3cmd:
# s3cmd ls
# s3cmd mb s3://bucket-1
ERROR: S3 error: 405 (MethodNotAllowed):
So there seems to be something missing regarding buckets. How can I
creat