Hello Cephers
I have been following the Ceph documentation to install and configure RGW,
and fortunately everything went fine and RGW is correctly set up.
Next I would like to use RGW with OpenStack, and for this I have followed
http://ceph.com/docs/master/radosgw/keystone/ ; as per the document I
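For reference, the Keystone options from that page go into the gateway
section of ceph.conf; a rough sketch with placeholder host name and token
(the section name assumes the default radosgw instance):

[client.radosgw.gateway]
rgw keystone url = http://<keystone-host>:35357
rgw keystone admin token = <keystone-admin-token>
rgw keystone accepted roles = Member, admin
rgw keystone token cache size = 500
rgw s3 auth use keystone = true
nss db path = /var/lib/ceph/nss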
*Hello Cephers*
*I have followed the Ceph documentation and my rados gateway setup is working
fine.*
# swift -V 1.0 -A http://bmi-pocfe2.scc.fi/auth -U scc:swift -K secretkey
list
Hello-World
bmi-pocfe2
scc
packstack
test
#
# radosgw-admin bucket stats --bucket=scc
{ "bucket": "scc",
Amazing piece of work Karan, this was something which has been missing for a
long time; thanks for filling the gap.
I got my book today and just finished reading a couple of pages, an excellent
introduction to Ceph.
Thanks again, it's worth purchasing this book.
Best Regards
Vicky
On Fri, Feb 6, 2015 at
Hello There
I am trying to install Giant on CentOS 7 using ceph-deploy and encountered
the below problem.
[rgw-node1][DEBUG ] Package python-ceph is obsoleted by python-rados, but
obsoleting package does not provide for requirements
[rgw-node1][DEBUG ] ---> Package cups-libs.x86_64 1:1.6.3-17.el7 will
https://admin.fedoraproject.org/updates/FEDORA-EPEL-2015-1607/ceph-0.80.7-0.5.el7
>
> - If you can run Hammer (0.94), please try testing that out. The Hammer
> release's packages have been split up to match the split that happened
> in EPEL.
>
> - Ken
>
> On 04/07/2015
9.
>
> regards,
> Sam
>
> On 08-04-15 09:32, Vickey Singh wrote:
>
> Hi Ken
>
>
> As per your suggestion, I tried enabling the epel-testing repository but
> still no luck.
>
>
> Please check the below output. I would really appreciate any help here.
>
> yum install gdisk cryptsetup leveldb python-jinja2 hdparm -y
>
> yum install --disablerepo=base --disablerepo=epel
> ceph-common-0.80.7-0.el7.centos.x86_64 -y
> yum install --disablerepo=base --disablerepo=epel ceph-0.80.7-0.el7.centos
> -y
>
> 2015-04-08 12:40 GMT+
Any suggestions, geeks?
VS
On Wed, Apr 8, 2015 at 2:15 PM, Vickey Singh
wrote:
>
> Hi
>
>
> The below suggestion also didn't work
>
>
> Full logs here : http://paste.ubuntu.com/10771939/
>
>
>
>
> [root@rgw-node1 yum.repos.d]# yum --showduplicates list
Community, need help.
-VS-
On Wed, Apr 8, 2015 at 4:36 PM, Vickey Singh
wrote:
> Any suggestions, geeks?
>
>
> VS
>
> On Wed, Apr 8, 2015 at 2:15 PM, Vickey Singh
> wrote:
>
>>
>> Hi
>>
>>
>> The below suggestion also didn't work
>
Hello Cephers
I am trying to set up RGW using ceph-deploy, as described here:
http://docs.ceph.com/docs/master/start/quick-ceph-deploy/#add-an-rgw-instance
But unfortunately it doesn't seem to be working.
Is there something I am missing, or do you know a fix for this?
[root@ceph-node1 y
de=python-rados python-rbd
>>>>
>>>> So this is what my epel.repo file looks like: http://fpaste.org/208681/
>>>>
>>>> It is those two packages in EPEL that are causing problems. I also
>>>> tried enabling epel-testing, but
Hello Geeks
I am trying to set up Ceph Radosgw multi-site data replication using the
official documentation
http://ceph.com/docs/master/radosgw/federated-config/#multi-site-data-replication
Everything seems to work except the radosgw-agent sync. Please
check the below outputs and help me.
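For completeness: the agent here is driven by a small config file modelled
on the example in that document and started with radosgw-agent -c <file>.
A rough sketch with placeholder endpoint and keys (the exact key set may
differ between radosgw-agent versions):

src_access_key: <system-user-access-key>
src_secret_key: <system-user-secret-key>
destination: http://us-west-1.example.com:7480
dest_access_key: <system-user-access-key>
dest_secret_key: <system-user-secret-key>
log_file: /var/log/radosgw/radosgw-sync-us-east-west.log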
Any help related to this problem would be highly appreciated.
-VS-
On Sun, Apr 26, 2015 at 6:01 PM, Vickey Singh
wrote:
> Hello Geeks
>
>
> I am trying to setup Ceph Radosgw multi site data replication using
> official documentation
> http://ceph.com/docs/master/radosg
Hello Cephers
Still waiting for your help.
I tried several things but no luck.
On Mon, Apr 27, 2015 at 9:07 AM, Vickey Singh
wrote:
> Any help related to this problem would be highly appreciated.
>
> -VS-
>
>
> On Sun, Apr 26, 2015 at 6:01 PM, Vickey Singh > wr
"log_data": "true"},
{ "name": "us-west",
"endpoints": [
"http:\/\/us-west-1.crosslogic.com:7480\/"],
"log_meta": "true",
Hello Geeks
Need your help and advice on this problem.
- VS -
On Tue, Apr 28, 2015 at 12:48 AM, Vickey Singh
wrote:
> Hello Alfredo / Craig
>
> First of all Thank You So much for replying and giving your precious time
> to this problem.
>
> @Alfredo : I tried version rad
Hello Cephers
A beginner's question on Ceph journal creation. Need answers from experts.
- Is it true that by default ceph-deploy creates the journal on a dedicated
partition and the data on another partition, and that it does not create the
journal as a file?
ceph-deploy osd create ceph-node1:/dev/sdb
This command i
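For reference, my understanding of the ceph-deploy syntax (please correct
me if wrong) is that the journal location is an optional second component
of the disk argument, so a sketch with an assumed SSD /dev/sdc for the
journal would be:

ceph-deploy osd create ceph-node1:/dev/sdb:/dev/sdc
# versus the default, where a journal partition is carved out of /dev/sdb:
ceph-deploy osd create ceph-node1:/dev/sdb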
o this:
> ceph-deploy osd create ceph-node1:/ceph-disk
>
> Your journal would be a file doing it this way.
>
>
>
>
>
>
> Michael Kuriger
>
> Sr. Unix Systems Engineer
>
> mk7...@yp.com | 818-649-7235
>
> From: Vickey Singh
&g
Hello Ceph lovers
You may have noticed that Red Hat has recently released Red Hat Ceph
Storage 1.3
http://redhatstorage.redhat.com/2015/06/25/announcing-red-hat-ceph-storage-1-3/
My questions are:
- What is the exact version number of open source Ceph provided with this
product?
- RHCS 1.3 Feature
idden" git repo for the ceph
> releases done by redhat? Or how can we understand this?
>
> Stefan
>
> >
> > Cheers
> >
> > On 01/07/2015 23:02, Vickey Singh wrote:
> >> Hello Ceph lovers
> >>
> >> You would have noticed th
Hello Community
I am facing a very weird problem with Ceph socket files.
On all monitor nodes, under /var/run/ceph/ I can see ~160 thousand .asok
files; most of the file names are ceph-client.admin.*
*If I delete these files, they get regenerated very quickly.*
*Could someone please answ
Hi, thank you for your reply.
No, these nodes are just Ceph nodes, nothing shared with OpenStack.
In fact I am not using OpenStack with this Ceph cluster.
However, I have configured Calamari on this cluster, but I am not sure if that
has caused the problem. I also tried to remove the Calamari configuratio
Thanks Nick for your suggestion.
Can you also tell me how I can reduce the RBD block size to 512K or 1M? Do I
need to put something in the client's ceph.conf (what parameter do I need to set)?
Thanks once again
- Vickey
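To make the question concrete, my understanding so far (please correct me)
is that the object size is fixed per image at creation time via --order,
and a default can be set in the client's ceph.conf; a sketch with
hypothetical image names:

rbd create rbd/test-512k --size 10240 --order 19   # 512K objects
rbd create rbd/test-1m --size 10240 --order 20     # 1M objects

# or, for newly created images only, in the [client] section of ceph.conf:
rbd default order = 20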
On Wed, Aug 12, 2015 at 4:49 PM, Nick Fisk wrote:
> > -Original Message-
> >
der=20, 512 is
> order=19.
>
> Thanks,
> Bill Sanders
>
>
> On Thu, Aug 13, 2015 at 1:31 AM, Vickey Singh > wrote:
>
>> Thanks Nick for your suggestion.
>>
>> Can you also tell me how I can reduce the RBD block size to 512K or 1M? Do I
>> need to put someth
Hello Ceph Geeks
I am planning to develop a Python plugin that pulls out the cluster *recovery
IO* and *client IO* operation metrics, which can then be used with
collectd.
*For example, I need to extract these values:*
*recovery io 814 MB/s, 101 objects/s*
*client io 85475 kB/s rd, 1430 kB/s wr,
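As a starting point, this is the kind of minimal sketch I have in mind:
shell out to 'ceph status --format json' and read the rates from the pgmap
section. The key names below are an assumption on my side (Hammer-era
output) and need to be verified against your own cluster:

#!/usr/bin/env python
# Sketch: pull client IO and recovery IO rates from "ceph status".
import json
import subprocess

def ceph_status():
    # Ask the cluster for its status as JSON.
    out = subprocess.check_output(['ceph', 'status', '--format', 'json'])
    return json.loads(out)

def io_metrics():
    # pgmap carries the throughput counters shown by "ceph -s".
    pgmap = ceph_status().get('pgmap', {})
    return {
        'client_read_bytes_sec':  pgmap.get('read_bytes_sec', 0),
        'client_write_bytes_sec': pgmap.get('write_bytes_sec', 0),
        'recovery_bytes_sec':     pgmap.get('recovering_bytes_per_sec', 0),
        'recovery_objects_sec':   pgmap.get('recovering_objects_per_sec', 0),
    }

if __name__ == '__main__':
    for name, value in sorted(io_metrics().items()):
        print('%s: %s' % (name, value))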
Hello Cephers, need your advice and tips here.
*Problem statement: the Ceph RBD gets unmapped each time I reboot my server.
After every reboot I need to manually map it and mount it.*
*Setup:*
Ceph Firefly 0.80.1
CentOS 6.5, Kernel: 3.15.0-1
I have tried doing as mentioned in the b
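For context, the approach I am aiming for (as far as I understand it, with
placeholder pool/image names and the admin keyring assumed) is to list the
image in /etc/ceph/rbdmap so the rbdmap init script maps it at boot, and
mount it from fstab with _netdev:

# /etc/ceph/rbdmap
rbd/myimage    id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

# enable the init script on CentOS 6
chkconfig rbdmap on

# /etc/fstab
/dev/rbd/rbd/myimage  /mnt/myimage  xfs  defaults,noatime,_netdev  0 0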
ile: No such file or directory
>
> Looks like it's trying to mount, but your secretfile is gone.
>
>
> *Chris Armstrong*Head of Services
> OpDemand / Deis.io
>
> GitHub: https://github.com/deis/deis -- Docs: http://docs.deis.io/
>
>
> On Sat, Oct 25, 2014 at 2
Hello Ceph Experts
I have a strange problem: when I am reading or writing to a Ceph pool, it is
not writing properly. Please notice Cur MB/s, which keeps going up and down.
--- Ceph Hammer 0.94.2
-- CentOS 6, 2.6
-- Ceph cluster is healthy
One interesting thing is that whenever I start the rados bench comman
Thank you Mark, please see my response below.
On Wed, Sep 2, 2015 at 5:23 PM, Mark Nelson wrote:
> On 09/02/2015 08:51 AM, Vickey Singh wrote:
>
>> Hello Ceph Experts
>>
>> I have a strange problem: when I am reading or writing to a Ceph pool,
>> it is not writing
and
CPU utilization
- Vickey -
On Wed, Sep 2, 2015 at 11:28 PM, Vickey Singh
wrote:
> Thank You Mark , please see my response below.
>
> On Wed, Sep 2, 2015 at 5:23 PM, Mark Nelson wrote:
>
>> On 09/02/2015 08:51 AM, Vickey Singh wrote:
>>
>>> Hello Ceph
Dear Experts
Can someone please help me understand why my cluster is not able to write data?
See the below output: cur MB/s is 0 and Avg MB/s is decreasing.
Ceph Hammer 0.94.2
CentOS 6 (3.10.69-1)
The Ceph status says OPS are blocked; I have tried checking everything I
know:
- System resources ( CPU ,
ou may want to iperf everything just in case.
>
Yeah, I did that; iperf shows no problem.
Is there anything else I should do?
>
> --Lincoln
>
>
> On 9/7/2015 9:36 AM, Vickey Singh wrote:
>
> Dear Experts
>
> Can someone please help me understand why my cluster is not able to write
Adding ceph-users.
On Mon, Sep 7, 2015 at 11:31 PM, Vickey Singh
wrote:
>
>
> On Mon, Sep 7, 2015 at 10:04 PM, Udo Lembke wrote:
>
>> Hi Vickey,
>>
> Thanks for your time in replying to my problem.
>
>
>> I had the same rados bench output after changing
Hello Experts,
I want to increase my Ceph cluster's read performance.
I have several OSD nodes with 196G RAM each. On my OSD nodes Ceph uses just
15-20 GB of RAM.
So, can I instruct Ceph to make use of the remaining 150GB+ RAM as a read
cache, so that it caches data in RAM and serves it to client
16 32445 32429 321.031 0 - 0.193655
> > 405 16 32445 32429 320.238 0 - 0.193655
> > 406 16 32445 32429 319.45 0 - 0.193655
> > 407 16 32445 32429 318.665 0 -
' Hello IT, have you tried turning it OFF and ON again ' ]
It would be really helpful if someone could provide a real solution.
>
> —Lincoln
>
>
> > On Sep 7, 2015, at 3:35 PM, Vickey Singh
> wrote:
> >
> > Adding ceph-users.
> >
> > On Mon, Sep 7, 2015 a
Agreed with Alphe, Ceph Hammer (0.94.2) sucks when it comes to recovery
and rebalancing.
Here is my Ceph Hammer cluster, which has been like this for more than 30 hours.
You might be wondering about that one OSD which is down and not in. That is
intentional; I want to remove that OSD.
I want the cluster
Hello Guys
I am doing hardware planning / selection for a new production Ceph cluster. Just
wondering how I should select memory.
*I have found two different rules for selecting memory per Ceph OSD (on the
Internet / googling / presentations):*
*#1 1GB / Ceph OSD or 2GB / Ceph OSD ( for more performa
On Fri, Sep 18, 2015 at 6:33 PM, Robert LeBlanc
wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> Depends on how easy it is to rebuild an OS from scratch. If you have
> something like Puppet or Chef that configure a node completely for
> you, it may not be too much of a pain to forgo
Hello Ceph Geeks
Need your comments on my understanding of straw2.
- Is straw2 better than straw?
- Is straw2 recommended for production usage?
I have a production Ceph Firefly cluster that I am going to upgrade to
Ceph Hammer pretty soon. Should I use straw2 for all my Ceph pool
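For context, my current understanding (please correct me if this is wrong)
is that straw2 needs Hammer-level CRUSH support on every daemon and client,
and that existing buckets can be switched by editing the decompiled CRUSH
map; roughly:

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt: change "alg straw" to "alg straw2" in the bucket entries
crushtool -c crushmap.txt -o crushmap-new.bin
ceph osd setcrushmap -i crushmap-new.bin
# expect some data movement while placements are recalculated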
On Mon, Nov 9, 2015 at 8:16 PM, Wido den Hollander wrote:
> On 11/09/2015 05:27 PM, Vickey Singh wrote:
> > Hello Ceph Geeks
> >
> > Need your comments with my understanding on straw2.
> >
> >- Is Straw2 better than straw ?
>
> It is not per se bette
Hello Community
Need your help in understanding this.
I have the below node, which is hosting 60 physical disks, running 1 OSD
per disk, so 60 Ceph OSD daemons in total
*[root@node01 ~]# service ceph status | grep -i osd | grep -i running | wc
-l*
*60*
*[root@node01 ~]#*
However if I check the OSD proc
Can anyone please help me understand this.
Thank You
On Mon, Nov 16, 2015 at 5:55 PM, Vickey Singh
wrote:
> Hello Community
>
> Need your help in understanding this.
>
> I have the below node, which is hosting 60 physical disks, running 1 OSD
> per disk so total 60 Ceph OSD
A BIG Thanks Dmitry for your HELP.
On Wed, Nov 18, 2015 at 11:47 AM, Дмитрий Глушенок wrote:
> Hi Vickey,
>
> On 18 Nov 2015, at 11:36, Vickey Singh
> wrote:
>
> Can anyone please help me understand this.
>
> Thank You
>
>
> On Mon, Nov 16, 2015
Hello Guys
Are several million objects with Ceph (for the RGW use case) still an issue?
Or is it fixed?
Thnx
Vickey
On Thu, Jan 28, 2016 at 12:55 AM, Krzysztof Księżyk
wrote:
> Stefan Rogge writes:
>
> >
> >
> > Hi,
> > we are using the Ceph with RadosGW and S3 setting.
> > With more and m
Hello Community, wishing you a great new year :)
This is the recommended upgrade path:
http://docs.ceph.com/docs/master/install/upgrading-ceph/
Ceph Deploy
Ceph Monitors
Ceph OSD Daemons
Ceph Metadata Servers
Ceph Object Gateways
How about upgrading Ceph clients (in my case OpenStack compute an
Hello Guys
Need help with this, thanks
- vickey -
On Tue, Jan 12, 2016 at 12:10 PM, Vickey Singh
wrote:
> Hello Community , wishing you a great new year :)
>
> This is the recommended upgrade path
> http://docs.ceph.com/docs/master/install/upgrading-ceph/
>
> Ceph Depl
Hello Community
I need some guidance on how I can reduce OpenStack instance boot time using
Ceph.
We are using Ceph storage with OpenStack (Cinder, Glance and Nova). All
OpenStack images and instances are stored on Ceph in different pools, the
glance and nova pools respectively.
I assume that Ceph
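In case it matters for the answer: what I have understood so far from the
rbd-openstack documentation is that boot time mostly depends on whether
Nova/Cinder can make copy-on-write clones of the Glance image instead of
full copies, which needs images in RAW format plus this Glance setting
(sketch, not my actual config):

# glance-api.conf
[DEFAULT]
show_image_direct_url = True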
port Nova+RBD.
>
> [1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/
>
> --
>
> Jason Dillaman
>
>
> - Original Message -
>
> > From: "Vickey Singh"
> > To: ceph-users@lists.ceph.com, "ceph-users"
> > Sent: Monday, February 8, 2
Hello Community
Happy Valentine's Day ;-)
I need some advice on using the EXTRA RAM on my OSD servers to improve Ceph's
write performance.
I have 20 OSD servers, each with 256GB RAM and 16 x 6TB OSDs, so assuming the
cluster is not recovering, most of the time the system will have at least
~150GB RAM free. A
Hello Guys
I am getting weird output from osd map. The object does not exist in the pool,
but osd map still shows the PG and OSD on which it is stored.
So I have an rbd device coming from pool 'gold'; this image has an object
'rb.0.10f61.238e1f29.2ac5'
The below commands verify this
*[root@ce
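For comparison, is it maybe the case that 'ceph osd map' only runs the
CRUSH calculation for whatever name it is given, without checking
existence? 'rados stat' does query the OSDs, so I am comparing the two:

ceph osd map gold rb.0.10f61.238e1f29.2ac5    # always prints a PG/OSD mapping
rados -p gold stat rb.0.10f61.238e1f29.2ac5   # errors if the object is absent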
> an object.
> -Greg
>
>
> On Tuesday, February 23, 2016, Vickey Singh
> wrote:
>
>> Hello Guys
>>
>> I am getting weird output from osd map. The object does not exist in the
>> pool, but osd map still shows the PG and OSD on which it is stored.
>>
>&
Adding community for further help on this.
On Tue, Feb 23, 2016 at 10:57 PM, Vickey Singh
wrote:
>
>
> On Tue, Feb 23, 2016 at 9:53 PM, Gregory Farnum
> wrote:
>
>>
>>
>> On Tuesday, February 23, 2016, Vickey Singh
>> wrote:
>>
>>> Thanks
Hello Geeks
Can someone please review and comment on my custom CRUSH maps? I would
really appreciate your help.
My setup: 1 rack, 4 chassis, 3 storage nodes per chassis (so 12 storage
nodes in total), pool size = 3
What I want to achieve is:
- Survive chassis failures, even if I lose 2 co
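A sketch of the kind of rule I have in mind (the root bucket name 'default'
is an assumption, and it presumes chassis-type buckets already exist in the
hierarchy under the rack):

rule replicated_per_chassis {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type chassis
        step emit
}

With pool size = 3 this should put each replica in a different chassis;
losing 2 chassis would still leave one copy, though min_size then decides
whether IO keeps flowing.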