Re: [ceph-users] More problems building Ceph....

2014-07-25 Thread Abhishek L
Noah Watkins writes: > Oh, it looks like autogen.sh is smart about that now. If you are using the > latest master, my suggestion may not be the solution. > > On Fri, Jul 25, 2014 at 11:51 AM, Noah Watkins > wrote: >> Make sure you are initializing the sub-modules.. the autogen.sh script >> should pr

Re: [ceph-users] Fw: single node installation

2014-08-10 Thread Abhishek L
Lorieri writes: > http://ceph.com/docs/dumpling/start/quick-ceph-deploy/ These steps work against the current ceph release (firefly) as well for me, as long as the config file has the setting osd crush chooseleaf type = 0 -- Abhishek L pgp: 69CF 4838 8EE3 746C 5ED4 1F16 F9F0 641F 1B65 E
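
For reference, a minimal sketch of what that looks like in ceph.conf for a single-node cluster. The chooseleaf line is from the thread; the pool size lines are my own assumption for a setup with only one or two OSDs:

    [global]
    # place replicas across OSDs rather than hosts (the default is host,
    # which can never be satisfied on a one-node cluster)
    osd crush chooseleaf type = 0
    # assumption: shrink replication so a couple of OSDs suffice
    osd pool default size = 2
    osd pool default min size = 1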

Re: [ceph-users] Fw: single node installation

2014-08-10 Thread Abhishek L
You can do the same steps on a single node, i.e. install the mons & osds on the same node. Alternatively, if you plan on running ceph as a backend for openstack glance & cinder, you could try the latest devstack http://techs.enovance.com/6572/brace-yourself-devstack-ceph-is-here Re

Re: [ceph-users] updation of container and account while using Swift API

2015-02-06 Thread Abhishek L
Hi pragya jain writes: > Hello all! > I have some basic questions about the process followed by Ceph > software when a user uses the Swift API for accessing its > storage. 1. According to my understanding, to keep the objects listing > in containers and containers listing in an account, Ceph software

Re: [ceph-users] Shadow files

2015-03-18 Thread Abhishek L
Yehuda Sadeh-Weinraub writes: >> Is there a quick way to see which shadow files are safe to delete >> easily? > > There's no easy process. If you know that a lot of the removed data is on > buckets that shouldn't exist anymore then you could start by trying to > identify that. You could do that
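
For readers hitting the same question: shadow objects carry a "__shadow_" marker in their RADOS object names, so a rough inventory can be taken before deciding what is orphaned. A sketch, assuming the pre-jewel default data pool name:

    # count candidate shadow objects in the default rgw data pool
    rados -p .rgw.buckets ls | grep '__shadow_' > shadow_objects.txt
    wc -l shadow_objects.txt

Whether a given shadow object is actually safe to delete still requires matching it back to a live bucket marker, which is the hard part described above.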

[ceph-users] Radosgw multi-region user creation question

2015-03-31 Thread Abhishek L
Hi I'm trying to set up a POC multi-region radosgw configuration (with different ceph clusters). Following the official docs[1], the part about creating the zone system users was not very clear. Going by an example configuration of 2 regions US (master zone us-dc1), EU (master zone eu-dc1) for
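
My reading of the docs so far, as a hedged sketch (the uid, keys and cephx client name below are placeholders, not from the docs): the zone system users apparently have to be created with the same access/secret keys in every zone, so the gateways can authenticate against each other:

    # create the zone system user; repeat with the same keys
    # against each zone's cluster
    radosgw-admin user create --uid=us-system \
        --display-name="US system user" \
        --access-key=SYSTEM_AK --secret=SYSTEM_SK \
        --system --name client.radosgw.us-dc1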

[ceph-users] Radosgw startup failures & misdirected client requests

2015-05-12 Thread Abhishek L
We've had a hammer (0.94.1) (virtual) 3 node/3 osd cluster with radosgws failing to start, erroring out continuously with the following: --8<---cut here---start->8--- 2015-05-06 04:40:38.815545 7f3ef9046840 0 ceph version 0.94.1 (e4bfad3a3c51054df7e537a724c8d

Re: [ceph-users] Radosgw startup failures & misdirected client requests

2015-05-13 Thread Abhishek L
On Tue, May 12, 2015 at 9:13 PM, Abhishek L wrote: > > We've had a hammer (0.94.1) (virtual) 3 node/3 osd cluster with radosgws > failing to start, erroring out continuously with the following: > > --8<---cut here---start->8---

Re: [ceph-users] Radosgw startup failures & misdirected client requests

2015-05-18 Thread Abhishek L
[..] > Seeing this in the firefly cluster as well. Tried a couple of rados > commands on the .rgw.root pool; this is what is happening: > > abhi@st:~$ sudo rados -p .rgw.root put test.txt test.txt > error putting .rgw.root/test.txt: (6) No such device or address > > abhi@st:~$ sudo ceph osd map .rg
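
For anyone debugging a similar ENXIO ("No such device or address") from rados, the usual suspect is a CRUSH rule that maps the pool to no OSDs. A sketch of the checks (the pool option name is the hammer-era one):

    # which rule does the pool use, and does it map anywhere?
    ceph osd pool get .rgw.root crush_ruleset
    ceph osd crush rule dump
    # where would a test object land? an empty up/acting set explains the error
    ceph osd map .rgw.root test.txt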

[ceph-users] PG object skew settings

2015-05-20 Thread Abhishek L
Hi Is it safe to tweak the value of `mon pg warn max object skew` from the default of 10 to a higher value of 20-30 or so? What would be a safe upper limit for this value? Also, what does exceeding this ratio signify in terms of cluster health? We are sometimes hitting this limit in buck
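
If someone wants to try this, a hedged sketch of both ways to change it (the value 20 is illustrative, not a recommendation):

    # persistent, in ceph.conf on the monitors
    [mon]
    mon pg warn max object skew = 20

    # or at runtime via injectargs
    ceph tell mon.* injectargs '--mon_pg_warn_max_object_skew 20'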

Re: [ceph-users] ceph.conf boolean value for mon_cluster_log_to_syslog

2015-05-22 Thread Abhishek L
Gregory Farnum writes: > On Thu, May 21, 2015 at 8:24 AM, Kenneth Waegeman > wrote: >> Hi, >> >> Some strange issue wrt boolean values in the config: >> >> this works: >> >> osd_crush_update_on_start = 0 -> osd not updated >> osd_crush_update_on_start = 1 -> osd updated >> >> In a previous versi
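
For the record, a sketch of the boolean spellings ceph.conf is generally expected to accept, using the option from the thread:

    osd_crush_update_on_start = 0       # false
    osd_crush_update_on_start = false   # also false
    osd_crush_update_on_start = 1       # true
    osd_crush_update_on_start = true    # also true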

Re: [ceph-users] xattrs vs. omap with radosgw

2015-06-17 Thread Abhishek L
On Wed, Jun 17, 2015 at 1:02 PM, Nathan Cutler wrote: >> We've since merged something >> that stripes over several small xattrs so that we can keep things inline, >> but it hasn't been backported to hammer yet. See >> c6cdb4081e366f471b372102905a1192910ab2da. > > Hi Sage: > > You wrote "yet" - sh

Re: [ceph-users] 'pgs stuck unclean ' problem

2015-06-29 Thread Abhishek L
jan.zel...@id.unibe.ch writes: > Hi, > > as I had the same issue in a little virtualized test environment (3 x 10g lvm > volumes) I would like to understand the 'weight' thing. > I did not find any "user-friendly explanation" for that kind of problem. > > The only explanation I found is on > ht
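
The short version of the 'weight' thing: CRUSH weight is meant to reflect capacity (conventionally about 1.0 per TB), so very small test volumes end up with a weight that rounds to 0 and never receive data, leaving PGs stuck unclean. A hedged sketch of the workaround on such test clusters (the weight value is illustrative):

    # inspect current weights; tiny OSDs often show up with weight 0
    ceph osd tree
    # give a small test OSD a non-zero crush weight
    ceph osd crush reweight osd.0 0.05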

Re: [ceph-users] Health WARN, ceph errors looping

2015-07-07 Thread Abhishek L
Steve Dainard writes: > Hello, > > Ceph 0.94.1 > 2 hosts, Centos 7 > > I have two hosts, one of which ran out of / disk space, which crashed all > the osd daemons. After cleaning up the OS disk storage and restarting > ceph on that node, I'm seeing multiple errors, then health OK, then > back into th

Re: [ceph-users] pg_num docs conflict with Hammer PG count warning

2015-08-06 Thread Abhishek L
On Thu, Aug 6, 2015 at 1:55 PM, Hector Martin wrote: > On 2015-08-06 17:18, Wido den Hollander wrote: >> >> The amount of PGs is cluster wide and not per pool. So if you have 48 >> OSDs the rule of thumb is: 48 * 100 / 3 = 1600 PGs cluster wide. >> >> Now, with enough memory you can easily have 100
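
Worked through, the rule of thumb from the quoted mail looks like this (the pool name and the per-pool split are my own illustration):

    # total PGs ~ (OSDs * 100) / replica count, rounded up to a
    # power of two and then divided among the pools:
    #   48 OSDs, size 3: 48 * 100 / 3 = 1600 -> 2048 PGs cluster-wide
    # a single pool taking a quarter of that budget:
    ceph osd pool create mypool 512 512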

Re: [ceph-users] Repair inconsistent pgs..

2015-08-18 Thread Abhishek L
Voloshanenko Igor writes: > Hi Irek, please read carefully ))) > Your proposal was the first thing I tried... That's why I asked for > help... ( > > 2015-08-18 8:34 GMT+03:00 Irek Fasikhov : > >> Hi, Igor. >> >> You need to repair the PG. >> >> for i in `ceph pg dump| grep inconsistent | grep -v '
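
The quoted one-liner is cut off by the archive; a hedged reconstruction of the same idea, driven from ceph health detail instead (verify what caused the inconsistency before repairing blindly):

    # list inconsistent PGs and kick off a repair on each
    for pg in $(ceph health detail | awk '$1 == "pg" && /inconsistent/ {print $2}'); do
        ceph pg repair "$pg"
    done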

Re: [ceph-users] Why are RGW pools all prefixed with a period (.)?

2015-08-27 Thread Abhishek L
On Thu, Aug 27, 2015 at 3:01 PM, Wido den Hollander wrote: > On 08/26/2015 05:17 PM, Yehuda Sadeh-Weinraub wrote: >> On Wed, Aug 26, 2015 at 6:26 AM, Gregory Farnum wrote: >>> On Wed, Aug 26, 2015 at 9:36 AM, Wido den Hollander wrote: Hi, It's something which has been 'bugging' me

Re: [ceph-users] How to back up RGW buckets or RBD snapshots

2015-08-28 Thread Abhishek L
Somnath Roy writes: > Hi, > I wanted to know how RGW users are backing up bucket contents, so that > in a disaster scenario the user can recreate the setup. > I know there is geo-replication support but it could be an expensive > proposition. > I wanted to know if there is any simple solutio

Re: [ceph-users] Ceph fs has error: no valid command found; 10 closest matches: fsid

2014-11-25 Thread Abhishek L
Huynh Dac Nguyen writes: > Hi Chris, > > > I see. > I'm running version 0.80.7. > How do we know which part of the documentation applies to our version? As you see, we > have only one ceph document here, which makes it confusing. > Could you show me the documentation for ceph version 0.80.7? > Tried ceph.com/do

[ceph-users] Query about osd pool default flags & hashpspool

2014-12-09 Thread Abhishek L
Hi I was going through various conf options to customize a ceph cluster and came across `osd pool default flags` in the pool-pg config ref[1]. The value is specified as an integer, but I couldn't find a mention of the possible values it can take in the docs. Looking a bit deeper into the ceph sources [2
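
From my (hedged) reading of the sources, the integer is a bitmask over pg_pool_t's FLAG_* values, where FLAG_HASHPSPOOL is bit 0. So, assuming that reading is right:

    [global]
    # enable hashpspool on newly created pools via the raw bitmask
    osd pool default flags = 1
    # there also appears to be a dedicated boolean for this one flag
    osd pool default flag hashpspool = true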

Re: [ceph-users] normalizing radosgw

2014-12-09 Thread Abhishek L
Sage Weil writes: [..] > Thoughts? Suggestions? > [..] Suggestion: radosgw should handle injectargs like other ceph clients do? This is not a major annoyance, but it would be nice to have. -- Abhishek signature.asc Description: PGP signature ___ c
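
To make the suggestion concrete, a sketch of what works today for other daemons versus the gateway (the socket path is the default one and may differ per setup):

    # other daemons accept injectargs via ceph tell
    ceph tell osd.0 injectargs '--debug_ms 1'
    # radosgw can at least be tweaked through its admin socket
    ceph daemon /var/run/ceph/ceph-client.radosgw.gateway.asok config set debug_rgw 20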

Re: [ceph-users] Ceph.conf

2015-09-10 Thread Abhishek L
On Thu, Sep 10, 2015 at 2:51 PM, Shinobu Kinjo wrote: > Thank you for your really really quick reply, Greg. > > > Yes. A bunch shouldn't ever be set by users. > > Anyhow, this is one of my biggest concerns right now -; > > rgw_keystone_admin_password = > > > MU
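
For context, the keystone-related section of ceph.conf under discussion looks roughly like this (values are placeholders; the password sitting there in plaintext is exactly the concern raised, so at minimum the file's permissions need locking down):

    [client.radosgw.gateway]
    rgw keystone url = http://keystone.example.com:35357
    rgw keystone admin user = admin
    rgw keystone admin password = s3kr1t
    rgw keystone accepted roles = admin, _member_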

[ceph-users] RGW Keystone interaction (was Ceph.conf)

2015-09-12 Thread Abhishek L
I'm just thinking of keystone federation. > But you can ignore me anyhow or point out anything to me -; > Shinobu > > - Original Message - > From: "Abhishek L" > To: "Shinobu Kinjo" > Cc: "Gregory Farnum" , "ceph-users"

Re: [ceph-users] radosgw and keystone version 3 domains

2015-09-17 Thread Abhishek L
On Fri, Sep 18, 2015 at 4:38 AM, Robert Duncan wrote: > > Hi > > > > It seems that radosgw cannot find users in Keystone V3 domains, that is, > > when keystone is configured for domain specific drivers radosgw cannot find > the users in the keystone users table (as they are not there) > > I hav
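
At the time of this thread radosgw only spoke the Keystone v2 API; v3 support (with domain awareness) arrived in later releases. A hedged sketch of the knobs that configuration grew, per the later docs (values are placeholders):

    rgw keystone api version = 3
    rgw keystone admin domain = Default
    rgw keystone admin project = admin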

[ceph-users] v12.1.0 Luminous RC released

2017-06-23 Thread Abhishek L
This is the first release candidate for Luminous, the next long-term stable release. Ceph Luminous will be the foundation for the next long-term stable release series. There have been major changes since Kraken (v11.2.z) and Jewel (v10.2.z). Major Changes from Kraken:

Re: [ceph-users] Stealth Jewel release?

2017-07-12 Thread Abhishek L
On Wed, Jul 12, 2017 at 9:13 PM, Xiaoxi Chen wrote: > +However, it also introduced a regression that could cause MDS damage. > +Therefore, we do *not* recommend that Jewel users upgrade to this version - > +instead, we recommend upgrading directly to v10.2.9 in which the regression > is > +fixed.

[ceph-users] v11.0.2 released

2016-10-18 Thread Abhishek L
This development checkpoint release includes a lot of changes and improvements to Kraken. This is the first release introducing ceph-mgr, a new daemon which provides additional monitoring & interfaces to external monitoring/management systems. There are also many improvements to bluestore, RGW intr

Re: [ceph-users] How to create two isolated rgw services in one ceph cluster?

2016-12-02 Thread Abhishek L
piglei writes: > Hi, I am a ceph newbie. I want to create two isolated rgw services in a > single ceph cluster, the requirements: > > * Two radosgw will have different hosts, such as radosgw-x.site.com and > radosgw-y.site.com. Files uploaded to rgw-x cannot be accessed via rgw-y. > * Isolated bu
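
One way to approach this, as a hedged sketch using jewel's multisite objects (all names are placeholders; the endpoint hosts come from the question): give each gateway its own realm/zonegroup/zone, so the two services use entirely separate pools and metadata:

    # one realm per isolated service keeps metadata fully separate
    radosgw-admin realm create --rgw-realm=realm-x
    radosgw-admin zonegroup create --rgw-zonegroup=zg-x --rgw-realm=realm-x \
        --endpoints=http://radosgw-x.site.com --master --default
    radosgw-admin zone create --rgw-zonegroup=zg-x --rgw-zone=zone-x \
        --endpoints=http://radosgw-x.site.com --master --default
    # repeat for realm-y / zone-y, then point each radosgw instance at its
    # zone via rgw_realm/rgw_zone in that instance's ceph.conf section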

[ceph-users] 10.2.4 Jewel released

2016-12-07 Thread Abhishek L
This point release fixes several important bugs in RBD mirroring, RGW multi-site, CephFS, and RADOS. We recommend that all v10.2.x users upgrade. Also note the following when upgrading from hammer: When the last hammer OSD in a cluster containing jewel

[ceph-users] v11.1.0 kraken candidate released

2016-12-12 Thread Abhishek L
Hi everyone, This is the first release candidate for Kraken, the next stable release series. There have been major changes from jewel, with many features being added. Please note the upgrade process from jewel before upgrading. Major Changes from Jewel: *RADOS*: * Th

[ceph-users] v11.1.1 Kraken rc released

2016-12-27 Thread Abhishek L
Hi everyone, This is the second release candidate for kraken, the next stable release series. Major Changes from Jewel: *RADOS*: * The new *BlueStore* backend has a change in the on-disk format from the previous release candidate, 11.1.0, and there might po

[ceph-users] v11.2.0 kraken released

2017-01-20 Thread Abhishek L
This is the first release of the Kraken series. It is suitable for use in production deployments and will be maintained until the next stable release, Luminous, is completed in the spring of 2017. Major Changes from Jewel: *RADOS*: * The new *BlueStore* backend now h

[ceph-users] v12.0.0 Luminous (dev) released

2017-02-08 Thread Abhishek L
This is the first development checkpoint release of the Luminous series, the next long-term stable release. We're off to a good start to release Luminous in the spring of '17. Major changes from Kraken: * When assigning a network to the public network and not to the clust

[ceph-users] v0.94.10 Hammer released

2017-02-22 Thread Abhishek L
This Hammer point release fixes several bugs and adds a couple of new features. We recommend that all hammer v0.94.x users upgrade. Please note that Hammer will be retired when Luminous is released later this spring. Until then, the focus will be primarily on bugs that would hi

Re: [ceph-users] Hammer update

2017-03-02 Thread Abhishek L
Sasha Litvak writes: > Hello everyone, > > Hammer 0.94.10 update was announced in the blog a week ago. However, there > are no packages available for either version of redhat. Can someone tell me > what is going on? I see the packages at http://download.ceph.com/rpm-hammer/el7/x86_64/. Are you

[ceph-users] Jewel v10.2.6 released

2017-03-08 Thread Abhishek L
This point release fixes several important bugs in RBD mirroring, RGW multi-site, CephFS, and RADOS. We recommend that all v10.2.x users upgrade. For more detailed information, see the complete changelog[1] and the release notes[2] Notable Changes --- * build/ops: add hostname sani

[ceph-users] v12.0.3 Luminous (dev) released

2017-05-17 Thread Abhishek L
This is the fourth development checkpoint release of Luminous, the next long-term stable release. This will most likely be the final development checkpoint release before we move to a release candidate. This release introduces several improvements in bluestore, monitor, rbd & rgw. Major chan