Noah Watkins writes:
> Oh, it looks like autogen.sh is smart about that now. If you're using the
> latest master, my suggestion may not be the solution.
>
> On Fri, Jul 25, 2014 at 11:51 AM, Noah Watkins
> wrote:
>> Make sure you are initializing the sub-modules. The autogen.sh script
>> should pr
Lorieri writes:
> http://ceph.com/docs/dumpling/start/quick-ceph-deploy/
These steps work against the current ceph release (firefly) as well for
me, as long as the config file has the setting
osd crush chooseleaf type = 0
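For reference, `osd crush chooseleaf type = 0` tells CRUSH to separate replicas
across OSDs rather than hosts, which is what makes a single-node deployment
work. A minimal ceph.conf sketch (the size settings are illustrative test
values, not from the thread):

  [global]
  # 0 = osd, 1 = host: allow replicas on the same node
  osd crush chooseleaf type = 0
  # assumption: shrink replication for a tiny test cluster
  osd pool default size = 2
  osd pool default min size = 1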
--
Abhishek L
pgp: 69CF 4838 8EE3 746C 5ED4 1F16 F9F0 641F 1B65 E
can do the same steps on a single node itself, i.e. install the mons &
osds on the same node.
Alternatively if you plan on running ceph as a backend for openstack
glance & cinder, you could try the latest devstack
http://techs.enovance.com/6572/brace-yourself-devstack-ceph-is-here
Re
Hi
pragya jain writes:
> Hello all!
> I have some basic questions about the process followed by Ceph
> software when a user uses Swift APIs for accessing its
> storage. 1. According to my understanding, to keep the objects listing
> in containers and containers listing in an account, Ceph software
Yehuda Sadeh-Weinraub writes:
>> Is there a quick way to see which shadow files are safe to delete
>> easily?
>
> There's no easy process. If you know that a lot of the removed data is on
> buckets that shouldn't exist anymore then you could start by trying to
> identify that. You could do that
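For anyone in the same situation, a rough sketch of the kind of cross-check
meant here (the pool name and the __shadow_ naming are version-dependent
assumptions, not an official procedure):

  radosgw-admin bucket list                                   # buckets that still exist
  radosgw-admin bucket stats --bucket=mybucket | grep marker  # marker prefix of a live bucket
  rados -p .rgw.buckets ls | grep __shadow_ | head            # candidate shadow objects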
Hi
I'm trying to set up a POC multi-region radosgw configuration (with
different ceph clusters). Following the official docs[1], the part
about creating the zone system users was not very clear. Going by an
example configuration of 2 regions US (master zone us-dc1), EU (master
zone eu-dc1) for
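For context, the federated setup wants a system user per zone, created with the
same access/secret key on both clusters so each gateway can authenticate
against its peer. A hedged sketch using the zone names from the example above
(uid and keys are placeholders):

  # run against the cluster holding us-dc1, then repeat with the same keys on the peer
  radosgw-admin user create --uid=us-dc1-system --display-name="us-dc1 system user" \
      --access-key=SYSTEM_ACCESS_KEY --secret=SYSTEM_SECRET_KEY --system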
We've had a hammer (0.94.1) (virtual) 3 node/3 osd cluster with radosgws
failing to start, failing continuously with the following error:
--8<---cut here---start->8---
2015-05-06 04:40:38.815545 7f3ef9046840 0 ceph version 0.94.1
(e4bfad3a3c51054df7e537a724c8d
On Tue, May 12, 2015 at 9:13 PM, Abhishek L
wrote:
>
> We've had a hammer (0.94.1) (virtual) 3 node/3 osd cluster with radosgws
> failing to start, failing continuously with the following error:
>
> --8<---cut here---start->8---
>
[..]
> Seeing this in the firefly cluster as well. Tried a couple of rados
> commands on the .rgw.root pool; this is what is happening:
>
> abhi@st:~$ sudo rados -p .rgw.root put test.txt test.txt
> error putting .rgw.root/test.txt: (6) No such device or address
>
> abhi@st:~$ sudo ceph osd map .rg
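An ENXIO ((6) No such device or address) from rados often points at a PG that
currently maps to no usable OSDs (for example a CRUSH rule problem), so
checking the mapping and OSD state is a reasonable next step. A sketch of such
checks, not a definitive diagnosis:

  ceph osd map .rgw.root test.txt        # which PG and OSDs the object maps to
  ceph pg dump_stuck                     # stuck or incomplete PGs, if any
  ceph osd dump | grep -E 'pool|flags'   # pool definitions, crush rules, cluster flags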
Hi
Is it safe to tweak the value of `mon pg warn max object skew` from the
default value of 10 to a higher value of 20-30 or so? What would be a
safe upper limit for this value?
Also what does exceeding this ratio signify in terms of the cluster
health? We are sometimes hitting this limit in buck
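For reference, the warning fires when some pool's objects-per-PG exceeds the
cluster-wide average by more than this factor, so raising the threshold only
silences the health warning; whether 20-30 is sensible depends on whether the
skew is expected for the workload. A sketch of adjusting it (mon.a is a
placeholder id):

  # runtime change on a monitor
  ceph tell mon.a injectargs '--mon-pg-warn-max-object-skew 20'
  # and persist it in ceph.conf under [mon]:
  #   mon pg warn max object skew = 20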
Gregory Farnum writes:
> On Thu, May 21, 2015 at 8:24 AM, Kenneth Waegeman
> wrote:
>> Hi,
>>
>> Some strange issue wrt boolean values in the config:
>>
>> this works:
>>
>> osd_crush_update_on_start = 0 -> osd not updated
>> osd_crush_update_on_start = 1 -> osd updated
>>
>> In a previous versi
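For what it's worth, numeric 0/1 (as above) is the form the thread confirms
works; how textual true/false is parsed has varied between releases, so the
safest check is to ask the running daemon what it actually ended up with. A
sketch (osd.0 is a placeholder, run on the host carrying that OSD):

  ceph daemon osd.0 config get osd_crush_update_on_start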
On Wed, Jun 17, 2015 at 1:02 PM, Nathan Cutler wrote:
>> We've since merged something
>> that stripes over several small xattrs so that we can keep things inline,
>> but it hasn't been backported to hammer yet. See
>> c6cdb4081e366f471b372102905a1192910ab2da.
>
> Hi Sage:
>
> You wrote "yet" - sh
jan.zel...@id.unibe.ch writes:
> Hi,
>
> as I had the same issue in a small virtualized test environment (3 x 10g lvm
> volumes), I would like to understand the 'weight' thing.
> I did not find any "user-friendly explanation" for that kind of problem.
>
> The only explanation I found is on
> ht
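As a rough reference (my understanding, not from the cited page): the
conventional CRUSH weight is the device capacity in TiB, so a 10G volume ends
up with a weight around 0.01, and relative weights are what steer
proportionally more PGs and data to the bigger OSDs. A sketch of inspecting
and adjusting it (id and value are placeholders):

  ceph osd tree                        # current crush weights per OSD
  ceph osd crush reweight osd.0 0.01   # set weight roughly equal to capacity in TiB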
Steve Dainard writes:
> Hello,
>
> Ceph 0.94.1
> 2 hosts, Centos 7
>
> I have two hosts, one of which ran out of disk space on /, which crashed all
> the osd daemons. After cleaning up the OS disk storage and restarting
> ceph on that node, I'm seeing multiple errors, then health OK, then
> back into th
On Thu, Aug 6, 2015 at 1:55 PM, Hector Martin wrote:
> On 2015-08-06 17:18, Wido den Hollander wrote:
>>
>> The amount of PGs is cluster-wide and not per pool. So if you have 48
>> OSDs the rule of thumb is: 48 * 100 / 3 = 1600 PGs cluster wide.
>>
>> Now, with enough memory you can easily have 100
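Spelling out the quoted rule of thumb: roughly 100 PGs per OSD across the whole
cluster, divided by the replica count, and usually rounded to a power of two
when actually creating the pools:

  echo $(( 48 * 100 / 3 ))   # -> 1600 PGs cluster wide, ~2048 after rounding up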
Voloshanenko Igor writes:
> Hi Irek, please read carefully )))
> Your proposal was the first thing I tried... that's why I asked for
> help... (
>
> 2015-08-18 8:34 GMT+03:00 Irek Fasikhov :
>
>> Hi, Igor.
>>
>> You need to repair the PG.
>>
>> for i in `ceph pg dump| grep inconsistent | grep -v '
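The quoted one-liner is cut off above; the general shape of that kind of repair
loop is sketched below (this is not the original command, and `ceph pg repair`
trusts the primary copy, so understand why the PGs are inconsistent before
running it):

  for pg in $(ceph pg dump 2>/dev/null | awk '/inconsistent/ {print $1}'); do
      ceph pg repair "$pg"
  done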
On Thu, Aug 27, 2015 at 3:01 PM, Wido den Hollander wrote:
> On 08/26/2015 05:17 PM, Yehuda Sadeh-Weinraub wrote:
>> On Wed, Aug 26, 2015 at 6:26 AM, Gregory Farnum wrote:
>>> On Wed, Aug 26, 2015 at 9:36 AM, Wido den Hollander wrote:
Hi,
It's something which has been 'bugging' me
Somnath Roy writes:
> Hi,
> I wanted to know how RGW users are backing up the bucket contents, so that
> in a disaster scenario the user can recreate the setup.
> I know there is geo replication support but it could be an expensive
> proposition.
> I wanted to know if there is any simple solutio
Huynh Dac Nguyen writes:
> Hi Chris,
>
>
> I see.
> I'm running on version 0.80.7.
> How do we know which part of the documentation applies to our version? As you
> see, we have only one ceph document here, which makes it confusing.
> Could you show me the documentation for ceph version 0.80.7?
>
Tried ceph.com/do
Hi
I was going through various conf options to customize a ceph cluster and
came across `osd pool default flags` in the pool-pg config ref[1]. The
value specifies an integer, though I couldn't find a mention of the
possible values it can take in the docs. Looking a bit deeper into the
ceph sources [2
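From a quick look at the source (hedged; exact bit values differ between
releases), the integer is a bitmask built from the pg_pool_t FLAG_* constants,
which is why the docs don't list concrete numbers. Individual flags are more
conveniently toggled per pool after creation, e.g. (pool name is a placeholder):

  ceph osd pool set mypool nodelete 1   # set the 'nodelete' flag on an existing pool
  ceph osd dump | grep mypool           # the pool line shows its current flags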
Sage Weil writes:
[..]
> Thoughts? Suggestions?
>
[..]
Suggestion:
radosgw should handle injectargs like other ceph clients do?
This is not a major annoyance, but it would be nice to have.
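For context, this is the existing path for other daemons that the suggestion
refers to (osd.0 and the socket path are placeholders):

  ceph tell osd.0 injectargs '--debug-ms 1'          # runtime change sent to the daemon
  ceph daemon /var/run/ceph/ceph-client.rgw.gateway.asok config show | head   # admin socket route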
--
Abhishek
On Thu, Sep 10, 2015 at 2:51 PM, Shinobu Kinjo wrote:
> Thank you for your really really quick reply, Greg.
>
> > Yes. A bunch shouldn't ever be set by users.
>
> Anyhow, this is one of my biggest concerns right now -;
>
> rgw_keystone_admin_password =
>
>
> MU
> I'm just thinking of keystone federation.
> But you can ignore me anyhow or point out anything to me -;
> Shinobu
>
> - Original Message -
> From: "Abhishek L"
> To: "Shinobu Kinjo"
> Cc: "Gregory Farnum" , "ceph-users"
&
On Fri, Sep 18, 2015 at 4:38 AM, Robert Duncan wrote:
>
> Hi
>
>
>
> It seems that radosgw cannot find users in Keystone V3 domains, that is,
>
> When keystone is configured for domain-specific drivers radosgw cannot find
> the users in the keystone users table (as they are not there)
>
> I hav
This is the first release candidate for Luminous, the next long term
stable release.
Ceph Luminous will be the foundation for the next long-term
stable release series. There have been major changes since Kraken
(v11.2.z) and Jewel (v10.2.z).
Major Changes from Kraken
-
-
On Wed, Jul 12, 2017 at 9:13 PM, Xiaoxi Chen wrote:
> +However, it also introduced a regression that could cause MDS damage.
> +Therefore, we do *not* recommend that Jewel users upgrade to this version -
> +instead, we recommend upgrading directly to v10.2.9 in which the regression
> is
> +fixed.
This development checkpoint release includes a lot of changes and
improvements to Kraken. This is the first release introducing ceph-mgr,
a new daemon which provides additional monitoring & interfaces to
external monitoring/management systems. There are also many improvements
to bluestore, RGW intr
piglei writes:
> Hi, I am a ceph newbie. I want to create two isolated rgw services in a
> single ceph cluster. The requirements:
>
> * Two radosgw will have different hosts, such as radosgw-x.site.com and
> radosgw-y.site.com. Files uploaded to rgw-x cannot be accessed via rgw-y.
> * Isolated bu
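One way to get that isolation (a sketch assuming a Jewel-or-later
multisite-capable radosgw; all names are illustrative) is to give each gateway
its own realm/zonegroup/zone, so the two use separate pools and metadata:

  radosgw-admin realm create --rgw-realm=realm-x --default
  radosgw-admin zonegroup create --rgw-zonegroup=zg-x --rgw-realm=realm-x --master --default
  radosgw-admin zone create --rgw-zonegroup=zg-x --rgw-zone=zone-x --master --default
  radosgw-admin period update --commit
  # repeat with realm-y/zg-y/zone-y, then point each radosgw instance at its own zone
  # via rgw_realm / rgw_zonegroup / rgw_zone in that gateway's ceph.conf section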
This point release fixes several important bugs in RBD mirroring, RGW
multi-site, CephFS, and RADOS.
We recommend that all v10.2.x users upgrade. Also note the following when
upgrading from hammer
Upgrading from hammer
-
When the last hammer OSD in a cluster containing jewel
Hi everyone,
This is the first release candidate for Kraken, the next stable
release series. There have been major changes from jewel with many
features being added. Please note the upgrade process from jewel,
before upgrading.
Major Changes from Jewel
- *RADOS*:
* Th
Hi everyone,
This is the second release candidate for kraken, the next stable release series.
Major Changes from Jewel
- *RADOS*:
* The new *BlueStore* backend now has a change in the on-disk
format from the previous release candidate 11.1.0, and there might
po
This is the first release of the Kraken series. It is suitable for
use in production deployments and will be maintained until the next
stable release, Luminous, is completed in the Spring of 2017.
Major Changes from Jewel
- *RADOS*:
* The new *BlueStore* backend now h
This is the first development checkpoint release of the Luminous series, the
next long term stable release. We're off to a good start to release
Luminous in the spring of '17.
Major changes from Kraken
-
* When assigning a network to the public network and not to
the clust
This Hammer point release fixes several bugs and adds a couple of new
features.
We recommend that all hammer v0.94.x users upgrade.
Please note that Hammer will be retired when Luminous is released later
during the spring of this year. Until then, the focus will be primarily
on bugs that would hi
Sasha Litvak writes:
> Hello everyone,
>
> Hammer 0.94.10 update was announced in the blog a week ago. However, there
> are no packages available for either version of redhat. Can someone tell me
> what is going on?
I see the packages at http://download.ceph.com/rpm-hammer/el7/x86_64/.
Are you
This point release fixes several important bugs in RBD mirroring, RGW
multi-site, CephFS, and RADOS.
We recommend that all v10.2.x users upgrade.
For more detailed information, see the complete changelog[1] and the release
notes[2]
Notable Changes
---
* build/ops: add hostname sani
This is the fourth development checkpoint release of Luminous, the next
long term stable release. This would most likely be the final
development checkpoint release before we move to a release candidate
soon. This release introduces several improvements in bluestore,
monitor, rbd & rgw.
Major chan