Hello,
I have two pools (default and sas).
Is it possible to have an OSD end up in the non-default pool after a restart,
without setting the crushmap by hand?
Thanks.
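One possible approach, sketched under the assumption that the init script's
"ceph osd crush create-or-move" call is what keeps pulling the OSD back under
the default root (osd.12 and the bucket names below are just placeholders):

    [osd]
    # keep the startup script from relocating OSDs in the CRUSH map on start
    osd crush update on start = false

    # then place the OSD once, by hand:
    ceph osd crush set osd.12 1.0 root=sas host=node1-sas

With "osd crush update on start = false" the OSD keeps whatever CRUSH location
it was last given, so it should stay in the sas hierarchy across restarts.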
Thanks Sage for the quick response!
We are using firefly (v0.80.4 with a couple of back-ports). One observation we
have is that during the peering stage (especially if the OSD was down/in for
several hours under high load), the peering ops are in contention with normal
ops and thus bring extremely l
On Thu, 23 Oct 2014, GuangYang wrote:
> Hello Cephers,
> During our testing, I found that the filestore throttling became a limiting
> factor for performance, the four settings (with default value) are:
> filestore queue max ops = 50
> filestore queue max bytes = 100 << 20
> filestore queue com
Hello Cephers,
During our testing, I found that the filestore throttling became a limiting
factor for performance; the four settings (with their default values) are:
filestore queue max ops = 50
filestore queue max bytes = 100 << 20
filestore queue committing max ops = 500
filestore queue committing max bytes = 100 << 20
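For anyone who wants to experiment with these, the knobs can be raised in the
[osd] section of ceph.conf; the numbers below are purely illustrative, not
tuned recommendations:

    [osd]
    filestore queue max ops = 500
    filestore queue max bytes = 1048576000
    filestore queue committing max ops = 5000
    filestore queue committing max bytes = 1048576000

The same values can also be injected at runtime, e.g. with
ceph tell osd.* injectargs '--filestore_queue_max_ops 500'.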
Shot in the dark: try manually deep-scrubbing the PG. You could also try
marking various OSDs out, in an attempt to get the acting set to include
osd.25 again, then do the deep-scrub again. That probably won't help
though, because the pg query says it probed osd.25 already... actually, it
doesn
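For reference, the commands involved would look something like this (the PG
and OSD ids are placeholders):

    ceph pg deep-scrub 3.2af    # manually kick off a deep scrub of the PG
    ceph osd out 7              # temporarily mark an OSD out to change the acting set
    ceph pg 3.2af query         # then re-check which OSDs have been probed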
Hello,
On Wed, 22 Oct 2014 17:41:45 -0300 Ricardo J. Barberis wrote:
> On Tuesday 21/10/2014, Christian Balzer wrote:
> > Hello,
> >
> > I'm trying to change the value of mon_osd_down_out_subtree_limit from
> > rack to something, anything else with ceph 0.80.(6|7).
> >
> > Using injectargs it tells me that this isn't a runtime supported change
On Oct 22, 2014, at 7:51 PM, Craig Lewis wrote:
> On Wed, Oct 22, 2014 at 3:09 PM, Chris Kitzmiller
> wrote:
>> On Oct 22, 2014, at 1:50 PM, Craig Lewis wrote:
>>> Incomplete means "Ceph detects that a placement group is missing a
>>> necessary period of history from its log. If you see this state, report
>>> a bug, and try to start any failed OSDs that may contain the needed
>>> information".
On Wed, Oct 22, 2014 at 3:09 PM, Chris Kitzmiller wrote:
> On Oct 22, 2014, at 1:50 PM, Craig Lewis wrote:
> > Incomplete means "Ceph detects that a placement group is missing a
> necessary period of history from its log. If you see this state, report a
> bug, and try to start any failed OSDs that may contain the needed
> information".
On Oct 22, 2014, at 1:50 PM, Craig Lewis wrote:
> Incomplete means "Ceph detects that a placement group is missing a necessary
> period of history from its log. If you see this state, report a bug, and try
> to start any failed OSDs that may contain the needed information".
>
> In the PG query,
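The query being referred to in this thread is simply (the PG id is a
placeholder):

    ceph pg 3.2af query

and the OSDs being probed show up under "recovery_state", in fields such as
"probing_osds" and "down_osds_we_would_probe".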
We have been running several rounds of benchmarks through the RADOS
Gateway. Each run creates several hundred thousand objects and a similar
number of containers.
The cluster consists of 4 machines with 12 OSD disks each (spinning, 4 TB),
48 OSDs in total.
After running a set of benchmarks we renamed the pools us
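For reference, renaming a pool is a single command of this form (the pool
names here are made up):

    ceph osd pool rename .rgw.buckets .rgw.buckets.old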
I'm not sure about Red Hat, but on Ubuntu, /etc/init/ceph-osd.conf calls
ceph osd crush create-or-move. Try adding some logging to that script,
something like:
  echo "Create or move OSD $id weight ${weight:-${defaultweight:-1}} to location $location" >> /tmp/ceph-osd.log
You probably want more detail
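A slightly fuller sketch of that idea, assuming the pre-start block of
/etc/init/ceph-osd.conf has already computed $id, $weight/$defaultweight and
$location before the CRUSH update (this is an illustration, not the exact
script contents):

    # log what is about to happen, then do the CRUSH update as before
    echo "$(date) osd.$id weight ${weight:-${defaultweight:-1}} location $location" >> /tmp/ceph-osd.log
    ceph osd crush create-or-move -- "$id" "${weight:-${defaultweight:-1}}" $location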
I would like to add that removing log files (/var/log/ceph is also
removed on uninstall) is a bad thing as well.
My suggestion would be to simply drop the whole %postun trigger, since
it only does these two very questionable things.
Thanks,
Erik.
On 10/22/2014 09:16 PM, Dmitry Borodaenko wrote:
>
On Tuesday 21/10/2014, Christian Balzer wrote:
> Hello,
>
> I'm trying to change the value of mon_osd_down_out_subtree_limit from rack
> to something, anything else with ceph 0.80.(6|7).
>
> Using injectargs it tells me that this isn't a runtime supported change
> and changing it in the config fi
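For anyone following along, the two things being tried look roughly like this
(mon.a is a placeholder, "host" is just an example value, and whether
injectargs takes effect for this option is exactly what is in question here):

    # attempted at runtime:
    ceph tell mon.a injectargs '--mon_osd_down_out_subtree_limit host'

    # or in ceph.conf, followed by restarting the monitors:
    [mon]
    mon osd down out subtree limit = host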
The current version of the RPM spec for Ceph removes the whole /etc/ceph
directory on uninstall:
https://github.com/ceph/ceph/blob/master/ceph.spec.in#L557-L562
I don't think the contents of /etc/ceph are disposable and should be
silently discarded like that.
--
Dmitry Borodaenko
Hello,
given a small test cluster, the following sequence resulted in a freshly
formatted OSD being unable to rejoin:
- update the cluster sequentially from cuttlefish to dumpling to firefly,
- execute the tunables change, wait for recovery to complete,
- shut down a single OSD, reformat the filestore and journ
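The tunables step above was presumably something along these lines (a guess;
the exact profile used is not shown):

    ceph osd crush tunables optimal   # or a specific profile such as 'firefly'
    ceph -w                           # watch until recovery completes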
For some reason GMail wasn't showing me any of the other responses. I saw
what appeared to be an unanswered question. As soon as I hit send, the
rest of the discussion showed up. Sorry about that.
On Tue, Oct 21, 2014 at 8:25 AM, Chad Seys wrote:
> Hi Craig,
>
> > It's part of the way the CR
I wanted to let the group know that I just completed a relatively painless IP
address transition of a 3-node cluster from a flat allocated IP space to one
using a dedicated (and different) network space for the "public" and "cluster"
networks.
This includes changing the block-serving address o
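For anyone planning a similar move, the ceph.conf side of it is just these two
settings (the addresses below are placeholders):

    [global]
    public network  = 192.0.2.0/24      # client-facing / block-serving traffic
    cluster network = 198.51.100.0/24   # replication and backfill traffic

Changing the monitor addresses is the fiddlier part, since that also means
updating the monmap.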
Incomplete means "Ceph detects that a placement group is missing a
necessary period of history from its log. If you see this state, report a
bug, and try to start any failed OSDs that may contain the needed
information".
In the PG query, it lists some OSDs that it's trying to probe:
"pr
Just starting to use ceph-deploy on a SLES11 system and seeing the same thing.
You're right that the command does seem to complete. Haven't yet determined if
it's an actual problem or just bad output.
Bill
From: ceph-users [ceph-users-boun...@lists.ceph.
When I had PGs stuck with down_osds_we_would_probe, there was no way I
could convince Ceph to give up on the data while those OSDs were down.
I tried ceph osd lost, ceph pg mark_unfound_lost, ceph pg force_create_pg.
None of them would do anything.
I eventually re-formatted the down OSD, and brou
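For completeness, those commands take the forms below (the PG and OSD ids are
placeholders):

    ceph pg 3.2af query                        # shows down_osds_we_would_probe under recovery_state
    ceph osd lost 25 --yes-i-really-mean-it
    ceph pg 3.2af mark_unfound_lost revert     # or 'delete'
    ceph pg force_create_pg 3.2af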
Greetings cephalofolks,
As I'm sure many of you have seen, the schedule for the upcoming
virtual Ceph Developer Summit has been posted:
https://wiki.ceph.com/Planning/CDS/Hammer_(Oct_2014)
As usual we have an EMEA-friendly day and an APAC-friendly day with the
sessions being distributed across ea
Hi Guys,
We are trying to install CephFS using Puppet on all the OSD nodes, as well as
the MONs and MDSes. Are there recommended Puppet modules that anyone has used
in the past, or have people created their own?
Thanks.
—Jiten
Hi,
We're running a Debian jessie Ceph 0.80.6 cluster, and this morning a
package update launched os-prober, which instantly killed 5 of our 15 OSDs
with write assert failures:
http://tracker.ceph.com/issues/9860
While the fault is probably on os-prober doing something writing/locking
instead of being
Hi Xavier,
[Moving this to ceph-devel]
On Tue, 21 Oct 2014, Xavier Trilla wrote:
> Hi Sage,
>
> Yes, I know about rbd diff, but the motivation behind this was to be
> able to dump an entire RBD pool via RADOS to another cluster, as our
> primary cluster uses quite expensive SSD storage and we
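Lacking a true pool-level dump, the usual workaround is a per-image loop of
exports, roughly like this (a sketch only: the pool names and remote host are
made up, and snapshots/consistency are ignored here):

    for img in $(rbd ls ssd-pool); do
        rbd export ssd-pool/$img - | ssh backup-host rbd import - backup-pool/$img
    done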
Ad 1) Thanks for the information.
Ad 2) Cluster information: 5x Dell R720 with 256 GB RAM and 2x6 cores with
HT. We use 10 Gb Ethernet for networking: one interface for the public
network and the other for the cluster network.
Three of the servers run Ubuntu Server 12.04 with the 3.16.0-031600-generic
kernel, and 2 new
Hi,
From time to time, when I replace a broken OSD, the new one gets a weight
of zero. The crush map from the epoch before adding the OSD seems to be
fine. Is there any way to debug this issue?
Regards,
--
PS
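When it happens, the weight can at least be checked and corrected by hand
(the id and weight are placeholders; ~3.64 would correspond to a 4 TB disk):

    ceph osd tree | grep osd.12            # confirm the CRUSH weight really is 0
    ceph osd crush reweight osd.12 3.64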
Hi,
I am building a three-node cluster on Debian 7.7. I have a problem zapping
the disk of the very first node.
ERROR:
[ceph1][WARNIN] Error: Partition(s) 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33,
34, 35,
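If ceph-deploy's zap keeps tripping over the stale partition table, wiping it
directly and re-reading it before retrying sometimes helps (host and device
names are placeholders, and this destroys everything on the disk):

    sgdisk --zap-all /dev/sdb
    partprobe /dev/sdb
    ceph-deploy disk zap ceph1:sdb    # then retry the original step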