Hi, everyone.
What are the meanings of the fields actingbackfill, want_acting and
backfill_targets of the PG class?
Thank you :-)
I have a three-node Giant setup with 8 OSDs each; during the installation I
had to redo one of them, but it looks like its info is still in the CRUSH map
(based on my reading). How do I fix this?
[root@avatar0-ceph1 ~]# ceph -s
cluster 2f0d1928-2ee5-4731-a259-64c0dc16110a
health HEALTH_WARN 139
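For anyone hitting the same thing, a minimal sketch of how to confirm which stale entries are still in the CRUSH map (the file names below are just placeholders):
  # show the CRUSH hierarchy; hosts/OSDs that were removed but still appear here are stale
  ceph osd tree
  # dump and decompile the raw CRUSH map for a closer look
  ceph osd getcrushmap -o crushmap.bin && crushtool -d crushmap.bin -o crushmap.txt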
> On 26 October 2016 at 11:18, Wido den Hollander wrote:
>
> > On 26 October 2016 at 10:44, Sage Weil wrote:
> >
> > On Wed, 26 Oct 2016, Dan van der Ster wrote:
> > > On Tue, Oct 25, 2016 at 7:06 AM, Wido den Hollander wrote:
> > > >
> > > >> On 24 October 2016 at 22:29,
On Wed, 2 Nov 2016, Wido den Hollander wrote:
>
> > On 26 October 2016 at 11:18, Wido den Hollander wrote:
> >
> > > On 26 October 2016 at 10:44, Sage Weil wrote:
> > >
> > > On Wed, 26 Oct 2016, Dan van der Ster wrote:
> > > > On Tue, Oct 25, 2016 at 7:06 AM, Wido den Holla
> On 2 November 2016 at 14:30, Sage Weil wrote:
>
> On Wed, 2 Nov 2016, Wido den Hollander wrote:
> >
> > > On 26 October 2016 at 11:18, Wido den Hollander wrote:
> > >
> > > > On 26 October 2016 at 10:44, Sage Weil wrote:
> > > >
> > > > On Wed, 26 Oct 2016, Da
Hey,
http://xfs.org/docs/xfsdocs-xml-dev/XFS_Filesystem_Structure/tmp/en-US/html/Allocation_Groups.html
"Each AG can be up to one terabyte in size (512 bytes * 2^31), regardless of
the underlying device's sector size."
"The only global information maintained by the first AG (primary) is free sp
On Wed, 2 Nov 2016, Wido den Hollander wrote:
>
> > On 2 November 2016 at 14:30, Sage Weil wrote:
> >
> > On Wed, 2 Nov 2016, Wido den Hollander wrote:
> > >
> > > > On 26 October 2016 at 11:18, Wido den Hollander wrote:
> > > >
> > > > > On 26 October 2016 at 10:44,
> On 2 November 2016 at 15:06, Sage Weil wrote:
>
> On Wed, 2 Nov 2016, Wido den Hollander wrote:
> >
> > > On 2 November 2016 at 14:30, Sage Weil wrote:
> > >
> > > On Wed, 2 Nov 2016, Wido den Hollander wrote:
> > > >
> > > > > On 26 October 2016 at 11:18, Wido den Holla
On Wed, 2 Nov 2016, Wido den Hollander wrote:
> > On 2 November 2016 at 15:06, Sage Weil wrote:
> >
> > On Wed, 2 Nov 2016, Wido den Hollander wrote:
> > >
> > > > On 2 November 2016 at 14:30, Sage Weil wrote:
> > > >
> > > > On Wed, 2 Nov 2016, Wido den Hollander wrote:
> > >
> On 2 November 2016 at 16:00, Sage Weil wrote:
>
> On Wed, 2 Nov 2016, Wido den Hollander wrote:
> > > On 2 November 2016 at 15:06, Sage Weil wrote:
> > >
> > > On Wed, 2 Nov 2016, Wido den Hollander wrote:
> > > >
> > > > > On 2 November 2016 at 14:30, Sage Weil wrote:
> > > >
On Wed, 2 Nov 2016, Wido den Hollander wrote:
> > > > I'm pretty sure this is a race condition that got cleaned up as part of
> > > > https://github.com/ceph/ceph/pull/9078/commits. The mon only checks the
> > > > pg_temp entries that are getting set/changed, and since those are
> > >
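For reference, the pg_temp mappings being discussed can be listed straight from the osdmap; a minimal check using only the standard CLI:
  # pg_temp lines show PGs whose acting set is temporarily overridden
  ceph osd dump | grep pg_temp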
> On 2 November 2016 at 16:21, Sage Weil wrote:
>
> On Wed, 2 Nov 2016, Wido den Hollander wrote:
> > > > > I'm pretty sure this is a race condition that got cleaned up as part of
> > > > > https://github.com/ceph/ceph/pull/9078/commits. The mon only checks the
> > >
Hey cephers,
I wanted to both post a reminder that our Ceph Developer Monthly
meeting is tonight at 9p EDT, and pose a question:
Are periodic Ceph Developer Meetings helpful and desired? Lately the
participation has been sadly lacking, and I want to make sure we are
providing a worthwhile platfo
On this particular occasion most of the cephfs developers are in Europe, so
we are unlikely to make it.
John
On 2 Nov 2016 5:27 p.m., "Patrick McGarry" wrote:
> Hey cephers,
>
> I wanted to both post a reminder that our Ceph Developer Monthly
> meeting is tonight at 9p EDT, and pose a question
Yes, a rolling restart should work. That was enough in my case.
On 2 November 2016 at 01:20:48 CET, "Will.Boege" wrote:
>Start with a rolling restart of just the OSDs one system at a time,
>checking the status after each restart.
>
>On Nov 1, 2016, at 6:20 PM, Ronny Aasen
>mailto:ronny+ceph-us...@
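A minimal sketch of such a rolling restart, assuming systemd-managed OSDs and going one host at a time:
  # on each OSD host in turn:
  systemctl restart ceph-osd.target   # restart all OSD daemons on this host
  ceph -s                             # wait for HEALTH_OK / all PGs active+clean
  # ...only then move on to the next host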
Hi all,
Just a bit of an outage with CephFS around the MDSes; I managed to get
everything up and running again after a bit of head
scratching and thought I would share here what happened.
Cause
I believe the MDSes, which were running as VMs, suffered when the hypervisor ran
out of RAM and starte
Hi John,
How does one configure namespaces for file/dir layouts? I'm looking here, but
am not seeing any mentions of namespaces:
http://docs.ceph.com/docs/jewel/cephfs/file-layouts/
Thanks,
-- Dan
> On Oct 28, 2016, at 04:11, John Spray wrote:
>
> On Thu, Oct 27, 2016 at 9:43 PM, Reed Di
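In case it helps, a minimal sketch of setting a namespace through the layout xattrs, assuming a Jewel-or-later client that understands the pool_namespace field (directory and namespace name below are placeholders):
  # new files created under this directory will go into the "myns" namespace
  setfattr -n ceph.dir.layout.pool_namespace -v myns /mnt/cephfs/somedir
  # inspect the layout that new files will inherit
  getfattr -n ceph.dir.layout /mnt/cephfs/somedir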
We currently have one master RADOS pool in our cluster that is shared among
many applications. All objects stored in the pool are currently stored using
specific namespaces -- nothing is stored in the default namespace.
We would like to add a CephFS filesystem to our cluster, and would like to
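As background, a minimal sketch of inspecting per-namespace objects with the rados tool (pool and namespace names are placeholders):
  # list objects in one namespace of the pool
  rados -p masterpool -N app1 ls
  # list objects across all namespaces (each line is prefixed with its namespace)
  rados -p masterpool --all ls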
A bit more digging: the original crash appears to be similar (but not exactly
the same) to this tracker report:
http://tracker.ceph.com/issues/16983
I can see that this was fixed in 10.2.3, so I will probably look to upgrade.
If the logs make sense to anybody with a bit more knowledge I would be
Due to low attendance we have had to cancel CDM tonight. Sorry for the
confusion.
--
Best Regards,
Patrick McGarry
Director Ceph Community || Red Hat
http://ceph.com || http://community.redhat.com
@scuttlemonkey || @ceph
In case anyone is disappointed and not on the call: there were technical
difficulties that split the call. We are on now.
https://bluejeans.com/707503600
On Wed, Nov 2, 2016 at 9:02 PM, Patrick McGarry wrote:
> Due to low attendance we have had to cancel CDM tonight. Sorry for the
> confusion.
I'm running Kraken built from Git right now and I've found that my OSDs eat
as much memory as they can before they're killed by the OOM killer. I understand
that BlueStore is experimental, but I thought the fact that it does this should
be known.
My setup:
- Xeon D-1540, 32GB DDR4 ECC RAM
- Arch Linux
- Single
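A minimal way to see where that memory sits, assuming the OSDs are linked against tcmalloc (the osd id is a placeholder):
  # dump tcmalloc heap statistics for one OSD
  ceph tell osd.0 heap stats
  # ask tcmalloc to hand freed pages back to the OS
  ceph tell osd.0 heap release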
Hi guys,
I'm not sure this was asked before as I wasn't able to find anything
googling (and the search function of the list is broken at
http://lists.ceph.com/pipermail/ceph-users-ceph.com/) - anyway:
- How would you back up the config of all users and bucket configurations
for the radosgw so
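One approach that should work is exporting the metadata via radosgw-admin; a minimal sketch, assuming jq is available and /backup/rgw is just a placeholder path:
  # dump every user's metadata as JSON
  for u in $(radosgw-admin metadata list user | jq -r '.[]'); do
      radosgw-admin metadata get user:$u > /backup/rgw/user-$u.json
  done
  # bucket entrypoints and bucket instances can be exported the same way
  radosgw-admin metadata list bucket
  radosgw-admin metadata list bucket.instance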
Hi All,
I thought I should make a little noise about a project some of us at
SUSE have been working on, called DeepSea. It's a collection of Salt
states, runners and modules for orchestrating deployment of Ceph
clusters. To help everyone get a feel for it, I've written a blog post
which walks th
I would try to set pgp_num for your pool equal to 300:
# ceph osd pool set yourpool pgp_num 300
...not sure about the command...
If that did not help, try to restart OSDs 7 and 15.
Hth
- Mehmet
On 2 November 2016 at 14:15:09 CET, Vlad Blando wrote:
>I have a three-node Giant setup with 8 OSDs each; during