I've run through many upgrades without anyone noticing, including in
very busy OpenStack environments.
As a rule of thumb you should upgrade MONs, OSDs, MDSs and RadosGWs in
that order; however, you should always read the upgrade instructions on
the release notes page
(http://docs.ceph.com/docs/mas
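For illustration, a minimal rolling-upgrade sketch in that order, assuming
systemd-managed daemons on Debian-style hosts; the hostnames and package
commands here are placeholders, not part of the original advice:

  # Keep OSDs from being marked out while daemons restart:
  ceph osd set noout

  # 1. Monitors first, one at a time:
  ssh mon1 'apt-get install -y ceph && systemctl restart ceph-mon.target'

  # 2. Then each OSD host, waiting for recovery in 'ceph -s' between hosts:
  ssh osd1 'apt-get install -y ceph && systemctl restart ceph-osd.target'

  # 3. Then MDSs, then RadosGWs:
  ssh mds1 'systemctl restart ceph-mds.target'
  ssh rgw1 'systemctl restart ceph-radosgw.target'

  ceph osd unset noout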
Hi,
We are validating Kraken 11.2.0 with BlueStore on a 5-node cluster with EC
4+1.
When an OSD is down, peering does not happen and the ceph health status
moves to ERR state after a few minutes. This was working in previous
development releases. Is any additional configuration required in v11.2.0?
Fol
> On 19 January 2017 at 20:00, Ben Hines wrote:
>
>
> Sure. However, as a general development process, many projects require
> documentation to go in with a feature. The person who wrote it is the best
> person to explain how to use it.
>
Probably, but still, it's not a requirement. All I'm
Hi,
Is the current strange DNS issue with docs.ceph.com related to this also? I
noticed that docs.ceph.com is getting a different A record from
ns4.redhat.com vs ns{1..3}.redhat.com
dig output here > http://pastebin.com/WapDY9e2
Thanks
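For reference, a quick way to reproduce the comparison described above; a
sketch using standard dig syntax, with the nameserver names taken from the
message:

  for ns in ns1 ns2 ns3 ns4; do
    echo -n "$ns.redhat.com: "
    dig +short @"$ns.redhat.com" docs.ceph.com A
  done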
On Thu, Jan 19, 2017 at 11:03 PM, Dan Mick wrote:
> On 01
I don't know exactly where, but I'm guessing in the database of the
monitor server, which should be located at
"/var/lib/ceph/mon/".
Best,
Martin
On Fri, Jan 20, 2017 at 8:55 AM, Chen, Wei D wrote:
> Hi Martin,
>
> Thanks for your response!
> Could you please tell me where it is on the monitor nodes?
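To make the answer above concrete: the keys and caps live in the monitor's
own key-value store (e.g. a path like /var/lib/ceph/mon/<cluster>-<host>/store.db),
which is not meant to be read directly. A sketch of inspecting them through
the mons instead, with the entity name as a placeholder:

  ceph auth list                # every entity, with its key and caps
  ceph auth get client.myuser   # a single entity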
This is the first release of the Kraken series. It is suitable for
use in production deployments and will be maintained until the next
stable release, Luminous, is completed in the Spring of 2017.
Major Changes from Jewel
- *RADOS*:
* The new *BlueStore* backend now h
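As context for the BlueStore item: in the Kraken era the backend was still
gated behind an experimental flag. A hedged ceph.conf sketch of how it was
typically enabled at the time (option names as used in that release family;
check the release notes before relying on this):

  [global]
  enable experimental unrecoverable data corrupting features = bluestore

  [osd]
  osd objectstore = bluestore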
On 01/20/2017 03:52 AM, Chen, Wei D wrote:
Hi,
I have read through some documents about authentication and user management
in Ceph; everything works fine for me, and I can create
a user and play with the keys and caps of that user. But I cannot find where
those keys or capabilities are stored, obv
Looking forward to reading some good help here
Hello ceph users,
My graphs of several counters in our Ceph cluster are showing abnormal
behaviour after changing the pg_num and pgp_num respectively.
We're using "http://eu.ceph.com/debian-hammer/ jessie/main".
Is this a bug, or will the counters stabilize at some time in the near
future? Or,
Only in that we changed the zone and apparently it hasn't propagated properly.
I'll check with RHIT.
Sent from Nine
From: Sean Redmond
Sent: Jan 20, 2017 3:07 AM
To: Dan Mick
Cc: Shinobu Kinjo; Brian Andrus; ceph-users
Subject: Re: [ceph-users] Problems with htt
If you change the pg_num value,
Ceph will reshuffle almost all data, so depending on the size of your storage it
can take some time...
- Original Message -
From: "Kai Storbeck"
To: "ceph-users"
Sent: Friday, 20 January 2017 17:17:08
Subject: [ceph-users] Ceph counters decrementing after changi
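For concreteness, a sketch of the kind of change being discussed, with the
pool name and PG counts as placeholders; pgp_num is raised after pg_num, and
the resulting rebalance can be watched until it settles:

  ceph osd pool set rbd pg_num 256
  ceph osd pool set rbd pgp_num 256
  ceph -s    # or 'ceph -w' to follow the backfill live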
Hi,
This email is better suited for the 'ceph-users' list (CC'ed).
You'll likely find more answers there.
-Joao
On 01/20/2017 04:33 PM, hen shmuel wrote:
im new to Ceph and i want to build ceph storage cluster at my work site,
to provide NAS services to are clients, as NFS to are linux serv
Here's a really good write-up on how to cluster NFS servers backed by RBD
volumes. It could be adapted to use CephFS with relative ease.
https://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/
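A rough sketch of the core idea from that write-up (image name, mount point,
and export network are placeholders): put a filesystem on an RBD image and
export it over NFS.

  rbd create nfs-share --size 102400        # 100 GiB image (size is in MB)
  rbd map nfs-share                         # appears as e.g. /dev/rbd0
  mkfs.xfs /dev/rbd0
  mount /dev/rbd0 /srv/nfs-share
  echo '/srv/nfs-share 192.168.0.0/24(rw,sync,no_root_squash)' >> /etc/exports
  exportfs -ra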
___
John Petrini
NOC Systems Administrator // *CoreDial, LLC* // coredial.com
> On 20 January 2017 at 17:17, Kai Storbeck wrote:
>
>
> Hello ceph users,
>
> My graphs of several counters in our Ceph cluster are showing abnormal
> behaviour after changing the pg_num and pgp_num respectively.
What counters exactly? Like PG information? It could be that it needs a scrub.
What does `ceph -s` say?
CephFS does not require a central NFS server. Any Linux server can mount the
CephFS volume at the same time. There is also a windows client for CephFS
(https://drupal.star.bnl.gov/STAR/blog/mpoat/cephfs-client-windows-based-dokan-060).
I don't see the need for the NFS/SMB server or complicate
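To illustrate the point about not needing a central server: each client
mounts CephFS directly and the filesystem stays coherent across all of them.
A sketch, with the monitor address and secret file as placeholders:

  # Kernel client, run on as many Linux hosts as you like:
  mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret
  # Or the FUSE client:
  ceph-fuse /mnt/cephfs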
I'm pretty sure the default configs won't let an EC PG go active with
only "k" OSDs in its PG; it needs at least k+1 (or possibly more? Not
certain). Running an "n+1" EC config is just not a good idea.
For testing you could probably adjust this with the equivalent of
min_size for EC pools, but I do
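Following that suggestion, a test-only sketch (pool name is a placeholder;
per the message above the exact defaults are not certain, so verify with
`get` first, and don't lower this on production data):

  ceph osd pool get ecpool min_size     # likely k+1 = 5 for a 4+1 profile
  ceph osd pool set ecpool min_size 4   # allow PGs to go active with k OSDs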
This is resolved. Apparently ns3 was shutdown a while ago and ns4 just
took a while to catch up.
ceph.com, download.ceph.com, and docs.ceph.com all have updated DNS records.
Sorry again for the trouble this caused all week. The steps we've taken
should allow us to return to a reasonable leve
ns3 is still answering, wrongly, for the record
`ceph pg dump` should show you something like:
* active+undersized+degraded ... [NONE,3,2,4,1] 3 [NONE,3,2,4,1]
Sam,
Am I wrong? Or is it due to something else?
On Sat, Jan 21, 2017 at 4:22 AM, Gregory Farnum wrote:
> I'm pretty sure the default configs won't let an EC PG go active with
I think he needs the “gateway” servers because he wishes to expose the storage
to clients that won’t speak Ceph natively. I’m not sure I would entirely trust
that Windows port of CephFS, and there are also security concerns with allowing
end users to talk directly to Ceph. There’s also future st
Hi Sam,
I have a test cluster, albeit small. I’m happy to run tests and graph results
with a wip branch and work out reasonable settings, etc.
From: Samuel Just [mailto:sj...@redhat.com]
Sent: 19 January 2017 23:23
To: David Turner
Cc: Nick Fisk ; ceph-users
Subject: Re: [ceph-users] osd_sn