I was just going by docs.ceph.com/docs/master/start/os-recommendations/,
which states 4.9.
docs.ceph.com/docs/master/rados/operations/crush-map
only goes as far as Jewel and states 4.5.
Not sure where else I can find a concrete answer on whether 4.10 is new enough.
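For what it's worth, rather than guessing from kernel version tables, you can
ask the cluster what the connected clients actually advertise; a quick check,
assuming luminous CLI syntax:

  # Show the release and feature bits each connected client reports
  ceph features
  # Show which tunables profile the cluster currently requires
  ceph osd crush show-tunables

If the kernel client shows up with the required feature bits there, you have
your answer either way.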
Ashley
On 17-09-06 07:33, Ashley Merrick wrote:
Hello,
Have recently upgraded a cluster to Luminous (running Proxmox); at the
same time I have upgraded the compute cluster to 5.x, meaning we now
run the latest kernel version (Linux 4.10.15-1). Looking to do the
following:
ceph osd set-require-min-compat-client luminous
Hi all,
(Sorry if this shows up twice - I got auto-unsubscribed, so the first
attempt was blocked.)
I'm keen to read up on some performance comparisons of replication versus
EC on HDD+SSD-based setups. So far the only recent thing I've found is
Sage's Vault17 slides [1], which have a single slide
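In case anyone wants to generate their own numbers in the meantime, this is
the kind of apples-to-apples run I have in mind, sketched with made-up pool
names and a 4+2 profile:

  # Replicated (size 3) pool vs. EC (k=4, m=2) pool on the same OSDs
  ceph osd pool create repl-test 128 128 replicated
  ceph osd erasure-code-profile set ec42 k=4 m=2
  ceph osd pool create ec-test 128 128 erasure ec42
  # 60s write benchmark against each; compare bandwidth and latency lines
  rados bench -p repl-test 60 write --no-cleanup
  rados bench -p ec-test 60 write --no-cleanup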
On Wed, Aug 30, 2017 at 01:04:51AM -0300, Leonardo Vaz wrote:
> Hey Cephers,
>
> This is just a friendly reminder that the next Ceph Developer Monthly
> meeting is coming up:
>
> http://wiki.ceph.com/Planning
>
> If you have work that you're doing that is feature work, significant
> backports,
Hello,
Have recently upgraded a cluster to Luminous (running Proxmox); at the same
time I have upgraded the compute cluster to 5.x, meaning we now run the
latest kernel version (Linux 4.10.15-1). Looking to do the following:
ceph osd set-require-min-compat-client luminous
Below is the output of
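For reference, the pair of commands in question; the get variant should show
the current requirement before you raise it (syntax as I remember it on
luminous):

  # Show the currently required minimum client release
  ceph osd get-require-min-compat-client
  # Raise it; this refuses if connected clients are too old, unless forced
  ceph osd set-require-min-compat-client luminous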
I'm using the default (host-level) crush map, so that shouldn't be the case.
Nothing was misplaced, etc.
And yes, judging by the pg dump output these OSDs were on different hosts.
I was thinking, maybe this has to do with OSDs not having a consistent
state somehow? Or some pgmap issues?
Sep 6, 2
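If it helps to dig further, this is roughly what I would look at next (the
pg ID below is a placeholder):

  # Full peering/recovery state of one affected PG; 1.2f is a placeholder
  ceph pg 1.2f query
  # Any PGs stuck inactive/unclean/stale show up here
  ceph pg dump_stuck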
On Tue, Sep 5, 2017 at 1:44 PM, Florian Haas wrote:
> Hi everyone,
>
> with the Luminous release out the door and the Labor Day weekend over,
> I hope I can kick off a discussion on another issue that has irked me
> a bit for quite a while. There doesn't seem to be a good documented
> answer to this: what are Ceph's real limits when it comes to RBD
> snapshots?
On Mon, Aug 28, 2017 at 4:05 AM, Yuri Gorshkov wrote:
> Hi.
>
> When trying to take down a host for maintenance purposes I encountered an
> I/O stall along with some PGs marked 'peered' unexpectedly.
>
> Cluster stats: 96/96 OSDs, healthy prior to incident, 5120 PGs, 4 hosts
> consisting of 24 OSD
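A guess at the first thing to check ('rbd' below is just an example pool
name): with host-level failure domains, PGs go 'peered' instead of active
when fewer than min_size replicas remain up, which stalls I/O on them:

  # If losing one host drops some PGs below min_size, they go 'peered'
  ceph osd pool get rbd size
  ceph osd pool get rbd min_size
  # PGs that fell below min_size are reported as inactive/peered here
  ceph health detail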
On Thu, Aug 31, 2017 at 11:51 AM, Marc Roos wrote:
>
> Should these messages not be gone in 12.2.0?
>
> 2017-08-31 20:49:33.500773 7f5aa1756d40 -1 WARNING: the following
> dangerous and experimental features are enabled: bluestore
> 2017-08-31 20:49:33.501026 7f5aa1756d40 -1 WARNING: the following
On Fri, Aug 25, 2017 at 3:20 AM, Henrik Korkuc wrote:
> Hello,
>
> I tried creating tiering with EC pools (EC pool as a cache for another EC
> pool) and ended up with "Error ENOTSUP: tier pool 'ecpool' is an ec pool,
> which cannot be a tier". Having overwrite support on EC pools with direct
> suppo
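For what it's worth, since Luminous the usual pattern is to skip the cache
tier entirely: enable overwrites on the EC pool and point clients at it
directly. A sketch with example pool/image names:

  # Allow partial overwrites on the EC pool (requires bluestore OSDs)
  ceph osd pool set ecpool allow_ec_overwrites true
  # RBD keeps its metadata in a replicated pool and its data in the EC pool
  rbd create --size 10G --data-pool ecpool rbd/myimage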
Hi everyone,
with the Luminous release out the door and the Labor Day weekend over,
I hope I can kick off a discussion on another issue that has irked me
a bit for quite a while. There doesn't seem to be a good documented
answer to this: what are Ceph's real limits when it comes to RBD
snapshots?
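To make the question concrete, this is the kind of census I'd like to reason
about (pool name is an example):

  # Count snapshots per RBD image across a pool
  for img in $(rbd ls rbd); do
    echo "$img: $(rbd snap ls rbd/$img | wc -l) snapshots"
  done

At what snapshot count per image, or per pool, do operations like deletion
and trimming start to hurt?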
On Fri, Sep 1, 2017 at 7:24 AM, wrote:
> Hi:
> I want to ask a question about the CEPH_IOC_SYNCIO flag.
> I know that when using the O_SYNC or O_DIRECT flag, the write call
> executes in two other code paths, different from using the CEPH_IOC_SYNCIO flag.
> And I find the comments ab
Hey Cephers,
Leo and I are coordinators for Ceph's participation in Outreachy
(https://www.outreachy.org/), a program similar to the Google Summer of Code
for groups that are traditionally underrepresented in tech. During the program,
mentees work on a project for three months under a mentor an
Hi,
I come back with the same issue as seen in a previous thread (link given),
trying to add a 2TB SATA drive as an OSD:
Using the Proxmox GUI or CLI (command given) gives the same (bad) result.
Didn't want to use a direct 'ceph osd create', thus bypassing the pmxcfs
redundant filesystem.
I tried to build an OSD with sam
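For the archive, the two ways I tried, with an example device path (pveceph
being the Proxmox wrapper around the same machinery):

  # Proxmox wrapper, same as the GUI action as far as I can tell
  pveceph createosd /dev/sdb
  # Roughly the equivalent lower-level call on Proxmox 5 / Luminous
  ceph-disk prepare --bluestore /dev/sdb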
We had to change these in our cluster for some drives to come up.
Tyler Bishop
Founder EST 2007
O: 513-299-7108 x10
M: 513-646-5809
[ http://beyondhosting.net/ | http://BeyondHosting.net ]
Thanks for your suggestions, Matt. ldapsearch functionality from the rados
gw machines works fine using the same parameters specified in ceph.conf
(uri, binddn, searchdn, ldap_secret). As expected I see network traffic
to/from the ldap host when performing a search as well.
The only configuration
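Concretely, this is the sanity check I ran from the gateway host, with
placeholder values matching the ceph.conf parameters:

  # Verify bind and search with exactly the credentials rgw is configured with
  ldapsearch -x -H ldaps://ldap.example.com \
    -D "uid=rgw,ou=services,dc=example,dc=com" \
    -w "$(cat /etc/ceph/ldap_secret)" \
    -b "ou=users,dc=example,dc=com" "(uid=someuser)"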
Did the journal drive fail during operation, or was it taken out
pre-failure? If it fully failed, then most likely you can't guarantee the
consistency of the underlying OSDs. In that case, you just remove the
affected OSDs and add them back in as new OSDs.
In the case of having good data on th
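That is, the usual removal/re-add dance, roughly as below (N stands for the
OSD id; run the stop command on the OSD's host):

  # Take the OSD out, let data rebalance, then remove it everywhere
  ceph osd out N
  systemctl stop ceph-osd@N
  ceph osd crush remove osd.N
  ceph auth del osd.N
  ceph osd rm N
  # afterwards, re-provision the disk and add it back as a brand-new OSD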
Good to know. We must have misconfigured our router when we were testing
this.
On Tue, Sep 5, 2017, 3:00 AM Morrice Ben wrote:
> Hi all,
>
> Thanks for your responses. I managed to re-ip the OSDs
>
> I did not need to set cluster network or public network in [global], just
> changing the address
What is the output of "netstat -anp | grep 7000"?
On 17-09-05 14:19, 许雪寒 wrote:
Sorry for the bad formatting, here is the right one:
Sep 5 19:01:56 rg1-ceph7 ceph-mgr: File
"/usr/lib/python2.7/site-packages/cherrypy/process/servers.py", line 187, in
_start_http_thread
Sep 5 19:01:56 rg1-ceph7
Sorry for the bad formatting, here is the right one:
Sep 5 19:01:56 rg1-ceph7 ceph-mgr: File
"/usr/lib/python2.7/site-packages/cherrypy/process/servers.py", line 187, in
_start_http_thread
Sep 5 19:01:56 rg1-ceph7 ceph-mgr: self.httpserver.start()
Sep 5 19:01:56 rg1-ceph7 ceph-mgr: File
"/
Here is the log in /var/log/messages
Sep 5 19:01:55 rg1-ceph7 systemd: Started Ceph cluster manager daemon.
Sep 5 19:01:55 rg1-ceph7 systemd: Starting Ceph cluster manager daemon...
Sep 5 19:01:56 rg1-ceph7 ceph-mgr: [05/Sep/2017:19:01:56] ENGINE Bus STARTING
Sep 5 19:01:56 rg1-ceph7 ceph-mgr:
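That ENGINE Bus error usually means the dashboard could not bind its port.
If something else already listens on 7000, one possible workaround (config-key
names as I remember them from the luminous mgr docs):

  # Move the mgr dashboard to a free port, then restart the module
  ceph config-key set mgr/dashboard/server_port 7001
  ceph mgr module disable dashboard
  ceph mgr module enable dashboard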
On Tue, Sep 5, 2017 at 10:28 AM, Marc Roos wrote:
> What would be the best way to get an overview of all client connections.
> Something similar to the output of rbd lock list
>
>
> cluster:
> 1 clients failing to respond to capability release
> 1 MDSs report slow requests
What would be the best way to get an overview of all client connections.
Something similar to the output of rbd lock list
cluster:
1 clients failing to respond to capability release
1 MDSs report slow requests
ceph daemon mds.a dump_ops_in_flight
{
"ops": [
Hi all,
Thanks for your responses. I managed to re-ip the OSDs
I did not need to set cluster network or public network in [global], just
changing the address in the [osd.#] section was sufficient.
In my environment, the catalyst was a misconfiguration on the network side.
After I provided ipe
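For the archive, the shape of the change that was sufficient; IDs and
addresses below are examples:

  # ceph.conf: per-OSD address override, one section per OSD
  [osd.12]
  public addr = 10.0.1.12
  cluster addr = 10.1.1.12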