Agreed. In an ideal world I would have interleaved all my compute, long-term
storage and POSIX processing. Unfortunately, business doesn't always work out
so nicely, so I'm left with buying and building out to match changing needs. In
this case we are a small part of a larger org and have been al
Hello,
I am using the Ceph 10.2.5 release. Does this version's CephFS meet
Hadoop cluster requirements? (Is anyone using the same?)
Thanks
Swami
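For reference: CephFS is usually wired into Hadoop through the cephfs-hadoop
bindings, configured in core-site.xml. A hedged sketch only — the monitor
address is a placeholder, and the exact property names should be checked
against the "Using Hadoop with CephFS" docs for your Hadoop version:

```xml
<!-- core-site.xml fragment (illustrative; mon address is a placeholder) -->
<property>
  <name>fs.default.name</name>
  <value>ceph://192.168.0.1:6789/</value>
</property>
<property>
  <name>fs.ceph.impl</name>
  <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
</property>
<property>
  <name>ceph.conf.file</name>
  <value>/etc/ceph/ceph.conf</value>
</property>
```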
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.
On Mon, Apr 24, 2017 at 2:41 AM, Xiaoxi Chen wrote:
> Well, I can definitely build my own,
>
> 1. Precise is NOT EOL on Hammer release, which was confirmed in
> previous mail thread. So we still need to maintain point-in-time
> hammer package for end users.
Ceph Hammer is EOL
>
> 2. It is NOT ON
Hello Orit,
Could it be that something has changed in 10.2.5+ which is related to
reading the endpoints from the zone/period config?
In my master zone I have specified the endpoint with a trailing
backslash (which is also escaped), however I do not define the secondary
endpoint this way. Am
On Mon, Apr 24, 2017 at 8:53 AM, Alfredo Deza wrote:
> On Mon, Apr 24, 2017 at 2:41 AM, Xiaoxi Chen wrote:
>> Well, I can definitely build my own,
>>
>> 1. Precise is NOT EOL on Hammer release, which was confirmed in
>> previous mail thread. So we still need to maintain point-in-time
>> hammer pa
This is the third development checkpoint release of Luminous, the next
long term
stable release.
Major changes from v12.0.1
--------------------------
* The original librados rados_objects_list_open (C) and objects_begin
(C++) object listing API, deprecated in Hammer, has finally been
removed.
Hi Ben,
On Mon, Apr 24, 2017 at 4:36 PM, Ben Morrice wrote:
> Hello Orit,
>
> Could it be that something has changed in 10.2.5+ which is related to
> reading the endpoints from the zone/period config?
>
I don't remember any change for endpoints config, but I will go over
the changes to make sure
Hey,
Quick question, hopefully an easy one; I have tried a few Google searches but
found nothing concrete.
I am running KVM VMs using KRBD. If I add and remove Ceph mons, are the
running VMs updated with this information, or do I need to reboot the VMs for
them to be provided with the changed set of mons?
Thanks!
Sen
Hi everyone,
so this will be a long email — it's a summary of several off-list
conversations I've had over the last couple of weeks, but the TL;DR
version is this question:
How can a Ceph cluster maintain near-constant performance
characteristics while supporting a steady intake of a large number
One guest VM on my test cluster has hung for more than 24 hours while running a
fio test on an RBD device, but other VMs accessing other images in the same
pool are fine. I was able to reproduce the problem by running “rbd info” on
the same pool as the stuck VM with some debug tracing on (see l
On Mon, Apr 24, 2017 at 2:53 PM, Phil Lacroute
wrote:
> 2017-04-24 11:30:57.058233 7f5512ffd700 1 -- 192.168.206.17:0/3282647735
> --> 192.168.206.13:6804/22934 -- osd_op(client.4375.0:3 1.af6f1e38
> rbd_header.1058238e1f29 [call rbd.get_size,call rbd.get_object_prefix] snapc
> 0=[] ack+read+know
On Mon, Apr 24, 2017 at 10:05 AM, Alfredo Deza wrote:
> On Mon, Apr 24, 2017 at 8:53 AM, Alfredo Deza wrote:
>> On Mon, Apr 24, 2017 at 2:41 AM, Xiaoxi Chen wrote:
>>> Well, I can definitely build my own,
>>>
>>> 1. Precise is NOT EOL on Hammer release, which was confirmed in
>>> previous mail t
Jason,
Thanks for the suggestion. That seems to show it is not the OSD that got stuck:
ceph7:~$ sudo rbd -c debug/ceph.conf info app/image1
…
2017-04-24 13:13:49.761076 7f739aefc700 1 -- 192.168.206.17:0/1250293899 -->
192.168.206.13:6804/22934 -- osd_op(client.4384.0:3 1.af6f1e38
rbd_header.
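For anyone following these traces: the messenger debug lines above have a
fixed shape (`... -- <src addr> --> <dst addr> -- <op>`), so the target OSD
address can be pulled out with a small script instead of by eye. A minimal
sketch, assuming IPv4 "host:port/nonce" addresses as in the traces here (the
`parse_msgr_line` helper is mine, not part of Ceph):

```python
import re

# Matches the "src --> dst" portion of a Ceph messenger debug line, e.g.
# "... 1 -- 192.168.206.17:0/1250293899 --> 192.168.206.13:6804/22934 -- osd_op(...)"
MSGR_RE = re.compile(r'-- (\S+) --> (\S+) --')

def parse_msgr_line(line):
    """Return (src, dst_host, dst_port) for a messenger line, or None."""
    m = MSGR_RE.search(line)
    if m is None:
        return None
    src, dst = m.group(1), m.group(2)
    host, rest = dst.split(':', 1)        # assumes IPv4 host:port/nonce
    port = int(rest.split('/', 1)[0])
    return src, host, port

line = ("2017-04-24 13:13:49.761076 7f739aefc700 1 -- "
        "192.168.206.17:0/1250293899 --> 192.168.206.13:6804/22934 -- "
        "osd_op(client.4384.0:3 1.af6f1e38 ...)")
print(parse_msgr_line(line))
```

Grepping the destination out this way makes it quick to confirm every stuck
request is going to the same OSD address.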
On 04/24/17 22:23, Phil Lacroute wrote:
> Jason,
>
> Thanks for the suggestion. That seems to show it is not the OSD that
> got stuck:
>
> ceph7:~$ sudo rbd -c debug/ceph.conf info app/image1
> …
> 2017-04-24 13:13:49.761076 7f739aefc700 1 --
> 192.168.206.17:0/1250293899 --> 192.168.206.13:6804/
Just to cover all the bases, is 192.168.206.13:6804 really associated
with a running daemon for OSD 11?
On Mon, Apr 24, 2017 at 4:23 PM, Phil Lacroute
wrote:
> Jason,
>
> Thanks for the suggestion. That seems to show it is not the OSD that got
> stuck:
>
> ceph7:~$ sudo rbd -c debug/ceph.conf in
Yes it is the correct IP and port:
ceph3:~$ netstat -anp | fgrep 192.168.206.13:6804
tcp        0      0 192.168.206.13:6804    0.0.0.0:*    LISTEN    22934/ceph-osd
I turned up the logging on the OSD, and I don’t think it received the request.
However, I also noticed a large num
I would double-check your file descriptor limits on both sides -- OSDs
and the client. 131 sockets shouldn't make a difference. Is the port open
on any firewalls you have running?
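Both checks suggested here (descriptor limits and port reachability) can be
scripted from the client side. A minimal sketch, assuming a plain TCP connect
is a good-enough proxy for "port is open" (helper names are mine, not Ceph's):

```python
import resource
import socket

def nofile_limits():
    """Return the (soft, hard) open-file-descriptor limits for this process."""
    return resource.getrlimit(resource.RLIMIT_NOFILE)

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

soft, hard = nofile_limits()
print(f"fd limits: soft={soft} hard={hard}")
# From the client host, e.g.: port_open("192.168.206.13", 6804)
```

For the OSD side, `cat /proc/<osd-pid>/limits` shows the limits the daemon is
actually running with, which may differ from your shell's.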
On Mon, Apr 24, 2017 at 8:14 PM, Phil Lacroute
wrote:
> Yes it is the correct IP and port:
>
> ceph3:~$ netstat
Anyone?
On Sat, Apr 22, 2017 at 12:33 PM, Henry Ngo wrote:
> I followed the install doc however after deploying the monitor, the doc
> states to start the mon using Upstart. I learned through digging around
> that the Upstart package is not installed using Make Install so it won't
> work. I trie
Dear ceph users,
I am running the following setup:-
- 6 x osd servers (centos 7, mostly HP DL180se G6 with SA P410 controllers)
- Each osd server has 1-2 SSD journals, each handling ~5 7.2k SATA RE disks
- ceph-0.94.10
Normal operations work OK; however, when a single disk failed (or an
abrupt 'ceph
Hi Xiaoxi
Just want to confirm again: according to the definition of
"LTS" in Ceph, Hammer is supposed not to be EOL until Luminous is released,
This is correct.
and before that, can we expect that hammer upgrades and packages on
Precise/other old OSes will still be provided?
We have all our serv