Hey Cephers,
As many of you know, we've just announced Cephalocon APAC 2018,
which takes place in Beijing on March 22 and 23 at the JW Marriott Hotel,
as well as the Call for Proposals, which is accepting talks until January
31, 2018.
This is a project we've been working on for a long time now and that we
The errors you're seeing there don't look like they're related to
elasticsearch. It's a generic radosgw-related error saying that it
failed to reach the rados (ceph) backend. You can try bumping up the
messenger log (debug ms = 1) and see if there's any hint in there.
Yehuda
On Fri, Jan 12, 2018 at 12:
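For anyone following along, the messenger-log suggestion above usually goes into ceph.conf for the affected gateway; a minimal sketch, assuming a hypothetical instance name client.rgw.gateway1 (substitute your own, then restart the daemon):

```
[client.rgw.gateway1]
    debug ms = 1
    debug rgw = 20
```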
Hello,
I'm wondering if it's possible to grow a volume (such as in a cloud/VM
environment) and use pvresize/lvextend to utilize the extra space in my
pool.
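A sketch of the grow procedure on one node, with hypothetical device, VG, and LV names throughout, and assuming a BlueStore OSD and a ceph-bluestore-tool recent enough to support bluefs-bdev-expand:

```shell
# 1. After the cloud provider enlarges the virtual disk, rescan it:
echo 1 > /sys/class/block/sdb/device/rescan
# 2. Grow the LVM physical volume to fill the enlarged disk:
pvresize /dev/sdb
# 3. Extend the OSD's logical volume over the new free space:
lvextend -l +100%FREE /dev/ceph-vg/osd-block-0
# 4. With the OSD stopped, tell BlueStore about the extra space:
systemctl stop ceph-osd@0
ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0
systemctl start ceph-osd@0
```

Repeating this node by node keeps the cluster up; the new capacity shows in `ceph osd df` once each OSD restarts.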
I am testing with the following environment:
* Running on cloud provider (Google Cloud)
* 3 nodes, 1 OSD each
* 1 storage pool with "size" o
So I did the exact same thing using Kraken and the same set of VMs, with no
issue. What is the magic to make it work in Luminous? Is anyone lucky enough
to have this RGW ElasticSearch sync working on Luminous?
On Mon, Jan 8, 2018 at 10:26 AM, Youzhong Yang wrote:
> Hi Yehuda,
>
> Thanks for replying.
>
>
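For anyone retracing this, the Luminous RGW metadata-search setup goes through a zone with the elasticsearch tier type; a sketch, with placeholder zonegroup, zone, and endpoint names:

```shell
# Placeholder names throughout; adjust to your own multisite layout.
radosgw-admin zone create --rgw-zonegroup=default \
    --rgw-zone=us-east-es --endpoints=http://rgw-es:80 \
    --tier-type=elasticsearch
radosgw-admin zone modify --rgw-zone=us-east-es \
    --tier-config=endpoint=http://elastic:9200
radosgw-admin period update --commit
```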
Thank you for the explanation, Brad. I will change that setting and see how
it goes.
Subhachandra
On Thu, Jan 11, 2018 at 10:38 PM, Brad Hubbard wrote:
> On Fri, Jan 12, 2018 at 11:27 AM, Subhachandra Chandra
> wrote:
> > Hello,
> >
> > We are running experiments on a Ceph cluster before
Rgw.buckets (which is where the data is being sent). I am just surprised
that a few incomplete PGs would grind three gateways to a halt. Granted, the
incomplete PGs were part of a large hardware failure situation we had, and a
min_size setting of 1 didn't help the situation. We are not complet
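The min_size trade-off mentioned above can be inspected and adjusted per pool; a sketch, assuming the bucket data pool is named default.rgw.buckets.data (check your own zone's pool names):

```shell
# Inspect the replication settings of the bucket data pool:
ceph osd pool get default.rgw.buckets.data size
ceph osd pool get default.rgw.buckets.data min_size
# min_size 1 accepts writes with a single surviving replica, which is how
# incomplete PGs arise after further failures; 2 is the safer setting:
ceph osd pool set default.rgw.buckets.data min_size 2
```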
Hi,
can someone comment on/confirm my planned OSD replacement procedure?
It would be very helpful for me.
Dietmar
On January 11, 2018 at 17:47:50 CET, Dietmar Rieder wrote:
>Hi Alfredo,
>
>thanks for your comments, see my answers inline.
>
>On 01/11/2018 01:47 PM, Alfredo Deza wrote:
>> On Thu, Jan
Hi all,
I installed a new Luminous 12.2.2 cluster. The monitors were up at
first, but quickly started failing with segfaults.
I only installed some mons, mgr, and mds with ceph-deploy, and osds with
ceph-volume. No pools or fs were created yet.
When I start all mons again, there is a short window
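A first step when mons segfault shortly after start is usually to run one of them in the foreground with verbose logging to capture the crash context; a sketch, assuming a hypothetical mon id of "a":

```shell
# Run the failing monitor in the foreground (-d logs to stderr)
# with monitor and messenger debugging turned up:
ceph-mon -i a -d --debug-mon 20 --debug-ms 1
```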
Hi David,
To follow up on this: I had a 4th drive fail (out of 12) and have opted to
order the below disks as a replacement. I have an ongoing case with Intel
via the supplier and will report back anything useful, but I am going to
avoid the Intel S4600 2TB SSDs for the moment.
1.92TB Samsung SM86
Well, if a stranger has access to my whole Ceph data (that is, all my VMs'
and rgw data), I don't mind if he gets root access too :)
On 01/12/2018 10:18 AM, Van Leeuwen, Robert wrote:
Ceph runs on dedicated hardware, there is nothing there except Ceph,
and the ceph daemons have already all p
> Ceph runs on dedicated hardware, there is nothing there except Ceph,
>and the ceph daemons already have full power over ceph's data.
>And there is no random-code execution allowed on this node.
>
>Thus, spectre & meltdown are meaningless for Ceph's nodes, and
>mitigations should
"ceph versions" returned all daemons as running 12.2.1.
On Fri, Jan 12, 2018 at 8:00 AM, Janne Johansson wrote:
> Running "ceph mon versions" and "ceph osd versions" and so on as you do the
> upgrades would have helped I guess.
>
>
> 2018-01-11 17:28 GMT+01:00 Luis Periquito :
>>
>> this was a bi
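The per-daemon version checks suggested above can also be scripted into an upgrade runbook; a sketch using a hypothetical captured output of `ceph versions` (the real commands need a live cluster):

```shell
# On a live cluster: ceph mon versions, ceph osd versions, or all at
# once with "ceph versions". Illustration with hypothetical output:
cat > /tmp/versions.json <<'EOF'
{"mon": {"ceph version 12.2.2 luminous (stable)": 3},
 "osd": {"ceph version 12.2.1 luminous (stable)": 4}}
EOF
# More than one distinct version string means the upgrade is incomplete:
grep -o '12\.2\.[0-9]*' /tmp/versions.json | sort -u
```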
Hi,
While trying to get an OSD back into the test cluster, which had
dropped out for an unknown reason, we see a RocksDB segmentation fault
during "compaction". I increased debugging to 20/20 for OSD / RocksDB;
see part of the logfile below:
... 49477, 49476, 49475, 49474, 49473, 49472, 49471, 49
Running "ceph mon versions" and "ceph osd versions" and so on as you do the
upgrades would have helped I guess.
2018-01-11 17:28 GMT+01:00 Luis Periquito :
> this was a bit weird, but is now working... Writing for future
> reference if someone faces the same issue.
>
> this cluster was upgraded