Hello,
some time ago I upgraded our 6 node cluster (0.94.9) running on Ubuntu from
Trusty to Xenial.
The problem was that the OS upgrade also upgrades Ceph, which we did not want
to do in the same step, because then we would have had to upgrade all nodes
at the same time. Therefore we did it node by node
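For anyone facing the same problem: one way to keep the distro upgrade from
dragging Ceph along is to hold or pin the Ceph packages first. A rough sketch
(package names and the pinned version are illustrative, adjust them to what is
actually installed on the node; the release upgrader does not always honour
holds, so review the proposed changes before confirming):

# hold the Ceph packages so the dist-upgrade leaves them alone
apt-mark hold ceph ceph-common ceph-mon ceph-osd librados2 librbd1

# or pin them via apt preferences instead
~# cat /etc/apt/preferences.d/ceph-pin
Package: ceph* librados* librbd*
Pin: version 0.94.*
Pin-Priority: 1001

# release the hold when you are ready to upgrade Ceph itself, node by node
apt-mark unhold ceph ceph-common ceph-mon ceph-osd librados2 librbd1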
We did try to use DNS to hide the IPs and achieve a kind of HA, but failed:
mount.ceph resolves whatever name you provide to an IP address and passes
that to the kernel.
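To illustrate: even if you mount by hostname, the kernel client only ever sees
the resolved address, so DNS hides nothing from whoever can read the mount
table. The hostname and paths below are made up:

# mount CephFS via a DNS name (illustrative)
mount -t ceph mon1.example.com:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

# the mount table still shows the resolved monitor IP
grep ceph /proc/mounts
# e.g. 192.0.2.11:6789:/ /mnt/cephfs ceph rw,name=admin,... 0 0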
2017-02-28 16:14 GMT+08:00 Robert Sander :
> On 28.02.2017 07:19, gjprabu wrote:
>
> > How to hide internal ip address on ceph
I currently have a situation where the monitors are running at 100% CPU,
and I can't run any commands because authentication times out after 300
seconds.
I stopped the leader, and the resulting election picked a new leader,
but that monitor shows exactly the same behavior.
Now both monitors *th
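One thing that can still work when cephx authentication is timing out is
querying the monitor over its local admin socket, which does not go through
authentication at all. Something along these lines, run on the monitor host
itself (the mon id 'a' is just a placeholder):

ceph daemon mon.a mon_status
ceph daemon mon.a quorum_status
ceph daemon mon.a perf dump    # counters, to see what the mon is busy with

# same thing via the socket path directly
ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok mon_status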
Hello,
I have a strange situation:
On a host server we are running 5 VMs. The VMs have their disks provisioned by
cinder from a Ceph cluster and are attached by qemu-kvm using librbd.
We have a very strange situation where the VMs apparently stop working
for a few seconds (10-20), and a
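When chasing short stalls like that, it usually helps to watch for blocked or
slow requests on the cluster side while a guest freezes; a rough sketch:

ceph -w                 # watch for 'slow requests' warnings as they appear
ceph health detail      # lists blocked requests and the OSDs involved
ceph osd perf           # per-OSD commit/apply latency, points at a struggling disk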
On Tue, Feb 28, 2017 at 5:44 AM, George Mihaiescu wrote:
> Hi Yehuda,
>
> I've run the "radosgw-admin orphans find" command again, but captured its
> output this time.
>
> There are both "shadow" files and "multipart" files detected as leaked.
>
> leaked:
> default.34461213.1__multipart_data/d2a14
Quick update. So I'm trying out the procedure as documented here.
So far I've:
1. Stopped ceph-mds
2. Set noout, norecover, norebalance, nobackfill (flag commands sketched below)
3. Stopped all ceph-osd
4. Stopped ceph-mon
5. Installed new OS
6. Started ceph-mon
7. Started all ceph-osd
This is where I've stopped. All but one
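For reference, the flags from step 2 are the usual set/unset pairs; a sketch
of the commands, assuming the standard ceph CLI:

# before shutting everything down (step 2)
ceph osd set noout
ceph osd set norecover
ceph osd set norebalance
ceph osd set nobackfill

# once the cluster is back up and healthy, clear them again
ceph osd unset noout
ceph osd unset norecover
ceph osd unset norebalance
ceph osd unset nobackfill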
> On 27 February 2017 at 15:59, Jan Kasprzak wrote:
>
>
> Hello,
>
> Gregory Farnum wrote:
> : On Mon, Feb 20, 2017 at 11:57 AM, Jan Kasprzak wrote:
> : > Gregory Farnum wrote:
> : > : On Mon, Feb 20, 2017 at 6:46 AM, Jan Kasprzak wrote:
> : > : >
> : > : > I have been using CEPH RBD
Hi Yehuda,
I've run the "radosgw-admin orphans find" command again, but captured its
output this time.
There are both "shadow" files and "multipart" files detected as leaked.
leaked:
default.34461213.1__multipart_data/d2a14aeb-a384-51b1-8704-fe76a9a6f5f5.-j0vqDrC0wr44bii2ytrtpcrlnspSyE.44
leaked
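For context, the orphan scan is driven by a data pool and a job id, and it
only reports objects, it does not delete anything; the invocation looks
roughly like this (pool name and job id are illustrative):

radosgw-admin orphans find --pool=default.rgw.buckets.data --job-id=orphan-scan-1
radosgw-admin orphans finish --job-id=orphan-scan-1   # clean up the scan's own bookkeeping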
In that case, live migration or power off/power on of the VMs is the
only alternative (after you verify that your directories are properly
created and that QEMU is able to write to them).
On Tue, Feb 28, 2017 at 3:41 AM, Laszlo Budai wrote:
> Hello,
>
> Thank you for the answer.
> I don't have th
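For anyone else hitting this: librbd inside the QEMU process only creates the
socket if the client section of ceph.conf points it somewhere it can write.
A minimal sketch of the hypervisor-side settings (paths are the conventional
ones, adjust as needed):

# append to ceph.conf on the hypervisor; the quoted 'EOF' keeps the $vars literal
cat >> /etc/ceph/ceph.conf <<'EOF'
[client]
    admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
    log file = /var/log/ceph/qemu-guest-$pid.log
EOF

# both directories must exist and be writable by the user QEMU runs as
ls -ld /var/run/ceph /var/log/ceph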
Exactly what happened to me!
Very gd
On 28/02/2017 12:49, Mehmet wrote:
I assume this is the right way. I did a disaster recovery test a
few months ago with Jewel on an OSD-only server. Just reinstall the OS
and then Ceph. Do not touch the OSDs; they will automatically start.
Am
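In case it helps others, the Jewel-era flow after an OS reinstall is roughly:
put the packages, ceph.conf and keyrings back, then let udev/ceph-disk bring
the OSDs up from their untouched data disks. A sketch:

apt-get install ceph                  # then restore /etc/ceph/ceph.conf and the keyrings
ceph-disk activate-all                # usually not needed, udev activates the disks itself
systemctl list-units 'ceph-osd@*'     # check that the OSD units came back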
On 02/27/17 18:01, Heller, Chris wrote:
> First I bring down the Ceph FS via `ceph mds cluster_down`.
> Second, to prevent OSDs from trying to repair data, I run `ceph osd
> set noout`
> Finally I stop the ceph processes in the following order: ceph-mds,
> ceph-mon, ceph-osd
>
This is the wrong procedure
Hi,
actually I can't install hammer on wheezy:
~# cat /etc/apt/sources.list.d/ceph.list
deb http://download.ceph.com/debian-hammer/ wheezy main
~# cat /etc/issue
Debian GNU/Linux 7 \n \l
~# apt-cache search ceph
ceph-deploy - Ceph-deploy is an easy to use configuration tool
~# apt-ca
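A quick way to check whether that repository is actually being read (key
imported, index fetched) is something like this:

wget -q -O- 'https://download.ceph.com/keys/release.asc' | apt-key add -
apt-get update

apt-cache policy ceph     # shows which repos can provide 'ceph' and at what version
apt-cache madison ceph    # same information, one line per candidate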
Hi,
please check if you have SELinux enabled on your system(s) (CentOS/RHEL are
good candidates for that).
IIRC I had to write my own .pp policy module to allow the socket file to be created.
You could also try "setenforce 0" to set SELinux to permissive mode
and restart ceph (on that node).
HTH
Be
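To see quickly whether SELinux is what is blocking the socket, something along
these lines:

getenforce                                  # Enforcing / Permissive / Disabled
setenforce 0                                # permissive for a test, then restart ceph on that node
ausearch -m avc -ts recent | grep -i ceph   # recent AVC denials mentioning ceph / the .asok path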
Hello,
Thank you for the answer.
I don't have the admin socket either :(
The ceph subdirectory is missing from /var/run.
What would be the steps to get the socket?
Kind regards,
Laszlo
On 28.02.2017 05:32, Jason Dillaman wrote:
On Mon, Feb 27, 2017 at 12:36 PM, Laszlo Budai wrote:
Currently m
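If /var/run/ceph simply does not exist, creating it with ownership matching
the user the client runs as (QEMU/libvirt here) is usually enough; a sketch,
where the 'libvirt-qemu' user name is an assumption that depends on the
distribution (e.g. it is 'qemu' on RHEL/CentOS):

install -d -m 0770 -o libvirt-qemu -g libvirt-qemu /var/run/ceph

# /var/run is a tmpfs, so also tell systemd-tmpfiles to recreate it at boot
echo 'd /var/run/ceph 0770 libvirt-qemu libvirt-qemu -' > /etc/tmpfiles.d/qemu-ceph.conf
systemd-tmpfiles --create /etc/tmpfiles.d/qemu-ceph.conf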
On 28.02.2017 07:19, gjprabu wrote:
> How to hide the internal IP address when mounting CephFS? For
> security reasons we need to hide the IP address. Also, we are running a docker
> container on the base machine, and the partition details are shown
> over there. Kindly let us know is ther