Hi,
Does anyone know if it is possible to reinstall Ceph on a host and keep the OSDs
without wiping the data on them?
Hope you can help me,
Thanks in advance.
On 27. sep. 2017 10:09, Pierre Palussiere wrote:
> Hi,
> Does anyone know if it is possible to reinstall Ceph on a host and keep the OSDs
> without wiping the data on them?
> Hope you can help me,
It depends... if you have the journal on the same drive as the OSD, you should be
able to eject the drive from a server, conn
As the subject says... any ceph fs administrative command I try to run hangs
forever and kills monitors in the background - sometimes they come back, on a
couple of occasions I had to manually stop/restart a suffering mon. Trying to
load the filesystem tab in the ceph-mgr dashboard dumps an error
On Wed, Sep 27, 2017 at 12:15 PM, Richard Hesketh
wrote:
> As the subject says... any ceph fs administrative command I try to run hangs
> forever and kills monitors in the background - sometimes they come back, on a
> couple of occasions I had to manually stop/restart a suffering mon. Trying to
Hi,
we are currently working on a ceph solution for one of our customers.
They run a file hosting service and need to store approximately 100 million
pictures (thumbnails). Their current code uses FTP as the storage backend. We
thought that we could use CephFS for this, but I am not
Hello,
I am trying to mount a CephFS filesystem from a fresh Luminous cluster.
With the latest kernel, 4.13.3, it works:
> $ sudo mount.ceph
> iccluster041.iccluster,iccluster042.iccluster,iccluster054.iccluster:/ /mnt
> -v -o name=container001,secretfile=/tmp/secret
> parsing options: name=container001
On 27/09/17 12:32, John Spray wrote:
> On Wed, Sep 27, 2017 at 12:15 PM, Richard Hesketh
> wrote:
>> As the subject says... any ceph fs administrative command I try to run hangs
>> forever and kills monitors in the background - sometimes they come back, on
>> a couple of occasions I had to manua
Hi Folks!
I'm totally stuck.
RDMA is running on my NICs; rping, udaddy, etc. give positive results.
The cluster consists of:
proxmox-ve: 5.0-23 (running kernel: 4.10.17-3-pve)
pve-manager: 5.0-32 (running version: 5.0-32/2560e073)
system(4 nodes): Supermicro 2028U-TN24R4T+
2 port Mellanox connect
Hello all,
I have set up a Ceph cluster consisting of one monitor, 32 OSD hosts (1 OSD of
size 320GB per host) and 16 clients which are reading
and writing to the cluster. I have one erasure coded pool (shec plugin) with
k=8, m=4, c=3 and pg_num=256. Failure domain is host.
I am able to reach a
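For anyone reproducing this setup, a rough sketch of how such a pool can be
created (hedged: this assumes the Luminous CLI, where the failure-domain key is
crush-failure-domain; older releases use ruleset-failure-domain, and the profile
and pool names below are only examples):

$ ceph osd erasure-code-profile set shec-8-4-3 \
      plugin=shec k=8 m=4 c=3 \
      crush-failure-domain=host
$ ceph osd erasure-code-profile get shec-8-4-3        # verify the profile
$ ceph osd pool create ecpool 256 256 erasure shec-8-4-3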
Hello,
> Try to work with the tunables:
>
> $ *ceph osd crush show-tunables*
> {
> "choose_local_tries": 0,
> "choose_local_fallback_tries": 0,
> "choose_total_tries": 50,
> "chooseleaf_descend_once": 1,
> "chooseleaf_vary_r": 1,
> "chooseleaf_stable": 0,
> "straw_calc
On Wed, Sep 27, 2017 at 8:33 PM, Gerhard W. Recher
wrote:
> Hi Folks!
>
> I'm totally stuck
>
> rdma is running on my nics, rping udaddy etc will give positive results.
>
> cluster consist of:
> proxmox-ve: 5.0-23 (running kernel: 4.10.17-3-pve)
> pve-manager: 5.0-32 (running version: 5.0-32/2560e
Haomai,
ibstat
CA 'mlx4_0'
        CA type: MT4103
        Number of ports: 2
        Firmware version: 2.40.7000
        Hardware version: 0
        Node GUID: 0x248a070300e26070
        System image GUID: 0x248a070300e26070
        Port 1:
                State: Active
                Physical
You can also use ceph-fuse instead of the kernel driver to mount cephfs. It
supports all of the luminous features.
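For example, something along these lines (a sketch only; it assumes the
container001 key has been exported to a normal keyring file, since ceph-fuse
reads a keyring rather than the bare secretfile used by mount.ceph):

$ sudo ceph-fuse -m iccluster041.iccluster:6789 \
      -n client.container001 \
      --keyring /etc/ceph/ceph.client.container001.keyring \
      /mnt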
On Wed, Sep 27, 2017, 8:46 AM Yoann Moulin wrote:
> Hello,
>
> > Try to work with the tunables:
> >
> > $ *ceph osd crush show-tunables*
> > {
> > "choose_local_tries": 0,
> >
When you lose 2 OSDs you have 30 OSDs accepting the degraded data and
performing the backfilling. When the 2 OSDs are added back in, only those 2
OSDs receive the majority of the data from the backfilling. 2 OSDs have far
less available IOPS and spindle speed than the other 30 did when
they
On Wed, Sep 27, 2017 at 12:57 PM, Josef Zelenka
wrote:
> Hi,
>
> we are currently working on a ceph solution for one of our customers. They
> run a file hosting and they need to store approximately 100 million of
> pictures(thumbnails). Their current code works with FTP, that they use as a
> stora
I've reinstalled a host many times over the years. We used dmcrypt so I
made sure to back up the keys for that. Other than that it is seamless as
long as your installation process only affects the root disk. If it
affected any osd or journal disk, then you would need to mark those osds
out and re-
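In case it helps, a minimal sketch of that flow (hedged: it assumes
GPT/udev-activated OSDs as created by ceph-disk, which was the default before
ceph-volume; adapt to your deployment):

$ ceph osd set noout               # keep data in place while the host is down
# reinstall the OS on the root disk only, reinstall the ceph packages,
# then restore /etc/ceph/ceph.conf (and the dmcrypt keys, if used)
$ ceph-disk activate-all           # re-mounts and starts every prepared OSD
$ ceph osd unset noout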
This isn't an answer, but a suggestion to try and help track it down as I'm
not sure what the problem is. Try querying the admin socket for your osds
and look through all of their config options and settings for something
that might explain why you have multiple deep scrubs happening on a single
osd
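Concretely, that kind of query could look like this (osd.12 is just an example
id, and the socket path is the stock one):

$ ceph daemon osd.12 config show | grep scrub
$ ceph daemon osd.12 config get osd_max_scrubs
# or directly against the admin socket:
$ ceph --admin-daemon /var/run/ceph/ceph-osd.12.asok config show | grep scrub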
Just to add, assuming other settings are default, IOPS and maximum physical
write speed are probably not the actual limiting factors in the tests you have
been doing; ceph by default limits recovery I/O on any given OSD quite a bit in
order to ensure recovery operations don't adversely impact cl
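For reference, the throttles in question are roughly these (a sketch; the
defaults shown in the comments are the Luminous ones and may differ on other
releases):

$ ceph daemon osd.0 config get osd_max_backfills          # default 1
$ ceph daemon osd.0 config get osd_recovery_max_active    # default 3
# temporarily loosen them for recovery testing, and revert afterwards:
$ ceph tell osd.\* injectargs '--osd-max-backfills 4 --osd-recovery-max-active 8'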
Yep, RoCE.
I followed all the recommendations in the Mellanox papers ...
*/etc/security/limits.conf*
* soft memlock unlimited
* hard memlock unlimited
root soft memlock unlimited
root hard memlock unlimited
also set properties on daemons (chapter 11) in
https://community.mellanox.com/docs/DOC-27
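For anyone following along, the daemon-level settings referred to there
presumably boil down to a systemd drop-in along these lines (an assumption on
my part; the drop-in path is only an example, and the same would apply to the
ceph-mon/ceph-mds units):

# /etc/systemd/system/ceph-osd@.service.d/rdma-limits.conf
[Service]
LimitMEMLOCK=infinity

$ systemctl daemon-reload
$ systemctl restart ceph-osd.target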
Did you set the local GID option?
On Wed, Sep 27, 2017 at 9:52 PM, Gerhard W. Recher
wrote:
> Yep ROcE
>
> i followed up all recommendations in mellanox papers ...
>
> */etc/security/limits.conf*
>
> * soft memlock unlimited
> * hard memlock unlimited
> root soft memlock unlimited
> root hard mem
How do I set the local GID option?
I have no clue :)
Gerhard W. Recher
net4sec UG (haftungsbeschränkt)
Leitenweg 6
86929 Penzing
+49 171 4802507
Am 27.09.2017 um 15:59 schrieb Haomai Wang:
> Did you set the local GID option?
>
> On Wed, Sep 27, 2017 at 9:52 PM, Gerhard W. Recher
> wrote:
>> Yep ROcE ...
https://community.mellanox.com/docs/DOC-2415
On Wed, Sep 27, 2017 at 10:01 PM, Gerhard W. Recher
wrote:
> How do I set the local GID option?
>
> I have no clue :)
>
> Gerhard W. Recher
>
> net4sec UG (haftungsbeschränkt)
> Leitenweg 6
> 86929 Penzing
>
> +49 171 4802507
> Am 27.09.2017 um 15:59 schrie
Ah, OK,
but as I stated before: ceph.conf is a cluster-wide file on Proxmox!
So if I specify:
[global]
//Set local GID for ROCEv2 interface used for CEPH
//The GID corresponding to IPv4 or IPv6 networks
//should be taken from show_gids command output
//This parameter should be uniquely set
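For what it's worth, a rough sketch of how those options could map onto
ceph.conf with the Luminous async+rdma messenger; since the GID is
host-specific, one possible workaround for a shared ceph.conf is to put it into
per-daemon sections instead of [global] (the section name and GID value below
are purely illustrative):

[global]
ms_type = async+rdma
ms_async_rdma_device_name = mlx4_0
ms_async_rdma_port_num = 1

[osd.0]
# GID for this particular host/port, taken from the show_gids output
ms_async_rdma_local_gid = 0000:0000:0000:0000:0000:ffff:0a00:0a01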
Haomai,
I looked at your presentation, so I guess you already have a running
cluster with RDMA & Mellanox
(https://www.youtube.com/watch?v=Qb2SUWLdDCw).
Is there nobody out there with a running RDMA cluster?
Any help is appreciated!
Gerhard W. Recher
net4sec UG (haftungsbeschränkt)
Leitenweg
Try to work with the tunables:
$ *ceph osd crush show-tunables*
{
    "choose_local_tries": 0,
    "choose_local_fallback_tries": 0,
    "choose_total_tries": 50,
    "chooseleaf_descend_once": 1,
    "chooseleaf_vary_r": 1,
    "chooseleaf_stable": 0,
    "straw_calc_version": 1,
    "allowed_buc
Le 27/09/2017 à 15:15, David Turner a écrit :
> You can also use ceph-fuse instead of the kernel driver to mount cephfs. It
> supports all of the luminous features.
OK, thanks, I will try this afterwards. I need to be able to mount the CephFS
directly into containers; I don't know what will be the best w
Just for clarification.
Did you upgrade your cluster from Hammer to Luminous, then hit an assertion?
On Wed, Sep 27, 2017 at 8:15 PM, Richard Hesketh
wrote:
> As the subject says... any ceph fs administrative command I try to run hangs
> forever and kills monitors in the background - sometimes t
On Wed, Sep 27, 2017 at 1:18 PM, Richard Hesketh
wrote:
> On 27/09/17 12:32, John Spray wrote:
>> On Wed, Sep 27, 2017 at 12:15 PM, Richard Hesketh
>> wrote:
>>> As the subject says... any ceph fs administrative command I try to run
>>> hangs forever and kills monitors in the background - someti
Josef, my comments are based on experience with the community (free) version of
CephFS (Jewel, with 1 MDS).
* CephFS (Jewel, 1 stable MDS) performs horribly with millions of small KB-sized
files, even after MDS cache and directory fragmentation tuning, etc.
* CephFS (Jewel, 1 stable MDS) performs great for "
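As a footnote on the MDS cache / dir-frag tuning mentioned above, a Jewel-era
sketch (the values are illustrative; Luminous replaces the inode-count cap with
mds_cache_memory_limit, and if memory serves, Jewel also required dirfrags to
be allowed on the filesystem):

[mds]
mds cache size = 4000000        # Jewel: cache cap as an inode count (example value)
mds bal frag = true             # let the MDS fragment large directories

$ ceph fs set <fsname> allow_dirfrags true   # Jewel-era flag, assumption noted above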
Hello,
I have run into an issue while upgrading a Ceph cluster from Hammer to
Jewel on CentOS. It's a small cluster with 3 monitor servers and a humble
6 OSDs distributed over 3 servers.
I've upgraded the 3 monitors successfully to 10.2.7. They appear to be
running fine except for this health
There are new PG states that cause HEALTH_ERR. In this case it is the
undersized state that is causing it.
While I decided to upgrade my tunables before upgrading the rest of my
cluster, that does not seem to be a requirement. However, I would recommend
upgrading them sooner rather than later. It will cause a
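A quick way to see which PGs are behind during such an upgrade, and to move the
tunables forward afterwards (hedged: the jewel profile triggers rebalancing and
needs sufficiently new clients):

$ ceph health detail                 # names the undersized/degraded PGs
$ ceph pg dump_stuck undersized      # list the stuck ones explicitly
$ ceph osd crush tunables jewel      # only once every daemon runs Jewel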
Previously we had an InfiniBand cluster; recently we deployed a RoCE
cluster. Both are there for users to test with.
On Wed, Sep 27, 2017 at 11:38 PM, Gerhard W. Recher
wrote:
> Haomai,
>
> I looked at your presentation, so i guess you already have a running
> cluster with RDMA & mellanox
> (https:/
Hey Cephers,
Just a reminder that the monthly Ceph Tech Talk will be this Thursday
at 1pm (EDT). This month John Spray will be talking about ceph-mgr.
Everyone is invited to join us.
http://ceph.com/ceph-tech-talks/
Kindest regards,
Leo
--
Leonardo Vaz
Ceph Community Manager
Open Source and
Hey Cephers,
This is just a friendly reminder that the next Ceph Developer Monthly
meeting is coming up:
http://wiki.ceph.com/Planning
If you have work that you're doing that is feature work, significant
backports, or anything you would like to discuss with the core team,
please add it to the
On 17-09-27 14:57, Josef Zelenka wrote:
> Hi,
> we are currently working on a ceph solution for one of our customers.
> They run a file hosting service and need to store approximately 100
> million pictures (thumbnails). Their current code uses FTP as the
> storage backend. We thought that we