Hi,
I had 27 OSDs in my cluster. I removed two of them: osd.20 from host-3 and
osd.22 from host-6.
user@host-1:~$ sudo ceph osd tree
ID WEIGHT    TYPE NAME         UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 184.67990 root default
-7 82.07996 chassis chassis2
-4 41.03998 host host-
>>As I still haven't heard or seen about any upstream distros for Debian
>>Jessie (see also [1]),
Gitbuilder is already done for jessie
http://gitbuilder.ceph.com/ceph-deb-jessie-x86_64-basic/
@Sage: Don't know if something is blocking the official release of the packages?
- Original Message -
For a moment it de-lists the removed OSDs, and after some time they show up
again in the ceph osd tree listing.
On Fri, Jul 31, 2015 at 12:45 PM, Mallikarjun Biradar
wrote:
> Hi,
>
> I had 27 OSD's in my cluster. I removed two of the OSD from (osd.20)
> host-3 & (osd.22) host-6.
>
> user@host-1:~$ sudo ceph
Hi, I had the same problem. Apparently civetweb can talk HTTPS when run standalone, but I didn't find out how to pass the necessary options to civetweb through Ceph. So I put haproxy in front of civetweb; haproxy terminates the HTTPS connection and forwards the requests in plain text to civetweb.
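For reference, a minimal haproxy sketch of that setup (the certificate path and the civetweb port 7480 are assumptions; adjust them to your radosgw frontend settings):

frontend rgw_https
    mode http
    bind *:443 ssl crt /etc/haproxy/certs/rgw.pem
    default_backend rgw_civetweb

backend rgw_civetweb
    mode http
    server rgw1 127.0.0.1:7480 check

haproxy does the TLS termination, so civetweb only ever sees plain HTTP on localhost.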
I know a few other people here were battling with the occasional issue of OSDs
being extremely slow when starting.
I personally run OSDs mixed with KVM guests on the same nodes, and was baffled
by this issue occurring mostly on the most idle (empty) machines.
Thought it was some kind of race condition
On Thu, 30 Jul 2015 06:54:13 -0700 (PDT), Sage Weil
wrote:
> So... given that, I'd like to gauge user interest in these old distros.
> Specifically,
>
> CentOS6 / RHEL6
> Ubuntu precise 12.04
> Debian wheezy
>
> Would anyone miss them?
>
Well, CentOS 6 will be supported until 2020, and cen
On 31/07/15 06:27, Stijn De Weirdt wrote:
Wouldn't it be nice if Ceph did something like this in the background
(some sort of network scrub)? Debugging the network like this is not that
easy (you can't expect admins to install e.g. perfSONAR on all nodes
and/or clients).
Something like: every X min, ea
On 31/07/15 09:47, Mallikarjun Biradar wrote:
For a moment it de-lists the removed OSDs, and after some time they show up
again in the ceph osd tree listing.
Is the OSD service itself definitely stopped? Are you using any
orchestration systems (puppet, chef) that might be re-creating its auth
key etc.?
Yeah, the OSD service is stopped.
Nope, I am not using any orchestration system.
user@host-1:~$ ps -ef | grep ceph
root      2305     1  7 Jul27 ?        06:52:36 /usr/bin/ceph-osd
--cluster=ceph -i 3 -f
root      2522     1  6 Jul27 ?        06:19:42 /usr/bin/ceph-osd
--cluster=ceph -i 0 -f
root 27
I am using hammer 0.94
On Fri, Jul 31, 2015 at 4:01 PM, Mallikarjun Biradar
wrote:
> Yeah. OSD service stopped.
> Nope, I am not using any orchestration system.
>
> user@host-1:~$ ps -ef | grep ceph
> root 2305 1 7 Jul27 ?06:52:36 /usr/bin/ceph-osd
> --cluster=ceph -i 3 -f
> roo
On 07/31/2015 05:21 AM, John Spray wrote:
On 31/07/15 06:27, Stijn De Weirdt wrote:
wouldn't it be nice that ceph does something like this in background
(some sort of network-scrub). debugging network like this is not that
easy (can't expect admins to install e.g. perfsonar on all nodes
and/or c
RBD is already thin provisioned. When you set its size, you're setting the
maximum size. It's explained here:
http://ceph.com/docs/master/rbd/rados-rbd-cmds/
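A quick way to see the thin provisioning in action (a sketch; the pool and image names are made up, and --size is in MB here):

rbd create rbd/thin-test --size 10240     # 10 GB provisioned, nothing allocated yet
rbd info rbd/thin-test                    # reports the full 10 GB size
rbd diff rbd/thin-test | awk '{sum+=$2} END {print sum/1024/1024 " MB actually allocated"}'

The rbd diff sum only counts extents that have actually been written, so a freshly created image reports (close to) zero.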
On Thu, Jul 30, 2015 at 12:04 PM Robert LeBlanc
wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> I'll take a stab at this.
>
>
Also, you probably want to reclaim unused space when you delete files:
http://ceph.com/docs/master/rbd/qemu-rbd/#enabling-discard-trim
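For example (a sketch, not the exact command line from the docs; pool and image names are made up), discard has to be enabled on the virtual disk, e.g. via virtio-scsi, and then trimmed from inside the guest:

# host side: attach the RBD image with discard enabled
qemu-system-x86_64 ... \
  -device virtio-scsi-pci,id=scsi0 \
  -drive file=rbd:rbd/vm-disk,format=raw,if=none,id=drive0,discard=unmap,cache=writeback \
  -device scsi-hd,drive=drive0,bus=scsi0.0

# guest side: hand the freed blocks back to the cluster
fstrim -v /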
On Fri, Jul 31, 2015 at 3:54 AM pixelfairy wrote:
> rbd is already thin provisioned. when you set its size, your setting the
> maximum size. its explained here,
Hello everybody
We have a Ceph cluster that consists of 8 hosts with 12 OSDs per host. They are
2 TB SATA disks.
[13:23]:[root@se087 ~]# ceph osd tree
ID WEIGHT    TYPE NAME    UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 182.99203 root default
According to http://ceph.com/docs/master/rbd/rbd-snapshot/#layering,
you have two choices:
format 1: you can mount with the rbd kernel module
format 2: you can clone
I just mapped and mounted this image:
rbd image 'vm-101-disk-2': size 5120 MB in 1280 objects, order 22 (4096 kB
objects), block_name_prefix
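For the format 2 route, the clone workflow is roughly (a sketch; pool, image and snapshot names are made up):

rbd create --image-format 2 --size 5120 rbd/base-img
rbd snap create rbd/base-img@base
rbd snap protect rbd/base-img@base        # a snapshot must be protected before cloning
rbd clone rbd/base-img@base rbd/child-img

Whether the kernel rbd client can map format 2 images depends on your kernel version, as far as I know.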
Jan,
this is very handy to know! Thanks for sharing with us!
People, do you believe that it would be nice to have a place where we
can gather good practices, problem resolutions, or tips from the
community? We could have a voting system, and those with the most votes
(or above a threshold
On Fri, 31 Jul 2015, Alexandre DERUMIER wrote:
> >>As I still haven't heard or seen about any upstream distros for Debian
> >>Jessie (see also [1]),
>
> Gitbuilder is already done for jessie
>
> http://gitbuilder.ceph.com/ceph-deb-jessie-x86_64-basic/
>
> @Sage: Don't know if something is blocking the official release of the packages?
On Fri, Jul 31, 2015 at 2:21 PM, pixelfairy wrote:
> according to http://ceph.com/docs/master/rbd/rbd-snapshot/#layering,
> you have two choices,
>
> format 1: you can mount with rbd kernel module
> format 2: you can clone
>
> I just mapped and mounted this image:
> rbd image 'vm-101-disk-2': size
Thanks for your quick action!!
- Shinobu
On Fri, Jul 31, 2015 at 11:01 PM, Ilya Dryomov wrote:
> On Fri, Jul 31, 2015 at 2:21 PM, pixelfairy wrote:
> > according to http://ceph.com/docs/master/rbd/rbd-snapshot/#layering,
> > you have two choices,
> >
> > format 1: you can mount with rbd kerne
That's good to hear. Thanks for the heads up. We're going to be
getting another pile of hardware in the next couple of weeks and I'd
prefer to not have to start with Wheezy just to have to move to Jessie a
little bit later on. As someone said earlier, OS rollouts take some care
to do in larg
On Fri, Jul 31, 2015 at 5:47 PM, Jan Schermer wrote:
> I know a few other people here were battling with the occasional issue of OSD
> being extremely slow when starting.
>
> I personally run OSDs mixed with KVM guests on the same nodes, and was
> baffled by this issue occuring mostly on the mos
Hi,
I was trying rados bench, and first wrote 250 objects from 14 hosts with
--no-cleanup. Then I ran the read tests from the same 14 hosts and ran
into this:
[root@osd007 test]# /usr/bin/rados -p ectest bench 100 seq
2015-07-31 17:52:51.027872 7f6c40de17c0 -1 WARNING: the following
dangero
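For context, the sequence I'm running is roughly this (a sketch; the seq test can only read objects that a matching write run with --no-cleanup left behind, and the final cleanup is optional):

rados -p ectest bench 60 write --no-cleanup    # write benchmark objects and keep them
rados -p ectest bench 100 seq                  # sequential read of those objects
rados -p ectest cleanup                        # remove the benchmark objects afterwards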
> On 31 Jul 2015, at 17:28, Haomai Wang wrote:
>
> On Fri, Jul 31, 2015 at 5:47 PM, Jan Schermer wrote:
>> I know a few other people here were battling with the occasional issue of
>> OSD being extremely slow when starting.
>>
>> I personally run OSDs mixed with KVM guests on the same nodes,
Dear Ceph experts,
I am pretty new to the Ceph project, and we are working on a management
infrastructure using Ceph / Calamari as our storage resource.
I have some basic questions:
1) What is the purpose of installing and configuring salt-master and
salt-minion in a Ceph environment?
Is this tru
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
I usually do the crush rm step second to last. I don't know if
modifying the OSD after removing it from the CRUSH map is putting it
back in.
1. Stop OSD process
2. ceph osd rm
3. ceph osd crush rm osd.
4. ceph auth del osd.
Can you try the crush rm
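In command form, that order would look roughly like this (a sketch; replace N with the actual OSD id, e.g. 20 or 22, and stop the daemon on the host that owns it first):

# on the OSD's host (upstart or sysvinit, depending on your distro)
sudo stop ceph-osd id=N        # or: sudo /etc/init.d/ceph stop osd.N
# from an admin node
ceph osd rm N
ceph osd crush rm osd.N
ceph auth del osd.N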
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
Even just a ping at max MTU with don't-fragment set could tell a lot about
connectivity issues and latency without a lot of traffic. Using the Ceph
messenger would be even better to check firewall ports. I like the
idea of incorporating simple network checks i
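Even without any new tooling, something like this already catches MTU mismatches (a sketch; it assumes a 9000-byte MTU, hence 8972 bytes of ICMP payload once the 28 bytes of IP+ICMP headers are subtracted, and <peer-host> is a placeholder):

ping -c 5 -M do -s 8972 <peer-host>    # -M do = don't fragment; fails immediately on an MTU mismatch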
I remember reading that ScaleIO (I think?) does something like this by
regularly sending reports to a multicast group, thus any node with issues (or
just overload) is reweighted or avoided automatically on the client. OSD map is
the Ceph equivalent I guess. It makes sense to gather metrics and p
Most folks have either probably already left or are on their way out the
door late on a Friday, but I just wanted to say Happy SysAdmin Day to
all of the excellent System Administrators out there running Ceph
clusters. :)
Mark
Thanks Mark, you too!
Michael Kuriger
Sr. Unix Systems Engineer
mk7...@yp.com | 818-649-7235
On 7/31/15, 3:02 PM, "ceph-users on behalf of Mark Nelson"
wrote:
>Most folks have either probably already left or are on their way out the
>door late on a friday, but I just wanted to say Hap
May your bytes stay with you :)
Happy bofhday!
Jan
> On 01 Aug 2015, at 00:10, Michael Kuriger wrote:
>
> Thanks Mark you too
>
> Michael Kuriger
> Sr. Unix Systems Engineer
> mk7...@yp.com | 818-649-7235
>
> On 7/31/15, 3:02 PM, "ceph-users on behalf of Mark Nelson"
- Original Message -
From: "Butkeev Stas"
To: ceph-us...@ceph.com, ceph-commun...@lists.ceph.com, supp...@ceph.com
Sent: Friday, 31 July, 2015 9:10:40 PM
Subject: [ceph-users] problem with RGW
>Hello everybody
>
>We have a Ceph cluster that consists of 8 hosts with 12 OSDs per host. It'
I encountered a similar problem. Incoming firewall ports were blocked
on one host, so the other OSDs kept marking that OSD as down. But it
could talk out, so it kept saying 'hey, I'm up, mark me up', and then
the other OSDs started trying to send it data again, causing backed-up
requests... Which goe
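A quick way to spot that situation from another node is to probe the OSD ports directly (a sketch; 6800-7300 is the default OSD port range, and <suspect-host> is a placeholder):

# from a monitor or another OSD host: can we reach the OSD's listening ports?
nc -zv -w 2 <suspect-host> 6800
# on the suspect host: what are the OSDs actually listening on?
ss -tlnp | grep ceph-osd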