Re: [ceph-users] CEPH All OSD got segmentation fault after CRUSH edit

2016-04-26 Thread Wido den Hollander
> On 26 April 2016 at 19:39, Samuel Just wrote: > > I think? Probably worth reproducing on a vstart cluster to validate > the fix. Didn't we introduce something in the mon to validate new > crushmaps? Hammer maybe? I ended up injecting a fixed CRUSH map into osdmap 1432 and 1433 on this c
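A sketch of the kind of workflow being described, i.e. pulling the CRUSH map out of a saved osdmap, fixing it, and writing it back, using the stock `ceph`, `osdmaptool`, and `crushtool` utilities. The epoch number and file names are illustrative, not taken from the thread:

```shell
# Fetch a copy of the osdmap from the monitors (epoch argument optional)
ceph osd getmap -o osdmap.1432

# Extract the embedded CRUSH map and decompile it to editable text
osdmaptool osdmap.1432 --export-crush crush.bin
crushtool -d crush.bin -o crush.txt

# ... edit crush.txt to repair the broken bucket/rule, then recompile ...
crushtool -c crush.txt -o crush.fixed

# Write the repaired CRUSH map back into the local osdmap copy
osdmaptool osdmap.1432 --import-crush crush.fixed
```

Injecting the repaired map into a live cluster's historical osdmap epochs, as done here, is a recovery-only step and not part of the normal `ceph osd setcrushmap` path.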

Re: [ceph-users] CEPH All OSD got segmentation fault after CRUSH edit

2016-04-26 Thread Samuel Just
I think? Probably worth reproducing on a vstart cluster to validate the fix. Didn't we introduce something in the mon to validate new crushmaps? Hammer maybe? -Sam On Tue, Apr 26, 2016 at 8:09 AM, Wido den Hollander wrote: >> On 26 April 2016 at 16:58, Samuel Just wrote: >> >> Can you a
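Independent of any mon-side validation, an edited CRUSH map can be dry-run offline with `crushtool --test` before it goes anywhere near a cluster; a minimal sketch (file names and the replica/input ranges are illustrative):

```shell
# Recompile the hand-edited map
crushtool -c crush.txt -o crush.new

# Simulate placements for a range of inputs; any input that maps to
# fewer than --num-rep OSDs is reported as a bad mapping
crushtool -i crush.new --test --num-rep 3 --min-x 0 --max-x 1023 --show-bad-mappings
```

A map that references a nonexistent bucket or has a malformed rule tends to show up here as bad mappings or a decode error, rather than as a segfault on every OSD after injection.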

Re: [ceph-users] CEPH All OSD got segmentation fault after CRUSH edit

2016-04-26 Thread Wido den Hollander
> On 26 April 2016 at 16:58, Samuel Just wrote: > > Can you attach the OSDMap (ceph osd getmap -o <file>)? > -Sam > Henrik contacted me to look at this and this is what I found: 0x00b18b81 in crush_choose_firstn (map=map@entry=0x1f00200, bucket=0x0, weight=weight@entry=0x1f2b880, weigh

Re: [ceph-users] CEPH All OSD got segmentation fault after CRUSH edit

2016-04-26 Thread Samuel Just
Can you attach the OSDMap (ceph osd getmap -o <file>)? -Sam On Tue, Apr 26, 2016 at 2:07 AM, Henrik Svensson wrote: > Hi! > > We have a three-node Ceph cluster with 10 OSDs each. > > We bought 3 new machines with an additional 30 disks that should reside in > another location. > Before adding these machine