[ceph-users] ceph-osd - No Longer Creates osd.X upon Launch - Bug?

2015-02-05 Thread Ron Allred
Hello,

The latest ceph-osd in Firefly v0.80.8 no longer auto-creates its osd.X
entry in the OSD map (the ID it was assigned via ceph.conf).

I am well aware the documentation states that "ceph osd create" can do
this job, but that command only assigns the next sequential osd.X number,
which is highly undesirable for us.  For _years_ we have assigned number
ranges to each OSD server for an organized multi-tier (SSD / SAS / SATA)
crush map (naturally leaving gaps in the osd numbering), skipping
'ceph osd create' entirely.

We are now facing the problem that an OSD remove+replace can't reuse its
former osd.X ID, which makes a huge mess of our documentation, number
patterning, and disk labeling.

Is there a workaround to forcefully create a specific osd.X number?
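
The ugliest conceivable fallback (a rough, untested sketch; it assumes
"ceph osd create" still hands out the lowest free ID, and osd.120 below
is just an example number) would be to burn placeholder IDs until the
wanted number comes up, then delete the placeholders again:

  # Untested sketch: osd.120 is an example target, not a real deployment.
  WANT=120
  PLACEHOLDERS=""
  while true; do
      ID=$(ceph osd create)                # allocates the lowest free id
      if [ "$ID" -eq "$WANT" ]; then
          break                            # got the id we wanted back
      elif [ "$ID" -gt "$WANT" ]; then
          echo "osd.$WANT is already allocated" >&2
          break
      fi
      PLACEHOLDERS="$PLACEHOLDERS $ID"     # remember the throwaway ids
  done
  for ID in $PLACEHOLDERS; do
      ceph osd rm $ID                      # release the throwaway ids
  done

That obviously does not scale to the gaps in our numbering scheme, hence
the question.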








Re: [ceph-users] SSD MTBF

2014-10-02 Thread Ron Allred

One thing being missed:

Samsung 850 Pro has only been available for about 1-2 months.

The OP noted that the drives are failing after approximately one year,
which probably means the SSDs are actually Samsung 840 Pros.  The
write durabilities of the 850 and 840 are quite different.



That being said, Samsung 8x0 Pros are desktop drives.  Only "Data
Center" grade SSDs should be used with Ceph: drives with a decent rated
write endurance (TBW, or drive writes per day) over a warranty of at
least 5 years.
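
As a rough rule of thumb (illustrative numbers only, not quoted from any
particular datasheet), a drive's rated TBW can be turned into sustainable
writes per day over its warranty period:

  # Illustrative only: 600 TBW and a 5-year warranty are assumed figures.
  TBW=600
  YEARS=5
  echo "$TBW * 1000 / ($YEARS * 365)" | bc     # => ~328 GB/day

Compare that against the measured journal write volume per SSD before
committing to a model.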


You should be looking at Intel DC37xx, OCZ Intrepid 3800, HGST, etc.
Samsung recently released the 845DC (PRO/EVO) aimed at datacenters.
These have decent TBW specs, but not very much is known about them in
real-world use yet.


Spend a full day reading storagesearch.com; it can save you THOUSANDS of
dollars when selecting an SSD for datacenter use.


Regards,
Ron

On 09/29/2014 02:31 AM, Emmanuel Lacour wrote:

Dear ceph users,


We have been managing Ceph clusters for a year now. Our setup is typically
made of Supermicro servers with SATA OSD drives and journals on SSD.

Those SSDs are all failing, one after the other, after about one year :(

We used Samsung 850 Pros (120 GB) in two setups (small nodes with 2 SSDs,
2 HDDs in 1U):

1) RAID 1 :( (bad idea: each SSD takes all of the OSD journal writes :()
2) RAID 1 for the OS (nearly no writes) and dedicated partitions for journals
   (one per OSD)


I'm convinced that the second setup is better, and we are migrating the old
setup to it.

Though, statistics show 60 GB (option 2) to 100 GB (option 1) of writes per
day per SSD on a not really overloaded cluster. Samsung claims a 5-year
warranty if you stay under 40 GB/day. Those numbers seem very low to me.

What are your experiences with this? What write volumes do you encounter,
on which SSD models, with which setup, and what MTBF?






[ceph-users] Arbitrary OSD Number Assignment

2015-02-01 Thread Ron Allred

Hello,

In the past we've been able to manually create specific, arbitrary
OSD numbers using the following procedure (concrete commands are sketched
below the list):


1. Add osd.# to ceph.conf (and replicate it to the other nodes)
2. Make the necessary dir in /var/lib/ceph/osd/ceph-#
3. Create the OSD and journal partitions and filesystems, then mount them
4. Init the data dir with: ceph-osd -i # --mkfs --mkjournal
5. Create the osd.# auth via keyfile
6. Edit the crushmap if necessary, and reinject it
7. Execute ceph-osd as normal
* The last step would create osd.# in the OSD map, if it did not
already exist, while launching the OSD daemon
** This procedure has also avoided the need to ever run the manual
deployment command "ceph osd create [uuid]"
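
For illustration, the steps above expand into roughly the following (a
sketch only: osd.120, /dev/sdx1, and XFS are example values, and --mkkey
is just one way of producing the keyring used in step 5):

  # Example expansion of steps 2-7 for a hypothetical osd.120 on /dev/sdx1.
  ID=120
  mkdir -p /var/lib/ceph/osd/ceph-$ID
  mkfs.xfs /dev/sdx1                                # data filesystem
  mount /dev/sdx1 /var/lib/ceph/osd/ceph-$ID
  ceph-osd -i $ID --mkfs --mkkey --mkjournal        # init data dir, key, journal
  ceph auth add osd.$ID osd 'allow *' mon 'allow rwx' \
      -i /var/lib/ceph/osd/ceph-$ID/keyring         # register the key with the mons
  # (edit and reinject the crushmap here if needed, e.g. with
  #  ceph osd getcrushmap / crushtool / ceph osd setcrushmap)
  ceph-osd -i $ID                                   # launch; this used to create osd.$ID in the map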


We have been defining per-host OSD number ranges to quickly identify
which host holds a given OSD number; this also makes crushmap editing
more intuitive, since it is based on easy number patterns.  This has
worked since pre-Argonaut.


It seems that in the newest point release of Firefly, the ceph-osd daemon
no longer creates its OSD entry upon first launch.


Is there a back door, or a "--yes-i-really-mean-it" workaround, to
accomplish this?  Going to sequential OSD number assignment would
be **VERY** painful in our workflow.



May I suggest adding an optional second parameter, "ceph osd create [uuid]
[--osd-num=#]", which would do the internal work of verifying
uniqueness, creating the entry, and setting max_osd?



Best Regards,
Ron