Re: [ceph-users] Which OS for fresh install?

2014-07-23 Thread Dimitri Maziuk
On 07/23/2014 04:09 PM, Bachelder, Kurt wrote:
> 2.) update your grub.conf to boot to the appropriate image (default=0, or whatever kernel in the list you want to boot from).

Actually, edit /etc/sysconfig/kernel and set DEFAULTKERNEL=kernel-lt before installing it.
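For reference, a minimal sketch of both files on a stock CentOS 6 box (kernel-lt is the ELRepo long-term kernel package; adjust to whatever kernel you actually install):

    # /etc/sysconfig/kernel -- make newly installed kernel-lt packages the default
    UPDATEDEFAULT=yes
    DEFAULTKERNEL=kernel-lt

    # /boot/grub/grub.conf -- or pick the boot entry by hand; 0 = first "title" listed
    default=0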

Re: [ceph-users] Problem installing ceph from package manager / ceph repositories

2014-06-11 Thread Dimitri Maziuk
Option 1: add priority = X to ceph.repo. X should be less than EPEL's priority; the default is, I believe, 99. Option 2: add exclude = ceph_package(s) to epel.repo.
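A sketch of both options (repo id, baseurl, and the package globs are illustrative; option 1 needs the yum-plugin-priorities package installed):

    # Option 1: /etc/yum.repos.d/ceph.repo -- outrank EPEL
    [ceph]
    name=Ceph packages
    baseurl=http://ceph.com/rpm-firefly/el6/x86_64/
    enabled=1
    gpgcheck=1
    priority=1          # lower number wins; repos without one default to 99

    # Option 2: append to the [epel] section of /etc/yum.repos.d/epel.repo
    exclude=ceph ceph-* libcephfs* librados* librbd* python-ceph*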

Re: [ceph-users] Recommended way to use Ceph as storage for file server

2014-06-02 Thread Dimitri Maziuk
10 years" rule of thumb, cephfs will become stable enough for production use sometime between 2017 and 2022 dep. on whether you start counting from Sage's thesis defense or from the first official code release. ;) -- Dimitri Maziuk Programmer/sysadmin BioMagResBank,

Re: [ceph-users] Is there a way to repair placement groups? [Offtopic - ZFS]

2014-05-28 Thread Dimitri Maziuk
…6 that's faster than hardware raid 10 -- it may take some work but it should be perfectly doable.

Re: [ceph-users] How to backup mon-data?

2014-05-27 Thread Dimitri Maziuk
That sounds more relevant than OOM due to slab fragmentation -- as I understand it, that's basically a concern if you don't have enough ram, in which case you've got a problem, zfs or no zfs.

Re: [ceph-users] How to backup mon-data?

2014-05-23 Thread Dimitri Maziuk
On 05/23/2014 03:06 PM, Craig Lewis wrote:
> 1: ZFS or Btrfs snapshots could do this, but neither one is recommended for production.

Out of curiosity, what's the current beef with zfs? I know what problems are cited for btrfs, but I haven't heard much about zfs lately.

Re: [ceph-users] PCI-E SSD Journal for SSD-OSD Disks

2014-05-15 Thread Dimitri Maziuk
On 05/15/2014 01:19 PM, Tyler Wilson wrote:
> Would running a different distribution affect this at all? Our target was CentOS 6, however if a more recent kernel would make a difference we could switch.

FWIW you can run centos 6 with a 3.10 kernel from elrepo.
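For the record, pulling in ELRepo's long-term 3.10 kernel on CentOS 6 looks roughly like this (the elrepo-release version in the URL is illustrative -- check elrepo.org for the current one):

    # import the signing key and install the repo definition
    rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
    rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm

    # the kernel repo is disabled by default; enable it just for this transaction
    yum --enablerepo=elrepo-kernel install kernel-lt

    # then set DEFAULTKERNEL=kernel-lt (or fix grub.conf) and reboot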

Re: [ceph-users] Journal SSD durability

2014-05-13 Thread Dimitri Maziuk
…switch in it. In case of a maintenance shutdown on one side, somebody must manually throw the switch. The first time powerco had to do maintenance it turned out nobody there knew they needed to call the building first. Which was just as well since nobody in the building knew to take that call. Or was cer…

Re: [ceph-users] NFS over CEPH - best practice

2014-05-13 Thread Dimitri Maziuk
On 5/13/2014 9:43 AM, Andrei Mikhailovsky wrote:
> Dima, do you have any examples / howtos for this? I would love to give it a go.

Not really: I haven't done this myself. Google for "tgtd failover with heartbeat", you should find something useful. The setups I have are heartbeat (3.0.x) managing…
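Purely as an illustration (per the above, not a tested recipe): with heartbeat 3.0.x in classic R1 mode, a tgtd failover pair with a floating IP might be sketched like this. Node names, the address, and the interface are made up, and tgtd is assumed to have a standard LSB init script:

    # /etc/ha.d/haresources -- identical on both nodes; node1 is the preferred owner
    node1 IPaddr::192.168.10.50/24/eth0 tgtd

    # /etc/ha.d/ha.cf (fragment -- heartbeat links, auth, timers omitted)
    node node1
    node node2
    auto_failback off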

Re: [ceph-users] NFS over CEPH - best practice

2014-05-12 Thread Dimitri Maziuk
…Andrei?

Re: [ceph-users] NFS over CEPH - best practice

2014-05-12 Thread Dimitri Maziuk
PS. (now that I looked) see e.g. http://blogs.mindspew-age.com/2012/04/05/adventures-in-high-availability-ha-iscsi-with-drbd-iscsi-and-pacemaker/ Dima

Re: [ceph-users] NFS over CEPH - best practice

2014-05-12 Thread Dimitri Maziuk
On 5/12/2014 4:52 AM, Andrei Mikhailovsky wrote:
> Leen, thanks for explaining things. It does make sense now. Unfortunately, it does look like this technology would not fulfill my requirements, as I do need the ability to perform maintenance without shutting down vms.

I've no idea how much…

Re: [ceph-users] 16 osds: 11 up, 16 in

2014-05-08 Thread Dimitri Maziuk
On 5/7/2014 7:35 PM, Craig Lewis wrote:
> Because of the very low recovery parameters, there's only a single backfill running. `iostat -dmx 5 5` did report 100% util on the osd that is backfilling, but I expected that. Once backfilling moves on to a new osd, the 100% util follows the backfill operation…

Re: [ceph-users] 16 osds: 11 up, 16 in

2014-05-07 Thread Dimitri Maziuk
> …; smartmontools hasn't emailed me about a failing disk. The same thing is happening to more than 50% of my OSDs, in both nodes.

Check 'iostat -dmx 5 5' (or some other numbers) -- if you see 100%+ disk utilization, that could be the dying one.
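A quick way to eyeball that (a sketch; the awk bit assumes a sysstat layout where %util is the last column and the data disks are sd*):

    # extended stats in MB, five 5-second samples
    iostat -dmx 5 5

    # or trim it down to device name and %util
    iostat -dmx 5 5 | awk '/^sd/ {print $1, $NF}'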

Re: [ceph-users] advice with hardware configuration

2014-05-06 Thread Dimitri Maziuk
…much of that going on. (Our servers average .01% utilization on system drives, most of it log writes.) I can see placing the os and journals on the same disks; then ssds make sense because that's where the journals are.

Re: [ceph-users] The Ceph disk I would like to have

2014-03-25 Thread Dimitri Maziuk
On 03/25/2014 10:49 AM, Loic Dachary wrote:
> Hi,
> It's not available yet but ... are we far away ?

It's a pity the Pi doesn't do SATA. Otherwise all you'd need is a working arm port and some scripting...

Re: [ceph-users] The next generation beyond Ceph

2014-03-21 Thread Dimitri Maziuk
…would really benefit from. It is currently still in stealth mode, but it's already very big in Nigeria. Would you send us all your bank account passwords so we can educate you about our offer? ;)

Re: [ceph-users] RBD module - RHEL 6.4

2014-01-29 Thread Dimitri Maziuk
…build yourself is out of the question entirely. Second, it's usually not about technology, it's about auditors with checklists. The fact that you can do it and it will most likely work just fine has nothing to do with it.

Re: [ceph-users] RBD module - RHEL 6.4

2014-01-29 Thread Dimitri Maziuk
…kernel does not have rbd.ko, so I'm sure the upstream rhel one doesn't either. ELRepo's kernel 3.10 has it, but that's not going to help you.
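A quick sanity check on whatever kernel you end up booting (rbd is the in-tree kernel RBD driver):

    # does this kernel ship the module at all?
    modinfo rbd

    # load it and confirm it's there
    modprobe rbd && lsmod | grep '^rbd'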

Re: [ceph-users] Ceph / Dell hardware recommendation

2014-01-15 Thread Dimitri Maziuk
e a "bios update" that turns that bit off and it stays that way for a while... and then they release the next h/w model and the cycle repeats again. ;) -- Dimitri Maziuk Programmer/sysadmin BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu signature.asc Description:

Re: [ceph-users] Ceph / Dell hardware recommendation

2014-01-15 Thread Dimitri Maziuk
…complains that the drives/configuration "is not supported, contact your Dell representative for replacement. Press F1 to boot".

Re: [ceph-users] servers advise (dell r515 or supermicro ....)

2014-01-15 Thread Dimitri Maziuk
On 1/15/2014 9:16 AM, Mark Nelson wrote:
> On 01/15/2014 09:14 AM, Alexandre DERUMIER wrote:
>> For the system disk, do you use some kind of internal flash memory disk?
> We probably should have, but ended up with I think just a 500GB 7200rpm disk, whatever was cheapest. :)

If your system has to…

Re: [ceph-users] Ceph as offline S3 substitute and peer-to-peer fileshare?

2014-01-02 Thread Dimitri Maziuk
On 01/02/2014 04:20 PM, Alek Storm wrote:
> Anything? Would really appreciate any wisdom at all on this.

I think what you're looking for is called git.

Re: [ceph-users] Cluster Performance very Poor

2013-12-27 Thread Dimitri Maziuk
On 12/27/2013 05:10 PM, German Anders wrote:
> 1048576000 bytes (1.0 GB) copied, 10.2545 s, 102 MB/s

FWIW I've a crappy crucial v4 ssd that clocks about 106MB/s on sequential i/o... Not sure how much you expect to see, esp. if you have a giga*bit* link to some of the disks.
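For context, the quoted line is typical dd output; a run along these lines produces it (the path is made up, and oflag=direct keeps the page cache from flattering the number):

    # 1000 x 1 MiB = 1048576000 bytes of sequential writes
    dd if=/dev/zero of=/mnt/ceph-test/ddfile bs=1M count=1000 oflag=direct

Note also that a gigabit link tops out at roughly 110-120 MB/s of payload, so ~102 MB/s is close to wire speed anyway.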

Re: [ceph-users] When will Ceph FS be ready for use with production data

2013-12-21 Thread Dimitri Maziuk
On 12/21/2013 10:04 AM, Wido den Hollander wrote:
> On 12/21/2013 02:50 PM, Yan, Zheng wrote:
>> I don't know when Inktank will claim CephFS is stable. But as a cephfs developer, I already have trouble finding new issues in my test setup. If you are willing to help improve cephfs, please try cephfs…

Re: [ceph-users] centos6.4 + libvirt + qemu + rbd/ceph

2013-12-06 Thread Dimitri Maziuk
On 12/06/2013 04:28 PM, Alek Paunov wrote:
> On 07.12.2013 00:11, Dimitri Maziuk wrote:
>> 6 months lifecycle and having to os-upgrade your entire data center 3 times a year?
>> (OK maybe it's "18 months" and "once every 9 months")

Re: [ceph-users] centos6.4 + libvirt + qemu + rbd/ceph

2013-12-06 Thread Dimitri Maziuk
…having to os-upgrade your entire data center 3 times a year? (OK maybe it's "18 months" and "once every 9 months")

Re: [ceph-users] Is Ceph a provider of block device too ?

2013-11-21 Thread Dimitri Maziuk
…I was referring to the "stacked" setup where you make a drbd raid-1 w/ 2 hosts and then a drbd raid-1 w/ that drbd device and another host. I don't believe drbd can keep 3 replicas any other way -- unlike ceph, obviously.

Re: [ceph-users] Is Ceph a provider of block device too ?

2013-11-21 Thread Dimitri Maziuk
…Basic DRBD is RAID-1 over the network. You don't "replicate" the filesystem, you have it backed by 2 devices, one of which happens to be on another computer. Less basic DRBD allows you to mount your gluster fs on both hosts, or add another DRBD on top to mirror your filesystem to…

Re: [ceph-users] alternative approaches to CEPH-FS

2013-11-20 Thread Dimitri Maziuk
…crossover cable on eth1: 1000baseT/Full. "Protocol B" would probably speed up the writes, but when I run things that write a lot I make them write to /var/tmp anyway... cheers,
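For anyone following along: the protocol is set in the DRBD resource configuration, roughly like this (a sketch in 8.3-style syntax; resource name, devices and addresses are made up, and 8.4 moves the protocol keyword into the net section):

    # /etc/drbd.d/r0.res
    resource r0 {
      protocol B;               # A = async, B = memory-synchronous, C = fully synchronous
      device    /dev/drbd0;
      disk      /dev/sdb1;
      meta-disk internal;
      on node1 { address 10.0.0.1:7789; }
      on node2 { address 10.0.0.2:7789; }
    }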

Re: [ceph-users] alternative approaches to CEPH-FS

2013-11-19 Thread Dimitri Maziuk
> …gray           -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
> files:max:min    /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
> 5:1048576:4096      3   0  1952  29  1385  13     2   0  1553  18   575   5
> Latency           16383ms    19662…

Re: [ceph-users] alternative approaches to CEPH-FS

2013-11-15 Thread Dimitri Maziuk
On 2013-11-15 08:26, Gautam Saxena wrote:
> Yip, I went to the link. Where can the script (nfsceph) be downloaded? How's the robustness and performance of this technique? (That is, is there any reason to believe that it would be more/less robust and/or performant than option #3 mentioned in the…

Re: [ceph-users] alternative approaches to CEPH-FS

2013-11-14 Thread Dimitri Maziuk
> …at the same issue and (FWIW) have a similar idea to your opt.3.

I believe they call it a "gateway" & it's what everyone, from Swift to Amplidata, has. Cephfs is in fact one of ceph's big selling points; without it, why not put your nfs/samba gateway on top of swift?

Re: [ceph-users] Disk Density Considerations

2013-11-06 Thread Dimitri Maziuk
On 2013-11-06 08:37, Mark Nelson wrote:
> ... Taking this even further, options like the hadoop fat twin nodes with 12 drives in 1U potentially could be even denser, while spreading the drives out over even more nodes. Now instead of 4-5 large dense nodes you have maybe 35-40 small dense nodes. T…

Re: [ceph-users] Red Hat clients

2013-10-30 Thread Dimitri Maziuk
…ware... then you get rbd. As long as you don't 'yum update' the kernel.

Re: [ceph-users] CephFS and clients [was: CephFS & Project Manila (OpenStack)]

2013-10-23 Thread Dimitri Maziuk
…suing anybody won't help, indeed. All I need to do is subvert one "trusted" hypervisor, and then your "entire storage infrastructure" is just as dead.

Re: [ceph-users] CephFS & Project Manila (OpenStack)

2013-10-23 Thread Dimitri Maziuk
On 10/23/2013 12:53 PM, Gregory Farnum wrote:
> On Wed, Oct 23, 2013 at 7:43 AM, Dimitri Maziuk wrote:
>> On 2013-10-22 22:41, Gregory Farnum wrote:
>> ...
>>> Right now, unsurprisingly, the focus of the existing Manila developers is on Option 1: it's…

Re: [ceph-users] CephFS & Project Manila (OpenStack)

2013-10-23 Thread Dimitri Maziuk
On 2013-10-22 22:41, Gregory Farnum wrote:
> ...
> Right now, unsurprisingly, the focus of the existing Manila developers is on Option 1: it's less work than the others and supports the most common storage protocols very well. But as mentioned, it would be a pretty poor fit for CephFS

I must be mis…

Re: [ceph-users] Ceph and RAID

2013-10-03 Thread Dimitri Maziuk
…and raid replication for your cluster & budget.

Re: [ceph-users] Ceph and RAID

2013-10-02 Thread Dimitri Maziuk
On 2013-10-02 07:35, Loic Dachary wrote:
> Hi,
> I would not use RAID5 since it would be redundant with what Ceph provides.

I would not use raid-5 (or 6) because its safety on modern drives is questionable and because I haven't seen anyone comment on ceph's performance -- e.g. openstack docs exp…

Re: [ceph-users] some newbie questions...

2013-08-31 Thread Dimitri Maziuk
On 2013-08-31 11:36, Dzianis Kahanovich wrote:
> Johannes Klarenbeek wrote:
>> 1) i read somewhere that it is recommended to have one OSD per disk in a production environment. is this also the maximum disk per OSD or could i use multiple disks per OSD? and why?
> you could use multiple disks…

Re: [ceph-users] OSD to OSD Communication

2013-08-30 Thread Dimitri Maziuk
On 08/30/2013 01:51 PM, Mark Nelson wrote:
> On 08/30/2013 01:47 PM, Dimitri Maziuk wrote:
>> (There's nothing wrong with raid as long as it's >0.)
>
> One exception: Some controllers (looking at you LSI!) don't expose disks as JBOD, or if they do, don'…

Re: [ceph-users] OSD to OSD Communication

2013-08-30 Thread Dimitri Maziuk
> …o its job and avoid RAID.
> Typical traffic is fine - it's just been an issue tonight :)

If you're hosed and have to recover a 9TB filesystem, you'll have problems no matter what, ceph or no ceph. You *will* have a disk failure every once in a while, and there's no …

Re: [ceph-users] Hardware recommendation / calculation for large cluster

2013-05-11 Thread Dimitri Maziuk
on "desktop" wd drives compared to seagates. Aligning partitions to 4096, 16384, or any other sector boundary didn't seem to make any difference. So we quit buying wds. Consider seagates, they go to 4TB in both "enterprise" and desktop lines, too. HTH -- Dimitri M

Re: [ceph-users] RadosGW High Availability

2013-05-09 Thread Dimitri Maziuk
…return the list in the same order. That could be how all your clients always pick the same server.
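If you want to check for that, something as simple as this will do (the hostname is made up -- watch whether the order of the returned A records rotates between runs and between client machines):

    for i in 1 2 3; do dig +short rgw.example.com A; echo ---; done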

Re: [ceph-users] interesting crush rules

2013-05-01 Thread Dimitri Maziuk
in the off-site "rack 1"). - pick all osds from group "compute nodes" and place complete copy of everything on each (data placement on compute grids). (Obviously, there's also the bit about getting the clients to read from the right osd.) -- Dimitri Maziuk Programmer/sysad

Re: [ceph-users] Ceph mon quorum

2013-04-05 Thread Dimitri Maziuk
On 04/05/2013 12:38 PM, Jeff Anderson-Lee wrote:
> The point is I believe that you don't need a 3rd replica of everything, just a 3rd MON running somewhere else.

Bear in mind that you still need a physical machine somewhere in that "somewhere else".

Re: [ceph-users] Ceph mon quorum

2013-04-05 Thread Dimitri Maziuk
…it's about rooms, but let's say rack == room == colocation facility. And I have two of those. Are you saying I need a 3rd colo, with all the associated overhead, to have a usable replica of my data in colo #2?

Re: [ceph-users] Ceph mon quorum

2013-04-05 Thread Dimitri Maziuk
On 4/5/2013 7:57 AM, Wido den Hollander wrote:
> You always need a majority of your monitors to be up. In this case you lose 66% of your monitors, so mon.b can't get a majority. With 3 monitors you need at least 2 to be up to have your cluster working.

That's kinda useless, isn't it? I'd've thought…

Re: [ceph-users] Status of Mac OS and Windows PC client

2013-03-19 Thread Dimitri Maziuk
Windows is a real pain: you have to map attributes onto a completely different model. You have to have samba to deal with ownership and permissions anyway, so you might as well re-export cephfs via cifs.
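A bare-bones sketch of that kind of re-export, assuming cephfs is already mounted at /mnt/cephfs on the gateway box (share name and path are made up):

    # /etc/samba/smb.conf (fragment)
    [cephfs]
        path = /mnt/cephfs
        browseable = yes
        read only = no
        # the ownership/permissions mapping is the part that needs real thought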

Re: [ceph-users] Raw disks under OSDs or HW-RAID6 is better?

2013-03-10 Thread Dimitri Maziuk
On 3/8/2013 7:17 AM, Mihály Árva-Tóth wrote:
> Hello, we're planning 3 hosts, 12 HDDs in each host. Which is better: a 1 OSD - 1 HDD structure, or hardware RAID-6 across all 12 HDDs with only one OSD using the whole disk space in each host?

I suspect the issue is what you're going…

[ceph-users] Before you put journals on SSDs

2013-03-08 Thread Dimitri Maziuk
Read https://www.usenix.org/conference/fast13/understanding-robustness-ssds-under-power-fault

Dima

Re: [ceph-users] CephFS First product release discussion

2013-03-06 Thread Dimitri Maziuk
On 3/5/2013 11:01 PM, Neil Levine wrote:
> As an extra request, it would be great if people explained a little about their use-case for the filesystem so we can better understand how the features requested map to the type of workloads people are trying.

For the simple case of a basic file server:…

Re: [ceph-users] CephFS First product release discussion

2013-03-05 Thread Dimitri Maziuk
On 03/05/2013 03:25 PM, Dimitri Maziuk wrote:
> On 03/05/2013 02:13 PM, Steven Presser wrote:
>> I'm currently running centos on 3.6.9 and only haven't updated it because of my own laziness. I'd be happy to provide .config files for this.

I mean, thank…

Re: [ceph-users] CephFS First product release discussion

2013-03-05 Thread Dimitri Maziuk
…of programming projects. Generally if I can't 'yum install' it, I'm not using it. In this case, our setup ain't broken, so it's a bit hard to justify any time spent fixing it -- especially if I can't get ceph to put data where I want it in the first place.

Re: [ceph-users] CephFS First product release discussion

2013-03-05 Thread Dimitri Maziuk
> …and quota support eventually would be nice to have. Anything else is gravy.

I need to a) get cephfs back-ported to at least 3.0 kernels, as this is the only version feasible on centos 6 & co, and b) control data placement down to a specific osd.

Re: [ceph-users] mds laggy or crashed

2013-02-26 Thread Dimitri Maziuk
On 2/26/2013 3:34 AM, femi anjorin wrote:
> 2. when the ceph health is not ok .. for example, if mds is laggy, should ceph-fuse have issues? Issues like difficulty accessing the mount point..

I had issues accessing the cephfs mountpoint (kernel client, not fuse) while it was complaining about laggy mds…