On 07/23/2014 04:09 PM, Bachelder, Kurt wrote:
> 2.) update your grub.conf to boot to the appropriate image (default=0, or
> whatever kernel in the list you want to boot from).
Actually, edit /etc/sysconfig/kernel, set DEFAULTKERNEL=kernel-lt before
installing it.
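Something like this, before the "yum install" (a sketch; kernel-lt is the
elrepo long-term kernel, substitute kernel-ml if that's what you use):

  # /etc/sysconfig/kernel
  UPDATEDEFAULT=yes
  DEFAULTKERNEL=kernel-lt

That way the new kernel becomes the grub default on install and you don't
have to touch grub.conf by hand.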
--
Dimitri Maziuk
Option 1: add priority = X to ceph.repo.
X should be less than EPEL's priority; the default is, I believe, 99.
Option 2: add exclude = ceph_package(s) to epel.repo.
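Roughly (untested sketch; option 1 assumes the yum-plugin-priorities package
is installed, and the exclude list in option 2 is just an example):

  # option 1: /etc/yum.repos.d/ceph.repo
  [ceph]
  ...
  priority=1        # lower number wins, so anything below epel's 99

  # option 2: /etc/yum.repos.d/epel.repo
  [epel]
  ...
  exclude=ceph* librados* librbd* libcephfs*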
--
Dimitri Maziuk
10 years" rule of thumb, cephfs will become
stable enough for production use sometime between 2017 and 2022 dep. on
whether you start counting from Sage's thesis defense or from the first
official code release. ;)
--
Dimitri Maziuk
6 that's faster than hardware raid 10 -- it may
take some work but it should be perfectly doable.
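(Assuming that means Linux md raid-6: a rough sketch, device names made up --
the "work" is mostly chunk size and stripe cache tuning for your workload:)

  mdadm --create /dev/md0 --level=6 --raid-devices=8 --chunk=256 /dev/sd[b-i]
  # raid-6 writes usually benefit a lot from a bigger stripe cache
  echo 8192 > /sys/block/md0/md/stripe_cache_size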
--
Dimitri Maziuk
That sounds more relevant than OOM due to slab fragmentation -- as I
understand it, basically that's a concern if you don't have enough ram,
in which case you've a problem, zfs or no zfs.
--
Dimitri Maziuk
On 05/23/2014 03:06 PM, Craig Lewis wrote:
> 1: ZFS or Btrfs snapshots could do this, but neither one are recommended
> for production.
Out of curiosity, what's the current beef with zfs? I know what problems
are cited for btrfs, but I haven't heard much about zfs lately.
--
On 05/15/2014 01:19 PM, Tyler Wilson wrote:
> Would running a different distribution affect this at all? Our target was
> CentOS 6 however if a more
> recent kernel would make a difference we could switch.
FWIW you can run centos 6 with a 3.10 kernel from elrepo.
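E.g. (sketch -- install the elrepo-release rpm from elrepo.org first; the
kernel repo is disabled by default):

  yum --enablerepo=elrepo-kernel install kernel-lt

and set DEFAULTKERNEL in /etc/sysconfig/kernel accordingly so it actually
boots into it.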
--
Dimitri Maziuk
ch in it. In case of maintenance shutdown on one side, somebody must
manually throw the switch.
The first time powerco had to do maintenance it turned out nobody there
knew they needed to call the building first. Which was just as well
since nobody in the building knew to take that call. Or was cer
On 5/13/2014 9:43 AM, Andrei Mikhailovsky wrote:
Dima, do you have any examples / howtos for this? I would love to give
it a go.
Not really: I haven't done this myself. Google for "tgtd failover with
heartbeat", you should find something useful.
The setups I have are heartbeat (3.0.x) managi
Andrei?
--
Dimitri Maziuk
PS. (now that I looked) see e.g.
http://blogs.mindspew-age.com/2012/04/05/adventures-in-high-availability-ha-iscsi-with-drbd-iscsi-and-pacemaker/
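The pacemaker half of that looks roughly like this (untested sketch in crm
shell syntax; names, the IP and the IQN are made up, and it leaves out the
drbd master/slave resource plus the ordering/colocation constraints you'd
obviously want):

  primitive p_ip ocf:heartbeat:IPaddr2 \
      params ip=192.168.1.100 cidr_netmask=24
  primitive p_target ocf:heartbeat:iSCSITarget \
      params implementation=tgt iqn=iqn.2014-05.com.example:storage.tgt1
  primitive p_lun ocf:heartbeat:iSCSILogicalUnit \
      params target_iqn=iqn.2014-05.com.example:storage.tgt1 lun=1 path=/dev/drbd0
  group g_iscsi p_target p_lun p_ip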
Dima
On 5/12/2014 4:52 AM, Andrei Mikhailovsky wrote:
Leen,
thanks for explaining things. It does make sense now.
Unfortunately, it does look like this technology would not fulfill my
requirements as I do need to have an ability to perform maintenance
without shutting down vms.
I've no idea how muc
On 5/7/2014 7:35 PM, Craig Lewis wrote:
Because of the very low recovery parameters, there's only a single
backfill running. `iostat -dmx 5 5` did report 100% util on the osd
that is backfilling, but I expected that. Once backfilling moves on to
a new osd, the 100% util follows the backfill oper
; smartmontools hasn't emailed me about a failing disk. The same thing is
> happening to more than 50% of my OSDs, in both nodes.
check 'iostat -dmx 5 5' (or some other numbers) -- if you see 100%+ disk
utilization, that could be the dying one.
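Something like this (made-up numbers; the interesting column is %util at the
far right, most of the others elided here):

  $ iostat -dmx 5 5
  Device:   ...    await  svctm  %util
  sda       ...     4.15   1.02  12.40
  sdb       ...   812.30  14.70  99.90   <-- suspect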
--
Dimitri Maziuk
much of that going on. (Our servers
average .01% utilization on system drives, most of it log writes.)
I can see placing the OS and journals on the same disks; then SSDs make
sense because that's where the journals are.
--
Dimitri Maziuk
On 03/25/2014 10:49 AM, Loic Dachary wrote:
> Hi,
>
> It's not available yet but ... are we far away ?
It's a pity Pi doesn't do SATA. Otherwise all you'd need is a working arm
port and some scripting...
--
Dimitri Maziuk
ld really benefit from.
It is currently still in stealth mode, but it's already very big in
Nigeria. Would you send us all your bank account passwords so we can
educate you about our offer?
;)
--
Dimitri Maziuk
build yourself is out of the question entirely.
Second, it's usually not about technology, it's about auditors with
checklists. The fact that you can do it and it will most likely work
just fine has nothing to do with it.
--
Dimitri Maziuk
el does not have rbd.ko so I'm sure the upstream
rhel one doesn't either. ELRepo's kernel 3.10 has it, but that's not
going to help you.
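Easy enough to check on whatever kernel you're actually running, e.g.:

  find /lib/modules/$(uname -r) -name 'rbd.ko*'
  # or just try loading it
  modprobe rbd && lsmod | grep rbd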
--
Dimitri Maziuk
e a "bios update" that turns that bit off and it stays that way
for a while... and then they release the next h/w model and the cycle
repeats again. ;)
--
Dimitri Maziuk
mplains that the drives/configuration "is not
supported, contact your Dell representative for replacement. Press F1 to
boot".
--
Dimitri Maziuk
On 1/15/2014 9:16 AM, Mark Nelson wrote:
On 01/15/2014 09:14 AM, Alexandre DERUMIER wrote:
For the system disk, do you use some kind of internal flash memory disk?
We probably should have, but ended up with I think just a 500GB 7200rpm
disk, whatever was cheapest. :)
If your system has to
On 01/02/2014 04:20 PM, Alek Storm wrote:
> Anything? Would really appreciate any wisdom at all on this.
I think what you're looking for is called git.
--
Dimitri Maziuk
On 12/27/2013 05:10 PM, German Anders wrote:
> 1048576000 bytes (1.0 GB) copied, 10.2545 s, 102 MB/s
FWIW I've a crappy crucial v4 ssd that clocks about 106MB/s on
sequential i/o... Not sure how much you expect to see, esp. if you have
a giga*bit* link to some of the disks.
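(Back of the envelope, assuming one 1GbE hop in the path: 1 Gbit/s / 8 =
125 MB/s raw, minus ethernet/IP/TCP framing overhead leaves roughly 110-115
MB/s, so ~100 MB/s sequential is pretty much line rate.)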
--
Dimitri Maziuk
On 12/21/2013 10:04 AM, Wido den Hollander wrote:
On 12/21/2013 02:50 PM, Yan, Zheng wrote:
I don't know when inktank will claim Cephfs is stable. But as a cephfs
developer, I already have trouble finding new issues in my test setup.
If you are willing to help improve cephfs, please try cephfs
On 12/06/2013 04:28 PM, Alek Paunov wrote:
> On 07.12.2013 00:11, Dimitri Maziuk wrote:
>> 6 months lifecycle and having to os-upgrade your entire data center 3
>> times a year?
>>
>> (OK maybe it's "18 months" and "once every 9 months")
--
Dimitri Maziuk
-- I was referring to "stacked" setup where you make a
drbd raid-1 w/ 2 hosts and then a drbd raid-1 w/ that drbd device
and another host. I don't believe drbd can keep 3 replicas any other way
-- unlike ceph, obviously.
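For reference, the stacked config looks roughly like this (a from-memory
sketch in drbd 8.3-style syntax; hostnames, devices and addresses made up):

  resource r0 {
    on alpha   { device /dev/drbd0; disk /dev/sdb1; address 10.0.0.1:7788; meta-disk internal; }
    on bravo   { device /dev/drbd0; disk /dev/sdb1; address 10.0.0.2:7788; meta-disk internal; }
  }
  resource r0-U {
    # the lower-level drbd device is the backing store for the upper one
    stacked-on-top-of r0 { device /dev/drbd10; address 192.168.42.1:7789; }
    on charlie { device /dev/drbd10; disk /dev/sdb1; address 192.168.42.2:7789; meta-disk internal; }
  }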
--
Dimitri Maziuk
ation,
Basic DRBD is RAID-1 over network. You don't "replicate" the filesystem,
you have it backed by 2 devices, one of which happens to be on another
computer.
Less basic DRBD allows you to mount your gluster fs on both hosts or add
another DRBD on top to mirror your filesystem to
crossover cable on eth1: 1000baseT/Full. "Protocol B" would probably
speed up the writes, but when I run things that write a lot I make them
write to /var/tmp anyway...
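(That's one line in the resource definition -- sketch, 8.3-style syntax:

  resource r0 {
    protocol B;   # ack once the data reached the peer's buffer, not its disk
    ...
  }

B trades a small window of possible data loss on a dual failure for lower
write latency.)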
cheers,
--
Dimitri Maziuk
> gray              ------Sequential create------ --------Random create--------
>                   -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>    files:max:min   /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>   5:1048576:4096      3   0  1952  29  1385  13     2   0  1553  18   575   5
> Latency             16383ms            19662
On 2013-11-15 08:26, Gautam Saxena wrote:
Yip,
I went to the link. Where can the script (nfsceph) be downloaded? How's
the robustness and performance of this technique? (That is, is there
any reason to believe that it would be more/less robust and/or performant
than option #3 mentioned in the
g at the same issue and (FWIW) have a similar idea to your
> opt.3.
I believe they call it a "gateway" & it's what everyone, from Swift to
Amplidata has. Cephfs is in fact one of ceph's big selling points;
without it, why not put your nfs/samba gateway on top of swift?
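The gateway is basically just a box that mounts cephfs and re-exports it,
e.g. (sketch; mon address, network and options made up):

  # on the gateway host
  mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

  # /etc/exports
  /mnt/cephfs  192.168.0.0/24(rw,no_root_squash,fsid=99)

  exportfs -ra   # plus the usual nfs server bits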
On 2013-11-06 08:37, Mark Nelson wrote:
...
Taking this even further, options like the hadoop fat twin nodes with 12
drives in 1U potentially could be even denser, while spreading the
drives out over even more nodes. Now instead of 4-5 large dense nodes
you have maybe 35-40 small dense nodes. T
ware... then you get rbd. As long as you
don't 'yum update' the kernel.
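(Or pin it -- a sketch, the blunt-instrument version:

  # /etc/yum.conf
  exclude=kernel*

-- the yum versionlock plugin is the politer way to do the same thing.)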
--
Dimitri Maziuk
nt suing
anybody won't help indeed.
All I need to do is subvert one "trusted" hypervisor, and then your "the
entire storage infrastructure" is just as dead.
--
Dimitri Maziuk
On 10/23/2013 12:53 PM, Gregory Farnum wrote:
> On Wed, Oct 23, 2013 at 7:43 AM, Dimitri Maziuk wrote:
>> On 2013-10-22 22:41, Gregory Farnum wrote:
>> ...
>>
>>> Right now, unsurprisingly, the focus of the existing Manila developers
>>> is on Option 1: it
On 2013-10-22 22:41, Gregory Farnum wrote:
...
Right now, unsurprisingly, the focus of the existing Manila developers
is on Option 1: it's less work than the others and supports the most
common storage protocols very well. But as mentioned, it would be a
pretty poor fit for CephFS
I must be mis
nd raid replication for your cluster & budget.
--
Dimitri Maziuk
On 2013-10-02 07:35, Loic Dachary wrote:
Hi,
I would not use RAID5 since it would be redundant with what Ceph provides.
I would not use raid-5 (or 6) because its safety on modern drives is
questionable and because I haven't seen anyone comment on ceph's
performance -- e.g. openstack docs exp
On 2013-08-31 11:36, Dzianis Kahanovich wrote:
Johannes Klarenbeek writes:
1) I read somewhere that it is recommended to have one OSD per disk in a
production environment.
Is this also the maximum (one disk per OSD), or could I use multiple disks
per OSD? And why?
you could use multiple disks
On 08/30/2013 01:51 PM, Mark Nelson wrote:
> On 08/30/2013 01:47 PM, Dimitri Maziuk wrote:
>> (There's nothing wrong with raid as long as it's >0.)
>
> One exception: Some controllers (looking at you LSI!) don't expose disks
> as JBOD or if they do, don'
o it's job and avoid RAID.
>
> Typical traffic is fine - it's just been an issue tonight :)
If you're hosed and have to recover a 9TB filesystem, you'll have problems
no matter what, ceph or no ceph. You *will* have a disk failure every
once in a while, and there's no
on "desktop" wd drives
compared to seagates. Aligning partitions to 4096, 16384, or any other
sector boundary didn't seem to make any difference.
So we quit buying wds. Consider seagates, they go to 4TB in both
"enterprise" and desktop lines, too.
HTH
--
Dimitri Maziuk
s return the
list in the same order. That could be how all your clients always pick
the same server.
--
Dimitri Maziuk
in the off-site "rack 1").
- pick all osds from group "compute nodes" and place complete copy of
everything on each (data placement on compute grids).
(Obviously, there's also the bit about getting the clients to read from
the right osd.)
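The "one copy in the off-site rack" part is expressible as a crush rule along
these lines (sketch; the bucket names rack1/default are made up, and the
"full copy on every compute node" case doesn't map onto a crush rule as far
as I can tell):

  rule offsite_copy {
      ruleset 1
      type replicated
      min_size 2
      max_size 3
      step take rack1
      step chooseleaf firstn 1 type host
      step emit
      step take default
      step chooseleaf firstn -1 type host
      step emit
  }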
--
Dimitri Maziuk
On 04/05/2013 12:38 PM, Jeff Anderson-Lee wrote:
> The point is I believe that you don't need a 3rd replica of everything,
> just a 3rd MON running somewhere else.
Bear in mind that you still need a physical machine somewhere in that
"somewhere else".
--
Dimitri Maziuk
t's about rooms, but let's say rack == room ==
colocation facility. And I have two of those.
Are you saying I need a 3rd colo with all associated overhead to have a
usable replica of my data in colo #2?
--
Dimitri Maziuk
On 4/5/2013 7:57 AM, Wido den Hollander wrote:
You always need a majority of your monitors to be up. In this case you
lose 66% of your monitors, so mon.b can't get a majority.
With 3 monitors you need at least 2 to be up to have your cluster working.
That's kinda useless, isn't it? I'd've th
Windows is a real pain: you have to map attributes onto a completely
different model. You have to have samba to deal with ownership and
permissions anyway, so you might as well re-export cephfs via cifs.
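I.e. mount cephfs on the samba box and export the mountpoint, something like
(sketch; share name and path made up):

  # smb.conf on the gateway
  [cephfs]
     path = /mnt/cephfs
     browseable = yes
     read only = no

plus whatever idmapping/ACL settings your windows side needs.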
--
Dimitri Maziuk
On 3/8/2013 7:17 AM, Mihály Árva-Tóth wrote:
Hello,
We're planning 3 hosts, 12 HDDs in each host. Which is better: setting up
a 1 OSD - 1 HDD structure, or creating a hardware RAID-6 of all 12 HDDs
so that only one OSD uses the whole disk space in each host?
I suspect the issue is what you're going
read
https://www.usenix.org/conference/fast13/understanding-robustness-ssds-under-power-fault
Dima
On 3/5/2013 11:01 PM, Neil Levine wrote:
As an extra request, it would be great if people explained a little
about their use-case for the filesystem so we can better understand
how the features requested map to the type of workloads people are
trying.
For the simple case of a basic file server:
On 03/05/2013 03:25 PM, Dimitri Maziuk wrote:
> On 03/05/2013 02:13 PM, Steven Presser wrote:
>> I'm currently running centos on 3.6.9 and only haven't updated it
>> because of my own laziness. I'd be happy to provide .config files for
>> this.
I mean, thank
of programming projects. Generally if I can't 'yum install' it,
I'm not using it.
In this case, our setup ain't broken so it's a bit hard to justify any
time spent fixing it -- especially if I can't get ceph to put data where
I want it in the first place.
--
Dimitri Maziuk
and quota support eventually would be nice to
> have. Anything else is gravy.
I need to a) get cephfs back-ported to at least 3.0 kernels as this is
the only version feasible on centos 6 & co, and b) control data
placement down to specific osd.
--
Dimitri Maziuk
On 2/26/2013 3:34 AM, femi anjorin wrote:
2. When the ceph health is not ok, for example if the mds is laggy, should
ceph-fuse have issues? Issues like difficulty in accessing the mount
point?
I had issues accessing cephfs mountpoint (kernel client, not fuse) while
it was complaining about laggy md