On 12/11/2014 11:39 AM, ano nym wrote:
>
> there is a ceph pool on a hp dl360g5 with 25 sas 10k (sda-sdy) on a
> msa70 which gives me about 600 MB/s continuous write speed with rados
> write bench. tgt on the server with rbd backend uses this pool. mounting
> local(host) with iscsiadm, sdz is the v
On 12/13/2014 09:39 AM, Jake Young wrote:
> On Friday, December 12, 2014, Mike Christie <mailto:mchri...@redhat.com>> wrote:
>
> On 12/11/2014 11:39 AM, ano nym wrote:
> >
> > there is a ceph pool on a hp dl360g5 with 25 sas 10k (sda-sdy) on a
>
I do not know about perf, but here is some info on what is safe and
general info.
- If you are not using VAAI, then it will use older-style RESERVE/RELEASE
commands only.
If you are using VAAI ATS and doing active/active, then you need
something like the lock/sync talked about in the slides/hamme
were describing your
results? If so, were you running oracle or something like that? Just
wondering.
On 01/27/2015 08:58 PM, Mike Christie wrote:
> I do not know about perf, but here is some info on what is safe and
> general info.
>
> - If you are not using VAAI then it will use older styl
On 01/28/2015 02:10 AM, Nick Fisk wrote:
> Hi Mike,
>
> I've been working on some resource agents to configure LIO to use implicit
> ALUA in an Active/Standby config across 2 hosts. After a week long crash
> course in pacemaker and LIO, I now have a very sore head but it looks
desperate.
I would be terribly grateful for any input.
Mike
2015-01-29 19:49:30.590913 7fa66458d7c0 0 ceph version 0.74
(c165483bc72031ed9e5cca4e52fe3dd6142c8baa), process ceph-mon, pid 18788
Corruption: 10 missing files; e.g.:
/var/lib/ceph/mon/unimatrix-0/store.db/1054928.ldb
Corruption: 10
can then inject new settings to running daemons with injectargs:
# ceph tell osd.* injectargs '--osd_max_backfills 10'
Or, you can add those to ceph.conf and restart the daemons.
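For reference, the persistent equivalent would go into ceph.conf roughly like this (same value as the example above; adjust to taste):
[osd]
    osd max backfills = 10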
Cheers,
Mike Dawson
On 12/5/2013 9:54 AM, Jonas Andersson wrote:
I mean, I have OSD's and MON'
across the cluster to decrease the risk of hot-spots.
A few other notes:
- You'll certainly want QEMU 1.4.2 or later to get asynchronous io for RBD.
- You'll likely want to enable RBD writeback cache. It helps coalesce
small writes before hitting the disks.
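Enabling the RBD cache is a client-side ceph.conf setting; a minimal sketch (check the option names against your release) would be:
[client]
    rbd cache = true
    rbd cache writethrough until flush = true
The second option keeps the cache in writethrough mode until the guest sends its first flush, which is a safety net for old guest drivers.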
Cheers,
Mike
On 12/17/201
It is also useful to mention that you can set the noout flag when doing
maintenance of any length that needs to exceed the 'mon osd down out
interval'.
$ ceph osd set noout
** no re-balancing will happen **
$ ceph osd unset noout
** normal re-balancing rules will resume **
- M
I think my wording was a bit misleading in my last message. Instead of
"no re-balancing will happen", I should have said that no OSDs will be
marked out of the cluster with the noout flag set.
- Mike
On 12/21/2013 2:06 PM, Mike Dawson wrote:
It is also useful to mention that you c
What version of qemu do you have?
The issues I had were fixed once I upgraded qemu to >=1.4.2 which
includes a critical rbd patch for asynchronous io from Josh Durgin.
Cheers,
Mike
On 12/28/2013 4:09 PM, Andrei Mikhailovsky wrote:
Hi guys,
Did anyone figure out what could be causing t
I am trying to learn about Ceph and have been looking at the documentation and
speaking to colleagues who work with it and had a question that I could not get
the answer to. As I understand it, the Crush map is updated every time a disk
is added. This causes the OSDs to migrate their data in pl
ucket back?
Cheers
Mike
an IOError,
instead of causing the cluster to stop working?
On 19 March 2014 10:07, Mike Bryant wrote:
> Hi,
> I've just upgraded a test cluster to Emperor, and one of my S3 buckets
> seems to have broken.
>
> s3 access is returning a 500 code (UnknownError).
>
> Running buck
m errors.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Wed, Mar 19, 2014 at 4:06 AM, Mike Bryant
> wrote:
> > So I've done some more digging, and running the radosgw in debug mode I
> > found some messages from osd.3 saying IOErr
Adam,
I believe you need the command 'ceph osd create' prior to 'ceph-osd -i X
--mkfs --mkkey' for each OSD you add.
http://ceph.com/docs/master/rados/operations/add-or-rm-osds/#adding-an-osd-manual
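A rough sketch of the order from that page (osd id 3 and the default paths are just examples here):
# ceph osd create                        # allocates and prints the new osd id, e.g. 3
# ceph-osd -i 3 --mkfs --mkkey           # initialize the data directory and generate a key
# ceph auth add osd.3 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-3/keyring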
Cheers,
Mike
On 4/5/2014 7:37 PM, Adam Clark wrote:
HI all,
I am tr
? For
obvious reasons I'd prefer to avoid redeploying the OSDs.
With each release, I get a bit more worried that this legacy setup will
cause issues. If you are an operator with a cluster older than a year
or so, what have you done?
Thanks,
Dan,
Could you describe how you harvested and analyzed this data? Even
better, could you share the code?
Cheers,
Mike
On 4/16/2014 11:08 AM, Dan van der Ster wrote:
Dear ceph-users,
I've recently started looking through our FileStore logs to better
understand the VM/RBD IO patterns
Thanks Dan!
Thanks,
Mike Dawson
On 4/17/2014 4:06 AM, Dan van der Ster wrote:
Mike Dawson wrote:
Dan,
Could you describe how you harvested and analyzed this data? Even
better, could you share the code?
Cheers,
Mike
First enable debug_filestore=10, then you'll see logs like this:
20
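If you'd rather not edit ceph.conf and restart, that debug level can also be flipped on at runtime with injectargs:
# ceph tell osd.* injectargs '--debug_filestore 10'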
as little as 5Mbps of traffic per host due to spindle
contention once deep-scrub and/or recovery/backfill start. Co-locating
OSD Journals on the same spinners as you have will double that likelihood.
Possible solutions include moving OSD Journals to SSD (with
Congrats! Any possible conflict with Red Hat's earlier acquisition of GlusterFS?
> On Apr 30, 2014, at 7:18, "Sage Weil" wrote:
>
> Today we are announcing some very big news: Red Hat is acquiring Inktank.
> We are very excited about what this means for Ceph, the community, the
> team, our part
Victor,
This is a verified issue reported earlier today:
http://tracker.ceph.com/issues/8260
Cheers,
Mike
On 4/30/2014 3:10 PM, Victor Bayon wrote:
Hi all,
I am following the "quick-ceph-deploy" tutorial [1] and I am getting an
error when running the "ceph-deploy osd act
We're using netatalk on top of cephfs for serving Time Machine out to
clients.
It's so-so - Apple's support for Time Machine on other AFP servers isn't
brilliant.
On 6 May 2014 12:32, Andrey Korolyov wrote:
> You can do this for sure using iSCSI reexport feature, AFAIK no
> working RBD implement
he cost of setting primary affinity is low enough, perhaps this
strategy could be automated by the ceph daemons.
Thanks,
Mike Dawson
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
rom being marked down (with or without proper cause), but that
tends to cause me more trouble than it's worth.
Thanks,
Mike Dawson
Co-Founder & Director of Cloud Architecture
Cloudapt LLC
6330 East 75th Street, Suite 170
Indianapolis, IN 46250
On 5/7/2014 1:28 PM, Craig Lewis wrote:
The 5 OSDs
ng deep-scrub permanently?
0: http://www.mikedawson.com/deep-scrub-issue1.jpg
1: http://www.mikedawson.com/deep-scrub-issue2.jpg
Thanks,
Mike Dawson
vel, but rather stays low seemingly for days at a time, until the
next onslaught. If driven by the max scrub interval, shouldn't it jump
quickly back up?
Is there a way to find the last scrub time for a given PG via the CLI to
know for sure?
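For what it's worth, the per-PG scrub stamps do seem to be exposed on the CLI; something like this should show them (the pg id 2.7f below is just a placeholder):
# ceph pg dump | grep ^2.7f          # the dump columns include the last (deep-)scrub stamps
# ceph pg 2.7f query | grep scrub_stamp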
Thanks,
Mike Dawson
On 5/7/2014 10:59 PM
set the primary affinity:
# ceph osd primary-affinity osd.0 1
I have not scaled up my testing, but it looks like this has the
potential to work well in preventing unnecessary read starvation in
certain situations.
0: http://tracker.ceph.com/issues/8323#note-1
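One caveat: I believe the monitors have to be told to allow primary affinity first, either in ceph.conf or via injectargs, e.g.:
[mon]
    mon osd allow primary affinity = true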
Cheers,
Mike Dawson
On 5/8/20
Upstart to control daemons. I never see this issue on
Ubuntu / Dumpling / sysvinit.
Has anyone else seen this issue or know the likely cause?
--
Thanks,
Mike Dawson
Co-Founder & Director of Cloud Architecture
Cloudapt LLC
6330 East 75th Street, Suite 170
Indianapolis, IN 4
mors that it
may be open sourced at some point in the future.
Cheers,
Mike Dawson
On 5/13/2014 12:33 PM, Adrian Banasiak wrote:
Thanks for the suggestion about the admin daemon, but it looks single-osd
oriented. I have used perf dump on the mon socket and it outputs some
interesting data in the case of monitoring
Greg/Loic,
I can confirm that "logrotate --force /etc/logrotate.d/ceph" removes the
monitor admin socket on my boxes running 0.80.1 just like the
description in Issue 7188 [0].
0: http://tracker.ceph.com/issues/7188
Should that bug be reopened?
Thanks,
Mike Dawson
On 5/13/20
needed a deep-scrub the longest.
Thanks,
Mike Dawson
ereas other software falls apart
during periods of deep-scrubs. I theorize it has to do with the
individual software's attitude about flushing to disk / buffering.
- Mike
On 5/20/2014 8:31 PM, Aaron Ten Clay wrote:
For what it's worth, version 0.79 has different headers, and the aw
Perhaps:
# mount | grep ceph
- Mike Dawson
On 5/21/2014 11:00 AM, Sharmila Govind wrote:
Hi,
I am new to Ceph. I have a storage node with 2 OSDs. I am trying to
figure out which physical device/partition each of the OSDs is
attached to. Is there a command that can be executed in the
type xfs (rw,noatime,inode64)
Confirm the OSD in your ceph cluster with:
user@host:~# ceph osd tree
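If your version ships it, 'ceph-disk list' is another way to see the OSD-to-device mapping directly on the storage node:
# ceph-disk list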
- Mike
On 5/21/2014 11:15 AM, Sharmila Govind wrote:
Hi Mike,
Thanks for your quick response. When I try mount on the storage node
this is what I get:
root@cephnode4:~# mount
/dev/sda1 on
I’m having some trouble with radosgw and keystone integration, I always get the
following error:
user does not hold a matching role; required roles: Member,user,_member_,admin
Despite my token clearly having one of the roles:
"user": {
"id": "401375297eb540bbb1c32432439827b
u, Oct 15, 2015 at 8:34 AM, Mike Lowe wrote:
>> I’m having some trouble with radosgw and keystone integration, I always get
>> the following error:
>>
>> user does not hold a matching role; required roles:
>> Member,user,_member_,admin
>>
>&g
can’t file a documentation bug.
> On Oct 15, 2015, at 2:06 PM, Mike Lowe wrote:
>
> I think so, unless I misunderstand how it works.
>
> (openstack) role list --user jomlowe --project jomlowe
> +--+--+-+-+
> | ID
private
network was also far from being fully loaded.
It would be really great to get some advice about hardware choices for
my newly planned setup.
Thanks very much and regards,
Mike
Hello.
For our CCTV stream-storage project we decided to use a Ceph cluster with
an EC pool.
The input requirements are not scary: max. 15 Gbit/s of incoming traffic
from CCTV, 30 days of retention, 99% write operations, and the cluster
must be able to grow without downtime.
For now, our vision of the architecture is like this:
* 6 J
On 10 November 2015 at 10:29, Mike Almateia wrote:
> Hello.
>
> For our CCTV storing streams project we decided to use Ceph cluster with EC
> pool.
> Input requirements is not scary: max. 15Gbit/s input traffic from CCTV, 30
> day storing,
> 99% write operations, a cluster
12-Nov-15 03:33, Mike Axford wrote:
On 10 November 2015 at 10:29, Mike Almateia wrote:
Hello.
For our CCTV storing streams project we decided to use Ceph cluster with EC
pool.
Input requirements is not scary: max. 15Gbit/s input traffic from CCTV, 30
day storing,
99% write operations, a
18-Nov-15 14:39, Sean Redmond wrote:
Hi,
I have a performance question for anyone running an SSD only pool. Let
me detail the setup first.
12 X Dell PowerEdge R630 ( 2 X 2620v3 64Gb RAM)
8 X intel DC 3710 800GB
Dual port Solarflare 10GB/s NIC (one front and one back)
Ceph 0.94.5
Ubuntu 14.04 (3
Hello.
Does someone have a list of verified/tested SSD drives for Ceph?
I am thinking about the Ultrastar SSD1600MM SAS SSD for our all-flash Ceph
cluster. Does somebody use it in production?
--
Mike, runs.
1 GB per MDS daemon?
In my case, the standard 'mds cache size 10' makes the MDS crash and/or
cephfs become unresponsive. Larger values for 'mds cache size' seem to
work really well.
Version trusty 14.04 and hammer.
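For example, a larger cache would be set like this in ceph.conf on the MDS host (the number below is only an illustration; size it to the available RAM):
[mds]
    mds cache size = 500000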
Thanks and kind regards,
Mike
verybody says about stability issues.
Is more than one MDS considered stable enough with hammer?
Thanks and regards,
Mike
On 11/25/15 12:51 PM, Gregory Farnum wrote:
On Tue, Nov 24, 2015 at 10:26 PM, Mike Miller wrote:
Hi,
in my cluster with 16 OSD daemons and more than 20 million files on cephf
Hi,
can some please help me with this error?
$ ceph tell mds.0 version
Error EPERM: problem getting command descriptions from mds.0
Tell is not working for me on mds.
Version: infernalis - trusty
Thanks and regards,
Mike
ut journal /
journal size 0.
Thanks and regards,
Mike
---
[ceph_deploy.conf][DEBUG ] found configuration file at:
/home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.22): /usr/bin/ceph-deploy disk
prep
the same mount also improved, a 10 GBit/s
network connection is easily saturated.
Thank you all so much for the discussion and the hints.
Regards,
Mike
On 4/23/16 9:51 AM, n...@fisk.me.uk wrote:
I've just looked through github for the Linux kernel and it looks like
that read ahead fi
ful so anyone have a
suggestion on another level to put it at that might be useful? go figure
that this would happen while i'm at the openstack summit and it would keep
me from paying attention to some interesting presentations.
thanks in advance for any help.
mike
he new osds?
thanks in advance. hopefully someone can help soon because right now the
only thing holding things together is a while loop doing a 'ceph
osd down 41' every minute. :(
mike
On Thu, Apr 28, 2016 at 5:49 PM, Samuel Just wrote:
> I'd guess that to make any
his disturbs the backfilling and further delays
> writes to that poor PG.
>
it definitely does seem to have an impact similar to that. the only upside
is that it clears the slow io messages though i don't know if it actually
lets the client io complete. recovery doesn'
ned was to restore from backup.
setting min_read_recency_for_promote to 1 or making sure the osds were
running .5 was sufficient to prevent it from happening, though we currently
do both.
mike
On Fri, Apr 29, 2016 at 9:41 AM, Robert Sander wrote:
> Hi,
>
> yesterday we ran into a
On Fri, Apr 29, 2016 at 9:34 AM, Mike Lovell
wrote:
> On Fri, Apr 29, 2016 at 5:54 AM, Alexey Sheplyakov <
> asheplya...@mirantis.com> wrote:
>
>> Hi,
>>
>> > i also wonder if just taking 148 out of the cluster (probably just
>> marking it out) would
On 04/29/2016 11:44 AM, Ming Lin wrote:
> On Tue, Jan 19, 2016 at 1:34 PM, Mike Christie wrote:
>> Everyone is right - sort of :)
>>
>> It is that target_core_rbd module that I made that was rejected
>> upstream, along with modifications from SUSE which added persistent
Can anybody help shed some light on this error I’m getting from radosgw?
2016-05-11 10:09:03.471649 7f1b957fa700 1 -- 172.16.129.49:0/3896104243 -->
172.16.128.128:6814/121075 -- osd_op(client.111957498.0:726 27.4742be4b
97c56252-6103-4ef4-b37a-42739393f0f1.113770300.1_interfaces [create 0~0
[
allows? We want to spread
out the purchases of the OSD nodes over a month or two but I would like to
start moving data over ASAP.
Cheers,
Mike
Hi Alex,
Thank you for your response! Yes, this is for a production environment... Do
you think the risk of data loss due to the single node would be different than
if it were an appliance or a Linux box with RAID/ZFS?
Cheers,
Mike
> On May 13, 2016, at 7:38 PM, Alex Gorbachev wrote:
>
>
Hi Christian,
Thank you, I know what I am asking isn't a good idea... I am just trying to
avoid waiting for all three nodes before I begin virtualizing our
infrastructure.
Again thanks for the responses!
Cheers,
Mike
> On May 14, 2016, at 9:56 AM, Christian Balzer wrote:
>
>
ou_died.'
the things i'm not sure about are what on osd.16 is not healthy such that
it starts dropping ping requests, or why recovery of 32.10c just
stopped. it also looks like osd.16 is never updating its osd map
because it keeps using version 497664 until it commits suicide even
though the 'you_died' replies are saying there are newer versions of
the map.
so this probably wasn't that useful but those are the things that
stood out to me. sorry i'm not of much more use at the moment.
mike
ean; 1 pgs stuck undersized; 1 pgs undersized; 39 requests
are blocked > 32 sec; recovery 2/414113144 objects degraded (0.000%);
recovery 1/4141131 44 objects misplaced (0.000%); noout flag(s) set
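To see which OSDs and PGs those blocked requests and the stuck/undersized PG belong to, 'ceph health detail' breaks the summary down:
# ceph health detail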
Thanks,
Mike
Hi,
what is the meaning of the directory "current.remove.me.846930886" in
/var/lib/ceph/osd/ceph-14?
Thanks and regards,
Mike
that doesn’t involve too much command line so other
admins that don’t know Ceph or XenServer very well can work with it. I am just
curious what others are doing… Any help is greatly appreciated!
Cheers,
Mike
Hi all,
Is there anyone using rbd for xenserver vm storage? I have XenServer 7 and the
latest Ceph, and I am looking for the best way to mount the rbd volume under
XenServer. There is not much recent info out there I have found except for
this:
http://www.mad-hacking.net/documentation/linux/h
Hi Greg,
thanks, highly appreciated. And yes, that was on an osd with btrfs. We
switched back to xfs because of btrfs instabilities.
Regards,
-Mike
On 6/27/16 10:13 PM, Gregory Farnum wrote:
On Sat, Jun 25, 2016 at 11:22 AM, Mike Miller wrote:
Hi,
what is the meaning of the directory
is
OK, so I am hoping I didn’t miss a configuration or something.
> On Jun 29, 2016, at 3:28 PM, Jake Young wrote:
>
>
>
> On Wednesday, June 29, 2016, Mike Jacobacci <mailto:mi...@flowjo.com>> wrote:
> Hi all,
>
> Is there anyone using rbd for xenserver
here…
Cheers,
Mike
> On Jun 30, 2016, at 10:23 AM, Mike Jacobacci wrote:
>
> Hi Jake,
>
> Interesting… XenServer 7 does has rbd installed but trying to map the rbd
> image with this command:
>
> # echo {ceph_monitor_ip} name={ceph_admin},secret={ceph_key} {ceph_pool}
/dev/sdc4) _read_bdev_label unable to decode label at offset 66:
buffer::malformed_input: void
bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) decode past end
of struct encoding”
Cheers,
Mike
> On Jun 30, 2016, at 10:55 AM, Mike Jacobacci wrote:
>
> I am not sure why the mapping
Hi Jake,
I will give that a try and see if that helps, thank you!
Yes I have that open in a browser tab, it gave me the idea of using ceph-deploy
to install on the xenserver.
I will update with the results.
Cheers,
Mike
> On Jun 30, 2016, at 12:42 PM, Jake Young wrote:
>
> Can yo
So after adding the ceph repo and enabling the CentOS-7 repo, it fails trying to
install ceph-common:
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirror.web-ster.com
Resolving Dependencies
--> Running transaction check
---> Package ceph-common.x86_64 1:10.2.2-
a042a42, missing 200
[35074.469178] libceph: mon0 192.168.10.187:6789 socket error on read
> On Jun 30, 2016, at 6:15 PM, Jake Young wrote:
>
> See https://www.mail-archive.com/ceph-users@lists.ceph.com/msg17112.html
> <https://www.mail-archive.com/ceph-users@lists.ceph.co
XenServer and am excited to use Ceph, I want
this to work.
Since there are no VM’s on it yet, I think I will upgrade the kernel and see
what happens.
Cheers,
Mike
> On Jun 30, 2016, at 7:40 PM, Somnath Roy wrote:
>
> It seems your client kernel is pretty old ?
> Either upgrade
Yes, I would like to know too… I decided not to update the kernel as it could
possibly affect xenserver's stability and/or performance.
Cheers,
Mike
> On Jun 30, 2016, at 11:54 PM, Josef Johansson wrote:
>
> Also, is it possible to recompile the rbd kernel module in XenServer? I am
So just to update, I decided to ditch XenServer and go with Openstack… Thanks
for everyone’s help with this!
Cheers,
Mike
> On Jul 1, 2016, at 1:29 PM, Mike Jacobacci wrote:
>
> Yes, I would like to know too… I decided not to update the kernel as it
> could possibly affect
On 07/08/2016 02:22 PM, Oliver Dzombic wrote:
> Hi,
>
> does anyone have experience how to connect vmware with ceph smart ?
>
> iSCSI multipath does not really worked well.
Are you trying to export rbd images from multiple iscsi targets at the
same time or just one target?
For the HA/multiple t
this working under RHEL or CentOS is to
upgrade the kernel… Doesn't that mean RBD isn't supported for a production
environment unless it's Ubuntu or whatever OS supports a kernel over 3.18?
Thanks for everyone’s help so far!
Cheers,
Mike
Hi Sean,
Thanks for the quick response; this is what I see in dmesg:
set mismatch, my 102b84a842a42 < server's 40102b84a842a42, missing
400
How can I set the tunable low enough? And what does that mean for performance?
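If I read the docs correctly, the tunables are lowered by selecting an older profile, e.g. something like the following, though which profile is low enough depends on which feature bits the client kernel is missing:
# ceph osd crush tunables bobtail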
Cheers,
Mike
> On Jul 12, 2016, at 9:43 AM, S
Thanks, Can I ignore this warning then?
health HEALTH_WARN
crush map has legacy tunables (require bobtail, min is firefly)
Cheers,
Mike
> On Jul 12, 2016, at 9:57 AM, Sean Redmond wrote:
>
> Hi,
>
> Take a look at the docs here
> (http://docs.ceph.com/docs/jewe
>
> mon warn on legacy crush tunables = false in [mon] section in ceph.conf.
>
> Thanks
>
> On Tue, Jul 12, 2016 at 5:59 PM, Mike Jacobacci <mailto:mi...@flowjo.com>> wrote:
> Thanks, Can I ignore this warning then?
>
> health HEALTH_WARN
> crush m
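For the archives, that suggestion would presumably look like this in ceph.conf on the monitors:
[mon]
    mon warn on legacy crush tunables = false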
On 07/20/2016 03:50 AM, Frédéric Nass wrote:
>
> Hi Mike,
>
> Thanks for the update on the RHCS iSCSI target.
>
> Will RHCS 2.1 iSCSI target be compliant with VMWare ESXi client ? (or is
> it too early to say / announce).
No HA support for sure. We are looking into
On 07/20/2016 11:52 AM, Jan Schermer wrote:
>
>> On 20 Jul 2016, at 18:38, Mike Christie wrote:
>>
>> On 07/20/2016 03:50 AM, Frédéric Nass wrote:
>>>
>>> Hi Mike,
>>>
>>> Thanks for the update on the RHCS iSCSI target.
>>>
&
with it.
/*
* Code from QEMU Block driver for RADOS (Ceph) ported to a TCMU handler
* by Mike Christie.
*
* Copyright (C) 2010-2011 Christian Brunner ,
* Josh Durgin
*
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-
On 07/21/2016 11:41 AM, Mike Christie wrote:
> On 07/20/2016 02:20 PM, Jake Young wrote:
>>
>> For starters, STGT doesn't implement VAAI properly and you will need to
>> disable VAAI in ESXi.
>>
>> LIO does seem to implement VAAI properly, but performance is n
?
Currently, we are on hammer 0.94.5 and linux ubuntu, kernel 3.13.
Thanks and regards,
Mike
On Mar 1, 2013, at 6:08 PM, "McNamara, Bradley"
wrote:
> I'm new, too, and I guess I just need a little clarification on Greg's
> statement. The RBD filesystem is mounted to multiple VM servers, say, in a
> Proxmox cluster, and as long as any one VM image file on that filesystem is
> only b
Try http://ceph.com/debian-testing/dists/
On Mar 5, 2013, at 11:44 AM, Scott Kinder wrote:
> When is ceph 0.57 going to be available from the ceph.com PPA? I checked, and
> all releases under http://ceph.com/debian/dists/ seem to still be 0.56.3. Or
> am I missing something?
> _
ed in a couple weeks, but I hope to start 0.60 later today.
- Mike
On 4/8/2013 12:43 AM, Matthew Roy wrote:
I'm seeing weird messages in my monitor logs that don't correlate to
admin activity:
2013-04-07 22:54:11.528871 7f2e9e6c8700 1 --
[2001:::20]:6789/0 --> [2001:::20]:
, I have seen it on test deployments with a single monitor, so it
doesn't seem to be limited to deployments with a leader and followers.
Thanks for getting this bug moving forward.
Thanks,
Mike
On 4/18/2013 6:23 PM, Gregory Farnum wrote:
There's a little bit of python called ceph-c
On 4/19/2013 11:43 AM, Gregory Farnum wrote:
On Thu, Apr 18, 2013 at 7:59 PM, Mike Dawson wrote:
Greg,
Looks like Sage has a fix for this problem. In case it matters, I have seen
a few cases that conflict with your notes in this thread and the bug report.
I have seen the bug exclusively on
When I had similar trouble, it was btrfs file deletion, and I just had to wait
until it recovered. I promptly switched to xfs. Also, if you are using a
kernel before 3.8.0 with btrfs you will lose data.
On Apr 19, 2013, at 7:20 PM, Steven Presser wrote:
> Huh. My whole cluster seems stuck
If it says 'active+clean' then it is OK no matter what else it may additionally
have as a status. Deep scrubbing is just a normal background process that
makes sure your data is consistent and shouldn't keep you from accessing it.
Repair should only be done as a last resort; it will discard any re
2. On kernels older than 3.8, btrfs will lose data with sparse files, so DO NOT
USE IT. I've had trouble with btrfs file deletion hanging my OSDs for up to
15 minutes on kernel 3.7 with the btrfs sparse file patch applied.
On Apr 23, 2013, at 8:20 PM, Steve Hindle wrote:
>
> Hi All,
>
> The
Mandell,
Not sure if you can start from a partition to see which OSD it belongs
to, but you can start from the OSD to see which journal partition
belongs to it:
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep
osd_journal | grep -v size
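The on-disk layout usually gives it away too; on my boxes the journal is a symlink (or plain file) inside the OSD data directory, so for example:
# ls -l /var/lib/ceph/osd/ceph-0/journal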
- Mike
On 4/24/2013 9:05 PM
up1
But there are buckets set in the crush map (Attached).
How can I fix this?
Editing the crush map and doing setcrushmap doesn't appear to change anything.
Cheers
Mike
Mike,
I use a process like:
crushtool -c new-crushmap.txt -o new-crushmap && ceph osd setcrushmap -i
new-crushmap
I did not attempt to validate your crush map. If that command fails, I
would scrutinize your crushmap for validity/correctness.
Once you have the new crushmap injec
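To double-check what the cluster is actually using, you can pull the live map back out and decompile it, e.g.:
# ceph osd getcrushmap -o current-crushmap
# crushtool -d current-crushmap -o current-crushmap.txt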
gitbuilder.
Thanks,
Mike
On 4/25/2013 12:17 PM, Sage Weil wrote:
On Thu, 25 Apr 2013, Martin Mailand wrote:
Hi,
if I shut down an OSD, the OSD gets marked down after 20 seconds; after
300 seconds the osd should get marked out, and the cluster should resync.
But that doesn't happen, the OSD stays i
All of my MDS daemons have begun crashing when I start them up, and
they try to begin recovery.
Log attached
Mike
--
Mike Bryant | Systems Administrator | Ocado Technology
mike.bry...@ocado.com | 01707 382148 | www.ocado.com
Ah, looks like it was.
I've got a gitbuilder build of the mds running and it seems to be working.
Thanks!
Mike
On 30 April 2013 16:56, Kevin Decherf wrote:
> On Tue, Apr 30, 2013 at 03:10:00PM +0100, Mike Bryant wrote:
>> All of my MDS daemons have begun crashing when I star