Hi cepher,
I have the same problem:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-December/035999.html
Is there any solution for that?
thanks
Vida
On 2015-07-09T14:05:55, David Burley wrote:
> Converted a few of our OSDs (spinners) over to a config where the OSD
> journal and XFS journal both live on an NVMe drive (Intel P3700). The XFS
> journal might have provided some very minimal performance gains (3%,
> maybe). Given the low gains, we
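For anyone wanting to reproduce that layout: an external XFS log is set up at
mkfs time and the same log device has to be named again at mount time. A rough
sketch only, with made-up device names:

    # /dev/sdb1 = OSD data, /dev/nvme0n1p2 = small partition for the XFS log
    mkfs.xfs -l logdev=/dev/nvme0n1p2 /dev/sdb1
    mount -o logdev=/dev/nvme0n1p2 /dev/sdb1 /var/lib/ceph/osd/ceph-0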
This is usually caused by use of older kernel clients. I don't remember
exactly what version it was fixed in, but iirc we've seen the problem
with 3.14 and seen it go away with 3.18.
If your system is otherwise functioning well, this is not a critical
error -- it just means that the MDS mig
Thank you John,
All my servers are Ubuntu 14.04 with the 3.16 kernel.
Not all clients show this problem, and the cluster seems to be functioning well
now.
As you suggest, I will change the mds_cache_size from 10 to 50 and take a
test. Thanks again!
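For reference, a sketch of how the cache size is usually changed; the value
below is only an example (the option counts inodes):

    # ceph.conf on the MDS host
    [mds]
        mds cache size = 500000

    # or on a running MDS, without a restart:
    ceph daemon mds.<id> config set mds_cache_size 500000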
2015-07-10 17:00 GMT+08:00 John Spray :
>
> This is usuall
On 07/10/15 02:13, Christoph Adomeit wrote:
> Hi Guys,
>
> I have a ceph pool that mixes 10k rpm disks and 7.2k rpm disks.
>
> There are 85 osds and 10 of them are 10k
> Size is not an issue; the pool is only 20% full.
>
> I want to somehow prefer the 10k rpm disks so that they get more
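One common way to do that, sketched below with made-up names: keep the 10k
OSDs under their own CRUSH root and point a dedicated pool at it, or leave the
pool alone and just steer primaries to the fast disks with primary affinity.

    # separate root + rule for the 10k rpm OSDs
    ceph osd crush add-bucket fast root
    ceph osd crush add-bucket node1-fast host
    ceph osd crush move node1-fast root=fast
    ceph osd crush set osd.12 1.0 root=fast host=node1-fast   # repeat per 10k OSD
    ceph osd crush rule create-simple fast_rule fast host
    ceph osd pool create fastpool 256 256 replicated fast_rule

    # or keep one pool and prefer the 10k disks as primaries (reads):
    ceph osd primary-affinity osd.12 1.0    # and e.g. 0.5 on the 7.2k OSDs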
Hi All,
So this may have been asked but I’ve googled the crap out of this so maybe my
google-fu needs work. Does anyone have any experience running a Ceph cluster
with the Ceph daemons (mons/osds/rgw) running on the same hosts as other
services (so say Docker containers, or really anything gene
We run CEPH OSDs on the same hosts as QEMU/KVM with OpenStack. You need to
segregate the processes so the OSDs have their dedicated cores and memory,
other than that it works fine. Our MONs also run on the same hosts as the
OpenStack controller nodes (L3 agents and such) - no problem here, you j
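If it helps, the CPU pinning part can be as simple as a systemd drop-in for
the OSD units (assuming your OSDs run under systemd; core numbers below are
just an example), with qemu pinned to the remaining cores via libvirt vcpupin:

    # /etc/systemd/system/ceph-osd@.service.d/pinning.conf
    [Service]
    CPUAffinity=8 9 10 11 12 13 14 15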
Hey,
I noticed today, when trying to upgrade one of our clusters from Giant to
Hammer with ceph-deploy, that I am not able to retrieve release.asc.
This happens because wget to ceph.com/git fails (it seems to redirect to
git.ceph.com) and then uses git.ceph.com, which doesn't have an IPv6 address
at all.
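A possible workaround, assuming your ceph-deploy version has the --repo-url
and --gpg-url options: mirror the release key onto an IPv6-reachable host you
control and point the install step at it explicitly, e.g.

    ceph-deploy install \
        --repo-url http://mirror.example.com/ceph/debian-hammer \
        --gpg-url http://mirror.example.com/ceph/release.asc <node>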
>
> In a similar direction, one could try using bcache on top of the actual
> spinner. Have you tried that, too?
>
>
We haven't tried bcache/flashcache/...
--
David Burley
NOC Manager, Sr. Systems Programmer/Analyst
Slashdot Media
e: da...@slashdotmedia.com
Which request generated this trace?
Is it from the nova-compute log?
> On 10 Jul 2015, at 07:13, Mario Codeniera wrote:
>
> Hi,
>
> It is my first time here. I am just having an issue with my OpenStack
> configuration, which works perfectly for cinder and glance based on K
Every HDD has a “mean access time” number (related to the rotation speed,
number of heads, etc.).
With an 8 ms access time this gives you 1000/8 = 125 seeks per second.
This is where it comes from :-)
Of course the best case will be better, I generally calculate 150 IOPS for any
SATA drive, 200 for S
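Spelling that number out, since the quoted access time already bundles two
components:

    mean access time ~= average seek time + rotational latency
    rotational latency at 7200 rpm ~= (60000 ms / 7200) / 2 ~= 4.2 ms
    e.g. 4 ms seek + 4.2 ms latency ~= 8.2 ms -> 1000 / 8.2 ~= 120 IOPS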
We have tried both - you can see performance gain, but we finally went
toward ceph cache tier. It's much more flexible and gives similar gains
in terms of performance.
Downside to bcache is that you can't use it on a drive that already has
data - only new, clean partitions can be added - and (
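For anyone curious, a cache tier like the one mentioned above is wired up
roughly like this; pool names are made up and the cache pool would sit on a
CRUSH rule that only uses the fast devices:

    ceph osd pool create cachepool 512 512
    ceph osd tier add spinnerpool cachepool
    ceph osd tier cache-mode cachepool writeback
    ceph osd tier set-overlay spinnerpool cachepool
    ceph osd pool set cachepool hit_set_type bloom
    ceph osd pool set cachepool target_max_bytes 500000000000   # size to your SSDs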
On 2015-07-10T15:20:23, Jacek Jarosiewicz wrote:
> We have tried both - you can see performance gain, but we finally went
> toward ceph cache tier. It's much more flexible and gives similar gains in
> terms of performance.
>
> Downside to bcache is that you can't use it on a drive that already h
I have installed ceph (0.94.2) using the ceph-deploy utility. I have created
three VMs with Ubuntu 14.04: ceph01, ceph02 and ceph03. Each one has 3 OSD
daemons and 1 mon; ceph01 also has ceph-deploy.
I need help, because I have read the online docs and tried many things, but
I didn't find why my clust
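Hard to say more without the rest of the message, but the usual starting
point is the output of the commands below; on small test clusters the most
common cause of a cluster that never reaches HEALTH_OK is the pool replication
size / CRUSH failure domain not matching the number of hosts.

    ceph -s
    ceph health detail
    ceph osd tree
    ceph osd pool get rbd size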
What was your monitor nodes' configuration when you had multiple ceph
daemons running on them?
*Nate Curry*
IT Manager
ISSM
*Mosaic ATM*
mobile: 240.285.7341
office: 571.223.7036 x226
cu...@mosaicatm.com
On Thu, Jul 9, 2015 at 5:36 PM, Quentin Hartman <
qhart...@direwolfdigital.com> wrote:
> I h
We do the same, so far no problems.
><>
nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
www.broadsoft.com
On Fri, Jul 10, 2015 at 6:51 AM, Jan Schermer wrote:
> We run CEPH OSDs on the same hosts as QEMU/KVM with OpenStack. You need to
> segregate the processes so the OSDs
You mean the hardware config? They are older Core2-based servers with 4GB
of RAM. Nothing special. I have one running mon and rgw, one running mon
and mds, and one running just a mon.
QH
On Fri, Jul 10, 2015 at 8:58 AM, Nate Curry wrote:
> What was your monitor node's configuration when you had mul
Hi,
I can update my ceph packages to 0.80.10.
But I can't find any information about this version (website, mailing list).
Does someone know where I can find this information?
Regards
--
--
Pierre BLONDEAU
Systems & Network Administrator
Université
Yes that was what I meant. Thanks. Was that in a production environment?
Nate Curry
On Jul 10, 2015 11:21 AM, "Quentin Hartman"
wrote:
> You mean the hardware config? They are older Core2-based servers with 4GB
> of RAM. Nothing special. I have one running mon and rgw, one running mon
> and md
The release notes have not yet been published.
On 10/07/2015 17:31, Pierre BLONDEAU wrote:
> Hi,
>
> I can update my ceph packages to 0.80.10.
> But I can't find any information about this version (website, mailing list).
> Does someone know where I can find this information?
>
> Regards
>
>
For very small values of production. I never had more than a couple clients
hitting either of them, but they were doing "real work". Ultimately though,
we decided to just use NFS exports from a VM to do what we were trying to
do with rgw and mds.
QH
On Fri, Jul 10, 2015 at 9:47 AM, Nate Curry wr
The Ceph Admin REST API is producing SignatureDoesNotMatch access denied errors
when attempting to make a request for the user's key sub-resource. Both PUT and
DELETE actions for the /admin/user?key resource are failing even though the
string to sign on the client and the one returned by the s
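One thing worth checking, since the admin API uses the S3 v2 signing scheme:
in that scheme only a fixed whitelist of sub-resources gets appended to the
canonical resource, and "key" is not on it, so the string to sign should
normally end in /admin/user rather than /admin/user?key. A rough client-side
sketch of the calculation, with placeholder values:

    DATE="Fri, 10 Jul 2015 16:00:00 GMT"
    STRING_TO_SIGN="PUT\n\n\n${DATE}\n/admin/user"
    SIG=$(echo -en "$STRING_TO_SIGN" | openssl dgst -sha1 -hmac "$SECRET_KEY" -binary | base64)
    # request header:  Authorization: AWS ${ACCESS_KEY}:${SIG}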
Some of our users have experienced this as well:
https://github.com/deis/deis/issues/3969
One of our other users suggested performing a deep scrub of all PGs - the
suspicion is that this is caused by a corrupt file on the filesystem.
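If you want to try that, the simplest way to kick off a deep scrub of
everything is per OSD (note that it adds real load while it runs):

    for osd in $(ceph osd ls); do ceph osd deep-scrub $osd; done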
On Thu, Jul 9, 2015 at 12:53 AM, Sylvain Munaut <
s.mun...@what
Hi all,
I think I found a bug in the cephfs kernel client.
When I create a directory in cephfs and set its layout to
ceph.dir.layout="stripe_unit=1073741824 stripe_count=1
object_size=1073741824 pool=somepool"
attempts to write a larger file will cause a kernel hang or reboot.
When I'm using cephfs client
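For comparison, the same attribute set to a more conventional layout (path
and pool name are placeholders; 4 MB objects are the usual default, while the
1 GB values above are what seem to trigger the hang):

    setfattr -n ceph.dir.layout \
      -v "stripe_unit=4194304 stripe_count=1 object_size=4194304 pool=somepool" \
      /mnt/cephfs/testdir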
Hi,
> Some of our users have experienced this as well:
> https://github.com/deis/deis/issues/3969
>
> One of our other users suggested performing a deep scrub of all PGs - the
> suspicion is that this is caused by a corrupt file on the filesystem.
That somehow appeared right when I upgraded to ha