On 08/18/2013 10:58 PM, Wolfgang Hennerbichler wrote:
On Sun, Aug 18, 2013 at 06:57:56PM +1000, Martin Rudat wrote:
Hi,
On 2013-02-25 20:46, Wolfgang Hennerbichler wrote:
maybe some of you are interested in this - I'm using a dedicated VM to
back up important VMs which have their storage in RBD
Hello Mark,
Hello list,
I fixed the monitor issue. There was another monitor which wasn't running
any more; I've removed it. Now I'm lost as to why the MDS is still replaying
its journal.
root@vvx-ceph-m-02:/var/lib/ceph/mon# ceph health detail
HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean; m
You're right, PGLog::undirty() looks suspicious. I just pushed a
branch wip-dumpling-pglog-undirty with a new config
(osd_debug_pg_log_writeout) which if set to false will disable some
strictly debugging checks which occur in PGLog::undirty(). We haven't
actually seen these checks causing excessi
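For anyone trying the branch, a minimal sketch of disabling the new option (the option name is from the mail above; the exact injectargs spelling varies a little between releases, so verify against your version):

# at runtime, per OSD, no restart needed
ceph tell osd.0 injectargs '--osd_debug_pg_log_writeout=false'
# or persistently, then restart the OSDs
cat >> /etc/ceph/ceph.conf <<'EOF'
[osd]
    osd debug pg log writeout = false
EOF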
Hello, I just have some small questions about Ceph deployment models and
whether this would work for us.
Currently, the first question would be: is it possible to have a single-node
Ceph setup, where everything is on one node?
Our application, Ceph's object storage, and a database? We focus on this
deploy
On 08/19/2013 10:36 AM, Schmitt, Christian wrote:
> Hello, I just have some small questions about Ceph Deployment models and
> if this would work for us.
> Currently the first question would be, is it possible to have a ceph
> single node setup, where everything is on one node?
yes. depends on 'ev
Hi,
On 2013-08-19 15:48, Guang Yang wrote:
After walking through some of the Ceph documentation, I have a couple of
questions:
1. Is there any comparison between Ceph and AWS S3, in terms of the
ability to handle different workloads (from KB to GB), with a
corresponding performance report?
No idea; I
Thanks for your response.
Great.
Is it also fixed in the latest Cuttlefish?
We have two problems with scrubbing:
- memory leaks
- slow requests, and the OSD holding the bucket index being wrongly marked down (when scrubbing)
For now we have decided to turn off scrubbing and trigger it during a maintenance window.
I noticed th
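A sketch of how that can be done with the cluster flags and manual scrub commands (names as in current releases; nodeep-scrub in particular may not exist on older versions):

# stop the cluster from scheduling scrubs on its own
ceph osd set noscrub
ceph osd set nodeep-scrub
# during the maintenance window, trigger scrubs by hand
ceph osd deep-scrub osd.12      # a whole OSD (osd.12 is just an example)
ceph pg deep-scrub 3.45         # or a single PG
# re-enable automatic scrubbing afterwards
ceph osd unset noscrub
ceph osd unset nodeep-scrub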
On 19/08/13 18:17, Guang Yang wrote:
3. Some industry research shows that one issue with file systems is the
metadata-to-data ratio, in terms of both access and storage, and some
techniques combine small files into large physical files to reduce the
ratio (Haystack, for example),
On 08/19/2013 11:18 AM, Mark Kirkwood wrote:
> However if you use rados gateway (S3 or Swift look-alike
> api) then each client data object will be broken up into chunks at the
> rados level (typically 4M sized chunks).
=> which is a good thing in terms of replication and OSD usage
distribution.
Hello List,
The troubles with fixing this cluster continue... I get output like this now:
# ceph health
HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean; mds cluster is
degraded; mds vvx-ceph-m-03 is laggy
When I check for ceph-mds processes, there are now none left... no
matter which
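A few things worth checking in that situation (the MDS name comes from the health output above; init syntax and log paths differ per distro, so treat these as examples):

ceph mds stat                             # what the monitors think the MDS state is
ps aux | grep ceph-mds                    # is a daemon actually running locally?
service ceph start mds.vvx-ceph-m-03      # (re)start it via the sysvinit script
tail -n 100 /var/log/ceph/ceph-mds.vvx-ceph-m-03.log    # look for the reason it died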
On 2013-08-19 18:36, Schmitt, Christian wrote:
Currently the first question would be, is it possible to have a ceph
single node setup, where everything is on one node?
Yes, definitely, I've currently got a single-node ceph 'cluster', but,
to the best of my knowledge, it's not the recommended con
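For reference, a minimal sketch of the ceph.conf bits that make a one-node cluster usable (fsid, hostname and address are placeholders; the chooseleaf setting is what allows replicas to land on different OSDs of the same host):

cat > /etc/ceph/ceph.conf <<'EOF'
[global]
    fsid = <your fsid>
    mon initial members = node1
    mon host = 192.168.0.10
    auth cluster required = cephx
    auth service required = cephx
    auth client required = cephx
    # place replicas across OSDs rather than across hosts
    osd crush chooseleaf type = 0
    # two copies is more realistic than three with a handful of OSDs
    osd pool default size = 2
EOF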
On 08/18/2013 07:11 PM, Oliver Daudey wrote:
Hey all,
Also created on the tracker, under http://tracker.ceph.com/issues/6047
While playing around on my test-cluster, I ran into a problem that I've
seen before, but have never been able to reproduce until now. The use
of pool-snapshots and rbd-s
> Date: Mon, 19 Aug 2013 10:50:25 +0200
> From: Wolfgang Hennerbichler
> To:
> Subject: Re: [ceph-users] Ceph Deployments
>
> On 08/19/2013 10:36 AM, Schmitt, Christian wrote:
> > Hello, I just have
On 08/19/2013 12:01 PM, Schmitt, Christian wrote:
>> yes. depends on 'everything', but it's possible (though not recommended)
>> to run mon, mds, and osd's on the same host, and even do virtualisation.
>
> Currently we don't want to virtualise on this machine since the
> machine is really small, a
Hi ceph-users,
I would like to check if there is any manual / set of steps I can follow
to try deploying Ceph on RHEL?
Thanks,
Guang
2013/8/19 Wolfgang Hennerbichler :
> On 08/19/2013 12:01 PM, Schmitt, Christian wrote:
>>> yes. depends on 'everything', but it's possible (though not recommended)
>>> to run mon, mds, and osd's on the same host, and even do virtualisation.
>>
>> Currently we don't want to virtualise on this machin
On Mon, Aug 19, 2013 at 6:09 PM, Guang Yang wrote:
> Hi ceph-users,
> I would like to check if there is any manual / steps which can let me try to
> deploy ceph in RHEL?
Setup with ceph-deploy: http://dachary.org/?p=1971
Official documentation will also be helpful:
http://ceph.com/docs/master/sta
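To give an idea of the flow, a ceph-deploy run of that era looks roughly like this (hostnames and disks are placeholders; the links above are authoritative):

ceph-deploy new mon1                        # writes the initial ceph.conf and fsid
ceph-deploy install mon1 osd1 osd2          # installs the packages (RPMs on RHEL/CentOS)
ceph-deploy mon create mon1
ceph-deploy gatherkeys mon1
ceph-deploy osd create osd1:sdb osd2:sdb    # one OSD per listed disk
ceph health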
Hi,
I have an OSD which crashes every time I try to start it (see logs below).
Is it a known problem? And is there a way to fix it?
root@taman:/var/log/ceph# grep -v ' pipe' osd.65.log
2013-08-19 11:07:48.478558 7f6fe367a780 0 ceph version 0.61.7
(8f010aff684e820ecc837c25ac77c7a05d7191ff), pro
I have a 3-node, 15-OSD Ceph cluster setup:
* 15 7200 RPM SATA disks, 5 for each node.
* 10G network
* Intel(R) Xeon(R) CPU E5-2620 (6 cores) 2.00GHz, for each node.
* 64G RAM for each node.
I deployed the cluster with ceph-deploy, and created a new data pool for
cephfs. Both the data and metadata
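Before looking at CephFS itself, it usually helps to get a raw RADOS baseline with rados bench, along these lines (pool name and thread count are only examples; flags vary slightly between versions):

# 60 seconds of 4 MB object writes with 16 concurrent ops,
# keeping the objects so the read test has something to read
rados -p testpool bench 60 write -t 16 --no-cleanup
# sequential reads of the objects written above
rados -p testpool bench 60 seq -t 16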
Hi,
I just noticed that in dumpling the "ceph" CLI tool no longer utilises the
"CEPH_ARGS" environment variable. This is used by OpenStack Cinder to specify
the cephx user. Ref:
http://ceph.com/docs/next/rbd/rbd-openstack/#configure-openstack-to-use-ceph
I modified this line in /usr/share
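For reference, the difference in behaviour looks roughly like this ('volumes' is only an example cephx user, not something from the report above):

# what the cinder driver has relied on so far
export CEPH_ARGS="--id volumes"
ceph health                  # the dumpling ceph tool ignores CEPH_ARGS here
# spelling it out explicitly still works
ceph --id volumes --keyring /etc/ceph/ceph.client.volumes.keyring health
rbd --id volumes ls volumes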
Sorry, I forgot to mention the OS and kernel version.
It's CentOS 6.4 with kernel 3.10.6, fio 2.0.13.
From: dachun...@outlook.com
To: ceph-users@lists.ceph.com
Date: Mon, 19 Aug 2013 11:28:24 +
Subject: [ceph-users] Poor write/random read/random write performance
I have a 3 nodes, 15 osds ceph
On Mon, Aug 19, 2013 at 4:26 AM, Nico Massenberg
wrote:
> Hi Alfredo,
>
> thanks for your response. I updated ceph-deploy to v1.2.1 and got the v0.5.2
> of pushy from:
> https://launchpad.net/pushy/+download
Ah, there *should* be no need for that, as the packages we publish
also have pushy listed
On Fri, Aug 16, 2013 at 8:32 AM, Pavel Timoschenkov
wrote:
> << causing this to << filesystem and prevent this.
>
> Hi. Any changes (
>
> Can you create a build that passes the -t flag with mount?
>
I tried going through these steps again and could not get any other
ideas except to pass in that f
On 08/19/2013 06:28 AM, Da Chun Ng wrote:
I have a 3 nodes, 15 osds ceph cluster setup:
* 15 7200 RPM SATA disks, 5 for each node.
* 10G network
* Intel(R) Xeon(R) CPU E5-2620(6 cores) 2.00GHz, for each node.
* 64G Ram for each node.
I deployed the cluster with ceph-deploy, and created a new dat
Thanks very much, Mark!
Yes, I put the data and journal on the same disk, no SSD in my environment.
My controllers are general SATA II.
Some more questions below in blue.
Date: Mon, 19 Aug 2013 07:48:23 -0500
From: mark.nel...@inktank.com
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Poor
On Mon, 19 Aug 2013, Mostowiec Dominik wrote:
> Thanks for your response.
> Great.
>
> In latest cuttlefish it is also fixed I think?
>
> We have two problems with scrubbing:
> - memory leaks
> - slow requests and wrongly mark osd with bucket index down (when scrubbing)
The slow requests can tri
On Mon, 19 Aug 2013, Sébastien Han wrote:
> Hi,
>
> The new version of the driver (for Havana) doesn't need the CEPH_ARGS
> argument, the driver now uses the librbd and librados (not the CLI anymore).
>
> I guess a better patch will result in:
>
> stdout, _ = self._execute('ceph', '--id', 'self
On Mon, 19 Aug 2013, Sébastien Han wrote:
> Hi guys,
>
> While reading a developer doc, I came across the following options:
>
> * osd balance reads = true
> * osd shed reads = true
> * osd shed reads min latency
> * osd shed reads min latency diff
>
> The problem is that I can't find any of the
On Sunday, August 18, 2013, Guang Yang wrote:
> Hi ceph-users,
> This is Guang and I am pretty new to ceph, glad to meet you guys in the
> community!
>
> After walking through some documents of Ceph, I have a couple of questions:
> 1. Is there any comparison between Ceph and AWS S3, in terms of
Have you ever used the FS? It's missing an object which we intermittently
fail to create (on initial setup) when the cluster is unstable.
If so, clear out the metadata pool and check the docs for "newfs".
-Greg
On Monday, August 19, 2013, Georg Höllrigl wrote:
> Hello List,
>
> The
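If it does come to that, the command in this era takes pool IDs rather than names, roughly as below; this is a sketch from memory, so double-check the newfs docs first, since it discards the existing filesystem metadata:

ceph osd dump | grep pool     # note the IDs of the metadata and data pools
ceph mds newfs <metadata-pool-id> <data-pool-id> --yes-i-really-mean-it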
On 08/19/2013 08:59 AM, Da Chun Ng wrote:
> Thanks very much! Mark.
> Yes, I put the data and journal on the same disk, no SSD in my environment.
> My controllers are general SATA II.
Ok, so in this case the lack of WB cache on the controller and no SSDs
for journals is probably having an effect.
Thank you! Testing now.
How about pg num? I'm using the default of 64; I tried (100 *
osd_num)/replica_size, but surprisingly it decreased performance.
> Date: Mon, 19 Aug 2013 11:33:30 -0500
> From: mark.nel...@inktank.com
> To: dachun...@outlook.com
> CC: ceph-users@lists.ceph.com
On 08/19/2013 12:05 PM, Da Chun Ng wrote:
> Thank you! Testing now.
>
> How about pg num? I'm using the default size 64, as I tried with (100 *
> osd_num)/replica_size, but it decreased the performance surprisingly.
Oh! That's odd! Typically you would want more than that. Most likely
you aren
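For example, with 15 OSDs and 2 copies the usual rule of thumb gives (100 * 15) / 2 = 750, normally rounded up to the next power of two, i.e. 1024 (pool names below are examples; raising pg_num on an existing pool may not be supported on older releases):

# create a new pool with 1024 placement groups
ceph osd pool create testpool 1024 1024
# or raise it on an existing pool (pgp_num should follow pg_num)
ceph osd pool set data pg_num 1024
ceph osd pool set data pgp_num 1024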
Actually, I wrote the Quick Start guides so that you could do exactly
what you are trying to do, but mostly from a "kick the tires"
perspective so that people can learn to use Ceph without imposing
$100k worth of hardware as a requirement. See
http://ceph.com/docs/master/start/quick-ceph-deploy/
I
On Mon, Aug 19, 2013 at 9:07 AM, Sage Weil wrote:
> On Mon, 19 Aug 2013, Sébastien Han wrote:
>> Hi guys,
>>
>> While reading a developer doc, I came across the following options:
>>
>> * osd balance reads = true
>> * osd shed reads = true
>> * osd shed reads min latency
>> * osd shed reads min la
That sounds bad to me.
As I said, one of the things we are considering is a one-node setup for production.
Not every customer can afford hardware worth more than ~4000 Euro.
Small business users don't need the biggest hardware, but I don't
think it's a good way to have a version that uses the filesyst
What you are trying to do will work, because you will not need any
kernel-related code for object storage, so a one-node setup is fine for you.
--
Sent from my mobile device
On 19.08.2013, at 20:29, "Schmitt, Christian" wrote:
> That sounds bad for me.
> As said one of the things we consid
Wolfgang is correct. You do not need VMs at all if you are setting up
Ceph Object Storage. It's just Apache, FastCGI, and the radosgw daemon
interacting with the Ceph Storage Cluster. You can do that on one box
no problem. It's still better to have more drives for performance
though.
On Mon, Aug 1
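A rough sketch of the pieces on a single box (package names are Debian/Ubuntu style, and the gateway name, keyring and socket paths are just the conventional examples from the docs of that era):

apt-get install apache2 libapache2-mod-fastcgi radosgw
# minimal gateway section in /etc/ceph/ceph.conf
cat >> /etc/ceph/ceph.conf <<'EOF'
[client.radosgw.gateway]
    host = gateway-host
    keyring = /etc/ceph/keyring.radosgw.gateway
    rgw socket path = /tmp/radosgw.sock
    log file = /var/log/ceph/radosgw.log
EOF
# create and authorise the gateway's cephx key, then start the daemon
ceph auth get-or-create client.radosgw.gateway mon 'allow rw' osd 'allow rwx' \
    -o /etc/ceph/keyring.radosgw.gateway
/etc/init.d/radosgw start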
On Fri, Aug 16, 2013 at 5:47 AM, Mostowiec Dominik
wrote:
> Hi,
> Thanks for your response.
>
>> It's possible, as deep scrub in particular will add a bit of load (it
>> goes through and compares the object contents).
>
> It is possible that the scrubbing blocks access (RW or only W) to bucket inde
We've made another point release for Cuttlefish. This release contains a
number of fixes that are generally not individually critical, but do trip
up users from time to time, are non-intrusive, and have held up under
testing.
Notable changes include:
* librados: fix async aio completion wakeu
Hey Samuel,
Thanks! I installed your version, repeated the same tests on my
test cluster, and the extra CPU load seems to have disappeared. Then
I replaced one OSD of my production cluster with your modified version
and its config option, and it seems to be a lot less CPU-hungry now.
Although
Hi Oliver,
Glad that helped! How much more efficient do the cuttlefish OSDs seem
at this point (with wip-dumpling-pglog-undirty)? On modern Intel
platforms we were actually hoping to see CPU usage go down in many cases
due to the use of hardware CRC32 instructions.
Mark
On 08/19/2013 03:0
On Monday 19 August 2013 at 12:27 +0200, Olivier Bonvalet wrote:
> Hi,
>
> I have an OSD which crashes every time I try to start it (see logs below).
> Is it a known problem? And is there a way to fix it?
>
> root@taman:/var/log/ceph# grep -v ' pipe' osd.65.log
> 2013-08-19 11:07:48.478558 7f6fe3
Hey Mark,
If I look at the "wip-dumpling-pglog-undirty" version with regular top,
I see a slightly higher base load on the OSD, with significantly more
and higher spikes in it than the Dumpling OSDs. Looking with `perf
top', "PGLog::undirty()" is still there, although pulling significantly
less C
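For anyone wanting to repeat that measurement: plain `perf top' gives the system-wide view, and it can also be pointed at just the OSD processes; the pidof/tr bit below only builds the comma-separated PID list perf expects.

perf top                                       # system-wide, watch for ceph-osd symbols
perf top -p "$(pidof ceph-osd | tr ' ' ',')"   # sample only the local ceph-osd processes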
Hi,
> Is that the only slow request message you see?
No.
Full log: https://www.dropbox.com/s/i3ep5dcimndwvj1/slow_requests.txt.tar.gz
It starts from:
2013-08-16 09:43:39.662878 mon.0 10.174.81.132:6788/0 4276384 : [DBG] osd.4
10.174.81.131:6805/31460 reported failed by osd.50 10.174.81.135:6842/26
Hi,
> Yes, it definitely can as scrubbing takes locks on the PG, which will prevent
> reads or writes while the message is being processed (which will involve the
> rgw index being scanned).
Is it possible to tune the scrubbing config to eliminate slow requests and
OSDs being marked down when large rgw b
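The knobs usually mentioned for taming scrub impact are along these lines (values are only examples, and availability of individual options depends on the release):

cat >> /etc/ceph/ceph.conf <<'EOF'
[osd]
    # at most one scrub per OSD at a time
    osd max scrubs = 1
    # skip scheduled scrubs while the host load is above this
    osd scrub load threshold = 0.5
    # stretch the intervals (in seconds) so scrubs run less often
    osd scrub min interval = 86400
    osd scrub max interval = 604800
    osd deep scrub interval = 604800
EOF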
>
> We've made another point release for Cuttlefish. This release contains a
> number of fixes that are generally not individually critical, but do trip
> up users from time to time, are non-intrusive, and have held up under
> testing.
>
> Notable changes include:
>
> * librados: fix async aio
On Mon, Aug 19, 2013 at 3:09 PM, Mostowiec Dominik
wrote:
> Hi,
>> Yes, it definitely can as scrubbing takes locks on the PG, which will
>> prevent reads or writes while the message is being processed (which will
>> involve the rgw index being scanned).
> It is possible to tune scrubbing config
On Mon, 19 Aug 2013, James Harper wrote:
> >
> > We've made another point release for Cuttlefish. This release contains a
> > number of fixes that are generally not individually critical, but do trip
> > up users from time to time, are non-intrusive, and have held up under
> > testing.
> >
> > No
> On Mon, 19 Aug 2013, James Harper wrote:
> > >
> > > We've made another point release for Cuttlefish. This release contains a
> > > number of fixes that are generally not individually critical, but do trip
> > > up users from time to time, are non-intrusive, and have held up under
> > > testing.
Thanks Mark.
What are the design considerations behind breaking large files into 4M chunks
rather than storing the large file directly?
Thanks,
Guang
From: Mark Kirkwood
To: Guang Yang
Cc: "ceph-users@lists.ceph.com"
Sent: Monday, August 19, 2013 5:18 PM
Subject:
Thanks Greg.
Some comments inline...
On Sunday, August 18, 2013, Guang Yang wrote:
Hi ceph-users,
>This is Guang and I am pretty new to ceph, glad to meet you guys in the
>community!
>
>
>After walking through some documents of Ceph, I have a couple of questions:
> 1. Is there any comparison
On Monday, August 19, 2013, Guang Yang wrote:
> Thanks Greg.
>
> Some comments inline...
>
> On Sunday, August 18, 2013, Guang Yang wrote:
>
> Hi ceph-users,
> This is Guang and I am pretty new to ceph, glad to meet you guys in the
> community!
>
> After walking through some documents of Ceph, I h
On 20/08/13 13:27, Guang Yang wrote:
Thanks Mark.
What are the design considerations behind breaking large files into 4M chunks
rather than storing the large file directly?
Quoting Wolfgang from previous reply:
=> which is a good thing in terms of replication and OSD usage
distribution
...which co
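The same chunking is easy to see with RBD, where the object size is 2^order bytes (order 22 = 4 MB), so a 1 GB image maps onto up to 256 such objects spread over the OSDs. A quick illustration (pool and image names are made up):

rbd create --size 1024 --order 22 rbd/chunktest   # 1 GB image, 4 MB objects
rbd info rbd/chunktest                            # reports: order 22 (4096 kB objects)
# the individual backing objects (they only appear as data is written):
rados -p rbd ls | grep rb.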
Transferring this back to ceph-users. Sorry, I can't help with rbd issues.
One thing I will say is that if you are mounting an rbd device with a
filesystem on a machine to export it over FTP, you can't also export the same
device via iSCSI.
David Zafman
Senior Developer
http://www.inktank.com
On A