When using the RBD backend for OpenStack volumes, I can easily surpass 200MB/s.
But when using the "rbd import" command, e.g.:
# rbd import --pool test Debian.raw volume-Debian-1 --new-format --id volumes
I can only import at ~30MB/s.
I don't know why rbd import is so slow. What can I do to improve import speed?
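One way to narrow this down (a sketch, assuming the same "test" pool; volume-Debian-2 is just a throwaway image name) is to baseline the cluster itself with rados bench, which writes 4MB objects much like an RBD-backed VM does, and then time the import for comparison:

# rados bench -p test 60 write --no-cleanup
# rados bench -p test 60 seq
# rados -p test cleanup
# time rbd import --pool test Debian.raw volume-Debian-2 --new-format --id volumes

If rados bench gets well above 30MB/s, the bottleneck is likely the import path (a single client writing the image sequentially) rather than the OSDs.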
Hi there,
I am new to Ceph and still learning its performance capabilities, but I
would like to share my performance results in the hope that they are useful
to others, and also to see if there is room for improvement in my setup.
Firstly, a little about my setup:
3 servers (quad-core CPU, 16GB
Hi,
I want to use my Ceph for a backup repository holding virtual tapes.
When I copy files from my existing backup system to Ceph, all files are
cut off at 1TB. The biggest files are around 5TB for now.
So I'm afraid that the current file size limit is set to 1TB.
How can I increase this limit?
Can th
Hi,
I’m slightly confused about one thing we are observing at the moment. We’re
testing the shutdown/removal of OSD servers and noticed twice as much
backfilling as expected. This is what we did:
1. service ceph stop on some OSD servers.
2. ceph osd out for the above OSDs (to avoid waiting for t
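As an aside, to watch how much data actually moves while doing this (and, when the goal is only to test shutting servers down rather than removing them, to avoid any rebalancing at all), the standard commands are roughly:

# ceph -s
# ceph -w
# ceph health detail
# ceph osd set noout
  ... stop and later restart the OSD servers ...
# ceph osd unset noout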
On 01/09/2014 12:06 PM, Markus Goldberg wrote:
Hi,
I want to use my Ceph for a backup repository holding virtual tapes.
When I copy files from my existing backup system to Ceph, all files are
cut off at 1TB. The biggest files are around 5TB for now.
CephFS? RBD? Or the RADOS Gateway?
So I'm afraid
I think I found a comment in the documentation that's not intended to be
there:
http://ceph.com/docs/master/rbd/rbd-snapshot/
"For the rollback section, you could mention that rollback means
overwriting the current version with data from a snapshot, and takes
longer with larger images. So cloning i
Hi Markus,
There is a compile-time limit of 1 TB per file in CephFS. We can increase that
pretty easily. I need to check whether it can be safely switched to a
configurable...
sage
Markus Goldberg wrote:
>Hi,
>i want to use my ceph for a backup-repository holding virtual-tapes.
>When i copy
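For later readers: in more recent releases this limit did become a runtime, per-filesystem setting rather than a compile-time constant. A minimal sketch, assuming the filesystem is named "cephfs" and you want an 8 TiB limit:

# ceph fs set cephfs max_file_size 8796093022208
# ceph fs get cephfs | grep max_file_size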
I've installed Kraken on my cluster and it works fine!
Thanks for this release :)
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Don Talton (dotalton)
Sent: Thursday, 9 January 2014 06:31
To: ceph-us...@ceph.com
Subject: [ceph-users] The Kraken has b
Excellent stuff, Donald.
Thanks :-)
Many Thanks
Karan Singh
- Original Message -
From: "Don Talton (dotalton)"
To: ceph-us...@ceph.com
Sent: Thursday, 9 January, 2014 7:31:16 AM
Subject: [ceph-users] The Kraken has been released!
The first phase of Kraken (free) dashboard f
Here’s a more direct question. Given this osd tree:
# ceph osd tree |head
# id    weight  type name               up/down reweight
-1      2952    root default
-2      2952            room 0513-R-0050
-3      262.1                   rack RJ35
...
-14     135.8                   rack RJ57
-51     0
Hi Mordur,
I'm definitely straining my memory on this one, but happy to help if I can?
I'm pretty sure I did not figure it out -- you can see I didn't get
any feedback from the list. What I did do, however, was uninstall
everything and try the same setup with mkcephfs, which worked fine at
the t
On Thu, Jan 9, 2014 at 9:45 AM, Travis Rhoden wrote:
> HI Mordur,
>
> I'm definitely straining my memory on this one, but happy to help if I can?
>
> I'm pretty sure I did not figure it out -- you can see I didn't get
> any feedback from the list. What I did do, however, was uninstall
> everythin
On Thu, Jan 9, 2014 at 9:48 AM, Alfredo Deza wrote:
> On Thu, Jan 9, 2014 at 9:45 AM, Travis Rhoden wrote:
>> HI Mordur,
>>
>> I'm definitely straining my memory on this one, but happy to help if I can?
>>
>> I'm pretty sure I did not figure it out -- you can see I didn't get
>> any feedback from
Hi all,
I am new here and glad to see you guys.
Thanks for your hard work providing a more stable, powerful, and functional
Ceph.
I was reading the source code of Ceph-0.72.2, and I've got a question:
what is the data packet format of Ceph? Or how are the packets packaged?
We know that typical tcp p
Am 09.01.2014 10:25, schrieb Bradley Kite:
> 3 servers (quad-core CPU, 16GB RAM), each with 4 SATA 7.2K RPM disks (4TB)
> plus a 160GB SSD.
> [...]
> By comparison, a 12-disk RAID5 iscsi SAN is doing ~4000 read iops and ~2000
> iops write (but with 15KRPM SAS disks).
I think that comparing Ceph on
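To make the comparison with the iSCSI SAN more direct, it may be worth measuring 4K random IOPS against a mapped RBD device with fio rather than inferring from throughput. A sketch, assuming an image is already mapped at /dev/rbd0 (the device name is an example, and the write test destroys data on it):

# fio --name=rbd-randwrite --filename=/dev/rbd0 --direct=1 --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio --runtime=60 --time_based
# fio --name=rbd-randread --filename=/dev/rbd0 --direct=1 --rw=randread --bs=4k --iodepth=32 --ioengine=libaio --runtime=60 --time_based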
Hi guys, thank you very much for your feedback. I'm new to Ceph, so I
ask you to be patient with my newbie-ness.
I'm dealing with the same issue although I'm not using ceph-deploy. I
installed manually (for learning purposes) a small test cluster of three
nodes, one to host the single mon and two
On 9 January 2014 15:44, Christian Kauhaus wrote:
> Am 09.01.2014 10:25, schrieb Bradley Kite:
> > 3 servers (quad-core CPU, 16GB RAM), each with 4 SATA 7.2K RPM disks
> (4TB)
> > plus a 160GB SSD.
> > [...]
> > By comparison, a 12-disk RAID5 iscsi SAN is doing ~4000 read iops and
> ~2000
> > iop
I've recently run into an issue with the RBD kernel client in emperor where I'm
mapping and formatting an image, then repeatedly mounting it, writing data to
it, unmounting it, and snapshotting it. Nearly every time (with only one
exception so far), the driver appears to deadlock after the eight
I had the same issue a couple of days ago. I fixed it by applying
pending patches at
https://git.kernel.org/cgit/linux/kernel/git/sage/ceph-client.git/?h=for-stable-3.10.24
According to recent mails, they should be included in the next maintenance
Linux release, 3.10.26.
On 01/09/14 17:44, Stephen
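To check whether a given client already has the fixes, compare the running kernel and, if needed, pull the patched branch named in that URL (the git:// clone address below is assumed from the usual kernel.org layout):

# uname -r
# git clone --branch for-stable-3.10.24 --depth 1 git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git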
On 01/09/2014 10:43 AM, Bradley Kite wrote:
On 9 January 2014 15:44, Christian Kauhaus <k...@gocept.com> wrote:
Am 09.01.2014 10:25, schrieb Bradley Kite:
> 3 servers (quad-core CPU, 16GB RAM), each with 4 SATA 7.2K RPM
disks (4TB)
> plus a 160GB SSD.
> [...]
>
I have done some similar testing before.
Here are a few things to keep in mind.
1) Ceph writes to the journal and then to the filestore. If you put bcache in
front of the entire OSD, that causes 4 I/O write operations for each single
Ceph write, per OSD: one to the journal/cache, a second to the journ
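This is also why journals usually end up on a separate SSD partition rather than behind the same cache as the data disks. A quick way to check where each OSD's journal currently lives (default paths shown; adjust the OSD id):

# ls -l /var/lib/ceph/osd/ceph-0/journal
# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config get osd_journal
# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config get osd_journal_size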
Thanks Wolfgang. The fix should be in shortly.
On Thu, Jan 9, 2014 at 3:52 AM, Wolfgang Hennerbichler wrote:
> I think I found a comment in the documentation that's not intended to be
> there:
>
> http://ceph.com/docs/master/rbd/rbd-snapshot/
> "For the rollback section, you could mention that ro
On Thu, Jan 9, 2014 at 11:15 AM, Mordur Ingolfsson wrote:
> Hi guys, thank you very much for your feedback. I'm new to Ceph, so I ask
> you to be patient with my newbie-ness.
>
> I'm dealing with the same issue although I'm not using ceph-deploy. I
> installed manually (for learning purposes) a s
One more incentive to learn Django :-)
On 09/01/2014 06:31, Don Talton (dotalton) wrote:
> The first phase of Kraken (free) dashboard for Ceph cluster monitoring is
> complete. You can grab it here (https://github.com/krakendash/krakendash)
>
>
>
> Pictures here http://imgur.com/a/JoVPy
>
>
On Thu, Jan 9, 2014 at 6:27 AM, Dan Van Der Ster
wrote:
> Here’s a more direct question. Given this osd tree:
>
> # ceph osd tree |head
> # id    weight  type name               up/down reweight
> -1      2952    root default
> -2      2952            room 0513-R-0050
> -3      262.1                   ra
Thanks Greg. One thought I had is that I might try just crush rm'ing the OSD
instead of or just after marking it out... That should avoid the double
rebalance, right?
Cheers, Dan
On Jan 9, 2014 7:57 PM, Gregory Farnum wrote:
On Thu, Jan 9, 2014 at 6:27 AM, Dan Van Der Ster
wrote:
> Here’s a m
Yep!
On Thu, Jan 9, 2014 at 11:01 AM, Dan Van Der Ster
wrote:
> Thanks Greg. One thought I had is that I might try just crush rm'ing the OSD
> instead of or just after marking it out... That should avoid the double
> rebalance, right?
>
> Cheers, Dan
>
> On Jan 9, 2014 7:57 PM, Gregory Farnum wr
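For the archives, the removal sequence this implies looks roughly like the following (osd.12 is a placeholder; with the sysvinit scripts the daemon can be stopped by id). The crush remove step is the one that triggers the single rebalance, so skipping a separate "ceph osd out" avoids moving the data twice:

# service ceph stop osd.12
# ceph osd crush remove osd.12
# ceph auth del osd.12
# ceph osd rm 12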
Awesome.
You guys feel free to contribute to
https://github.com/dmsimard/python-cephclient as well :)
David Moreau Simard
IT Architecture Specialist
http://iweb.com
On Jan 9, 2014, at 12:31 AM, Don Talton (dotalton) wrote:
> The first phase of Kraken (free) dashboard for Ceph cluster monitor
The ceph-deploy git master should now handle wheezy/sid and jessie/sid
combinations.
On Wed, Jan 8, 2014 at 11:23 AM, Alfredo Deza wrote:
> On Wed, Jan 8, 2014 at 2:00 PM, Gilles Mocellin
> wrote:
> > On 08/01/2014 02:46, Christian Balzer wrote:
> >
> >> It is what it is.
> >> As in, sid (u
The ceph-deploy git master should be able to handle wheezy/sid and jessie/sid
strings now (and anything else following the a/b syntax), and not raise unless
we have flagged the version as unsupported.
On Sun, Dec 1, 2013 at 6:38 PM, James Harper
wrote:
> >
> > On Sun, Dec 1, 2013 at 6:47 PM, James Harper
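For anyone wondering what those strings look like in practice: on a Debian testing/unstable box the version is reported as a two-part codename, which is what ceph-deploy has to parse:

# cat /etc/debian_version        (prints something like "jessie/sid")
# lsb_release -a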
Hi Bradley,
I did a similar benchmark recently, and the results are no better than yours.
My setup:
3 servers (CPU: Intel Xeon E5-2609 0 @ 2.40GHz, RAM: 32GB); I used only 2 SATA
7.2K RPM disks (2TB) plus a 400GB SSD for OSDs in total.
Servers are connected with 10Gbps Ethernet.
Replication level:
Emperor
Mostly our evaluation is going pretty well, but things like this are
driving us crazy. Less crazy than when we were evaluating Swift, however :)
The part I am most worried about is that we have no idea how to even start
finding out what is happening here. The admin interface has very little
doc
I've noticed this on 2 (development) clusters that I have with pools
having size 1. I guess my first question would be - is this expected?
Here's some detail from one of the clusters:
$ ceph -v
ceph version 0.74-621-g6fac2ac (6fac2acc5e6f77651ffcd7dc7aa833713517d8a6)
$ ceph osd dump
epoch 104
With a pool size of 1, scrub can still do some consistency checking. These
are things like missing attributes, on-disk size not matching attribute size,
a non-clone without a head, or an expected clone. You could check the OSD logs to see
what they were.
The pg below only had 1 object in error and
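As a starting point for the log check, something along these lines (the PG id, OSD id and grep pattern are placeholders/heuristics):

# ceph health detail | grep inconsistent
# ceph pg dump | grep inconsistent
# grep -i scrub /var/log/ceph/ceph-osd.<id>.log | grep -iE 'error|missing|mismatch'
# ceph pg deep-scrub <pgid>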
Hello:
I ran into a problem while installing that I cannot resolve; I hope
the Ceph community can help me, thank you very much!
I installed version 0.56.4:
1. yum install dependencies
2. compiled 0.56.4.tar.gz
3. make && make install
4. start
5. ceph-fu
Hi, I recommend you deploy Ceph on an Ubuntu system using ceph-deploy.
You can find the guide at http://ceph.com/docs/master/start/quick-ceph-deploy/
hnuzhou...@gmail.com
From: liuweid
Sent: 2014-01-10 09:32
To: ceph-users
Subject: [ceph-users] Ceph installation/access problem
Hello:
I ran into a problem while installing
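The short version of that guide, for reference (host names and the data disk are placeholders):

# ceph-deploy new node1
# ceph-deploy install node1 node2 node3
# ceph-deploy mon create-initial
# ceph-deploy osd create node2:sdb node3:sdb
# ceph-deploy admin node1 node2 node3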
Dear all,
I have some pgs that are stuck_unclean, I'm trying to understand why.
Hopefully someone can help me shed some light on it.
For example, one of them is
# ceph pg dump_stuck unclean
1.fa 0 0 0 0 0 0 0 active+remapped 2014-01-10 11:18:53.147842 0'0 6452:4272
[7] [7,3] 0'0 2014-01-09 11:37
On 10/01/14 16:18, David Zafman wrote:
With pool size of 1 the scrub can still do some consistency checking. These
are things like missing attributes, on-disk size doesn’t match attribute size,
non-clone without a head, expected clone. You could check the osd logs to see
what they were.
The
On 01/10/2014 05:13 AM, YIP Wai Peng wrote:
Dear all,
I have some pgs that are stuck_unclean, I'm trying to understand why.
Hopefully someone can help me shed some light on it.
For example, one of them is
# ceph pg dump_stuck unclean
1.fa 0 0 0 active+remapped 2014-01-10 11:18:53.147842 0'0 6452:
Hi Wido,
Thanks for the reply. I've dumped the query below.
"recovery_state" doesn't say anything, there are also no missing or
unfounded objects. What else could be wrong?
- WP
P.S: I am running tunables optimal already.
{ "state": "active+remapped",
"epoch": 6500,
"up": [
7],
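In the dump earlier in the thread the up set was [7] but the acting set was [7,3], which usually means CRUSH can currently only choose one OSD for that PG, so it stays remapped to keep two copies. A few commands that help narrow down why (the pg id is the one from that dump):

# ceph pg map 1.fa
# ceph pg 1.fa query
# ceph osd tree
# ceph osd crush show-tunables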