On 17 Sep 2013, at 21:47, Jason Villalta wrote:
> dd if=ddbenchfile of=/dev/null bs=8K
> 8192000000 bytes (8.2 GB) copied, 19.7318 s, 415 MB/s
As a general point, this benchmark may not do what you think it does, depending
on the version of dd, as writes to /dev/null can be heavily optimised.
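As a rough sketch (not from the original mail), two common ways to keep such a
read benchmark honest are to force the data through a pipe, or to drop and then
bypass the page cache so the read really hits the disk:
# drop the page cache first (needs root), then read with O_DIRECT
sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
dd if=ddbenchfile of=/dev/null bs=8K iflag=direct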
Hello to all,
Thanks for your answers.
Well... after an awful night, I found the problem...
It was an MTU mistake!
Nothing to do with Ceph!
So sorry for the noise, and thanks again.
Best Regards - Cordialement
Alexis
That dd gives me this.
dd if=ddbenchfile of=- bs=8K | dd if=- of=/dev/null bs=8K
8192000000 bytes (8.2 GB) copied, 31.1807 s, 263 MB/s
Which makes sense, because the SSD is running at SATA 2 speed: 3 Gbps, which
after 8b/10b encoding works out to roughly 300 MB/s.
I am still trying to better understand the speed difference between the
http://rpm.repo.onapp.com/repo/centos/6/x86_64/
On Wed, Sep 18, 2013 at 4:32 AM, Aquino, BenX O wrote:
> Hello Ceph Users Group,
>
> Looking for rbd.ko for Centos6.3_x64 (2.6.32) or Centos6.4_x64 (2.6.38).
>
> Or point me to a buildable source or an RPM kernel package that has it.
Hi,
We just finished debugging a problem with RBD-backed Glance image creation
failures, and thought our workaround would be useful for others. Basically, we
found that during an image upload, librbd on the glance-api server was
consuming many, many processes, eventually hitting the 1024 nproc limit
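For context, a sketch of how such a limit is typically raised on EL6-style
systems, assuming glance-api runs as a "glance" user; whether this matches the
workaround referred to above is not shown here (on EL6 the 1024 default usually
comes from /etc/security/limits.d/90-nproc.conf):
# /etc/security/limits.d/91-glance-nproc.conf (hypothetical file name)
glance   soft   nproc   8192
glance   hard   nproc   8192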
Hi all,
There is a new release of ceph-deploy, the easy ceph deployment tool.
There was a good number of bug fixes in this release and a wealth of
improvements. Thanks to all of you who contributed patches and issues, and
thanks to Dmitry Borodaenko and Andrew Woodward for extensively testing
Hi,
I read in the Ceph documentation that one of the main performance snags in Ceph
is running the OSDs and their journal files on the same disks, and that you
should at a minimum consider running the journals on SSDs.
Given I am looking to design a 150 TB cluster, I'm c
Hello Timofey,
Do you still see your images with "rbd ls"?
Which format (1 or 2) do you use?
Laurent Barbe
On 18/09/2013 08:54, Timofey wrote:
I renamed a few images when the cluster was in a degraded state. Now I can't
map one of them; the error is:
rbd: add failed: (6) No such device or address
I
I use format 1.
Yes, I see the images, but I can't map that one.
> Hello Timofey,
>
> Do you still see your images with "rbd ls"?
> Which format (1 or 2) do you use?
>
>
> Laurent Barbe
>
>
> On 18/09/2013 08:54, Timofey wrote:
>> I renamed a few images when the cluster was in a degraded state. Now I can't map
What is returned by "rbd info"?
Do you see your image in the rbd_directory object?
(replace rbd with the correct pool):
# rados get -p rbd rbd_directory - | strings
Do you have an object called oldname.rbd or newname.rbd?
# rados get -p rbd oldname.rbd - | strings
# rados get -p rbd newname.rbd - | strings
Ian,
There are two schools of thought here. Some people say, run the journal
on a separate partition on the spinner alongside the OSD partition, and
don't mess with SSDs for journals. This may be the best practice for an
architecture of high-density chassis.
The other design is to use SSDs f
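For anyone new to the second approach, a minimal ceph.conf sketch that points
one OSD's journal at a partition on a separate SSD (the OSD id and partition
label are made up):
[osd.12]
# hypothetical partition on a dedicated journal SSD
osd journal = /dev/disk/by-partlabel/osd12-journal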
Excellent overview Mike!
Mark
On 09/18/2013 10:03 AM, Mike Dawson wrote:
Ian,
There are two schools of thought here. Some people say, run the journal
on a separate partition on the spinner alongside the OSD partition, and
don't mess with SSDs for journals. This may be the best practice for an
Thanks Mike, great info!
-Original Message-
From: Mike Dawson [mailto:mike.daw...@cloudapt.com]
Sent: 18 September 2013 16:04
To: Porter, Ian M; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] OSD and Journal Files
Ian,
There are two schools of thou
On 18.09.2013 17:03, Mike Dawson wrote:
I think you'll be OK on CPU and RAM.
I'm running the latest dumpling here, and with default settings each OSD
consumes more than 3 GB of RAM at peak. So with 48 GB of RAM it would not be
possible to run the desired 18 OSDs (18 x 3 GB = 54 GB). I filed a bug report for this here
htt
Which kernel version are you using on the client?
What is the status of the PGs?
# uname -a
# ceph pg stat
Laurent
On 18/09/2013 17:45, Timofey wrote:
yes, format 1:
rbd info cve-backup | grep format
format: 1
no, about this image:
dmesg | grep rbd
[ 294.355188] rbd: loaded rbd (rados block device)
[
Thanks Raj,
Which of these RPM versions have you used on production machines?
Thanks again in advance.
Regards,
-ben
From: raj kumar [mailto:rajkumar600...@gmail.com]
Sent: Wednesday, September 18, 2013 6:09 AM
To: Aquino, BenX O
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] rbd in cento
Hi Alexis,
Great to hear you fixed your problem! Would you care to describe in more
detail what the fix was, in case other people experience the same issues as
you did?
Thanks
Darren
On 18 September 2013 10:12, Alexis GÜNST HORN wrote:
> Hello to all,
> Thanks for your answers.
>
> Well... af
Now I have tried to mount cve-backup again. It mounted OK this time, and I have
copied all the data off it.
I can't continue using Ceph in production for now :(
It takes very deep experience with Ceph to quickly locate the source of an error
and repair it quickly.
I will try to keep using it for data without critical availability requirements (for example
>-Original Message-
>From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
>boun...@lists.ceph.com] On Behalf Of Mike Dawson
>
> you need to understand losing an SSD will cause
>the loss of ALL of the OSDs which had their journal on the failed SSD.
>
>First, you probably don't want
Hi David,
You're welcome to join the next teuthology meeting. It's going to happen
Thursday 19th September (i.e., tomorrow from where I stand) at 6pm Paris time
(CEST). The location (Mumble, IRC, ...) will be announced at 5:30pm Paris time
(CEST) on irc.oftc.net#ceph-devel.
Cheers
On 1
Joseph,
With properly architected failure domains and replication in a Ceph
cluster, RAID1 has diminishing returns.
A well-designed CRUSH map should allow for failures at any level of your
hierarchy (OSDs, hosts, racks, rows, etc) while protecting the data with
a configurable number of copies.
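As a sketch of what that looks like in practice, a decompiled CRUSH rule that
places each replica on a different host (names are the stock defaults):
rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}
Changing "type host" to "type rack" (given racks are declared in the map) moves
the failure domain up one level of the hierarchy.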
On Tue, Sep 17, 2013 at 10:07 PM, david zhang wrote:
> Hi ceph-users,
>
> Previously I sent one mail to ask for help on ceph unit test and function
> test. Thanks to one of your guys, I got replied about unit test.
>
> Since we are planning to use Ceph, but with a strict quality bar internally, we
> hav
What do you mean by index documents? Objects in each bucket are
already kept in an index object; it's how we do listing and things.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Tue, Sep 17, 2013 at 11:37 PM, Jeppesen, Nelson
wrote:
> Is there a way to enable index docume
Just got done deploying the largest Ceph install I've had yet (9 boxes,
179TB), and I used ceph-deploy, but not without much consternation. I
have a question before I file a bug report.
Is the expectation that the deploy host will never be used as the admin
host? I ran into various issues rela
Any other thoughts on this thread, guys? Am I just crazy to want near-native
SSD performance on a small SSD cluster?
On Wed, Sep 18, 2013 at 8:21 AM, Jason Villalta wrote:
> That dd give me this.
>
> dd if=ddbenchfile of=- bs=8K | dd if=- of=/dev/null bs=8K
> 8192000000 bytes (8.2 GB) copied, 3
FWIW, we ran into this same issue, could not get a good enough SSD-to-spinner
ratio, and decided on simply running the journals on each (spinning) drive,
for hosts that have 24 slots. The problem gets even
worse when we're talking about some of the newer boxes.
Warren
On Wed, Sep 18, 20
Well, in a word, yes. Do you really expect a network-replicated storage system in
user space to be comparable to direct-attached SSD storage? For what it's
worth, I've got a pile of regular spinning rust; this is what my cluster will
do inside a VM with RBD writeback caching on. As you can see, l
>>-Original Message-
>>From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
>>
>>Again, in this next coming release, you will be able to tell
>>ceph-deploy to just install the packages without mangling your repos
>>(or installing keys)
>
Updated to the new ceph-deploy release 1.2.6 today, but I
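For reference, a sketch of what that invocation should look like once the next
release lands; the flag name below is an assumption based on Alfredo's
description, not something confirmed in this thread:
# hypothetical: install the Ceph packages only, leaving existing repos and keys untouched
ceph-deploy install --no-adjust-repos cephtest03 cephtest04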
Our v0.69 development release of Ceph is ready! The most notable
user-facing new feature is probably improved support for CORS in the
radosgw. There has also been a lot of new work going into the tree behind
the scenes on the OSD that is laying the groundwork for tiering and cache
pools. As
Thanks Mike,
High hopes, right? ;)
I guess we are not doing too badly compared to your numbers then. I just wish
the gap between native and Ceph per-OSD performance was a little smaller.
C:\Program Files (x86)\SQLIO>sqlio -kW -t8 -s30 -o8 -fsequential -b1024 -BH -LS c:\TestFile.dat
sqlio v1.5.SG
using system counter
Hi to all.
Actually I'm building a test cluster with 3 OSD servers connected with
IPoIB for the cluster network and 10GbE for the public network.
I have to connect these OSDs to some MON servers located in another
rack with no gigabit or 10Gb connection.
Could I use some 10/100 network ports? Which ki
On Wed, Sep 18, 2013 at 3:58 PM, Gruher, Joseph R
wrote:
>>>-Original Message-
>>>From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
>>>
>>>Again, in this next coming release, you will be able to tell
>>>ceph-deploy to just install the packages without mangling your repos
>>>(or installi
On Wed, Sep 18, 2013 at 2:56 PM, Warren Wang wrote:
> Just got done deploying the largest ceph install I've had yet (9 boxes,
> 179TB), and I used ceph-deploy, but not without much consternation. I
> have a question before I file a bug report.
>
> Is the expectation that the deploy host will ne
It's a feature Amazon added a few years back; it allows a bucket to return a default
document.
For example, let's say I have http://mybucket.s3.ceph.com/index.html as my
website; I can set my bucket's default index to index.html. Then when I browse to
http://mybucket.s3.ceph.com, it'll return my webpage.
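For illustration, against stock S3 this is usually configured with something
like the following (the s3cmd syntax here is an assumption on my part, and
whether radosgw honours it is exactly the open question):
# hypothetical: set index.html as the default document for the bucket
s3cmd ws-create --ws-index=index.html --ws-error=error.html s3://mybucket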
On Wed, Sep 18, 2013 at 6:33 AM, Dan Van Der Ster
wrote:
> Hi,
> We just finished debugging a problem with RBD-backed Glance image creation
> failures, and thought our workaround would be useful for others. Basically,
> we found that during an image upload, librbd on the glance api server was
>
Hello,
I just restarted one of my mons after a month of uptime, and its memory commit
is now ten times higher than before:
13206 root 10 -10 12.8g 8.8g 107m S 65 14.0 0:53.97 ceph-mon
A normal one looks like:
30092 root 10 -10 4411m 790m 46m S 1 1.2 1260:28 ceph-mon
monstore has simul
Hi,
My OSDs are not joining the cluster correctly,
because the nonce they assume and the nonce they receive from the peer are different.
It says "wrong node" because the entity_addr_t peer_addr (i.e., the
combination of the IP address, port number, and the nonce) is different.
Now, my questions are:
1, Are th
On 09/17/2013 03:30 PM, Somnath Roy wrote:
Hi,
I am running Ceph on a 3-node cluster and each of my server nodes is running 10
OSDs, one for each disk. I have one admin node and all the nodes are connected
with 2 x 10G networks. One network is for the cluster and the other is configured as
the public network
Hey,
On Wed, 18 Sep 2013, Yasuhiro Ohara wrote:
>
> Hi,
>
> My OSDs are not joining the cluster correctly,
> because the nonce they assume and the nonce they receive from the peer are different.
> It says "wrong node" because the entity_addr_t peer_addr (i.e., the
> combination of the IP address, port number,
Using latest ceph-deploy:
ceph@cephtest01:/my-cluster$ sudo ceph-deploy --version
1.2.6
I get this failure:
ceph@cephtest01:/my-cluster$ sudo ceph-deploy install cephtest03 cephtest04
cephtest05 cephtest06
[ceph_deploy.install][DEBUG ] Installing stable version dumpling on cluster
ceph hosts ce
On Sep 18, 2013, at 11:50 PM, Gregory Farnum
wrote:
> On Wed, Sep 18, 2013 at 6:33 AM, Dan Van Der Ster
> wrote:
>> Hi,
>> We just finished debugging a problem with RBD-backed Glance image creation
>> failures, and thought our workaround would be useful for others. Basically,
>> we found tha
Hello Greg,
2013/9/17 Gregory Farnum
> Well, that all looks good to me. I'd just keep writing and see if the
> distribution evens out some.
> You could also double or triple the number of PGs you're using in that
> pool; it's not atrocious but it's a little low for 9 OSDs.
>
Okay I see, thank y