On 19/12/2014 02:18, Craig Lewis wrote:
> The daemons bind to *,
Yes but *only* for the OSD daemon. Am I wrong?
In my case I have to provide IP addresses for the monitors
in /etc/ceph/ceph.conf, like this:
[global]
mon host = 10.0.1.1, 10.0.1.2, 10.0.1.3
Or like this:
[mon.1]
mon addr = 1
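For completeness, the per-monitor form looks roughly like this (the mon
names, hostnames and addresses below are just placeholders; 6789 is the
default monitor port):
  [mon.1]
  host = node1
  mon addr = 10.0.1.1:6789

  [mon.2]
  host = node2
  mon addr = 10.0.1.2:6789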
Hi all,
I had 12 OSDs in my cluster across 2 OSD nodes. One of the OSDs was in the
down state, and I removed that OSD from the cluster by removing the CRUSH
rule for it.
Now, with 11 OSDs, the cluster started rebalancing. After some time, the
cluster status was:
ems@rack6-client-5:~$ sudo ceph -s
cluster eb5
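(For anyone following along, the usual sequence for removing a dead OSD is
roughly the following; osd.11 is just a placeholder for the OSD that was
down:)
  ceph osd out 11
  ceph osd crush remove osd.11
  ceph auth del osd.11
  ceph osd rm 11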
Hello,
On Thu, 18 Dec 2014 23:45:57 -0600 Sean Sullivan wrote:
> Wow Christian,
>
> Sorry I missed these in line replies. Give me a minute to gather some
> data. Thanks a million for the in depth responses!
>
No worries.
> I thought about raiding it but I needed the space unfortunately. I had
Wow Christian,
Sorry I missed these inline replies. Give me a minute to gather some data.
Thanks a million for the in depth responses!
I thought about RAIDing it, but I needed the space, unfortunately. I had a
3x60-OSD-node test cluster that we tried before this and it didn't have
this floppi
thanks!
It would be really great in the right hands. Through some stroke of luck
it's in mine. The flapping OSD is becoming a real issue at this point, as it
is the only possible lead I have as to why the gateways are transferring so
slowly. The weird issue is that I can have 8 or 60 transfers goin
Hello,
Nice cluster, I wouldn't mind getting my hands on her ample nacelles, er,
wrong movie. ^o^
On Thu, 18 Dec 2014 21:35:36 -0600 Sean Sullivan wrote:
> Hello Yall!
>
> I can't figure out why my gateways are performing so poorly and I am not
> sure where to start looking. My RBD mounts seem
Thanks for the reply, Gregory,
Sorry if this is in the wrong direction or something. Maybe I do not
understand
To test uploads I use bash time with either python-swiftclient or boto's
key.set_contents_from_filename against the radosgw. I was unaware that
radosgw had any type of throttle settings in
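For what it's worth, the swift-client side of the test is essentially just
this (endpoint, credentials, container and file names below are placeholders,
not my real setup):
  dd if=/dev/zero of=5G.bin bs=1M count=5120
  time swift -A http://gateway.example.com/auth/v1.0 -U test:tester \
      -K secretkey upload testcontainer 5G.bin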
What kind of uploads are you performing? How are you testing?
Have you looked at the admin sockets on any daemons yet? Examining the OSDs
to see if they're behaving differently on the different requests is one
angle of attack. The other is to look into whether the RGW daemons are hitting
throttler limit
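(Something along these lines, assuming default admin socket paths; the
radosgw socket name in particular depends on how the gateway client is named
in ceph.conf:)
  ceph daemon osd.0 perf dump
  ceph daemon osd.0 dump_historic_ops
  ceph --admin-daemon /var/run/ceph/ceph-client.radosgw.gateway.asok perf dump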
Hello Yall!
I can't figure out why my gateways are performing so poorly and I am not
sure where to start looking. My RBD mounts seem to be performing fine
(over 300 MB/s) while uploading a 5G file to Swift/S3 takes 2m32s
(32 MB/s, I believe). If we try a 1G file it's closer to 8 MB/s. Testing
with nu
Hi John,
Yes, no problem! I have a few items that I noticed. They are:
1. The missing 'data' and 'metadata' pools
http://ceph.com/docs/master/install/manual-deployment/
Monitor Bootstrapping -> Steps # 17 & 18
2. The setting 'mon initial members'
On page
'http://ceph
On Fri, 19 Dec 2014 12:28:48 +1000 Lindsay Mathieson wrote:
> On 19 December 2014 at 11:14, Christian Balzer wrote:
> >
> > Hello,
> >
> > On Thu, 18 Dec 2014 16:12:09 -0800 Craig Lewis wrote:
> >
> > Firstly I'd like to confirm what Craig said about small clusters.
> > I just changed my four sto
On 19 December 2014 at 11:14, Christian Balzer wrote:
>
> Hello,
>
> On Thu, 18 Dec 2014 16:12:09 -0800 Craig Lewis wrote:
>
> Firstly I'd like to confirm what Craig said about small clusters.
> I just changed my four storage node test cluster from 1 OSD per node to 4
> and it can now saturate a 1
Okay, this is rather unrelated to Ceph, but I might as well mention how this
is fixed. With the Juno release of OpenStack, 'rbd_store_chunk_size = 8'
now sets the chunk size to 8192 bytes rather than 8192 kB (8 MB), causing
quite a few more objects to be stored and deleted. Setting this to 8192 got me
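For anyone hitting the same thing, the option lives in glance-api.conf,
roughly like this (the section name and the pool/user names are assumptions
that depend on your release and setup; set the chunk-size value according to
the unit behaviour described above):
  [glance_store]
  default_store = rbd
  rbd_store_pool = images
  rbd_store_user = glance
  rbd_store_chunk_size = 8192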
Hey All,
On a new CentOS 7 deployment with Firefly I'm noticing strange behavior when
deleting RBD child disks. It appears that upon deletion, CPU usage on each OSD
process rises to about 75% for 30+ seconds. On my previous deployments
with CentOS 6.x and Ubuntu 12/14 this was never a problem.
Each RBD
The daemons bind to *, so adding the 3rd interface to the machine will
allow you to talk to the daemons on that IP.
I'm not really sure how you'd set up the management network, though. I'd
start by setting the ceph.conf public network on the management nodes to
have the public network 10.0.2.0/24,
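i.e. something like this in the ceph.conf on those nodes (the cluster
network below is only an assumed example; adjust the subnets to your setup):
  [global]
  public network = 10.0.2.0/24
  cluster network = 10.0.3.0/24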
Hello,
On Thu, 18 Dec 2014 16:12:09 -0800 Craig Lewis wrote:
Firstly I'd like to confirm what Craig said about small clusters.
I just changed my four storage node test cluster from 1 OSD per node to 4
and it can now saturate a 1GbE link (110MB/s) where before it peaked at
50-60MB/s. Of course no
Thanks, I'll look into these.
On Thu, Dec 18, 2014 at 5:12 PM, Craig Lewis
wrote:
>
> I think this is it:
> https://engage.redhat.com/inktank-ceph-reference-architecture-s-201409080939
>
> You can also check out a presentation on Cern's Ceph cluster:
> http://www.slideshare.net/Inktank_Ceph/scali
Hi,
Is it possible to have 2 different public networks in a Ceph cluster?
I explain my question below.
Currently, I have 3 identical nodes in my Ceph cluster. Each node has:
- only 1 monitor;
- n osds (we don't care about the value n here);
- and 3 interfaces.
One interface for the "cluster" ne
I think this is it:
https://engage.redhat.com/inktank-ceph-reference-architecture-s-201409080939
You can also check out a presentation on Cern's Ceph cluster:
http://www.slideshare.net/Inktank_Ceph/scaling-ceph-at-cern
At large scale, the biggest problem will likely be network I/O on the
inter-s
I'm interested to know if there is a reference to this reference
architecture. It would help alleviate some of the fears we have about
scaling this thing to a massive scale (10,000s of OSDs).
Thanks,
Robert LeBlanc
On Thu, Dec 18, 2014 at 3:43 PM, Craig Lewis
wrote:
>
>
>
> On Thu, Dec 18, 2014 at
Just to update: I created all the .rgw and .users pools as erasure pools, but
running more tests I realized that if I create just the data pool (.rgw.buckets)
as an erasure pool, everything works fine.
So I'd like to know if someone knows whether there is some incompatibility
between the radosgw and other
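If a replicated cache in front of the erasure-coded .rgw.buckets does turn
out to be needed, the usual tiering setup is roughly the following (the cache
pool name and PG count are placeholders):
  ceph osd pool create .rgw.buckets.cache 128 128 replicated
  ceph osd tier add .rgw.buckets .rgw.buckets.cache
  ceph osd tier cache-mode .rgw.buckets.cache writeback
  ceph osd tier set-overlay .rgw.buckets .rgw.buckets.cache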
On Thu, Dec 18, 2014 at 5:16 AM, Patrick McGarry
wrote:
>
>
> > 2. What should be the minimum hardware requirement of the server (CPU,
> > Memory, NIC etc)
>
> There is no real "minimum" to run Ceph, it's all about what your
> workload will look like and what kind of performance you need. We have
On 19/12/14 03:01, Lindsay Mathieson wrote:
On Thu, 18 Dec 2014 10:05:20 PM Mark Kirkwood wrote:
The effect of this is *highly* dependent on the SSD make/model. My m550s
work vastly better if the journal is a file on a filesystem as opposed
to a partition.
Obviously the Intel S3700/S3500 are a b
Hello,
I'm trying to use an erasure pool as the data pool for the radosgw, but every
time I try to create a new bucket I receive a 500 error code, as you can see
below. Will I need to add a replicated cache pool in front of the erasure pool
to be able to write data using radosgw?
See the radosgw log error:
Before we base thousands of VM image clones off of one or more snapshots, I
want to test what happens when the snapshot becomes corrupted. I don't
believe the snapshot will become corrupted through client access to the
snapshot, but rather through some weird issue with PGs being lost or forced
to be lost, solar f
On Thu, Dec 18, 2014 at 8:40 PM, Lindsay Mathieson
wrote:
>> I don't know how Proxmox configures it, but
>> I assume you're storing the disk images as single files on the FS?
>
> its a single KVM QCOW2 file.
Like the cache mode, the image format might be an interesting thing to
experiment with.
On Thu, 18 Dec 2014 11:23:42 AM Gregory Farnum wrote:
> Do you have any information about *how* the drive is corrupted; what
> part Win7 is unhappy with?
Failure to find the boot sector, I think. I'll run it again and take a
screenshot.
> I don't know how Proxmox configures it, but
> I assume y
Udo,
I was wondering yesterday if aligning the LVM VG to 4MB would provide any
performance benefit. My hunch is that it would, much like erase blocks on
SSDs (probably not so much now). I haven't had a chance to test it though.
If you do, I'd like to know your results.
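If anyone wants to try it, the alignment can be set at PV/VG creation time,
something like this (device and VG names are just placeholders):
  pvcreate --dataalignment 4m /dev/vdb
  vgcreate -s 4m vg_images /dev/vdb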
On Thu, Dec 18, 2014 at 1
On Thu, 18 Dec 2014 08:41:21 PM Udo Lembke wrote:
> have you tried the different cache-options (no cache, write through,
> ...) which proxmox offer, for the drive?
I tried with writeback and it didn't corrupt.
--
Lindsay
On Thu, Dec 18, 2014 at 11:24 AM, Gregory Farnum wrote:
> On Thu, Dec 18, 2014 at 4:04 AM, Daniele Venzano wrote:
>> Hello,
>>
>> I have been trying to upload multi-gigabyte files to CEPH via the object
>> gateway, using both the swift and s3 APIs.
>>
>> With file up to about 2GB everything works
Discard is supported in kernel 3.18 rc1 or greater as per
https://lkml.org/lkml/2014/10/14/450
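With a new enough kernel, the usual ways to exercise it are either online
discard or a periodic trim (device and mount point below are placeholders):
  mount -o discard /dev/rbd0 /mnt/rbd
  fstrim -v /mnt/rbd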
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Robert Sander
> Sent: Friday, December 12, 2014 7:01 AM
> To: ceph-users@lists.ceph.com
> Subje
Hi Lindsay,
have you tried the different cache-options (no cache, write through,
...) which proxmox offer, for the drive?
Udo
On 18.12.2014 05:52, Lindsay Mathieson wrote:
> I've been experimenting with CephFS for running KVM images (proxmox).
>
> cephfs fuse version - 0.87
>
> cephfs kernel mod
On Wed, Dec 17, 2014 at 8:52 PM, Lindsay Mathieson
wrote:
> I've been experimenting with CephFS for running KVM images (proxmox).
>
> cephfs fuse version - 0.87
>
> cephfs kernel module - kernel version 3.10
>
>
> Part of my testing involves running a Windows 7 VM up and running
> CrystalDiskMark
On Thu, Dec 18, 2014 at 4:04 AM, Daniele Venzano wrote:
> Hello,
>
> I have been trying to upload multi-gigabyte files to CEPH via the object
> gateway, using both the swift and s3 APIs.
>
> With file up to about 2GB everything works as expected.
>
> With files bigger than that I get back a "400 B
On 12/18/2014 10:49 AM, Travis Rhoden wrote:
One question re: discard support for kRBD -- does it matter which format
the RBD is? Format 1 and Format 2 are okay, or just for Format 2?
It shouldn't matter which format you use.
Josh
One question re: discard support for kRBD -- does it matter which format
the RBD is? Format 1 and Format 2 are okay, or just for Format 2?
- Travis
On Mon, Dec 15, 2014 at 8:58 AM, Max Power <
mailli...@ferienwohnung-altenbeken.de> wrote:
>
> > Ilya Dryomov wrote on 12 December 2014 at 18:00
On Thu, Dec 18, 2014 at 6:10 PM, JIten Shah wrote:
> So what happens if we upgrade from Firefly to Giant? Do we lose the pools?
Sure, you didn't have any data you wanted to keep, right? :-D
Seriously though, no, we don't delete anything during an upgrade.
It's just newly installed clusters that
No!
It would have been a really bad idea. I upgraded without losing my
default pools, fortunately ;)
--
Thomas Lemarchand
Cloud Solutions SAS - Information Systems Manager
On Thu, 2014-12-18 at 10:10 -0800, JIten Shah wrote:
> So what happens if we upgrade from Firefly to Giant? Do
So what happens if we upgrade from Firefly to Giant? Do we lose the pools?
—Jiten
On Dec 18, 2014, at 5:12 AM, Thomas Lemarchand
wrote:
> I remember reading somewhere (maybe in changelogs) that default pools
> were not created automatically anymore.
>
> You can create pools you need yourself.
Can you point out the specific page that's out of date so that we can update it?
Thanks,
John
On Thu, Dec 18, 2014 at 5:52 PM, Dyweni - Ceph-Users
<6exbab4fy...@dyweni.com> wrote:
> Thanks!!
>
> Looks like the manual installation instructions should be updated, to
> eliminate future confusion
Thanks!!
Looks like the manual installation instructions should be updated
to eliminate future confusion.
Dyweni
On 2014-12-18 07:11, John Spray wrote:
No mistake -- the Ceph FS pools are no longer created by default, as
not everybody needs them. Ceph FS users now create these pools
On 12/18/2014 03:52 PM, Thomas Lemarchand wrote:
> Hi Wido,
>
> I'm really interested in your script.
> Will you release it ? I'm sure I'm not the only one interested ;)
>
Well, it's not a general script to back up CephFS with. It's a fairly
simple Bash script I'm writing for a specific situation
On Thu, 18 Dec 2014 10:05:20 PM Mark Kirkwood wrote:
> The effect of this is *highly* dependent to the SSD make/model. My m550
> work vastly better if the journal is a file on a filesystem as opposed
> to a partition.
>
> Obviously the Intel S3700/S3500 are a better choice - but the OP has
> al
Hi Wido,
I'm really interested in your script.
Will you release it? I'm sure I'm not the only one interested ;)
If you need some help (testing or something else), don't hesitate to ask
me.
--
Thomas Lemarchand
Cloud Solutions SAS - Information Systems Manager
On Thu, 2014-12-18
On 12/18/2014 03:37 PM, Sage Weil wrote:
> On Thu, 18 Dec 2014, Wido den Hollander wrote:
>> Hi,
>>
>> I've been playing around a bit with the recursive statistics for CephFS
>> today and I'm seeing some behavior with the rstats what I don't understand.
>>
>> I /A/B/C in my CephFS.
>>
>> I changed
On Thu, 18 Dec 2014, Wido den Hollander wrote:
> Hi,
>
> I've been playing around a bit with the recursive statistics for CephFS
> today and I'm seeing some behavior with the rstats what I don't understand.
>
> I /A/B/C in my CephFS.
>
> I changed a file in 'C' and the ceph.dir.rctime xattr chan
Hi,
I've been playing around a bit with the recursive statistics for CephFS
today and I'm seeing some behavior with the rstats that I don't understand.
I have /A/B/C in my CephFS.
I changed a file in 'C' and the ceph.dir.rctime xattr changed
immediately. I've been waiting for 60 minutes now, but /A a
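(For reference, the rstats can be read with getfattr; the mount point below
is a placeholder for wherever CephFS is mounted:)
  getfattr -n ceph.dir.rctime /mnt/cephfs/A
  getfattr -n ceph.dir.rbytes /mnt/cephfs/A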
Howdy Ceph rangers,
Just wanted to kick off a bit of holiday cheer from our Ceph family to
yours. As a part of the QEMU advent calendar [0], we have finally
built a quick-and-dirty Ceph image for the purposes of trying and
experimenting with Ceph.
Feel free to download it [1] and try it out, or s
Hey Debashish,
On Thu, Dec 18, 2014 at 6:21 AM, Debashish Das wrote:
> Hi Guys,
>
> I am very new to Ceph & have couple of questions -
>
> 1. Can we install Ceph in a single node (both Monitor & OSD).
You can, but I would only recommend it for testing/experimentation. No
production (or even pre
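For a single-node test you'll typically also want something like this in
ceph.conf so CRUSH places replicas across OSDs rather than hosts (the values
are just common test settings, not recommendations):
  [global]
  osd pool default size = 2
  osd crush chooseleaf type = 0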
I remember reading somewhere (maybe in changelogs) that default pools
were not created automatically anymore.
You can create pools you need yourself.
--
Thomas Lemarchand
Cloud Solutions SAS - Information Systems Manager
On Thu, 2014-12-18 at 06:52 -0600, Dyweni - Ceph-Users wrote
No mistake -- the Ceph FS pools are no longer created by default, as
not everybody needs them. Ceph FS users now create these pools
explicitly:
http://ceph.com/docs/master/cephfs/createfs/
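i.e. something along the lines of the following (pool names and PG counts
below are only examples):
  ceph osd pool create cephfs_data 64
  ceph osd pool create cephfs_metadata 64
  ceph fs new cephfs cephfs_metadata cephfs_data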
John
On Thu, Dec 18, 2014 at 12:52 PM, Dyweni - Ceph-Users
<6exbab4fy...@dyweni.com> wrote:
> Hi All,
>
>
>
Hi Guys,
I am very new to Ceph & have a couple of questions -
1. Can we install Ceph on a single node (both Monitor & OSD)?
2. What should be the minimum hardware requirement of the server (CPU,
Memory, NIC etc)
3. Any webpage where I can find the installation guide to install Ceph on
one node?
I
Hi All,
Just set up the monitor for a new cluster based on Giant (0.87) and I
find that only the 'rbd' pool was created automatically. I don't see
the 'data' or 'metadata' pools in 'ceph osd lspools' or the log files.
I haven't setup any OSDs or MDSs yet. I'm following the manual
deploymen
Hello,
I have been trying to upload multi-gigabyte files to CEPH via the object
gateway, using both the swift and s3 APIs.
With file up to about 2GB everything works as expected.
With files bigger than that I get back a "400 Bad Request" error, both
with S3 (boto) and Swift clients.
Enabling de
On Wed, Dec 17, 2014 at 10:31 PM, McNamara, Bradley
wrote:
> However, when testing this, when changes are made to the read-write Samba
> server the changes don’t seem to be seen by the read-only Samba server. Is
> there some file system caching going on that will eventually be flushed?
As others
On Thu, 18 Dec 2014 10:28:39 AM Thomas Lemarchand wrote:
> I too find Ceph fuse more stable.
Yah, interesting isn't it. The performance was surprisingly good, too.
>
> However, you really should do your tests with a much more recent
> kernel ! 3.10 is old.
Unfortunately I have no choice in that r
On Thu, 18 Dec 2014 10:05:20 PM Mark Kirkwood wrote:
> My m550
> work vastly better if the journal is a file on a filesystem as opposed
> to a partition.
Any particular filesystem? ext4? xfs? or doesn't matter?
--
Lindsay
Kevin,
Yes, that is just too old for vxattrs (the earliest tag with vxattr
support in fuse is v0.57~84^2~6).
In Ceph FS terms, 0.56 is pretty ancient. Because the filesystem is
under active development, you should use a much more recent version
for clusters with Ceph FS enabled -- at least firef
Hello,
> I have a somewhat interesting scenario. I have an RBD of 17TB formatted using
> XFS. I would like it accessible from two different hosts, one mapped/mounted
> read-only, and one mapped/mounted as read-write. Both are shared using Samba
> 4.x. One Samba server gives read-only access to the
I too find Ceph fuse more stable.
However, you really should do your tests with a much more recent
kernel! 3.10 is old.
I think there have been Ceph improvements in every kernel version for a long
time now.
--
Thomas Lemarchand
Cloud Solutions SAS - Information Systems Manager
On Thu, 201
Gregory Farnum writes:
>
>
> Cache tiering is a stable, functioning system. Those particular commands
are for testing and development purposes, not something you should run
(although they ought to be safe). -Greg
Thanks for your reply!
I'll put cache tiering into my production cluster!
The effect of this is *highly* dependent on the SSD make/model. My m550s
work vastly better if the journal is a file on a filesystem as opposed
to a partition.
Obviously the Intel S3700/S3500 are a better choice - but the OP has
already purchased Sammy 840's, so I'm trying to suggest options to
Hi all,
I have some fileservers with insufficient read speed.
Enabling read-ahead inside the VM improves the read speed, but it looks
like this has a drawback during LVM operations like pvmove.
For test purposes, I moved the LVM storage inside a VM from vdb to vdc1.
It takes days, because it's
Hi Mark,
On 18.12.2014 07:15, Mark Kirkwood wrote:
> While you can't do much about the endurance lifetime being a bit low,
> you could possibly improve performance using a journal *file* that is
> located on the 840's (you'll need to symlink it - disclaimer - have
> not tried this myself, but wil
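(If anyone wants to try the journal-as-a-file approach, the rough sequence on
a stopped OSD is something like the following; the OSD id, paths and the init
commands are placeholders for whatever your setup uses:)
  service ceph stop osd.0
  ceph-osd -i 0 --flush-journal
  ln -sf /mnt/ssd840/osd-0.journal /var/lib/ceph/osd/ceph-0/journal
  ceph-osd -i 0 --mkjournal
  service ceph start osd.0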