On 21 June 2013 16:32, Sage Weil wrote:
> On Fri, 21 Jun 2013, Damien Churchill wrote:
>> Hi,
>>
>> I've built a copy of linux 3.10-rc6 (and added the patch from
>> ceph-client/for-linus) however when I try and map a rbd image created
>> with:
>>
>> # rbd create test-format-2 --size 10240 --format
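The quoted command is cut off in the archive; given the image name it presumably ends in --format 2. For reference, a minimal create-and-map sequence for a format 2 image looks roughly like this, assuming the client kernel actually has format 2 rbd support:

# rbd create test-format-2 --size 10240 --format 2
# rbd info test-format-2        # confirms the image format
# rbd map test-format-2
# rbd showmapped                # shows which /dev/rbdX device it got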
Maybe I'm missing something obvious here. Or maybe this is the way it has to
be. I haven't found an answer via Google.
I'm experimenting with ceph 0.61.4 and RBD under Ubuntu 13.0x. I create a
RADOS block device (test), map it, format it as ext4 or xfs, and mount it. No
problem. I grow the
How do you manage cache coherency with Varnish?
On Jun 21, 2013, at 6:09 AM, Artem Silenkov wrote:
> This picture shows the way we do it
> http://habrastorage.org/storage2/1ed/532/627/1ed5326273399df81f3a73179848a404.png
>
> Regards, Artem Silenkov, 2GIS TM.
> ---
> 2GIS LLC
>
> http://2gis.r
On Jun 21, 2013, at 5:00 PM, Mandell Degerness wrote:
> There is a scenario where we would want to remove a monitor and, at a
> later date, re-add the monitor (using the same IP address). Is there
> a supported way to do this? I tried deleting the monitor directory
> and rebuilding from scratch
I'm just finding my way around the Ceph documentation. What I'm hoping
to build are servers with 24 data disks and one O/S disk. From what I've
read, the recommended configuration is to run 24 separate OSDs (or 23 if
I have a separate journal disk/SSD), and not have any sort of in-server
RAID.
On Jun 24, 2013, at 9:13 AM, Edward Huyer wrote:
> I’m experimenting with ceph 0.61.4 and RBD under Ubuntu 13.0x. I create a
> RADOS block device (test), map it, format it as ext4 or xfs, and mount it.
> No problem. I grow the underlying RBD. lsblk on both /dev/rbd/rbd/test and
> /dev/rbd1
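For reference, the usual online-grow sequence is roughly the following sketch; the image and device names match the quote above, the new size and the mount point (/mnt/test) are only placeholders:

# rbd resize test --size 20480
# blockdev --getsize64 /dev/rbd1    # check whether the kernel already sees the new size
# resize2fs /dev/rbd1               # ext4 grows online while mounted
# xfs_growfs /mnt/test              # xfs grows online too, but takes the mount point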
What do you mean 'bring up the second monitor with enough information'?
Here are the basic steps I took. It fails on step 4. If I skip step 4, I get a
number out of range error.
1. ceph auth get mon. -o /tmp/auth
2. ceph mon getmap -o /tmp/map
3. sudo ceph-mon -i 1 --mkfs --monmap /tmp/map
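Step 4 is cut off in the archive. For comparison, the documented add-a-monitor sequence runs roughly as below; the monitor id and address are placeholders, and this is the upstream docs' version rather than the exact steps quoted above:

# ceph auth get mon. -o /tmp/auth
# ceph mon getmap -o /tmp/map
# ceph-mon -i <id> --mkfs --monmap /tmp/map --keyring /tmp/auth
# ceph mon add <id> <ip>[:<port>]
# ceph-mon -i <id> --public-addr <ip>:<port>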
On Jun 24, 2013, at 11:22 AM, Brian Candler wrote:
> I'm just finding my way around the Ceph documentation. What I'm hoping to
> build are servers with 24 data disks and one O/S disk. From what I've read,
> the recommended configuration is to run 24 separate OSDs (or 23 if I have a
> separate
On Mon, Jun 24, 2013 at 10:36 AM, Jeppesen, Nelson wrote:
> What do you mean ‘bring up the second monitor with enough information’?
>
>
>
> Here are the basic steps I took. It fails on step 4. If I skip step 4, I get
> a number out of range error.
>
>
>
> 1. ceph auth get mon. -o /tmp/auth
>
> 2.
On 24/06/2013 18:41, John Nielsen wrote:
The official documentation is maybe not 100% idiot-proof, but it is
step-by-step:
http://ceph.com/docs/master/rados/operations/add-or-rm-osds/
If you lose a disk you want to remove the OSD associated with it. This will
trigger a data migration so you a
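The rest is cut off; the removal sequence on that page boils down to roughly the following, where N is the OSD number and the sysvinit service name is an assumption about the setup:

# ceph osd out <N>                  # let the cluster migrate data off it first
# service ceph stop osd.<N>         # on the host carrying that OSD
# ceph osd crush remove osd.<N>
# ceph auth del osd.<N>
# ceph osd rm <N>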
> -----Original Message-----
> From: John Nielsen [mailto:li...@jnielsen.net]
> Sent: Monday, June 24, 2013 1:24 PM
> To: Edward Huyer
> Cc: ceph-us...@ceph.com
> Subject: Re: [ceph-users] Resizing filesystem on RBD without
> unmount/mount cycle
>
> On Jun 24, 2013, at 9:13 AM, Edward Huyer wrote
If you remove the OSD from the cluster and the crushmap after it fails, the
cluster will automatically re-assign that number to the new OSD when you run
ceph osd create with no arguments.
Here's my procedure for manually adding OSDs. This part of the documentation I
wrote for myself, or in the
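The procedure itself is cut off here. A condensed sketch of the standard manual add, under the assumption of a sysvinit setup with the data disk already mounted at /var/lib/ceph/osd/ceph-<N>, would be:

# ceph osd create                    # prints the OSD number <N> it assigned
# ceph-osd -i <N> --mkfs --mkkey
# ceph auth add osd.<N> osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-<N>/keyring
# ceph osd crush set osd.<N> 1.0 root=default host=<hostname>   # crush set/add syntax varies a bit between releases
# service ceph start osd.<N>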
On 24/06/2013 20:27, Dave Spano wrote:
If you remove the OSD from the cluster and the crushmap after it fails,
the cluster will automatically re-assign that number to the
new OSD when you run ceph osd create with no arguments.
OK - although obviously if you're going to make a disk with a label
Hello folks,
Is PG splitting considered stable now? I feel like I used to see it
discussed all the time (and how it wasn't quite there), but haven't
heard anything about it in a while. I remember seeing related bits in
release notes and such, but never an announcement that "you can now
increase
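If it is stable, the knobs in question would presumably be the per-pool pg_num/pgp_num settings; the pool name and count below are only placeholders:

# ceph osd pool set <pool> pg_num 256
# ceph osd pool set <pool> pgp_num 256    # bump pgp_num once the new PGs exist, so placement follows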
I'm testing the change (actually re-starting the monitors after the
monitor removal), but this brings up the issue with why we didn't want
to do this in the first place: When reducing the number of monitors
from 5 to 3, we are guaranteed to have a service outage for the time
it takes to restart at
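For reference, the shrink step being discussed is, per retired monitor, roughly the following sketch; the id is a placeholder and the sysvinit service name is an assumption:

# service ceph stop mon.<id>     # on the host of the monitor being removed
# ceph mon remove <id>           # run against a surviving monitor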
On Mon, 24 Jun 2013, Mandell Degerness wrote:
> I'm testing the change (actually re-starting the monitors after the
> monitor removal), but this brings up the issue with why we didn't want
> to do this in the first place: When reducing the number of monitors
> from 5 to 3, we are guaranteed to hav
Hmm. This is a bit ugly from our perspective, but not fatal to your
design (just our implementation). At the time we run the rm, the
cluster is smaller and so the restart of each monitor is not fatal to
the cluster. The problem is on our side in terms of guaranteeing
order of behaviors.
On Mon,
I also have problems keeping my time in sync on VMWare virtual
machines. My problems occur most when the VM Host is oversubscribed,
or when I'm doing stress tests. I ended up disabling ntpd in the
guests, and enabled Host Time Sync using the VMWare Guest Tools. All of
my VMWare Hosts run n
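A sketch of that setup on a guest with VMware Tools or open-vm-tools installed; the service and init commands vary by distro:

# service ntp stop                      # or ntpd, depending on the distro
# update-rc.d ntp disable               # Debian/Ubuntu: keep it from starting at boot
# vmware-toolbox-cmd timesync enable
# vmware-toolbox-cmd timesync status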
Hi Jens,
On 6/17/13 05:02 AM, Jens Kristian Søgaard wrote:
Hi Stratos,
you might want to take a look at Synnefo. [1]
I did take a look at it earlier, but decided not to test it.
Mainly I was deterred because I found the documentation a bit lacking.
I opened up the section on File Storage a
I've looked into this a bit, and the best I've come up with is to
snapshot all of the RGW pools. I asked a similar question before:
http://comments.gmane.org/gmane.comp.file-systems.ceph.user/855
I am planning to have a 2nd cluster for disaster recovery, with some
in-house geo-replication.
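Snapshotting the RGW pools would look something like the following; the pool names here are just the common defaults of that era, and rados lspools shows what a given cluster actually has:

# rados lspools
# ceph osd pool mksnap .rgw backup-20130625
# ceph osd pool mksnap .rgw.buckets backup-20130625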
On Mon, 24 Jun 2013, Mandell Degerness wrote:
> Hmm. This is a bit ugly from our perspective, but not fatal to your
> design (just our implementation). At the time we run the rm, the
> cluster is smaller and so the restart of each monitor is not fatal to
> the cluster. The problem is on our side
On 25/06/2013 5:59 AM, Brian Candler wrote:
On 24/06/2013 20:27, Dave Spano wrote:
Here's my procedure for manually adding OSDs.
The other thing I discovered is not to wait between steps; some changes result in a new
crushmap that then triggers replication. You want to speed through the step
The issue, Sage, is that we have to deal with the cluster being
re-expanded. If we start with 5 monitors and scale back to 3 by running
the "ceph mon remove N" command after stopping each monitor, and don't
restart the existing monitors, we cannot re-add those same monitors
that were previously remov
That's where 'ceph osd set noout' comes in handy.
On Jun 24, 2013, at 7:28 PM, Nigel Williams wrote:
> On 25/06/2013 5:59 AM, Brian Candler wrote:
>> On 24/06/2013 20:27, Dave Spano wrote:
>>> Here's my procedure for manually adding OSDs.
>
> The other thing I discovered is not to wait betwee
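To spell out the noout suggestion: flag the cluster before the planned restarts and clear it afterwards.

# ceph osd set noout       # down OSDs are not marked out, so no rebalancing kicks in
  ... do the restarts / maintenance ...
# ceph osd unset noout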
Any plans to build a set of packages for Fedora 19 yet?
F19 has qemu 1.4.2 packaged and we would like to try it with ceph
cuttlefish.
Attempting to install the F18 ceph 0.61.4 bumps into a dependency on
libboost_system-mt.so.1.50.0()(64bit).
The version of libboost on F19 is 1.53 :(
I will have
On Tue, 25 Jun 2013, Darryl Bond wrote:
> Any plans to build a set of packages for Fedora 19 yet?
> F19 has qemu 1.4.2 packaged and we would like to try it with ceph
> cuttlefish.
>
> Attempting to install the F18 ceph 0.61.4 bumps into a dependency on
> libboost_system-mt.so.1.50.0()(64bit).
> Th
Hi All,
One of my three mons failed to start. Below is the error in the mon log. I
tried to attach the complete log, but it's limited.
I can't tell what's happening to it.
--- begin dump of recent events ---
0> 2013-06-25 11:18:47.177334 7f46a868b7c0 -1 *** Caught signal (Aborted)
**
in th
On Tue, 25 Jun 2013, Da Chun wrote:
> Hi All,
> One of my three mons failed to start. Below is the error in the mon log. I
> tried to attach the complete log, but it's limited.
> I can't tell what's happening to it.
>
> --- begin dump of recent events ---
> 0> 2013-06-25 11:18:47.177334 7f46a
Here they are:
2013-06-25 11:18:47.040064 7f46a868b7c0 0 ceph version 0.61.4
(1669132fcfc27d0c0b5e5bb93ade59d147e23404), process ceph-mon, pid 14099
2013-06-25 11:18:47.169526 7f46a868b7c0 1 mon.ceph-node0@-1(probing) e1
preinit fsid 5436253a-8ecc-4509-a3ef-4bfd68387189
2013-06-25 11:18:47.17
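If a fuller log is needed than the list allows as an attachment, one option (assuming the goal is just more detail around the crash) is to crank up monitor debugging and run the daemon in the foreground:

# ceph-mon -i ceph-node0 -d --debug-mon 20 --debug-ms 1 2>&1 | tee /tmp/mon.log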
Good day!
Basically we don't have to. Write operations are comparatively rare and made
from one point. As for read operations, we have a low TTL of 1 second, so
Varnish here is basically not a cache but a balancer and an rps eater.
If you need coherency, you could write directly to radosgw and set up a
low t
On 25 Jun 2013, at 00:39, Mandell Degerness wrote:
> The issue, Sage, is that we have to deal with the cluster being
> re-expanded. If we start with 5 monitors and scale back to 3 by running
> the "ceph mon remove N" command after stopping each monitor, and don't
> restart the existing monitors, we
Precisely. This is what we need to do. It is just a case of
adjusting our process to make that possible. As I stated a couple
e-mails ago, the design of Ceph allows it, it is just a bit of a
challenge to fit it into our existing processes. It's on me now to
fix the process.
On Mon, Jun 24, 201