Hello,
new to Ceph, not new to replicated storage.
Simple test cluster with 2 identical nodes running Debian Jessie, thus ceph
0.48. And yes, I very much prefer a distro-supported package.
Single mon and osd1 on node a, osd2 on node b.
1GbE direct interlink between the nodes, used exclusively for
hi all,
official manual says,
==
STOPPING W/OUT REBALANCING
Periodically, you may need to perform maintenance on a subset of your cluster,
or resolve a problem that affects a failure
The OSD can be stopped from the host directly,
sudo stop ceph-osd id=3
I don't know if that's the 'proper' way, mind.
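As far as I know, the documented no-rebalance sequence is roughly the following (a sketch assuming Upstart hosts; osd.3 is just an example id):
ceph osd set noout            # stop the cluster marking OSDs out / backfilling during the maintenance
sudo stop ceph-osd id=3       # stop the daemon on its host (Upstart syntax)
# ... do the maintenance ...
sudo start ceph-osd id=3      # bring the daemon back
ceph osd unset noout          # let normal out-marking resume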
On 2013-12-16 09:40, david.zhang...@gmail.com wrote:
ceph osd start osd.{num}
=
On 12/16/2013 10:48 AM, James Pearce wrote:
The OSD can be stopped from the host directly,
sudo stop ceph-osd id=3
Or use:
service ceph stop osd.3 (depending on whether you use Upstart or not).
The manual is not correct in this case, I think.
Wido
I don't know if that's the 'proper' way, mind.
On 2
2013/11/7 Kyle Bader :
> Ceph handles its own logs vs using syslog, so I think you're going to have to
> write to tmpfs and have a logger ship it somewhere else quickly. I have a
> feeling Ceph logs will eat a USB device alive, especially if you have to
> crank up debugging.
I wasn't aware of this.
Thanks for the info everyone.
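A rough sketch of that approach (the tmpfs size and the syslog options here are assumptions to adapt, not a tested config):
# /etc/fstab: keep Ceph logs off the USB device (size is a guess; raise it if you turn up debugging)
tmpfs   /var/log/ceph   tmpfs   defaults,size=512m   0 0

# ceph.conf [global]: also mirror logs to syslog so a remote logger can ship them off the box
log to syslog = true
err to syslog = true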
On Dec 16, 2013 1:23 AM, "Kyle Bader" wrote:
> >> Has anyone tried scaling a VMs io by adding additional disks and
> >> striping them in the guest os? I am curious what effect this would have
> >> on io performance?
>
> > Why would it? You can also change the stripe
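If it's the RBD striping parameters being referred to, those are set when the image is created, something like this (a sketch; the pool/image names and values are examples only):
# format 2 image with explicit striping; tune stripe-unit/count for the workload
rbd create volumes/testimg --size 10240 --image-format 2 \
    --stripe-unit 65536 --stripe-count 16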
Hi,
I have started stress testing the ZFS OSD backend on ceph 0.72.1 that I
built with ZFS support. Below is one of the issues I have been looking
at this morning. My main question is: should I just open "new issues"
at http://tracker.ceph.com/ as I find these problems in my testing?
2013-1
Hi,
Sorry to revive this old thread, but I wanted to update you on the current
pains we're going through related to clients' nproc (and now nofile)
ulimits. When I started this thread we were using RBD for Glance images
only, but now we're trying to enable RBD-backed Cinder volumes and are not
rea
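The obvious mitigation is to raise the limits for the client processes, roughly like this (a sketch; the file name, user name, and values are placeholders for whatever your services run as):
# /etc/security/limits.d/90-ceph-clients.conf on the hypervisor / cinder nodes
cinder    soft    nproc     65536
cinder    hard    nproc     65536
cinder    soft    nofile    65536
cinder    hard    nofile    65536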
On Mon, Dec 16, 2013 at 11:08 AM, Dan van der Ster
wrote:
> Hi,
>
> Sorry to revive this old thread, but I wanted to update you on the current
> pains we're going through related to clients' nproc (and now nofile)
> ulimits. When I started this thread we were using RBD for Glance images
> only, bu
On Mon, Dec 16, 2013 at 4:35 AM, Gandalf Corvotempesta
wrote:
> 2013/11/7 Kyle Bader :
>> Ceph handles its own logs vs using syslog, so I think you're going to have to
>> write to tmpfs and have a logger ship it somewhere else quickly. I have a
>> feeling Ceph logs will eat a USB device alive, espec
On Dec 16, 2013 8:26 PM, Gregory Farnum wrote:
>
> On Mon, Dec 16, 2013 at 11:08 AM, Dan van der Ster
> wrote:
> > Hi,
> >
> > Sorry to revive this old thread, but I wanted to update you on the current
> > pains we're going through related to clients' nproc (and now nofile)
> > ulimits. When I s
On 12/16/2013 2:36 PM, Dan Van Der Ster wrote:
On Dec 16, 2013 8:26 PM, Gregory Farnum wrote:
On Mon, Dec 16, 2013 at 11:08 AM, Dan van der Ster
wrote:
Hi,
Sorry to revive this old thread, but I wanted to update you on the current
pains we're going through related to clients' nproc (and now
Are there any docs on how I can repair the inconsistent pgs? Or any thoughts on
the crash of OSD? Thanks!
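For reference, the repair path I've been trying is roughly this (a sketch; 2.5f stands in for a real pg id taken from ceph health detail):
# list the problem pgs
ceph health detail
# inspect one of them
ceph pg 2.5f query
# ask the primary OSD to scrub and repair it
ceph pg repair 2.5f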
From: Jeppesen, Nelson
Sent: Thursday, December 12, 2013 10:58 PM
To: 'ceph-users@lists.ceph.com'
Subject: Ceph incomplete pg
I have an issue with incomplete pgs; I've tried repairing them but
Hi,
I am trying to add a mon host using ceph-deploy mon create kvm2, but it's not
working and it gives me an error.
[kvm2][DEBUG ] determining if provided host has same hostname in remote
[kvm2][DEBUG ] get remote short hostname
[kvm2][DEBUG ] deploying mon to kvm2
[kvm2][DEBUG ] get remote short hostnam
So it sounds like there is only interest from two people. FYI, I was looking
at sometime in mid-January.
Andrew
Mirantis
On Wed, Dec 11, 2013 at 4:59 PM, Andrew Woodward wrote:
> I'd like to get a pulse on any interest in having a meetup in the SF South
> bay (Mountain View CA, USA).
>
> --
> Andrew
[kvm2][WARNIN] kvm2 is not defined in `mon initial members`
The above is why. When you run 'ceph-deploy new', pass it all the machines you
intend to use as mons, e.g.
'ceph-deploy new mon1 mon2 mon3'
Or alternatively, you can modify the ceph.conf file in your bootstrap directory.
And the mon and t
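That edit would look roughly like this (the host names and IPs below are placeholders for your environment):
[global]
mon initial members = mon1, kvm2
mon host = 192.168.1.10, 192.168.1.11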
This indicates you have multiple networks on the new mon host, but no
definition in your ceph.conf as to which network is public.
In your ceph.conf, add:
public network = 192.168.1.0/24
cluster network = 192.168.2.0/24
(Fix the subnet definitions for your environment)
Then, re-try your new mon
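Something along these lines should do it (a sketch, assuming ceph-deploy is managing the config):
# push the edited ceph.conf to the new mon host, then retry the mon creation
ceph-deploy --overwrite-conf config push kvm2
ceph-deploy mon create kvm2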
Hi Andrew,
That would be motivation enough for me to want to meet these two people over a
beer or a dinner :-) It gets more complicated to do that when there are more
than ten.
Cheers
On 16/12/2013 22:28, Andrew Woodward wrote:
> So it sounds like there is only interest from two people. FYI, I was
Hey Guys,
Ross and I were discussing a few pages on Ceph.com that we thought
needed an update and I figured it might be a good idea to go through
and audit Ceph.com in general, just to get an idea of what we're up
against. I started a simple pad in case the Trello board is a bit too
daunting.
An
Karan,
This all looks great. I'd encourage you to submit some of this information
to the Ceph docs; some of the OpenStack integration docs are getting a
little dated.
Andrew
On Fri, Dec 6, 2013 at 12:24 PM, Karan Singh wrote:
> Hello Cephers
>
> I would like to say a BIG THANKS to ceph commu
Hi Don,
Well, the result is the same even after
ceph-deploy new kvm2
Br.
Umar
On Tue, Dec 17, 2013 at 2:35 AM, Don Talton (dotalton)
wrote:
> [kvm2][WARNIN] kvm2 is not defined in `mon initial members`
>
>
>
> The above is why. When you run ‘ceph-deploy new’, pass it all the machines
> you inten
Hi Michael,
I have only a single interface (192.168.1.x) on my Ceph hosts. What do I
need to define then?
Br.
Umar
On Tue, Dec 17, 2013 at 2:37 AM, Michael Kidd wrote:
> This indicates you have multiple networks on the new mon host, but no
> definition in your ceph.conf as to which network is pub
Thanks for your reply.
root@rceph0:~# radosgw-admin zone get --name client.radosgw.us-west-1
{ "domain_root": ".us-west.rgw.root",
"control_pool": ".us-west.rgw.control",
"gc_pool": ".us-west.rgw.gc",
"log_pool": ".us-west.log",
"intent_log_pool": ".us-west.intent-log",
"usage_log_pool":
On Mon, Dec 16, 2013 at 8:22 PM, lin zhou 周林 wrote:
> Thanks for your reply.
> root@rceph0:~# radosgw-admin zone get --name client.radosgw.us-west-1
> { "domain_root": ".us-west.rgw.root",
> "control_pool": ".us-west.rgw.control",
> "gc_pool": ".us-west.rgw.gc",
> "log_pool": ".us-west.log",
I am currently trying to figure out how to debug pg issues myself, and
the debugging documentation I have found has not been that helpful. In
my case the underlying problem is probably ZFS which I am using for my
OSDs, but it would be nice to be able to recover what I can. My health
output is
Hello,
I have a 2-node Ceph cluster. I rebooted both hosts just to test whether the
cluster would keep working after a reboot, and the result was that the
cluster was unable to start.
Here is the ceph -s output:
health HEALTH_WARN 704 pgs stale; 704 pgs stuck stale; mds cluster is
degraded; 1/1 in
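The first things I plan to check (a rough sketch, assuming the stock service scripts):
# did the Ceph daemons actually start on each host after the reboot?
sudo service ceph status
# if not, start everything defined in ceph.conf on this host
sudo service ceph start
# then see exactly what the cluster is complaining about
ceph health detail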
Hello,
I've been doing a lot of reading and am looking at the following design
for a storage cluster based on Ceph. I will address all the likely
knee-jerk reactions and reasoning below, so hold your guns until you've
read it all. I also have a number of questions I've not yet found the
answer to