Hi,
The SMART values are looking good as far as I can see, and if I mark the
OSD down and let the data rebuild onto other OSDs, I run into the same
problem on another OSD.
Jonas
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On behalf of Udo Lemb
I still haven't had any luck figuring out what is causing the
authentication failure, so in order to get the cluster back, I tried:
1. stop all daemons (mon & osd)
2. change the configuration to disable cephx
3. start mon daemons (3 in total)
4. start osd daemons one by one
After fini
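For reference, "disable cephx" in step 2 usually comes down to auth settings along
these lines in ceph.conf on every node before restarting the daemons (a minimal
sketch; these are the standard auth options, not taken from this cluster's actual
config):

    [global]
    # turn off authentication cluster-wide; restart mons and osds afterwards
    auth cluster required = none
    auth service required = none
    auth client required = none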
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph / Dell hardware recommendation
On 01/15/2014 08:29 AM, Derek Yarnell wrote:
> On 1/15/14, 9:20 AM, Mark Nelson wrote:
>> I guess I'd probably look at the R520 in an 8 bay configuration with an
>> E5-2407 and 4 1TB data disks per chas
Le 16/01/2014 10:16, NEVEU Stephane a écrit :
Thank you all for the comments.
So, to sum up a bit, it's a reasonable compromise to buy:
2 x R720 with 2 x Intel E5-2660v2, 2.2GHz, 25M Cache, 48GB RAM, 2 x 146GB SAS
6Gbps 2.5-in 15K RPM hard drives (hot-plug) in the Flex Bay for the OS, and 24 x 1.2TB
SAS 6G
On Wed, Jan 15, 2014 at 5:42 AM, Sage Weil wrote:
>
> [...]
>
> * rbd: support for 4096 mapped devices, up from ~250 (Ilya Dryomov)
Just a note: v0.75 simply adds some of the infrastructure; the actual
support for this will arrive with kernel 3.14. The theoretical limit
is 65536 mapped devices,
Greetings denizens of the Ceph universe!
As you may have noticed, Inktank has announced the next "Ceph Day"
which will be held in Frankfurt, Germany on February 27th [0]. While
Inktank may be the one putting on these events, our hope is that the
real stars will be from the community. We already
Hi Guang,
On Thu, 16 Jan 2014, Guang wrote:
> I still haven't had any luck figuring out what is causing the
> authentication failure, so in order to get the cluster back, I tried:
> 1. stop all daemons (mon & osd)
> 2. change the configuration to disable cephx
> 3. start mon daemons (3
For our ~400 TB Ceph deployment, we bought:
(2) R720s w/ dual X5660s and 96 GB of RAM
(1) 10Gb NIC (2 interfaces per card)
(4) MD1200s per machine
...and a boat load of 4TB disks!
In retrospect, I almost certainly would have gotten more servers. During
heavy
Hello,
On two separate occasions I have lost power to my Ceph cluster. Both times, I
had trouble bringing the cluster back to good health. I am wondering if I need
to configure something that would solve this problem?
After powering back up the cluster, "ceph health" revealed stale pgs, mds
clus
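To see exactly which PGs are stale and which daemons are affected, commands along
these lines are the usual starting point (a sketch, assuming a default admin keyring
and paths):

    ceph -s              # overall cluster status
    ceph health detail   # lists the stale/degraded PGs and the OSDs they map to
    ceph osd tree        # shows which OSDs are marked down or out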
Hello,
OSDs are not starting on any of the nodes after I upgraded from ceph 0.67.4 to
Emperor 0.72.2. When I try to start an OSD I see the verbose output below; the
same error comes up on all nodes when starting OSDs.
[root@ceph2 ~]# service ceph -v start osd.20
/usr/bin/ceph-conf -c /etc/ceph/ceph.conf -n
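When an upgraded OSD refuses to start, the daemon's own log and the installed
binary version are usually the next things to check, roughly as follows (paths
assume a default install; osd.20 is just the OSD from the example above):

    ceph-osd --version                         # confirm the binary really is 0.72.2
    tail -n 100 /var/log/ceph/ceph-osd.20.log  # the OSD log usually has the real error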
Hi Patrick,
I would be happy to present the Ceph User Committee for the first time :-)
Cheers
On 16/01/2014 18:30, Patrick McGarry wrote:
> Greetings denizens of the Ceph universe!
>
> As you may have noticed, Inktank has announced the next "Ceph Day"
> which will be held in Frankfurt, German
Hi Gagan,
You have 30 OSDs with 12 pools and only 6048 PGs. Some of your pools must
have pretty low PG numbers. I think it looks for a 'skew' in the numbers and
issues a warning now, as well as when you have pools with 'too many'
objects per placement group.
Run: ~$ ceph osd dump | grep 'pg_num'
A
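For comparison, the commonly cited rule of thumb from the placement-group docs
(a rough guideline, not a hard requirement) works out like this:

    ~$ ceph osd dump | grep 'pg_num'   # shows pg_num / pgp_num for every pool
    # total PGs across data-holding pools ~= (num_osds * 100) / replica_count,
    # rounded up to a power of two; e.g. 30 OSDs * 100 / 3 replicas = 1000 -> 1024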
On Thu, 16 Jan 2014 15:51:17 +0200 Ilya Dryomov wrote:
> On Wed, Jan 15, 2014 at 5:42 AM, Sage Weil wrote:
> >
> > [...]
> >
> > * rbd: support for 4096 mapped devices, up from ~250 (Ilya Dryomov)
>
> Just a note, v0.75 simply adds some of the infrastructure, the actual
> support for this will a
Things to check:
1. Check the network: whether the IP address has changed and whether the cable is plugged in.
2. Check the data directory (/var/lib/ceph/); usually you would mount the
OSD data directory on a separate disk.
It would be helpful if you could post the Ceph logs here (see the quick checks sketched below).
Best Regards
Wei
F
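A minimal sketch of those checks on an OSD node (the interface name and OSD id
are examples, not taken from this cluster):

    ip addr                               # 1. has the node's IP address changed?
    ethtool eth0 | grep 'Link detected'   # 1. is the link up on the cluster NIC?
    mount | grep /var/lib/ceph/osd        # 2. are the OSD data partitions mounted?
    ls /var/lib/ceph/osd/ceph-20/         # 2. is the data directory still populated?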
> On two separate occasions I have lost power to my Ceph cluster. Both times, I
> had trouble bringing the cluster back to good health. I am wondering if I
> need to configure something that would solve this problem?
No special configuration should be necessary; I've had the unfortunate
luck of wit