On 7/02/2018 8:23 AM, Kyle Hutson wrote:
> We had a 26-node production ceph cluster which we upgraded to Luminous
> a little over a month ago. I added a 27th node with Bluestore and
> didn't have any issues, so I began converting the others, one at a
> time. The first two went off pretty smoothly,
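A rough sketch of the per-OSD FileStore-to-BlueStore swap such a rolling conversion usually follows on Luminous (OSD id and device path below are placeholders, and ceph-volume's --osd-id option is assumed to be present in your point release):

  # drain the OSD first and wait for backfill to finish
  ceph osd out 12
  systemctl stop ceph-osd@12
  # destroy (not purge) so the id and CRUSH position are reused
  ceph osd destroy 12 --yes-i-really-mean-it
  # wipe the old FileStore device and recreate it as BlueStore with the same id
  ceph-volume lvm zap /dev/sdc --destroy
  ceph-volume lvm create --bluestore --data /dev/sdc --osd-id 12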
Afaik you can configure the use of an rbd device like below.
- Am I correct in assuming that the first one is not recommended because
it can use some caching? (I thought I noticed a speed difference between
these 2, and assumed it was related to caching).
- I guess in both cases the kernel module
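The quoted configs didn't survive the snippet, so this is only a guess at the two variants usually compared here: the kernel client versus librbd through rbd-nbd (pool/image names below are hypothetical). Only the librbd path honours the client-side "rbd cache" settings, which would explain a speed difference:

  # variant 1: kernel RBD client (krbd), no librbd cache involved
  rbd map mypool/myimage
  mount /dev/rbd/mypool/myimage /mnt/data

  # variant 2: librbd via rbd-nbd, uses the librbd cache ("rbd cache = true");
  # rbd-nbd prints the nbd device it attached to
  rbd-nbd map mypool/myimage
  mount /dev/nbd0 /mnt/data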
I have a machine here mounting a Ceph RBD from luminous 12.2.2 locally,
running linux-generic-hwe-16.04 (4.13.0-32-generic).
Works fine, except that it does not support the latest features: I had
to disable exclusive-lock,fast-diff,object-map,deep-flatten on the
image. Otherwise it runs well.
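For anyone hitting the same thing, the feature bits can be dropped per image before mapping; a minimal sketch with a hypothetical pool/image name:

  # drop the features the 4.13 krbd client cannot handle, then map
  rbd feature disable mypool/myimage exclusive-lock object-map fast-diff deep-flatten
  rbd map mypool/myimage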
Hi!
I just want to thank all organizers and speakers for the awesome Ceph
Day at Darmstadt, Germany yesterday.
I learned of some cool stuff I'm eager to try out (NFS-Ganesha for RGW,
openATTIC, ...). Organization and food were great, too.
Cheers,
Martin
On Thu, Feb 8, 2018 at 11:20 AM, Martin Emrich
wrote:
> I have a machine here mounting a Ceph RBD from luminous 12.2.2 locally,
> running linux-generic-hwe-16.04 (4.13.0-32-generic).
>
> Works fine, except that it does not support the latest features: I had to
> disable exclusive-lock,fast-diff,object-map,deep-flatten on the
> image. Otherwise it runs well.
Hi Christian,
First of all, thanks for all the great answers and sorry for the late
reply.
On Tue, 2018-02-06 at 10:47 +0900, Christian Balzer wrote:
> Hello,
>
> > I'm not a "storage-guy" so please excuse me if I'm missing /
> > overlooking something obvious.
> >
> > My question is in the
On 08.02.18 at 11:50, Ilya Dryomov wrote:
On Thu, Feb 8, 2018 at 11:20 AM, Martin Emrich
wrote:
I have a machine here mounting a Ceph RBD from luminous 12.2.2 locally,
running linux-generic-hwe-16.04 (4.13.0-32-generic).
Works fine, except that it does not support the latest features: I had to
disable exclusive-lock,fast-diff,object-map,deep-flatten on the image.
Otherwise it runs well.
2018-02-08 11:20 GMT+01:00 Martin Emrich :
> I have a machine here mounting a Ceph RBD from luminous 12.2.2 locally,
> running linux-generic-hwe-16.04 (4.13.0-32-generic).
>
> Works fine, except that it does not support the latest features: I had to
> disable exclusive-lock,fast-diff,object-map,deep-flatten on the image.
> Otherwise it runs well.
On Thu, Feb 8, 2018 at 12:54 PM, Kevin Olbrich wrote:
> 2018-02-08 11:20 GMT+01:00 Martin Emrich :
>>
>> I have a machine here mounting a Ceph RBD from luminous 12.2.2 locally,
>> running linux-generic-hwe-16.04 (4.13.0-32-generic).
>>
>> Works fine, except that it does not support the latest features: I had to
>> disable exclusive-lock,fast-diff,object-map,deep-flatten on the image.
>> Otherwise it runs well.
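Since the same feature-disable dance keeps coming up in this thread: another option is to make new images krbd-friendly by default via ceph.conf on the clients (a sketch; 3 means layering + striping, adjust to taste):

  [client]
  # rbd default features: 1=layering, 2=striping, 4=exclusive-lock,
  # 8=object-map, 16=fast-diff, 32=deep-flatten
  rbd default features = 3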
I have the same problem.
Configuration:
4 HW servers Debian GNU/Linux 9.3 (stretch)
Ceph luminous 12.2.2
Now I installed ceph version 10.2.10 on these servers, and OSD activation works fine.
>Wednesday, February 7, 2018, 19:54 +03:00 from "Cranage, Steve"
>:
>
>Greetings ceph-users. I have been trying to b
Hi all.
We use RBDs as storage of data for applications.
If the application itself can do replication (for example Cassandra),
we want to benefit (HA) from replication at the application level.
But we can't if all RBDs are in the same pool.
If all RBDs are in the same pool, then all RBDs are tied to one set of
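One way to get independent failure domains per Cassandra replica (a sketch, assuming you have separate CRUSH subtrees such as rack1/rack2; all names are placeholders) is one pool per subtree, each with its own rule:

  # one CRUSH rule per subtree, so the pools never share OSDs
  ceph osd crush rule create-replicated cass-rack1 rack1 host
  ceph osd crush rule create-replicated cass-rack2 rack2 host
  # one pool per rule, one RBD per Cassandra node
  ceph osd pool create cassandra-a 128 128 replicated cass-rack1
  ceph osd pool create cassandra-b 128 128 replicated cass-rack2
  rbd create cassandra-a/node1-data --size 500G
  rbd create cassandra-b/node2-data --size 500G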
The only clue I have run across so far is that the OSD daemons ceph-deploy
attempts to create on the failing OSD server (osd3) reuse two of the osd ids
just created on the last OSD server deployed (osd2). So from the osd
tree listing: osd1 has osd.0, osd.1, osd.2 and osd.3. The next server,
Aleksei-
This won't be a Ceph answer. On most virtualization platforms you will have a
type of disk called ephemeral; it is usually storage composed of disks on
the hypervisor, possibly RAID with parity, and usually not backed up. You may
want to consider running your Cassandra instances on the ephemeral
Hi list,
I am testing a cache tier in writeback mode.
The test result is confusing: the write performance is worse than without a
cache tier.
The hot storage pool is an all-SSD pool and the cold storage pool is an all-HDD
pool. I also created a hddpool and an ssdpool with the same crush rule as the
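A minimal sketch of the usual writeback-tier wiring, with placeholder pool names; in a setup like this the hit_set and target_max_* knobs tend to dominate write behaviour, so they are worth checking first:

  ceph osd tier add hddpool ssdcache
  ceph osd tier cache-mode ssdcache writeback
  ceph osd tier set-overlay hddpool ssdcache
  # the tiering agent does nothing useful without these
  ceph osd pool set ssdcache hit_set_type bloom
  ceph osd pool set ssdcache hit_set_count 12
  ceph osd pool set ssdcache hit_set_period 14400
  ceph osd pool set ssdcache target_max_bytes 500000000000
  ceph osd pool set ssdcache cache_target_dirty_ratio 0.4
  ceph osd pool set ssdcache cache_target_full_ratio 0.8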
Hey all,
We are trying to get an erasure-coded cluster up and running, but we are having
a problem getting the cluster to remain up if we lose an OSD host.
Currently we have 6 OSD hosts with 6 OSDs apiece. I'm trying to build an EC
profile and a crush rule that will allow the cluster to con
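For what it's worth, a sketch of an EC profile with host as the failure domain (names and pg counts are placeholders); with 6 hosts, k+m=6 means a shard lost with a host cannot be re-created on another host, and min_size then decides whether the pool stays available while degraded:

  ceph osd erasure-code-profile set ec-k4m2 k=4 m=2 crush-failure-domain=host
  ceph osd pool create ecpool 256 256 erasure ec-k4m2
  # Luminous defaults min_size to k+1; with k=4 that is 5, so a second
  # host failure would take the pool inactive
  ceph osd pool get ecpool min_size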
Hi Everyone,
I recently added an OSD to an active+clean Jewel (10.2.3) cluster and
was surprised to see a peak of 23% objects degraded. Surely this should
be at or near zero and the objects should show as misplaced?
I've searched and found Chad William Seys' thread from 2015 but didn't
see a
Hello,
On Thu, 8 Feb 2018 10:58:43 + Patrik Martinsson wrote:
> Hi Christian,
>
> First of all, thanks for all the great answers and sorry for the late
> reply.
>
You're welcome.
>
> On Tue, 2018-02-06 at 10:47 +0900, Christian Balzer wrote:
> > Hello,
> >
> > > I'm not a "storage-guy" so please excuse me if I'm missing /
> > > overlooking something obvious.