Hi guys,
our ceph cluster is performing far below what it could, given the disks we
are using. We could narrow it down to the storage controller (LSI SAS3008 HBA)
in combination with a SAS expander. Yesterday we had a meeting with our
hardware reseller and sales representatives of the hardwar
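A quick way to check whether the HBA/expander path itself is the bottleneck is to benchmark one raw disk directly and then several in parallel. A minimal sketch with fio (assuming fio is installed; /dev/sdX is a placeholder for an unused disk, and the write test is destructive):

# Sequential 4M writes against a single raw disk behind the expander
# (destructive on the target device - use an unused disk!)
fio --name=seqwrite --filename=/dev/sdX --direct=1 --ioengine=libaio \
    --rw=write --bs=4M --iodepth=32 --runtime=60 --time_based

# Repeat against several disks at once; if the aggregate throughput
# stops scaling, the HBA or expander link is the likely limit.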
Hi folks,
I want to use the community repository http://download.ceph.com/debian-luminous
for my luminous cluster instead of the packages provided by Ubuntu itself. But
apparently only the ceph-deploy package is available for bionic (Ubuntu 18.04).
All packages exist for trusty, though. Is this
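For reference, pulling packages from the community repository usually comes down to the sketch below (hedged: whether download.ceph.com actually publishes luminous builds for bionic has to be verified against the repository itself):

# Add the release key and the luminous repository for this Ubuntu codename
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
echo "deb https://download.ceph.com/debian-luminous/ $(lsb_release -sc) main" \
    | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt update
apt-cache policy ceph-osd   # shows which versions/origins are actually offered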
Hi folks,
I have a Nautilus 14.2.1 cluster with a non-default cluster name (ceph_stag
instead of ceph). I set “cluster = ceph_stag” in /etc/ceph/ceph_stag.conf.
ceph-volume is using the correct config file but does not use the specified
cluster name. Did I hit a bug or do I need to define the cl
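As far as I understand it, ceph-volume takes the cluster name from its own --cluster argument rather than deriving it from the config file, so something like the sketch below may be needed (hedged: custom cluster names were being deprecated around this release, so this may not be supported end to end; /dev/sdX is a placeholder):

# Pass the cluster name to ceph-volume explicitly instead of relying on the conf file
ceph-volume --cluster ceph_stag lvm create --data /dev/sdX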
From: John Petrini
Date: Friday, 7 June 2019 at 15:49
To: "Stolte, Felix"
Cc: Sinan Polat , ceph-users
Subject: Re: [
Kind regards,
Sinan Polat
> On 7 June 2019 at 12:47, "Stolte, Felix" wrote:
>
>
> Hi Sinan,
>
> that would be great. The numbers should differ a lot, since you have an all
> flash pool, but it would be inte
command on my cluster?
Sinan
> On 7 Jun 2019 at 08:52, Stolte, Felix wrote the following:
>
> I have no performance data before we migrated to bluestore. You should
start a separate topic regarding your question.
>
> Could anyone wit
d know what the difference
in IOPS is? And is the advantage bigger or smaller when your SATA HDDs are
slower?
-----Original Message-----
From: Stolte, Felix [mailto:f.sto...@fz-juelich.de]
Sent: Thursday, 6 June 2019 10:47
To: ceph-users
Subject: [ceph-users] Expected
Hello folks,
we are running a ceph cluster on Luminous consisting of 21 OSD nodes with nine 8 TB
SATA drives and 3 Intel 3700 SSDs for Bluestore WAL and DB (1:3 ratio). OSDs
have 10 Gb for public and cluster network. The cluster has been running stable for
over a year. We didn't have a closer look at IO un
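For comparing raw cluster numbers between setups, rados bench against a throwaway pool is a common yardstick. A sketch (the pool name testbench is a placeholder and should be a scratch pool only):

ceph osd pool create testbench 128 128
rados bench -p testbench 60 write --no-cleanup   # 60s of 4M writes
rados bench -p testbench 60 seq                  # sequential reads of the bench objects
rados bench -p testbench 60 rand                 # random reads
rados -p testbench cleanup
ceph osd pool delete testbench testbench --yes-i-really-really-mean-it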
Hi,
is anyone running an active-passive nfs-ganesha cluster with a cephfs backend and
using the rados_kv recovery backend? My setup runs fine, but takeover is giving
me a headache. On takeover I see the following messages in ganesha's log file:
29/05/2019 15:38:21 : epoch 5cee88c4 : cephgw-e2-1 :
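In case it helps anyone reproduce the setup, a rados_kv recovery configuration is typically declared along these lines (a sketch, not the poster's actual config; block and option names should be checked against your nfs-ganesha version, pool/namespace are placeholders, and rados_ng is sometimes suggested instead of rados_kv for cleaner takeover):

# Illustration only - relevant sections appended to /etc/ganesha/ganesha.conf
cat >> /etc/ganesha/ganesha.conf <<'EOF'
NFSv4 {
    RecoveryBackend = rados_kv;   # rados_ng may handle takeover more gracefully
    Grace_Period = 90;
}
RADOS_KV {
    pool = nfs-ganesha;           # placeholder pool name
    namespace = ganesha;          # placeholder namespace
}
EOF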
On 08.05.19 at 18:33, "Patrick Donnelly" wrote:
On Wed, May 8, 2019 at 4:10 AM Stolte, Felix wrote:
>
> Hi folks,
>
> we are running a luminous cluster and using the cephfs for fileservices.
We use Tivoli Storage Manager to back up all data in the ceph filesystem
On Wed, May 8, 2019 at 1:10 PM Stolte, Felix wrote:
>
> Hi folks,
>
> we are running a luminous cluster and using the cephfs for fileservices.
We use Tivoli Storage Manager to back up all data in the ceph filesystem to tape
for disaster recovery. Backup runs on two dedicated serv
Hi folks,
we are running a luminous cluster and using the cephfs for fileservices. We use
Tivoli Storage Manager to back up all data in the ceph filesystem to tape for
disaster recovery. Backup runs on two dedicated servers, which mount the
cephfs via a kernel mount. In order to complete the Ba
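As an aside that sometimes helps backup hosts on cephfs (hedged: this is a generic cephfs feature, not something the thread states is in use; paths are placeholders): the MDS exposes recursive statistics as virtual xattrs, so a directory can be checked for changes without walking it:

# Recursive change time and entry count of a directory, served by the MDS
getfattr -n ceph.dir.rctime /mnt/cephfs/some/dir
getfattr -n ceph.dir.rentries /mnt/cephfs/some/dir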
Hi folks,
we are using nfs-ganesha to expose cephfs (Luminous) to nfs clients. I want to
make use of snapshots, but limit the creation of snapshots to ceph admins. A
while ago I read about cephx capabilities which allow/deny the creation of
snapshots, but I can't find the info anymore. Can some
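For anyone searching the same thing: I believe the capability in question is the 's' flag on MDS caps, which gates snapshot creation and deletion (hedged: as far as I know it arrived around Mimic, so it may not apply to Luminous; client names and pool below are placeholders):

# NFS client key without the 's' flag - cannot create or delete snapshots
ceph auth caps client.nfs mds 'allow rw' mon 'allow r' osd 'allow rw pool=cephfs_data'

# Admin-side key that is also allowed to manage snapshots
ceph auth caps client.snapadmin mds 'allow rws' mon 'allow r' osd 'allow rw pool=cephfs_data'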
Hello cephers,
is anyone using Fujitsu hardware for Ceph OSDs with the PRAID EP400i
RAID controller in JBOD mode? We have three identical servers with
identical disk placement. The first three slots are SSDs for journaling and the
remaining nine slots hold SATA disks. The problem is that in Ubuntu (and
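When slot ordering is the problem, mapping kernel names to physical paths and stable IDs is usually the first step; a generic (not Fujitsu-specific) sketch:

# Show which controller/slot path and stable ID each disk sits behind
ls -l /dev/disk/by-path/
ls -l /dev/disk/by-id/
lsblk -o NAME,HCTL,SIZE,MODEL,SERIAL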
: Friday, 11 December 2015 15:17
To: Stolte, Felix; Jens Rosenboom
Cc: ceph-us...@ceph.com
Subject: Re: AW: [ceph-users] ceph-disk list crashes in infernalis
Hi Felix,
Could you try again ? Hopefully that's the right one :-)
https://raw.githubusercontent.com/dachary
Hi Jens,
output is attached (stderr + stdout)
Regards
-----Original Message-----
From: Jens Rosenboom [mailto:j.rosenb...@x-ion.de]
Sent: Friday, 11 December 2015 09:10
To: Stolte, Felix
Cc: Loic Dachary; ceph-us...@ceph.com
Subject: Re: [ceph-users] ceph-disk list crashes in
-----Original Message-----
From: Loic Dachary [mailto:l...@dachary.org]
Sent: Friday, 11 December 2015 02:12
To: Stolte, Felix; ceph-us...@ceph.com
Subject
Wednesday, 9 December 2015 23:55
To: Stolte, Felix; ceph-us...@ceph.com
Subject: Re: AW: [ceph-users] ceph-disk list crashes in infernalis
Hi Felix,
It would be great if you could try the fix from
https://github.com/dachary/ceph/commit/7395a6a0c5776d4a92728f1abf0e8a87e5d5e4bb . It's only changin
-----Original Message-----
From: Loic Dachary [mailto:l...@dachary.org]
Sent: Tuesday, 8 December 2015 15:17
To: Stolte, Felix; ceph-us...@ceph.com
Subject: Re: [
-----Original Message-----
From: Loic Dachary [mailto:l...@dachary.org]
Sent: Tuesday, 8 December 2015 15:06
To: Stolte, Felix; ceph-us...@ceph.com
Subject: Re: [ceph-users] ceph-disk list crashes in infernalis
Hi Felix,
Coul
-----Original Message-----
From: Loic Dachary [mailto:l...@dachary.org]
Sent: Saturday, 5 December 2015 19:29
To: Stolte, Felix; ceph-us...@ceph.com
Subject: Re: AW: [ceph-users] ceph-
-----Original Message-----
From: Loic Dachary [mailto:l...@dachary.org]
Sent: Thursday, 3 December 2015 11:01
To: Stolte, Felix; ceph-us
Hi all,
I upgraded from hammer to infernalis today, and even though I had a hard time
doing so, I finally got my cluster running in a healthy state (mainly my
fault, because I did not read the release notes carefully).
But when I try to list my disks with "ceph-disk list" I get the following
Trac
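To gather more detail for debugging, running it verbosely and capturing both output streams is probably the most useful reproduction (sketch; the --verbose flag of ceph-disk is quoted from memory):

# Capture the full traceback together with debug output
ceph-disk --verbose list > ceph-disk-list.log 2>&1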
Hi all,
is anyone running nova compute on ceph OSD servers and could share their
experience?
Thanks and Regards,
Felix
Hello everyone,
we are currently testing Ceph (Hammer) and OpenStack (Kilo) on Ubuntu 14.04
LTS servers. Yesterday I tried to set up the radosgateway with keystone
integration for swift via ceph-deploy. I followed the instructions on
http://ceph.com/docs/master/radosgw/keystone/ and
http://ceph
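For reference, the keystone-related ceph.conf options end up looking roughly like the sketch below (hedged: the values are placeholders, the section name depends on how the gateway instance is named, and the exact option set differs between releases; check the radosgw keystone docs for Hammer):

# Illustration only - append to the gateway section of /etc/ceph/ceph.conf
cat >> /etc/ceph/ceph.conf <<'EOF'
[client.radosgw.gateway]
rgw keystone url = http://keystone-host:35357
rgw keystone admin token = ADMIN_TOKEN_PLACEHOLDER
rgw keystone accepted roles = admin, _member_
rgw s3 auth use keystone = true
EOF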