On Mon, Jan 28, 2019 at 7:31 AM ST Wong (ITSC) wrote:
>
> > That doesn't appear to be an error -- that's just stating that it found a
> > dead client that was holding the exclusive-lock, so it broke the dead
> > client's lock on the image (by blacklisting the client).
>
> As there is only 1 RBD
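For reference, the lock holder and any blacklist entry left behind can be checked from the command line; the image spec below ("rbd/myimage") is a hypothetical placeholder:

    rbd lock ls rbd/myimage        # show the current lock holder(s) on the image
    ceph osd blacklist ls          # list blacklisted client addresses
    rbd lock rm rbd/myimage <lock-id> <locker>   # only if a stale lock must be removed by hand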
Upgrading to 4.15.0-43-generic fixed the problem.
Best,
Martin
On Fri, Jan 25, 2019 at 9:43 PM Ilya Dryomov wrote:
>
> On Fri, Jan 25, 2019 at 9:40 AM Martin Palma wrote:
> >
> > > Do you see them repeating every 30 seconds?
> >
> > yes:
> >
> > Jan 25 09:34:37 sdccgw01 kernel: [6306813.737615]
The "rbdmap" unit needs rbdmap and fstab to be configured for each volume, what
if the map and mount are done by applications instead of the system unit? See,
we don't write each volume info into /etc/ceph/rbdmap /etc/fstab, and if the
"rbdmap" systemd unit is stopped unexpected, not by rebootin
Hi,
On 23.01.19 at 23:28, Ketil Froyn wrote:
> How is the commercial support for Ceph?
At Heinlein Support we also offer independent
Ceph consulting. We concentrate on the
German-speaking regions of Europe.
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
On Mon, Jan 28, 2019 at 4:48 AM Gao, Wenjun wrote:
>
> The "rbdmap" unit needs rbdmap and fstab to be configured for each volume,
> what if the map and mount are done by applications instead of the system
> unit? See, we don't write each volume info into /etc/ceph/rbdmap /etc/fstab,
> and if th
On 28-1-2019 02:56, Will Dennis wrote:
I mean to use CephFS for this PoC; the initial use would be to back up an
existing ZFS server with ~43TB of data (I may have to limit the backed-up data
depending on how much capacity I can get out of the OSD servers) and then share
it out via NFS as a read-only c
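If the read-only NFS export ends up being done with NFS-Ganesha and its Ceph FSAL, the export definition would look roughly like this sketch; the paths, export ID and service name are assumptions, not taken from the original mail:

    cat >> /etc/ganesha/ganesha.conf <<'EOF'
    EXPORT {
        Export_Id = 1;
        Path = "/";             # path inside CephFS to export
        Pseudo = "/cephfs";     # pseudo path seen by NFS clients
        Access_Type = RO;       # read-only, as intended above
        FSAL { Name = CEPH; }
    }
    EOF
    systemctl restart nfs-ganesha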
Hi folks, we need some help with our CephFS, all MDS daemons keep crashing:
starting mds.mds02 at -
terminate called after throwing an instance of
'ceph::buffer::bad_alloc'
what(): buffer::bad_alloc
*** Caught signal (Aborted) **
in thread 7f542d825700 thread_name:md_log_replay
ceph version 13.2.4 (b10be4
The hope is to be able to provide scale-out storage that will be performant
enough to use as a primary fs-based data store for research data (right now we
mount via NFS on our cluster nodes; we may do that with Ceph or perhaps do native
CephFS access from the cluster nodes). Right now I’m still
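For the native-access option, a kernel-client mount on a compute node would look roughly like this; monitor addresses and the secret file are placeholders:

    mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret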
On Sat, Jan 26, 2019 at 6:57 PM Marc Roos wrote:
>
> From the bucket owner's account I am trying to enable logging, but
> I don't get how this should work. I see that s3:PutBucketLogging is
> supported, so I guess this should work. How do you enable it? And how do
> you access the log?
>
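For what it's worth, the S3 call itself would look like this with the AWS CLI pointed at an RGW endpoint; the bucket names and endpoint are placeholders, and whether radosgw actually honours the request depends on the version in use:

    aws --endpoint-url http://rgw.example.com s3api put-bucket-logging \
        --bucket mybucket \
        --bucket-logging-status '{"LoggingEnabled":{"TargetBucket":"mylogbucket","TargetPrefix":"logs/"}}'
    aws --endpoint-url http://rgw.example.com s3api get-bucket-logging --bucket mybucket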
Hello cephers,
as described, we also see slow requests in our setup.
We recently updated from Ceph 12.2.4 to 12.2.10, updated Ubuntu 16.04 to the
latest patch level (with kernel 4.15.0-43) and applied Dell firmware 2.8.0.
On 12.2.5 (before updating the cluster) we had them at a frequency of 10m
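For anyone chasing the same symptom, slow requests can usually be narrowed down via the OSD admin socket; "osd.12" below is a placeholder id:

    ceph health detail                       # which OSDs are reporting slow requests right now
    ceph daemon osd.12 dump_ops_in_flight    # ops currently stuck on that OSD
    ceph daemon osd.12 dump_historic_ops     # recently completed slow ops with per-step timings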
Hi professors,
Recently I have been planning a big adaptation of our local small-scale Ceph
cluster. The job mainly includes two parts:
(1) MDS metadata: switch the metadata storage medium to SSD.
(2) OSD BlueStore WAL & DB: switch the WAL & DB storage medium to SSD.
Now we are doing some research and
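A rough command-level sketch of both parts, assuming the SSDs are reported with device class "ssd", the metadata pool is named "cephfs_metadata", and the device paths are placeholders:

    # (1) pin the CephFS metadata pool to SSD-backed OSDs via a CRUSH rule
    ceph osd crush rule create-replicated ssd-rule default host ssd
    ceph osd pool set cephfs_metadata crush_rule ssd-rule
    # (2) put the BlueStore WAL/DB on SSD when (re)deploying an OSD
    ceph-volume lvm create --bluestore --data /dev/sdb \
        --block.db /dev/nvme0n1p1 --block.wal /dev/nvme0n1p2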