Hi there,
I have updated my Ceph cluster from Luminous to 14.2.1 and whenever I run a
"ceph tell mon.* version"
I get the correct versions from all monitors except mon.5
For mon.5 I get the error:
Error ENOENT: problem getting command descriptions from mon.5
mon.5: problem getting command desc
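For completeness, a rough and untested sketch of how I am trying to narrow this down (mon.5 is the failing monitor from above):
# query the one monitor directly instead of all of them
ceph tell mon.5 version
# on the host running mon.5, ask its admin socket directly; this does not
# depend on fetching command descriptions over the network
ceph daemon mon.5 version
ceph daemon mon.5 mon_status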
Hi there,
yesterday I upgraded my Ceph cluster from Luminous to Nautilus, and since
then I have been getting the message "xxx pgs not deep-scrubbed in time".
My deep scrubs were okay before; I have a deep scrub
interval of 6 weeks:
osd_deep_scrub_interval = 3628800
and I had no warning.
Since yesterday
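For reference, a rough and untested sketch of what I am trying, assuming the Nautilus-style central config store is in use (3628800 is my own 6-week value):
# make the interval known cluster-wide (mon/mgr included), not only in
# the [osd] section of ceph.conf, since the warning is computed from it
ceph config set global osd_deep_scrub_interval 3628800
# list the PGs currently flagged as overdue
ceph health detail | grep 'not deep-scrubbed since'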
> >head candidate had a read error
> >
> >When I check dmesg on the osd node I see:
> >
> >blk_update_request: critical medium error, dev sdX, sector 123
> >
> >I will also see a few uncorrected read errors in smartctl.
> >
> >Info:
> >Ce
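A rough, untested sketch of the checks that seem sensible here (the PG id, pool and /dev/sdX are placeholders):
# find the PG and object the read error was reported for
ceph health detail | grep inconsistent
rados list-inconsistent-obj <pgid> --format=json-pretty
# ask Ceph to repair the inconsistent PG from a healthy copy
ceph pg repair <pgid>
# confirm the medium errors on the disk itself
smartctl -a /dev/sdX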
eep-scrub comes back as inconsistent, but doing another
> manual scrub comes back as fine and clear each time.
>
> Not sure if related or not..
>
> On Wed, 7 Nov 2018 at 11:57 PM, Christoph Adomeit <
> christoph.adom...@gatworks.de> wrote:
>
> > Hello together,
>
Hello everyone,
we have upgraded to 12.2.9 because it was in the official repos.
Right after the update and some scrubs we started having issues.
This morning after regular scrubs we had around 10% of all pgs inconsistent:
pgs: 4036 active+clean
380 active+clean+inconsistent
After repairing
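For the record, a rough and untested sketch of repairing them in bulk (the awk field assumes the usual "pg X.Y is active+clean+inconsistent ..." wording of ceph health detail):
ceph health detail | awk '/active\+clean\+inconsistent/ {print $2}' | \
  while read pg; do ceph pg repair "$pg"; done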
Hi there,
I noticed that Luminous 12.2.3 has already been released.
Is there any changelog for this release?
Thanks
Christoph
at 09:57:16AM +0200, Christoph Adomeit wrote:
> Hi there,
>
> is it possible to move WAL and DB Data for Existing bluestore OSDs to
> separate partitions ?
>
> I am looking for a method to maybe take an OSD out, do some magic and move
> some data to new SSD Devices and then
Hi there,
is it possible to move WAL and DB data for existing BlueStore OSDs to separate
partitions?
I am looking for a method to maybe take an OSD out, do some magic, move some
data to new SSD devices and then take the OSD back in.
Any Ideas ?
Thanks
Christoph
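A rough, untested sketch of the direction I have in mind, assuming a release new enough to ship the bluefs-bdev-new-db/new-wal commands of ceph-bluestore-tool (the OSD id and target devices are placeholders):
systemctl stop ceph-osd@<id>
# attach a new DB / WAL device to the existing BlueStore OSD in place
ceph-bluestore-tool bluefs-bdev-new-db --path /var/lib/ceph/osd/ceph-<id> --dev-target /dev/<ssd-db-partition>
ceph-bluestore-tool bluefs-bdev-new-wal --path /var/lib/ceph/osd/ceph-<id> --dev-target /dev/<ssd-wal-partition>
systemctl start ceph-osd@<id>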
an mtime you can check via the rados tool. You could
> > write a script to iterate through all the objects in the image and find the
> > most recent mtime (although a custom librados binary will be faster if you
> > want to do this frequently).
> > -Greg
> >
>
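A rough, untested sketch of such a script (<pool>/<image> are placeholders, jq is assumed to be installed, and the exact output format of "rados stat" differs between releases):
# every data object of an image starts with its block_name_prefix
prefix=$(rbd info <pool>/<image> --format json | jq -r .block_name_prefix)
# stat each object and keep the line with the most recent mtime
rados -p <pool> ls | grep "^${prefix}" | while read obj; do
  rados -p <pool> stat "$obj"
done | sort -k3,4 | tail -1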
Hi,
No, I did not enable the journaling feature since we do not use mirroring.
On Thu, Mar 23, 2017 at 08:10:05PM +0800, Dongsheng Yang wrote:
> Did you enable the journaling feature?
>
> On 03/23/2017 07:44 PM, Christoph Adomeit wrote:
> >Hi Yang,
> >
> >I mea
rote:
> Hi Christoph,
>
> On 03/23/2017 07:16 PM, Christoph Adomeit wrote:
> >Hello List,
> >
> >i am wondering if there is meanwhile an easy method in ceph to find more
> >information about rbd-images.
> >
> >For example I am interested in the m
Hello List,
I am wondering whether there is by now an easy method in Ceph to find more
information about RBD images.
For example, I am interested in the modification time of an RBD image.
I found some posts from 2015 that say we have to go over all the objects of an
RBD image and find the newest
Jewel and with this
> email we want to share our experiences.
>
build the object map for that specific snapshot via
> "rbd object-map rebuild vm-208-disk-2@initial.20160729-220225".
>
> On Sat, Jul 30, 2016 at 7:17 AM, Christoph Adomeit
> wrote:
> > Hi there,
> >
> > I upgraded my cluster to jewel recently, built object m
Hi there,
I upgraded my cluster to Jewel recently, built object maps for every image and
recreated all snapshots to use the fast-diff feature for backups.
Unfortunately I am still getting the following error message on rbd du:
root@host:/backups/ceph# rbd du vm-208-disk-2
warning: fast-diff map is i
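Following the advice above, a rough and untested sketch of rebuilding the object map for the image and for each of its snapshots (the pool name is a placeholder, jq is assumed to be installed):
# rebuild the map of the image itself ...
rbd object-map rebuild <pool>/vm-208-disk-2
# ... and of every snapshot, since each snapshot carries its own map
rbd snap ls <pool>/vm-208-disk-2 --format json | jq -r '.[].name' | \
  while read snap; do rbd object-map rebuild <pool>/vm-208-disk-2@"$snap"; done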
> >>>>
> >>>> Jun 28 09:46:41 roc04r-sca090 kernel: [137912.685939]
> >>>>
> >>>> [] page_fault+0x28/0x30
> >>>>
> >>>> Jun 28 09:46:41 roc04r-sca090 kernel: [137912.685967]
> >>>>
> >>>
n
>
>
> - Original Message -
> > From: "Christoph Adomeit"
> > To: "Jason Dillaman"
> > Cc: ceph-us...@ceph.com
> > Sent: Friday, March 18, 2016 6:19:16 AM
> > Subject: Re: [ceph-users] Does object map feature lock snapshots ?
Thanks Jason,
this worked ...
On Fri, Mar 18, 2016 at 02:31:44PM -0400, Jason Dillaman wrote:
> Try the following:
>
> # rbd lock remove vm-114-disk-1 "auto 140454012457856" client.71260575
>
> --
>
> Jason Dillaman
>
>
> - Original Message ---
Hi,
we have upgraded our Ceph cluster from Hammer to Infernalis.
Ceph is still running as root and we are using the
"setuser match path = /var/lib/ceph/$type/$cluster-$id" directive in ceph.conf
Now we would like to change the ownership of data files and devices to ceph at
runtime.
What is t
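A rough, untested sketch of the per-OSD procedure I have in mind, assuming systemd units and the default /var/lib/ceph layout (the OSD id and journal device are placeholders):
systemctl stop ceph-osd@<id>
chown -R ceph:ceph /var/lib/ceph/osd/ceph-<id>
# raw journal partition, if the journal lives on a separate device
chown ceph:ceph /dev/<journal-partition>
systemctl start ceph-osd@<id>
# with "setuser match path" still in ceph.conf the daemon should now come
# up as user ceph; repeat OSD by OSD, and likewise for the mon directories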
Hi,
Some of my RBDs show they have an exclusive lock.
I think the lock can be stale or weeks old.
We also once added the exclusive-lock feature and later removed it.
I can see the lock:
root@machine:~# rbd lock list vm-114-disk-1
There is 1 exclusive lock on this image.
Locker
e exclusive lock feature was enabled, but
> that should have been fixed in v9.2.1.
>
> [1] http://tracker.ceph.com/issues/14542
>
> --
>
> Jason Dillaman
>
>
> - Original Message -
> > From: "Christoph Adomeit"
> > To: ceph-
Hi,
I have installed Ceph 9.2.1 on Proxmox with kernel 4.2.8-1-pve.
Afterwards I have enabled the features:
rbd feature enable $IMG exclusive-lock
rbd feature enable $IMG object-map
rbd feature enable $IMG fast-diff
During the night I have a cronjob which does an rbd snap create on each
of my im
Hi there,
I just updated our Ceph cluster to Infernalis and now I want to enable the new
image features.
I wonder if I can enable the features on the RBD images while the VMs are running.
I want to do something like this:
rbd feature enable $IMG exclusive-lock
rbd feature enable $IMG object-map
r
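As a follow-up, a rough and untested sketch of doing this for every image in a pool (<pool> is a placeholder; object-map and fast-diff depend on exclusive-lock, which is why it is enabled first):
for img in $(rbd ls <pool>); do
  rbd feature enable <pool>/"$img" exclusive-lock
  rbd feature enable <pool>/"$img" object-map fast-diff
  rbd object-map rebuild <pool>/"$img"
done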
Hi there,
I am using Ceph Hammer and I am wondering about the following:
What is the recommended way to find out when an RBD image was last modified?
Thanks
Christoph
Hi there,
I was hoping for the following changes in the 0.94.4 release:
- Stable object maps for faster image handling (backups, diffs, du, etc.)
- Linking against a better malloc implementation such as jemalloc
Does 0.94.4 bring any improvements in these areas?
Thanks
Christoph
On Mon, Oct 19, 2015 at
ebug
> environment.
>
> Thanks,
>
> Jason
>
>
> - Original Message -
> > From: "Christoph Adomeit"
> > To: ceph-users@lists.ceph.com
> > Sent: Monday, August 31, 2015 7:49:00 AM
> > Subject: [ceph-users] How to disable object-map and e
Hi there,
I have a Ceph cluster (0.94-2) with >100 RBD KVM images.
Most VMs are running rock-solid, but 7 VMs are hanging about once a week.
I found out that the hanging machines have the
features layering, exclusive-lock, and object-map, while all other VMs do not have
exclusive-lock and object-map set.
Now I want
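A rough, untested sketch of disabling those features again on the affected images (pool and image name are placeholders; object-map has to be disabled before exclusive-lock because it depends on it):
rbd feature disable <pool>/<image> object-map
rbd feature disable <pool>/<image> exclusive-lock
# verify
rbd info <pool>/<image> | grep features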
configured or changed in Ceph so that
availability will become better in case of flapping networks?
I understand it is not a Ceph problem but a network problem, but maybe
something can be learned from such incidents?
Thanks
Christoph
faster disks?
Just give them another weight, or are there other methods?
Thanks
Christoph
Hello Dziannis,
I am also planning to change our cluster from straw to straw2, because we
have different HDD sizes and changes in the HDD sizes always trigger a lot of
reorganization load.
Did you experience any issues? Did you already change the other hosts?
Don't you think we will have less
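A rough, untested sketch of how the conversion could be done in one go by editing the decompiled crush map (all clients need to support straw2, i.e. Hammer or newer, and some data movement is to be expected afterwards):
ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt
# switch every bucket from straw to straw2
sed -i 's/alg straw$/alg straw2/' crush.txt
crushtool -c crush.txt -o crush.new
ceph osd setcrushmap -i crush.new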
Hi there,
we are using Ceph Hammer and we have some fully provisioned
images with only little data.
rbd export of a 500 GB RBD image takes a long time although there are only
15 GB of used data, even if the RBD image is trimmed.
Do you think it is a good idea to enable the object-map feature on
al
Hi there,
I have a ceph cluster running hammer-release.
Recently I trimmed a lot of virtual disks and I can verify that
the size of the images has decreased a lot.
I checked this with:
/usr/bin/rbd diff $IMG | grep -v zero | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'
the output afte
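For comparison, a rough sketch (untested) of how the cluster-side usage can be checked against that per-image sum:
# pool- and cluster-level usage as the cluster itself accounts it
rados df
ceph df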
Hi there,
Today I had an OSD crash with Ceph 0.87/Giant which made my whole cluster
unusable for 45 minutes.
First it began with a disk error:
sd 0:1:2:0: [sdc] CDB: Read(10)Read(10):: 28 28 00 00 0d 15 fe d0 fd 7b e8 f8
00 00 00 00 b0 08 00 00
XFS (sdc1): xfs_imap_to_bp: xfs_trans_read_buf()
> If you watch `ceph -w` while stopping the OSD, do you see
> 2014-12-02 11:45:17.715629 mon.0 [INF] osd.X marked itself down
>
> ?
>
> On Tue, Dec 2, 2014 at 11:06 AM, Christoph Adomeit <
> christoph.adom...@gatworks.de> wrote:
>
> > Thanks Craig,
> >
>
40:13AM -0800, Craig Lewis wrote:
> I've found that it helps to shut down the osds before shutting down the
> host. Especially if the node is also a monitor. It seems that some OSD
> shutdown messages get lost while monitors are holding elections.
>
> On Tue, Dec 2, 2014 at 10:10
Hi there,
I have a Giant cluster with 60 OSDs on 6 OSD hosts.
Now I want to do maintenance on one of the OSD hosts.
The documented procedure is to "ceph osd set noout" and then shut down
the OSD node for maintenance.
However, as soon as I shut down even 1 OSD I get around 200 slow requests
and t
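Following Craig's hint, a rough and untested sketch of what I plan to try next time, assuming systemd units and a release that has "ceph osd ls-tree" (the hostname is a placeholder):
ceph osd set noout
# stop the OSDs on the host first so each one can mark itself down cleanly
for id in $(ceph osd ls-tree <hostname>); do systemctl stop ceph-osd@"$id"; done
# watch for "osd.X marked itself down" before powering the host off
ceph -w
# ... do the maintenance, boot the host, then:
ceph osd unset noout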
--
Christoph Adomeit
GATWORKS GmbH
Reststrauch 191
41199 Moenchengladbach
Sitz: Moenchengladbach
Amtsgericht Moenchengladbach, HRB 6303
Geschaeftsfuehrer:
Christoph Adomeit, Hans Wilhelm Terstappen
christoph.adom...@gatworks.de Internetloesungen vom Feinsten
Fon. +49 2166 9149-32
faster on Ceph storage?
Many Thanks
Christoph
Hello Ceph community,
we are considering using a Ceph cluster for serving VMs.
We need good performance and absolute stability.
Regarding Ceph, I have a few questions.
Presently we use Solaris ZFS boxes as NFS storage for VMs.
The ZFS boxes are very fast because they use all free RAM
for r