On Mon, Jan 8, 2018 at 2:55 AM, Richard Bade wrote:
> Hi Everyone,
> I've got a couple of pools that I don't believe are being used but
> have a reasonably large number of pg's (approx 50% of our total pg's).
> I'd like to delete them but as they were pre-existing when I inherited
> the cluster, I
Hello,
I have installed Luminous 12.2.2 on a 5-node cluster with logical volume
OSDs.
I am trying to stop and start Ceph on one of the nodes using systemctl
commands:
systemctl stop ceph.target; systemctl start ceph.target
When I stop Ceph, all OSDs are stopped on the node properly.
But when I
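(For reference, Luminous also ships per-daemon systemd units, so a single OSD can be cycled without touching the whole node. A rough sketch, with the OSD id 3 as a placeholder:
# systemctl stop ceph-osd@3
# systemctl start ceph-osd@3
# systemctl status ceph-osd.target
ceph.target itself just aggregates the per-role targets such as ceph-osd.target, ceph-mon.target and ceph-mds.target.)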
Quoting Chris Sarginson (csarg...@gmail.com):
> You probably want to consider increasing osd max backfills
>
> You should be able to inject this online
>
> http://docs.ceph.com/docs/luminous/rados/configuration/osd-config-ref/
>
> You might want to drop your osd recovery max active settings back
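For what it's worth, a sketch of injecting those settings online on Luminous; the numbers are illustrative, not recommendations:
# ceph tell osd.* injectargs '--osd-max-backfills 4 --osd-recovery-max-active 4'
# ceph tell osd.* injectargs '--osd-recovery-max-active 1'   (drop it back down once recovery settles)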
On Monday 8 January 2018 11:34:22 CET Stefan Kooman wrote:
> Thanks. I forgot to mention I already increased that setting to "10"
> (and eventually 50). It will increase the speed a little bit: from 150
> objects/s to ~400 objects/s. It would still take days for the cluster
> to recover.
The
Graham,
The before/after FIO tests sound interesting; we’re trying to pull together
some benchmark tests to do the same for our Ceph cluster. Could you expand on
which parameters you used, and how the file size relates to the RAM available
to your VM?
Regards,
Paul Ashman
Hi list,
all this is on Ceph 12.2.2.
An existing cephFS (named "cephfs") was backed up as a tar ball, then
"removed" ("ceph fs rm cephfs --yes-i-really-mean-it"), a new one
created ("ceph fs new cephfs cephfs-metadata cephfs-data") and the
content restored from the tar ball. According to t
I recently came across the bluestore_prefer_deferred_size family of config
options, for controlling the upper size threshold on deferred writes. Given a
number of users suggesting that write performance in filestore is better than
write performance in bluestore - because filestore writing to an
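If anyone wants to experiment with these options, they can be set in ceph.conf; a sketch with made-up values (not tuning advice), assuming the per-device-class variants:
[osd]
bluestore_prefer_deferred_size_hdd = 65536
bluestore_prefer_deferred_size_ssd = 32768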
Hi *,
trying to remove a caching tier from a pool used for RBD / Openstack,
we followed the procedure from
http://docs.ceph.com/docs/master/rados/operations/cache-tiering/#removing-a-writeback-cache and ran into
problems.
The cluster is currently running Ceph 12.2.2, the caching tier was
On a default Luminous test cluster I would like to limit the logging of what I
guess are successful notifications related to deleted snapshots. I don’t
need 77k of these messages in my syslog server.
What/where would be the best place to do this? (but not by dumping it at
syslog)
Jan 8 13:11:54 c
If you are using a pre-created RBD image for this, you will need to
disable all the image features that krbd doesn't support:
# rbd feature disable dummy01 exclusive-lock,object-map,fast-diff,deep-flatten
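Alternatively, if the image can be recreated from scratch, creating it with only the features krbd understands avoids this step entirely; a sketch, with size and names as placeholders:
# rbd create --size 10G --image-feature layering rbd/dummy01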
On Sun, Jan 7, 2018 at 11:36 AM, Traiano Welcome wrote:
> Hi List
>
> I'm getting the follo
Hi all,
I've migrated all of my replicated RBD images to erasure-coded images using
"rbd copy" with the "--data-pool" parameter.
So I now have a replicated pool with 4K PGs that is only storing RBD
headers and metadata. RBD data is stored on the erasure pool.
Now I would like to move the images
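For anyone wanting to do the same, the copy step looked roughly like this (pool and image names are placeholders):
# rbd copy rbd/myimage rbd/myimage-ec --data-pool ecpool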
Hi Yehuda,
Thanks for replying.
>radosgw failed to connect to your ceph cluster. Does the rados command
>with the same connection params work?
I am not quite sure which rados command to run to test this.
So I tried again, could you please take a look and check what could have
gone wrong?
H
On Thu, Dec 21, 2017 at 11:35 AM, Stefan Kooman wrote:
> Quoting Dan van der Ster (d...@vanderster.com):
>> Thanks Stefan. But isn't there also some vgremove or lvremove magic
>> that needs to bring down these /dev/dm-... devices I have?
>
> Ah, you want to clean up properly before that. Sure:
>
>
ceph-volume relies on systemd; it will not work with upstart. Going
the fstab way might work, but most of the LVM implementation will want
to make systemd-related calls like enabling units and placing files.
For upstart you might want to keep using ceph-disk, unless upgrading
to a newer OS is an opt
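As a reminder, the pre-ceph-volume workflow on such systems was roughly the following (device paths are placeholders):
# ceph-disk prepare /dev/sdb
# ceph-disk activate /dev/sdb1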
On Mon, Jan 8, 2018 at 4:37 PM, Alfredo Deza wrote:
> On Thu, Dec 21, 2017 at 11:35 AM, Stefan Kooman wrote:
>> Quoting Dan van der Ster (d...@vanderster.com):
>>> Thanks Stefan. But isn't there also some vgremove or lvremove magic
>>> that needs to bring down these /dev/dm-... devices I have?
>>
Hi,
I'm running on ceph luminous 12.2.2 and my cephfs suddenly degraded.
I have 2 active mds instances and 1 standby. All the active instances
are now in replay state and show the same error in the logs:
mds1
2018-01-08 16:04:15.765637 7fc2e92451c0 0 ceph version 12.2.2
(cf0baee
Hi Alessandro,
What is the state of your PGs? Inactive PGs have blocked CephFS
recovery on our cluster before. I'd try to clear any blocked ops and
see if the MDSes recover.
--Lincoln
On Mon, 2018-01-08 at 17:21 +0100, Alessandro De Salvo wrote:
> Hi,
>
> I'm running on ceph luminous 12.2.2 and
Thanks Lincoln,
indeed, as I said, the cluster is recovering, so there are pending ops:
pgs: 21.034% pgs not active
1692310/24980804 objects degraded (6.774%)
5612149/24980804 objects misplaced (22.466%)
458 active+clean
329 active+rema
On Mon, Jan 8, 2018 at 10:53 AM, Dan van der Ster wrote:
> On Mon, Jan 8, 2018 at 4:37 PM, Alfredo Deza wrote:
>> On Thu, Dec 21, 2017 at 11:35 AM, Stefan Kooman wrote:
>>> Quoting Dan van der Ster (d...@vanderster.com):
>>>> Thanks Stefan. But isn't there also some vgremove or lvremove magic
>>
Hi lists,
ceph version: Luminous 12.2.2
The cluster was doing a write throughput test when this problem happened.
The cluster health became ERROR:
Health check update: 27 stuck requests are blocked > 4096 sec (REQUEST_STUCK)
Clients can't write any data into the cluster.
osd22 and osd40 are the osds wh
On Mon, Dec 11, 2017 at 2:26 AM, Martin, Jeremy wrote:
> Hello,
>
>
>
> We are currently doing some evaluations on a few storage technologies and
> ceph has made it on our short list but the issue is we haven’t been able to
> evaluate it as I can’t seem to get it to deploy out.
>
>
>
> Before I sp
ceph-volume relies on systemd and 14.04 does not have that available.
You will need to upgrade to a distro version that supports systemd
On Wed, Dec 13, 2017 at 11:37 AM, Stefan Kooman wrote:
> Quoting 姜洵 (jiang...@100tal.com):
>> Hi folks,
>>
>>
>> I am trying to create a bluestore OSD m
Ceph will always bind to the local IP. It can't bind to an IP that isn't
assigned directly to the server, such as a NAT'd IP. So your public network
should be the local network that's configured on each server. If your
cluster network is 10.128.0.0/16, for instance, your public network might be
10.129.
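As a rough ceph.conf sketch along those lines, with placeholder subnets:
[global]
public network = 10.129.0.0/16
cluster network = 10.128.0.0/16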
I guess the MDS cache holds files, attributes, etc., but how many files
will the default "mds_cache_memory_limit": "1073741824" hold?
-Original Message-
From: Stefan Kooman [mailto:ste...@bit.nl]
Sent: Friday 5 January 2018 12:54
To: Patrick Donnelly
Cc: Ceph Users
Subject: Re: [ceph-u
Good day,
I've just merged some changes into master that set us up to compile
with C++17. This will require a reasonably new compiler to build
master.
Due to a change in how 'noexcept' is handled (it is now part of the type
signature of a function), mangled symbol names of noexcept functions are
d
In my experience 1GB cache could hold roughly 400k inodes.
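If a bigger cache is wanted, the limit can be raised; a sketch assuming a 4 GB target, with mds.a as a placeholder daemon name:
# ceph tell mds.a injectargs '--mds_cache_memory_limit 4294967296'
or persistently in ceph.conf:
[mds]
mds cache memory limit = 4294967296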
_
From: Marc Roos
Sent: Monday, January 8, 2018 23:02
Subject: Re: [ceph-users] MDS cache size limits
To: pdonnell , stefan
Cc: ceph-users
I guess the mds cache holds files, attributes etc but how many fil
On Mon, 8 Jan 2018, Adam C. Emerson wrote:
> Good day,
>
> I've just merged some changes into master that set us up to compile
> with C++17. This will require a reasonably new compiler to build
> master.
Yay!
> Due to a change in how 'noexcept' is handled (it is now part of the type
> signature
You cannot force the MDS to quit the "replay" state, for the obvious reason of keeping data
consistent. You might raise mds_beacon_grace to a somewhat reasonable value
that would allow the MDS to replay the journal without being marked laggy and
eventually blacklisted.
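A minimal sketch of doing that at runtime, with 600 seconds purely as an example value and mon.a as a placeholder (repeat for each mon, and/or set mds_beacon_grace in ceph.conf so it survives restarts):
# ceph tell mon.a injectargs '--mds_beacon_grace 600'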
From: ceph-u
What could cause this problem? Is this caused by a faulty HDD?
Which data's CRC didn't match?
This may be caused by a faulty drive. Check your dmesg.