If you are re-creating or adding the OSDs anyway: consider using
Bluestore for the new ones, it performs *so much* better, especially
in scenarios like these.
Running a mixed configuration is no problem in our experience.
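In case it helps, a rough sketch of creating the replacement OSDs as Bluestore
with ceph-volume (the device paths below are placeholders for your data and
optional DB devices):

    # create a Bluestore OSD on the data device (placeholder path)
    ceph-volume lvm create --bluestore --data /dev/sdb
    # optionally put the RocksDB/WAL on a faster device (placeholder path)
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sdc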
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us
Hi,
Most likely the issue is with your consumer-grade journal SSD. Run
this on your SSD to check whether it performs well: fio --filename=
--direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1
--runtime=60 --time_based --group_reporting --name=journal-test
On Tue, Nov 27, 2018 at 2:06 AM Cody wrote:
I just rolled back a snapshot, and when I started the (Windows) VM, I
noticed that a software update I installed after this snapshot was still there.
What am I doing wrong that libvirt is not reading the rolled-back
snapshot (but instead uses something from a cache)?
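For reference, a typical rollback sequence, assuming the VM disk is an RBD
image (domain, pool, image and snapshot names below are placeholders):

    # make sure the guest is fully powered off first, otherwise qemu keeps
    # writing its running state over the rolled-back image
    virsh destroy windows-vm
    # roll the RBD image back to the snapshot
    rbd snap rollback libvirt-pool/windows-vm-disk@before-update
    # boot the guest again from the rolled-back image
    virsh start windows-vm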
CPU: 2 x E5-2603 @1.8GHz
RAM: 16GB
Network: 1G port shared for Ceph public and cluster traffic
Journaling device: 1 x 120GB SSD (SATA3, consumer grade)
OSD device: 2 x 2TB 7200rpm spindle (SATA3, consumer grade)
0.84 MB/s sequential write is impossibly bad; it's not normal with any
kind of de
Hi There,
We are replicating a 100TB RBD image to a DR site. Replication works fine.
rbd --cluster cephdr mirror pool status nfs --verbose
health: OK
images: 1 total
    1 replaying

dir_research:
  global_id:   11e9cbb9-ce83-4e5e-a7fb-472af866ca2d
  state:       up+replaying
  description:
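For a per-image view of the replication state, the image-level status command
can also be used (same cluster, pool and image as above):

    rbd --cluster cephdr mirror image status nfs/dir_research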
We're happy to announce the tenth bug fix release of the Luminous
v12.2.x long term stable release series. The previous release, v12.2.9,
introduced the PG hard-limit patches which were found to cause an issue
in certain upgrade scenarios, and this release was expedited to revert
those patches. If
On 27/11/2018 14:50, Abhishek Lekshmanan wrote:
We're happy to announce the tenth bug fix release of the Luminous
v12.2.x long term stable release series. The previous release, v12.2.9,
introduced the PG hard-limit patches which were found to cause an issue
in certain upgrade scenarios, and this
Hi,
We’re currently progressively pushing a Ceph Mimic cluster into production and
we’ve noticed fairly strange behaviour. We use Ceph as a storage backend for
OpenStack block devices. Now, we’ve deployed a few VMs on this backend to test
the waters. These VMs are practically empty, with only
On 11/27/2018 08:50 AM, Abhishek Lekshmanan wrote:
We're happy to announce the tenth bug fix release of the Luminous
v12.2.x long term stable release series. The previous release, v12.2.9,
introduced the PG hard-limit patches which were found to cause an issue
in certain upgrade scenarios, an
In the old days when I first installed Ceph with RGW, performance would
be very slow after storing 500+ million objects in my buckets. With
Luminous and index sharding, is this still a problem, or is this an old
problem that has been solved?
Regards
R
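As a rough sketch of what to look at on Luminous (bucket name and shard count
below are placeholders; dynamic resharding is assumed to be available):

    # dynamic resharding is governed by these ceph.conf options on the RGW hosts:
    #   rgw dynamic resharding = true
    #   rgw max objs per shard = 100000
    # check the current index shard layout of a bucket
    radosgw-admin bucket stats --bucket=mybucket
    # or reshard manually
    radosgw-admin reshard add --bucket=mybucket --num-shards=128
    radosgw-admin reshard process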
Hi everyone,
Many, many thanks to all of you!
The root cause was a failed OS drive on one storage node. The
server was responsive to ping, but we were unable to log in. After a reboot via
IPMI, the Docker daemon failed to start due to I/O errors and dmesg
complained about the failing OS disk. I failed
Hi Robert,
Solved is probably a strong word. I'd say that things have improved.
Bluestore in general tends to handle large numbers of objects better
than filestore does, for several reasons including that it doesn't suffer
from PG directory splitting (though RocksDB compaction can become a
And this exact problem was one of the reasons why we migrated
everything to PXE boot where the OS runs from RAM.
That kind of failure is just the worst to debug...
Also, 1 GB of RAM is cheaper than a separate OS disk.
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https:
Hi,
I'm running into an issue with the RadosGW Swift API when S3 bucket
versioning is enabled. It looks like it silently drops any metadata sent
with the "X-Object-Meta-foo" header (see example below).
This is observed on a Luminous 12.2.8 cluster. Is that a normal thing? Am I
misconfiguring s
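A minimal sketch of the kind of request in question (endpoint, token, container
and object names are placeholders):

    # set object metadata via the Swift API
    curl -i -X POST \
      -H "X-Auth-Token: $TOKEN" \
      -H "X-Object-Meta-foo: bar" \
      https://rgw.example.com/swift/v1/mycontainer/myobject
    # read the headers back; X-Object-Meta-Foo should be returned
    curl -I -H "X-Auth-Token: $TOKEN" \
      https://rgw.example.com/swift/v1/mycontainer/myobject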
On 27.11.18 at 15:50, Abhishek Lekshmanan wrote:
> As mentioned above if you've successfully upgraded to v12.2.9 DO NOT
> upgrade to v12.2.10 until the linked tracker issue has been fixed.
What about clusters currently running 12.2.9 (because this was the
version in the repos when they got i
On 11/27/18 8:26 AM, Simon Ironside wrote:
On 27/11/2018 14:50, Abhishek Lekshmanan wrote:
We're happy to announce the tenth bug fix release of the Luminous
v12.2.x long term stable release series. The previous release, v12.2.9,
introduced the PG hard-limit patches which were found to cause an
On 11/27/18 9:40 AM, Graham Allan wrote:
On 11/27/2018 08:50 AM, Abhishek Lekshmanan wrote:
We're happy to announce the tenth bug fix release of the Luminous
v12.2.x long term stable release series. The previous release, v12.2.9,
introduced the PG hard-limit patches which were found to cause
On 11/27/18 12:00 PM, Robert Sander wrote:
On 27.11.18 at 15:50, Abhishek Lekshmanan wrote:
As mentioned above if you've successfully upgraded to v12.2.9 DO NOT
upgrade to v12.2.10 until the linked tracker issue has been fixed.
What about clusters currently running 12.2.9 (because this
On 11/27/18 12:11 PM, Josh Durgin wrote:
13.2.3 will have a similar revert, so if you are running anything other
than 12.2.9 or 13.2.2 you can go directly to 13.2.3.
Correction: I misremembered here, we're not reverting these patches for
13.2.3, so 12.2.9 users can upgrade to 13.2.2 or later, b
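As an aside, a quick way to confirm which versions the daemons in a cluster are
actually running before picking an upgrade path (works on Luminous and later):

    # per-daemon-type summary of running versions
    ceph versions
    # or ask an individual daemon (osd.0 is a placeholder)
    ceph tell osd.0 version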
Hi, I am facing a problem where an OSD won't start after moving it to a new
node with 12.2.10 (the old one has 12.2.8).
One node of my cluster failed and I tried to move its 3 OSDs to a new
node. 2 of the 3 OSDs have started and are running fine at the moment
(backfilling is still in progress), but one o
This is *probably* unrelated to the upgrade, as it's complaining at a
very early stage about data corruption.
(Earlier than the point where the bug related to the 12.2.9 issues would trigger.)
So this might just be a coincidence with a bad disk.
That being said: you are running a 12.2.9 OSD and you probably sh
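If a bad disk is suspected, a couple of checks worth trying on that node
(device path and OSD id are placeholders, and the fsck step assumes the OSD
is Bluestore):

    # check the drive's SMART health
    smartctl -a /dev/sdX
    # offline consistency check of a (stopped) Bluestore OSD
    ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-12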
Hi there,
I have a question about rgw/civetweb log settings.
Currently, rgw/civetweb prints 3 lines of logs with log level 1 (high priority)
for each HTTP request, like the following:
$ tail /var/log/ceph/ceph-client.rgw.node-1.log
2018-11-28 11:52:45.339229 7fbf2d693700 1 == starting new reque
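If the goal is just to silence those per-request lines, one option is lowering
the relevant debug levels for the RGW instance in ceph.conf (a sketch; the
section name must match your gateway, and 0/0 also hides other messages from
those subsystems):

    [client.rgw.node-1]
        debug rgw = 0/0
        debug civetweb = 0/0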