Hi,
I'm glad to hear it didn't happen only to me.
Though it is harmless, it seems like some kind of bug...
Are there any Ceph developers who know exactly how the
"ceph osd perf" command is implemented?
Is the leap second really responsible for this behavior?
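(For reference, a hedged way to cross-check the numbers is to compare the
summary against the raw perf counters of a single OSD; osd.0 below is just an
example id, and as far as I understand the summary is derived from these
counters:)

# latency summary as the monitors report it
ceph osd perf
# same data in machine-readable form
ceph osd perf -f json-pretty
# raw counters straight from one OSD daemon (run on that OSD's host)
ceph daemon osd.0 perf dump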
Thanks.
Sincerely,
Craig Chi
Hi,
I have a cluster with RGW in which one bucket is really big, so every
so often we delete stuff from it.
That bucket is now taking 3.3T after we deleted just over 1T from it.
That was done last week.
The pool (.rgw.buckets) is using 5.1T, and before the deletion was
taking almost 6T.
How can I reclaim that space?
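(Context for the question: as I understand it, deleted RGW objects are only
reclaimed after the garbage collector has processed them, so the obvious
hedged check, with a placeholder bucket name, would be:)

# objects still waiting for garbage collection (can be a long list)
radosgw-admin gc list --include-all
# kick off a GC pass manually instead of waiting for the next scheduled one
radosgw-admin gc process
# compare what the bucket index thinks it holds with what the pool reports
radosgw-admin bucket stats --bucket=mybigbucket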
I'm playing with rbd mirroring with OpenStack. The final idea is to use it
for disaster recovery of a DB server running on an OpenStack cluster, but I would
like to test this functionality first.
I've prepared this configuration:
- 2 openstack clusters (devstacks)
- 2 ceph clusters (one node clusters)
Rem
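As far as I understand it, the per-pool setup would look roughly like the
sketch below (the pool name 'volumes' and the peer names are assumptions on my
side, and the images need the exclusive-lock and journaling features):

# on both clusters: enable pool-level mirroring
rbd mirror pool enable volumes pool
# on each cluster: register the other cluster as a peer
rbd mirror pool peer add volumes client.mirror@remote
# with the rbd-mirror daemon running on the backup site, check replication state
rbd mirror pool status volumes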
> On 5 January 2017 at 10:08, Luis Periquito wrote:
>
>
> Hi,
>
> I have a cluster with RGW in which one bucket is really big, so every
> so often we delete stuff from it.
>
> That bucket is now taking 3.3T after we deleted just over 1T from it.
> That was done last week.
>
> The pool (.rgw.buckets) is using 5.1T, and before the deletion was
> taking almost 6T.
On Thu, Jan 5, 2017 at 7:24 AM, Klemen Pogacnik wrote:
> I'm playing with rbd mirroring with openstack. The final idea is to use it
> for disaster recovery of DB server running on Openstack cluster, but would
> like to test this functionality first.
> I've prepared this configuration:
> - 2 openstack clusters (devstacks)
Patrick,
Do you have any rough idea of when the deadline for submitting
presentation proposals might be? Not to rush you; we're interested, but we
know it might take some time to get internal approval to present at an
outside conference.
Thanks,
Bruce
On 1/4/17, 8:49 AM, "ceph-users on behalf of Patr
Actually these steps didn't work for me - I had an older version of
Ceph, so I had to upgrade first.
However, while the monmap could be restored, the OSDs were still not found.
But I have now had success with the scripts I mentioned before, and I was able
to extract all VM images from the raw OSDs wi
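For anyone hitting the same situation, the rough shape of such an extraction
with ceph-objectstore-tool is sketched below (not my exact scripts; the paths
and the object spec are placeholders, and the OSD must be stopped first):

# list every object the OSD holds (prints one JSON spec per object)
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
  --journal-path /var/lib/ceph/osd/ceph-0/journal --op list
# dump one object; repeat per rbd_data.<prefix>.<index> chunk and
# concatenate the chunks in index order to rebuild an image
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
  --journal-path /var/lib/ceph/osd/ceph-0/journal \
  '<object-spec-from-the-list-output>' get-bytes > chunk.bin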
Apologies for the thread necromancy :)
We've (finally) configured our signing system to use sha256 for GPG
digests, so this issue should no longer appear on Debian/Ubuntu.
- Ken
On Fri, May 27, 2016 at 6:20 AM, Saverio Proto wrote:
> I started to use Xenial... does everyone have this error? :
Hi,
any idea of the root cause of this? Inside a KVM VM running a qcow2 image on
CephFS, dmesg shows:
[846193.473396] ata1.00: status: { DRDY }
[846196.231058] ata1: soft resetting link
[846196.386714] ata1.01: NODEV after polling detection
[846196.391048] ata1.00: configured for MWDMA2
[846196.391053] a
Looks like disk I/O is too slow. You can try configuring ceph.conf with
settings like "osd client op priority":
http://docs.ceph.com/docs/jewel/rados/configuration/osd-config-ref/
(which is not loading for me at the moment...)
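Something along these lines, as a sketch only (runtime injection shown; the
values are common starting points rather than a recommendation for your
hardware, and they should also be persisted under [osd] in ceph.conf):

# favour client ops over recovery ops
ceph tell osd.* injectargs '--osd_client_op_priority 63 --osd_recovery_op_priority 1'
# and/or throttle recovery/backfill concurrency while client I/O is suffering
ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1'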
On 01/05/2017 04:43 PM, Oliver Dzombic wrote:
> Hi,
> any idea of the r
Hi David,
thank you for your suggestion.
At night, which is when these issues occur primarily (only?), we run the
scrubs and deep scrubs.
During this time the HDD utilization of the cold storage peaks at 80-95%.
But we have an SSD hot storage tier in front of this, which buffers
writes and reads.
In
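If the scrub window turns out to be the culprit, these are roughly the knobs I
would look at (a sketch only; the values are illustrative and would also need
persisting in ceph.conf):

# limit concurrent scrubs per OSD and slow each scrub down a little
ceph tell osd.* injectargs '--osd_max_scrubs 1 --osd_scrub_sleep 0.1'
# confine scrubbing to a quieter window, e.g. 01:00-06:00 local time
ceph tell osd.* injectargs '--osd_scrub_begin_hour 1 --osd_scrub_end_hour 6'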
Hello,
On Thu, 5 Jan 2017 23:02:51 +0100 Oliver Dzombic wrote:
I've never seen hung qemu tasks; slow/hung I/O tasks inside VMs with a
broken/slow cluster I have seen.
That's because mine are all RBD librbd backed.
I think your approach with cephfs probably isn't the way forward.
Also with cephfs
Hello All,
I have set up a Ceph cluster based on the 0.94.6 release on 2 servers, each with
an 80 GB Intel S3510 and 2x 3 TB 7.2k SATA disks, 16 CPUs, and 24 GB RAM,
connected to a 10G switch with a replica count of 2 [I will add 3 more
servers to the cluster], and 3 separate monitor nodes which are VMs.
rbd_cache
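(For context, rbd cache is a client-side librbd option that lives in the
[client] section of ceph.conf on the hypervisors; a minimal sketch with
default-ish values, just to show where it goes:)

# append to ceph.conf on the client/hypervisor side; values shown are the
# usual defaults, not tuning advice
cat >> /etc/ceph/ceph.conf <<'EOF'
[client]
rbd cache = true
rbd cache writethrough until flush = true
rbd cache size = 33554432
EOF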
2017-01-06 11:10 GMT+08:00 kevin parrikar :
> Hello All,
>
> I have setup a ceph cluster based on 0.94.6 release in 2 servers each
> with 80Gb intel s3510 and 2x3 Tb 7.2 SATA disks,16 CPU,24G RAM
> which is connected to a 10G switch with a replica of 2 [ i will add 3 more
> servers to the cluster] and 3 seperate monitor nodes which are vms.
2017-01-04 23:52 GMT+08:00 Mike Miller :
> Wido, all,
>
> can you point me to the "recent benchmarks" so I can have a look?
> How do you define "performance"? I would not expect cephFS throughput to
> change, but it is surprising to me that metadata on SSD will have no
> measurable effect on latency.
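On the metadata-on-SSD part: the usual pre-Luminous recipe is a CRUSH rule
rooted in the SSD branch of the tree, with the metadata pool pointed at it; a
sketch, assuming a CRUSH root named 'ssd' already exists and the metadata pool
is called 'cephfs_metadata':

# replicated rule that only selects hosts under the 'ssd' root
ceph osd crush rule create-simple ssd-rule ssd host
# find the new rule's ruleset id, then move the metadata pool onto it
ceph osd crush rule dump ssd-rule
ceph osd pool set cephfs_metadata crush_ruleset <ruleset-id-from-the-dump>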
Hello,
On Fri, 6 Jan 2017 08:40:36 +0530 kevin parrikar wrote:
> Hello All,
>
> I have setup a ceph cluster based on 0.94.6 release in 2 servers each with
> 80Gb intel s3510 and 2x3 Tb 7.2 SATA disks,16 CPU,24G RAM
> which is connected to a 10G switch with a replica of 2 [ i will add 3 more
> servers to the cluster] and 3 seperate monitor nodes which are vms.
Hello group--
I have been running Ceph 10.2.3 for a while now without any issues. This
evening my admin node (which is also an OSD and monitor) crashed. I
checked my other OSD servers and the data seems to still be there.
Is there an easy way to bring the admin node back into the cluster? I am
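A hedged first-pass checklist, assuming systemd-managed daemons and that the
node itself boots again (the OSD id below is a placeholder):

# from a surviving node: confirm the cluster still has quorum and is only degraded
ceph -s
ceph osd tree
# on the recovered admin node: start its daemons again; they should rejoin on their own
systemctl start ceph-mon@$(hostname -s)
systemctl start ceph-osd@<osd-id>
# then watch recovery/backfill progress
ceph -w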