On 16-10-13 14:56:12, tao changtao wrote:
> Hi All,
>
> why is the rbd ThreadPool thread count hard-coded to 1?
details here: http://tracker.ceph.com/issues/15034
>
>
> class ThreadPoolSingleton : public ThreadPool {
> public:
>   explicit ThreadPoolSingleton(CephContext *cct)
>     :
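For anyone not looking at the tree, the constructor in question continues
roughly as below. This is a hedged sketch from memory of the Jewel-era source
(ThreadPool is declared in src/common/WorkQueue.h); treat the exact strings,
arguments and the rbd_op_threads option name as assumptions, not a verbatim
quote:

  #include "common/WorkQueue.h"   // ThreadPool lives here in the Ceph tree

  class ThreadPoolSingleton : public ThreadPool {
  public:
    explicit ThreadPoolSingleton(CephContext *cct)
      : ThreadPool(cct, "librbd::thread_pool", "tp_librbd", 1,  // default worker count of 1
                   "rbd_op_threads") {                          // config option that can adjust it
      start();
    }
    ~ThreadPoolSingleton() override {
      stop();
    }
  };

If that sketch is right, the literal 1 is only the default; the fifth
constructor argument names a config option that ThreadPool can use to change
the worker count at runtime, which is presumably what the tracker ticket
discusses.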
Hi David,
I am Praveen; we also had a similar problem with hammer 0.94.2. We had the
problem when we created a new cluster with an erasure coding pool (10+5
config).
Root cause:
The high memory usage in our case was because of pg logs. The number of pg
logs is higher in the case of an erasure coding pool.
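To put a rough number on that (the per-entry memory cost is an assumption,
not a measured figure): with a 10+5 profile every PG has 15 shards, and each
shard keeps its own pg log on whichever OSD hosts it. At the hammer-era
defaults of roughly 3000-10000 log entries per PG (osd_min_pg_log_entries /
osd_max_pg_log_entries, from memory) and something on the order of 1 KB of
memory per entry, an OSD that ends up hosting ~200 PG shards can be sitting
on about 200 * 10000 * 1 KB = ~2 GB of pg-log memory alone. Lowering
osd_min_pg_log_entries / osd_max_pg_log_entries is the usual way to bound
this.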
On 22/09/2016 15:29, Chris Murray wrote:
Hi all,
Might anyone be able to help me troubleshoot an "apt-get dist-upgrade"
which is stuck at "Setting up ceph-osd (10.2.3-1~bpo80+1)"?
I'm upgrading from 10.2.2. The two OSDs on this node are up, and think
they are version 10.2.3, but the upgrade doe
From the status page it seems that Ceph didn't like the networking problems.
Could we get some details about what happened? Underprovisioned servers (RAM
upgrades were in there too)? Too much load on disks? Something else?
This situation may not be pleasant, but I feel that others can learn from
it to pre
Is apt/dpkg doing something now? Is the problem repeatable, e.g. by killing
the upgrade and starting it again? Are there any stuck systemctl processes?
I had no problems upgrading 10.2.x clusters to 10.2.3
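In case it helps, the kind of checks I mean (assuming a Debian/Ubuntu systemd
setup; adjust the OSD id):

  ps aux | egrep 'apt|dpkg'        # is the package manager actually doing anything?
  systemctl list-jobs              # any queued or hung systemd jobs (e.g. ceph-osd restarts)?
  journalctl -u ceph-osd@<id> -e   # recent log output from the OSD unit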
On 16-10-13 13:41, Chris Murray wrote:
On 22/09/2016 15:29, Chris Murray wrote:
Hi all,
Might any
Hi,
I fully agree.
If the downtime is related to a problem with a Ceph cluster, it would be
very interesting to any of us what happened to this cluster to cause a
downtime of multiple days.
Usually that's quite long for production usage.
So any information with some details is highly a
When you increase your pg_num, the new PGs will have to peer first, and
during this time they will be unreachable. So you need to put the cluster in
maintenance mode for this operation.
The way to increase the pg_num and pgp_num of a running cluster is:
- First, it's very important to
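For reference, the commands themselves are just the following (a sketch:
substitute your pool name and target counts, remember pg_num can only be
increased, split big jumps into steps, and use norebalance only if your
release has the flag):

  ceph osd set norebalance                      # optionally hold off data movement while new PGs peer
  ceph osd pool set <pool> pg_num <new_value>
  # wait for the new PGs to be created and peer (watch 'ceph -s')
  ceph osd pool set <pool> pgp_num <new_value>
  ceph osd unset norebalance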
Hello,
I have a huge non-sharded RGW bucket with 180 million objects. Ceph
version is 10.2.1.
I wonder, is it safe to delete it with the --purge-data option? Will other
buckets be heavily affected by that?
Regards, Vasily.
Hello everyone,
We are in the process of buying hardware for our first ceph-cluster. We
will start with some testing and do some performance measurements to
see that we are on the right track, and once we are satisfied with our
setup we'll continue to grow it as time goes on.
Now, I'm jus
Hello,
I run a cluster on Jewel 10.2.2. I have deleted the last bucket of a radosGW
pool in order to delete this pool and recreate it as EC (it was replicated).
Detail of the pool :
> pool 36 'erasure.rgw.buckets.data' replicated size 3 min_size 2 crush_ruleset
> 0 object_hash rjenkins pg_num 128 pgp_num 12
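For the record, recreating it as an EC pool goes along these lines (a sketch
with placeholder profile name and k/m values; pick your own failure domain
and PG count):

  ceph osd erasure-code-profile set myprofile k=10 m=5 ruleset-failure-domain=host
  ceph osd pool create erasure.rgw.buckets.data 128 128 erasure myprofile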
I have a basement cluster that is partially built with Odroid-C2 boards and
when I attempted to upgrade to the 10.2.3 release I noticed that this release
doesn't have an arm64 build. Are there any plans on continuing to make arm64
builds?
Thanks,
Bryan
6 SSDs per NVMe journal might leave your journal in contention. Can you
provide the specific models you will be using?
On Oct 13, 2016 10:23 AM, "Patrik Martinsson" <
patrik.martins...@trioptima.com> wrote:
> Hello everyone,
>
> We are in the process of buying hardware for our first ceph-cluster.
On tor, 2016-10-13 at 10:29 -0500, Brady Deetz wrote:
> 6 SSDs per NVMe journal might leave your journal in contention. Can you
> provide the specific models you will be using?
Well, according to Dell, the card is called "Dell 1.6TB, NVMe, Mixed
Use Express Flash, PM1725", but the specs for the card
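The arithmetic behind Brady's concern (the per-SSD figure is an assumption;
check the model you actually buy): a SATA SSD can typically sustain a few
hundred MB/s of sequential journal writes, call it ~400 MB/s, so six of them
can ask for ~2.4 GB/s from the journal device. Since every client write lands
on the journal first, the number to compare that against is the PM1725's
sustained sequential write rating (and its endurance/DWPD), not its read
figures.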
On Thu, 13 Oct 2016, Henrik Korkuc wrote:
> From the status page it seems that Ceph didn't like the networking problems. Could
> we get some details about what happened? Underprovisioned servers (RAM upgrades
> were in there too)? Too much load on disks? Something else?
>
> This situation may not be pleasant
Hi,
I have experience with deleting a big bucket (25M small objects) with the
--purge-data option. It took ~20h (run in screen) and didn't have any
significant effect on the cluster performance.
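As a very rough extrapolation only: 25M objects in ~20 hours is about 350
deletes per second, so a 180M-object bucket at a similar rate would be on the
order of 140-150 hours, i.e. roughly six days. Definitely something to run in
screen/tmux.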
Stas
On Thu, Oct 13, 2016 at 9:42 AM, Василий Ангапов wrote:
> Hello,
>
> I have a huge RGW bucket with 1
I have a directory I’ve been trying to remove from cephfs (via cephfs-hadoop),
the directory is a few hundred gigabytes in size and contains a few million
files, though not all in a single subdirectory. I started the delete yesterday at
around 6:30 EST, and it’s still progressing. I can see from (ceph
On 13/10/2016 11:49, Henrik Korkuc wrote:
Is apt/dpkg doing something now? Is the problem repeatable, e.g. by
killing the upgrade and starting it again? Are there any stuck systemctl
processes?
I had no problems upgrading 10.2.x clusters to 10.2.3
On 16-10-13 13:41, Chris Murray wrote:
On 22/09/2016 15
On Thu, Oct 13, 2016 at 12:44 PM, Heller, Chris wrote:
> I have a directory I’ve been trying to remove from cephfs (via
> cephfs-hadoop), the directory is a few hundred gigabytes in size and
> contains a few million files, though not all in a single subdirectory. I started
> the delete yesterday at aroun
On Thu, Oct 13, 2016 at 11:33 AM, Stillwell, Bryan J
wrote:
> I have a basement cluster that is partially built with Odroid-C2 boards and
> when I attempted to upgrade to the 10.2.3 release I noticed that this
> release doesn't have an arm64 build. Are there any plans on continuing to
> make arm6
Thanks very much, Stas! Can anyone else confirm this?
2016-10-13 19:57 GMT+03:00 Stas Starikevich :
> Hi,
>
> I have experience with deleting a big bucket (25M small objects) with
> the --purge-data option. It took ~20h (run in screen) and didn't have any
> significant effect on the cluster performance
On 10/13/16, 2:32 PM, "Alfredo Deza" wrote:
>On Thu, Oct 13, 2016 at 11:33 AM, Stillwell, Bryan J
> wrote:
>> I have a basement cluster that is partially built with Odroid-C2 boards and
>> when I attempted to upgrade to the 10.2.3 release I noticed that this
>> release doesn't have an arm64 bui
Hello,
On Thu, 13 Oct 2016 15:46:03 + Patrik Martinsson wrote:
> On tor, 2016-10-13 at 10:29 -0500, Brady Deetz wrote:
> > 6 SSDs per NVMe journal might leave your journal in contention. Can you
> > provide the specific models you will be using?
>
> Well, according to Dell, the card is called
> On 7 Oct. 2016, at 22:53, Haomai Wang wrote:
>
> did you try restarting the osd to see the memory usage?
>
Restarting OSDs does not change the memory usage.
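One way to narrow down where the memory sits (assuming a tcmalloc build):
"ceph tell osd.<id> heap stats" reports how much of the resident memory is
heap that tcmalloc has freed but not yet returned to the OS, and
"ceph tell osd.<id> heap release" hands that part back. Whatever remains
after a release is genuine usage, such as the pg logs discussed above.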
(Apologies for delay in reply - was offline due to illness.)
Regards
David
On 13 Oct. 2016, at 20:21, Praveen Kumar G T (Cloud Platform)
wrote:
>
>
> Hi David,
>
> I am Praveen; we also had a similar problem with hammer 0.94.2. We had the
> problem when we created a new cluster with an erasure coding pool (10+5 config).
>
> Root cause:
>
> The high memory usage in o
On 16-10-13 22:46, Chris Murray wrote:
On 13/10/2016 11:49, Henrik Korkuc wrote:
Is apt/dpkg doing something now? Is the problem repeatable, e.g. by
killing the upgrade and starting it again? Are there any stuck systemctl
processes?
I had no problems upgrading 10.2.x clusters to 10.2.3
On 16-10-13 13:4