Hi,
Agreed, but the packages built for stretch do depend on the library.
I had the wrong Debian version in my sources list :-(
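For anyone else hitting this: the repository entry just has to name the release you are actually running. A sketch of what a stretch entry could look like, assuming the Luminous repository (adjust the codename and release to your setup):

    deb https://download.ceph.com/debian-luminous/ stretch main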
Thanks for looking into it.
Micha Krause
That is great news!
Thanks,
Ovidiu
On 03/19/2018 10:44 AM, Gregory Farnum wrote:
Maybe (likely?) in Mimic. Certainly the next release.
Some code has been written but the reason we haven’t done this before
is the number of edge cases involved, and it’s not clear how long
rounding those off wi
I have a query regarding CephFS and the preferred number of clients. We are
currently using a Luminous CephFS deployment to support storage for a number of
web servers. We have one file system split into folders, for example:
/vol1
/vol2
/vol3
/vol4
At the moment the ro
On Tue, Mar 20, 2018 at 3:27 AM, James Poole wrote:
> I have a query regarding CephFS and the preferred number of clients. We are
> currently using a Luminous CephFS deployment to support storage for a number
> of web servers. We have one file system split into folders, for example:
>
> /vol1
> /vol2
> /vol3
> /vol4
>
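For what it's worth, with a single file system each web server can still mount just the subtree it needs rather than the root. A rough sketch with the kernel client (the monitor address, client name and secret file are placeholders):

    # mount only /vol1 on the web server that serves vol1
    mount -t ceph mon1.example.com:6789:/vol1 /mnt/vol1 \
        -o name=webclient,secretfile=/etc/ceph/webclient.secret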
Good evening everyone.
My Ceph is cross-compiled and runs on an armv7l 32-bit development board. The
Ceph version is 10.2.3 and the compiler version is 6.3.0.
After I placed an object in the RADOS cluster, I scrubbed the object manually.
At that point, the primary OSD crashed.
Here is the OSD log:
ceph ver
Hi all,
Here's the output of 'rados df' for one of our clusters (Luminous 12.2.2):
POOL_NAME USED   OBJECTS  CLONES COPIES    MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS    RD     WR_OPS    WR
ec_pool   75563G 19450232 0      116701392 0                  0       0        385351922 27322G 800335856 294T
rbd       42969M 10881    0      32643     0                  0       0        615060980 14767G 970301192 207T
rbdssd    252G   65446    0      196338    0                  0       0        29392480  1581G  211205402 2601G
total_object
I wanted to report an update.
We added more Ceph storage nodes, so we can take the problem OSDs out.
Speeds are faster.
I found a way to monitor OSD latency in Ceph, using "ceph pg dump osds".
The commit latency is always "0" for us.
fs_perf_stat/commit_latency_ms
But the apply latency shows us
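In case it helps anyone else, the same numbers can also be pulled out of the JSON dump or shown with "ceph osd perf". A rough sketch, untested here; the field names are the ones mentioned above, but the exact JSON layout can vary between releases:

    # per-OSD commit/apply latency from the pg dump
    ceph pg dump osds -f json 2>/dev/null | \
        jq '.[] | [.osd, .fs_perf_stat.commit_latency_ms, .fs_perf_stat.apply_latency_ms]'

    # or simply:
    ceph osd perf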
Hi,
I made the changes directly to the crush map, i.e., either
(1) deleting all the weight_set blocks and then moving the bucket via the
CLI,
or
(2) moving the buckets in the crush map and adding a new entry to the weight set.
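In concrete terms the round-trip looks roughly like this (untested as written; bucket and root names are placeholders):

    # decompile, edit, recompile, inject
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit crushmap.txt: delete the weight_set blocks (option 1), or move the
    # bucket and add a matching weight_set entry by hand (option 2)
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new
    # for option 1 the bucket move itself can then be done via the CLI:
    ceph osd crush move <bucket> root=<new-root>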
Paul
2018-03-16 21:00 GMT+01:00 :
> Hi Paul,
>
> Many thanks for the super
Hi Paul,
Many thanks for the replies. I actually did (1) and it worked perfectly; I was
also able to reproduce this via a test monitor.
I have updated the bug with all of this info so hopefully no one hits this
again.
Many thanks.
Warren
From: Paul Emmerich
Sent: 20 March 2018 17:21
To:
Hello,
Does object expiration work on indexless (blind) buckets?
Thank you
@Pavan, I did not know about 'filestore split rand factor'. That looks
like it was added in Jewel and I must have missed it. To disable it, would
I just set it to 0 and restart all of the OSDs? That isn't an option at
the moment, but restarting the OSDs after this backfilling is done is
definite
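If setting it to 0 is indeed how it gets disabled, the change itself would just be a ceph.conf entry plus a rolling restart. A sketch (the option name is the one from this thread; treat the rest as an assumption):

    # ceph.conf
    [osd]
    filestore split rand factor = 0

    # then restart the OSDs one at a time once backfilling has finished, e.g.:
    systemctl restart ceph-osd@<id>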
On 03/20/2018 01:33 PM, Robert Stanford wrote:
Hello,
Does object expiration work on indexless (blind) buckets?
Thank you
No. Lifecy
> On 12 Mar 2018, at 09:49, Christian Wuerdig wrote:
>
> Hm, so you're running OSD nodes with 2GB of RAM and 2x10TB = 20TB of storage?
> Literally everything posted on this list in relation to HW requirements and
> related problems will tell you that this simply isn't going to work. The
> slightest h
I’m sorry for my late reply.
Thank you for your reply.
Yes, this error only occurs when the backend is XFS.
Ext4 and BlueStore do not trigger the error.
> On 12 Mar 2018, at 18:31, Peter Woodman wrote:
>
> From what I've heard, XFS has problems on ARM. Use Btrfs, or (I
> believe?) ext4 + BlueStore will work.
On Mon, Mar 19, 2018 at 11:45 PM, Nicolas Huillard
wrote:
> On Monday, 19 March 2018 at 15:30 +0300, Sergey Malinin wrote:
>> The default for mds_log_events_per_segment is 1024; in my setup I ended
>> up with 8192.
>> I calculated that value as IOPS / log segments * 5 seconds (afaik
>> the MDS performs
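For reference, applying a larger value looks something like this (the daemon name is a placeholder; 8192 is the figure from this thread, not a general recommendation):

    # change it on a running MDS via the admin socket
    ceph daemon mds.<name> config set mds_log_events_per_segment 8192

    # and persist it in ceph.conf so it survives restarts
    [mds]
    mds log events per segment = 8192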
Hi all,
We got some decommissioned servers from other projects for setting up OSDs.
They have 10 x 2TB SAS disks plus 4 x 2TB SSDs.
We want to test with BlueStore and hope to place the WAL and DB devices on the SSDs.
We need advice on some newbie questions:
1. As there are more SAS disks than SSDs, is it possible/recom
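For what it's worth, sharing one SSD between several HDD OSDs for block.db is a common layout. A rough sketch with ceph-volume (device names and the LV size are placeholders, untested):

    # carve the SSD into db logical volumes, one per HDD OSD it will serve
    vgcreate db-ssd1 /dev/sdk
    lvcreate -L 200G -n db-for-sdb db-ssd1

    # create the OSD with data on the HDD and db on the SSD LV;
    # with no separate --block.wal, the WAL is placed on the db device
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db db-ssd1/db-for-sdb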