Are these single threaded writes that you are referring to? It certainly
appears so from the thread, but I thought it would be good to confirm that
before digging in further.
David Byte
Sr. Technology Strategist
SCE Enterprise Linux
SCE Enterprise Storage
Alliances and SUSE Embedded
db...@suse
Are you using bluestore OSDs? If so, my thinking is that the issue we are
having is with caching and bluestore.
See the thread on bluestore caching,
"Re: [ceph-users] Best practices for allocating memory to bluestore cache".
Before, when we were on Jewel and filestore, we could get a
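(For anyone following along: the settings discussed in that thread are the per-OSD
bluestore cache sizes. A minimal ceph.conf sketch with illustrative values only,
not numbers recommended in this thread, assuming a Luminous cluster:

[osd]
# bluestore cache per OSD; Luminous defaults are roughly 1 GB (HDD) and 3 GB (SSD)
bluestore_cache_size_hdd = 2147483648
bluestore_cache_size_ssd = 4294967296

Restart the OSDs afterwards; I would not rely on injectargs picking this up at runtime.)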
More important than being able to push those settings further is
probably the ability to actually split your subfolders. I've been using
variants of this [1] script I created a while back to take care of that.
To answer your question, we do run with much larger settings than you're
using. 128/-
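(For context, since the values above are cut off in the digest: the settings being
referred to are presumably the filestore split/merge knobs, and the offline split can
be done per OSD with ceph-objectstore-tool. A rough sketch with illustrative numbers,
not the poster's actual values:

[osd]
filestore_split_multiple = 128
# a negative merge threshold disables merging
filestore_merge_threshold = -16

# offline split of existing subfolders, one stopped OSD at a time:
systemctl stop ceph-osd@0
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
    --op apply-layout-settings --pool <poolname>
systemctl start ceph-osd@0

The script referenced above automates this; the commands here are just the manual
shape of it, as I understand it.)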
You mean you set up CephFS with a cache tier but want to ignore it?
No, that's generally not possible. How would the backup server get
consistent data if it's ignoring the cache? (Answer: It can't.)
-Greg
On Fri, Aug 31, 2018 at 2:35 AM Fyodor Ustinov wrote:
> Hi!
>
> I have cephfs with tiering.
[replying to myself]
I set aside cephfs and created an rbd volume. I get the same splotchy
throughput with rbd as I was getting with cephfs. (image attached)
So, withdrawing this question here, since it isn't a cephfs-specific issue.
#backingout
peter
Peter Eisch
virginpulse.com
|globalchallenge.virginpu
So it sounds like you tried what I was going to do, and it broke
things. Good to know... thanks.
In our case, what triggered the extra index objects was a user running
PUT /bucketname/ around 20 million times -- this apparently recreates
the index objects.
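(For anyone who wants to check whether a bucket has accumulated oversized or extra
index data, a hedged sketch with a placeholder bucket name, assuming a Luminous
radosgw-admin:

# objects per index shard plus a fill status warning
radosgw-admin bucket limit check
# bucket metadata, including the current shard count
radosgw-admin bucket stats --bucket=bucketname
# raw index entries; can be enormous on an index with 20M operations behind it
radosgw-admin bi list --bucket=bucketname | head

None of this is specific to the PUT-loop case above; it is just the usual way to
look at the index.)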
-- dan
On Thu, Aug 30, 2018 at 7:20 PM
Hi Eugen.
Entirely my misunderstanding; I thought there would be something at boot
time (which would certainly not make any sense at all). Sorry.
Before stage 3 I ran the commands you suggested on the nodes, and only one
of them gave me the output below:
###
# grep -C5 sda4 /var/l
I installed a new Luminous cluster. Everything is fine so far. Then I
tried to start RGW and got this error:
2018-08-31 15:15:41.998048 7fc350271e80 0 rgw_init_ioctx ERROR:
librados::Rados::pool_create returned (34) Numerical result out of range
(this can be due to a pool or placement group mi
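(Not part of the original report, but for context: error 34 (ERANGE) from pool_create
while RGW starts up is commonly a PG-count problem, i.e. the pg_num RGW wants for the
pools it auto-creates exceeds what the monitors allow per OSD. A hedged ceph.conf
sketch of the usual knobs, values illustrative:

[global]
# shrink the default PG count used for auto-created pools
osd_pool_default_pg_num = 8
osd_pool_default_pgp_num = 8
# or raise the per-OSD PG limit on the mons (Luminous default is 200, if I recall correctly)
mon_max_pg_per_osd = 300

Whether this matches the truncated message above is a guess; the full error text
would confirm it.)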
OK, here is what I have learned so far from my own test environment. (Keep in
mind I have only had a test setup for a year.) The S3 RGW is not that
latency-sensitive, so you should be able to do fine with an HDD-only
cluster. I guess my setup should be sufficient for what you need
to have,
Hi ceph users,
I am setting up a cluster for S3-like storage. To decide on the server
specifications, where can I find the minimum and production-ready
hardware recommendations?
The following URL does not mention it:
http://docs.ceph.com/docs/hammer/start/hardware-recommendations/#minimum-h
Hello there,
I'm trying to reduce the impact of recovery on client operations and am using
mclock for this purpose. I've tested different weights for the queues but didn't
see any impact on real performance.
ceph version 12.2.8 luminous (stable)
Last tested config:
"osd_op_queue": "mclock_opclass",
"
Hi!
I have cephfs with tiering.
Does anyone know if it's possible to mount the file system so that the tiering is
not used?
I.e., I want to mount cephfs on the backup server without using tiering, and on the
samba server with tiering. Is that possible?
WBR,
Fyodor.
In Jewel I used the config below and RGW worked well with nginx. But with
Luminous, nginx does not seem to be able to work with RGW.
10.11.3.57, request: "GET / HTTP/1.1", upstream:
"fastcgi://unix:/var/run/ceph/ceph-client.rgw.ceph-11.asok:", host:
"10.11.3.57:7480"
2018/08/31 16:38:25 [error]
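(I am not the original poster, but for context: on Luminous the commonly suggested
setup is to run RGW with the civetweb frontend and have nginx proxy plain HTTP to it
instead of speaking FastCGI to a socket. A hedged sketch; the port and section name
are only inferred from the log above:

# ceph.conf on the RGW host
[client.rgw.ceph-11]
rgw_frontends = civetweb port=7480

# nginx: proxy instead of fastcgi_pass
location / {
    proxy_pass http://127.0.0.1:7480;
    proxy_set_header Host $host;
}

Whether the fcgi frontend can still be made to work on Luminous I cannot say; this is
just the path of least resistance.)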
On Fri, Aug 31, 2018 at 6:11 AM morf...@gmail.com wrote:
>
> Hello all!
>
> I had an electric power problem. After this I have 2 incomplete PGs, but all
> RBD volumes still work.
>
> But my CephFS does not work. The MDS stops at the "replay" state and MDS-related
> commands hang:
>
> cephfs-journal-tool jou
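(Not advice for this specific cluster, but the usual read-only first steps when an
MDS is stuck in replay, hedged, with a journal backup taken before anything
destructive is attempted:

cephfs-journal-tool journal inspect
cephfs-journal-tool header get
# export a backup of the journal before any recovery operations (file name is arbitrary)
cephfs-journal-tool journal export mds-journal-backup.bin

The destructive recovery steps in the disaster-recovery documentation should only
follow once that backup exists.)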
Hi,
I'm not sure if there's a misunderstanding. You need to track the logs
during the osd deployment step (stage.3); that is where it fails, and
that is where /var/log/messages could be useful. Since the deployment
failed, you have no systemd units (ceph-osd@.service) to log
anything.
Be
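(As a concrete sketch of what tracking the logs during stage.3 can look like,
assuming a DeepSea-based deployment; paths and unit names may differ:

# on the admin/salt master node
salt-run state.orch ceph.stage.3

# meanwhile, on the OSD node where deployment fails
tail -f /var/log/messages
# or, for systemd/udev related messages
journalctl -f
)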