Hello,
On Thu, 7 Feb 2019 08:17:20 +0100 jes...@krogh.cc wrote:
> Hi List
>
We are in the process of moving to the next use case for our Ceph cluster.
(Bulk, cheap, slow, erasure-coded CephFS) storage was the first use case, and
that works fine.
>
> We're currently on luminous / bluestore, if upgra
This seems right. You are doing a single benchmark from a single client.
Your limiting factor will be the network latency. For most networks this is
between 0.2 and 0.3 ms. If you're trying to test the potential of your
cluster, you'll need multiple workers and clients.
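For example, driving the cluster from several clients at once with rados
bench might look roughly like this (a sketch; the pool name, runtime and
thread count are placeholders, not from this thread):
# run this on each of several client machines in parallel; -t is concurrent ops per client
rados bench -p testpool 60 write -t 16 --no-cleanup
rados bench -p testpool 60 rand -t 16
# remove the benchmark objects afterwards
rados -p testpool cleanup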
On Thu, Feb 7, 2019, 2:17 A
Hi List
We are in the process of moving to the next use case for our Ceph cluster.
(Bulk, cheap, slow, erasure-coded CephFS) storage was the first use case, and
that works fine.
We're currently on luminous / bluestore; if upgrading is deemed to
change what we're seeing, then please let us know.
We have 6 O
I'm seeing some interesting performance issues with file overwriting on
CephFS.
Creating lots of files is fast:
for i in $(seq 1 1000); do
echo $i; echo test > a.$i
done
Deleting lots of files is fast:
rm a.*
As is creating them again.
However, repeatedly creating the same file over
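For reference, a minimal way to time the overwrite case (a sketch; the exact
loop being timed is cut off above, so the filename and count are placeholders):
# rewrite the same file 1000 times and compare against the create loop above
time bash -c 'for i in $(seq 1 1000); do echo test > a.1; done'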
Let's try to restrict discussion to the original thread
"backfill_toofull while OSDs are not full" and get a tracker opened up
for this issue.
On Sat, Feb 2, 2019 at 11:52 AM Fyodor Ustinov wrote:
>
> Hi!
>
> Right now, after adding OSD:
>
> # ceph health detail
> HEALTH_ERR 74197563/199392333 ob
Coming back to this.
> I've recently added a host to my ceph cluster, using proxmox 'helpers'
> to add an OSD, e.g.:
>
> pveceph createosd /dev/sdb -journal_dev /dev/sda5
>
> and now I have:
>
> root@blackpanther:~# ls -la /var/lib/ceph/osd/ceph-12
> total 60
> drwxr-xr-x 3 root root 199
On Monday, 12 November 2018 at 15:31 +0100, Marc Roos wrote:
> > > Is anybody using CephFS with snapshots on luminous? CephFS
> > > snapshots are declared stable in mimic, but I'd like to know
> > > about the risks of using them on luminous. Do I risk a complete
> > > CephFS failure or just some not w
Hey all,
The Orchestration weekly team meeting on Mondays at 16:00 UTC has a
new meeting location. The BlueJeans URL has changed so we can start
recording the meetings. Please see instructions below. The event also
has updated information:
To join the meeting on a computer or mobile phone:
https
Note that there are some improved upmap balancer heuristics in
development here: https://github.com/ceph/ceph/pull/26187
-- dan
On Tue, Feb 5, 2019 at 10:18 PM Kári Bertilsson wrote:
>
> Hello
>
> I previously enabled upmap and used automatic balancing with "ceph balancer
> on". I got very good
Hi,
With HEALTH_OK, a mon data dir should be under 2GB even for such a large cluster.
During backfilling scenarios, the mons keep old maps and grow quite
quickly. So if you have balancing, PG splitting, etc. ongoing for
a while, the mon stores will eventually trigger that 15GB alarm.
But the intend
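One rough way to keep an eye on this and reclaim space once the cluster is
healthy again (a sketch; the path and mon ID assume a default deployment):
du -sh /var/lib/ceph/mon/*/store.db      # current on-disk size of the mon store
ceph tell mon.$(hostname -s) compact     # compact this mon's store once HEALTH_OK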
Hi Swami
The limit is somewhat arbitrary, based on cluster sizes we had seen when
we picked it. In your case it should be perfectly safe to increase it.
sage
On Wed, 6 Feb 2019, M Ranga Swami Reddy wrote:
> Hello - Are there any limits for mon_data_size for a cluster with 2PB
> (with 2000+ OSDs
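If you do raise it, something like the following (a sketch; 30 GiB is an
arbitrary example value):
# runtime change on all mons (value is in bytes)
ceph tell mon.* injectargs '--mon_data_size_warn=32212254720'
# persist it across restarts by adding to the [mon] section of ceph.conf:
#   mon_data_size_warn = 32212254720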
On Wed, Feb 6, 2019 at 11:09 AM James Dingwall
wrote:
>
> Hi,
>
> I have been doing some testing with striped rbd images and have a
> question about the calculation of the optimal_io_size and
> minimum_io_size parameters. My test image was created using a 4M object
> size, stripe unit 64k and str
I was trying to set my mimic dashboard cert using the instructions
from
http://docs.ceph.com/docs/mimic/mgr/dashboard/
and I'm pretty sure the lines
$ ceph config-key set mgr mgr/dashboard/crt -i dashboard.crt
$ ceph config-key set mgr mgr/dashboard/key -i dashboard.key
should be
$ ceph conf
Hi,
I have been doing some testing with striped rbd images and have a
question about the calculation of the optimal_io_size and
minimum_io_size parameters. My test image was created using a 4M object
size, stripe unit 64k and stripe count 16.
In the kernel rbd_init_disk() code:
unsigned int obj
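As a quick way to see what the kernel ends up reporting for such an image,
something like this (a sketch; the pool/image names are placeholders, and it
assumes a kernel new enough to map fancy-striped images, showing up as /dev/rbd0):
rbd create --size 10G --object-size 4M --stripe-unit 64K --stripe-count 16 rbd/stripetest
rbd map rbd/stripetest
cat /sys/block/rbd0/queue/minimum_io_size
cat /sys/block/rbd0/queue/optimal_io_size
# for comparison: stripe unit * stripe count = 64K * 16 = 1M,
# while object size * stripe count = 4M * 16 = 64M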
On 06/02/2019 11:14, Marc Roos wrote:
Yes indeed, but for OSDs writing the replication or erasure objects you
get a sort of parallel processing, no?
Multicast traffic from storage has a point in things like the old
Windows provisioning software Ghost, where you could netboot a room full
of c
Hello - Are there any limits for mon_data_size for a cluster with 2PB
(with 2000+ OSDs)?
Currently it is set to 15G. What is the logic behind this? Can we increase
it when we get the mon_data_size_warn messages?
I am getting the mon_data_size_warn message even though there is ample
free space on the disk (a
For EC coded stuff, at 10+4 with 13 others needing data apart from the
primary, they are specifically NOT getting the same data: they are getting
either one of the 10 data chunks or one of the 4 parity chunks, so it
would be nasty to send the full object to all OSDs when each only needs a single chunk.
Den o
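To put rough numbers on that (a sketch for a hypothetical 4 MiB object in a
k=10, m=4 pool; the sizes are illustrative only):
obj=$((4 * 1024 * 1024))   # hypothetical 4 MiB object
k=10; m=4
chunk=$((obj / k))         # each data or parity chunk is ~410 KiB (ignoring padding)
echo "per-OSD chunk: $chunk bytes; full object: $obj bytes"
# sending one chunk to each of the 13 non-primary OSDs moves 13*chunk (~1.3x the object),
# whereas sending the full object to each of them would move 13x the object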
Hi,
we have a compuverde cluster, and AFAIK it uses multicast for node
discovery, not for data distribution.
If you need more information, feel free to contact me either by email or
via IRC (-> Be-El).
Regards,
Burkhard
Yes indeed, but for OSDs writing the replication or erasure objects you
get a sort of parallel processing, no?
Multicast traffic from storage has a point in things like the old
Windows provisioning software Ghost, where you could netboot a room full
of computers, have them listen to a mcast
Multicast traffic from storage has a point in things like the old Windows
provisioning software Ghost, where you could netboot a room full of
computers, have them listen to a mcast stream of the same data/image and
all apply it at the same time, and perhaps re-sync potentially missing
stuff at the
On Tue, 5 Feb 2019 at 10:04, Iain Buclaw wrote:
>
> On Tue, 5 Feb 2019 at 09:46, Iain Buclaw wrote:
> >
> > Hi,
> >
> > Following the update of one secondary site from 12.2.8 to 12.2.11, the
> > following warning has come up.
> >
> > HEALTH_WARN 1 large omap objects
> > LARGE_OMAP_OBJECTS 1 larg
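If it helps, tracking down the offending object usually goes something like
this (a sketch; the pool and object names are placeholders, assuming an RGW
index pool is the one flagged):
ceph health detail                                         # names the pool with the large omap object
grep -i 'large omap object found' /var/log/ceph/ceph.log   # cluster log (on a mon host) names the PG and object
rados -p default.rgw.buckets.index listomapkeys '<object>' | wc -l   # count omap keys on that object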