As long as the cluster is not healthy, the OSDs will require much more space,
depending on the cluster size and other factors. Yes, this is somewhat
normal.
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.ver...@croit.io
Chat: https://t.me/MartinVerges
croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io
200 PGs per OSD is too much; I would suggest 75-100 PGs per OSD.
You can improve latency on HDD clusters by using an external DB/WAL on NVMe.
That might help you.
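As a sketch, assuming BlueStore and placeholder device names, such an OSD could be created with ceph-volume like this:
  ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
Each HDD OSD would get its own DB/WAL partition on the shared NVMe.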
--
Martin Verges
…by such a small amount that it makes no sense in my eyes.
> Does that mean that occasional iSCSI path drop-outs are somewhat
expected?
Not that I'm aware of, but I have no HDD-based iSCSI cluster at hand to
check. Sorry.
--
Martin Verges
…more details and commands at
https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-mon/#recovering-a-monitor-s-broken-monmap.
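The core of the procedure on that page, with mon IDs and paths as placeholders, looks roughly like this (stop the monitor first):
  ceph-mon -i mon-a --extract-monmap /tmp/monmap
  monmaptool /tmp/monmap --rm mon-b
  ceph-mon -i mon-a --inject-monmap /tmp/monmap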
--
Martin Verges
Hello Daniel,
just throw away your crappy Samsung SSD 860 Pro. It won't work in an
acceptable way.
See
https://docs.google.com/spreadsheets/d/1E9-eXjzsKboiCCX-0u0r5fAjjufLKayaut_FOPxYZjc/edit?usp=sharing
for a performance indication of individual disks.
--
Martin Verges
Hello Adam,
in our croit Ceph Management Software, we have a snapshot manager feature
that is capable of doing that.
--
Martin Verges
Hello Tony,
as it is HDD, your CPU won't be a bottleneck at all. Both CPUs are
overprovisioned.
--
Martin Verges
…you are one of very, very few people
and will most probably hit some crazy bugs that will cause trouble. A
high price to pay, in my opinion, just for an "imaginary" performance or
power-reduction benefit. Storage has to run 24x7 all year long without
a single incident. Everything else…
Hello,
we at croit use Ceph on Debian and deploy all our clusters with it.
It works like a charm, and I personally have had quite good experience with
it for ~20 years. It is a fantastic, solid OS for servers.
--
Martin Verges
Hello,
source code should be compressible; maybe just create something like
a tar.gz per repo? That way you get much bigger objects,
which could improve speed and make the data easier to store on any storage
system.
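A minimal sketch of that idea, with repo and pool names as placeholders:
  tar -czf myrepo.tar.gz myrepo/
  rados -p mypool put myrepo.tar.gz myrepo.tar.gz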
--
Martin Verges
…in general we consider even a 2-copy setup not secure enough.
--
Martin Verges
Hello,
you could switch to croit. We can take over existing clusters without
much pain, and then you have a single button to upgrade in the future
;)
--
Martin Verges
Hello Fred,
from hundreds of installations, we can say it is production-ready and
works fine if deployed and maintained correctly. As always, it
depends, but it works for a huge number of use cases.
--
Martin Verges
Hello,
you can buy Arista 7050QX-32S switches; they are 40G switches. They go for
around 500-700€ each and can be paired using MLAG. They work great.
If you'd like to spend more money, the 7060QX-32S are 100G switches
available for around 1500€.
--
Martin Verges
Hello,
you can migrate to Nautilus and skip the outdated Mimic. Save yourself
the trouble of Mimic; it's not worth it.
You can find packages on debian-backports
(https://packages.debian.org/buster-backports/ceph) or the croit
Debian mirror.
--
Martin Verges
…like to bundle with a specific MLAG ID.
On both switches, as an example:
interface Port-Channel1
   switchport access vlan 101
   mlag 1
!
interface Ethernet7/1
   description "MLAG bonding to Server XXX"
   channel-group 1 mode active
!
Hello Mark,
Ceph itself does it incrementally. Just set the value you want to end up
with, and wait for Ceph to get there.
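For example, assuming a pool named mypool, a single command sets the target, and recent releases (Nautilus and later) step towards it on their own:
  ceph osd pool set mypool pg_num 1024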
--
Martin Verges
…throughput. This in turn leads to a higher impact during replication
work, which is particularly prevalent with EC. With EC, not only writes
but also reads must be served from several OSDs.
--
Martin Verges
…if you want to do some specific Ceph workload benchmarks, feel free to drop me a
mail.
--
Martin Verges
> failure-domain=host
Yes (or rack/room/datacenter/...); for regular clusters it's therefore
absolutely no problem, as you correctly assumed.
--
Martin Verges
…the same as having multiple OSDs per NVMe,
as some people do.
--
Martin Verges
> So perhaps we'll need to change the OSD to allow for 500 or 1000 PGs
We had a support case last year where we were forced to set the OSD
limit to >4000 for a few days, and had more than 4k active PGs on that
single OSD. You can do that; however, it is quite uncommon.
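If you ever need such an exception, the limits can be raised temporarily; a sketch with placeholder values:
  ceph config set global mon_max_pg_per_osd 4500
  ceph config set osd osd_max_pg_per_osd_hard_ratio 10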
--
Martin Verges
…you simply press the reboot button and
get a nice, fresh, clean OS booted into your memory. Besides that, it
is easy to maintain, solid, and all your hosts run exactly the same
software and configuration state (kernel, libs, Ceph, everything).
--
Martin Verges
…we encounter clusters that fall apart or have a meltdown just because
they run out of memory, and we use tricks like zram to help them out and
recover their clusters. If I now do that per container/OSD in a finer-grained
way, it will just blow up even more.
--
Martin Verges
…not a single one.
--
Martin Verges
…by adding a hook script within croit, "onHealthDegrate" and
"onHealthRecover", that notifies us using Telegram/Slack/... ;)
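As an illustration only (the hook names above are croit's; the script itself, TOKEN, and CHAT_ID are hypothetical placeholders), such a hook could be a small shell script posting to the Telegram Bot API:
  #!/bin/sh
  # send the first argument as a message to a Telegram chat
  curl -s "https://api.telegram.org/bot${TOKEN}/sendMessage" \
      -d chat_id="${CHAT_ID}" -d text="Ceph health changed: $1"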
--
Martin Verges
> So no, I am not convinced yet. Not against it, but personally I would say
it's not the only way forward.
I agree 100% with your whole answer.
--
Martin Verges
You can change the crush rule to be OSD- instead of HOST-specific. That way
Ceph will put a chunk per OSD and multiple chunks per host.
Please keep in mind that this will cause an outage if one of your hosts
goes offline.
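A sketch of that change for an EC pool, with profile, rule, pool names, and k/m values as placeholders:
  ceph osd erasure-code-profile set ec-by-osd k=4 m=2 crush-failure-domain=osd
  ceph osd crush rule create-erasure ec-osd-rule ec-by-osd
  ceph osd pool set mypool crush_rule ec-osd-rule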
--
Martin Verges
…my confidence.
Sorry, I can't. But as Stefan Kooman wrote, "Best way is to find out
for yourself and your use case(s) with a PoC cluster".
We use it for our own stuff, and customers of croit use it in lots of
installations. However, nothing is better than a PoC to gain more
confidence.
--
Martin Verges
…pain for Debian users, and if it's
still possible we should try to avoid that. Is there something we
could do to help make it happen?
--
Martin Verges
do...@rgw.new-croit-host-C0DE01.service: Failed with
result 'exit-code'.
--
Martin Verges
…free forever version.
--
Martin Verges
…ones, as they are often harder to debug. But of course we do it anyway :).
To have a perfect storage system, strip away anything unnecessary. Avoid any
complexity; avoid anything that might affect your system. Keep it simple,
stupid.
--
Martin Verges
…services will come up
without a problem when you have all configs in place.
--
Martin Verges
Hello Szabo,
you can try it with our docs at
https://croit.io/docs/master/hypervisors/proxmox; maybe they help you
connect your Ceph cluster to Proxmox.
--
Martin Verges
…you can turn to open-source alternatives
that are massively cheaper per IO and only minimally more expensive per GB.
I therefore believe stripping out overhead is also an important topic for
the future of Ceph.
--
Martin Verges
…there are even problems with containers if you don't use
version X of Docker. That's what the past taught us; why should it be
better in the future with even more container environments? Have you tried
running Rancher on Debian in the past? It breaks apart due to iptables or
other issues.
--
Martin Verges
…hundred supported
clusters, we never encountered a BGP deployment in the field. It's always
just in theory or in testing that we hear about BGP.
--
Martin Verges
…write to a new temporary file in the pool,
remove the old file,
rename the temp file to the old file's location.
--
Martin Verges
Just PXE boot whatever OS you like at the time. If you need to switch to
another, a reboot is enough to change the OS. It's even possible without
containers, so absolutely no problem at all.
--
Martin Verges
…In addition, put 2x25GbE/40GbE in the servers and you need only a few of them
to simulate a lot. This would save costs, make maintenance easier, and
give you much more flexibility, for example running tests on different OSes,
injecting latency, simulating errors, and more.
--
Martin Verges
…be a little bit
faster as well.
We have quite some experience with that and can be of help if you need more
details and vendor suggestions.
--
Martin Verges
Use pacific for new deployments.
--
Martin Verges
…active number to your liking.
> From the perspective of getting the maximum bandwidth, which one should I
choose, CephFS or Ceph S3?
Choose what's best for your application / use-case scenario.
--
Martin Verges
croit would happily support that.
--
Martin Verges
…some super old release to
Octopus to get them running again, or have proper tooling to fix the issue.
But I agree, we at croit are still afraid of pushing our users to Pacific,
as we encounter bugs in our tests. This however will change soon, as we are
close to a stable enough Pacific release…
…for public service.
--
Martin Verges
…is important. However, when adding complexity, you
endanger that.
--
Martin Verges
…absolutely
no processes, personnel, or structure to be a support organization.
Therefore companies like ours do the support around Ceph.
--
Martin Verges
…not lose
data, and to keep running without downtime even if one datacenter burns down.
But this all comes with costs, sometimes quite high costs. Often it's cheaper
to live with a short interruption, or to build two separate systems, than to
add more nines to the availability of a single one.
--
Martin Verges
As the price for SSDs is the same regardless of the interface, I would not
invest so much money in a still slow and outdated platform.
Just buy some new chassis as well and go NVMe. It adds only a little cost
but will increase performance drastically.
--
Martin Verges
…Leave min_size at 2 as well and accept the potential downtime!
--
Martin Verges
Some say it is, some say it's not.
Every time I try it, it's buggy as hell and I can destroy my test clusters
with ease. That's why I still avoid it. But as you can see in my signature,
I am biased ;).
--
Martin Verges
…it should be faster than Pacific. Maybe try
to jump away from the Pacific release into the unknown!
--
Martin Verges
…please feel free to
contact me and I will show you how we do it. We also have reseller options;
maybe that's something for you.
--
Martin Verges
…the TCO
even more, and if you have a way, I would love to hear about it.
--
Martin Verges
…cheap but capable HDD node. I
never saw a better offer for big fat systems on price per TB and TCO.
Please remember, there is no best node for everyone; this node is not the
best or fastest on the market, just an example ;)
--
Martin Verges
Hello XuYun,
In my experience, I would always disable swap; it won't do any good.
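For example:
  swapoff -a
  # and remove or comment out the swap entries in /etc/fstab so it stays off after reboot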
--
Martin Verges
Hello Nghia,
just use one network interface card and run frontend and backend traffic
over the same one. No problem with that.
If you have a dual-port card, use both ports as an LACP channel, and maybe
separate the traffic using VLANs if you want to, but that is not required either.
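On Debian with ifupdown/ifenslave, such an LACP bond might look like this (interface names and addressing are placeholders):
  auto bond0
  iface bond0 inet static
      address 192.0.2.10
      netmask 255.255.255.0
      bond-slaves eno1 eno2
      bond-mode 802.3ad
      bond-xmit-hash-policy layer3+4
      bond-miimon 100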
--
Martin Verges
Hello Moritz,
drop the EVO disk and use an SSD that works well with Ceph. For example,
use a PM883 / PM983 from the same vendor and you will see a huge performance
increase.
--
Martin Verges
…the maximum peak performance), then drop the
MTU to default ;).
If anyone has other real-world benchmarks showing huge differences
in regular Ceph clusters, please feel free to post them here.
--
Martin Verges
Hello,
as I find it a good idea and couldn't find an existing one, I just created
https://t.me/ceph_users.
Please feel free to join, and let's get this channel started ;)
--
Martin Verges
Hello,
you could use another deployment and management solution to get NFS and
everything with ease. Take a look at
https://croit.io/docs/croit/master/gateways/nfs.html#services to see how
easy it would be to deploy NFS.
--
Martin Verges
Hello,
take a look at croit.io; we believe we have the most sophisticated Ceph
deployment and management solution.
If something is missing, please let us know.
--
Martin Verges
…network packet-loss monitoring into our Ceph management solution to help
customers track down their network issues.
Therefore choosing a single network strongly increases the reliability and
availability of your cluster.
--
Martin Verges
…tricky and not suggested for inexperienced users.
If you don't touch the Ceph disks at all, the service will come up again
without anything to change on your side. Sometimes it's better to clean
up some old mess and keep working the way you currently do.
--
Martin Verges
I agree; please check min_size to cover the "min 1 / max 2" configs, as we
have done in our software for our users for years. It is important and it
can prevent lots of issues.
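A quick way to spot risky pools is to scan the pool details for a min_size of 1:
  ceph osd pool ls detail | grep 'min_size 1'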
--
Martin Verges
Hello,
just delete the old one and deploy a new one.
Make sure to have a quorum (2 of 3 or 3 of 5) online while doing so.
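A rough sketch of the procedure, with mon-c as a placeholder ID (see the Ceph docs on adding/removing monitors for the full steps):
  ceph mon remove mon-c
  # then on the replacement host:
  ceph mon getmap -o /tmp/monmap
  ceph auth get mon. -o /tmp/mon.keyring
  ceph-mon -i mon-c --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
  systemctl start ceph-mon@mon-c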
--
Martin Verges
Yes, no problem
--
Martin Verges
Hello,
yes this is correct. Sorry for the inconvenience.
--
Martin Verges
I did not see 10 cores, but 7 cores per OSD over a long period on PM1725a
disks with around 60k IO/s, according to sysstat for each disk.
--
Martin Verges
…PCIe PM1725a NVMe OSDs and 12 HDD OSDs).
Unfortunately I have no console log output that would show more details
like the IO pattern.
--
Martin Verges
Without knowing the source code, and just from my observations, I would say
that every time the OSD map changes, the crush/pgmap tries to adjust to it.
However, a running backfill is not stopped; only backfill_wait would be
reconsidered.
--
Martin Verges
Hello Gesiel,
we at croit provide worldwide services in English, but we are not located
in Brazil.
--
Martin Verges
Hello,
we did some local testing a few days ago on a new installation of a small
cluster.
Our iSCSI implementation showed a performance drop of 20-30%
against krbd.
--
Martin Verges
…drives, take a look at the uncached write latency. The lower the value,
the better the drive will perform.
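One common way to measure that is a single-threaded, direct, sync 4k write test with fio. Careful: this overwrites the device (the path is a placeholder):
  fio --name=latency --filename=/dev/sdX --direct=1 --sync=1 \
      --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based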
--
Martin Verges
We had this with older Ceph versions; maybe just try restarting all OSDs of
the affected PGs.
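A sketch of how to find and restart them (the PG and OSD IDs are placeholders):
  ceph pg dump_stuck                # list the stuck PGs
  ceph pg map 2.5                   # show which OSDs serve a given PG
  systemctl restart ceph-osd@17     # restart those OSDs one at a time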
--
Martin Verges
…loads, spinning media is
never an option; use flash!
--
Martin Verges
Hello,
we (croit GmbH) are a founding member of the Ceph Foundation, and we build
the packages from the official git repository to ship with our own
solution.
However, we are not Ceph itself, so this is not an official mirror.
--
Martin Verges
…https://docs.ceph.com/docs/master/dev/cephfs-snapshots/
If you have questions or want some consulting to get the best Ceph
cluster for the job, please feel free to contact us.
--
Martin Verges
…have in a server, as it is very slow and can
cause long downtimes.
--
Martin Verges
…By the way, all of the multiple hundred croit-based Ceph deployments are
100% swap-free. As we boot over the network directly into RAM, there is no
swap disk that would be available. We don't have any issues with this and I
doubt we ever will.
--
Martin Verges
There should be no issue and we have a lot of systems with multiple IPs.
--
Martin Verges
…hassle, no pain, no OS trouble. And it all comes at
absolutely no cost!
--
Martin Verges
Hello,
according to some prices we have heard so far, the Seagate dual-actuator
HDDs will cost around 15-20% more than single-actuator ones.
We can help with a good hardware selection if you are interested.
--
Martin Verges
…Before I think about 40 vs 25/50/100 GbE, I
would reduce the latency of these disks.
--
Martin Verges
Hello Dave,
you can configure Ceph to pick multiple OSDs per host and therefore work
like a classic RAID.
It will cause a downtime whenever you have to do maintenance on that system,
but if you plan to grow quite fast, it may be an option for you.
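A sketch of such a rule for a replicated pool (rule and pool names are placeholders):
  ceph osd crush rule create-replicated rep-by-osd default osd
  ceph osd pool set mypool crush_rule rep-by-osd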
--
Martin Verges
Hello Frank,
we are always looking for Ceph/Linux consultants.
--
Martin Verges
…than replica,
but cost-efficient and more than OK for most workloads. If you split generic
VMs and databases (or similar workloads), you can save a lot of money with
EC.
--
Martin Verges
Depends on your current SSDs and the new SSDs. It is highly likely that
most of the performance increase will come from choosing good new NVMe. In
addition, a higher clock frequency will increase IO as well, but only if it
is a bottleneck.
--
Martin Verges
*cough* use croit to deploy your cluster, then you have a well tested
OS+Ceph image and no random version change ;) *cough*
--
Martin Verges
eph.8.html.
--
Martin Verges
Hello Chad,
starting with the problems from lost connections with the kernel CephFS
mount, up to a much simpler service setup, there are plenty.
But what would be the point in stacking different tools (kernel mount, SMB
service, ...) together untested, just because you can?
--
Martin Verges
> …for object size, metadata when using CephFS?
Leave them at the defaults unless you know you have a special case. A lot
of the issues we see in the wild come from bad configurations, copy-pasted
from a random page found on Google.
--
Martin Verges
This is too little memory; we have already seen MDS daemons with well over
50 GB of RAM requirements.
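The knob that drives most of that is the MDS cache size; a sketch with a placeholder value of ~16 GiB (in bytes):
  ceph config set mds mds_cache_memory_limit 17179869184
Note that the daemon's actual RSS typically exceeds this cache limit, so plan host RAM accordingly.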
--
Martin Verges
…you boot 3 or 300 nodes; all
boot the exact same image in a few seconds.
...and lots more.
Please do not hesitate to contact us directly. We always try to offer an
excellent service and are strongly customer-oriented.
--
Martin Verges