>>
>> You can do that for a PoC, but that's a bad idea for any production
>> workload. You'd want at least three nodes with OSDs to use the default RF=3
>> replication. You can do RF=2, but at the peril of your mortal data.
>
> I'm not sure I agree - I think size=2, min_size=2 is no worse t
Disclaimer: I'm fairly new to Ceph, but I've read a bunch of threads
on the min_size=1 issue, since it perplexed me when I started: one
replica is generally considered fine in many other applications.
However, there really are some unique concerns with Ceph beyond just the
number of disks you c
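For anyone following along, size and min_size are per-pool settings; a minimal sketch of checking and changing them, assuming a replicated pool named "mypool" (a placeholder, not from the thread):
    ceph osd pool get mypool size         # current replica count
    ceph osd pool set mypool size 3       # keep three copies of every object
    ceph osd pool set mypool min_size 2   # refuse I/O when fewer than two copies are available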
On 2023-12-22 03:28, Robert Sander wrote:
Hi,
On 22.12.23 11:41, Albert Shih wrote:
for n in 1-100
  Take the OSDs on server n offline
  Uninstall Docker on server n
  Install Podman on server n
  Redeploy on server n
end
Yep, that's basically the procedure.
But first try it on a test cluster.
> I have manually configured a Ceph cluster with CephFS on Debian Bookworm.
Bookworm support is very, very recent, I think.
> What is the difference between installing with cephadm and a manual
> install?
> Are there any benefits that you miss with a manual install?
A manual install is dramatically m
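For comparison, a cephadm install bootstraps the whole cluster from a single command; a minimal sketch, with the IP addresses and hostnames below as placeholders:
    cephadm bootstrap --mon-ip 192.168.1.10      # first host becomes mon + mgr
    ceph orch host add host2 192.168.1.11        # enrol further hosts
    ceph orch apply osd --all-available-devices  # let the orchestrator create OSDs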
Hi all,
I am all new to Ceph and I come from Gluster.
We have had our eyes on Ceph for several years,
and as the Gluster project seems to be slowing down, we
now think it is time to start looking into Ceph.
I have manually configured a Ceph cluster with CephFS on Debian
Bookworm.
What is the difference
> Sorry, I thought of one more thing.
>
> I was actually re-reading the hardware recommendations for Ceph, and it seems
> to imply that both RAID controllers and HBAs are bad ideas.
Advice I added, most likely ;) "RAID controllers" *are* a subset of HBAs, BTW.
The nomenclature can be confusing
Sorry, I thought of one more thing.
I was actually re-reading the hardware recommendations for Ceph, and it seems to
imply that both RAID controllers and HBAs are bad ideas.
I recall that RAID controllers are suboptimal, but I guess
I don't understand how you would actu
I'd like to say it was something smart, but it was a bit of luck.
I logged in to a hypervisor (we run OSDs and OpenStack hypervisors on the
same hosts) to deal with another issue, and while checking the system I
noticed that one of the OSDs was using a lot more CPU than the others. It
made me
Hello,
I would like to share a quite worrying issue I’ve just found on one of my
production clusters.
A user successfully created a bucket with the name of a bucket that already exists!
He is not the bucket owner (the original user is), but he is able to see it when he
does ListBuckets over the S3 API.
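If it helps with debugging, the owner RGW has recorded for that name can be checked from the admin side; the bucket and user names below are placeholders:
    radosgw-admin bucket stats --bucket=mybucket   # shows owner, bucket id, creation time
    radosgw-admin bucket list --uid=originaluser   # buckets listed for the original owner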
I'm currently trying to set up the Ceph Dashboard using the official documentation
on how to do so.
I've managed to log in by just visiting the URL and port, and by visiting it
through haproxy. However, using haproxy to visit the site results in odd
behavior.
At my first login, nothing loads on the
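For reference, this is roughly the haproxy shape I'd expect for the dashboard, with health checks so traffic only reaches the active mgr (the standbys typically answer with a redirect and get marked down); addresses, port and cert path are assumptions:
    frontend ceph_dashboard_fe
        bind *:8443 ssl crt /etc/haproxy/dashboard.pem
        default_backend ceph_dashboard_be

    backend ceph_dashboard_be
        option httpchk GET /
        http-check expect status 200
        server mgr1 192.168.1.21:8443 check check-ssl ssl verify none
        server mgr2 192.168.1.22:8443 check check-ssl ssl verify none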
Hi again,
It turns out that our RADOS cluster wasn't that happy: the RGW index pool
wasn't able to handle the load. Scaling the PG count helped (256 to 512),
and the RGW is back to normal behaviour.
There is still a huge number of read IOPS on the index, and we'll try to
figure out what's happening.
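For the record, the PG change is a single command per pool; a sketch assuming the index pool has the usual default name default.rgw.buckets.index (verify yours first):
    ceph osd pool ls | grep index                            # confirm the pool name
    ceph osd pool set default.rgw.buckets.index pg_num 512   # pgp_num follows automatically on recent releases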
Hi!
As I'm reading through the documentation about subtree pinning, I was wondering
if the following is possible.
We've got the following directory structure.
/
/app1
/app2
/app3
/app4
Can I pin /app1 to MDS ranks 0 and 1, the directory /app2 to rank 2, and finally
/app3 and /app4 to ra
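For what it's worth, a plain export pin ties one directory to a single rank via an xattr; a sketch assuming the filesystem is mounted at /mnt/cephfs (placeholder). As far as I understand, spreading one directory over two ranks would need distributed ephemeral pinning (ceph.dir.pin.distributed) rather than a plain pin:
    setfattr -n ceph.dir.pin -v 2 /mnt/cephfs/app2   # pin the /app2 subtree to rank 2
    setfattr -n ceph.dir.pin -v 3 /mnt/cephfs/app3   # pin the /app3 subtree to rank 3
    getfattr -n ceph.dir.pin /mnt/cephfs/app2        # check the pin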
On 22.12.23 11:46, Marc wrote:
Does Podman still have the behaviour that Docker has, where if you kill the Docker
daemon all tasks are killed?
Podman does not come with a daemon to start containers.
The Ceph orchestrator creates systemd units to start the daemons in
podman containers.
Regards
--
Podman doesn't use a daemon; that's one of the basic ideas.
We also use it in production, BTW.
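As an illustration of the daemonless model: on a cephadm-managed host each daemon gets its own systemd unit, named after the cluster fsid, so there is no central container daemon to kill. The fsid below is made up:
    # list the per-daemon units the orchestrator created
    systemctl list-units 'ceph-*'
    # restarting one unit restarts only that daemon's podman container
    systemctl restart ceph-d6a3b2c1-1234-4f00-9abc-0123456789ab@osd.3.service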
- On 22 Dec 2023 at 11:46, Marc m...@f1-outsourcing.eu wrote:
>> >
>> >> It's been claimed to me that almost nobody uses podman in
>> >> production, but I have no empirical data.
>
> As opposed to
Hi,
On 22.12.23 11:41, Albert Shih wrote:
for n in 1-100
  Take the OSDs on server n offline
  Uninstall Docker on server n
  Install Podman on server n
  Redeploy on server n
end
Yep, that's basically the procedure.
But first try it on a test cluster.
Regards
--
Robert Sander
Heinlein Consulting
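For what it's worth, a rough shell sketch of that per-host loop using the orchestrator; the hostnames, package names and the maintenance-mode step are assumptions, not a tested recipe:
    ceph osd set noout                              # keep data from rebalancing while OSDs are briefly down
    for host in host01 host02 host03; do            # placeholder hostnames
        ceph orch host maintenance enter $host
        ssh $host 'apt-get remove -y docker.io && apt-get install -y podman'
        ceph orch host maintenance exit $host
        # recreate each daemon on the host so it comes back under podman
        for d in $(ceph orch ps $host | awk 'NR>1 {print $1}'); do
            ceph orch daemon redeploy $d
        done
    done
    ceph osd unset noout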
> >
> >> It's been claimed to me that almost nobody uses podman in
> >> production, but I have no empirical data.
As opposed to docker or to having no containers at all?
> > I even converted clusters from Docker to podman while they stayed
> > online thanks to "ceph orch redeploy".
> >
Does po
That's good to know; I have the same in mind for one of the clusters
but haven't had the time to test it yet.
Quoting Robert Sander:
On 21.12.23 22:27, Anthony D'Atri wrote:
It's been claimed to me that almost nobody uses podman in
production, but I have no empirical data.
I even converted clusters from Docker to podman while they stayed online
thanks to "ceph orch redeploy".
Hey Ceph-Users,
RGW does have options [1] to rate limit ops or bandwidth per bucket or user,
but those only come into play when the request is authenticated.
I'd like to also protect the authentication subsystem from malicious or
invalid requests.
So in case e.g. some EC2 credentials are not v
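For reference, the per-user/per-bucket limits referred to in [1] look roughly like this in recent releases (the uid and numbers are made up), and as noted they only kick in after the request has been authenticated:
    radosgw-admin ratelimit set --ratelimit-scope=user --uid=johndoe \
        --max-read-ops=1024 --max-write-ops=256
    radosgw-admin ratelimit enable --ratelimit-scope=user --uid=johndoe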
On 21.12.23 22:27, Anthony D'Atri wrote:
It's been claimed to me that almost nobody uses podman in production, but I
have no empirical data.
I even converted clusters from Docker to podman while they stayed online
thanks to "ceph orch redeploy".
Regards
--
Robert Sander
Heinlein Consulting