stead of writing it
to the disk directly.
Thanks,
Simon
From: Robert Sander
Sent: Monday, 6 September 2021 16:48:52
To: Simon Sutter; Marc; ceph-users@ceph.io
Subject: Re: [ceph-users] Re: Performance optimization
On 06.09.21 at 16:44, Simon Sutter wrote:
> Hello
>
> >> - The one 6TB disk, per node?
> >
> > You get bad distribution of data, why not move drives around between
> > these two clusters, so you have more of the same in each.
>
> I would assume that this behaves exactly the other way around. As long
> as you have the same
SMR ones? Because they will absolutely
destroy any kind of performance (Ceph does not use write caches due to
power-loss concerns, so they kinda do their whole magic for each
write request).
Greetings
On 9/6/21 10:47 AM, Simon Sutter wrote:
Hello everyone!
I have built two clusters with old hardware, which is lying around; the
possibility to upgrade is there.
The clusters' main use case is hot backup. This means they are written to 24/7,
where 99% is writing and 1% is reading.
It should be based on hard disks.
At the moment, the
From: Stefan Kooman
Sent: Thursday, 1 July 2021 16:42:56
To: Simon Sutter; ceph-users@ceph.io
Subject: Re: [ceph-users] configure fuse in fstab
On 7/1/21 4:14 PM, Simon Sutter wrote:
Hello Everyone!
I'm trying to mount CephFS with the fuse client under Debian 9 (ceph-fuse
10.2.11-2).
Ceph is on the latest Octopus release.
The direct command is working, but writing it in fstab does not.
Command I use:
ceph-fuse --id dev.wsc -k /etc/ceph/ceph.clinet.dev.wsc.keyring -r
/t
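In case it helps anyone searching the archive later: with a client as old as ceph-fuse 10.2.11, the fstab helper usually expects the id in the device field, roughly like the sketch below (the mount point /mnt/cephfs and the conf path are placeholders; newer clients use the ceph.id=... option syntax in the options column instead):
id=dev.wsc,conf=/etc/ceph/ceph.conf  /mnt/cephfs  fuse.ceph  defaults,_netdev  0  0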
Hello everyone!
We had a switch outage and the ceph kernel mount did not work anymore.
This is the fstab entry:
10.99.10.1:/somefolder /cephfs ceph
_netdev,nofail,name=cephcluster,secret=IsSecret 0 0
I reproduced it by disabling the VLAN on the switch on which the Ceph cluster is
reachable.
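Not a fix for the hang itself, but for reference: the kernel client accepts a comma-separated monitor list in the device field, so the mount does not depend on a single address. A sketch, where the extra IPs 10.99.10.2 and 10.99.10.3 are placeholders for the other mons:
10.99.10.1,10.99.10.2,10.99.10.3:/somefolder /cephfs ceph _netdev,nofail,name=cephcluster,secret=IsSecret 0 0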
Hello everyone!
I'm trying to calculate the theoretical usable storage of a ceph cluster with
erasure coded pools.
I have 8 nodes and the profile for all data pools will be k=6 m=2.
If every node has 6 x 1 TB, wouldn't the calculation be like this:
RAW capacity: 8 nodes x 6 disks x 1 TB = 48 TB
Loss t
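For reference, the usual rule of thumb for usable EC capacity is raw * k / (k + m), before any nearfull/full reserve:
usable = 48 TB * 6 / (6 + 2) = 36 TB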
From: Stefan Kooman
Sent: Tuesday, 8 September 2020 11:38:29
To: Simon Sutter; ceph-users@ceph.io
Subject: Re: [ceph-users] Syncing cephfs from Ceph to Ceph
On 2020-09-08 11:22, Simon Sutter wrote:
Hello,
Is it possible to somehow sync a Ceph cluster from one site to a Ceph cluster at another site?
I'm just using the CephFS feature and no block devices.
Being able to sync cephfs pools between two sites would be great for a hot
backup, in case one site fails.
Thanks in advance,
Simon
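While waiting for answers: one crude approach, assuming both CephFS trees are mounted on some transfer host (the mount points below are made up), would be a periodic rsync:
rsync -aHAX --delete /mnt/cephfs-site-a/ /mnt/cephfs-site-b/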
advanced public_bind_addr v2:[ext-addr]:0/0 *
Do I have to change them somewhere else too?
Thanks in advance,
Simon
From: Janne Johansson [mailto:icepic...@gmail.com]
Sent: 27 August 2020 20:01
To: Simon Sutter
Subject: Re: [ceph-users] cephfs needs access from two networks
On Thu 27
Hello,
So I know the mon services can only bind to one IP.
But I have to make it accessible to two networks, because internal and external
servers have to mount the CephFS.
The internal IP is 10.99.10.1 and the external is some public IP.
I tried NAT'ing it with this: "firewall-cmd --zone=p
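Not trying to reconstruct the command that got cut off above, but for reference, forwarding the mon ports with firewalld would look roughly like this (the zone name public is an assumption; 3300/6789 are the default v2/v1 mon ports; add --permanent to persist):
firewall-cmd --zone=public --add-forward-port=port=3300:proto=tcp:toaddr=10.99.10.1
firewall-cmd --zone=public --add-forward-port=port=6789:proto=tcp:toaddr=10.99.10.1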
Hello
I'm trying to get Grafana working inside the Dashboard.
If I click the "Overall Performance" tab, I get an error, because the iframe
tries to connect to the internal hostname, which cannot be resolved from my
machine.
If I open Grafana directly, everything works.
How can I tell the dashboard
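For anyone hitting the same thing: the URL the dashboard embeds in the iframe can be overridden, roughly like this (the hostname is a placeholder for a name your browser can resolve):
ceph dashboard set-grafana-api-url https://grafana.example.com:3000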
mer...@croit.io]
Sent: Wednesday, 24 June 2020 17:35
To: Simon Sutter
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Feedback of the used configuration
Have a look at cephfs subvolumes:
https://docs.ceph.com/docs/master/cephfs/fs-volumes/#fs-subvolumes
They are internally just directories
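A minimal sketch of creating and locating one (the volume name cephfs and subvolume name backup1 are placeholders):
ceph fs subvolume create cephfs backup1
ceph fs subvolume getpath cephfs backup1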
Hello,
After two months of the "Ceph trial and error game", I finally managed to get an
Octopus cluster up and running.
The unconventional thing about it is that it's just for hot backups, no virtual
machines on there.
All the nodes are without any caching SSDs, just plain HDDs.
At the moment ther
Hello,
If you do it like Sebastian told you, you will automatically deploy OSDs.
For a beginner I would recommend doing it semi-automated, so you know a bit
more about what's going on.
So first do the "ceph orch device ls", which should print every disk on all
nodes.
Then I recommend zapping the devices.
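A rough sketch of that semi-automated flow (node01 and /dev/sdb are placeholders for your host and disk):
ceph orch device ls
ceph orch device zap node01 /dev/sdb --force
ceph orch daemon add osd node01:/dev/sdb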
ow if my installation was not
good, or if cephadm just had a problem installing it.
Also, the error message itself is not very meaningful.
For example, if I forget to install python3, the "python3 is not installed"
message is very clear.
Simon
Hello,
I made a mistake while deploying a new node on Octopus.
The node is a freshly installed CentOS 8 machine.
Before I did a "ceph orch host add node08", I pasted the wrong command:
ceph orch daemon add osd node08:cl_node08/ceph
That did not return anything, so I tried to add the node first with
Hello,
What is the current status of using multiple CephFS filesystems?
In Octopus I get lots of warnings that this feature is still not fully
tested, but the latest entry regarding multiple CephFS filesystems on the
mailing list is from about 2018.
Is anyone using multiple CephFS filesystems in production?
Thanks in advance
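For reference, a sketch of what enabling a second filesystem looks like (pool and fs names are placeholders; this is not a statement about production readiness):
ceph fs flag set enable_multiple true --yes-i-really-mean-it
ceph fs new secondfs secondfs_metadata secondfs_data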
Hello,
When you deploy Ceph to other nodes with the orchestrator, they "just" have the
containers you deployed to them.
This means in your case you started the monitor container on ceph101, and you
must have installed at least the ceph-common package (otherwise the ceph command
would not work).
If
Hello,
You just copied the same message.
I'll make a ticket in the tracker.
Regards,
Simon
From: Amudhan P
Sent: Thursday, 11 June 2020 09:32:36
To: Simon Sutter
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Re: Octopus: orchestrator not working correctly with
nfs
Hi,
I have not worked with the orchestrator, but I remember reading somewhere that
the NFS implementation is not supported.
Hello,
Did I not provide enough information, or does simply nobody know how to solve
the problem?
Should I write to the ceph tracker or does this just produce unnecessary
overhead?
Thanks in advance,
Simon
From: Simon Sutter
Sent: Monday, 8 June 2020 10:56
Hello
I know that NFS on Octopus is still a bit under development.
I'm trying to deploy NFS daemons and have some issues with the orchestrator.
For the other daemons, for example monitors, I can issue the command "ceph orch
apply mon 3".
This will tell the orchestrator to deploy or remove monitors
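For comparison, the NFS analogue of that apply command in Octopus also takes a pool (and optional namespace) for the ganesha configuration objects; a sketch with placeholder names:
ceph osd pool create nfs-ganesha 64
ceph orch apply nfs mynfs nfs-ganesha nfs-ns --placement="1"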
Hello Andy,
I had mixed experiences with cephadm.
What I would do:
Check if all your daemons are indeed running in the corresponding containers on
every node.
You can check it with "ceph orch ps"
If that is the case, you can get rid of the old rpms and install the new
ceph-common v15 rpm.
You
To: Simon Sutter; ceph-users@ceph.io
Subject: Re: [ceph-users] Change mon bind address / Change IPs with the
orchestrator
On 6/3/20 4:49 PM, Simon Sutter wrote:
Hello,
I think I misunderstood the internal / public network concepts in the docs:
https://docs.ceph.com/docs/master/rados/configuration/network-config-ref/.
Now there are two questions:
- Is it somehow possible to bind the MON daemon to 0.0.0.0?
I tried it by manually adding the IP in /var/li
Bug #45819: Possible error in deploying-nfs-ganesha docs - Orchestrator - Ceph
<https://tracker.ceph.com/issues/45819>
>
> Zac
> Ceph Docs
>
> On Wed, Jun 3, 2020 at 12:34 AM Simon Sutter
> > Sorry, always the wrong button...
can see, one container fails to connect to the cluster, but where can I find
out why?
Thanks in advance and sorry for the split mail,
Simon
From: Simon Sutter
Sent: Tuesday, 2 June 2020 16:26:15
To: ceph-users@ceph.io
Subject: [ceph-users] Deploy nfs fr
Hello Ceph users,
I'm trying to deploy nfs-ganesha with cephadm on Octopus.
According to the docs, it should be as easy as running the command described here:
https://docs.ceph.com/docs/master/cephadm/install/#deploying-nfs-ganesha
Hello again,
I have a new question:
We want to upgrade a server with an OS based on RHEL 6.
The Ceph cluster is currently on Octopus.
How can I install the client packages to mount CephFS and do a backup of the
server?
Is it even possible?
Are the client packages from Hammer compatible with the Oct
ne help me debugging this?
Cheers
Simon
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
hosttech GmbH | Simon Sutter
hosttech.ch<https://www.hosttech.ch>
WE LOVE TO HOST YOU.
create your own website!
more informati
Hello everyone
I've got a fresh Ceph Octopus installation and I'm trying to set up a CephFS
with an erasure-coded configuration.
The metadata pool was set up as default.
The erasure code pool was set up with this command:
-> ceph osd pool create ec-data_fs 128 erasure default
Enabled overwrites:
->
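For anyone following along after the cut-off: the usual next steps are to allow EC overwrites on the data pool and then create the filesystem (the metadata pool name cephfs_metadata is a placeholder; --force is needed when an EC pool is the default data pool, or you can attach it with ceph fs add_data_pool instead):
ceph osd pool set ec-data_fs allow_ec_overwrites true
ceph fs new ec_fs cephfs_metadata ec-data_fs --force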
Hello Michael,
I had the same problems. It's very unfamiliar if you have never worked with the
cephadm tool.
The way I'm doing it is to go into the cephadm container:
# cephadm shell
Here you can list all containers (for each service, one container) with the
orchestration tool:
# ceph orch ps
a
's a bit easier to understand, and the documentation out there is way
better.
Thanks in advance,
Simon
From: Joshua Schmid
Sent: Tuesday, 5 May 2020 16:39:29
To: Simon Sutter
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Re: Add lvm in cephadm
On 20/05
nce
Simon
From: Simon Sutter
Sent: Tuesday, 5 May 2020 10:43:10
To: ceph-users@ceph.io
Subject: [ceph-users] Add lvm in cephadm
Hello Everyone,
The new cephadm is giving me a headache.
I'm setting up a new test environment, where I have to use LVM partitions,
because I don't have more hardware.
I couldn't find any information about the compatibility of existing LVM
partitions and cephadm/Octopus.
I tried the old metho
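For what it's worth, cephadm can usually be pointed at an existing logical volume directly by path; a sketch with made-up host, VG, and LV names:
ceph orch daemon add osd node01:/dev/my_vg/my_lv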
em.
I think I'll have to reinstall the node.
I'll update you.
Thanks and kind regards,
Simon
From: Gert Wieberdink
Sent: Tuesday, 28 April 2020 21:16:10
To: Simon Sutter; ceph-users@ceph.io
Subject: Re: [ceph-users] Re: Upgrading to Octopus
Sorry for
Hello,
Yes, I upgraded the system to CentOS 8 and now I can install the dashboard module.
But the problem now is that I cannot log in to the dashboard.
I deleted every cached file on my end and reinstalled the mgr and dashboard
several times.
If I try to log in with a wrong password, it tells me th
Sent: Thursday, 23 April 2020 14:41:38
To: Simon Sutter
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Re: Upgrading to Octopus
Simon,
You can try to search for the exact package name, or you can try these repos as
well:
yum -y i
jwt.noarch  1.6.4-2.el7  epel
How do I get either the right packages or a workaround, because I can install
the dependencies with pip?
Regards,
Simon
From: Khodayar Doustar
Sent: Wednesday, 22 April 2020 20:02:04
To: Simon
Hello everybody
In Octopus there are some interesting-looking features, so I tried upgrading
my CentOS 7 test nodes according to:
https://docs.ceph.com/docs/master/releases/octopus/
Everything went fine and the cluster is healthy.
To test out the new dashboard functions, I tried to instal
Thank you very much, I couldn't see the forest for the trees.
Now I have moved a disk and added another one; the problem is gone and I have
8 TB to use.
Thanks again.
Simon Sutter
From: Reed Dier
Sent: Wednesday, 15 April 2020 22:59:12
To: Simon Sutt
tings except the number:
512
Thank you very much
Simon Sutter