Hi.
From our point of view, it's important to keep the disk failure prediction
tool as part of Ceph, ideally as an MGR module. In environments with
hundreds or thousands of disks, it's crucial to know whether, for
example, a significant number of them are likely to fail within a month
- which, in
These are your two Luminous clients:
---snip---
{
    "name": "unknown.0",
    "entity_name": "client.admin",
    "addrs": {
        "addrvec": [
            {
                "type": "none",
                "addr": "172.27.254.7:0",
                "nonce": 44
> On Apr 8, 2025, at 9:13 AM, quag...@bol.com.br wrote:
>
> These 2 IPs are from the storage servers.
What is a “storage server”?
> There are no user processes running on them. They only have the operating
> system and Ceph installed.
Nobody said anything about user processes.
>
>
> Rafael.
> Hi.
>
> From our point of view, it's important to keep the disk failure prediction tool
> as part of Ceph, ideally as an MGR module. In environments with hundreds or
> thousands of disks, it's crucial to know whether, for example, a significant
> number of them are likely to fail within a month
These 2 IPs are from the storage servers.
There are no user processes running on them. They only have the operating system and Ceph installed.
Rafael.
De: "Eugen Block"
Enviada: 2025/04/08 09:35:35
Para: quag...@bol.com.br
Cc: ceph-users@ceph.io
Assunto: Re: [ceph-users] Ceph squid fresh instal
> Maybe one of the many existing projects could be adapted, re-formed,
> re-aimed. Or maybe they're all dead in the water because they failed
> one or more of the above five bullets.
Often the unstated sixth bullet:
— Has a supportable architecture
The ongoing viability of a system should not
What is the locally installed ceph version? Run 'ceph --version' to
get the local rpm/deb package version.
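For example (a generic sketch, nothing here is specific to any particular setup):

ceph --version    # version of the locally installed ceph package
ceph versions     # versions reported by all daemons running in the cluster

Comparing the two quickly shows whether the client package lags behind the cluster.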
Quoting quag...@bol.com.br:
These 2 IPs are from the storage servers.
There are no user processes running on them. They only have the operating
system and Ceph installed.
Rafael.
Hey Rafael,
So these clients probably use kernel mounts with an older kernel
version that gets identified as a Luminous client.
You can try upgrading the kernel and remounting.
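A quick way to double-check that (only the MDS rank below is an assumption):

ceph features                                    # which feature release (e.g. luminous) each group of connected clients reports
ceph tell mds.0 client ls | grep kernel_version  # kernel versions of the CephFS clients on rank 0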
Laimis J.
> On 8 Apr 2025, at 16:13, quag...@bol.com.br wrote:
>
> These 2 IPs are from the storage servers.
What is a “storage server”?
These are machines that only have the operating system and ceph installed.
De: "Anthony D'Atri"
Enviada: 2025/04/08 10:19:08
Para: quag...@bol.com.br
Cc: ebl...@nde.ag, ceph-users@ceph.io
Assunto: Re: [ceph-users] Ceph squid fresh install
> On Apr 8
Hi Linas,
Is the intent of purging this data mainly due to cost concerns? If
the goal is purely preservation of data, the likely cheapest and least
maintenance-intensive way of doing this is a large-scale tape archive.
Such archives (purely based on a Google search) exist at LLNL and OU,
The intent is the US administration’s assault on science; Linas doesn’t
*want* to do it, he wants to preserve the data in the hope of a better future.
> On Apr 8, 2025, at 9:28 AM, Alex Gorbachev wrote:
>
> Hi Linas,
>
> Is the intent of purging this data mainly due to cost concerns? If
This is a hotfix release to address only one issue:
https://github.com/ceph/ceph/pull/62711 and has limited testing.
Details of this release are summarized here:
https://tracker.ceph.com/issues/70822#note-1
Release Notes - TBD
LRC upgrade - excluded from this release
Gibba upgrade - excluded from this release
I don't think Linas is only concerned with public data, no.
The United States Government has had in place for many years effective
means of preserving their data. Some of those systems may be old and
creaky, granted, and not always the most efficient, but they suffice.
The problem is that th
Hi all,
Our IRC/Slack/Discord bridge started failing ~2 days ago. Slack
deprecated the use of legacy bots [1], so at the moment, IRC/Discord is the
only bridge working.
[1]
~~~
time="2025-04-08T18:09:30Z" level=error msg="Could not retrieve channels:
slack.SlackErrorResponse{Err:"legacy_custo
I think it's just hard-coded in the cephadm binary [0]:
_log_file_handler = {
    'level': 'DEBUG',
    'class': 'logging.handlers.WatchedFileHandler',
    'formatter': 'cephadm',
    'filename': '%s/cephadm.log' % LOG_DIR,
}
I haven't looked too deeply yet into whether it can be overridden, but didn't find
Interesting. So it's like that for everybody?
Meaning cephadm.log logs debug messages.
Awesome. Wish I knew that before I spent half a day trying to overwrite it.
Some of your messages come across without newlines, which makes them hard to
read.
Try
ceph tell mds.0 client ls | grep kernel_version
and
ceph daemon mon.whatever sessions
> On Apr 7, 2025, at 4:34 PM, quag...@bol.com.br wrote:
>
> Hi Anthony, Thanks for your reply. I don't have any client c
Hi!
Is block alignment for databases a thing with RBD?
I heard something about 16K alignment ...
If yes, how can I check what alignment an RBD has?
And yes, I need to run my database on Ceph as I have no other storage
with redundancy.
Hints for optimizing a single block device for a datab
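Not an authoritative answer, but as a starting point the image layout is easy to inspect; the pool/image names and sizes below are made up:

rbd info rbd/dbvol                                   # shows the object size (and striping, if set) of an existing image
rbd create rbd/dbvol2 --size 100G --object-size 4M   # object size is fixed at creation time (4M is the default, shown only as an example)

Whatever alignment the database expects then also has to be matched by the partition and filesystem layout inside the image.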
On 08/04/2025, Yuri Weinstein wrote:
> This is a hotfix release to address only one issue:
> https://github.com/ceph/ceph/pull/62711 and has limited testing.
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/70822#note-1
>
> Release Notes - TBD
> LRC upgrade -
Hi all,
This most likely boils down to the kernel version of the connected clients. We
faced the same problem some time ago and simply evicted the client MDS
connections, followed by a kernel update on the servers doing the mount.
I can check the version details later for more clarity.
Best,
Lai
If you have a line on the data, I have connections that can store it or I
can consult pro bono on building a system to store it.
However, wan-ceph is not the answer here.
On Sun, Apr 6, 2025, 11:08 PM Linas Vepstas wrote:
> OK what you will read below might sound insane but I am obliged to ask.
I’ve done the same as well.
It doesn’t help that smartctl doesn’t even try to have consistent output for
SATA, SAS, and NVMe drives, and especially that it doesn’t enforce uniform
attribute labels. Yes, the fundamental problem is that SMART is mechanism
without policy, and inconsistently implemented.
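For example, the same invocation returns quite different structures per transport (device names are assumptions; -j needs a reasonably recent smartmontools):

smartctl -x -j /dev/sda     # SATA/SAS: vendor-defined attribute table, labels vary by vendor
smartctl -x -j /dev/nvme0   # NVMe: a fixed health information log with entirely different field names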
So, OSD nodes that do not have any OSDs on them yet?
> On Apr 8, 2025, at 9:41 AM, quag...@bol.com.br wrote:
>
> More complete description:
>
> 1-) I formatted and installed the operating system
>
> 2-) This is "ceph installed":
>
> curl --silent --remote-name --location
> https://download.ce
Hi Fabien,
[cc Milind (who owns the tracker ticket)]
On Tue, Apr 8, 2025 at 8:16 PM Fabien Sirjean wrote:
>
> Dear all,
>
> If I am to believe this issue [1], it seems that it is still not
> possible to make files immutable (chattr +i) in cephfs.
That's correct.
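In other words, on a current CephFS mount something like this is expected to fail (the path is an assumption, and the exact error text may vary):

touch /mnt/cephfs/important_file
chattr +i /mnt/cephfs/important_file   # fails today; CephFS does not support these inode flags yet (hence the tracker issue)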
>
> Do you have any update on t
Hi Jiří,
On Mon, Apr 7, 2025 at 8:36 AM Jiří Župka wrote:
>
> Dear conference,
> I would like to ask whether it is safe to use CephFS when cephfs-data-scan
> cleanup is not finished yet?
You could interrupt the cleanup task and use the file system.
BTW, you could also speed up the cleanup process
Hi,
It would be very nice if this module were removed. Everything that a Ceph
operator needs can be covered via smartctl_exporter [1].
Thanks,
k
[1] https://github.com/prometheus-community/smartctl_exporter
Sent from my iPhone
> On 8 Apr 2025, at 02:20, Yaarit Hatuka wrote:
>
> We would l
I guess it's debatable, as is almost everything. ;-)
One of the advantages is that you usually see immediately what is
failing; you don't have to turn on debug first and retry the deployment
or whatever to reproduce. The file doesn't really grow to a huge
size (around 2 MB per day or so) and g
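If you want to see what that amounts to on a host (assuming LOG_DIR is the default /var/log/ceph):

ls -lh /var/log/ceph/cephadm.log*        # current log plus any rotated copies
grep -c DEBUG /var/log/ceph/cephadm.log  # rough count of debug-level lines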
I was trying to analyze the original request, which seems to boil down to
something like the following:
- The goal is to archive a large amount of (presumably public) data on a
community run globally sharded or distributed storage.
- Can Ceph be used for this? Seems no, at least not in a sense of runnin
Can someone paste their copy of the logrotate config here?
The trick with rotating logs is always that the service writing to them needs
to be restarted or told to stop writing so the file handle gets closed.
Otherwise it stays open and the free disk space isn't recovered.
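For cephadm.log specifically, a copytruncate-based policy sidesteps that problem. A minimal sketch follows (not the stock file shipped by Ceph; the path is assumed to be /var/log/ceph):

cat > /etc/logrotate.d/cephadm <<'EOF'
/var/log/ceph/cephadm.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}
EOF

copytruncate avoids having to signal the writer, at the cost of possibly losing a few lines during the copy; cephadm's WatchedFileHandler (quoted earlier in the thread) also reopens the file on its own if it gets moved aside.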
It's the same across all versions I have used so far.
Quoting Alex:
What about Pacific and Quincy?
What about Pacific and Quincy?
Hi,
On 4/8/25 at 14:30, Anthony D'Atri wrote:
anthonydatri@Mac models % pwd
/Users/anthonydatri/git/ceph/src/pybind/mgr/diskprediction_local/models
anthonydatri@Mac models % file redhat/*
redhat/config.json: JSON data
redhat/hgst_predictor.pkl:data
redhat/hgst_scaler.pkl:   data
Dear all,
If I am to believe this issue [1], it seems that it is still not
possible to make files immutable (chattr +i) in cephfs.
Do you have any update on this matter ?
Thanks a lot for all the good work!
Cheers,
Fabien
[1] : https://tracker.ceph.com/issues/10679
What does “ceph installed” mean? I suspect that this description is not
complete.
> On Apr 8, 2025, at 9:21 AM, quag...@bol.com.br wrote:
>
> What is a “storage server”?
> These are machines that only have the operating system and ceph
> installed.
>
>
>
> From: "Anthony D'Atri"
>
This seems to have worked to get the orch back up and put me back to 16.2.15.
Thank you. Debating whether to wait for 18.2.5 to move forward.
-jeremy
> On Monday, Apr 07, 2025 at 1:26 AM, Eugen Block (ebl...@nde.ag) wrote:
> Still no, just edit the unit.run file for the MGRs to use a differe
I mean this bit
'log_file': {
    'level': 'DEBUG',
    'class': 'logging.handlers.WatchedFileHandler',
    'formatter': 'cephadm',
    'filename': '%s/cephadm.log' % LOG_DIR,
}