g SPF -all seems better, but not sure about how easy that would
be to implement... :)
Cheers
Eneko Lacunza
Zuzendari teknikoa | Director técnico
Binovo IT Human Project
Tel. +34 943 569 206 | https://www.binovo.es
Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun
https://www.youtube.com/user/CANALBINOVO
erent racks and rows. In this case the latency should be acceptable
and low.
My question was more related to the redundant NFS and whether you have some
experience with similar setups. I was trying to find out, first of all,
whether what I'm planning to do is feasible.
Thank you so much :)
Cheers!
On 2024-03-05
dvance,
Cheers!
.html
* quincy:
https://lists.proxmox.com/pipermail/pve-devel/2024-February/061798.html
Not sure this has been upstreamed.
Cheers
prefer a decent web interface.
Any comments/recommendations?
Best regards,
Kai
t pool will continue working with only 2 replicas.
For the "near" calculus, you must factor in nearfull and full ratios for
OSDs, and also that data may be unevenly distributed among OSDs...
The choice also will affect how well the aggregated IOPS will be spread
between VMs<->disk
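For the ratios and the distribution mentioned above, something like this shows them (just a sketch):
    ceph osd dump | grep ratio    # full_ratio / backfillfull_ratio / nearfull_ratio
    ceph osd df tree              # per-OSD utilization, to spot uneven distribution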
(benchmark output; columns: sec, cur ops, started, finished, avg MB/s, cur MB/s, last lat(s), avg lat(s))
  ...                                      68    0.770237    0.892833
    9    16    173    157    69.7677    104    0.976005    0.878237
   10    16    195    179    71.5891     88    0.755363    0.869603
That is very poor!
Why?
Thanks
Hi,
On 17/1/23 at 8:12, duluxoz wrote:
Thanks to Eneko Lacunza, E Taka, and Anthony D'Atri for replying - all
that advice was really helpful.
So, we finally tracked down our "disk eating monster" (sort of). We've
got a "runaway" ceph-guest-NN that is
...inspect each process's open files and find which file(s) no longer have
a directory entry... that would give you a hint.
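For example, something along these lines (a generic sketch, PID is a placeholder):
    lsof +L1                               # open files with link count 0 (deleted but still open)
    ls -l /proc/<PID>/fd | grep deleted    # per-process view of deleted-but-open files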
Cheers
this list for reference).
Cheers
Hi,
This is a consumer SSD. Did you test its performance first? Better get
a datacenter disk...
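For reference, a quick single-job sync-write test along these lines usually shows the difference (the device path is just a placeholder, and the test writes to the disk, so only run it on an empty one):
    fio --name=synctest --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=write --bs=4k --iodepth=1 --numjobs=1 --runtime=60 --time_based
Datacenter SSDs with power-loss protection sustain far higher sync-write IOPS in this test than consumer drives.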
Cheers
On 5/10/22 at 17:53, Murilo Morais wrote:
Nobody?
Hi Gregory
Thanks for your confirmation. I hope I can start some tests today.
Cheers
On 5/5/22 at 5:19, Gregory Farnum wrote:
On Wed, May 4, 2022 at 1:25 AM Eneko Lacunza wrote:
Hi Gregory,
On 3/5/22 at 22:30, Gregory Farnum wrote:
On Mon, Apr 25, 2022 at 12:57 AM
be I misunderstood something.
P.S. And the question is: which disk usage figure should I use for the
stored data, the one I see on the web or the one I see in the terminal?
Hope this helps ;)
Cheers
space was
restored to the size before the expansion.
Can anyone help me? Thanks.
out RAM and CPU, but at 24x enterprise SSDs per server I think you'll be
wasting much of those SSDs' performance...
I suggest you consider a 4-6 server cluster, and SSDs for WAL + spinning
disks for storage. This will give you more redundancy for less money,
and more peace of mind when a spi
. Some of the VMs host containers.
Cheers
-----Original Message-----
From: Eneko Lacunza
Sent: Friday, 4 June 2021 15:49
To: ceph-users@ceph.io
Subject: *SPAM* [ceph-users] Re: Why you might want packages
not containers for Ceph deployments
Hi,
We operate a few Ceph hyperconverged
, which would delay feedback/bug reports to upstream.
Cheers and thanks for the great work!
me to the link above. The
suggested addition to the kernel command line fixed the issue.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
On Thu, Apr 15, 2021 at 4:07 AM Eneko Lacunza <elacu...@binovo.es> wrote:
Hi Dave,
.
Cheers
...don't think you should use Ceph for this config. The bare minimum you
should use is 3 nodes, because default failure domain is host.
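(For reference, the failure domain of the default replicated rule can be checked with something like the following; the rule name may differ on your cluster:
    ceph osd crush rule dump replicated_rule
The chooseleaf step with "type": "host" is what places replicas on different hosts.)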
Maybe you can explain what your goal is, so people can recommend setups.
Regards
ting insights to share for ceph 10G
ethernet storage networking?
Do you really need MLAG (the 2x10G bandwidth)? If not, just use 2
simple switches (Mikrotik, for example) and in Proxmox use an
active-passive bond, with the default interface on all nodes pointing to the same switch.
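A minimal sketch of such an active-passive bond in /etc/network/interfaces on Proxmox (interface names and the address are assumptions, adjust to your nodes):
    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode active-backup
        bond-primary eno1
        bond-miimon 100

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.11/24
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
Setting the same bond-primary on all nodes keeps the default traffic on the same switch.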
Cheers
size=2) vs risk of data loss (min_size=1).
Not everyone needs to max out SSD IOPS; having a decent HA setup can
be of much value...
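(These are per-pool settings, for example - pool name and value are placeholders:
    ceph osd pool get <pool> size
    ceph osd pool get <pool> min_size
    ceph osd pool set <pool> min_size <N>
)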
Cheers
Cheers
Dave Hall
Binghamton University
kdh...@binghamton.edu
607-760-2328 (Cell)
607-777-4641 (Office)
On 10/23/2020 6:00 AM, Eneko Lacunza wrote:
Hi Dave,
On 22/10/20 at 19:43, Dave Hall wrote:
On 22/10/20 at 16:48, Dave Hall wrote:
(BTW, Nautilus 14.2.7
...Can you paste the warning message? It shows the spillover size. What
size are the partitions on the NVMe disk (lsblk)?
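For example (the device name is just an assumption):
    ceph health detail | grep -i spillover
    lsblk /dev/nvme0n1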
Cheers
sk. I think some BIOS/UEFIs have settings for a
secondary boot/UEFI boot file, but that would have to be prepared and
maintained manually, outside the mdraid10, and would only work after a
total failure of the primary disk.
Cheers
ver failure, so that won't be a problem.
With small clusters (like ours) you may want to reorganize OSDs to a new
server (let's say, move one OSD of each server to the new server). But
this is an uncommon corner-case, I agree :)
Cheers
Hi Brian,
El 22/10/20 a las 17:50, Brian Topping escribió:
On Oct 22, 2020, at 9:14 AM, Eneko Lacunza <elacu...@binovo.es> wrote:
Don't stripe them; if one NVMe fails you'll lose all OSDs. Just use 1
NVMe drive for 2 SAS drives and provision 300GB for WAL/DB
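A rough sketch of what that looks like with ceph-volume (device names and partitions are placeholders):
    ceph-volume lvm create --data /dev/sda --block.db /dev/nvme0n1p1
    ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p2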
y this is a good size for us, but I'm wondering if my
BlueFS spillovers are resulting from using drives that are too big. I
also thought I might have seen some comments about cutting large
drives into multiple OSDs - could that be?
Not using such big disks here, sorry :) (no space ne
ces across all nodes.
Any ideas what may cause that? Maybe I've missed something important in
the release notes?
Hi,
We had this issue in a 14.2.8 cluster, although it appeared after
resizing the DB device to a larger one.
After some time (weeks), the spillover was gone...
Cheers
Eneko
On 6/6/20 at 0:07, Reed Dier wrote:
I'm going to piggy back on this somewhat.
I've battled RocksDB spillovers over the
Hi,
Yes, it can be done (shutting down the OSD, but no rebuild required); we
did it to resize a WAL partition to a bigger one.
A simple Google search will help; I can paste the procedure we followed,
but it's in Spanish :(
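The general shape of such a procedure is roughly this (a sketch, not necessarily the exact steps we followed; IDs, VG/LV names and sizes are placeholders):
    systemctl stop ceph-osd@<ID>
    lvextend -L +30G /dev/<vg>/<db_or_wal_lv>    # or grow the underlying partition
    ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-<ID>
    systemctl start ceph-osd@<ID>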
Cheers
On 26/5/20 at 17:20, Frank R wrote:
Is there a safe way
Hi,
I strongly suggest you read the Ceph documentation at
https://docs.ceph.com/docs/master
On 21/5/20 at 15:06, CodingSpiderFox wrote:
Hello everyone :)
When I try to create an OSD, Proxmox UI asks for
* Data disk
* DB disk
* WAL disk
What disk will be the limiting factor in terms of st
Hi all,
We're receiving a certificate error for telemetry module:
Module 'telemetry' has failed:
HTTPSConnectionPool(host='telemetry.ceph.com', port=443): Max retries
exceeded with url: /report (Caused by SSLError(SSLError("bad handshake:
Error([('SSL routines', 'tls_process_server_certificate
Hi Andras,
On 31/3/20 at 16:42, Andras Pataki wrote:
I'm looking for some advice on what to do about drives of different
sizes in the same cluster.
We have so far kept the drive sizes consistent on our main ceph
cluster (using 8TB drives). We're getting some new hardware with
larger,
Hi Jarett,
On 23/3/20 at 3:52, Jarett DeAngelis wrote:
So, I thought I’d post with what I learned re: what to do with this problem.
This system is a 3-node Proxmox cluster, and each node had:
1 x 1TB NVMe
2 x 512GB HDD
I had maybe 100GB of data in this system total. Then I added:
2 x 2
are rather than renting it,
since I want to create a private cloud.
Thanks!
*Ignacio Ocampo*
On Mar 9, 2020, at 4:12 AM, Eneko Lacunza wrote:
Hi Ignacio,
On 9/3/20 at 3:00, Ignacio Ocampo wrote:
Hi team, I'm planning to invest in hardware for a PoC and I would
like your
feedback
Hi Ignacio,
On 9/3/20 at 3:00, Ignacio Ocampo wrote:
Hi team, I'm planning to invest in hardware for a PoC and I would like your
feedback before the purchase:
The goal is to deploy a *16TB* storage cluster, with *3 replicas* thus *3
nodes*.
System configuration: https://pcpartpicker.co
Hi Christian,
On 27/2/20 at 20:08, Christian Wahl wrote:
Hi everyone,
we currently have 6 OSDs with 8TB HDDs split across 3 hosts.
The main usage is KVM-Images.
To improve speed we planned on putting the block.db and WAL onto NVMe-SSDs.
The plan was to put 2x1TB in each host.
One option
Hi Fabian,
On 24/2/20 at 19:01, Fabian Zimmermann wrote:
we are currently creating a new cluster. This cluster is (as far as we can
tell) a config copy (Ansible) of our existing cluster, just 5 years later
- with new hardware (NVMe instead of SSD, bigger disks, ...)
The setup:
* NVMe for Jo
Hi Stefan,
On 2/1/20 at 10:47, Stefan Kooman wrote:
I'm wondering how many of you are using messenger v2 in Nautilus after
upgrading from a previous release (Luminous / Mimic).
Does it work for you? Or why did you not enable it (yet)?
Our hyperconverged office cluster (Proxmox) with 5 nodes