working if just one user
creates an RBD snapshot on the pool (for example, using Cinder Backup).
I hope somebody could give us more information about this "unmanaged snaps
mode" or point us to a way to revert this behavior once all RBD snapshots have
been removed from a pool.
Thanks!
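For reference, a minimal sketch of the two snapshot mechanisms that appear to conflict here, assuming placeholder pool and image names (the exact error output is not reproduced):

    # RBD snapshots are "self-managed" (unmanaged) snapshots at the RADOS level;
    # once one exists in a pool, pool-level snapshots are refused for that pool.
    rbd snap create volumes/volume-1234@backup-snap    # self-managed snapshot
    ceph osd pool mksnap volumes pool-snap-1           # pool snapshot; fails after the above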
S
ear, but it seems RBD is using some kind of RADOS object-level snapshot, and
I could not find any documentation about that feature.
Thanks!
Best regards,
Xavier Trilla P.
Silicon Hosting
username other than admin?
(We know we could use the kernel module to open the images, but due to some
kernel compatibility issues we are stuck with FUSE.)
Thanks!
Best regards,
Xavier Trilla P.
Silicon Hosting<https://siliconhosting.com/>
be production ready till September 1st...)
Any help would certainly be appreciated :)
Thanks!
Best regards,
Xavier Trilla P.
Silicon Hosting<https://siliconhosting.com/>
stand QEMU / KVM link dynamically to librbd libraries, right?)
Actually, I plan to run some tests myself soon, but if someone could give me
some insight before I get my hands on proper HW to run the tests, it would be
really great.
Thanks!
Best regards,
Xavier Trilla P.
Silicon Hos
Hi,
Is it possible to enable copy-on-read for an RBD child image? I've been checking
around and it looks like the only way to enable copy-on-read is to enable it for
the whole cluster using:
rbd_clone_copy_on_read = true
Can it be enabled just for specific images or pools?
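For reference, a minimal ceph.conf sketch of the cluster-wide approach mentioned above (this is a client-side librbd option, so it goes in the ceph.conf read by the hypervisors; whether it can be scoped to single images or pools is exactly the open question):

    # Hedged sketch: enables copy-on-read for every clone opened by clients that
    # read this ceph.conf; this is the "whole cluster" approach from the question.
    [client]
        rbd clone copy on read = true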
We keep some parent imag
Hi,
I'm working on improving the costs of our current Ceph cluster. We currently
keep 3 replicas, all of them on SSDs (the cluster hosts the RBD disks of several
hundred VMs), and lately I've been wondering if the following setup would make
sense, in order to improve cost / performance.
The ideal w
ll
check into it, and I'll start a new thread :)
Anyway, thanks for the info!
Xavier.
-----Original Message-----
From: Christian Balzer [mailto:ch...@gol.com]
Sent: Tuesday, August 22, 2017 2:40
To: ceph-users@lists.ceph.com
CC: Xavier Trilla
Subject: Re: [ceph-users] NVMe
endent on cpu speed and various other factors).
Mark
>
> Anyway, thanks for the info!
> Xavier.
>
> -----Original Message-----
> From: Christian Balzer [mailto:ch...@gol.com] Sent: Tuesday, August 22,
> 2017 2:40
> To: ceph-users@lists.ceph.com
> CC: Xavier T
Hi,
Does anybody know if there is a way to inspect the progress of a volume
flattening while using the Python rbd library?
I mean, using the CLI it is possible to see the progress of the flattening, but
when calling volume.flatten() it just blocks until it's done.
Is there any way to infer the
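Not an answer from the librbd side, just a hedged sketch of a workaround: since flatten() blocks, it can at least be moved to a background thread so the caller is not stuck (the pool and image names below are placeholders, and no real progress figure is exposed this way):

    # Hedged sketch: run the blocking flatten() in a worker thread. The pool and
    # image names are hypothetical placeholders.
    import threading
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('volumes')            # hypothetical pool name

    def flatten_image(image_name):
        with rbd.Image(ioctx, image_name) as image:
            image.flatten()                          # blocks until the flatten finishes

    worker = threading.Thread(target=flatten_image, args=('volume-1234',))
    worker.start()
    while worker.is_alive():
        worker.join(timeout=5)                       # only tells us the call is still running
    print('flatten finished')
    ioctx.close()
    cluster.shutdown()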
Hi guys,
No ideas on how to do that? Does anybody know where we could ask about
librbd Python library usage?
Thanks!
Xavier.
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Xavier
Trilla
Sent: Tuesday, October 17, 2017 11:55
To: ceph-users@lists.ceph.com
trieve the flatten, remove,
etc. progress indications. Improvements to the API are always welcome.
On Mon, Oct 23, 2017 at 11:06 AM, Xavier Trilla
<xavier.tri...@silicontower.net> wrote:
Hi guys,
No ideas on how to do that? Does anybody know where we could ask about
librbd python l
Hi Nick,
I'm actually wondering about exactly the same thing. Regarding OSDs, I agree:
there is no reason to apply the security patch to the machines running the OSDs
(if they are properly isolated in your setup).
But I'm worried about the hypervisors, as I don't know how the Meltdown or Spectre
patches
able some of the kpti.
> I haven't tried it yet though, so give it a whirl.
>
> https://en.wikipedia.org/wiki/Kernel_page-table_isolation
>
> Kind Regards,
>
> David Majchrzak
>
>
>>
Hi guys,
I don't think we are really worried about how those patches affect OSD
performance (patches can easily be disabled via sys), but we are quite worried
about how they affect librbd performance.
librbd runs on the hypervisor, and even if you don't need to patch the
hypervisor kernel for Me
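As a hedged aside, mitigation state on the hypervisor can usually be inspected like this (the file names depend on the kernel version, so treat them as assumptions):

    # Shows which Meltdown/Spectre mitigations the running kernel applies.
    grep . /sys/devices/system/cpu/vulnerabilities/*
    # Mitigations can typically be disabled with boot parameters such as
    # "nopti" or "spectre_v2=off" -- check your distribution's documentation first.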
Hi Caspar,
Did you find any information regarding the migration from crush-compat to
upmap? I'm facing the same situation.
Thanks!
From: ceph-users On Behalf Of Caspar Smit
Sent: Monday, June 25, 2018 12:25
To: ceph-users
Subject: [ceph-users] Balancer: change from crush-compat to
.
Also, how is the sys usage if you run top on the machines hosting the OSDs?
Best regards,
Xavier Trilla P.
Clouding.io
-----Original Message-----
From: ceph-users On Behalf Of Pavel
values in a pure NVMe cluster; I hope the
result will be better.
I think a good document about how to tune OSD performance would really help
Ceph :)
Cheers!
Best regards,
Xavier Trilla P.
Clouding.io<https://clouding.io/>
, bcache, etc. and add some SSD caching to each HDD
(meaning it can affect the write endurance of the SSDs).
dm-cache plus BlueStore seems like a quite interesting option IMO, as you'll get
faster reads and writes, and you'll avoid the double-write penalty of FileStore.
Cheers!
Best regards,
Xavier
you.
Cheers!
P.S.: “Saludos” means cheers in Spanish XD… my name is Xavier
Xavier Trilla P.
Clouding.io<https://clouding.io/>
From: ceph-users On Behalf Of
Hi,
Does anybody have information about using jemalloc with Luminous? From what
I've seen on the mailing list and online, BlueStore crashes when using jemalloc.
We've been running Ceph with jemalloc since Hammer, as performance with
tcmalloc was terrible (we run quite a big full-SSD cluster) and
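For context, a hedged sketch of the usual way of running the OSDs with jemalloc (the environment file and library path are Debian/Ubuntu-style assumptions; adjust for your distribution):

    # Hedged sketch: preload jemalloc for the OSD daemons instead of tcmalloc.
    # /etc/default/ceph and the jemalloc path are assumptions for Debian/Ubuntu.
    echo 'LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1' >> /etc/default/ceph
    systemctl restart ceph-osd.target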
igrations where the image needs
to be opened R/W by two clients at the same time.
--
Jason Dillaman
Best regards,
Xavier Trilla P.
SiliconHosting
On 1
Best regards,
Xavier Trilla P.
Clouding.io<https://clouding.io/>
'll
upgrade TCMalloc on 14.04)
We still have to consider:
* Using Jemalloc in OSD Servers
* Using Jemalloc in QEMU
Any comments or suggestions are welcome :)
Thanks!
Best regards,
Xavier Trilla P.
SiliconHosting<https://siliconhosting.com/>
Hi,
I'm trying to debug why there is a big difference between POSIX AIO and libaio
when performing read tests from inside a VM using librbd.
The results I'm getting using fio are:
POSIX AIO Read:
Type: Random Read - IO Engine: POSIX AIO - Buffered: No - Direct: Yes - Block
Size: 4KB - Disk Targ
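For reference, a hedged sketch of the POSIX AIO side of this comparison, mirroring the libaio fio command quoted later in the thread (job name and file size are placeholders):

    # Same parameters as the libaio test, but with fio's posixaio engine.
    fio --name=posixaio-randread --ioengine=posixaio --buffered=0 --direct=1 \
        --rw=randread --bs=4k --size=1024m --iodepth=32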
manage to find.
Any help will be much appreciated.
Thanks.
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Xavier
Trilla
Sent: Thursday, March 9, 2017 6:56
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Posix AIO vs libaio read performance
Hi,
I'm
ing iothread on qemu drive should help a little bit too.
----- Original Message -----
From: "Xavier Trilla" <xavier.tri...@silicontower.net>
To: "ceph-users" <ceph-users@lists.ceph.com>
Sent: Friday, March 10, 2017 05:37:01
Subject: Re: [ceph-users] Posix AIO vs l
sue being related to OSDs. But so
far the performance of the OSDs is really good with other test engines, so I'm
working more on the client side.
Any help or information would be really welcome :)
Thanks.
Xavier.
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Xavier
T
gine=libaio --buffered=0
--direct=1 --rw=randread --bs=4k --size=1024m --iodepth=32
Also, thanks for the blktrace tip; on Monday I'll start playing with it and I'll
post my findings.
Thanks!
Xavier
-----Original Message-----
From: Jason Dillaman [mailto:jdill...@redhat.com]
Sent: Fri
Hi Shain,
I'm not talking from experience, but as far as I know (from how Ceph works) I
guess it is enough to reinstall the system, install Ceph again, and add
ceph.conf and the keys; udev will do the rest. Maybe you'll need to restart the
server after you've done everything, but Ceph should find the O
--
From: Alexandre DERUMIER [mailto:aderum...@odiso.com]
Sent: Saturday, March 11, 2017 7:25
To: Xavier Trilla
CC: ceph-users
Subject: Re: [ceph-users] Posix AIO vs libaio read performance
>>Regarding rbd cache, it is something I will try -today I was thinking about it-
>>but
by a huge margin the latency and
overall performance of our ceph cluster :)
Thanks for all your help!
Xavi.
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Xavier
Trilla
Sent: Friday, March 10, 2017 20:28
To: dilla...@redhat.com
CC: ce
.
Thanks!
Xavi.
From: Michal Kozanecki [mailto:michal.kozane...@live.ca]
Sent: Saturday, March 11, 2017 1:36
To: dilla...@redhat.com; Xavier Trilla
CC: ceph-users
Subject: Re: [ceph-users] Posix AIO vs libaio read performance
Hi Xavier,
Are you sure this is due to Ceph? I get similar
In my opinion, go for the two-pool option. And try to use SSDs for journals. In
our tests HDDs and VMs don't really work well together (too many small IOs), but
obviously it depends on what the VMs are running.
Another option would be to have an SSD cache tier in front of the HDDs. That
would really
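A hedged sketch of what setting up such a cache tier looks like (the pool names are placeholders, and cache sizing/eviction settings are left out):

    # Attach an SSD pool as a writeback cache tier in front of an HDD-backed pool.
    # "hdd-pool" and "ssd-cache" are placeholder pool names.
    ceph osd tier add hdd-pool ssd-cache
    ceph osd tier cache-mode ssd-cache writeback
    ceph osd tier set-overlay hdd-pool ssd-cache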
- as we need to keep latency under 1ms.
My recommendation would be to be careful when using fq_codel in hypervisors, or
at least tweak the default values. But we don't have serious test data to back
up that configuration.
Best regards,
Xavier Trilla P.
Clouding.io
with IT mode
controllers.
Best regards,
Xavier Trilla P.
Clouding.io<https://clouding.io/>
On Dec 31, 2018, at 17:15, Marc Schöchlin
mailto:m..
ool and specify
the --upmap-deviation parameter it works as expected.
Here is the output of ceph config-key dump:
{
"mgr/balancer/active": "1",
"mgr/balancer/max_misplaced": "0.01",
"mgr/balancer/mode": "upmap",
"m
as like 20-some GB of data, but 0 PGs. Is
this related to the BlueStore WAL and block.db? Or is there something weird
going on here?
Thanks!
Xavier Trilla P.
Clouding.io<https://clouding.io/>
Hi,
We had a strange issue while adding a new OSD to our Ceph Luminous 12.2.8
cluster. Our cluster has > 300 OSDs based on SSDs and NVMe.
After adding a new OSD to the Ceph cluster, one of the already running OSDs
started to give us slow query warnings.
We checked the OSD and it was working
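A hedged sketch of the usual way to dig into that kind of slow-request warning (osd.123 is a placeholder id, and the admin socket command has to run on that OSD's host):

    # Inspect which OSDs/requests are slow. osd.123 is a placeholder.
    ceph health detail                          # lists the OSDs with slow requests
    ceph osd perf                               # commit/apply latencies per OSD
    ceph daemon osd.123 dump_historic_ops       # recent slow ops on that OSD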
Hi,
What would be the proper way to add 100 new OSDs to a cluster?
I have to add 100 new OSDs to our current > 300 OSD cluster, and I would like
to know how you do it.
Usually, we add them quite slowly. Our cluster is a pure SSD/NVMe one, and it
can handle plenty of load, but for the sake of s
eep
between each for peering.
Let the cluster rebalance and get healthy, or close to healthy.
Then repeat the previous two steps, increasing the weight by +0.5 or +1.0, until
I am at the desired weight.
Kevin
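A hedged sketch of that kind of gradual ramp-up for a single new OSD (the id and weights are placeholders; the target weight depends on the drive size):

    # Bring the new OSD in at a low CRUSH weight and raise it in steps,
    # letting the cluster get (close to) healthy between steps.
    ceph osd crush reweight osd.301 0.5      # osd.301 and the weights are placeholders
    ceph -s                                  # wait until healthy / few misplaced PGs
    ceph osd crush reweight osd.301 1.0      # repeat until the target weight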
On 7/24/19 11:44 AM, Xavier Trilla wrote:
Hi,
What would be the proper way to add 100 new OSDs to
Hi Peter,
I'm not sure, but maybe after some changes the OSDs are not being recognized by
the Ceph scripts.
Ceph used to use udev to detect the OSDs and then moved to LVM. Which kind of
OSDs are you running, BlueStore or FileStore? Which version did you use to
create them?
Cheers!
On Jul 24, 2019,
Hi,
We run a few hundred HDD OSDs for our backup cluster; we set one RAID 0 per HDD
in order to be able to use the battery-protected write cache from the RAID
controller. It really improves performance, for both BlueStore and FileStore
OSDs.
We also avoid expanders, as we had bad experiences with t
We had a similar situation, with one machine resetting when we restarted another
one.
I’m not 100% sure why it happened, but I would bet it was related to several
thousand client connections migrating from the machine we restarted to another
one.
We have a similar setup to yours, and if you c