I'm sure you also know the following, but just in case:
- Intel SATA D3-S4610 (I think they're out of stock right now)
- Intel SATA D3-S4510 (I see stock of these right now)
On 27/12/19 at 17:56, vita...@yourcmc.ru wrote:
SATA: Micron 5100-5200-5300, Seagate Nytro 1351/1551 (don't forget to
Hi Sinan,
Just to reiterate: don't do this. Consumer SSDs will destroy your
enterprise SSDs' performance.
Our office cluster is made of consumer-grade servers: cheap gaming
motherboards, memory, Ryzen processors, desktop HDDs. But the SSD drives are
enterprise grade; we had awful experiences with cons
Hi,
On 5/6/19 at 16:53, vita...@yourcmc.ru wrote:
OK, average network latency from VM to OSDs is ~0.4ms.
It's rather bad; you can improve the latency by 0.3ms just by
upgrading the network.
Single-threaded performance is ~500-600 IOPS, or an average latency of 1.6ms.
Is that comparable to wh
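As a quick sanity check on those numbers: at queue depth 1, IOPS and
average latency are simply inverses of each other,

    IOPS ≈ 1 / avg latency = 1 / 1.6ms ≈ 625

so the ~0.4ms network round trip is roughly a quarter of every operation,
and shaving 0.3ms off it would by itself push single-threaded IOPS
towards 1 / 1.3ms ≈ 770.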
Hi Kai,
On 12/3/19 at 9:13, Kai Wembacher wrote:
Hi everyone,
I have an Intel D3-S4610 SSD with 1.92 TB here for testing and get
some pretty bad numbers when running the fio benchmark suggested by
Sébastien Han
(http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd
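In case the blog post is unreachable: as far as I remember, the test it
describes boils down to a single-job, queue-depth-1 synchronous 4k write
run straight against the raw device, something like the line below. It
is destructive, so only run it on an empty disk; /dev/sdX is a placeholder.

# fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test

A good journal SSD typically sustains several thousand IOPS in this test,
while consumer drives often collapse to a few hundred.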
Hi Uwe,
We tried to use a Samsung 840 Pro SSD as an OSD some time ago and it was a
no-go; it wasn't that performance was bad, it just didn't work for that
kind of OSD use. Any HDD was better (the disk was healthy and had
been used in a software RAID-1 for a couple of years).
I suggest
Hi,
On 25/11/18 at 18:23, Виталий Филиппов wrote:
OK... That's better than the previous thread with the file download, where the
topic starter suffered from a normal metadata-only-journaled fs...
Thanks for the link, it would be interesting to repeat similar tests.
Although I suspect it shouldn't b
Hi all,
We're planning the migration of a VMware 5.5 cluster backed by an EMC
VNXe 3200 storage appliance to Proxmox.
The VNXe has about 3 years of warranty left and half the disks
unprovisioned, so the current plan is to use the same VNXe for Proxmox
storage. After warranty expires we'll most
Hi Fabian,
Hope your arm is doing well :)
unless such a backport is created and tested fairly well (and we will
spend some more time investigating this internally despite the caveats
above), our plan B will probably involve:
- building Luminous for Buster to ease the upgrade from Stretch+Lumino
Hi all,
We're in the process of deploying a new Proxmox/ceph cluster. We had
planned to use S3710 disks for system+journals, but our provider (Dell) is
telling us that they're EOL and the only alternative they offer is some
"mixed use" Hawk-M4E drives in 200GB/400GB sizes.
I really can't find reliable
Hi Gandalf,
On 07/11/17 at 14:16, Gandalf Corvotempesta wrote:
Hi to all
I've been away from Ceph for a couple of years (CephFS was still unstable)
I would like to test it again; some questions for a production cluster
for VM hosting:
1. Is CephFS stable?
Yes.
2. Can I spin up a 3 no
Hi Nick,
On 17/05/17 at 11:12, Nick Fisk wrote:
There seems to be a shift in enterprise SSD products to larger, less
write-intensive products, generally costing more than what
the existing P/S 3600/3700 ranges did. For example, the new Intel NVMe P4600
range seems to start at 2TB. Alth
>> b) better throughput (I'm speculating that the S3610 isn't 4 times
>> faster than the S3520)
>>
>> c) load spread across 4 SATA channels (I suppose this doesn't really
>> matter since the drives can
Adam,
What David said before about SSD drives is very important. I will put
it another way: use enterprise-grade SSD drives, not consumer-grade ones.
Also, pay attention to endurance.
The only suitable drive for Ceph I see in your tests is SSDSC2BB150G7,
and it probably isn't even the most suit
Hi Michal,
On 14/03/17 at 23:45, Michał Chybowski wrote:
I'm going to set up a small cluster (5 nodes with 3 MONs, 2 - 4 HDDs
per node) to test if ceph at such a small scale is going to perform well
enough to put it into a production environment (or does it perform well
only if there are t
Hi Martin,
Take a look at
http://ceph.com/pgcalc/
Cheers
Eneko
On 10/03/17 at 09:54, Martin Wittwer wrote:
Hi List
I am creating a POC cluster with CephFS as a backend for our backup
infrastructure. The backups are rsyncs of whole servers.
I have 4 OSD nodes with 10 4TB disks and 2 SSDs
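For reference, the rule of thumb behind pgcalc is roughly: total PGs ≈
(number of OSDs x 100) / replica count, rounded to a power of two and
then split across the pools. Assuming the 40 spinning OSDs above and
size=3:

    40 x 100 / 3 ≈ 1333  ->  1024 or 2048 PGs in total

The web tool then gives the exact per-pool numbers.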
Hi Iban,
Is the monitor data safe? If it is, just install jewel on other servers
and plug in the OSD disks; it should work.
On 24/02/17 at 14:41, Iban Cabrillo wrote:
Hi,
We have a serious issue. We have a mini cluster (jewel version) with
two servers (Dell RX730) with 16 bays and the
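A rough sketch of that recovery, assuming the OSDs were deployed with
ceph-disk (the usual GPT partitions): install the same jewel packages on
the replacement servers, copy over /etc/ceph/ceph.conf and the keyrings,
get the monitor(s) back into quorum, plug in the disks and then:

# ceph-disk activate /dev/sdb1
(or "ceph-disk activate-all"; /dev/sdb1 is just a placeholder)
# ceph osd tree
(check that the OSDs come back up under the expected hosts)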
Hi,
On 24/11/16 at 12:09, Stephen Harker wrote:
Hi All,
This morning I went looking for information on the Ceph release
timelines and so on and was directed to this page by Google:
http://docs.ceph.com/docs/jewel/releases/
but this doesn't seem to have been updated for a long time. Is
Hi Michiel,
How are you configuring VM disks on Proxmox? What type (virtio, scsi,
ide) and what cache setting?
On 23/11/16 at 07:53, M. Piscaer wrote:
Hi,
I have a little performance problem with KVM and Ceph.
I'm using Proxmox 4.3-10/7230e60f, with KVM version
pve-qemu-kvm_2.7.0-8.
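For what it's worth, the combination usually recommended for RBD-backed
VMs is virtio with cache=writeback, so the librbd cache gets used. In the
VM config (/etc/pve/qemu-server/<vmid>.conf) that looks roughly like the
line below; storage name, vmid and size are only placeholders:

virtio0: ceph-rbd:vm-100-disk-1,cache=writeback,size=32G

or, set from the CLI:

# qm set 100 --virtio0 ceph-rbd:vm-100-disk-1,cache=writeback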
On 06/06/16 at 20:53, Oliver Dzombic wrote:
Hi,
thank you for your suggestion.
Rsync will copy the whole file anew if the size is different.
Since we are talking about raw image files of virtual servers, rsync is not an option.
We need something which will copy just the deltas inside a file.
lve host swami-resize-test-vm
e2fsck 1.42.9 (4-Feb-2014)
The filesystem size (according to the superblock) is 52428800 blocks
The physical size of the device is 13107200 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort?
On Thu, May 12, 2016 at 6:37 PM, Eneko La
do with FS shrink before "rbd resize"
Thanks
Swami
On Thu, May 12, 2016 at 4:34 PM, Eneko Lacunza wrote:
Did you shrink the FS to be smaller than the target rbd size before doing
"rbd resize"?
On 12/05/16 at 12:33, M Ranga Swami Reddy wrote:
When I used "rbd r
Did you shrink the FS to be smaller than the target rbd size before
doing "rbd resize"?
On 12/05/16 at 12:33, M Ranga Swami Reddy wrote:
When I used the "rbd resize" option to shrink the size, the image/volume
lost its fs sectors and is asking for "fs" not found...
I have used "mkf" option, then
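To make the ordering explicit: a shrink has to go filesystem first, RBD
image second, and the filesystem must end up no larger than the new image
size. A rough sketch with ext4, assuming the image is mapped as /dev/rbd0
and unmounted; sizes are placeholders:

# e2fsck -f /dev/rbd0
# resize2fs /dev/rbd0 45G
(shrink the fs below the target image size)
# rbd resize --size 51200 pool/image
(51200 MB = 50G; recent releases also want --allow-shrink when shrinking)
# resize2fs /dev/rbd0
(optionally grow the fs back to fill the new image size)

Doing "rbd resize" first instead leaves the fs superblock describing
blocks that no longer exist, which is exactly what the e2fsck output
quoted earlier in the thread complains about.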
Hi Mad,
On 09/04/16 at 14:39, Mad Th wrote:
We have a 3-node proxmox/ceph cluster ... each with 4 x 4TB disks
Are you using 3-way replication? I guess you are. :)
1) If we want to add more disks, what are the things that we need to
be careful about?
Will the following steps automati
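For the disk-adding part of the question: on a Proxmox/ceph node this
usually comes down to one command per new disk, after which ceph
rebalances the data on its own; /dev/sdX is a placeholder:

# pveceph createosd /dev/sdX
(or, without the Proxmox wrapper: ceph-disk prepare /dev/sdX)
# ceph -s
(watch the rebalance finish before adding the next disk)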
Hi,
On 28/01/16 at 13:53, Gaetan SLONGO wrote:
Dear Ceph users,
We are currently working on CEPH (RBD mode only). The technology is
in "preview" state in our lab. We are diving into
Ceph design... We know it requires at least 3 nodes (OSDs+Monitors
inside) to work p
Hi,
On 27/01/16 at 15:00, Vlad Blando wrote:
I have a production Ceph Cluster
- 3 nodes
- 3 mons, one on each node
- 9 OSDs @ 4TB per node
- using ceph version 0.80.5 (38b73c67d375a2552d8ed67843c8a65c2c0feba6)
Now I want to upgrade it to Hammer. I saw the documentation on
upgrading, it look
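The upgrade itself is rolling: monitors first, then OSDs, one node at a
time. Roughly (the authoritative steps are in the Hammer release notes):

# ceph osd set noout
(on each node in turn: upgrade the packages, restart ceph-mon first and
the ceph-osd daemons afterwards, wait for HEALTH_OK)
# ceph tell osd.* version
(confirm everything reports 0.94.x, then:)
# ceph osd unset noout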
Hi Mart,
On 23/11/15 at 10:29, Mart van Santen wrote:
On 11/22/2015 10:01 PM, Robert LeBlanc wrote:
There have been numerous reports on the mailing list of the Samsung EVO and
Pros failing far before their expected wear. This is most likely due
to the 'uncommon' workload of Ceph and the control
Hi Jan,
What SSD model?
I've seen SSDs usually work quite well but suddenly give totally awful
performance for a while (not those 8K you see, though).
I think there was some kind of firmware process involved; I had to
replace the drive with a serious DC one.
On 23/06/15 at 14:07, Jan
Hi,
On 02/06/15 16:18, Mark Nelson wrote:
On 06/02/2015 09:02 AM, Phil Schwarz wrote:
On 02/06/2015 15:33, Eneko Lacunza wrote:
Hi,
On 02/06/15 15:26, Phil Schwarz wrote:
On 02/06/15 14:51, Phil Schwarz wrote:
I'm going to have to set up a 4-node Ceph (Proxmox+Ceph, in fact)
cluster.
Hi,
On 02/06/15 15:26, Phil Schwarz wrote:
On 02/06/15 14:51, Phil Schwarz wrote:
I'm going to have to set up a 4-node Ceph (Proxmox+Ceph, in fact) cluster.
- 1 node is a little HP Microserver N54L with 1x Opteron + 2 SSDs + 3x 4TB
SATA
It'll be used as OSD+Mon server only.
Are these SSDs Intel S3700
Hi,
On 02/06/15 14:51, Phil Schwarz wrote:
I'm going to have to set up a 4-node Ceph (Proxmox+Ceph, in fact) cluster.
- 1 node is a little HP Microserver N54L with 1x Opteron + 2 SSDs + 3x 4TB SATA
It'll be used as OSD+Mon server only.
Are these SSDs Intel S3700 too? How much RAM?
- 3 nodes are
Hi,
On 02/06/15 14:18, Pontus Lindgren wrote:
We have recently acquired new servers for a new ceph cluster and we want to run
Debian on those servers. Unfortunately, the drivers needed for the RAID controller
are only available in newer kernels than Debian Wheezy provides.
We need to run the
emember an enhancement
of ceph-disk for Hammer that is more aggressive in reusing the previous
partition.
-
Robert LeBlanc
GPG Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Mon, May 25, 2015 at 4:22 AM, Eneko Lacunza wrote:
Hi all,
We have a firefly ceph cluster (using
Hi all,
We have a firefly ceph cluster (using Proxmox VE, but I don't think this
is relevant), and found an OSD disk was having quite a high number of
errors as reported by SMART, and also quite high wait times as reported
by munin, so we decided to replace it.
What I have done is down/out the
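For reference, the generic firefly-era removal sequence for a failing
OSD goes roughly like this; osd.12 is a placeholder ID:

# ceph osd out osd.12
(wait for the rebalance to finish and the cluster to return to HEALTH_OK)
(stop the ceph-osd daemon for osd.12 with your init system)
# ceph osd crush remove osd.12
# ceph auth del osd.12
# ceph osd rm osd.12

After that the disk can be swapped and the replacement prepared again
with ceph-disk.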
Hi,
I'm just writing to stress what others have already said,
because it is very important that you take it seriously.
On 20/04/15 19:17, J-P Methot wrote:
On 4/20/2015 11:01 AM, Christian Balzer wrote:
This is similar to another thread running right now, but since our
curr
Hi,
The common recommendation is to use a good SSD (Intel S3700) for the
journals of every 3-4 OSDs, or otherwise to use an internal journal on each
OSD. Don't put more than one journal on the same spinning disk.
Also, it is recommended to use 500G-1TB disks, especially if you have a
1gbit netw
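That journal layout is expressed when the OSD is created; with ceph-disk
it is roughly one call per OSD, naming the data disk first and the shared
journal SSD second (device names are placeholders):

# ceph-disk prepare /dev/sdc /dev/sda
# ceph-disk prepare /dev/sdd /dev/sda
(each call adds a new journal partition on /dev/sda; keep it to 3-4 OSDs
per journal SSD)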
Hi Robert,
I don't see any reply to your email, so I'll send you my thoughts.
Ceph is all about using cheap local disks to build large, performant
and resilient storage. Your use case with SAN and Storwize doesn't seem
to fit Ceph very well (I'm not saying it can't be done).
Why are you p
"rbd" pool was created
with size=2. This was done before adding the OSDs of one of the nodes.
Thanks
Eneko
On 20/01/15 16:23, Eneko Lacunza wrote:
Hi all,
I've just created a new ceph cluster for RBD with latest firefly:
- 3 monitors
- 2 OSD nodes, each has 1 s3700 (journals) + 2 x 3TB
Hi all,
I've just created a new ceph cluster for RBD with latest firefly:
- 3 monitors
- 2 OSD nodes, each has 1 s3700 (journals) + 2 x 3TB WD red (osd)
Network is 1gbit, different physical interfaces for public and private
network. There's only one pool "rbd", size=2. There are just 5 rbd
dev
Hi Steven,
On 30/12/14 13:26, Steven Sim wrote:
You mentioned that machines see a QEMU IDE/SCSI disk; they don't know
whether it's on ceph, NFS, local, LVM, ... so it works OK for any VM
guest OS.
But what if I want the CEPH cluster to serve a whole range of clients
in the data center, rangi
,
Christian
On 30.12.2014 12:23, Eneko Lacunza wrote:
Hi Christian,
Have you tried to migrate the disk from the old storage (pool) to the
new one?
I think it would show the same problem, but it'd be a much
easier path to recover from than the posix copy.
How full is your storage?
May
Hi Christian,
Have you tried to migrate the disk from the old storage (pool) to the
new one?
I think it would show the same problem, but it'd be a much
easier path to recover from than the posix copy.
How full is your storage?
Maybe you can customize the crushmap, so that some OSDs are
Hi,
On 30/12/14 11:55, Lindsay Mathieson wrote:
On Tue, 30 Dec 2014 11:26:08 AM Eneko Lacunza wrote:
have a small setup with such a node (only 4 GB RAM, another 2 good
nodes for OSD and virtualization) - it works like a charm and CPU max is
always under 5% in the graphs. It only peaks when
Hi Steven,
Welcome to the list.
On 30/12/14 11:47, Steven Sim wrote:
This is my first posting and I apologize if the content or query is
not appropriate.
My understanding of CEPH is that the block and NAS services are provided through
specialized (albeit open-source) kernel modules for Linux.
What about
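A small clarification by example: the kernel modules are only one of the
access paths. A client host can map an image as a block device with the
rbd kernel module:

# rbd map rbd/myimage
(creates something like /dev/rbd0; "rbd/myimage" is a placeholder)

but VMs under QEMU/KVM normally reach RBD through librbd in userspace,
with no kernel module involved on the hypervisor.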
Hi,
On 29/12/14 15:12, Christian Balzer wrote:
3rd Node
- Monitor only, for quorum
- Intel Nuc
- 8GB RAM
- CPU: Celeron N2820
Uh oh, a bit weak for a monitor. Where does the OS live (on this and the
other nodes)? The leveldb (/var/lib/ceph/..) of the monitors likes it fast,
SSDs preferably.
ocessing.
-Greg
On Wed, Dec 10, 2014 at 5:27 AM, Eneko Lacunza wrote:
Hi all,
I fixed the issue with the following commands:
# ceph osd pool set data size 1
(wait some seconds for the active+clean state of the 64 pgs)
# ceph osd pool set data size 2
# ceph osd pool set metadata size 1
(wait some seconds f
trick?
Cheers
Eneko
On 10/12/14 13:14, Eneko Lacunza wrote:
Hi all,
I have a small ceph cluster with just 2 OSDs, latest firefly.
Default data, metadata and rbd pools were created with size=3 and
min_size=1
An additional pool rbd2 was created with size=2 and min_size=1
This would give me a wa
Hi all,
I have a small ceph cluster with just 2 OSDs, latest firefly.
Default data, metadata and rbd pools were created with size=3 and min_size=1
An additional pool rbd2 was created with size=2 and min_size=1
This would give me a warning status, saying that 64 pgs were
active+clean and 192 ac
fact work out the cheapest in
terms of write durability.
Nick
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Eneko
Lacunza
Sent: 04 December 2014 14:35
To: Ceph Users
Subject: [ceph-users] Suitable SSDs for journal
Hi all,
Does anyone know
Hi all,
Does anyone know about a list of good and bad SSD disks for OSD journals?
I was pointed to
http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
But I was looking for something more complete.
For example, I have a Samsung 840 Pro that