A cluster to serve actual workloads.
I think Raspberry Pi and virtual machines are out of scope; this is not just
for learning.
Thanks!
Ignacio Ocampo
> On 4 Oct 2020, at 16:01, Anthony D'Atri wrote:
>
>
>> If you guys have any suggestions about used hardware that can be a good fit
>> considering mainly low noise, please let me know.
>
> So we didn't get these requirements initially; there's no way for us to help
> you when the requirements aren't available for us to consider.
>
Comments inline
> On Oct 4, 2020, at 2:27 PM, Ignacio Ocampo wrote:
>
> Physical space isn't a constraint at this point; the only requirement I have
> in mind is to maintain a low level of noise (since the equipment will be in my
> office) and if possible low energy consumption.
>
Hi Brian and Martin,
Physical space isn't a constraint at this point; the only requirement I have
in mind is to maintain a *low level of noise* (since the equipment will be
in my office) and *if possible low energy consumption*.
Based on my limited experience, the only downside with used hardware is …
Hi Ignacio, apologies, I missed your responses here.
I would agree with Martin about buying used hardware as cheaply as possible,
but I also understand the desire to have hardware you can promote into future
OpenStack usage.
Regarding networking, I started to use SFP+ cables like
https://amzn.
What about the network cards? The motherboard I’m looking for has 2 x 10GbE;
with that and the CPU frequency, I think the bottleneck will be the HDDs. Is
that overkill? Thanks!
Ignacio Ocampo
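A rough sanity check on that bottleneck claim, assuming ~200 MB/s of
sequential throughput per 7200 rpm HDD (a typical figure; random I/O is much
lower):

    one 10GbE port:   10 Gbit/s / 8 bits/byte ≈ 1.25 GB/s
    one HDD:          ≈ 0.2 GB/s sequential
    drives to saturate one port:  1.25 / 0.2 ≈ 6-7 HDDs

So with only a handful of HDDs per node, the disks will indeed be the limit
long before the dual 10GbE NICs are.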
> On 2 Oct 2020, at 0:38, Martin Verges wrote:
>
>
For private projects, you can look for small 1U servers with up to four 3.5"
disk slots and an Intel Xeon E3-1230 v3/v4/v5 CPU. They can be bought used
for 250-350€, and then you just plug in disks.
They are also good for SATA SSDs and work quite well. You can mix both drive
types in the same system as well.
--
Martin Verges
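On mixing SSDs and HDDs in one chassis: Ceph assigns each OSD a device class
(hdd or ssd) automatically, and a CRUSH rule can pin a pool to one class so
the two kinds of drives never share the same data. A minimal sketch, assuming
defaults (the rule names and the pool name fastpool are placeholders):

    # one replicated rule per device class: <name> <root> <failure-domain> <class>
    ceph osd crush rule create-replicated replicated-ssd default host ssd
    ceph osd crush rule create-replicated replicated-hdd default host hdd
    # pin a hypothetical pool to the SSD-only rule
    ceph osd pool set fastpool crush_rule replicated-ssd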
Hi Brian,
Here is more context about what I want to accomplish: I've migrated a bunch
of services from AWS to a local server, but having everything on a single
server is not safe, and instead of investing in RAID, I would like to start
setting up a small Ceph cluster to get redundancy and a robust …
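For reference, the redundancy described here would come from a replicated
pool spanning the three nodes; with the default failure domain of host, each
node then holds exactly one copy. A minimal sketch (the pool name and PG
count are placeholders):

    # replicated pool with 64 placement groups
    ceph osd pool create rbd-pool 64 64 replicated
    # three copies; keep serving I/O while at least two are available
    ceph osd pool set rbd-pool size 3
    ceph osd pool set rbd-pool min_size 2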
Welcome to Ceph!
I think better questions to start with are “What are your objectives in your
study?” Is it just seeing Ceph run with many disks, or are you trying to see
how much performance you can get out of distributed disks? What is your
budget? Do you want to try different combinations …
RGW and RBD primarily, CephFS to a lesser extent.
> On 1 Oct 2020, at 9:58, Nathan Fish wrote:
>
>
> What kind of cache configuration are you planning? Are you going to use
> CephFS, RGW, and/or RBD?
>
>> On Tue, Sep 29, 2020 at 2:45 AM Ignacio Ocampo wrote:
>> Hi All :),
>>
>> I would like to get your feedback about the components below to build a
>> PoC OSD Node (I will build 3 of these).
On 2020-09-29 08:44, Ignacio Ocampo wrote:
> Hi All :),
>
> I would like to get your feedback about the components below to build a
> PoC OSD Node (I will build 3 of these).
>
> SSD for OS.
> NVMe for cache.
^^ For RocksDB / WAL, or a dm-cache setup. Should work either way.
> HDD for storage.
>
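For the RocksDB/WAL variant, a minimal sketch of how each OSD could be
created with ceph-volume (device names are placeholders; you would carve one
NVMe partition or LV per HDD-backed OSD):

    # HDD holds the data; the NVMe partition holds RocksDB metadata and the WAL
    ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1

The dm-cache alternative is set up at the LVM layer before the OSD is
created, outside of Ceph itself.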