2018-03-21 7:20 GMT+01:00 ST Wong (ITSC) :
> Hi all,
>
>
>
> We got some decommissioned servers from other projects for setting up
> OSDs. They have 10 x 2TB SAS disks and 4 x 2TB SSDs.
>
> We're trying to test with BlueStore and hope to place the WAL and DB
> devices on the SSDs. We need advice on some newbie questions.
Hi all,
I have a question regarding a possible scenario to put both the WAL and DB
on a separate SSD device for an OSD node composed of 22 OSDs (10k SAS HDDs,
1.8 TB each).
I'm thinking of 2 options (at about the same price):
- add 2 Write Intensive (10 DWPD) SAS SSDs
- or add a single 800 GB NVMe SSD
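
A quick back-of-the-envelope comparison of the two options (my own arithmetic; the ~400 GB capacity assumed for each SAS SSD is not stated in the post):

# Hypothetical sizing sketch for the two WAL/DB layouts; the 400 GB SAS SSD
# capacity and the 11-OSDs-per-SSD split are assumptions, not from the post.

def per_osd_share(device_capacity_gb, osds_per_device):
    """GB of DB+WAL space each OSD gets on a shared journal device."""
    return device_capacity_gb / osds_per_device

n_osds = 22

# Option 1: two Write Intensive SAS SSDs, ~400 GB each, 11 OSDs per SSD.
opt1 = per_osd_share(400, n_osds // 2)

# Option 2: a single 800 GB NVMe shared by all 22 OSDs.
opt2 = per_osd_share(800, n_osds)

print(f"Option 1: {opt1:.0f} GB DB/WAL per OSD, "
      f"{n_osds // 2} OSDs lost if one SAS SSD fails")
print(f"Option 2: {opt2:.0f} GB DB/WAL per OSD, "
      f"{n_osds} OSDs lost if the NVMe fails")

Space per OSD comes out roughly the same either way; the practical difference is the failure domain (11 OSDs behind each SAS SSD versus all 22 behind the single NVMe) and how much DB/WAL traffic each device has to absorb.
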
2018-03-21 8:56 GMT+01:00 Caspar Smit :
> 2018-03-21 7:20 GMT+01:00 ST Wong (ITSC) :
>
>> Hi all,
>>
>>
>>
>> We got some decommissioned servers from other projects for setting up
>> OSDs. They have 10 x 2TB SAS disks and 4 x 2TB SSDs.
>>
>> We're trying to test with BlueStore and hope to place the WAL and DB
>> devices on the SSDs.
Just ran into this problem on our production cluster.
It would have been nice if the release notes of 12.2.4 had been
updated to inform users about this.
Best,
Martin
On Wed, Mar 14, 2018 at 9:53 PM, Gregory Farnum wrote:
> On Wed, Mar 14, 2018 at 12:41 PM, Lars Marowsky-Bree wrote:
>> On 20
On 21 March 2018 11:27, Hervé Ballans wrote:
Hi all,
I have a question regarding a possible scenario to put both the WAL and DB
on a separate SSD device for an OSD node composed of 22 OSDs (10k SAS HDDs,
1.8 TB each).
I'm thinking of 2 options (at about the same price):
- add 2 Write Intensive (10 DWPD) SAS SSDs
Hi,
I just ran into this table for a 10G Netgear switch we use:
Fiber delays:
10 Gbps fiber delay (64-byte packets): 1.827 µs
10 Gbps fiber delay (512-byte packets): 1.919 µs
10 Gbps fiber delay (1024-byte packets): 1.971 µs
10 Gbps fiber delay (1518-byte packets): 1.905 µs
Co
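
As a sanity check on those numbers (my own arithmetic, not from the switch datasheet): the time needed just to clock a frame onto a 10 Gbps link is size x 8 / 10^9, so a store-and-forward hop should grow by roughly 1.2 µs between 64-byte and 1518-byte frames, while a cut-through hop stays nearly flat.

# Serialization delay of a single Ethernet frame at 10 Gbps (rough arithmetic).
LINE_RATE_BPS = 10e9  # bits per second

for frame_bytes in (64, 512, 1024, 1518):
    wire_time_us = frame_bytes * 8 / LINE_RATE_BPS * 1e6
    print(f"{frame_bytes:>4} B frame: {wire_time_us:.3f} us on the wire")

# 64 B  -> ~0.051 us, 1518 B -> ~1.214 us.
# A store-and-forward switch pays this once per hop on top of its internal
# latency, so its total delay grows with frame size; a cut-through switch
# forwards after reading the header, so its delay stays nearly flat.

The quoted delays barely move across frame sizes, which is what the later remark in this thread about it looking like a cut-through switch is based on.
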
On 15/03/18 10:45, Vik Tara wrote:
>
> On 14/03/18 12:31, Amardeep Singh wrote:
>
>> Though I now have another issue, because I am using a Multisite setup
>> with one zone for data and a second zone for metadata with an
>> elasticsearch tier.
>>
>> http://docs.ceph.com/docs/master/radosgw/elastic-sync-m
Hi,
2.3µs is a typical delay for a 10GBASE-T connection. But fiber or SFP+ DAC
connections should be faster: switches are typically in the range of ~500ns
to 1µs.
But you'll find that this small difference in latency induced by the switch
will be quite irrelevant in the grand scheme of things wh
My apologies, I don't seem to be getting notifications on PRs. I'll review
this week.
Thanks,
Berant
On Mon, Mar 19, 2018 at 5:55 AM, Konstantin Shalygin wrote:
> Hi Berant
>
>
>> I've created a Prometheus exporter that scrapes the RADOSGW Admin Ops API
>> and exports the usage information for all
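
For context, the general shape of such an exporter (this is not Berant's code, just an illustrative sketch) is to query the documented GET /admin/usage call on the Admin Ops API with S3-style credentials and re-expose the totals via prometheus_client. The endpoint URL, credentials, metric name, port and JSON field names below are assumptions.

# Illustrative sketch only -- NOT the exporter discussed in the thread.
# Assumes: the RGW admin user has the "usage=read" capability, the gateway
# accepts AWS v4 signatures on the Admin Ops API, and the made-up endpoint
# and credential values below.
import time
import requests
from requests_aws4auth import AWS4Auth
from prometheus_client import Gauge, start_http_server

RGW_URL = "http://rgw.example.com:7480"        # hypothetical endpoint
AUTH = AWS4Auth("ACCESS_KEY", "SECRET_KEY", "us-east-1", "s3")

bytes_sent = Gauge("radosgw_usage_bytes_sent", "Bytes sent per RGW user", ["uid"])

def scrape():
    # GET /admin/usage is the documented Admin Ops call for usage statistics.
    r = requests.get(f"{RGW_URL}/admin/usage",
                     params={"format": "json", "show-summary": "True"},
                     auth=AUTH, timeout=10)
    r.raise_for_status()
    # Field names mirror "radosgw-admin usage show" output; verify against
    # your release before relying on them.
    for entry in r.json().get("summary", []):
        total = entry.get("total", {})
        bytes_sent.labels(uid=entry.get("user", "unknown")).set(
            total.get("bytes_sent", 0))

if __name__ == "__main__":
    start_http_server(9240)   # arbitrary port for this sketch
    while True:
        scrape()
        time.sleep(60)
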
On 21-3-2018 13:47, Paul Emmerich wrote:
> Hi,
>
> 2.3µs is a typical delay for a 10GBASE-T connection. But fiber or SFP+
> DAC connections should be faster: switches are typically in the range of
> ~500ns to 1µs.
>
>
> But you'll find that this small difference in latency induced by the
> switc
I'm trying to determine the best way to go about configuring IO
rate-limiting for individual images within an RBD pool.
Here [1], I've found that OpenStack appears to use Libvirt's "iotune"
parameter; however, I seem to recall reading about being able to do so
via Ceph's settings.
Is there a
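
For the libvirt route mentioned above, the knob OpenStack drives is the per-disk iotune block, which can also be applied to a running guest through the libvirt API. A minimal sketch with the Python bindings follows; the domain name, target device and limit values are made up, and this is only one of the approaches being asked about, not a statement that it is the best one.

# Hedged sketch: throttle one RBD-backed disk of a running guest through
# libvirt's iotune interface (the same mechanism OpenStack drives).
# The domain name, target device and limit values are illustrative only.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("guest-with-rbd-disk")   # hypothetical domain name

limits = {
    "total_iops_sec": 500,            # cap combined read+write IOPS
    "total_bytes_sec": 50 * 1024**2,  # cap throughput at ~50 MB/s
}

# Apply to the live guest; OR in VIR_DOMAIN_AFFECT_CONFIG to persist the
# setting across restarts as well.
dom.setBlockIoTune("vda", limits, libvirt.VIR_DOMAIN_AFFECT_LIVE)

conn.close()
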
Hi all,
The context :
- Test cluster alongside the production one
- Fresh install on Luminous
- Choice of BlueStore (coming from FileStore)
- Default config (including wpq queuing)
- 6 SAS12 nodes, 14 OSDs, 2 SSDs, 2 x 10Gb per node, far more Gb at each
switch uplink...
- R3 pool, 2 nodes per site
- separat
I retract my previous statement(s).
My current suspicion is that this isn't a leak so much as it being
load-driven; after enough waiting, it generally seems to settle around
some equilibrium. We do seem to sit at mempools x 2.4 ~ ceph-osd RSS,
which is on the higher side (I see documentation
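
For anyone wanting to reproduce the mempools x 2.4 ~ RSS comparison, a rough sketch against the admin socket could look like the following; the OSD id/PID lookup and the handling of the dump_mempools JSON layout are assumptions about the deployment and Ceph release.

# Rough sketch: compare an OSD's mempool accounting to its resident set size.
# Run it on the OSD host (needs access to the admin socket, usually as root)
# and pass the OSD id as the only argument.
import json
import subprocess
import sys

osd_id = sys.argv[1]

out = subprocess.check_output(
    ["ceph", "daemon", f"osd.{osd_id}", "dump_mempools"])
data = json.loads(out)

# The JSON layout differs between releases, hence the defensive parsing.
pools = data.get("mempool", {}).get("by_pool", data)
mempool_bytes = sum(v.get("bytes", 0) for k, v in pools.items()
                    if isinstance(v, dict) and k != "total")

pid = subprocess.check_output(
    ["systemctl", "show", "--property=MainPID", "--value",
     f"ceph-osd@{osd_id}"]).strip().decode()
with open(f"/proc/{pid}/status") as f:
    rss_kb = next(int(line.split()[1]) for line in f
                  if line.startswith("VmRSS:"))

print(f"mempools: {mempool_bytes / 2**30:.2f} GiB, "
      f"RSS: {rss_kb / 2**20:.2f} GiB, "
      f"ratio: {rss_kb * 1024 / mempool_bytes:.2f}")
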
Latency is a concern if your application is sending one packet at a time
and waiting for a reply. If you are streaming large blocks of data, the
first packet is delayed by the network latency but after that you will
receive a 10Gbps stream continuously. The latency for jumbo frames vs 1500
byte fra
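
To put rough numbers on that argument (my own arithmetic, not from the post): total transfer time is approximately the one-off latency plus size divided by line rate, so a microsecond or two per hop only matters for small synchronous IOs.

# Rough arithmetic: how much a couple of microseconds of switch latency
# matters for a transfer at 10 Gbps. The latency figures are illustrative.
LINE_RATE_BYTES = 10e9 / 8        # bytes per second on a 10 Gbps link

def transfer_time_us(size_bytes, latency_us):
    """One-off first-byte latency plus streaming time, in microseconds."""
    return latency_us + size_bytes / LINE_RATE_BYTES * 1e6

for size in (4 * 1024, 4 * 1024**2):       # a 4 KiB IO vs a 4 MiB object
    fast = transfer_time_us(size, 1.0)     # ~1 us cut-through hop
    slow = transfer_time_us(size, 2.3)     # ~2.3 us 10GBASE-T hop
    print(f"{size:>8} B: {fast:.1f} us vs {slow:.1f} us "
          f"({100 * (slow - fast) / slow:.1f}% slower)")

For a 4 KiB IO the slower hop adds roughly 20-25% to the total; for a 4 MiB object it is lost in the noise, which is the point being made above.
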
Hi,
It would be appreciated if you could recommend some SSD models (200 GB or
less).
I am planning to deploy 2 SSDs and 6 HDDs (for a 1-to-3 ratio) in a few Dell
R620s with 64 GB RAM.
Also, what is the highest HDD capacity that you were able to use in the
R620?
Note
I apologize for asking "research e
Looking at the latency numbers in this thread, it seems to be a cut-through
switch.
Subhachandra
On Wed, Mar 21, 2018 at 12:58 PM, Subhachandra Chandra <
schan...@grailbio.com> wrote:
> Latency is a concern if your application is sending one packet at a time
> and waiting for a reply. If you are
If you want speed and IOPS, try the PM863a or SM863a (the PM863a is slightly
cheaper).
If you want high endurance, try the Intel DC S3700 series.
Do not use consumer SSDs for caching, nor desktop HDDs for OSDs.
what is the highest HDD capacity that you were able to use in the R620 ?
This depends on your
On 03/21/2018 06:48 PM, Andre Goree wrote:
> I'm trying to determine the best way to go about configuring IO
> rate-limiting for individual images within an RBD pool.
>
> Here [1], I've found that OpenStack appears to use Libvirt's "iotune"
> parameter, however I seem to recall reading about bei