On 14 January 2015 at 12:08, JM wrote:
> Hi Roland,
>
> You should tune your Ceph Crushmap with a custom rule in order to do that
> (write first to s3 and then to the others). This custom rule will then be
> applied to your proxmox pool.
> (what you want to do is only interesting if you run VM from h
On 16 January 2015 at 17:15, Gregory Farnum wrote:
> > I have set up 4 machines in a cluster. When I created the Windows 2008
> > server VM on S1 (I corrected my first email: I have three Sunfire X series
> > servers, S1, S2, S3), since S1 has 36GB of RAM and 8 x 300GB SAS drives,
> > it was run
On Fri, Jan 16, 2015 at 2:52 AM, Roland Giesler wrote:
> On 14 January 2015 at 21:46, Gregory Farnum wrote:
>>
>> On Tue, Jan 13, 2015 at 1:03 PM, Roland Giesler wrote:
>> > I have a 4 node ceph cluster, but the disks are not equally distributed
>> > across all machines (they are substantiall
On 14 January 2015 at 21:46, Gregory Farnum wrote:
> On Tue, Jan 13, 2015 at 1:03 PM, Roland Giesler wrote:
> > I have a 4 node ceph cluster, but the disks are not equally distributed
> > across all machines (they are substantially different from each other)
> >
> > One machine has 12 x 1TB SA
So you can see my server names and their osd's too...
# id    weight  type name       up/down reweight
-1      11.13   root default
-2      8.14        host h1
1       0.9             osd.1   up      1
3       0.9             osd.3   up      1
4       0.9             osd.4   up      1
5       0.68
# Get the compiled crushmap
root@server01:~# ceph osd getcrushmap -o /tmp/myfirstcrushmap
# Decompile the compiled crushmap above
root@server01:~# crushtool -d /tmp/myfirstcrushmap -o /tmp/myfirstcrushmap.txt
Then give us your /tmp/myfirstcrushmap.txt file... :)
2015-01-14 17:36 GMT+01:00 Roland
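A minimal sketch of the reverse path once the decompiled map has been edited, reusing the file names from the example above (/tmp/mynewcrushmap is just an illustrative name); worth dry-running on a test setup before injecting:

# Recompile the edited text map back into binary form
root@server01:~# crushtool -c /tmp/myfirstcrushmap.txt -o /tmp/mynewcrushmap
# Dry-run the new map: simulate placements for 2 replicas before touching the cluster
root@server01:~# crushtool --test -i /tmp/mynewcrushmap --num-rep 2 --show-statistics
# Inject the new crushmap into the running cluster
root@server01:~# ceph osd setcrushmap -i /tmp/mynewcrushmap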
Hi Roland,
You should tune your Ceph Crushmap with a custom rule in order to do that
(write first to s3 and then to the others). This custom rule will then be
applied to your proxmox pool.
(What you want to do is only interesting if you run the VMs from host s3.)
Can you give us your crushmap?
2015-01-
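A minimal sketch of what such a custom rule could look like in the decompiled crushmap, assuming a replicated pool and a host bucket named s3; the rule name and ruleset number below are illustrative, not taken from the thread:

rule prefer_s3 {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        # first replica: one OSD under host s3
        step take s3
        step choose firstn 1 type osd
        step emit
        # remaining replicas (pool size minus one): one OSD per host under the default root
        step take default
        step chooseleaf firstn -1 type host
        step emit
}

The pool would then be pointed at the rule with something like: ceph osd pool set <poolname> crush_ruleset 1. Note that the second "step take default" starts again from the whole tree, so a later replica can still land on s3 unless that subtree is reorganised to exclude it.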
On Tue, Jan 13, 2015 at 1:03 PM, Roland Giesler wrote:
> I have a 4 node ceph cluster, but the disks are not equally distributed
> across all machines (they are substantially different from each other)
>
> One machine has 12 x 1TB SAS drives (h1), another has 8 x 300GB SAS (s3) and
> two machines
On 01/13/15 22:03, Roland Giesler wrote:
> I have a 4 node ceph cluster, but the disks are not equally
> distributed across all machines (they are substantially different from
> each other)
>
> One machine has 12 x 1TB SAS drives (h1), another has 8 x 300GB SAS
> (s3) and two machines have only two
I have a 4 node ceph cluster, but the disks are not equally distributed
across all machines (they are substantially different from each other).
One machine has 12 x 1TB SAS drives (h1), another has 8 x 300GB SAS (s3)
and two machines have only two 1 TB drives each (s2 & s1).
Now machine s3 has by