Do you mix SATA and SSD disks within the same server? Read this:
http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
When you have separate pools for SATA and SSD, configure a cache pool:
ceph osd tier add satapool ssdpool
ceph osd tier cache-mode ssdpool writeback
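A writeback tier normally also needs an overlay and a hit set before it starts caching anything; a minimal sketch using the same satapool/ssdpool names (the hit set and size values below are only illustrative, tune them for your hardware):
ceph osd tier set-overlay satapool ssdpool
ceph osd pool set ssdpool hit_set_type bloom
ceph osd pool set ssdpool hit_set_count 1
ceph osd pool set ssdpool hit_set_period 3600
# example value only, size this to your SSD capacity
ceph osd pool set ssdpool target_max_bytes 100000000000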
Hello!
After changing the crush map, every OSD (ceph version 0.61.4
(1669132fcfc27d0c0b5e5bb93ade59d147e23404)) in the default pool crashes
with the error:
2013-07-14 17:26:23.755432 7f0c963ad700 -1 *** Caught signal
(Segmentation fault) **
in thread 7f0c963ad700
...skipping...
10: (OSD::PeeringWQ::_p
5 rgw
1/ 5 hadoop
1/ 5 javaclient
1/ 5 asok
1/ 1 throttle
-2/-2 (syslog threshold)
-1/-1 (stderr threshold)
max_recent 1
max_new 1000
log_file /var/log/ceph/ceph-osd.2.log
--- end dump of recent events ---
2013/7/14 Vladislav Go
Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Sun, Jul 14, 2013 at 10:59 PM, Vladislav Gorbunov
> wrote:
>> Symptoms like on http://tracker.ceph.com/issues/4699
>>
>> on all OSDs the ceph-osd process crashes with a segfault
>>
>> If I stop MONs
Sorry, this happened after I tried to apply crush ruleset 3 (iscsi) to
pool iscsi:
ceph osd pool set iscsi crush_ruleset 3
2013/7/16 Vladislav Gorbunov :
>> Have you run this crush map through any test mappings yet?
> Yes, it worked on the test cluster, and then I applied the map to the main cluster.
> O
ruleset 3 is:
rule iscsi {
	ruleset 3
	type replicated
	min_size 1
	max_size 10
	step take iscsi
	step chooseleaf firstn 0 type datacenter
	step chooseleaf firstn 0 type host
	step emit
}
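On the test-mapping question: the map can also be checked offline with crushtool before injecting it; a rough sketch with made-up file names (here --rule refers to the ruleset being tested):
ceph osd getcrushmap -o crushmap.bin
crushtool -i crushmap.bin --test --rule 3 --num-rep 2 --show-mappings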
2013/7/16 Vladislav Gorbunov :
> sorry, after i
tree" and look at the shape of the map.
> But it sounds to me like maybe there's a disconnect between what
> you've put into the cluster, and what you're looking at.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Mon, Jul
> So if you could just do "ceph osd crush dump" and "ceph osd dump" and
> provide the output from those commands, we can look at what the map
> actually has and go from there.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On
> On Tue, Jul 16, 2013 at 4:00 PM, Vladislav Gorbunov wrote:
>> output is in the attached files
>>
>> 2013/7/17 Gregory Farnum :
>>> The maps in the OSDs only would have gotten there from the monitors.
>>> If a bad map somehow got distributed to the OSDs the
leset 0
deleted the iscsi ruleset, uploaded the crushmap to the cluster:
https://dl.dropboxusercontent.com/u/2296931/ceph/crushmap14-new.txt
The OSDs still crash with a segmentation fault.
2013/7/18 Gregory Farnum :
> On Wed, Jul 17, 2013 at 4:40 AM, Vladislav Gorbunov wrote:
>> Sorry, I did not send this to ceph-users earlier.
>>
mon.1.tar.bz2
2013/7/19 Gregory Farnum :
> In the monitor log you sent along, the monitor was crashing on a
> setcrushmap command. Where in this sequence of events did that happen?
>
> On Wed, Jul 17, 2013 at 5:07 PM, Vladislav Gorbunov wrote:
>> That's what I did:
>>
>>
You can configure the mon servers and the crushmap as shown in this
nice example:
http://www.sebastien-han.fr/blog/2013/01/28/ceph-geo-replication-sort-of/
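For illustration only, a ceph.conf monitor layout in that spirit could look like this (hostnames and addresses are made up, not taken from the post):
[mon.a]
    host = mon-dc1
    mon addr = 192.168.1.10:6789
[mon.b]
    host = mon-dc2
    mon addr = 192.168.2.10:6789
[mon.c]
    host = mon-quorum
    mon addr = 192.168.3.10:6789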
On Thursday, June 27, 2013, the user wrote:
> Hi,
>
> yes exactly. synchronous replication is OK. The distance between the
> datacenter
> is
We run
ceph osd crush reweight osd.{osd-num} 0
before
ceph osd out {osd-num}
to avoid a double cluster rebalance after
ceph osd crush remove osd.{osd-num}
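A full removal sequence along those lines might look like this for a hypothetical osd.5 (a sketch, not copied from a real session):
ceph osd crush reweight osd.5 0
# wait for the rebalance triggered by the reweight to finish (watch ceph -w)
ceph osd out 5
# stop the ceph-osd daemon on its host, then drop it from crush, auth and the osd map
ceph osd crush remove osd.5
ceph auth del osd.5
ceph osd rm 5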
Has the ceph osd crush move syntax changed in 0.67?
I have this crushmap:
# id	weight	type name	up/down	reweight
-1	10.11	root default
-4	3.82		datacenter dc1
-2	3.82			host cstore3
0	0.55				osd.0	up	1
1	0.55
I found a solution with these commands:
ceph osd crush unlink cstore1
ceph osd crush link cstore1 root=default datacenter=dc1
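For comparison, the move form being asked about would be something like the following; treat it as a sketch, since whether it still behaves this way on 0.67 is exactly the open question:
ceph osd crush move cstore1 root=default datacenter=dc1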
2013/9/9 Vladislav Gorbunov :
> Has the ceph osd crush move syntax changed in 0.67?
> I have this crushmap:
> # id weight type name up/down reweight
>
Try converting with qemu-img:
qemu-img convert -p -f vpc hyper-v-image.vhd
rbd:rbdpool/ceph-rbd-image:mon_host=ceph-mon-name
where ceph-mon-name is the ceph monitor host name or IP address
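After the conversion you can sanity-check the image with standard rbd commands, assuming the same pool and image names as above:
rbd -m ceph-mon-name ls rbdpool
rbd -m ceph-mon-name info rbdpool/ceph-rbd-image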
2013/10/22 James Harper :
> Can any suggest a straightforward way to import a VHD to a ceph RBD? The
> easier the better!
>
> T
>Has anyone tried using bcache of dm-cache with ceph?
I tested lvmcache (based on dm-cache) with ceph 0.80.5 on CentOS 7.
I got an unrecoverable XFS error and completely lost the OSD server.
Try disabling SELinux, or run
setsebool -P virt_use_execmem 1
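To check before changing anything permanently, the usual SELinux tools can confirm whether that boolean is the problem (generic commands, not specific to this setup):
getsebool virt_use_execmem
# temporarily switch SELinux to permissive mode and retry starting the image
setenforce 0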
2014-10-06 8:38 GMT+12:00 Nathan Stratton :
> I did the same thing, build the RPMs and now show rbd support, however
> when I try to start an image I get:
>
> 2014-10-05 19:48:08.058+: 4524: error : qemuProcessWaitForMonitor:1889
rbd -m mon-cluster1 export rbd/one-1 - | rbd -m mon-cluster2 import -
rbd/one-1
On Friday, April 25, 2014, Brian Rak wrote:
> Is there a recommended way to copy an RBD image between two different
> clusters?
>
> My initial thought was 'rbd export - | ssh "rbd import -"', but I'm no
> Should this be done on the iscsi target server? I have a default option to
> enable rbd caching as it speeds things up on the VMs.
Yes, only on the iscsi target servers.
2014-05-08 1:29 GMT+12:00 Andrei Mikhailovsky :
>> It's important to disable the rbd cache on tgtd host. Set in
>> /etc/ceph/ce
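The quoted ceph.conf change presumably ends up as something like this on the tgt hosts (a sketch; the exact section name in the original mail is cut off):
[client]
    rbd cache = false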
Hi!
Can you help me understand why a crushmap with
step chooseleaf firstn 0 type host
doesn't work with hosts placed in datacenters?
If I have the osd tree:
# id	weight	type name	up/down	reweight
-1	0.12	root default
-3	0.03		host tceph2
1	0.03			osd.1	up	1
-4	0.03		host tceph3
2	0.03			osd.2	up	1
-2	0.03		hos
:00 Vladislav Gorbunov :
> Hi!
>
> Can you help me understand why a crushmap with
> step chooseleaf firstn 0 type host
> doesn't work with hosts placed in datacenters?
>
> If I have the osd tree:
> # id weight type name up/down reweight
> -1 0.12 root default
> -3 0.03 h
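For what it's worth, when hosts sit under datacenters the pattern usually suggested selects the datacenters first and then a host inside each one; a sketch with a made-up rule name and ruleset number:
rule replicated_over_dc {
	ruleset 4
	type replicated
	min_size 1
	max_size 10
	step take default
	step choose firstn 0 type datacenter
	step chooseleaf firstn 1 type host
	step emit
}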
> ceph osd pool set data pg_num 1800
> And I do not understand why OSDs 16 and 19 are hardly used
Actually you need to change the pgp_num for real data rebalancing:
ceph osd pool set data pgp_num 1800
Check it with the command:
ceph osd dump | grep 'pgp_num'
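If your release has it, both values can also be read per pool; standard commands, using the same pool name:
ceph osd pool get data pg_num
ceph osd pool get data pgp_num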
2013/7/3 Pierre BLONDEAU :
> Le 01/07