Thanks a lot.
Yes, you are correct: I can connect with the uppercase target name on Linux, but
not on Windows.
After I changed the target name to lowercase, Windows works fine too.
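For reference, a minimal /etc/tgt/targets.conf sketch with an all-lowercase IQN
and the rbd backing store from the ceph-extras tgt build might look like this;
the IQN, pool, and image names below are placeholders, not my actual setup:

<target iqn.2014-07.com.example:iscsi-rbd>
    # bs-type rbd is only available in tgt builds compiled with RBD support
    bs-type rbd
    # placeholder pool/image; replace with your own rbd pool and image
    backing-store iscsi-pool/iscsi-image
</target>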
2014-07-22 22:13 GMT+07:00 Uwe Grohnwaldt :
> Hi,
>
> please try a targetname without uppercase characters and without a
Dear all,
I have installed tgt from http://ceph.com/packages/ceph-extras/debian
and have created a pool and an image for iSCSI.
But when I try to connect to this iSCSI target from the Windows 2008 iSCSI initiator,
it can be found, but the status is not active. I do not know why.
These are the steps I took for the Ceph iSCSI target
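A generic check on the tgt host, not specific to this setup: the show command
lists each target's LUNs, ACL bindings, and any connected initiator sessions,
which helps tell whether the Windows initiator ever logged in.

# list all targets with their LUNs, ACLs, and connected sessions
tgtadm --lld iscsi --mode target --op show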
Pramana" escribió:
>
> AFAIK, it's not possible. A journal should be on the same server as the
>> OSD it serves. CMIIW.
>>
>> Thank you.
>>
>>
>> On Mon, Jul 21, 2014 at 10:34 PM, 不坏阿峰 wrote:
>>
>>> Thanks for your reply.
>>>
>
/ceph/osd3/journal
>
> Then activate the OSDs...
>
> ceph-deploy --verbose osd activate cephosd01:/dev/sdg1:/var/ceph/osd1/journal
> ceph-deploy --verbose osd activate cephosd01:/dev/sdf1:/var/ceph/osd2/journal
> ceph-deploy --verbose osd activate cephosd01:/dev/
I have only one SSD and want to improve Ceph performance.
Is it possible to use one SSD as the journal disk for 3 OSDs?
If it is possible, how do I configure it?
Many thanks.
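To illustrate the approach from the quoted reply, here is a rough sketch. It
assumes the SSD is /dev/sdb, the host is cephosd01, and the OSD data partitions
are /dev/sdg1, /dev/sdf1, and /dev/sde1; all of these device names are
placeholders.

# put a filesystem on the SSD and mount it where the journal files will live
mkfs.xfs /dev/sdb
mkdir -p /var/ceph
mount /dev/sdb /var/ceph
mkdir -p /var/ceph/osd1 /var/ceph/osd2 /var/ceph/osd3

# prepare each OSD with its data partition and a journal file on the shared SSD
ceph-deploy --verbose osd prepare cephosd01:/dev/sdg1:/var/ceph/osd1/journal
ceph-deploy --verbose osd prepare cephosd01:/dev/sdf1:/var/ceph/osd2/journal
ceph-deploy --verbose osd prepare cephosd01:/dev/sde1:/var/ceph/osd3/journal

# then activate each OSD exactly as in the commands quoted above

One SSD can hold several journals, but keep in mind it then becomes a single
point of failure for all three OSDs on that host.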
I have installed and tested Ceph on VMs before, so I know a bit about
configuration and installation.
Now I want to install Ceph on physical PC servers and do some tests; I
think the performance will be better than on VMs. I have some questions about how
to plan the Ceph storage architecture.
>>> what do i have a
ns for a discussion on this.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Mon, Aug 12, 2013 at 7:16 PM, 不坏阿峰 wrote:
>> I have PGs that have been stuck for a long time and do not know how to fix it. Can someone help
>> check?
>>
>> Environme
Thanks a lot for your reply.
I know that in [5,3], osd.5 is the primary OSD, since my replication size
is 2, and in my testing cluster test.txt is the only
file.
I just ran mount -t cephfs 192.168.250.15:6789:/ , so does that mean the pool 'data'
is used by default?
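One way to check which pool actually holds the file's data is to list the
objects in each candidate pool; this is a sketch that assumes the pool names
'data' and 'volumes' from this thread. CephFS names data objects
<inode in hex>.<block number>, and the inode 1099511627776 shown below is
0x10000000000.

# whichever pool lists objects such as 10000000000.00000000 holds test.txt
rados -p data ls | grep 10000000000
rados -p volumes ls | grep 10000000000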
##The acting OSDs however are the OSD num
I mounted CephFS at /mnt/mycephfs on Debian 7, kernel 3.10.
For example, there is one file:
root@test-debian:/mnt/mycephfs# ls -i test.txt
1099511627776 test.txt
root@test-debian:/mnt/mycephfs# ceph osd map volumes test.txt
osdmap e351 pool 'volumes' (3) object 'test.txt' -> pg 3.8b0b6108 (3.8) ->
up [5,3] acting [5,3]
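To double-check the mapping for that placement group, the pg commands below can
be used; 3.8 is the pg id shown in the osd map output above.

ceph pg map 3.8     # prints the osdmap epoch and the up/acting OSD sets for pg 3.8
ceph pg 3.8 query   # detailed state of the pg, including peering and recovery info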
I have PGs that have been stuck for a long time and do not know how to fix it. Can someone help
check?
Environment: Debian 7 + Ceph 0.61.7
root@ceph-admin:~# ceph -s
health HEALTH_WARN 6 pgs stuck unclean
monmap e2: 2 mons at {a=192.168.250.15:6789/0,b=192.168.250.8:6789/0},
election epoch 8, q
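A sketch of the usual first diagnostics for stuck-unclean PGs; none of these
commands are specific to this cluster.

ceph health detail          # lists each stuck pg and why it is flagged
ceph pg dump_stuck unclean  # shows the stuck pgs with their up/acting OSD sets
ceph osd tree               # confirms enough OSDs are up and in to satisfy the replication size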