Hi Alfredo
Thanks for picking up on this
> -Original Message-
> From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
> Sent: Monday, 21 October 2013 14:17
> To: Fuchs, Andreas (SwissTXT)
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Ceph Block Device install
>
> On Mon, Oct
Hi all!
I updated my Ceph version from 0.56.3 to 0.62. I installed all the dev packages
and successfully built 0.62.
But when I use the 0.62 init-ceph script, none of the OSDs will restart.
It returns: failed: /usr/local/bin/ceph-osd -i 0 --pid-file
/var/run/osd.0.pid -c /tmp/fetched.ceph.conf.12035
and when
Try with qemu-img:
qemu-img convert -p -f vpc hyper-v-image.vhd \
  rbd:rbdpool/ceph-rbd-image:mon_host=ceph-mon-name
where ceph-mon-name is the Ceph monitor hostname or IP.
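For what it's worth, the rbd: spec also takes other colon-separated options if
you need a specific client id or config file; a rough sketch (the id and conf
values here are just placeholders):
qemu-img convert -p -f vpc hyper-v-image.vhd \
  rbd:rbdpool/ceph-rbd-image:id=admin:conf=/etc/ceph/ceph.conf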
2013/10/22 James Harper :
> Can anyone suggest a straightforward way to import a VHD to a Ceph RBD? The
> easier the better!
>
> T
Hi,
I was wondering if anyone has had any experience in attempting to use an RBD
volume as a clustered drive in Windows Failover Clustering? I'm getting the
impression that it won't work, since it needs to be either an iSCSI LUN or a
SCSI LUN.
Thanks,
Damien
On Tue, Oct 22, 2013 at 3:39 AM, Fuchs, Andreas (SwissTXT)
wrote:
> Hi Alfredo
> Thanks for picking up on this
>
>> -Original Message-
>> From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
>> Sent: Monday, 21 October 2013 14:17
>> To: Fuchs, Andreas (SwissTXT)
>> Cc: ceph-users@lists.ce
RBD can be re-published via iSCSI with a gateway host sitting in
between, for example using targetcli.
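Roughly, from memory (hostnames and IQNs below are just placeholders): map the
image with the kernel RBD client on the gateway and export the resulting block
device, e.g.:
rbd map rbdpool/ceph-rbd-image    # maps the image to e.g. /dev/rbd0
targetcli /backstores/block create name=rbd0 dev=/dev/rbd0
targetcli /iscsi create iqn.2013-10.com.example:rbd-gw
targetcli /iscsi/iqn.2013-10.com.example:rbd-gw/tpg1/luns create /backstores/block/rbd0
targetcli /iscsi/iqn.2013-10.com.example:rbd-gw/tpg1/acls create iqn.1991-05.com.microsoft:initiator-host
targetcli saveconfig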
On 2013-10-22 13:15, Damien Churchill wrote:
Hi,
I was wondering if anyone has had any experience in attempting to use
a RBD volume as a clustered drive in Windows Failover Clustering? I'm
Yeah, I'd thought of doing it that way; however, it would be nice to avoid
that if possible, since the machines in the cluster will be running under
QEMU using librbd, so having to re-export the drives over iSCSI would add
overhead.
On 22 October 2013 13:36, wrote:
>
> RBD can be re-pub
> -Original Message-
> From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
> Sent: Tuesday, 22 October 2013 14:16
> To: Fuchs, Andreas (SwissTXT)
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Ceph Block Device install
>
> On Tue, Oct 22, 2013 at 3:39 AM, Fuchs, Andreas (
Thanks Mark for the response. My comments inline...
From: Mark Nelson
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Rados bench result when increasing OSDs
Message-ID: <52653b49.8090...@inktank.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
On 10/21/2013 09:13 AM, Gua
Hi All,
I have a Ceph cluster set up with 3 nodes, with a 1 Gbps public network and a
10 Gbps private cluster network that is not reachable from the public network.
I want to force the OSDs to use only the private network, and the public network
for the MONs and MDS. I am using ceph-deploy to set up the cluster and curre
Hi Kyle and Greg,
I will get back to you with more details tomorrow, thanks for the response.
Thanks,
Guang
On 2013-10-22, at 9:37 AM, Kyle Bader wrote:
> Besides what Mark and Greg said it could be due to additional hops through
> network devices. What network devices are you using, what is the network
If you get this message
RuntimeError: Failed to execute command: su -c 'rpm --import
"https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc";'
change the configuration of curl to
[root@cephtest01 ~]# cat .curlrc
proxy = http://proxy.de.signintra.com:80
In the root home directory.
Unfortu
2013/10/22 Michael Kirchner :
> If you get this message
> RuntimeError: Failed to execute command: su -c 'rpm --import
> "https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc";'
>
> change the configuration of curl to
>
> [root@cephtest01 ~]# cat .curlrc
> proxy = http://proxy.de.signintra.
http://ceph.com/docs/master/rados/configuration/network-config-ref/
On 22 Oct 2013 at 18:22, "Abhay Sachan"
wrote:
> Hi All,
> I have a ceph cluster setup with 3 nodes which has 1Gbps public network
> and 10Gbps private cluster network which is not accessible from public
> network. I
Hi Abhay
Try setting this in your ceph.conf:
cluster_network = 10.10.10.0/24
public_network = 192.168.1.0/24
Obviously, use your own IP ranges for both variables; the point is that they
are two different networks.
Regards
Hi,
I accidentally installed Saucy Salamander. Does the project have a
timeframe for supporting this Ubuntu release?
Thanks,
JL
For the time being, you can install the Raring debs on Saucy without issue.
echo deb http://ceph.com/debian-dumpling/ raring main | sudo tee /etc/apt/sources.list.d/ceph.list
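If it helps, a sketch of the remaining steps (the key URL is the same one used
for the rpm import elsewhere in this thread):
wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
sudo apt-get update && sudo apt-get install ceph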
I'd also like to register a +1 request for official builds targeted at
Saucy.
Cheers,
Mike
On 10/22/2013 11:42 AM,
And a +1 from me as well. It would appear that Ubuntu has picked up the 0.67.4
source and included a build of it in their official repo, so you may be able to
get by with those until the next point release.
http://packages.ubuntu.com/search?keywords=ceph
On Oct 22, 2013, at 11:46 AM, Mike Daws
Off topic perhaps but I'm finding it pretty buggy just now - not sure
I'd want it underpinning Ceph, at the moment.
On 2013-10-22 16:51, Mike Lowe wrote:
And a +1 from me as well. It would appear that ubuntu has picked up
the 0.67.4 source and included a build of it in their official repo,
so
Hello,
we're using a small Ceph cluster with 8 nodes, each with 4 OSDs. People are using it
through instances and volumes in an OpenStack platform.
We're facing a HEALTH_ERR with full or near-full OSDs:
cluster 5942e110-ea2f-4bac-80f7-243fe3e35732
health HEALTH_ERR 1 full osd(s); 13 near full o
thanks for the quick responses. seems to be working ok for me, but...
[OT]
I keep hitting this issue where ceph-deploy will not mkdir /etc/ceph/
before it tries to "write cluster configuration to
/etc/ceph/{cluster}.conf". Manually creating the dir on each mon node
allows me to issue a "ceph-de
Hi all-
Ceph-Deploy 1.2.7 is hanging for me on CentOS 6.4 at this step:
[joceph01][INFO ] Running command: rpm -Uvh --replacepkgs
http://ceph.com/rpm-dumpling/el6/noarch/ceph-release-1-0.el6.noarch.rpm
The command runs fine if I execute it myself via SSH with sudo to the target
system:
[ceph
I currently have two datacenters (active/passive) using NFS storage.
Backups are done with nightly rsyncs. I want to replace this with
RadosGW and RGW geo-replication. I plan to roll out production after
Emperor comes out.
I'm trying to figure out how to import my existing data. The data
alre
/etc/ceph should be installed by the package named 'ceph'. Make sure
you're using ceph-deploy install to install the Ceph packages before
trying to use the machines for mon create.
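In other words, roughly this order (hostnames below are just placeholders):
ceph-deploy new mon1 mon2 mon3
ceph-deploy install mon1 mon2 mon3    # installs the 'ceph' package, which owns /etc/ceph
ceph-deploy mon create mon1 mon2 mon3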
On 10/22/2013 10:32 AM, LaSalle, Jurvis wrote:
thanks for the quick responses. seems to be working ok for me, b
Hello,
What I have used to rebalance my cluster is:
ceph osd reweight-by-utilization
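If I remember right it takes an optional utilization threshold (percent of the
cluster average, 120 by default), so something like:
ceph osd reweight-by-utilization 120
ceph health detail    # should list which OSDs are still (near) full
ceph -s               # watch the rebalancing/recovery progress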
we're using a small Ceph cluster with 8 nodes, each 4 osds. People are
using it
through instances and volumes in a Openstack platform.
We're facing a HEALTH_ERR with full or near full osds :
cluster 5942e1
This was resolved by setting the curl proxy (which conveniently was identified
as necessary in another email on this list just earlier today).
Overall I had to directly configure the proxies for wget, rpm and curl before I
could "ceph-deploy install" completely. Setting global or user proxies
Hi all!
I have a 12-node Ceph cluster (1 mon, 2 mds, 9 osd). Today osd.0, osd.3, and
osd.4 went down and I cannot restart them.
osd.0, osd.3, and osd.4 are on the same host, whose name is osd0.
First, here is the OSD log:
#tail -f /var/log/ceph/osd.0.log
ceph version 0.62(
Hey all,
The OpenStack community has spawned a newish "Project Manila", an
effort spearheaded by NetApp to provide a file-sharing service
analogous to Cinder, but for filesystems instead of block devices. The
elevator pitch:
Isn't it great how OpenStack lets you manage block devices for your
hosts?
http://ceph.com/docs/master/rados/operations/placement-groups/
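(The relevant knobs there are set per pool; the pool name and counts below are
just placeholders, and note that pg_num can only be increased:)
ceph osd pool get volumes pg_num
ceph osd pool set volumes pg_num 512
ceph osd pool set volumes pgp_num 512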
2013/10/22 HURTEVENT VINCENT
> Hello,
>
> we're using a small Ceph cluster with 8 nodes, each 4 osds. People are
> using it through instances and volumes in a Openstack platform.
>
> We're facing a HEALTH_ERR with full or near full