Hi list,
Does dumpling v0.67.4 or v0.71 now support the multi-region / disaster recovery
function? If v0.67.4/0.71 supports it, which doc can I refer to for configuring
regions/zones/agents? Could anyone give me a link?
Thanks.
Hi, all.
I am interested in the following questions:
1. Does the number of HDDs affect the performance of the cluster?
2. Does anyone have experience running KVM virtualization and Ceph on
the same server?
Thanks!
--
Best regards, Фасихов Ирек Нургаязович
Mob.: +79229045757
Thanks guys,
after testing it on the dev server, I have implemented the new config in the prod
system.
Next I will upgrade the hard drive. :)
Thanks again, all.
On Tue, Oct 29, 2013 at 11:32 PM, Kyle Bader wrote:
> Recovering from a degraded state by copying existing replicas to other
> OSDs is going t
Also, the prepare step completed successfully:
[ceph@ceph-deploy my-cluster]$ ceph-deploy disk list ceph-server02
[ceph_deploy.cli][INFO ] Invoked (1.2.7): /usr/bin/ceph-deploy disk list
ceph-server02
[ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
[ceph_deploy.osd][INFO ] Dist
Nothing in the ceph-server02 log.
ceph-deploy osd activate ceph-server02:/dev/sdb1
s=1 pgs=0 cs=0 l=1 c=0x7f0da8013a80).fault
[ceph-server02][ERROR ] 2013-10-29 21:54:38.712639 7f0db81e8700 0 -- :/1002801
>> 192.168.115.91:6789/0 pipe(0x7f0da800b350 sd=10 :0 s=1 pgs=0 cs=0 l=1
c=0x7f0da800f3d0
I made the changes, and now it seems it can successfully wget the RPM, but I'm
getting a different error:
[root@ceph-admin-node-centos-6-4 my-cluster]# ceph-deploy install
ceph-node1-mon-centos-6-4 ceph-node2-osd0-centos-6-4 ceph-node3-osd1-centos-6-4
[ceph_deploy.cli][INFO ] Invoked (1.2.7): /us
On Tue, Oct 29, 2013 at 2:00 PM, wrote:
>
>
>
>
> From: Trivedi, Narendra [mailto:narendra.triv...@savvis.com]
> Sent: Tuesday, October 29, 2013 5:33 PM
> To: Whittle, Alistair: Investment Bank (LDN); joseph.r.gru...@intel.com;
> ceph-users@lists.ceph.com
>
>
> Subject: RE: ceph-deploy problems o
From: Trivedi, Narendra [mailto:narendra.triv...@savvis.com]
Sent: Tuesday, October 29, 2013 5:33 PM
To: Whittle, Alistair: Investment Bank (LDN); joseph.r.gru...@intel.com;
ceph-users@lists.ceph.com
Subject: RE: ceph-deploy problems on CentOS-6.4
Thanks a lot Joseph and Alistair... I have the following questions based on
your inputs:
1) Do I need to make changes to all the nodes or just the admin node? I
guess all the nodes since ceph-deploy issues commands via ssh on all nodes...
2) The installation guide recommends using ce
You also want to make sure that, if you are using a proxy, your proxy settings
are maintained through sudo.
With my deployment I had to add a line to my sudoers file to specify that the
https_proxy and http_proxy settings are maintained; it didn't work otherwise.
Defaults env_keep += "http_proxy ht
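For reference, the kind of sudoers entry being described usually looks something
like the following (a sketch; edit with visudo, and adjust the variable list to
whatever your environment needs):

# keep proxy environment variables when running commands through sudo
Defaults env_keep += "http_proxy https_proxy ftp_proxy no_proxy"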
To answer myself - there was a problem with my API secret key which rados
generated. It had escaped the "/", which for some reason CloudStack couldn't
understand. Removing the escape (\) character solved the problem.
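In case it helps others, the symptom looks roughly like this (the uid and key
below are made up):

# the JSON output escapes the slash in the secret key
radosgw-admin user info --uid=cloudstack | grep secret_key
#   "secret_key": "AbC\/dEfGh123..."
# enter the key into CloudStack with the backslash stripped: AbC/dEfGh123...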
Andrei
- Original Message -
From: "Andrei Mikhailovsky"
To: ce
I was able to add a public_network line to the config on the admin host and
push the config to the nodes with a "ceph-deploy --overwrite-conf config push
rc-ceph-node1 rc-ceph-node2 rc-ceph-node3". I was able to follow the
quickstart after that without further incident. Rzk had to take additio
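For anyone following along, the change being described is roughly the following
(the subnet is an example value; use your cluster's public network):

# added under [global] in ceph.conf on the admin host
public_network = 10.10.10.0/24

# then pushed out, overwriting the existing copies on the nodes
ceph-deploy --overwrite-conf config push rc-ceph-node1 rc-ceph-node2 rc-ceph-node3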
If you are behind a proxy, try configuring the wget proxy through /etc/wgetrc.
I had a similar problem where I could complete wget commands manually but they
would fail in ceph-deploy until I configured the wget proxy in that manner.
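The relevant /etc/wgetrc entries look something like this (the proxy host and
port are placeholders for your own):

http_proxy = http://proxy.example.com:3128/
https_proxy = http://proxy.example.com:3128/
use_proxy = on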
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-bo
On Tue, Oct 29, 2013 at 12:28 PM, Michael wrote:
> I can't remember how ceph-deploy behaves in this case but it might be
> worth trying manually installing the EPEL repo on one of the nodes and then
> just doing a simple "ceph-deploy install #node#" on a single node to see if
> it behaves.
>
Hi All,
I am a newbie to ceph. I am installing ceph (dumpling release) using
ceph-deploy (issued from my admin node) on one monitor and two OSD nodes
running CentOS 6.4 (64-bit), following the instructions in the link below:
http://ceph.com/docs/master/start/quick-ceph-deploy/
My setup looks e
Recovering from a degraded state by copying existing replicas to other OSDs
is going to cause reads on existing replicas and writes to the new
locations. If you have slow media then this is going to be felt more
acutely. Tuning the backfill options I posted is one way to lessen the
impact, another
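For reference, the backfill/recovery throttles usually meant here are along these
lines (the values are illustrative, and can go in the [osd] section of ceph.conf
or be injected at runtime):

osd max backfills = 1
osd recovery max active = 1
osd recovery op priority = 1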
I can't remember how ceph-deploy behaves in this case, but it might
be worth trying manually installing the EPEL repo on one of the nodes
and then just doing a simple "ceph-deploy install #node#" on a single
node to see if it behaves.
Otherwise you can try installing ceph manually using
h
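A minimal way to test that, assuming CentOS 6 nodes and substituting your own
node name (the EPEL release RPM URL below is the usual one; adjust if it has
moved):

# on the node, install the EPEL repo by hand
sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
# then, from the admin node, try a single-node install
ceph-deploy install ceph-node1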
I use vi or emacs. I'm not sure what you mean by connecting to Hadoop.
After you install and start Hadoop on your cluster you can submit jobs
with the Hadoop CLI tools.
On Mon, Oct 28, 2013 at 6:02 PM, 鹏 wrote:
> Hi Noah!
> Thanks for you reply!
> Can I ask if you want to code mapreduce
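For example, a typical submission with the CLI tools looks like this (the jar,
class, and paths are placeholders):

hadoop jar my-job.jar com.example.WordCount /user/me/input /user/me/output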
Thanks. It does seem to be working OK, and it seems I can create/remove objects
without issues.
I am, however, having another problem. While trying to add additional monitors to
my cluster I am getting the following errors (note I did not see this when
doing the first and currently only running
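For context, additional monitors are normally added with ceph-deploy along these
lines (hostnames are placeholders):

ceph-deploy mon create mon-node2 mon-node3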
Hello Alistair,
I also faced exactly the same issue with one of my OSDs: after OSD activate,
progress hung, but finally the OSD got added to the cluster with no problem.
My cluster is running without known issues as of now. If this is a test setup,
you can ignore this, but keep an eye on it.
R
The cost of the chassis component[1] is likely to influence totals a fair
bit. I notice that in their reference design there are only two 10Gb ports
for 60 drives -- this would be the cheap bulk storage option; if you had a
bandwidth-conscious application you'd be looking at more expensive 10Gb
po
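Back-of-the-envelope, the concern is roughly this (assuming ~1.25 GB/s usable
per 10Gb port and ~150 MB/s sequential per drive): 2 x 10Gb is about 2.5 GB/s of
network for 60 drives, i.e. roughly 40 MB/s per drive, well below what the drives
can stream, so that reference design is clearly capacity- rather than
bandwidth-oriented.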
Hi James,
> Message: 2
> Date: Tue, 29 Oct 2013 11:23:14 +
> From: ja...@peacon.co.uk
> To: Gregory Farnum
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Seagate Kinetic
> Message-ID: <81dbc7ae324ac5bc6afd85aef080f...@peacon.co.uk>
> Content-Type: text/plain; charset=UTF-8; forma
On 10/28/2013 06:31 PM, Yehuda Sadeh wrote:
On Mon, Oct 28, 2013 at 9:24 AM, Wido den Hollander wrote:
Hi,
I'm testing with some multipart uploads to RGW and I'm hitting a problem
when trying to upload files larger than 1159MB.
The tool I'm using is s3cmd 1.5.1
Ceph version: 0.67.4
It's ver
Hello guys,
I am doing a test ACS setup to see how we can use Ceph for both Primary and
Secondary storage services. I have now successfully added both Primary (cluster
wide) and Secondary storage. However, I've noticed that my SSVM and CPVM are
not being created, so digging in the logs reveal
I've found nothing related in the Apache logs.
I believe it's something related to radosgw.
Has anyone else tested the same thing on their own radosgw?
Regards
On Mon, Oct 28, 2013 at 11:52 PM, Mark Nelson wrote:
> I'm not really an apache expert, but you could try looking at the apache
> and rgw logs and
That's unfortunate; hopefully 2nd-gens will improve and open things up.
Some numbers:
- Commercial grid-style SAN is maybe £1.70 per usable GB
- Ceph cluster of about 1PB built on Dell hardware is maybe £1.25 per
usable GB
- Bare drives like WD RE4 3TB are about £0.21/GB (assuming 1/3rd
capac
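To unpack that last line: a WD RE4 3TB at roughly £190 (an assumed street price)
is about £0.06-0.07 per raw GB; with only a third of the raw capacity usable
(e.g. 3x replication) that works out to roughly £0.20 per usable GB, which is
where the £0.21 figure comes from and what the £1.25 and £1.70 numbers above are
being compared against.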
Hello all,
I am getting some issues when activating OSDs on my Red Hat 6.4 Ceph cluster.
I am using the quick start mechanism, so I mounted a new xfs filesystem and ran
the "osd prepare" command.
The prepare seemed to be successful as per the log output below:
[ceph_deploy.cli][INFO ] Invoked
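For reference, the quick-start style invocations are roughly the following (the
hostname and mount point are placeholders for your own):

ceph-deploy osd prepare ceph-node2:/var/local/osd0
ceph-deploy osd activate ceph-node2:/var/local/osd0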
Hello Nabil,
1) Please check all the logs in /var/log/ceph (on the ceph-deploy node).
2) Before doing OSD activate, did the OSD prepare command go fine?
3) After this error from OSD activate, did you check on your node whether the
device /dev/sdb1 is getting mounted? (A couple of quick checks are sketched below.)
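For item 3, something like this on the OSD node would confirm it (adjust the
device name to your setup):

mount | grep sdb1
df -h | grep sdb1
ls /var/lib/ceph/osd/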
Regards
Karan Singh
CSC IT Centre for
Hi,
maybe you want to have a look at the following thread:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-October/005368.html
It could be that you are suffering from the same problems.
best regards,
Kurt
Rzk wrote:
> Hi all,
>
> I have the same problem, just curious.
> could it be caused by po
Hi, list.
According to the documentation, a radosgw-agent's correct log output should look like this:
INFO:radosgw_agent.sync:Starting incremental sync
INFO:radosgw_agent.worker:17910 is processing shard number 0
INFO:radosgw_agent.worker:shard 0 has 0 entries after ''
INFO:radosgw_agent.worker:finished processing shard 0