Hello Raghavendra
Do you have your ceph cluster running?
Regards
Karan Singh
----- Original Message -----
From: "Raghavendra Lad"
To: ceph-users@lists.ceph.com
Sent: Monday, 28 October, 2013 5:05:28 AM
Subject: [ceph-users] Install Guide - CEPH WITH OPENSTACK
Hi Cephs,
I am new to Ceph. I am planning to install CEPH.
Hi,
Is this what you're looking for: http://ceph.com/docs/next/rbd/rbd-openstack/ ?
Cheers
On 28/10/2013 04:05, Raghavendra Lad wrote:
>
>
>
> Hi Cephs,
>
> I am new to Ceph. I am planning to install CEPH.
>
> I already have Openstack Grizzly installed and for storage thought of
> install
Hi all
Today I'd like to replicate one cluster with a gateway. After the master zone and
slave zone were working, I started a radosgw-agent. Unfortunately, the agent returns a
403 error all the time.
This is the master zone's information:
Mon, 28 Oct 2013 09:19:29 GMT
/admin/log
2013-10-28 17:19:29.397742
Not brand-new, but I've not seen it mentioned on here so far. Seagate
Kinetic essentially enables HDDs to present themselves directly over
Ethernet as Swift object storage:
http://www.seagate.com/solutions/cloud/data-center-cloud/platforms/?cmpid=friendly-_-pr-kinetic-us
If the CPUs on these
Thanks, I already had the correct ceph-deploy version, but had the flag in the
wrong place.
Solving that got me to the next problem... I get the following error:
[ceph_deploy.cli][INFO ] Invoked (1.2.7): /usr/bin/ceph-deploy install
ldtdsr02se18 --no-adjust-repos
[ceph_deploy.install][DEBU
On Mon, Oct 28, 2013 at 7:33 AM, wrote:
>
> Thanks, I already had the correct ceph-deploy version, but had the flag in the
> wrong place.
> Solving that got me to the next problem... I get the following error:
>
> [ceph_deploy.cli][INFO ] Invoked (1.2.7): /usr/bin/ceph-deploy install
> ldtdsr
Hello Alistair
Can you try executing "ceph-deploy install ldtdsr02se18" as the root user?
Regards
Karan Singh
----- Original Message -----
From: "Alfredo Deza"
To: "alistair whittle"
Cc: ceph-users@lists.ceph.com
Sent: Monday, 28 October, 2013 2:11:58 PM
Subject: Re: [ceph-users] Ceph-deploy, sudo
I get "Error: Nothing to do" when doing this on the node itself with sudo.
-----Original Message-----
From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
Sent: Monday, October 28, 2013 12:12 PM
To: Whittle, Alistair: Investment Bank (LDN)
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users
On Mon, Oct 28, 2013 at 8:26 AM, wrote:
>
>
> I get "Error: Nothing to do" when doing this on the node itself with sudo.
That may mean that it is already installed. Can you check if ceph is
installed and that you can move forward
with the rest of the process?
In this case, ceph-deploy would be
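(For reference, a quick way to check whether ceph itself is present on an RPM-based node like the one discussed here; a sketch, assuming the package is simply named "ceph":
  rpm -q ceph
  yum list installed ceph
If rpm reports "package ceph is not installed", the ceph-deploy install step still needs to complete.)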
Yum tells me I have the following installed on the node:
ceph-deploy.noarch : Admin and deploy tool for Ceph
ceph-release.noarch : Ceph repository configuration
I think this means ceph is NOT already installed. Interesting that ceph-deploy
is on the node as well. I only installed it on the admin node.
I've been wondering about the same thing.
Has anyone had a chance to look at the Simulator?
https://github.com/Seagate/Kinetic-Preview
On Mon, Oct 28, 2013 at 5:56 PM, wrote:
> Not brand-new, but I've not seen it mentioned on here so far. Seagate
> Kinetic essentially enables HDDs to present
Well, as I understand it Seagate has their own home-rolled thing. I
believe there was some discussion at one point about using Ceph
together with their offering, but if I remember correctly Seagate
wanted to remove RADOS and just use Ceph clients, which didn't make a
lot of sense to us.
Best Re
On Mon, Oct 28, 2013 at 8:37 AM, wrote:
> Yum tells me I have the following installed on the node:
>
> ceph-deploy.noarch : Admin and deploy tool for Ceph
> ceph-release.noarch : Ceph repository configuration
>
> I think this means ceph is NOT already installed. Interesting that
> ceph-deploy i
Mark,
Thanks a lot for the info. Hopefully this will resolve our issues going
forward!
Shain
Shain Miley | Manager of Systems and Infrastructure, Digital Media |
smi...@npr.org | 202.513.3649
From: Mark Kirkwood [mark.kirkw...@catalyst.net.nz]
Sent: M
Sadly, this is already my second attempt on a "clean" build.
I have made more progress. I altered my ceph repo to include the repos
documented for a manual rpm build. Ceph-deploy now finds the ceph package,
but then I got a number of yum dependency errors (mostly python related). I
sorted
Hi peng,
Unfortunately I’ve never used Eclipse. I think there may be some tutorials on
setting up Eclipse for Hadoop development, but I’ve never tried them.
-Noah
On Oct 27, 2013, at 7:13 PM, 鹏 wrote:
> Hi all!
> I have replaced HDFS with CephFS. Today, I want to use Eclipse
>
On Monday, October 28, 2013, wrote:
> Not brand-new, but I've not seen it mentioned on here so far. Seagate
> Kinetic essentially enables HDDs to present themselves directly over
> Ethernet as Swift object storage:
>
> http://www.seagate.com/solutions/cloud/data-center-cloud/platforms/?cmpid=friendly-_-pr-kinetic-us
Anyone have a clue why this error happens?
2013-10-28 14:12:23.817719 7fe95437a700 0 -- :/1008986 >>
192.168.115.91:6789/0 pipe(0x7fe944010d00 sd=5 :0 s=1 pgs=0 cs=0 l=1
c=0x7fe9440046b0).fault
It happens when I try to activate a disk.
On Mon, 28 Oct 2013, Nabil Naim wrote:
>
> Anyone have a clue why this error happens?
>
> 2013-10-28 14:12:23.817719 7fe95437a700 0 -- :/1008986 >>
> 192.168.115.91:6789/0 pipe(0x7fe944010d00 sd=5 :0 s=1 pgs=0 cs=0 l=1
> c=0x7fe9440046b0).fault
>
> It happens when I try to activate a disk.
It looks like 192.16
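(A pipe fault like this usually means the client cannot reach the monitor. A basic reachability check from the failing node, using the address from the log above and the default mon port:
  ping -c 3 192.168.115.91
  nc -zv 192.168.115.91 6789
If the port is unreachable, check that ceph-mon is running and that no firewall is blocking 6789.)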
Hi,
I'm testing some multipart uploads to RGW and I'm hitting a problem
when trying to upload files larger than 1159MB.
The tool I'm using is s3cmd 1.5.1
Ceph version: 0.67.4
It's very specific, this is what I tried (after a lot of narrowing down):
$ dd if=/dev/zero of=1159MB.bin bs=10
Hi Sage,
Thank you for the reply.
I am trying to implement Ceph following
http://ceph.com/docs/master/start/quick-ceph-deploy/
All my servers are VMware instances. All steps work fine except the
prepare/create OSD step. I try
ceph-deploy osd prepare ceph-node2:/tmp/osd0 ceph-node3:/tmp/osd1
and also I try t
Hi Josh,
We did map it directly to the host, and it seems to work just fine. I
think this is a problem with how the container is accessing the rbd module.
--
Kevin Weiler
IT
IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL
60606 | http://imc-chicago.com/
Phone: +1 312-20
Hi all,
We have a ceph cluster that is being used as a backing store for several VMs
(windows and linux). We notice that when we reboot a node, the cluster enters a
degraded state (which is expected), but when it begins to recover, it starts
backfilling and it kills the performance of our VMs. The
On Mon, Oct 28, 2013 at 9:24 AM, Wido den Hollander wrote:
> Hi,
>
> I'm testing with some multipart uploads to RGW and I'm hitting a problem
> when trying to upload files larger than 1159MB.
>
> The tool I'm using is s3cmd 1.5.1
>
> Ceph version: 0.67.4
>
> It's very specific, this is what I trie
Hey folks - looking around, I see plenty (OK, some) on how to modify journal
size and location for older ceph, when ceph.conf was used (I think the switch
from ceph.conf to storing osd/journal config elsewhere happened with bobtail?).
I recently deployed a cluster with ceph-deploy on 0.67 and wan
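(For reference, the old ceph.conf-style settings being referred to look roughly like this; the size and path here are illustrative, not taken from this cluster:
  [osd]
  osd journal size = 10240    ; journal size in MB
  osd journal = /var/lib/ceph/osd/$cluster-$id/journal
)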
Any answer to this question? I'm hitting almost the same issue with radosgw;
read performance is not good with radosgw.
Regards
It will be updated by the end of the day today...
On Sun, Oct 27, 2013 at 7:31 PM, maoqi1982 wrote:
> Hi list
> My ceph version is dumpling 0.67. I want to use the RGW Geo-Replication and
> Disaster Recovery function. Can I refer to the doc
> http://ceph.com/docs/wip-doc-radosgw/radosgw/federated-config/ (
Hi Hadi,
Can you tell me a bit about the tests you are doing and seeing poor
performance on?
Mark
On 10/28/2013 01:32 PM, hadi golestani wrote:
Any answer to this question? I'm hitting almost the same issue with radosgw;
read performance is not good with radosgw.
Regards
I have installed apache + fastcgi as per the documentation (note: not the
ceph customized versions). I have created a user using radosgw-admin named
radosgwadmin (please see attached file radogwadmin-user.txt)
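(For reference, the user was presumably created along these lines; the display name here is illustrative:
  radosgw-admin user create --uid=radosgwadmin --display-name="radosgwadmin"
)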
Now, using the S3 API, I am able to make a new request that gets authenticated;
however, not a
My test is simple:
on a cluster with 3 MONs, 4 OSDs, and 1 RGW I can't download a big file from two
different clients concurrently;
one of them will wait till the other finishes downloading it.
Regards
On Mon, Oct 28, 2013 at 10:19 PM, Mark Nelson wrote:
> Hi Hadi,
>
> Can you tell me a bit about t
I'm encountering a problem with RBD-backed Xen. During a VM boot,
pygrub attaches the VM's root VDI to dom0. This hangs with these
messages in the debug log:
Oct 27 21:19:59 xen27 kernel:
vbd vbd-51728: 16 Device in use; refusing to close
Oct 27 21:19:59 xen27 xenopsd-xenlight:
[xenops] wait
Sounds like an issue with your apache config. How did you install your
apache? What distribution are you running on? Are you using it as
mpm-worker? Do you have non-default radosgw settings?
Yehuda
On Mon, Oct 28, 2013 at 11:58 AM, hadi golestani
wrote:
> My test is so simple,
> On a cluster wit
Strange! I'm not sure I've actually ever seen two concurrent downloads
fail to work properly. Is there anything unusual about the setup?
Mark
On 10/28/2013 01:58 PM, hadi golestani wrote:
My test is so simple,
On a cluster with 3 MON, 4 OSD, 1 RGW I can't download a big file from
two differe
I'm running Ubuntu 12 on all my nodes and I've just installed every package
with default configs, as described in the Ceph quick installation guide.
Anyone else experiencing the same issue?
Regards
On Mon, Oct 28, 2013 at 11:09 PM, Mark Nelson wrote:
> Strange! I'm not sure I've actuall
I'm not really an apache expert, but you could try looking at the apache
and rgw logs and see if you can trace where the 2nd request is hanging
up. Also, just to be sure, both clients can download data
independently, just not together?
Mark
On 10/28/2013 02:54 PM, hadi golestani wrote:
I'm
On 10/28/13, 11:20 AM, Abhijeet Nakhwa wrote:
> I have installed apache + fastcgi as per the documentation (note: not
> the ceph customized versions). I have created a user using radosgw-admin
> named radosgwadmin (please see attached file radogwadmin-user.txt)
>
> Now using S3 API, I am able to m
Hi Rzk,
Thanks for the links! I was able to add a public_network line to the config on
the admin host and push the config to the nodes with a "ceph-deploy
--overwrite-conf config push rc-ceph-node1 rc-ceph-node2 rc-ceph-node3".
The bug tracker indicates that the quickstart documentation was up
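(For reference, the line in question goes in the global section of ceph.conf; the subnet here is illustrative:
  [global]
  public_network = 192.168.1.0/24
)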
You can change some OSD tunables to lower the priority of backfills:
osd recovery max chunk: 8388608
osd recovery op priority: 2
In general a lower op priority means it will take longer for your
placement groups to go from degraded to active+clean; the idea is to
balance recover
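(For reference, these can also be injected at runtime without restarting the OSDs; a sketch using the same values:
  ceph tell osd.* injectargs '--osd-recovery-max-chunk 8388608 --osd-recovery-op-priority 2'
)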
That looks like a permissions problem. I've updated the draft
document here: http://ceph.com/docs/wip-doc-radosgw/radosgw/federated-config/
On Mon, Oct 28, 2013 at 2:25 AM, lixuehui wrote:
> Hi all
> Today I'd like to replicate one cluster with a gateway. After the master zone and
> slave zone work
I still need to update the graphics. The update text is here:
http://ceph.com/docs/wip-doc-radosgw/radosgw/federated-config/
On Mon, Oct 28, 2013 at 11:49 AM, John Wilkins wrote:
> It will be updated by the end of the day today...
>
> On Sun, Oct 27, 2013 at 7:31 PM, maoqi1982 wrote:
>> Hi list
John,
I've never installed anything on Scientific Linux. Are you sure that
QEMU has RBD support?
I have some wip-doc text, which I'm going to move around shortly. You
can see the yum install requirements here:
http://ceph.com/docs/wip-doc-install/install/yum-priorities/
http://ceph.com/docs/wip-
Raghavendra,
You can follow the link Loic provided. If you are running on
CentOS/RHEL, make sure you install QEMU with RBD support. See
http://ceph.com/docs/master/install/qemu-rpm/
Make sure your QEMU and libvirt installs are working. Then do the
integration with OpenStack.
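(A quick way to verify the QEMU build can actually talk to RBD; the pool and image names here are illustrative:
  qemu-img create -f raw rbd:rbd/test-image 1G
If QEMU lacks RBD support, this typically fails with an "Unknown protocol" error.)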
On Mon, Oct 28, 2013
Hi JL,
I added the public and cluster IPs in the global section manually (ceph.conf).
Yup, I couldn't find it either. Maybe they updated it in this link:
http://ceph.com/docs/master/rados/operations/add-or-rm-mons/
but it failed when I tried to add a public address using "ceph-mon -i {mon-id}
--public-addr {ip
Hi all,
I have the same problem, and am just curious:
could it be caused by poor HDD performance,
i.e. read/write speed that doesn't match the network speed?
Currently I'm using desktop HDDs in my cluster.
Rgrds,
Rzk
On Tue, Oct 29, 2013 at 6:22 AM, Kyle Bader wrote:
> You can change some OSD tunables to
The bobtail release added udev/upstart capabilities that allowed you
to not have per-OSD entries in ceph.conf. Under the covers the new
udev/upstart scripts look for a special label on OSD data volumes;
matching volumes are mounted and then a few files are inspected:
journal_uuid, whoami
The jour
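(For reference, on a mounted OSD data volume those files can be read directly; the path below assumes the default layout and osd.0:
  cat /var/lib/ceph/osd/ceph-0/whoami        # the OSD's numeric id
  cat /var/lib/ceph/osd/ceph-0/journal_uuid  # UUID of the journal partition
)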
Hi John,
On 10/28/2013 08:55 PM, John Wilkins wrote:
> John,
>
> I've never installed anything on Scientific Linux.
SL6 is binary compatible with other EL6 varieties. This host uses the
same repos as CentOS for Dave Scott's xenserver-core technology preview:
EPEL6, Dave's XenServer + Ceph tech
I have a radosgw instance (ceph 0.71-299-g5cba838 src build), running on
Ubuntu 13.10. I've been experimenting with multipart uploads (which are
working fine). However while *most* objects (from radosgw perspective)
have their storage space gc'd after a while post deletion, I'm seeing
what look
Maybe nothing to do with your issue, but I was having problems using librbd
with blktap, and ended up adding:
[client]
ms rwthread stack bytes = 8388608
to my config. This is a workaround, not a fix though (IMHO) as there is nothing
to indicate that librbd is running out of stack space, rathe
Hi James,
That doesn't sound like a fun one to debug. I'll try your messaging
stack size tweak after the current (super ugly) hack experiment, to be
described next
Thanks-
John
On 10/28/2013 11:11 PM, James Harper wrote:
> Maybe nothing to do with your issue, but I was having prob
On Mon, Oct 28, 2013 at 9:06 PM, Mark Kirkwood
wrote:
> I have a radosgw instance (ceph 0.71-299-g5cba838 src build), running on
> Ubuntu 13.10. I've been experimenting with multipart uploads (which are
> working fine). However while *most* objects (from radosgw perspective) have
> their storage s
On 29/10/13 17:46, Yehuda Sadeh wrote:
The multipart abort operation is supposed to remove the objects (no gc
needed for these). Were there any other issues during the run, e.g.,
restarted gateways, failed requests, etc.?
Note that the objects here are from two different buckets (4902.1,
5001.2
The suspicious line in /var/log/debug (see the pastebin below), and the fact that
'blkid' was the culprit keeping the device open, looked like juicy clues:
kernel: vbd vbd-51728: 16 Device in use; refusing to close
Search results:
https://www.redhat.com/archives/libguestfs/2012-February/msg00023.html