Re: [ceph-users] Cuttlefish VS Bobtail performance series

2013-07-10 Thread Igor Laskovy
Thank you Mark! This is very interesting work ;)
Awaiting the other parts!


On Tue, Jul 9, 2013 at 4:41 PM, Mark Nelson  wrote:

> Hi Guys,
>
> Just wanted to let everyone know that we've released part 1 of a series of
> performance articles that looks at Cuttlefish vs Bobtail on our Supermicro
> test chassis.  We'll be looking at both RADOS bench and RBD performance
> with a variety of IO sizes, IO patterns, concurrency levels, file systems,
> and more!
>
> Every day this week we'll be releasing a new part in the series.  Here's a
> link to part 1:
>
> http://ceph.com/performance-2/ceph-cuttlefish-vs-bobtail-part-1-introduction-and-rados-bench/
>
> Thanks!
> Mark
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] During copy new rbd image is totally thick

2014-01-30 Thread Igor Laskovy
Hello list,

Is it expected behavior that copying an rbd image produces a fully thick
(fully allocated) image?

igor@hv03:~$ rbd create rbd/test -s 1024
igor@hv03:~$ rbd diff rbd/test | awk '{ SUM += $2 } END { print
SUM/1024/1024 " MB" }'
0 MB
igor@hv03:~$ rbd copy rbd/test rbd/cloneoftest
Image copy: 100% complete...done.
igor@hv03:~$ rbd diff rbd/cloneoftest | awk '{ SUM += $2 } END { print
SUM/1024/1024 " MB" }'
1024 MB
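
For comparison, a clone made from a protected snapshot stays thin until it is
written to - a rough sketch, assuming format 2 images (names are just examples):

igor@hv03:~$ rbd create rbd/test2 -s 1024 --image-format 2
igor@hv03:~$ rbd snap create rbd/test2@base
igor@hv03:~$ rbd snap protect rbd/test2@base
igor@hv03:~$ rbd clone rbd/test2@base rbd/cloneoftest2

The clone only allocates space as blocks are written (copy-on-write), unlike
the "rbd copy" above.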

-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] During copy new rbd image is totally thick

2014-02-02 Thread Igor Laskovy
Anybody? ;)


On Thu, Jan 30, 2014 at 9:10 PM, Igor Laskovy wrote:

> Hello list,
>
> Is it correct behavior during copy to thicking rbd image?
>
> igor@hv03:~$ rbd create rbd/test -s 1024
> igor@hv03:~$ rbd diff rbd/test | awk '{ SUM += $2 } END { print
> SUM/1024/1024 " MB" }'
> 0 MB
> igor@hv03:~$ rbd copy rbd/test rbd/cloneoftest
> Image copy: 100% complete...done.
> igor@hv03:~$ rbd diff rbd/cloneoftest | awk '{ SUM += $2 } END { print
> SUM/1024/1024 " MB" }'
> 1024 MB
>
> --
> Igor Laskovy
> facebook.com/igor.laskovy
> studiogrizzly.com
>



-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Questions about Ceph

2014-03-21 Thread Igor Laskovy
Hi Jordi,

I would like to suggest that you consider running a virtualization layer on
top of a Ceph RBD cluster, so you can achieve high availability during both
planned and unplanned downtime.

And as Gregory already mentioned, RBD has the ability to take snapshots. You
can easily export a crash-consistent snapshot of a running VM and import it
back into the cluster whenever necessary; see the sketch below. BTW, we are
currently testing the ability to take consistent backups with the help of
guest agents.
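
For example, roughly (pool, image and path names here are made up):

rbd snap create rbd/vm-disk1@backup-2014-03-21
rbd export rbd/vm-disk1@backup-2014-03-21 /backups/vm-disk1-2014-03-21.img
# ...and later, to bring it back:
rbd import /backups/vm-disk1-2014-03-21.img rbd/vm-disk1-restored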


On Fri, Mar 21, 2014 at 11:22 PM, Gregory Farnum  wrote:

> When starting this you should be aware that the filesystem is not yet
> fully supported.
>
>
> On Thursday, March 20, 2014, Jordi Sion  wrote:
>
>> Hello,
>>
>> I plan to setup a Ceph cluster for a small size hosting company. The aim
>> is to have customers data (website and mail folders) in a distributed
>> cluster. Then to setup different servers like web, smtp, pop and imap,
>> accessing the cluster data.
>>
>> The goals are:
>>
>> * Store all data replicated across different nodes
>> * Have all data accessible for every server (like www servers). This way,
>> we can easily move a web from a server into another, or from, let's say
>> apache to nginx. Or have all email accounts accessible from every pop/imap
>> server.
>>
>> I am about to build a 3 node cluster to start tests: 1 MDS with 240Gb SSD
>> and 2 OSD+Monitor with 2x2Tb, with 32 Gb of Ram each and will be
>> interconnected with a 1Gb Private LAN.
>>
>
> The MDS doesn't need any local storage beyond a few config files. :)
>
>
>>
>> Mainly, the servers using the cluster will provide Web serving, FTP
>> access and Email SMTP, POP and IMAP. Also I need to provide MySQL database,
>> which I am not sure how data fits in a Ceph cluster.
>>
>> I have some questions:
>>
>> 1) The plan is to keep MDS node dedicated. Will OSD's be able to act as
>> webservers (apache and proftpd) or mail servers (postfix, dovecot, amavis
>> and spamassassin)?
>>
>
> That will depend on how much CPU they have and what clients you're using
> (you don't want to loop back mount with a kernel client ).
>
>
>>  2) How can I manage to have MySQL data stored in Ceph? Is that a good
>> idea? Any suggestions?
>>
>
> I'd recommend just using RBD rather than CephFS. That'll give you a block
> device which you can mount anywhere (but only on one at a time).
>
>
>
>> 3) To prevent major disasters, What is a good practice/strategy to
>> backup/replicate data in the cluster?
>>
>
> Hmm, there's not a good tailored answer for CephFS. With RBD there are
> some options around snapshots and incremental diffs.
> -Greg
>
>
>>
>> Thanks in advance,
>> Jordi
>>
>
>
> --
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] The next generation beyond Ceph

2014-03-21 Thread Igor Laskovy
Well, it looks like Ted is talking about his own project, which uses XtreemFS.


On Fri, Mar 21, 2014 at 6:17 PM, Loic Dachary  wrote:

> Hi Ted,
>
> Thanks for reaching out : it is nice to see more companies developing
> solutions around Ceph. Could you tell me what version you are using ?
>
> Cheers
>
> On 21/03/2014 11:06, Ted wrote:
> >
> > Hi Loic,
> >
> > I wanted to reach out to you since I understood you have been using
> Ceph. I work at a Berlin based software start up that has developed a
> *Carrier Grade, Fully Fault Tolerant and Automated Software Storage
> Solution that allows companies to run compute and storage on the same x86
> Servers*. The key to providing such a service is our highly scalable,
> distributed and unified software file system. We are currently in beta
> testing with companies in Germany and Switzerland and a number of them are
> also Ceph users.
> >
> > I would like to set up a call to better understand your use cases for
> Ceph and to also educate you about our solution. Since we are still in
> stealth mode I have attached a brief technical overview of our solution.
> >
> > I look forward to your feedback.
> >
> > Regards,
> >
> >
> >
>
>
> --
> Loïc Dachary, Artisan Logiciel Libre
>
>
> _______
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RBD does not load at boot

2014-04-01 Thread Igor Laskovy
Hi Dan,

Have you tried using the repos? http://ceph.com/docs/master/install/get-packages/


On Tue, Apr 1, 2014 at 6:01 AM, Dan Koren  wrote:

> Even though it is included in /etc/rc.modules
> and initramfs has been updated.
> Suggestions much appreciated.
> MTIA,
> dk
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] If one node lost connect to replication network?

2013-03-11 Thread Igor Laskovy
Hi there!

I have a CephFS cluster, version 0.56.3. It is 3 nodes with XFS on the disks
and minimal options in ceph.conf, in my lab, and I am doing some crash
testing.
One of the several tests is losing connectivity to the replication network
only. What is the expected behavior in this situation? Will a mounted disk on
a client machine freeze, or something like that?

It looks like in my case the whole cluster has gone crazy.
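
For reference, the replication (cluster) network is split from the public one
in ceph.conf roughly like this - the subnets below are only examples:

[global]
    public network  = 192.168.1.0/24
    cluster network = 10.10.10.0/24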

-- 
Igor Laskovy
facebook.com/igor.laskovy
Kiev, Ukraine
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] If one node lost connect to replication network?

2013-03-11 Thread Igor Laskovy
Thanks for the quick reply.
OK, so for now it looks like it is better to avoid splitting networks across
separate network interfaces.
Where can I find a list of all known issues for a specific version?


On Mon, Mar 11, 2013 at 5:16 PM, Gregory Farnum  wrote:

> On Monday, March 11, 2013, Igor Laskovy wrote:
>
>> Hi there!
>>
>> I have Ceph FS cluster version 0.56.3. This is 3 nodes with XFS on disks
>> and with minimum options in ceph.conf in my lab and I do some crush
>> testing.
>> One of the of several tests is lost connect to replication network only.
>> What expect behavior in this situation? Will mounted disk on client
>> machine frozen or so?
>>
>> Look like in my case whole cluster have gone crazy.
>>
>
> Yeah, this is a known issue with the way Ceph determines if nodes are up
> or down. Basically the OSDs are communicating over the replication network
> and reporting to the monitors that the disconnected node is dead, but when
> they mark it down it finds out and insists (over the public network) that
> it's up.
>
> I believe Sage fixed this issue in our development releases, but could be
> misremembering. Sage?
> -Greg
>



-- 
Igor Laskovy
facebook.com/igor.laskovy
Kiev, Ukraine
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Status of Mac OS and Windows PC client

2013-03-17 Thread Igor Laskovy
Hi there!

Could you please clarify the current status of client development for OS X
and Windows desktop editions?

-- 
Igor Laskovy
facebook.com/igor.laskovy
Kiev, Ukraine
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Status of Mac OS and Windows PC client

2013-03-19 Thread Igor Laskovy
Anybody? :)

Igor Laskovy
facebook.com/igor.laskovy
Kiev, Ukraine
On Mar 17, 2013 6:37 PM, "Igor Laskovy"  wrote:

> Hi there!
>
> Could you please clarify what is the current status of development client
> for OS X and Windows desktop editions?
>
> --
> Igor Laskovy
> facebook.com/igor.laskovy
> Kiev, Ukraine
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Status of Mac OS and Windows PC client

2013-03-19 Thread Igor Laskovy
Thanks for reply!

Actually, I would like to find some way to use one large scalable central
storage across multiple PCs and Macs. CephFS would be the most suitable here,
but you provide only Linux support.
Are there really no plans for this?


On Tue, Mar 19, 2013 at 3:52 PM, Patrick McGarry wrote:

> Hey Igor,
>
> Currently there are no plans to develop a OS X or Windows-specific
> client per se.  We do provide a number of different ways to expose the
> cluster in ways that you could use it from these machines, however.
>
> The most recent example of this is the work being done on tgt that can
> expose Ceph via iSCSI.  For reference see:
> http://www.mail-archive.com/ceph-devel@vger.kernel.org/msg11662.html
>
> Keep an eye out for more details in the near future.
>
>
> Best Regards,
>
> Patrick McGarry
> Director, Community || Inktank
>
> http://ceph.com  ||  http://inktank.com
> @scuttlemonkey || @ceph || @inktank
>
>
> On Tue, Mar 19, 2013 at 8:30 AM, Igor Laskovy 
> wrote:
> > Anybody? :)
> >
> > Igor Laskovy
> > facebook.com/igor.laskovy
> > Kiev, Ukraine
> >
> > On Mar 17, 2013 6:37 PM, "Igor Laskovy"  wrote:
> >>
> >> Hi there!
> >>
> >> Could you please clarify what is the current status of development
> client
> >> for OS X and Windows desktop editions?
> >>
> >> --
> >> Igor Laskovy
> >> facebook.com/igor.laskovy
> >> Kiev, Ukraine
> >
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
>



-- 
Igor Laskovy
facebook.com/igor.laskovy
Kiev, Ukraine
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Replacement hardware

2013-03-20 Thread Igor Laskovy
Hi there!

What steps need to be performed if we have totally lost a node?
As I already understand from the docs, the OSDs must be recreated (disabled,
removed and created again, right?)
But what about the MON and MDS?

-- 
Igor Laskovy
facebook.com/igor.laskovy
Kiev, Ukraine
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Replacement hardware

2013-03-20 Thread Igor Laskovy
Actually, I have already recovered the OSDs and the MON daemon back into the
cluster according to
http://ceph.com/docs/master/rados/operations/add-or-rm-osds/ and
http://ceph.com/docs/master/rados/operations/add-or-rm-mons/ .

But the docs are missing info about removing/adding an MDS.
How can I recover the MDS daemon for the failed node?


On Wed, Mar 20, 2013 at 3:23 PM, Dave (Bob)  wrote:

> Igor,
>
> I am sure that I'm right in saying that you just have to create a new
> filesystem (btrfs?) on the new block device, mount it, and then
> initialise the osd with:
>
> ceph-osd -i  --mkfs
>
> Then you can start the osd with:
>
> ceph-osd -i 
>
> Since you are replacing an osd that already existed, the cluster knows
> about it, and there is a key for it that is known.
>
> I don't claim any great expertise, but this is what I've been doing, and
> the cluster seems to adopt the new osd and sort everything out.
>
> David
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 
Igor Laskovy
facebook.com/igor.laskovy
Kiev, Ukraine
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Replacement hardware

2013-03-20 Thread Igor Laskovy
Well, can you please clarify exactly which key I must use? Do I need to
get/generate it somehow from the working cluster?


On Wed, Mar 20, 2013 at 7:41 PM, Greg Farnum  wrote:

> The MDS doesn't have any local state. You just need start up the daemon
> somewhere with a name and key that are known to the cluster (these can be
> different from or the same as the one that existed on the dead node;
> doesn't matter!).
> -Greg
>
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Wednesday, March 20, 2013 at 10:40 AM, Igor Laskovy wrote:
>
> > Actually, I already have recovered OSDs and MON daemon back to the
> cluster according to
> http://ceph.com/docs/master/rados/operations/add-or-rm-osds/ and
> http://ceph.com/docs/master/rados/operations/add-or-rm-mons/ .
> >
> > But doc has missed info about removing/add MDS.
> > How I can recovery MDS daemon for failed node?
> >
> >
> >
> > On Wed, Mar 20, 2013 at 3:23 PM, Dave (Bob)  d...@bob-the-boat.me.uk)> wrote:
> > > Igor,
> > >
> > > I am sure that I'm right in saying that you just have to create a new
> > > filesystem (btrfs?) on the new block device, mount it, and then
> > > initialise the osd with:
> > >
> > > ceph-osd -i  --mkfs
> > >
> > > Then you can start the osd with:
> > >
> > > ceph-osd -i 
> > >
> > > Since you are replacing an osd that already existed, the cluster knows
> > > about it, and there is a key for it that is known.
> > >
> > > I don't claim any great expertise, but this is what I've been doing,
> and
> > > the cluster seems to adopt the new osd and sort everything out.
> > >
> > > David
> > > ___
> > > ceph-users mailing list
> > > ceph-users@lists.ceph.com (mailto:ceph-users@lists.ceph.com)
> > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> >
> >
> >
> > --
> > Igor Laskovy
> > facebook.com/igor.laskovy (http://facebook.com/igor.laskovy)
> > Kiev, Ukraine
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com (mailto:ceph-users@lists.ceph.com)
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
>


-- 
Igor Laskovy
facebook.com/igor.laskovy
Kiev, Ukraine
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Replacement hardware

2013-03-20 Thread Igor Laskovy
Oh, thank you!


On Wed, Mar 20, 2013 at 7:52 PM, Greg Farnum  wrote:

> Yeah. If you run "ceph auth list" you'll get a dump of all the users and
> keys the cluster knows about; each of your daemons has that key stored
> somewhere locally (generally in /var/lib/ceph/ceph-[osd|mds|mon].$id). You
> can create more or copy an unused MDS one. I believe the docs include
> information on how this works.
> -Greg
>
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Wednesday, March 20, 2013 at 10:48 AM, Igor Laskovy wrote:
>
> > Well, can you please clarify what exactly key I must to use? Do I need
> to get/generate it somehow from working cluster?
> >
> >
> > On Wed, Mar 20, 2013 at 7:41 PM, Greg Farnum  g...@inktank.com)> wrote:
> > > The MDS doesn't have any local state. You just need start up the
> daemon somewhere with a name and key that are known to the cluster (these
> can be different from or the same as the one that existed on the dead node;
> doesn't matter!).
> > > -Greg
> > >
> > > Software Engineer #42 @ http://inktank.com | http://ceph.com
> > >
> > >
> > > On Wednesday, March 20, 2013 at 10:40 AM, Igor Laskovy wrote:
> > >
> > > > Actually, I already have recovered OSDs and MON daemon back to the
> cluster according to
> http://ceph.com/docs/master/rados/operations/add-or-rm-osds/ and
> http://ceph.com/docs/master/rados/operations/add-or-rm-mons/ .
> > > >
> > > > But doc has missed info about removing/add MDS.
> > > > How I can recovery MDS daemon for failed node?
> > > >
> > > >
> > > >
> > > > On Wed, Mar 20, 2013 at 3:23 PM, Dave (Bob) 
> > > >  d...@bob-the-boat.me.uk) (mailto:d...@bob-the-boat.me.uk)> wrote:
> > > > > Igor,
> > > > >
> > > > > I am sure that I'm right in saying that you just have to create a
> new
> > > > > filesystem (btrfs?) on the new block device, mount it, and then
> > > > > initialise the osd with:
> > > > >
> > > > > ceph-osd -i  --mkfs
> > > > >
> > > > > Then you can start the osd with:
> > > > >
> > > > > ceph-osd -i 
> > > > >
> > > > > Since you are replacing an osd that already existed, the cluster
> knows
> > > > > about it, and there is a key for it that is known.
> > > > >
> > > > > I don't claim any great expertise, but this is what I've been
> doing, and
> > > > > the cluster seems to adopt the new osd and sort everything out.
> > > > >
> > > > > David
> > > > > ___
> > > > > ceph-users mailing list
> > > > > ceph-users@lists.ceph.com (mailto:ceph-users@lists.ceph.com)
> (mailto:ceph-users@lists.ceph.com)
> > > > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Igor Laskovy
> > > > facebook.com/igor.laskovy (http://facebook.com/igor.laskovy) (
> http://facebook.com/igor.laskovy)
> > > > Kiev, Ukraine
> > > > ___
> > > > ceph-users mailing list
> > > > ceph-users@lists.ceph.com (mailto:ceph-users@lists.ceph.com)
> (mailto:ceph-users@lists.ceph.com)
> > > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > >
> >
> >
> >
> >
> > --
> > Igor Laskovy
> > facebook.com/igor.laskovy (http://facebook.com/igor.laskovy)
> > Kiev, Ukraine
>
>
>
>
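
For reference, re-creating the MDS on the replacement node can be sketched
roughly like this - the id "a" and the caps are only an example:

mkdir -p /var/lib/ceph/mds/ceph-a
ceph auth get-or-create mds.a mds 'allow' osd 'allow rwx' mon 'allow rwx' \
    -o /var/lib/ceph/mds/ceph-a/keyring
ceph-mds -i a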


-- 
Igor Laskovy
facebook.com/igor.laskovy
Kiev, Ukraine
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] (no subject)

2013-03-26 Thread Igor Laskovy
Hi there!

Are Chris Holcombe and Robert Blair here? Please answer me about your
awesome work http://ceph.com/community/ceph-over-fibre-for-vmware/ .
Thanks!

-- 
Igor Laskovy
facebook.com/igor.laskovy
Kiev, Ukraine
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Cephfs doesn't mount at boot time

2013-04-02 Thread Igor Laskovy
Hi, I can confirm this behavior in Ubuntu 12.04.

Try mounting something other than the root directory. For example, change
"m1:6789,m2:6789,m3:6789:/" to "m1:6789,m2:6789,m3:6789:/datastore00", but
first you need to have created that "datastore00" directory. See the example
below. Try this!


On Tue, Apr 2, 2013 at 11:39 AM, Marco Aroldi wrote:

>  My laptop (Linux Mint 14) mounts ceph at boot (0.59) - no problem at all
> I've tried with a server with Ubuntu 12.04 (ceph 0.56.4) - problem!
> I've tried with 2 virtual machines with Ubuntu 12.04 (ceph 0.56.4) on my
> laptop - problem!
>
> The line in the fstab and the chmod setting on the keyring file are
> exactly the same
> I've tried to substitute the fqdn with ip... no luck
>
> With a 'mount -a' once booted, cephfs mounts ok, so it looks like it tries
> the mount while the network is still down
> Maybe something has changed or been updated in the Ubuntu 12.04 init processes?
>
> --
> Marco Aroldi
>
>
> On 02/04/2013 00:38, John Wilkins wrote:
>
>  It looks like you are using a domain name instead of an IP address. Try
> it with the IP address. Are the chmod settings correct on the keyring? Once
> we resolve this, let me know how we can improve the docs here:
> http://ceph.com/docs/master/cephfs/fstab/
>
>
>
> On Sat, Mar 30, 2013 at 10:55 AM, Marco Aroldi wrote:
>
>> Hi all,
>> I have an entry in my fstab for Cephfs, but at boot time the fs
>> doesn't mount. Instead from the shell a "mount -a" works well
>>
>> m1:6789,m2:6789,m3:6789:/ /mnt/ceph ceph
>> name=gw1,secretfile=/etc/ceph/keyring.gw1,noatime   0   2
>>
>> I have tried to use monitors FQDN and the ip adresses
>> I have also added _netdev in the options
>> Still no luck
>>
>> Ceph is 0.56.4 on Ubuntu 12.04
>>
>> Any ideas?
>> Thanks
>>
>> --
>> Marco Aroldi
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
>
>
>  --
> John Wilkins
> Senior Technical Writer
> Intank
> john.wilk...@inktank.com
> (415) 425-9599
> http://inktank.com
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] RadosGW and S3-compatible clients for PC and OSX

2013-04-19 Thread Igor Laskovy
Hello!

Does anybody use Rados Gateway via S3-compatible clients on desktop systems?

-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RadosGW and S3-compatible clients for PC and OSX

2013-04-21 Thread Igor Laskovy
A little bit more detail.

I have tried deploying RGW via http://ceph.com/docs/master/radosgw/ and then
connecting the S3 Browser, CrossFTP and CloudBerry Explorer clients, but all
unsuccessfully.

Again my question: does anybody use S3 desktop clients with RGW?


On Fri, Apr 19, 2013 at 10:54 PM, Igor Laskovy wrote:

> Hello!
>
> Does anybody use Rados Gateway via S3-compatible clients on desktop
> systems?
>
> --
> Igor Laskovy
> facebook.com/igor.laskovy
> studiogrizzly.com
>



-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RadosGW and S3-compatible clients for PC and OSX

2013-04-21 Thread Igor Laskovy
Well, in each case something specific. CrossFTP, for example, says that when
it queries the server it receives text data instead of XML.
In the logs on the server side I don't find anything interesting.

I did everything shown at http://ceph.com/docs/master/radosgw/ and only
that, excluding the Swift-compatible preparation.
Maybe something additional is needed? Manually creating a root bucket or
something like that?


On Sun, Apr 21, 2013 at 6:53 PM, Yehuda Sadeh  wrote:

> On Sun, Apr 21, 2013 at 3:02 AM, Igor Laskovy 
> wrote:
> > A little bit more.
> >
> > I have tried deploy RGW via http://ceph.com/docs/master/radosgw/ and
> than
> > connect S3 Browser, CrossFTP and CloudBerry Explorer clients, but all
> > unsuccessfully.
> >
> > Again my question, does anybody use S3 desktop clients with RGW?
>
> These applications should be compatible with rgw. Are you sure your
> setup works? What are you getting?
>
> Yehuda
>



-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RadosGW and S3-compatible clients for PC and OSX

2013-04-21 Thread Igor Laskovy
Just the initial connection to the rgw server, nothing further.
Please see below the behavior for the CrossFTP and S3Browser cases.

On CrossFTP side:
[R1] Connect to rgw.labspace
[R1] Current path: /
[R1] Current path: /
[R1] LIST /
[R1] Expected XML document response from S3 but received content type
text/html
[R1] Disconnected

On rgw side:
root@osd01:~# ps aux |grep rados
root  1785  0.4  0.1 2045404 6068 ?Ssl  19:47   0:00
/usr/bin/radosgw -n client.radosgw.a

root@osd01:~# tail -f /var/log/apache2/error.log
[Sun Apr 21 19:43:56 2013] [notice] FastCGI: process manager initialized
(pid 1433)
[Sun Apr 21 19:43:56 2013] [notice] Apache/2.2.22 (Ubuntu)
mod_fastcgi/mod_fastcgi-SNAP-0910052141 mod_ssl/2.2.22 OpenSSL/1.0.1
configured -- resuming normal operations
[Sun Apr 21 19:50:19 2013] [error] [client 192.168.1.51] File does not
exist: /var/www/favicon.ico

tail -f /var/log/apache2/access.log
nothing

On S3browser side:
[image: Inline image 2]
[4/21/2013 7:56 PM] Getting buckets list... TaskID: 2
[4/21/2013 7:56 PM] System.Net.WebException:The underlying connection was
closed: An unexpected error occurred on a send. TaskID: 2 TaskID: 2
[4/21/2013 7:56 PM] Error occurred during Getting buckets list TaskID: 2

On rgw side:

root@osd01:~# tail -f /var/log/apache2/error.log
[Sun Apr 21 19:56:19 2013] [error] [client 192.168.1.51] Invalid method in
request \x16\x03\x01
[Sun Apr 21 19:56:22 2013] [error] [client 192.168.1.51] Invalid method in
request \x16\x03\x01
[Sun Apr 21 19:56:23 2013] [error] [client 192.168.1.51] Invalid method in
request \x16\x03\x01
[Sun Apr 21 19:56:23 2013] [error] [client 192.168.1.51] Invalid method in
request \x16\x03\x01
[Sun Apr 21 19:56:24 2013] [error] [client 192.168.1.51] Invalid method in
request \x16\x03\x01
[Sun Apr 21 19:56:24 2013] [error] [client 192.168.1.51] Invalid method in
request \x16\x03\x01
[Sun Apr 21 19:56:25 2013] [error] [client 192.168.1.51] Invalid method in
request \x16\x03\x01
[Sun Apr 21 19:56:25 2013] [error] [client 192.168.1.51] Invalid method in
request \x16\x03\x01

tail -f /var/log/apache2/access.log
nothing



On Sun, Apr 21, 2013 at 7:43 PM, Yehuda Sadeh  wrote:

> On Sun, Apr 21, 2013 at 9:39 AM, Igor Laskovy 
> wrote:
> > Well, in each case something specific. For CrossFTP, for example, it says
> > that asking the server it receive text data instead of XML.
>
> When doing what? Are you able to do anything?
>
> > In logs on servers side I don't found something interested.
>
> What do the apache access and error logs show?
>
> >
> > I do everything shown at http://ceph.com/docs/master/radosgw/ and only
> that,
> > excluding swift compatible preparation.
> > May be there are needs something additional? Manual creating of root
> bucket
> > or something like that?
> >
> >
> > On Sun, Apr 21, 2013 at 6:53 PM, Yehuda Sadeh 
> wrote:
> >>
> >> On Sun, Apr 21, 2013 at 3:02 AM, Igor Laskovy 
> >> wrote:
> >> > A little bit more.
> >> >
> >> > I have tried deploy RGW via http://ceph.com/docs/master/radosgw/ and
> >> > than
> >> > connect S3 Browser, CrossFTP and CloudBerry Explorer clients, but all
> >> > unsuccessfully.
> >> >
> >> > Again my question, does anybody use S3 desktop clients with RGW?
> >>
> >> These applications should be compatible with rgw. Are you sure your
> >> setup works? What are you getting?
> >>
> >> Yehuda
> >
> >
> >
> >
> > --
> > Igor Laskovy
> > facebook.com/igor.laskovy
> > studiogrizzly.com
>



-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RadosGW and S3-compatible clients for PC and OSX

2013-04-23 Thread Igor Laskovy
Sorry for the delayed reply,

I am not very familiar with apache.
For RGW I use one of the OSD nodes. It is a clean, minimal installation of
Ubuntu 12.04 with the ceph deployment on it, and no other services.
I must say that I use the default apache2 package from the Ubuntu repository
and have rgw print continue = false in ceph.conf .
Again, all configuration was made as shown here
http://ceph.com/docs/master/radosgw/manual-install/ and here
http://ceph.com/docs/master/radosgw/config/, nothing more.
As {fqdn} I have used the FQDN of this node.

> try listing whatever under /etc/apache2/sites-enabled, see if there's
anything else there.
Looks like apache works.

Which log files exactly can I show you?


On Sun, Apr 21, 2013 at 11:49 PM, Yehuda Sadeh  wrote:

> On Sun, Apr 21, 2013 at 10:05 AM, Igor Laskovy 
> wrote:
> >
> > Just initial connect to rgw server, nothing further.
> > Please see below behavior for CrossFTP and S3Browser cases.
> >
> > On CrossFTP side:
> > [R1] Connect to rgw.labspace
> > [R1] Current path: /
> > [R1] Current path: /
> > [R1] LIST /
> > [R1] Expected XML document response from S3 but received content type
> text/html
> > [R1] Disconnected
> >
> > On rgw side:
> > root@osd01:~# ps aux |grep rados
> > root  1785  0.4  0.1 2045404 6068 ?Ssl  19:47   0:00
> /usr/bin/radosgw -n client.radosgw.a
> >
> > root@osd01:~# tail -f /var/log/apache2/error.log
> > [Sun Apr 21 19:43:56 2013] [notice] FastCGI: process manager initialized
> (pid 1433)
> > [Sun Apr 21 19:43:56 2013] [notice] Apache/2.2.22 (Ubuntu)
> mod_fastcgi/mod_fastcgi-SNAP-0910052141 mod_ssl/2.2.22 OpenSSL/1.0.1
> configured -- resuming normal operations
> > [Sun Apr 21 19:50:19 2013] [error] [client 192.168.1.51] File does not
> exist: /var/www/favicon.ico
>
> Doesn't seem that your apache is configured right. How does your site
> config file look like? Do you have any other sites configured (e.g.,
> the default one)? try listing whatever under
> /etc/apache2/sites-enabled, see if there's anything else there.
> >
> > tail -f /var/log/apache2/access.log
> > nothing
> >
> > On S3browser side:
> >
> > [4/21/2013 7:56 PM] Getting buckets list... TaskID: 2
> > [4/21/2013 7:56 PM] System.Net.WebException:The underlying connection
> was closed: An unexpected error occurred on a send. TaskID: 2 TaskID: 2
> > [4/21/2013 7:56 PM] Error occurred during Getting buckets list TaskID: 2
> >
> > On rgw side:
> >
> > root@osd01:~# tail -f /var/log/apache2/error.log
> > [Sun Apr 21 19:56:19 2013] [error] [client 192.168.1.51] Invalid method
> in request \x16\x03\x01
> > [Sun Apr 21 19:56:22 2013] [error] [client 192.168.1.51] Invalid method
> in request \x16\x03\x01
> > [Sun Apr 21 19:56:23 2013] [error] [client 192.168.1.51] Invalid method
> in request \x16\x03\x01
> > [Sun Apr 21 19:56:23 2013] [error] [client 192.168.1.51] Invalid method
> in request \x16\x03\x01
> > [Sun Apr 21 19:56:24 2013] [error] [client 192.168.1.51] Invalid method
> in request \x16\x03\x01
> > [Sun Apr 21 19:56:24 2013] [error] [client 192.168.1.51] Invalid method
> in request \x16\x03\x01
> > [Sun Apr 21 19:56:25 2013] [error] [client 192.168.1.51] Invalid method
> in request \x16\x03\x01
> > [Sun Apr 21 19:56:25 2013] [error] [client 192.168.1.51] Invalid method
> in request \x16\x03\x01
> >
> > tail -f /var/log/apache2/access.log
> > nothing
> >
> >
> >
> > On Sun, Apr 21, 2013 at 7:43 PM, Yehuda Sadeh 
> wrote:
> >>
> >> On Sun, Apr 21, 2013 at 9:39 AM, Igor Laskovy 
> wrote:
> >> > Well, in each case something specific. For CrossFTP, for example, it
> says
> >> > that asking the server it receive text data instead of XML.
> >>
> >> When doing what? Are you able to do anything?
> >>
> >> > In logs on servers side I don't found something interested.
> >>
> >> What do the apache access and error logs show?
> >>
> >> >
> >> > I do everything shown at http://ceph.com/docs/master/radosgw/ and
> only that,
> >> > excluding swift compatible preparation.
> >> > May be there are needs something additional? Manual creating of root
> bucket
> >> > or something like that?
> >> >
> >> >
> >> > On Sun, Apr 21, 2013 at 6:53 PM, Yehuda Sadeh 
> wrote:
> >> >>
> >> >> On Sun, Apr 21, 2013 at 3:02 AM, Igor Laskovy <
> igor.lask...@gmail.com>
> >> >> wrote:
> >> >> > A little bit more.
> >> >> >
&g

Re: [ceph-users] RadosGW and S3-compatible clients for PC and OSX

2013-04-23 Thread Igor Laskovy
In /etc/apache2/httpd.conf I have :
ServerName osd01.ceph.labspace.studiogrizzly.com

In /etc/apache2/sites-available/rgw.conf :
FastCgiExternalServer /var/www/s3gw.fcgi -socket /tmp/radosgw.sock


ServerName osd01.ceph.labspace.studiogrizzly.com
ServerAdmin igor.lask...@gmail.com
DocumentRoot /var/www


RewriteEngine On
RewriteRule ^/([a-zA-Z0-9-_.]*)([/]?.*)
/s3gw.fcgi?page=$1&params=$2&%{QUERY_STRING}
[E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]





Options +ExecCGI
AllowOverride All
SetHandler fastcgi-script
Order allow,deny
Allow from all
AuthBasicAuthoritative Off



AllowEncodedSlashes On
ErrorLog /var/log/apache2/error.log
CustomLog /var/log/apache2/access.log combined
ServerSignature Off
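
For completeness, the site then gets enabled on Ubuntu roughly like this,
assuming the file is named rgw.conf as above:

sudo a2enmod rewrite fastcgi
sudo a2ensite rgw.conf
sudo a2dissite default
sudo service apache2 restart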


On Tue, Apr 23, 2013 at 5:57 PM, Yehuda Sadeh  wrote:
> On Tue, Apr 23, 2013 at 7:51 AM, Igor Laskovy  wrote:
>> Sorry for delayed reply,
>>
>> I am not good familiar with apache.
>> For RGW I use one of the OSD nodes. This is clear minimum installation of
>> Ubunut 12.04 and ceph deployment on it, no another services.
>> I must to say that I use default apache2 package from Ubuntu repository and
>> have rgw print continue = false in ceph.conf .
>> Again, all configuration made as shown at here
>> http://ceph.com/docs/master/radosgw/manual-install/ and here
>> http://ceph.com/docs/master/radosgw/config/ nothing more.
>> As {fqdn} I have used FQDN for this node.
>>
>>> try listing whatever under /etc/apache2/sites-enabled, see if there's
>>> anything else there.
>> Looks like apache works.
>>
>> Which exactly log files can I show for you?
>
> I think that your apache site config is the more interesting thing to
> look at it right now. The docs might be a bit unclear, we've seen some
> error there recently, can you make sure that there's only a single
> VirtualHost section in it?
>
>>
>>
>> On Sun, Apr 21, 2013 at 11:49 PM, Yehuda Sadeh  wrote:
>>>
>>> On Sun, Apr 21, 2013 at 10:05 AM, Igor Laskovy 
>>> wrote:
>>> >
>>> > Just initial connect to rgw server, nothing further.
>>> > Please see below behavior for CrossFTP and S3Browser cases.
>>> >
>>> > On CrossFTP side:
>>> > [R1] Connect to rgw.labspace
>>> > [R1] Current path: /
>>> > [R1] Current path: /
>>> > [R1] LIST /
>>> > [R1] Expected XML document response from S3 but received content type
>>> > text/html
>>> > [R1] Disconnected
>>> >
>>> > On rgw side:
>>> > root@osd01:~# ps aux |grep rados
>>> > root  1785  0.4  0.1 2045404 6068 ?Ssl  19:47   0:00
>>> > /usr/bin/radosgw -n client.radosgw.a
>>> >
>>> > root@osd01:~# tail -f /var/log/apache2/error.log
>>> > [Sun Apr 21 19:43:56 2013] [notice] FastCGI: process manager initialized
>>> > (pid 1433)
>>> > [Sun Apr 21 19:43:56 2013] [notice] Apache/2.2.22 (Ubuntu)
>>> > mod_fastcgi/mod_fastcgi-SNAP-0910052141 mod_ssl/2.2.22 OpenSSL/1.0.1
>>> > configured -- resuming normal operations
>>> > [Sun Apr 21 19:50:19 2013] [error] [client 192.168.1.51] File does not
>>> > exist: /var/www/favicon.ico
>>>
>>> Doesn't seem that your apache is configured right. How does your site
>>> config file look like? Do you have any other sites configured (e.g.,
>>> the default one)? try listing whatever under
>>> /etc/apache2/sites-enabled, see if there's anything else there.
>>> >
>>> > tail -f /var/log/apache2/access.log
>>> > nothing
>>> >
>>> > On S3browser side:
>>> >
>>> > [4/21/2013 7:56 PM] Getting buckets list... TaskID: 2
>>> > [4/21/2013 7:56 PM] System.Net.WebException:The underlying connection
>>> > was closed: An unexpected error occurred on a send. TaskID: 2 TaskID: 2
>>> > [4/21/2013 7:56 PM] Error occurred during Getting buckets list TaskID: 2
>>> >
>>> > On rgw side:
>>> >
>>> > root@osd01:~# tail -f /var/log/apache2/error.log
>>> > [Sun Apr 21 19:56:19 2013] [error] [client 192.168.1.51] Invalid method
>>> > in request \x16\x03\x01
>>> > [Sun Apr 21 19:56:22 2013] [error] [client 192.168.1.51] Invalid method
>>> > in request \x16\x03\x01
>>> > [Sun Apr 21 19:56:23 2013] [error] [client

Re: [ceph-users] RadosGW and S3-compatible clients for PC and OSX

2013-04-23 Thread Igor Laskovy
So, I got totally lost in this, but I did it, and now CrossFTP reports:
[R1] Connect to osd01.ceph.labspace.studiogrizzly.com
[R1] Current path: /
[R1] Current path: /
[R1] LIST /
[R1] Request Error [ 404 Not Found - The requested URL / was not found on this server. ].

On Tue, Apr 23, 2013 at 9:39 PM, Yehuda Sadeh  wrote:
> On Tue, Apr 23, 2013 at 11:33 AM, Igor Laskovy  wrote:
>> In /etc/apache2/httpd.conf I have :
>> ServerName osd01.ceph.labspace.studiogrizzly.com
>>
>> In /etc/apache2/sites-available/rgw.conf :
>
>
>
>> FastCgiExternalServer /var/www/s3gw.fcgi -socket /tmp/radosgw.sock
>>
>> 
>> ServerName osd01.ceph.labspace.studiogrizzly.com
>> ServerAdmin igor.lask...@gmail.com
>> DocumentRoot /var/www
>> 
>
> remove this line ^^^
>
>>
>> RewriteEngine On
>> RewriteRule ^/([a-zA-Z0-9-_.]*)([/]?.*)
>> /s3gw.fcgi?page=$1&params=$2&%{QUERY_STRING}
>> [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
>>
>> 
>
> remove this line ^^^
>
>>
>> 
>> 
>> Options +ExecCGI
>> AllowOverride All
>> SetHandler fastcgi-script
>> Order allow,deny
>> Allow from all
>> AuthBasicAuthoritative Off
>> 
>> 
>>
>> AllowEncodedSlashes On
>> ErrorLog /var/log/apache2/error.log
>> CustomLog /var/log/apache2/access.log combined
>> ServerSignature Off
>> 
>>
>> On Tue, Apr 23, 2013 at 5:57 PM, Yehuda Sadeh  wrote:
>>> On Tue, Apr 23, 2013 at 7:51 AM, Igor Laskovy  
>>> wrote:
>>>> Sorry for delayed reply,
>>>>
>>>> I am not good familiar with apache.
>>>> For RGW I use one of the OSD nodes. This is clear minimum installation of
>>>> Ubunut 12.04 and ceph deployment on it, no another services.
>>>> I must to say that I use default apache2 package from Ubuntu repository and
>>>> have rgw print continue = false in ceph.conf .
>>>> Again, all configuration made as shown at here
>>>> http://ceph.com/docs/master/radosgw/manual-install/ and here
>>>> http://ceph.com/docs/master/radosgw/config/ nothing more.
>>>> As {fqdn} I have used FQDN for this node.
>>>>
>>>>> try listing whatever under /etc/apache2/sites-enabled, see if there's
>>>>> anything else there.
>>>> Looks like apache works.
>>>>
>>>> Which exactly log files can I show for you?
>>>
>>> I think that your apache site config is the more interesting thing to
>>> look at it right now. The docs might be a bit unclear, we've seen some
>>> error there recently, can you make sure that there's only a single
>>> VirtualHost section in it?
>>>
>>>>
>>>>
>>>> On Sun, Apr 21, 2013 at 11:49 PM, Yehuda Sadeh  wrote:
>>>>>
>>>>> On Sun, Apr 21, 2013 at 10:05 AM, Igor Laskovy 
>>>>> wrote:
>>>>> >
>>>>> > Just initial connect to rgw server, nothing further.
>>>>> > Please see below behavior for CrossFTP and S3Browser cases.
>>>>> >
>>>>> > On CrossFTP side:
>>>>> > [R1] Connect to rgw.labspace
>>>>> > [R1] Current path: /
>>>>> > [R1] Current path: /
>>>>> > [R1] LIST /
>>>>> > [R1] Expected XML document response from S3 but received content type
>>>>> > text/html
>>>>> > [R1] Disconnected
>>>>> >
>>>>> > On rgw side:
>>>>> > root@osd01:~# ps aux |grep rados
>>>>> > root  1785  0.4  0.1 2045404 6068 ?Ssl  19:47   0:00
>>>>> > /usr/bin/radosgw -n client.radosgw.a
>>>>> >
>>>>> > root@osd01:~# tail -f /var/log/apache2/error.log
>>>>> > [Sun Apr 21 19:43:56 2013] [notice] FastCGI: process manager initialized
>>>>> > (pid 1433)
>>>>> > [Sun Apr 21 19:43:56 2013] [notice] Apache/2.2.22 (Ubuntu)
>>>>> > mod_fastcgi/mod_fastcgi-SNAP-0910052141 mod_ssl/2.2.22 OpenSSL/1.0.1
>>>>> > configured -- resuming normal operations
>>>>> > [Sun Apr 21 19:50:19 2013] [error] [client 192.168.1.51] File does not
>>>>> > exist: /var/www/favicon.ico
>>>>&

Re: [ceph-users] RadosGW and S3-compatible clients for PC and OSX

2013-04-23 Thread Igor Laskovy
OK, I removed the right lines. Now CrossFTP connects, but when I try to
create a bucket it reports:
[R1] S3 Error: -1 (null) error: Request Error:
java.net.UnknownHostException:
fdfdf.osd01.ceph.labspace.studiogrizzly.com; XML Error Message: null
[R1] -1 (null) error: Request Error: java.net.UnknownHostException:
fdfdf.osd01.ceph.labspace.studiogrizzly.com; XML Error Message: null
[R1] Failed to create the directory



On Tue, Apr 23, 2013 at 9:39 PM, Yehuda Sadeh  wrote:
> On Tue, Apr 23, 2013 at 11:33 AM, Igor Laskovy  wrote:
>> In /etc/apache2/httpd.conf I have :
>> ServerName osd01.ceph.labspace.studiogrizzly.com
>>
>> In /etc/apache2/sites-available/rgw.conf :
>
>
>
>> FastCgiExternalServer /var/www/s3gw.fcgi -socket /tmp/radosgw.sock
>>
>> 
>> ServerName osd01.ceph.labspace.studiogrizzly.com
>> ServerAdmin igor.lask...@gmail.com
>> DocumentRoot /var/www
>> 
>
> remove this line ^^^
>
>>
>> RewriteEngine On
>> RewriteRule ^/([a-zA-Z0-9-_.]*)([/]?.*)
>> /s3gw.fcgi?page=$1&params=$2&%{QUERY_STRING}
>> [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
>>
>> 
>
> remove this line ^^^
>
>>
>> 
>> 
>> Options +ExecCGI
>> AllowOverride All
>> SetHandler fastcgi-script
>> Order allow,deny
>> Allow from all
>> AuthBasicAuthoritative Off
>> 
>> 
>>
>> AllowEncodedSlashes On
>> ErrorLog /var/log/apache2/error.log
>> CustomLog /var/log/apache2/access.log combined
>> ServerSignature Off
>> 
>>
>> On Tue, Apr 23, 2013 at 5:57 PM, Yehuda Sadeh  wrote:
>>> On Tue, Apr 23, 2013 at 7:51 AM, Igor Laskovy  
>>> wrote:
>>>> Sorry for delayed reply,
>>>>
>>>> I am not good familiar with apache.
>>>> For RGW I use one of the OSD nodes. This is clear minimum installation of
>>>> Ubunut 12.04 and ceph deployment on it, no another services.
>>>> I must to say that I use default apache2 package from Ubuntu repository and
>>>> have rgw print continue = false in ceph.conf .
>>>> Again, all configuration made as shown at here
>>>> http://ceph.com/docs/master/radosgw/manual-install/ and here
>>>> http://ceph.com/docs/master/radosgw/config/ nothing more.
>>>> As {fqdn} I have used FQDN for this node.
>>>>
>>>>> try listing whatever under /etc/apache2/sites-enabled, see if there's
>>>>> anything else there.
>>>> Looks like apache works.
>>>>
>>>> Which exactly log files can I show for you?
>>>
>>> I think that your apache site config is the more interesting thing to
>>> look at it right now. The docs might be a bit unclear, we've seen some
>>> error there recently, can you make sure that there's only a single
>>> VirtualHost section in it?
>>>
>>>>
>>>>
>>>> On Sun, Apr 21, 2013 at 11:49 PM, Yehuda Sadeh  wrote:
>>>>>
>>>>> On Sun, Apr 21, 2013 at 10:05 AM, Igor Laskovy 
>>>>> wrote:
>>>>> >
>>>>> > Just initial connect to rgw server, nothing further.
>>>>> > Please see below behavior for CrossFTP and S3Browser cases.
>>>>> >
>>>>> > On CrossFTP side:
>>>>> > [R1] Connect to rgw.labspace
>>>>> > [R1] Current path: /
>>>>> > [R1] Current path: /
>>>>> > [R1] LIST /
>>>>> > [R1] Expected XML document response from S3 but received content type
>>>>> > text/html
>>>>> > [R1] Disconnected
>>>>> >
>>>>> > On rgw side:
>>>>> > root@osd01:~# ps aux |grep rados
>>>>> > root  1785  0.4  0.1 2045404 6068 ?Ssl  19:47   0:00
>>>>> > /usr/bin/radosgw -n client.radosgw.a
>>>>> >
>>>>> > root@osd01:~# tail -f /var/log/apache2/error.log
>>>>> > [Sun Apr 21 19:43:56 2013] [notice] FastCGI: process manager initialized
>>>>> > (pid 1433)
>>>>> > [Sun Apr 21 19:43:56 2013] [notice] Apache/2.2.22 (Ubuntu)
>>>>> > mod_fastcgi/mod_fastcgi-SNAP-0910052141 mod_ssl/2.2.22 OpenSSL/1.0.1
>>>>> > configured -- resuming normal operations
>>>>> > [Sun Apr 

Re: [ceph-users] RadosGW and S3-compatible clients for PC and OSX

2013-04-24 Thread Igor Laskovy
OK, I will try, thanks.
One further question - does /etc/init.d/radosgw need to be started manually
every time this host is rebooted? Why is it not part of "service ceph -a
start"?
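
What I plan to try, untested:

# register the sysvinit script so it runs at boot (Ubuntu)
sudo update-rc.d radosgw defaults
# the script only starts instances whose [client.radosgw.*] section in
# ceph.conf has a matching "host = <this hostname>" line
sudo /etc/init.d/radosgw start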


On Tue, Apr 23, 2013 at 11:05 PM, Lorieri  wrote:

> I've made some tests again with s3cmd
>
> you need to have a valid and accessible host_bucket key in the .s3cfg
> for example:
> host_bucket = %(bucket)s.myhostname.com
>
> if you dont have it, it does not allow you to use lowercase buckets
> I believe it checks if the bucket name is a valid dns name, etc
>
> for osx:
>
> brew install s3cmd
>
> []s
> -lorieri
>
>
>
>
> On Tue, Apr 23, 2013 at 4:00 PM, Igor Laskovy wrote:
>
>> So, I totally lost in this, but I did it, and now CrossFTP report:
>> [R1] Connect to osd01.ceph.labspace.studiogrizzly.com
>> [R1] Current path: /
>> [R1] Current path: /
>> [R1] LIST /
>> [R1] Request Error [ 404 Not Found - The requested URL / was not found on this server. ].
>>
>> On Tue, Apr 23, 2013 at 9:39 PM, Yehuda Sadeh  wrote:
>> > On Tue, Apr 23, 2013 at 11:33 AM, Igor Laskovy 
>> wrote:
>> >> In /etc/apache2/httpd.conf I have :
>> >> ServerName osd01.ceph.labspace.studiogrizzly.com
>> >>
>> >> In /etc/apache2/sites-available/rgw.conf :
>> >
>> >
>> >
>> >> FastCgiExternalServer /var/www/s3gw.fcgi -socket /tmp/radosgw.sock
>> >>
>> >> 
>> >> ServerName osd01.ceph.labspace.studiogrizzly.com
>> >> ServerAdmin igor.lask...@gmail.com
>> >> DocumentRoot /var/www
>> >> 
>> >
>> > remove this line ^^^
>> >
>> >>
>> >> RewriteEngine On
>> >> RewriteRule ^/([a-zA-Z0-9-_.]*)([/]?.*)
>> >> /s3gw.fcgi?page=$1&params=$2&%{QUERY_STRING}
>> >> [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
>> >>
>> >> 
>> >
>> > remove this line ^^^
>> >
>> >>
>> >> 
>> >> 
>> >> Options +ExecCGI
>> >> AllowOverride All
>> >>     SetHandler fastcgi-script
>> >> Order allow,deny
>> >> Allow from all
>> >> AuthBasicAuthoritative Off
>> >> 
>> >> 
>> >>
>> >> AllowEncodedSlashes On
>> >> ErrorLog /var/log/apache2/error.log
>> >> CustomLog /var/log/apache2/access.log combined
>> >> ServerSignature Off
>> >> 
>> >>
>> >> On Tue, Apr 23, 2013 at 5:57 PM, Yehuda Sadeh 
>> wrote:
>> >>> On Tue, Apr 23, 2013 at 7:51 AM, Igor Laskovy 
>> wrote:
>> >>>> Sorry for delayed reply,
>> >>>>
>> >>>> I am not good familiar with apache.
>> >>>> For RGW I use one of the OSD nodes. This is clear minimum
>> installation of
>> >>>> Ubunut 12.04 and ceph deployment on it, no another services.
>> >>>> I must to say that I use default apache2 package from Ubuntu
>> repository and
>> >>>> have rgw print continue = false in ceph.conf .
>> >>>> Again, all configuration made as shown at here
>> >>>> http://ceph.com/docs/master/radosgw/manual-install/ and here
>> >>>> http://ceph.com/docs/master/radosgw/config/ nothing more.
>> >>>> As {fqdn} I have used FQDN for this node.
>> >>>>
>> >>>>> try listing whatever under /etc/apache2/sites-enabled, see if
>> there's
>> >>>>> anything else there.
>> >>>> Looks like apache works.
>> >>>>
>> >>>> Which exactly log files can I show for you?
>> >>>
>> >>> I think that your apache site config is the more interesting thing to
>> >>> look at it right now. The docs might be a bit unclear, we've seen some
>> >>> error there recently, can you make sure that there's only a single
>> >>> VirtualHost section in it?
>> >>>
>> >>>>
>> >>>>
>> >>>> On Sun, Apr 21, 2013 at 11:49 PM, Yehuda Sadeh 
>> wrote:
>> >>>>>
>> >>>>> On Sun, Apr 21, 2013 at 10:05 AM, I

Re: [ceph-users] RBD single process read performance

2013-04-25 Thread Igor Laskovy
164MB/s
>> ceph 0.58, qemu/kvm, no cache:84MB/s
>> ceph 0.58, qemu/kvm, rbd cache:240MB/s
>> ceph wip-rbd-cache-aio, qemu/kvm, rbd cache:244MB/s
>>
>
> I tried with wip-bobtail-rbd-backports-req-order and with the recent
> patch for Qemu ( http://patchwork.ozlabs.org/patch/232489/ )
>  and get about 90MB/sec write, but again, it's about reads.
>
>
>
>> 1 volume, 1 process, and iodepth = 16
>>
>> ceph 0.58, krbd:711MB/s
>> ceph 0.58, qemu/kvm, no cache:899MB/s
>> ceph 0.58, qemu/kvm, rbd cache:227MB/s
>> ceph wip-rbd-cache-aio, qemu/kvm, rbd cache:680MB/s
>>
>> 4MB read performance using libaio:
>>
>> 1 volume, 1 process, and iodepth = 1
>>
>> ceph 0.58, krbd:108MB/s
>> ceph 0.58, qemu/kvm, no cache:85MB/s
>> ceph 0.58, qemu/kvm, rbd cache:85MB/s
>> ceph wip-rbd-cache-aio, qemu/kvm, rbd cache:89MB/s
>>
>> 1 volume, 1 process, and iodepth = 16
>>
>> ceph 0.58, krbd:516MB/s
>> ceph 0.58, qemu/kvm, no cache:839MB/s
>> ceph 0.58, qemu/kvm, rbd cache:823MB/s
>> ceph wip-rbd-cache-aio, qemu/kvm, rbd cache:830MB/s
>>
>
> With 4m size and an iodepth of 16 I'm maxing out at 90MB/sec inside a Qemu
> VM.
>
> The whole reading seems sluggish. For example "man fio" took about 4
> seconds to show up. Even running apt-get update is rather slow.
>
> The VM doesn't feel responsive at all, so trying to figure out where that
> comes from.
>
>
>
>> To get single request performance to scale farther, you'll have to
>> diagnose if there are places that you can lower latency rather than hide
>> it with concurrency.  That's not an easy task in a distributed system
>> like Ceph.  There are probably opportunities for optimization, but I
>> suspect it may take more than tweaking the ceph.conf file.
>>
>>
> I fully get that the distributed nature has it's drawbacks in serial
> performance and that Ceph excels in parallel performance, however, just 60
> ~ 80MB/sec seems rather slow. On a pretty idle cluster that should be
> better, especially when all the OSDs have everything in their page cache.
>
>
>  Mark
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
>
> --
> Wido den Hollander
> 42on B.V.
>
> Phone: +31 (0)20 700 9902
> Skype: contact42on
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
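
For reference, the kind of fio job being discussed above, as I understand it
(a sketch only; the device name is an example):

fio --name=seqread --filename=/dev/vdb --rw=read --bs=4M --ioengine=libaio \
    --direct=1 --iodepth=16 --numjobs=1 --runtime=60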



-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Problem with "radosgw-admin temp remove"

2013-04-27 Thread Igor Laskovy
Hello,

I have a problem with clearing space in the RGW pool with the "radosgw-admin
temp remove" command:

root@osd01:~# ceph -v
ceph version 0.56.4 (63b0f854d1cef490624de5d6cf9039735c7de5ca)
root@osd01:~# radosgw-admin temp remove --date=2014-04-26
failed to list objects
failure removing temp objects: (2) No such file or directory

I found that this may be a two-year-old bug -
http://www.mail-archive.com/ceph-devel@vger.kernel.org/msg04037.html

--
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Problem with "radosgw-admin temp remove"

2013-04-27 Thread Igor Laskovy
Well, then what about the used space of the RGW pool? How can I trim it
after files are deleted?

On Sat, Apr 27, 2013 at 6:34 PM, Yehuda Sadeh  wrote:
> The temp remove is an obsolete feature that was needed before the
> introduction of the garbage collector. It's not needed in that
> version.
>
> Yehuda
>
> On Sat, Apr 27, 2013 at 6:33 AM, Igor Laskovy  wrote:
>> Hello,
>>
>> have problem with clearing space for RGW pool with "radosgw-admin temp
>> remove" command:
>>
>> root@osd01:~# ceph -v
>> ceph version 0.56.4 (63b0f854d1cef490624de5d6cf9039735c7de5ca)
>> root@osd01:~# radosgw-admin temp remove --date=2014-04-26
>> failed to list objects
>> failure removing temp objects: (2) No such file or directory
>>
>> I found that this may be a two years old bug -
>> http://www.mail-archive.com/ceph-devel@vger.kernel.org/msg04037.html
>>
>> --
>> Igor Laskovy
>> facebook.com/igor.laskovy
>> studiogrizzly.com
>> _______
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Problem with "radosgw-admin temp remove"

2013-04-27 Thread Igor Laskovy
I will rephrase my question.
When I upload files over S3, "ceph -s" shows growth in used space, but when
these files are deleted no space is freed.
Yehuda, could you please explain a little bit more how I can control this
behavior?

On Sat, Apr 27, 2013 at 7:09 PM, Igor Laskovy  wrote:
> Well, than what about used space of RGW pool? How I can trim it after
> files deletion?
>
> On Sat, Apr 27, 2013 at 6:34 PM, Yehuda Sadeh  wrote:
>> The temp remove is an obsolete feature that was needed before the
>> introduction of the garbage collector. It's not needed in that
>> version.
>>
>> Yehuda
>>
>> On Sat, Apr 27, 2013 at 6:33 AM, Igor Laskovy  wrote:
>>> Hello,
>>>
>>> have problem with clearing space for RGW pool with "radosgw-admin temp
>>> remove" command:
>>>
>>> root@osd01:~# ceph -v
>>> ceph version 0.56.4 (63b0f854d1cef490624de5d6cf9039735c7de5ca)
>>> root@osd01:~# radosgw-admin temp remove --date=2014-04-26
>>> failed to list objects
>>> failure removing temp objects: (2) No such file or directory
>>>
>>> I found that this may be a two years old bug -
>>> http://www.mail-archive.com/ceph-devel@vger.kernel.org/msg04037.html
>>>
>>> --
>>> Igor Laskovy
>>> facebook.com/igor.laskovy
>>> studiogrizzly.com
>>> _______
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
> --
> Igor Laskovy
> facebook.com/igor.laskovy
> studiogrizzly.com



-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Problem with "radosgw-admin temp remove"

2013-04-28 Thread Igor Laskovy
Thanks, with "rgw gc processor period" it works for me finally ;)

Yehuda, the http://ceph.com/docs/master/radosgw/config-ref/ have misprint:
insted of "rgw gc processor period" it has "rgw gc processor *max* period"
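
For the record, the knobs can be set in ceph.conf and the collector run by
hand roughly like this - the values below are only examples:

[client.radosgw.a]
    rgw gc obj min wait = 600
    rgw gc processor period = 600

radosgw-admin gc list
radosgw-admin gc process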


On Sun, Apr 28, 2013 at 2:28 AM, Yehuda Sadeh  wrote:

> Basically you need for the relevant objects to expire, and then wait
> for the garbage collector to run its course. Expiration is ~2hr from
> deletion, garbage collector starts every hour, but you can run it
> manually via 'radosgw-admin gc process'. There are a couple of
> relevant configurables that can be set:
>
>  * rgw gc obj min wait (default: 2 * 3600)
>
> Minimum number of seconds before an object is ready to be processed by
> the garbage collector. After this period if the object is not still
> being used then the garbage collection may purge it.
>
>  * rgw gc processor period
>
> Time between the start of two consecutive garbage collector runs
>
>
>
> Yehuda
>
> On Sat, Apr 27, 2013 at 10:23 AM, Igor Laskovy 
> wrote:
> > I will rephrase my question.
> > When I upload files over s3 the ceph -s return growth in used space,
> > but when this files deleted there are no available space freed.
> > Yehuda, explain please a little bit more about how I can control this
> behavior ?
> >
> > On Sat, Apr 27, 2013 at 7:09 PM, Igor Laskovy 
> wrote:
> >> Well, than what about used space of RGW pool? How I can trim it after
> >> files deletion?
> >>
> >> On Sat, Apr 27, 2013 at 6:34 PM, Yehuda Sadeh 
> wrote:
> >>> The temp remove is an obsolete feature that was needed before the
> >>> introduction of the garbage collector. It's not needed in that
> >>> version.
> >>>
> >>> Yehuda
> >>>
> >>> On Sat, Apr 27, 2013 at 6:33 AM, Igor Laskovy 
> wrote:
> >>>> Hello,
> >>>>
> >>>> have problem with clearing space for RGW pool with "radosgw-admin temp
> >>>> remove" command:
> >>>>
> >>>> root@osd01:~# ceph -v
> >>>> ceph version 0.56.4 (63b0f854d1cef490624de5d6cf9039735c7de5ca)
> >>>> root@osd01:~# radosgw-admin temp remove --date=2014-04-26
> >>>> failed to list objects
> >>>> failure removing temp objects: (2) No such file or directory
> >>>>
> >>>> I found that this may be a two years old bug -
> >>>> http://www.mail-archive.com/ceph-devel@vger.kernel.org/msg04037.html
> >>>>
> >>>> --
> >>>> Igor Laskovy
> >>>> facebook.com/igor.laskovy
> >>>> studiogrizzly.com
> >>>> ___
> >>>> ceph-users mailing list
> >>>> ceph-users@lists.ceph.com
> >>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >>
> >>
> >>
> >> --
> >> Igor Laskovy
> >> facebook.com/igor.laskovy
> >> studiogrizzly.com
> >
> >
> >
> > --
> > Igor Laskovy
> > facebook.com/igor.laskovy
> > studiogrizzly.com
>



-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RadosGW and S3-compatible clients for PC and OSX

2013-05-01 Thread Igor Laskovy
Hello and thanks again!

Pushing some help back to the community:
1. Please correct this doc
http://ceph.com/docs/master/start/quick-rgw/#create-a-gateway-configuration-file

2. I have successfully tested these clients - DragonDisk (
http://www.dragondisk.com/), CrossFTP (http://www.crossftp.com/) and
S3Browser (http://s3browser.com/)
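
For completeness, a minimal ~/.s3cfg for s3cmd against this gateway should
look roughly like this (hostname and keys below are placeholders; note the
host_bucket line, per the quoted mail):

host_base = osd01.ceph.labspace.studiogrizzly.com
host_bucket = %(bucket)s.osd01.ceph.labspace.studiogrizzly.com
access_key = <access key from "radosgw-admin user create">
secret_key = <secret key from "radosgw-admin user create">
use_https = False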


On Wed, Apr 24, 2013 at 10:39 AM, Igor Laskovy wrote:

> Ok. I will try, thanks.
> One further question - does needed manually start /etc/init.d/radosgw all
> time when this host have been rebooted? Why it is not part of service ceph
> -a start?
>
>
> On Tue, Apr 23, 2013 at 11:05 PM, Lorieri  wrote:
>
>> I've made some tests again with s3cmd
>>
>> you need to have a valid and accessible host_bucket key in the .s3cfg
>> for example:
>> host_bucket = %(bucket)s.myhostname.com
>>
>> if you dont have it, it does not allow you to use lowercase buckets
>> I believe it checks if the bucket name is a valid dns name, etc
>>
>> for osx:
>>
>> brew install s3cmd
>>
>> []s
>> -lorieri
>>
>>
>>
>>
>> On Tue, Apr 23, 2013 at 4:00 PM, Igor Laskovy wrote:
>>
>>>  So, I totally lost in this, but I did it, and now CrossFTP report:
>>> [R1] Connect to osd01.ceph.labspace.studiogrizzly.com
>>> [R1] Current path: /
>>> [R1] Current path: /
>>> [R1] LIST /
>>> [R1] Request Error [
>>> 
>>> 404 Not Found
>>> 
>>> Not Found
>>> The requested URL / was not found on this server.
>>> 
>>> ].
>>>
>>> On Tue, Apr 23, 2013 at 9:39 PM, Yehuda Sadeh 
>>> wrote:
>>> > On Tue, Apr 23, 2013 at 11:33 AM, Igor Laskovy 
>>> wrote:
>>> >> In /etc/apache2/httpd.conf I have :
>>> >> ServerName osd01.ceph.labspace.studiogrizzly.com
>>> >>
>>> >> In /etc/apache2/sites-available/rgw.conf :
>>> >
>>> >
>>> >
>>> >> FastCgiExternalServer /var/www/s3gw.fcgi -socket /tmp/radosgw.sock
>>> >>
>>> >> 
>>> >> ServerName osd01.ceph.labspace.studiogrizzly.com
>>> >> ServerAdmin igor.lask...@gmail.com
>>> >> DocumentRoot /var/www
>>> >> 
>>> >
>>> > remove this line ^^^
>>> >
>>> >>
>>> >> RewriteEngine On
>>> >> RewriteRule ^/([a-zA-Z0-9-_.]*)([/]?.*)
>>> >> /s3gw.fcgi?page=$1¶ms=$2&%{QUERY_STRING}
>>> >> [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
>>> >>
>>> >> 
>>> >
>>> > remove this line ^^^
>>> >
>>> >>
>>> >> 
>>> >> 
>>> >> Options +ExecCGI
>>> >> AllowOverride All
>>> >> SetHandler fastcgi-script
>>> >> Order allow,deny
>>> >> Allow from all
>>> >> AuthBasicAuthoritative Off
>>> >> 
>>> >> 
>>> >>
>>> >> AllowEncodedSlashes On
>>> >> ErrorLog /var/log/apache2/error.log
>>> >> CustomLog /var/log/apache2/access.log combined
>>> >> ServerSignature Off
>>> >> 
>>> >>
>>> >> On Tue, Apr 23, 2013 at 5:57 PM, Yehuda Sadeh 
>>> wrote:
>>> >>> On Tue, Apr 23, 2013 at 7:51 AM, Igor Laskovy <
>>> igor.lask...@gmail.com> wrote:
>>> >>>> Sorry for delayed reply,
>>> >>>>
>>> >>>> I am not good familiar with apache.
>>> >>>> For RGW I use one of the OSD nodes. This is clear minimum
>>> installation of
>>> >>>> Ubunut 12.04 and ceph deployment on it, no another services.
>>> >>>> I must to say that I use default apache2 package from Ubuntu
>>> repository and
>>> >>>> have rgw print continue = false in ceph.conf .
>>> >>>> Again, all configuration made as shown at here
>>> >>>> http://ceph.com/docs/master/radosgw/manual-install/ and here
>>> >>>> http://ceph.com/docs/master/radosgw/config/ nothing more.
>>> >>>> As {fqdn} I have used FQDN for this node.
>>> >>>>
>>> >>>>> try listing whatever under /etc/

[ceph-users] RadosGW High Availability

2013-05-01 Thread Igor Laskovy
Hello,

Are there any best practices for making RadosGW highly available?
For example, is it the right way to create two or three RadosGW instances
(keys for ceph-auth, directories and so on) and have, for example, this in ceph.conf:

[client.radosgw.a]
host = ceph01
...options...

[client.radosgw.b]
host = ceph02
...options...

Will these rgw instances run simultaneously?
Will radosgw.b be able to continue serving load if the ceph01 host goes down?

-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RBD shared between clients

2013-05-02 Thread Igor Laskovy
Or maybe, for hosting purposes, it is easier to implement RadosGW.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RadosGW High Availability

2013-05-07 Thread Igor Laskovy
I tried to do that and put both gateways behind round-robin DNS, but
unfortunately only one host serves requests from clients - the second host
does not respond at all. I am not too familiar with apache, and the standard
log files show nothing helpful.
Maybe this whole HA design is wrong? Has anybody solved HA for the Rados
Gateway endpoint? How?


On Wed, May 1, 2013 at 12:28 PM, Igor Laskovy wrote:

> Hello,
>
> Whether any best practices how to make Hing Availability of RadosGW?
> For example, is this right way to create two or tree RadosGW (keys for
> ceph-auth, directory and so on) and having for example this is ceph.conf:
>
> [client.radosgw.a]
> host = ceph01
> ...options...
>
> [client.radosgw.b]
> host = ceph02
> ...options...
>
> Does this rgws will run simultaneous?
> Have radosgw.b ability to continues serve load if ceph01 host went down?
>
> --
> Igor Laskovy
> facebook.com/igor.laskovy
> studiogrizzly.com
>



-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 0.61 Cuttlefish released

2013-05-07 Thread Igor Laskovy
Hi,

where can I read more about ceph-disk?
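
From the release notes my understanding of the basic usage is roughly this
(device names are just examples):

ceph-disk list               # show available and used disks
ceph-disk prepare /dev/sdb   # partition and format a disk for an OSD
ceph-disk activate /dev/sdb1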


On Tue, May 7, 2013 at 5:51 AM, Sage Weil  wrote:

> Spring has arrived (at least for some of us), and a new stable release of
> Ceph is ready!  Thank you to everyone who has contributed to this release!
>
> Bigger ticket items since v0.56.x "Bobtail":
>
>  * ceph-deploy: our new deployment tool to replace 'mkcephfs'
>  * robust RHEL/CentOS support
>  * ceph-disk: many improvements to support hot-plugging devices via chef
>and ceph-deploy
>  * ceph-disk: dm-crypt support for OSD disks
>  * ceph-disk: 'list' command to see available (and used) disks
>  * rbd: incremental backups
>  * rbd-fuse: access RBD images via fuse
>  * librbd: autodetection of VM flush support to allow safe enablement of
>the writeback cache
>  * osd: improved small write, snap trimming, and overall performance
>  * osd: PG splitting
>  * osd: per-pool quotas (object and byte)
>  * osd: tool for importing, exporting, removing PGs from OSD data store
>  * osd: improved clean-shutdown behavior
>  * osd: noscrub, nodeepscrub options
>  * osd: more robust scrubbing, repair, ENOSPC handling
>  * osd: improved memory usage, log trimming
>  * osd: improved journal corruption detection
>  * ceph: new 'df' command
>  * mon: new storage backend (leveldb)
>  * mon: config-keys service
>  * mon, crush: new commands to manage CRUSH entirely via CLI
>  * mon: avoid marking entire subtrees (e.g., racks) out automatically
>  * rgw: CORS support
>  * rgw: misc API fixes
>  * rgw: ability to listen to fastcgi on a port
>  * sysvinit, upstart: improved support for standardized data locations
>  * mds: backpointers on all data and metadata objects
>  * mds: faster fail-over
>  * mds: many many bug fixes
>  * ceph-fuse: many stability improvements
>
> Notable changes since v0.60:
>
>  * rbd: incremental backups
>  * rbd: only set STRIPINGV2 feature if striping parameters are
>incompatible with old versions
>  * rbd: require allow-shrink for resizing images down
>  * librbd: many bug fixes
>  * rgw: fix object corruption on COPY to self
>  * rgw: new sysvinit script for rpm-based systems
>  * rgw: allow buckets with _
>  * rgw: CORS support
>  * mon: many fixes
>  * mon: improved trimming behavior
>  * mon: fix data conversion/upgrade problem (from bobtail)
>  * mon: ability to tune leveldb
>  * mon: config-keys service to store arbitrary data on monitor
>  * mon: osd crush add|link|unlink|add-bucket ... commands
>  * mon: trigger leveldb compaction on trim
>  * osd: per-rados pool quotas (objects, bytes)
>  * osd: tool to export, import, and delete PGs from an individual OSD data
>store
>  * osd: notify mon on clean shutdown to avoid IO stall
>  * osd: improved detection of corrupted journals
>  * osd: ability to tune leveldb
>  * osd: improve client request throttling
>  * osd, librados: fixes to the LIST_SNAPS operation
>  * osd: improvements to scrub error repair
>  * osd: better prevention of wedging OSDs with ENOSPC
>  * osd: many small fixes
>  * mds: fix xattr handling on root inode
>  * mds: fixed bugs in journal replay
>  * mds: many fixes
>  * librados: clean up snapshot constant definitions
>  * libcephfs: calls to query CRUSH topology (used by Hadoop)
>  * ceph-fuse, libcephfs: misc fixes to mds session management
>  * ceph-fuse: disabled cache invalidation (again) due to potential
>deadlock with kernel
>  * sysvinit: try to start all daemons despite early failures
>  * ceph-disk: new list command
>  * ceph-disk: hotplug fixes for RHEL/CentOS
>  * ceph-disk: fix creation of OSD data partitions on >2TB disks
>  * osd: fix udev rules for RHEL/CentOS systems
>  * fix daemon logging during initial startup
>
> There are a few things to keep in mind when upgrading from Bobtail,
> specifically with the monitor daemons.  Please see the upgrade guide
> and/or the complete release notes.  In short: upgrade all of your monitors
> (more or less) at once.
>
> Cuttlefish is the first Ceph release on our new three-month stable release
> cycle.  We are very pleased to have pulled everything together on schedule
> (well, only a week later than planned).  The next stable release, which
> will be code-named Dumpling, is slated for three months from now
> (beginning of August).
>
> You can download v0.61 Cuttlefish from the usual locations:
>
>  * Git at git://github.com/ceph/ceph.git
>  * Tarball at http://ceph.com/download/ceph-0.61.tar.gz
>  * For Debian/Ubuntu packages, see
> http://ceph.com/docs/master/install/debian
>  * For RPMs, see http://ceph.com/docs/master/install/rpm
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Dell R515 performance and specification question

2013-05-07 Thread Igor Laskovy
If I understand the idea correctly, when this single SSD fails, the whole node
with that SSD will fail. Correct?
What is the recovery scenario for the node in this case?
Playing with "ceph-osd --flush-journal" and "ceph-osd --mkjournal" for each
osd?
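
In other words, is a planned journal swap per OSD roughly this (just a sketch,
the osd id is an example; obviously the flush step is impossible if the SSD
died uncleanly)?

service ceph stop osd.2
ceph-osd -i 2 --flush-journal
# repoint "osd journal" for osd.2 at the new device in ceph.conf, then:
ceph-osd -i 2 --mkjournal
service ceph start osd.2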


On Tue, May 7, 2013 at 4:17 PM, Mark Nelson  wrote:

> On 05/07/2013 06:50 AM, Barry O'Rourke wrote:
>
>> Hi,
>>
>> I'm looking to purchase a production cluster of 3 Dell Poweredge R515's
>> which I intend to run in 3 x replication. I've opted for the following
>> configuration;
>>
>> 2 x 6 core processors
>> 32Gb RAM
>> H700 controller (1Gb cache)
>> 2 x SAS OS disks (in RAID1)
>> 2 x 1Gb ethernet (bonded for cluster network)
>> 2 x 1Gb ethernet (bonded for client network)
>>
>> and either 4 x 2Tb nearline SAS OSDs or 8 x 1Tb nearline SAS OSDs.
>>
>
> Hi Barry,
>
> With so few disks and the inability to do 10GbE, you may want to consider
> doing something like 5-6 R410s or R415s and just using the on-board
> controller with a couple of SATA disks and 1 SSD for the journal.  That
> should give you better aggregate performance since in your case you can't
> use 10GbE.  It will also spread your OSDs across more hosts for better
> redundancy and may not cost that much more per GB since you won't need to
> use the H700 card if you are using an SSD for journals.  It's not as dense
> as R515s or R720XDs can be when fully loaded, but for small clusters with
> few disks I think it's a good trade-off to get the added redundancy and
> avoid expander/controller complications.
>
>
>
>> At the moment I'm undecided on the OSDs, although I'm swaying towards
>> the second option at the moment as it would give me more flexibility and
>> the option of using some of the disks as journals.
>>
>> I'm intending to use this cluster to host the images for ~100 virtual
>> machines, which will run on different hardware most likely be managed by
>> OpenNebula.
>>
>> I'd be interested to hear from anyone running a similar configuration
>> with a similar use case, especially people who have spent some time
>> benchmarking a similar configuration and still have a copy of the results.
>>
>> I'd also welcome any comments or critique on the above specification.
>> Purchases have to be made via Dell and 10Gb ethernet is out of the
>> question at the moment.
>>
>> Cheers,
>>
>> Barry
>>
>>
>>
> __**_
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/**listinfo.cgi/ceph-users-ceph.**com<http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com>
>



-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RadosGW High Availability

2013-05-09 Thread Igor Laskovy
Anybody?


On Tue, May 7, 2013 at 1:19 PM, Igor Laskovy  wrote:

> I tried do that and put behind RR DNS, but unfortunately only one host can
> server requests from clients - second host does not responds totally.  I
> am not to good familiar with apache, in standard log files nothing helpful.
> Maybe this whole HA design is wrong? Does anybody resolve HA for Rados
> Gateway endpoint? How?
>
>
> On Wed, May 1, 2013 at 12:28 PM, Igor Laskovy wrote:
>
>> Hello,
>>
>> Whether any best practices how to make Hing Availability of RadosGW?
>> For example, is this right way to create two or tree RadosGW (keys for
>> ceph-auth, directory and so on) and having for example this is ceph.conf:
>>
>> [client.radosgw.a]
>> host = ceph01
>> ...options...
>>
>> [client.radosgw.b]
>> host = ceph02
>> ...options...
>>
>> Does this rgws will run simultaneous?
>> Have radosgw.b ability to continues serve load if ceph01 host went down?
>>
>> --
>> Igor Laskovy
>> facebook.com/igor.laskovy
>> studiogrizzly.com
>>
>
>
>
> --
> Igor Laskovy
> facebook.com/igor.laskovy
> studiogrizzly.com
>



-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy doesn't update ceph.conf

2013-05-09 Thread Igor Laskovy
: 'ulimit -n 8192;  /usr/bin/ceph-mon -i kvm-cs-sn-10i
> > > --pid-file /var/run/ceph/mon.kvm-cs-sn-10i.pid -c
> > /etc/ceph/ceph.conf
> > > '
> > > Starting ceph-create-keys on kvm-cs-sn-10i...
> > >
> > > Luckily I hadn't set up my ssh keys yet, so that's as far as I got.
> > >
> > > Would dearly love some guidance.  Thanks in advance!
> > >
> > > --Greg Chavez
> > > ___
> > > ceph-users mailing list
> > > ceph-users@lists.ceph.com
> > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > >
> > >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> >
> >
> >
> > --
> > Mean Trading Systems LLP
> > http://www.meantradingsystems.com
> >
> >
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Maximums for Ceph architectures

2013-05-11 Thread Igor Laskovy
Hi all,

Does anybody know where to learn about Maximums for Ceph architectures?
For example, I'm trying to find out the maximum size of an rbd image and of a
cephfs file. Additionally I want to know the maximum size of a RADOS Gateway
object (meaning a file to be uploaded).

-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Maximums for Ceph architectures

2013-05-15 Thread Igor Laskovy
Hi Gregory, thanks. But I think you should initiate filling this gap in the
architecture documentation, since this is an important question from a design
point of view.
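
If it helps the docs effort: as I understand it, the option Gregory mentions
below is set like this in ceph.conf (1 TB, the default he quotes, expressed in
bytes) and is taken into account when the MDS map is first created:

[mds]
mds max file size = 1099511627776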


On Mon, May 13, 2013 at 7:41 PM, Gregory Farnum  wrote:

> On Sat, May 11, 2013 at 4:47 AM, Igor Laskovy 
> wrote:
> > Hi all,
> >
> > Does anybody know where to learn about Maximums for Ceph architectures?
> > For example, I'm trying to find out about the maximum size of rbd image
> and
> > cephfs file. Additionally want to know maximum size for RADOS Gateway
> object
> > (meaning file for uploading).
>
> The maximum size of a CephFS file is very large (a terabyte) and
> configurable on MDSMap creation with the "mds max file size" config
> option. I don't think RBD or RGW have max sizes, although somebody
> might correct me.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>



-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Regd: Ceph-deploy

2013-05-15 Thread Igor Laskovy
Try to minimize the differences between users on the admin node and the
storage nodes. Use the same name for the admin user (used to execute
ceph-deploy) and the user you use on the storage nodes.


On Tue, May 14, 2013 at 6:09 PM, John Wilkins wrote:

> This is usually due to a connectivity issue:
>
> http://ceph.com/docs/master/start/quick-start-preflight/#ensure-connectivity
>  Make sure ceph-deploy can access the node where you are trying to
> deploy the monitor; then, repeat the ceph-deploy mon create step
> again. Then, repeat the ceph-deploy gatherkeys step again.
>
> On Mon, May 13, 2013 at 11:45 PM, Sridhar Mahadevan
>  wrote:
> > Hi,
> > I am trying to setup ceph and I am using ceph-deploy. I am following the
> > steps in object store quick guide. When I execute ceph-deploy gatherkeys
> it
> > throws up the following error.
> >
> > Unable to find /etc/ceph/ceph.client.admin.keyring
> > Unable to find /var/lib/ceph/bootstrap-osd/ceph.keyring
> > Unable to find /var/lib/ceph/bootstrap-msd/ceph.keyring
> >
> > Kindly help
> >
> > Thanks and Regards
> >
> > --
> > --sridhar
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
>
>
>
> --
> John Wilkins
> Senior Technical Writer
> Intank
> john.wilk...@inktank.com
> (415) 425-9599
> http://inktank.com
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph-deploy OSD Prepare error

2013-05-15 Thread Igor Laskovy
Hi, Ian,

Try "ceph-deploy osd prepare ceph-server:sdc1".
If you have used "ceph-deploy disk zap", it creates a single partition.
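
So the full sequence would be something like this (just a sketch, using the
host/disk from your mail):

ceph-deploy disk zap ceph-server:sdc
ceph-deploy osd prepare ceph-server:sdc1
ceph-deploy osd activate ceph-server:sdc1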


On Wed, May 15, 2013 at 4:51 PM,  wrote:

> Hi,
>
> ** **
>
> I am deploying using ceph-deploy (following quickstart guide) and getting
> the following error on
>
> ** **
>
> ceph-deploy osd prepare ceph-server:sdc
>
> ** **
>
> ** **
>
> > ValueError: need more than 3 values to unpack
>
> > > Traceback (most recent call last):
>
> >   File "/home/user/ceph-deploy/ceph_deploy/osd.py", line 426, in osd
>
> > >   File "/home/user/ceph-deploy/ceph-deploy", line 9, in 
>
> > > load_entry_point('ceph-deploy==0.1', 'console_scripts',
> 'ceph-deploy')()
>
> > >   File "/home/user/ceph-deploy/ceph_deploy/cli.py", line 112, in main*
> ***
>
> > > return args.func(args)
>
> > >   File "/home/user/ceph-deploy/ceph_deploy/osd.py", line 426, in osd**
> **
>
> > > prepare(args, cfg, activate_prepared_disk=False)
>
> > >   File "/home/user/ceph-deploy/ceph_deploy/osd.py", line 269, in
> prepare
>
> > > dmcrypt_dir=args.dmcrypt_key_dir,
>
> > > ValueError: need more than 3 values to unpack
>
> ** **
>
> Any suggestions?
>
> ** **
>
> Regards
>
> ** **
>
> Ian
>
> ** **
>
> Dell Corporation Limited is registered in England and Wales. Company
> Registration Number: 2081369
> Registered address: Dell House, The Boulevard, Cain Road, Bracknell,
> Berkshire, RG12 1LF, UK.
> Company details for other Dell UK entities can be found on  www.dell.co.uk
> .
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RadosGW High Availability

2013-05-20 Thread Igor Laskovy
Hi all,

Well, it looks like DragonDisk (http://www.dragondisk.com/) deals with RR DNS
well - it just works with both RGWs ;)

But what I actually need to know now is why RGW does not start at boot time,
failing with an "Initialization timeout, failed to initialize" error in the
logs. It runs successfully when started by hand after that.
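
Regarding the HA Proxy suggestion quoted below, the kind of minimal config I
have in mind is roughly this (hostnames are placeholders, untested on my side):

frontend rgw
    bind *:80
    mode http
    default_backend rgw_nodes

backend rgw_nodes
    mode http
    balance roundrobin
    option httpchk GET /
    server rgw-a ceph01:80 check
    server rgw-b ceph02:80 check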


On Thu, May 9, 2013 at 7:28 PM, Dimitri Maziuk wrote:

> On 05/09/2013 09:57 AM, Tyler Brekke wrote:
> > For High availability RGW you would need a load balancer. HA Proxy is
> > an example of a load balancer that has been used successfully with
> > rados gateway endpoints.
>
> Strictly speaking for HA you need an HA solution. E.g. heartbeat. Main
> difference between that and load balancing is that one server serves the
> clients until it dies, then another takes over. With load balancing, all
> servers get a share of the requests. It can be configured to do HA: set
> "main" server's share to 100%, then the backup will get no requests as
> long as the main is up.
>
> RRDNS is a load balancing solution. Dep. on the implementation it can
> simply return a list of IPs instead of a single IP for the host name,
> then it's up to the client to pick one. A simple stupid client may
> always pick the first one. A simple stupid server may always return the
> list in the same order. That could be how all your clients always pick
> the same server.
>
> --
> Dimitri Maziuk
> Programmer/sysadmin
> BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] FW: About RBD

2013-05-24 Thread Igor Laskovy
Hi Mensah,

As Gregory already mentioned, you can use an RBD at the same time from both
servers - Microsoft CSV should run well on top of it.
I have not done any tests with CSV yet, so could you please try it and report
back quickly?


On Thu, May 23, 2013 at 8:40 PM, Mensah, Yao (CIV) wrote:

>  Thank you very much for your prompt response…
>
> ** **
>
> So basically I can’t use cluster aware tool like Microsoft CSV on the RBD,
> is that correct? 
>
> ** **
>
> What I am trying to understand is that can I have 2 physical hosts (Maybe
> Dell PowerEdge2950)
>
> ** **
>
> *host1 with VM#0-10 
>
> *host2 with  VM #10-20
>
> ** **
>
> And both of these hosts accessing one big LUN or, in this case ceph RBD? *
> ***
>
> ** **
>
> Can host1 failed all it VMs to host2 in case that machine has trouble and
> still make it resources available to my users? This is very important to us
> if we really want to explore this new avenue of Ceph
>
> ** **
>
> Thank you,
>
> ** **
>
> Yao Mensah
>
> Systems Administrator II
>
> OLS Servers
>
> yao.men...@usdoj.gov
>
> (202) 307 0354
>
> MCITP
>
> MCSE NT4.0 / 2000-2003
>
> A+
>
> ** **
>
> *From:* Dave Spano [mailto:dsp...@optogenics.com]
> *Sent:* Thursday, May 23, 2013 1:19 PM
> *To:* Mensah, Yao (CIV)
> *Cc:* ceph-users@lists.ceph.com
> *Subject:* Re: [ceph-users] FW: About RBD
>
> ** **
>
> Unless something changed, each RBD needs to be attached to 1 host at a
> time like an ISCSI lun. 
>
> Dave Spano
> Optogenics
>
>
> 
>  --
>
> *From: *"Yao Mensah (CIV)" 
> *To: *ceph-users@lists.ceph.com
> *Sent: *Thursday, May 23, 2013 1:10:53 PM
> *Subject: *[ceph-users] FW: About RBD
>
> FYI
>
>  
>
> *From:* Mensah, Yao (CIV)
> *Sent:* Wednesday, May 22, 2013 5:59 PM
> *To:* 'i...@inktank.com'
> *Subject:* About RBD
>
>  
>
> Hello,
>
>  
>
> I was doing some reading on your web site about ceph and what it capable
> of. I have one question and maybe you can help on this:
>
>  
>
> Can ceph RBD be used by 2 physical hosts at the same time? Or, is Ceph rbd
> CSV(Clustered Shared Volumes) aware? 
>
>  
>
> Thank you, 
>
>  
>
> Yao Mensah
>
> Systems Administrator II
>
> OLS Servers
>
>  ****
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
> ** **
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Mounting a shared block device on multiple hosts

2013-05-28 Thread Igor Laskovy
Hi Jon, as I have already mentioned multiple times here - an RBD is just a
block device. You can map it on multiple hosts, but before doing dd
if=/dev/zero of=/media/tmp/test you created a file system on it, right? That
file system MUST be a distributed (cluster-aware) one, so that multiple hosts
can read and write files on it.
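
A rough sketch of what I mean (ocfs2 is only one example of a cluster-aware
filesystem, and it needs its own cluster stack configured, which I am not
showing here; pool/image names are placeholders):

# on every host that will share the image
rbd map rbd/shared
# create the filesystem ONCE, from a single host, with a cluster filesystem
mkfs.ocfs2 /dev/rbd/rbd/shared
# then mount it on each host
mount /dev/rbd/rbd/shared /media/tmp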


On Wed, May 29, 2013 at 4:24 AM, Jon  wrote:

> Hello,
>
> I would like to mount a single RBD on multiple hosts to be able to share
> the block device.
> Is this possible?  I understand that it's not possible to share data
> between the different interfaces, e.g. CephFS and RBDs, but I don't see
> anywhere it's declared that sharing an RBD between hosts is or is not
> possible.
>
> I have followed the instructions on the github page of ceph-deploy (I was
> following the 5 minute quick start
> http://ceph.com/docs/next/start/quick-start/ but when I got to the step
> with mkcephfs it erred out and pointed me to the github page), as I only
> have three servers I am running the osds and monitors on all of the hosts,
> I realize this isn't ideal but I'm hoping it will work for testing purposes.
>
> This is what my cluster looks like:
>
> >> root@red6:~# ceph -s
> >>health HEALTH_OK
> >>monmap e2: 3 mons at {kitt=
> 192.168.0.35:6789/0,red6=192.168.0.40:6789/0,shepard=192.168.0.2:6789/0},
> election epoch 10, quorum 0,1,2 kitt,red6,shepard
> >>osdmap e29: 5 osds: 5 up, 5 in
> >> pgmap v1692: 192 pgs: 192 active+clean; 19935 MB data, 40441 MB
> used, 2581 GB / 2620 GB avail; 73B/s rd, 0op/s
> >>mdsmap e1: 0/0/1 up
>
> To test, what I have done is created a 20GB RBD mapped it and mounted it
> to /media/tmp on all the hosts in my cluster, so all of the hosts are also
> clients.
>
> Then I use dd to create a 1MB file named test-$hostname
>
> >> dd if=/dev/zero of=/media/tmp/test-`hostname` bs=1024 count=1024;
>
> after the file is created, I wait for the writes to finish in `ceph -w`,
> then on each host when I list /media/tmp I see the results of
> /media/tmp/test-`hostname`, if I unmount then remount the RBD, I get mixed
> results.  Typically, I see the file that was created on the host that is at
> the front of the line in the quorum. e.g. the test I did while typing this
> e-mail "kitt" is listed first quorum 0,1,2 kitt,red6,shepard, this is the
> file I see created when I unmount then mount the rbd on shepard.
>
> Where this is going is, I would like to use CEPH as my back end storage
> solution for my virtualization cluster.  The general idea is the
> hypervisors will all have a shared mountpoint that holds images and vms so
> vms can easily be migrated between hypervisors.  Actually, I was thinking I
> would create one mountpoint each for images and vms for performance
> reasons, am I likely to see performance gains using more smaller RBDs vs
> fewer larger RBDs?
>
> Thanks for any feedback,
> Jon A
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph configuration

2013-06-05 Thread Igor Laskovy
>and I'm unable to mount the cluster with the following command:
>root@ceph1:/mnt# mount -t ceph 192.168.2.170:6789:/ /mnt

So, what does it say?

I also recommend that you start from my Russian doc:
http://habrahabr.ru/post/179823
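
Also, if cephx is enabled (the default), the kernel client needs the key too,
something like this (the keyring path is just a guess, adjust to your setup):

ceph-authtool -n client.admin -p /etc/ceph/ceph.client.admin.keyring > /etc/ceph/admin.secret
mount -t ceph 192.168.2.170:6789:/ /mnt -o name=admin,secretfile=/etc/ceph/admin.secret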


On Tue, Jun 4, 2013 at 4:22 PM, Явор Маринов  wrote:

>  That's the exact documentation which i'm using the directory on ceph2 is
> created, and the service is starting without any problems on both nodes.
> However the health of the cluster is getting WARN and i was able to mount
> the cluster
>
>
>
>
>  On 06/04/2013 03:43 PM, Andrei Mikhailovsky wrote:
>
> Yavor,
>
> I would highly recommend taking a look at the quick install guide:
> http://ceph.com/docs/next/start/quick-start/
>
> As per the guide, you need to precreate the directories prior to starting
> ceph.
>
> Andrei
> --
> *From: *"Явор Маринов"  
> *To: *ceph-users@lists.ceph.com
> *Sent: *Tuesday, 4 June, 2013 11:03:52 AM
> *Subject: *[ceph-users] ceph configuration
>
>
> Hello,
>
> I'm new to the Ceph mailing list, and I need some advices for our
> testing cluster. I have 2 servers with x2 hard disks. On the first
> server i configured monitor and OSD, and on the second server only OSD.
> The configuration looks like as follows:
>
> [mon.a]
>
>  host = ceph1
>  mon addr = 192.168.2.170:6789
>
> [osd.0]
>  host = ceph1
>  addr = 192.168.2.170
>  devs = /dev/sdb
>
> [osd.1]
>  host = ceph2
>  addr = 192.168.2.114
>  devs = /dev/sdb
>
> Once i initiate 'service ceph -a start' i keep getting the following error:
>
> Mounting xfs on ceph2:/var/lib/ceph/osd/ceph-1
> df: `/var/lib/ceph/osd/ceph-1/.': No such file or directory
>
> and I'm unable to mount the cluster with the following command:
> root@ceph1:/mnt# mount -t ceph 192.168.2.170:6789:/ /mnt
>
> Also executing 'ceph health' i'm getting this response:
> HEALTH_WARN 143 pgs degraded; 576 pgs stuck unclean; recovery 15/122
> degraded (12.295%)
>
> This is fresh install and there aren't any nodes which are added/removed.
>
> Any help will be much appreciated.
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph configuration

2013-06-05 Thread Igor Laskovy
Sure,
http://www.sebastien-han.fr/blog/2012/06/10/introducing-ceph-to-openstack/
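
In short, assuming ceph-deploy is in use (host/disk names below are
placeholders), adding a disk to a running cluster is basically:

ceph-deploy disk zap ceph3:sdb
ceph-deploy osd create ceph3:sdb

Data is rebalanced onto the new OSD automatically, so the mount point and the
running service keep working.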


On Wed, Jun 5, 2013 at 11:38 AM, Явор Маринов  wrote:

>  I've managed to start and mount the cluster by completely starting the
> process from scratch. Other thing that i'm searching for is any
> documentation how to add another node (or hard drives) on a running cluster
> without affecting the mount point and the running service. Can you point me
> for this?
>
>
>
>
>  On 06/05/2013 11:20 AM, Igor Laskovy wrote:
>
> >and I'm unable to mount the cluster with the following command:
> >root@ceph1:/mnt# mount -t ceph 192.168.2.170:6789:/ /mnt
>
>  So, what it says?
>
>  I'm also recommend to you start from my russian doc
> http://habrahabr.ru/post/179823
>
>
> On Tue, Jun 4, 2013 at 4:22 PM, Явор Маринов  wrote:
>
>>  That's the exact documentation which i'm using the directory on ceph2
>> is created, and the service is starting without any problems on both nodes.
>> However the health of the cluster is getting WARN and i was able to mount
>> the cluster
>>
>>
>>
>>
>>  On 06/04/2013 03:43 PM, Andrei Mikhailovsky wrote:
>>
>> Yavor,
>>
>> I would highly recommend taking a look at the quick install guide:
>> http://ceph.com/docs/next/start/quick-start/
>>
>> As per the guide, you need to precreate the directories prior to starting
>> ceph.
>>
>> Andrei
>> --
>> *From: *"Явор Маринов"  
>> *To: *ceph-users@lists.ceph.com
>> *Sent: *Tuesday, 4 June, 2013 11:03:52 AM
>> *Subject: *[ceph-users] ceph configuration
>>
>>
>> Hello,
>>
>> I'm new to the Ceph mailing list, and I need some advices for our
>> testing cluster. I have 2 servers with x2 hard disks. On the first
>> server i configured monitor and OSD, and on the second server only OSD.
>> The configuration looks like as follows:
>>
>> [mon.a]
>>
>>  host = ceph1
>>  mon addr = 192.168.2.170:6789
>>
>> [osd.0]
>>  host = ceph1
>>  addr = 192.168.2.170
>>  devs = /dev/sdb
>>
>> [osd.1]
>>  host = ceph2
>>  addr = 192.168.2.114
>>  devs = /dev/sdb
>>
>> Once i initiate 'service ceph -a start' i keep getting the following
>> error:
>>
>> Mounting xfs on ceph2:/var/lib/ceph/osd/ceph-1
>> df: `/var/lib/ceph/osd/ceph-1/.': No such file or directory
>>
>> and I'm unable to mount the cluster with the following command:
>> root@ceph1:/mnt# mount -t ceph 192.168.2.170:6789:/ /mnt
>>
>> Also executing 'ceph health' i'm getting this response:
>> HEALTH_WARN 143 pgs degraded; 576 pgs stuck unclean; recovery 15/122
>> degraded (12.295%)
>>
>> This is fresh install and there aren't any nodes which are added/removed.
>>
>> Any help will be much appreciated.
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>
>
>  --
> Igor Laskovy
> facebook.com/igor.laskovy
> studiogrizzly.com
>
>
>


-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Problem with multiple hosts RBD + Cinder

2013-06-20 Thread Igor Laskovy
Hello list!

I am trying to deploy Ceph RBD + OpenStack Cinder.
Basically, my question relates to this section in the documentation:

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <usage type='ceph'>
    <name>client.volumes secret</name>
  </usage>
</secret>
EOF
sudo virsh secret-define --file secret.xml

sudo virsh secret-set-value --secret {uuid of secret} --base64 $(cat
client.volumes.key) && rm client.volumes.key secret.xml
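
To check that the secret actually landed on a host, I assume something like
this can be used:

sudo virsh secret-list
sudo virsh secret-get-value {uuid of secret}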

Do I need to tie the libvirt secret logic to the ceph client.volumes user on
each cinder-volume host? So there will be a separate "uuid of secret" for each
host, but they will all use the single user client.volumes, right?

I am asking this because I get a strange error in nova-scheduler.log on the
controller host:

2013-06-20 13:10:01.270 ERROR nova.scheduler.filter_scheduler
[req-b173d765-9528-43af-a3d1-bd811df8710d fd860a2737f94ff0bc7decec5783017b
3f47be9a0c2348faac4deec2a988acd8] [instance:
d8dd40d4-61de-498d-a54f-12f4d9e9c594] Error from last host: node03 (node
node03.ceph.labspace.studiogrizzly.com): [u'Traceback (most recent call
last):\n', u'  File
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 848, in
_run_instance\nset_access_ip=set_access_ip)\n', u'  File
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1107, in
_spawn\nLOG.exception(_(\'Instance failed to spawn\'),
instance=instance)\n', u'  File "/usr/lib/python2.7/contextlib.py", line
24, in __exit__\nself.gen.next()\n', u'  File
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1103, in
_spawn\nblock_device_info)\n', u'  File
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1527,
in spawn\nblock_device_info)\n', u'  File
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2443,
in _create_domain_and_network\ndomain = self._create_domain(xml,
instance=instance)\n', u'  File
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2404,
in _create_domain\ndomain.createWithFlags(launch_flags)\n', u'  File
"/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 187, in doit\n
 result = proxy_call(self._autowrap, f, *args, **kwargs)\n', u'  File
"/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 147, in
proxy_call\nrv = execute(f,*args,**kwargs)\n', u'  File
"/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 76, in tworker\n
   rv = meth(*args,**kwargs)\n', u'  File
"/usr/lib/python2.7/dist-packages/libvirt.py", line 711, in
createWithFlags\nif ret == -1: raise libvirtError
(\'virDomainCreateWithFlags() failed\', dom=self)\n', u"libvirtError:
internal error rbd username 'volumes' specified but secret not found\n"]

--
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Problem with multiple hosts RBD + Cinder

2013-06-21 Thread Igor Laskovy
Merci Sebastien, it works now ;)
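
For the archive: as I understand it now, the same UUID has to be defined as a
libvirt secret on every compute node and referenced from cinder.conf, roughly
like this (using the example UUID from the quoted mail; the driver line may
differ per release):

volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = volumes
rbd_secret_uuid = 9e4c7795-0681-cd4f-cf36-8cb8aef3c47f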

Now, for live migration, do I need to follow
https://wiki.openstack.org/wiki/LiveMigrationUsage beginning from the libvirt
settings section?


On Thu, Jun 20, 2013 at 2:47 PM, Sebastien Han
wrote:

> Hi,
>
> No this must always be the same UUID. You can only specify one in
> cinder.conf.
>
> Btw nova does the attachment this is why it needs the uuid and secret.
>
> The first secret import generates an UUID, then always re-use the same one
> for all your compute node, do something like:
>
> 
> <secret>
>   <uuid>9e4c7795-0681-cd4f-cf36-8cb8aef3c47f</uuid>
>   <usage type='ceph'>
>     <name>client.volumes secret</name>
>   </usage>
> </secret>
>
>
> Cheers.
>
> 
> Sébastien Han
> Cloud Engineer
>
> "Always give 100%. Unless you're giving blood."
>
>
>
>
>
>
>
>
>
> *Phone : *+33 (0)1 49 70 99 72 – *Mobile : *+33 (0)6 52 84 44 70
> *Email :* sebastien@enovance.com – *Skype : *han.sbastien
> *Address :* 10, rue de la Victoire – 75009 Paris
> *Web : *www.enovance.com – *Twitter : *@enovance
>
> On Jun 20, 2013, at 12:23 PM, Igor Laskovy  wrote:
>
> Hello list!
>
> I am trying deploy Ceph RBD + OpenStack Cinder.
> Basically, my question related to this section in documentation:
>
> cat > secret.xml <<EOF
> <secret ephemeral='no' private='no'>
>   <usage type='ceph'>
>     <name>client.volumes secret</name>
>   </usage>
> </secret>
> EOF
> sudo virsh secret-define --file secret.xml
> 
> sudo virsh secret-set-value --secret {uuid of secret} --base64 $(cat
> client.volumes.key) && rm client.volumes.key secret.xml
>
> Do I need tie libvirt secrets logic with ceph client.volumes user on each
> cinder-volume hosts? So it will be separate "uuid of secret" for each host
> but they all will use single user cinder.volumes, right?
>
> Asking this because I have strange error in nova-scheduler.log on
> controller host:
>
> 2013-06-20 13:10:01.270 ERROR nova.scheduler.filter_scheduler
> [req-b173d765-9528-43af-a3d1-bd811df8710d fd860a2737f94ff0bc7decec5783017b
> 3f47be9a0c2348faac4deec2a988acd8] [instance:
> d8dd40d4-61de-498d-a54f-12f4d9e9c594] Error from last host: node03
> (node node03.ceph.labspace.studiogrizzly.com): [u'Traceback (most recent
> call last):\n', u'  File
> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 848, in
> _run_instance\nset_access_ip=set_access_ip)\n', u'
>  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line
> 1107, in _spawn\nLOG.exception(_(\'Instance failed to spawn\'),
> instance=instance)\n', u'  File "/usr/lib/python2.7/contextlib.py", line
> 24, in __exit__\nself.gen.next()\n', u'
>  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line
> 1103, in _spawn\nblock_device_info)\n', u'  File
> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1527,
> in spawn\nblock_device_info)\n', u'  File
> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2443,
> in _create_domain_and_network\ndomain = self._create_domain(xml,
> instance=instance)\n', u'  File
> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2404,
> in _create_domain\ndomain.createWithFlags(launch_flags)\n', u'  File
> "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 187, in doit\n
>  result = proxy_call(self._autowrap, f, *args, **kwargs)\n', u'  File
> "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 147, in
> proxy_call\nrv = execute(f,*args,**kwargs)\n', u'  File
> "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 76, in tworker\n
>rv = meth(*args,**kwargs)\n', u'  File
> "/usr/lib/python2.7/dist-packages/libvirt.py", line 711, in
> createWithFlags\nif ret == -1: raise
> libvirtError (\'virDomainCreateWithFlags() failed\', dom=self)\n',
> u"libvirtError: internal error rbd username 'volumes' specified but secret
> not found\n"]
>
> --
> Igor Laskovy
> facebook.com/igor.laskovy
> studiogrizzly.com
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>


-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Problem connecting with Cyberduck

2013-06-29 Thread Igor Laskovy
How did you deploy RGW? Please show the configs you used.
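
At first glance, the "plaintext connection?" part of the error suggests the
gateway answers plain HTTP while Cyberduck connects with HTTPS. A quick check
from the command line (just a guess on my side):

curl -v http://cephserver1.zion.bt.co.uk/

If that returns a ListAllMyBucketsResult XML document, point the client at
plain HTTP (or add SSL to the apache vhost) and try again.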


On Wed, Jun 26, 2013 at 11:15 AM, Gary Bruce wrote:

> Hi All,
>
> I have followed the 2-node install and trying to connect using Cyberduck
> to:
>
> https://x...@cephserver1.zion.bt.co.uk/
>
> I get the following message:
>
> I/O Error
> Connection failed
> Unrecognised SSL message, plaintext  connection?.
> GET /HTTP/1.1
> Date.
> Authorisation: AWS
> 
> Host: cephserver1.zion.bt.co.uk:443
> Connection: Keep-Alive
> User-Agent: Cyberduck...
>
> Can anyone help?
>
> Thanks
> Gary
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com