--
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
te...done.
igor@hv03:~$ rbd diff rbd/cloneoftest | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'
1024 MB
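For anyone wondering what the awk is doing: "rbd diff" with no --from-snap lists every allocated extent, so summing the second column (the extent length) gives actual allocated space rather than provisioned size. Wrapped as a small helper (a sketch; rbd_usage is our name, not a Ceph command):

  # Sum the extent lengths reported by "rbd diff" to get real usage.
  rbd_usage() {
      rbd diff "$1" | awk '{ sum += $2 } END { printf "%.0f MB\n", sum/1024/1024 }'
  }
  rbd_usage rbd/cloneoftest   # reports ~1024 MB here, i.e. fully allocated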
--
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
Anybody? ;)
On Thu, Jan 30, 2014 at 9:10 PM, Igor Laskovy wrote:
> Hello list,
>
> Is it correct behavior that copying produces a thick-provisioned RBD image?
>
> igor@hv03:~$ rbd create rbd/test -s 1024
> igor@hv03:~$ rbd diff rbd/test | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'
...disasters. What is a good practice/strategy to
>> backup/replicate data in the cluster?
>>
>
> Hmm, there's not a good tailored answer for CephFS. With RBD there are
> some options around snapshots and incremental diffs.
> -Greg
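A minimal sketch of the snapshot-plus-incremental-diff approach mentioned above, using rbd export-diff/import-diff (image, snapshot and path names are examples):

  # Initial full backup from a snapshot
  rbd snap create rbd/test@snap1
  rbd export rbd/test@snap1 /backup/test-snap1.img

  # Later: snapshot again and export only the extents changed since snap1
  rbd snap create rbd/test@snap2
  rbd export-diff --from-snap snap1 rbd/test@snap2 /backup/test-snap1-to-snap2.diff

  # On the backup cluster, replay the delta onto the copy
  # (the target image must already carry the snap1 snapshot)
  rbd import-diff /backup/test-snap1-to-snap2.diff backuppool/test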
>
> > I look forward to your feedback.
> >
> > Regards,
> >
> >
> >
>
>
> --
> Loïc Dachary, Artisan Logiciel Libre
>
>
> MTIA,
> dk
>
>
--
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
client machine frozen or so?
Looks like in my case the whole cluster has gone crazy.
--
Igor Laskovy
facebook.com/igor.laskovy
Kiev, Ukraine
Thanks for the quick reply.
OK, so at this time it looks better to avoid splitting networks across
network interfaces.
Where can I find a list of all known issues for a specific version?
On Mon, Mar 11, 2013 at 5:16 PM, Gregory Farnum wrote:
> On Monday, March 11, 2013, Igor Laskovy wrote:
Hi there!
Could you please clarify the current status of client development
for OS X and Windows desktop editions?
--
Igor Laskovy
facebook.com/igor.laskovy
Kiev, Ukraine
Anybody? :)
Igor Laskovy
facebook.com/igor.laskovy
Kiev, Ukraine
On Mar 17, 2013 6:37 PM, "Igor Laskovy" wrote:
> Hi there!
>
> Could you please clarify the current status of client development
> for OS X and Windows desktop editions?
>
> --
> Igor Laskovy
eph.com || http://inktank.com
> @scuttlemonkey || @ceph || @inktank
>
>
> On Tue, Mar 19, 2013 at 8:30 AM, Igor Laskovy
> wrote:
> > Anybody? :)
> >
> > Igor Laskovy
> > facebook.com/igor.laskovy
> > Kiev, Ukraine
> >
> > On Mar 17, 2013 6:
Hi there!
What steps need to be performed if we have totally lost a node?
As I already understand from the docs, OSDs must be recreated (disabled,
removed and created again, right?)
But what about MON and MDS?
--
Igor Laskovy
facebook.com/igor.laskovy
Kiev, Ukraine
> the cluster seems to adopt the new osd and sort everything out.
>
> David
--
Igor Laskovy
facebook.com/igor.laskovy
t are known to the cluster (these can be
> different from or the same as the one that existed on the dead node;
> doesn't matter!).
> -Greg
>
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Wednesday, March 20, 2013 at 10:40 AM, Igor Laskovy wrote:
-[osd|mds|mon].$id). You
> can create more or copy an unused MDS one. I believe the docs include
> information on how this works.
> -Greg
>
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Wednesday, March 20, 2013 at 10:48 AM, Igor Laskovy wrote:
>
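Putting the replies together, a sketch of the usual sequence for a totally lost node (OSD id and monitor name are examples; consult the docs for your version):

  # For each OSD that lived on the dead node
  ceph osd out 12
  ceph osd crush remove osd.12
  ceph auth del osd.12
  ceph osd rm 12
  # ...then create replacement OSDs on the new hardware as usual.

  # Drop the dead monitor from the map, then add a fresh one
  ceph mon remove hv03
  ceph mon add hv03 192.168.1.13:6789   # after preparing its data dir and keyring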
Hi there!
Are Chris Holcombe and Robert Blair here? Could you please tell us more about
your awesome work, http://ceph.com/community/ceph-over-fibre-for-vmware/ .
Thanks!
--
Igor Laskovy
facebook.com/igor.laskovy
Kiev, Ukraine
> --
> John Wilkins
> Senior Technical Writer
> Inktank
> john.wilk...@inktank.com
> (415) 425-9599
> http://inktank.com
Hello!
Does anybody use Rados Gateway via S3-compatible clients on desktop systems?
--
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
A little bit more.
I have tried deploying RGW via http://ceph.com/docs/master/radosgw/ and then
connecting with the S3 Browser, CrossFTP and CloudBerry Explorer clients, but
all attempts were unsuccessful.
Again my question: does anybody use S3 desktop clients with RGW?
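For comparison with the GUI clients, a minimal ~/.s3cfg for s3cmd against an RGW endpoint might look like this (the hostname is the one used later in this thread; the keys are placeholders):

  [default]
  access_key = <your RGW access key>
  secret_key = <your RGW secret key>
  host_base = osd01.ceph.labspace.studiogrizzly.com
  host_bucket = %(bucket)s.osd01.ceph.labspace.studiogrizzly.com
  use_https = False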
On Fri, Apr 19, 2013 at 10:54 PM, Igor Laskovy
tible preparation.
Maybe something additional is needed? Manually creating a root bucket,
or something like that?
On Sun, Apr 21, 2013 at 6:53 PM, Yehuda Sadeh wrote:
> On Sun, Apr 21, 2013 at 3:02 AM, Igor Laskovy
> wrote:
> > A little bit more.
> >
> > I h
On Sun, Apr 21, 2013 at 7:43 PM, Yehuda Sadeh wrote:
> On Sun, Apr 21, 2013 at 9:39 AM, Igor Laskovy
> wrote:
> > Well, in each case something specific. For CrossFTP, for example, it says
> > that when querying the server it receives text data instead of XML.
>
> When doing
if there's
anything else there.
Looks like Apache works.
Which log files exactly can I show you?
On Sun, Apr 21, 2013 at 11:49 PM, Yehuda Sadeh wrote:
> On Sun, Apr 21, 2013 at 10:05 AM, Igor Laskovy
> wrote:
> >
> > Just initial connect to rgw server, nothing fur
/log/apache2/access.log combined
ServerSignature Off
On Tue, Apr 23, 2013 at 5:57 PM, Yehuda Sadeh wrote:
> On Tue, Apr 23, 2013 at 7:51 AM, Igor Laskovy wrote:
>> Sorry for delayed reply,
>>
>> I am not very familiar with Apache.
>> For RGW I use one of the O
at 9:39 PM, Yehuda Sadeh wrote:
> On Tue, Apr 23, 2013 at 11:33 AM, Igor Laskovy wrote:
>> In /etc/apache2/httpd.conf I have :
>> ServerName osd01.ceph.labspace.studiogrizzly.com
>>
>> In /etc/apache2/sites-available/rgw.conf :
>
>
>
>> FastCgiExte
: java.net.UnknownHostException:
fdfdf.osd01.ceph.labspace.studiogrizzly.com; XML Error Message: null
[R1] Failed to create the directory
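The UnknownHostException above is the classic symptom of virtual-host-style bucket addressing: the client prepends the bucket name ("fdfdf") to the endpoint, and that combined name must resolve. The usual fix, following the RGW docs of the era, is a wildcard DNS record plus matching Apache and ceph.conf entries (a sketch):

  # DNS: *.osd01.ceph.labspace.studiogrizzly.com -> the gateway's IP
  # Apache vhost:
  ServerName osd01.ceph.labspace.studiogrizzly.com
  ServerAlias *.osd01.ceph.labspace.studiogrizzly.com
  # ceph.conf, gateway section:
  rgw dns name = osd01.ceph.labspace.studiogrizzly.com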
On Tue, Apr 23, 2013 at 9:39 PM, Yehuda Sadeh wrote:
> On Tue, Apr 23, 2013 at 11:33 AM, Igor Laskovy wrote:
>> In /etc/apache2/httpd.conf I have :
>> S
osx:
>
> brew install s3cmd
>
> []s
> -lorieri
>
>
>
>
> On Tue, Apr 23, 2013 at 4:00 PM, Igor Laskovy wrote:
>
>> So, I'm totally lost in this, but I did it, and now CrossFTP reports:
>> [R1] Connect to osd01.ceph.labspace.studiogrizzly.com
>> [R1] Current
t; diagnose if there are places that you can lower latency rather than hide
>> it with concurrency. That's not an easy task in a distributed system
>> like Ceph. There are probably opportunities for optimization, but I
>> suspect it may take more than tweaking the ceph.conf file.
>>
>>
> I fully get that the distributed nature has its drawbacks in serial
> performance and that Ceph excels in parallel performance; however, just 60
> ~ 80MB/sec seems rather slow. On a pretty idle cluster that should be
> better, especially when all the OSDs have everything in their page cache.
>
>
> Mark
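One way to separate per-request latency from raw throughput is to benchmark the same pool at different queue depths, e.g. with rados bench (the pool name is an example):

  rados bench -p testpool 30 write -t 1    # one outstanding op: dominated by latency
  rados bench -p testpool 30 write -t 16   # sixteen in flight: closer to raw bandwidth
  # A large gap between the two means latency, not bandwidth, is the limit.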
>
>
> --
> Wido den Hollander
> 42on B.V.
>
> Phone: +31 (0)20 700 9902
> Skype: contact42on
--
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
bjects: (2) No such file or directory
I found that this may be a two-year-old bug:
http://www.mail-archive.com/ceph-devel@vger.kernel.org/msg04037.html
--
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
on.
>
> Yehuda
>
> On Sat, Apr 27, 2013 at 6:33 AM, Igor Laskovy wrote:
>> Hello,
>>
>> have problem with clearing space for RGW pool with "radosgw-admin temp
>> remove" command:
>>
>> root@osd01:~# ceph -v
>> ceph version 0.56.4 (63b0f8
I will rephrase my question.
When I upload files over S3, ceph -s reports growth in used space,
but when those files are deleted, no space is freed.
Yehuda, could you please explain a little more about how I can control this behavior?
On Sat, Apr 27, 2013 at 7:09 PM, Igor Laskovy wrote
ay purge it.
>
> * rgw gc processor period
>
> Time between the start of two consecutive garbage collector runs
>
>
>
> Yehuda
>
> On Sat, Apr 27, 2013 at 10:23 AM, Igor Laskovy
> wrote:
> > I will rephrase my question.
> > When I upload files over s3 t
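For reference, the knobs Yehuda lists live in the gateway's ceph.conf section; a sketch with what I believe were the defaults of that era (values in seconds; confirm against your version's docs):

  [client.radosgw.gateway]
      rgw gc obj min wait = 7200        # min age before a deleted object may be purged
      rgw gc processor period = 3600    # time between consecutive GC runs
      rgw gc processor max time = 3600  # max duration of a single GC run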
(http://s3browser.com/)
On Wed, Apr 24, 2013 at 10:39 AM, Igor Laskovy wrote:
> Ok. I will try, thanks.
> One further question: does /etc/init.d/radosgw need to be started manually
> every time this host is rebooted? Why is it not part of 'service ceph
> -a start'?
>
>
>
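On the Debian/Ubuntu packages of that era, radosgw shipped its own init script rather than being driven by 'service ceph -a start', so it has to be enabled at boot separately (a sketch):

  sudo update-rc.d radosgw defaults   # register the init script for boot
  sudo /etc/init.d/radosgw start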
= ceph02
...options...
Will these RGWs run simultaneously?
Will radosgw.b be able to continue serving the load if host ceph01 goes down?
--
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
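The configuration quoted above follows the standard pattern of one radosgw section per host; spelled out it might look like this (a sketch; instance names, hosts and paths are examples). Since all gateway state lives in RADOS, both instances can serve the same buckets at the same time:

  [client.radosgw.a]
      host = ceph01
      keyring = /etc/ceph/keyring.radosgw.a
      rgw socket path = /var/run/ceph/radosgw.a.sock
      log file = /var/log/ceph/radosgw.a.log

  [client.radosgw.b]
      host = ceph02
      keyring = /etc/ceph/keyring.radosgw.b
      rgw socket path = /var/run/ceph/radosgw.b.sock
      log file = /var/log/ceph/radosgw.b.log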
Or maybe, for hosting purposes, it is easier to implement RadosGW.
Gateway endpoint? How?
On Wed, May 1, 2013 at 12:28 PM, Igor Laskovy wrote:
> Hello,
>
> Are there any best practices for making RadosGW highly available?
> For example, is it the right way to create two or three RadosGWs (keys for
> ceph-auth, directory and so on) and having for
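Because RGW instances keep no local state, high availability largely reduces to an ordinary HTTP load balancer with health checks in front of two or more gateways; a minimal haproxy sketch (hostnames are examples):

  defaults
      mode http
      timeout connect 5s
      timeout client 30s
      timeout server 30s

  frontend rgw_front
      bind *:80
      default_backend rgw_back

  backend rgw_back
      balance roundrobin
      server gw1 ceph01:80 check
      server gw2 ceph02:80 check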
code-named Dumpling, is slated for three months from now
> (beginning of August).
>
> You can download v0.61 Cuttlefish from the usual locations:
>
> * Git at git://github.com/ceph/ceph.git
> * Tarball at http://ceph.com/download/ceph-0.61.tar.gz
> * For Deb
y of the results.
>>
>> I'd also welcome any comments or critique on the above specification.
>> Purchases have to be made via Dell and 10Gb ethernet is out of the
>> question at the moment.
>>
>> Cheers,
>>
>> Barry
>>
>>
>>
Anybody?
On Tue, May 7, 2013 at 1:19 PM, Igor Laskovy wrote:
> I tried to do that and put them behind RR DNS, but unfortunately only one host
> serves requests from clients - the second host does not respond at all. I
> am not too familiar with Apache; in the standard log files nothin
> > >
> > > root@kvm-cs-sn-10i:/tmp# service ceph -a start
> > > === mon.kvm-cs-sn-10i ===
> > > Starting Ceph mon.kvm-cs-sn-10i on kvm-cs-sn-10i...
> > > failed: 'ulimit -n 8192; /usr/bin/ceph-mon -i kvm-cs-sn-10i
> > > --pid-file /var/ru
Hi all,
Does anybody know where to learn about maximums for Ceph architectures?
For example, I'm trying to find out the maximum size of an RBD image and of a
CephFS file. Additionally, I want to know the maximum size of a RADOS Gateway
object (meaning a file for uploading).
--
Igor Laskovy
facebook.com/igor.laskovy
Hi Gregory, thanks. But I think you need to initiate filling this gap
in the architecture documentation, as this is an important question from a
design point of view.
On Mon, May 13, 2013 at 7:41 PM, Gregory Farnum wrote:
> On Sat, May 11, 2013 at 4:47 AM, Igor Laskovy
> wrote:
> > Hi all,
> Inktank
> john.wilk...@inktank.com
> (415) 425-9599
> http://inktank.com
--
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
> Any suggestions?
>
> Regards,
>
> Ian
>
> Dell Corporation Limited is registered in England and Wales. Company
> Registration Number: 2081369
> Registered address: Dell House, The Boulevard, Cain Road, Br
> --
> Dimitri Maziuk
> Programmer/sysadmin
> BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu
>
>
--
Igor Laskovy
facebook.com/igor.laskovy
sed by 2 physical hosts at the same time? Or, is Ceph RBD
> CSV (Clustered Shared Volumes) aware?
>
>
>
> Thank you,
>
>
>
> Yao Mensah
>
> Systems Administrator II
>
> OLS Servers
>
using more smaller RBDs vs
> fewer larger RBDs?
>
> Thanks for any feedback,
> Jon A
>
--
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
--
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
> documentation on how to add another node (or hard drives) to a running cluster
> without affecting the mount point and the running service. Can you point me
> to this?
>
>
>
>
> On 06/05/2013 11:20 AM, Igor Laskovy wrote:
>
> and I'm unable to mount the cluster wi
= execute(f,*args,**kwargs)\n', u' File
"/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 76, in tworker\n
rv = meth(*args,**kwargs)\n', u' File
"/usr/lib/python2.7/dist-packages/libvirt.py", line 711, i
72 – *Mobile : *+33 (0)6 52 84 44 70
> *Email :* sebastien@enovance.com – *Skype : *han.sbastien
> *Address :* 10, rue de la Victoire – 75009 Paris
> *Web : *www.enovance.com – *Twitter : *@enovance
>
> On Jun 20, 2013, at 12:23 PM, Igor Laskovy wrote:
>
> He
ent: Cyberduck...
>
> Can anyone help?
>
> Thanks
> Gary
>
>
--
Igor Laskovy
facebook.com/igor.laskovy