Woah, major thread necromancy! :)
On Feb 13, 2015, at 3:03 PM, Josef Johansson wrote:
>
> Hi,
>
> I skimmed the logs again, as we’ve had more of this kinda errors,
>
> I saw a lot of lossy connections errors,
> -2567> 2014-11-24 11:49:40.028755 7f6d49367700 0 -- 10.168.7.23:6819/10217 >> 1
I was also able to reproduce this, guys, but I believe it’s specific to the
mode of testing rather than to anything being wrong with the OSD. In
particular, after restarting the OSD whose file I removed and running repair,
it did so successfully.
The OSD has an “fd cacher” which caches open file
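The fd-cacher idea Greg mentions can be sketched roughly like this. This is illustrative Python only, not Ceph's actual C++ FileStore code; the class and method names here are invented for the sketch:

```python
# Toy LRU cache of open file descriptors, in the spirit of the OSD's
# "fd cacher": hot object files stay open instead of being re-opened
# on every access. Capacity and API are made up for illustration.
import os
from collections import OrderedDict

class FDCache:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self._fds = OrderedDict()  # path -> open fd, oldest first

    def get(self, path):
        """Return a cached fd for path, opening (and evicting) as needed."""
        if path in self._fds:
            self._fds.move_to_end(path)      # mark as most recently used
            return self._fds[path]
        if len(self._fds) >= self.capacity:  # evict the least recently used
            _, old_fd = self._fds.popitem(last=False)
            os.close(old_fd)
        fd = os.open(path, os.O_RDONLY)
        self._fds[path] = fd
        return fd

    def invalidate(self, path):
        """Drop a cached fd, e.g. after the file was removed or replaced."""
        fd = self._fds.pop(path, None)
        if fd is not None:
            os.close(fd)

    def close_all(self):
        for fd in self._fds.values():
            os.close(fd)
        self._fds.clear()
```

The property relevant to this thread: a cached descriptor can keep referring to a file even after it has been unlinked from the filesystem, so a daemon holding such a cache may not notice the deletion until the cache entry is dropped, e.g. by a restart.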
I believe the debian folder only includes stable releases; .57 is a dev
release. See http://ceph.com/docs/master/install/debian/ for more! :)
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Tuesday, March 5, 2013 at 8:44 AM, Scott Kinder wrote:
> When is ceph 0.57 going t
This is a companion discussion to the blog post at
http://ceph.com/dev-notes/cephfs-mds-status-discussion/ — go read that!
The short and slightly alternate version: I spent most of about two weeks
working on bugs related to snapshots in the MDS, and we started realizing that
we could probably d
On Tuesday, March 5, 2013 at 5:53 AM, Marco Aroldi wrote:
> Hi,
> I've collected a osd log with these parameters:
>
> debug osd = 20
> debug ms = 1
> debug filestore = 20
>
> You can download it from here:
> https://docs.google.com/file/d/0B1lZcgrNMBAJVjBqa1lJRndxc2M/edit?usp=sharing
>
> I ha
There's not a really good per-version list, but tracker.ceph.com is reasonably
complete and has a number of views.
-Greg
On Monday, March 11, 2013 at 8:22 AM, Igor Laskovy wrote:
> Thanks for the quick reply.
> Ok, so at this time looks like better to avoid split networks across network
> inter
On Wednesday, March 13, 2013 at 5:52 AM, Ansgar Jazdzewski wrote:
> hi,
>
> i added 10 new OSD's to my cluster, after the growth is done, i got:
>
> ##
> # ceph -s
> health HEALTH_WARN 217 pgs stuck unclean
> monmap e4: 2 mons at {a=10.100.217.3:6789/0,b=10.100.217.4:6789/0
On Friday, March 8, 2013 at 3:29 PM, Kevin Decherf wrote:
> On Fri, Mar 01, 2013 at 11:12:17AM -0800, Gregory Farnum wrote:
> > On Tue, Feb 26, 2013 at 4:49 PM, Kevin Decherf (mailto:ke...@kdecherf.com) wrote:
> > > You will find the archive here:
> > > The data is not anonymized. Interesting
On Friday, March 15, 2013 at 3:40 PM, Marc-Antoine Perennou wrote:
> Thank you a lot for these explanations, looking forward to these fixes!
> Do you have some public bug reports regarding this to link us?
>
> Good luck, thank you for your great job and have a nice weekend
>
> Marc-Antoine Peren
At various times the ceph-fuse client has worked on OS X — Noah was the
last one to do this and the branch for it is sitting in my long-term
really-like-to-get-this-mainlined-someday queue. OS X is a lot easier than
Windows though, and nobody's done any planning around that beyond noting t
The MDS doesn't have any local state. You just need to start up the daemon
somewhere with a name and key that are known to the cluster (these can be
different from or the same as the ones that existed on the dead node; it
doesn't matter!).
-Greg
Software Engineer #42 @ http://inktank.com | http://cep
uster?
>
>
> On Wed, Mar 20, 2013 at 7:41 PM, Greg Farnum (mailto:g...@inktank.com) wrote:
> > The MDS doesn't have any local state. You just need to start up the daemon
> > somewhere with a name and key that are known to the cluster (these can be
> > different f