On 04/30/2014 10:46 PM, Patrick McGarry wrote:
Hey Danny (and Wido),
WRT the foundation, I'm sure you can see why it has been on hold for
the last few weeks. However, this does not signify the death of the
effort. Both Sage and I still feel that this is a discussion worth
having. However, the
Victor,
This is a verified issue reported earlier today:
http://tracker.ceph.com/issues/8260
Cheers,
Mike
On 4/30/2014 3:10 PM, Victor Bayon wrote:
Hi all,
I am following the "quick-ceph-deploy" tutorial [1] and I am getting an
error when running "ceph-deploy osd activate"; I am getting an exception.
I've re-proposed the rbd-clone-image-handler blueprint via nova-specs:
https://review.openstack.org/91486
In other news, Sebastien has helped me test the most recent
incarnation of this patch series and it seems to be usable now, with
the important exception of live migration of VMs with RBD backe
On 04/30/2014 05:33 PM, Gandalf Corvotempesta wrote:
2014-05-01 0:20 GMT+02:00 Matt W. Benjamin :
Hi,
Sure, that's planned for integration in Giant (see Blueprints).
Great. Any ETA? Firefly was planned for February :)
At least on the plus side you can download the code whenever you want,
2014-05-01 0:20 GMT+02:00 Matt W. Benjamin :
> Hi,
>
> Sure, that's planned for integration in Giant (see Blueprints).
Great. Any ETA? Firefly was planned for February :)
On 30/04/2014 00:42, Gregory Farnum wrote:
> On Tue, Apr 29, 2014 at 3:28 PM, Marc wrote:
>> Thank you for the help so far! I went for option 1 and that did solve
>> that problem. However quorum has not been restored. Here's the
>> information I can get:
>>
>> mon a+b are in state Electing and hav
2014-05-01 0:11 GMT+02:00 Mark Nelson :
> Usable is such a vague word. I imagine it's testable after a fashion. :D
OK, but I'd prefer official support, with IB integrated into the main Ceph repo.
On 04/30/2014 05:05 PM, Gandalf Corvotempesta wrote:
2014-04-30 22:27 GMT+02:00 Mark Nelson :
Check out the xio work that the linuxbox/mellanox folks are working on.
Matt Benjamin has posted quite a bit of info to the list recently!
Is that usable ?
Usable is such a vague word. I imagine i
2014-04-30 22:27 GMT+02:00 Mark Nelson :
> Check out the xio work that the linuxbox/mellanox folks are working on.
> Matt Benjamin has posted quite a bit of info to the list recently!
Is that usable ?
Hi,
I was wondering why the OSDs would not start at boot time; this happens on one
server (2 OSDs).
If I check with "chkconfig ceph --list", I can see that it should start;
that is, the MON on this server does start, but the OSDs do not.
I can start them manually with: service ceph start osd.X
This
Hey Danny (and Wido),
WRT the foundation I'm sure you can see why it has been on hold for
the last few weeks. However, this does not signify the death of the
effort. Both Sage and I still feel that this is a discussion worth
having. However, the discussion hasn't happened yet, so it's far too
On 30.04.2014 14:18, Sage Weil wrote:
> Today we are announcing some very big news: Red Hat is acquiring Inktank.
> We are very excited about what this means for Ceph, the community, the
> team, our partners, and our customers. Ceph has come a long way in the ten
> years since the first line o
On 04/30/2014 03:21 PM, Gandalf Corvotempesta wrote:
2014-04-30 14:18 GMT+02:00 Sage Weil :
Today we are announcing some very big news: Red Hat is acquiring Inktank.
Great news.
Any chance of getting native InfiniBand support in Ceph like in GlusterFS?
Check out the xio work that the linuxbox/
Hi Irek,
Good day to you.
Any updates/comments on below?
Looking forward to your reply, thank you.
Cheers.
On Tue, Apr 29, 2014 at 12:47 PM, Indra Pramana wrote:
> Hi Irek,
>
> Good day to you, and thank you for your e-mail.
>
> Is there a better way other than patching the kernel? I would
2014-04-30 14:18 GMT+02:00 Sage Weil :
> Today we are announcing some very big news: Red Hat is acquiring Inktank.
Great news.
Any chance of getting native InfiniBand support in Ceph like in GlusterFS?
2014-04-30 22:11 GMT+02:00 Andrey Korolyov :
> regarding this one and the previous one where you mentioned memory consumption -
> there are too many PGs, so memory consumption is as high as you are
> observing. The dead loop of osd-never-goes-up is probably because of the
> suicide timeout of internal queues. It may
Gandalf,
regarding this one and the previous one where you mentioned memory consumption -
there are too many PGs, so memory consumption is as high as you are
observing. The dead loop of osd-never-goes-up is probably because of the
suicide timeout of internal queues. It may not be good, but it is
expected.
OSD behaviour
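As a side note on PG counts: the rule of thumb from the Ceph docs of that era was
roughly 100 PGs per OSD, divided by the replica count and rounded up to a power of
two. A minimal sketch of that arithmetic (the helper name and the example numbers
are purely illustrative):

def suggested_pg_num(num_osds, pool_size, pgs_per_osd=100):
    # (OSDs * target PGs per OSD) / replicas, rounded up to a power of two
    raw = num_osds * pgs_per_osd / float(pool_size)
    pg_num = 1
    while pg_num < raw:
        pg_num *= 2
    return pg_num

# Example: 6 OSDs with 3x replication -> 256 PGs for the pool
print(suggested_pg_num(6, 3))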
Hi all,
I am following the "quick-ceph-deploy" tutorial [1] and I am getting an
error when running "ceph-deploy osd activate": I am getting an
exception. See below [2].
I am following the quick tutorial step by step, except that
any help is greatly appreciated
"ceph-deploy mon create-initial" doe
Congrats to the Inktank team and Sage!
On Wed, Apr 30, 2014 at 5:18 AM, Sage Weil wrote:
> Today we are announcing some very big news: Red Hat is acquiring Inktank.
> We are very excited about what this means for Ceph, the community, the
> team, our partners, and our customers. Ceph has come a
On 04/30/2014 02:18 PM, Sage Weil wrote:
Today we are announcing some very big news: Red Hat is acquiring Inktank.
We are very excited about what this means for Ceph, the community, the
team, our partners, and our customers. Ceph has come a long way in the ten
years since the first line of code h
On 30/04/14 18:05, Mike Hanby wrote:
Congrats, any possible conflict with RedHat's earlier acquisition of GlusterFS?
They are similar projects but with slightly different targets. I guess
Ceph+OpenStack and Gluster+[RHEV|oVirt] are natural partners. That
said, I'd love to see Ceph+[RHEV|
Yes, this is normal. The pgmap version updates continuously even on an
idle system, because it is incremented when the periodic reports on PG
status are received by the mon from the osds.
It's a bit annoying if you want to set something else up to update when the
pg status changes - in that case
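If you do want to react only to real PG state changes rather than to every pgmap
version bump, something along these lines can work. This is just a sketch using
the python-rados mon_command interface; the "pg stat" output format is assumed to
match the "vNNN: 640 pgs: 640 active+clean; ..." lines quoted elsewhere in this
thread.

import json
import time
import rados

def pg_state_summary(cluster):
    # Ask the monitors for "pg stat" in plain form.
    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "pg stat", "format": "plain"}), b'')
    if ret != 0:
        raise RuntimeError(errs)
    text = out.decode() if isinstance(out, bytes) else out
    # Keep only the "640 pgs: 640 active+clean" part, dropping the
    # ever-increasing "vNNN:" prefix and the usage figures.
    return text.split(';', 1)[0].split(':', 1)[-1].strip()

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
last = None
while True:
    summary = pg_state_summary(cluster)
    if summary != last:
        print("PG state changed:", summary)
        last = summary
    time.sleep(5)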
Congrats, any possible conflict with RedHat's earlier acquisition of GlusterFS?
> On Apr 30, 2014, at 7:18, "Sage Weil" wrote:
>
> Today we are announcing some very big news: Red Hat is acquiring Inktank.
> We are very excited about what this means for Ceph, the community, the
> team, our part
Hi Peng,
If you are interested in the code path of Ceph, these blogs may help:
How does a Ceph OSD handle a read message? (in Firefly and up)
How does a Ceph OSD handle a write message? (up to Emperor)
Here are the notes on the rados write path that I took when I read the source code:
| rados put [infile]
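For anyone following the same path from the client side, the python-rados bindings
can reproduce roughly what "rados put <objname> <infile>" does. A minimal sketch
(pool, object and file names are placeholders, and the real tool streams large
files in chunks rather than issuing a single write_full):

import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('data')  # pool name is an assumption
with open('infile', 'rb') as f:
    payload = f.read()
# One full-object write; this becomes a single write op sent to the
# primary OSD of the object's placement group.
ioctx.write_full('myobject', payload)
ioctx.close()
cluster.shutdown()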
On Wed, Apr 30, 2014 at 2:18 PM, Sage Weil wrote:
> Today we are announcing some very big news: Red Hat is acquiring Inktank.
> We are very excited about what this means for Ceph, the community, the
> team, our partners, and our customers. Ceph has come a long way in the ten
> years since the firs
Hello!
Congratulations! Glad to hear that RH will continue development as OSS and
even open source Calamari!
I also admire RH's work on KVM and the Linux kernel, so I'm very excited about
this news!
Best regards,
--
pawel
On Wed, Apr 30, 2014 at 2:18 PM, Sage Weil wrote:
> Today we are announcing
On 04/30/2014 10:19 AM, Loic Dachary wrote:
Hi Sage,
Congratulations, this is good news.
On 30/04/2014 14:18, Sage Weil wrote:
One important change that will take place involves Inktank's product
strategy, in which some add-on software we have developed is proprietary.
In contrast, Red Hat fa
I'm testing an idle Ceph cluster.
My pgmap version is always increasing; is this normal?
2014-04-30 17:20:41.934127 mon.0 [INF] pgmap v281: 640 pgs: 640
active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
2014-04-30 17:20:42.962033 mon.0 [INF] pgmap v282: 640 pgs: 640
active+clean;
Hi Sage,
Congratulations, this is good news.
On 30/04/2014 14:18, Sage Weil wrote:
> One important change that will take place involves Inktank's product
> strategy, in which some add-on software we have developed is proprietary.
> In contrast, Red Hat favors a pure open source model. That mea
On Wed, Apr 30, 2014 at 3:07 PM, Mohd Bazli Ab Karim
wrote:
> Hi Zheng,
>
> Sorry for the late reply. For sure, I will try this again after we have
> completely verified all content in the file system. Hopefully all will be good.
> And please confirm this: I will set debug_mds=10 for the ceph-mds, a
Hi Jeff,
or something else. Downgrading back to leveldb 1.7.0-2 resolved my problem.
Is anyone else seeing this?
It sounds a bit like what I reported here:
http://tracker.ceph.com/issues/7918
--
Jens Kristian Søgaard, Mermaid Consulting ApS,
j...@mermaidconsulting.dk,
http://www.mermaidconsu
Per http://tracker.ceph.com/issues/6022 leveldb-1.12 was pulled out of
the ceph-extras repo due to patches applied by a leveldb fork (Basho
patch). It's back in ceph-extras (since the 28th at least), and on
CentOS 6 is causing an abort on mon start when run with the Firefly
release candidate
On 30/04/14 06:32, Henrik Korkuc wrote:
>
> Ubuntu 14.04 currently ships ceph 0.79. After firefly release
> ubuntu maintainer will update ceph version in ubuntu's repos.
Thanks Henrik - you beat me to it :-)
>
> On 2014.04.30 07:08, Kenneth wrote
Two days ago, my cluster went down because a hard drive was damaged. Now I have
fixed the hard disk and copied the data to a new disk, but some PGs
stay in the "incomplete" state. How can I solve it? Thank you.
Sage,
Congrats to you and Inktank!
- Travis
On Wed, Apr 30, 2014 at 9:27 AM, Haomai Wang wrote:
> Congratulation!
>
> On Wed, Apr 30, 2014 at 8:18 PM, Sage Weil wrote:
> > Today we are announcing some very big news: Red Hat is acquiring Inktank.
> > We are very excited about what this means
Congratulation!
On Wed, Apr 30, 2014 at 8:18 PM, Sage Weil wrote:
> Today we are announcing some very big news: Red Hat is acquiring Inktank.
> We are very excited about what this means for Ceph, the community, the
> team, our partners, and our customers. Ceph has come a long way in the ten
> yea
This is very good news, congratulations!
(Do you know if the Ceph Enterprise subscription price will remain the same?
I'm looking to take out support next year.)
----- Original Message -----
From: "Sage Weil"
To: ceph-de...@vger.kernel.org, ceph-us...@ceph.com
Sent: Wednesday, 30 April 2014 14:18:48
It is literally very good news for Ceph, but Ceph is now an expensive product
:)
On Wed, Apr 30, 2014 at 6:18 PM, Sage Weil wrote:
> Today we are announcing some very big news: Red Hat is acquiring Inktank.
> We are very excited about what this means for Ceph, the community, the
> team, our partn
Today we are announcing some very big news: Red Hat is acquiring Inktank.
We are very excited about what this means for Ceph, the community, the
team, our partners, and our customers. Ceph has come a long way in the ten
years since the first line of code was written, particularly over the
On 04/30/2014 10:41 AM, Gonzalo Aguilar Delgado wrote:
Hi,
I've found my system with memory almost full. I see:
  PID USER  PR NI   VIRT    RES   SHR S %CPU %MEM    TIME+   COMMAND
 2317 root  20  0  824860 647856 3532 S  0.7  5.3 29:46.51  ceph-mon
I think it's too much. But what do you think?
How to submit a patch: https://github.com/ceph/ceph/blob/master/SubmittingPatches
You can register a bug on tracker.ceph.com/projects/ceph/issues
On Wed, Apr 30, 2014 at 4:30 PM, You, Ji wrote:
> Hi,
>
> A simple question: how do I submit a patch for Ceph? I can only find the steps
> for submitting patches f
Oh, sorry, I did not notice it's in an incomplete state.
What's the result of "ceph -s"? There should be OSDs down.
On Wed, Apr 30, 2014 at 5:23 PM, vernon1...@126.com wrote:
> Hello,
> The pg was "incomplete", and I had tried to repair it before. But it did
> nothing.
>
>
Hi,
I've found my system with memory almost full. I see:
  PID USER  PR NI   VIRT    RES   SHR S %CPU %MEM    TIME+   COMMAND
 2317 root  20  0  824860 647856 3532 S  0.7  5.3 29:46.51  ceph-mon
I think it's too much. But what do you think?
Best regards,
You can find the inconsistent PG via "ceph pg dump" and then run
"ceph pg repair <pgid>".
On Wed, Apr 30, 2014 at 5:00 PM, vernon1...@126.com wrote:
> Hi,
> I have a problem now. A large number of OSDs went down earlier. When some
> of them came back up, I found a PG was "incomplete". Now this PG's map
Hi,
I have a problem now. A large number of OSDs went down earlier. When some of
them came back up, I found a PG was "incomplete". Now this PG's map is [35,29,42].
The PG's folders on osd.35 and osd.29 are empty, but there is 9.2G of data on
osd.42. Like this:
here is osd.35
[root@ceph952
OK, I only just realized that. It looks OK.
According to the log, many OSDs try to boot repeatedly. I think
the problem may be on the monitor side. Could you check the monitor node?
The ceph-mon.log you provided is blank.
On Wed, Apr 30, 2014 at 3:59 PM, Cao, Buddy wrote:
> Yes, I set "osd jou
Hi,
A simple question: how do I submit a patch for Ceph? I can only find the steps
for submitting patches for the Ceph documents.
Thanks
In librados.cc, I found the following code:
Step 1.
file: librados.cc
void librados::ObjectWriteOperation::write(uint64_t off, const bufferlist& bl)
{
  // impl is an opaque pointer; cast it back to the internal ::ObjectOperation
  // (defined in osdc/Objecter.h)
  ::ObjectOperation *o = (::ObjectOperation *)impl;
  // copy the caller's bufferlist so the queued op owns its own data
  bufferlist c = bl;
  // append a CEPH_OSD_OP_WRITE op at the given offset
  o->write(off, c);
}
Step 2: find ::ObjectOperation
You mean the "[osd]" and "[osd.x]" information is not necessary anymore? What
if all the Ceph nodes plus the monitor nodes go down: after I reboot all the
nodes, will the Ceph cluster come back to its normal status? And where does the
Ceph cluster read the configuration info that used to be in ceph.conf?
Wei Cao (B
Yes, I set "osd journal size = 0" on purpose; I'd like to use all of the space
on the journal device. I think I got the idea from the Ceph website... Yes, I do
run "mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/keyring.admin" to create the
ceph cluster, and it succeeds.
Do you think "osd journal si
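For context, this is roughly what the documented "use the whole journal device"
setup looked like at the time; treat it as a sketch, and note that the journal
device path below is hypothetical and would normally live in a per-OSD section:

[osd]
; 0 means: if the journal is a block device, use the entire device
osd journal size = 0

[osd.0]
; hypothetical journal device for this OSD
osd journal = /dev/sdb1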
I found "osd journal size = 0" in your ceph.conf?
Do you really run mkcephfs with this? I think it will be fail.
On Wed, Apr 30, 2014 at 2:42 PM, Cao, Buddy wrote:
> Here you go... I did not see any stuck clean related log...
>
>
>
> Wei Cao (Buddy)
>
> -Original Message-
> From: Haomai W
On 30.04.2014 09:38, Cao, Buddy wrote:
> Thanks Robert. The auto-created ceph.conf file in the local working directory
> is too simple; there is almost nothing inside it. How do I know which osd.x
> instances were created by ceph-deploy, and how do I get that necessary
> information into ceph.conf?
This information is not
>71260 kB used,
> 1396 GB / 1396 GB avail
> 192 active+clean
>
>The only thing that puzzles me is the available space. I have 2x750GB
> drives and the total amount
> of available space is indeed 1396GB, but if Ceph automatically
> creates 2 replicas of every obje
Thanks Robert. The auto-created ceph.conf file in the local working directory
is too simple; there is almost nothing inside it. How do I know which osd.x
instances were created by ceph-deploy, and how do I get that necessary
information into ceph.conf?
Wei Cao (Buddy)
-Original Message-
From: ceph-users-boun...
On 30.04.2014 08:18, Cao, Buddy wrote:
> Thanks for your reply, Haomai. There is no /etc/ceph/ceph.conf on any of the
> Ceph nodes; that is why I raised the question in the beginning.
ceph-deploy creates the ceph.conf file in the local working directory.
You can distribute that with "ceph-deploy admin".
Reg
Hi Zheng,
Sorry for the late reply. For sure, I will try this again after we have
completely verified all content in the file system. Hopefully all will be good.
And please confirm this: I will set debug_mds=10 for the ceph-mds; do you
want me to send the ceph-mon log too?
BTW, how to confirm