Hi 皓月,
You can try "ls -al /mnt/ceph" to check whether the current user has read/write
access to the directory. You may need to use "chown" to change the directory's
owner.
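For illustration only, a minimal sketch of those two steps (the path is the one
from your mail; the recursive chown and the use of sudo are my assumptions,
adjust as needed):

  ls -al /mnt/ceph                      # check owner and permission bits
  sudo chown -R $(whoami): /mnt/ceph    # take ownership of the mount point if needed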
Regards,
Kai
At 2013-11-06 22:03:31,"皓月" wrote:
1. I have installed ceph with one mon/mds and one o
Hi CEPH,
An introduction to Savanna, for those who haven't heard of it:
The Savanna project aims to provide users with a simple means to provision a
Hadoop cluster on OpenStack by specifying several parameters such as the Hadoop
version, cluster topology, node hardware details and a few more.
For now, Savanna c
following one-liner:
ceph --admin-daemon /var/run/ceph/ceph-mon.*.asok mon_status | perl -MJSON -0e 'exit((from_json(<>))->{state} ne "leader")'
Has someone written up a quicker shortcut, preferably usable in bash?
Or did someone solve this in an entirely different way?
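Not from the original thread, just an untested jq-based sketch (it assumes jq is
installed and that mon_status exposes the same "state" field used above):

  ceph --admin-daemon /var/run/ceph/ceph-mon.*.asok mon_status \
      | jq -e '.state == "leader"' > /dev/null   # exit 0 if leader, 1 otherwise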
Th
[seq-write-64k]
bs=64K
rw=write
My benchmark script: https://gist.github.com/kazhang/8344180
Regards,
Kai
At 2014-01-09 01:25:17,"Bradley Kite" wrote:
Hi there,
I am new to Ceph and still learning its performance capabilities, but I would
like to share my performance results in t
/27968a74d29998703207705194ec4e0c93a6b42d/examples/librados/hello_world.cc
Regards,
Kai
At 2014-03-11 21:57:23,"ljm李嘉敏" wrote:
Hi all,
Is it possible for Ceph to support a Windows client? Right now I can only use
the RESTful API (Swift-compatible) through the Ceph object gateway,
but the languages that can be used are ja
Hi Thanh,
I think you missed "$ git submodule update --init", which clones all the
submodules required for compilation.
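As an illustration, the full sequence could look like this (a sketch only; the
--recursive flag is my addition, to cover nested submodules):

  git clone https://github.com/ceph/ceph.git
  cd ceph
  git submodule update --init --recursive   # fetches the submodules needed for the build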
Cheers,
Kai
At 2014-04-07 09:35:32,"Thanh Tran" wrote:
Hi,
When i build ceph from source code that I downloaded from
https://github.com/ceph/cep
ger.cc] _send_message()
L submit_message()
L pipe->_send()
L [msg/Pipe.h] _send()
L [msg/Pipe.cc] writer()
L write_message()
L do_sendmsg()
L sendmsg()
Hope this helps.
Regards,
Kai Zhang
At 2014-04-30 00:04:55,peng wrote:
In librados.cc, I found the following code:
Step
Hi Adrian,
You may be interested in "rados -p pool_name df --format json"; although it's
pool-oriented, you could probably add the values together :)
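As a rough sketch of what I mean by adding the values together (jq-based; the
"size_bytes" field name is an assumption, since the JSON keys vary between Ceph
releases, with older ones exposing size_kb instead):

  rados df --format json | jq '[.pools[].size_bytes] | add'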
Regards,
Kai
On 2014-05-13 08:33:11, "Adrian Banasiak" wrote:
Thanks for the suggestion with the admin daemon, but it looks like sin
nning the osd, iirc. I had to use the fuse-based driver to
mount, which obviously is not too great, speed-wise.
See http://ceph.com/docs/master/faq/#try-ceph for a better description
of the issue.
Cheers,
Kai
--
Kai Blin
Worldforge developer http://www.worldforge.org/
Wine developer http://wiki.
Ditto, see you in Darmstadt!
On 01/16/2018 08:47 AM, Wido den Hollander wrote:
> Yes! Looking forward :-) I'll be there :)
>
> Wido
--
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284
(AG Nürnberg)
Just for those of you who are not subscribed to ceph-users.
Forwarded Message
Subject:Ceph team involvement in Rook (Deploying Ceph in Kubernetes)
Date: Fri, 19 Jan 2018 11:49:05 +0100
From: Sebastien Han
To: ceph-users , Squid Cybernetic
, Dan Mick , Chen, Hua
just wondering if there's
anything that I'm not aware of?
Thanks
Kai
Hi and welcome,
On 09.02.2018 15:46, ST Wong (ITSC) wrote:
>
> Hi, I'm new to CEPH and got a task to setup CEPH with kind of DR
> feature. We've 2 10Gb connected data centers in the same campus. I
> wonder if it's possible to setup a CEPH cluster with following
> components in each data cente
On 12.02.2018 00:33, c...@elchaka.de wrote:
> I absolutely agree, too. This was really great! It would be fantastic if
> the Ceph Days happened again in Darmstadt - or Düsseldorf ;)
>
> Btw. will the slides and perhaps videos of the presentations be available
> online?
AFAIK Danny is working on
Sometimes I'm just blind - clearly I read the ML way too little :D
Thanks!
On 12.02.2018 10:51, Wido den Hollander wrote:
> Because I'm co-organizing it! :) I sent out a Call for Papers last
> week to this list.
Hi Wido,
how do you know about that beforehand? There's no official upcoming
event on the ceph.com page.
Just asking because I'm curious :)
Thanks
Kai
On 12.02.2018 10:39, Wido den Hollander wrote:
> The next one is in London on April 19th
Hi,
maybe it's worth looking at this:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-April/017378.html
Kai
On 02/14/2018 11:06 AM, Götz Reinicke wrote:
> Hi,
>
> We have some work to do on the power lines for all buildings and we have to
> shut down all systems. So
openATTIC.
If you've missed the ongoing Dashboard V2 discussions and work, here's a
blog post to follow up:
https://www.openattic.org/posts/ceph-manager-dashboard-v2/
Let us know your thoughts on this.
Thanks
Kai
On 02/16/2018 06:20 AM, Laszlo Budai wrote:
> Hi,
>
>
On 22.08.2018 20:57, David Turner wrote:
> does it remove any functionality of the previous dashboard?
No, it doesn't. All dashboard_v1 features are integrated into
dashboard_v2 as well.
I totally understand your frustration here, but you have to keep in
mind that this is an open source project with a lot of volunteers.
If you have a really urgent need, you can either develop such a feature
yourself or pay someone to do the work for you.
It'
Hi,
given that we don't have Ubuntu or CentOS packages, you could
install directly from our sources.
http://download.openattic.org/sources/3.x/openattic-3.6.2.tar.bz2
Our docs are hosted at: http://docs.openattic.org/en/latest/
Kai
On 03/02/2018 04:39 PM, Budai Laszlo wrote:
t could be interesting
for such events. It has already happened in the past that two people
submitted more or less the same topic without being aware of it. If there's
nothing like that so far, I would like to get it started somehow :-).
Thanks,
Kai
Hi Robert,
thanks, I'll forward it to the community list as well.
Kai
On 03/26/2018 11:03 AM, Robert Sander wrote:
> Hi Kai,
>
> On 22.03.2018 18:04, Kai Wagner wrote:
>> don't know if this is the right place to discuss this but I was just
>> wondering if there's
the new
channel on OFTC.
Thanks,
Kai
Is this just from one server or from all servers? Just wondering why VD
0 is using WriteThrough compared to the others. If that's the setup for
the OSDs, you already have a cache setup problem.
On 10.04.2018 13:44, Mohamad Gebai wrote:
> megacli -LDGetProp -cache -Lall -a0
>
> Adapter 0-VD 0(targ
Hi all,
indeed it was a lot of fun again, and what I liked most were the open
discussions afterwards.
Big thanks go to Wido for organizing this, and we should not forget to
thank all the sponsors who made this happen as well.
Kai
On 20.04.2018 10:32, Sean Purdy wrote:
> Jus
Looks very good. Is it possible to display the reason why a cluster is
in an error or warning state? I'm thinking of the output of ceph -s and
whether that could be shown in case there's a failure. I think this is
not provided by default, but I'm wondering if it's possible to add.
Kai
Hi Oliver,
a good value is 100-150 PGs per OSD, so in your case between 20k and 30k.
You can increase your PGs, but keep in mind that this will keep the
cluster quite busy for a while. That said, I would rather increase in
smaller steps than in one large move.
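For illustration, a rough sketch of what I mean by smaller steps (not a tested
script; pool name, target and step size are made-up example values):

  pool=rbd; target=4096; step=256
  current=$(ceph osd pool get $pool pg_num | awk '{print $2}')
  while [ $current -lt $target ]; do
      current=$((current + step))
      [ $current -gt $target ] && current=$target
      ceph osd pool set $pool pg_num $current
      ceph osd pool set $pool pgp_num $current
      # wait for peering/backfill to settle before the next step
      while ceph health detail | grep -q 'peering\|activating\|creating\|backfill'; do
          sleep 30
      done
  done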
Kai
On 17.05.2018 01:29
ting\|inactive' || break
>   done
>   ceph osd pool set $pool pgp_num $num
>   while sleep 10; do
>     ceph osd health | grep -q 'peering\|stale\|activating\|creating\|inactive' || break
>   done
> done
> for flag in $flags; do
>   ceph osd unset $flag
>
*that would raise my interest dramatically :-)*
Kai
I'm also not 100% sure, but I think the first one is the right way to
go. The second command only specifies the DB partition but no dedicated
WAL partition, so the first one should do the trick.
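For reference, a ceph-volume sketch of what an explicit WAL partition would
look like (device paths are made-up examples, not from this thread):

  ceph-volume lvm create --bluestore --data /dev/sdb \
      --block.db /dev/nvme0n1p1 \
      --block.wal /dev/nvme0n1p2   # omit this line and the WAL shares the DB device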
On 28.06.2018 22:58, Igor Fedotov wrote:
>
> I think the second variant is what you need. But I'm not
On 28.06.2018 23:25, Eric Jackson wrote:
> Recently, I learned that this is not necessary when both are on the same
> device. The wal for the Bluestore OSD will use the db device when set to 0.
That's good to know. Thanks for the input on this, Eric.
near
future? Or, is this otherwise fixable by "turning it off and on again"?
Regards,
Kai
Hello Wido and Shinobu,
On 20/01/2017 19:54, Shinobu Kinjo wrote:
> What does `ceph -s` say?
HEALTH_OK; this was not the cause, thanks though.
> On Sat, Jan 21, 2017 at 3:39 AM, Wido den Hollander wrote:
>>
>>> On 20 January 2017 at 17:17, Kai Storbeck wrote:
>>
Congrats to everyone.
Seems like we're getting closer to ponies, rainbows and ice cream for
everyone! ;-)
On 12/11/18 12:15 AM, Mike Perez wrote:
> Hey all,
>
> Great news, the Rook team has declared Ceph to be stable in v0.9! Great work
> from both communities in collaborating to make this poss
Hi all,
just a friendly reminder to use this pad for CfP coordination.
Right now it seems like I'm the only one who has submitted something to
Cephalocon, and I can't believe that ;-)
https://pad.ceph.com/p/cfp-coordination
Thanks,
Kai
On 5/31/18 1:17 AM, Gregory Farnum wrote:
>
rd as this is now the official name of it?
Thoughts?
Kai
On 3/5/19 3:37 PM, Laura Paduano wrote:
> Hi Ashley,
>
> thanks for pointing this out! I've created a tracker issue [1] and we
> will take care of updating the documentation accordingly.
>
> Thanks,
> Laura
>
.
Has someone here tested this SSD and can give me some values for comparison?
Many thanks in advance,
Kai
(108MB/s)(6205MiB/60001msec)
Best regards,
Kai
From: Martin Verges
Sent: Wednesday, 13 March 2019 19:34
To: Kai Wembacher
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Intel D3-S4610 performance
Hello Kai,
there are tons of bad SSDs on the market. You cannot buy any brand without
Hi Marc,
let me add Danny so he's aware of your request.
Kai
On 23.05.19 12:13, Wido den Hollander wrote:
>
> On 5/23/19 12:02 PM, Marc Roos wrote:
>> Sorry for not waiting until it is published on the ceph website but,
>> anyone attended this talk? Is it production
On 10.07.19 20:46, Reed Dier wrote:
> It does not appear that that page has been updated in a while.
I've addressed that already - someone just needs to merge it:
https://github.com/ceph/ceph/pull/28643
Hi, all
I'm a newbie to Ceph and have just set up a whole new Ceph cluster (0.87) with
two servers, but its status is always a warning:
[root@serverA ~]# ceph osd tree
# id    weight  type name       up/down reweight
-1      62.04   root default
-2      36.4        host serverA
0       3.64
building a
custom CRUSH map?
From: Yueliang [yueliang9...@gmail.com]
Sent: Monday, March 30, 2015 12:04 PM
To: ceph-users@lists.ceph.com; Kai KH Huang
Subject: Re: [ceph-users] Ceph osd is all up and in, but every pg is incomplete
Hi Kai KH
ceph -s report &quo
ent: Monday, March 30, 2015 1:50 PM
To: ceph-users@lists.ceph.com; Kai KH Huang
Subject: RE: [ceph-users] Ceph osd is all up and in, but every pg is incomplete
I think there is no other way. :)
--
Yueliang
Sent with Airmail
On March 30, 2015 at 13:17:55, Kai KH Huang
(huangk...@lenovo.com<mailto
Hi, all
I have a two-node Ceph cluster, and both nodes are monitor and OSD. When they're
both up, the OSDs are all up and in, and everything is fine... almost:
[root~]# ceph -s
health HEALTH_WARN 25 pgs degraded; 316 pgs incomplete; 85 pgs stale; 24
pgs stuck degraded; 316 pgs stuck inactive; 85 pgs
sday, March 31, 2015 11:53 AM
To: Lindsay Mathieson; Kai KH Huang
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] One host failure bring down the whole cluster
On Mon, Mar 30, 2015 at 8:02 PM, Lindsay Mathieson
wrote:
> On Tue, 31 Mar 2015 02:42:27 AM Kai KH Huang wrote:
>> Hi, all
t it will get the 19.04 kernel.
[1] https://wiki.ubuntu.com/Kernel/LTSEnablementStack
--
Kai Stian Olstad