Hello,
On Wed, 17 Aug 2016 16:54:41 -0500 Dan Jakubiec wrote:
> Hi Wido,
>
> Thank you for the response:
>
> > On Aug 17, 2016, at 16:25, Wido den Hollander wrote:
> >
> >
> >> On 17 August 2016 at 17:44, Dan Jakubiec wrote:
> >>
> >>
> >> Hello, we have a Ceph cluster with 8 OSDs that
As it is a lab environment, can I install the setup in a way that achieves
less redundancy (a lower replication factor) and more capacity?
How can I achieve that?
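For a throwaway lab the usual knob is the per-pool replica count. A minimal
sketch, assuming a test pool named "volumes" (the pool name is just an
example); note that size 1 maximizes usable capacity but leaves no redundancy
at all, so only use it for data you can afford to lose:

    # Set the replication factor (number of copies) for the pool:
    ceph osd pool set volumes size 2
    # min_size is the minimum number of replicas that must be up to serve I/O:
    ceph osd pool set volumes min_size 1
    # Verify the current setting:
    ceph osd pool get volumes size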
On Wed, Aug 17, 2016 at 7:47 PM, Gaurav Goyal
wrote:
> Hello,
>
> Awaiting any suggestions, please!
>
>
>
>
> Regards
>
> On Wed, Aug 17, 2016
Hello,
Awaiting any suggestions, please!
Regards
On Wed, Aug 17, 2016 at 9:59 AM, Gaurav Goyal
wrote:
> Hello Brian,
>
> Thanks for your response!
>
> Can you please elaborate on this?
>
> Do you mean I must use
>
> 4 x 1TB HDDs on each node rather than 2 x 2TB?
>
> This is going to be a lab
Dear Ceph Users,
Awaiting some suggestions, please!
On Wed, Aug 17, 2016 at 11:15 AM, Gaurav Goyal
wrote:
> Hello Mart,
>
> Thanks a lot for the detailed information!
> Please find my responses inline and help me learn more about this.
>
>
> Ceph works best with more hardware. It is not rea
On Thu, Aug 18, 2016 at 1:12 AM, agung Laksono wrote:
> Hi Ceph User,
>
> When I make changes to the Ceph code in development mode,
> I find that recompiling takes around an hour, because I have to remove
> the build folder and all its contents and then rebuild everything.
>
> Is there a way to make th
Hi Wido,
Thank you for the response:
> On Aug 17, 2016, at 16:25, Wido den Hollander wrote:
>
>
>> On 17 August 2016 at 17:44, Dan Jakubiec wrote:
>>
>>
>> Hello, we have a Ceph cluster with 8 OSDs that recently lost power to all 8
>> machines. We've managed to recover the XFS filesyste
> On 17 August 2016 at 17:44, Dan Jakubiec wrote:
>
>
> Hello, we have a Ceph cluster with 8 OSDs that recently lost power to all 8
> machines. We've managed to recover the XFS filesystems on 7 of the machines,
> but the OSD service is only starting on 1 of them.
>
> The other 5 machines
Hi All,
I'm writing a small piece of code to call fsfreeze/unfreeze that can be invoked
by a RADOS notify. I have the basic watch/notify
functionality working but I need to be able to determine if the notify message
is to freeze or unfreeze, or maybe something
completely unrelated.
I'm looking
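One hedged sketch of the payload-branching idea, exercised with the rados CLI
(the pool "rbd" and object "freeze-ctl" are made-up examples): the string
passed to notify arrives in the watcher's callback as the notify payload, so
the daemon can match on it and ignore anything else.

    # Send distinct payloads; the watch callback sees the string and can
    # branch on "freeze" / "unfreeze", ignoring unrelated notifies:
    rados -p rbd notify freeze-ctl freeze
    rados -p rbd notify freeze-ctl unfreeze
    # For quick manual testing, watch the object from another shell:
    rados -p rbd watch freeze-ctl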
Hey cephers,
Just a reminder that the August Ceph Tech Talk is on for next Thursday
@ 1p EDT.
http://ceph.com/ceph-tech-talks/
Alfredo Deza will be talking about ‘Unified CI: transitioning away
from gitbuilders’ and a special guest moderator will be recording the
event while I’m running around d
Hi,
On 08/16/2016 02:16 PM, Lenz Grimmer wrote:
> I blogged about the state of Ceph support a few months ago [1], a
> followup posting is currently in the works.
>
> [1]
> https://blog.openattic.org/posts/update-the-state-of-ceph-support-in-openattic/
FWIW, the update has been published now:
Hello, we have a Ceph cluster with 8 OSDs that recently lost power to all 8
machines. We've managed to recover the XFS filesystems on 7 of the machines,
but the OSD service is only starting on 1 of them.
The other 5 machines all have complaints similar to the following:
2016-08-17 09:32
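A diagnostic sketch for a case like this (the OSD id and device below are
examples, not taken from the thread): read the end of a failing OSD's log for
the actual abort message, and dry-run-check the XFS filesystem backing it
while it is unmounted.

    # Full startup error for one failing OSD (adjust the id):
    tail -n 100 /var/log/ceph/ceph-osd.3.log
    # Read-only check of the underlying XFS device (must be unmounted):
    xfs_repair -n /dev/sdb1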
Hello Mart,
Thanks a lot for the detailed information!
Please find my responses inline and help me learn more about this.
Ceph works best with more hardware. It is not really designed for small
scale setups. Of course small setups can work for a PoC or testing, but I
would not advise this fo
Hi Ceph User,
When I make changes to the Ceph code in development mode,
I find that recompiling takes around an hour, because I have to remove
the build folder and all its contents and then rebuild everything.
Is there a way to make the compiling process faster? Something like only
compiling a partic
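A hedged sketch of the usual approach: keep the build directory between edits
so make only recompiles the files that changed, build just the target you are
working on, and put ccache in front of the compiler. The target and path names
below are examples and depend on how your checkout is configured.

    # Reuse the existing build tree instead of deleting it:
    cd build
    make -j$(nproc) ceph-osd      # rebuild only the target you changed
    # ccache caches compiler output and speeds up repeated rebuilds:
    sudo apt-get install ccache   # or your distro's equivalent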
Hello Brian,
Thanks for your response!
Can you please elaborate on this?
Do you mean I must use
4 x 1TB HDDs on each node rather than 2 x 2TB?
This is going to be a lab environment. Can you please suggest the best
possible design for my lab environment?
On Wed, Aug 17, 2016 at 9:54 AM,
You're going to see pretty slow performance on a cluster this size
with spinning disks...
Ceph scales very well, but at a cluster of this size it can be
challenging to get good throughput and IOPS.
For something small like this, either use all-SSD OSDs or consider
having more spinning OSDs.
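If you want to put numbers on that before deciding between SSDs and more
spindles, rados bench is a quick way to measure raw throughput and IOPS; a
sketch, assuming a scratch pool called "testpool":

    # 30-second write benchmark, keeping the objects for the read test:
    rados bench -p testpool 30 write --no-cleanup
    # Sequential read benchmark against the objects written above:
    rados bench -p testpool 30 seq
    # Remove the benchmark objects afterwards:
    rados -p testpool cleanup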
Dear Gaurav,
Ceph works best with more hardware. It is not really designed for small
scale setups. Of course small setups can work for a PoC or testing, but
I would not advise this for production.
If you want to proceed, however, have a good look at the manuals or this
mailing list archive and do inv
Dear Ceph Users,
I need your help to redesign my Ceph storage network.
As suggested in earlier discussions, I must not use SAN storage, so we have
decided to remove it.
Now we are ordering local HDDs.
My network would be:
Host 1 --> Controller + Compute1
Host 2 --> Compute2
Host 3 --> Compute3
Dear Ceph Users,
Can you please address my scenario and suggest a solution?
Regards
Gaurav Goyal
On Tue, Aug 16, 2016 at 11:13 AM, Gaurav Goyal
wrote:
> Hello
>
>
> I need your help to redesign my ceph storage network.
>
> As suggested in earlier discussions, I must not use SAN storage. So
Hi Stefan,
I have the same problem as you: trying to monitor Ceph through the admin
socket with a non-root user.
Have you found a clean way to give the ceph group write permissions on the
socket?
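In case it helps, a rough workaround sketch (the socket path and mon name are
examples, and the permissions are reset whenever the daemon recreates the
socket, e.g. on restart):

    # Give the ceph group write access to the admin socket:
    sudo chgrp ceph /var/run/ceph/ceph-mon.node1.asok
    sudo chmod g+w  /var/run/ceph/ceph-mon.node1.asok
    # A non-root user in the ceph group can then query it, e.g.:
    ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok perf dump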
- Original Message -
From: "Stefan Priebe, Profihost AG"
To: "Gregory Farnum"
Cc: "ceph-users"
Sent
Hi Eric,
I have never installed inkscope before (I'm currently using
collectd-influxdb-grafana).
With the new 1.4 release, I see that it's now possible to use collectd-influxdb
(great, I'm already using them),
but does it still need the old monitoring daemons (ceph-rest-api,
cephprobe, sysprobe, ...)
fr
Sorry guys, I solved the problem.
The issue was caused by high I/O wait on the local disks of the monitor servers.
I migrated the local disks from RAID 1 to RAID 5 to get more I/O.
(The leveldb store lives on the local disks of the monitor servers, and each
change in the map requires an update of the database.)
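For anyone hitting the same symptom, a quick way to confirm that the monitor's
local disk is the bottleneck (the mon name below is an example):

    # Per-device utilization and await times, 5 samples at 1-second intervals:
    iostat -x 1 5
    # Size of the monitor store (leveldb) on the local disk:
    du -sh /var/lib/ceph/mon/ceph-node1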
After changing the replication factor from 2 to 1,
the Ceph cluster (mons) doesn't respond to commands.
When I run ceph -s I get a timeout.
Everything was going well until I created an EC pool, and since then every
change to the replicated pools causes the cluster to become unresponsive,
with one core at 100% utilization with 80 %