Hello,
is there a way to get librados for macOS? Has anybody tried to build
librados for macOS? Is this even possible?
Best,
Martin
On Thu, Aug 3, 2017 at 5:21 PM, Martin Palma wrote:
> Hello,
>
> is there a way to get librados for macOS? Has anybody tried to build
> librados for macOS? Is this even possible?
Yes, it is eminently possible, but would require a dedicated effort.
As far as I know, no one is working on this.
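If someone did want to attempt it, a very rough sketch of a starting point
(untested; assumes the build dependencies can be satisfied via Homebrew and
that Linux-specific code gets patched along the way):

$ git clone https://github.com/ceph/ceph.git
$ cd ceph
$ ./do_cmake.sh
$ cd build
$ make rados    # builds librados plus the rados CLI that depends on it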
On 03/08/2017 09:36, Brad Hubbard wrote:
> On Thu, Aug 3, 2017 at 5:21 PM, Martin Palma wrote:
>> Hello,
>>
>> is there a way to get librados for macOS? Has anybody tried to build
>> librados for macOS? Is this even possible?
>
> Yes, it is eminently possible, but would require a dedicated effort.
Thanks for this -- it is indeed pretty close to what I was looking
for. I'll look in more detail at its heuristic to confirm that it
correctly tells you which OSDs are safe to remove.
BTW, I had to update all the maps to i64 from i32 to make this work --
I'll be sending a pull req.
-- Dan
On Fri, Jul 28, 2017 at 9:42 PM, Peter Maloney
wrote:
> Hello Dan,
>
> Based on what I know and what people told me on IRC, this means basically the
> condition that the OSD is neither acting nor up for any PG. And one person
> (fusl on IRC) said there was an unfound objects bug when he had siz
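For what it's worth, on a Luminous-era cluster that condition can be checked
directly (osd id 12 is just an example):

$ ceph osd safe-to-destroy osd.12   # reports whether the OSD can go without risking data
$ ceph pg ls-by-osd 12              # lists any PGs still mapped to this OSD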
Hello!
When setting up a Ceph filesystem, at least two RADOS pools are required: one
for data and one for metadata.
Example:
$ ceph osd pool create cephfs_data <pg_num>
$ ceph osd pool create cephfs_metadata <pg_num>
My question is regarding the value <pg_num>:
Should this value be equal for data and metadata?
Is my assump
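For illustration, the full sequence with explicit (made-up) PG counts,
including the final step that actually creates the filesystem, might look like:

$ ceph osd pool create cephfs_data 128
$ ceph osd pool create cephfs_metadata 32
$ ceph fs new cephfs cephfs_metadata cephfs_data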
On 08/03/17 11:05, Dan van der Ster wrote:
> On Fri, Jul 28, 2017 at 9:42 PM, Peter Maloney
> wrote:
>> Hello Dan,
>>
>> Based on what I know and what people told me on IRC, this means basically the
>> condition that the OSD is neither acting nor up for any PG. And one person
>> (fusl on IRC) said
On Thu, Aug 3, 2017 at 11:42 AM, Peter Maloney
wrote:
> On 08/03/17 11:05, Dan van der Ster wrote:
>
> On Fri, Jul 28, 2017 at 9:42 PM, Peter Maloney
> wrote:
>
> Hello Dan,
>
> Based on what I know and what people told me on IRC, this means basically the
> condition that the OSD is neither acting nor
Hello,
I was running a Ceph cluster with HDDs for the OSDs. Now I've created a new
dedicated SSD pool within the same cluster. Everything looks fine and the
cluster is "healthy", but if I try to create a new RBD image in this new SSD
pool, it just hangs. I've tried both the "rbd" command and the Proxmox
GUI; "rbd" just
> On 2 August 2017 at 17:55, Marcus Haarmann
> wrote:
>
>
> Hi,
> we are doing some tests here with a Kraken setup using the BlueStore backend
> (on Ubuntu 64-bit).
> We are trying to store > 10 million very small objects using RADOS
> (no FS, no RBD, only OSDs and monitors).
>
> The setup was
Hi all,
One thing which has bothered me since I began using Ceph is that a
reboot of a single OSD causes a HEALTH_ERR state in the cluster for at
least a couple of seconds.
In the case of a planned reboot of an OSD node, should I run some extra
commands in order not to go into the HEALTH_ERR state?
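The only pattern I've seen suggested elsewhere is something like this (a
sketch; the osd id is illustrative):

$ ceph osd set noout          # keep rebooting OSDs from being marked out
$ systemctl stop ceph-osd@3   # or simply reboot the whole node
$ # ... reboot / maintenance ...
$ ceph osd unset noout        # restore normal behaviour afterwards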
Hi everyone:
I just wonder: is the PG num calculation rule for an erasure-coded pool the
same as for a common (replicated) pool?
root ssds {
        id -9           # do not change unnecessarily
        # weight 0.000
        alg straw
        hash 0          # rjenkins1
}
It is empty in ssds!
rule ssdpool {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take ssds
        step chooseleaf firstn
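If the ssds root really has no OSDs under it, the ssdpool rule cannot map any
PGs, which would explain why "rbd create" hangs in that pool. A rough sketch of
placing devices under the empty root (the osd ids, weights and host names below
are made up):

$ ceph osd crush add-bucket node1-ssd host
$ ceph osd crush move node1-ssd root=ssds
$ ceph osd crush create-or-move osd.10 0.5 root=ssds host=node1-ssd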
Set the osd noout and nodown flags.
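That is, before taking the node down:

$ ceph osd set noout
$ ceph osd set nodown

and unset both once it is back up:

$ ceph osd unset noout
$ ceph osd unset nodown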
At 2017-08-03 18:29:47, "Hans van den Bogert" wrote:
Hi all,
One thing which has bothered me since I began using Ceph is that a reboot of a
single OSD causes a HEALTH_ERR state in the cluster for at least a couple of
seconds.
In the case of a planned reb
> On 3 August 2017 at 13:36, linghucongsong wrote:
>
>
>
>
> Set the osd noout and nodown flags.
>
While noout is correct and might help in some situations, never set nodown
unless you really need that. It will block I/O since you are taking down OSDs
which aren't marked as down.
In Hans's case
What are the implications of this? Because I can see a lot of blocked
requests piling up when using 'noout' and 'nodown'. That probably makes
sense though.
Another thing: now, when the OSDs come back online, I again see multiple
periods of HEALTH_ERR state. Is that to be expected?
On Thu, Aug 3, 201
Thanks for answering even before I asked the questions:)
So bottom line, HEALTH_ERR is simply part of taking a (bunch of) OSDs
down? Is a HEALTH_ERR period of 2-4 seconds within normal bounds? For
context, the CPUs are one Xeon E5-2609 v3 per 4 OSDs. (I know; they're far
from the fastest CPUs.)
On Thu, Aug 3,
> On 3 August 2017 at 14:14, Hans van den Bogert
> wrote:
>
>
> Thanks for answering even before I asked the questions:)
>
> So bottom line, HEALTH_ERR is simply part of taking a (bunch of) OSDs
> down? Is a HEALTH_ERR period of 2-4 seconds within normal bounds? For
> context, the CPUs are
Hello
Our goal is to make the storage as fast as possible.
For now, our configuration of 6 servers looks like this:
* 2 x CPU Intel Gold 6150 20 core 2.4Ghz
* 2 x 16 Gb NVDIMM DDR4 DIMM
* 6 x 16 Gb RAM DDR4
* 6 x Intel DC P4500 4Tb NVMe 2.5"
* 2 x Mellanox ConnectX-4 EN Lx 25Gb dualport
What a status in c
Yes. The only "difference" is that the number of replicas is k+m combined.
So if you have 6+2, then each PG will reside on 8 OSDs. The limitation is
how many PGs an OSD daemon is responsible for, which directly impacts its
memory requirements.
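As a worked example with made-up numbers: with 32 OSDs and a target of ~100 PGs
per OSD, a 6+2 pool places 8 copies per PG, so pg_num ≈ (32 × 100) / 8 = 400,
rounded up to the next power of two, i.e. 512. That is the same arithmetic as
for a replicated pool with size 8.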
On Thu, Aug 3, 2017, 6:32 AM Zhao Damon wrote:
> Hi e
I'm running Luminous 12.1.2 and I seem to be in a catch-22. I've got PGs
that report they need to be scrubbed; however, the command to scrub them
seems to have gone away. The flapping OSD is an issue for another thread.
Please advise.
Example:
roger@desktop:~$ ceph --version
ceph version 12.1.2 (b
Hello!
I have purged my ceph and reinstalled it.
ceph-deploy purge node1 node2 node3
ceph-deploy purgedata node1 node2 node3
ceph-deploy forgetkeys
All disks configured as OSDs are physically in two servers.
Due to some restrictions I needed to modify the total number of disks usable as
OSD, thi
Hi,
On 03.08.2017 16:31, c.mo...@web.de wrote:
Hello!
I have purged my ceph and reinstalled it.
ceph-deploy purge node1 node2 node3
ceph-deploy purgedata node1 node2 node3
ceph-deploy forgetkeys
All disks configured as OSDs are physically in two servers.
Due to some restrictions I needed to m
On 3 August 2017 at 16:37, "Burkhard Linke"
wrote:
> Hi,
>
> On 03.08.2017 16:31, c.mo...@web.de wrote:
>
>> Hello!
>>
>> I have purged my ceph and reinstalled it.
>> ceph-deploy purge node1 node2 node3
>> ceph-deploy purgedata node1 node2 node3
>> ceph-deploy forgetkeys
>>
>> All disks configu
Dear all,
I need to expand a Ceph cluster with minimal impact. Reading previous threads
on this topic from the list, I've found the ceph-gentle-reweight script
(https://github.com/cernceph/ceph-scripts/blob/master/tools/ceph-gentle-reweight)
created by Dan van der Ster (thank you Dan for sharin
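For reference, the manual procedure such a script automates is small CRUSH
weight steps with a wait for recovery in between (the osd id and step size
here are illustrative):

$ ceph osd crush reweight osd.42 0.2
$ ceph -s    # wait for HEALTH_OK, then repeat with a higher weight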
I believe that command should still work, but it looks like it requires a
working manager daemon. Did you set one up yet?
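(If not, one way to add one, with the node name as a placeholder:

$ ceph-deploy mgr create mon1

or, if the daemon is already installed, start it with
"systemctl start ceph-mgr@mon1".)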
On Thu, Aug 3, 2017 at 7:31 AM Roger Brown wrote:
> I'm running Luminous 12.1.2 and I seem to be in a catch-22. I've got pgs
> that report they need to be scrubbed, however t
Don't forget that at those sizes the internal journals and RocksDB size
tunings are likely to be a significant fixed cost.
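For instance, BlueStore's minimum allocation size puts a floor under every
object's on-disk footprint, so millions of tiny objects pay it in full. A
hedged sketch of shrinking it for such a workload (values illustrative; the
setting only takes effect when an OSD is created):

[osd]
bluestore min alloc size hdd = 4096
bluestore min alloc size ssd = 4096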
On Thu, Aug 3, 2017 at 3:13 AM Wido den Hollander wrote:
>
> > On 2 August 2017 at 17:55, Marcus Haarmann <
> marcus.haarm...@midoco.de> wrote:
> >
> >
> > Hi,
> > we are
Thank you. That was it. They were installed, but hadn't been restarted
since upgrading. Solved with: sudo systemctl restart ceph-mgr.target
On Thu, Aug 3, 2017 at 1:56 PM Gregory Farnum wrote:
> I believe that command should still work, but it looks like it requires a
> working manager daemon.
Thanks!
On 3 August 2017 at 21:50 +0800, David Turner wrote:
Yes. The only "difference" is that the number of replicas is k+m combined. So
if you have 6+2, then each PG will reside on 8 OSDs. The limitation is how many
PGs an OSD daemon is responsible for, which directly impacts its memory
requireme
Hi cephers,
I saw the ceph status go to HEALTH_ERR because of a PG scrub error.
I thought all I/O was blocked whenever Ceph's status is in error.
However, Ceph could operate normally even though it was in an error state.
There are two pools in the Ceph cluster, which include separate
nodes. (volume
Depends on the error case – usually you will see blocked IO messages as well if
there is a condition causing OSDs to be unresponsive.
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of ???
Sent: Friday, 4 August 2017 1:34 PM
To: ceph-users@lists.ceph.com
Subject: [ceph-use
HEALTH_ERR only indicates that a request might block, not that it will. In
the case of a scrub error, only requests made to the objects flagged as
inconsistent in the failed PG will block. The rest of the objects in that
PG will work fine even though the PG has a scrub error.
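For completeness, the usual way to locate and clear the inconsistency (the PG
id below is a placeholder):

$ ceph health detail    # names the inconsistent PG(s)
$ ceph pg repair 2.1f   # asks the primary OSD to repair that PG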
On Fri, Aug 4, 2017,