On 07. sep. 2016 02:51, Vlad Blando wrote:
Hi,
I replaced a failed OSD and was trying to add it back to the pool; my
problem is that the OS is not detecting the physical disk. It looks like I
need to initialize it via the hardware RAID controller before I can see it in the OS.
If I'm going to restart the said
Could it be related to this? http://tracker.ceph.com/issues/13844
On Wed, Sep 7, 2016 at 7:40 AM, james wrote:
> Hi,
>
> Not sure if anyone can help clarify or provide any suggestion on how to
> troubleshoot this
>
We have a Ceph cluster recently built with Ceph version Jewel, 10.2.2.
> Based
Hello ceph people,
Yesterday I stopped one of my OSDs via
root@:~# systemctl stop ceph-osd@10
and tried to flush the journal for this OSD via
root@:~# ceph-osd -i 10 --flush-journal
but got this output on the screen:
SG_IO: questionable sense data, results may be incorrect
SG_IO: questio
Hi All
We are preparing an upgrade from Ceph Infernalis 9.2.0 to Ceph Jewel
10.2.2. Based on the upgrade procedure documentation at
http://docs.ceph.com/docs/jewel/install/upgrading-ceph/ it sounds easy.
But things often fail when you think they are easy. Therefore I would like to
know your opinion for the fol
Hi,
I think it's simpler to:
1) change the repository
2) apt-get dist-upgrade
3) restart mon on each node
4) restart osd on each node
done
I have upgraded 4 clusters like this without any problem.
I have never used ceph-deploy for upgrades.
----- Original Message -----
From: "felderm"
To: "ceph-
Well not entirely too late I guess :-(
I woke up this morning to see that two OTHER OSDs had been marked down
and out.
I again restarted the OSD daemons and things seem to be ok at this point.
I agree that I need to get to the bottom of why this happened.
I have uploaded the log files from
Hi,
just in case someone experiences the same problem: the only thing that
helped was a restart of the gateway. Only after I restarted it was I able to
create that bucket without the "access denied" error on other operations.
It seems RGW had some stale data cached in it.
Arvydas
On Tue, Sep 6, 2016 at 6:10 PM
Hi,
One of the use cases I'm currently testing is the possibility of replacing
an NFS storage cluster with a Ceph cluster.
The idea I have is to use a server as an intermediate gateway. On the
client side it will expose an NFS share, and on the Ceph side it will
mount CephFS using mount.ceph. The
I've had to force recreate some PGs on my cephfs data pool due to some
cascading disk failures in my homelab cluster. Is there a way to easily
determine which files I need to restore from backup? My metadata pool is
completely intact.
Thanks for any help and suggestions.
Sincerely,
Michael
Have you seen this:
https://github.com/nfs-ganesha/nfs-ganesha/wiki/Fsalsupport#CEPH
On Wed, Sep 7, 2016 at 3:30 PM, jan hugo prins wrote:
> Hi,
>
> One of the use cases I'm currently testing is the possibility of replacing
> an NFS storage cluster with a Ceph cluster.
>
> The idea I have is to
I want to access Ceph cluster RBD images via the Python interface. In a
standalone simple Python script this works without problems. However, I want
to create a plugin for Bareos backup, where this does not work and the
cluster configuration exits with an error:
cluster =
rados.Rados(rados_id="admin",clustername="
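If the standalone script works but the plugin does not, one thing worth
checking is the environment the plugin runs in: python-rados only reads a
ceph.conf when conffile is passed explicitly, and the user the Bareos daemon
runs as needs read access to the keyring referenced there. A minimal
connection sketch for comparison (pool name and paths below are just the
usual defaults, not taken from the post):

import rados
import rbd

# Minimal python-rados/python-rbd connection sketch; an illustration only,
# not the Bareos plugin code. conffile is passed explicitly because the
# bindings do not read a configuration file unless asked to.
cluster = rados.Rados(rados_id="admin",
                      clustername="ceph",
                      conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("rbd")    # pool name assumed for the example
    try:
        print(rbd.RBD().list(ioctx))     # list RBD image names in the pool
    finally:
        ioctx.close()
finally:
    cluster.shutdown()

If that runs cleanly as the same system user the Bareos daemon uses, the
problem is more likely permissions or paths in the plugin environment than
the API call itself.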
Hey again,
now I have stopped my osd.12 via
root@:~# systemctl stop ceph-osd@12
and when I flush the journal...
root@:~# ceph-osd -i 12 --flush-journal
SG_IO: questionable sense data, results may be incorrect
SG_IO: questionable sense data, results may be incorrect
*** Caught signal (Segmen
Thanks
/vlad
On Wed, Sep 7, 2016 at 9:47 AM, LOPEZ Jean-Charles
wrote:
> Hi,
>
> the stray replicas will be automatically removed in the background.
>
> JC
>
> On Sep 6, 2016, at 17:58, Vlad Blando wrote:
>
> Sorry about that
>
> It's all set now. I thought that was the replica count as it is also
On 09/06/2016 10:16 PM, Dan Jakubiec wrote:
Hello, I need to issue the following commands on millions of objects:
rados_write_full(oid1, ...)
rados_setxattr(oid1, "attr1", ...)
rados_setxattr(oid1, "attr2", ...)
Would it make it any faster if I combined all 3 of these into a single
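For reference, here is what those three steps look like per object from
python-rados (pool and object names are made up for the sketch). Each call is
its own round trip; the librados C API can also fold them into one compound
operation with rados_create_write_op(), rados_write_op_write_full(),
rados_write_op_setxattr() and rados_write_op_operate(), which should cut the
per-object round trips from three to one:

import rados

# Per-object sequence via python-rados; illustration only, with an assumed
# pool ("mypool") and placeholder payload/xattr values.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="admin")
cluster.connect()
ioctx = cluster.open_ioctx("mypool")
try:
    oid = "oid1"
    ioctx.write_full(oid, b"payload")          # rados_write_full(oid1, ...)
    ioctx.set_xattr(oid, "attr1", b"value1")   # rados_setxattr(oid1, "attr1", ...)
    ioctx.set_xattr(oid, "attr2", b"value2")   # rados_setxattr(oid1, "attr2", ...)
finally:
    ioctx.close()
    cluster.shutdown()

Whether folding them into one compound operation actually helps at the scale
of millions of objects is worth benchmarking on a sample first.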
I have clients accessing CephFS over NFS (kernel NFS). I was seeing slow
writes with sync exports. I haven't had a chance to investigate, and in the
meantime I'm exporting with async (not recommended, but acceptable in my
environment).
I've been meaning to test out Ganesha for a while now
@Sean, h
On Wed, Sep 7, 2016 at 3:30 PM, jan hugo prins wrote:
> Hi,
>
> One of the use cases I'm currently testing is the possibility of replacing
> an NFS storage cluster with a Ceph cluster.
>
> The idea I have is to use a server as an intermediate gateway. On the
> client side it will expose an NFS share
Hey Henry, wait a sec… That R101 is for our Nagios node, not a part of the Ceph
monitoring nodes. So both the R133 and the one R101 should have redundant
power supplies. Make sense?
Cheers
Mike
> On Sep 7, 2016, at 10:55 AM, Henry Figueroa
> wrote:
>
> Mike
> The monitoring node (R101.v6) do
Hey cephers,
For those who are attending the OpenStack Summit in Barcelona this October
and have not yet purchased your ticket, I wanted to share a 20%
discount code that has just been provided to Red Hat and that we can
freely share. The code you need to enter is:
RED_OPENSTACKSUMMIT
This 20% discount c
And since my url pasting failed on the Eventbrite link, here is the
registration link:
https://openstacksummit2016barcelona.eventbrite.com/
On Wed, Sep 7, 2016 at 2:30 PM, Patrick McGarry wrote:
> Hey cephers,
>
> For those who are attending OpenStack Summit in Barcelona this October
> and hav
Hey cephers,
Tonight the CDM meeting is at an APAC-friendly time slot (9p EDT), so
please drop a quick 1-liner and pad link if you have something to
discuss. Thanks!
http://tracker.ceph.com/projects/ceph/wiki/CDM_07-SEP-2016
--
Best Regards,
Patrick McGarry
Director Ceph Community || Red Hat
h
On Wed, Sep 7, 2016 at 7:44 AM, Michael Sudnick
wrote:
> I've had to force recreate some PGs on my cephfs data pool due to some
> cascading disk failures in my homelab cluster. Is there a way to easily
> determine which files I need to restore from backup? My metadata pool is
> completely intact.
Hello,
On Wed, 7 Sep 2016 08:38:24 -0400 Shain Miley wrote:
> Well not entirely too late I guess :-(
>
Then re-read my initial reply and see if you can find something in other
logs (syslog/kernel) to explain this.
Also check whether those OSDs are all on the same node; they may have missed
their upgrade
Hi Greg...
I've had to force recreate some PGs on my cephfs data pool due to some
cascading disk failures in my homelab cluster. Is there a way to easily
determine which files I need to restore from backup? My metadata pool is
completely intact.
Assuming you're on Jewel, run a recursive "scru
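If the scrub route does not pan out, a low-tech alternative is sketched below.
It assumes the default file layout (data objects named "<inode-hex>.<block-index>"
in 4 MiB chunks), an intact metadata pool, a mounted CephFS, and hypothetical
pool/mount names: walk the filesystem and stat each file's backing objects with
python-rados; any file with a missing object is a restore candidate.

import os
import rados

# Sketch: flag CephFS files whose backing data objects are missing.
# Assumes the default layout where file data lives in objects named
# "<inode-hex>.<block-index-as-08x>" of 4 MiB each; adjust OBJECT_SIZE
# if a custom layout is in use. Sparse files may show up as false
# positives, since holes legitimately have no backing objects.
OBJECT_SIZE = 4 * 1024 * 1024
MOUNT = "/mnt/cephfs"        # hypothetical mount point
DATA_POOL = "cephfs_data"    # hypothetical data pool name

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="admin")
cluster.connect()
ioctx = cluster.open_ioctx(DATA_POOL)

damaged = set()
for root, dirs, files in os.walk(MOUNT):
    for name in files:
        path = os.path.join(root, name)
        if os.path.islink(path):
            continue
        st = os.lstat(path)
        if st.st_size == 0:
            continue
        nblocks = (st.st_size + OBJECT_SIZE - 1) // OBJECT_SIZE
        for block in range(nblocks):
            oid = "{:x}.{:08x}".format(st.st_ino, block)
            try:
                ioctx.stat(oid)              # raises if the object is gone
            except rados.ObjectNotFound:
                damaged.add(path)
                break

ioctx.close()
cluster.shutdown()
print("\n".join(sorted(damaged)))

This is slow on large trees, but it only reads object metadata, not the data
itself.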