I am getting this too, but on CentOS 7... fighting to get the 0.80.6
update, but EPEL blocks Ceph repos.
Marco Garcês
#sysadmin
Maputo - Mozambique
On Mon, Oct 13, 2014 at 8:20 AM, 10 minus wrote:
> Hi ,
>
> I have observed that the latest ceph packages from ceph are being blocked by
> ceph pack
On 13/10/2014 09:39, Marco Garcês wrote:
> I am getting this too, but on CentOS 7... fighting to get the 0.80.6
> update, but EPEL blocks Ceph repos.
Hi,
Could you please paste the shell session or the error message? Maybe it's
because I'm new to CentOS but ... I don't know what "blocked" pack
You all do realize of course that you should NOT install 0.80.6 and wait
for .7 instead, right?
As in this mail from Saturday:
http://permalink.gmane.org/gmane.comp.file-systems.ceph.devel/21443
On Mon, 13 Oct 2014 09:39:11 +0200 Marco Garcês wrote:
> I am getting this too, but on CentOS 7... f
On Fri, Oct 10, 2014 at 12:09 AM, Christopher Armstrong
wrote:
> Turns out we need to explicitly list --privileged in addition to the other
> flags. Here's how it runs now:
>
> docker run --name deis-store-volume --rm -e HOST=$COREOS_PRIVATE_IPV4 --net
> host --privileged -v /dev:/dev -v /sys:/sys
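(For anyone following along, the general shape of the invocation is roughly the
sketch below. The image name and any trailing arguments are placeholders, since
the original command is cut off above.)

  # Hypothetical sketch: run the store container with host networking,
  # full privileges and access to the host's /dev and /sys.
  docker run --name deis-store-volume --rm \
    -e HOST=$COREOS_PRIVATE_IPV4 \
    --net host \
    --privileged \
    -v /dev:/dev \
    -v /sys:/sys \
    example/deis-store-volume   # placeholder image name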
Hey all,
I just saw this thread, I’ve been working on this and was about to share it:
https://etherpad.openstack.org/p/kilo-ceph
Since the ceph etherpad is down I think we should switch to this one as an
alternative.
Loic, feel free to work on this one and add more content :).
On 13 Oct 2014,
Hi,
I've deployed a number of clusters before on debian boxes, but now I
have to create one on centos boxes (not my choice) and I've run into a
road block.
I have started with the usual steps:
- complete preflight
- ceph-deploy new node01
- ceph-deploy install node01
- ceph-deploy mon create-ini
Wow... disregard this, I figured it out.
Overly restrictive iptables rules permitted incoming traffic from lo,
but outgoing traffic "to" lo was blocked... *facepalm*
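For reference, the usual way to allow loopback traffic in both directions is
something like the following (a generic sketch, not the exact rules from this box):

  # Accept all traffic arriving on and leaving via the loopback interface
  iptables -A INPUT  -i lo -j ACCEPT
  iptables -A OUTPUT -o lo -j ACCEPT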
On 13/10/2014 15:09, Marc wrote:
> Hi,
>
> I've deployed a number of clusters before on debian boxes, but now I
> have to create
On 10.10.2014 02:19, Marcus White wrote:
>
> For VMs, I am trying to visualize how the RBD device would be exposed.
> Where does the driver live exactly? If it's exposed via libvirt and
> QEMU, does the kernel driver run in the host OS, and communicate with
> a backend Ceph cluster? If yes, does li
Great!
Some more follow-ups :)
1. In what stack is the driver used in that case if QEMU communicates
directly with librados?
2. With QEMU-librados I would guess the new kernel targets/LIO would
not work? They give better performance and lower CPU usage.
3. Where is the kernel driver used in that case?
On 13.10.2014 16:47, Marcus White wrote:
>
> 1. In what stack is the driver used in that case if QEMU communicates
> directly with librados?
The qemu process directly communicates with the Ceph cluster via
network. It is a "normal" userland process when it comes to the host kernel.
> 2. With QEM
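To make the "normal userland process" point concrete: on the librbd path the
disk is usually handed to qemu through a libvirt definition along these lines
(a generic sketch; pool, image, monitor host and secret UUID are placeholders):

  <disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    <source protocol='rbd' name='rbd/vm-image-1'>
      <host name='mon1.example.com' port='6789'/>
    </source>
    <auth username='libvirt'>
      <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
    </auth>
    <target dev='vda' bus='virtio'/>
  </disk>

No kernel rbd module is involved here; qemu links against librbd/librados and
talks to the monitors and OSDs over the network itself.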
ceph auth list on the gateway node shows the following. I think I am using the
correct name in ceph.conf.
gateway@gateway:~$ ceph auth list
installed auth entries:
client.admin
key: AQBL3SxUiMplMxAAjrL6oT+0Q5JtdrD90toXqg==
caps: [mds] allow
caps: [mon] allow *
caps:
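For comparison, a radosgw keyring created the way the docs describe usually
shows up in 'ceph auth list' roughly like this (a sketch; the entity name just
has to match what ceph.conf and the keyring file use, and the key is a placeholder):

  client.radosgw.gateway
          key: <generated key>
          caps: [mon] allow rwx
          caps: [osd] allow rwx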
There's also a ceph-related session proposed for the 'Ops meetup'
track. The track itself has several rooms over two days, though the
schedule isn't finalized yet.
I believe there's still space for more working groups if anyone
wants to set up an ops-focused ceph working group in addition to the
de
We've been doing a lot of work on CephFS over the past few months. This
is an update on the current state of things as of Giant.
What we've been working on:
* better mds/cephfs health reports to the monitor
* mds journal dump/repair tool
* many kernel and ceph-fuse/libcephfs client bug fixes
* file si
On 13-10-14 20:16, Sage Weil wrote:
> We've been doing a lot of work on CephFS over the past few months. This
> is an update on the current state of things as of Giant.
>
> What we've been working on:
>
> * better mds/cephfs health reports to the monitor
> * mds journal dump/repair tool
> * many kerne
On Mon, 13 Oct 2014, Wido den Hollander wrote:
> On 13-10-14 20:16, Sage Weil wrote:
> > With Giant, we are at a point where we would ask that everyone try
> > things out for any non-production workloads. We are very interested in
> > feedback around stability, usability, feature gaps, and performa
Hi List,
I have a ceph cluster set up with two networks, one for public traffic
and one for cluster traffic.
Network failures in the public network are handled quite well, but
network failures in the cluster network are handled very badly.
I found several discussions on the ml about this topic and
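For context, a two-network setup like this is normally declared in ceph.conf
along these lines (placeholder subnets):

  [global]
      public network = 192.168.1.0/24
      cluster network = 10.0.0.0/24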
I would be interested in testing the Samba VFS and Ganesha NFS
integration with CephFS. Are there any notes on how to configure these
two interfaces with CephFS?
Eric
We've been doing a lot of work on CephFS over the past few months.
This
is an update on the current state of things as of G
On Mon, Oct 13, 2014 at 11:32 AM, Martin Mailand wrote:
> Hi List,
>
> I have a ceph cluster set up with two networks, one for public traffic
> and one for cluster traffic.
> Network failures in the public network are handled quite well, but
> network failures in the cluster network are handled ver
Hi Greg,
I took down the interface with "ifconfig p7p1 down".
I attached the config of the first monitor and the first osd.
I created the cluster with ceph-deploy.
The version is ceph version 0.86 (97dcc0539dfa7dac3de74852305d51580b7b1f82).
On 13.10.2014 21:45, Gregory Farnum wrote:
> How did you
On Mon, 13 Oct 2014, Eric Eastman wrote:
> I would be interested in testing the Samba VFS and Ganesha NFS integration
> with CephFS. Are there any notes on how to configure these two interfaces
> with CephFS?
For samba, based on
https://github.com/ceph/ceph-qa-suite/blob/master/tasks/samba.py#L1
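Based on that qa-suite task, the share section of smb.conf ends up looking
roughly like this (a sketch; the share name, cephx user and paths are placeholders):

  [cephfs]
      path = /
      vfs objects = ceph
      ceph:config_file = /etc/ceph/ceph.conf
      ceph:user_id = samba.gateway
      read only = no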
On 10/13/2014 4:56 PM, Sage Weil wrote:
On Mon, 13 Oct 2014, Eric Eastman wrote:
I would be interested in testing the Samba VFS and Ganesha NFS integration
with CephFS. Are there any notes on how to configure these two interfaces
with CephFS?
For ganesha I'm doing something like:
FSAL
{
CE
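For anyone else trying this, a ganesha CephFS export generally has the shape
below (a sketch; Export_ID, paths and access settings are placeholders, and the
exact option names vary a bit between ganesha versions):

  EXPORT
  {
      Export_ID = 1;
      Path = "/";
      Pseudo = "/cephfs";
      Access_Type = RW;
      FSAL
      {
          Name = CEPH;
      }
  }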
Bump :). It would be helpful if someone could share info related to debugging
using counters/stats.
On Sun, Oct 12, 2014 at 7:42 PM, Jakes John
wrote:
> Hi All,
> I would like to know if there are useful performance counters in
> ceph which can help to debug the cluster. I have seen hundr
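In the meantime, the per-daemon counters are exposed through the admin socket,
e.g. (generic examples; adjust the daemon name and socket path to your setup):

  # dump all performance counters of osd.0 via its admin socket
  ceph daemon osd.0 perf dump
  # the same, addressed by socket path
  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump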
Well, that certainly looks OK. So entries in [client.radosgw.gateway]
*should* work. If they aren't working, that points to something else not
set up right on the ceph or radosgw side.
What version of ceph is this?
I'd do the following:
- check all ceph hosts have the same ceph version running
- re
Following the manual starter guide, I set up a Ceph cluster with HEALTH_OK,
(1 mon, 2 osd). In testing out auth commands I misconfigured the
client.admin key by accidentally deleting "mon 'allow *'".
Now I'm getting EACCES denied for all ceph actions.
Is there a way to recover or recreate a new
On 14-10-14 00:53, Anthony Alba wrote:
> Following the manual starter guide, I set up a Ceph cluster with HEALTH_OK,
> (1 mon, 2 osd). In testing out auth commands I misconfigured the
> client.admin key by accidentally deleting "mon 'allow *'".
>
> Now I'm getting EACCES denied for all ceph actio
I have Ceph version 0.85. I can still talk to this gateway node as shown below
using swift v1.0. Note that this user was created using radosgw-admin.
swift -V 1.0 -A http://gateway.ex.com/auth/v1.0 -U s3User:swiftUser -K
CRV8PeotaW204nE9IyutoVTcnr+2Uw8M8DQuRP7i list
my-Test
I am at a total loss now.
On Mon, Oct 13, 2014 at 4:04 PM, Wido den Hollander wrote:
> On 14-10-14 00:53, Anthony Alba wrote:
>> Following the manual starter guide, I set up a Ceph cluster with HEALTH_OK,
>> (1 mon, 2 osd). In testing out auth commands I misconfigured the
>> client.admin key by accidentally deleting "mon
Hi,
# First a short description of our Ceph setup
You can skip to the next section ("Main questions") to save time and
come back to this one if you need more context.
We are currently moving away from DRBD-based storage backed by RAID
arrays to Ceph for some of our VMs. Our focus is on resilienc
On 14/10/2014 01:28, Lionel Bouton wrote:
> Hi,
>
> # First a short description of our Ceph setup
>
> You can skip to the next section ("Main questions") to save time and
> come back to this one if you need more context.
Missing important piece of information: this is Ceph 0.80.5 (guessable
as
That's the same version that I'm using.
Did you check the other points I mentioned:
- check *all* ceph hosts are running the same version
- restart 'em all to be sure
I did think that your 'auth list' output looked strange, but I guessed
that you had cut out the osd and mon info before placing
> You can disable cephx completely, fix the key and enable cephx again.
>
> auth_cluster_required, auth_service_required and auth_client_required
That did not work, i.e. disabling cephx in the cluster conf and
restarting the cluster.
The cluster still complained about failed authentication.
>I *be
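If disabling cephx doesn't take effect, another route that is often suggested
is to authenticate as the mon. entity (its key lives in the monitor data
directory) and restore the caps from there, roughly (a hedged sketch; the
keyring path and the <host> placeholder will differ on your boxes):

  # use the monitor's own key to bypass the broken client.admin caps
  ceph -n mon. -k /var/lib/ceph/mon/ceph-<host>/keyring \
      auth caps client.admin mon 'allow *' osd 'allow *' mds 'allow'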
Hi Sebastien,
Great! Let's all join under this pad :-)
Cheers
On 13/10/2014 02:40, Sebastien Han wrote:
> Hey all,
>
> I just saw this thread, I’ve been working on this and was about to share it:
> https://etherpad.openstack.org/p/kilo-ceph
> Since the ceph etherpad is down I think we should
I did restart the ceph cluster, only to see that the ceph health was NOT OK. I did
the purge operation and re-installed the ceph packages on all nodes. This time,
the ceph admin node has 0.80.6 and all other cluster nodes, including the Openstack
client node, have version 0.80.5. Same error logs as before -
2
Was that with you moving just rgw_keystone_url into [global]? If so, then
yeah, that won't work, as it will be missing your auth token etc. (so it will
always fail to authorize). You need to chase up why it is not seeing
some/all settings in the [client.radosgw.gateway] section.
I have a suspicion t
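For reference, the kind of section being discussed looks roughly like this
(a sketch with placeholder paths and endpoints; the point is that the
rgw_keystone_* and socket/log settings all live under the same
client.radosgw.gateway name that the keyring uses):

  [client.radosgw.gateway]
      host = gateway
      keyring = /etc/ceph/ceph.client.radosgw.keyring
      rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
      log file = /var/log/ceph/client.radosgw.gateway.log
      rgw keystone url = http://keystone.example.com:35357
      rgw keystone admin token = <placeholder token>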