On 08/14/2014 08:18 AM, Nigel Williams wrote:
Anyone know if this is safe in the short term? We're rebuilding our
nova-compute nodes and can make sure the Dumpling versions are pinned
as part of the process in the future.
The client should be fully backwards compatible. It figures out which
fe
On 08/13/2014 11:36 PM, Christian Balzer wrote:
Hello,
On Thu, 14 Aug 2014 03:38:11 +0000 David Moreau Simard wrote:
Hi,
Trying to update my continuous integration environment.. same deployment
method with the following specs:
- Ubuntu Precise, Kernel 3.2, Emperor (0.72.2) - Yields a succes
Hello,
On Wed, 13 Aug 2014 14:55:29 +0100 James Eckersall wrote:
> Hi Christian,
>
> Most of our backups are rsync or robocopy (windows), so they are
> incremental file-based backups.
> There will be a high level of parallelism as the backups run mostly
> overnight with similar start times.
In
Same question here.
I'm a contributor to Proxmox, and we don't know if we can safely upgrade librbd
for users with Dumpling clusters.
Also, for Ceph Enterprise, does Inktank support Dumpling enterprise +
Firefly librbd?
- Original Message -
From: "Nigel Williams"
To: ceph-users@lis
Anyone know if this is safe in the short term? We're rebuilding our
nova-compute nodes and can make sure the Dumpling versions are pinned
as part of the process in the future.
Hello,
On Thu, 14 Aug 2014 03:38:11 +0000 David Moreau Simard wrote:
> Hi,
>
> Trying to update my continuous integration environment.. same deployment
> method with the following specs:
> - Ubuntu Precise, Kernel 3.2, Emperor (0.72.2) - Yields a successful,
> healthy cluster.
> - Ubuntu Trusty
Hi,
Trying to update my continuous integration environment.. same deployment method
with the following specs:
- Ubuntu Precise, Kernel 3.2, Emperor (0.72.2) - Yields a successful, healthy
cluster.
- Ubuntu Trusty, Kernel 3.13, Firefly (0.80.5) - I have stuck placement groups.
Here’s some releva
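A few commands that usually show why PGs are stuck (the pg id below is just a placeholder):
  ceph health detail
  ceph pg dump_stuck unclean        # also takes: inactive, stale
  ceph pg <pgid> query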
Hi Ceph Users,
We have deployed a cloud infrastructure and we are
using Ceph (version 0.80.1) for the storage solution and
OpenNebula (version 4.6.1) for the compute nodes. The Ceph pool is configured
to have a replication of 3.
We observed that one OSD went down. We
checked the VMs (running on
Hi Kenneth,
Could you give your configuration related to EC and KeyValueStore?
Not sure whether it's a bug in KeyValueStore.
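For reference, a hedged way to pull that information out of the running cluster (the profile name "default" is an assumption; a custom profile may be in use):
  ceph osd erasure-code-profile ls
  ceph osd erasure-code-profile get default    # dump the EC profile parameters
  ceph --show-config | grep keyvaluestore      # keyvaluestore-dev tunables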
On Thu, Aug 14, 2014 at 12:06 AM, Kenneth Waegeman
wrote:
> Hi,
>
> I was doing some tests with rados bench on an Erasure Coded pool (using
> keyvaluestore-dev objectstore) on
Hi,
It cannot be changed because it is compiled in
https://github.com/ceph/ceph/blob/firefly/src/osd/osd_types.h#L810
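The pool type can still be given explicitly at creation time, for example (pool name, PG counts and profile name are placeholders):
  ceph osd pool create ecpool 128 128 erasure
  ceph osd pool create ecpool 128 128 erasure myprofile    # with a specific EC profile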
Cheers
On 13/08/2014 20:44, Shayan Saeed wrote:
> Hi,
>
> Is there a way to set the default pool type as erasure instead of replicated?
>
> Regards,
> Shayan Saeed
On 08/11/2014 01:14 PM, Zach Hill wrote:
Thanks for the info! Great data points. We will still recommend a
separated solution, but it's good to know that some have tried to
unify compute and storage and have had some success.
Yes, and using the drives on compute nodes for backup is a seductive idea.
If the journal is lost, the OSD is lost. This can be a problem if you use
1 SSD for journals for many OSDs.
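For a planned SSD swap, as opposed to an outright failure, the journal can usually be flushed and recreated; a rough sketch, assuming osd.3 and the sysvinit script:
  service ceph stop osd.3
  ceph-osd -i 3 --flush-journal    # only possible while the old journal is still readable
  # repoint /var/lib/ceph/osd/ceph-3/journal at the new partition, then:
  ceph-osd -i 3 --mkjournal
  service ceph start osd.3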
There has been some discussion about making the OSDs able to recover from a
lost journal, but I haven't heard anything else about it. I haven't been
paying much attention to the developer
Also, even an "ls -ltr" inside the /mnt directory where the RBD is mounted
freezes the prompt. Any ideas? I've attached some syslogs from one of
the OSD servers and also from the client. Both are running Ubuntu
14.04 LTS with kernel 3.15.8.
The cluster is not usable at this point, since I can't run a
Hi,
Is there a way to set the default pool type as erasure instead of replicated?
Regards,
Shayan Saeed
I believe we haven't built on CentOS7 for firefly yet, but you could
(I think) point to the rhel7 repos in the meantime.
You would need to use the `--repo-url` along with the `--gpg-url` to
point to the rhel7 repos and take it from there.
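Roughly something like the following; the repo and key URLs here are assumptions, so double-check them against ceph.com before using:
  ceph-deploy install --repo-url http://ceph.com/rpm-firefly/rhel7 \
      --gpg-url 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' nfv2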
On Wed, Aug 13, 2014 at 2:15 PM, Ojwang, Wilson O (Wilson
Alfredo,
Thanks for your quick response.
Yes, this is a CentOS7 box and I did upgrade to ceph-deploy 1.5.11 which was
released today, but I still see the error below.
===\
[ceph@nfv3 ~]$ ceph-deploy --version
1.5.11
[ceph@nfv1 my-cluster]$ ceph-deploy install nfv2
[ceph_de
Yes, ceph pg query, not dump. Sorry about that.
Are you having problems with OSD stability? There's a lot of history in
the [recovery_state][past_intervals]. That's normal when OSDs go down, and
out, and come back up and in. You have a lot of history there. You might
even be getting into the po
I assume this was on a CentOS7 box? (a full paste of the ceph-deploy
output would tell)
If that is the case, you should use ceph-deploy 1.5.11 which was released today.
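Depending on how ceph-deploy was installed (an assumption either way), one of these should pick up 1.5.11:
  sudo yum update ceph-deploy        # if it came from the ceph-noarch repo
  sudo pip install -U ceph-deploy    # if it was installed via pip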
On Wed, Aug 13, 2014 at 11:24 AM, Ojwang, Wilson O (Wilson)
wrote:
> All,
>
>
>
> Need help with the error below that I encount
Robert, thanks for your reply. Please see my comments inline.
- Original Message -
> From: "Robert van Leeuwen"
> To: "Andrei Mikhailovsky" , ceph-users@lists.ceph.com
> Sent: Wednesday, 13 August, 2014 6:57:57 AM
> Subject: RE: cache pools on hypervisor servers
> > I was hoping to get
Hi,
I was doing some tests with rados bench on an Erasure Coded pool (using
keyvaluestore-dev objectstore) on 0.83, and I see some strange things:
[root@ceph001 ~]# ceph status
cluster 82766e04-585b-49a6-a0ac-c13d9ffd0a7d
health HEALTH_WARN too few pgs per osd (4 < min 20)
monma
All,
Need help with the error below that I encountered while executing the command
"ceph-deploy install host"
==\
[nfv2][INFO ] Running command: sudo rpm -Uvh --replacepkgs
http://ceph.com/rpm-firefly/el7/
noarch/ceph-release-1-0.el7.noarch.rpm
[nfv2][DEBUG ] Ret
Actually it's very strange: if I run the fio test on the client,
and in parallel run iostat on all the OSD servers, I don't see
any workload going on over the disks, I mean... nothing! 0.00. And
the fio script on the client is also behaving very strangely:
$ sudo fio --filename=/d
On 08/13/2014 08:19 AM, German Anders wrote:
Hi to all,
I'm seeing some particular behavior on a new Ceph cluster. I've mapped
an RBD to a client and issued some performance tests with fio; at this
point everything goes just fine (also the results :) ), but then I try
to run another new test on a
Hi Christian,
Most of our backups are rsync or robocopy (windows), so they are
incremental file-based backups.
There will be a high level of parallelism as the backups run mostly
overnight with similar start times.
So far I've seen high iowait on the samba head we are using, but low osd
resource u
Hi All,
This is a bug-fix release of ceph-deploy; it primarily addresses the
issue of failing to install
Ceph on CentOS7 distros.
Make sure you update!
-Alfredo
Hi to all,
I'm seeing some particular behavior on a new Ceph cluster. I've mapped
an RBD to a client and issued some performance tests with fio; at this
point everything goes just fine (also the results :) ), but then I try
to run another new test on a new RBD on the same client, and suddenly
t
Hi,
I want to create a cluster with some osds running with a keyvalue
store, and some others with a filestore. I want to deploy these with
ceph-deploy (scripted).
To do this, I have to put 'osd objectstore = keyvaluestore-dev' in the
config file. But I don't want this for each osd, so I can
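One approach that may work here (a sketch with placeholder OSD ids) is a per-daemon section in ceph.conf, since [osd.N] settings override the [osd] defaults:
  [osd]
  osd objectstore = filestore
  [osd.10]
  osd objectstore = keyvaluestore-dev
  [osd.11]
  osd objectstore = keyvaluestore-dev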
On Wed, 13 Aug 2014 12:47:22 +0100 James Eckersall wrote:
> Hi Christian,
>
> We're actually using the following chassis:
> http://rnt.de/en/bf_xxlarge.html
>
Ah yes, one of the Blazeback heritage.
But rather better designed and thought through than most of them.
Using the motherboard SATA3
You can find the related code in osd/PG.cc; it controls the state of the PG.
On Wed, Aug 13, 2014 at 5:17 PM, yuelongguang wrote:
> 2014-08-11 10:17:04.591497 7f0ec9b4f7a0 10 osd.0 pg_epoch: 153 pg[5.63(
> empty local-les=153 n=0 ec=81 les/c 153/153 152/152/152) [0] r=0 lpr=153
> crt=0'0 mlcod
If I remember correctly, there is an option to control it. I don't
have an environment to verify right now, but you can look it up via:
ceph --show-config | grep concurrent
An option called "rbd_*concurrent_ops" will cover it.
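For what it's worth, the specific name is probably rbd_concurrent_management_ops (not verified against this release, so treat it as an assumption); it can be lowered on the client doing the rollback:
  ceph --show-config | grep rbd_concurrent
  # e.g. in ceph.conf on that client, under [client]:
  #   rbd concurrent management ops = 2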
On Wed, Aug 13, 2014 at 6:01 PM, Dietmar Maurer wrote:
> Hi all,
>
>
>
> I just n
Hi,
> Have you confirmed that if you unmount cephfs on /srv/micha the NFS export
works?
Yes, I'm probably hitting this bug: http://tracker.ceph.com/issues/7750
Micha Krause
On Wed, 13 Aug 2014, Micha Krause wrote:
> Hi,
>
> any ideas?
Have you confirmed that if you unmount cephfs on /srv/micha the NFS export
works?
sage
>
> Micha Krause
>
> On 11.08.2014 16:34, Micha Krause wrote:
> > Hi,
> >
> > I'm trying to build a cephfs to nfs gateway, but somehow I can'
Hi Christian,
We're actually using the following chassis:
http://rnt.de/en/bf_xxlarge.html
So yes, there are SAS expanders. There are 4 expanders; one is used for the
SSDs and the other three are for the SATA drives.
The 4 SSDs for the OSDs are mounted at the back of the chassis, along
with th
Hi,
I'm trying to build a cephfs to nfs gateway, but somehow I can't mount the share
if it is backed by cephfs:
mount ngw01.ceph:/srv/micha /mnt/tmp/
mount.nfs: Connection timed out
cephfs mount on the gateway:
10.210.32.11:6789:/ngw on /srv type ceph
(rw,relatime,name=cephfs-ngw,secret=,no
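One thing that may be worth checking (an assumption, not a confirmed fix): the kernel NFS server usually wants an explicit fsid= for exports that are not backed by a local block device, e.g. in /etc/exports:
  /srv/micha  *(rw,no_subtree_check,fsid=1234)    # fsid value is arbitrary but must be unique per export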
Hi all,
I just noticed that a snapshot rollback produces very high load on small
clusters. It seems
all OSDs copy data at full speed, and client access speed drops from 480MB/s
to 10MB/s.
Is there a way to limit rollback speed/priority?
Hello,
On Wed, 13 Aug 2014 09:15:34 +0100 James Eckersall wrote:
> Hi,
>
> I'm looking for some advice on my ceph cluster.
>
> The current setup is as follows:
>
> 3 mon servers
>
> 4 storage servers with the following spec:
>
> 1x Intel Xeon E5-2640 @2.50GHz 6 core (12 with hyperthreading)
2014-08-11 10:17:04.591497 7f0ec9b4f7a0 10 osd.0 pg_epoch: 153 pg[5.63( empty
local-les=153 n=0 ec=81 les/c 153/153 152/152/152) [0] r=0 lpr=153 crt=0'0
mlcod 0'0 inactive] null
2014-08-11 10:17:04.591501 7f0eb2b8f700 5 osd.0 pg_epoch: 155 pg[0.10( empty
local-les=153 n=0 ec=1 les/c 153/153 152
I deployed Ceph with Chef, but sometimes monitors failed to join the cluster.
The setup steps:
First, I deployed monitors on two hosts (lc001 and lc003) and that succeeded.
Then, I added two monitors (lc002 and lc004) to the cluster about 30 minutes
later.
I used the same ceph-cookbook, but three diffe
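A couple of commands that may help show why a monitor never joins quorum (host name and socket path below assume the defaults from this setup):
  ceph mon stat
  ceph --admin-daemon /var/run/ceph/ceph-mon.lc002.asok mon_status    # run on the new mon host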
On Mon, Aug 11, 2014 at 10:34 PM, Micha Krause wrote:
> Hi,
>
> I'm trying to build a cephfs to nfs gateway, but somehow I can't mount the
> share if it is backed by cephfs:
>
>
> mount ngw01.ceph:/srv/micha /mnt/tmp/
> mount.nfs: Connection timed out
>
> cephfs mount on the gateway:
>
> 10.210.32.
Hi,
any ideas?
Micha Krause
On 11.08.2014 16:34, Micha Krause wrote:
Hi,
I'm trying to build a cephfs to nfs gateway, but somehow I can't mount the share
if it is backed by cephfs:
mount ngw01.ceph:/srv/micha /mnt/tmp/
mount.nfs: Connection timed out
cephfs mount on the gateway:
10.210.
Hi,
I'm looking for some advice on my ceph cluster.
The current setup is as follows:
3 mon servers
4 storage servers with the following spec:
1x Intel Xeon E5-2640 @2.50GHz 6 core (12 with hyperthreading).
64GB DDR3 RAM
2x SSDSC2BB080G4 for OS
LSI MegaRAID 9260-16i with the following drives: