Hi
I think this problem has already been reported, but I don't clearly
understand how to resolve it.
I have an OpenStack deployment with some compute nodes. The OpenStack
deployment is configured to use a Ceph cluster (cinder + glance + nova
ephemeral).
My problem is this: the OS hypervisor stats reports
Hi,
You need to configure libvirt to use Ceph as its backend.
Put this config in the [libvirt] section of nova.conf:
[libvirt]
inject_partition = -2
inject_password = false
live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST
inject_key = false
images_type
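(The message is cut off after images_type. For reference, a typical Ceph-backed [libvirt] section continues roughly as below; the pool name, cephx user and secret UUID are only examples and have to match your own Glance/Cinder/Nova setup:)
# example values; adjust pool, user and UUID to your environment
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <your-libvirt-secret-uuid>
rbd_user and rbd_secret_uuid need to match the libvirt secret created for the client keyring on each compute node.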
On 12/14/2015 12:41 AM, deeepdish wrote:
> Perhaps I’m not understanding something..
>
> The “extra_probe_peers” ARE the other working monitors in quorum out of
> the mon_host line in ceph.conf.
>
> In the example below 10.20.1.8 = b20s08; 10.20.10.251 = smon01s;
> 10.20.10.252 = smon02s
>
> The
On 12/10/2015 02:56 PM, Jacek Jarosiewicz wrote:
On 12/10/2015 02:50 PM, Dan van der Ster wrote:
On Wed, Dec 9, 2015 at 1:25 PM, Jacek Jarosiewicz wrote:
2015-12-09 13:11:51.171377 7fac03c7f880 -1 filestore(/var/lib/ceph/osd/ceph-5) Error initializing leveldb : Corruption: 29 missing files; e.
Hi,
Is there a reason python-flask is not in the infernalis repo anymore?
In CentOS 7 it is still not in the standard repos or EPEL.
Thanks!
Kenneth
Hello,
I'm doing some measurements on a test (3-node) cluster and see a strange
performance drop for sync writes.
I'm using SSDs for both journals and OSDs. The SSD should be suitable for the
journal, giving about 16.1K IOPS (67 MB/s) for sync IO.
(measured using fio --filename=/dev/xxx --direct=1 --sync=1 --r
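(The fio command line above is cut off. A typical single-job 4k sync-write test of this kind, with the device name left as a placeholder, would look roughly like:)
# typical journal sync-write test; /dev/xxx is a placeholder
fio --filename=/dev/xxx --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based \
    --group_reporting --name=journal-test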
Hi
I've set up a small (5-node) Ceph cluster. I'm trying to benchmark
more real-life performance of Ceph's block storage, but I'm seeing very
weird (low) numbers from my benchmark setup.
My cluster consists of 5 nodes; every node has:
2 x 3TB HGST SATA drives
1x Samsung SM 841 120GB SSD for jo
Hi Michal,
You can have a look at a thread I started a few days ago:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-December/006494.html
I had some questions about performance as well, and I think the
explanations there apply to your case.
Also, your SSD does not seem to be DC grade, w
On a brand new CentOS 7 box I do see python-flask coming from the extras repo:
[vagrant@localhost ~]$ yum provides python-flask
Loaded plugins: fastestmirror, priorities
Loading mirror speeds from cached hostfile
* base: mirror.teklinks.com
* epel: fedora-epel.mirror.lstn.net
* extras: mirror.t
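(So on a stock CentOS 7 system with the extras repo enabled, which it is by default, installing it should be as simple as something like:)
# extras repo is enabled by default on CentOS 7
sudo yum install python-flask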
Thanks, now it works!!
On 14/12/15 10:10, Le Quang Long wrote:
Hi,
You need to configure libvirt to use Ceph as its backend.
Put this config in the [libvirt] section of nova.conf:
[libvirt]
inject_partition = -2
inject_password = false
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_
Which SSD are you using? The dsync flag will dramatically slow down most SSDs.
You've got to be very careful about the SSD you pick.
Warren Wang
On 12/14/15, 5:49 AM, "Nikola Ciprich" wrote:
>Hello,
>
>I'm doing some measurements on a test (3-node) cluster and see a strange
>performance
>drop for sync
Joao,
Please see below. I think you’re totally right on:
> I suspect they may already have this monitor in their map, but either
> with a different name or a different address -- and are thus ignoring
> probes from a peer that does not match what they are expecting.
The monitor in question ha
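(For reference, what each monitor actually has in its monmap can be checked directly; the monitor id and path below are placeholders, and extracting the map requires the monitor to be stopped:)
# live, via the admin socket:
ceph daemon mon.<id> mon_status
# or offline, with the monitor stopped:
ceph-mon -i <id> --extract-monmap /tmp/monmap
monmaptool --print /tmp/monmap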
Sorry, none of the librbd configuration properties can be live-updated
currently.
--
Jason Dillaman
- Original Message -
> From: "Daniel Schwager"
> To: "ceph-us...@ceph.com"
> Sent: Friday, December 11, 2015 3:35:11 AM
> Subject: [ceph-users] Possible to change RBD-Caching setti
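(For reference, the librbd cache settings are defined statically on the client side, typically in the [client] section of ceph.conf on the hypervisor, and are only picked up when the client, e.g. the VM's qemu process, is restarted. The option names below are the standard ones; the values are just examples:)
[client]
# example values; defaults shown for rbd cache size
rbd cache = true
rbd cache writethrough until flush = true
rbd cache size = 33554432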
On 12/14/2015 04:49 AM, Nikola Ciprich wrote:
Hello,
I'm doing some measurements on a test (3-node) cluster and see a strange performance
drop for sync writes.
I'm using SSDs for both journals and OSDs. The SSD should be suitable for the
journal, giving about 16.1K IOPS (67 MB/s) for sync IO.
(measured usi
Hi,
I have a functional and operational Ceph cluster (version 0.94.5),
with 3 nodes (each acting as MON and OSD); everything was fine.
I added a 4th OSD node (same configuration as the 3 others) and now the
cluster status is HEALTH_WARN (active+remapped).
cluster e821c68f-995c-41a9-9c46-dbbd0a28
On Sun, Dec 13, 2015 at 7:27 AM, 孙方臣 wrote:
> Hi, All,
>
> I'm setting up a federated gateway. One is the master zone, the other is the
> slave zone. Radosgw-agent is running in the slave zone. I have encountered some
> problems; can anybody help answer these:
>
> 1. When putting an object to radosgw, there are t
2 datacenters.
-Sam
On Mon, Dec 14, 2015 at 10:17 AM, Reno Rainz wrote:
> Hi,
>
> I have a functional and operational Ceph cluster (version 0.94.5), with
> 3 nodes (each acting as MON and OSD); everything was fine.
>
> I added a 4th OSD node (same configuration as the 3 others) and now the cluster
> s
Thank you for your answer, but I don't really understand what you mean.
I use this map to distribute replicas across 2 different DCs, but I don't
know where the mistake is.
On Dec 14, 2015, 7:56 PM, "Samuel Just" wrote:
> 2 datacenters.
> -Sam
>
> On Mon, Dec 14, 2015 at 10:17 AM, Reno Rainz
Hi,
is there a way to debug / monitor the osd journal usage?
Thanks and regards,
Mike
You most likely have pool size set to 3, but your crush rule requires
replicas to be separated across DCs, of which you have only 2.
-Sam
On Mon, Dec 14, 2015 at 11:12 AM, Reno Rainz wrote:
> Thank you for your answer, but I don't really understand what you mean.
>
> I use this map to distribu
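(For reference, the two things Sam points at can be checked directly; the pool name below is a placeholder and replicated_ruleset is just the default rule name:)
ceph osd pool get <pool> size
ceph osd crush rule dump replicated_ruleset
With only 2 datacenters, a rule that insists on one replica per datacenter can return at most 2 OSDs, so a size-3 pool can never be fully placed.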
Whoops, I misread Nikola's original email, sorry!
If all your SSDs are performing at that level for sync IO, then I
agree that it's down to other things, like network latency and PG locking.
Sequential 4K writes with 1 thread and 1 qd are probably the worst
performance you'll see. Is there a ro
Even with 10G ethernet, the bottleneck is not the network, nor the drives
(assuming they are datacenter-class). The bottleneck is the software.
The only way to improve that is to either increase CPU speed (more GHz per
core) or to simplify the datapath IO has to take before it is considered
dura
I get where you are coming from, Jan, but for a test this small, I still
think checking network latency first for a single op is a good idea.
Given that the cluster is not being stressed, CPUs may be running slow. It
may also benefit the test to turn CPU governors to performance for all
cores.
Wa
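(For reference, a quick way to force the governors to performance on a typical Linux box; the sysfs path is the common cpufreq location and may differ depending on the driver:)
# set all cores to the performance governor
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    echo performance | sudo tee "$g"
done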
Thank you for your help. While reading your answer, I realized that I had
totally misunderstood how the CRUSH map algorithm and data placement work in Ceph.
I fixed my issue with this new rule:
"rules": [
{
"rule_id": 0,
"rule_name": "replicated_ruleset",
"ruleset": 0,
Dear CephFS experts
Previously it was possible to mount a subtree of a filesystem using
ceph-fuse and the -r option.
In Infernalis, I do not understand how that works, and I am only
able to mount the full tree. 'ceph-fuse --help' does not seem to show
that option, although 'man ceph-fuse' say
Should we add a big-packet test to the heartbeat? Right now the heartbeat
only tests small packets. If the MTU is mismatched, the heartbeat
cannot detect that.
2015-12-14 12:18 GMT+08:00 Chris Dunlop :
> On Sun, Dec 13, 2015 at 09:10:34PM -0700, Robert LeBlanc wrote:
>> I've had something similar to
On Mon, Dec 14, 2015 at 09:29:20PM +0800, Jaze Lee wrote:
> Should we add a big-packet test to the heartbeat? Right now the heartbeat
> only tests small packets. If the MTU is mismatched, the heartbeat
> cannot detect that.
It would certainly have saved me a great deal of stress!
I imagine you wouldn
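(For reference, a mismatched MTU on a path can usually be spotted by hand with a large, non-fragmenting ping; the size here assumes a 9000-byte MTU, i.e. 9000 minus 28 bytes of IP/ICMP headers:)
# do-not-fragment ping sized for a 9000-byte MTU
ping -M do -s 8972 -c 3 <peer-ip>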
I think I've understood how to run it...
ceph-fuse -m MON_IP:6789 -r /syd /coepp/cephfs/syd
does what I want
Cheers
Goncalo
On 12/15/2015 12:04 PM, Goncalo Borges wrote:
Dear CephFS experts
Previously it was possible to mount a subtree of a filesystem using
ceph-fuse and the -r option.
In Inferna