Thanks to all for responses. Great thread with a lot of info.
I will go with 3 partitions on the Kingston SSD for the 3 OSDs on each node.
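Roughly what I have in mind, as a sketch only (assuming the SSD shows up as /dev/sdX, 10 GiB journals, and data disks /dev/sdb..sdd; adjust to your layout):

  # three journal partitions, tagged with the Ceph journal GPT type code
  sgdisk --new=1:0:+10G --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdX
  sgdisk --new=2:0:+10G --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdX
  sgdisk --new=3:0:+10G --typecode=3:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdX
  # then point each OSD at its own partition, e.g.
  ceph-deploy osd prepare node1:sdb:/dev/sdX1 node1:sdc:/dev/sdX2 node1:sdd:/dev/sdX3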
Thanks
Jiri
On 30/09/2015 00:38, Lionel Bouton wrote:
Hi,
On 29/09/2015 13:32, Jiri Kanicky wrote:
Hi Lionel.
Thank you for your reply. In this case I a
Hi,
I have also posted on the OpenNebula community forum
(https://forum.opennebula.org/t/changing-ceph-monitors-for-running-vms/1266).
Does anyone have any experience of changing the monitors in their Ceph cluster
whilst running OpenNebula VMs?
We have recently bought new hardware to replace ou
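For context, the monitor change itself would be the usual grow-then-shrink sequence, something like this (hostnames are placeholders):

  ceph-deploy mon add mon-new1                 # add a new monitor first
  ceph quorum_status --format json-pretty      # wait until it is in quorum
  ceph mon remove mon-old1                     # then retire an old one

The open question is what happens to the VMs that were started with the old monitor addresses.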
Make sure to check this blog page
http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
since I'm not sure whether you are just playing around with Ceph or planning
it for production and good performance.
My experience with SSDs as journals: Samsung 850 PRO = 20
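The test from that blog post is essentially small synchronous (O_DSYNC) writes; a rough sketch of it, assuming /dev/sdX is a scratch device you can overwrite:

  # WARNING: this writes directly to the device
  dd if=/dev/zero of=/dev/sdX bs=4k count=100000 oflag=direct,dsync
  # or the fio equivalent
  fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 \
      --rw=write --bs=4k --iodepth=1 --runtime=60 --time_based --group_reporting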
Hello,
For the VMs, the important part is the declaration in the VM's XML file, for
example the element ending in
function='0x0'/>.
Try adding it via "virsh",
and verify via the command "netstat -laputen" that the CEPH-M
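In other words, roughly (a sketch, with "myvm" as a placeholder domain name):

  virsh dumpxml myvm | grep -A4 "protocol='rbd'"   # the <host name=.../> entries list the monitors
  netstat -laputen | grep 6789                     # qemu should hold TCP connections to those monitors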
I have some experience with Kingstons - which model do you plan to use?
Shorter version: don't use Kingstons. For anything. Ever.
Jan
> On 30 Sep 2015, at 11:24, Andrija Panic wrote:
>
> Make sure to check this blog page
> http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-s
Hi!
Yes, we did exactly the same and have practically no problems except some
minor issues with recreating VMs.
First, OpenNebula uses the Ceph monitors specified in a template only when
creating a VM or migrating it. These template values are passed as qemu
parameters when bootstrapping the VM. W
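So to see which monitors a running VM was actually started with, one can look at the qemu command line or the live domain XML, roughly ("one-42" is just an example domain name):

  ps -ef | grep qemu | grep -o 'rbd:[^ ]*'       # the rbd drive string embeds the monitor addresses
  virsh dumpxml one-42 | grep '<host name'       # or read them from the live libvirt XML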
Yes, but it always results in 401 from horizon and cli
swift --debug --os-auth-url http://172.25.60.2:5000/v3 --os-username ldapuser
--os-user-domain-name ldapdomain --os-project-name someproject
--os-project-domain-name ldapdomain --os-password password123 -V 3 post
containerV3
DEBUG:keystonec
Hi,
Am 2015-09-29 um 15:54 schrieb Gregory Farnum:
> Can you create a ceph-deploy ticket at tracker.ceph.com, please?
> And maybe make sure you're running the latest ceph-deploy, but
> honestly I've no idea what it's doing these days or if this is a
> resolved issue.
Just file a bug.
The ceph-d
On 09/29/2015 04:56 PM, J David wrote:
On Thu, Sep 3, 2015 at 3:49 PM, Gurvinder Singh
wrote:
The density would be higher than the 36 drive units but lower than the
72 drive units (though with shorter rack depth afaik).
You mean the 1U solution with 12 disks is longer in length than the 72-disk
4U
On Tue, Sep 29, 2015 at 9:51 PM, Scottix wrote:
> I'm positive the client I sent you the log is 94. We do have one client
> still on 87.
>
Which kernel version are you using? I found a kernel bug which can cause
this issue in 4.1 and later kernels.
Regards
Yan, Zheng
>
> On Tue, Sep 29, 20
On 30-09-15 14:19, Mark Nelson wrote:
> On 09/29/2015 04:56 PM, J David wrote:
>> On Thu, Sep 3, 2015 at 3:49 PM, Gurvinder Singh
>> wrote:
>>> The density would be higher than the 36 drive units but lower than the
>>> 72 drive units (though with shorter rack depth afaik).
>>> You mean the 1U
OpenSuse 12.1
3.1.10-1.29-desktop
On Wed, Sep 30, 2015, 5:34 AM Yan, Zheng wrote:
> On Tue, Sep 29, 2015 at 9:51 PM, Scottix wrote:
>
>> I'm positive the client I sent you the log is 94. We do have one client
>> still on 87.
>>
> which version of kernel are you using? I found a kernel bug whic
Hi,
With 5 hosts, I could successfully create pools with k=4 and m=1, with the
failure domain being set to "host".
With 6 hosts, I could also create k=4,m=1 EC pools.
But I suddenly failed with 6 hosts k=5 and m=1, or k=4,m=2 : the PGs were never
created - I reused the pool name for my tests, th
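For reference, the pools were created roughly along these lines (profile and pool names are mine):

  ceph osd erasure-code-profile set ec42 k=4 m=2 ruleset-failure-domain=host
  ceph osd pool create ecpool 128 128 erasure ec42
  ceph pg dump_stuck inactive        # the PGs stay stuck here in the failing cases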
On Wed, Sep 30, 2015 at 8:19 AM, Mark Nelson wrote:
> FWIW, I've mentioned to Supermicro that I would *really* love a version of the
> 5018A-AR12L that replaced the Atom with an embedded Xeon-D 1540. :)
Is even that enough? (It's a serious question; due to our insatiable
need for IOPs rather tha
Because we have a good thing going, our Ceph clusters are still
running Firefly on all of our clusters including our largest, all-SSD
cluster.
If I understand right, newer versions of Ceph make much better use of
SSDs and give overall much higher performance on the same equipment.
However, the imp
Hi,
Am 2015-09-17 um 19:02 schrieb Stefan Eriksson:
> I purged all nodes and did purgedata as well and restarted; after this
> everything was fine. You are most certainly right: if anyone else has
> this error, reinitializing the cluster might be the fastest way forward.
Great that it worked for y
Hi Jogi,
you can specify any repository you like with 'ceph-deploy install
--repo-url ', given you have the repo keys installed.
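For example (the URL is just an illustration, pick the release and mirror you need):

  ceph-deploy install --repo-url http://eu.ceph.com/debian-hammer \
      --gpg-url http://eu.ceph.com/keys/release.asc node1 node2 node3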
Best regards,
Kurt
Jogi Hofmüller wrote:
> Hi,
>
> Am 2015-09-25 um 22:23 schrieb Udo Lembke:
>
>> you can use this sources-list
>>
>> cat /etc/apt/sources.list.d/c
Hi,
Some more info:
ceph osd tree
ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 3.59998 root default
-2 1.7         host ceph1
 0 0.8             osd.0        up      1.0              1.0
 1 0.8             osd.1        up      1.0              1.0
-3 1.7         host ceph2
 2 0.89
On Tue, Sep 29, 2015 at 7:32 AM, Jiri Kanicky wrote:
> Thank you for your reply. In this case I am considering creating separate
> partitions for each disk on the SSD drive. It would be good to know what the
> performance difference is, because creating partitions is kind of a waste of
> space.
It ma
Hi,
Looking at the output below, the following puzzles me:
you have two nodes but repl.size 3 for your test-data pool. With the
default crushmap this won't work as it tries to replicate on different
nodes.
So either change to rep.size 2, or add another node ;-)
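If you go the rep.size 2 route, that's roughly:

  ceph osd pool set test-data size 2
  ceph osd pool set test-data min_size 1    # otherwise I/O blocks as soon as one replica is down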
best regards,
Kurt
Jogi Hofmülle
Hi all,
In the original paper (RADOS: a scalable, reliable storage service for
petabyte-scale storage clusters), three replication schemes were described
(primary copy, chain and splay).
Now the documentation only discusses primary copy. Does the chain scheme
still exist?
It would be much more ba
Hi Kurt,
Am 2015-09-30 um 17:09 schrieb Kurt Bauer:
> You have two nodes but repl.size 3 for your test-data pool. With the
> default crushmap this won't work as it tries to replicate on different
> nodes.
>
> So either change to rep.size 2, or add another node ;-)
Thanks a lot! I did not set a
On Wed, 30 Sep 2015, Wouter De Borger wrote:
> Hi all,
> In the original paper (RADOS: a scalable, reliable storage service for
> petabyte-scale storage clusters), three replication schemes were described
> (primary copy, chain and splay).
>
> Now the documentation only discusses primary copy. Doe
I tried to update packages today, but I got a "connection reset by peer"
error every time.
It seems that the server blocks my IP if I make requests a little too
frequently (refreshing the page a few times per second manually).
I guess yum downloads packages in parallel and triggers something like
fail2ban.
Any
I tried
ceph.com/rpm-hammer
download.ceph.com/rpm-hammer
eu.ceph.com/rpm-hammer
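A possible workaround while the rate limiting is this aggressive: mirror the repo slowly with wget instead of letting yum fetch in parallel, e.g. (the path is a guess, adjust for your distro):

  wget --wait=2 --random-wait -r -np -nH http://download.ceph.com/rpm-hammer/el7/x86_64/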
On Oct 1, 2015 01:09, "Alkaid" wrote:
> I tried to update packages today, but I got a "connection reset by peer"
> error every time.
> It seems that the server blocks my IP if I make requests a little too frequently
> ( refre
At the moment radosgw just doesn't support v3 (so it seems). I created
issue #13303. If anyone wants to pick this up (or provide some
information as to what it would require to support that) it would be
great.
Thanks,
Yehuda
On Wed, Sep 30, 2015 at 3:32 AM, Robert Duncan wrote:
> Yes, but it alw
Hi David,
Generally speaking, it is going to be super difficult to maximize the
bandwidth of NVMe with the current latest Ceph release. In my humble opinion,
Ceph is not aiming at high-performance storage.
Here is a link, for your reference, to some good work done by Samsung and SanDisk
re
Hi James,
- "James (Fei) Liu-SSI" wrote:
> Hi David,
> Generally speaking, it is going to be super difficult to maximize
> the bandwidth of NVMe with the current latest Ceph release. In my humble
> opinion, Ceph is not aiming at high-performance storage.
Well, -I'm- certainly aiming
On 09/30/2015 09:34 AM, J David wrote:
Because we have a good thing going, our Ceph clusters are still
running Firefly on all of our clusters including our largest, all-SSD
cluster.
If I understand right, newer versions of Ceph make much better use of
SSDs and give overall much higher performanc
David,
You should move to Hammer to get all the performance benefits. They were
added in Giant and carried forward into the present Hammer LTS release.
FYI, the focus so far has been on read performance improvements, and what we saw in our
environment with 6Gb SAS SSDs is that we are able to saturate the drives
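The rolling-upgrade part itself is the usual pattern, roughly (check the Hammer release notes for the exact order, mons before OSDs):

  ceph osd set noout                 # keep CRUSH from rebalancing while daemons restart
  # upgrade packages and restart mons first, then OSDs, one node at a time
  ceph tell osd.* version            # confirm everything ends up on the same release
  ceph osd unset noout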
Thanks Yehuda,
but this was already logged about a year ago:
http://tracker.ceph.com/issues/8052
It's hard to find a definitive answer regarding keystone v3. There is new
documentation regarding setting the swift endpoint with the new openstack
client for Kilo:
http://docs.ceph.com/docs/maste
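For reference, what radosgw understands today is the v2-style keystone configuration in ceph.conf, roughly (section name and values are placeholders):

  [client.radosgw.gateway]
  rgw keystone url = http://172.25.60.2:35357
  rgw keystone admin token = ADMIN_TOKEN
  rgw keystone accepted roles = Member, admin
  rgw keystone token cache size = 500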
Dear Ceph Gurus
- This is just a report about an annoying warning we keep getting every
time our logs are rotated.
libust[8241/8241]: Warning: HOME environment variable not set.
Disabling LTTng-UST per-user tracing. (in setup_local_apps() at
lttng-ust-comm.c:305)
- I am running ceph 9
Hi,
Namaste from India!
I am not quite sure if I should ask Calamari-related queries here, so
please excuse my mistake.
I have an 18-OSD, 3-MON cluster up and running. I am able to see
all the information in the Calamari dashboard, but somehow I don't get the Usage
and IOPS data which is sho
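In case it helps: the Usage/IOPS graphs come from diamond feeding graphite on the Calamari host, so the first things to check are roughly (paths are the usual defaults and may differ):

  service diamond status                      # diamond must be running on every ceph node
  tail /var/log/diamond/diamond.log           # look for errors sending to the calamari host
  ls /var/lib/graphite/whisper/ceph/          # on the calamari server: are metrics arriving at all?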
All,
I was wondering if anyone has integrated their Ceph installation with the
Zenoss monitoring software and is willing to share their knowledge.
Best regards,
George
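Not Zenoss-specific, but any poller that can run a command can start from the JSON status output, e.g.:

  ceph health --format json     # machine-readable HEALTH_OK / HEALTH_WARN plus details
  ceph status --format json     # pg states, capacity usage, client I/O rates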
Dear all,
I'm trying to install Ceph on Debian Jessie using a pile of various manuals,
and have had no luck so far.
The problem is that Ceph doesn't provide Jessie repositories. Jessie has its own
Ceph packages, but no ceph-deploy. As ceph-deploy is
part of most documented procedures, I can't
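One way around the missing package for now is to install ceph-deploy from PyPI instead of apt, roughly:

  apt-get install python-pip
  pip install ceph-deploy        # ceph-deploy is published on PyPI, independent of the distro repos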