Hello
Before posting this message, I've been reading older posts in the mailing list,
but I didn't get any clear answer.
I happen to have three servers available to test Ceph, and I would like to know
if there is any kind of "performance prediction formula".
My OSD servers are:
- 1 x Intel
Hello,
More line breaks and formatting, please.
A wall of text makes people less likely to read it.
On Fri, 2 Oct 2015 07:08:29 +0000 Javier C.A. wrote:
> Hello
> Before posting this message, I've been reading older posts in the
> mailing list, but I didn't get any clear answer.
Define performance.
Hi!
Yes, we run a small Hammer cluster in production.
Initially it was a 6-node Firefly installation on slightly outdated hardware:
- Intel 56XX platforms,
- 32-48GB RAM,
- 70 SATA OSDs (1TB/2TB),
- SSD journals on DC S3700 200GB,
- 10Gbit interconnect,
- ~100 VM images (RBD only)
To r
On Fri, Oct 2, 2015 at 2:42 AM, Goncalo Borges wrote:
> Dear CephFS Gurus...
>
> I have a question regarding ceph-fuse and its memory usage.
>
> 1./ My Ceph and CephFS setups are the following:
>
> Ceph:
> a. ceph 9.0.3
> b. 32 OSDs distributed in 4 servers (8 OSD per server).
> c. 'osd pool defau
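(A quick, hedged way to watch the client's memory while reproducing this,
assuming a single ceph-fuse process per node:
watch -n 10 'ps -o pid,rss,vsz,cmd -C ceph-fuse'
where RSS is the resident memory in kB.)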
Christian
thank you so much for your answer.
You're right, when I say Performance, I actually mean the "classic FIO
test".
Regarding the CPU, you meant 2GHz per OSD and per CPU core, didn't you?
One last question: with a total of 18 OSDs (2TB each) and a replication
factor of 2, is it really risky?
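(For reference, the "classic FIO test" mentioned above usually looks something
like the following; the device path and runtime are only placeholders:
fio --name=randwrite --filename=/dev/vdb --rw=randwrite --bs=4k --iodepth=32 \
    --ioengine=libaio --direct=1 --runtime=60 --time_based --numjobs=1
run against an RBD-backed device inside a test VM.)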
Can you run the same test several times? Not just once, twice or three times,
but many more.
And check things in more detail, for instance file descriptors, the network
statistics exposed in /sys/class/net/<interface>/statistics/*, and so on.
If every single result is the same, there may be a problem in some layer,
network or otherwise.
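(As a concrete, hedged example, assuming the cluster NIC is eth0:
grep . /sys/class/net/eth0/statistics/{rx_errors,tx_errors,rx_dropped,tx_dropped}
run it before and after each benchmark and compare the deltas.)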
The way I look at it is:
Would you normally put 18*2TB disks in a single RAID5 volume? If the answer is
no, then a replication factor of 2 is not enough.
Cheers,
Simon
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Javier
C.A.
Sent: 02 October 2015 09:58
To: ceph-use
Hi,
we accidentally added zeros to all our rbd images. So all images are no longer
thin provisioned. As we do not have access to the qemu guests running those
images, is there any other option to trim them again?
Greets,
Stefan
Excuse my typos; sent from my mobile phone.
On 02-10-15 14:16, Stefan Priebe - Profihost AG wrote:
> Hi,
>
> we accidentally added zeros to all our rbd images. So all images are no
> longer thin provisioned. As we do not have access to the qemu guests
> running those images, is there any other option to trim them again?
>
Rough guess,
The following advice assumes these images don't have associated snapshots
(since keeping the non-sparse snapshots will keep utilizing the storage space):
Depending on how you have your images set up, you could snapshot and clone the
images, flatten the newly created clone, and delete the original.
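(A rough sketch of that sequence, assuming an image named rbd/vm1 with no
existing snapshots or clones:
rbd snap create rbd/vm1@flat
rbd snap protect rbd/vm1@flat
rbd clone rbd/vm1@flat rbd/vm1-new
rbd flatten rbd/vm1-new
# once nothing uses rbd/vm1 any more:
rbd snap unprotect rbd/vm1@flat
rbd snap rm rbd/vm1@flat
rbd rm rbd/vm1
Whether the flattened clone actually comes back sparse depends on how the zeros
were written, so treat this as an idea to test rather than a guaranteed fix.)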
A classic RAID5 system takes a looong time to rebuild the array, so I would
say NO, but how long does it take for Ceph to rebuild the placement groups?
J
> On 2 Oct 2015, at 12:01, Simon Hallam wrote:
>
> The way I look at it is:
>
> Would you normally put 18*2TB disks in a single RAID5 volume? If the answer
> is no, then a replication factor of 2 is not enough.
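(If you do later decide to go to three replicas, it is a live, online change,
e.g. for a pool named rbd:
ceph osd pool set rbd size 3
ceph osd pool set rbd min_size 2
and you can watch how long the resulting recovery/backfill takes with ceph -w,
which gives a feel for the rebuild times being discussed here.)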
Hi Stefan,
you can run fstrim on the mounted images. This will release the unused space
back to Ceph.
Greets
Christoph
On Fri, Oct 02, 2015 at 02:16:52PM +0200, Stefan Priebe - Profihost AG wrote:
> Hi,
>
> we accidentally added zeros to all our rbd images. So all images are no
> longer thin provisioned.
Since you didn't hear much from the successful crowd, I'll chime in. At my
previous employer, we ran some pretty large clusters (over 1PB)
successfully on Hammer. Some were upgraded from Firefly, and by no means
do I consider myself to be a developer. We totaled over 15 production
clusters. I'm not
Hi all,
I have a Firefly cluster which has been upgraded from Emperor.
It has 2 OSD hosts and 3 monitors.
The cluster has default values for the size and min_size of its pools.
Once upgraded to Firefly, I created a new pool called bench2:
ceph osd pool create bench2 128 128
and set its si
This only works if discard is enabled on the guest's disks, and only with virtio-scsi, I think...
Jan
> On 02 Oct 2015, at 15:34, Christoph Adomeit
> wrote:
>
> Hi Stefan,
>
> you can run an fstrim on the mounted images. This will delete the unused
> space from ceph.
>
> Greets
> Christoph
>
>
> On Fri, Oct 02, 2015 at 0
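(A minimal sketch of the pieces involved, assuming libvirt/QEMU with
virtio-scsi: set discard='unmap' on the disk's <driver> element in the guest's
libvirt XML, restart the guest, and then run inside the guest:
fstrim -v /
Without discard exposed to the guest, fstrim just fails with "the discard
operation is not supported".)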
You probably don’t want hashpspool automatically set, since your clients may
still not understand that crush map feature. You can try to unset it for that
pool and see what happens, or create a new pool without hashpspool enabled from
the start. Just a guess.
Warren
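(A quick, hedged way to see which flags each pool carries:
ceph osd dump | grep '^pool'
the new pool should list hashpspool in its flags while the older pools do not.)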
From: Giuseppe Civitella
Hi Warren,
a simple:
ceph osd pool set bench2 hashpspool false
solved my problem.
Thanks a lot
Giuseppe
2015-10-02 16:18 GMT+02:00 Warren Wang - ISD:
> You probably don’t want hashpspool automatically set, since your clients
> may still not understand that crush map feature. You can try to unset it for
> that pool and see what happens, or create a new pool without hashpspool
> enabled from the start. Just a guess.
On Fri, 2 Oct 2015 08:57:44 +0000 Javier C.A. wrote:
> Christian
> thank you so much for your answer.
> You're right, when I say Performance, I actually mean the "classic FIO
> test". Regarding the CPU, you meant 2Ghz per OSD and per CPU CORE,
> isn't?
Yes.
Given mixed, typical load your CPU
Hello,
On Fri, 2 Oct 2015 15:31:11 +0200 Javier C.A. wrote:
Firstly, this has been discussed countless times here.
For one of the latest recurrences, check the archive for:
"calculating maximum number of disk and node failure that can
be handled by cluster with out data loss"
> A classic RAID5
On Thu, Oct 1, 2015 at 9:32 PM, shiva rkreddy wrote:
> Hi,
> Has anyone tried installing the python-rbd and python-rados packages in a
> Python virtual environment?
> We are planning to have the OpenStack services (cinder/glance) run in the
> virtual environment. There are no pip-installable packages available
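(One hedged workaround, since the bindings are only shipped as distro packages:
install python-rados/python-rbd system-wide and build the virtualenv with
access to them, e.g.
virtualenv --system-site-packages /opt/cinder-venv
or symlink the rados and rbd modules from the system site-packages directory
into the venv's site-packages.)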
Hi,
On 02/10/2015 18:15, Christian Balzer wrote:
> Hello,
> On Fri, 2 Oct 2015 15:31:11 +0200 Javier C.A. wrote:
>
> Firstly, this has been discussed countless times here.
> For one of the latest recurrences, check the archive for:
>
> "calculating maximum number of disk and node failure that c
When we re-arranged the download structure for packages and moved
everything to download.ceph.com, we did not carry ceph-extras over.
The reason is that the packages there were unmaintained. The EL6 QEMU
binaries were vulnerable to VENOM (CVE-2015-3456) and maybe other
CVEs, and no users should re
Hi,
Would anybody be able to comment on whether or not this idea is feasible?
I was wondering if it would be possible for all incoming writes to be
written to the cache tier regardless of whether or not the object is
currently residing there. This would avoid the requirement for a
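(For context, a standard writeback tier is wired up roughly like this, assuming
pools named "base" and "cache":
ceph osd tier add base cache
ceph osd tier cache-mode cache writeback
ceph osd tier set-overlay base cache
In writeback mode writes already land in the cache tier; the idea above is
about skipping the promotion of the existing object before such a write.)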
On Fri, Oct 2, 2015 at 1:57 AM, John Spray wrote:
> On Fri, Oct 2, 2015 at 2:42 AM, Goncalo Borges
> wrote:
>> Dear CephFS Gurus...
>>
>> I have a question regarding ceph-fuse and its memory usage.
>>
>> 1./ My Ceph and CephFS setups are the following:
>>
>> Ceph:
>> a. ceph 9.0.3
>> b. 32 OSDs d
Thanks Ken.
Does that mean we are going to have a pip package anytime soon? Do Red Hat or
Ubuntu ship anything currently?
On Fri, Oct 2, 2015 at 11:33 AM, Ken Dreyer wrote:
> On Thu, Oct 1, 2015 at 9:32 PM, shiva rkreddy
> wrote:
> > Hi,
> > Has anyone tried installing the python-rbd and python-rados packages in a
> > Python virtual environment?
On Thu, Oct 01, 2015 at 10:01:03PM -0400, J David wrote:
> So, do medium-sized IT organizations (i.e. those without the resources
> to have a Ceph developer on staff) run Hammer-based deployments in
> production successfully?
I'm not sure if I count, given that I'm now working at DreamHost as the
i
Thanks a lot, Mike.
I am sorry, I forgot to mention that I am using Debian 8 on the nodes. I
downloaded Diamond from the Git repo and built it from source. Could you
please let me know the version you used to build the RPM? I might try to
build my .deb from the same.
Thanks.
Regards,
Daleep Si
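(A hedged sketch, assuming the checked-out Diamond tree ships a debian/
directory: from the source root,
sudo apt-get install build-essential debhelper devscripts
dpkg-buildpackage -us -uc
should leave a .deb one directory up. If the tag you checked out has no
debian/ directory, that assumption does not hold and you would need packaging
files first.)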
Many thanks, Ken Dreyer.
For now, though, we need it to keep running temporarily while we plan the
upgrade from CentOS 6 to CentOS 7.1.
On Fri, Oct 2, 2015 at 11:40 PM, Ken Dreyer wrote:
> When we re-arranged the download structure for packages and moved
> everything to download.ceph.com, we did not carry ceph-e