I was building Ceph in order to use it with iSCSI.
But I just saw from the docs that it needs:
*CentOS 7.5*
(which is not available yet; it's still at 7.4)
https://wiki.centos.org/Download
*Kernel 4.17*
(which is not available yet; it's still at 4.15.7)
https://www.kernel.org/
So I tried forcing the release as suggested below.
This worked.
However, somebody should investigate why the default is still Jewel on CentOS 7.4.
On 28/02/2018 00:53, jorpilo wrote:
Try using:
ceph-deploy --release luminous host1...
Original message
From: Massimiliano Cuttini
Date: 28/2/18 12:42 a.m. (GMT+01:00)
To: ceph
This is the 5th time that I have installed and then purged the installation.
ceph-deploy always installs JEWEL instead of Luminous.
No way, even if I force the repo from default to luminous:
https://download.ceph.com/rpm-luminous/el7/noarch
It still installs Jewel; it's stuck.
I've already ch
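For reference, pinning the release usually takes both a clean repo file and an explicit --release flag; a minimal sketch, assuming a stock CentOS 7 node (host1 is a placeholder):

cat > /etc/yum.repos.d/ceph.repo <<'EOF'
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
EOF
yum clean all                                  # drop cached metadata that may still point at Jewel
ceph-deploy install --release luminous host1   # ask ceph-deploy explicitly for Luminous

Note that ceph-deploy writes its own repo file on the target host unless told which release to use, which is one common reason a manual pin appears to be ignored.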
Not good.
I'm not worried about the time and effort.
I'm worried about having to fix this when there is no time.
Ceph is built to avoid downtime; it is not a good idea to create it on a system with availability issues.
It is only with the switch (when installing a node); subsequent kernel updates should be instal
-----Original Message-----
From: Massimiliano Cuttini [mailto:m...@phoenixweb.it]
Sent: Sunday, 25 February 2018 13:18
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Linux Distribution: Is upgrade the kernel vers
Hi everybody,
Just a simple question.
In order to deploy Ceph...
Do you use a default distribution that already supports the recommended kernel version (> 4.4)? Let's say Ubuntu.
OR
Do you use your preferred Linux distribution and just upgrade it to a higher kernel vers
On 23/01/2018 16:49, c...@jack.fr.eu.org wrote:
On 01/23/2018 04:33 PM, Massimiliano Cuttini wrote:
With Ceph you have to install a 3rd-party orchestrator in order to have a clear picture of what is going on.
Which can be OK, but not always feasible.
Just as with everything
As said
On 23/01/2018 14:32, c...@jack.fr.eu.org wrote:
I think I was not clear.
There are VM management systems; look at
https://fr.wikipedia.org/wiki/Proxmox_VE,
https://en.wikipedia.org/wiki/Ganeti, probably
https://en.wikipedia.org/wiki/OpenStack too.
These systems interact with Ceph.
Whe
You're more than welcome - we have a lot of work ahead of us...
Feel free to join our Freenode IRC channel #openattic to get in touch!
A curiosity!
As far as I understood, this software was created to manage only Ceph. Is that right?
So... why such a "far away" name for software dedicated to
On 23/01/2018 13:20, c...@jack.fr.eu.org wrote:
- USER tasks: create new images, increase image size, shrink image size, check daily status, and change broken disks whenever needed.
Who does that?
For instance, Ceph can be used for VMs. Your VM system creates images,
resizes images, wha
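For concreteness, those day-to-day image tasks map onto a handful of rbd commands; a minimal sketch (pool and image names are placeholders):

$ rbd create mypool/vm-disk-1 --size 102400                  # new 100 GB image
$ rbd resize mypool/vm-disk-1 --size 204800                  # grow to 200 GB
$ rbd resize mypool/vm-disk-1 --size 102400 --allow-shrink   # shrinking must be forced
$ rbd info mypool/vm-disk-1                                  # check size and features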
https://www.openattic.org/features.html
Oh god THIS is the answer!
Lenz, if you need help I can also join development.
Lenz
Hey Lenz,
OpenAttic seems to implement several good features and to be more or less what I was asking for.
I'll go through the whole website. :)
THANKS!
On 16/01/2018 09:04, Lenz Grimmer wrote:
Hi Massimiliano,
On 01/11/2018 12:15 PM, Massimiliano Cuttini wrote:
3) Manag
On 22/01/2018 21:55, Jack wrote:
On 01/22/2018 08:38 PM, Massimiliano Cuttini wrote:
The web interface is needed because: *command lines are prone to typos.*
And you never misclick, indeed;
Do you really mean: 1) misclick once on an option list, 2) misclick once on the form, 3) mistype the
allowed to manage them.
Less complexity, fewer errors, faster deployment of new customers.
Sorry if this sounds so strange to you.
-----Original Message-----
From: Alex Gorbachev [mailto:a...@iss-integration.com]
Sent: Tuesday, 16 January 2018 6:18
To: Massimiliano Cuttini
Cc: ceph-users@lists.
Hi everybody,
I'm always looking at Ceph for the future.
But I do see several issues that are left unresolved and block near-future adoption.
I would like to know if there are some answers already:
*1) Separation between Client and Server distribution.*
At this time you always have to upd
Hi Riccardo,
using ceph-fuse will add an extra layer.
Consider using rbd-nbd instead, which maps images as network block devices.
This should be faster and allows you to use the latest tunables (which is better).
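A minimal sketch of the rbd-nbd workflow (pool and image names are placeholders):

$ rbd-nbd map mypool/vm-disk-1      # prints the attached device, e.g. /dev/nbd0
$ rbd-nbd list-mapped               # show current mappings
$ rbd-nbd unmap /dev/nbd0           # detach when done

Because rbd-nbd goes through librbd in userspace, it supports the full feature set regardless of the kernel version, at the cost of an extra hop per I/O.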
On 17/07/2017 10:56, Riccardo Murri wrote:
Thanks a lot to all! Both th
Dear all,
I have to create several VMs in order to use them as MONs on my cluster.
All my Ceph clients are CentOS.
But I'm thinking about creating all the monitors using Ubuntu, because it seems lighter.
Is this a matter of taste?
Or is there something I should know before going with a mixed-OS c
Hi everybody,
I would like to separate the MONs from the OSDs, as recommended.
In order to do so without new hardware, I'm planning to create all the monitors as virtual machines on top of my hypervisors (Xen).
I'm testing a pool of 8 Xen nodes.
I'm thinking about creating 8 monitors and pinning one monitor f
cache probably won’t hurt either (unless you
know your workload won’t include any cacheable reads)
Cheers,
Robert van Leeuwen
From: ceph-users on behalf of Massimiliano Cuttini
Organization: PhoenixWeb Srl
Date: Wednesday, July 5, 2017 at 10:54 AM
To: "ceph-users@lists.ceph.com"
Dear all,
Luminous is coming, and soon we should be able to avoid double writes.
This means using 100% of the speed of SSDs and NVMe.
Clusters made entirely of SSDs and NVMe will not be penalized and will start to make sense.
Looking forward, I'm building the next storage pool, which we'll set up on ne
On Sun, Jun 25, 2017 at 11:28:37PM +0200, Massimiliano Cuttini wrote:
On 25/06/2017 21:52, Mykola Golub wrote:
On Sun, Jun 25, 2017 at 06:58:37PM +0200, Massimiliano Cuttini wrote:
I can see the error even when I simply run list-mapped:
# rbd-nbd list-mapped
/dev/nbd0
2017-06
Hi Saumay,
I think you should take into account tracking SMART on every SSD found.
If it has SMART capabilities, then track its tests (or run periodic tests) and display its values on the dashboard (or a separate graph).
This allows admins to forecast which OSD will die next.
Preventing is better than res
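For what it's worth, the raw data is already easy to pull with smartmontools; a minimal sketch (device names are placeholders):

# smartctl -H /dev/sda         # overall health verdict
# smartctl -A /dev/sda         # attribute table: wear level, reallocated sectors, ...
# smartctl -t short /dev/sda   # kick off a short self-test

A dashboard would only need to poll these per OSD device and graph the wear attributes over time.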
On 25/06/2017 21:52, Mykola Golub wrote:
On Sun, Jun 25, 2017 at 06:58:37PM +0200, Massimiliano Cuttini wrote:
I can see the error even when I simply run list-mapped:
# rbd-nbd list-mapped
/dev/nbd0
2017-06-25 18:49:11.761962 7fcdd9796e00 -1 asok(0x7fcde3f72810
already raised by
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-July/011938.html
Joecyw, how did you solve this issue?
On 25/06/2017 16:03, Massimiliano Cuttini wrote:
UNIX domain socket to '/var/run/ceph/ceph-client.admin.asok'
instance's librbd log file?
On Sun, Jun 25, 2017 at 4:30 AM, Massimiliano Cuttini
wrote:
After 4 months of testing we decided to go live and store real VDIs in production.
However, that very same day something suddenly went wrong.
The last copy of the VDI in Ceph was corrupted.
Trying to fix the
After 4 months of testing we decided to go live and store real VDIs in production.
However, that very same day something suddenly went wrong.
The last copy of the VDI in Ceph was corrupted.
Fixing the filesystem made it possible to open it, but mysqld never came online even after reinstalling, but only f
What seems strange is that features are *all disabled* when I create some images.
Ceph should at least use the default settings of Jewel.
Do I need to put something in ceph.conf in order to use the default settings?
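If the defaults really are being lost, one way to force them is to set the feature bitmask explicitly on the client side; a minimal ceph.conf sketch, assuming the stock Jewel default set (layering + exclusive-lock + object-map + fast-diff + deep-flatten = bits 1+4+8+16+32):

[client]
rbd default features = 61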
On 23/06/2017 23:43, Massimiliano Cuttini wrote:
I guess you
On Fri, Jun 23, 2017, 4:33 PM Massimiliano Cuttini <mailto:m...@phoenixweb.it> wrote:
Ok,
At the moment my clients use only rbd-nbd; can I use all these features, or is this something unavoidable?
I guess it's ok.
Reading around, it seems that a lost feature cannot be re-enabled
enable them in, at least that order should work.
On Fri, Jun 23, 2017 at 3:41 PM Massimiliano Cuttini <mailto:m...@phoenixweb.it> wrote:
Hi everybody,
I just realized that all my images are completely without features:
rbd info VHD-4c7ebb38-b081-48da-9b57-aac14bdf88c4
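For the record, the togglable features can be re-enabled afterwards, but only in dependency order (object-map needs exclusive-lock, fast-diff needs object-map); a minimal sketch (the pool name is a placeholder):

$ rbd feature enable mypool/VHD-4c7ebb38-b081-48da-9b57-aac14bdf88c4 exclusive-lock
$ rbd feature enable mypool/VHD-4c7ebb38-b081-48da-9b57-aac14bdf88c4 object-map
$ rbd feature enable mypool/VHD-4c7ebb38-b081-48da-9b57-aac14bdf88c4 fast-diff
$ rbd object-map rebuild mypool/VHD-4c7ebb38-b081-48da-9b57-aac14bdf88c4   # repopulate the new object map

Layering, by contrast, can only be set at image creation time.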
driver. Because of this and similar improvements to Ceph, for which the kernel requires newer and newer versions, I've become a strong proponent of using the fuse, rgw, and librados/librbd client options to keep my clients in feature parity with my cluster's Ceph version.
On Fri,
.
,Ashley
Sent from my iPhone
On 23 Jun 2017, at 10:40 PM, Massimiliano Cuttini <mailto:m...@phoenixweb.it> wrote:
Ashley,
but... instead of using NVMe as a journal, why not add 2 OSDs to the cluster?
Increasing the number of OSDs instead of improving the performance of the current OSDs?
On 23
Hi everybody,
I just realized that all my images are completely without features:
rbd info VHD-4c7ebb38-b081-48da-9b57-aac14bdf88c4
rbd image 'VHD-4c7ebb38-b081-48da-9b57-aac14bdf88c4':
size 102400 MB in 51200 objects
order 21 (2048 kB objects)
block_name_
diff between DOM(0) speed and VM speed.
Is it normal?
On 23/06/2017 10:24, Massimiliano Cuttini wrote:
Hi Mark,
having 2 nodes for testing allows me to downgrade the replication to 2x (until production).
The SSDs have the following product details:
* sequential read: 540MB/sec
* seque
to also offer LIO/TCMU starting with
Luminous and the next point release of CentOS (or a vanilla >=4.12-ish
kernel).
On Fri, Jun 23, 2017 at 5:31 AM, Massimiliano Cuttini
wrote:
Dear all,
running all servers and clients on a CentOS release with a 3.10.* kernel, I'm facing this choice:
sa
you're getting a decently rated NVMe, your bottleneck will be the NVMe, but it will still improve over your current bottleneck.
You could add two NVMe OSDs, but their higher performance would be lost among the other 12 OSDs.
,Ashley
Sent from my iPhone
On 23 Jun 2017, at 8:34 PM, Massimiliano Cu
Hi Ashley,
You could move your journal to another SSD; this would remove the double write.
If I move the journal to another SSD, I will lose an available OSD, so this is like saying improve by *2x* and then decrease by *½x*...
this should not improve performance in any case on a full-SSD di
rg wrote:
On 22/06/2017 19:19, Massimiliano Cuttini wrote:
We are already expecting the following bottlenecks:
* [ SATA speed x n° of disks ] = 24Gbit/s
* [ Network speed x n° of bonded cards ] = 200Gbit/s
6Gbps SATA does not mean you can read 6Gbps from that d
probably have maxed out
on your disks. But the above tools should help as you grow and tune
your cluster.
Cheers,
Maged Mokhtar
PetaSAN
On 2017-06-22 19:19, Massimiliano Cuttini wrote:
Hi everybody,
I want to squeeze all the performance out of Ceph (we are using Jewel 10.2.7).
We are testing a
readahead on the OSD devices to see if that improves things at all. Still, unless I've missed something, these numbers aren't terrible.
Mark
On 06/22/2017 12:19 PM, Massimiliano Cuttini wrote:
Hi everybody,
I want to squeeze all the performance out of Ceph (we are using Jewel 10.2.7).
up to ~500MB/s and Sequential write speeds up to
460MB/s. Not too far off from what you are seeing. You might try
playing with readahead on the OSD devices to see if that improves
things at all. Still, unless I've missed something these numbers
aren't terrible.
Mark
On 06/22/20
Dear all,
running all servers and clients on a CentOS release with a 3.10.* kernel, I'm facing this choice:
* sacrifice TUNABLES and downgrade the whole cluster to CEPH_FEATURE_CRUSH_TUNABLES3 (which should be the right profile for Jewel on the old 3.10 kernel)
* sacrifice KERNEL RBD and map Cep
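A minimal sketch of the tunables side of that trade-off (the profile name follows the premise above that TUNABLES3, i.e. the firefly profile, is the newest a 3.10 kernel can map):

$ ceph osd crush show-tunables     # inspect what the cluster currently requires
$ ceph osd crush tunables firefly  # pin the profile so old kernel clients can still map

Be aware that changing the profile can trigger significant data movement.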
Hi everybody,
I want to squeeze all the performance out of Ceph (we are using Jewel 10.2.7).
We are testing an environment with 2 nodes having the same configuration:
* CentOS 7.3
* 24 CPUs (12 physical cores, with hyper-threading)
* 32GB of RAM
* 2x 100Gbit/s ethernet cards
* 2x OS-dedicated i
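As a baseline before tuning, raw cluster throughput can be measured from any client with rados bench; a minimal sketch (the pool name is a placeholder):

$ rados bench -p testpool 60 write -t 32 --no-cleanup   # 60 s of 4 MB writes, 32 in flight
$ rados bench -p testpool 60 seq -t 32                  # sequential reads of what was written
$ rados -p testpool cleanup                             # remove the benchmark objects

Comparing these numbers against the SATA and network ceilings discussed in this thread shows which limit you are actually hitting.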
s,
Best,
German Anders
2017-04-26 11:21 GMT-03:00 Massimiliano Cuttini <mailto:m...@phoenixweb.it>:
On a Ceph Monitor/OSD server, can I run just:
*yum update -y*
in o
On a Ceph Monitor/OSD server, can I run just:
*yum update -y*
in order to upgrade the system and packages, or will this mess up Ceph?
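One cautious pattern (setting noout is standard practice so CRUSH doesn't start rebalancing while daemons restart; the exclude glob is an assumption, for when you want to hold Ceph itself back):

# ceph osd set noout                  # suspend out-marking during the maintenance window
# yum update -y --exclude="ceph*"     # upgrade the OS but keep Ceph packages pinned
# ceph osd unset noout                # re-enable once the daemons are back up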
Ah ...
On 02/03/2017 15:56, Jason Dillaman wrote:
I'll refer you to the man page for blkdiscard [1]. Since it operates
on the block device, it doesn't know about filesystem holes and
instead will discard all data specified (i.e. it will delete all your
data).
[1] http://man7.org/linux/man
On 02/03/2017 14:11, Jason Dillaman wrote:
On Thu, Mar 2, 2017 at 8:09 AM, Massimiliano Cuttini wrote:
Ok,
then, if the command comes from the hypervisor that holds the image, is it safe?
No, it needs to be issued from the guest VM -- not the hypervisor that
is running the guest VM. The
image via the rbd CLI, but no work has
been started on it yet.
[1] http://tracker.ceph.com/issues/13706
On Thu, Mar 2, 2017 at 5:16 AM, Massimiliano Cuttini wrote:
Thanks Jason,
I need some further info, because I'm really worried about ruining my data.
On this pool I have only XEN virtual
On 01/03/2017 20:11, Jason Dillaman wrote:
You should be able to issue an fstrim against the filesystem on top of
the nbd device or run blkdiscard against the raw device if you don't
have a filesystem.
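A minimal sketch of both options (device and mountpoint are placeholders; note the caveat above that blkdiscard wipes the whole device):

# mount /dev/nbd0 /mnt/image      # filesystem sitting on top of the nbd device
# fstrim -v /mnt/image            # punch out unused blocks; the RBD image reclaims the space
# blkdiscard /dev/nbd0            # no filesystem: discards EVERYTHING on the device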
On Wed, Mar 1, 2017 at 1:26 PM, Massimiliano Cuttini wrote:
Dear all,
I use the rbd-
Dear all,
I use the rbd-nbd connector.
Is there a way to reclaim free space from an RBD image using this component, or not?
Thanks,
Max
On 27 February 2017 10:52:56 CET, Massimiliano Cuttini wrote:
It happened to me that the OS got corrupted.
I just reinstalled the OS and deployed the monitor.
While I was about to zap and reinstall the OSDs, I found that my OSDs were already running again.
Magically
On 27/02/2017
/02/2017 19:41, Simon Weald wrote:
Is there a performance hit when using rbd-nbd?
On 27/02/17 18:34, Massimiliano Cuttini wrote:
But if everybody gets a kernel mismatch (me too)
... why not use rbd-nbd directly and forget about kernel RBD?
All features, almost the same performance
But if everybody gets a kernel mismatch (me too)
... why not use rbd-nbd directly and forget about kernel RBD?
All features, almost the same performance.
No?
On 27/02/2017 18:54, Ilya Dryomov wrote:
On Mon, Feb 27, 2017 at 6:47 PM, Shinobu Kinjo wrote:
We already discussed this:
https
Dear all,
I have 3 nodes with 4 OSDs each,
and I would like to have 6 replicas.
So, 2 replicas per node.
Does anybody know how to allow CRUSH to use the same node twice, but different OSDs?
Thanks,
Max
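One way to express that layout is a CRUSH rule that first picks all 3 hosts and then 2 distinct OSDs on each; a minimal sketch against a decompiled map (the rule name and ruleset id are placeholders):

rule replicated_2x_per_host {
    ruleset 1
    type replicated
    min_size 6
    max_size 6
    step take default
    step choose firstn 3 type host       # one branch per node
    step chooseleaf firstn 2 type osd    # two different OSDs inside each node
    step emit
}

The usual workflow: ceph osd getcrushmap -o map.bin; crushtool -d map.bin -o map.txt; edit; crushtool -c map.txt -o new.bin; ceph osd setcrushmap -i new.bin; then set the pool's size to 6 and point it at the new rule.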
It happened to me that the OS got corrupted.
I just reinstalled the OS and deployed the monitor.
While I was about to zap and reinstall the OSDs, I found that my OSDs were already running again.
Magically
On 27/02/2017 10:07, Iban Cabrillo wrote:
Hi,
Could I reinstall the server and try only to
performance (no intermediaries), but the kernel is exposed to upstream attacks.
On 26/02/2017 06:04, Lindsay Mathieson wrote:
On 26/02/2017 12:12 AM, Massimiliano Cuttini wrote:
The pity is that it is based on KVM, which, as far as I know, is a light hypervisor that is not able to isolate the virtu
Have you considered Proxmox at all? It integrates nicely with Ceph storage. I moved from XenServer a long time ago and have no regrets.
Thanks
Brians
On Sat, Feb 25, 2017 at 12:47 PM, Massimiliano Cuttini <mailto:m...@phoenixweb.it> wrote:
Hi Iban,
you are running Xen (just the softwar
--
From: "Massimiliano Cuttini"
To: "ceph-users"
Sent: Friday, 24 February, 2017 14:52:37
Subject: [ceph-users] Ceph on XenServer
Dear all,
even though Ceph has supposedly been officially supported by Xen for 4 years:
*
http://xenserver.org
ils 4.6.0-1ubuntu4.1
amd64  Xenstore command line utilities for Xen
2017-02-24 15:52 GMT+01:00 Massimiliano Cuttini <mailto:m...@phoenixweb.it>:
Dear all,
even though Ceph has supposedly been officially supported by Xen for 4 years:
*
http
Dear all,
even though Ceph has supposedly been officially supported by Xen for 4 years:
* http://xenserver.org/blog/entry/tech-preview-of-xenserver-libvirt-ceph.html
* https://ceph.com/geen-categorie/xenserver-support-for-rbd/
there is still no support.
At this point there are only some self-made pl
Hi Travis,
can I have a developer account or tester account in order to submit issues myself?
Thanks,
Massimiliano Cuttini
On 18/11/2014 23:03, Travis Rhoden wrote:
I've captured this at http://tracker.ceph.com/issues/10133
On Tue, Nov 18, 2014 at 4:48 PM, Travis Rhoden <mai
Every time I try to create a second OSD I get this hang:
$ ceph-deploy osd activate ceph-node2:/var/local/osd1
[cut ...]
[ceph_deploy.cli][INFO ] Invoked (1.5.20): /usr/bin/ceph-deploy osd
activate ceph-node2:/var/local/osd1
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks
On Tue, Nov 18, 2014 at 4:41 PM, Massimiliano Cuttini <mailto:m...@phoenixweb.it> wrote:
I solved it by installing the EPEL repo in yum.
I think that somebody should write down in the documentation that EPEL is mandatory.
On 18/11/2014 14:29, Massim
I solved it by installing the EPEL repo in yum.
I think that somebody should write down in the documentation that EPEL is mandatory.
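A minimal sketch of the fix (node1 is a placeholder; EPEL provides the leveldb/gperftools dependencies named in the error below):

$ yum install -y epel-release    # on each target node, before running ceph-deploy
$ ceph-deploy install node1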
On 18/11/2014 14:29, Massimiliano Cuttini wrote:
Dear all,
I try to install Ceph but I get errors:
# ceph-deploy install node1
Requires: libtcmalloc.so.4()(64bit)
[node1][WARNIN] Requires: libleveldb.so.1()(64bit)
[node1][WARNIN] Requires: libtcmalloc.so.4()(64bit)
This seems strange.
Can you fix this?
Thanks,
Massimiliano Cuttini
Any hint?
On 30/10/2014 15:22, Massimiliano Cuttini wrote:
Dear Ceph users,
I just received 2 fresh new servers and I'm starting to develop my Ceph cluster.
The first step is: create the admin node in order to control the whole cluster remotely.
I have a big cluster of XEN server
Dear Ceph users,
I just received 2 fresh new servers and I'm starting to develop my Ceph cluster.
The first step is: create the admin node in order to control the whole cluster remotely.
I have a big cluster of XEN servers and I'll set up a new VM there just for this.
I need some info:
1) As f
On 08/10/2014 14:39, Nathan Stratton wrote:
On Wed, Oct 8, 2014 at 8:15 AM, Massimiliano Cuttini <mailto:m...@phoenixweb.it> wrote:
If you want to build it up with Vyatta.
And this gives you the possibility to have a fully featured OS.
What kind of hardware would you
not so cheap. ^^
More below.
On Tue Oct 07 2014 at 11:05:23 AM Massimiliano Cuttini
wrote:
Hi Christian,
When you say "10 gig infiniband", do you mean QDRx4 Infiniband
(usually flogged as 40Gb/s even though it is 32Gb/s, but who's
counting), which tends to be the same basic
Hi Christian,
When you say "10 gig infiniband", do you mean QDRx4 Infiniband (usually
flogged as 40Gb/s even though it is 32Gb/s, but who's counting), which
tends to be the same basic hardware as the 10Gb/s Ethernet offerings from
Mellanox?
A brand new 18 port switch of that caliber will only co
On 02/10/2014 17:24, Christian Balzer wrote:
On Thu, 02 Oct 2014 12:20:06 +0200 Massimiliano Cuttini wrote:
On 02/10/2014 03:18, Christian Balzer wrote:
On Wed, 01 Oct 2014 20:12:03 +0200 Massimiliano Cuttini wrote:
Hello Christian,
On 01/10/2014 19:20, Christian Balzer wrote
I don't think this is true.
If you have an SSD of 60GB or 100GB, then your TBW/day is really limited (the disk is small, so it will always write to the same sectors).
The bigger the SSD, the longer it will live: you have a limited number of writes per day, so if your disk is bigger you have more sectors to
On 02/10/2014 03:18, Christian Balzer wrote:
On Wed, 01 Oct 2014 20:12:03 +0200 Massimiliano Cuttini wrote:
Hello Christian,
On 01/10/2014 19:20, Christian Balzer wrote:
Hello,
On Wed, 01 Oct 2014 18:26:53 +0200 Massimiliano Cuttini wrote:
Dear all,
I need a few tips about Ceph
Hello Christian,
On 01/10/2014 19:20, Christian Balzer wrote:
Hello,
On Wed, 01 Oct 2014 18:26:53 +0200 Massimiliano Cuttini wrote:
Dear all,
I need a few tips about the best Ceph solution for the drive controller.
I'm getting confused about IT mode, RAID and JBOD.
I read many posts
What would you recommend?
I'm getting dumb reading tons of specs by myself without any second human opinion.
Thank you for any hints you'll give!
--
Massimiliano Cuttini