> reusing the partitions I set up for journals on my SSDs as DB
> devices for Bluestore HDDs without specifying anything to do with the WAL,
> and I'd like to know sooner rather than later if I'm making some sort of
> horrible mistake.
>
> Rich
> --
> Richard Hesketh
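
For reference, a minimal sketch of the setup described in the quoted question, using ceph-volume on Luminous (device names are hypothetical): the old SSD journal partition is passed as the DB device, and no WAL device is given, so the WAL simply lives inside the DB partition.

  # /dev/sdd1 = hypothetical old journal partition on the SSD, /dev/sdb = the HDD
  ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sdd1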
fresh.
Any how-tos? Experiences? I don't seem to find an official way of doing it.
Best.
> filestore. The
> cluster doesn't really know if an osd is filestore or bluestore... It's
> just an osd running a daemon.
>
> If there are any differences, they would be in the release notes for
> Luminous as changes from Jewel.
>
> On Sat, Sep 30, 2017, 6:28 PM Alejandro Comisario wrote:
> What's been mentioned on the ML
> is that you want to create a partition for the DB and the WAL will use it.
> A partition for the WAL is only if it is planned to be on a different
> device than the DB.
>
> On Tue, Oct 10, 2017 at 5:59 PM Alejandro Comisario
> wrote:
>
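
To double-check that the WAL really co-locates with the DB when only a DB partition is given, something like the following can be used (the OSD id is hypothetical): ceph-volume shows a [db] entry but no [wal] entry, and on disk there is a block.db symlink but no block.wal.

  ceph-volume lvm list
  # on the OSD host; only block and block.db symlinks should be present
  ls -l /var/lib/ceph/osd/ceph-3/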
Hi all, I have to hot-swap a failed OSD on a Luminous cluster with BlueStore
(the disk is SATA, WAL and DB are on NVMe).
I've issued:
* ceph osd crush reweight osd_id 0
* systemctl stop (the OSD's ceph-osd daemon)
* umount /var/lib/ceph/osd/osd_id
* ceph osd destroy osd_id
Everything seems OK, but if I l
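
For what it's worth, a sketch of how the replacement disk is usually brought back in after "ceph osd destroy", reusing the freed id (device names, id and weight are placeholders, not taken from the thread; --osd-id needs a recent Luminous ceph-volume):

  # optionally wipe the new disk first
  ceph-volume lvm zap /dev/sdX
  # recreate the OSD under the same id, with the DB back on the NVMe partition
  ceph-volume lvm create --bluestore --osd-id <osd_id> --data /dev/sdX --block.db /dev/nvme0n1p4
  # once it is up and backfilling, restore the CRUSH weight that was set to 0
  ceph osd crush reweight osd.<osd_id> <original_weight>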
Hi guys, any tip or help?
On Mon, Oct 16, 2017 at 1:50 PM, Alejandro Comisario
wrote:
> Hi all, I have to hot-swap a failed OSD on a Luminous cluster with BlueStore
> (the disk is SATA, WAL and DB are on NVMe).
>
> I've issued:
> * ceph osd crush reweight osd_id 0
>
> 1) Include an example of an actual message you are seeing in dmesg.
> 2) Provide the output of # ceph status
> 3) Provide the output of # ceph osd tree
>
> Regards,
> Jamie Fargen
>
>
>
> On Tue, Oct 17, 2017 at 4:34 PM, Alejandro Comisario <
> alejan...@nubeliu.com> wrote:
> be
> present in dmesg until the kernel ring buffer is overwritten or the system
> is restarted.
>
> -Jamie
>
>
> On Tue, Oct 17, 2017 at 6:47 PM, Alejandro Comisario <
> alejan...@nubeliu.com> wrote:
>
>> Jamie, thanks for replying, info is as follows:
>>
>
Hi, we have a 7-node Ubuntu Ceph Hammer cluster (78 OSDs to be exact).
This weekend we've experienced a huge outage for our customers' VMs
(located on pool CUSTOMERS, replica size 3) when lots of OSDs
started to slow-request/block PGs on pool PRIVATE (replica size 1);
basically all PGs blocked wh
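
As a starting point for this kind of incident, a sketch of the usual triage commands (osd.12 is only a placeholder): ceph health detail names the OSDs reporting blocked requests, and the admin socket shows what each one is stuck on.

  ceph health detail | grep -iE 'slow|blocked'
  # run on the node hosting a flagged OSD
  ceph daemon osd.12 dump_ops_in_flight
  ceph daemon osd.12 dump_historic_ops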
> OSD memory queues and block
> access to other pools as it cascades across the system.
>
> On Sun, Mar 5, 2017 at 6:22 PM Alejandro Comisario
> wrote:
>
>> Hi, we have a 7-node Ubuntu Ceph Hammer cluster (78 OSDs to be exact).
>> This weekend we've experienced a hug
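
One way to confirm which pool the blocked PGs actually belong to (a hedged sketch, not taken from the thread): the number before the dot in a PG id is the pool id.

  ceph pg dump_stuck unclean
  # e.g. a stuck PG "5.1a" belongs to pool 5; map pool ids to names with:
  ceph osd lspools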
Any thoughts?
On Tue, Mar 7, 2017 at 3:17 PM, Alejandro Comisario
wrote:
> Gregory, thanks for the response; what you've said is by far the most
> enlightening thing I've learned about Ceph in a long time.
>
> What raises even greater doubt is this "non-functional"
The cluster worked
perfectly for about two weeks, until this happened.
After the resolution from my first email, everything has been working
perfectly.
Thanks for the responses.
On Fri, Mar 10, 2017 at 4:23 PM, Gregory Farnum wrote:
>
>
> On Tue, Mar 7, 2017 at 10:18 AM Alejandro Comisario
Hi, I've been using Ceph for a while now, and I'm still a little
ashamed that when certain situations happen, I don't have the knowledge
to explain or plan things.
Basically, here is what I don't know, worked through as an exercise.
EXERCISE:
A virtual machine running on KVM has an extra block device where
Anyone?
On Fri, Mar 17, 2017 at 5:40 PM, Alejandro Comisario
wrote:
> Hi, I've been using Ceph for a while now, and I'm still a little
> ashamed that when certain situations happen, I don't have the knowledge
> to explain or plan things.
>
> Basically, here is what I don't know
ter/architecture/#data-striping
> [2] https://en.wikipedia.org/wiki/Maximum_transmission_unit
>
>
>
> On Mon, Mar 20, 2017 at 5:24 PM, Alejandro Comisario
> wrote:
>> Anyone?
>>
>> On Fri, Mar 17, 2017 at 5:40 PM, Alejandro Comisario
>> wrote:
>>
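
Since the reply above points at the data-striping documentation, here is a small illustrative sketch (pool and image names, and the striping values, are made up) of how striping is set per RBD image and inspected afterwards:

  # 64 KiB stripe units spread across 4 objects (example values only)
  rbd create mypool/striped-vol --size 10G --stripe-unit 65536 --stripe-count 4
  rbd info mypool/striped-vol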
Any thoughts?
On Tue, Mar 14, 2017 at 10:22 PM, Alejandro Comisario wrote:
> Greg, thanks for the reply.
> True that I can't provide enough information to know what happened, since
> the pool is gone.
>
> But based on your experience, can I please take some of your time, and
>
> the queue can still fill with
> requests not related to the blocked pg/objects? I would love for ceph to
> handle this better. I suspect some issues I have are related to this (slow
> requests on one VM can freeze others [likely blame the osd], even requiring
> kill -9 [likely blame client libr
Hi everyone!
I have to install a Ceph cluster (6 nodes) with two "flavors" of
disks: 3 servers with SSDs and 3 servers with SATA disks.
I will purchase 24-disk servers (the SATA ones with NVMe SSDs for
the SATA journals).
Processors will be 2 x E5-2620v4 with HT, and RAM will be 20GB for the
OS, and 1.
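
With Jewel (before CRUSH device classes), splitting SSD and SATA usually means separate CRUSH roots and rules; a rough sketch with hypothetical bucket, host and pool names:

  ceph osd crush add-bucket ssd-root root
  ceph osd crush add-bucket sata-root root
  ceph osd crush move node1-ssd root=ssd-root          # repeat for each host
  ceph osd crush rule create-simple ssd-rule ssd-root host
  ceph osd crush rule create-simple sata-rule sata-root host
  # point each pool at the matching rule id (see "ceph osd crush rule dump")
  ceph osd pool set ssd-pool crush_ruleset 1

(On Luminous and later, CRUSH device classes make this separation much simpler.)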
> journal you could generate up to 250MB/s of traffic per SSD OSD (24Gbps for
> 12x or 48Gbps for 24x), therefore I would consider doing 4x10G and
> consolidating both client and cluster networks on that.
>
> Cheers,
> Maxime
>
> On 23/03/17 18:55, "ceph-users on behalf of Alejandro Comisario" wrote:
Hi guys.
I have a Jewel cluster divided into two racks, which is configured in
the CRUSH map.
I have clients (OpenStack compute nodes) that are closer to one rack
than to the other.
I would love (if it is possible) to somehow specify that the clients
read first from the nodes in a specific rack t
Any experiences?
On Wed, Mar 29, 2017 at 2:02 PM, Alejandro Comisario
wrote:
> Hi guys.
> I have a Jewel cluster divided into two racks, which is configured in
> the CRUSH map.
> I have clients (OpenStack compute nodes) that are closer to one rack
> than to the other.
>
> I
>> >> localize all reads to the VM image. You can, however, enable
>> >> localization of the parent image since that is a read-only data set.
>> >> To enable that feature, set "rbd localize parent reads = true" and
>> >> populate the "
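
Presumably the truncated part refers to the client's CRUSH location; a hedged sketch of what that could look like in ceph.conf on a compute node (rack and host names are hypothetical):

  [client]
      rbd localize parent reads = true
      crush location = rack=rack1 host=compute-01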
Hi all, I have a multi-datacenter, 6-node (6 OSD) Ceph Jewel cluster.
There are 3 pools in the cluster, all three with size 3 and min_size 2.
Today I shut down all three nodes (controlled and in order) in
datacenter "CPD2" just to validate that everything keeps working in
"CPD1", which it did (incl
Peter, hi.
Thanks for the reply; let me check that out and get back to you.
On Wed, Jun 7, 2017 at 4:13 AM, Peter Maloney
wrote:
> On 06/06/17 19:23, Alejandro Comisario wrote:
>> Hi all, I have a multi-datacenter, 6-node (6 OSD) Ceph Jewel cluster.
>> There are 3 pools in the clu
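
Regarding the controlled shutdown of CPD2 described above, a sketch of the flags and checks commonly used around such a maintenance window (the pool name is a placeholder):

  # avoid rebalancing while CPD2 is intentionally down
  ceph osd set noout
  # confirm each pool can still serve writes with one DC gone (needs min_size surviving replicas)
  ceph osd pool get <pool> size
  ceph osd pool get <pool> min_size
  # when CPD2 is back and recovered
  ceph osd unset noout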
Peter, hi ... what happened to me is exactly what happened to you;
thanks so much for pointing that out!
I'm amazed at how you realized that was the problem!!
Maybe that will help me troubleshoot a bit more like a pro.
Best.
On Wed, Jun 7, 2017 at 5:06 PM, Alejandro Comisario
wrote:
> P
Ha!
Is there ANY way of knowing when this peering maximum has been reached for
a PG?
On Jun 7, 2017 20:21, "Brad Hubbard" wrote:
> On Wed, Jun 7, 2017 at 5:13 PM, Peter Maloney
> wrote:
>
> >
> > Now if only there was a log or warning seen in ceph -s that said the
> > tries were exceeded,
>
> Ch
On Thu, Jun 8, 2017 at 2:20 AM, Brad Hubbard wrote:
> On Thu, Jun 8, 2017 at 2:59 PM, Alejandro Comisario
> wrote:
>> Ha!
>> Is there ANY way of knowing when this peering maximum has been reached for a
>> PG?
>
> Not currently AFAICT.
>
> It takes plac
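
Assuming the "tries" being discussed here is the CRUSH tunable choose_total_tries, a sketch of how it can be inspected, tested offline, and raised (file names are arbitrary):

  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt      # look for "tunable choose_total_tries"
  # mappings that ran out of tries show up as bad mappings here
  crushtool -i crushmap.bin --test --rule 0 --num-rep 3 --show-bad-mappings
  # raise the tunable in crushmap.txt, then recompile and inject it
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new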
You might want to configure cinder.conf with
verbose = true
debug = true
and check /var/log/cinder/cinder-volume.log after a "systemctl restart
cinder-volume" to see the real cause.
Best,
alejandrito
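
In other words, something along these lines (section and paths as usually found in a stock cinder.conf):

  # /etc/cinder/cinder.conf
  [DEFAULT]
  verbose = True
  debug = True

  # then:
  systemctl restart cinder-volume
  tail -f /var/log/cinder/cinder-volume.log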
On Mon, Jun 19, 2017 at 6:25 PM, T. Nichole Williams
wrote:
> Hello,
>
> I’m having trouble con