Hi all,
I'm using CephFS on Hammer and sometimes I need to reboot one or more clients
because, as ceph -s tells me, the client is "failing to respond to capability
release". After that, all clients stop responding: they can't access files or
mount/unmount CephFS.
I have 1.5 million files and 2 metadata servers.
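For what it's worth, the client holding unreturned capabilities can usually be identified from the MDS admin socket before resorting to a reboot; a minimal sketch, with the MDS name and client id as placeholders (these asok commands may not all be available on a release as old as Hammer):
    ceph daemon mds.<name> session ls                  # list client sessions and the caps each one holds
    ceph daemon mds.<name> session evict <client-id>   # drop the unresponsive session instead of rebooting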
Hi Lincoln,
I'm using the kernel client.
Kernel version is: 3.13.0-53-generic
Thanks,
Matteo
From: Lincoln Bryant
Sent: Sunday, June 14, 2015 19:31
To: Matteo Dacrema; ceph-users
Subject: Re: [ceph-users] CephFS client issue
Hi Matteo,
Are your clients using the kernel client or ceph-fuse, and which kernel version?
Ok, I'll update kernel to 3.16.3 version and let you know.
Thanks,
Matteo
From: John Spray
Sent: Monday, June 15, 2015 10:51
To: Matteo Dacrema; Lincoln Bryant; ceph-users
Subject: Re: [ceph-users] CephFS client issue
On 14/06/15 20:00, Matteo Dacrema wrote:
From: ceph-users on behalf of Matteo Dacrema
Sent: Monday, June 15, 2015 12:37
To: John Spray; Lincoln Bryant; ceph-users
Subject: Re: [ceph-users] CephFS client issue
Ok, I'll update kernel to 3.16.3 version and let you know.
Thanks,
Matteo
Hi,
I shut off the node without taking any precautions, to simulate a real failure.
The osd_pool_default_min_size is 2.
Regards,
Matteo
From: Christian Balzer
Sent: Tuesday, June 16, 2015 01:44
To: ceph-users
Cc: Matteo Dacrema
Subject: Re: [ceph-
Hello,
you're right.
I misunderstood the meaning of the two configuration params: size and min_size.
Now it works correctly.
Thanks,
Matteo
From: Christian Balzer
Sent: Tuesday, June 16, 2015 09:42
To: ceph-users
Cc: Matteo Dacrema
Subject: Re:
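For reference, the two settings are per pool and can be checked and changed at run time; a minimal sketch, with the pool name as a placeholder:
    ceph osd pool get <pool> size        # number of replicas kept
    ceph osd pool get <pool> min_size    # replicas required before I/O is allowed
    ceph osd pool set <pool> min_size 1  # e.g. keep serving I/O with a single surviving copy
With size=2 and min_size=2, losing one OSD of an acting set blocks I/O on its PGs until recovery completes, which matches the behaviour described above.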
Hi all,
I'm using CephFS on Hammer with 1.5 million files, 2 metadata servers in
active/standby configuration with 8 GB of RAM each, 20 clients with 2 GB of RAM
each, and 2 OSD nodes with 4 80GB OSDs and 4 GB of RAM each.
I've noticed that if I kill the active metadata server, the second one takes
abou
Hi all,
I've recently bought two Samsung SM951 256GB NVMe PCIe SSDs and built a 2-OSD Ceph
cluster with min_size = 1.
I tested them with fio and obtained two very different results in these two
situations.
This is the command: fio --ioengine=libaio --direct=1 --name=test
--filena
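The gap between the two situations usually comes down to whether the test forces synchronous writes, which is what the filestore journal does; a minimal sketch of a journal-style test (the device path is a placeholder, and the run overwrites that device):
    fio --name=journal-test --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --sync=1 \
        --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting
Consumer NVMe drives such as the SM951 have no power-loss-protected cache, so synchronous 4k writes can come out far slower than plain direct writes.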
Hi Nick,
I also tried increasing iodepth but nothing changed.
With iostat I noticed that the disk is fully utilized and the writes per second
reported by iostat match the fio output.
Matteo
From: Nick Fisk [mailto:n...@fisk.me.uk]
Sent: Monday, October 26, 2015 13:06
To: Matteo Dacrema ; ceph-us
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Christian Balzer
Sent: Monday, October 26, 2015 8:23 AM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] BAD nvme SSD performance
Hello,
On Mon, 26 Oct 2015 14:35:19 +0100 Wido den Hollander wrote:
>
>
> On 26-10-15 14:29, Matteo Dacrem
Hi all,
I’m testing Ceph Luminous 12.2.1 installed with ceph-ansible.
Doing some failover tests, I noticed that when I kill an OSD or a host, Ceph
doesn’t recover automatically, remaining in this state until I bring the OSDs or
the host back online.
I have 3 pools: volumes, cephfs_data and cephfs_metadata.
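A few quick checks for this situation, assuming default settings; the mon id and pool name are placeholders:
    ceph osd dump | grep flags                                   # noout/norecover/nobackfill would block re-balancing
    ceph daemon mon.<id> config get mon_osd_down_out_interval    # grace period before a down OSD is marked out
    ceph osd pool get <pool> size                                # compare against the number of hosts
If the failure domain is "host" and the pool size equals the number of hosts, killing a host leaves the PGs degraded with nowhere to recover to until it returns, which looks exactly like this.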
:21, Matteo Dacrema
> wrote:
>
> Hi all,
>
> I’m testing Ceph Luminous 12.2.1 installed with ceph-ansible.
>
> Doing some failover tests, I noticed that when I kill an OSD or a host, Ceph
> doesn’t recover automatically, remaining in this state until I bring the OSDs or
>
Hi all,
I’ve experienced a strange issue with my cluster.
The cluster is composed of 10 HDD nodes with 20 OSDs + 4 journal SSDs each, plus 4
SSD nodes with 5 SSDs each.
All the nodes are behind 3 monitors and 2 different crush maps.
The whole cluster is on 10.2.7.
About 20 days ago I started to notic
Update: I noticed that there was a PG that remained scrubbing from the first
day I found the issue until I rebooted the node and the problem disappeared.
Could this cause the behaviour I described before?
> On Nov 9, 2017, at 15:55, Matteo Dacrema wrote:
>
> Hi al
>> "event": "initiated"
>> },
>> {
>> "time": "2017-09-12 20:04:27.987862",
>> "event": "queued_for_pg"
>> },
>> {
>> "time"
Hi,
I noticed that sometimes the monitors start to log active+clean PGs multiple times
in the same line. For example, I have 18432 PGs and the log shows "2136
active+clean, 28 active+clean, 2 active+clean+scrubbing+deep, 16266
active+clean;"
After a minute the monitor starts to log correctly again.
Is it
Nov 14, 2017 at 1:09 AM Matteo Dacrema <mdacr...@enter.eu> wrote:
> Hi,
> I noticed that sometimes the monitors start to log active+clean PGs multiple
> times in the same line. For example, I have 18432 PGs and the log shows "2136
> active+clean, 28 active+clean, 2
Hi,
I need to switch a cluster of over 200 OSDs from replica 2 to replica 3.
There are two different crush maps for HDDs and SSDs, also mapped to two
different pools.
Is there a best practice to follow? Can this cause any trouble?
Thank you
Matteo
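For reference, the change itself is a per-pool setting; a minimal sketch with placeholder pool names (expect a long rebalance while the third copies are created, throttled by the usual recovery/backfill settings):
    ceph osd pool set <hdd-pool> size 3
    ceph osd pool set <hdd-pool> min_size 2
    ceph osd pool set <ssd-pool> size 3
    ceph osd pool set <ssd-pool> min_size 2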
up 1.0 1.0
> On Nov 20, 2017, at 12:17, Christian Balzer wrote:
>
>
> Hello,
>
> On Mon, 20 Nov 2017 11:56:31 +0100 Matteo Dacrema wrote:
>
>> Hi,
>>
>> I need to switch a cluster of over 200 OSDs from rep
Ok, thank you guys
The version is 10.2.10
Matteo
> On Nov 20, 2017, at 23:15, Christian Balzer wrote:
>
> On Mon, 20 Nov 2017 10:35:36 -0800 Chris Taylor wrote:
>
>> On 2017-11-20 3:39 am, Matteo Dacrema wrote:
>>> Yes I mean the existing Cl
Hi All,
I need to expand my Ceph cluster and I also need to increase the PG number.
In a test environment I see that during PG creation all read and write
operations are stopped.
Is that normal behavior?
Thanks
Matteo
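A minimal sketch of a staged increase, with a placeholder pool name and target; raising pg_num in several smaller steps and following each step with pgp_num keeps the creation and peering bursts short:
    ceph osd pool set <pool> pg_num 4096
    ceph osd pool set <pool> pgp_num 4096
Client I/O is not stopped for the whole operation, but the PGs being created and peered pause briefly, which is what the test environment shows.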
> "osd_recovery_max_active": "1",
>"osd_client_op_priority": "63",
>"osd_recovery_op_priority": "1"
>
> Cheers
> G.
>
> From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Matteo
> Dacrema [mdacr..
your users.
>
> From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of Matteo Dacrema
> <mdacr...@enter.eu>
> Date: Sunday, September 18, 2016 at 3:29 PM
> To: Goncalo Borges <goncalo.bor...@sydney.edu.au>, "ceph-
Hi All,
I’m trying to estimate how many IOPS (4k direct random write) my Ceph
cluster should deliver.
I have journals on SSDs and SATA 7.2k drives for the OSDs.
The question is: does the journal on SSD increase the maximum number of write IOPS,
or do I need to consider only the IOPS provided by the SATA drives?
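A rough back-of-the-envelope estimate, with assumed numbers rather than measurements: with filestore journals on SSD, each client write still costs roughly one write on a data HDD, so aggregate 4k random write IOPS ≈ (number of HDDs × ~100-150 IOPS per 7.2k drive) ÷ replica count; for example 24 drives × 150 ÷ 3 ≈ 1200 IOPS. With journals co-located on the HDDs every write hits the disk twice, roughly halving that. The SSD journal mainly absorbs short bursts and lowers write latency; sustained throughput remains bounded by the SATA drives.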
Thanks a lot guys.
I’ll try to do as you told me.
Best Regards
Matteo
Hi,
Has anyone ever tried to run a Ceph cluster on two different versions of the
OS?
In particular, I’m running a Ceph cluster half on Ubuntu 12.04 and half on
Ubuntu 14.04, with the Firefly version.
I’m not seeing any issues.
Are there any risks?
Thanks
Matteo
wrote:
>
> Hi,
>
> On 09/22/2016 03:03 PM, Matteo Dacrema wrote:
>
>> Has anyone ever tried to run a Ceph cluster on two different versions
>> of the OS?
>> In particular I’m running a ceph cluster half on Ubuntu 12.04 and half
>> on Ubuntu 14.04 with
Hi,
I’m planning a similar cluster.
Because it’s a new project I’ll start with only a 2-node cluster, each node with:
2x E5-2640v4 with 40 threads total @ 3.40Ghz with turbo
24x 1.92 TB Samsung SM863
128GB RAM
3x LSI 3008 in IT mode / HBA for OSDs - 1 per 8 OSDs/SSDs
2x SSD for OS
2x 40Gbit/s NIC
What
place the Journal inline.
Thanks
Matteo
> On Oct 11, 2016, at 03:04, Christian Balzer wrote:
>
>
> Hello,
>
> On Mon, 10 Oct 2016 14:56:40 +0200 Matteo Dacrema wrote:
>
>> Hi,
>>
>> I’m planning a similar cluster.
>> Becaus
Hi,
Has anyone ever tried to run Ceph monitors in containers?
Could it lead to performance issues?
Can I run monitor containers on the OSD nodes?
I don’t want to buy 3 dedicated servers. Is there any other solution?
Thanks
Best regards
Matteo Dacrema
Hi All,
Has anyone ever used or tested the SanDisk CloudSpeed Eco II 1.92TB with Ceph?
I know they are rated at 0.6 DWPD, which with an inline journal becomes only 0.3
DWPD, meaning roughly 560 GB of data per day over 5 years.
I need to know the performance side.
Thanks
Matteo
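A rough check of that figure: 1.92 TB × 0.6 DWPD ≈ 1.15 TB of rated writes per day; with the journal inline every client write lands on the drive twice, so the usable budget is about 0.3 DWPD ≈ 575 GB of client data per day, or roughly 1 PB per drive over the 5-year warranty.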
Hi All,
What happens if I set the pause flag on a production cluster?
I mean, will all requests remain pending/waiting, or will all the volumes attached
to the VMs become read-only?
I need to quickly increase the placement group number from 3072 to 8192, or better to
165336, and I think doing it without
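For reference, the flag is set and cleared cluster-wide, and while it is set client I/O is blocked entirely (reads and writes hang until the flag is removed) rather than volumes turning read-only:
    ceph osd set pause
    ceph osd unset pause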
client_op_priority, they might be interesting for you
>
> On 02/01/2017 14:37, Matteo Dacrema wrote:
>> Hi All,
>>
>> What happens if I set the pause flag on a production cluster?
>> I mean, will all requests remain pending/waiting, or will all the volumes
>> attach
Hi All,
I have a production cluster made of 8 nodes, 166 OSDs, and 4 journal SSDs per node
(one for every 5 OSDs), with replica 2, for a total RAW space of 150 TB.
I have a few questions about it:
Is it critical to have replica 2? Why?
Does replica 3 make recovery faster?
Does replica 3 make rebalancing and recovery less he
Hi all,
Does anyone run a production cluster with a modified crush map that creates two
pools, one belonging to HDDs and one to SSDs?
What’s the best method: modifying the crush map via the ceph CLI or via a text editor?
Will the modifications to the crush map be persistent across reboots and
maintenance op
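A minimal sketch of the decompile/edit/recompile route, with placeholder file, pool and rule names; the compiled map is stored by the monitors, so it persists across reboots:
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit crushmap.txt: add an ssd root/host hierarchy and a rule that selects it
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new
    ceph osd pool set <ssd-pool> crush_ruleset <rule-id>   # named crush_rule on Luminous and later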
Hi All,
I have a Galera cluster running on OpenStack with data on Ceph volumes capped
at 1500 IOPS for read and write (3000 total).
I can’t understand why with fio I can reach 1500 IOPS without iowait, while MySQL
can reach only 150 IOPS, for both reads and writes, showing 30% iowait.
I tried with fi
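One way to reproduce the gap is to make fio behave like InnoDB rather than like a deep-queue benchmark; a minimal sketch with placeholder path and sizes, using 16k blocks, queue depth 1 and an fsync after every write:
    fio --name=innodb-like --filename=/mnt/cephvol/fio-test --size=1G \
        --ioengine=libaio --direct=1 --rw=randwrite --bs=16k \
        --iodepth=1 --numjobs=1 --fsync=1 --runtime=60 --time_based
At queue depth 1 the achievable IOPS is limited by per-request round-trip latency to the cluster rather than by the 1500 IOPS cap, which is typically why MySQL sits far below what a parallel fio run reaches.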
B & don’t just use the avgrq-sz # ]
>
>
> --
> Deepak
>
>
>
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Matteo Dacrema
> Sent: Tuesday, March 07, 2017
> for the additional replica, and that would improve the recovery and read
> performance.
>
>
>
> Cheers,
>
> Maxime
>
>
>
> From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of Henrik Korkuc
> <li
fio and compare results.
>
>
>> -Original Message-
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>> Matteo Dacrema
>> Sent: Wednesday, 8 March 2017 7:52 AM
>> To: ceph-users
>> Subject: [ceph-users] MySQL and ceph volumes
>>
>>
> On Mar 8, 2017, at 09:08, Wido den Hollander wrote:
>
>>
>> On 8 March 2017 at 0:35, Matteo Dacrema <mdacr...@enter.eu> wrote:
>>
>>
>> Thank you Adrian!
>>
>> I’d forgotten this option
Hi all,
I’m planning to replace a Swift multi-region deployment with Ceph.
Right now Swift is deployed across 3 regions in Europe and the data is
replicated across these 3 regions.
Is it possible to configure Ceph to do the same?
I think I need to go with multiple zone groups within a single realm
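Note that data is replicated between zones inside one zone group, while separate zone groups share only users and metadata, so three regions holding the same objects would normally be three zones in a single zone group. A minimal sketch of the master-side setup, with hypothetical realm, zone and endpoint names (each secondary region then pulls the realm, creates its own zone and commits the period):
    radosgw-admin realm create --rgw-realm=europe --default
    radosgw-admin zonegroup create --rgw-zonegroup=eu --rgw-realm=europe --master --default \
        --endpoints=http://rgw-region1.example.com:8080
    radosgw-admin zone create --rgw-zonegroup=eu --rgw-zone=eu-region1 --master --default \
        --endpoints=http://rgw-region1.example.com:8080
    radosgw-admin period update --commit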
Hi All,
I’ve configured a multisite deployment on Ceph Nautilus 14.2.1 with one zone
group "eu", one master zone and two secondary zones.
If I upload (to the master zone) 200 objects of 80MB each and delete
all of them without waiting for the replication to finish, I end up with one
zone
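For reference, the replication backlog on each side can be watched while the uploads and deletes propagate; a minimal sketch, with the bucket name as a placeholder:
    radosgw-admin sync status
    radosgw-admin bucket sync status --bucket=<name>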