Hi,
If possible, please add my account as well.
Regards
Mateusz
> On 15.03.2019 at 18:40, Trilok Agarwal wrote:
>
> Hi
> Can somebody over here invite me to join the ceph slack channel
>
> Thanks
> TRILOK
Hi,
I have a problem starting two of my OSDs; they fail with this error:
osd.19 pg_epoch: 8887 pg[1.2b5(unlocked)] enter Initial
0> 2019-03-01 09:41:30.259485 7f303486be00 -1
/build/ceph-12.2.11/src/osd/PGLog.h: In function 'static void
PGLog::read_log_and_missing(ObjectStore*, coll_t, coll_t, ghobject_t, c
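In case it is relevant, this is the sequence I would use to inspect and export the affected PG with ceph-objectstore-tool before trying anything destructive (a sketch only; OSD id and PG id are taken from the log above, the default paths are assumed, and filestore OSDs may also need --journal-path):
systemctl stop ceph-osd@19
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-19 --op list-pgs
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-19 --pgid 1.2b5 --op export --file /root/pg.1.2b5.export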
@Christian, thanks for the quick answer, please look below.
> -Original Message-
> From: Christian Balzer [mailto:ch...@gol.com]
> Sent: Monday, July 3, 2017 1:39 PM
> To: ceph-users@lists.ceph.com
> Cc: Mateusz Skała
> Subject: Re: [ceph-users] Cache Tier or any ot
n RBD with SSD drives.
The second question: is there any cache tier mode in which the replica count can be
set to 1, to make the best use of the SSD space?
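For reference, the only way I know to change the replica count on the cache pool is the normal pool setting (a sketch; 'ssd-cache' is just an example name):
ceph osd pool set ssd-cache size 1
ceph osd pool set ssd-cache min_size 1
My question is whether any cache mode makes this safe.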
Best Regards
--
Mateusz Skała
Thank you for the quick response.
> -Original Message-
> From: Christian Balzer [mailto:ch...@gol.com]
> Sent: Tuesday, July 19, 2016 3:39 PM
> To: ceph-users@lists.ceph.com
> Cc: Mateusz Skała
> Subject: Re: [ceph-users] Cache Tier configuration
>
>
> Hello,
Hello,
> -Original Message-
> From: Christian Balzer [mailto:ch...@gol.com]
> Sent: Wednesday, July 13, 2016 4:03 AM
> To: ceph-users@lists.ceph.com
> Cc: Mateusz Skała
> Subject: Re: [ceph-users] Cache Tier configuration
>
>
> Hello,
>
> On Tue, 1
Hello,
Is it safe to rename a pool that has a cache tier? I want to standardize our pool
names, for example to 'prod01' and 'cache-prod01'.
Should I remove the cache tier before renaming?
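If removing the tier first is the safer path, this is the sequence I would try, based on the cache tiering docs (a sketch only; 'rbd' and 'rbd-cache' stand in for the current pool names):
# flush and detach the cache tier
ceph osd tier cache-mode rbd-cache forward      # some releases require --yes-i-really-mean-it here
rados -p rbd-cache cache-flush-evict-all
ceph osd tier remove-overlay rbd
ceph osd tier remove rbd rbd-cache
# rename both pools
ceph osd pool rename rbd prod01
ceph osd pool rename rbd-cache cache-prod01
# re-attach the tier under the new names
ceph osd tier add prod01 cache-prod01
ceph osd tier cache-mode cache-prod01 writeback
ceph osd tier set-overlay prod01 cache-prod01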
Regards,
--
Mateusz Skała
mateusz.sk...@budikom.net
budikom.net
ul. Trz
Thank you for the reply. Answers below.
> -Original Message-
> From: Christian Balzer [mailto:ch...@gol.com]
> Sent: Tuesday, July 12, 2016 3:37 AM
> To: ceph-users@lists.ceph.com
> Cc: Mateusz Skała
> Subject: Re: [ceph-users] Cache Tier configuration
>
>
>
        eleaf firstn 2 type host
        step emit
        step take ssd
        step chooseleaf firstn -2 type osd
        step emit
}
OSD tree with SSD:
 -8 0.68597 root ssd
 -9 0.34299     rack skwer-ssd
-16 0.17099         host ceph40-ssd
 32 0.17099             osd.32   up  1.0  1.0
-19 0.17099         host ceph50-ssd
 42 0.17099             osd.42   up  1.0  1.0
-11 0.34299     rack nzoz-ssd
-17 0.17099         host ceph45-ssd
 37 0.17099             osd.37   up  1.0  1.0
-22 0.17099         host ceph55-ssd
 47 0.17099             osd.47   up  1.0  1.0
Can someone help? Any ideas? Is it normal that the whole cluster stops on a 'disk
full' error on the cache tier? I thought that only the one pool would stop and that
the other pools without a cache tier would keep working.
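The only knobs I have found so far for keeping the cache pool from filling the SSDs are the per-pool cache limits (a sketch; 'hot-pool' is a placeholder name and the values are only examples, sized well below the usable SSD capacity):
ceph osd pool set hot-pool target_max_bytes 200000000000
ceph osd pool set hot-pool cache_target_full_ratio 0.8
ceph osd pool set hot-pool cache_target_dirty_ratio 0.4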
Best regards,
--
Mateusz Skała
mateusz.sk...@budikom.net
Hi Cephers.
I'm looking for a way to optimize the scrubbing process. In our environment this
process has a big impact on performance. For disk monitoring we use Monitorix. While
scrubbing is running, 'Disk I/O activity (R+W)' shows 20-60 reads+writes per second.
After disabling scrub and deep-scru
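(For reference, the flags used to disable them during such a test: 'ceph osd set noscrub' and 'ceph osd set nodeep-scrub', with 'ceph osd unset ...' to turn them back on.)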
Hi Cephers,
On Sunday evening we upgraded Ceph from 0.87 to 0.94. After the upgrade, VMs running
on Proxmox freeze for 3-4 s roughly every 10 minutes (applications stop responding on
Windows). Before the upgrade everything worked fine. In /proc/diskstats, field 7
(time spent reading (ms)) and field 11 (t
Hi,
Looking for better performance on our cluster, we found this blog post:
https://telekomcloud.github.io/ceph/2014/02/26/ceph-performance-analysis_fio_rbd.html
In our environment the IOPS look like this picture from the blog:
https://telekomcloud.github.io/images/2014-02-26-ceph-performance-analysis_f
b/ceph/osd/ceph-$id/current/$pgid_head/DIR*/DIR*/
# remove all data of this pg on other osds
# start all osd's - You probably have incomplete pg
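A rough sketch of how those steps map to ceph-objectstore-tool commands (the OSD path and PG id are placeholders, and exact flags differ a bit between releases):
# on the stopped OSD holding the most complete copy of the PG:
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-$id --op export --pgid $pgid --file /root/$pgid.export
# on every other stopped OSD that still holds a copy:
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-$id --op remove --pgid $pgid
# import into one OSD, then start all OSDs again:
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-$id --op import --file /root/$pgid.export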
Thanks for your work, it should help me avoid wasting my time.
Regards, Mateusz
-Original Message-
From: Mykola Golub [mailto:mgo...@m
Hi,
After some hardware errors, one of the PGs on our backup server is 'incomplete'.
I exported the PG without problems, as described here:
https://ceph.com/community/incomplete-pgs-oh-my/
After removing the PG from all OSDs and importing it into one OSD, the PG is still
'incomplete'.
I want to recover only some piece of d
made only on one OSD host with a P410i controller, with ST1000LM014-1EJ1 SATA drives
for data and an Intel SSDSC2BW12 SSD for the journal.
Regards,
Mateusz
From: Jan Schermer [mailto:j...@schermer.cz]
Sent: Wednesday, June 17, 2015 9:41 AM
To: Mateusz Skała
Cc: ceph-users@lists.ceph.com
Subject: Re
Yes, all disks are in single-drive RAID 0. Right now the cache is enabled for all drives;
should I disable the cache for the SSD drives?
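For what it is worth, on a plain SATA/SAS path the on-disk volatile write cache can be checked and switched off with hdparm (a sketch; the device name is only an example, and drives sitting behind a Smart Array or PERC controller usually have to be changed through the vendor tool such as hpacucli/hpssacli, megacli or omconfig instead):
hdparm -W /dev/sdb      # show the current write-cache setting
hdparm -W 0 /dev/sdb    # disable the volatile write cache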
Regards,
Mateusz
From: Tyler Bishop [mailto:tyler.bis...@beyondhosting.net]
Sent: Thursday, June 11, 2015 7:30 PM
To: Mateusz Skała
Cc: ceph-users@lists.ceph.com
Subject
Hi,
Please help me with the hardware cache settings on our controllers for the best Ceph
RBD performance. All Ceph hosts have one SSD drive for the journal.
We are using 4 different controllers, all with BBU:
* HP Smart Array P400
* HP Smart Array P410i
* Dell PERC 6/i
* D
Problem fixed:
the default pool size is set to 2, but for the rbd pool the size was set to 3.
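For anyone hitting the same thing, the per-pool replica count can be checked and changed like this (a sketch):
ceph osd pool get rbd size
ceph osd pool set rbd size 2
ceph osd dump | grep 'replicated size'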
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Mateusz Skała
Sent: Tuesday, March 10, 2015 10:22 AM
To: 'Henrik Korkuc'; ceph-users@lists.ceph.com
Subject: Re: [ceph-u
w used space?
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Henrik
Korkuc
Sent: Tuesday, March 10, 2015 10:13 AM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph free space
On 3/10/15 11:06, Mateusz Skała wrote:
Hi,
In my cluster is something wrong wit
Hi,
Something is wrong with the free space in my cluster. In a cluster with 10 OSDs
(5*1TB + 5*2TB), 'ceph -s' shows:
11425 GB used, 2485 GB / 13910 GB avail
But I have only 2 RBD disks in one pool ('rbd'):
>>rados df
pool name           category        KB      objects     clones      degraded
Thanks, I hadn't figured out how to delete these pools. BTW, I'm on the 0.87 release.
Mateusz
-Original Message-
From: john.sp...@inktank.com [mailto:john.sp...@inktank.com] On Behalf Of John
Spray
Sent: Tuesday, February 3, 2015 10:04 AM
To: Mateusz Skała
Cc: ceph-users@lists.ceph.c
Hi,
Is it possible to reduce pg_num on an unused pool, for example the default data or
metadata pool? We are using only the rbd pool, but pg_num for data and metadata is set to 1024.
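(From what I have read so far, pg_num cannot be decreased on this release, so for pools that are not used at all the only option seems to be deleting them; a sketch, assuming the default 'data' and 'metadata' pools and no CephFS in use, since some releases refuse this while an MDS map still references the pools:)
ceph osd pool delete data data --yes-i-really-really-mean-it
ceph osd pool delete metadata metadata --yes-i-really-really-mean-it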
Regards
Mateusz
Thanks for the reply. We are now running Ceph 0.80.1 Firefly; are these options
available there?
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Mateusz Skała
Sent: Tuesday, October 28, 2014 9:27 AM
To: ceph-us...@ceph.com
Subject: [ceph-users] Scrub proces, IO performance
Hello
Hello,
We are using Ceph as a storage backend for KVM, hosting MS Windows RDP, Linux web
applications with MySQL databases, and Linux file sharing. When the scrub or deep-scrub
process is active, RDP sessions freeze for a few seconds and the web applications show
high response latency.
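The options I have found so far that look relevant are the scrub throttles; a sketch of what could go into ceph.conf under [osd] (some of these may not exist on older releases):
[osd]
osd max scrubs = 1
osd scrub sleep = 0.1
osd scrub load threshold = 0.5
; the begin/end hour options only exist on newer releases:
osd scrub begin hour = 22
osd scrub end hour = 6
osd deep scrub interval = 1209600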
Hi,
We have 4 NICs in each Ceph server. Each server has a few OSDs and one monitor
installed. How should we set up networking on these hosts, split into a frontend
network (10.20.8.0/22) and a backend network (10.20.4.0/22)?
At the moment we are using this network configuration:
auto
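For reference, I assume the split itself would be expressed in ceph.conf roughly like this (a sketch using the subnets above; the interface/bonding part is left out):
[global]
public network = 10.20.8.0/22
cluster network = 10.20.4.0/22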
Do you mean moving /var/log/ceph/* to an SSD disk?
Hi, thanks for the reply.
> From the top of my head, it is recommended to use 3 mons in
> production. Also, for the 22 osds your number of PGs looks a bit low,
> you should look at that.
I got that number from
http://ceph.com/docs/master/rados/operations/placement-groups/
(22 OSDs * 100) / 3 replicas = 733, ~
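Spelled out, the sizing rule from that page as I read it: total PGs = (number of OSDs * 100) / replica count, rounded up to the next power of two; so (22 * 100) / 3 = ~733, which rounds up to 1024, to be divided across all pools.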
Hello,
we have deployed a Ceph cluster with 4 monitors and 22 OSDs. We are using
only RBDs. All VMs on KVM specify the monitors in the same order.
One of the monitors (the first one on the list in the VM disk specification,
ceph35) has more load than the others, and the cluster performance is
poor. How