However, I cannot get rid of these messages.
Dec 30 10:13:10 c02 ceph-mgr: 2019-12-30 10:13:10.343 7f7d3a2f8700 0
log_channel(cluster) log [DBG] : pgmap v710220:
-Original Message-
To: ceph-users; deaderzzs
Subject: Re: [ceph-users] ceph log level
I am decreasing logging with this script.
#!/bin/bash
declare -A logarrosd
declare -A logarrmon
declare -A logarrmgr
# default values luminous 12.2.7
logarrosd[debug_asok]="1/5"
logarrosd[debug_auth]="1/5"
logarrosd[debug_buffer]="0/1"
logarrosd[debug_client]="0/5"
logarrosd[debug_context]="0/1"
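The rest of the script is cut off above; a minimal sketch of how such arrays could be applied at runtime (assuming the array keys match the config option names, as they do here) would be:
for k in "${!logarrosd[@]}"; do
    # inject each value into all running osds; debug levels do not need a restart
    ceph tell osd.\* injectargs "--${k}=${logarrosd[$k]}"
done
# the logarrmon/logarrmgr arrays can be looped over the same way against the mon/mgr daemons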
384 active+clean; 19 TiB data, 45 TiB used, 76 TiB / 122 TiB avail; 3.4
KiB/s rd, 573 KiB/s wr, 20 op/s
Dec 23 11:58:25 c02 ceph-mgr: 2019-12-23 11:58:25.194 7f7d3a2f8700 0
log_channel(cluster) log [DBG] : pgmap v411196: 384 pgs: 384
active+clean; 19 TiB data, 45 TiB used, 76 TiB / 122 TiB
You can classify osd's, e.g. as ssd, and you can assign this class to a
pool you create. This way you can have rbd's running on only ssd's. I
think there is also a class for nvme, and you can create custom classes.
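A rough sketch of the commands involved (the osd id, rule and pool names below are just examples):
# set the device class of an osd explicitly (newer releases usually auto-detect it)
ceph osd crush rm-device-class osd.12
ceph osd crush set-device-class ssd osd.12
# create a replicated rule restricted to the ssd class and assign it to a pool
ceph osd crush rule create-replicated rbd-ssd default host ssd
ceph osd pool set rbd.ssd crush_rule rbd-ssd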
-Original Message-
From: Philip Brown [mailto:pbr...@medata.com]
Se
but I
believe this is the output you get when you delete a snapshot folder but
it's still referenced by a different snapshot farther up the hierarchy.
-Greg
On Mon, Dec 16, 2019 at 8:51 AM Marc Roos
wrote:
>
>
> Am I the only lucky one having this problem? Should I use the
> bugtracker system for this?
Am I the only lucky one having this problem? Should I use the bugtracker
system for this?
-Original Message-
From: Marc Roos
Sent: 14 December 2019 10:05
Cc: ceph-users
Subject: Re: [ceph-users] deleted snap dirs are back as
_origdir_1099536400705
ceph tell mds.a scrub start
ceph tell mds.a scrub start / recursive repair
Did not fix this.
-Original Message-
Cc: ceph-users
Subject: [ceph-users] deleted snap dirs are back as
_origdir_1099536400705
I thought I deleted snapshot dirs, but I still have them under a
different name. How do I get rid of these?
client.admin did not have the correct rights
ceph auth caps client.admin mds "allow *" mgr "allow *" mon "allow *"
osd "allow *"
-Original Message-
To: ceph-users
Subject: [ceph-users] ceph tell mds.a scrub status "problem getting
command descriptions"
ceph tell mds.a scrub status
Generates
2019-12-14 00:46:38.782 7fef4affd700 0 client.3744774 ms_handle_reset
on v2:192.168.10.111:6800/3517983549
Error EPERM: problem getting command descriptions from mds.a
I thought I deleted snapshot dirs, but I still have them under a
different name. How do I get rid of these?
[@ .snap]# ls -1
_snap-1_1099536400705
_snap-2_1099536400705
_snap-3_1099536400705
_snap-4_1099536400705
_snap-5_1099536400705
_snap-6_1099536400705
_snap-7_1099536400705
Also just a bit curious: so it just creates a pv on sda and no
partitioning is done on sda?
-Original Message-
From: Daniel Sung [mailto:daniel.s...@quadraturecapital.com]
Sent: dinsdag 10 december 2019 14:40
To: Philip Brown
Cc: ceph-users
Subject: Re: [ceph-users] sharing single SSD acr
>I am having a hard time creating the graphs I want to see. Metrics are
exported in a way that every single one is stored in a separate series in
Influx like:
>
>> ceph_pool_stats,cluster=ceph1,metric=read value=1234 15506589110
>> ceph_pool_stats,cluster=ceph1,metric=write value=1234
This should get you started with using rbd.
WDC
WD40EFRX-68WT0N0
cat > secret.xml <
client.rbd.vps secret
EOF
virsh secret
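The heredoc above is cut off by the archive; the usual workflow looks roughly like this (the uuid is whatever virsh secret-define reports, client.rbd.vps as in the snippet):
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <usage type='ceph'>
    <name>client.rbd.vps secret</name>
  </usage>
</secret>
EOF
virsh secret-define --file secret.xml
# bind the cephx key of the rbd user to the secret uuid printed by secret-define
virsh secret-set-value --secret <uuid> --base64 "$(ceph auth get-key client.rbd.vps)"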
ceph-users@lists.ceph.com is the old one; why that is, I also do not know
https://www.mail-archive.com/search?l=all&q=ceph
-Original Message-
From: Rodrigo Severo - Fábrica [mailto:rodr...@fabricadeideias.com]
Sent: donderdag 5 december 2019 20:37
To: ceph-users@lists.ceph.com; ceph-us..
But I guess that in 'ceph osd tree' the ssd's were then also displayed
as hdd?
-Original Message-
From: Stolte, Felix [mailto:f.sto...@fz-juelich.de]
Sent: woensdag 4 december 2019 9:12
To: ceph-users
Subject: [ceph-users] SSDs behind Hardware Raid
Hi guys,
maybe this is common
Is there a point to sending such a signature (twice) to a public mailing
list, which has its emails stored on several mailing list websites?
===
I was thinking of going to the Polish one, but I will be tempted to go
to the London one if you will also be wearing this kilt. ;D
-Original Message-
From: John Hearns [mailto:hear...@googlemail.com]
Sent: donderdag 24 oktober 2019 8:14
To: ceph-users
Subject: [ceph-users] Cloudstack and
I think I am having this issue also (at least I had it with luminous). I had
to remove the hidden temp files rsync had left when the cephfs mount
'stalled'; otherwise I would never be able to complete the rsync.
-Original Message-
Cc: ceph-users
Subject: Re: [ceph-users] hanging slow req
B.R.
Changcheng
On 10:56 Mon 21 Oct, Marc Roos wrote:
>
> Your collectd starts without the ceph plugin ok?
>
> I have also your error " didn't register a configuration callback",
> because I configured debug logging, but did not enable it by loading
esent.
Oct 21 10:30:55 c02 collectd[1031939]: ipmi plugin: sensor_read_handler:
sensor `P1-DIMMC2 TEMP memory_device (32.73)` of `main` not present.
-Original Message-
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] collectd Ceph metric
On 10:16 Mon 21 Oct, Marc Roos wrote:
>
[ceph-users] collectd Ceph metric
On 09:50 Mon 21 Oct, Marc Roos wrote:
>
> I am running collectd with luminous, and upgraded to nautilus and collectd
> 5.8.1-1.el7 this weekend. Maybe increase logging or so.
> I had to wait a long time before collectd supported the luminous
> release,
I am running collectd with luminous, and upgraded to nautilus and collectd
5.8.1-1.el7 this weekend. Maybe increase logging or so.
I had to wait a long time before collectd supported the luminous
release; maybe it is the same with octopus (=15?)
-Original Message-
From: Liu, Changch
Brad, many thanks!!! My cluster finally has HEALTH_OK after 1.5 years or so!
:)
-Original Message-
Subject: Re: Ceph pg repair clone_missing?
On Fri, Oct 4, 2019 at 6:09 PM Marc Roos
wrote:
>
> >
> >Try something like the following on each OSD that holds a copy
>
>Try something like the following on each OSD that holds a copy of
>rbd_data.1f114174b0dc51.0974 and see what output you get.
>Note that you can drop the bluestore flag if they are not bluestore
>osds and you will need the osd stopped at the time (set noout). Also
>note, snapids
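For reference, the kind of invocation being described looks roughly like this (osd id 3 is a placeholder; run it with the osd stopped and noout set, per the advice above):
ceph osd set noout
systemctl stop ceph-osd@3
# list the object (including its snapids) directly from the osd's object store
ceph-objectstore-tool --type bluestore --data-path /var/lib/ceph/osd/ceph-3 \
    --op list rbd_data.1f114174b0dc51.0974
systemctl start ceph-osd@3
ceph osd unset noout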
h_pseudo = true;
}
CacheInode {
Chunks_HWMark = 7;
Entries_Hwmark = 200;
}
NFSV4 {
Graceless = true;
Allow_Numeric_Owners = true;
Only_Numeric_Owners = true;
}
LOG {
Components {
#NFS_READDIR = FULL_DEBUG;
#NFS4 = FULL_DEBUG;
#CACHE_INODE =
thentication, but users will have
the RGW access of their nfs-ganesha export. You can create exports with
disjoint privileges, and since recent L, N, RGW tenants.
Matt
On Tue, Oct 1, 2019 at 8:31 AM Marc Roos
wrote:
>
> I think you can run into problems
> with a multi user environment of
>
>>
>> I was following the thread where you advised on this pg repair
>>
>> I ran these rados 'list-inconsistent-obj'/'rados
>> list-inconsistent-snapset' and have output on the snapset. I tried to
>> extrapolate your comment on the data/omap_digest_mismatch_info onto my
>> situation.
Hi Brad,
I was following the thread where you advised on this pg repair
I ran these rados 'list-inconsistent-obj'/'rados
list-inconsistent-snapset' and have output on the snapset. I tried to
extrapolate your comment on the data/omap_digest_mismatch_info onto my
situation. But I don't know
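For context, those commands are run per placement group, roughly like this (the pg id 20.2a is a placeholder taken from 'ceph health detail'):
rados list-inconsistent-obj 20.2a --format=json-pretty
rados list-inconsistent-snapset 20.2a --format=json-pretty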
Secret_Access_Key = "";
}
}
RGW {
ceph_conf = "//ceph.conf";
# for vstart cluster, name = "client.admin"
name = "client.rgw.foohost";
cluster = "ceph";
# init_args = "-d --debug-rgw=16";
}
Da
Just install these
http://download.ceph.com/nfs-ganesha/
nfs-ganesha-rgw-2.7.1-0.1.el7.x86_64
nfs-ganesha-vfs-2.7.1-0.1.el7.x86_64
libnfsidmap-0.25-19.el7.x86_64
nfs-ganesha-mem-2.7.1-0.1.el7.x86_64
nfs-ganesha-xfs-2.7.1-0.1.el7.x86_64
nfs-ganesha-2.7.1-0.1.el7.x86_64
nfs-ganesha-ceph-2.7.1-0.1.
What parameters are you using exactly? I want to do a similar test on
luminous before I upgrade to Nautilus. I have quite a lot (74+)
type_instance=Osd.opBeforeDequeueOpLat
type_instance=Osd.opBeforeQueueOpLat
type_instance=Osd.opLatency
type_instance=Osd.opPrepareLatency
type_instance=Osd.opP
>
>> > - Use 2 HDDs for SO using RAID 1 (I've left 3.5TB unallocated in
case
>>
>> I can use it later for storage)
>>
>> OS not? get enterprise ssd as os (I think some recommend it when
>> colocating monitors, can generate a lot of disk io)
>
>Yes, OS. I have no option to get a SSD.
o
>- Use 2 HDDs for SO using RAID 1 (I've left 3.5TB unallocated in case
I can use it later for storage)
OS not? get enterprise ssd as os (I think some recommend it when
colocating monitors, can generate a lot of disk io)
>- Install CentOS 7.7
Good choice
>- Use 2 vLANs, one for ceph int
How do I actually configure dovecot to use ceph for a mailbox? I have
built the plugins as mentioned here[0]
- but where do I copy/load what module?
- can I configure a specific mailbox only, via eg userdb:
test3:x:8267:231:Account with special settings for
dovecot:/home/popusers/test3:/bi
Hi Vitaliy, just saw you recommend someone to use ssd, and wanted to use
the opportunity to thank you for composing this text[0], enjoyed reading
it.
- What do you mean with: bad-SSD-only?
- Is this patch[1] in a Nautilus release?
[0]
https://yourcmc.ru/wiki/Ceph_performance
[1]
https://git
>
> complete DR with Ceph to restore it back to how it was at a given
point in time is a challenge.
>
> Trying to backup a Ceph cluster sounds very 'enterprise' and is
difficult to scale as well.
Hmmm, I was actually also curious how backups were done, especially on
these clusters that have
>> Reverting back to filestore is quite a lot of work and time again.
>> Maybe see first if with some tuning of the vms you can get better
results?
>
>None of the VMs are particularly disk-intensive. There's two users
accessing the system over a WiFi network for email, and some HTTP/SMTP
Reverting back to filestore is quite a lot of work and time again. Maybe
see first if with some tuning of the vms you can get better results?
What you can also try for io intensive vm's is adding an ssd pool. I moved
some exchange servers onto them. Tuned down the logging, because that is
writing
Maybe a bit off topic, just curious what speeds did you get previously?
Depending on how you test your native drive of 5400rpm, the performance
could be similar. 4k random read of my 7200rpm/5400 rpm results in
~60iops at 260kB/s.
I also wonder why filestore could be that much faster, is this n
H, ok ok, test it first, I can't remember if it is finished. It also
checks if it is useful to create a snapshot, by checking the size of the
directory.
[@ cron.daily]# cat backup-archive-mail.sh
#!/bin/bash
cd /home/
for account in `ls -c1 /home/mail-archive/ | sort`
do
/usr/local/sbin/ba
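The helper call above is truncated; a minimal sketch of what such a per-account cephfs snapshot loop could look like (the paths and snapshot naming are assumptions; cephfs snapshots are created by a mkdir inside .snap):
#!/bin/bash
for account in $(ls -c1 /home/mail-archive/ | sort)
do
    dir=/home/mail-archive/$account
    # only snapshot directories that actually contain something
    if [ -n "$(ls -A "$dir")" ]; then
        mkdir "$dir/.snap/backup-$(date +%Y%m%d)"
    fi
done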
With the sas expander you are putting more drives on 'one port'. Just
make sure you do not create a bottleneck there by adding too many drives.
I guess this depends on the speed of the drives. Then you should be fine,
no?
-Original Message-
From: Stolte, Felix [mailto:f.sto...@fz-ju
: paul.emmerich; ceph-users
Subject: Re: [ceph-users] "session established", "io error", "session
lost, hunting for new mon" solution/fix
On Fri, Jul 12, 2019 at 5:38 PM Marc Roos
wrote:
>
>
> Thanks Ilya for explaining. Am I correct to understand from the
Isn't that why you are supposed to test up front? So you do not have shocking
surprises? You can also find some performance references in the mailing
list archives.
I think it would be good to publish some performance results on the
ceph.com website. Can't be too difficult to put some default scen
h
wrote:
>
>
>
> On Thu, Jul 11, 2019 at 11:36 PM Marc Roos
wrote:
>> Anyone know why I would get these? Is it not strange to get them in a
>> 'standard' setup?
>
> you are probably running on an ancient kernel. this bug has been fixed
a long time ago.
Paul, this should have been / has been backported to this kernel, no?
-Original Message-
From: Paul Emmerich [mailto:paul.emmer...@croit.io]
Cc: ceph-users
Subject: Re: [ceph-users] "session established", "io error", "session
lost, hunting for new mon" solution/fix
Hi Paul,
Thanks for your reply, I am running 3.10.0-957.12.2.el7.x86_64, it is
from may 2019.
-Original Message-
From: Paul Emmerich [mailto:paul.emmer...@croit.io]
Sent: vrijdag 12 juli 2019 12:34
To: Marc Roos
Cc: ceph-users
Subject: Re: [ceph-users] "session establ
Anyone know why I would get these? Is it not strange to get them in a
'standard' setup?
-Original Message-
Subject: [ceph-users] "session established", "io error", "session lost,
hunting for new mon" solution/fix
I have on a cephfs client again (luminous cluster, centos7, only 3
I noticed that dmesg -T gives an incorrect time; the messages have a
time in the future compared to the system time. Not sure if this is a
libceph issue or a kernel issue.
[Thu Jul 11 10:41:22 2019] libceph: mon2 192.168.10.113:6789 session
lost, hunting for new mon
[Thu Jul 11 10:41:22 2019]
I just set osd_map_message_max=10 on a cluster of only 32 osds, while
this fix[0] has been reported for fairly large clusters (600, 3500 osd's).
Should I start worrying?
1. Should I add this now permanently to ceph.conf?
[global]
osd map message max = 10
2. Will this prevent future problems like I re
I have this on a cephfs client again (luminous cluster, centos7, only 32
osds!). Wanted to share the 'fix'.
[Thu Jul 11 12:16:09 2019] libceph: mon0 192.168.10.111:6789 session
established
[Thu Jul 11 12:16:09 2019] libceph: mon0 192.168.10.111:6789 io error
[Thu Jul 11 12:16:09 2019] libceph: mon0
Can I temporarily shut down all my monitors? This only affects new
connections, not? Existing ones will keep running?
I decided to restart osd.0, then the load of the cephfs and on all osd
nodes dropped. After this I still have on the first server
[@~]# cat /sys/kernel/debug/ceph/0f1701f5-453a-4a3b-928d-f652a2bbbcb0.client3574310/osdc
REQUESTS 0 homeless 0
LINGER REQUESTS
BACKOFFS
[@~]# cat
/sys/kernel
Forgot to add these
[@ ~]# cat /sys/kernel/debug/ceph/0f1701f5-453a-4a3b-928d-f652a2bbbcb0.client3574310/osdc
REQUESTS 0 homeless 0
LINGER REQUESTS
BACKOFFS
[@~]# cat /sys/kernel/debug/ceph/0f1701f5-453a-4a3b-928d-f652a2bbbcb0.client3584224/osdc
REQUESTS 38 homeless 0
317841 osd020.d6
Maybe this requires some attention. I have a default centos7 (maybe not
the most recent kernel though), ceph luminous setup, eg. no different
kernels.
This is the 2nd or 3rd time that a vm goes into a high load (151) and
stops its services. I have two vm's, both mounting the same 2 cephfs
'
What about creating snaps on a 'lower level' in the directory structure,
so you do not need to remove files from a snapshot, as a workaround?
-Original Message-
From: Lars Marowsky-Bree [mailto:l...@suse.com]
Sent: donderdag 11 juli 2019 10:21
To: ceph-users@lists.ceph.com
Subject: Re:
...@leblancnet.us]
Sent: vrijdag 28 juni 2019 18:30
To: Marc Roos
Cc: ceph-users; jgarcia
Subject: Re: [ceph-users] Migrating a cephfs data pool
Given that the MDS knows everything, it seems trivial to add a ceph 'mv'
command to do this. I looked at using tiering to try and do the move,
but I
the ceph
guys implement a mv that does what you expect from it. Now it acts more
or less like linking.
-Original Message-
From: Jorge Garcia [mailto:jgar...@soe.ucsc.edu]
Sent: vrijdag 28 juni 2019 17:52
To: Marc Roos; ceph-users
Subject: Re: [ceph-users] Migrating a cephfs data
What about adding the new data pool, mounting it and then moving the
files? (read: copying, because a move between data pools does not do what
you expect it to)
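A sketch of that approach (the pool, filesystem and directory names are made up):
ceph osd pool create fs_data_new 64
ceph fs add_data_pool cephfs fs_data_new
# new files written below this directory are stored in the new pool
setfattr -n ceph.dir.layout.pool -v fs_data_new /mnt/cephfs/newdir
# copy, do not move: a plain mv would keep the objects in the old data pool
cp -a /mnt/cephfs/olddir/. /mnt/cephfs/newdir/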
-Original Message-
From: Jorge Garcia [mailto:jgar...@soe.ucsc.edu]
Sent: vrijdag 28 juni 2019 17:26
To: ceph-users
Subject: *SPA
I am also thinking of moving the wal/db of the sata hdd's to ssd. Did
you do tests before and after this change, and do you know what the
difference in iops is? And is the advantage more or less when your sata
hdd's are slower?
-Original Message-
From: Stolte, Felix [mailto:f.sto...@fz-juelich
We have this, if it is any help
write-4k-seq: (groupid=0, jobs=1): err= 0: pid=1446964: Fri May 24
19:41:48 2019
write: IOPS=760, BW=3042KiB/s (3115kB/s)(535MiB/180001msec)
slat (usec): min=7, max=234, avg=16.59, stdev=13.59
clat (usec): min=786, max=167483, avg=1295.60, stdev=1933.25
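That output comes from a plain fio job; the exact parameters are not quoted, so the following shape is an assumption (including the device path):
fio --name=write-4k-seq --ioengine=libaio --direct=1 \
    --rw=write --bs=4k --iodepth=1 \
    --runtime=180 --time_based \
    --filename=/dev/rbd0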
What is wrong with:
service ceph-mgr@c stop
systemctl disable ceph-mgr@c
-Original Message-
From: Vandeir Eduardo [mailto:vandeir.edua...@gmail.com]
Sent: woensdag 5 juni 2019 16:44
To: ceph-users
Subject: [ceph-users] How to remove ceph-mgr from a node
Hi guys,
sorry, but I'm not f
Has anyone put the radosgw in a container? What files do I need to put
in the sandbox directory? Are there other things I should consider?
How did this get damaged? You had 3x replication on the pool?
-Original Message-
From: Yan, Zheng [mailto:uker...@gmail.com]
Sent: dinsdag 4 juni 2019 1:14
To: James Wilkins
Cc: ceph-users
Subject: Re: [ceph-users] CEPH MDS Damaged Metadata - recovery steps
On Mon, Jun 3, 2019 at 3:0
I had this with balancer active and "crush-compat"
MIN/MAX VAR: 0.43/1.59 STDDEV: 10.81
And by increasing the pg of some pools (from 8 to 64) and deleting empty
pools, I went to this
MIN/MAX VAR: 0.59/1.28 STDDEV: 6.83
(Do not want to go to this upmap yet)
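For reference, switching to the upmap balancer later would be roughly:
# upmap requires all clients to be luminous or newer
ceph osd set-require-min-compat-client luminous
ceph balancer mode upmap
ceph balancer on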
-Original Message-
Fr
I switched the first of May, and did not notice too much difference in memory
usage. After the restart of the osd's on the node I see the memory
consumption gradually getting back to what it was before.
Can't say anything about latency.
-Original Message-
From: Konstantin Shalygin
Sent: dinsda
Maybe my data can be useful to compare with? I have the samsung sm863.
This[0] is what I get from fio directly on the ssd, and from an rbd ssd
pool with 3x replication[1].
I also have included a comparison with cephfs[3]; it would be nice if
there would be some sort of
manual page describing
I still have some accounts listing either "allow" or not. What should
this be? Should this not be kept uniform?
[client.xxx.xx]
key = xxx
caps mon = "allow profile rbd"
caps osd = "profile rbd pool=rbd,profile rbd pool=rbd.ssd"
[client.xxx]
key =
Sorry for not waiting until it is published on the ceph website, but did
anyone attend this talk? Is it production ready?
https://cephalocon2019.sched.com/event/M7j8
I have been following this thread for a while, and thought I need to have
a "major ceph disaster" alert on the monitoring ;)
http://www.f1-outsourcing.eu/files/ceph-disaster.mp4
-Original Message-
From: Kevin Flöh [mailto:kevin.fl...@kit.edu]
Sent: donderdag 23 mei 2019 10:51
To:
I am still stuck with this situation, and do not want to restart(reset)
this host. I tried bringing down the eth connected to the client network
for a while, but after bringing it up, I am getting the same messages
-Original Message-
From: Marc Roos
Sent: dinsdag 21 mei 2019 11:42
I have this on a cephfs client; I had ceph-common on 12.2.11, and
upgraded to 12.2.12 while having this error. They write here [0] that
you need to upgrade the kernel and that it is fixed in 12.2.2
[@~]# uname -a
Linux mail03 3.10.0-957.5.1.el7.x86_6
[Tue May 21 11:23:26 2019] libceph: mon2 192.
[@ceph]# ps -aux | grep D
USER       PID %CPU %MEM    VSZ   RSS TTY   STAT START TIME COMMAND
root     12527  0.0  0.0 123520   932 pts/1 D+   09:26 0:00 umount /home/mail-archive
root     14549  0.2  0.0      0     0 ?     D    09:29 0:09 [kworker/0:0]
root 23350 0.0 0.
mds_standby_for_rank = 0
Regards,
Eugen
Zitat von Marc Roos :
> Should a non-active mds be doing something??? When I restarted the
> non-active mds.c, my client io on the fs_data pool disappeared.
>
>
> services:
> mon: 3 daemons, quorum a,b,c
> mgr: c(active), standbys: a,
, 32 in
rgw: 2 daemons active
-Original Message-
From: Marc Roos
Sent: dinsdag 21 mei 2019 10:01
To: ceph-users@lists.ceph.com; Marc Roos
Subject: RE: [ceph-users] cephfs causing high load on vm, taking down 15
min later another cephfs vm
I have evicted all client connections and
No, but even if so, I never had any issues when running multiple scrubs.
-Original Message-
From: EDH - Manuel Rios Fernandez [mailto:mrios...@easydatahost.com]
Sent: dinsdag 21 mei 2019 10:03
To: Marc Roos; 'ceph-users'
Subject: RE: [ceph-users] cephfs causing high load on
I have evicted all client connections and still have high load on the osd's.
And ceph osd pool stats still shows client activity?
pool fs_data id 20
client io 565KiB/s rd, 120op/s rd, 0op/s wr
-Original Message-
From: Marc Roos
Sent: dinsdag 21 mei 2019 9:51
To: ceph-
I have got this again today. I cannot unmount the filesystem, and it
looks like some osd's are at 100% cpu utilization?
-Original Message-
From: Marc Roos
Sent: maandag 20 mei 2019 12:42
To: ceph-users
Subject: [ceph-users] cephfs causing high load on vm, taking down 1
I got my first problem with cephfs in a production environment. Is it
possible to deduce from these logfiles what happened?
svr1 is connected to ceph client network via switch
svr2 vm is collocated on c01 node.
c01 has osd's and the mon.a colocated.
svr1 was the first to report errors at 03:
https://ceph.com/community/new-luminous-erasure-coding-rbd-cephfs/
-Original Message-
From: Florent B [mailto:flor...@coppint.com]
Sent: zondag 19 mei 2019 12:06
To: Paul Emmerich
Cc: Ceph Users
Subject: Re: [ceph-users] Default min_size value for EC pools
Thank you Paul for your ans
Hmmm, looks like diskpart is off; it reports the same about a volume
that fsutil fsinfo ntfsinfo c: reports as 512 (in this case correct,
because it is on a ssd).
Does anyone know how to use fsutil with a path-mounted disk (without a
drive letter)?
-Original Message-
From: Marc Roos
Sent
I am not sure if it is possible to run fsutil on a disk without a drive
letter, but mounted on a path.
So I used:
diskpart
select volume 3
Filesystems
And gives me this:
Current File System
Type : NTFS
Allocation Unit Size : 4096
Flags :
File Systems Supported for F
Are you sure your osd's are up and reachable? (run ceph osd tree on
another node)
-Original Message-
From: Jan Kasprzak [mailto:k...@fi.muni.cz]
Sent: woensdag 15 mei 2019 14:46
To: ceph-us...@ceph.com
Subject: [ceph-users] Huge rebalance after rebooting OSD host (Mimic)
Hello
Hmmm, so if I have (wd) drives that list this in smartctl output, I
should try and reformat them to 4k, which will give me better
performance?
Sector Sizes: 512 bytes logical, 4096 bytes physical
Do you have a link to this download? Can only find some .cz site with
the rpms.
-Orig
> Fancy fast WAL/DB/Journals probably help a lot here, since they do
affect the "iops"
> you experience from your spin-drive OSDs.
What difference can be expected if you have a 100 iops hdd and you start
using
wal/db/journals on ssd? What would this 100 iops increase to
(estimating)?
--
sdag 7 mei 2019 16:18
To: Marc Roos; 'ceph-users'
Subject: RE: [ceph-users] EPEL packages issue
Hello,
I already did this step and have the packages in the local repository, but
it still asks for the EPEL repository.
Regards.
mohammad almodallal
-Original Message-
From: Marc Roos
I have the same situation, where the servers do not have an internet
connection and use my own repository servers.
I am just rsyncing the rpms to my custom repository like this; works
like a charm.
rsync -qrlptDvSHP --delete --exclude config.repo --exclude "local*"
--exclude "aarch64" --exclu
The reason why you moved to ceph storage is that you do not want to do such
things. Remove the drive, and let ceph recover.
On May 6, 2019 11:06 PM, Florent B wrote:
> Hi,
> It seems that OSD disk is dead (hardware problem), badblocks command
> returns a lot of badblocks.
> Is there an
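Removing the dead osd and letting the cluster recover is roughly (osd id 12 is a placeholder):
systemctl stop ceph-osd@12
ceph osd out 12
# once recovery has finished, remove it from crush, auth and the osd map in one go
ceph osd purge 12 --yes-i-really-mean-it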
:34
To: Marc Roos
Cc: ceph-users
Subject: Re: [ceph-users] rbd ssd pool for (windows) vms
On Wed, May 1, 2019 at 5:00 PM Marc Roos
wrote:
>
>
> Do you need to tell the vm's that they are on a ssd rbd pool? Or does
> ceph and the libvirt drivers do this automatically for you?
kl 23:00 skrev Marc Roos :
Do you need to tell the vm's that they are on a ssd rbd pool? Or
does
ceph and the libvirt drivers do this automatically for you?
When testing a nutanix acropolis virtual install, I had to 'cheat'
it by
adding this
How did you retrieve which osd nr to restart?
Just for future reference, when I run into a similar situation: if you
have a client hang on an osd node, can this be resolved by restarting
the osd that it is reading from?
-Original Message-
From: Dan van der Ster [mailto:d...@vanderste
I have only 366M of metadata stored in an ssd pool, with 16TB (10 million
objects) of filesystem data (hdd pools).
The active mds is using 13GB memory.
Some stats of the active mds server
[@c01 ~]# ceph daemonperf mds.a
---mds --mds_cache--- --mds_log--
-mds_
Do you need to tell the vm's that they are on a ssd rbd pool? Or does
ceph and the libvirt drivers do this automatically for you?
When testing a nutanix acropolis virtual install, I had to 'cheat' it by
adding this
To make the installer think there was a ssd drive.
I only have 'Thin provis
I am not sure about your background knowledge of ceph, but if you are
starting out, maybe first try to get ceph working in a virtual environment;
that should not be too much of a problem. Then try migrating it to your
container. Right now you are probably fighting too many issues at the same
time.
.x]
public addr = 192.168.10.x
cluster addr = 10.0.0.x
-Original Message-
From: David Turner [mailto:drakonst...@gmail.com]
Sent: 22 April 2019 22:34
To: Marc Roos
Cc: ceph-users
Subject: Re: [ceph-users] Osd update from 12.2.11 to 12.2.12
Do you perhaps have anything in the ceph.conf files
Double thanks for the on-topic reply. The other two responses were making
me doubt if my chinese (which I didn't study) is better than my english.
>> I am a bit curious on how production ceph clusters are being used. I am
>> reading here that the block storage is used a lot with openstack
Just updated luminous, and am setting the max_scrubs value back. Why do I
get osd's reporting differently?
I get these:
osd.18: osd_max_scrubs = '1' (not observed, change may require restart)
osd_objectstore = 'bluestore' (not observed, change may require restart)
rocksdb_separate_wal_dir = 'false
I am a bit curious on how production ceph clusters are being used. I am
reading here that the block storage is used a lot with openstack and
proxmox, and via iscsi with vmware.
But since nobody here is interested in a better rgw client for end
users, I am wondering if the rgw is even being u
I have been looking a bit at the s3 clients available to be used, and I
think they are quite shitty, especially this Cyberduck that processes
files with default read rights for everyone. I am in the process of
advising clients to use for instance this Mountain Duck. But I am not too
happy abou
AFAIK you at least risk this 'kernel deadlock' with cephfs on osd nodes?
I have it also, but with enough memory. Search the mailing list for this.
I am looking at a similar setup, but with mesos, and struggling with some
cni plugin we have to develop.
-Original Message-
From: Bob Farrell [ma
We also have a hybrid ceph/libvirt-kvm setup, using some scripts to do
live migration; do you have auto failover in your setup?
-Original Message-
From: jes...@krogh.cc [mailto:jes...@krogh.cc]
Sent: 05 April 2019 21:34
To: ceph-users
Subject: [ceph-users] VM management setup
Hi. Know