I am looking a bit at ceph on a single node. Does anyone have experience
with cloudfuse?
Do I need to use the rados-gw? Does it even work with ceph?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -.
F1 Outsourcing Development Sp. z o.o.
Poland
t: +48 (0)124466845
f: +4
I have created a swift user, and can mount the object store with
cloudfuse, and can create files in the default pool .rgw.root
How can I have my test user go to a different pool and not use the
default .rgw.root?
Thanks,
Marc
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
FYI, 5 or more years ago I was trying Zabbix, and I noticed that as the number of monitored hosts increased, the load on the MySQL server increased as well. Without being able to recall exactly what was wrong (I think every sample they took was one insert statement), I do remember that I got qu
I have updated a test cluster by just updating the rpms and issuing a ceph osd require-osd-release, because it was mentioned in the status. Is there more you need to do?
- update on all nodes the packages
sed -i 's/Kraken/Luminous/g' /etc/yum.repos.d/ceph.repo
yum update
- then on each node f
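For what it is worth, a minimal sketch of the sequence I assume is the usual one (an assumption on my side; monitors before osds, unit names as on a default CentOS7 install):

# after updating the packages on a node
systemctl restart ceph-mon.target      # monitor nodes first
systemctl restart ceph-osd.target      # then the osd nodes
# luminous also expects a ceph-mgr daemon to be running
# once every osd runs luminous
ceph osd require-osd-release luminous
ceph -s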
On a test cluster with 994GB used, via collectd I get an incorrect 9.3362651136e+10 (93GB) reported in influxdb, while this should be 933GB (or actually 994GB). Cluster.osdBytes is reported correctly as 3.3005833027584e+13 (30TB).
cluster:
health: HEALTH_OK
services:
mon: 3 daemons, q
Does anyone have an idea why I am having these osd_bytes=0?
ceph daemon mon.c perf dump cluster
{
"cluster": {
"num_mon": 3,
"num_mon_quorum": 3,
"num_osd": 6,
"num_osd_up": 6,
"num_osd_in": 6,
"osd_epoch": 3593,
"osd_bytes": 0,
I need a little help with fixing some errors I am having.
After upgrading from Kraken I'm getting incorrect values reported on placement groups etc. At first I thought it was because I was changing the public cluster ip address range and modifying the monmap directly. But after deleting and add
Is it possible to change the cephfs metadata pool? I would like to lower the pg count, and thought about just making a new pool, copying the pool and then renaming them. But I guess cephfs works with the pool id, doesn't it? How can this best be done?
Thanks
No, but we are using Perl ;)
-Original Message-
From: Daniel Davidson [mailto:dani...@igb.illinois.edu]
Sent: donderdag 13 juli 2017 16:44
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Crashes Compiling Ruby
We have a weird issue. Whenever compiling Ruby, and only Ruby, on a
l
When are fixes for bugs like this one (http://tracker.ceph.com/issues/20563) available in the rpm repository (https://download.ceph.com/rpm-luminous/el7/x86_64/)?
I sort of don’t get it from this page http://docs.ceph.com/docs/master/releases/. Maybe something could be specifically mentioned here about the av
We are running on
Linux c01 3.10.0-514.26.2.el7.x86_64 #1 SMP Tue Jul 4 15:04:05 UTC 2017
x86_64 x86_64 x86_64 GNU/Linux
CentOS Linux release 7.3.1611 (Core)
And we didn’t have any issues installing/upgrading, but we are not using ceph-deploy. In fact, I am surprised at how easy it is to install.
I just updated packages on one CentOS7 node and am getting these errors:
Jul 18 12:03:34 c01 ceph-mon: 2017-07-18 12:03:34.537510 7f4fa1c14e40 -1
WARNING: the following dangerous and experimental features are enabled:
bluestore
Jul 18 12:03:34 c01 ceph-mon: 2017-07-18 12:03:34.537510 7f4fa1c14e40
With ceph auth I have set permissions like below. I can add and delete objects in the test pool, but cannot set the size of the test pool. What permission do I need to add for this user to modify the size of this test pool?
mon 'allow r' mds 'allow r' osd 'allow rwx pool=test'
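A minimal sketch of what I would try, on the assumption that changing the pool size is a monitor (write) operation, so 'allow r' on the mon is not enough:

# grant write on the mon as well, keep the osd cap restricted to the pool
ceph auth caps client.test mon 'allow rw' mds 'allow r' osd 'allow rwx pool=test'
# then retry as that client
ceph --id test osd pool set test size 3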
I just updated packages on one CentOS7 node and am getting these errors. Does anybody have an idea how to resolve this?
Jul 18 12:03:34 c01 ceph-mon: 2017-07-18 12:03:34.537510 7f4fa1c14e40 -1
WARNING: the following dangerous and experimental features are enabled:
bluestore
Jul 18 12:03:34 c01 ceph-mon:
Thanks! Updating everything indeed resolved this.
-Original Message-
From: Gregory Farnum [mailto:gfar...@redhat.com]
Sent: dinsdag 18 juli 2017 23:01
To: Marc Roos; ceph-users
Subject: Re: [ceph-users] Updating 12.1.0 -> 12.1.1
Yeah, some of the message formats changed (incompati
Should we report these?
[840094.519612] ceph[12010]: segfault at 8 ip 7f194fc8b4c3 sp
7f19491b6030 error 4 in libceph-common.so.0[7f194f9fb000+7e9000]
CentOS Linux release 7.3.1611 (Core)
Linux 3.10.0-514.26.2.el7.x86_64 #1 SMP Tue Jul 4 15:04:05 UTC 2017
x86_64 x86_64 x86_64 GNU/Li
I would like to work on some grafana dashboards, but since the upgrade to the luminous rc something seems to have changed in the JSON, and (a lot of) metrics are not stored in influxdb.
Does anyone have an idea when collectd-ceph in the epel repo will be updated? Or is there some s
I am running 12.1.1, and updated to it on the 18th. So I guess this is
either something else or it was not in the rpms.
-Original Message-
From: Gregory Farnum [mailto:gfar...@redhat.com]
Sent: vrijdag 21 juli 2017 20:21
To: Marc Roos; ceph-users
Subject: Re: [ceph-users] Ceph
I would recommend logging into the host and running your commands from a
screen session, so they keep running.
-Original Message-
From: Martin Wittwer [mailto:martin.witt...@datonus.ch]
Sent: zondag 23 juli 2017 15:20
To: ceph-us...@ceph.com
Subject: [ceph-users] Restore RBD image
H
I have an error with a placement group, and seem to only find solutions based on a filestore osd:
http://ceph.com/geen-categorie/ceph-manually-repair-object/
Does anybody have a link to how I can do this with a bluestore osd?
/var/log/ceph/ceph-osd.9.log:48:2017-07-31 14:21:33.929855 7fbbb
I have got a placement group inconsistency, and saw some manual where
you can export and import this on another osd. But I am getting an
export error on every osd.
What does this export_files error -5 actually mean? I thought 3 copies
should be enough to secure your data.
> PG_DAMAGED Possi
:52
To: Marc Roos; ceph-users
Subject: Re: [ceph-users] Pg inconsistent / export_files error -5
It _should_ be enough. What happened in your cluster recently? Power
Outage, OSD failures, upgrade, added new hardware, any changes at all.
What is your Ceph version?
On Fri, Aug 4, 2017 at 11:22 AM
I tried to fix a '1 pg inconsistent' by taking osd 12 out, hoping for the data to be copied to a different osd and for that one to be used as 'active?'.
- Would deleting the whole image in the rbd pool solve this? (or would
it fail because of this status)
- Should I have done this rather w
V_DONTNEED) = 0
<0.13>
23552 16:26:31.339235 madvise(0x7f4a02102000, 32768, MADV_DONTNEED) = 0
<0.14>
23552 16:26:31.339331 madvise(0x7f4a01df8000, 16384, MADV_DONTNEED) = 0
<0.19>
23552 16:26:31.339372 madvise(0x7f4a01df8000, 32768, MADV_DONTNEED) = 0
<0.13>
---
-12.1.1/src/rocksdb/db/db_impl.cc:343] Shutdown
complete
2017-08-09 11:41:25.686088 7f26db8ae100 1 bluefs umount
2017-08-09 11:41:25.705389 7f26db8ae100 1 bdev(0x7f26de472e00
/var/lib/ceph/osd/ceph-0/block) close
2017-08-09 11:41:25.944548 7f26db8ae100 1 bdev(0x7f26de2b3a00
/var/lib/ceph/osd/cep
I am not sure if I am the only one having this. But there is an issue
with the collectd plugin and the luminous release. I think I didn’t
have this in Kraken, looks like something changed in the JSON? I also
reported it here https://github.com/collectd/collectd/issues/2343, I
have no idea who
FYI, when creating these rgw pools, not all of them automatically get the application enabled.
I created these
ceph osd pool create default.rgw
ceph osd pool create default.rgw.meta
ceph osd pool create default.rgw.control
ceph osd pool create default.rgw.log
ceph osd pool create .rgw.root
ceph osd po
rocksdb:
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_AR
CH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/
12.1.1/rpm/el7/BUILD/ceph-12.1.1/src/rocksdb/db/db_impl.cc:343] Shutdown
complete
2017-08-09 11:41:25.686088 7f26db8ae100 1 bluefs umount
2017-08-
Where can you get the nfs-ganesha-ceph rpm? Is there a repository that
has these?
I had some issues with the iscsi software starting too early; maybe this can give you some ideas.
systemctl show target.service -p After
mkdir /etc/systemd/system/target.service.d
cat << 'EOF' > /etc/systemd/system/target.service.d/10-waitforrbd.conf
[Unit]
After=systemd-journald.socket sys-
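In case it helps, a minimal sketch of such a drop-in, assuming the rbd images are mapped via the rbdmap service from ceph-common (an assumption on my part, adapt it to whatever maps your rbd devices):

cat << 'EOF' > /etc/systemd/system/target.service.d/10-waitforrbd.conf
[Unit]
After=rbdmap.service
Requires=rbdmap.service
EOF
systemctl daemon-reload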
ceph fs authorize cephfs client.bla /bla rw
Will generate a user with these permissions
[client.bla]
caps mds = "allow rw path=/bla"
caps mon = "allow r"
caps osd = "allow rw pool=fs_data"
With those permissions I cannot mount; I get permission denied, until I chang
17 22:29
To: TYLin
Cc: Marc Roos; ceph-us...@ceph.com
Subject: Re: [ceph-users] Cephfs fsal + nfs-ganesha + el7/centos7
Marc,
These rpms (and debs) are built with the latest ganesha 2.5 stable
release and the latest luminous release on download.ceph.com:
http://download.ceph.com/nfs-ganesha/
I
nfs-ganesha-2.5.2-.el7.x86_64.rpm
^
Is this correct?
-Original Message-
From: Marc Roos
Sent: dinsdag 29 augustus 2017 11:40
To: amaredia; wooertim
Cc: ceph-users
Subject: Re: [ceph-users] Cephfs fsal + nfs-ganesha + el7/centos7
Ali, Very very nice! I was creating
Where can I find some examples of creating a snapshot on a directory? Can I just do mkdir .snaps? I tried with the stock kernel and a 4.12.9-1
http://docs.ceph.com/docs/luminous/dev/cephfs-snapshots/
Now that 12.2.0 is released, how and who should be approached to get the patches for collectd applied?
Aug 30 10:40:42 c01 collectd: ceph plugin: JSON handler failed with
status -1.
Aug 30 10:40:42 c01 collectd: ceph plugin:
cconn_handle_event(name=osd.8,i=4,st=4): error 1
Aug 30 10:40:42 c01 colle
, allow rw path=/nfs
caps: [mon] allow r
caps: [osd] allow rwx pool=fs_meta,allow rwx pool=fs_data
-Original Message-
From: Marc Roos
Sent: dinsdag 29 augustus 2017 23:48
To: ceph-users
Subject: [ceph-users] Centos7, luminous, cephfs, .snaps
Where can I find some examples on creating a
I have some osds with these permissions, and some without the mgr cap. What are the correct ones to have for luminous?
osd.0
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.14
caps: [mon] allow profile osd
caps: [osd] allow *
I had this also once. If you update all nodes and then systemctl restart 'ceph-osd@*' on all nodes, you should be fine. But do the monitors first, of course.
-Original Message-
From: Thomas Gebhardt [mailto:gebha...@hrz.uni-marburg.de]
Sent: woensdag 30 augustus 2017 14:10
To: ceph-users
Should these messages not be gone in 12.2.0?
2017-08-31 20:49:33.500773 7f5aa1756d40 -1 WARNING: the following
dangerous and experimental features are enabled: bluestore
2017-08-31 20:49:33.501026 7f5aa1756d40 -1 WARNING: the following
dangerous and experimental features are enabled: bluestore
What would be the best way to get an overview of all client connections? Something similar to the output of rbd lock list.
cluster:
1 clients failing to respond to capability release
1 MDSs report slow requests
ceph daemon mds.a dump_ops_in_flight
{
"ops": [
Sorry to cut in on your thread.
> Have you disabled te FLUSH command for the Samsung ones?
We have a test cluster currently only with spinners pool, but we have
SM863 available to create the ssd pool. Is there something specific that
needs to be done for the SM863?
-Original Message
Afaik ceph is not supporting/working with bonding.
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg35474.html
(thread: Maybe some tuning for bonded network adapters)
-Original Message-
From: Andreas Herrmann [mailto:andr...@mx20.org]
Sent: vrijdag 8 september 2017 13:
I have been trying to set up the rados gateway (without ceph-deploy), but I am missing some commands to enable the service, I guess? How do I populate /var/lib/ceph/radosgw/ceph-gw1? I didn’t see any command for this like there is for ceph-mon.
service ceph-radosgw@gw1 start
Gives:
2017-09-12 22:26:06.390523 7fb9
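For reference, a minimal sketch of the manual setup I was after, following the convention from the docs (client.rgw.gw1 is just an example id; my assumption is that the keyring is picked up from the rgw data dir and that the id after the @ in the unit name matches the client name without the 'client.' prefix):

mkdir -p /var/lib/ceph/radosgw/ceph-rgw.gw1
ceph auth get-or-create client.rgw.gw1 mon 'allow rw' osd 'allow rwx' -o /var/lib/ceph/radosgw/ceph-rgw.gw1/keyring
chown -R ceph:ceph /var/lib/ceph/radosgw
systemctl enable ceph-radosgw@rgw.gw1
systemctl start ceph-radosgw@rgw.gw1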
Original Message-
From: Jean-Charles Lopez [mailto:jelo...@redhat.com]
Sent: woensdag 13 september 2017 1:06
To: Marc Roos
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Rgw install manual install luminous
Hi,
see comment in line
Regards
JC
> On Sep 12, 2017, at 13:31, Marc Roos w
Am I the only one having these JSON issues with collectd, or did I do something wrong in the configuration/upgrade?
Sep 13 15:44:15 c01 collectd: ceph plugin: ds
Bluestore.kvFlushLat.avgtime was not properly initialized.
Sep 13 15:44:15 c01 collectd: ceph plugin: JSON handler failed with
status -1.
Is there something like this scsi rescan for rbd, to rescan the size of the rbd device and make it available (while it is being used)?
echo 1 > /sys/class/scsi_device/2\:0\:0\:0/device/rescan
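For krbd there is, as far as I know, a per-device sysfs attribute that does the same (device id 0 is just an example, check /sys/bus/rbd/devices for the right one):

# refresh the size of mapped rbd device 0 while it stays in use
echo 1 > /sys/bus/rbd/devices/0/refresh
blockdev --getsize64 /dev/rbd0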
/21/refresh
(I am trying to online increase the size via kvm, virtio disk in win
2016)
-Original Message-
From: David Turner [mailto:drakonst...@gmail.com]
Sent: maandag 18 september 2017 22:42
To: Marc Roos; ceph-users
Subject: Re: [ceph-users] Rbd resize, refresh rescan
I've never n
We use these :
NVDATA Product ID : SAS9207-8i
Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2308
PCI-Express Fusion-MPT SAS-2 (rev 05)
Does someone by any chance know how to turn on the drive identification
lights?
-Original Message-
From: Jake Young
In my case it was syncing, and was syncing slowly (hour or so?). You
should see this in the log file. I wanted to report this, because my
store.db is only 200MB, and I guess you want your monitors up and
running quickly.
I also noticed that when the 3rd monitor left the quorum, ceph -s
comm
From the looks of it, too bad the efforts could not be combined/coordinated; that seems to be an issue with many open source initiatives.
-Original Message-
From: mj [mailto:li...@merit.unu.edu]
Sent: zondag 24 september 2017 16:37
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-use
files at the end.
PS: Is there some index of these slides? I constantly have problems browsing back to a specific one.
-Original Message-
From: Danny Al-Gaaf [mailto:danny.al-g...@bisect.de]
Sent: maandag 25 september 2017 9:37
To: Marc Roos; ceph-users
Subject: Re: [ceph-users] librmb
Maybe this will get you started with the permissions for only this fs
path /smb
sudo ceph auth get-or-create client.cephfs.smb mon 'allow r' mds 'allow
r, allow rw path=/smb' osd 'allow rwx pool=fs_meta,allow rwx
pool=fs_data'
-Original Message-
From: Yoann Moulin [mailto:yoann.m
I think that is because of the older kernel client, like mentioned here?
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg39734.html
-Original Message-
From: Yoann Moulin [mailto:yoann.mou...@epfl.ch]
Sent: vrijdag 29 september 2017 10:00
To: ceph-users
Subject: Re: [ceph-u
[Sat Sep 30 15:51:11 2017] libceph: osd5 192.168.10.113:6809 socket
closed (con state OPEN)
[Sat Sep 30 15:51:11 2017] libceph: osd5 192.168.10.113:6809 socket
closed (con state CONNECTING)
[Sat Sep 30 15:51:11 2017] libceph: osd5 down
[Sat Sep 30 15:51:11 2017] libceph: osd5 down
[Sat Sep 3
Is this useful for someone?
[Sat Sep 30 15:51:11 2017] libceph: osd5 192.168.10.113:6809 socket
closed (con state OPEN)
[Sat Sep 30 15:51:11 2017] libceph: osd5 192.168.10.113:6809 socket
closed (con state CONNECTING)
[Sat Sep 30 15:51:11 2017] libceph: osd5 down
[Sat Sep 30 15:51:11 2017] lib
I have nfs-ganesha 2.5.2 (from the ceph download site) running on a luminous 12.2.1 osd node, and when I rsync on a vm that has the nfs mounted, I get stalls.
I thought it was related to the number of files when rsyncing the centos7 distro, but when I tried to rsync just one file it also stalled. It
Rbd resize is picked up automatically on the mapped host.
However, for the changes to appear in libvirt/qemu, I have to
virsh qemu-monitor-command vps-test2 --hmp "info block"
virsh qemu-monitor-command vps-test2 --hmp "block_resize
drive-scsi0-0-0-0 12G"
-Original Message-
Did you check this?
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg39886.html
-Original Message-
From: Daniel Carrasco [mailto:d.carra...@i2tic.com]
Sent: dinsdag 17 oktober 2017 17:49
To: ceph-us...@ceph.com
Subject: [ceph-users] OSD are marked as down after jewel ->
What about not using deploy?
-Original Message-
From: Sean Sullivan [mailto:lookcr...@gmail.com]
Sent: donderdag 19 oktober 2017 2:28
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Luminous can't seem to provision more than 32 OSDs
per server
I am trying to install Ceph lumino
Hi Giang,
Can I ask if you used the elrepo kernels? I tried these, but they are not booting, I think because of the mpt2sas/mpt3sas drivers.
Regards,
Marc
-Original Message-
From: GiangCoi Mr [mailto:ltrgian...@gmail.com]
Sent: woensdag 25 oktober 2017 16:11
To: ceph-us
---
From: GiangCoi Mr [mailto:ltrgian...@gmail.com]
Sent: woensdag 25 oktober 2017 17:08
To: Marc Roos
Cc: ceph-users
Subject: Re: [ceph-users] iSCSI gateway for ceph
Yes, I used elerepo to upgrade kernel, I can boot and show it, kernel
4.x. What is the problem?
Sent from my iPhone
> On Oct 25, 201
I hope I can post a general question/comment regarding distributions here, because I see a lot of stability issues passing by. Why are people choosing an ubuntu distribution to run in production? Mostly I get an answer like 'they are accustomed to using it'. But is the OS not just a tool?
Is it possible to add a longer description to a created snapshot (other than using the name)?
What is the new syntax for "ceph osd status" for luminous?
-Original Message-
From: I Gede Iswara Darmawan [mailto:iswaradr...@gmail.com]
Sent: donderdag 2 november 2017 6:19
To: ceph-users@lists.ceph.com
Subject: [ceph-users] No ops on some OSD
Hello,
I want to ask about my probl
Can anyone advise on an erasure pool config to store
- files between 500MB and 8GB, total 8TB
- just for archiving, not much reading (few files a week)
- hdd pool
- now 3 node cluster (4th coming)
- would like to save on storage space
I was thinking of a profile with jerasure k=3 m=2, but mayb
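In case it is useful, a minimal sketch of what I had in mind (an assumption on my side: k=3 m=2 gives 5 shards, so on only 3 hosts the failure domain has to be osd instead of host, at the cost of host-level redundancy):

ceph osd erasure-code-profile set ec32profile k=3 m=2 crush-failure-domain=osd
ceph osd pool create ec32 64 64 erasure ec32profile      # 64 pgs is just an example
ceph osd pool application enable ec32 rgw                # or whatever the pool will be used for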
In a test environment (centos7), on a luminous osd node, with binaries from
download.ceph.com::ceph/nfs-ganesha/rpm-V2.5-stable/luminous/x86_64/
I am having these:
Having these:
Nov 6 17:41:34 c01 kernel: ganesha.nfsd[31113]: segfault at 0 ip
7fa80a151a43 sp 7fa755ffa2f0 error 4 in
libdbus-1.so.3.7.4
How/where can I see how eg. 'profile rbd' is defined?
As in
[client.rbd.client1]
key = xxx==
caps mon = "profile rbd"
caps osd = "profile rbd pool=rbd"
What would be the correct way to convert the rbd-mapped images in the xml file to librbd?
I had this:
And for librbd this:
But this will give me a qemu
I would like to store objects with:
rados -p ec32 put test2G.img test2G.img
error putting ec32/test2G.img: (27) File too large
Changing the pool application from custom to rgw did not help
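My guess is that this is the osd_max_object_size limit (if I remember correctly the default was lowered to 128MB in luminous). A sketch of how to check it, and of writing the object striped instead (the --striper flag assumes a rados binary built with libradosstriper):

ceph daemon osd.0 config get osd_max_object_size
rados -p ec32 --striper put test2G.img test2G.img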
I added an erasure k=3,m=2 coded pool on a 3 node test cluster and am
getting these errors.
pg 48.0 is stuck undersized for 23867.00, current state
active+undersized+degraded, last acting [9,13,2147483647,7,2147483647]
pg 48.1 is stuck undersized for 27479.944212, current state
ac
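As far as I understand, 2147483647 is the value CRUSH uses for 'none', i.e. it could not find enough osds for those shards; with k+m=5 shards and only 3 hosts that is expected as long as the failure domain is host. A sketch of what to check (profile and pool names are examples):

ceph osd erasure-code-profile get ec32profile     # look at crush-failure-domain
ceph osd pool get ec32 crush_rule
ceph osd crush rule dump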
Message-
From: Kevin Hrpcek [mailto:kevin.hrp...@ssec.wisc.edu]
Sent: donderdag 9 november 2017 21:09
To: Marc Roos
Cc: ceph-users
Subject: Re: [ceph-users] Pool shard/stripe settings for file too large
files?
Marc,
If you're running luminous you may need to increase osd_max_object
Do you know of a rados client that uses this? Maybe a simple 'mount' so I can cp the files onto it?
-Original Message-
From: Christian Wuerdig [mailto:christian.wuer...@gmail.com]
Sent: donderdag 9 november 2017 22:01
To: Kevin Hrpcek
Cc: Marc Roos; ceph-users
Subject:
osd's are crashing when putting an (8GB) file in an erasure coded pool, just before finishing. The same osd's are used for replicated pools rbd/cephfs, and seem to do fine there. Did I make some error or is this a bug?
Looks similar to
Looks similar to
https://www.spinics.net/lists/ceph-devel/msg38685.html
http://lists.c
: iswaradr...@gmail.com / iswaradr...@live.com
On Sat, Nov 4, 2017 at 6:11 PM, Marc Roos
wrote:
What is the new syntax for "ceph osd status" for luminous?
-Original Message-
From: I Gede Iswara Darmawan [mailto:iswaradr...@gmail.com]
1. I don’t think an osd should 'crash' in such a situation.
2. How else should I 'rados put' an 8GB file?
-Original Message-
From: Christian Wuerdig [mailto:christian.wuer...@gmail.com]
Sent: maandag 13 november 2017 0:12
To: Marc Roos
Cc: ceph-users
Subj
:
2017-11-10 20:39:31.296101 7f840ad45e40 -1 WARNING: the following
dangerous and experimental features are enabled: bluestore
Or is that a leftover warning message from an old client?
Kind regards,
Caspar
2017-11-10 21:27 GMT+01:00 Marc Roos :
osd's are crashing when putting a
rom your
ceph.conf and see if that solves it.
Caspar
2017-11-12 15:56 GMT+01:00 Marc Roos :
[@c03 ~]# ceph osd status
2017-11-12 15:54:13.164823 7f478a6ad700 -1 WARNING: the following
dangerous and experimental features are enabled: bluestore
2017-
Very very nice, Thanks! Is there a heavy penalty to pay for enabling
this?
-Original Message-
From: John Spray [mailto:jsp...@redhat.com]
Sent: maandag 13 november 2017 11:48
To: Marc Roos
Cc: iswaradrmwn; ceph-users
Subject: Re: [ceph-users] No ops on some OSD
On Sun, Nov 12
Keep in mind also whether you want to have failover in the future. We were running a 2nd server and were replicating the raid arrays via DRBD. Expanding that storage is quite a hassle, compared to just adding a few osd's.
-Original Message-
From: Oscar Segarra [mailto:oscar.sega...@gmail
If I am not mistaken, the whole idea with the 3 replicas is that you have enough copies to recover from a failed osd. In my tests this seems to go fine automatically. Are you doing something that is not advised?
-Original Message-
From: Gonzalo Aguilar Delgado [mailto:gagui...@aguil
I was wondering if there are any statistics available that show the
performance increase of doing such things?
-Original Message-
From: German Anders [mailto:gand...@despegar.com]
Sent: dinsdag 28 november 2017 19:34
To: Luis Periquito
Cc: ceph-users
Subject: Re: [ceph-users] ceph
Total size: 51 M
Is this ok [y/d/N]: y
Downloading packages:
Package ceph-common-12.2.2-0.el7.x86_64.rpm is not signed
-Original Message-
From: Rafał Wądołowski [mailto:rwadolow...@cloudferro.com]
Sent: maandag 4 december 2017 14:18
To: ceph-users@lists.ceph.com
Subject: [ceph-use
Is there a disadvantage to just always starting pg_num and pgp_num with something low like 8, and then increasing them later when necessary? The question is then how to identify when that is necessary.
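A minimal sketch of the increase itself (pool name is an example; as far as I know pg_num can only be increased, never decreased, on luminous):

ceph osd pool set rbd pg_num 16
ceph osd pool set rbd pgp_num 16
# a rough indicator for 'when necessary': how evenly data and pgs are spread over the osds
ceph osd df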
-Original Message-
From: Christian Wuerdig [mailto:christian.wuer...@gmail.com]
Sent: dinsdag 2 ja
Maybe because of this: the 850 evo / 850 pro are listed here at 1.9MB/s and 1.5MB/s
http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
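The test in that article boils down to a single-job, queue-depth-1, O_DIRECT/O_DSYNC write; a sketch with fio (the device name is an example, and this is destructive when run against a raw device):

fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based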
-Original Message-
From: Rafał Wądołowski [mailto:rwadolow...@cloudferro.com]
Sent: donderdag 4 januari 2
On a default luminous test cluster I would like to limit the logging of (I guess successful) notifications related to deleted snapshots. I don’t need 77k of these messages in my syslog server.
What/where would be the best place to do this? (but not by just dropping them at the syslog side)
Jan 8 13:11:54 c
I guess the mds cache holds files, attributes etc but how many files
will the default "mds_cache_memory_limit": "1073741824" hold?
-Original Message-
From: Stefan Kooman [mailto:ste...@bit.nl]
Sent: vrijdag 5 januari 2018 12:54
To: Patrick Donnelly
Cc: Ceph Users
Subject: Re: [ceph-u
The script has not been adapted for this - at the end
http://download.ceph.com/nfs-ganesha/rpm-V2.5-stable/luminous/x86_64/
nfs-ganesha-rgw-2.5.4-.el7.x86_64.rpm
^
-Original Message-
From: Marc Roos
Sent: dinsdag 29 augustus 2017 12:10
To: amare
I was thinking of enabling this jemalloc. Is there a recommended procedure for
a default centos7 cluster?
I regularly read the opposite here, and was thinking of switching to ec. Are you sure about what is causing your poor results?
http://ceph.com/community/new-luminous-erasure-coding-rbd-cephfs/
http://ceph.com/geen-categorie/ceph-pool-migration/
Maybe for the future:
rpm {-V|--verify} [select-options] [verify-options]
Verifying a package compares information about the installed
files in the package with information about the files taken
from the package metadata stored in the rpm database. Among
other things, v
Hmmm, I have to disagree with
'too many services'
What do you mean? There is a process for each osd, mon, mgr and mds.
There are fewer processes running than on a default windows fileserver.
What is the complaint here?
'manage everything by your command-line'
What is so bad about this? Even mi
Is there a way to hide the striped objects from view? Sort of like with the rbd type pool.
[@c01 mnt]# rados ls -p ec21 | head
test2G.img.0023
test2G.img.011c
test2G.img.0028
test2G.img.0163
test2G.img.01e7
test2G.img.008d
test
I have seen messages pass by here about it taking a while when a monitor tries to join. I had the monitor disk run out of space; the monitor was killed and I am now restarting it. I can't do a ceph -s and have to wait for this monitor to join as well.
2018-01-18 21:34:05.787749 7f5187a40700 0 -- 192.16
It took around 30 min for the monitor to join before I could execute ceph -s.
Are the guys of apache mesos agreeing to this? I have been looking at mesos and dcos and still have to make up my mind which way to go. I like that mesos has the unified containerizer that runs the docker images, so I don’t need to run dockerd, and how they adapt to the cni standard.
How is t
Sorry for asking maybe the obvious, but is this the kernel available in elrepo? Or a different one?
-Original Message-
From: Mike Christie [mailto:mchri...@redhat.com]
Sent: zaterdag 20 januari 2018 1:19
To: Steven Vacaroaia; Joshua Chen
Cc: ceph-users
Subject: Re: [ceph-users]
If I test my connections with sockperf via a 1Gbit switch I get around 25usec; when I test the 10Gbit connection via the switch I get around 12usec. Is that normal? Or should there be a difference of 10x?
sockperf ping-pong
sockperf: Warmup stage (sending a few dummy messages)...
sockperf: S
:
On 01/20/2018 02:02 PM, Marc Roos wrote:
If I test my connections with sockperf via a 1Gbit switch I
get around
25usec, when I test the 10Gbit connection via the switch I
have around
12usec is that normal? Or should
Maybe first check what is using the swap?
swap-use.sh | sort -k 5,5 -n
#!/bin/bash
SUM=0
OVERALL=0
for DIR in `find /proc/ -maxdepth 1 -type d | egrep "^/proc/[0-9]"`
do
PID=`echo $DIR | cut -d / -f 3`
PROGNAME=`ps -p $PID -o comm --no-headers`
for SWAP in `grep Swap $DIR/smaps 2>/d
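A shorter alternative sketch that reads VmSwap from /proc/<pid>/status (my assumption: the kernel exposes VmSwap, which the CentOS7 3.10 kernel does):

for d in /proc/[0-9]*; do
  pid=${d#/proc/}
  comm=$(cat "$d/comm" 2>/dev/null)
  swap=$(awk '/^VmSwap:/ {print $2}' "$d/status" 2>/dev/null)
  # only print processes that actually have pages swapped out (value is in kB)
  [ -n "$swap" ] && [ "$swap" -gt 0 ] && echo "$pid $comm $swap kB"
done | sort -k3 -n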
ceph osd pool application enable XXX rbd
-Original Message-
From: Steven Vacaroaia [mailto:ste...@gmail.com]
Sent: woensdag 24 januari 2018 19:47
To: David Turner
Cc: ceph-users
Subject: Re: [ceph-users] Luminous - bad performance
Hi ,
I have bundled the public NICs and added 2 more