Hello,
On Fri, 20 Oct 2017 13:35:55 -0500 Russell Glaue wrote:
> On the machine in question, the 2nd newest, we are using the LSI MegaRAID
> SAS-3 3008 [Fury], which allows us a "Non-RAID" option, and has no battery.
> The older two use the LSI MegaRAID SAS 2208 [Thunderbolt] I reported
> earlier, each single drive configured as RAID0.
I am trying to stand up Ceph (Luminous) on three 72-disk Supermicro servers
running Ubuntu 16.04 with HWE enabled (for a 4.10 kernel for CephFS). I am
not sure how this is possible, but even though I am running the following
line to wipe all disks of their partitions, once I run ceph-disk to
partition t
On Sat, Oct 21, 2017 at 1:59 AM, Richard Bade wrote:
> Hi Lincoln,
> Yes, the object is 0 bytes on all OSDs. It has the same filesystem
> date/time too. Before I removed the RBD image (migrated the disk to a
> different pool) it was 4 MB on all the OSDs and the md5 checksum was the
> same on all, so it seems that only metadata is inconsistent.
On Fri, Oct 20, 2017 at 8:23 PM, Ольга Ухина wrote:
> I was able to collect dump data during a slow request, but this time I saw
> that it was related to high load average and iowait, so I am keeping an eye on it.
> And it was on two particular OSDs, but yesterday it was on other OSDs.
> I see in the dump of these two OSDs that operations are stuck on queued_for_pg.
On Fri, Oct 20, 2017 at 7:35 PM, Josy wrote:
> Hi,
>
>>> What does your erasure code profile look like for pool 32?
>
> $ ceph osd erasure-code-profile get myprofile
> crush-device-class=
> crush-failure-domain=host
> crush-root=default
> jerasure-per-chunk-alignment=false
> k=5
> m=3
> plugin=jerasure
> technique=reed_sol_van
> w=8
What did you actually set the cephx caps to for that client?
On Fri, Oct 20, 2017 at 8:01 AM Keane Wolter wrote:
> Hello all,
>
> I am trying to limit what uid/gid a client is allowed to run as (similar
> to NFS' root squashing). I have referenced this email,
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-February/016173.html,
On the machine in question, the 2nd newest, we are using the LSI MegaRAID
SAS-3 3008 [Fury], which allows us a "Non-RAID" option, and has no battery.
The older two use the LSI MegaRAID SAS 2208 [Thunderbolt] I reported
earlier, each single drive configured as RAID0.
Thanks for everyone's help.
I a
Hi Lincoln,
Yes, the object is 0 bytes on all OSDs. It has the same filesystem
date/time too. Before I removed the RBD image (migrated the disk to a
different pool) it was 4 MB on all the OSDs and the md5 checksum was the
same on all, so it seems that only metadata is inconsistent.
Thanks for your suggestion, I
Hi Rich,
Is the object inconsistent and 0-bytes on all OSDs?
We ran into a similar issue on Jewel, where an object was empty across the
board but had inconsistent metadata. Ultimately it was resolved by doing a
"rados get" and then a "rados put" on the object. *However* that was a last
ditch e
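For the archive, the get/put workaround described above boils down to rewriting the object in place and then re-verifying the PG; the pool, object and PG names below are placeholders:
# last resort: rewrite the object in place, then re-check the PG
$ rados -p <pool> get <object-name> /tmp/object.bak
$ rados -p <pool> put <object-name> /tmp/object.bak
$ ceph pg deep-scrub <pg_id>
$ ceph health detail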
Hi Everyone,
In our cluster running 0.94.10, we had a PG pop up as inconsistent
during scrub. Previously, when this has happened, running ceph pg repair
[pg_num] has resolved the problem. This time the repair runs, but the PG
remains inconsistent.
~$ ceph health detail
HEALTH_ERR 1 pgs inconsistent; 2 scru
I don't know of any technical limitations within QEMU (although most likely
untested), but it definitely appears that libvirt restricts you to a file
or block device. Note that we do have a librbd backlog item for built-in
image migration [1] that would follow a similar approach as QEMU.
[1] https
Hello all,
I am trying to limit what uid/gid a client is allowed to run as (similar to
NFS' root squashing). I have referenced this email,
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-February/016173.html,
with no success. After generating the keyring, moving it to a client
machine, a
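For what it's worth, the syntax I have seen for pinning a CephFS client to a uid/gid is an MDS cap along the following lines; the client name, path, pool and ids here are made up, and I have not verified that this is enforced as intended on Luminous:
$ ceph auth get-or-create client.squashed \
      mon 'allow r' \
      mds 'allow rw path=/homes uid=1000 gids=1000' \
      osd 'allow rw pool=cephfs_data'
$ ceph auth get client.squashed   # confirm the caps that were actually stored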
On 2017-10-20 13:00, Mehmet wrote:
On 2017-10-20 11:10, Mehmet wrote:
Hello,
Yesterday I upgraded my "Jewel" cluster (10.2.10) to "Luminous"
(12.2.1). This went really smoothly - thanks! :)
Today I wanted to enable the built-in dashboard via
#> vi ceph.conf
[...]
[mgr]
mgr_modules = dashboard
I ran the command, no luck, and waited an hour, still no luck.
I rebuilt my cluster and looked at RBD (in case it's an S3 thing) and I am
having the same issue.
I tried to scrub, but no luck either.
It's just odd; I don't want to be adding disks when I don't need to.
On Fri, Oct 20, 2017 at 11:25 AM, nigel davi
Hi,
The default collectd ceph plugin seems to parse the output of "ceph daemon
perf dump" and generate graphite output. However, I see more
fields in the dump than in collectd/graphite.
Specifically, I see get stats for rgw (ceph_rate-Client_rgw_nodename_get) but
not put stats (e.g. ceph_rate
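You can compare against what the daemon itself exposes by querying the admin socket directly; the socket path below is a guess for your host, and jq is only used for readability:
$ ceph --admin-daemon /var/run/ceph/ceph-client.rgw.<nodename>.asok perf dump | jq 'keys'
$ ceph --admin-daemon /var/run/ceph/ceph-client.rgw.<nodename>.asok perf schema | jq 'keys'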
Hi,
I have a bucket that according to radosgw-admin is about 8TB, even though it's
really only 961GB.
I have run radosgw-admin gc process, and that completes quite fast.
root@osdnode04:~# radosgw-admin gc process
root@osdnode04:~# radosgw-admin gc list
[]
{
"bucket": "qnapnas",
Hi,
I am seeing issues with resharding. RGW logging shows the following:
2017-10-20 15:17:30.018807 7fa1b219a700 -1 ERROR: failed to get entry from
reshard log, oid=reshard.13 tenant= bucket=qnapnas
radosgw-admin shows me there is one bucket queued for resharding:
radosgw-admin
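For anyone else hitting this, the reshard queue can be inspected and a stuck entry cleared and re-queued roughly like this; I have not verified it against 12.2.1, and the shard count is only an example:
$ radosgw-admin reshard list
$ radosgw-admin reshard cancel --bucket=qnapnas
$ radosgw-admin reshard add --bucket=qnapnas --num-shards=32
$ radosgw-admin reshard process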
Hi all,
Can export-diff work effectively without the fast-diff rbd feature, as it
is not supported in kernel rbd?
Maged
On 2017-10-19 23:18, Oscar Segarra wrote:
> Hi Richard,
>
> Thanks a lot for sharing your experience... I have made deeper investigation
> and it looks export-diff is t
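As far as I understand it, export-diff does not need the fast-diff feature at all; fast-diff (together with object-map) only speeds up working out which blocks changed, so without it the diff is computed by a slower full scan but the result is the same. A plain run looks like this, with pool, image and snapshot names as examples:
$ rbd export-diff --from-snap snap1 rbd/vm-disk@snap2 vm-disk.snap1-snap2.diff
$ rbd import-diff vm-disk.snap1-snap2.diff backup/vm-disk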
Unless you manually issue a snapshot command on the pool, you will never
have a snapshot made. But again, I don't think you can disable it.
On Fri, Oct 20, 2017, 6:52 AM nigel davies wrote:
> OK, I have set up an S3 bucket linked to my Ceph cluster (so RGW); I only
> created my cluster yesterday.
>
>
On 2017-10-20 11:10, Mehmet wrote:
Hello,
Yesterday I upgraded my "Jewel" cluster (10.2.10) to "Luminous"
(12.2.1). This went really smoothly - thanks! :)
Today I wanted to enable the built-in dashboard via
#> vi ceph.conf
[...]
[mgr]
mgr_modules = dashboard
[...]
#> ceph-deploy --overwrite-c
OK, I have set up an S3 bucket linked to my Ceph cluster (so RGW); I only
created my cluster yesterday.
I am just trying to work out if I have snapshots enabled for a pool and, if
so, how to disable them.
I don't know that you can disable snapshots. There isn't an automated
method in Ceph to run snapshots, but you can easily script it. There are a
lot of different types of snapshots in Ceph depending on whether you're using
RBD, RGW, or CephFS. There are also caveats and config options you should tweak
depe
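For RBD, for instance, a scripted snapshot plus the commands to see what snapshots already exist could look like this; pool and image names are examples:
# take a dated snapshot of one image, then list what exists
$ rbd snap create rbd/myimage@snap-$(date +%Y%m%d)
$ rbd snap ls rbd/myimage
# pool-level (RADOS) snapshots are a separate mechanism; list them with:
$ rados -p mypool lssnap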
Hey,
How would I check if snapshots are or are not enabled?
Thanks
I can attest that the battery in the RAID controller is a thing. I'm used
to using LSI controllers, but my current position has HP RAID controllers,
and we just tracked down 10 of our nodes that had >100 ms await pretty much
always; they were the only 10 nodes in the cluster with failed batteries on
the ra
Thanks, I am running the command now. I did get a message:
"2017-10-20 11:23:24.291921 7f6485a8ba00 0 client.75274.objecter FULL,
paused modify 0x55b1a8476820 tid 0"
On Fri, Oct 20, 2017 at 10:28 AM, Hans van den Bogert
wrote:
> My experience with RGW is that actual freeing up of space is asy
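That objecter "FULL, paused modify" message suggests the client is holding back writes because a full flag is set somewhere, which would also explain space not appearing to free up; worth checking with the usual commands:
$ ceph health detail   # look for full / near-full warnings
$ ceph df
$ ceph osd df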
I was able to collect dump data during a slow request, but this time I saw
that it was related to high load average and iowait, so I am keeping an eye on it.
And it was on two particular OSDs, but yesterday it was on other OSDs.
I see in the dump of these two OSDs that operations are stuck on queued_for_pg,
for example:
Hi,
>> What does your erasure code profile look like for pool 32?
$ ceph osd erasure-code-profile get myprofile
crush-device-class=
crush-failure-domain=host
crush-root=default
jerasure-per-chunk-alignment=false
k=5
m=3
plugin=jerasure
technique=reed_sol_van
w=8
On 20-10-2017 06:52, Brad Hubbar
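One thing worth noting about that profile: with crush-failure-domain=host and k=5, m=3, CRUSH needs at least k+m = 8 hosts to place every chunk. For reference, creating such a profile and a pool from it looks roughly like this, with the pool name and PG count as examples:
$ ceph osd erasure-code-profile set myprofile \
      k=5 m=3 crush-failure-domain=host plugin=jerasure technique=reed_sol_van
$ ceph osd pool create ecpool 128 128 erasure myprofile
$ ceph osd pool get ecpool min_size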
My experience with RGW is that the actual freeing up of space is asynchronous
to an S3 client's command to delete an object. I.e., it might take a while
before it's actually freed up.
Can you redo your little experiment and simply wait for an hour to let the
garbage collector do its thing, or
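The pending deletions can also be inspected, and collection kicked off by hand instead of waiting; --include-all is how I recall listing entries that are not yet due:
$ radosgw-admin gc list --include-all   # queued objects, including ones not yet due
$ radosgw-admin gc process              # run garbage collection now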
Hello,
Yesterday I upgraded my "Jewel" cluster (10.2.10) to "Luminous"
(12.2.1). This went really smoothly - thanks! :)
Today I wanted to enable the built-in dashboard via
#> vi ceph.conf
[...]
[mgr]
mgr_modules = dashboard
[...]
#> ceph-deploy --overwrite-conf config push monserver1 monserver
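On Luminous the dashboard can also be enabled at runtime through the mgr, without editing ceph.conf and pushing it around:
#> ceph mgr module enable dashboard
#> ceph mgr services    # should now show the dashboard URL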
Hi! Thanks for your help.
How can I increase the history interval for the command ceph daemon osd.<id>
dump_historic_ops? It only shows the last several minutes.
I see slow requests on random OSDs each time and on different hosts (there
are three). As far as I can see in the logs, the problem is not related to
scrubbing.
Regar
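The history window of dump_historic_ops is governed by osd_op_history_size (number of ops kept) and osd_op_history_duration (seconds); they can be raised at runtime, e.g. (osd.12 is just a placeholder):
$ ceph tell osd.* injectargs '--osd_op_history_size 200 --osd_op_history_duration 3600'
# or per daemon via the admin socket:
$ ceph daemon osd.12 config set osd_op_history_size 200
$ ceph daemon osd.12 config set osd_op_history_duration 3600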
Hi Gregory,
We more or less followed the instructions on the site (famous last
words, I know ;)).
Grepping for the error in the OSD logs of the OSDs of the PG, the
primary's log had "5.5e3s0 shard 59(5) missing
5:c7ae919b:::10014d3184b.:head".
We looked for the object using the find command,
Hello, is it possible to use compression on an EC pool? I am trying to
enable this to use as a huge backup/archive disk; the data is almost
static and access to it is very sporadic, so poor performance is not a
concern here.
I've created the RBD image storing its data on the EC pool (--data-pool option)
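Assuming BlueStore OSDs, compression is set per pool and, as far as I know, works the same on an EC pool; something along these lines, where the pool name and settings are examples and allow_ec_overwrites is only needed for the RBD data-pool case you describe:
$ ceph osd pool set ecpool compression_algorithm snappy
$ ceph osd pool set ecpool compression_mode aggressive
$ ceph osd pool set ecpool allow_ec_overwrites true
$ ceph osd pool get ecpool compression_mode   # verify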