I was misled. In fact, this is not an automatic deletion, but the removal
of one object per op by the application.
Reject.
k
Hi,
I have a Jewel Ceph cluster with RGW index sharding enabled. I've configured
the index to have 128 shards. I am upgrading to Luminous. What will happen if
I enable dynamic bucket index resharding in ceph.conf? Will it maintain my 128
shards (the buckets are currently empty), and wil
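For reference, a hedged sketch of the options usually involved here (the RGW section name is only a placeholder; please double-check the exact defaults against the Luminous docs):

[client.rgw.gateway1]
rgw dynamic resharding = true               # Luminous enables this by default
rgw override bucket index max shards = 128  # shard count applied to newly created buckets
rgw max objs per shard = 100000             # threshold dynamic resharding reacts to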
Hey Nathan.
No blaming here. I'm very thankful for this great piece (ok, sometimes more of a
beast ;) ) of open-source SDS and all the great work around it, incl. the community
and users... and happy the problem is identified and can be fixed for
others/the future as well :)
Well, yes, can confirm
Hi,
I'm running 12.2.5 and I have no problems at the moment.
However, my servers report daily that they want to upgrade to 12.2.7. Is this
safe, or should I wait for 12.2.8?
Are there any predictions for when the 12.2.8 release will be available?
Micha Krause
Hi,
I also had the same issues and took to disabling this feature.
Thanks
On Mon, Jul 30, 2018 at 8:42 AM, Micha Krause wrote:
> Hi,
>
> I have a Jewel Ceph cluster with RGW index sharding enabled. I've
>> configured the index to have 128 shards. I am upgrading to Luminous. What
>> will h
Hi together,
for all others on this list, it might also be helpful to know which setups are
likely affected.
Does this only occur for Filestore disks, i.e. if ceph-volume has taken over
taking care of these?
Does it happen on every RHEL 7.5 system?
We're still on 13.2.0 here and ceph-detect-
> for all others on this list, it might also be helpful to know which setups are
> likely affected.
> Does this only occur for Filestore disks, i.e. if ceph-volume has taken over
> taking care of these?
> Does it happen on every RHEL 7.5 system?
It affects all OSDs managed by ceph-disk on all RHEL syste
I'll just give it a test :)
On 30/07/18 10:54, Nathan Cutler wrote:
> for all others on this list, it might also be helpful to know which
> setups are likely affected.
> Does this only occur for Filestore disks, i.e. if ceph-volume has
> taken over taking care of these?
> Does it happen on every RHEL 7.
Hi. I'm a newbie with Ceph. I know that I can define custom types in the CRUSH map,
and that type id 0 is always used for devices. But if I specify
type 0 slot
do I need to specify devices with this name, or use the predefined name device?
For example:
type 0 device
device 0 osd.0 class hdd
or
type 0 slot
slot 0 s
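As far as I understand it (please verify against a map decompiled from your own cluster with ceph osd getcrushmap -o cm && crushtool -d cm -o cm.txt), the device lines always use the literal keyword "device", and the name you give type 0 is only referenced later in bucket definitions. A hedged sketch of the relevant parts of a decompiled map:

# devices
device 0 osd.0 class hdd
device 1 osd.1 class hdd

# types
type 0 osd     # usually named "osd" (older maps used "device"); the id 0 is what matters
type 1 host
type 2 rack
type 3 root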
Hi All,
there might be a problem on Scientific Linux 7.5 too:
after upgrading directly from 12.2.5 to 13.2.1
[root@cephr01 ~]# ceph-detect-init
Traceback (most recent call last):
  File "/usr/bin/ceph-detect-init", line 9, in
    load_entry_point('ceph-detect-init==1.0.1', 'console_scripts',
Hello community,
I am building my first cluster for a project that hosts millions of small
(from 20 KB) and big (up to 10 MB) files. Right now we are moving from
a local 16 TB RAID storage to a cluster of 12 small machines. We are
planning to have 11 OSD nodes, use an erasure coding pool (10+1) and one
ho
Dear list,
we experience very poor single-thread read performance (~35 MB/s) on our 5-node
Ceph cluster. I first encountered it in VMs transferring data via rsync,
but could reproduce the problem with rbd and rados bench on the physical
nodes.
Let me briefly give an overview of our infrastruc
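For anyone wanting to reproduce this kind of measurement, a hedged example (the pool name is a placeholder; --no-cleanup keeps the objects around for the read pass):

rados bench -p benchpool 60 write --no-cleanup
rados bench -p benchpool 60 seq -t 1      # single-threaded sequential read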
Something like smallfile perhaps? https://github.com/bengland2/smallfile
Or you just time creating/reading lots of files
With read benching you would want to ensure you've cleared your mds cache
or use a dataset larger than the cache.
I'd be interested in seeing your results; I have this on the to do
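Something quick and dirty along these lines can also do the job (the mount point is an example; the cache drop needs root):

cd /mnt/cephfs/benchdir
time bash -c 'for i in $(seq 1 10000); do echo data > file.$i; done'   # create pass
sync; echo 3 > /proc/sys/vm/drop_caches                                # drop the local page cache first
time bash -c 'cat file.* > /dev/null'                                  # read pass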
Hello Ceph users,
We have updated our cluster from 10.2.7 to 10.2.11. A few hours after
the update, 1 OSD crashed.
When trying to add the OSD back to the cluster, 2 other OSDs started
crashing with segmentation faults. We had to mark all 3 OSDs as down as we
had stuck PGs and blocked operations and th
Hi!
I want to set up the dashboard behind a reverse proxy. How do people
determine which ceph-mgr is active? Is there any simple and elegant
solution?
Cheers,
Tobias Florek
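One simple approach I'm aware of (field names taken from the MgrMap dump; please verify on your version):

ceph mgr dump | jq -r '.active_name'   # the MgrMap names the active instance
ceph mgr services                      # lists the URL the dashboard is currently served from

The reverse proxy could then be pointed (or health-checked) against whatever the active instance reports.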
On Fri, Jul 27, 2018 at 8:35 PM Scottix wrote:
>
> ceph tell mds.0 client ls
> 2018-07-27 12:32:40.344654 7fa5e27fc700 0 client.89408629 ms_handle_reset on
> 10.10.1.63:6800/1750774943
> Error EPERM: problem getting command descriptions from mds.0
You need "mds allow *" capabilities (the defaul
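For example (the client name is hypothetical, and note that 'ceph auth caps' replaces the entity's entire cap set, so restate the caps it already has):

ceph auth get client.myuser                                    # check the current caps first
ceph auth caps client.myuser mon 'allow r' mds 'allow *' osd 'allow rw pool=cephfs_data'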
Awww that makes more sense now. I guess I didn't quite comprehend EPERM at
the time.
Thank You,
Scott
On Mon, Jul 30, 2018 at 7:19 AM John Spray wrote:
> On Fri, Jul 27, 2018 at 8:35 PM Scottix wrote:
> >
> > ceph tell mds.0 client ls
> > 2018-07-27 12:32:40.344654 7fa5e27fc700 0 client.894086
10+1 is a bad idea for obvious reasons (not enough coding chunks, you will
be offline if even one server is offline).
The real problem is that your 20 KB files will be split up into 2 KB chunks,
and the metadata overhead and the BlueStore min alloc size will eat up your
disk space.
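As a rough illustration, assuming the default 64 KB bluestore_min_alloc_size_hdd:

20 KB object split across k=10  -> 2 KB per data chunk
(10 + 1) chunks * 64 KB alloc   -> 704 KB actually written, roughly 32x the ~22 KB of data + parity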
Paul
2018-07-30 13:
On Mon, Jul 30, 2018 at 3:55 PM Scottix wrote:
>
> Awww that makes more sense now. I guess I didn't quite comprehend EPERM at
> the time.
How could anyone misunderstand our super-clear and not-at-all-obscure
error messages? ;-)
Easy fix: https://github.com/ceph/ceph/pull/23330
John
>
> Thank
On 07/28/2018 03:59 PM, Wladimir Mutel wrote:
> Dear all,
>
> I want to share some experience of upgrading my experimental 1-host
> Ceph cluster from v13.2.0 to v13.2.1.
> First, I fetched new packages and installed them using 'apt
> dist-upgrade', which went smooth as usual.
> Then I no
On Sat, Jul 28, 2018 at 12:44 AM, Satish Patel wrote:
> I have a simple question: I want to use LVM with BlueStore (it's the
> recommended method). If I have only a single SSD disk for an OSD and want
> to keep the journal + data on the same disk, how should I create the LVM
> to accommodate that?
bluestore does
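FWIW, a hedged sketch of the ceph-volume route (the device name is an example):

# ceph-volume creates the VG/LV itself; with no separate --block.db/--block.wal
# given, the BlueStore DB and WAL simply live on the same device as the data
ceph-volume lvm create --bluestore --data /dev/sdb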
On Fri, Jul 27, 2018 at 1:28 AM, Fabian Grünbichler
wrote:
> On Tue, Jul 24, 2018 at 10:38:43AM -0400, Alfredo Deza wrote:
>> Hi all,
>>
>> After the 12.2.6 release went out, we've been thinking on better ways
>> to remove a version from our repositories to prevent users from
>> upgrading/installi
Thanks Alfredo,
This is what I am trying to do with ceph-ansible v3.1 and I am getting the
following error. Where am I wrong?
---
osd_objectstore: bluestore
osd_scenario: lvm
lvm_volumes:
- data: /dev/sdb
TASK [ceph-osd : include scenarios/lvm.yml]
***
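If I remember right (please double-check against the ceph-ansible stable-3.1 docs), the 3.1 lvm scenario expects pre-created LVs rather than a raw device, so something like this (the VG/LV names are examples):

osd_objectstore: bluestore
osd_scenario: lvm
lvm_volumes:
  - data: data-lv1      # pre-created logical volume
    data_vg: data-vg1   # volume group containing it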
Try using master?
Not sure really what 3.1 supports.
On Mon, Jul 30, 2018 at 2:03 PM, Satish Patel wrote:
> Thanks Alfredo,
>
> This is what i am trying to do with ceph-ansible v3.1 and getting
> following error, where i am wrong?
>
> ---
> osd_objectstore: bluestore
> osd_scenario: lvm
> lvm_vo
Doesn't 10+1 mean that one server can go offline without losing data and
functionality? We are quite short on hardware and need as much space as
possible... would 9+1 sound better, with one more extra node?
Yes, that is what I see in my test in regard to space. Can the min alloc size be
changed? Anto
Do you need to enable the option daemonperf?
[@c01 ~]# ceph daemonperf mds.a
Traceback (most recent call last):
  File "/usr/bin/ceph", line 1122, in
    retval = main()
  File "/usr/bin/ceph", line 822, in main
    done, ret = maybe_daemon_command(parsed_args, childargs)
  File "/usr/bin/ceph"
Changing the default 64k HDD min alloc size to 8k saved me 8 terabytes of disk
space on CephFS with 150 million small files. You will need to redeploy OSDs
for the change to take effect.
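For reference, that would be something like this in ceph.conf before (re)deploying the OSDs (the value is in bytes; existing OSDs keep the value they were created with):

[osd]
bluestore min alloc size hdd = 8192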
> On 30.07.2018, at 22:37, Anton Aleksandrov wrote:
>
> Yes, that is what i see in my test in regard to space. Can
On Mon, Jul 30, 2018 at 10:27 PM Marc Roos wrote:
>
>
> Do you need to enable the option daemonperf?
This looks strange, it's supposed to have sensible defaults -- what
version are you on?
John
> [@c01 ~]# ceph daemonperf mds.a
> Traceback (most recent call last):
> File "/usr/bin/ceph", line
Hi.
We have recently set up our first Ceph cluster (4 nodes), but our node failure
tests have revealed an intermittent problem. When we take down a node (i.e. by
powering it off), most of the time all clients reconnect to the cluster within
milliseconds, but occasionally it can take them 30 second
Hi all,
I want a non-admin client to be able to run `ceph fs status`, either via the
ceph CLI or a python script. Adding `mgr "allow *"` to this client's cephx caps
works, but I'd like to be more specific if possible. I can't find the complete
list of mgr cephx caps anywhere, so if you could p
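In case it helps while waiting for the definitive list, a hedged sketch of something narrower to try (the client name is hypothetical, and I have not verified that read-only mgr access is enough for 'fs status'):

# 'ceph auth caps' replaces all of the entity's caps, so restate any mon/osd/mds caps it already has
ceph auth caps client.status mon 'allow r' mgr 'allow r'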