From strace:
open("/sys/bus/rbd/add", O_WRONLY) = 3
write(3, "10.66.104.64:6789,10.66.104.65:6789,10.66.104.66:6789
name=ff-ceph01-export,key=client.ff-ceph01-export
hosting_linux_sharedweb linux_ded_backup_01", 147) = -1 ERANGE
(Numerical result out o
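For comparison, the same mapping done through the rbd CLI (which writes to that same sysfs interface) would look roughly like this. This is just a sketch reusing the pool, image and client names visible in the strace above; the keyring path is an assumption:
rbd map linux_ded_backup_01 --pool hosting_linux_sharedweb --id ff-ceph01-export --keyring /etc/ceph/ceph.client.ff-ceph01-export.keyring
rbd showmapped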
Thanks for the reply.
Ceph is version 0.73-1precise, and the kernel release is
3.11.9-031109-generic.
Also, rbd showmapped shows 16 lines of output.
Thanks again. Tom.
On 01/04/14 15:49, Ilya Dryomov wrote:
On Tue, Apr 1, 2014 at 6:31 PM, Tom wrote:
Hi,
I'm seeing this error tryi
Hi again Ilya,
No, no snapshots in this case. It's a brand new RBD that I've created.
Cheers. Tom.
On 01/04/14 16:08, Ilya Dryomov wrote:
On Tue, Apr 1, 2014 at 6:55 PM, Tom wrote:
Thanks for the reply.
Ceph is version 0.73-1precise, and the kernel release is
3.11.9-031109-generic.
me, and then afterwards with a different name. The pool
just seems to be broken on this box?
Please let me know what information would be required to look at this
further.
Many thanks. Tom.
On 02/04/14 09:29, Ilya Dryomov wrote:
On Wed, Apr 2, 2014 at 11:28 AM, Tom wrote:
Hi again Ilya,
Is it possible to run an erasure coded pool using default k=2, m=2 profile on a
single node?
(this is just for functionality testing). The single node has 3 OSDs.
Replicated pools run fine.
ceph.conf does contain:
osd crush chooseleaf type = 0
-- Tom Deneau
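A minimal sketch of the commands involved, for reference. Note that with only 3 OSDs a profile needing k+m=4 chunks cannot place all its chunks even with an OSD-level failure domain, so k=2, m=1 is used here purely for illustration; on older releases the key is spelled ruleset-failure-domain rather than crush-failure-domain:
ceph osd erasure-code-profile set testprofile k=2 m=1 crush-failure-domain=osd
ceph osd pool create ecpool 64 64 erasure testprofile
ceph osd erasure-code-profile get testprofile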
I need to set up a cluster where the rados client (for running rados
bench) may be on a different architecture and hence running a different
ceph version from the osd/mon nodes. Is there a list of which ceph
versions work together for a situation like this?
-- Tom
Yehuda Sadeh-Weinraub writes:
>
>
> - Original Message -
> > From: "Gregory Farnum"
> > To: "Tom Deneau"
> > Cc: ceph-users@...
> > Sent: Wednesday, February 25, 2015 3:20:07 PM
> > Subject: Re: [ceph-users] mixed ceph ver
Robert --
We are still having trouble with this.
Can you share your [client.radosgw.gateway] section of ceph.conf and
were there any other special things to be aware of?
-- Tom
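For reference, a fairly typical civetweb-era section looks something like the following; the host name, keyring path, port and log path are placeholders rather than known-good values for this cluster:
[client.radosgw.gateway]
host = gw-host
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw frontends = civetweb port=7480
log file = /var/log/ceph/client.radosgw.gateway.log
rgw print continue = false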
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On
what I am doing wrong here?
-- Tom Deneau, AMD
read/write benchmarking?
-- Tom
Are the slides or videos from ceph days presentations made available
somewhere? I noticed some links in the Frankfurt Ceph day, but not for the
other Ceph Days.
-- Tom
comparison of the available erasure code plugins
that allows me to properly decide which one suits our needs best?
Many thanks for your help!
Tom
Great info! Many thanks!
Tom
2015-03-25 13:30 GMT+01:00 Loic Dachary :
> Hi Tom,
>
> On 25/03/2015 11:31, Tom Verdaat wrote:
> > Hi guys,
> >
> > We've got a very small Ceph cluster (3 hosts, 5 OSD's each for cold
> data) that we intend to grow later on as mo
to get the latest cluster map
for each operation?
-- Tom Deneau, AMD
n seq -t 1
What bandwidth should I expect for the rados bench seq command here?
(I am seeing approximately 70 MB/s with -t 1)
-- Tom Deneau
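For anyone wanting to reproduce the numbers, the sequence assumed here is along these lines (pool name and duration are placeholders):
rados bench -p testpool 60 write -t 1 --no-cleanup   # keep the objects around for the read pass
rados bench -p testpool 60 seq -t 1                  # single outstanding op, as above
rados -p testpool cleanup                            # remove the benchmark objects afterwards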
clean osd drives but
I would also like to be able to restart and leave the osd drives as is.
-- Tom
> Hi,
> I have faced a similar issue. This happens if the ceph disks aren't
> purged/cleaned completely. Clear out the contents of the /dev/sdb1 device.
> There is a file named ceph
_size of 32M works fine and the cluster seems otherwise fine.
Seems related to this issue
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-March/028288.html
But I didn't see a resolution for that.
Is there a timeout that is kicking in?
--
Ah, I see there is an osd parameter for this
osd max write size
Description: The maximum size of a write in megabytes.
Default: 90
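If that is indeed the limit being hit, it can be raised either in ceph.conf or at runtime; a sketch with an arbitrary 256 MB value:
# ceph.conf, [osd] section
osd max write size = 256
# or at runtime, without restarting the OSDs
ceph tell osd.* injectargs '--osd-max-write-size 256'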
> -Original Message-
> From: Deneau, Tom
> Sent: Wednesday, April 08, 2015 3:57 PM
> To: 'ceph-users@lists.ceph.com
If my cluster is quiet and on one node I want to switch the location of the
journal from
the default location to a file on an SSD drive (or vice versa), what is the
quickest way to do that? Can I make a soft link to the new location and
do it without restarting the OSDs?
-- Tom Deneau, AMD
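One sequence I have seen described for this (a sketch, and it does involve restarting the OSD; N is the osd id):
ceph osd set noout                          # avoid rebalancing while the OSD is down
systemctl stop ceph-osd@N                   # or: stop ceph-osd id=N on upstart systems
ceph-osd -i N --flush-journal               # drain the old journal
ln -sf /ssd/ceph-osd-N-journal /var/lib/ceph/osd/ceph-N/journal
ceph-osd -i N --mkjournal                   # initialize the journal at the new location
systemctl start ceph-osd@N
ceph osd unset noout
Simply re-pointing the symlink without flushing first would risk losing journaled writes, so the restart seems hard to avoid.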
e if it is related but I do know that in the past I had created a
single partition on /dev/sdc and used that as an xfs data partition.
-- Tom Deneau, AMD
what is the correct way to make radosgw create its pools as erasure coded pools?
-- Tom Deneau, AMD
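The approach I am aware of is to create the bucket data pool as erasure coded before radosgw first touches it, then let the gateway create the remaining (replicated) pools itself. A sketch; the pool name below matches the pre-jewel default, while jewel and later use default.rgw.buckets.data, and the index/metadata pools generally still need to be replicated:
ceph osd erasure-code-profile set rgw-ec k=2 m=1 crush-failure-domain=host
ceph osd pool create .rgw.buckets 128 128 erasure rgw-ec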
I have a cluster where over the weekend something happened and successive calls
to ceph health detail show things like below.
What does it mean when the number of blocked requests goes up and down like
this?
Some clients are still running successfully.
-- Tom Deneau, AMD
HEALTH_WARN 20
I don't think there were any stale or unclean PGs, (when there are,
I have seen "health detail" list them and it did not in this case).
I have since restarted the 2 osds and the health went immediately to HEALTH_OK.
-- Tom
> -Original Message-
> From: Will.
re running 9.0.1?
-- Tom Deneau
False alarm, things seem to be fine now.
-- Tom
> -Original Message-
> From: Deneau, Tom
> Sent: Wednesday, July 15, 2015 1:11 PM
> To: ceph-users@lists.ceph.com
> Subject: Any workaround for ImportError: No module named ceph_argparse?
>
> I just installed 9.0.2
network.
But when I run both rados load-gen and rados bench as described, I see that
rados bench gets
about twice the throughput of rados load-gen. Why would that be?
I see there is a --max-backlog parameter, is there some setting of that
parameter
that would help the throughput?
-- Tom Deneau
Ah, I see that --max-backlog must be expressed in bytes/sec,
in spite of what the --help message says.
-- Tom
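For the archives, the load-gen invocation being compared against rados bench looks roughly like this; the values are illustrative, with --max-backlog given in bytes/sec as noted above:
rados -p testpool load-gen --read-percent 0 --max-backlog 209715200 --run-length 60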
> -Original Message-
> From: Deneau, Tom
> Sent: Wednesday, July 22, 2015 5:09 PM
> To: 'ceph-users@lists.ceph.com'
> Subject: load-gen throughput nu
reports like
benchmark_data_myhost_20729_object73 is not correct!
I never saw these with similar rpm builds on these platforms from 9.0.2 sources.
Also, if I go to an x86-64 system running Ubuntu trusty for which I am able to
install prebuilt binary packages via
ceph-de
> -Original Message-
> From: Sage Weil [mailto:sw...@redhat.com]
> Sent: Tuesday, August 25, 2015 12:43 PM
> To: Deneau, Tom
> Cc: ceph-de...@vger.kernel.org; ceph-us...@ceph.com;
> piotr.da...@ts.fujitsu.com
> Subject: Re: rados bench object not correct errors on v
> -Original Message-
> From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
> ow...@vger.kernel.org] On Behalf Of Deneau, Tom
> Sent: Tuesday, August 25, 2015 1:24 PM
> To: Sage Weil
> Cc: ceph-de...@vger.kernel.org; ceph-us...@ceph.com;
> piotr.da...@ts.fuji
> -Original Message-
> From: Dałek, Piotr [mailto:piotr.da...@ts.fujitsu.com]
> Sent: Wednesday, August 26, 2015 2:02 AM
> To: Sage Weil; Deneau, Tom
> Cc: ceph-de...@vger.kernel.org; ceph-us...@ceph.com
> Subject: RE: rados bench object not correct errors on v9.0.3
&g
l there in .rgw.buckets pool. When do they get
removed?
-- Tom Deneau, AMD
I see that the objects that were deleted last Friday are indeed gone now (via
gc I guess).
gc list does not show anything even right after objects are deleted.
I couldn't get temp remove to do anything.
-- Tom
> -Original Message-
> From: Ben Hines [mailto:bhi..
20
> cluster addr = 10.20.4.120
> [osd.1]
> host = storage-02
> public addr = 10.20.3.121
> cluster addr = 10.20.4.121
> [osd.2]
> host = storage-03
> public addr = 10.20.3.122
> cluster addr = 10.20.4.122
A quick Google search on that port binding er
That was it!
Sorry the 10.20.4.x NICs weren't configured correctly on those two nodes.
I'll admit this one was definitely my mistake.
Thanks for pointing it out.
Tom
2013/7/9 Gregory Farnum
> On Tue, Jul 9, 2013 at 3:08 AM, Tom Verdaat wrote:
> > Hi all,
> >
&
the best performance for our use case?
3. Is anybody doing this already and willing to share their experience?
4. Is there anything important that you think we might have missed?
Your help is very much appreciated!
Thanks!
Tom
an ephemeral disk and copying it to its local
/var/lib/nova/instances
directory. If you want to be able to do live migrations and such you need
to mount a cluster filesystem at that path on every host machine.
And that's what my questions were about!
Tom
2013/7/12 McNamara, Bradley
>
lly is not really
feasible yet.
So the alternative is to mount a shared filesystem
on /var/lib/nova/instances of every compute node. Hence the RBD +
OCFS2/GFS2 question.
Tom
p.s. yes I've read the
rbd-openstack<http://ceph.com/docs/master/rbd/rbd-openstack/> page
which covers images a
on based on the info I've gathered so
far though.
Tom
Darryl Bond wrote on Fri 12-07-2013 at 10:04 [+1000]:
> Tom,
> I'm no expert as I didn't set it up, but we are using Openstack
> Grizzly with KVM/QEMU and RBD volumes for VM's.
> We boot the VMs from the RBD volu
more bandwidth than with
configuration 2.
In general, any configuration where the osds span 2 nodes gets poorer
performance but in particular
when the 2 nodes have equal amounts of traffic.
Is there any ceph parameter that might be throttling the cases where osds span
2 nodes?
-- Tom Deneau
ceph parameter that might be throttling the
2 node configuration?
-- Tom
> -Original Message-
> From: Christian Balzer [mailto:ch...@gol.com]
> Sent: Wednesday, September 02, 2015 7:29 PM
> To: ceph-users
> Cc: Deneau, Tom
> Subject: Re: [ceph-users] osds on 2 nodes vs. on
performance to the two-node arrangement.
So now my question is: Is it expected that there would be such
a large performance difference between using osds on a single node
where ceph-mon is running vs. using osds on a single node where
ceph-mon is not running?
-- Tom
> -Original Message-
>
dos
bench write,
there is often activity on the journal partitions which must be a carry over
from the rados
bench write.
What is the preferred way to ensure that all write activity is finished before
starting
to use rados bench seq?
-- Tom Deneau
gnose what is throttling things in the one-instance
case?
-- Tom Deneau, AMD
> -Original Message-
> From: Gregory Farnum [mailto:gfar...@redhat.com]
> Sent: Monday, September 14, 2015 5:32 PM
> To: Deneau, Tom
> Cc: ceph-users
> Subject: Re: [ceph-users] rados bench seq throttling
>
> On Thu, Sep 10, 2015 at 1:02 PM, Deneau, Tom wrote:
x for creating such an erasure pool?
-- Tom Deneau
leset 6 object_hash
rjenkins pg_num 256 pgp_num 256 last_change 165 flags hashpspool stripe_width
4128
with 15 up osds.
and ceph health tells me I have too many PGs per OSD (375 > 300)
I'm not sure where the 375 comes from, since there are 896 pg
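If I understand the check correctly (an assumption on my part), the warning counts PG replicas rather than PGs: per-OSD value = sum over pools of (pg_num x pool size, where size is the replica count or k+m for EC) / number of OSDs. With 15 OSDs, 375 per OSD implies about 375 x 15 = 5625 PG instances cluster-wide, so the 896 PGs would need to average roughly 6 placements each to produce that number.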
4292 0'0 227:9 [4,11,5,4,0] 4
18.7e 0 0 0 0 0 0 0 0 active+clean 2015-11-09 09:28:42.569645 0'0 227:9 [5,0,12,5,0] 5
18.7f 0 0 0 0 0 0 0 0 active+clean 2015-11-09 09:28:41.897589 0'0 227:9 [2,12,6,2,12] 2
How should s
pools manually before calling ceph-deploy rgw create?
Is there still a restriction on which gw pools can be ec? (I am running 9.1.0)
-- Tom Deneau, AMD
We recently upgraded to 0.94.3 from firefly and now for the last week have
had intermittent slow requests and flapping OSDs. We have been unable to
nail down the cause, but its feeling like it may be related to our osdmaps
not getting deleted properly. Most of our osds are now storing over 100GB
many OSDs,
and most of them have occurred in the middle of the night during our peak
load times.
On Mon, Nov 30, 2015 at 1:41 PM, Wido den Hollander wrote:
> On 11/30/2015 08:56 PM, Tom Christensen wrote:
> > We recently upgraded to 0.94.3 from firefly and now for the last week
> &
ough to see what the osd is doing, maybe you need debug_filestore=10
> also. If that doesn't show the problem, bump those up to 20.
>
> Good luck,
>
> Dan
>
> On 30 Nov 2015 20:56, "Tom Christensen" wrote:
> >
> > We recently upgraded to 0.94.3 from
this be the cause
of our growing osdmaps?
-Tom
On Tue, Dec 1, 2015 at 2:35 AM, HEWLETT, Paul (Paul) <
paul.hewl...@alcatel-lucent.com> wrote:
> I believe that ‘filestore xattr use omap’ is no longer used in Ceph – can
> anybody confirm this?
> I could not find any usage in the Cep
this new message indicate? Can it be disabled
or turned off? so that librbd sessions don't cause a new osdmap to be
generated?
In ceph -w output, whenever we see those entries, we immediately see a new
osdmap, hence my suspicion that this message is causing a new osdmap to be
generated.
On
Farnum wrote:
> On Tue, Dec 1, 2015 at 10:02 AM, Tom Christensen wrote:
> > Another thing that we don't quite grasp is that when we see slow requests
> > now they almost always, probably 95% have the "known_if_redirected" state
> > set. What does this state mean
We have been seeing this same behavior on a cluster that has been perfectly
happy until we upgraded to the ubuntu vivid 3.19 kernel. We are in the
process of "upgrading" back to the 3.16 kernel across our cluster as we've
not seen this behavior on that kernel for over 6 months and we're pretty
str
30 AM, Benedikt Fraunhofer
wrote:
> Hi Tom,
>
> > We have been seeing this same behavior on a cluster that has been
> perfectly
> > happy until we upgraded to the ubuntu vivid 3.19 kernel. We are in the
>
> i can't recall when we gave 3.19 a shot but now that you say i
We aren't running NFS, but regularly use the kernel driver to map RBDs and
mount filesystems in same. We see very similar behavior across nearly all
kernel versions we've tried. In my experience only very few versions of
the kernel driver survive any sort of crush map change/update while
somethin
We run deep scrubs via cron with a script so we know when deep scrubs are
happening, and we've seen nodes fail both during deep scrubbing and while
no deep scrubs are occurring so I'm pretty sure its not related.
On Tue, Dec 8, 2015 at 2:42 AM, Benedikt Fraunhofer
wrote:
> Hi Tom
ng to run an early 4.5 version in our test environment.
On Tue, Dec 8, 2015 at 3:35 AM, Ilya Dryomov wrote:
> On Tue, Dec 8, 2015 at 10:57 AM, Tom Christensen wrote:
> > We aren't running NFS, but regularly use the kernel driver to map RBDs
> and
> > mount filesystems in sa
To be clear, we are also using format 2 RBDs, so we didn't really expect it
to work until recently as it was listed as unsupported. We are under the
understanding that as of 3.19 RBD format 2 should be supported. Are we
incorrect in that understanding?
On Tue, Dec 8, 2015 at 3:44 AM
What kernel versions are required to be able to use CephFS thru mount -t ceph?
-- Tom Deneau
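For reference, the mount itself is along these lines; the monitor address, mount point and secret file are placeholders, and the kernel needs the ceph module (CONFIG_CEPH_FS) for it to work at all:
mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret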
I was using SLES 12 SP1, which has kernel 3.12.49.
It did have a /usr/sbin/mount.ceph command but using it gave
modprobe: FATAL: Module ceph not found.
failed to load ceph kernel module (1)
-- Tom
> -Original Message-
> From: Gregory Farnum [mailto:gfar...@redhat.com]
> Sent:
t controls whether these pools have the "default" prefix or not?
-- Tom Deneau, AMD
Ah that makes sense. The places where it was not adding the "default"
prefix were all pre-jewel.
-- Tom
> -Original Message-
> From: Yehuda Sadeh-Weinraub [mailto:yeh...@redhat.com]
> Sent: Friday, June 10, 2016 2:36 PM
> To: Deneau, Tom
> Cc: ceph-users
&
there something else I have to do?
-- Tom Deneau, AMD
wanted to be able to flush and
invalidate the rbd cache as an experiment.
I would have thought the socket would get created and stay there as
long as kvm is active (since kvm is using librbd). But even when I
access the rbd disk from the VM, I don't see any socket created at all.
-
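What I would try (an assumption about the setup rather than a verified recipe) is to point librbd explicitly at an admin socket path that the qemu process can write to, in the [client] section that qemu's ceph.conf actually reads, and then drive the cache through that socket:
[client]
rbd cache = true
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
# once the socket appears:
ceph --admin-daemon /var/run/ceph/ceph-client.admin.<pid>.<cctid>.asok rbd cache flush
ceph --admin-daemon /var/run/ceph/ceph-client.admin.<pid>.<cctid>.asok rbd cache invalidate
If no socket shows up at all, the usual suspects are an AppArmor/SELinux denial on /var/run/ceph or qemu reading a different ceph.conf than expected.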
el,
You may find our changes to the devstack ceph plugin here [1] for
systemd vs upstart vs sysvinit control of ceph services helpful. We
tested against xenial and fedora24 for the systemd paths.
Cheers,
-- Tom
[1] https://review.openstack.org/#/c/332484/
>
> thanks,
> Manuel
>
ching also optimize the reads?
Do I need two SSDs per node
Kind regards,
Tom
with one SSD recommended or should I always have two SSDs
in replicated mode?
Kind regards,
Tom
On Mon, Aug 1, 2016 at 2:00 PM, Christian Balzer wrote:
>
> Hello,
>
> On Mon, 1 Aug 2016 11:09:00 +0200 Tom T wrote:
>
> > Hi Ceph users
> >
> > We are planning t
ching ?
Kind regards,
Tom
On Mon, Aug 1, 2016 at 2:46 PM, Christian Balzer wrote:
>
> Hello,
>
> On Mon, 1 Aug 2016 14:34:43 +0200 Tom T wrote:
>
> > Hi Christian,
> >
> > Thnx for your reply.
> >
> > Case:
> > CSE-825TQC-600LPB
> >
> &g
view it thru the /dev/rbd0 mount because on one of my systems,
the VM is not booting from the image.
-- Tom
time 2016-09-09 19:11:01.660671
common/HeartbeatMap.cc: 86: FAILED assert(0 == "hit suicide timeout")
-- Tom Deneau, AMD
Hi Bryan,
What version of Ceph are you currently running on, and do you run any erasure
coded pools or bluestore OSDs? Might be worth having a quick glance over the
recent changelogs:
http://docs.ceph.com/docs/master/releases/luminous/
Tom
From: ceph-users
link?
Just spitballing some ideas here until somebody more qualified may have an idea.
From: Bryan Banister
Sent: 17 July 2018 19:18:15
To: Bryan Banister; Tom W; ceph-users@lists.ceph.com
Subject: RE: Cluster in bad shape, seemingly endless cycle of OSDs failed
already done the usual tests to ensure it is traversing the
right interface, correct VLANs, reachable via ICMP, perhaps even run an iperf
and tcpdump to be certain the flow is as expected.
Tom
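For example, something along these lines; interface names and addresses are placeholders, 6789 is the monitor port and 6800-7300 the usual OSD port range:
iperf3 -s                                        # on one storage node
iperf3 -c <cluster-network-ip-of-that-node>      # from a peer, across the cluster network
tcpdump -ni <cluster-iface> portrange 6800-7300 or port 6789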
From: Bryan Banister
Sent: 17 July 2018 22:03
To: Tom W ; ceph-users@lists.ceph.com
Subject: RE: Cluster
raverse the public and cluster networks successfully?
Tom
From: Bryan Banister
Sent: 17 July 2018 22:36
To: Tom W ; ceph-users@lists.ceph.com
Subject: RE: Cluster in bad shape, seemingly endless cycle of OSDs failed, then
marked down, then booted, then failed again
Hi Tom,
I tried to check out the
going on. It might be best to limit
the impact of these from going 0 to 100 with these parameters (1 backfill at a
time, wait 0.1 seconds between recovery op per OSD).
ceph tell osd.* injectargs '--osd-max-backfills 1'
ceph tell osd.* injectargs '--osd-recovery-sleep 0.1'
Tom
Hi Bryan,
Try both of the commands that are timing out again, but with the -verbose flag, and see if we
can get anything from that.
Tom
From: Bryan Banister
Sent: 17 July 2018 23:51
To: Tom W ; ceph-users@lists.ceph.com
Subject: RE: Cluster in bad shape, seemingly endless cycle of OSDs failed, then
marked
Test for Leo, please ignore.
trigger a delete request through the API and this seems to be working in
lieu of a cleaner solution.
We will be upgrading to Luminous in the coming week, I’ll report back if we see
any significant change in this issue when we do.
Kind Regards,
Tom
From: ceph-users On Behalf Of Sean Redmond
Sent
update?
We have disabled resharding activity due to this issue,
https://tracker.ceph.com/issues/24551 and our gc queue is only a few items at
present.
Kind Regards,
Tom
it is worth doing the logging over our management
network (this is a simple layer 2 network on 100Mbit links per host), or should
we perhaps be looking to do this over the public network (40G in our case)
instead?
Kind Regards,
Tom
If using s3cmd to radosgw and using s3cmd's --disable-multipart option, is
there any limit to the size of the object that can be stored thru radosgw?
Also, is there a recommendation for multipart chunk size for radosgw?
-- Tom
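For what it's worth, the multipart chunk size is controlled on the client side; a sketch, where the 128 MB value is arbitrary (s3cmd's own default is 15 MB if memory serves):
# single-part upload, subject to whatever single-PUT object size limit the gateway enforces
s3cmd put --disable-multipart bigfile s3://mybucket/
# multipart upload with an explicit chunk size
s3cmd put --multipart-chunk-size-mb=128 bigfile s3://mybucket/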
I've just checked 1072 and 872, they both look the same, a single op for
the object in question, in retry+read state, appears to be retrying forever.
On Thu, Dec 17, 2015 at 10:05 AM, Tom Christensen wrote:
> I had already nuked the previous hang, but we have another one:
>
&g
--image-shared.
Is there something different I need to do with the 3.13 kernel?
-- Tom
# rbd create --size 1000 --image-format 1 rbd/rbddemo
# rbd info rbddemo
rbd image 'rbddemo':
        size 1000 MB in 250 objects
        order 22 (4096 kB objects)
        block_name_prefix: rb.0.4f0
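If the goal is a format 2 image that a 3.13-era krbd can still map, the usual trick is to restrict the image to the layering feature only. A sketch, assuming an rbd CLI new enough to accept --image-feature by name (older ones take a numeric --image-features bitmask, where layering alone is 1):
rbd create --size 1000 --image-format 2 --image-feature layering rbd/rbddemo2
rbd map rbd/rbddemo2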
brbd-dev on the
client).
-- Tom
> -Original Message-
> From: Ilya Dryomov [mailto:idryo...@gmail.com]
> Sent: Friday, January 29, 2016 4:53 PM
> To: Deneau, Tom
> Cc: ceph-users; c...@lists.ceph.com
> Subject: Re: [ceph-users] rbd kernel mapping on 3.13
>
>
> > Next this:
> > ---
> > 2016-02-12 01:35:33.915981 7f75be4d57c0 0 osd.2 1788 load_pgs
> > 2016-02-12 01:36:32.989709 7f75be4d57c0 0 osd.2 1788 load_pgs opened 564 pgs
> > ---
> Another minute to load the PGs.
> Same OSD reboot as above: 8 seconds for this.
Do you really have 564 pgs on a si
dless of the number
of pgs/osd. Meaning it started out bad, and stayed bad but didn't get
worse as we added osds. We've had to reweight osds in our crushmap to get
anything close to a sane distribution of pgs.
-Tom
On Sat, Feb 13, 2016 at 10:57 PM, Christian Balzer wrote:
> On S
We've seen this as well as early as 0.94.3 and have a bug,
http://tracker.ceph.com/issues/13990 which we're working through
currently. Nothing fixed yet, still trying to nail down exactly why the
osd maps aren't being trimmed as they should.
On Thu, Feb 25, 2016 at 10:16 AM, Stillwell, Bryan <
b
If you are mapping the RBD with the kernel driver then you're not using
librbd so these settings will have no effect I believe. The kernel driver
does its own caching but I don't believe there are any settings to change
its default behavior.
On Mon, Feb 29, 2016 at 9:36 PM, Shinobu Kinjo wrote:
Just checking...
On vanilla RHEL 7.2 (x64), should I be able to yum install ceph without adding
the EPEL repository?
(looks like the version being installed is 0.94.6)
-- Tom Deneau, AMD
Yes, that is what lsb_release is showing...
> -Original Message-
> From: Shinobu Kinjo [mailto:shinobu...@gmail.com]
> Sent: Tuesday, March 08, 2016 5:01 PM
> To: Deneau, Tom
> Cc: ceph-users
> Subject: Re: [ceph-users] yum install ceph on RHEL 7.2
>
> On Wed
> -Original Message-
> From: Ken Dreyer [mailto:kdre...@redhat.com]
> Sent: Tuesday, March 08, 2016 10:24 PM
> To: Shinobu Kinjo
> Cc: Deneau, Tom; ceph-users
> Subject: Re: [ceph-users] yum install ceph on RHEL 7.2
>
> On Tue, Mar 8, 2016 at 4:11 PM, Shinobu Ki
shots
via a script that calls cephfs-snap directly rather than using Manila
-- and of course that's fine -- but if you'd share them it will help
us Manila developers consider whether there are use cases that we are
not currently addressing that we s
On 21/03/19 16:15 +0100, Dan van der Ster wrote:
On Thu, Mar 21, 2019 at 1:50 PM Tom Barron wrote:
On 20/03/19 16:33 +0100, Dan van der Ster wrote:
>Hi all,
>
>We're currently upgrading our cephfs (managed by OpenStack Manila)
>clusters to Mimic, and want to start enabling
's very little data in a share then the snapshot will
indeed also be small. But at least one can limit the number of
snapshots that are produced by a user and users don't have the ability
to target less than their whole share.
-- Tom
-- dan
On Fri, Mar 22, 2019 at 3:42 PM Paul Emme