This debugging started because the ceph-provisioner from k8s was creating
those users... but we found that doing something similar by hand caused
the same issue. I'm just surprised no one else using k8s with ceph-backed
PVC/PVs has run into this.
Thanks again for all your help!
Cheers
Aaron
No worries, can definitely do that.
Cheers
Aaron
On Thu, Jan 16, 2020 at 8:08 PM Jeff Layton wrote:
> On Thu, 2020-01-16 at 18:42 -0500, Jeff Layton wrote:
> > On Wed, 2020-01-15 at 08:05 -0500, Aaron wrote:
> > > Seeing a weird mount issue. Some info:
> > >
mount
cephfs... but what I don't understand is why I'm getting that -34 (ERANGE)
error with the 14.2.5 and 14.2.6 libs installed. I didn't have this
issue with 14.2.3 or 14.2.4.
Cheers,
Aaron
Correct, it was pre-jewel. I believe we toyed with multisite replication back
then so it may have gotten baked into the zonegroup inadvertently. Thanks for
the info!
> On Jun 12, 2019, at 11:08 AM, Casey Bodley wrote:
>
> Hi Aaron,
>
> The data_log objects are storing log
rm": "read"
}
],
With these caps I'm able to use a python radosgw-admin lib to list buckets,
ACLs, and users, but not keys. This user is also unable to read buckets and/or
keys through the normal S3 API. Is there a way to create an S3 user that has
read
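For reference, a minimal sketch of granting read-only admin caps (the uid is
hypothetical; standard radosgw-admin CLI syntax):
radosgw-admin caps add --uid=audituser --caps="users=read;buckets=read;metadata=read;usage=read"
Note that admin caps like these gate the admin REST API only; by themselves
they don't grant S3-level read access to other users' buckets.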
Ah nevermind, I found ceph mon set addrs and I'm good to go.
Aaron
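For anyone hitting the same thing, a sketch of the Nautilus-era command (mon
name and IP are placeholders):
ceph mon set-addrs a [v2:10.0.0.1:3300,v1:10.0.0.1:6789]
After that the mon should advertise both the v2 (3300) and legacy v1 (6789)
ports.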
> On Apr 24, 2019, at 4:36 PM, Aaron Bassett
> wrote:
>
> Yea, ok, that's what I guessed. I'm struggling to get my mons to listen on both
> ports. On startup they report:
>
> 2019-04-24 19:58
Do I have to jump through the add/remove mons hoops, or just
burn it down and start over? FWIW the docs seem to indicate they'll listen on
both by default (in Nautilus).
Aaron
> On Apr 24, 2019, at 4:29 PM, Jason Dillaman wrote:
>
> AFAIK, the kernel clients for CephFS and RBD do not s
figure out how to get my mons
listening to both?
Thanks,
Aaron
ard to get it to start lagging.
Thanks, Aaron
> On Apr 12, 2019, at 11:16 AM, Matt Benjamin wrote:
>
> Hi Aaron,
>
> I don't think that exists currently.
>
> Matt
>
> On Fri, Apr 12, 2019 at 11:12 AM Aaron Bassett
> wrote:
>>
>> I have an rad
rgw ops log data backlog =
Any backlogged data in excess of the specified size will be lost, so the socket
needs to be read constantly.
I'm wondering if there's a way I can query radosgw for the current size of that
backlog, to help me narrow down where the bottleneck may be occurring.
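In the meantime, a crude way to keep the backlog drained is to keep a reader
permanently attached to the socket (socket path as configured via rgw ops log
socket path, e.g. /tmp/rgw; the log destination here is arbitrary):
nc -U /tmp/rgw >> /var/log/rgw-ops.log &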
Thanks,
Aar
of this to Jewel and/or luminous?
Aaron
On Mar 12, 2018, at 5:50 PM, Aaron Bassett <aaron.bass...@nantomics.com> wrote:
Quick update:
adding the following to your config:
rgw log http headers = "http_authorization"
rgw ops log socket path = /tmp/rgw
rgw enable ops log = true
rgw enable usage log = true
and you can now
nc -U /tmp/rgw |./jq --stream 'fromstream(1|truncate_stream(inputs))'
{
"time": "2018-03-12
I also see an option for writing the ops log to a socket instead of the
bucket it normally writes to. Seems like a good place for me to snag the info I
need and transform and log it in an audit log. I'm going to investigate this
and see what turns up.
Aaron
On Mar 9, 2018, at 5:12 PM, Davi
to correlate the key id with
the rest of the request info.
Aaron
On Mar 8, 2018, at 8:18 PM, Matt Benjamin <mbenj...@redhat.com> wrote:
Hi Yehuda,
I did add support for logging arbitrary headers, but not a
configurable log record a-la webservers. To level set, David, are you
spe
Yea, that's what I was afraid of. I'm looking at possibly patching to add it, but
I really don't want to support my own builds. I suppose other alternatives are
to use proxies to log stuff, but that makes me sad.
Aaron
On Mar 8, 2018, at 12:36 PM, David Turner <drakonst...@gmai
of the request, which contains the
access key id which we can tie back into the systems we use to issue
credentials. Any thoughts?
Thanks,
Aaron
I'm not sure if bringing
everything up with noout or norecover on confused things. Looking for advice...
Aaron
I issued the pg deep scrub command ~24 hours ago and nothing has changed. I see
nothing in the active osd's log about kicking off the scrub.
On Jul 13, 2017, at 2:24 PM, David Turner <drakonst...@gmail.com> wrote:
# ceph pg deep-scrub 22.1611
On Thu, Jul 13, 2017 at 1:
) log [INF] :
21.1ae9 deep-scrub ok
each time I run it, it's the same pg.
Is there some reason it's not scrubbing all the pgs?
Aaron
> On Jul 13, 2017, at 10:29 AM, Aaron Bassett
> wrote:
>
> Ok good to hear, I just kicked one off on the acting primary so I guess I'll
>
Ok good to hear, I just kicked one off on the acting primary so I guess I'll be
patient now...
Thanks,
Aaron
> On Jul 13, 2017, at 10:28 AM, Dan van der Ster wrote:
>
> On Thu, Jul 13, 2017 at 4:23 PM, Aaron Bassett
> wrote:
>> Because it was a read error I check SMA
more problems I stopped the daemon to let ceph recover from the other osds. The
cluster has now finished rebalancing, but remains in ERR state as it still
thinks this pg is inconsistent.
ceph pg query output is here: https://hastebin.com/mamesokexa.cpp
Thanks,
Aaron
Yup, already working on fixing the client, but it seems like a potentially nasty
issue for RGW, as a malicious client could potentially DoS an endpoint pretty
easily this way.
Aaron
> On Jul 12, 2017, at 11:48 AM, Jens Rosenboom wrote:
>
> 2017-07-12 15:23 GMT+00:00 Aaron Bassett :
an up-to-date Jewel cluster, using civetweb for the
web server.
I just wanted to reach out and see if anyone else has seen this before I dig in
more and try to find more details about where the problem may lie.
Aaron
.6742.1492634493.backtrace.txt
https://aarontc.com/ceph/dumps/core.ceph-osd.150.082e9ca887c34cfbab183366a214a84c.7202.1492634508.backtrace.txt
Hope that helps!
-Aaron
On Thu, May 4, 2017 at 2:25 PM, Sage Weil wrote:
> Hi Aaron-
>
> Sorry, lost track of this one. In ord
Were the backtraces we obtained not useful? Is there anything else we
can try to get the OSDs up again?
On Wed, Apr 19, 2017 at 4:18 PM, Aaron Ten Clay wrote:
> I'm new to doing this all via systemd and systemd-coredump, but I appear to
> have gotten cores from two OSD processes. W
an EMC solution, it's just muuuch cheaper and
more fun to operate!
Aaron
On Apr 24, 2017, at 12:59 AM, Richard Hesse <richard.he...@weebly.com> wrote:
It's not a requirement to build out homogeneous racks of ceph gear. Most larger
places don't do that (it creates weird
Once we gain confidence in ceph to expand beyond a couple thousand osds in a
cluster, I will certainly look to simplify by cutting down to one
higher-throughput ToR per rack.
The logical public/private separation is to keep the traffic on a separate
network and for ease of monitoring.
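Concretely, that separation is just the standard ceph.conf options (subnets
here are illustrative):
[global]
public network = 10.40.0.0/16
cluster network = 10.41.0.0/16
OSDs then carry replication and recovery traffic on the cluster network while
clients and mons stay on the public one.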
Aaron
On Apr 23
with the IPs on multiple interfaces.
Aaron
Date: Sat, 22 Apr 2017 17:37:01 +0000
From: Maxime Guyot <maxime.gu...@elits.com>
To: Richard Hesse <richard.he...@weebly.com>,
Jan Marquardt <j...@artfiles.de>
Cc: "ceph-users@lists.ceph.com"
# ceph -v
ceph version 11.2.0 (f223e27eeb35991352ebc1f67423d4ebc252adb7)
I am also investigating sysdig as recommended.
Thanks!
-Aaron
On Mon, Apr 17, 2017 at 8:15 AM, Sage Weil wrote:
> On Sat, 15 Apr 2017, Aaron Ten Clay wrote:
> > Hi all,
> >
> > Our cluster is exp
process was about 4.2GiB.
https://pastebin.com/nLQ8Jpwt
Thanks again for the insight!
-Aaron
On Sat, Apr 15, 2017 at 10:34 AM, Aaron Ten Clay
wrote:
> Thanks for the recommendation, Bob! I'll try to get this data later today
> and reply with it.
>
> -Aaron
>
> On Sat,
Thanks for the recommendation, Bob! I'll try to get this data later today
and reply with it.
-Aaron
On Sat, Apr 15, 2017 at 9:46 AM, Bob R wrote:
> I'd recommend running through these steps and posting the output as well
> http://docs.ceph.com/docs/master/rados/troubleshooting/
= benjamin, jake, jennifer
mon_host = 10.42.5.38,10.42.5.37,10.42.5.36
[osd]
osd crush update on start = false
Thanks,
-Aaron
On Sat, Apr 15, 2017 at 5:39 AM, Peter Maloney <peter.malo...@brockmann-consul
before the problem started and that's the
last time data was written.
I can only assume we've found another crippling bug of some kind, this
level of memory usage is entirely unprecedented. What can we do?
Thanks in advance for any suggestions.
-Aaron
I'm seeing similar behavior as well.
-rw-rw-r-- 1 testuser testgroup 6 Nov 6 07:41 testfile
aaron@testhost$ groups
... testgroup ...
aaron@testhost$ cat > testfile
-bash: testfile: Permission denied
Running version 9.0.2. Were you able to make any progress on this?
Thanks,
-Aaron
up_from 8 up_thru 17 down_at 20
last_clean_interval [0,0) 10.241.226.117:6800/1949
10.241.226.117:6801/1949 10.241.226.117:6802/1949
10.241.226.117:6803/1949 exists e1aaeab0-627c-4f22-b506-4856f2c8befb
[root@node1 ceph-0]# ls current/0.11_head/
__head_0011__0
[root@node1 ceph]#
On 2015/9/7 16:16
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed
to interpret this.
--- begin dump of recent events ---
0> 2015-09-07 15:13:02.540432 7f74cea0d800 -1 *** Caught signal
(Aborted) **
in thread 7f74cea0d800
ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3)
1
h as though I'd lost a
disk?
-Aaron
On Fri, Aug 28, 2015 at 11:17 AM, David Zafman wrote:
>
> Without my latest branch which hasn't merged yet, you can't repair an EC
> pg in the situation that the shard with a bad checksum is in the first k
> chunks.
>
> A way
ceph -s: http://hastebin.com/xetohugibi
ceph pg dump: http://hastebin.com/bijehoheve
ceph -v: ceph version 9.0.2 (be422c8f5b494c77ebcf0f7b95e5d728ecacb7f0)
ceph osd dump: http://hastebin.com/fitajuzeca
-Aaron
ete-pgs-oh-my/
Thanks,
-Aaron
On Fri, Jun 5, 2015 at 11:17 AM, Robert LeBlanc
wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> Did you try to deep-scrub the PG after copying it to 29?
> -
> Robert LeBlanc
> GPG Fingerprint 79A2 9CA4 6CC4 45DD
may help; search ceph.com/docs for 'incomplete')
Thanks in advance!
-Aaron
ect to
CephFS file is to read the xattrs on the 0th stripe object and pick out the
strings.)
Thanks in advance for any suggestions/pointers!
--
Aaron Ten Clay
http://www.aarontc.com/
client recently, though, so I'd be interested in hearing how that
works for you if you try it.
I run a single MDS, and haven't had any issues at all since switching to
the FUSE client :)
-Aaron
assumption that my uploads started stalling because too
many un-gc’ed parts accumulated, but I may be way off base there.
Any thoughts would be much appreciated, Aaron
ing in multiple
placements on 2 or 3 osds per pg.
It turns out what I'm trying to do is described here:
https://www.mail-archive.com/ceph-users%40lists.ceph.com/msg01076.html
But I can't find any other references to anything like this.
Thanks, Aaron
> On Dec 23, 2014, at 9:23 AM, Aaron Bas
sure that doesn't happen, but I'm having a hard time sorting out how to hit the
requirement of balancing among hosts *and* allowing for more than one osd per
host.
Thanks, Aaron
'48910307",
"log_tail": "50495'48906592",
The log tail seems to have lagged behind the last_update/last_complete. I
suspect this is what's causing the cluster to reject these pgs. Anyone know how
I can go about cleaning this up?
Aaron
> On Dec 1, 2014, at 8:12 PM, Aar
tamp": "2014-11-18 17:08:49.368486",
"last_clean_scrub_stamp": "2014-11-18 17:08:49.368486",
"log_size": 3001,
"ondisk_log_size": 3001,
Also in the peering section, all the peers now have the same last_update: which
makes me think it should just pick up and take off.
There is another thing I'm having problems with, and I'm not sure if it's
related or not. I set a crush map manually, as I have a mix of ssd and platter
osds, and it seems to work when I set it: the cluster starts rebalancing, etc.
But if I do a restart ceph-all on all my nodes, the crush map seems to revert
to the one I didn't set. I don't know if it's being blocked from taking by these
incomplete pgs, or if I'm missing a step to get it to "stick". It makes me think
that when I'm stopping and starting these osds to use ceph_objectstore_tool on
them, they may be getting out of sync with the cluster.
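(A hedged guess on the reverting crush map: by default osds re-register their
crush location on startup, which rewrites manual placements. The usual fix is
setting, in ceph.conf:
[osd]
osd crush update on start = false
)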
Any insights would be greatly appreciated,
Aaron
ends?
Aaron
> On Nov 12, 2014, at 4:00 PM, Craig Lewis wrote:
>
> http://tracker.ceph.com/issues/9206
>
> My post to the ML: http://www.spinics.net/lists/ceph-users/msg12665.html
as seeing with Apache 2.4.
>
> Try downgrading the primary cluster to Apache 2.2. In my testing, the
> secondary cluster could run 2.2 or 2.4.
Do you have a link to that bug#? I want to see if it gives me any clues.
Aaron
earlier when I had mismatched keys. The .us-nh.rgw.buckets.index pool is
syncing properly, as are the users. It seems like really the only thing that
isn’t syncing is the .zone.rgw.buckets pool.
Thanks, Aaron
>
>
>
>
> On Tue, Nov 11, 2014 at 6:51 AM, Aaron Bassett wrote:
16.10.103:0/1007381 done
calling dispatch on 0x7f51b4001460
2014-11-11 14:37:06.701815 7f54447f0700 0 WARNING: set_req_state_err err_no=5
resorting to 500
2014-11-11 14:37:06.701894 7f54447f0700 1 == req done req=0x7f546800f3b0
http_status=500 ==
Any information you could give me
Ah so I need both users in both clusters? I think I missed that bit, let me see
if that does the trick.
Aaron
> On Nov 5, 2014, at 2:59 PM, Craig Lewis wrote:
>
> One region two zones is the standard setup, so that should be fine.
>
> Is metadata (users and buckets) being repl
I'm also wondering if what I'm attempting, with two clusters in the same region
as separate zones, makes sense?
Thanks, Aaron
any copies are
stored. (Default is 3, IIRC.)
RADOS block device volumes are always striped across 4 MiB objects. I don't
believe that is configurable (at least not yet.)
FYI, this list is intended for discussion of Ceph community concerns. These
kinds of questions are better han
I know how to go
from a file in CephFS to the objects, but not the other way around!)
The object with troubles is:
/current/0.56_head/DIR_6/DIR_5/DIR_D/DIR_9/1023de2.0180__head_67269D56__0
And each of the three acting OSDs has different contents for this file.
Than
On Mon, Jun 16, 2014 at 11:16 AM, Gregory Farnum wrote:
> On Mon, Jun 16, 2014 at 11:11 AM, Aaron Ten Clay
> wrote:
> > I would also like to see Ceph get smarter about inconsistent PGs. If we
> > can't automate the repair, at least the "ceph pg repair" command
to its own local log
> >> file. You'll need to identify for yourself which version is correct,
> >> which will probably involve going and looking at them inside each
> >> OSD's data store. If the primary is correct for all the objects in a
> >> PG,
If someone knows how to resolve this,
I'd appreciate some insight. I think this would be a good topic for adding
to the OSD/PG operations section of the manual, or at least a wiki article.
Thanks!
-Aaron
Are you setting the nodeep-scrub flag?
-Aaron
On Tue, May 20, 2014 at 5:21 PM, Mike Dawson wrote:
> Today I noticed that deep-scrub is consistently missing some of my
> Placement Groups, leaving me with the following distribution of PGs and the
> last day they were successfully deep-scrubbed.
Mike,
You can find the last scrub info for a given PG with "ceph pg x.yy query".
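For example (pg id hypothetical):
ceph pg 22.1611 query | grep -i scrub_stamp
which prints last_scrub_stamp and last_deep_scrub_stamp for that pg.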
-Aaron
On Wed, May 7, 2014 at 8:47 PM, Mike Dawson wrote:
> Perhaps, but if that were the case, would you expect the max concurrent
> number of deep-scrubs to approach the number of OSDs in th
Hi,
You will probably get more help from the ceph-users list. I've CC'd your
message.
-Aaron
On Tue, Apr 8, 2014 at 4:50 PM, Michael Nelson wrote:
> I am trying to mount CephFS from a freshly installed v0.79 cluster using a
> kernel built from git.kernel.org:kernel/git/sage/ceph-client.git
> (for-linus a30be7cb) and running into the following dmesg errors on mount:
>
> libceph: mon0 198.1
Well that was quick!
osd.0 crashed already, here's the log (~20 MiB):
http://www.aarontc.com/logs/ceph-osd.0.log.bz2
I updated the bug report as well.
Thanks,
-Aaron
On Mon, Mar 31, 2014 at 2:16 PM, Aaron Ten Clay wrote:
> Greg,
>
> I'm in the process of doing so n
Greg,
I'm in the process of doing so now. joshd asked for "debug filestore = 20"
as well, and I just restarted an OSD with those changes. As soon as it
crashes again, I'll post the log file.
joshd also had me open a bug: http://tracker.ceph.com/issues/7922
Thanks,
-Aaron
and seems like a
generally unhealthy state of being.
Here's a fresh log file (~3MiB) from one OSD that crashed (old log moved
aside before restarting after crash):
http://www.aarontc.com/logs/ceph-osd.4.log
Thanks,
-Aaron
majority of that
howto, though, just the notes at the top :)
-Aaron
On Tue, Jan 28, 2014 at 2:32 PM, Philipp von Strobl-Albeg <phil...@pilarkto.net> wrote:
> Hi all,
>
> thank you very much for your input.
>
> I sync the clock on all hosts per ntpdate pool.ntp.org and sync t
Udo,
I think you might have better luck using "ceph osd set noout" before doing
maintenance, rather than "ceph osd set nodown", since you want the node to
be marked down, to avoid having I/O directed at it (but not out, to avoid
having recovery backfill begin).
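A typical maintenance flow, for reference:
ceph osd set noout
# ...do the maintenance / reboot the node...
ceph osd unset noout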
-Aaron
On Tue
e top but never
really updated the body of the document... I'm not entirely sure it's
straightforward or up to date any longer :) I'd be happy to make changes as
needed but I haven't manually deployed a cluster in several months, and
Inktank now has a manual deployment guide for C
Alek,
Not sure if it's the right tool, but you might also consider BitTorrent
Sync[1].
1: http://www.bittorrent.com/sync
-Aaron
On Thu, Jan 2, 2014 at 3:01 PM, Dimitri Maziuk wrote:
> On 01/02/2014 04:20 PM, Alek Storm wrote:
> > Anything? Would really appreciate any wisdom
ch.
Hope that helps!
-Aaron
On Wed, Dec 18, 2013 at 10:32 PM, Yuri Weinstein
wrote:
> Wow!!!
> I tried everything but not this!
> I will give it a try.
>
> However it does sound strange that only for this value requirements differ.
>
> Why?
>
> Regards
>
>
>
Yuri,
The "mon addr" directive is expecting an IP address in standard IPv4
decimal notation, i.e. "10.212.117.78:6969". If you change your config to
reflect that you should have better luck :)
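i.e., in ceph.conf (mon section name assumed):
[mon.a]
mon addr = 10.212.117.78:6969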
-Aaron
On Wed, Dec 18, 2013 at 1:02 PM, Yuri Weinstein wrote:
> I was not
simultaneously quite easily.
Maybe not the "enterprise" sector use case, but it's getting there :)
-Aaron
On Thu, Nov 14, 2013 at 12:11 PM, James Pearce wrote:
> On 2013-11-14 19:59, Dimitri Maziuk wrote:
>
>> Cephfs is in fact one of ceph's big selling poin
lping as well. I currently maintain ebuilds for the
latest Ceph versions at an overlay called Nextoo, if anyone is interested:
https://github.com/nextoo/portage-overlay/tree/master/sys-cluster/ceph
I'm happy to help with other Gentoo-related Ceph development as well :)
--
Aaron Ten Clay
http://www.aarontc.com/
2 out
> of 3, 3 out of 4, 3 out of 5, 4 out of 6,...).
>
> -Joao
>
>
Joao,
The page at http://ceph.com/docs/master/rados/operations/add-or-rm-mons/
only lists "1; 3 out of 5; 4 out of 6; etc.". Perhaps it should be updated
if 2 out o
It sounds like you tried to go from 1 monitor to 2 monitors, which is an
unsupported configuration as far as I am aware. You must have either 1, or
3 or more monitors for a quorum to be possible.
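(Quorum requires a strict majority, floor(n/2)+1 monitors: with n=2 that
means both must be up, so adding a second mon gains no redundancy and can
stall the cluster while the new mon is not yet in quorum. With n=3, one
monitor can fail and quorum survives.)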
More information is available here:
http://ceph.com/docs/master/rados/operations/add-or-rm-mons/
behaves like Gilles indicated. When you use ceph-deploy, the
daemons that get started on your first host will bind to the correct IPs.
-Aaron
s1
item osd.0 weight 3.630
}
root default {
id -1 # do not change unnecessarily
# weight 0.000
alg straw
hash 0 # rjenkins1
item gbl10134201 weight 0.000
item gbl10134202 weight 0.000
item gbl10134203 weight 0.000
functionality
just yet. Changing this flag to false allows the CephFS to be mounted by
3.5.0 and 3.10.7 without a problem. There is probably a good opportunity to
add additional error logging to the in-kernel client here as well.
Thanks for the help!
-Aaron
On Fri, Sep 27, 2013 at 2:53 PM, Aaron Ten Cl
> What kernel version are you using?
I have two configurations I've tested:
root@chekov:~# uname -a
Linux chekov 3.5.0-40-generic #62~precise1-Ubuntu SMP Fri Aug 23 17:38:26
UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
aaron@seven ~ $ uname -a
Linux seven 3.10.7-gentoo-r1 #1 SMP PREEMPT Thu Sep
Hi,
I probably did something wrong setting up my cluster with 0.67.3. I
previously built a cluster with 0.61 and everything went well, even after
an upgrade to 0.67.3. Now I built a fresh 0.67.3 cluster and when I try to
mount CephFS:
aaron@seven ~ $ sudo mount -t ceph 10.42.6.21:/ /mnt/ceph
On Wed, Sep 25, 2013 at 8:44 PM, Sage Weil wrote:
> On Wed, 25 Sep 2013, Aaron Ten Clay wrote:
> > Hi all,
> >
> > Does anyone know how to specify which pool the mds and CephFS data will
> be
> > stored in?
> >
> > After creating a new cluster, t
set only at pool creation time, so I am working under
the assumption I must create a new pool with a larger pg count and use that
for CephFS and the mds storage.
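A sketch of what I mean, using that era's CLI (pool names and pg counts are
placeholders; ceph mds newfs was the pre-"ceph fs new" command and takes pool
ids, not names):
ceph osd pool create cephfs_data2 512 512
ceph osd pool create cephfs_metadata2 128 128
ceph mds newfs <metadata-pool-id> <data-pool-id> --yes-i-really-mean-it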
Thanks!
-Aaron