Hey Cephers,
There is a new OSS project called Delta Lake: https://delta.io/
It is compatible with HDFS, but it seems ripe for adding Ceph support as a
storage backend. Just wanted to put this on the radar and see if there are any feelers.
Best
I generally have gone the crush reweight 0 route.
This way the drive can participate in the rebalance, and the rebalance
only happens once. Then you can take it out and purge.
If I am not mistaken, this is the safest approach.
ceph osd crush reweight osd.<id> 0
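Spelled out, the full sequence I mean looks roughly like this (a sketch, not
lifted from any doc; osd.<id> stands in for the drive being removed, and the
purge step assumes Luminous or newer):

ceph osd crush reweight osd.<id> 0    # drain the OSD; the data only rebalances once
ceph -s                               # wait for the cluster to get back to HEALTH_OK
ceph osd out osd.<id>
systemctl stop ceph-osd@<id>          # on the host that holds the drive
ceph osd purge osd.<id> --yes-i-really-mean-it   # removes the OSD, its crush entry and auth key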
On Wed, Jan 30, 2019 at 7:45 AM Fyodor Ustinov wrote
> from Xenial to Bionic, as well as new ceph nodes that installed straight to
> Bionic, due to the repo issues. Even if you try to use the xenial packages,
> you will run into issues with libcurl4 and libcurl3 I imagine.
>
> Reed
>
> On Jan 14, 2019, at 12:21 PM, Scottix
Hey,
I am having some issues upgrading to 12.2.10 on my 18.04 server; it is
saying 12.2.8 is the latest.
I am not sure why it is not going to 12.2.10. The rest of my cluster is
already on 12.2.10, except this one machine.
$ cat /etc/apt/sources.list.d/ceph.list
deb https://download.ceph.com/de
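For reference, these are the checks I run to see where the 12.2.8 candidate
is coming from (just ordinary apt debugging on my side, nothing special):

apt-get update
apt-cache policy ceph     # shows the candidate version and which repo supplies it
apt-cache madison ceph    # lists every version apt can see, per repo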
I just had this question as well.
I am interested in what you mean by fullest: is it percentage-wise or raw
space? If I have an uneven distribution and adjust it, would it potentially
make more space available?
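For context, I have been looking at the per-OSD numbers with the plain
built-in commands, which is why I am asking:

ceph df        # per-pool and raw cluster usage
ceph osd df    # per-OSD size, raw use and %USE, which is what I assume "fullest" refers to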
Thanks
Scott
On Thu, Jan 10, 2019 at 12:05 AM Wido den Hollander wrote:
>
>
> On 1
Awww that makes more sense now. I guess I didn't quite comprehend EPERM at
the time.
Thank You,
Scott
On Mon, Jul 30, 2018 at 7:19 AM John Spray wrote:
> On Fri, Jul 27, 2018 at 8:35 PM Scottix wrote:
> >
> > ceph tell mds.0 client ls
> > 2018-07-27 12:32:40.344
ceph tell mds.0 client ls
2018-07-27 12:32:40.344654 7fa5e27fc700 0 client.89408629 ms_handle_reset
on 10.10.1.63:6800/1750774943
Error EPERM: problem getting command descriptions from mds.0
mds log
2018-07-27 12:32:40.342753 7fc9c1239700 1 mds.CephMon203 handle_command:
received command from cl
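A workaround I am considering (my own assumption, nothing suggested in the
thread yet) is to run the same query through the admin socket on the MDS
host itself, which should skip the path that is returning EPERM:

ceph daemon mds.CephMon203 client ls    # run locally on the host carrying the MDS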
So we have been testing this quite a bit. Having the failure domain be
partially available is OK for us, but odd, since we don't know which part
will be down; with a single MDS we know everything will be blocked.
It would be nice to have an option to block all IO if the filesystem hits a
degraded state.
nf file folder.
>
> On Mon, Apr 30, 2018 at 5:31 PM, Scottix wrote:
> > It looks like ceph-deploy@2.0.0 is incompatible with systems running
> 14.04
> > and it got released in the luminous branch with the new deployment
> commands.
> >
> > Is there anyway to down
It looks like ceph-deploy@2.0.0 is incompatible with systems running 14.04,
and it got released in the luminous branch with the new deployment commands.
Is there any way to downgrade to an older version?
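One way that should pin the older series, assuming ceph-deploy was installed
from PyPI with pip (untested sketch on my part):

pip install 'ceph-deploy<2.0.0'    # pulls the newest 1.5.x release instead of 2.0.0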
Log of osd list
XYZ@XYZStat200:~/XYZ-cluster$ ceph-deploy --overwrite-conf osd list
XYZCeph204
er
> ranks. I'm not sure if it *could* following some code changes, but
> anyway that just not how it works today.
>
> Does that clarify things?
>
> Cheers, Dan
>
> [1] https://ceph.com/community/new-luminous-cephfs-subtree-pinning/
>
>
> On Fri, Apr 27, 2018
Patrick Donnelly wrote:
> On Thu, Apr 26, 2018 at 4:40 PM, Scottix wrote:
> >> Of course -- the mons can't tell the difference!
> > That is really unfortunate, it would be nice to know if the filesystem
> has
> > been degraded and to what degree.
>
> If a rank is laggy/
when you say not optional, that is not exactly true; it will still run.
On Thu, Apr 26, 2018 at 3:37 PM Patrick Donnelly
wrote:
> On Thu, Apr 26, 2018 at 3:16 PM, Scottix wrote:
> > Updated to 12.2.5
> >
> > We are starting to test multi_mds cephfs and we are going through
Updated to 12.2.5
We are starting to test multi_mds cephfs and we are going through some
failure scenarios in our test cluster.
We are simulating a power failure on one machine, and we are getting mixed
results as to what happens to the file system.
This is the status of the mds once we simulate the
order would be after mon upgrade and before osd. There are a couple of
> threads related to colocated mon/osd upgrade scenarios.
>
> On Thu, Apr 26, 2018 at 9:05 AM, Scottix wrote:
> > Right I have ceph-mgr but when I do an update I want to make sure it is
> the
> > recommen
Vasu Kulkarni wrote:
> On Thu, Apr 26, 2018 at 8:52 AM, Scottix wrote:
> > Now that we have ceph-mgr in luminous what is the best upgrade order for
> the
> > ceph-mgr?
> >
> > http://docs.ceph.com/docs/master/install/upgrading-ceph/
> I think that is outdated and ne
Now that we have ceph-mgr in Luminous, what is the best upgrade order for
the ceph-mgr?
http://docs.ceph.com/docs/master/install/upgrading-ceph/
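My working assumption, and please correct me if I am wrong, is mons first,
then mgrs, then OSDs, then MDS/RGW; on a systemd deployment roughly:

systemctl restart ceph-mon.target    # one monitor at a time, waiting for quorum
systemctl restart ceph-mgr.target    # managers next
systemctl restart ceph-osd.target    # then each OSD host, waiting for HEALTH_OK in between
systemctl restart ceph-mds.target    # MDS (and RGW) last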
Thanks.
simpler once you get to that point.
>
> On Mon, Feb 26, 2018 at 9:08 AM Ronny Aasen
> wrote:
>
>> On 23. feb. 2018 23:37, Scottix wrote:
>> > Hey,
>> > We had one of our monitor servers die on us and I have a replacement
>> > computer now. In between th
Hey,
We had one of our monitor servers die on us and I have a replacement
computer now. In the meantime you have released 12.2.3, but we are
still on 12.2.2.
We are on Ubuntu servers.
I see all the binaries are in the repo, but your package cache only shows
12.2.3; is there a reason for not kee
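The workaround I am considering, if the older debs really are still in the
pool, is to fetch them directly and install with dpkg (rough sketch, untested):

apt-cache madison ceph    # confirm 12.2.2 is really gone from the package index
# download the 12.2.2 debs straight from the repo's pool/ directory, then:
dpkg -i ceph*_12.2.2-*.deb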
When I add in the next HDD I'll try the method again and see if I just
needed to wait longer.
On Tue, Nov 7, 2017 at 11:19 PM Wido den Hollander wrote:
>
> > Op 7 november 2017 om 22:54 schreef Scottix :
> >
> >
> > Hey,
> > I recently updated to lumino
Hey,
I recently updated to Luminous and started deploying bluestore OSD nodes. I
normally set osd_max_backfills = 1 and then ramp it up as time progresses.
With bluestore, though, it seems I wasn't able to do this on the fly
like I used to with XFS.
ceph tell osd.* injectargs '--osd-max-backfil
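Written out in full, with the check I use to see whether the value actually
took (osd.0 is just an example id):

ceph tell osd.* injectargs '--osd-max-backfills 1'
ceph daemon osd.0 config get osd_max_backfills    # on the OSD host; shows the running value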
Personally I kind of like the current format; fundamentally we are
talking about data storage, which should be the most tested and scrutinized
piece of software on your computer. I would rather have feature XYZ later
than sooner, compared to "oh, I lost all my data". I am thinking of a recent FS that had a
featur
Great to hear.
Best
On Mon, Aug 21, 2017 at 8:54 AM John Spray wrote:
> On Mon, Aug 21, 2017 at 4:34 PM, Scottix wrote:
> > I don't want to hijack another thread so here is my question.
> > I just learned about this option from another thread and from my
> > u
I don't want to hijack another thread, so here is my question.
I just learned about this option from another thread, and from my
understanding of the Ceph cluster we have set up, the default value
is not good for us: it is "rack" and I should have it on "host".
Which comes to my point why is it set
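For the record, I believe the option in question is
mon_osd_down_out_subtree_limit; please correct me if I have the name wrong.
In ceph.conf that would be:

[mon]
mon osd down out subtree limit = host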
I'm by no means a Ceph expert but I feel this is not a fair representation
of Ceph, I am not saying numbers would be better or worse. Just the fact I
see some major holes that don't represent a typical Ceph setup.
1 Mon? Most have a minimum of 3
1 OSD? basically all your reads and writes are going
>
> ____
> From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of John
> Spray [jsp...@redhat.com]
> Sent: Thursday, February 23, 2017 3:47 PM
> To: Scottix
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Rando
Ya, the ceph-mon.$ID.log.
I was running ceph -w when one of them occurred too, and it never output
anything.
Here is a snippet from the 5:11 AM occurrence.
On Thu, Feb 23, 2017 at 1:56 PM Robin H. Johnson wrote:
> On Thu, Feb 23, 2017 at 09:49:21PM +0000, Scottix wrote:
> > ceph versi
ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
We are seeing some weird behavior and are not sure how to diagnose what could
be going on. We started monitoring the overall_status field from the JSON
query, and every once in a while we get a HEALTH_WARN for a minute or two.
Monitoring logs:
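Roughly how we poll it (a sketch of our check; overall_status is the field
we watch):

ceph health --format json | python -c 'import sys, json; print(json.load(sys.stdin)["overall_status"])'
ceph health detail    # run by hand when the poll flips to HEALTH_WARN, to capture the reason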
*From:* ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Scottix
[scot...@gmail.com]
*Sent:* Thursday, July 07, 2016 5:01 PM
*To:* ceph-users
*Subject:* Re: [ceph-users] Failing to Activate
I would take the analogy of a RAID scenario: basically, a standby is
considered like a spare drive. If that spare drive goes down, it is good to
know about the event, but it in no way indicates a degraded system;
everything keeps running at top speed.
If you had multiple active MDS and one goes do
Agreed, no announcement like there usually is. What is going on?
Hopefully there is an explanation. :|
On Mon, Sep 26, 2016 at 6:01 AM Henrik Korkuc wrote:
> Hey,
>
> 10.2.3 has been tagged in the jewel branch for more than 5 days already, but there
> has been no announcement for it yet. Is there any reason
it in at least.
--Scott
On Thu, Jul 7, 2016 at 2:54 PM Scottix wrote:
> Hey,
> This is the first time I have had a problem with ceph-deploy
>
> I have attached the log but I can't seem to activate the osd.
>
> I am running
> ceph version 10.2.0 (3a9fba20ec743699b69bd0181dd6
Hey,
This is the first time I have had a problem with ceph-deploy.
I have attached the log, but I can't seem to activate the OSD.
I am running
ceph version 10.2.0 (3a9fba20ec743699b69bd0181dd6c54dc01c64b9)
I did upgrade from Infernalis->Jewel
I haven't changed ceph ownership but I do have the conf
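For anyone who hits the same thing after an Infernalis -> Jewel upgrade, the
two usual options I know of (double-check against the release notes) are:

# option 1: switch ownership of the data dirs to the ceph user (can take a while on big OSDs)
chown -R ceph:ceph /var/lib/ceph
# option 2: keep running as root by adding this to ceph.conf
#   setuser match path = /var/lib/ceph/$type/$cluster-$id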
Great thanks.
--Scott
On Fri, Jun 3, 2016 at 8:59 AM John Spray wrote:
> On Fri, Jun 3, 2016 at 4:49 PM, Scottix wrote:
> > Is there anyway to check what it is currently using?
>
> Since Firefly, the MDS rewrites TMAPs to OMAPs whenever a directory is
> updated, so a pre-
Is there any way to check what it is currently using?
Best,
Scott
On Fri, Jun 3, 2016 at 4:26 AM John Spray wrote:
> Hi,
>
> If you do not have a CephFS filesystem that was created with a Ceph
> version older than Firefly, then you can ignore this message.
>
> If you have such a filesystem, you
I have three comments on our CephFS deployment. Some background first: we
have been using CephFS since Giant with some not-so-important data. We are
using it more heavily now on Infernalis. We have our own raw data storage
using the POSIX semantics and keep everything as basic as possible.
Basicall
We have run into this same scenario, where the long tail of recovery takes
much longer than the initial phase, either when we are adding an OSD or when
an OSD gets taken down. At first we have max-backfills set to 1 so it doesn't
kill the cluster with IO. As time passes, the single OSD is performing the
I have been running some speed tests on POSIX file operations, and I noticed
that even just listing files can take a while compared to an attached HDD. I
am wondering: is there a reason it takes so long to even just list files?
Here is the test I ran:
time for i in {1..10}; do touch $i; done
Interna
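The comparison I keep meaning to run alongside it (not part of the numbers
above) is a plain readdir versus a stat of every entry, since the per-entry
stats are where the client/MDS round trips should show up:

time ls -f     # readdir only, no per-entry stat
time ls -la    # stats every entry, so each uncached inode costs an MDS round trip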
Thanks for the responses John.
--Scott
On Wed, Feb 24, 2016 at 3:07 AM John Spray wrote:
> On Tue, Feb 23, 2016 at 5:36 PM, Scottix wrote:
> > I had a weird thing happen when I was testing an upgrade in a dev
> > environment where I have removed an MDS from a machine a while
I had a weird thing happen when I was testing an upgrade in a dev
environment where I had removed an MDS from a machine a while back.
I upgraded to 0.94.6 and lo and behold the mds daemon started up on the
machine again. I know the /var/lib/ceph/mds folder was removed because I
renamed it /var/l
Looks like the kernel bug with Ceph and XFS was fixed. I haven't
tested it yet but just wanted to give an update.
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1527062
On Tue, Dec 8, 2015 at 8:05 AM Scottix wrote:
> I can confirm it seems to be kernels greater than
I can confirm it seems to be kernels newer than 3.16; we had this problem
where servers would lock up, and we had to perform restarts on a weekly basis.
We downgraded to 3.16, and since then we have not had to do any restarts.
I did find this thread in the XFS forums and I am not sure if it has been
fixed
OpenSuse 12.1
3.1.10-1.29-desktop
On Wed, Sep 30, 2015, 5:34 AM Yan, Zheng wrote:
> On Tue, Sep 29, 2015 at 9:51 PM, Scottix wrote:
>
>> I'm positive the client I sent you the log is 94. We do have one client
>> still on 87.
>>
> which version of kernel are
by increasing
>> client_cache_size (on the client) if your RAM allows it.
>>
>> John
>>
>> On Tue, Sep 29, 2015 at 12:58 AM, Scottix wrote:
>>
>>> I know this is an old one but I got a log in ceph-fuse for it.
>>> I got this on OpenSuse 12.1
>>>
"cct" must be the
> broken one, but maybe it's just the Inode* or something.
> -Greg
>
> On Mon, Sep 21, 2015 at 2:03 PM, Scottix wrote:
> > I was rsyncing files to ceph from an older machine and I ran into a
> > ceph-fuse crash.
> >
> > OpenSUSE 1
I was rsyncing files to ceph from an older machine and I ran into a
ceph-fuse crash.
OpenSUSE 12.1, 3.1.10-1.29-desktop
ceph-fuse 0.94.3
The rsync was running for about 48 hours then crashed somewhere along the
way.
I added the log and can run more if you like; I am not sure how to
reproduce it.
I have a program that monitors the speed, and I have seen 1 TB/s pop up;
there is just no way that is true.
Probably the way it is calculated is prone to extreme measurements; if you
average it out you get a more realistic number.
On Tue, Sep 15, 2015 at 12:25 PM Mark Nelson wrote:
> FWI
I saw this article on Linux Today and immediately thought of Ceph.
http://www.enterprisestorageforum.com/storage-management/object-storage-vs.-posix-storage-something-in-the-middle-please-1.html
I was thinking: would it theoretically be possible with RGW to do a GET and
set a BEGIN_SEEK and OFFSET
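If I understand the S3 API right, RGW should already honour the standard
HTTP Range header, which is basically the seek/offset behaviour I am after
(the endpoint, bucket and object below are made up, and the request assumes
a publicly readable object):

curl -H "Range: bytes=1048576-2097151" http://rgw.example.com/mybucket/myobject -o part.bin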
I'll be more of a third-party person and try to be factual. =)
I wouldn't write off Gluster too fast yet.
Besides what you described with the object and disk storage, it uses the
Amazon Dynamo paper's eventually-consistent methodology for organizing data.
Gluster has different features, so I would look
Ya, Ubuntu has a package called mlocate which runs updatedb.
We basically turn it off, as shown here:
http://askubuntu.com/questions/268130/can-i-disable-updatedb-mlocate
If you still want it, you could edit the settings in /etc/updatedb.conf and
add a prunepath for your Ceph directory.
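Something like this in /etc/updatedb.conf (the mount point is just an
example):

# append the ceph mount point to the existing PRUNEPATHS line
PRUNEPATHS="/tmp /var/spool /media /mnt/ceph"
# or add the filesystem types to the existing PRUNEFS list so any ceph mount is skipped
PRUNEFS="... ceph fuse.ceph"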
On Tue, Jun 23, 2015 a
I noticed amd64 Ubuntu 12.04 hasn't updated its packages to 0.94.2
can you check this?
http://ceph.com/debian-hammer/dists/precise/main/binary-amd64/Packages
Package: ceph
Version: 0.94.1-1precise
Architecture: amd64
On Thu, Jun 11, 2015 at 10:35 AM Sage Weil wrote:
> This Hammer point release
From an ease-of-use standpoint, and depending on the situation you are
setting up your environment for, the idea is as follows:
It seems like it would be nice to have some easy on-demand control where
you don't have to think a whole lot, other than knowing how it is going to
affect your cluster in a gene
As a point to
* someone accidentally removed a thing, and now they need the thing back
MooseFS has an interesting feature that I thought would be good for CephFS
and maybe others: basically a timed trash bin.
"Deleted files are retained for a configurable period of time (a file
system level
I fully understand why it is just a comment :)
Can't wait for scrub.
Thanks!
On Thu, Apr 9, 2015 at 10:13 AM John Spray wrote:
>
>
> On 09/04/2015 17:09, Scottix wrote:
> > Alright sounds good.
> >
> > Only one comment then:
> > From an IT/ops perspectiv
Wed, Apr 8, 2015 at 8:10 PM Yan, Zheng wrote:
> On Thu, Apr 9, 2015 at 7:09 AM, Scottix wrote:
> > I was testing the upgrade on our dev environment and after I restarted
> the
> > mds I got the following errors.
> >
> > 2015-04-08 15:58:34.056470 mds.0 [ERR] unmatched
I was testing the upgrade in our dev environment, and after I restarted the
mds I got the following errors.
2015-04-08 15:58:34.056470 mds.0 [ERR] unmatched rstat on 605, inode has
n(v70 rc2015-03-16 09:11:34.390905), dirfrags have n(v0 rc2015-03-16
09:11:34.390905 1=0+1)
2015-04-08 15:58:34.056530
…
> The time variation is caused by cache coherence. When the client has valid
> information in its cache, the 'stat' operation will be fast. Otherwise the
> client needs to send a request to the MDS and wait for the reply, which
> will be slow.
This sounds like the behavior I had with CephFS giving me question marks.
Ya we are not at 0.87.1 yet, possibly tomorrow. I'll let you know if it
still reports the same.
Thanks John,
--Scottie
On Tue, Mar 3, 2015 at 2:57 PM John Spray wrote:
> On 03/03/2015 22:35, Scottix wrote:
> > I was testing a little bit more and decided to run the
> ce
Bad entry start ptr
(0x2aee800b3f) at 0x2aee80167e
2015-03-03 14:32:50.486354 7f47c3006780 -1 Bad entry start ptr
(0x2aee800e4f) at 0x2aee80198e
2015-03-03 14:32:50.577443 7f47c3006780 -1 Bad entry start ptr
(0x2aee801f65) at 0x2aee802aa4
Events by type:
On Tue, Mar 3, 2015 at 12:01 PM Scotti
d create any issues.
Anyway, we are going to update the machine soon, so I can report whether we
keep having the issue.
Thanks for your support,
Scott
On Mon, Mar 2, 2015 at 4:07 PM Scottix wrote:
> I'll try the following things and report back to you.
>
> 1. I can get a new kernel on ano
debug client = 20" it will output (a whole lot of) logging to the
> client's log file and you could see what requests are getting
> processed by the Ceph code and how it's responding. That might let you
> narrow things down. It's certainly not any kind of timeout.
> -G
On Mon, Mar 2, 2015 at 3:47 PM, Gregory Farnum wrote:
>
>> On Mon, Mar 2, 2015 at 3:39 PM, Scottix wrote:
>> > We have a file system running CephFS and for a while we had this issue
>> when
>> > doing an ls -la we get question marks in the response.
>> >
We have a file system running CephFS, and for a while we have had this issue
where, when doing an ls -la, we get question marks in the response.
-rw-r--r-- 1 wwwrun root14761 Feb 9 16:06
data.2015-02-08_00-00-00.csv.bz2
-? ? ? ? ??
data.2015-02-09_00-00-00.csv.bz2
If we
We currently have a 3-node system with 3 monitor nodes. I created them in
the initial setup, and the ceph.conf has:
mon initial members = Ceph200, Ceph201, Ceph202
mon host = 10.10.5.31,10.10.5.32,10.10.5.33
We are in the process of expanding and installing dedicated mon servers.
I know I can run:
cep
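I believe the command is ceph-deploy mon add, i.e. something like this
(please correct me if I have it wrong; <newmon> is the new dedicated mon host):

ceph-deploy mon add <newmon>
# then update "mon initial members" / "mon host" in ceph.conf and push it out
ceph-deploy --overwrite-conf config push <node1> <node2> <newmon>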
I would say it depends on your system and what the drives are connected to.
Some HBAs have a CLI tool to manage the connected drives, like a RAID card
would.
One other method I found: sometimes the system will expose the LEDs for you;
http://fabiobaltieri.com/2011/09/21/linux-led-subsystem/ has an
article o
Suggestion:
Can you link to a changelog of new features or major bug fixes
when you do new releases?
Thanks,
Scottix
On Wed, Sep 10, 2014 at 6:45 AM, Alfredo Deza wrote:
> Hi All,
>
> There is a new bug-fix release of ceph-deploy that helps prevent the
> environment variable
&
Thanks for the info.
I was able to do a lazy unmount and it started back up fine, if anyone
wants to know.
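For anyone searching later, this is roughly what I did (the mount point and
monitor address are placeholders, adjust to your setup):

umount -l /mnt/ceph                      # lazy unmount; detaches even while the old ceph-fuse is busy
# upgrade the ceph-fuse package here
ceph-fuse -m 10.10.1.1:6789 /mnt/ceph    # remount with the new ceph-fuse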
On Wed, Jul 16, 2014 at 10:29 AM, Gregory Farnum wrote:
> On Wed, Jul 16, 2014 at 9:20 AM, Scottix wrote:
>> I wanted to update ceph-fuse to a new version and I would like to h
tion
ceph-fuse[10474]: fuse failed to initialize
2014-07-16 09:08:57.784900 7f669be1a760 -1
fuse_mount(mountpoint=/mnt/ceph) failed.
ceph-fuse[10461]: mount failed: (5) Input/output error
Or is there a better way to do this?
--
Follow Me: @Scottix
http://about.me/scott
?
ceph version 0.80.1 (a38fe1169b6d2ac98b427334c12d7cf81f809b74)
Do I need to start over and not add the mds to be clean?
Thanks for your time
On Wed, May 21, 2014 at 12:18 PM, Wido den Hollander wrote:
> On 05/21/2014 09:04 PM, Scottix wrote:
>>
>> I am setting a CephFS cluster
standby? How reliable is the standby? Or should a
single active MDS be sufficient?
Thanks
--
Follow Me: @Scottix
http://about.me/scottix
scot...@gmail.com
if you see a mistake.
>
> Cheers
>
> --
> Loïc Dachary, Artisan Logiciel Libre
>
>
n this cluster?
I was looking at someone's question on the list and started looking up some
documentation and found this page:
http://ceph.com/docs/next/install/os-recommendations/
Do you think you could provide an update for Dumpling?
Best Regards
Great Thanks.
On Mon, Sep 9, 2013 at 11:31 AM, John Wilkins wrote:
> Yes. We'll have an update shortly.
>
> On Mon, Sep 9, 2013 at 11:29 AM, Scottix wrote:
> > I was looking at someones question on the list and started looking up
> some
> > documentation
es to
> a simple disk swap (assuming an intelligent hardware RAID controller).
> Obviously you still have a 50% reduction in disk space, but you have the
> advantage that your filesystem never sees the bad disk and all the problems
> that can cause.
>
> James
>
libcephfs.jar file, to see if
> CephPoolException.class is in there? It might just be that the
> libcephfs.jar is out-of-date.
>
> -Noah
>
> On Sun, Aug 4, 2013 at 8:44 PM, Scottix wrote:
> > I am running into an issues connecting hadoop to my ceph cluster and I'm
>
I am running into an issue connecting Hadoop to my Ceph cluster, and I'm
sure I am missing something but can't figure it out.
I have a Ceph cluster with the MDS running fine, and I can do a basic mount
perfectly normally.
I have hadoop fs -ls working well with a basic file:/// URI.
Info:
ceph cluster version 0
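For reference, these are the core-site.xml properties I believe are needed,
as I understood them from the CephFS Hadoop docs; treat this as a sketch,
and the monitor address is just a placeholder:

<property>
  <name>fs.default.name</name>
  <value>ceph://10.10.1.1:6789/</value>
</property>
<property>
  <name>fs.ceph.impl</name>
  <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
</property>
<property>
  <name>ceph.conf.file</name>
  <value>/etc/ceph/ceph.conf</value>
</property>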
this helps some people,
Scottix
On Wed, Jun 12, 2013 at 12:12 PM, Scottix wrote:
> Thanks Greg,
> I am starting to understand it better.
> I soon realized as well after doing some searching I hit this bug.
> http://tracker.ceph.com/issues/5194
> Which created the problem upon rebooting
Thanks Greg,
I am starting to understand it better.
I also soon realized, after doing some searching, that I hit this bug:
http://tracker.ceph.com/issues/5194
which created the problem upon rebooting.
Thank You,
Scottix
On Wed, Jun 12, 2013 at 10:29 AM, Gregory Farnum wrote:
> On Wed, Jun
.
Thanks for responding,
Scottix
On Wed, Jun 12, 2013 at 6:35 AM, John Wilkins wrote:
> ceph-deploy adds the OSDs to the cluster map. You can add the OSDs to
> the ceph.conf manually.
>
> In the ceph.conf file, the settings don't require underscores. If you
> modify your conf
but I guess it doesn't matter since it works.
Thanks for the clarification,
Scottix
--
Follow Me: @Scottix <http://www.twitter.com/scottix>