[ceph-users] Delta Lake Support

2019-05-08 Thread Scottix
Hey Cephers, there is a new OSS project called Delta Lake (https://delta.io/). It is compatible with HDFS but seems ripe for adding Ceph support as a backend store. Just want to put this on the radar for any feelers. Best

Re: [ceph-users] Right way to delete OSD from cluster?

2019-01-30 Thread Scottix
I generally have gone the crush reweight 0 route. This way the drive can participate in the rebalance, and the rebalance only happens once. Then you can take it out and purge. If I am not mistaken, this is the safest. ceph osd crush reweight 0 On Wed, Jan 30, 2019 at 7:45 AM Fyodor Ustinov wrote
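A minimal sketch of that sequence, assuming a Luminous-or-newer cluster and osd.12 as the drive being retired (adjust the id):
  $ ceph osd crush reweight osd.12 0          # drain: data rebalances away once
  $ ceph -s                                   # wait for the cluster to return to HEALTH_OK
  $ ceph osd out osd.12
  $ systemctl stop ceph-osd@12                # on the host holding the OSD
  $ ceph osd purge 12 --yes-i-really-mean-it  # removes it from the CRUSH map, auth keys, and OSD map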

Re: [ceph-users] Bionic Upgrade 12.2.10

2019-01-14 Thread Scottix
> from Xenial to Bionic, as well as new ceph nodes that installed straight to > Bionic, due to the repo issues. Even if you try to use the xenial packages, > you will run into issues with libcurl4 and libcurl3 I imagine. > > Reed > > On Jan 14, 2019, at 12:21 PM, Scottix

[ceph-users] Bionic Upgrade 12.2.10

2019-01-14 Thread Scottix
Hey, I am having some issues upgrading to 12.2.10 on my 18.04 server. It is saying 12.2.8 is the latest, and I am not sure why it is not going to 12.2.10; the rest of my cluster is already on 12.2.10 except this one machine. $ cat /etc/apt/sources.list.d/ceph.list deb https://download.ceph.com/de
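A quick way to see which repository is winning and what versions it actually offers (a diagnostic sketch, not a fix):
  $ cat /etc/apt/sources.list.d/ceph.list
  $ sudo apt-get update
  $ apt-cache policy ceph
The policy output shows the candidate version and which repo each available version comes from; if 12.2.8 is being served by the Ubuntu archive rather than download.ceph.com, the ceph.com bionic line is not providing 12.2.10 for this release.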

Re: [ceph-users] cephfs free space issue

2019-01-10 Thread Scottix
I just had this question as well. I am interested in what you mean by fullest: is it percentage-wise or raw space? If I have an uneven distribution and adjusted it, would it potentially make more space available? Thanks Scott On Thu, Jan 10, 2019 at 12:05 AM Wido den Hollander wrote: > > > On 1
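For checking how full the fullest OSD is, versus the raw totals, something like this helps on a Luminous-era cluster:
  $ ceph df          # pool-level usage; MAX AVAIL is driven by the fullest OSD under the pool's CRUSH root
  $ ceph osd df tree # per-OSD %USE, so an uneven distribution is easy to spot before and after reweighting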

Re: [ceph-users] cephfs tell command not working

2018-07-30 Thread Scottix
Awww that makes more sense now. I guess I didn't quite comprehend EPERM at the time. Thank You, Scott On Mon, Jul 30, 2018 at 7:19 AM John Spray wrote: > On Fri, Jul 27, 2018 at 8:35 PM Scottix wrote: > > > > ceph tell mds.0 client ls > > 2018-07-27 12:32:40.344

[ceph-users] cephfs tell command not working

2018-07-27 Thread Scottix
ceph tell mds.0 client ls 2018-07-27 12:32:40.344654 7fa5e27fc700 0 client.89408629 ms_handle_reset on 10.10.1.63:6800/1750774943 Error EPERM: problem getting command descriptions from mds.0 mds log 2018-07-27 12:32:40.342753 7fc9c1239700 1 mds.CephMon203 handle_command: received command from cl
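If the EPERM turns out to be a capabilities problem (an assumption here, not the confirmed diagnosis from the thread), checking and widening the key's MDS caps would look like:
  $ ceph auth get client.admin       # confirm which caps the key actually carries
  $ ceph auth caps client.admin mds 'allow *' mon 'allow *' osd 'allow *'
  $ ceph tell mds.0 client ls        # retry once the caps include mds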

Re: [ceph-users] Multi-MDS Failover

2018-05-18 Thread Scottix
So we have been testing this quite a bit. Having the failure domain only partially available is OK for us but odd, since we don't know what will be down; compared to a single MDS, where we know everything will be blocked. It would be nice to have an option to block all IO if it hits a degraded state

Re: [ceph-users] ceph-deploy on 14.04

2018-04-30 Thread Scottix
nf file folder. > > On Mon, Apr 30, 2018 at 5:31 PM, Scottix wrote: > > It looks like ceph-deploy@2.0.0 is incompatible with systems running > 14.04 > > and it got released in the luminous branch with the new deployment > commands. > > > > Is there anyway to down

[ceph-users] ceph-deploy on 14.04

2018-04-30 Thread Scottix
It looks like ceph-deploy@2.0.0 is incompatible with systems running 14.04, and it got released in the luminous branch with the new deployment commands. Is there any way to downgrade to an older version? Log of osd list XYZ@XYZStat200:~/XYZ-cluster$ ceph-deploy --overwrite-conf osd list XYZCeph204
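One way to pin an older ceph-deploy on the admin node (a sketch; 1.5.39 is just an example of a pre-2.0 release, pick whichever version matches your workflow):
  $ sudo pip uninstall ceph-deploy
  $ sudo pip install 'ceph-deploy==1.5.39'
  $ ceph-deploy --version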

Re: [ceph-users] *** SPAM *** Re: Multi-MDS Failover

2018-04-27 Thread Scottix
er > ranks. I'm not sure if it *could* following some code changes, but > anyway that just not how it works today. > > Does that clarify things? > > Cheers, Dan > > [1] https://ceph.com/community/new-luminous-cephfs-subtree-pinning/ > > > On Fri, Apr 27, 2018

Re: [ceph-users] Multi-MDS Failover

2018-04-26 Thread Scottix
k Donnelly wrote: > On Thu, Apr 26, 2018 at 4:40 PM, Scottix wrote: > >> Of course -- the mons can't tell the difference! > > That is really unfortunate, it would be nice to know if the filesystem > has > > been degraded and to what degree. > > If a rank is laggy/

Re: [ceph-users] Multi-MDS Failover

2018-04-26 Thread Scottix
When you say not optional, that is not exactly true; it will still run. On Thu, Apr 26, 2018 at 3:37 PM Patrick Donnelly wrote: > On Thu, Apr 26, 2018 at 3:16 PM, Scottix wrote: > > Updated to 12.2.5 > > > > We are starting to test multi_mds cephfs and we are going through

[ceph-users] Multi-MDS Failover

2018-04-26 Thread Scottix
Updated to 12.2.5. We are starting to test multi_mds cephfs and we are going through some failure scenarios in our test cluster. We are simulating a power failure on one machine and we are getting mixed results of what happens to the file system. This is the status of the mds once we simulate the
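Commands that are handy while running these failover tests (assuming the default file system name cephfs):
  $ ceph fs set cephfs allow_multimds true   # required on Luminous before raising max_mds
  $ ceph fs set cephfs max_mds 2
  $ ceph fs status                           # which daemon holds each rank, plus standbys
  $ ceph mds stat
  $ ceph health detail                       # shows degraded or laggy MDS ranks after the simulated failure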

Re: [ceph-users] Upgrade Order with ceph-mgr

2018-04-26 Thread Scottix
order would be after mon upgrade and before osd. There are couple > threads related to colocated mon/osd upgrade scenario's. > > On Thu, Apr 26, 2018 at 9:05 AM, Scottix wrote: > > Right I have ceph-mgr but when I do an update I want to make sure it is > the > > recommen

Re: [ceph-users] Upgrade Order with ceph-mgr

2018-04-26 Thread Scottix
Vasu Kulkarni wrote: > On Thu, Apr 26, 2018 at 8:52 AM, Scottix wrote: > > Now that we have ceph-mgr in luminous what is the best upgrade order for > the > > ceph-mgr? > > > > http://docs.ceph.com/docs/master/install/upgrading-ceph/ > I think that is outdated and ne

[ceph-users] Upgrade Order with ceph-mgr

2018-04-26 Thread Scottix
Now that we have ceph-mgr in luminous, what is the best upgrade order for the ceph-mgr? http://docs.ceph.com/docs/master/install/upgrading-ceph/ Thanks.
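The order that ended up being recommended for Luminous-era upgrades, sketched as restart steps (one node at a time, waiting for HEALTH_OK between steps; verify against the release notes for your target version):
  $ systemctl restart ceph-mon.target   # 1. monitors first
  $ systemctl restart ceph-mgr.target   # 2. then the mgr daemons (often colocated with the mons)
  $ systemctl restart ceph-osd.target   # 3. then OSDs, host by host
  $ systemctl restart ceph-mds.target   # 4. then MDS/RGW, and finally clients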

Re: [ceph-users] Install previous version of Ceph

2018-02-26 Thread Scottix
simpler once you get to that point. > > On Mon, Feb 26, 2018 at 9:08 AM Ronny Aasen > wrote: > >> On 23. feb. 2018 23:37, Scottix wrote: >> > Hey, >> > We had one of our monitor servers die on us and I have a replacement >> > computer now. In between th

[ceph-users] Install previous version of Ceph

2018-02-23 Thread Scottix
Hey, we had one of our monitor servers die on us and I have a replacement computer now. In the meantime you have released 12.2.3, but we are still on 12.2.2. We are on Ubuntu servers. I see all the binaries are in the repo, but your package cache only shows 12.2.3; is there a reason for not kee
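Pulling the older build explicitly from the repo would look roughly like this (a sketch; the 12.2.2-1xenial suffix is an assumption, substitute whatever string apt-cache madison reports for your Ubuntu release):
  $ apt-cache madison ceph-mon          # list every version the repo still advertises
  $ sudo apt-get install ceph-mon=12.2.2-1xenial ceph-common=12.2.2-1xenial ceph-base=12.2.2-1xenial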

Re: [ceph-users] Bluestore osd_max_backfills

2017-11-08 Thread Scottix
When I add in the next HDD I'll try the method again and see if I just needed to wait longer. On Tue, Nov 7, 2017 at 11:19 PM Wido den Hollander wrote: > > > Op 7 november 2017 om 22:54 schreef Scottix : > > > > > > Hey, > > I recently updated to lumino

[ceph-users] Bluestore osd_max_backfills

2017-11-07 Thread Scottix
Hey, I recently updated to luminous and started deploying bluestore OSD nodes. I normally set osd_max_backfills = 1 and then ramp up as time progresses, although with bluestore it seems like I wasn't able to do this on the fly like I used to with XFS. ceph tell osd.* injectargs '--osd-max-backfil
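To check whether the injected value actually landed on a given OSD, rather than only being accepted by the tell command:
  $ ceph tell osd.* injectargs '--osd-max-backfills 2'
  $ ceph daemon osd.0 config get osd_max_backfills   # run on the host holding osd.0; repeat for a few OSDs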

Re: [ceph-users] Ceph release cadence

2017-09-08 Thread Scottix
Personally I kind of like the current format; fundamentally we are talking about data storage, which should be the most tested and scrutinized piece of software on your computer. I'd rather get feature XYZ later than sooner, compared to "oh, I lost all my data." I am thinking of a recent FS that had a featur

Re: [ceph-users] mon osd down out subtree limit default

2017-08-21 Thread Scottix
Great to hear. Best On Mon, Aug 21, 2017 at 8:54 AM John Spray wrote: > On Mon, Aug 21, 2017 at 4:34 PM, Scottix wrote: > > I don't want to hijack another thread so here is my question. > > I just learned about this option from another thread and from my > > u

[ceph-users] mon osd down out subtree limit default

2017-08-21 Thread Scottix
I don't want to hijack another thread, so here is my question. I just learned about this option from another thread, and from my understanding of the Ceph cluster we have set up, the default value is not good: it is "rack" and I should have it on "host". Which comes to my point: why is it set
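Overriding the default looks like this; the [mon] section entry is what persists across restarts, and the injectargs line only changes a running cluster (it may warn that a restart is required):
  # ceph.conf on the monitor hosts
  [mon]
      mon osd down out subtree limit = host

  $ ceph tell mon.* injectargs '--mon_osd_down_out_subtree_limit=host'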

Re: [ceph-users] Mysql performance on CephFS vs RBD

2017-05-01 Thread Scottix
I'm by no means a Ceph expert, but I feel this is not a fair representation of Ceph. I am not saying the numbers would be better or worse, just that I see some major holes that don't represent a typical Ceph setup. 1 mon? Most have a minimum of 3. 1 OSD? Basically all your reads and writes are going

Re: [ceph-users] Random Health_warn

2017-02-23 Thread Scottix

Re: [ceph-users] Random Health_warn

2017-02-23 Thread Scottix
Ya, the ceph-mon.$ID.log. I was running ceph -w when one of them occurred too and it never output anything. Here is a snippet for the 5:11AM occurrence. On Thu, Feb 23, 2017 at 1:56 PM Robin H. Johnson wrote: > On Thu, Feb 23, 2017 at 09:49:21PM +0000, Scottix wrote: > > ceph versi

[ceph-users] Random Health_warn

2017-02-23 Thread Scottix
ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367) We are seeing some weird behavior and are not sure how to diagnose what could be going on. We started monitoring the overall_status from the json query, and every once in a while we would get a HEALTH_WARN for a minute or two. Monitoring logs.
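The kind of polling described above, sketched with jq (overall_status is what Jewel's JSON output exposes; capture ceph health detail at the moment of a WARN to record which check fired):
  $ ceph status --format json | jq -r '.health.overall_status'
  $ ceph health detail   # run when the poll flips to HEALTH_WARN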

Re: [ceph-users] Failing to Activate new OSD ceph-deploy

2017-01-10 Thread Scottix

Re: [ceph-users] Failing to Activate new OSD ceph-deploy

2017-01-10 Thread Scottix

Re: [ceph-users] Feedback wanted: health warning when standby MDS dies?

2016-10-19 Thread Scottix
I would take the analogy of a RAID scenario. Basically a standby is considered like a spare drive. If that spare drive goes down, it is good to know about the event, but it in no way indicates a degraded system; everything keeps running at top speed. If you had multi active MDS and one goes do

Re: [ceph-users] 10.2.3 release announcement?

2016-09-26 Thread Scottix
Agreed, there was no announcement like there usually is; what is going on? Hopefully there is an explanation. :| On Mon, Sep 26, 2016 at 6:01 AM Henrik Korkuc wrote: > Hey, > > 10.2.3 is tagged in jewel branch for more than 5 days already, but there > were no announcement for that yet. Is there any reasons

Re: [ceph-users] Failing to Activate new OSD ceph-deploy

2016-07-07 Thread Scottix
it in at least. --Scott On Thu, Jul 7, 2016 at 2:54 PM Scottix wrote: > Hey, > This is the first time I have had a problem with ceph-deploy > > I have attached the log but I can't seem to activate the osd. > > I am running > ceph version 10.2.0 (3a9fba20ec743699b69bd0181dd6

[ceph-users] Failing to Activate new OSD ceph-deploy

2016-07-07 Thread Scottix
Hey, this is the first time I have had a problem with ceph-deploy. I have attached the log, but I can't seem to activate the OSD. I am running ceph version 10.2.0 (3a9fba20ec743699b69bd0181dd6c54dc01c64b9) I did upgrade from Infernalis->Jewel. I haven't changed ceph ownership but I do have the conf

Re: [ceph-users] Required maintenance for upgraded CephFS filesystems

2016-06-03 Thread Scottix
Great thanks. --Scott On Fri, Jun 3, 2016 at 8:59 AM John Spray wrote: > On Fri, Jun 3, 2016 at 4:49 PM, Scottix wrote: > > Is there anyway to check what it is currently using? > > Since Firefly, the MDS rewrites TMAPs to OMAPs whenever a directory is > updated, so a pre-

Re: [ceph-users] Required maintenance for upgraded CephFS filesystems

2016-06-03 Thread Scottix
Is there any way to check what it is currently using? Best, Scott On Fri, Jun 3, 2016 at 4:26 AM John Spray wrote: > Hi, > > If you do not have a CephFS filesystem that was created with a Ceph > version older than Firefly, then you can ignore this message. > > If you have such a filesystem, you

Re: [ceph-users] CephFS in the wild

2016-06-02 Thread Scottix
I have three comments on our CephFS deployment. Some background first: we have been using CephFS since Giant with some not-so-important data. We are using it more heavily now in Infernalis. We have our own raw data storage using the POSIX semantics and keep everything as basic as possible. Basicall

Re: [ceph-users] Weighted Priority Queue testing

2016-05-12 Thread Scottix
We have run into this same scenario, in terms of the long tail taking much longer on recovery than the initial burst, either when we are adding OSDs or when an OSD gets taken down. At first we have max-backfill set to 1 so it doesn't kill the cluster with IO. As time passes, the single OSD is performing the

Re: [ceph-users] cephfs rm -rf on directory of 160TB /40M files

2016-04-06 Thread Scottix
I have been running some speed tests on POSIX file operations and I noticed even just listing files can take a while compared to an attached HDD. I am wondering: is there a reason it takes so long to even just list files? Here is the test I ran: time for i in {1..10}; do touch $i; done Interna
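A slightly fuller version of that micro-benchmark (the file count is arbitrary; the second listing shows the effect of a warm client cache):
  $ mkdir /mnt/cephfs/listtest && cd /mnt/cephfs/listtest
  $ time for i in {1..10000}; do touch $i; done   # create a batch of small files
  $ time ls -l > /dev/null                        # cold: each stat goes to the MDS
  $ time ls -l > /dev/null                        # warm: served from the client cache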

Re: [ceph-users] Old MDS resurrected after update

2016-02-24 Thread Scottix
Thanks for the responses John. --Scott On Wed, Feb 24, 2016 at 3:07 AM John Spray wrote: > On Tue, Feb 23, 2016 at 5:36 PM, Scottix wrote: > > I had a weird thing happen when I was testing an upgrade in a dev > > environment where I have removed an MDS from a machine a while

[ceph-users] Old MDS resurrected after update

2016-02-23 Thread Scottix
I had a weird thing happen when I was testing an upgrade in a dev environment where I had removed an MDS from a machine a while back. I upgraded to 0.94.6 and lo and behold the mds daemon started up on the machine again. I know the /var/lib/ceph/mds folder was removed because I renamed it /var/l

Re: [ceph-users] osd become unusable, blocked by xfsaild (?) and load > 5000

2016-02-17 Thread Scottix
Looks like the bug with the kernel using Ceph and XFS was fixed. I haven't tested it yet but just wanted to give an update. https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1527062 On Tue, Dec 8, 2015 at 8:05 AM Scottix wrote: > I can confirm it seems to be kernels greater than

Re: [ceph-users] osd become unusable, blocked by xfsaild (?) and load > 5000

2015-12-08 Thread Scottix
I can confirm it seems to be kernels greater than 3.16; we had this problem where servers would lock up and we had to perform restarts on a weekly basis. We downgraded to 3.16, and since then we have not had to do any restarts. I did find this thread in the XFS forums and I am not sure if it has been fixed

Re: [ceph-users] CephFS Attributes Question Marks

2015-09-30 Thread Scottix
OpenSuse 12.1 3.1.10-1.29-desktop On Wed, Sep 30, 2015, 5:34 AM Yan, Zheng wrote: > On Tue, Sep 29, 2015 at 9:51 PM, Scottix wrote: > >> I'm positive the client I sent you the log is 94. We do have one client >> still on 87. >> > which version of kernel are

Re: [ceph-users] CephFS Attributes Question Marks

2015-09-29 Thread Scottix
by increasing >> client_cache_size (on the client) if your RAM allows it. >> >> John >> >> On Tue, Sep 29, 2015 at 12:58 AM, Scottix wrote: >> >>> I know this is an old one but I got a log in ceph-fuse for it. >>> I got this on OpenSuse 12.1 >>>

Re: [ceph-users] CephFS Fuse Issue

2015-09-21 Thread Scottix
"cct" must be the > broken one, but maybe it's just the Inode* or something. > -Greg > > On Mon, Sep 21, 2015 at 2:03 PM, Scottix wrote: > I was rsyncing files to ceph from an older machine and I ran into a > ceph-fuse crash. > > OpenSUSE 1

[ceph-users] CephFS Fuse Issue

2015-09-21 Thread Scottix
I was rsyncing files to Ceph from an older machine and I ran into a ceph-fuse crash. OpenSUSE 12.1, 3.1.10-1.29-desktop, ceph-fuse 0.94.3. The rsync was running for about 48 hours then crashed somewhere along the way. I added the log, and can run more if you like; I am not sure how to reproduce it

Re: [ceph-users] Cephfs total throughput

2015-09-15 Thread Scottix
I have a program that monitors the speed, and I have seen 1TB/s pop up and there is just no way that is true. Probably the way it is calculated is prone to extreme measurements, where if you average it out you get a more realistic number. On Tue, Sep 15, 2015 at 12:25 PM Mark Nelson wrote: > FWI

[ceph-users] Object Storage and POSIX Mix

2015-08-21 Thread Scottix
I saw this article on Linux Today and immediately thought of Ceph. http://www.enterprisestorageforum.com/storage-management/object-storage-vs.-posix-storage-something-in-the-middle-please-1.html I was thinking: would it theoretically be possible with RGW to do a GET and set a BEGIN_SEEK and OFFSET
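Part of this already exists at the HTTP layer: RGW honours Range headers on GET, so a byte-offset read of an object is just (hypothetical endpoint and a public-read object; private objects need a presigned URL or an S3 client):
  $ curl -s -H "Range: bytes=1048576-2097151" -o chunk.bin https://rgw.example.com/mybucket/bigobject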

Re: [ceph-users] CephFS vs Lustre performance

2015-08-04 Thread Scottix
I'll be more of a third-party person and try to be factual. =) I wouldn't throw out Gluster too fast yet. Besides what you described with the object and disk storage, it uses the Amazon Dynamo paper's eventually consistent methodology of organizing data. Gluster has different features, so I would look

Re: [ceph-users] Unexpected period of iowait, no obvious activity?

2015-06-23 Thread Scottix
Ya, Ubuntu has a process called mlocate which runs updatedb. We basically turn it off, as shown here: http://askubuntu.com/questions/268130/can-i-disable-updatedb-mlocate If you still want it, you could edit the settings in /etc/updatedb.conf and add a prunepath for your Ceph directory. On Tue, Jun 23, 2015 a
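The prunepath change is a one-liner in /etc/updatedb.conf (the mount point is an example; keep the distribution's existing entries and append your own):
  PRUNEPATHS="/tmp /var/spool /media /mnt/ceph"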

Re: [ceph-users] v0.94.2 Hammer released

2015-06-12 Thread Scottix
I noticed amd64 Ubuntu 12.04 hasn't updated its packages to 0.94.2; can you check this? http://ceph.com/debian-hammer/dists/precise/main/binary-amd64/Packages Package: ceph Version: 0.94.1-1precise Architecture: amd64 On Thu, Jun 11, 2015 at 10:35 AM Sage Weil wrote: > This Hammer point release
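A quick way to check what the repository is actually publishing for a given release and distro, without waiting on apt:
  $ curl -s http://ceph.com/debian-hammer/dists/precise/main/binary-amd64/Packages | grep -A2 '^Package: ceph$'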

Re: [ceph-users] Discuss: New default recovery config settings

2015-06-04 Thread Scottix
From an ease-of-use standpoint, and depending on the situation in which you are setting up your environment, the idea is as follows: it seems like it would be nice to have some easy on-demand control where you don't have to think a whole lot other than knowing how it is going to affect your cluster in a gene

Re: [ceph-users] How to backup hundreds or thousands of TB

2015-05-06 Thread Scottix
As a point to "* someone accidentally removed a thing, and now they need a thing back": I thought MooseFS has an interesting feature that would be good for CephFS and maybe others. Basically a timed trash bin: "Deleted files are retained for a configurable period of time (a file system level

Re: [ceph-users] MDS unmatched rstat after upgrade hammer

2015-04-09 Thread Scottix
I fully understand, which is why it is just a comment :) Can't wait for scrub. Thanks! On Thu, Apr 9, 2015 at 10:13 AM John Spray wrote: > > > On 09/04/2015 17:09, Scottix wrote: > > Alright sounds good. > > > > Only one comment then: > > From an IT/ops perspectiv

Re: [ceph-users] MDS unmatched rstat after upgrade hammer

2015-04-09 Thread Scottix
Wed, Apr 8, 2015 at 8:10 PM Yan, Zheng wrote: > On Thu, Apr 9, 2015 at 7:09 AM, Scottix wrote: > > I was testing the upgrade on our dev environment and after I restarted > the > > mds I got the following errors. > > > > 2015-04-08 15:58:34.056470 mds.0 [ERR] unmatched

[ceph-users] MDS unmatched rstat after upgrade hammer

2015-04-08 Thread Scottix
I was testing the upgrade on our dev environment and after I restarted the mds I got the following errors. 2015-04-08 15:58:34.056470 mds.0 [ERR] unmatched rstat on 605, inode has n(v70 rc2015-03-16 09:11:34.390905), dirfrags have n(v0 rc2015-03-16 09:11:34.390905 1=0+1) 2015-04-08 15:58:34.056530

Re: [ceph-users] Firefly, cephfs issues: different unix rights depending on the client and ls are slow

2015-03-13 Thread Scottix
… > The time variation is caused cache coherence. when client has valid information > in its cache, 'stat' operation will be fast. Otherwise the client need to send > request to MDS and wait for reply, which will be slow. This sounds like the behavior I had with CephFS giving me question marks.

Re: [ceph-users] CephFS Attributes Question Marks

2015-03-03 Thread Scottix
Ya we are not at 0.87.1 yet, possibly tomorrow. I'll let you know if it still reports the same. Thanks John, --Scottie On Tue, Mar 3, 2015 at 2:57 PM John Spray wrote: > On 03/03/2015 22:35, Scottix wrote: > > I was testing a little bit more and decided to run the > ce

Re: [ceph-users] CephFS Attributes Question Marks

2015-03-03 Thread Scottix
ad entry start ptr (0x2aee800b3f) at 0x2aee80167e 2015-03-03 14:32:50.486354 7f47c3006780 -1 Bad entry start ptr (0x2aee800e4f) at 0x2aee80198e 2015-03-03 14:32:50.577443 7f47c3006780 -1 Bad entry start ptr (0x2aee801f65) at 0x2aee802aa4 Events by type: On Tue, Mar 3, 2015 at 12:01 PM Scotti

Re: [ceph-users] CephFS Attributes Question Marks

2015-03-03 Thread Scottix
d create any issues. Anyway, we are going to update the machine soon, so I can report if we keep having the issue. Thanks for your support, Scott On Mon, Mar 2, 2015 at 4:07 PM Scottix wrote: > I'll try the following things and report back to you. > > 1. I can get a new kernel on ano

Re: [ceph-users] CephFS Attributes Question Marks

2015-03-02 Thread Scottix
debug client = 20" it will output (a whole lot of) logging to the > client's log file and you could see what requests are getting > processed by the Ceph code and how it's responding. That might let you > narrow things down. It's certainly not any kind of timeout. > -G

Re: [ceph-users] CephFS Attributes Question Marks

2015-03-02 Thread Scottix
r 2, 2015 at 3:47 PM, Gregory Farnum wrote: > >> On Mon, Mar 2, 2015 at 3:39 PM, Scottix wrote: >> > We have a file system running CephFS and for a while we had this issue >> when >> > doing an ls -la we get question marks in the response. >> > >> &

[ceph-users] CephFS Attributes Question Marks

2015-03-02 Thread Scottix
We have a file system running CephFS, and for a while we have had this issue where doing an ls -la gives us question marks in the response. -rw-r--r-- 1 wwwrun root 14761 Feb 9 16:06 data.2015-02-08_00-00-00.csv.bz2 -? ? ? ? ?? data.2015-02-09_00-00-00.csv.bz2 If we

[ceph-users] Replacing Ceph mons & understanding initial members

2014-11-18 Thread Scottix
We currently have a 3 node system with 3 monitor nodes. I created them in the initial setup, and the ceph.conf has: mon initial members = Ceph200, Ceph201, Ceph202 mon host = 10.10.5.31,10.10.5.32,10.10.5.33 We are in the process of expanding and installing dedicated mon servers. I know I can run: cep
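For the expansion itself, the ceph-deploy path is roughly as follows (hostname hypothetical; mon initial members only matters when the cluster is first formed, but mon host should be updated to list the new monitors so clients can find them):
  $ ceph-deploy mon add Ceph210               # repeat for each new dedicated mon host
  $ ceph quorum_status --format json-pretty   # confirm the new monitor joined the quorum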

Re: [ceph-users] jbod + SMART : how to identify failing disks ?

2014-11-12 Thread Scottix
I would say it depends on your system and where the drives are connected. Some HBAs have a CLI tool to manage the connected drives like a RAID card would. One other method I found is that sometimes the LEDs are exposed for you; http://fabiobaltieri.com/2011/09/21/linux-led-subsystem/ has an article o

Re: [ceph-users] cephfs survey results

2014-11-04 Thread Scottix

Re: [ceph-users] [ANN] ceph-deploy 1.5.14 released

2014-09-10 Thread Scottix
Suggestion: can you link to a changelog of any new features or major bug fixes when you do new releases? Thanks, Scottix On Wed, Sep 10, 2014 at 6:45 AM, Alfredo Deza wrote: > Hi All, > > There is a new bug-fix release of ceph-deploy that helps prevent the > environment variable &

Re: [ceph-users] ceph features monitored by nagios

2014-07-23 Thread Scottix

Re: [ceph-users] Ceph-fuse remount

2014-07-22 Thread Scottix
Thanks for the info. I was able to do a lazy unmount and it started back up fine, if anyone wanted to know. On Wed, Jul 16, 2014 at 10:29 AM, Gregory Farnum wrote: > On Wed, Jul 16, 2014 at 9:20 AM, Scottix wrote: >> I wanted to update ceph-fuse to a new version and I would like to h
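For reference, the lazy-unmount-and-remount dance looks like this (mount point and monitor address are examples):
  $ sudo umount -l /mnt/ceph                     # lazy unmount: detaches now, cleans up once busy handles close
  $ sudo ceph-fuse -m 10.10.5.31:6789 /mnt/ceph  # remount with the upgraded ceph-fuse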

[ceph-users] Ceph-fuse remount

2014-07-16 Thread Scottix
tion ceph-fuse[10474]: fuse failed to initialize 2014-07-16 09:08:57.784900 7f669be1a760 -1 fuse_mount(mountpoint=/mnt/ceph) failed. ceph-fuse[10461]: mount failed: (5) Input/output error Or is there a better way to do this?

Re: [ceph-users] CephFS MDS Setup

2014-05-28 Thread Scottix
? ceph version 0.80.1 (a38fe1169b6d2ac98b427334c12d7cf81f809b74) Do I need to start over and not add the mds to be clean? Thanks for your time On Wed, May 21, 2014 at 12:18 PM, Wido den Hollander wrote: > On 05/21/2014 09:04 PM, Scottix wrote: >> >> I am setting a CephFS cluster

[ceph-users] CephFS MDS Setup

2014-05-21 Thread Scottix
standby? How reliable is the standby? Or should a single active MDS be sufficient? Thanks
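In that era a standby was usually just a second ceph-mds daemon with no extra configuration; standby-replay, if wanted, was opt-in per daemon in ceph.conf (a sketch with hypothetical daemon names; option names per the Firefly-era docs):
  [mds.b]
      mds standby replay = true
      mds standby for rank = 0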

Re: [ceph-users] Ceph User Committee monthly meeting #1 : executive summary

2014-04-04 Thread Scottix

Re: [ceph-users] shutting down for maintenance

2013-12-31 Thread Scottix

Re: [ceph-users] optimal setup with 4 x ethernet ports

2013-12-03 Thread Scottix

Re: [ceph-users] Newbie question

2013-10-02 Thread Scottix

[ceph-users] Documentation OS Recommendations

2013-09-09 Thread Scottix
I was looking at someone's question on the list and started looking up some documentation and found this page: http://ceph.com/docs/next/install/os-recommendations/ Do you think you can provide an update for Dumpling? Best Regards

Re: [ceph-users] Documentation OS Recommendations

2013-09-09 Thread Scottix
Great Thanks. On Mon, Sep 9, 2013 at 11:31 AM, John Wilkins wrote: > Yes. We'll have an update shortly. > > On Mon, Sep 9, 2013 at 11:29 AM, Scottix wrote: > > I was looking at someones question on the list and started looking up > some > > documentation

Re: [ceph-users] Large storage nodes - best practices

2013-08-05 Thread Scottix
es to > a simple disk swap (assuming an intelligent hardware RAID controller). > Obviously you still have a 50% reduction in disk space, but you have the > advantage that your filesystem never sees the bad disk and all the problems > that can cause. > > James > > _______

Re: [ceph-users] Ceph Hadoop Configuration

2013-08-05 Thread Scottix
libcephfs.jar file, to see if > CephPoolException.class is in there? It might just be that the > libcephfs.jar is out-of-date. > > -Noah > > On Sun, Aug 4, 2013 at 8:44 PM, Scottix wrote: > > I am running into an issues connecting hadoop to my ceph cluster and I'm >

[ceph-users] Ceph Hadoop Configuration

2013-08-04 Thread Scottix
I am running into an issue connecting Hadoop to my Ceph cluster and I'm sure I am missing something but can't figure it out. I have a Ceph cluster with MDS running fine and I can do a basic mount perfectly normally. I have hadoop fs -ls with basic file:/// working well. Info: ceph cluster version 0
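For comparison, the wiring the old cephfs-hadoop bindings expect in core-site.xml is roughly the following (property names per the Ceph Hadoop docs of that era; verify against the bindings you built):
  fs.default.name = ceph://<mon-host>:6789/
  fs.ceph.impl    = org.apache.hadoop.fs.ceph.CephFileSystem
  ceph.conf.file  = /etc/ceph/ceph.conf
  $ hadoop fs -ls ceph://<mon-host>:6789/   # smoke test once the jar and the JNI library are on the classpath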

Re: [ceph-users] Ceph-deploy

2013-07-12 Thread Scottix

Re: [ceph-users] ceph-deploy questions

2013-06-19 Thread Scottix
is helps some people, Scottix On Wed, Jun 12, 2013 at 12:12 PM, Scottix wrote: > Thanks Greg, > I am starting to understand it better. > I soon realized as well after doing some searching I hit this bug. > http://tracker.ceph.com/issues/5194 > Which created the problem upon rebooting

Re: [ceph-users] ceph-deploy questions

2013-06-12 Thread Scottix
Thanks Greg, I am starting to understand it better. I soon realized as well, after doing some searching, that I hit this bug: http://tracker.ceph.com/issues/5194 which created the problem upon rebooting. Thank You, Scottix On Wed, Jun 12, 2013 at 10:29 AM, Gregory Farnum wrote: > On Wed, Jun

Re: [ceph-users] ceph-deploy questions

2013-06-12 Thread Scottix
. Thanks for responding, Scottix On Wed, Jun 12, 2013 at 6:35 AM, John Wilkins wrote: > ceph-deploy adds the OSDs to the cluster map. You can add the OSDs to > the ceph.conf manually. > > In the ceph.conf file, the settings don't require underscores. If you > modify your conf

[ceph-users] ceph-deploy questions

2013-06-11 Thread Scottix
but I guess it doesn't matter since it works. Thanks for clarification, Scottix