and see if your
"step" is set to osd or rack.
If it's not host then change it to that and pull it in again.
Check the docs on crush maps
http://ceph.com/docs/master/rados/operations/crush-map/ for more info.
-Michael
On 23/05/2014 10:53, Karan Singh wrote:
Try increasing the
Hi Peter,
Please use "ceph pg repair XX.xx". It might take a few seconds to kick
in after being instructed.
-Michael
On 27/05/2014 21:40, phowell wrote:
Hi
First apologies if this is the wrong place to ask this question.
We are running a small Ceph (0.79) cluster with about 12 o
ies but still usable, cluster
stops accepting data at one accessible copy.
-Michael
On 27/05/2014 18:38, Sudarsan, Rajesh wrote:
I am seeing the same error message with ceph health command. I am
using Ubuntu 14.04 with ceph 0.79. I am using the ceph distribution
that comes with the Ubuntu release.
Would it be feasible to try for an odd-one-out policy by default when
repairing from a pool of 3 or more disks? Or is the most common cause of
inconsistency most likely not to affect the primary?
-Michael
On 27/05/2014 23:55, Gregory Farnum wrote:
Note that while the "repair" com
ceph osd dump | grep size
Check that all pools are size 2, min size 2 or 1.
If not you can change on the fly with:
ceph osd pool set #poolname size/min_size #size
See docs http://ceph.com/docs/master/rados/operations/pools/ for
alterations to pool attributes.
-Michael
On 05/06/2014 17:29
ON split that doesn't risk the two
nodes being up and unable to serve data while the three are down so
you'd need to find a way to make it a 2/2/1 split instead.
-Michael
On 28/07/2014 18:41, Robert Fantini wrote:
OK, for higher availability 5 nodes is better than 3. So we'
You can use multiple "steps" in your crush map in order to do things
like choose two different hosts then choose a further OSD on one of the
hosts and do another replication so that you can get three replicas onto
two hosts without risking ending up with three replicas on a single node.
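A rough, untested sketch of such a rule (bucket and rule names are placeholders; with pool size 3 the first host contributes two copies and the second host one):
    rule three-copies-two-hosts {
        ruleset 1
        type replicated
        min_size 2
        max_size 4
        step take default
        step choose firstn 2 type host
        step chooseleaf firstn 2 type osd
        step emit
    }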
On 28/
The mainline packages from Ubuntu should be helpful in testing.
Info: https://wiki.ubuntu.com/Kernel/MainlineBuilds
Packages: http://kernel.ubuntu.com/~kernel-ppa/mainline/?C=N;O=D
On 31/07/2014 10:31, James Eckersall wrote:
Ah, thanks for the clarification on that.
We are very close to the 250
How far out are your clocks? It's showing a clock skew; if they're too
far out it can cause issues with cephx.
Otherwise you're probably going to need to check your cephx auth keys.
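To see how far out they are, something like this should show it (assuming ntpd is running on the mons):
    ceph health detail    # reports which mon has the skew and by how much
    ntpq -p               # run on each mon to check NTP peers and offsets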
-Michael
On 26/08/2014 12:26, yuelongguang wrote:
hi,all
i have 5 osds and 3 mons. its status is
t down you'll only have 1 mon left, 1/3 will fail quorum and so the cluster
will stop taking data to prevent split-brain scenarios. For 2 nodes to be down
and the cluster to continue to operate you'd need a minimum of 5 mons or you'd
need to mo
/0x100 [ceph]
[] vfs_getattr+0x4e/0x80
[] vfs_fstatat+0x4e/0x70
[] vfs_lstat+0x1e/0x20
[] sys_newlstat+0x1a/0x40
[] system_call_fastpath+0x16/0x1b
[] 0x
Started occurring shortly (within an hour or so) after adding a pool,
not sure if that's relevant yet.
-Michael
On 23/10
On 24/10/2013 03:09, Yan, Zheng wrote:
On Thu, Oct 24, 2013 at 6:44 AM, Michael wrote:
Trying to gather some more info.
CentOS - hanging ls
[root@srv ~]# cat /proc/14614/stack
[] wait_answer_interruptible+0x81/0xc0 [fuse]
[] fuse_request_send+0x1cb/0x290 [fuse]
[] fuse_do_getattr+0x10c/0x2c0
On 24/10/2013 13:53, Yan, Zheng wrote:
On Thu, Oct 24, 2013 at 5:43 PM, Michael wrote:
On 24/10/2013 03:09, Yan, Zheng wrote:
On Thu, Oct 24, 2013 at 6:44 AM, Michael
wrote:
Trying to gather some more info.
CentOS - hanging ls
[root@srv ~]# cat /proc/14614/stack
[] wait_answer_interruptible
1: 1/1/1 up {0=srv10=up:active}
Have done a full deep scrub/repair cycle on all of the OSDs, which came
back fine, so I'm not really sure where to start looking to find out
what's wrong with it.
Any ideas?
-Michael
On 24/10/2013 14:55, Yan, Zheng wrote:
On Thu, Oct 24, 2013 at 9:13 PM, Michael wrote:
On 24/10/2013 13:53, Yan, Zheng wrote:
On Thu, Oct 24, 2013 at 5:43 PM, Michael
wrote:
On 24/10/2013 03:09, Yan, Zheng wrote:
On Thu, Oct 24, 2013 at 6:44 AM, Michael
wrote:
Trying to gather some more
'leftover' information here and there for older versions, but otherwise
ceph has some very, very nice documentation.
-Michael
On 26/10/2013 05:21, Raghavendra Lad wrote:
Hi Cephs,
I am new to Ceph. I am planning to install CEPH.
I already have Openstack Grizzly installed and for storage t
manually using
http://ceph.com/docs/next/install/rpm/
-Michael
On 29/10/2013 15:57, Narendra Trivedi wrote:
Hi All,
I am a newbie to ceph. I am installing ceph (dumpling release) using
*ceph-deploy* (issued from my admin node) on one monitor and two OSD
nodes running CentOS 6.4 (64-bit)
it's probably a good idea to double check your pg
numbers while you're doing this.
-Michael
On 08/11/2013 11:08, Karan Singh wrote:
Hello Joseph
This sounds like a solution. BTW, how do I set the replication level to 1? Is there
a direct command or do I need to edit the configuration?
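For what it's worth, replication is a per-pool setting, so no config edit should be needed; roughly (pool name is a placeholder):
    ceph osd pool set <poolname> size 1
    ceph osd pool set <poolname> min_size 1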
Apologies, that should have been: ceph osd dump | grep 'rep size'
What I get from blindly copying from a wiki!
-Michael
On 08/11/2013 11:38, Michael wrote:
Hi Karan,
There's info on http://ceph.com/docs/master/rados/operations/pools/
But primarily you need to check your rep
eing /dev/sda mean you're putting your journal onto an SSD that's already
partitioned and in use by the OS?
-Michael
On 12/11/2013 18:09, Gruher, Joseph R wrote:
I didn't think you could specify the journal in this manner (just
pointing multiple OSDs on the same host all to journal /d
Sorry, just spotted you're mounting on sdc. Can you chuck out a partx -v
/dev/sda to see if there's anything odd about the data currently on there?
-Michael
On 12/11/2013 18:22, Michael wrote:
As long as there's room on the SSD for the partitioner it'll just use
the conf v
eems to happen for periods of a couple of minutes then wake up again.
Thanks much,
-Michael
g during the block but this is now getting
more frequent and seems to be for longer periods.
Looking at the osd logs for 3 and 8 there's nothing of relevance in there.
Any ideas on the next step?
Thanks,
-Michael
On 25/11/2013 15:28, Ирек Фасихов wrote:
ceph health detail
--
С у
's running of it responding
noticeably slower.
Wish I knew what actually caused it. :/
What version of ceph are you on?
-Michael
On 27/11/2013 21:00, Andrey Korolyov wrote:
Hey,
What number do you have for a replication factor? As for three, 1.5k
IOPS may be a little bit high for 36 dis
This previous thread looks like it might be the same error, could be
helpful.
http://www.spinics.net/lists/ceph-users/msg05295.html
-Michael
On 29/11/2013 19:24, German Anders wrote:
Hi, i'm having issues while trying to add another monitor to my cluster:
ceph@ceph-deploy01:~/ceph-cl
The clocks on your two nodes are not aligned, you'll need to set up an
ntp daemon and either sync them to a remote system or sync them to an
internal system. Either way you just need to get them the same.
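A minimal ntpd setup would be roughly this (the pool servers are just an example; any common source all nodes can reach will do):
    apt-get install ntp            # or: yum install ntp
    # in /etc/ntp.conf on every node, point at the same sources:
    #   server 0.pool.ntp.org iburst
    #   server 1.pool.ntp.org iburst
    service ntp restart
    ntpq -p                        # check the offsets are converging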
-Michael
On 17/12/2013 10:39, Umar Draz wrote:
2) After fixing the above issue I
Would also be interested to know about these performance issues, we have
a 12.04 cluster using RBD caching we're about to double in size so it'd
be good to know if we could be about to run into any potential bottlenecks.
-Michael
On 06/01/2014 21:03, LaSalle, Jurvis wrote:
On 1/
" the live migrations are allowed
regardless of cache mode.
https://www.suse.com/documentation/sles11/singlehtml/book_kvm/book_kvm.html#idm139742235036576
Afaik a full FS flush is called just as it completes copying the memory
across for the live migration.
-Michael
On 15/01/2014 02:41, C
good for a bit more peace of mind!
-Michael
On 15/01/2014 05:41, Christian Balzer wrote:
Hello,
Firstly thanks to Greg and Sage for clearing this up.
Now all I need for a very early Xmas is ganeti 2.10 released and a Debian
KVM release that has RBD enabled. ^o^
Meaning that for now I'm
h osd dump | grep 'pg_num'
And see the docs:
http://ceph.com/docs/master/rados/operations/placement-groups/
You can currently increase the number of PG/PGP of a pool but not
decrease them, so take care if you need to balance them as higher
numbers increases CPU load.
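Increasing them is just the following (256 is only an example target, pool name is a placeholder):
    ceph osd pool set <poolname> pg_num 256
    ceph osd pool set <poolname> pgp_num 256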
-Michael
Howe
Hi Ceph users,
Have always wondered this, why does data get shuffled twice when you
delete an OSD? You out an OSD and the data gets moved to other nodes -
understandable but then when you remove that OSD from crush it moves
data again, aren't outed OSDs and OSDs not in crush the same fro
Hi All,
Have a log full of -
"log [ERR] : 1.9 log bound mismatch, info (46784'1236417,46797'1239418]
actual [46784'1235968,46797'1239418]"
"192.168.7.177:6800/15655 >> 192.168.7.183:6802/3348 pipe(0x20e4f00
sd=65 :56394 s=2 pgs=24194 cs=1 l=0 c=0x19668f20).fault, initiating
reconnect"
and
Thanks Gregory.
Currently it's just the one OSD with the issue. If it's more of a
general failing of an OSD I'll rip it out and replace the drive.
-Michael
On 20/02/2014 17:55, Gregory Farnum wrote:
On Thu, Feb 20, 2014 at 4:26 AM, Michael wrote:
Hi All,
Have a log full
Hi All,
Just wondering if there was a reason for no packages for Ubuntu Saucy in
http://ceph.com/packages/ceph-extras/debian/dists/. Could do with
upgrading to fix a few bugs but would hate to have to drop Ceph from
being handled through the package manager!
Thanks,
-Michael
Thanks Tim, I'll give the raring packages a try.
Found a tracker for Saucy packages, looks like the person they were
assigned to hasn't checked in for a fair while so they might have just
been overlooked http://tracker.ceph.com/issues/6726.
-Michael
On 27/02/2014 13:33, Tim Bi
tep
chooseleaf firstn 0 type osd" or similar depending on your crush setup.
Please see the documentation for more info
https://ceph.com/docs/master/rados/operations/crush-map/.
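For a single-node test cluster the rule would look roughly like this (untested sketch, names are placeholders):
    rule data {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type osd
        step emit
    }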
-Michael
On 29/04/2014 21:00, Vadim Kimlaychuk wrote:
Hello all,
I have tried to install subj. almo
Have just looked at the documentation Vadim was trying to use to set up
a cluster and http://eu.ceph.com/docs/wip-6919/start/quick-start/ should
really be updated or removed as it will not result in a working cluster
with recent Ceph versions.
-Michael
On 29/04/2014 21:09, Michael wrote:
Hi
Hi,
Have these been missed or have they been held back for a specific reason?
http://ceph.com/debian-firefly/dists/ looks like Trusty is the only one
that hasn't been updated.
-Michael
Ah, thanks for the info. Will keep an eye on it there instead and clean
the ceph.com from the sources list.
-Michael
On 08/05/2014 21:48, Henrik Korkuc wrote:
hi,
trusty will include ceph in usual repos. I am tracking
http://packages.ubuntu.com/trusty/ceph and
https://bugs.launchpad.net
n scrub
status was completely ignoring standard restart commands which prevented
any scrubbing from continuing within the cluster even after update.
-Michael
On 13/05/2014 17:03, Fabrizio G. Ventola wrote:
I've upgraded to 0.80.1 on a testing instance: the cluster gets
cyclically active+clea
ot mounted
anywhere either. Any way I can clean out these pools now and reset the
pgp num etc?
Thanks,
-Michael
Answered my own question. Created two new pools, used mds newfs on them
and then deleted the original pools and renamed the new ones.
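Roughly what I mean, from memory (pool names and PG counts are placeholders, the MDS stopped first, and the pool IDs for newfs come from 'ceph osd lspools'):
    ceph osd pool create metadata_new 64
    ceph osd pool create data_new 64
    ceph mds newfs <metadata_new_pool_id> <data_new_pool_id> --yes-i-really-mean-it
    ceph osd pool delete metadata metadata --yes-i-really-really-mean-it
    ceph osd pool delete data data --yes-i-really-really-mean-it
    ceph osd pool rename metadata_new metadata
    ceph osd pool rename data_new data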
-Michael
On 13/05/2014 22:20, Michael wrote:
Hi All,
Seems commit 2adc534a72cc199c8b11dbdf436258cbe147101b has removed the
ability to delete and recreate the
pool full of data and both of them full of objects.
Anyone else trying this out?
-Michael
Thanks Sage, the cache system looks pretty great so far. Combined with
erasure coding it's really adding a lot of options.
-Michael
On 21/05/2014 21:54, Sage Weil wrote:
On Wed, 21 May 2014, Michael wrote:
Hi All,
Experimenting with cache pools for RBD, created two pools, sl
.
shadow_lin wrote:
What would be a good EC profile for archive purposes (decent write
performance and just OK read performance)?
I don't actually know that - but the default is not bad if you ask me
(not that it features writes faster than reads). Plus it lets you pick m.
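As a rough example only (the k/m values and failure domain need to fit your node count):
    ceph osd erasure-code-profile set archive k=4 m=2 crush-failure-domain=host
    ceph osd pool create ec_archive 256 256 erasure archive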
- Michael
ome way in which I can tell rockdb to truncate or delete /
skip the respective log entries? Or can I get access to rocksdb('s
files) in some other way to just manipulate it or delete corrupted WAL
files manually?
-Michael
ount, the OSD won't activate and the error is the same.
Is there any fix in .2 that might address this, or do you just mean that
in general there will be bug fixes?
Thanks for your response!
- Michael
ch.
As you might see on the bug tracker, the patch did apparently avoid the
immediate error for me, but Ceph then ran into another error.
- Michael
Konstantin Shalygin wrote:
> I think Christian talks about version 12.2.2, not 12.2.*
Which isn't released yet, yes. I could try building the development
repository if you think that has a chance of resolving the issue?
Although I'd still like to know how I could theoretically get my hands
at
l replication
and so on will trigger *before* you remove it. (There is a configurable
timeout for how long an OSD can be down, after which the OSD is
essentially treated as dead already, at which point replication and
rebalancing starts).
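That timeout is 'mon osd down out interval' (default 600 seconds); for example in ceph.conf, the value here being just an example:
    [global]
        mon osd down out interval = 900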
-Michael
Hello,
I'm looking at purchasing Qty 3-4, Dell PowerEdge T630 or R730xd for my OSD
nodes in a Ceph cluster.
Hardware:
Qty x 1, E5-2630v3 2.4Ghz 8C/16T
128 GB DDR4 Ram
QLogic 57810 DP 10Gb DA/SFP+ Converged Network Adapter
I'm trying to determine which RAID controller to use, since I've read JBO
Alex Leake writes:
>
> Hello Michael,
>
> I maintain a small Ceph cluster at the University of Bath, our cluster
consists of:
>
> Monitors:
> 3 x Dell PowerEdge R630
>
> - 2x Intel(R) Xeon(R) CPU E5-2609 v3
> - 64GB RAM
> - 4x 300GB SAS (RAID 10)
>
HTTP/1.1" 100 0 "-" "Boto/2.27.0
Python/2.7.6 Linux/3.13.0-24-generic"
Do you also have a problem with that?
For testing I used the original nginx and also have a problem with 100-Continue.
Only Apache 2.x works fine.
BR,
Michael
I haven't tried SSL yet. We currently do
Hi,
I have a question about Nginx and 100-Continue.
If I use a client like boto or Cyberduck everything works fine,
but when I upload a file the progress bar hangs at 100%
and after about 30s Cyberduck reports that the
HTTP 100-Continue timed out.
I use nginx v1.4.1
Only when I use Apache 2 with fa
nrert report that this might be involved with mod fastcgi
(fcgi can't handle this, only fastcgi can handle http 100-Continue). I can't
find how this might be correlated with fastcgi in nginx.
Michael
Don't use nginx. The current version buffers all the uploads to the local
dis
Hi,
Oops, so I didn't read the docs carefully...
I will try this solution.
Thanks!
Michael
From the docs, you need this setting in ceph.conf (if you're using
nginx/tengine):
rgw print continue = false
This will fix the 100-continue issues.
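For example in ceph.conf (the section name is whatever your rgw instance is called, so a placeholder here), followed by a radosgw restart:
    [client.radosgw.gateway]
        rgw print continue = false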
On 5/29/2014 5:56 AM, Michael Lukzak w
Hi all,
How do I get my Ceph Cluster back to a healthy state?
root@ceph-admin-storage:~# ceph -v
ceph version 0.80.5 (38b73c67d375a2552d8ed67843c8a65c2c0feba6)
root@ceph-admin-storage:~# ceph -s
cluster 6b481875-8be5-4508-b075-e1f660fd7b33
health HEALTH_WARN 4 pgs incomplete; 4 pgs stuck
__
From: Karan Singh [karan.si...@csc.fi]
Sent: Tuesday, 12 August 2014 10:35
To: Riederer, Michael
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean
Can you provide your cluster’s ceph osd du
2014 13:00
To: Riederer, Michael
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean
I am not sure if this helps , but have a look
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg10078.html
- Karan -
On 12
From: Craig Lewis [cle...@centraldesktop.com]
Sent: Tuesday, 12 August 2014 20:02
To: Riederer, Michael
Cc: Karan Singh; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean
For the incomplete PGs, can you give
b the pgs.
Many thanks for your help.
Regards,
Mike
From: Craig Lewis [cle...@centraldesktop.com]
Sent: Wednesday, 13 August 2014 19:48
To: Riederer, Michael
Cc: Karan Singh; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] HEALTH_WARN 4 pgs incomplete;
From: Craig Lewis [cle...@centraldesktop.com]
Sent: Thursday, 14 August 2014 19:56
To: Riederer, Michael
Cc: Karan Singh; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean
It sounds like you need to thro
-boun...@lists.ceph.com]" on behalf of "Riederer, Michael [michael.riede...@br.de]
Sent: Monday, 18 August 2014 13:40
To: Craig Lewis
Cc: ceph-users@lists.ceph.com; Karan Singh
Subject: Re: [ceph-users] HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean
Hi C
to use some location type under the host level to group
OSDs by type and then use it in mapping rules?
--
Michael Kolomiets
Regards,
Mike
____
From: Craig Lewis [cle...@centraldesktop.com]
Sent: Monday, 18 August 2014 19:22
To: Riederer, Michael
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean
I take it th
Hi Craig,
many thanks for your help. I decided to reinstall ceph.
Regards,
Mike
From: Craig Lewis [cle...@centraldesktop.com]
Sent: Tuesday, 19 August 2014 22:24
To: Riederer, Michael
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] HEALTH_WARN 4 pgs
.2’ saved [3249803264/3249803264]
root@lw01p01-mgmt01:/export/secondary# md5sum XXX.iso.2
5e28d425f828440b025d769609c5bb41 XXX.iso.2
root@lw01p01-mgmt01:/export/secondary# md5sum XXX.iso.2
5e28d425f828440b025d769609c5bb41 XXX.iso.2
--
Michael Kolomiets
03:00 Yan, Zheng :
> I suspect the client does not have permission to write to pool 3.
> could you check if the contents of XXX.iso.2 are all zeros.
>
> Yan, Zheng
>
> On Wed, Aug 27, 2014 at 5:05 PM, Michael Kolomiets
> wrote:
>> Hi!
>> I use ceph pool mount
ent does not have permission to write to pool 3.
> could you check if the contents of XXX.iso.2 are all zeros.
>
> Yan, Zheng
>
> On Wed, Aug 27, 2014 at 5:05 PM, Michael Kolomiets
> wrote:
>> Hi!
>> I use ceph pool mounted via cephfs for cloudstack secondary storage
&
C++ is case sensitive, it will be very difficult...
On Feb 21, 2015 3:44 AM, "Stefan Priebe - Profihost AG" <
s.pri...@profihost.ag> wrote:
> This will be very difficult with a broken keyboard!
>
> > Am 21.02.2015 um 12:16 schrieb khyati joshi :
> >
> > I WANT TO ADD 2 NEW FEATURES IN CEPH NAMELY
I'd also like to set this up. I'm not sure where to begin. When you say
enabled by default, where is it enabled?
Many thanks,
Mike
On 2/25/15, 1:49 PM, "Sage Weil" wrote:
>On Wed, 25 Feb 2015, Robert LeBlanc wrote:
>> We tried to get radosgw working with Apache + mod_fastcgi, but due to
>> t
Thanks Sage for the quick reply!
-=Mike
On 2/26/15, 8:05 AM, "Sage Weil" wrote:
>On Thu, 26 Feb 2015, Michael Kuriger wrote:
>> I'd also like to set this up. I'm not sure where to begin. When you
>>say
>> enabled by default, where is it enabled?
>
>Th
child_exception
[ceph201][ERROR ] OSError: [Errno 2] No such file or directory
[ceph201][ERROR ]
[ceph201][ERROR ]
Michael Kuriger
I use reposync to keep mine updated when needed.
Something like:
cd ~/ceph/repos
reposync -r Ceph -c /etc/yum.repos.d/ceph.repo
reposync -r Ceph-noarch -c /etc/yum.repos.d/ceph.repo
reposync -r elrepo-kernel -c /etc/yum.repos.d/elrepo.repo
Michael Kuriger
Sr. Unix Systems Engineer
S mk7
I always keep my pg number a power of 2. So I’d go from 2048 to 4096. I’m not
sure if this is the safest way, but it’s worked for me.
Michael Kuriger
Sr. Unix Systems Engineer
mk7...@yp.com | 818-649-7235
From: Chu Duc Minh <chu.ducm...@gma
I don't think this came through the first time.. resending.. If it's a
dupe, my apologies..
For Firefly / Giant installs, I've had success with the following:
yum install ceph ceph-common --disablerepo=base --disablerepo=epel
Let us know if this works for you as well.
Thanks,
For Firefly / Giant installs, I've had success with the following:
yum install ceph ceph-common --disablerepo=base --disablerepo=epel
Let us know if this works for you as well.
Thanks,
Michael J. Kidd
Sr. Storage Consultant
Inktank Professional Services
- by Red Hat
On Wed, Apr 8, 2015
I had the same problem when doing benchmarks with small block sizes (<8k) to
RBDs. These settings seemed to fix the problem for me.
sudo ceph tell osd.* injectargs '--filestore_merge_threshold 40'
sudo ceph tell osd.* injectargs '--filestore_split_multiple 8'
After you apply the settings give it
I had an issue with my calamari server, so I built a new one from scratch.
I've been struggling trying to get the new server to start up and see my
ceph cluster. I went so far as to remove salt and diamond from my ceph
nodes and reinstalled again. On my calamari server, it sees the hosts
connect
In my case, I did remove all salt keys. The salt portion of my install is
working. It’s just that the calamari server is not seeing the ceph
cluster.
Michael Kuriger
Sr. Unix Systems Engineer
mk7...@yp.com | 818-649-7235
On 5/12/15, 1:35 AM, "Alexandre DERUMIER" wrote:
.el6
Installed:
salt.noarch 0:2014.7.1-1.el6salt-minion.noarch 0:2014.7.1-1.el6
This is on CentOS 6.6
-=Mike Kuriger
Michael Kuriger
Sr. Unix Systems Engineer
mk7...@yp.com | 818-649-7235
From: Bruce McFarland <bruce.
= civetweb port=80
rgw_socket_path = /var/run/ceph/ceph-client.radosgw.ceph-gw3.asok
Michael Kuriger
Sr. Unix Systems Engineer
mk7...@yp.com | 818-649-7235
From: Florent MONTHEL <fmont...@flox-arts.net>
Date: Monday, May 18, 2015 at 6:14 PM
To:
You could mount /dev/sdb to a filesystem, such as /ceph-disk, and then do this:
ceph-deploy osd create ceph-node1:/ceph-disk
Your journal would be a file doing it this way.
Michael Kuriger
Sr. Unix Systems Engineer
mk7...@yp.com | 818-649-7235
From:
You may be able to use replication. Here is a site showing a good example of
how to set it up. I have not tested replicating within the same datacenter,
but you should just be able to define a new zone within your existing ceph
cluster and replicate to it.
http://cephnotes.ksperis.com/blog/20
1) set up mds server
ceph-deploy mds --overwrite-conf create
2) create filesystem
ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_metadata 16
ceph fs new cephfs cephfs_metadata cephfs_data
ceph fs ls
ceph mds stat
3) mount it!
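For example with the kernel client (the monitor address and secret file path are placeholders):
    sudo mkdir -p /mnt/cephfs
    sudo mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret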
From: ceph-users [mailto:ceph-users-boun...@lists.
You might be able to accomplish that with something like dropbox or owncloud
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Trevor
Robinson - Key4ce
Sent: Wednesday, May 20, 2015 2:35 PM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Is Ceph right for me?
Hello,
C
) but deleting is not working unless I specify an exact file to
delete. Also, my radosgw-agent is not syncing buckets any longer. I'm
using s3cmd to test reads/writes to the gateway.
Has anyone else had problems in giant?
Michael Kuriger
Sr. Unix Systems Engineer
mk7...@yp.com | 818-649-7235
Is it possible to disable the replication of /admin/log and other replication
logs? It seems that This log replication is occupying a lot of time in my
cluster(s). I’d like to only replicate user’s data.
Thanks!
Michael Kuriger
Sr. Unix Systems Engineer
mk7...@yp.com
Thanks Mark you too
Michael Kuriger
Sr. Unix Systems Engineer
mk7...@yp.com | 818-649-7235
On 7/31/15, 3:02 PM, "ceph-users on behalf of Mark Nelson"
wrote:
>Most folks have either probably already left or are on their way out the
>door late on a friday, but I ju
On two different occasions I've had an osd crash and misplace objects when
rapid object deletion has been triggered by discard/trim operations with the
qemu rbd driver. Has anybody else had this kind of trouble? The objects are
still on disk, just not in a place where the osd thinks is valid.
Hello Everyone,
I have a Ceph test cluster doing storage for an OpenStack Grizzly platform
(also testing). Upgrading to 0.67 went fine on the Ceph side with the cluster
showing healthy but suddenly I can't upload images into Glance anymore. The
upload fails and glance-api throws an error:
2013-0
On Wed, Aug 14, 2013 at 04:24:55PM -0700, Josh Durgin wrote:
> On 08/14/2013 02:22 PM, Michael Morgan wrote:
> >Hello Everyone,
> >
> > I have a Ceph test cluster doing storage for an OpenStack Grizzly
> > platform
> >(also testing). Upgrading to 0.67 we
I use the virtio-scsi driver.
On Aug 22, 2013, at 12:05 PM, David Blundell
wrote:
>> I see yet another caveat: According to that documentation, it only works with
>> the IDE driver, not with virtio.
>>
>>Guido
>
> I've just been looking into this but have not yet tested. It looks like
>
FWIW: I use a qemu 1.4.2 that I built with a debian package upgrade script and
the stock libvirt from raring.
> On Oct 2, 2013, at 10:59 PM, Josh Durgin wrote:
>
>> On 10/02/2013 06:26 PM, Blair Bethwaite wrote:
>> Josh,
>>
>>> On 3 October 2013 10:36, Josh Durgin wrote:
>>> The version bas
There used to be, can't find it right now. Something like 'ceph osd pool set <pool> pg_num
<num>' then 'ceph osd pool set <pool> pgp_num <num>' to actually move your data into the
new PGs. I successfully did it several months ago, when bobtail was current.
Sent from my iPad
> On Oct 9, 2013, at 10:30 PM, Guang wrote:
>
> T
at the same time?
> 2) What is the recommended way to scale a cluster from like 1PB to 2PB,
> should we scale it to like 1.1PB to 1.2PB or move to 2PB directly?
>
> Thanks,
> Guang
>
>> On Oct 10, 2013, at 11:10 AM, Michael Lowe wrote:
>>
>> There used to b
You must have a quorum or MORE than 50% of your monitors functioning for the
cluster to function. With one of two you only have 50% which isn't enough and
stops i/o.
Sent from my iPad
> On Oct 11, 2013, at 11:28 PM, "飞" wrote:
>
> hello, I am a new user of ceph,
> I have built a ceph testing
How fragmented is that file system?
Sent from my iPad
> On Oct 14, 2013, at 5:44 PM, Bryan Stillwell
> wrote:
>
> This appears to be more of an XFS issue than a ceph issue, but I've
> run into a problem where some of my OSDs failed because the filesystem
> was reported as full even though ther
I live migrate all the time using the rbd driver in qemu, no problems. Qemu
will issue a flush as part of the migration so everything is consistent. It's
the right way to use ceph to back vm's. I would strongly recommend against a
network file system approach. You may want to look into format
1. How about enabling trim/discard support in virtio-SCSI and using fstrim?
That might work for you.
4. Well you can mount them rw in multiple vm's with predictably bad results,
so I don't see any reason why you could not specify ro as a mount option and do
ok.
Sent from my iPad
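For the trim/discard route in 1., a rough qemu sketch (pool/image names and the guest filesystem are placeholders; needs a qemu new enough to support discard=unmap):
    qemu-system-x86_64 ... \
      -device virtio-scsi-pci,id=scsi0 \
      -drive file=rbd:rbd/vm1,format=raw,if=none,id=drive0,cache=writeback,discard=unmap \
      -device scsi-hd,bus=scsi0.0,drive=drive0
    # then inside the guest:
    fstrim -v /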
> On Oct 21