Hi Craig,
I forgot to send the output of "ceph osd tree":
root@ceph-admin-storage:~# ceph osd tree
# id    weight  type name               up/down reweight
-1      88.24   root default
-8      44.12       room room0
-2      15.92           host ceph-1-storage
4       1.82                osd.4       up      1
9
Hi all,
I've been trying to set up calamari but ran into an issue: after building
the Ubuntu 14.04 packages and installing them, Apache redirects me from
"/" to "/login/?next=/dashboard/" and displays a 404.
I can't find any htaccess file for rewrites and the Apache config doesn't
do a rew
- Message from Haomai Wang -
Date: Tue, 19 Aug 2014 12:28:27 +0800
From: Haomai Wang
Subject: Re: [ceph-users] ceph cluster inconsistency?
To: Kenneth Waegeman
Cc: Sage Weil , ceph-users@lists.ceph.com
On Mon, Aug 18, 2014 at 7:32 PM, Kenneth Waegeman
wrote:
-
I have added the options as suggested, but no success yet!
I'm also running radosgw manually (radosgw -c /etc/ceph/ceph.conf -n
client.radosgw.gw --rgw-frontends "civetweb port=80") using civetweb, and I
still can't log in with Swift, and S3 uploads are broken.
Someone on #ceph mentioned that ceph-rad
Johan,
(Copied to ceph-calamari list)
Sounds like you are missing the calamari-clients package. The
calamari-server package just gives you the REST API (at /api/v2/)
Cheers,
John
On Tue, Aug 19, 2014 at 9:25 AM, Johan Kooijman wrote:
> Hi all,
>
> I've been trying to setup calamari but ran
Thanks John, something was wrong with the install of the calamari-clients
package.
On Tue, Aug 19, 2014 at 12:05 PM, John Spray wrote:
> Johan,
>
> (Copied to ceph-calamari list)
>
> Sounds like you are missing the calamari-clients package. The
> calamari-server package just gives you the REST
UPDATE:
I have installed Tengine (nginx fork) and configured both HTTP and HTTPS to
use radosgw socket.
I can login with S3, create buckets and upload objects.
It's still not possible to use Swift credentials; can you help me with this
part? What do I use to log in (URL, username, password)?
H
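For reference, Swift access to radosgw usually goes through a subuser with a Swift key; a sketch of the usual steps (the uid `johndoe` and the gateway host are placeholders, not values from this thread; check the flags against the radosgw-admin docs):

```
# create a Swift subuser under an existing user and generate a Swift secret
radosgw-admin subuser create --uid=johndoe --subuser=johndoe:swift --access=full
radosgw-admin key create --subuser=johndoe:swift --key-type=swift --gen-secret

# then log in against the v1 auth endpoint with the subuser as username
swift -A http://gateway-host/auth/v1.0 -U johndoe:swift -K '<swift secret>' list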
This happened after some OSDs failed and I recreated them.
I ran "ceph osd rm osd.4" to remove osd.4 and osd.6, but when I
used ceph-deploy to install the OSD with
"ceph-deploy osd --zap-disk --fs-type btrfs create ceph0x-vm:sdb",
ceph-deploy said the new OSD was ready,
but the OSD can not sta
OK, I don't think the udev rules are on my machines. I built the cluster
manually and not with ceph-deploy. I must have missed adding the rules in
the manual or the Packages from Debian (Jessie) did not create them.
Robert LeBlanc
On Mon, Aug 18, 2014 at 5:49 PM, Sage Weil wrote:
> On Mon, 18
I feel a little embarrassed, 1024 rows still true for me.
I was wondering if you could dump all your keys via
"ceph-kvstore-tool /var/lib/ceph/osd/ceph-67/current/ list
_GHOBJTOSEQ_ > keys.log".
thanks!
On Tue, Aug 19, 2014 at 4:58 PM, Kenneth Waegeman
wrote:
>
> - Message from Haomai Wan
On Tue, 19 Aug 2014, Robert LeBlanc wrote:
> OK, I don't think the udev rules are on my machines. I built the cluster
> manually and not with ceph-deploy. I must have missed adding the rules in
> the manual or the Packages from Debian (Jessie) did not create them.
They are normally part of the cep
Thanks Sage, I was looking in /etc/udev/rules.d (duh!). If I'm reading the
rules right, my problem has to do with putting Ceph on the entire block
device and not setting up a partition (bad habit from LVM). This will give
me some practice with failing and rebuilding OSDs. If I understand right, a
u
Is there a repo for this version which works over HTTPS? Because of the
corporate firewall, I can’t install through regular HTTP.
The sst files are files used by leveldb to store its data; you cannot
remove them. Are you running on a very small VM? How much space are
the files taking up in aggregate?
Speaking generally, I think you should see something less than a GB
worth of data there, but some versions of leveldb under som
Check out http://ceph.com/docs/master/rados/operations/pools/#set-pool-values
"Hit sets" are bloom filters which we use to track which objects are
accessed ("hit") during a specific time period (hit_set_period). More
hit sets within a given time let us distinguish more fine-grained
accesses to the
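The bloom-filter idea behind hit sets can be illustrated with a toy sketch (this is a generic illustration of the data structure, not Ceph's implementation; the object names are made up):

```python
import hashlib

class BloomFilter:
    """Toy bloom filter: record which object names were 'hit'
    (accessed), then test membership with no false negatives
    but a small chance of false positives."""

    def __init__(self, num_bits=1024, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0  # an int doubles as a bit array

    def _positions(self, name):
        # Derive num_hashes bit positions from the name.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{name}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, name):
        for pos in self._positions(name):
            self.bits |= 1 << pos

    def __contains__(self, name):
        return all(self.bits >> pos & 1 for pos in self._positions(name))

hits = BloomFilter()
hits.add("rbd_data.1234.0000")
print("rbd_data.1234.0000" in hits)  # True: a recorded access is always found
print("some-other-object" in hits)
```

The second lookup is almost certainly False, but a bloom filter can report a false positive; that trade-off is what makes it cheap enough to keep one per period.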
Both: https://ceph.com/debian-testing/ and
https://ceph.com/rpm-testing/ seem to work for me. Are you seeing some
error?
On Tue, Aug 19, 2014 at 11:57 AM, LaBarre, James (CTR) A6IT
wrote:
> Is there a repo for this version which works over HTTPS? Because of the
> corporate firewall, I can’
On Wed, Aug 6, 2014 at 1:48 AM, Kenneth Waegeman
wrote:
> Hi,
>
> I did a test with 'rados -p ecdata bench 10 write' on an ECpool with a
> cache replicated pool over it (ceph 0.83).
> The benchmark wrote about 12TB of data. After the 10-second run, rados
> started to delete its benchmark
It's been a while since I worked on this, but let's see what I remember...
On Thu, Aug 14, 2014 at 11:34 AM, Craig Lewis wrote:
> In my effort to learn more of the details of Ceph, I'm trying to
> figure out how to get from an object name in RadosGW, through the
> layers, down to the files on dis
On Thu, Aug 14, 2014 at 12:40 PM, Robert LeBlanc wrote:
> We are looking to deploy Ceph in our environment and I have some musings
> that I would like some feedback on. There are concerns about scaling a
> single Ceph instance to the PBs of size we would use, so the idea is to
> start small like o
On Thu, Aug 14, 2014 at 6:32 PM, yuelongguang wrote:
> hi,all
>
> By reading the code, I notice everything of an OP is encoded into a
> Transaction which is written into the journal later.
> Does the journal record everything (meta, xattr, file data...) of an OP?
> If so, everything is written into disk twice
Yes
On Mon, Aug 18, 2014 at 6:56 AM, Jasper Siero
wrote:
> Hi all,
>
> We have a small ceph cluster running version 0.80.1 with cephfs on five
> nodes.
> Last week some osds were full and shut themselves down. To help the osds start
> again I added some extra osds and moved some placement group director
On Thu, Aug 14, 2014 at 2:28 AM, NotExist wrote:
> Hello everyone:
>
> Since there's no cuttlefish package for 14.04 server on ceph
> repository (only ceph-deploy there), I tried to build cuttlefish from
> source on 14.04.
...why? Cuttlefish is old and no longer receives updates. You really
want
Nope, that one works. I just had a different source server, and couldn't find
what the path would be on the main server (not very well documented). It looks
to have updated properly. Updating my test configuration now.
-Original Message-
From: Alfredo Deza [mailto:alfredo.d...@inktan
On Tue, Aug 19, 2014 at 5:32 AM, Marco Garcês wrote:
>
> UPDATE:
>
> I have installed Tengine (nginx fork) and configured both HTTP and HTTPS to
> use radosgw socket.
Looking back at this thread, and considering this solution it seems to
me that you were running the wrong apache fastcgi module.
Hmm, you're not allowed to set real xattrs on the CephFS root and
we've had issues a few times with that and the layout xattrs. There
might have been a bug with that on v0.81 which is fixed in master, but
I don't remember exactly when it last happened.
-Greg
Software Engineer #42 @ http://inktank.c
Greg, thanks for the reply, please see in-line.
On Tue, Aug 19, 2014 at 11:34 AM, Gregory Farnum wrote:
>
> There are many groups running cluster >1PB, but whatever makes you
> comfortable. There is a bit more of a learning curve once you reach a
> certain scale than there is with smaller insta
On Tue, Aug 19, 2014 at 11:18 AM, Robert LeBlanc wrote:
> Greg, thanks for the reply, please see in-line.
>
>
> On Tue, Aug 19, 2014 at 11:34 AM, Gregory Farnum wrote:
>>
>>
>> There are many groups running cluster >1PB, but whatever makes you
>> comfortable. There is a bit more of a learning cur
Thanks, your responses have been helpful.
On Tue, Aug 19, 2014 at 1:48 PM, Gregory Farnum wrote:
> On Tue, Aug 19, 2014 at 11:18 AM, Robert LeBlanc
> wrote:
> > Greg, thanks for the reply, please see in-line.
> >
> >
> > On Tue, Aug 19, 2014 at 11:34 AM, Gregory Farnum
> wrote:
> >>
> >>
> >>
On Tue, Aug 19, 2014 at 1:22 AM, Riederer, Michael
wrote:
>
>
> root@ceph-admin-storage:~# ceph pg force_create_pg 2.587
> pg 2.587 now creating, ok
> root@ceph-admin-storage:~# ceph pg 2.587 query
> ...
> "probing_osds": [
> "5",
> "8",
>
Greetings,
I'm creating a new ceph cluster for testing and it's reporting "192
stale+incomplete" pgs.
`ceph health detail` lists all of the pgs that are stuck. Here's a
representative line.
pg 2.2c is stuck stale for 3076.510998, current state stale+incomplete,
last acting [0]
But when I run
On Tue, Aug 19, 2014 at 1:37 PM, Randy Smith wrote:
> Greetings,
>
> I'm creating a new ceph cluster for testing and it's reporting "192
> stale+incomplete" pgs.
>
> `ceph health detail` lists all of the pgs that are stuck. Here's a
> representative line.
>
> pg 2.2c is stuck stale for 3076.5109
[Re-adding the list]
On Tue, Aug 19, 2014 at 2:24 PM, Randy Smith wrote:
> Gregory,
>
> # ceph osd tree
> # id    weight  type name       up/down reweight
> -1      0.2     root default
> -2      0.2         host cs00
> 0       0.0             osd.0   up      1
> 1       0.0
On Tue, Aug 19, 2014 at 3:36 PM, Gregory Farnum wrote:
> [Re-adding the list]
>
> On Tue, Aug 19, 2014 at 2:24 PM, Randy Smith wrote:
> > Gregory,
> >
> > # ceph osd tree
> > # id    weight  type name       up/down reweight
> > -1      0.2     root default
> > -2      0.2         host cs00
>
Hi Sage/Sam,
During our testing we found a potential deadlock scenario in the filestore
journal code base. This is happening for two reasons:
1. The code is not signaling aio_cond from
check_aio_completion() when seq == 0.
2. Following changes in the write_thread
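The missed-wakeup pattern in point 1 can be sketched generically (this is an illustration in Python, not the actual FileStore code; function and variable names are placeholders):

```python
import threading

def aio_wait(signal_on_zero_seq):
    """Return True if the waiter was woken by the completion,
    False if it timed out (the missed-signal case).

    The 0.5 s timeout stands in for 'hangs forever' so the
    demo terminates."""
    cond = threading.Condition()

    def completion(seq):
        with cond:
            # Buggy variant: skip the signal when seq == 0,
            # so a waiter blocked on cond never wakes up.
            if seq != 0 or signal_on_zero_seq:
                cond.notify()

    timer = threading.Timer(0.1, completion, args=(0,))
    with cond:
        timer.start()
        woke = cond.wait(timeout=0.5)
    timer.join()
    return woke

print(aio_wait(signal_on_zero_seq=True))   # True: signal delivered
print(aio_wait(signal_on_zero_seq=False))  # False: missed wakeup, waiter stalls
```

The `signal_on_zero_seq=True` branch corresponds to the obvious fix: signal the condition unconditionally, even for seq == 0.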
I think this is the issue..
http://tracker.ceph.com/issues/9073
Thanks & Regards
Somnath
From: Somnath Roy
Sent: Tuesday, August 19, 2014 6:25 PM
To: Sage Weil (s...@inktank.com); Samuel Just (sam.j...@inktank.com)
Cc: ceph-users@lists.ceph.com
Subject: Deadlock in ceph journal
Hi Sage/Sam,
Dur
I create a bucket and put some objects in it. But after I delete all the
objects and the bucket, why do the bucket.meta object and bucket index object
still exist? When does ceph recycle them?
baijia...@126.com
I believe you need to remove the authorization for osd.4 and osd.6 before
re-creating them.
When I re-format disks, I migrate data off of the disk using:
ceph osd out $OSDID
Then wait for the remapping to finish. Once it does:
stop ceph-osd id=$OSDID
ceph osd out $OSDID
ceph auth del osd
By default, Ceph will wait two hours to garbage collect those RGW objects.
You can adjust that time by changing
rgw gc obj min wait
See http://ceph.com/docs/master/radosgw/config-ref/ for the full list of
configs.
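For example, shortening that wait in ceph.conf might look like this (the values are illustrative; `rgw gc obj min wait` is the setting named above, and the other GC knobs are assumptions to check against the config reference):

```
[client.radosgw.gw]
# seconds an object must wait before GC may remove it (default 7200)
rgw gc obj min wait = 3600
# how often the GC processor runs, and its max runtime per cycle
rgw gc processor period = 3600
rgw gc processor max time = 3600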
On Tue, Aug 19, 2014 at 7:18 PM, baijia...@126.com
wrote:
> I create a buck
Looks like I need to upgrade to Firefly to get ceph-kvstore-tool before I
can proceed.
I am getting some hits just from grepping the LevelDB store, but so far
nothing has panned out.
Thanks for the help!
On Tue, Aug 19, 2014 at 10:27 AM, Gregory Farnum wrote:
> It's been a while since I worke
Thanks for your help.
For example: when I create a bucket named "" and put a file named "",
size 1M,
then in the .rgw pool I see ".bucket.meta.:default.4804.1" and "" as two
objects,
in the .rgw.buckets.index pool we see ".dir.default.4804.1" as one object,
in the .rgw.buckets po