And on the slave zone gateway instance, the info looks like this:
2013-11-14 12:54:24.516840 7f51e7fef700 1 == starting new request req=0xb1e3b0 ==
2013-11-14 12:54:24.526640 7f51e7fef700 1 == req done req=0xb1e3b0 http_status=200 ==
2013-11-14 12:54:24.545440 7f51e4fe9700 1
There is /etc/init.d/rbdmap; although I see no documentation for it,
there is a sample map file added to /etc/ceph as well.
Looks like it was added in Dumpling.
On 11/13/2013 01:31 PM, Dane Elwell wrote:
Hi,
Is there a preferable or supported way of having RBDs mapped on boot?
We have a serv
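For reference, a minimal sketch of what an entry in that map file (/etc/ceph/rbdmap) could look like; the pool/image name and keyring path below are made up, so adapt them to your setup:

  # /etc/ceph/rbdmap: one image per line, pool/image followed by rbd map options
  rbd/data1  id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

On boot the init script maps every image listed there, and the devices show up under /dev/rbd/<pool>/<image>, which you can then mount (for example from /etc/fstab).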
Hey All,
Having a crack at installing Ceph with Chef. Running into an issue after
following the guides.
I get the following error when I run the initial sudo chef-client on the nodes.
[2013-11-14T10:02:55+08:00] ERROR: Running exception handlers
[2013-11-14T10:02:55+08:00] ERROR: Exception handle
Hi All,
I am testing a Ceph cluster install with ceph-deploy 1.3.2, and I get a Python
error when I execute "ceph-deploy disk list".
Here is my output:
[root@ceph-02 my-cluster]# ceph-deploy disk list ceph-02
[ceph_deploy.cli][INFO ] Invoked (1.3.2): /usr/bin/ceph-deploy disk list
ceph-02
[ceph-02][DEBU
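Not from the thread, but one way to narrow a failure like this down is to run the underlying tool directly on the node and compare its output, since ceph-deploy essentially drives ceph-disk for this subcommand:

  ceph-disk list

If that also fails, the problem is on the node rather than in ceph-deploy itself.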
I'm not too familiar with the toolchain you're using, so can you
clarify what problem you're seeing with CephFS here?
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Wed, Nov 13, 2013 at 12:06 PM, M. Piscaer wrote:
>
> Hi,
>
> I have an webcluster setup, where on the loadba
Upgrading to emperor from previous versions may cause an issue where
objects become marked lost erroneously. We suggest delaying upgrades
to emperor until this issue is resolved in a point release. We should
have the point release out within a day or two. If you have completed
the upgrade alread
Hi,
Is there a preferable or supported way of having RBDs mapped on boot? We have
a server that will need to map several RBDs and then mount them, and I was
wondering if there’s anything out there more elegant than dumping stuff in
/etc/rc.local?
I’ve seen this issue and related commit on th
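For comparison, the rc.local approach mentioned above would look roughly like this; pool, image, mount point and keyring path are placeholders:

  # crude boot-time map-and-mount in /etc/rc.local
  rbd map data1 --pool rbd --id admin --keyring /etc/ceph/ceph.client.admin.keyring
  mount /dev/rbd/rbd/data1 /srv/data1

The rbdmap init script discussed elsewhere in this thread wraps the same steps up more cleanly.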
Hello,
I am creating a 250 TB rbd image that may grow to 500 or 600 TB over the next
year or so. I initially formatted the image using ext4 as shown in the rbd
quick start guide; however, I have found several examples across the internet
that show rbd being formatted with xfs instead.
I was ju
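For what it's worth, the xfs variant of the quick-start steps is just a different mkfs call; a sketch, with image name, size and mount point as placeholders:

  rbd create bigimage --size 262144000    # --size is in MB, so this is roughly 250 TB
  rbd map bigimage --pool rbd
  mkfs.xfs /dev/rbd/rbd/bigimage
  mount /dev/rbd/rbd/bigimage /mnt/bigimage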
Please correct me if I'm wrong.
My initial assumption was that the recovery delay
(osd_recovery_delay_start) only comes into play during peering, when
recovery_wq is enqueued with the placement groups scheduled for
recovery.
RecoveryWQ::_enqueue(...)
...
osd->defer_recovery_until = ceph_clock_now(g_c
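For context, the option under discussion is normally set in the [osd] section of ceph.conf; a minimal sketch, using the 15s default cited elsewhere in the thread:

  [osd]
  osd recovery delay start = 15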
Ah, the CRUSH tunables basically don't impact placement at all unless
CRUSH fails to do a placement for some reason. What you're seeing here
is the result of a pseudo-random imbalance. Increasing your PG and
pgp_num counts on the data pool should resolve it (though at the cost
of some data movement
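A sketch of that adjustment, with the pool name taken from the thread and 1024 as a purely illustrative target:

  ceph osd pool set data pg_num 1024
  ceph osd pool set data pgp_num 1024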
Dear Greg,
I believe 3.8 is after CRUSH_TUNABLES v1 was implemented in the
kernel, so it shouldn't hurt you to turn them on if you need them.
(And the crush tool is just out of date; we should update that text!)
However, if you aren't having distribution issues on your cluster I
wouldn't bother
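If you do decide to turn the v1 tunables on, one way is to edit the CRUSH map offline; the flag values below are the usual optimal v1 settings, but treat this as a sketch rather than a recipe for your cluster:

  ceph osd getcrushmap -o crush.map
  crushtool -i crush.map --set-choose-local-tries 0 \
      --set-choose-local-fallback-tries 0 \
      --set-choose-total-tries 50 -o crush.tuned
  ceph osd setcrushmap -i crush.tuned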
Hi,
I have a web cluster setup where the persistence timeout on the
loadbalancers is 0. To share the sessions I use ceph version 0.56.7, as you
can see in the diagram.
[ASCII diagram, truncated: the Internet at the top, feeding down into the load balancers]
On 13.11.2013 20:48, Andrey Korolyov wrote:
In the attached file I added two slices of degraded PGs for the first example,
and they belong to completely different sets of OSDs. I have to report
that lowering
'osd recovery delay start'
to the default 15s value increased recovery speed a lot, but documentati
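For anyone wanting to try the same change, a sketch of injecting it at runtime (or set it in ceph.conf and restart the OSDs); the value is the default named above:

  ceph tell osd.\* injectargs '--osd-recovery-delay-start 15'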
On Wed, Nov 13, 2013 at 8:34 AM, Oliver Schulz wrote:
> Dear Ceph Experts,
>
> We're running a production Ceph cluster with Ceph Dumpling,
> with Ubuntu 12.04.3 (kernel 3.8) on the cluster nodes and
> all clients. We're mainly using CephFS (kernel) and RBD
> (kernel and user-space/libvirt).
>
> Wo
How did you generate these scenarios? At first glance it looks to me
like you've got very low limits set on how many PGs an OSD can be
recovering at once, and in the first example they were all targeted to
that one OSD, while in the second they were distributed.
-Greg
Software Engineer #42 @ http:/
Kevin,
If you've ever added PGs to this pool after its creation, you could be
seeing this issue:
http://tracker.ceph.com/issues/6751
John
On Wed, Nov 13, 2013 at 3:59 PM, Patrick McGarry wrote:
> Hey Kevin,
>
> What version are you running? I see a couple of tracker items looking at df:
>
> ht
Dear Ceph Experts,
We're running a production Ceph cluster with Ceph Dumpling,
with Ubuntu 12.04.3 (kernel 3.8) on the cluster nodes and
all clients. We're mainly using CephFS (kernel) and RBD
(kernel and user-space/libvirt).
Would you recommend to activate CRUSH_TUNABLES (1, not 2) for
our use
Hi Ceph,
A few upcoming Ceph related events were added today in
http://ceph.com/community/events/. If something is happening somewhere and you
would like to see it listed, just reply to this mail with a description (an
HTML snippet that looks like the existing posts would be easiest for me to c
Hey Kevin,
What version are you running? I see a couple of tracker items looking at df:
http://tracker.ceph.com/issues/2209
http://tracker.ceph.com/issues/3484
Best Regards,
Patrick McGarry
Director, Community || Inktank
http://ceph.com || http://inktank.com
@scuttlemonkey || @ceph || @in
On Nov 13, 2013, at 12:16 AM, wrote:
> my core-site.conf list :
> fs.ceph.impl=org.apache.hadoop.fs.ceph.CephFileSystem
> fs.default.name=ceph://ca189:6789/
> ceph.conf.file=/etc/ceph/ceph.conf
> ceph.root.dir=/mnt/fuse
This looks suspicious. This should point to a root directory within C
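Presumably the intent is something more like the sketch below, where ceph.root.dir names a directory inside CephFS rather than a local fuse mountpoint (/hadoop is only an example):

  fs.ceph.impl=org.apache.hadoop.fs.ceph.CephFileSystem
  fs.default.name=ceph://ca189:6789/
  ceph.conf.file=/etc/ceph/ceph.conf
  ceph.root.dir=/hadoop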
I just created http://tracker.ceph.com/issues/6761
More and more guests are crashing over time and I have no clue what I can do...
:(
Is it safe to downgrade the monitors and osds to latest dumpling?
I really need urgent help. Sorry guys!
Corin
On 13.11.2013 16:00, Corin Langosch wrote:
H
Hi guys,
all my systems run ubuntu 12.10. I was running dumpling for a few months without
any errors.
I just upgraded all my monitors (3) and one osd (of 14 total) to emperor. The
cluster is healthy and seems to be running fine. A few minutes after upgrading a
few of my qemu (kvm) machines just
Generally these steps need to be taken:
1) Compile the custom methods into a shared library
2) Place the library in the class load path of the OSD
3) Invoke the methods via librados exec method
The easiest way to do this is to use the ceph build system by adding your
module to src/cls/Makefile.a
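As a rough sketch of steps 1 and 2 (the library name is made up, and the directory shown is only a common default; check 'osd class dir' in your ceph.conf):

  cp libcls_mymodule.so /usr/lib/rados-classes/
  # restart the OSDs so they pick up the new class, then invoke the method
  # from your application through the librados exec/cls interface (step 3)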
On Wed, Nov 13, 2013 at 8:17 AM, Tim Zhang wrote:
> Hello guys,
> Is there any way to remove the cluster data rapidly and redeploy another
> ceph cluster with a different configuration? This is extremely useful when
> testing performance under different configurations.
> I have tried ceph-deploy p
Hi Joseph,
Following your advice, I tried to add the journal device like this:
ceph-deploy osd create ceph0:sdb:/dev/sda1
but after the first deploy I cleaned the cluster with "ceph-deploy purgedata"
and redeployed it, and the sdb mount lag happens again.
2013/11/13 Tim Zhang
> Hi Michael,
Hi All,
I'm happy to announce a new release of ceph-deploy, the easy
deployment tool for Ceph.
The only two (very important) changes made for this release are:
* Automatic SSH key copying/generation for hosts that do not have keys
setup when using `ceph-deploy new`
* All installs will now use t
Hello guys,
Is there any way to remove the cluster data rapidly and redeploy another
ceph cluster with a different configuration? This is extremely useful when
testing performance under different configurations.
I have tried ceph-deploy purgedata hosts, but after that the cluster can't
redeploy whi
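In case it helps, the teardown sequence usually suggested goes a bit further than purgedata alone; hostnames here are placeholders:

  ceph-deploy purge node1 node2 node3
  ceph-deploy purgedata node1 node2 node3
  ceph-deploy forgetkeys

after which a fresh ceph-deploy new / install / mon create can start from a clean slate.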
On Wed, Nov 13, 2013 at 8:04 AM, Alfredo Deza wrote:
> On Tue, Nov 12, 2013 at 10:19 PM, Berant Lemmenes wrote:
>>
>> On Tue, Nov 12, 2013 at 7:28 PM, Joao Eduardo Luis
>> wrote:
>>>
>>>
>>> This looks an awful lot like you started another instance of an OSD with
>>> the same ID while another wa
On Tue, Nov 12, 2013 at 10:19 PM, Berant Lemmenes wrote:
>
> On Tue, Nov 12, 2013 at 7:28 PM, Joao Eduardo Luis
> wrote:
>>
>>
>> This looks an awful lot like you started another instance of an OSD with
>> the same ID while another was running. I'll walk you through the log lines
>> that point m
Hello,
Using 5c65e1ee3932a021cfd900a74cdc1d43b9103f0f with
a large amount of committed data and a relatively low PG count,
I've observed unexplainably long recovery times for PGs
even when the degraded object count is almost zero:
04:44:42.521896 mon.0 [INF] pgmap v24807947: 2048 pgs: 911 active+clean,
113
On 13.11.2013 09:34, Martin B Nielsen wrote:
Probably common sense, but I was bitten by this once in a similar situation...
If you run 3x replica and distribute them over 3x hosts (is that default now?)
make sure that the disks on the host with the failed disk have space for it -
the remainin
Probably common sense, but I was bitten by this once in a similar
situation...
If you run 3x replica and distribute them over 3x hosts (is that default
now?) make sure that the disks on the host with the failed disk have space
for it - the remaining two disks will have to hold the content of the
fa
Hi list,
We have reported before that radosgw-agent data sync failed all the time.
We are pasting the log in question here now to seek any help.
application/json; charset=UTF-8
Wed, 13 Nov 2013 07:24:45 GMT
x-amz-copy-source:sss%2Frgwconf
/sss/rgwconf
2013-11-13T15:24:45.510 11171:DEBUG:boto:Signa
I want to use Hadoop with Ceph.
I followed http://permalink.gmane.org/gmane.comp.file-systems.ceph.user/1809 to
install.
But after I configured it, there is a strange problem: when I use the command 'hadoop fs
-ls', it prints 'Bad connection to FS. command aborted. exception: '
and nothing follows the exception
m