Of course, but it means that in case of failure you can no longer trust
your data's consistency and should recheck it against separately stored
checksums or similar. I'm leaving aside the fact that Ceph will probably
not recover a pool properly with a replication number lower than 2 in many
cases. So general
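For example, a quick way to spot-check a single object against an externally
stored checksum (pool and object names below are just placeholders):

  rados -p mypool get myobject /tmp/myobject
  md5sum /tmp/myobject    # compare against the checksum you stored elsewhere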
Hello,
I think you misunderstood me. I know how to replace a bad HDD, thanks. My
problem is the following:
The object replica count is 3. Objects in PG 11.15d store data on osd.0,
among other places, on the bad sectors. Ceph should know that the objects in
11.15d on osd.0 are bad (because deep-scrub and
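For what it's worth, a rough sketch of what I would try for that placement
group, using the PG id from above:

  ceph pg deep-scrub 11.15d     # re-verify the object replicas
  ceph pg 11.15d query          # look for scrub errors / inconsistent state
  ceph pg repair 11.15d         # ask Ceph to repair the inconsistent objects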
Hello,
We plan to run Ceph as block storage for OpenStack, but from testing
we found the IOPS are slow.
Our apps primarily use the block storage for saving logs (i.e., nginx's
access logs).
How can we improve this?
Thanks.
Do you have journals on separate disks (SSD, preferably)?
On 2013-11-15 10:14, Dnsbed Ops wrote:
Hello,
We plan to run Ceph as block storage for OpenStack, but from testing
we found the IOPS are slow.
Our apps primarily use the block storage for saving logs (i.e.,
nginx's access logs).
How
We didn't have them.
On 2013-11-15 18:16, James Pearce wrote:
Do you have journals on separate disks (SSD, preferably)?
On 2013-11-15 10:14, Dnsbed Ops wrote:
Hello,
We plan to run Ceph as block storage for OpenStack, but from testing
we found the IOPS are slow.
Our apps primarily use the bl
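If it helps, a rough sketch of putting journals on a separate SSD with
ceph-deploy (host and device names are made up):

  # HOST:DATA_DISK:JOURNAL_DEVICE
  ceph-deploy osd create node1:sdb:/dev/ssd1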
Hi, all
I am a beginner with Ceph and tried to integrate radosgw and keystone
according to the guide here
(http://ceph.com/docs/master/radosgw/config/#integrating-with-openstack-keystone).
My ceph version is v0.67.4.
I was able to run: swift -V 1.0 -A http://10.239.149.9:8000/auth -U
test:swift
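For reference, the keystone-related ceph.conf settings from that guide look
roughly like this (all values below are placeholders, not taken from my setup):

  [client.radosgw.gateway]
  rgw keystone url = http://{keystone-host}:35357
  rgw keystone admin token = {keystone-admin-token}
  rgw keystone accepted roles = Member, admin
  rgw keystone token cache size = 500
  rgw keystone revocation interval = 600
  rgw s3 auth use keystone = true
  nss db path = {path to nss db}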
Yip,
I went to the link. Where can the script (nfsceph) be downloaded? How are
the robustness and performance of this technique? (That is, is there any
reason to believe that it would be more or less robust and/or performant
than option #3 mentioned in the original thread?)
On Fri, Nov 15, 2013 at
On 2013-11-15 08:26, Gautam Saxena wrote:
Yip,
I went to the link. Where can the script (nfsceph) be downloaded? How are
the robustness and performance of this technique? (That is, is there any
reason to believe that it would be more or less robust and/or performant
than option #3 mentioned in the
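In case it helps the comparison, a minimal sketch of the general
NFS-over-CephFS idea (hypothetical mount point and monitor address; not
necessarily how nfsceph itself does it):

  # kernel-mount CephFS, then re-export it over NFS
  mount -t ceph mon1:6789:/ /mnt/ceph -o name=admin,secretfile=/etc/ceph/admin.secret
  # /etc/exports
  /mnt/ceph  *(rw,no_root_squash,fsid=1,no_subtree_check)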
Using ceph-deploy 1.3.2 with ceph 0.72.1. Ceph-deploy disk zap will fail and
exit with an error, but then on retry it will succeed. This is repeatable as I
go through each of the OSD disks in my cluster. See output below.
I am guessing the first attempt to run changes something about the initial s
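One workaround sketch (host/device names are examples) would be to wipe the
partition table manually first, then run ceph-deploy again:

  sgdisk --zap-all /dev/sdb        # run on the OSD host
  ceph-deploy disk zap node1:sdb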
This point release addresses issue #6761
(http://tracker.ceph.com/issues/6761). Upgrades to v0.72 can cause
reads to begin returning ENFILE (Too many open files in system).
Changes:
* osd: fix upgrade issue with object_info_t encodings
* ceph_filestore_tool: add tool to repair osd stores affected
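While waiting to upgrade, a quick way to confirm you are hitting the
system-wide open-file limit that ENFILE refers to (just a check, not a fix):

  cat /proc/sys/fs/file-nr    # allocated / free / system-wide maximum handles
  sysctl fs.file-max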
Procedure when an OSD is down or an error is encountered during Ceph status checks:
Ceph version 0.67.4
1). Has the cluster just started and not yet completed starting the OSDs?
2). Ensure continuous hard access to the Ceph node:
- either via a HW serial console server and serial console redirect,
- or by Video-Over
> 3). Comment out (#hashtag) the bad OSD drives in “/etc/fstab”.
This is unnecessary if you're using the provided upstart and udev
scripts; OSD data devices will be identified by label and mounted. If
you choose not to use the upstart and udev scripts, then you should
write init scripts that do si
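For example, to see how ceph-disk labels an OSD data partition so the udev
rules can find it (device and partition number are examples):

  sgdisk --info=1 /dev/sdb    # should show a partition name like "ceph data" plus its type GUID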
On Fri, Nov 15, 2013 at 2:53 PM, Gruher, Joseph R
wrote:
> Using ceph-deploy 1.3.2 with ceph 0.72.1. Ceph-deploy disk zap will fail
> and exit with an error, but then on retry it will succeed. This is repeatable as
> I go through each of the OSD disks in my cluster. See output below.
>
>
>
> I am gue
Alistar,
The region map error is not really related. There is a separate command to set
the region, for example "radosgw-admin -k ceph.client.admin.keyring regionmap
update".
Make sure that the key you pass is the value in swift_key:secret_key. For you
the secret_key may be null. If so, you can ge
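If it does turn out to be null, a sketch of generating a swift secret for the
subuser (the subuser name here just follows the earlier test:swift example):

  radosgw-admin key create --subuser=test:swift --key-type=swift --gen-secret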
> We plan to run Ceph as block storage for OpenStack, but from testing we
> found the IOPS are slow.
>
> Our apps primarily use the block storage for saving logs (i.e., nginx's
> access logs).
> How can we improve this?
There are a number of things you can do, notably:
1. Tuning cache on the hype
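As a sketch of the cache-tuning item, the client-side RBD cache options in
ceph.conf look like this (values shown are illustrative defaults, not a tested
recommendation):

  [client]
  rbd cache = true
  rbd cache writethrough until flush = true
  rbd cache size = 33554432        # 32 MB
  rbd cache max dirty = 25165824   # 24 MB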
Thanks Kyle,
-- I'll look into and try out udev and upstart.
-- Yes on setting "noout"; definitely a good idea until we're sure that OSD is
gone for good.
If the OSD disk is totally gone,
then down-and-out,
remove it from the crushmap / update the crushmap,
verify the crushmap,
then use ceph-deploy to add a replacement.
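For what it's worth, a hedged sketch of that removal-and-replace sequence
(osd.0 and the host/device names are only examples):

  ceph osd out 0                 # mark it out so data re-replicates
  # stop the ceph-osd daemon on its host, then:
  ceph osd crush remove osd.0
  ceph auth del osd.0
  ceph osd rm 0
  # add the replacement disk:
  ceph-deploy osd create node1:sdb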
Replication does not occur until the OSD is “out.” This creates a new mapping
in the cluster of where the PGs should be and thus data begins to move and/or
create sufficient copies. This scheme lets you control how and when you want
the replication to occur. If you have plenty of space and y
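Related to that, a small sketch of deliberately holding off re-replication
during planned maintenance:

  ceph osd set noout      # cluster won't mark down OSDs out automatically
  # ...do the maintenance...
  ceph osd unset noout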