On 23/09/14 18:22, Aegeaner wrote:
Now I use the following script to create a key/value-backend OSD, but
the OSD is created down and never comes up.
ceph osd create
umount /var/lib/ceph/osd/ceph-0
rm -rf /var/lib/ceph/osd/ceph-0
mkdir /var/lib/ceph/osd/ceph-0
ceph osd crush add
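For comparison, here is a minimal sketch of the manual OSD-creation sequence with the key/value backend. The backend option name (keyvaluestore-dev in Firefly 0.80.x) and the auth capabilities follow the manual-deployment docs and are assumptions, and the host name CVM-0-11 is taken from the log later in the thread; treat it as a starting point, not a verified recipe.

  # ceph.conf on the OSD host (Firefly calls the experimental backend
  # "keyvaluestore-dev"; later releases rename it to "keyvaluestore"):
  #   [osd]
  #       osd objectstore = keyvaluestore-dev

  OSD_ID=$(ceph osd create)                    # allocate an OSD id
  mkdir -p /var/lib/ceph/osd/ceph-${OSD_ID}    # plain directory, no mount needed

  # initialise the data directory and generate the OSD's key
  ceph-osd -i ${OSD_ID} --mkfs --mkkey

  # register the key and place the OSD in the CRUSH map
  ceph auth add osd.${OSD_ID} osd 'allow *' mon 'allow profile osd' \
      -i /var/lib/ceph/osd/ceph-${OSD_ID}/keyring
  ceph osd crush add osd.${OSD_ID} 1.0 host=CVM-0-11 root=default

  # start the daemon (sysvinit)
  service ceph start osd.${OSD_ID}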
This is my /var/log/ceph/ceph-osd.0.log:
2014-09-23 15:38:14.040699 7fbaccb1e7a0 0 ceph version 0.80.5
(38b73c67d375a2552d8ed67843c8a65c2c0feba6), process ceph-osd, pid 9764
2014-09-23 15:38:14.045192 7fbaccb1e7a0 1 mkfs in
/var/lib/ceph/osd/ceph-0
2014-09-23 15:38:14.046127 7fb
Hi all,
Take a look at this link:
http://www.ceph.com/docs/master/architecture/#smart-daemons-enable-hyperscale
Could you explain points 2 and 3 in that picture?
1.
At points 2 and 3, before the primary writes the data to the next OSD, where is
the data? Is it in memory or already on disk?
2. Where is the
Hi all,
My question comes from my testing.
Let's take an example: object1 (4MB) --> pg 0.1 --> osd 1,2,3, p1
When the client is writing object1, osd1 goes down during the write. Let's
suppose 2MB has been written.
1.
When the connection to osd1 is down, what does the client do? Ask the monitor
for a new osdmap? Or only
Hi fellow cephers,
I'm being asked questions around our backup of ceph, mainly due to data
deletion.
We are currently using Ceph to store RBD, S3 and eventually CephFS, and we
would like to devise a plan to back up the information so as to avoid issues
with data being deleted from the cluster
Any thoughts on how to improve the delete process performance?
thanks,
On Mon, Sep 8, 2014 at 9:17 AM, Luis Periquito wrote:
> Hi,
>
> I've been trying to tweak and improve the performance of our ceph
> cluster.
>
> One of the operations that I can't seem to improve much is the
> del
Luis,
You may want to take a look at the rbd export/import and export-diff/import-diff
functionality. This could be used to copy data to another cluster or offsite.
S3 has regions, which you could use for async replication.
Not sure how CephFS would work for backups.
Andrei
- Original Messa
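A rough sketch of the export-diff/import-diff workflow mentioned above, assuming a pool "rbd", an image "vm1", an earlier snapshot "mon" already present on the backup cluster, and a backup cluster reached through /etc/ceph/backup.conf (all placeholder names):

  # take a new snapshot and dump only the blocks changed since the previous one
  rbd snap create rbd/vm1@tue
  rbd export-diff --from-snap mon rbd/vm1@tue /backup/vm1-mon-to-tue.diff

  # replay the diff against the copy on the backup cluster
  rbd -c /etc/ceph/backup.conf import-diff /backup/vm1-mon-to-tue.diff rbd/vm1

The first round needs a full rbd export/import plus a matching base snapshot on both sides; after that only the diffs have to travel.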
Hi Nathan,
that was indeed the problem! I increased the max_pid value to 65535
and the problem is gone! Thank you!
It was a bit misleading that there is also
/proc/sys/kernel/threads-max, which has a much higher number. And since
I was only seeing around 400 processes and wasn't aware that
bump
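For anyone hitting the same wall: the knob in question is kernel.pid_max (the "max_pid" above), which also caps thread IDs and therefore bites OSD hosts early. A quick sketch:

  # raise the limit at runtime
  sysctl -w kernel.pid_max=65535

  # make it persistent across reboots
  echo "kernel.pid_max = 65535" >> /etc/sysctl.conf

  # note: kernel.threads-max is a separate, usually much larger, limit
  cat /proc/sys/kernel/pid_max /proc/sys/kernel/threads-max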
I have now observed this crash on Ubuntu with kernel 3.13 and on CentOS with
3.16 as well.
rbd hangs, and iostat shows something similar to the output below.
Micha Krause
On 19.09.2014 at 09:22, Micha Krause wrote:
Hi,
> I have built an NFS server based on Sebastien's blog post here:
ht
On Fri, Sep 19, 2014 at 11:22 AM, Micha Krause wrote:
> Hi,
>
>> I have built an NFS server based on Sebastien's blog post here:
>>
>> http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/
>>
>> I'm using kernel 3.14-0.bpo.1-amd64 on Debian wheezy; the host is a VM on
>> VMware.
>>
>> Using rsyn
I'm currently running 2 ceph clusters (ver. 0.80.1) which are providing
secondary storage for CloudPlatform. Each cluster resides in a different
datacenter and our federated gateway consists of a region (us-east-1) with 2
zones (zone-a [master], zone-b [slave]). Objects appear to be
replicating/sy
Hi Gregory,
Thanks for your response.
I installed ceph v0.80.5 on a single node, and my MDS status is always
"creating".
The output of "ceph -s" is as follows:
root@ubuntu165:~# ceph -s
cluster 3cd658c3-34ca-43f3-93c7-786e5162e412
health HEALTH_WARN 200 pgs incomplete; 200 pgs stuck ina
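As a hedged aside: on a single-node test cluster the stock CRUSH rule tries to place replicas on separate hosts, which typically leaves every PG incomplete and the MDS stuck in "creating". Two common workarounds, assuming the default Firefly pools data, metadata and rbd:

  # before creating the cluster, let CRUSH separate replicas by OSD, not host:
  #   [global]
  #       osd crush chooseleaf type = 0

  # or, on an existing single-node cluster, drop the replica count to 1
  ceph osd pool set data size 1
  ceph osd pool set metadata size 1
  ceph osd pool set rbd size 1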
What about writes with Giant?
On 18 Sep 2014, at 08:12, Zhang, Jian wrote:
> Has anyone ever tested multi-volume performance on a *FULL* SSD setup?
> We are able to get ~18K IOPS for 4K random read on a single volume with fio
> (with the rbd engine) on a 12x DC3700 setup, but are only able to get ~23
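For reference, a 4K random-read run of the kind quoted above can be reproduced roughly like this with fio's rbd engine (pool "rbd", image "fio-test" and the 60 s runtime are placeholders; the image must exist beforehand):

  fio --name=rbd-randread --ioengine=rbd --clientname=admin \
      --pool=rbd --rbdname=fio-test \
      --rw=randread --bs=4k --iodepth=32 --direct=1 \
      --runtime=60 --time_based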
On Mon, Sep 22, 2014 at 1:22 PM, LaBarre, James (CTR) A6IT
wrote:
> If I have a machine/VM I am using as an Admin node for a ceph cluster, can I
> relocate that admin to another machine/VM after I’ve built a cluster? I
> would expect as the Admin isn’t an actual operating part of the cluste
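In practice the admin node mostly just carries the cluster config and the admin keyring (plus the ceph-deploy working directory, if one was used), so relocating it is largely a copy; a sketch, with "newadmin" and ~/my-cluster as placeholder names:

  # give the new host the config and admin key so it can run ceph commands
  scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring newadmin:/etc/ceph/

  # if ceph-deploy was used, its working directory should travel too
  scp -r ~/my-cluster newadmin:~/

  # sanity check from the new admin node
  ssh newadmin ceph -s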
On 09/22/2014 05:17 AM, Robin H. Johnson wrote:
Can somebody else make comments about migrating S3 buckets with
preserved mtime data (and all of the ACLs & CORS) then?
I don't know how radosgw objects are stored, but have you considered a
lower-level rados export/import?
IMPORT AND EXPORT
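A sketch of the lower-level approach the reply alludes to, assuming the rados tool's export/import subcommands and the default radosgw data pool name .rgw.buckets; note that bucket and user metadata live in other pools, so this alone is not a complete migration:

  # dump the raw objects of one pool to a file, then load them elsewhere
  rados -p .rgw.buckets export /backup/rgw-buckets.dump
  rados -p .rgw.buckets import /backup/rgw-buckets.dump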
I would:
Keep Cluster A intact and migrate it to your new hardware. You can do this with
no downtime, assuming you have enough IOPS to support data migration and normal
usage simultaneously. Bring up the new OSDs and let everything rebalance, then
remove the old OSDs one at a time. Replace the
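The "remove the old OSDs one at a time" step looks roughly like the standard manual removal sequence; a sketch using osd.12 as an example id:

  ceph osd out 12              # start draining; watch "ceph -w" until HEALTH_OK
  service ceph stop osd.12     # sysvinit; upstart systems use "stop ceph-osd id=12"
  ceph osd crush remove osd.12
  ceph auth del osd.12
  ceph osd rm 12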
Is anyone aware of a way to either reconcile or remove possibly orphaned
"shadow" files in a federated gateway configuration? The issue we're seeing
is that the slave has many more chunk/"shadow" files than the master; the
breakdown is as follows:
master zone:
.region-1
On Tue, Sep 23, 2014 at 3:05 PM, Lyn Mitchell wrote:
> Is anyone aware of a way to either reconcile or remove possible orphaned
> “shadow” files in a federated gateway configuration? The issue we’re seeing
> is the number of chunks/shadow files on the slave has many more “shadow”
> files than the
I've had some issues in my secondary cluster. I'd like to restart
replication from the beginning, without destroying the data in the
secondary cluster.
Reading the radosgw-agent and Admin REST API code, I believe I just need to
stop replication, delete the secondary zone's log_pool, recreate the
Is osd.12 doing anything strange? Is it consuming lots of CPU or IO? Is
it flapping? Writing any interesting logs? Have you tried restarting it?
If that doesn't help, try the other involved osds: 56, 27, 6, 25, 23. I
doubt that it will help, but it won't hurt.
On Mon, Sep 22, 2014 at 11:
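A few concrete ways to answer those questions for a suspect OSD (osd.12 here; the log path is assumed to be the default):

  ceph osd perf                             # per-OSD commit/apply latency
  iostat -x 5                               # on the OSD host: is the disk saturated?
  tail -f /var/log/ceph/ceph-osd.12.log     # slow requests, flapping, crashes

  # restart just that daemon (sysvinit syntax)
  service ceph restart osd.12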
I turned on the debug option, and this is what I got:
# ./kv.sh
removed osd.0
removed item id 0 name 'osd.0' from crush map
0
umount: /var/lib/ceph/osd/ceph-0: not found
updated
add item id 0 name 'osd.0' weight 1 at location
{host=CVM-0-11,root=default} to crush map
meta
On 24/09/14 14:07, Aegeaner wrote:
I turned on the debug option, and this is what I got:
# ./kv.sh
removed osd.0
removed item id 0 name 'osd.0' from crush map
0
umount: /var/lib/ceph/osd/ceph-0: not found
updated
add item id 0 name 'osd.0' weight 1 at location
{host=
On Tue, Sep 23, 2014 at 03:12:53PM -0600, John Nielsen wrote:
> Keep Cluster A intact and migrate it to your new hardware. You can do
> this with no downtime, assuming you have enough IOPS to support data
> migration and normal usage simultaneously. Bring up the new OSDs and
> let everything rebala
I run Ceph on Red Hat Enterprise Linux Server 6.4 (Santiago), and when I
run "service ceph start" I get:
# service ceph start
ERROR:ceph-disk:Failed to activate
ceph-disk: Does not look like a Ceph OSD, or incompatible version:
/var/lib/ceph/tmp/mnt.I71N5T
mount: /dev/hioa1 already mo
After a reboot all the redundant partitions were gone, but after running
the script I still get:
ERROR:ceph-disk:Failed to activate
ceph-disk: Does not look like a Ceph OSD, or incompatible version:
/var/lib/ceph/tmp/mnt.SFvU7O
ceph-disk: Error: One or more partitions failed to activate
On 20
On 24/09/14 14:29, Aegeaner wrote:
I run Ceph on Red Hat Enterprise Linux Server 6.4 (Santiago), and when I
run "service ceph start" I get:
# service ceph start
ERROR:ceph-disk:Failed to activate
ceph-disk: Does not look like a Ceph OSD, or incompatible version:
/var/lib/ceph/tmp/mnt.
I have got my Ceph OSDs running with the key/value store now!
Thanks, Mark! I had been confused for a whole week.
Cheers
Aegeaner
On 2014-09-24 10:46, Mark Kirkwood wrote:
On 24/09/14 14:29, Aegeaner wrote:
I run Ceph on Red Hat Enterprise Linux Server 6.4 (Santiago), and when I
run "servic
On 24/09/14 16:21, Aegeaner wrote:
I have got my Ceph OSDs running with the key/value store now!
Thanks, Mark! I had been confused for a whole week.
Pleased to hear it! Now you can actually start playing with the key/value
store backend.
There are quite a few parameters, not fully documented yet - se
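Since most of those parameters are only visible in the source, one way to discover what a running OSD actually accepts is to dump its live configuration over the admin socket and filter for the backend's prefix; a sketch (the option name in the second command is an example of the pattern, not a verified name):

  # list every key/value store option the daemon knows about
  ceph daemon osd.0 config show | grep keyvaluestore

  # inspect a single option
  ceph daemon osd.0 config get keyvaluestore_backend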
On Tue, Sep 23, 2014 at 7:23 PM, Robin H. Johnson wrote:
> On Tue, Sep 23, 2014 at 03:12:53PM -0600, John Nielsen wrote:
>> Keep Cluster A intact and migrate it to your new hardware. You can do
>> this with no downtime, assuming you have enough IOPS to support data
>> migration and normal usage si
On Tue, Sep 23, 2014 at 4:54 PM, Craig Lewis wrote:
> I've had some issues in my secondary cluster. I'd like to restart
> replication from the beginning, without destroying the data in the secondary
> cluster.
>
> Reading the radosgw-agent and Admin REST API code, I believe I just need to
> stop
Hi all,
Can anyone help me out here?
Sahana Lokeshappa
Test Development Engineer I
From: Varada Kari
Sent: Monday, September 22, 2014 11:52 PM
To: Sage Weil; Sahana Lokeshappa; ceph-us...@ceph.com;
ceph-commun...@lists.ceph.com
Subject: RE: [Ceph-community] Pgs are in stale+down+peering state
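For anyone picking this up: the usual first steps for stale+down+peering PGs are to list the stuck PGs and query one of them for what is blocking peering, e.g. (the PG id below is a placeholder, take one from the output):

  ceph health detail | grep -i stale
  ceph pg dump_stuck stale
  ceph pg dump_stuck inactive

  # ask a specific PG what it is waiting for
  ceph pg 0.1f query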
On Thu, Sep 18, 2014 at 03:36:48PM +0200, Alexandre DERUMIER wrote:
> >>Has anyone ever tested multi-volume performance on a *FULL* SSD setup?
>
> I know that Stefan Priebe runs full SSD clusters in production and has done
> benchmarks.
> (As far as I remember, he benched around 20k peak w