On Wednesday, April 10, 2013, Gregory Farnum wrote:
> On Wednesday, April 10, 2013 at 2:53 AM, Waed Bataineh wrote:
>> Hello,
>>
>> I have several questions; I'd appreciate answers to them:
>>
>> 1. Does an OSD have a fixed size, or does it adapt to the machine
>> I'm working with?
We've discussed the order of work (you can see my recent Ceph blog
post on the subject; though it's subject to revision) but haven't
committed to any dates at this time. Sorry. :(
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Wed, Apr 10, 2013 at 12:43 PM, Maik Kulbe wrote:
On 10.04.2013 at 21:36, Wido den Hollander wrote:
> On 04/10/2013 09:16 PM, Stefan Priebe wrote:
>> Hello list,
>>
>> I'm using Ceph 0.56.4 and I have to replace some drives. But while Ceph is
>> backfilling/recovering, all VMs have high latencies and sometimes
>> they're even offline. I just replace one drive at a time.
So in fact, there is no solution other than normal file-based backups and
hoping the MDS won't crash?
Are there any plans as to which release will contain a stable CephFS/MDS?
On 04/10/2013 09:16 PM, Stefan Priebe wrote:
Hello list,
I'm using Ceph 0.56.4 and I have to replace some drives. But while Ceph is
backfilling/recovering, all VMs have high latencies and sometimes
they're even offline. I just replace one drive at a time.
I put in the new drives and I'm reweighting them from 0.0 to 1.0 in 0.1 steps.
Well, if you've made changes to your data which impacted the metadata,
and then you restore to a backup of the metadata pool, but not the
data, then what's there isn't what CephFS thinks is there, which would
be confusing for all the same reasons it would be in a local
filesystem. You could construct
Hello list,
I'm using Ceph 0.56.4 and I have to replace some drives. But while Ceph is
backfilling/recovering, all VMs have high latencies and sometimes
they're even offline. I just replace one drive at a time.
I put in the new drives and I'm reweighting them from 0.0 to 1.0 in
0.1 steps.
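A minimal sketch of what that gradual reweight, combined with throttled recovery, can look like on the command line; osd.12 is a placeholder ID and the exact injectargs spelling differs a bit between releases, so treat this as illustrative only:

# slow down backfill/recovery so client I/O keeps breathing
# (older releases use the form: ceph osd tell \* injectargs '--osd-max-backfills 1')
ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'

# bring the replaced drive in gradually, 0.1 at a time
# (use "ceph osd reweight osd.12 $w" instead if you meant the 0..1 in/out override weight)
for w in 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0; do
    ceph osd crush reweight osd.12 $w
    # wait until "ceph -s" shows backfill has settled before the next step
    sleep 600
done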
I think going backwards in time is what a backup is for, isn't it? ;)
My question really just is whether it is possible to back it up. It just really
stinks to completely rebuild the cluster and re-import all the data every time I
make some small mistake in the environment and the metadata breaks.
[Please keep all mail on the list.]
Hmm, that OSD log doesn't show a crash. I thought you said they were all
crashing? Do they come up okay when you turn them back on again?
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Wednesday, April 10, 2013 at 9:27 AM, Witalij Poljatchek wrote:
[Re-adding the list.]
When the OSDs crash they will print out to their log a short description of
what happened, with a bunch of function names.
Unfortunately the problem you've run into is probably non-trivial to solve as
you've introduced a bit of a weird situation into the permanent record
Sounds like they aren't handling the transition very well when trying to
calculate which old OSDs might have held the PG. Are you trying to salvage the
data that was in it, or can you throw it away?
Can you post the backtrace they're producing?
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
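If it helps, a minimal sketch of where that backtrace usually lives, assuming default log paths and osd.0 as a stand-in for whichever daemon died:

# default OSD log location; substitute the id of a crashed daemon
less /var/log/ceph/ceph-osd.0.log

# the crash dump starts near lines like these and is followed by the
# function names mentioned above
grep -n -A 30 -e 'Caught signal' -e 'FAILED assert' /var/log/ceph/ceph-osd.0.log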
On Wednesday, April 10, 2013 at 2:53 AM, Waed Bataineh wrote:
> Hello,
>
> I have several questions; I'd appreciate answers to them:
>
> 1. Does an OSD have a fixed size, or does it adapt to the machine
> I'm working with?
You can weight OSDs to account for different capacities or
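The reply is cut off above, but for the capacity part a common convention is to make the CRUSH weight roughly proportional to the drive size (for example about 1.0 per TB); osd.3 and the numbers here are placeholders:

# e.g. a 2 TB disk sitting next to 1 TB disks
ceph osd crush reweight osd.3 2.0
# verify how the weights ended up
ceph osd tree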
When executing ceph -w I see the following warning:
2013-04-09 22:38:07.288948 osd.2 [WRN] slow request 30.180683 seconds old,
received at 2013-04-09 22:37:37.108178: osd_op(client.4107.1:9678
102.01df [write 0~4194304 [6@0]] 0.4e208174 snapc 1=[])
currently waiting for subops from [0]
What do you mean, you want a pool with replication=0?
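For the slow request warning above, a few stock commands help narrow down where the osd_op is stuck; the pool and object names below are placeholders for the ones in your own warning:

# list the current warnings, including which OSDs/requests are slow
ceph health detail
# find the PG and acting OSDs behind the object named in the warning
ceph osd map rbd myobject
# check whether osd.0 (the subop target above) is up and where it sits
ceph osd tree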
Sent from my iPhone
On 2013-4-10, at 18:59, "Witalij Poljatchek" <witalij.poljatc...@aixit.com> wrote:
Hello,
I need help solving a segfault on all OSDs in my test cluster.
I set up Ceph from scratch.
service ceph -a start
ceph -w
health HEALTH_OK
m
Hello,
I need help solving a segfault on all OSDs in my test cluster.
I set up Ceph from scratch.
service ceph -a start
ceph -w
health HEALTH_OK
monmap e1: 3 mons at
{1=10.200.20.1:6789/0,2=10.200.20.2:6789/0,3=10.200.20.3:6789/0},
election epoch 6, quorum 0,1,2 1,2,3
osdmap e5: 4 osd
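A couple of generic checks that show what state the OSDs are actually in after the crashes; nothing here is specific to this setup:

# how many OSDs are still up/in
ceph osd stat
ceph osd tree
# segfaults of the ceph-osd processes normally also show up in the kernel log
dmesg | grep -i -e segfault -e ceph-osd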
Hello,
I have several questions; I'd appreciate answers to them:
1. Does an OSD have a fixed size, or does it adapt to the machine
I'm working with?
If the latter is the case, what is the equation?
2. I can list all the objects in a certain pool, but can we
determine the obje
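The second question is cut off, but assuming it is about getting details on those objects, a few stock commands with placeholder pool/object names cover the common cases:

# list every object in a pool
rados -p mypool ls
# size and mtime of a single object
rados -p mypool stat myobject
# which PG and OSDs an object maps to
ceph osd map mypool myobject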