On 01/02/2014 01:40 PM, James Harper wrote:
I just had to restore an MS Exchange database after a Ceph hiccup (no actual
data lost - Exchange is very good like that with its no-loss restore!). The
order of events went something like:
. Loss of connection from an OSD to the cluster network (public
On 01/02/2014 10:51 PM, James Harper wrote:
I've not used ceph snapshots before. The documentation says that the rbd device
should not be in use before creating a snapshot. Does this mean that creating a
snapshot is not an atomic operation? I'm happy with a crash consistent
filesystem if that'
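As a hedged aside for readers: an RBD snapshot is taken at the block level, so without quiescing the filesystem first the result is crash-consistent at best. A minimal sketch from the CLI (pool, image, and mountpoint names here are illustrative, not from the thread):

  fsfreeze -f /mnt                     # optional: quiesce the fs inside the guest first
  rbd snap create rbd/myimage@snap1
  fsfreeze -u /mnt
  rbd snap ls rbd/myimage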
Hi guys,
Could someone explain what the new perf stats show and whether the numbers are
reasonable on my cluster?
I am concerned about the high fs_commit_latency, which seems to be above 150 ms
for all OSDs. I've tried to find the documentation on what this command
actually shows, but couldn't f
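For anyone searching later: those numbers most likely come from "ceph osd perf". A hedged sketch of the relevant commands (the osd.0 id is illustrative; the daemon command must run on that OSD's host):

  ceph osd perf                  # per-OSD fs_commit_latency(ms) / fs_apply_latency(ms)
  ceph daemon osd.0 perf dump    # full counter dump for a single OSD

fs_commit_latency is roughly the time the OSD takes to commit its journal to disk, so values around 150 ms are not unheard of when journals share spinning disks with the data.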
On 01/02/2014 04:00 PM, Kuo Hugo wrote:
Hi all,
I did a test to check RADOS's recovery.
1. I echoed a string into an object inside a placement group's directory on an OSD.
2. After an OSD scrub, ceph health shows "1 pgs inconsistent". Will
it be fixed later?
You have to manually instruct the OSD
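The truncated answer presumably refers to PG repair; a hedged sketch (the PG id is illustrative):

  ceph health detail       # lists which PG is inconsistent
  ceph pg repair 2.1f      # instructs the primary OSD to repair that PG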
On 01/02/2014 05:42 PM, upendrayadav.u wrote:
Hi,
1. Is Ceph feasible for storing a large number of small files in a Ceph
cluster, with OSD failure and recovery taken care of?
2. If we have a *4 TB OSD (almost 85% full)* storing only small
files (500 KB to 1024 KB), and it fails (due to
Hello,
I have a problem with the RADOS Gateway.
When I do a wget http://p1.13h.com/swift/v1/test/test.mp3 on this object, there
is no problem getting it,
but when I put it in a browser or VLC, it stops playing after 32 seconds or less.
Could anyone help me?
Regards,
Julien
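A hedged diagnostic sketch, not from the thread: browsers and VLC stream media using HTTP Range requests, so checking whether the gateway handles a ranged GET correctly is a reasonable first step:

  curl -v -r 1000000-1999999 -o /dev/null http://p1.13h.com/swift/v1/test/test.mp3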
Thanks a lot... for your detailed and very clear answer :)
Regards,
Upendra Yadav
DFS
On Fri, 03 Jan 2014 15:52:09 +0530 Wido den Hollander wrote:
On 01/02/2014 05:42 PM, upendrayadav.u wrote:
> Hi,
> 1. Is Ceph feasible for storing a large number of small files in a Ceph
> cluster, with OSD failure and recovery taken care of
Hi All,
There is a new release of ceph-deploy, the easy deployment tool for Ceph.
This is mostly a bug-fix release, although one minor feature was
added: the ability to
install/remove packages from remote hosts with a new sub-command: `pkg`
As we continue to add features (or improve old ones) we
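For example, a hedged sketch of the new sub-command (package and host names are illustrative; see "ceph-deploy pkg --help" for the exact flags):

  ceph-deploy pkg --install htop node1 node2
  ceph-deploy pkg --remove htop node1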
You'll need to register the new pool with the MDS:
ceph mds add_data_pool data1
On Thu, Jan 2, 2014 at 9:48 PM, 鹏 wrote:
> Hi all;
> today, I want to use the function ceph_open_layout() in libcephfs.h
>
> I created a new pool successfully:
> # rados mkpool data1
> and then I edited the code like this
On Fri, 3 Jan 2014, 鹏 wrote:
> Hi all;
> today, I want to use the function ceph_open_layout() in libcephfs.h
>
> I created a new pool successfully:
> # rados mkpool data1
You also need to do
ceph mds add_data_pool data1
sage
> and then I edited the code like this:
>
> int fd = ceph_open_layout(
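For anyone following along, here is a minimal compilable sketch of the call (my own illustration, not the poster's actual code: it assumes the pool "data1" exists and has been registered via "ceph mds add_data_pool data1" as Sage describes, the layout values are arbitrary, and error handling is abbreviated):

#include <cephfs/libcephfs.h>
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
    struct ceph_mount_info *cmount;
    int fd;

    ceph_create(&cmount, NULL);         /* default client id */
    ceph_conf_read_file(cmount, NULL);  /* read /etc/ceph/ceph.conf */
    ceph_mount(cmount, "/");

    /* 4 MB stripe unit and object size, stripe count 1, pool "data1" */
    fd = ceph_open_layout(cmount, "/testfile", O_CREAT | O_WRONLY, 0644,
                          4194304, 1, 4194304, "data1");
    if (fd < 0)
        fprintf(stderr, "ceph_open_layout failed: %d\n", fd);
    else
        ceph_close(cmount, fd);

    ceph_unmount(cmount);
    ceph_release(cmount);
    return 0;
}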
Run
'ceph osd crush tunables optimal'
or adjust an offline map file via the crushtool command line (more
annoying) and retest; I suspect that is the problem.
http://ceph.com/docs/master/rados/operations/crush-map/#tunables
sage
On Fri, 3 Jan 2014, Dietmar Maurer wrote:
> > In both cases, yo
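For reference, a hedged sketch of the offline route Sage mentions (the tunable shown is illustrative; see the doc above for which values "optimal" implies):

  ceph osd getcrushmap -o crushmap
  crushtool -i crushmap --set-chooseleaf-descend-once 1 -o crushmap.new
  ceph osd setcrushmap -i crushmap.new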
On 1/3/14, 3:21 AM, "Josh Durgin" wrote:
>On 01/02/2014 10:51 PM, James Harper wrote:
>> I've not used ceph snapshots before. The documentation says that the
>>rbd device should not be in use before creating a snapshot. Does this
>>mean that creating a snapshot is not an atomic operation? I'm h
That's useful information.
Thanks.
2014/1/3 Wido den Hollander
> On 01/02/2014 04:00 PM, Kuo Hugo wrote:
>
>> Hi all,
>>
>> I did a test to check RADOS's recovery.
>>
>> 1. I echoed a string into an object inside a placement group's directory on an OSD.
>> 2. After an OSD scrub, ceph health shows
Hi all
I have a problem with gateway and swift.
When I try to get it with wget, curl, or the swift command, I have no problem getting my
file! But when I try it directly in my browser, it stops after between 6
and 40 seconds.
Ceph.conf:
[client.radosgw.gateway]
host = p1
keyring = /etc/ceph/keyr
I figured out why this was happening. When I went through the quick start
guide, I created a directory on the admin node that was /home/ceph/storage
and this is where ceph.conf, ceph.log, keyrings, etc. ended up. What I
realized though is that when I was running the ceph commands on the admin
nod
Hi,
I was wondering if there are any procedures for rebooting a node? Presumably
when a node is rebooted, Ceph will lose contact with the OSDs and begin moving
data around. I’ve not actually had to reboot a node of my cluster yet, but may
need to
On Jan 3, 2014, at 4:43 PM, Dane Elwell wrote:
> I was wondering if there are any procedures for rebooting a node? Presumably
> when a node is rebooted, Ceph will lose contact with the OSDs and begin
> moving data around. I’ve not actually had to reboot a node of my cluster yet,
> but may need
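The usual precaution (a hedged sketch, since the actual replies are truncated above) is to stop the cluster from re-replicating while the node is down:

  ceph osd set noout      # don't mark down OSDs out, so no backfill starts
  # ... reboot the node and wait for its OSDs to rejoin ...
  ceph osd unset noout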