Hi,
this is Ceph 0.77 on Ubuntu 13.04 (Ceph server and Ceph client).
The df command gives goofy results:
root@bd-a:/mnt/myceph/Backup/bs3/tapes# df -h .
Dateisystem             Größe  Benutzt  Verf.  Verw%  Eingehängt auf
xxx.xxx.xxx.xxx:6789:/  60T
Dear all,
In the spirit of the Ceph User Committee's objectives, a long overdue
meetup for Lisbon has been created [1].
There's no date for the first get-together just yet, as that should be
scheduled to optimize the number of participants.
If any of the list members are in Lisbon, outskir
I think the result reported by df is correct. It's likely you have
lots of sparse files in cephfs.
For sparse files, CephFS increases the "used" space by the full file size. See
http://ceph.com/docs/next/dev/differences-from-posix/
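A quick way to see what is going on for a file you suspect is sparse (just a
sketch; the path is a placeholder and du/stat here are the GNU versions). If I
read the page above correctly, CephFS fills st_blocks from the file size rather
than from allocated blocks, so holes get counted as used space:

# apparent size (what ls -l shows) vs. the block-based usage du normally reports
du -h --apparent-size /mnt/myceph/somefile
du -h /mnt/myceph/somefile

# the raw st_blocks value behind du's second number
stat -c '%s bytes, %b blocks of %B bytes' /mnt/myceph/somefile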
Yan, Zheng
On Fri, Feb 21, 2014 at 6:13 PM, Markus Goldberg wrote:
Hi !
I have failover clusters (IMAP service) with 2 members, configured with
Ubuntu + DRBD + ext4. My IMAP clusters work fine with ~50k email accounts.
See design here: http://adminlinux.com.br/my_imap_cluster_design.txt
I would like to use a distributed filesystem architecture to
pro
Thanks, Greg, for the response; my comments are inline…
Thanks,
Guang
On Feb 20, 2014, at 11:16 PM, Gregory Farnum wrote:
> On Tue, Feb 18, 2014 at 7:24 AM, Guang Yang wrote:
>> Hi ceph-users,
>> We are using Ceph (radosgw) to store user generated images, as GET latency
>> is critical for us, most re
Hi Ghislain,
Try erasing all keyring files and then running ceph-deploy gatherkeys
mon_host before trying to create your new OSD!
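Roughly, on the admin node (a sketch; the working directory, monitor name, OSD
host and disk are placeholders, syntax as of ceph-deploy 1.3.x):

# in the ceph-deploy working directory, drop the stale keyrings
rm -f ./*.keyring

# re-fetch the current keys from a monitor
ceph-deploy gatherkeys mon_host

# then retry creating the OSD
ceph-deploy osd create osd_host:sdb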
:-)
2014-02-19 18:26 GMT+01:00 :
>
> Hi all,
>
> I'd like to submit a strange behavior...
>
> Context : lab platform
> CEPH emperor
> Ceph-deploy 1.3.4
> Ubuntu 1
Hi Greg,
Yes, this still happens after the updatedb fix.
[root@xxx dan]# mount
...
zzz:6789:/ on /mnt/ceph type ceph (name=cephfs,key=client.cephfs)
[root@xxx dan]# pwd
/mnt/ceph/dan
[root@xxx dan]# dd if=/dev/zero of=yyy bs=4M count=2000
2000+0 records in
2000+0 records out
8388608000 bytes (8.
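A sketch of one way to watch for the slow requests while the dd runs (osd.0 and
the admin socket path are placeholders):

# stream cluster events in a second terminal and look for "slow request" lines
ceph -w

# or poll the health summary
ceph health detail

# on a suspect OSD host, list the ops currently in flight via the admin socket
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_ops_in_flight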
I'm wondering how Ceph deals with OSDs that have been away for a while.
Do they need to be completely rebuilt, or does it know which objects are
good and which need to go?
I know Ceph handles the situation of an OSD going away well, and
rebalances etc. to maintain the required redundancy levels. Bu
Hi,
no, it's certain that the backup files really are that big. The output of the
du command is correct.
The files were rsynced from another system, which is not CephFS.
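One way to double-check that (a sketch; the path is the one from the df output
above, and keep in mind the st_blocks caveat from the page Yan linked):

# if on-disk usage and apparent size roughly match, the backups really occupy
# that space and are not sparse
du -sh /mnt/myceph/Backup/bs3/tapes
du -sh --apparent-size /mnt/myceph/Backup/bs3/tapes

# a plain rsync writes zero-filled regions as real data; -S/--sparse would have
# recreated holes on the destination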
Markus
On 21.02.2014 13:34, Yan, Zheng wrote:
I think the result reported by df is correct. It's likely you have
lots of sparse files
It depends on how long ago (in terms of data writes) it disappeared.
Each PG has a log of the changes that have been made (by default I
think it's 3000? Maybe just 1k), and if an OSD goes away and comes
back while the logs still overlap it will just sync up the changed
objects. Otherwise it has to
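To check the actual limits on a running OSD (a sketch; osd.0 and the socket
path are placeholders):

# ask a running OSD for its pg log length limits via the admin socket
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config get osd_min_pg_log_entries
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config get osd_max_pg_log_entries

# after an OSD comes back, the pg states show whether it is doing log-based
# recovery or a full backfill
ceph pg dump | grep -E 'recover|backfill'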
I haven't done the math, but it's probably a result of how the df
command interprets the output of the statfs syscall. We changed the
f_frsize and f_bsize units we report to make it work more
consistently across different systems "recently"; I don't know if that
change was before or after the ker
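One way to see the raw statfs numbers df is multiplying, for comparison across
kernels (a sketch; the mount point is a placeholder):

# GNU stat prints f_bsize ("Block size") and f_frsize ("Fundamental block size")
# along with the raw block counts that df turns into sizes
stat -f /mnt/ceph

# df with a 1-byte block size shows the resulting products directly
df -B1 /mnt/ceph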
Thanks Greg. Can I just confirm, does it do a full backfill
automatically in the case where the log no longer overlaps?
I guess the key question is - do I have to worry about it, or will it
always "do the right thing"?
Tim.
On Fri, Feb 21, 2014 at 11:57:09AM -0800, Gregory Farnum wrote:
> It dep
Srinivasa, I'm pretty sure your problem is that your Fedora systems are missing
some of the required library files. I just got S3 working on my Ubuntu Raring
setup. Just follow exactly what is written at
http://ceph.com/docs/master/install/install-ceph-gateway/ . Still a question
to everyone else: for swif
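For reference, a minimal smoke test once the gateway from that guide is running
(a sketch; the uid, display name and host are placeholders, assuming the gateway
answers on plain HTTP):

# create an S3 user on the gateway host; the output includes access and secret keys
radosgw-admin user create --uid=testuser --display-name="Test User"

# an anonymous GET against the gateway should return an S3 XML bucket listing
curl -i http://gateway-host/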
You don't have to worry about it; the OSDs will always just do the
right thing. :)
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Fri, Feb 21, 2014 at 12:40 PM, Tim Bishop wrote:
> Thanks Greg. Can I just confirm, does it do a full backfill
> automatically in the case where
On Fri, Jan 31, 2014 at 9:52 PM, Arne Wiebalck wrote:
> Hi,
>
> We observe that we can easily create slow requests with a simple dd on
> CephFS:
>
> -->
> [root@p05153026953834 dd]# dd if=/dev/zero of=xxx bs=4M count=1000
> 1000+0 records in
> 1000+0 records out
> 4194304000 bytes (4.2 GB) copied,
On Sat, Feb 22, 2014 at 12:04 AM, Dan van der Ster
wrote:
> Hi Greg,
> Yes, this still happens after the updatedb fix.
>
> [root@xxx dan]# mount
> ...
> zzz:6789:/ on /mnt/ceph type ceph (name=cephfs,key=client.cephfs)
>
> [root@xxx dan]# pwd
> /mnt/ceph/dan
>
> [root@xxx dan]# dd if=/dev/zero of=