Ok. I will try, thanks.
One further question - does /etc/init.d/radosgw need to be started manually
every time this host is rebooted? Why is it not part of "service ceph
-a start"?
On Tue, Apr 23, 2013 at 11:05 PM, Lorieri wrote:
> I've made some tests again with s3cmd
>
> you need to have a
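A minimal sketch of enabling the gateway at boot instead of starting it by hand; the exact service name (radosgw vs. ceph-radosgw) depends on the distribution and packaging, so treat these as assumptions:
  # RHEL/CentOS-style (service name assumed to be ceph-radosgw)
  chkconfig ceph-radosgw on
  service ceph-radosgw start
  # Debian/Ubuntu-style (init script assumed to be /etc/init.d/radosgw)
  update-rc.d radosgw defaults
  /etc/init.d/radosgw start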
Hi All,
I am running the MapReduce wordcount code (on a ceph cluster consisting of
2 VMs) on a data set consisting of 5000-odd files (approx. 10 GB in
total). Periodically, the ceph health says that the mds is
laggy/unresponsive, and I get messages like the following:
13/04/24 10:41:00 INFO m
Hi guys,
The "librbd: Error listing snapshots: (95) Operation not supported"
issue has been resolved. The root cause is that when I deployed the OSD, I
only copied /usr/local/bin/* from the MON-MDS node to the OSD and did not
include /usr/lib/rados-classes. So I re-transferred the /usr/lib/rados-classes
folder
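A rough sketch of that kind of fix; the OSD hostname and image name below are placeholders, and the paths assume the same source-built layout described above:
  # Copy the RADOS object classes (librbd needs the rbd class) to the OSD node
  scp -r /usr/lib/rados-classes root@osd-node:/usr/lib/
  # Restart the local OSD daemons so they pick up the classes, then re-check
  service ceph restart osd
  rbd snap ls <image-name>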
Hi,
I'm currently looking into several options on how to use ceph in a small to mid
size web cluster.
I've ruled out CephFS as it is sadly not stable enough.
Then I went with RBD and different approaches. OCFS2 on RBD did the job well
but had extreme performance issues when two processes were
Dear all
I have 2 servers:
each server:
1 RAID card: -- RAID 6: 54TB, divided into 4 OSDs (formatted ext4)
-- RAID 0: 248G, journal for the 4 OSDs (ext4).
My config file:
[global]
auth supported = cephx
auth cluster required = cephx
auth service required
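For completeness, journals on a separate device are usually pointed at per OSD in ceph.conf; the device paths below are illustrative assumptions, not taken from this setup:
  [osd.0]
      osd journal = /dev/sdb1
  [osd.1]
      osd journal = /dev/sdb2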
2013/4/24 Maik Kulbe :
> At the moment I'm trying a solution that uses RBD with a normal FS like EXT4
> or ZFS and where two servers export that block device via NFS (with heartbeat
> for redundancy and failover) but that involves problems with file system
> consistency.
If you don't need load balan
Hi,
I've been working with a Ceph 0.56.4 setup and I've been seeing some RBD
read performance issues with single processes / threads.
The setup is:
- 36 OSDs (2TB WD RE drives)
- 9 hosts (4 OSDs per host)
- 120GB Intel SSD as a journal per host
- 32GB Ram per host
- Quad Core Xeon CPU (E3-1220 V2 @
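One rough way to isolate single-client sequential read throughput on a setup like this (the pool, image name and availability of the --no-cleanup flag are assumptions, not details from the original post):
  # RADOS-level baseline: write objects, then read them back with one thread
  rados -p rbd bench 60 write --no-cleanup
  rados -p rbd bench 60 seq -t 1
  # RBD-level: map an image and read it with a single stream, bypassing the page cache
  rbd map test-image
  dd if=/dev/rbd0 of=/dev/null bs=4M iflag=direct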
On 04/24/2013 06:17 AM, Wido den Hollander wrote:
Hi,
I've been working with a Ceph 0.56.4 setup and I've been seeing some RBD
read performance issues with single processes / threads.
The setup is:
- 36 OSDs (2TB WD RE drives)
- 9 hosts (4 OSDs per host)
- 120GB Intel SSD as a journal per host
- 32GB
On 04/24/2013 05:18 AM, Maik Kulbe wrote:
Hi,
I'm currently looking into several options on how to use ceph in a small
to mid size web cluster.
I've ruled out CephFS as it is sadly not stable enough.
Then I went with RBD and different approaches. OCFS2 on RBD did the job
well but had extreme pe
On 04/24/2013 05:18 AM, Maik Kulbe wrote:
> Hi,
>
> I'm currently looking into several options on how to use ceph in a
small
> to mid size web cluster.
>
> I've ruled out CephFS as it is sadly not stable enough.
>
> Then I went with RBD and different approaches. OCFS2 on RBD did the
job
> well but
Varun,
What version of Ceph are you running? Can you confirm that the MDS daemon
(ceph-mds) is still running or has crashed when the MDS becomes
laggy/unresponsive? If it has crashed, check the MDS log for a crash report.
There were a couple of Hadoop workloads that caused the MDS to misbehave fo
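A quick way to check whether ceph-mds is still alive and what the cluster thinks of it; the log path is the packaging default and may differ on a source install:
  ceph mds stat
  ceph -s
  ps aux | grep [c]eph-mds
  less /var/log/ceph/ceph-mds.*.log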
Hey world,
I know that this question is tricky since it depends on the cluster size
and objects profiles.
For those who are using CephFS, what is your working cache size on your
cluster? What problems have you encountered with this configuration?
And for Inktank, do you have any recommendation o
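For reference, the MDS cache size (counted in inodes) is controlled by a single option; the value below is only an illustrative assumption, not a recommendation from this thread:
  [mds]
      # default is 100000 inodes; a larger cache cuts RADOS round-trips
      # at the cost of MDS memory
      mds cache size = 400000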
On Wed, Apr 24, 2013 at 8:39 AM, Kevin Decherf wrote:
> Hey world,
>
> I know that this question is tricky since it depends on the cluster size
> and objects profiles.
>
> For those who are using CephFS, what is your working cache size on your
> cluster? What problems have you encountered with thi
On 04/24/2013 02:23 PM, Mark Nelson wrote:
On 04/24/2013 06:17 AM, Wido den Hollander wrote:
Hi,
I've been working with a Ceph 0.56.4 setup and I've been seeing some RBD
read performance issues with single processes / threads.
The setup is:
- 36 OSDs (2TB WD RE drives)
- 9 hosts (4 OSDs per host)
-
Hi everyone-
We are down to a handful of urgent bugs (3!) and a cuttlefish release date
that is less than a week away. Thank you to everyone who has been
involved in coding, testing, and stabilizing this release. We are close!
If you would like to test the current release candidate, your effo
Hi
I use the kernel module. I found only one mount parameter for
readahead: rsize. But it didn't help.
> These settings are a bit silly. I think what you've got there is
> logically equivalent to having the stripe_unit and object_size both
> set to 512KB, but I'm not certain. 512KB is also a bit small
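For what it's worth, rsize is passed as a CephFS kernel-client mount option; the monitor address, secret file and value here are assumptions for illustration:
  mount -t ceph 192.168.0.1:6789:/ /mnt/ceph \
      -o name=admin,secretfile=/etc/ceph/admin.secret,rsize=524288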
I'm seeing a few messages like this on my OSD logfiles:
2013-04-25 00:00:08.174869 e3ca2b70 0 bad crc in data 1652929673 != exp
2156854821
2013-04-25 00:00:08.179749 e3ca2b70 0 -- 192.168.200.191:6882/30908 >>
192.168.200.197:0/3338580093 pipe(0xc70e1c0 sd=24 :6882 s=0 pgs=0 cs=0
l=0).accept
Dear all
I use ceph 0.56.4 on CentOS 6.3 with kernel 3.8.8-1.el6.elrepo.x86_64.
I have tried multiple kernel versions, but I am still encountering these messages:
libceph: osd3 172.30.33.2:6810 socket closed (con state OPEN)
libceph: osd3 172.30.33.2:6810 socket error on write
libceph: osd3 172.30.33.2:6810 s
Given a partition, is there a command which can be run to check whether
the partition is used as the journal of an OSD and, if so, which OSD it
belongs to?
I'm not sure what the point of running with replication set to 1 is, but a new
feature adds ceph commands to turn off scrubbing:
Check ceph --help to see if you have a version that has this.
ceph osd set <flag>
ceph osd unset <flag>
You might want to turn off both kinds of scrubbing.
ceph osd set noscru
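Spelled out, the two flags are noscrub and nodeep-scrub (nodeep-scrub is the one quoted later in the thread; noscrub is its regular-scrub counterpart):
  ceph osd set noscrub
  ceph osd set nodeep-scrub
  # and to re-enable later:
  ceph osd unset noscrub
  ceph osd unset nodeep-scrub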
Mandell,
Not sure if you can start with a partition to see which OSD it belongs
to, but you can start with the OSDs and see which journal partition
belongs to each:
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep osd_journal | grep -v size
- Mike
On 4/24/2013 9:05 PM, Man
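Going the other direction (partition -> OSD) can also be approximated by looking at the journal symlinks, assuming the default data path under /var/lib/ceph/osd:
  # Each OSD data directory has a 'journal' entry; when it is a symlink it points
  # at the journal partition, so the owning OSD can be read off the path.
  ls -l /var/lib/ceph/osd/ceph-*/journal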
Hi David Zafman
I use Ceph 0.56.4
I cannot use the command "ceph osd set nodeep-scrub".
Is this command only available in version 0.60?
Thanks.
TienBm
Skype: tien.bm0805
On Thu, Apr 25, 2013 at 8:12 AM, David Zafman wrote:
>
> I'm not sure what the point of running with replication set to 1, but a
> new f
The Ceph version was a 0.58 build I cloned from the GitHub master branch
(0.58-500-gaf3b163 (af3b16349a49a8aee401e27c1b71fd704b31297c)). The MDS
daemon had crashed when it became laggy; I restarted it and the MapReduce code
continued to execute. I am unable to see any MDS logs in /var/log though.
Should I be enabl
You may need to be root to look at the logs in /var/log/ceph. Turning up
logging is helpful, too. Is the bug reproducible? It'd be great if you could
get a core dump file for the crashed MDS process.
-Noah
On Apr 24, 2013, at 9:53 PM, Varun Chandramouli wrote:
> Ceph version was a 0.58 build
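A sketch of the usual way to raise MDS logging and allow core dumps on this era of Ceph; the debug levels shown are common choices, not values from the thread:
  # In ceph.conf on the MDS node, then restart ceph-mds:
  [mds]
      debug mds = 20
      debug ms = 1
  # Logs normally land in /var/log/ceph/ceph-mds.<name>.log
  # Allow core dumps before restarting the daemon:
  ulimit -c unlimited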