I followed this documentation to add monitors to my already existing
cluster with 1 mon:
http://ceph.com/docs/master/rados/operations/add-or-rm-mons/
When I follow this documentation,
the new monitor assimilates the old monitor, so my monitor status is gone.
But when I skip the "ceph mon add
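For reference, the manual sequence that document describes is roughly the
following (the {mon-id}, paths and address are placeholders in the doc's own
style, adjust them for your cluster):
mkdir /var/lib/ceph/mon/ceph-{mon-id}
ceph auth get mon. -o /tmp/mon.keyring
ceph mon getmap -o /tmp/monmap
ceph-mon -i {mon-id} --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
ceph mon add {mon-id} {ip}:{port}
ceph-mon -i {mon-id} --public-addr {ip}:{port}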
Hello!
On Tue, Jul 07, 2015 at 02:21:56PM +0530, mallikarjuna.biradar wrote:
> Hi all,
> Setup details:
> Two storage enclosures each connected to 4 OSD nodes (Shared storage).
> Failure domain is Chassis (enclosure) level. Replication count is 2.
> Each host has been allotted 4 drives.
Hello all,
When I try to add more than one OSD to a host and the backfilling process
starts, all the OSD daemons except one of them become stuck in the D state. When
this happens they are shown as out and down (when running ceph osd tree).
The only way I can kill the processes is to
remove the o
Hello Jan,
I am testing your scripts, because we also want to test OSDs and VMs
on the same server.
I am new to cgroups, so this might be a very newbie question.
In your script you always reference the file
/cgroup/cpuset/libvirt/cpuset.cpus
but I have the file in /sys/fs/cgroup/cpuset/libvir
Hi!
The /cgroup/* mount point is probably a RHEL6 thing, recent distributions seem
to use /sys/fs/cgroup like in your case (maybe because of systemd?). On RHEL 6
the mount points are configured in /etc/cgconfig.conf and /cgroup is the
default.
I also saw the pull request from you on github and
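A quick way to confirm where the cpuset controller lives on a given box is to
check the mount table, e.g.:
grep cpuset /proc/mounts
cat /sys/fs/cgroup/cpuset/libvirt/cpuset.cpus
(the second path is the systemd-style layout; on RHEL 6 it would be under
/cgroup/cpuset/ instead)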
When those processes become blocked, are the drives busy or idle?
Can you post the output from
"ps -awexo pid,tt,user,fname,tmout,f,wchan" on those processes when that
happens?
My guess would be they really are waiting for the disk array for some reason -
can you check if you can read/write to t
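If the exact flags differ on your system, a generic way to list D-state
(uninterruptible sleep) tasks and their wait channel would be something like:
ps -eo pid,stat,wchan:32,comm | awk '$2 ~ /^D/'
cat /proc/<pid>/stack
(the second command needs root and a reasonably recent kernel; <pid> here is
one of the stuck ceph-osd processes)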
On 27-07-15 14:21, Jan Schermer wrote:
> Hi!
> The /cgroup/* mount point is probably a RHEL6 thing, recent distributions
> seem to use /sys/fs/cgroup like in your case (maybe because of systemd?). On
> RHEL 6 the mount points are configured in /etc/cgconfig.conf and /cgroup is
> the default.
>
On Mon, Jul 27, 2015 at 2:51 PM, Wido den Hollander wrote:
> I'm testing with it on 48-core, 256GB machines with 90 OSDs each. This
> is a +/- 20PB Ceph cluster and I'm trying to see how much we would
> benefit from it.
Cool. How many OSDs total?
Cheers, Dan
On 27-07-15 14:56, Dan van der Ster wrote:
> On Mon, Jul 27, 2015 at 2:51 PM, Wido den Hollander wrote:
>> I'm testing with it on 48-core, 256GB machines with 90 OSDs each. This
>> is a +/- 20PB Ceph cluster and I'm trying to see how much we would
>> benefit from it.
>
> Cool. How many OSDs tot
Cool! Any immediate effect you noticed? Did you partition it into 2 cpusets
corresponding to NUMA nodes or more?
Jan
> On 27 Jul 2015, at 15:21, Wido den Hollander wrote:
>
> On 27-07-15 14:56, Dan van der Ster wrote:
>> On Mon, Jul 27, 2015 at 2:51 PM, Wido den Hollander wrote:
>>> I'm
Hi all,
the FAQ at http://ceph.com/docs/cuttlefish/faq/ mentions the possibility to
export a mounted cephfs via samba. This combination exhibits a very
weird behaviour, though.
We have a directory on cephfs with many small xml snippets. If I repeatedly
ls the directory on Unix, I get the sam
On 27-07-15 15:28, Jan Schermer wrote:
> Cool! Any immediate effect you noticed? Did you partition it into 2 cpusets
> corresponding to NUMA nodes or more?
>
Not yet. Cluster is still in build state. Will run benchmarks with and
without pinning set.
Currently the setup is to 2 cpusets with 2
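In case it is useful to others following along, a rough sketch of per-NUMA-node
cpusets with the libcgroup tools could look like this (the core range and group
name are made up, check numactl --hardware for the real topology):
numactl --hardware
cgcreate -g cpuset:/ceph-osd-node0
cgset -r cpuset.cpus=0-23 ceph-osd-node0
cgset -r cpuset.mems=0 ceph-osd-node0
cgclassify -g cpuset:/ceph-osd-node0 $(pidof ceph-osd)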
Hi all,
I'm working on an algorithm to estimate PG count for a set of pools
with minimal input from the user. The main target is OpenStack deployments.
I know about ceph.com/pgcalc/, but I would like to write down the rules and
get Python code.
Can you comment on the following, please?
Input:
* pg_count has no in
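For comparison, a minimal Python sketch of the rule pgcalc appears to apply
(the 100-PGs-per-OSD target and the round-up-to-a-power-of-two step are my
simplifications, not an agreed spec):
def pg_count(num_osds, replica_count, data_share, target_per_osd=100):
    # data_share: fraction of the cluster's data expected in this pool (0..1)
    raw = num_osds * target_per_osd * data_share / replica_count
    power = 1
    while power < raw:
        power *= 2
    return power

# e.g. three pools on 32 OSDs, size 3, holding 70%/20%/10% of the data:
for share in (0.7, 0.2, 0.1):
    print(pg_count(32, 3, share))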
What's the full stack you're using to run this with? If you're using
the kernel client, try updating it or switching to the userspace
(ceph-fuse, or Samba built-in) client. If using userspace, please make
sure you've got the latest one.
-Greg
On Mon, Jul 27, 2015 at 3:16 PM, Jörg Henne wrote:
> H
I don't have any answers but I am also seeing some strange results
exporting a Ceph file system using the Samba VFS interface on Ceph
version 9.0.2. If I mount a Linux client with vers=1, I see the file
system the same as I see it on a ceph file system mount. If I use
vers=2.0 or vers=3.0 on the
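For reference, a minimal vfs_ceph share definition looks roughly like this
(the path and ceph:user_id here are placeholders, not my actual configuration):
[cephfs]
    path = /
    vfs objects = ceph
    ceph:config_file = /etc/ceph/ceph.conf
    ceph:user_id = samba
    kernel share modes = no
    read only = no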
Dear Cephers,
I did a simple test to understand the performance loss of ceph. Here's my
environment:
CPU: 2 * Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
Memory: 4 * 8G 1067 MHz
NIC: 2 * Intel Corporation 10-Gigabit X540-AT2
HDD:
1 * WDC WD1003FZEX ATA Disk 1TB
4 * Seagate ST2000NM0011 ATA Disk 2TB
The server has 128 GB RAM (it also runs KVM virtual machines and they use ~95
GB).
The HBA is LSI Logic SAS1068 PCI-X Fusion-MPT SAS (kernel module is mptsas
version 3.04.20)
I have two HBAs, but I didn't want to use multipath, so there is only one path
/ LUN (the array's controllers cannot r
Hi,
the nfs-ganesha documentation states:
"... This FSAL links to a modified version of the CEPH library that has
been extended to expose its distributed cluster and replication
facilities to the pNFS operations in the FSAL. ... The CEPH library
modifications have not been merged into the ups
On Mon, Jul 27, 2015 at 4:33 PM, Burkhard Linke
wrote:
> Hi,
>
> the nfs-ganesha documentation states:
>
> "... This FSAL links to a modified version of the CEPH library that has been
> extended to expose its distributed cluster and replication facilities to the
> pNFS operations in the FSAL. ...
I added an OSD to a device that I did not really want to and now I am unable to
remove it. Any suggestions as to what I am missing?
Thanks in advance
[rdo@n001 c140-ceph]$ ceph osd out 21
osd.21 is already out.
[rdo@n001 c140-ceph]$ ceph osd down 21
marked down osd.21.
[rdo@n001 c140-ceph]$ ceph
Some more info:
Small PG count:
* More varied data distribution in cluster
Large PG count:
* More even data distribution in cluster
* A very high number of PGs can starve CPU/RAM, causing performance to decrease
We are targeting 50 PGs per OSD to ke
Did you kill the OSD process? You are still showing 28 OSDs up. I'm not
sure that should stop you from removing it, though. You can also try
ceph osd crush rm osd.21
-
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E6
Gregory Farnum writes:
>
> What's the full stack you're using to run this with? If you're using
> the kernel client, try updating it or switching to the userspace
> (ceph-fuse, or Samba built-in) client. If using userspace, please make
> sure you've got the latest one.
> -Greg
The system is:
ro
On Mon, Jul 27, 2015 at 5:46 PM, Jörg Henne wrote:
> Gregory Farnum writes:
>>
>> What's the full stack you're using to run this with? If you're using
>> the kernel client, try updating it or switching to the userspace
>> (ceph-fuse, or Samba built-in) client. If using userspace, please make
>> s
Thanks, stopping the OSD daemon seemed to do the trick.
-Original Message-
From: Robert LeBlanc [mailto:rob...@leblancnet.us]
Sent: Monday, July 27, 2015 11:48 AM
To: Paul Schaleger
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Trying to remove osd
Gregory Farnum writes:
>
> Yeah, I think there were some directory listing bugs in that version
> that Samba is probably running into. They're fixed in a newer kernel
> release (I'm not sure which one exactly, sorry).
Ok, thanks, good to know!
> > and then detaches itself but the mountpoint stay
Moving this to the ceph-user list where it has a better chance of
being answered.
On Mon, Jul 27, 2015 at 5:35 AM, jingxia@baifendian.com
wrote:
> Dear ,
> I have questions to ask.
> The doc covers Hadoop on Ceph but requires the Hadoop 1.1.X stable series.
> I want to know if CephFS Hadoop plugin
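For context, the core-site.xml wiring for the CephFS Hadoop bindings is roughly
the following (property names as I remember them from the cephfs/hadoop doc, so
please double-check them there):
<property>
  <name>fs.default.name</name>
  <value>ceph://mon-host:6789/</value>
</property>
<property>
  <name>fs.ceph.impl</name>
  <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
</property>
<property>
  <name>ceph.conf.file</name>
  <value>/etc/ceph/ceph.conf</value>
</property>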
Hi All,
I recently added some OSDs to the Ceph cluster (0.94.2). I noticed that 'ceph
-s' reported both misplaced AND degraded PGs.
Why should any PGs become degraded? Seems as though Ceph should only be
reporting misplaced PGs?
From the Giant release notes:
Degraded vs misplaced: the Ceph he
Hi Sam,
> The pg might also be degraded right after a map change which changes the
> up/acting sets since the few objects updated right before the map change
> might be new on some replicas and old on the other replicas. While in that
> state, those specific objects are degraded, and the pg would
Hmm, that's odd. Can you attach the osdmap and ceph pg dump prior to the
addition (with all pgs active+clean), then the osdmap and ceph pg dump
afterwards?
-Sam
- Original Message -
From: "Chad William Seys"
To: "Samuel Just" , "ceph-users"
Sent: Monday, July 27, 2015 12:57:23 PM
Subj
Hi Sam,
I'll need help getting the osdmap and pg dump prior to addition.
I can remove the OSDs and add again if the osdmap (etc.) is not logged
somewhere.
Chad.
> Hmm, that's odd. Can you attach the osdmap and ceph pg dump prior to the
> addition (with all pgs active+clean), the
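If it is the current state you need, something like this should capture it
before and after the OSD addition (the file names are arbitrary):
ceph osd getmap -o osdmap.before
ceph osd getcrushmap -o crushmap.before
ceph pg dump > pgdump.before.txt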
Hi Sam,
I think I may have found the problem: I noticed that the new host was created with
straw2 instead of straw. Would this account for 50% of PGs being degraded?
(I'm removing the OSDs on that host and will recreate with 'firefly' tunables.)
Thanks!
Chad.
On Monday, July 27, 2015 15:09:21 Chad
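In case it helps anyone checking the same thing, the bucket algorithm can be
inspected (and edited) via the decompiled crush map, e.g.:
ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt
grep 'alg straw' crush.txt
crushtool -c crush.txt -o crush.new
ceph osd setcrushmap -i crush.new
(the last two steps only after editing crush.txt; file names are arbitrary)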
On 15 July 2015 at 17:34, John Spray wrote:
>
> On 15/07/15 16:11, Roland Giesler wrote:
>
> I mount cephfs in /etc/fstab and all seemed well for quite a few
> months. Now however, I start seeing strange things like directories with
> corrupted file names in the file system.
>
> When y
> If I understand correctly you want to look at how many “guest filesystem
> block size” blocks there are that are empty?
> This might not be that precise because we do not discard blocks inside the
> guests, but if you tell me how to gather this - I can certainly try that.
> I’m not sure if my bas
We are looking at using Ganesha NFS with the Ceph file system.
Currently I am testing the FSAL interface on Ganesha NFS Release =
V2.2.0-2 running on Ceph 9.0.2. This is all early work, as Ceph FS is
still not considered production ready, and Ceph 9.0.2 is a development
release.
Currently I am on
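For anyone else trying this, a minimal CEPH FSAL export in ganesha.conf looks
roughly like the following (Export_Id, Path and Pseudo here are placeholders):
EXPORT {
    Export_Id = 1;
    Path = "/";
    Pseudo = "/cephfs";
    Access_Type = RW;
    Squash = No_Root_Squash;
    FSAL {
        Name = CEPH;
    }
}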
Hi:
I am defining the objects that a file is broken down into for storage in an
object storage cloud as "storage objects".
What I know: I have read documents and papers about object storage clouds.
Most of the time, the documents assume that the storage objects from a file (to
be stored) have already been created an
Hi Paul,
Did you try to stop the OSD first before marking it down and out?
stop ceph-osd id=21 or /etc/init.d/ceph stop osd.21
ceph osd crush remove osd.21
ceph auth del osd.21
ceph osd rm osd.21
Regards,
Nikhil Mitra
Hi, list,
I found in the Ceph FAQ that the ceph kernel client should not run on machines
belonging to the ceph cluster.
As the ceph FAQ mentions, "In older kernels, Ceph can deadlock if you try to
mount CephFS or RBD client services on the same host that runs your test Ceph
cluster. This is not a Ceph-re