I tend to use something along these lines:

for osd in $(grep osd /etc/mtab | cut -d ' ' -f 2); do echo "$(echo $osd | cut -d '-' -f 2): $(readlink -f $(readlink $osd/journal))"; done | sort -k 2
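On my nodes that prints something like the following (purely illustrative output; your OSD ids and journal devices will differ):

0: /dev/sdb1
1: /dev/sdb2
2: /dev/sdc1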
Cheers,
Josef
> On 08 May 2015, at 02:47, Robert LeBlanc wrote:
>
> You may also be able to use `ceph
Thanks, Robert, for sharing so much experience! I feel like I don't deserve
it :)
I have another, but very similar, situation which I don't understand.
Last time I tried to hard-kill the OSD daemons.
This time I added a new node with 2 OSDs to my cluster and also monitored the
IO. I wrote a script which adds a
Can you provide the output of the CRUSH map and a copy of the script
that you are using to add the OSDs? Can you also provide the pool size
and pool min_size?
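Something along these lines should capture all of it (<pool> is a placeholder for your pool name):

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
ceph osd pool get <pool> size
ceph osd pool get <pool> min_size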
Interesting. The 'rbd diff' operation uses the same librbd API method as 'rbd
export-diff' to calculate all the updated image extents, so it's very strange
that one works and the other doesn't given that you have a validly formatted
export. I tried to recreate your issues on Giant and was unab
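For reference, the two operations being compared look roughly like this (image and snapshot names below are made up):

rbd diff rbd/myimage --from-snap snap1
rbd export-diff rbd/myimage@snap2 --from-snap snap1 myimage.diff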
I have 42 OSDs on 6 servers. I'm planning to double that this quarter by
adding 6 more servers to get to 84 OSDs.
I have 3 monitor VMs. Two of them are running on two different blades in
the same chassis, but their networking is on different fabrics. The third
one is on a blade in a different chas
Hi,
I had a problem with a cephfs freeze in a client. Impossible to
re-enable the mountpoint. A simple "ls /mnt" command totally
blocked (of course impossible to umount-remount etc.) and I had
to reboot the host. But even a "normal" reboot didn't work, the
host didn't stop. I had to do a hard rebo
On Thu, May 14, 2015 at 10:15 AM, Francois Lafont wrote:
> Hi,
>
> I had a problem with a cephfs freeze in a client. Impossible to
> re-enable the mountpoint. A simple "ls /mnt" command totally
> blocked (of course impossible to umount-remount etc.) and I had
> to reboot the host. But even a "norm
On 14/05/2015 18:15, Francois Lafont wrote:
Hi,
I had a problem with a cephfs freeze in a client. Impossible to
re-enable the mountpoint. A simple "ls /mnt" command totally
blocked (of course impossible to umount-remount etc.) and I had
to reboot the host. But even a "normal" reboot didn't work
On Thu, May 14, 2015 at 2:47 PM, John Spray wrote:
>
> Greg's response is pretty comprehensive, but for completeness I'll add
> that the specific case of shutdown blocking is
> http://tracker.ceph.com/issues/9477
I've seen the same thing before with /dev/rbd mounts when the network
temporarily g
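For what it's worth, when a kernel client mount (cephfs or krbd) hangs like that, you can usually see what it is stuck on without rebooting, assuming debugfs is mounted (these paths are specific to the kernel client):

cat /sys/kernel/debug/ceph/*/mdsc    # requests still waiting on the MDS
cat /sys/kernel/debug/ceph/*/osdc    # requests still waiting on OSDs
dmesg | grep -i ceph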
On 2015-04-23 19:39:33, Sage Weil said:
On Thu, 23 Apr 2015, Pavel V. Kaygorodov wrote:
Hi!
I have copied two of my pools recently, because the old ones had too many PGs.
Both of them contain RBD images, with 1GB and ~30GB of data.
Both pools were copied without errors, and the RBD images are mounta
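For anyone searching the archives later, such a copy can be done roughly like this (pool names and pg count are only examples, and rados cppool has caveats of its own, so check the docs first):

ceph osd pool create images-new 128
rados cppool images images-new
ceph osd pool rename images images-old
ceph osd pool rename images-new images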
You should be able to do just that. We recently upgraded from Firefly
to Hammer like that. Follow the order described on the website.
Monitors, OSDs, MDSs.
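As a rough sketch of that order (assuming the sysvinit scripts shipped at the time; adjust to your init system and distribution):

apt-get update && apt-get install ceph   # upgrade the packages on each node
service ceph restart mon                 # on the monitor nodes first
service ceph restart osd                 # then on the OSD nodes
service ceph restart mds                 # and finally on the MDS nodes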
Note that the Debian packages do not restart running daemons, but
they _do_ start daemons that were not running. So say for some reason before you
On 2015-05-14 21:04:06, Daniel Schneller said:
On 2015-04-23 19:39:33, Sage Weil said:
On Thu, 23 Apr 2015, Pavel V. Kaygorodov wrote:
Hi!
I have copied two of my pools recently, because the old ones had too many PGs.
Both of them contain RBD images, with 1GB and ~30GB of data.
Both
Hi!
I am trying to understand the values in the ceph -w output, especially those
regarding throughput(?) at the end:
2015-05-15 00:54:33.333500 mon.0 [INF] pgmap v26048646: 17344 pgs:
17344 active+clean; 6296 GB data, 19597 GB used, 155 TB / 174 TB avail;
6023 kB/s rd, 549 kB/s wr, 7564 op/s
2015-05-15
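Those last figures are the cluster-wide client IO at that moment: read throughput, write throughput and operations per second. If a per-pool breakdown of the same numbers is needed, something like this should show it:

ceph osd pool stats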
Hi,
I encountered other problems when I installed Ceph.
#1. When I ran the command "ceph-deploy new ceph-0", I got the
ceph.conf file. However, there is no information about osd pool default
size or public network in it.
[root@ceph-2 my-cluster]# more ceph.conf
[global]
auth
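For what it's worth, those two settings are normally added by hand under [global] after ceph-deploy new; a minimal sketch (the size and subnet below are only placeholders for your own environment):

osd pool default size = 2
public network = 192.168.57.0/24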