Unfortunately that doesn't help. I restarted both the active and standby mds
but that doesn't change the state of the mds. Is there a way to force the mds
to look at the 1832 epoch (or earlier) instead of 1833 (need osdmap epoch 1833,
have 1832)?
Thanks,
Jasper
Thanks, Lewis. And I got one suggestion: it is better to use similar OSD
sizes.
2014-08-20 9:24 GMT+07:00 Craig Lewis :
> I believe you need to remove the authorization for osd.4 and osd.6 before
> re-creating them.
>
> When I re-format disks, I migrate data off of the disk using:
> ceph os
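For reference, a minimal sketch of the usual drain-and-remove sequence (using osd.4 from the example above; this is only my reading of the truncated command, not necessarily exactly what Craig runs):

ceph osd out 4                  # mark the OSD out so data migrates off it
# wait for rebalancing to finish (watch ceph -w / ceph health)
stop ceph-osd id=4              # or: /etc/init.d/ceph stop osd.4
ceph osd crush remove osd.4     # take it out of the CRUSH map
ceph auth del osd.4             # drop its authorization key, as noted above
ceph osd rm 4                   # remove the OSD id from the cluster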
Hello Gregory:
I'm doing some performance comparisons between different
combinations of environments, therefore I have to try such an old version.
Thanks for your kind help! The solution you provided does work! I
think I was relying on ceph-disk too much, therefore I didn't notice
this.
2014-08-
Hi All,
We noticed two of our OSDs were down using the ceph osd tree
command. We tried starting them using the following commands, but the ceph osd
tree command still reports them as down. Please see below for the commands
used.
command:sudo start ceph-osd id=osd.0
output: ceph-osd
(ceph/osd.0) stop/pre
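For what it's worth, a minimal sketch of the upstart invocation on Ubuntu (note the id parameter is the numeric OSD id rather than the "osd.N" name; the id here is just illustrative):

sudo start ceph-osd id=0     # start a single OSD by numeric id
sudo status ceph-osd id=0    # check the upstart job state
ceph osd tree                # confirm the OSD shows as "up"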
Hello,
Yehuda, I know I was using the correct fastcgi module; it was the one from the
Ceph repositories. I had also disabled all other modules in Apache.
I tried to create a second swift user, using the provided instructions,
only to get the following:
# radosgw-admin user create --uid=marcogarces --
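For context, a sketch of the documented radosgw-admin sequence for a Swift-capable user (the display name is invented here; the truncated command above presumably continues along these lines):

radosgw-admin user create --uid=marcogarces --display-name="Marco Garces"
radosgw-admin subuser create --uid=marcogarces --subuser=marcogarces:swift --access=full
radosgw-admin key create --subuser=marcogarces:swift --key-type=swift --gen-secret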
We have a ceph system here, and we're seeing performance regularly
descend into unusability for periods of minutes at a time (or longer).
This appears to be triggered by writing large numbers of small files.
Specifications:
ceph 0.80.5
6 machines running 3 OSDs each (one 4 TB rotational HD
Hi,
Do you get slow requests during the slowness incidents? What about monitor
elections?
Are your MDSs using a lot of CPU? Did you try tuning anything in the MDS (I
think the default config is still conservative, and there are options to cache
more entries, etc.)?
What about iostat on the OSDs
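Regarding the MDS cache options mentioned above, the main knob in this era is mds cache size (default 100000 dentries); a sketch of a ceph.conf entry, with the value purely illustrative:

[mds]
    mds cache size = 500000    ; cache more dentries/inodes at the cost of MDS RAM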
Hi,
On 20 Aug 2014, at 16:55, German Anders <gand...@despegar.com> wrote:
Hi Dan,
How are you? I want to know how you disabled the indexing on the
/var/lib/ceph OSDs.
# grep ceph /etc/updatedb.conf
PRUNEPATHS = "/afs /media /net /sfs /tmp /udev /var/cache/ccache
/var/spool/cups
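If it helps: adding /var/lib/ceph to PRUNEPATHS in /etc/updatedb.conf stops updatedb/mlocate from crawling the OSD data directories. A sketch of the edited line (the other paths are whatever the distro ships):

PRUNEPATHS = "/afs /media /net /sfs /tmp /udev /var/cache/ccache /var/spool/cups /var/lib/ceph"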
Hi, Dan,
Some questions below I can't answer immediately, but I'll spend
tomorrow morning irritating people by triggering these events (I think
I have a reproducer -- unpacking a 1.2 GiB tarball with 25 small
files in it) and giving you more details. For the ones I can answer
right now:
After restarting your MDS, it still says it has epoch 1832 and needs
epoch 1833? I think you didn't really restart it.
If the epoch numbers have changed, can you restart it with "debug mds
= 20", "debug objecter = 20", "debug ms = 1" in the ceph.conf and post
the resulting log file somewhere?
-Greg
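For reference, the debug settings Greg lists, as they would appear in the [mds] section of ceph.conf before restarting the MDS:

[mds]
    debug mds = 20
    debug objecter = 20
    debug ms = 1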
Hi guys,
Has anyone done copying/moving data between clusters? If yes, what are the best
practices for you?
Thanks
We do it with rbd volumes. We're using rbd export/import and netcat to
transfer it across clusters. This was the most efficient solution that
did not require one cluster to have access to the other clusters (though
it does require some way of starting the process on the different machines).
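A minimal sketch of that approach (pool/image names, host, and port are made up, and nc option syntax varies between netcat flavours):

# on the destination cluster: listen and import from stdin
nc -l 5000 | rbd import - rbd/myimage
# on the source cluster: export to stdout and stream it across
rbd export rbd/myimage - | nc dest-host 5000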
Looks like I need to upgrade to Firefly to get ceph-kvstore-tool
before I can proceed.
I am getting some hits just from grepping the LevelDB store, but so
far nothing has panned out.
Thanks for the help!
On Tue, Aug 19, 2014 at 10:27 AM, Gregory Farnum wrote:
> It's been a while since I worked o
On Wed, 20 Aug 2014, Craig Lewis wrote:
> Looks like I need to upgrade to Firefly to get ceph-kvstore-tool
> before I can proceed.
> I am getting some hits just from grepping the LevelDB store, but so
> far nothing has panned out.
FWIW if you just need the tool, you can wget the .deb and 'dpkg -x
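A sketch of what Sage is describing (ceph-kvstore-tool shipped in the ceph-test package around Firefly, if I remember right; the URL and paths are placeholders):

wget <url of the ceph-test .deb for your release>   # grab the package without upgrading the cluster
dpkg -x ceph-test_*.deb /tmp/ceph-test              # unpack it into a scratch directory
/tmp/ceph-test/usr/bin/ceph-kvstore-tool <store-path> list   # store path first, then the command, if I recall the usage correctly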
Hugo,
I would look at setting up a cache pool made of 4-6 SSDs to start with. So, if
you have 6 OSD servers, stick at least 1 SSD disk in each server for the cache
pool. It should greatly reduce the OSDs' stress of writing a large number of
small files. Your cluster should become more responsive
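A rough sketch of the Firefly cache-tier setup being suggested (pool names, PG counts, and the SSD ruleset number are hypothetical; the cache pool would normally be pinned to the SSDs via its own CRUSH rule):

ceph osd pool create cachepool 512 512          # pool backed by the SSDs
ceph osd pool set cachepool crush_ruleset 4     # point it at an SSD-only CRUSH rule
ceph osd tier add datapool cachepool            # attach it as a tier of the data pool
ceph osd tier cache-mode cachepool writeback
ceph osd tier set-overlay datapool cachepool    # send client I/O through the cache
ceph osd pool set cachepool hit_set_type bloom  # needed by the tiering agent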
I have a cluster with 1 monitor and 3 OSD servers. Each server has multiple
OSDs running on it. When I start the OSD using /etc/init.d/ceph start osd.0,
I see the expected interaction between the OSD and the monitor, authenticating
keys etc., and finally the OSD starts.
Running watching the cluster
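A minimal sketch of checking on the OSD while/after starting it this way (sysvinit style, as in the message; ids are just illustrative):

/etc/init.d/ceph start osd.0   # start the single OSD
ceph -w                        # watch cluster events while it boots and peers
ceph osd tree                  # confirm it is reported "up" and "in"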
Hello,
On Wed, 20 Aug 2014 15:39:11 +0100 Hugo Mills wrote:
>We have a ceph system here, and we're seeing performance regularly
> descend into unusability for periods of minutes at a time (or longer).
> This appears to be triggered by writing large numbers of small files.
>
>Specificati