I don't believe that there would be a perceptible increase in data usage. The
next release, called Cuttlefish, is less than a week away, so you might want to
wait for that.
Product questions should go to one of our mailing lists, not directly to
developers.
David Zafman
Senior Developer
http://www.inktank.com
I'm doing some testing and wanted to see the effect of increasing journal
speed, and the fastest way to do this seemed to be to put it on a ramdisk where
latency should drop to near zero and I can see what other inefficiencies exist.
I created a tmpfs of sufficient size, copied journal on to tha
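For reference, a minimal sketch of what that setup could look like; the mount point, size, and OSD id below are my own placeholders, not details from the original message:

# create a tmpfs big enough to hold the journal (size is an assumption)
mkdir -p /mnt/ram-journal
mount -t tmpfs -o size=2G tmpfs /mnt/ram-journal
# stop the OSD and flush its existing journal
service ceph stop osd.0
ceph-osd -i 0 --flush-journal
# point "osd journal" for osd.0 at /mnt/ram-journal/journal in ceph.conf, then:
ceph-osd -i 0 --mkjournal
service ceph start osd.0

A journal on tmpfs disappears on reboot or power loss, so this only makes sense for throwaway benchmarking.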
Hi,
On version 0.56.4, I'm having a problem with my crush map.
The output of osd tree is:
# id    weight  type name       up/down reweight
0       0       osd.0           up      1
1       0       osd.1           up      1
2       0       osd.2           up      1
3       0       osd.3           up      1
4       0       osd.4           up      1
5       0       osd.5           up
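All of the weights in the tree above are 0. If zero CRUSH weights turn out to be the culprit (an assumption on my part; the replies in this thread are truncated here), a sketch of how to give the OSDs weight, with illustrative values:

# assign a non-zero crush weight to each OSD
ceph osd crush reweight osd.0 1.0
ceph osd crush reweight osd.1 1.0
# ... repeat for osd.2 through osd.5

If the OSDs are not placed under any host/root bucket at all, something like 'ceph osd crush set osd.0 1.0 root=default host=node1' (hostname hypothetical) places them in the hierarchy as well.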
Sorry, maybe this sounds weird to you, but what if you totally exclude LACP
on the client side? Have you tried these tests directly on one of the OSD
nodes? What about RAM utilization on the OSD nodes?
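If ruling out the client network path entirely, a rough sketch of benchmarking locally on an OSD node; pool name and duration are assumptions:

# write benchmark run directly on an OSD node
rados bench -p rbd 30 write
# sequential reads (depending on version you may need --no-cleanup on the
# write run so there are objects left to read)
rados bench -p rbd 30 seq
# watch memory while the benchmark runs
free -m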
On Wed, Apr 24, 2013 at 10:21 PM, Wido den Hollander wrote:
> On 04/24/2013 02:23 PM, Mark Nelson wrote:
>
>> On 04
Hi,
I tried running with 2 MDSes and re-ran the MR example; the mds still crashed.
I configured ceph to send its logs to syslog. Here is the error I came across
while going through the logs:
Apr 25 13:54:33 varunc4-virtual-machine ceph-mds: 2013-04-25
13:54:33.207001 bf148b40 0 -- 10.72.148.209:6800/3568
Hi,
if I shut down an OSD, the OSD gets marked down after 20 seconds, and after
300 seconds the OSD should get marked out and the cluster should resync.
But that doesn't happen: the OSD stays in the down/in state forever,
so the cluster stays degraded forever.
I can reproduce it with a new in
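For reference, a sketch of the setting that governs the 300-second mark-out, assuming stock option names and an example monitor admin socket path:

# in ceph.conf on the monitors (300 seconds is the default)
[mon]
    mon osd down out interval = 300

# check what a running monitor actually uses (socket name is an assumption)
ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok config show | grep down_out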
On Wed, Apr 24, 2013 at 10:20:57AM -0700, Gregory Farnum wrote:
> It's not about total cluster size but about the number of active
> files, and the size of your directories. :)
>
> That said, the default 100,000 inodes is a very conservative number;
> you can probably go at least an order of magni
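A hedged example of raising that limit; the option name is the stock one, the value is only illustrative:

# in ceph.conf on the MDS host (default is 100000 inodes)
[mds]
    mds cache size = 1000000

Restart the MDS (or inject the new value at runtime, if your version supports it) for the change to take effect.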
On 2013-04-25 09:39, James Harper wrote:
I'm doing some testing and wanted to see the effect of increasing
journal speed, and the fastest way to do this seemed to be to put it
on a ramdisk where latency should drop to near zero and I can see
what other inefficiencies exist.
I created a tmpfs of
Mike,
I use a process like:
crushtool -c new-crushmap.txt -o new-crushmap && ceph osd setcrushmap -i
new-crushmap
I did not attempt to validate your crush map. If that command fails, I
would scrutinize your crushmap for validity/correctness.
Once you have the new crushmap injected, you can
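For completeness, a sketch of the full round trip; file names are placeholders of my own:

# fetch and decompile the current map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt, then compile and (if your crushtool supports --test) dry-run it
crushtool -c crushmap.txt -o new-crushmap
crushtool -i new-crushmap --test --show-statistics
# inject only once the map compiles and tests cleanly
ceph osd setcrushmap -i new-crushmap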
Hi,
We're pretty new to using ceph, and we've been trying to build and use the
Galaxy open source biomedical code (http://galaxyproject.org/) with ceph
0.56.3 under Ubuntu 12.10 on a Calxeda/ARM server platform. We were able to get
ceph to build successfully, and it seemingly works well, except in
On 04/25/2013 12:48 PM, Igor Laskovy wrote:
Sorry, maybe this sounds weird to you, but what if you totally exclude LACP
on the client side? Have you tried these tests directly on one of the OSD
nodes? What about RAM utilization on the OSD nodes?
The LACP isn't the issue here, since the reads are nowhere near the
l
On Apr 25, 2013, at 4:08 AM, Varun Chandramouli wrote:
> 2013-04-25 13:54:36.182188 bff8cb40 -1 common/Thread.cc: In function 'void
> Thread::create(size_t)' thread bff8cb40 time 2013-04-25
> 13:54:36.053392#012common/Thread.cc: 110: FAILED assert(ret == 0)#012#012
> ceph version 0.58-500-gaf
Yep, 0.60 does all snapshot-related things way faster, and it is obviously
faster on r/w with small blocks compared to 0.56.4; at the same disk commit
percentage I would say the average in-flight request time is about ten times
faster.
On Mon, Apr 22, 2013 at 7:37 PM, Andrey Korolyov wrote:
> Mentioned cluster is
On Thu, 25 Apr 2013, Martin Mailand wrote:
> Hi,
>
> if I shut down an OSD, the OSD gets marked down after 20 seconds, and after
> 300 seconds the OSD should get marked out and the cluster should resync.
> But that doesn't happen: the OSD stays in the down/in state forever,
> so the cluster s
On Thu, Apr 25, 2013 at 5:34 AM, Kevin Decherf wrote:
> On Wed, Apr 24, 2013 at 10:20:57AM -0700, Gregory Farnum wrote:
>> It's not about total cluster size but about the number of active
>> files, and the size of your directories. :)
>>
>> That said, the default 100,000 inodes is a very conservat
On Thu, Apr 25, 2013 at 3:11 AM, Mike Bryant wrote:
> Hi,
> On version 0.56.4, I'm having a problem with my crush map.
> The output of osd tree is:
> # id    weight  type name       up/down reweight
>
> 0       0       osd.0           up      1
> 1       0       osd.1           up      1
> 2       0       osd.2           up      1
> 3       0
On Thu, Apr 25, 2013 at 7:14 AM, M C wrote:
> Hi,
> We're pretty new to using ceph, and we've been trying to build and use the
> Galaxy open source biomedical code (http://galaxyproject.org/) with ceph
> 0.56.3 under Ubuntu 12.10 on a Calxeda/ARM server platform. We were able to
> get ceph to bui
On Thu, Apr 25, 2013 at 8:22 AM, Noah Watkins wrote:
>
> On Apr 25, 2013, at 4:08 AM, Varun Chandramouli wrote:
>
>> 2013-04-25 13:54:36.182188 bff8cb40 -1 common/Thread.cc: In function 'void
>> Thread::create(size_t)' thread bff8cb40 time 2013-04-25
>> 13:54:36.053392#012common/Thread.cc: 110:
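The assert in Thread::create() fires when pthread_create() returns non-zero, which usually points at a thread/process limit or, on a 32-bit VM like this one appears to be, exhausted address space. That cause is an assumption here, but it is cheap to check:

# max user processes/threads for the user running ceph-mds
ulimit -u
# raise it for the current shell before restarting the daemon (value illustrative)
ulimit -u 32768
# to make it persistent, add nproc entries in /etc/security/limits.conf, e.g.
#   root  soft  nproc  32768
#   root  hard  nproc  32768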
Hi Sage,
On 25.04.2013 18:17, Sage Weil wrote:
> What is the output from 'ceph osd tree' and the contents of your
> [mon*] sections of ceph.conf?
>
> Thanks!
> sage
root@store1:~# ceph osd tree
# id    weight  type name       up/down reweight
-1 24 root default
-3 24
On Wed, Apr 24, 2013 at 6:05 PM, Mandell Degerness wrote:
> Given a partition, is there a command which can be run to validate if
> the partition is used as a journal of an OSD and, if so, what OSD it
> belongs to?
The new-style deployment stuff sets partition type GUIDs to special
values that en
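A sketch of what that GUID check can look like with sgdisk; the device and partition number are made up:

# print the partition type GUID for partition 2 of /dev/sdb
sgdisk --info=2 /dev/sdb
# a ceph-disk-prepared journal partition should report the ceph journal
# type GUID (45b0969e-9b03-4f30-b4c6-b4b80ceff106, if memory serves)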
On Tue, Apr 23, 2013 at 12:49 AM, Marco Aroldi wrote:
> Hi,
> this morning I have this situation:
>health HEALTH_WARN 1540 pgs backfill; 30 pgs backfill_toofull; 113
> pgs backfilling; 43 pgs degraded; 38 pgs peering; 5 pgs recovering;
> 484 pgs recovery_wait; 38 pgs stuck inactive; 2180 pgs s
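When chasing backfill_toofull, a rough sketch of what to look at first; backfill_toofull is raised when a backfill target crosses 'osd backfill full ratio' (0.85 by default):

# which PGs are flagged, and which OSDs they map to
ceph pg dump | grep backfill_toofull
# per-OSD fullness and per-pool usage
ceph pg dump osds
rados df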
I have found a solution:
ceph-osd --get-journal-fsid --osd-journal=<path to journal> -i 0
Note: the "-i 0" is required by the command line, but makes no
difference in the output - i.e. '-i 2' works just as well.
This results in an fsid which matches the fsid of the OSD it belongs to. :)
On Thu, Apr 25, 2013 at
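To make the matching step concrete, a small usage sketch; the device path and OSD number are placeholders:

# fsid recorded in the journal on /dev/sdb2
ceph-osd --get-journal-fsid --osd-journal=/dev/sdb2 -i 0
# fsid of a candidate OSD, for comparison
cat /var/lib/ceph/osd/ceph-0/fsid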
Hey all,
A while ago you may remember that we launched some "office hours"
where we could focus an engineer on answering community questions.
Thanks to some generous contributions of time and expertise from the
community we were able to expand this to the new "Geek on Duty"
program.
Expanded hour
I filed tracker bug 4822 and have wip-4822 with a fix. My manual testing shows
that it works. I'm building a teuthology test.
Given that your osd tree has a single rack, it should always mark OSDs out after 5
minutes by default.
David Zafman
Senior Developer
http://www.inktank.com
On Apr 25,
G'day James,
On Thu, Apr 25, 2013 at 07:39:27AM +, James Harper wrote:
> I'm doing some testing and wanted to see the effect of increasing journal
> speed, and the fastest way to do this seemed to be to put it on a ramdisk
> where latency should drop to near zero and I can see what other
>
Hi all,
I use ceph 0.56.4. When I restart the ceph services, the mds does not become
active; I rebooted several times, but the mds is still not active.
e707: 1/1/1 up {0=1=up:replay}, 1 up:standby
--
Bui Minh Tien
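Not a fix, but a sketch of how to get more detail on why the MDS is stuck in replay; log paths and debug levels are assumptions:

# current MDS map and overall health
ceph mds dump
ceph health detail
# crank up MDS logging in ceph.conf under [mds], then restart the MDS:
#   debug mds = 10
#   debug journaler = 10
tail -f /var/log/ceph/ceph-mds.*.log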