Hi,
I have changed my plan and now I want to use the following Supermicro
server:
SuperStorage Server 6047R-E1R24L
Can anyone tell me if this server is good for the OSD nodes... two SSDs
in RAID1 (OS & journal) and 24 HDDs for OSDs (JBOD on the motherboard
controller).
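For what it's worth, a minimal ceph.conf sketch of that layout (journals on the
SSD RAID1 mirror, data on the 24 HDDs). The mount path and size are placeholders,
not a sizing recommendation:

[osd]
    # assumption: the SSD RAID1 mirror is mounted at /ssd and holds one
    # journal per OSD; the HDDs carry only the OSD data directories
    osd journal = /ssd/journals/$cluster-$id/journal
    # example value only (MB); size to your expected write bursts
    osd journal size = 10240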
On Fri, Apr 4, 201
Hi,
The server would be good as an OSD node, I believe, even though it's a
tad bigger than you set out for. You talked about using 10 disks before,
http://www.supermicro.nl/products/system/2U/6027/SSG-6027R-E1R12T.cfm or
http://www.supermicro.nl/products/system/2U/6027/SSG-6027R-E1R12L.cfm
may be b
On Wed, 9 Apr 2014 14:59:30 +0800 Punit Dambiwal wrote:
> Hi,
>
> I have changed my plan and now I want to use the following Supermicro
> server:
>
> SuperStorage Server 6047R-E1R24L
>
> Can anyone tell me if this server is good for the OSD nodes... two SSDs
> in RAID1 (OS & journal) and 24
On Wed, Apr 9, 2014 at 3:47 PM, Florent B wrote:
> Hi,
>
> I'm trying again and my system now has a load average of 3.15.
>
> I did a perf report; 91.42% of CPU time is used by:
>
> + 91.42% 63080 swapper [kernel.kallsyms] [k]
> native_safe_halt
>
could you give -g (enables call-graph recording) a try?
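For reference, a minimal sketch of collecting and viewing call graphs with
perf (the 30-second sampling window is arbitrary):

perf record -g -a sleep 30    # sample all CPUs with call-graph recording
perf report -g                # browse the report with call chains expanded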
Thank you very much!
I did as you said, but there are some mistakes.
[root@ceph69 ceph]# radosgw-agent -c region-data-sync.conf
Traceback (most recent call last):
File "/usr/bin/radosgw-agent", line 5, in
from pkg_resources import load_entry_point
File "/usr/lib/python2.6/site-packa
Hi Karan,
Just to double check - run the same command after ssh'ing into each of
the osd hosts, and maybe again on the monitor hosts too (just in case
you have *some* hosts successfully updated to 0.79 and some still on <
0.78).
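Something like the sketch below would do it (hostnames are placeholders;
adjust to your cluster):

for h in osd1 osd2 osd3 mon1; do
    echo -n "$h: "
    ssh "$h" ceph -v
done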
Regards
Mark
On 08/04/14 22:32, Karan Singh wrote:
Hi Loic
Hi all,
I've noticed that objects are using twice their actual space for a few
minutes after they are 'put' via rados:
$ ceph -v
ceph version 0.79-42-g010dff1 (010dff12c38882238591bb042f8e497a1f7ba020)
$ ceph osd tree
# id    weight  type name       up/down reweight
-1 0.03998 root defau
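A minimal way to reproduce/observe this, assuming a test pool already exists
(pool and object names are placeholders):

dd if=/dev/zero of=/tmp/bigobj bs=1M count=1024      # 1 GB test object
rados -p testpool put bigobj /tmp/bigobj
du -sm /var/lib/ceph/osd/ceph-*/current/*_head | sort -n | tail    # usage right after the put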
Hi,
The logic behind going with a clustered file system is that CTDB needs it. The
brief is simply to provide a "Windows file sharing" cluster without using
Windows, which would require us to buy loads of CALs for 2012, so that isn't an
option. The SAN
would provide this, but only if we bought the standalone
On Tue, 8 Apr 2014 09:35:19 -0700 Gregory Farnum wrote:
> On Tuesday, April 8, 2014, Christian Balzer wrote:
>
> > On Tue, 08 Apr 2014 14:19:20 +0200 Josef Johansson wrote:
> > >
> > > On 08/04/14 10:39, Christian Balzer wrote:
> > > > On Tue, 08 Apr 2014 10:31:44 +0200 Josef Johansson wrote:
>
Now I can configure it, but it seems to make no sense.
The following is the error info.
[root@ceph69 ceph]# radosgw-agent -c /etc/ceph/cluster-data-sync.conf
INFO:urllib3.connectionpool:Starting new HTTPS connection (1): s3.ceph71.com
ERROR:root:Could not retrieve region map from destination
Traceba
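When it's specifically the HTTPS destination that fails, two quick checks
along these lines can help narrow it down (the hostname is taken from the log
above; everything else is generic):

curl -v https://s3.ceph71.com/                            # does the endpoint answer at all?
openssl s_client -connect s3.ceph71.com:443 </dev/null    # is the certificate chain valid/trusted?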
> So .. the idea was that ceph would provide the required clustered filesystem
> element,
> and it was the only FS that provided the required "resize on the fly and
> snapshotting" things that were needed.
> I can't see it working with one shared lun. In theory I can't see why it
> couldn't wor
> This is "similar" to ISCSI except that the data is distributed accross x ceph
> nodes.
> Just as ISCSI you should mount this on two locations unless you run a
> clustered filesystem (e.g. GFS / OCFS)
Oops I meant, should NOT mount this on two locations unles... :)
Cheers,
Robert
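As a minimal sketch of the block-device workflow being described (pool and
image names are placeholders; per the correction above, only one client should
mount a non-clustered filesystem on it at a time):

rbd create --pool mypool --size 10240 shared-disk
rbd map mypool/shared-disk          # shows up as /dev/rbd0 or similar
mkfs.xfs /dev/rbd0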
2014-04-07 20:24 GMT+02:00 Yehuda Sadeh :
> Try bumping up logs (debug rgw = 20, debug ms = 1). Not enough info
> here to say much; note that it takes exactly 30 seconds for the
> gateway to send the error response, so it may be some timeout. I'd verify
> that the correct fastcgi module is running.
Sorr
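In ceph.conf that would look roughly like the snippet below (the section name
is a placeholder for your actual rgw client section; restart the gateway
afterwards):

[client.radosgw.gateway]
    debug rgw = 20
    debug ms = 1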
On Wed, Apr 9, 2014 at 4:45 AM, Mark Kirkwood
wrote:
> Hi Karan,
>
> Just to double check - run the same command after ssh'ing into each of the
> osd hosts, and maybe again on the monitor hosts too (just in case you have
> *some* hosts successfully updated to 0.79 and some still on < 0.78).
Just
Hi lists,
Recently we have been trying to use teuthology for some tests; however, we met
some issues when trying to run it against an existing cluster.
We found it pretty hard to find the related documents; we had to go
through the code to understand which parameters to set. But even using
use_existing_cluster:
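A rough sketch of the kind of job YAML involved; only use_existing_cluster
comes from the discussion above (its value here is an assumption), and the
targets/roles keys are ordinary teuthology job YAML whose exact handling may
differ by version:

use_existing_cluster: true
targets:
  ubuntu@host1.example.com: ssh-rsa AAAA...
roles:
  - [mon.a, osd.0, osd.1, client.0]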
Yesterday we found out that there was a dependency issue for the init
script on CentOS/RHEL
distros where we depend on some functions that are available through
redhat-lsb-core but were
not declared in the ceph.spec file.
This will cause daemons not to start at all since the init script will
attem
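Until a fixed package is out, installing the dependency by hand should work
around it, and the missing declaration is just a one-line Requires tag; the
exact placement in ceph.spec below is an assumption:

yum install redhat-lsb-core    # manual workaround on CentOS/RHEL

# and the line that was missing from ceph.spec:
Requires:       redhat-lsb-core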
I don't think the backing store should be seeing any effects like
that. What are the filenames which are using up that space inside the
folders?
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Wed, Apr 9, 2014 at 1:58 AM, Mark Kirkwood
wrote:
> Hi all,
>
> I've noticed that
journal_max_write_bytes: the maximum amount of data the journal will
try to write at once when it's coalescing multiple pending ops in the
journal queue.
journal_queue_max_bytes: the maximum amount of data allowed to be
queued for journal writing.
In particular, both of those are about how much is
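If you want to experiment with them, they live in the [osd] section of
ceph.conf; the values below are examples only (roughly the shipped defaults as
I understand them), not recommendations:

[osd]
    # example: cap a single coalesced journal write at 10 MB
    journal max write bytes = 10485760
    # example: allow at most 32 MB of data queued for journal writing
    journal queue max bytes = 33554432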
Yeah, the log's not super helpful, but that and your description give
us something to talk about. Thanks!
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Tue, Apr 8, 2014 at 8:20 PM, Craig Lewis wrote:
>
> Craig Lewis
> Senior Systems Engineer
> Office +1.714.602.1309
> Ema
Hello,
On Wed, 9 Apr 2014 07:31:53 -0700 Gregory Farnum wrote:
> journal_max_write_bytes: the maximum amount of data the journal will
> try to write at once when it's coalescing multiple pending ops in the
> journal queue.
> journal_queue_max_bytes: the maximum amount of data allowed to be
> que
Thanks all for helping to clarify in this matter :)
On 09/04/14 17:03, Christian Balzer wrote:
> Hello,
>
> On Wed, 9 Apr 2014 07:31:53 -0700 Gregory Farnum wrote:
>
>> journal_max_write_bytes: the maximum amount of data the journal will
>> try to write at once when it's coalescing multiple pendin
On Wed, Apr 9, 2014 at 8:03 AM, Christian Balzer wrote:
>
> Hello,
>
> On Wed, 9 Apr 2014 07:31:53 -0700 Gregory Farnum wrote:
>
>> journal_max_write_bytes: the maximum amount of data the journal will
>> try to write at once when it's coalescing multiple pending ops in the
>> journal queue.
>> jou
On 4/9/2014 3:33 AM, wsnote wrote:
Now I can configure it, but it seems to make no sense.
The following is the error info.
[root@ceph69 ceph]# radosgw-agent -c /etc/ceph/cluster-data-sync.conf
INFO:urllib3.connectionpool:Starting new HTTPS connection (1):
s3.ceph71.com
ERROR:root:Could not retriev
Actually my intent is to use EC with RGW pools :). If I fiddle around with
cap bits temporarily will I be able to get things to work, or will
protocol issues / CRUSH map parsing get me into trouble?
Is there an idea of when this might work in general? Even if the kernel
doesn't support EC pool
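For reference, a sketch of setting up an EC pool for RGW data from userspace;
the profile parameters and PG counts are placeholders, and pool naming for
your RGW zone may differ:

ceph osd erasure-code-profile set rgwprofile k=4 m=2
ceph osd pool create .rgw.buckets 128 128 erasure rgwprofile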
I'm not sure when that'll happen -- supporting partial usage isn't
something we're targeting right now. Most users are segregated into
one kind of client (userspace or kernel).
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Wed, Apr 9, 2014 at 12:10 PM, Michael Nelson wrot
I had a question about one of the points on the slide. Slide 24, last
bullet point, says:
* If you use XFS, don't put your OSD journal as a file on the disk
o Use a separate partition, the first partition!
o We still need to reinstall our whole cluster to re-partition the
OSDs
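A rough sketch of that "separate first partition" layout on a fresh disk
(device name and sizes are placeholders, and this is destructive, so
double-check the device first):

parted -s /dev/sdX mklabel gpt
parted -s /dev/sdX mkpart journal 1MiB 10GiB    # first partition: raw OSD journal
parted -s /dev/sdX mkpart data 10GiB 100%       # rest of the disk for the XFS data partition
mkfs.xfs /dev/sdX2
# then point 'osd journal' at /dev/sdX1 (or let ceph-disk/ceph-deploy handle it)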
In cluster-data-sync.conf, if I use https, then it shows this error:
INFO:urllib3.connectionpool:Starting new HTTPS connection (1): s3.ceph71.com
ERROR:root:Could not retrieve region map from destination
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/radosgw_agent/c
It is only that single pg using the space (see attached) - but essentially:
$ du -m /var/lib/ceph/osd/ceph-1
...
2048    /var/lib/ceph/osd/ceph-1/current/5.1a_head
2053    /var/lib/ceph/osd/ceph-1/current
2053    /var/lib/ceph/osd/ceph-1/
Which is resized to 1025 soon after. Interestingly I am n
Right, but I'm interested in the space allocation within the PG. The
best guess I can come up with without trawling through the code is
that some layer in the stack preallocates and then trims the
objects back down once writing stops, but I'd like some more data
points before I dig.
-Greg
Soft
Ah right - sorry, I didn't realize that my 'du' was missing the files! I
will retest and post updated output.
Cheers
Mark
On 10/04/14 15:04, Gregory Farnum wrote:
Right, but I'm interested in the space allocation within the PG. The
best guess I can come up with without trawling through the co
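For the re-test, something like the following shows per-file usage inside the
PG directory (path taken from the earlier output):

du -am /var/lib/ceph/osd/ceph-1/current/5.1a_head | sort -n | tail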
Hi all,
We're building a new OpenStack zone very soon. Our compute hosts are spec'd
with direct attached disk for ephemeral instance storage and we have a
bunch of other storage nodes for Ceph serving Cinder volumes.
We're wondering about the feasibility of setting up the compute nodes as
OSDs an
I think you need to bind the OSDs to specific cores and bind qemu-kvm to
other cores. Memory size is another factor you need to take care of. If
your VM's root disk uses a local file, the IO problem may be
intractable.
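A minimal sketch of that kind of pinning with taskset (core ranges are
placeholders, the guest process may be named qemu-system-x86_64 on some
distros, and cgroups/cpusets or numactl would be the more permanent approach):

for p in $(pidof ceph-osd); do taskset -cp 0-3 "$p"; done     # OSD daemons on cores 0-3
for p in $(pidof qemu-kvm); do taskset -cp 4-15 "$p"; done    # guests on the remaining cores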
On Thu, Apr 10, 2014 at 12:47 PM, Blair Bethwaite
wrote:
> Hi all,
>
> We're building a