Hello!
I observe very high memory consumption on the client under a write-intensive load
with qemu 1.6.0 + librbd 0.67.3.
For benchmarking purposes I'm trying to simultaneously run 15 VMs, each with 3
GiB of RAM, on one host. Each VM uses an RBD image cloned from a protected
snapshot of a "master image". After bo
The --debug command worked as described. Can anyone give me a synopsis of how
the authentication token is generated?
Token generated:
AUTH_rgwtk0b007261646f733a7377696674046eff2c9ac6a5041b00545248a7893b900677683adaaca1095128b6edf8fc378d7d49d8
The first part looks like a header: 'AUTH_
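For what it's worth, the hex after the "AUTH_rgwtk" prefix looks like a
length-prefixed string followed by opaque data: "0b00" is 11 in little-endian,
which matches the length of the user:subuser pair that follows. A rough check,
assuming it is plain hex-encoded ASCII:

    # decode the bytes that follow the 0b00 length prefix
    $ echo 7261646f733a7377696674 | xxd -r -p
    rados:swift

The remaining bytes look like an opaque nonce/signature rather than anything
human-readable.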
Hi All,
In our prior tests with 0.67.3, keystone authtoken caching was broken,
causing dreadful performance - see
http://www.spinics.net/lists/ceph-users/msg04531.html
We upgraded to release 0.67.4 as we wanted to test the apparent fix to
authtoken caching that was included in the release notes.
I forgot to reply, but this did indeed work. Thanks Darren.
--
Warren
On Oct 4, 2013, at 8:22 AM, Darren Birkett wrote:
> Hi Warren,
>
> Try using the ceph specific fastcgi module as detailed here:
>
> http://ceph.com/docs/next/radosgw/manual-install/
>
> And see if that helps.
>
> There w
Hi, I started to follow a project with OpenStack and I have some
questions about the storage, specifically the use of a SAN.
Currently I have two SANs without clustering or replication features. Can I
integrate the SANs with Ceph to give these replication/clustering features
to the storage (SAN)?
We use Ceph as the storage for KVM.
I see errors in the VMs when the Ceph disk is forcibly unmounted.
Is this expected behaviour? How can I repair it?
Many thanks .
--higkoo
Based on my experience I think you are grossly underestimating the expense and
frequency of flushes issued from your VMs. This will be especially bad if you
aren't using the async flush from qemu >= 1.4.2, as the VM is suspended while
qemu waits for the flush to finish. I think your best cours
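For illustration only (pool, image and client names are made up): with the
librbd cache enabled via cache=writeback, and qemu >= 1.4.2 handling guest
flushes asynchronously, the VM is not stalled while a flush completes. A
minimal drive line might look like:

    # hypothetical pool/image/client id; other VM options omitted
    qemu-system-x86_64 \
        -drive file=rbd:rbd/vm-disk:id=admin,format=raw,if=virtio,cache=writeback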
Thanks Mike,
Kyle Bader also suggested using my large SSD (900 GB) as a cache
drive with "bcache" or "flashcache".
Since I already plan to use SSDs for my journals, I would certainly
also use an SSD as a cache drive in addition.
I will have to read the documentation about "bcache" and its integ
I also would be interested in how bcache or flashcache would integrate.
On Mon, Oct 7, 2013 at 11:34 AM, Martin Catudal wrote:
> Thanks Mike,
> Kyle Bader also suggested using my large SSD (900 GB) as a cache
> drive with "bcache" or "flashcache".
> Since I already plan to use SSDs f
I found this without much effort.
http://www.sebastien-han.fr/blog/2012/11/15/make-your-rbd-fly-with-flashcache/
On Mon, Oct 7, 2013 at 11:39 AM, Jason Villalta wrote:
> I also would be interested in how bcache or flashcache would integrate.
>
>
> On Mon, Oct 7, 2013 at 11:34 AM, Martin Catudal
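A minimal sketch of how bcache is usually put in front of a spinning OSD disk
(device names and the OSD path are hypothetical, not from this thread):

    # SSD as the cache device, spinner as the backing device; attached in one step
    make-bcache -C /dev/sdb -B /dev/sdc
    # once the kernel registers both devices, /dev/bcache0 appears and can be
    # formatted and mounted as the OSD data directory
    mkfs.xfs /dev/bcache0
    mount /dev/bcache0 /var/lib/ceph/osd/ceph-0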
Hi,
I am trying to install Ceph on a Red Hat Linux server that does not have
external access through which it can access the URLs and download the files
needed. The documentation is not clear (to me) on how to install the software
under these circumstances.
Should I be downloading the sour
I brought this up within the context of the RAID discussion, but it did not
garner any responses. [1]
In our small test deployments (160 HDs and OSDs across 20 machines) our
performance is quickly bounded by CPU and memory overhead. These are 2U
machines with 2x 6-core Nehalem; and running 8 OSDs
On Mon, Oct 7, 2013 at 9:15 AM, Scott Devoid wrote:
> I brought this up within the context of the RAID discussion, but it did not
> garner any responses. [1]
>
> In our small test deployments (160 HDs and OSDs across 20 machines) our
> performance is quickly bounded by CPU and memory overhead. The
Hi Scott,
On 10/07/2013 11:15 AM, Scott Devoid wrote:
I brought this up within the context of the RAID discussion, but it did
not garner any responses. [1]
In our small test deployments (160 HDs and OSDs across 20 machines) our
performance is quickly bounded by CPU and memory overhead. These ar
On 07.10.2013 18:23, Gregory Farnum wrote:
> There are a few tradeoffs you can make to reduce memory usage (I
> believe the big one is maintaining a shorter PG log, which lets nodes
> catch up without going through a full backfill), and there is also a
I wonder why this log has to be fully kept in me
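For context, the PG log length is bounded by config options, so something like
the following in ceph.conf would shorten it (values are purely illustrative; a
shorter log saves memory but raises the chance an OSD has to do a full
backfill to catch up after a longer outage):

    [osd]
        # illustrative values only
        osd min pg log entries = 500
        osd max pg log entries = 1000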
On 10/07/2013 09:36 AM, Mr. Salvatore Rapisarda wrote:
> Hi, I started to follow a project with OpenStack and I have some
> questions about the storage, specifically the use of a SAN.
> Currently I have two SANs without clustering or replication features. Can I
> integrate the SANs with Ceph to give thes
>> In our small test deployments (160 HDs and OSDs across 20 machines)
>> our performance is quickly bounded by CPU and memory overhead. These
>> are 2U machines with 2x 6-core Nehalem; and running 8 OSDs consumed
>> 25% of the total CPU time. This was a cuttlefish deployment.
>
> You might be inte
Hi Scott,
Just some observations from here.
We run 8 nodes, 2U units with 12x OSD each (4x 500GB ssd, 8x 4TB platter)
attached to 2x LSI 2308 cards. Each node uses an intel E5-2620 with 32G mem.
Granted, we only have like 25 VMs (some fairly IO-hungry, both IOPS- and
throughput-wise though) on tha
On 10/07/2013 12:29 PM, Gruher, Joseph R wrote:
In our small test deployments (160 HDs and OSDs across 20 machines)
our performance is quickly bounded by CPU and memory overhead. These
are 2U machines with 2x 6-core Nehalem; and running 8 OSDs consumed
25% of the total CPU time. This was a cutt
Hi Alistair,
You can download the dumpling release rpms from this location:
http://ceph.com/rpm-dumpling/rhel6/x86_64/
And cuttlefish from here: http://ceph.com/rpm-cuttlefish/rhel6/x86_64/
You can download ceph-deploy from here:
http://ceph.com/rpm-dumpling/rhel6/noarch/
But from my personal
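One way to use those RPMs on a host with no internet access is to mirror them
on a connected machine and serve them as a local yum repository. A rough
sketch (paths and repo name are made up):

    # on a machine with internet access
    wget -r -np -nH --cut-dirs=1 http://ceph.com/rpm-dumpling/rhel6/x86_64/
    # copy the RPMs to the isolated server, e.g. into /opt/ceph-rpms, then:
    createrepo /opt/ceph-rpms
    # /etc/yum.repos.d/ceph-local.repo:
    #   [ceph-local]
    #   name=Ceph local mirror
    #   baseurl=file:///opt/ceph-rpms
    #   enabled=1
    #   gpgcheck=0
    yum install ceph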
The original documentation was written with a script called mkcephfs
in mind. Then, we began including some documentation for Chef and
Crowbar. We actually only had developer documentation for doing
things manually. We're working on providing manual steps now. While
it's not in the deployment sect
Sounds like it's probably an issue with the fs on the rbd disk? What
fs was the vm using on the rbd?
-Sam
On Mon, Oct 7, 2013 at 8:11 AM, higkoohk wrote:
> We use Ceph as the storage for KVM.
>
> I see errors in the VMs when the Ceph disk is forcibly unmounted.
>
> Is this expected behaviour? How can I repair it?
>
The ping tests you're running are connecting to different interfaces
(10.23.37.175) than those you specify in the "mon_hosts" option
(10.0.0.2, 10.0.0.3, 10.0.0.4). The client needs to be able to connect
to the specified address; I'm guessing it's not routable from outside
that network?
The error
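For reference, the relevant ceph.conf pieces look roughly like this (addresses
are illustrative only); clients reach the monitors through mon host / mon addr,
so those addresses must be on the public network:

    [global]
        public network  = 10.23.37.0/24   # clients and monitors
        cluster network = 10.0.0.0/24     # OSD replication and heartbeat traffic
        mon host = 10.23.37.11, 10.23.37.12, 10.23.37.13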
Also, mkfs, mount, and kvm disk options?
Mark
On 10/07/2013 03:15 PM, Samuel Just wrote:
Sounds like it's probably an issue with the fs on the rbd disk? What
fs was the vm using on the rbd?
-Sam
On Mon, Oct 7, 2013 at 8:11 AM, higkoohk wrote:
We use Ceph as the storage for KVM.
I found the
Thanks for the reply. This eventually resolved itself when I upgraded the
client kernel from the Ubuntu Server 12.04.2 default to the 3.6.10 kernel. Not
sure if there is a good causal explanation there or if it might be a
coincidence. I did see the kernel recommendations in the docs but I had
You can do this with S3 ACLs.
-Sam
On Wed, Oct 2, 2013 at 9:32 AM, Jefferson Alcantara
wrote:
> I need to share buckets created by one user with other users without sharing the
> same access_key or secret_key; for example, I have user jmoura with bucket
> name Jeff and I need to share this bucket with u
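A minimal sketch of doing this with a recent s3cmd against radosgw (user and
bucket names are made up): grant the other user read access on the bucket's ACL.

    s3cmd setacl --acl-grant=read:otheruser s3://jeff
    # write access can be granted the same way:
    # s3cmd setacl --acl-grant=write:otheruser s3://jeff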
On Mon, Oct 7, 2013 at 1:35 PM, Gruher, Joseph R
wrote:
> Thanks for the reply. This eventually resolved itself when I upgraded the
> client kernel from the Ubuntu Server 12.04.2 default to the 3.6.10 kernel.
> Not sure if there is a good causal explanation there or if it might be a
> coincid
Could you clarify something for me... I have a cluster network (10.0.0.x) and a
public network (10.23.37.x). All the Ceph machines have one interface on each
network and clients (when configured normally) would only be on the public
network. My ceph.conf uses 10.0.0.x IPs for the monitors but
On Mon, Oct 7, 2013 at 2:40 PM, Gruher, Joseph R
wrote:
> Could you clarify something for me... I have a cluster network (10.0.0.x) and
> a public network (10.23.37.x). All the Ceph machines have one interface on
> each network and clients (when configured normally) would only be on the
> publ
Thanks everyone, the environment is as follows:
Linux 3.0.97-1.el6.elrepo.x86_64 CentOS 6.4
ceph version 0.61.7 (8f010aff684e820ecc837c25ac77c7a05d7191ff)
/dev/sdd1 on /var/lib/ceph/osd/ceph-2 type xfs (rw)
/dev/sdb1 on /var/lib/ceph/osd/ceph-3 type xfs (rw)
/dev/sdc1 on /var/lib/ceph/osd/ceph-4 type xfs (
I tried putting Flashcache on my spindle OSDs using an Intel SSD and it
works great. This is getting me read and write SSD caching instead of just
write performance on the journal. It should also allow me to protect the
OSD journal on the same drive as the OSD data and still get the benefits of SSD
c
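A minimal sketch of that kind of flashcache setup, with hypothetical device
and cache names (writeback mode, SSD partition in front of a spinning OSD disk):

    # create a writeback cache: <mode> <cache name> <ssd device> <disk device>
    flashcache_create -p back osd0cache /dev/sdg1 /dev/sdc
    # the resulting /dev/mapper/osd0cache is then formatted and mounted
    # as the OSD data disk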
On 2013-10-08 9:00 AM, "higkoohk" wrote:
>
> Thanks everyone, the environment is as follows:
>
> Linux 3.0.97-1.el6.elrepo.x86_64 CentOS 6.4
>
> ceph version 0.61.7 (8f010aff684e820ecc837c25ac77c7a05d7191ff)
>
> /dev/sdd1 on /var/lib/ceph/osd/ceph-2 type xfs (rw)
> /dev/sdb1 on /var/lib/ceph/osd/ceph-3 type xfs (rw)
Thanks everyone,
I think `umount -l` was the mistake; we shouldn't run that operation
on its own without the accompanying cleanup steps.
I will continue to do more extreme tests. I shouldn't run `umount -l`,
and I need to stop anyone else from running `umount -l`.
Lots of thanks!
-- Forwarde
Hi Joao,
Thanks for replying. All of my monitors are up and running and connected to
each other. "ceph -s" is failing on the cluster with the following error:
2013-10-07 10:12:25.099261 7fd1b948d700 -1 monclient(hunting): ERROR:
missing keyring, cannot use cephx for authentication
2013-10-07 10:12:25.
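For reference, that error usually just means the client cannot find a keyring
containing the client.admin key. A quick check, using the common default path
(adjust if your keyring lives elsewhere):

    ls -l /etc/ceph/ceph.client.admin.keyring
    ceph -s --name client.admin --keyring /etc/ceph/ceph.client.admin.keyring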
Hi,
is it possible to safely upgrade directly from bobtail (0.56.6) to
dumpling (latest)?
Are there any instructions?
--
Regards
Dominik
> I am trying to install Ceph on a Red Hat Linux server that does not have
> external access through which it can access the URLs and download the files
> needed. The documentation is not clear (to me) on how to install the software
> under these circumstances.
> Should I be downloading the