On 11/08/2013 03:13 PM, ja...@peacon.co.uk wrote:
On 2013-11-08 03:20, Haomai Wang wrote:
On Fri, Nov 8, 2013 at 9:31 AM, Josh Durgin
wrote:
I'll just list some commands below to help users understand:
cinder qos-create high_read_low_write consumer="front-end"
read_iops_sec=1000 write_iops_sec=10
when one head out of ten fails: disks can keep working with the
nine remaining heads...
some info on this at last in the SATA-IO 3.2 Spec... "Rebuild
Assist...
Some info on the command set (SAS & SATA implementations):
http://www.seagate.com/files/staticfiles/docs/pdf/whitepaper/tp620-1-1110us
On 2013-11-08 03:20, Haomai Wang wrote:
On Fri, Nov 8, 2013 at 9:31 AM, Josh Durgin
wrote:
I'll just list some commands below to help users understand:
cinder qos-create high_read_low_write consumer="front-end"
read_iops_sec=1000 write_iops_sec=10
Does this have any normalisation of the IO uni
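For reference, the QoS spec only takes effect once it is attached to a volume type; a rough
sketch of the rest of the workflow (the type name is illustrative, the IDs come from the
create commands above):
cinder type-create rbd-limited
cinder qos-associate <qos_specs_id> <volume_type_id>
cinder create --volume-type rbd-limited --display-name test-vol 10
With consumer="front-end" the limits are enforced by the hypervisor (libvirt/qemu) rather
than inside Ceph, so whether requests get normalised by size is down to QEMU's throttling,
not anything in RADOS.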
Hi list
We deployed a master zone and a slave zone in two clusters to test
multi-site backup. The radosgw-agent synced the buckets successfully; we can find
the same bucket info in the slave zone.
But the running radosgw-agent throws out error info saying that objects
failed to sync, just like th
Well, as you've noted you're getting some slow requests on the OSDs
when they turn back on; and then the iSCSI gateway is panicking
(probably because the block device write request is just hanging).
We've gotten prior reports that iSCSI is a lot more sensitive to a few
slow requests than most use c
On Wed, Nov 6, 2013 at 6:05 AM, Gautam Saxena wrote:
> I'm a little confused -- does CEPH support incremental snapshots of either
> VMs or the CEPH-FS? I saw in the release notes for "dumpling" release
> (http://ceph.com/docs/master/release-notes/#v0-67-dumpling) this statement:
> "The MDS now dis
On Thu, Nov 7, 2013 at 6:43 AM, Kenneth Waegeman
wrote:
> Hi everyone,
>
> I just started to look at the documentation of Ceph and I've hit something I
> don't understand.
> It's about something on http://ceph.com/docs/master/architecture/
>
> """
> use the following steps to compute PG IDs.
>
> T
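An easy way to see the result of that computation for a real object is the osd map
command (pool and object names here are arbitrary):
ceph osd map rbd some-object
which prints the pool ID, the PG the object hashes into, and the set of OSDs currently
serving that PG.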
On Thu, Nov 7, 2013 at 1:26 AM, lixuehui wrote:
> Hi all
> The Ceph Object Store service can span geographical locations. Now Ceph also
> provides FS and RBD. If our applications need the RBD service, can we
> provide backup and disaster recovery for it via the gateway through some
> transformation? I
On Tue, Nov 5, 2013 at 2:39 PM, Dickson, Matt MR
wrote:
> UNOFFICIAL
>
> Hi,
>
> I'm new to Ceph and investigating how objects can be aged off, i.e. delete all
> objects older than 7 days. Is there functionality to do this via the Ceph
> Swift API or alternatively using a Java rados library?
Not in
I don't think this is anything we've observed before. Normally when a
Ceph node is using more memory than its peers it's a consequence of
something in that node getting backed up. You might try looking at the
perf counters via the admin socket and seeing if something about them
is different between
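Assuming default admin socket paths (adjust the OSD id), pulling the counters looks like:
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump
Running that on the hungry node and on a quiet peer and diffing the output usually narrows
down which subsystem is holding the memory.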
It sounds like maybe your PG counts on your pools are too low and so
you're just getting a bad balance. If that's the case, you can
increase the PG count with "ceph osd pool set <pool> pg_num <number>".
OSDs should get data approximately equal to <OSD weight>/<sum of all weights>, so higher weights get more data and all the associated
traffic.
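Concretely, something along these lines (pool name and counts are only an example; bump
pgp_num as well so the new PGs actually get rebalanced):
ceph osd pool get rbd pg_num
ceph osd pool set rbd pg_num 1024
ceph osd pool set rbd pgp_num 1024
Note that pg_num can only be increased, never decreased.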
I don't remember how this has come up or been dealt with in the past,
but I believe it has been. Have you tried just doing it via the ceph
or rados CLI tools with an empty pool name?
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Tue, Nov 5, 2013 at 6:58 AM, Wido den Hollan
Hi Loic,
On 08.11.2013 00:19, Loic Dachary wrote:
On 08/11/2013 04:57, Kyle Bader wrote:
I think this is a great idea. One of the big questions users have is
"what kind of hardware should I buy." An easy way for users to publish
information about their setup (hardware, software versions, use-
On Fri, Nov 8, 2013 at 9:31 AM, Josh Durgin wrote:
> On 11/08/2013 03:50 AM, Wido den Hollander wrote:
>
>> On 11/07/2013 08:42 PM, Gruher, Joseph R wrote:
>>
>>> Is there any plan to implement some kind of QoS in Ceph? Say I want to
>>> provide service level assurance to my OpenStack VMs and I
On 11/07/2013 09:48 AM, lixuehui wrote:
Hi all:
After we built a region with two zones distributed across two Ceph
clusters and started the agent, it started working!
But what we find in the radosgw-agent stdout is that it fails to sync
objects all the time. Paste of the info:
(env)root@ceph-rgw41:~/myproject#
On 11/08/2013 03:50 AM, Wido den Hollander wrote:
On 11/07/2013 08:42 PM, Gruher, Joseph R wrote:
Is there any plan to implement some kind of QoS in Ceph? Say I want to
provide service level assurance to my OpenStack VMs and I might have to
throttle bandwidth to some to provide adequate bandwid
Thanks for floating this out there Loic!
A few thoughts inline below:
On Thu, Nov 7, 2013 at 6:45 PM, Loic Dachary wrote:
> Hi,
>
> It looks like there indeed is enough interest to move forward :-) The next
> action items would be :
>
> * Setup a home page somewhere ( should it be a separate w
On 11/08/2013 12:15 AM, Jens-Christian Fischer wrote:
Hi all
we have installed a Havana OpenStack cluster with RBD as the backing
storage for volumes, images and the ephemeral images. The code as
delivered in
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/imagebackend.py#L498
f
Hi,
It looks like there indeed is enough interest to move forward :-) The next
action items would be:
* Set up a home page somewhere (should it be a separate web site, or could we
simply take over http://ceph.com/ ?)
* Create the "About" page describing the User Committee and get a consensus
> Would this be something like
> http://wiki.ceph.com/01Planning/02Blueprints/Firefly/Ceph-Brag ?
Something very much like that :)
--
Kyle
On 2013-11-06 09:33, Sage Weil wrote:
On Wed, 6 Nov 2013, Loic Dachary wrote:
Hi Ceph,
People from Western Digital suggested ways to better take advantage
of
the disk error reporting... when one head out of ten fails:
disks can keep working with the nine remaining heads. Losing 1/10 of
the
On 08/11/2013 04:57, Kyle Bader wrote:
>> I think this is a great idea. One of the big questions users have is
>> "what kind of hardware should I buy." An easy way for users to publish
>> information about their setup (hardware, software versions, use-case,
>> performance) when they have succes
On Thu, Nov 7, 2013 at 3:25 PM, Trivedi, Narendra
wrote:
> I can't install Ubuntu... I am not sure why it would do this on a new install of
> CentOS. I wanted to try this to see if I can use it as an RBD/Radosgw backend for
> OpenStack production but I can't believe it has taken forever to get it
> runnin
I have 2 SSDs (same model, smaller capacity) for / connected on the mainboard.
Their sync write performance is also poor - less than 600 iops, 4k blocks.
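In case anyone wants to reproduce the measurement, a typical journal-style sync write test
looks roughly like this (destructive if pointed at a raw device, and the path is illustrative):
dd if=/dev/zero of=/dev/sdX bs=4k count=10000 oflag=direct,dsync
IOPS is then the count divided by the elapsed time dd reports.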
On Nov 7, 2013, at 9:44 PM, Kyle Bader wrote:
>> ST240FN0021 connected via a SAS2x36 to a LSI 9207-8i.
>
> The problem might be SATA transp
It appears that the RBD driver in Cinder only checks that the image is
accessible, and if it is, assumes it is cloneable, regardless of its
format. I think it would be more useful if the driver also confirmed
the image format, and reverted to straight copy instead of
copy-on-write if the format is
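Until the driver grows that check, a manual sanity check before relying on copy-on-write
might look like this (the image ID and the "images" pool name are assumptions about a
typical Glance-on-RBD setup):
glance image-show <image-id> | grep disk_format
rbd info images/<image-id> | grep format
Only raw images stored as RBD format 2 can be cloned; anything else ends up as a full copy.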
On 06.11.2013 15:05, Gautam Saxena wrote:
> We're looking to deploy Ceph on about 8 Dell servers to start, each of
> which typically contains 6 to 8 hard disks with PERC RAID controllers that
> support write-back cache (~512 MB usually). Most machines have between 32
> and 128 GB RAM. Our question
Sweet, thanks!
I had to add --caps="metadata=read", but it worked great.
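For the archives, the caps change and a CLI equivalent that I believe does the same
enumeration:
radosgw-admin caps add --uid=<admin-user> --caps="metadata=read"
radosgw-admin metadata list user
The GET /admin/metadata/user request itself still has to be signed like any other admin
API call.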
On Thu, Nov 7, 2013 at 12:11 PM, Yehuda Sadeh wrote:
> You can do it through the metadata api. Try doing something like:
>
> GET /admin/metadata/user
>
> Yehuda
>
> On Thu, Nov 7, 2013 at 12:06 PM, Nelson Jeppesen
> wro
> I think this is a great idea. One of the big questions users have is
> "what kind of hardware should I buy." An easy way for users to publish
> information about their setup (hardware, software versions, use-case,
> performance) when they have successful deployments would be very valuable.
> Ma
I can't install Ubuntu... I am not sure why it would do this on a new install of
CentOS. I wanted to try this to see if I can use it as an RBD/Radosgw backend for
OpenStack production but I can't believe it has taken forever to get it running
and I am not there yet!
-Original Message-
From: ceph-u
You can do it through the metadata api. Try doing something like:
GET /admin/metadata/user
Yehuda
On Thu, Nov 7, 2013 at 12:06 PM, Nelson Jeppesen
wrote:
> I've looked around but could not find it. Can I open a ticket for this
> issue?
>
> Not being able to enumerate users via API is a road blo
I've looked around but could not find it. Can I open a ticket for this
issue?
Not being able to enumerate users via API is a road block for me and I'd
like to work and get it resolved. Thanks.
--
Nelson Jeppesen
On Thu, Nov 7, 2013 at 11:50 PM, Wido den Hollander wrote:
> On 11/07/2013 08:42 PM, Gruher, Joseph R wrote:
>>
>> Is there any plan to implement some kind of QoS in Ceph? Say I want to
>> provide service level assurance to my OpenStack VMs and I might have to
>> throttle bandwidth to some to pro
On 11/07/2013 08:42 PM, Gruher, Joseph R wrote:
Is there any plan to implement some kind of QoS in Ceph? Say I want to
provide service level assurance to my OpenStack VMs and I might have to
throttle bandwidth to some to provide adequate bandwidth to others - is
anything like that planned for Ce
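One option that works today, independent of anything Ceph itself might grow, is per-disk
throttling at the QEMU/libvirt layer; a sketch (domain name and target device are made up):
virsh blkdeviotune instance-00000042 vda --read-iops-sec 1000 --write-iops-sec 500
The Cinder QoS specs mentioned elsewhere in the thread end up driving the same mechanism
when the consumer is set to "front-end".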
> Zackc, Loicd, and I have been the main participants in a weekly Teuthology
> call the past few weeks. We've talked mostly about methods to extend
> Teuthology to capture performance metrics. Would you be willing to join us
> during the Teuthology and Ceph-Brag sessions at the Firefly Developer
>
> ST240FN0021 connected via a SAS2x36 to a LSI 9207-8i.
The problem might be SATA transport protocol overhead at the expander.
Have you tried directly connecting the SSDs to SATA2/3 ports on the
mainboard?
--
Kyle
Is there any plan to implement some kind of QoS in Ceph? Say I want to provide
service level assurance to my OpenStack VMs and I might have to throttle
bandwidth to some to provide adequate bandwidth to others - is anything like
that planned for Ceph? Generally with regard to block storage (rb
Thanks,
Mike Dawson
Co-Founder & Director of Cloud Architecture
Cloudapt LLC
6330 East 75th Street, Suite 170
Indianapolis, IN 46250
On 11/7/2013 2:12 PM, Kyle Bader wrote:
Once I know a drive has had a head failure, do I trust that the rest of the
drive isn't going to go at an inconvenient
> 1. To build a high performance yet cheap radosgw storage, which pools should
> be placed on ssd and which on hdd backed pools? Upon installation of
> radosgw, it created the following pools: .rgw, .rgw.buckets,
> .rgw.buckets.index, .rgw.control, .rgw.gc, .rgw.root, .usage, .users,
> .users.email
>From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
>boun...@lists.ceph.com] On Behalf Of 皓月
>Sent: Wednesday, November 06, 2013 10:04 PM
>To: ceph-users
>Subject: [ceph-users] please help me.problem with my ceph
>
>1. I have installed Ceph with one mon/mds and one osd. When I use 'ceph -
Hi 皓月,
You can try "ls -al /mnt/ceph" to check whether the current user has read/write access to
the directory. You may need to use "chown" to change the directory owner.
Regards,
Kai
At 2013-11-06 22:03:31,"皓月" wrote:
1. I have installed Ceph with one mon/mds and one osd. When I use 'ceph
-s', there s
>> Once I know a drive has had a head failure, do I trust that the rest of the
>> drive isn't going to go at an inconvenient moment vs just fixing it right
>> now when it's not 3AM on Christmas morning? (true story) As good as Ceph
>> is, do I trust that Ceph is smart enough to prevent spreadin
For #2, I just wrote a document on setting up a federated
architecture. You can view it here:
http://ceph.com/docs/master/radosgw/federated-config/ This
functionality will be available in the Emperor release.
The use case I described involved two zones in a master region talking
to the same underl
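Heavily abbreviated from that doc, the zone/region plumbing boils down to commands like
these (the names are the doc's examples and the JSON files are the region/zone definitions
you write yourself):
radosgw-admin region set --infile us.json --name client.radosgw.us-east-1
radosgw-admin zone set --rgw-zone=us-east --infile us-east.json --name client.radosgw.us-east-1
radosgw-admin regionmap update --name client.radosgw.us-east-1
radosgw-agent then syncs metadata (and optionally data) from the master zone to the
secondary zone.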
I was under the same impression - using a small portion of the SSD via
partitioning (in my case - 30 gigs out of 240) would have the same effect as
activating the HPA explicitly.
Am I wrong?
On Nov 7, 2013, at 8:16 PM, ja...@peacon.co.uk wrote:
> On 2013-11-07 17:47, Gruher, Joseph R wrote:
I've seen this before too. CentOS starts up with networking disabled by
default. In my case, the problem was that the monitors could not form a
quorum and the OSDs could not find each other or the monitors. Hence, you get
that broken pipe error. You either need to have networking start
on startup before the OSD
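On CentOS 6 that usually just means (assuming the stock network init script):
chkconfig network on
service network start
plus making sure ONBOOT=yes is set in the relevant /etc/sysconfig/network-scripts/ifcfg-*
file.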
On 2013-11-07 17:47, Gruher, Joseph R wrote:
I wonder how effective trim would be on a Ceph journal area.
If the journal empties and is then trimmed the next write cycle
should
be faster, but if the journal is active all the time the benefits
would be lost almost immediately, as those cells ar
On 11/07/2013 11:47 AM, Gruher, Joseph R wrote:
-Original Message-
From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
boun...@lists.ceph.com] On Behalf Of Dinu Vlad
Sent: Thursday, November 07, 2013 3:30 AM
To: ja...@peacon.co.uk; ceph-users@lists.ceph.com
Subject: Re: [ceph-user
Under Grizzly we completely disabled image injection via
libvirt_inject_partition = -2 in nova.conf. I'm not sure rbd images can even be
mounted that way - but then again, I don't have experience with Havana. We're
using config disks (which break live migrations) and/or the metadata service
>-Original Message-
>From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
>boun...@lists.ceph.com] On Behalf Of Dinu Vlad
>Sent: Thursday, November 07, 2013 3:30 AM
>To: ja...@peacon.co.uk; ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] ceph cluster performance
>In this case h
Hi all
we have installed a Havana OpenStack cluster with RBD as the backing storage
for volumes, images and the ephemeral images. The code as delivered in
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/imagebackend.py#L498
fails because RBD.path is not set. I have patched
On Wed, Nov 6, 2013 at 8:25 PM, Eyal Gutkind wrote:
> Trying to install ceph on my machines.
>
> Using RHEL6.3 I get the following error while invoking ceph-deploy.
>
>
>
> Tried to install sphinx on ceph-node; it seems to have been successfully
> installed.
>
> Still, it seems that during the installat
1. I have installed Ceph with one mon/mds and one osd. When I use 'ceph
-s', there is a warning: health HEALTH_WARN 384 pgs degraded; 384 pgs stuck
unclean; recovery 21/42 degraded (50.000%)
2. I mounted a client: '192.168.3.189:/ 100G 1009M 97G 2% /mnt/ceph'
but I can't create a file or a
Hi!
I have a question about activating an OSD on a whole disk. I can't get past this issue.
Conf spec: 8 VMs - ceph-deploy; ceph-admin; ceph-mon0-2 and ceph-node0-2.
I started by creating the MONs - all good.
After that I want to prepare and activate 3x OSDs with dm-crypt.
So I put this in ceph.conf:
[osd
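For anyone hitting the same thing: the ceph-deploy route that is supposed to handle
dm-crypt end to end, rather than splitting prepare and activate by hand, looks roughly like
this (hostname and device are placeholders, and I'm assuming a ceph-deploy version with the
--dmcrypt flag):
ceph-deploy osd create --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys ceph-node0:/dev/sdb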
Hi everyone,
I just started to look at the documentation of Ceph and I've hit
something I don't understand.
It's about something on http://ceph.com/docs/master/architecture/
"""
use the following steps to compute PG IDs.
The client inputs the pool ID and the object ID. (e.g., pool =
"liver
Hi All,
There is a new (bug-fix) release of ceph-deploy, the easy deployment
tool for Ceph.
There were a couple of issues related to GPG keys when installing on
Debian and Debian-based distros that were addressed.
A fix was added to improve moving temporary files to overwrite other
files like c
On Thu, Nov 7, 2013 at 7:53 AM, nicolasc wrote:
> Hi everyone,
>
> The version 1.3 of ceph-deploy I installed yesterday from official repo
> used:
> sudo wget ... | apt-key add
> to install the key, which failed because the apt-key command was not run with sudo,
> but the version 1.3.1 I got this morning
Hi everyone,
The version 1.3 of ceph-deploy I installed yesterday from official repo
used:
sudo wget ... | apt-key add
to install the key, which failed because the apt-key command was not run with
sudo, but the version 1.3.1 I got this morning seems to work (no pipe
anymore, it uses a file, and sudo
Looking forward to it. Tests done so far show some interesting results - so I'm
considering it for future production use.
On Nov 7, 2013, at 1:01 PM, Sage Weil wrote:
> The challenge here is that libzfs is currently a build time dependency, which
> means it needs to be included in the target
I had great results from the older 530 series too.
In this case however, the SSDs were only used for journals and I don't know if
ceph-osd sends TRIM to the drive in the process of journaling over a block
device. They were also under-subscribed, with just 3 x 10G partitions out of
240 GB raw c
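If anyone wants to explicitly reclaim the unused flash before (re)partitioning, and their
util-linux ships blkdiscard, a one-shot discard of the whole device does it (destructive,
assumes the drive holds nothing you need):
blkdiscard /dev/sdX
Leaving the remaining space unpartitioned after that gives the controller spare area to
work with, which is essentially what the HPA trick achieves.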
The challenge here is that libzfs is currently a build time dependency, which
means it needs to be included in the target distro already, or we need to
bundle it in the Ceph.com repos.
I am currently looking at the possibility of making the OSD back end
dynamically linked at runtime, which woul
Any chance this option will be included for future emperor binaries? I don't
mind compiling software, but I would like to keep things upgradable via apt-get
…
Thanks,
Dinu
On Nov 7, 2013, at 4:05 AM, Sage Weil wrote:
> Hi Dinu,
>
> You currently need to compile yourself, and pass --with-zf
Hi all
The Ceph Object Store service can span geographical locations. Now Ceph also
provides FS and RBD. If our applications need the RBD service, can we provide
backup and disaster recovery for it via the gateway through some transformation?
In fact the cluster stores RBD data as objects in pool