Hello Greg,
Output of 'ceph osd tree':
# id    weight  type name       up/down reweight
-1      27.3    root default
-2      9.1             host stor1
0       3.64                    osd.0   up      1
1       3.64                    osd.1   up      1
2       1.82                    osd.2   up
On Tue, Aug 13, 2013 at 10:41:53AM -0500, Mark Nelson wrote:
Hi Mark,
> On 08/13/2013 02:56 AM, Dmitry Postrigan wrote:
> >>>I am currently installing some backup servers with 6x3TB drives in them. I
> >>>played with RAID-10 but I was not
> >>>impressed at all with how it performs during a recov
Hello to all,
I have a big issue with Ceph RadosGW.
I did a PoC some days ago with radosgw. It worked well.
Ceph version 0.67.3 under CentOS 6.4
Now I'm installing a new cluster, but I can't get it to work. I do not understand why.
Here are some elements:
ceph.conf:
[global]
filestore_xattr_use_omap =
Hi to all.
Let's assume a Ceph cluster used to store VM disk images.
VMs will be booted directly from the RBD.
What will happen in case of an OSD failure if the failed OSD is the
primary the VM is reading from?
Yeah, rbd clone works well, thanks a lot!
2013/9/16 Sage Weil
> On Mon, 16 Sep 2013, Chris Dunlop wrote:
> > On Mon, Sep 16, 2013 at 09:20:29AM +0800, ??? wrote:
> > > Hi all:
> > >
> > > I have a 30G RBD block device as a virtual machine disk, with Ubuntu 12.04
> > > already installed. About 1G of space u
Hi,
I followed the admin API document at
http://ceph.com/docs/master/radosgw/adminops/ .
When I get the user info, it returns 405 Not Allowed.
My command is
curl -XGET http://kp/admin/user?format=json -d'{"uid":"user1"}'
-H'Authorization:AWS **:**' -H'Date:**' -i -v
The result is
405 M
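For reference, the adminops doc passes uid as a query-string parameter
rather than a request body; a request in that style would look roughly like
this (access key, signature and date are placeholders):
curl -i -v -X GET 'http://kp/admin/user?format=json&uid=user1' \
  -H 'Authorization: AWS ACCESSKEY:SIGNATURE' \
  -H 'Date: Tue, 17 Sep 2013 10:00:00 GMT'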
Hello,
I'm trying to download objects from one container (which contains 3 million
objects, file sizes between 16K and 1024K) with 10 parallel threads. I'm using
the "s3" binary that comes with libs3. I'm monitoring download times; 80% of
response times are lower than 50-80 ms. But sometimes a download hangs, up t
On 09/13/2013 01:02 PM, Mihály Árva-Tóth wrote:
Hello,
How can I decrease the logging level of radosgw? I uploaded 400k
objects and my radosgw log grew to 2 GiB. Current settings:
rgw_enable_usage_log = true
rgw_usage_log_tick_interval = 30
rgw_usage_log_flush_threshold = 1024
rgw_usage
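My guess at what is filling the log is the rgw debug output and the ops log.
If so, something along these lines in the radosgw section of ceph.conf
(the section name here is just an example) should shrink it considerably:
[client.radosgw.gateway]
    debug rgw = 0
    rgw enable ops log = false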
On Mon, Sep 16, 2013 at 8:30 PM, Gruher, Joseph R
wrote:
>>-Original Message-
>>From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
>>Subject: Re: [ceph-users] problem with ceph-deploy hanging
>>
>>ceph-deploy will use the user you are currently executing as. That is why, if
>>you are cal
On 09/16/2013 11:29 AM, Nico Massenberg wrote:
On 16.09.2013 at 11:25, Wido den Hollander wrote:
On 09/16/2013 11:18 AM, Nico Massenberg wrote:
Hi there,
I have successfully setup a ceph cluster with a healthy status.
When trying to create an RBD block device image I am stuck with an error w
On 17/09/2013 14:48, Alfredo Deza wrote:
On Mon, Sep 16, 2013 at 8:30 PM, Gruher, Joseph R
wrote:
[...]
Unfortunately, logging in as my ceph user on the admin system (with a matching user on
the target system) does not affect my result. The "ceph-deploy install" still
hangs here:
[cephtes
Hello all,
I am new to the list.
I have a single machine set up for testing Ceph. It has dual 6-core
processors (12 cores total) and 128GB of RAM. I also have 3 Intel 520
240GB SSDs, with an OSD set up on each disk and the OSD data and journal in
separate partitions formatted with ext4.
My goal here
The VM read will hang until a replica gets promoted and the VM resends the
read. In a healthy cluster with default settings this will take about 15
seconds.
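If you want to watch the timeline yourself on a throwaway cluster, something
like this (osd.0 is only an example id; the service syntax depends on your
init system) will show the monitors marking the OSD down and reads resuming
once a replica takes over:
# on the node hosting the primary OSD (sysvinit; Upstart uses: stop ceph-osd id=0)
sudo service ceph stop osd.0
# in another terminal, watch the cluster state change and recovery start
ceph -w
ceph osd tree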
-Greg
On Tuesday, September 17, 2013, Gandalf Corvotempesta wrote:
> Hi to all.
> Let's assume a Ceph cluster used to store VM disk images.
Your 8k-block dd test is not nearly the same as your 8k-block rados bench
or SQL tests. Both rados bench and SQL require the write to be committed to
disk before moving on to the next one; dd is simply writing into the page
cache. So you're not going to get 460 or even 273MB/s with sync 8k
writes r
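A quick way to see the difference on the same mount (path and sizes are just
examples) is to run the dd both ways:
# buffered writes: mostly measures the page cache
dd if=/dev/zero of=/mnt/rbd/testfile bs=8k count=100000
# synchronous writes: each 8k write must be committed, like rados bench or SQL
dd if=/dev/zero of=/mnt/rbd/testfile bs=8k count=100000 oflag=dsync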
The Windows default (NTFS) is a 4k block size. Are you changing the allocation unit to
8k as a default for your configuration?
- Original Message -
From: "Gregory Farnum"
To: "Jason Villalta"
Cc: ceph-users@lists.ceph.com
Sent: Tuesday, September 17, 2013 10:40:09 AM
Subject: Re: [ceph-use
Oh, and you should run some local sync benchmarks against these drives to
figure out what sort of performance they can deliver with two write streams
going on, too. Sometimes the drives don't behave the way one would expect.
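A rough approximation of the data-plus-journal pattern (mount point is just
an example; use a scratch filesystem) is two synchronous writers hitting the
same SSD at once:
dd if=/dev/zero of=/mnt/ssd/stream1 bs=8k count=100000 oflag=dsync &
dd if=/dev/zero of=/mnt/ssd/stream2 bs=8k count=100000 oflag=dsync &
wait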
-Greg
On Tuesday, September 17, 2013, Gregory Farnum wrote:
> Your 8k-bl
2013/9/17 Gregory Farnum :
> The VM read will hang until a replica gets promoted and the VM resends the
> read. In a healthy cluster with default settings this will take about 15
> seconds.
Thank you.
You could be suffering from a known, but unfixed issue [1] where spindle
contention from scrub and deep-scrub causes periodic stalls in RBD. You
can try to disable scrub and deep-scrub with:
# ceph osd set noscrub
# ceph osd set nodeep-scrub
If your problem stops, Issue #6278 is likely the caus
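To turn them back on once you've finished testing, the corresponding unset
commands should do it:
# ceph osd unset noscrub
# ceph osd unset nodeep-scrub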
Thanks for your feedback, it is helpful.
I may have been wrong about the default Windows block size. What would be
the best tests to compare native performance of the SSD disks at 4K blocks
vs Ceph performance with 4K blocks? It just seems there is a huge
difference in the results.
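For example, would something like this pair be a fair comparison (pool name
and counts are just examples): synchronous 4K writes to the raw SSD versus 4K
writes through RADOS?
# native, synchronous 4K writes to one SSD
dd if=/dev/zero of=/mnt/ssd/testfile bs=4k count=100000 oflag=dsync
# 4K object writes through the cluster
rados -p testpool bench 60 write -b 4096 -t 16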
On Tue, Sep
Ahh, thanks. I will try the test again with that flag and post the results.
On Sep 17, 2013 11:38 AM, "Campbell, Bill"
wrote:
> As Gregory mentioned, your 'dd' test looks to be reading from the cache
> (you are writing 8GB in, and then reading that 8GB out, so the reads are
> all cached reads) so t
Well, that all looks good to me. I'd just keep writing and see if the
distribution evens out some.
You could also double or triple the number of PGs you're using in that
pool; it's not atrocious but it's a little low for 9 OSDs.
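If you do bump it, something like this should do it (pool name and target
count are examples; note pg_num can only be increased, not decreased):
ceph osd pool set rbd pg_num 256
ceph osd pool set rbd pgp_num 256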
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
O
On Tue, Sep 17, 2013 at 1:29 AM, Alexis GÜNST HORN
wrote:
> Hello to all,
>
> I have a big issue with Ceph RadosGW.
> I did a PoC some days ago with radosgw. It worked well.
>
> Ceph version 0.67.3 under CentOS 6.4
>
> Now I'm installing a new cluster, but I can't get it to work. I do not understand
> wh
Hi!
I have a remote server with a single drive, on which Ubuntu is installed. I can't
create another partition on the disk to install an OSD because it is mounted. Is there
another way to install an OSD? Maybe in a folder?
And another question... Could I configure Ceph to put a particular replica on
a particular O
I see that you added your public and cluster networks under an [osd]
section. All daemons use the public network, and OSDs use the cluster
network. Consider moving those settings to [global].
http://ceph.com/docs/master/rados/configuration/network-config-ref/#ceph-networks
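That is, something along these lines (the subnets are only examples):
[global]
    public network = 192.168.1.0/24
    cluster network = 10.0.0.0/24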
Also, I do believe I had
If you use OpenStack, you should fill out the user survey:
https://www.openstack.org/user-survey/Login
In particular, it helps us to know how openstack users consume their
storage, and it helps the larger community to know what kind of storage
systems are being deployed.
sage
As Gregory mentioned, your 'dd' test looks to be reading from the cache (you are writing 8GB in, and then reading that 8GB out, so the reads are all cached reads) so the performance is going to seem good. You can add the 'oflag=direct' to your dd test to try and get a more accurate reading from th
>-Original Message-
>From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
>boun...@lists.ceph.com] On Behalf Of Gilles Mocellin
>
>So you can add something like this in all ceph nodes' /etc/sudoers (use
>visudo):
>
>Defaults env_keep += "http_proxy https_proxy ftp_proxy no_proxy"
You can deploy an OSD to a folder using ceph-deploy. Use: ceph-deploy osd
prepare host:/path
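Roughly like this (host and path are examples; the directory has to exist on
the target node first):
ceph-deploy osd prepare node1:/var/lib/ceph/osd/osd-example
ceph-deploy osd activate node1:/var/lib/ceph/osd/osd-example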
On Sep 17, 2013 1:40 PM, "Jordi Arcas" wrote:
> Hi!
> I have a remote server with a single drive, on which Ubuntu is installed. I can't
> create another partition on the disk to install an OSD because it is mounted.
> There
>-Original Message-
>From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
>
>I was about to ask if you had tried running that command through SSH, but
>you did and had correct behavior. This is puzzling for me because that is
>exactly what ceph-deploy does :/
>
>When you say 'via SSH comman
I will try both suggestions. Thank you for your input.
On Tue, Sep 17, 2013 at 5:06 PM, Josh Durgin wrote:
> Also enabling rbd writeback caching will allow requests to be merged,
> which will help a lot for small sequential I/O.
>
>
> On 09/17/2013 02:03 PM, Gregory Farnum wrote:
>
>> Try it wi
Hi All,
I set up a new cluster today w/ 20 OSDs spanning 4 machines (journals not
stored on separate disks), and a single MON running on a separate server
(I understand a single MON is not ideal for production environments).
The cluster had the default pools along w/ the ones created by radosgw.
Try it with oflag=dsync instead? I'm curious what kind of variation
these disks will provide.
Anyway, you're not going to get the same kind of performance with
RADOS on 8k sync IO that you will with a local FS. It needs to
traverse the network and go through work queues in the daemon; your
primary
Here are the results:
dd of=ddbenchfile if=/dev/zero bs=8K count=1000000 oflag=dsync
8192000000 bytes (8.2 GB) copied, 266.873 s, 30.7 MB/s
On Tue, Sep 17, 2013 at 5:03 PM, Gregory Farnum wrote:
> Try it with oflag=dsync instead? I'm curious what kind of variation
> these disks will provide.
I have examined the logs.
Yes, the first time it could have been scrubbing. It repaired some of it by itself.
I had 2 servers before the first problem: one dedicated to an OSD (osd.0), and a second
with an OSD and websites (osd.1).
After the problem I added a third server dedicated to an OSD (osd.2) and ran
ceph osd out osd.1 for repl
Also enabling rbd writeback caching will allow requests to be merged,
which will help a lot for small sequential I/O.
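A minimal sketch of enabling it on the client side (section name depends on
your setup, and qemu also needs cache=writeback on the drive):
[client]
    rbd cache = true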
On 09/17/2013 02:03 PM, Gregory Farnum wrote:
Try it with oflag=dsync instead? I'm curious what kind of variation
these disks will provide.
Anyway, you're not going to get the s
Here are the stats with direct io.
dd of=ddbenchfile if=/dev/zero bs=8K count=1000000 oflag=direct
8192000000 bytes (8.2 GB) copied, 68.4789 s, 120 MB/s
dd if=ddbenchfile of=/dev/null bs=8K
8192000000 bytes (8.2 GB) copied, 19.7318 s, 415 MB/s
These numbers are still overall much faster than wh
Hi all,
I am testing a Ceph environment installed on Debian Wheezy and, when
testing uploads of files larger than 1 GB, I am getting errors. For files
larger than 5 GB, I get a "400 Bad Request EntityTooLarge" response;
looking at the radosgw server, I notice that only the apache process is
co
So what I am gleaning from this: is it better to have more than 3 OSDs, since
each OSD seems to add additional processing overhead when using small blocks?
I will try to do some more testing by using the same three disks but with 6
or more OSDs.
If the OSD is limited by processing, is it safe to
Hi,
I am running Ceph on a 3-node cluster and each of my server nodes is running 10
OSDs, one for each disk. I have one admin node and all the nodes are connected
with 2 x 10G networks. One network is for the cluster and the other one is configured
as the public network.
Here is the status of my cluster.
~/fio
Hello Ceph Users Group,
Looking for rbd.ko for Centos6.3_x64 (2.6.32) or Centos6.4_x64 (2.6.38).
Or point me to a buildable source or an RPM kernel package that has it.
Regards,
Ben
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of raj kumar
Sent: Mo
On Tue, Sep 17, 2013 at 3:21 PM, Gerd Jakobovitsch wrote:
> Hi all,
>
> I am testing a ceph environment installed in debian wheezy, and, when
> testing file upload of more than 1 GB, I am getting errors. For files larger
> than 5 GB, I get a "400 Bad Request EntityTooLarge" response; looking at
Alright last resort here. Still getting the error message as quoted
below, doesn't matter if I do it with ceph-deploy on the admin node or
ceph-disk on the actual osd node. I've turned off all options for
automounting on desktop Ubuntu 13.04, and even reinstalled the node
with server Ubuntu 13.04.
Hi ceph-users, ceph-devel,
Previously I sent a mail asking for help with Ceph unit tests and functional
tests. Thanks to one of you, I got a reply about the unit tests.
Since we are planning to use Ceph, but with a strict internal quality bar, we
have to evaluate and test the major features we want to u
Hi all,
I installed RGW with a healthy Ceph cluster. Although it works well with the S3
API, can it be accessed with Cyberduck?
I've tried with the RGW user configuration below, but it fails every time.
{ "user_id": "johndoe",
"display_name": "John Doe",
"email": "",
"suspended": 0,
"max_buckets": 1000
Is there a way to enable index documents for radosgw buckets? If not, is that
on the roadmap? I've looked around but have not seen anything. Thanks!
Nelson Jeppesen
Disney Technology Solutions and Services
Phone 206-588-5001
I renamed a few images while the cluster was in a degraded state. Now I can't map one
of them; it fails with the error:
rbd: add failed: (6) No such device or address
I tried renaming the failed image back to its old name, but that didn't solve the problem.
P.S. Now the cluster is in a degraded state too - it is remapping data between OSDs after
marking one of the osd