It turns out it was a permission problem.
When I change to ceph.admin, I can read the file, but the file content
seems to be garbage.
Best regards,
On 2015-05-01 02:07, Gregory Farnum wrote:
The not permitted bit usually means that your client doesn't have
access permissions to the data pool in use.
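As a rough sketch of what to check (the client name and pool name here are only examples, assuming a CephFS data pool):
    ceph auth get client.foo
    ceph auth caps client.foo mon 'allow r' mds 'allow' osd 'allow rwx pool=cephfs_data'
The first command shows the caps the client key currently has; the second extends them to cover the given pool.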
Is there any way to confirm (beforehand) that using SSDs for journals will
help?
We're seeing very disappointing Ceph performance. We have 10GigE
interconnect (as a shared public/internal network).
We're wondering whether it makes sense to buy SSDs and put journals on
them. But we're looking for
How many rsyncs are running at a time? If it is only a couple, you will not
be able to take advantage of the full number of OSDs, as each block of data
is only located on one OSD (not including replicas). When you look at disk
statistics you are seeing an average over time, so it will look like the
O
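Purely as an illustration of the concurrency point (paths and the parallelism level are made up): copying several subtrees in parallel keeps writes in flight on more OSDs than a single rsync walking the whole tree, e.g.
    ls /srv/source | xargs -P4 -I{} rsync -a /srv/source/{}/ destvm:/srv/dest/{}/
How much this helps depends on how evenly the data is spread across those subdirectories.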
Thanks for your answer, Nick.
Typically it's a single rsync session at a time (sometimes two, but rarely
more concurrently). So it's a single ~5GB typical Linux filesystem copied from
one random VM to another random VM.
Apart from using RBD Cache, is there any other way to improve the overall
performanc
Also remember to drive your Ceph cluster as hard as you have the means to, e.g. by
tuning the VM OSes/IO subsystems: use multiple RBD devices per VM (to
issue more outstanding IOPS from the VM IO subsystem), the best IO scheduler, enough CPU
power + memory per VM, and also ensure low network latency + bandwidth bet
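A couple of illustrative knobs along those lines, assuming a virtio disk named vdb inside the guest (device names and values are examples only):
    echo deadline > /sys/block/vdb/queue/scheduler   # or noop; test both
    echo 256 > /sys/block/vdb/queue/nr_requests      # allow a deeper request queue
For more outstanding IOPS, several RBD-backed disks can also be striped together inside the VM, e.g. with LVM or md.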
Yeah, that's your problem: doing a single-threaded rsync when you have quite
poor write latency will not be quick. SSD journals should give you a fair
performance boost; otherwise you need to coalesce the writes at the client
so that Ceph is given bigger IOs at higher queue depths.
RBD Cache can
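For reference, a minimal client-side sketch for enabling RBD cache in ceph.conf; the sizes are illustrative, not tuned recommendations:
    [client]
        rbd cache = true
        rbd cache writethrough until flush = true
        rbd cache size = 67108864        # 64 MB
        rbd cache max dirty = 50331648   # 48 MB
With QEMU/libvirt the drive also needs cache=writeback for the cache to be used for writes.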
Piotr,
You may also investigate whether a cache tier made of a couple of SSDs could help
you. Not sure how the data is used in your company, but if you have a bunch of
hot data that moves around from one VM to another it might greatly speed up the
rsync. On the other hand, if a lot of rsync data i
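If you go down that road, the rough shape of the commands is as follows, with pool names purely as placeholders:
    ceph osd tier add rbd ssd-cache
    ceph osd tier cache-mode ssd-cache writeback
    ceph osd tier set-overlay rbd ssd-cache
    ceph osd pool set ssd-cache hit_set_type bloom
Sizing the cache pool and setting target_max_bytes and the dirty ratios properly matters a lot, so treat this only as a sketch.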
On 01-05-15 11:42, Nick Fisk wrote:
> Yeah, that's your problem: doing a single-threaded rsync when you have
> quite poor write latency will not be quick. SSD journals should give you
> a fair performance boost; otherwise you need to coalesce the writes at
> the client so that Ceph is given bigger
I have a fresh install of the Ceph hammer release, version 0.94.1.
I am facing problems while configuring the Rados gateway. I want to map
specific users to specific pools. For this I followed the following
links.
(1). http://comments.gmane.org/gmane.comp.file-systems.ceph.user/4992
(2). http://cephnotes.ksper
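For context, the usual way to tie a user to specific pools is via placement targets: add a placement target in the region, map it to your pools in the zone, and point the user's default_placement at it. Very roughly, and with names that are only examples:
    radosgw-admin region get > region.json        # add a "special-placement" target
    radosgw-admin region set < region.json
    radosgw-admin zone get > zone.json            # map the target to your pools
    radosgw-admin zone set < zone.json
    radosgw-admin regionmap update
    radosgw-admin metadata get user:johndoe > user.json    # set "default_placement"
    radosgw-admin metadata put user:johndoe < user.json
followed by a restart of radosgw. The linked posts have the exact JSON fields.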
Hi,
On 01.05.2015 10:30, Piotr Wachowicz wrote:
> Is there any way to confirm (beforehand) that using SSDs for journals
> will help?
yes SSD-Journal helps a lot (if you use the right SSDs) for write speed,
and in my experience it also helped (though not as much) with
read performance.
>
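For what it's worth, putting journals on an SSD is just a matter of handing ceph-disk/ceph-deploy a separate journal device; device and host names here are placeholders:
    ceph-disk prepare /dev/sdb /dev/sdf          # data on sdb, journal partition on the SSD sdf
    ceph-deploy osd create node1:sdb:/dev/sdf    # same thing via ceph-deploy
One SSD typically carries the journals of several spinners, so its sync write speed and endurance matter.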
We run a ceph cluster with radosgw on top of it. During the installation we
have never specified any regions or zones, which means that every bucket
currently resides in the default region. To support a federated config we have
built a test cluster that replicates the current production setup wi
> yes SSD-Journal helps a lot (if you use the right SSDs)
>
What SSDs to avoid for journaling from your experience? Why?
>
> > We're seeing very disappointing Ceph performance. We have 10GigE
> > interconnect (as a shared public/internal network).
> Which kind of CPU do you use for the OSD-hosts?
Hello,
On Fri, 1 May 2015 15:45:41 +0200 Piotr Wachowicz wrote:
> > yes SSD-Journal helps a lot (if you use the right SSDs)
> >
>
> What SSDs to avoid for journaling from your experience? Why?
>
Read the rather countless SSD threads on this ML, use the archive and your
google foo.
Like the _cu
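The recurring advice in those threads is to test a drive's O_DSYNC/direct write behaviour yourself before buying in bulk, since the journal is written that way. A commonly quoted, purely illustrative test against a file on the mounted SSD:
    dd if=/dev/zero of=/mnt/ssd/journal-test bs=4k count=100000 oflag=direct,dsync
Consumer drives that collapse to a few hundred IOPS under dsync are the ones to avoid; endurance (drive writes per day) is the other thing to check.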
From what I read in some of the topics, is it you guys' opinion that Ceph cannot
scale nicely on a full-SSD cluster? Meaning that no matter how many OSD nodes we
add, at some point you won't be able to scale past a certain throughput.
---
Anthony Lévesque
GloboTech Communications
Phone: 1-514-907-0050 x 208
On Fri, 1 May 2015, tuomas.juntu...@databasement.fi wrote:
> Hi
>
> I deleted the images and img pools and started osd's, they still die.
>
> Here's a log of one of the osd's after this, if you need it.
>
> http://beta.xaasbox.com/ceph/ceph-osd.19.log
I've pushed another commit that should avoi
Hi all,
I feel a bit like an idiot at the moment - I know there is a command
through ceph to query the monitor and OSD daemons to check their version
level, but I can't remember what it is to save my life and I'm having
trouble locating it in the docs. I need to make sure the entire cluster is
ru
Hello,
On Fri, 1 May 2015 12:03:59 -0400 Anthony Levesque wrote:
> From what I read in some of the topics, is it you guys' opinion that Ceph
> cannot scale nicely on a full-SSD cluster? Meaning that no matter how many
> OSD nodes we add, at some point you won't be able to scale past a certain
> throughput.
ceph --admin-daemon <path-to-admin-socket> version
On Fri, May 1, 2015 at 10:44 AM, Tony Harris wrote:
> Hi all,
>
> I feel a bit like an idiot at the moment - I know there is a command through
> ceph to query the monitor and OSD daemons to check their version level, but
> I can't remember what it is to save my life a
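Concretely, a couple of ways to ask daemons their version (socket paths and IDs below are examples):
    ceph tell osd.* version
    ceph tell mon.a version
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok version   # locally, via the admin socket
The admin-socket form has to be run on the host where that daemon lives.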
Thanks, I'll do this when the commit is available and report back.
And indeed, I'll change to the official ones after everything is ok.
Br,
Tuomas
> On Fri, 1 May 2015, tuomas.juntu...@databasement.fi wrote:
>> Hi
>>
>> I deleted the images and img pools and started osd's, they still die.
>>
>>
On 30/04/2015 09:21, flisky wrote:
When I read the file through the ceph-fuse, the process crashed.
Here is the log -
terminate called after throwing an instance of
'ceph::buffer::end_of_buffer'
what(): buffer::end_of_buffer
*** Caught signal (Aborted) **
in thread 7
Hi Experts,
I need quick advice on deploying a Ceph cluster on AWS EC2 VMs.
1) I have two separate AWS accounts, and I am trying to create the Ceph cluster
on one account and
the ceph-client on another account, and connect them.
(EC2 Account and VMs + ceph client) public ip---> (EC2 Account B + c
Hey there,
Sorry for the delay. I have been moving apartments, UGH. Our dev team
found out how to quickly identify the files that are downloading at a
smaller size:
iterate through all of the objects in a bucket, get key.size for
each item, and compare it to conn.get_bucket().get_key(
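A rough boto 2 sketch of that check, with placeholder credentials, endpoint and bucket name; listing a bucket reports the size recorded in the bucket index, while get_key() issues a HEAD against the object itself:
    import boto
    import boto.s3.connection

    conn = boto.connect_s3(
        aws_access_key_id='ACCESS',            # placeholders
        aws_secret_access_key='SECRET',
        host='rgw.example.com',
        is_secure=False,                       # assuming plain HTTP to the gateway
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )
    bucket = conn.get_bucket('mybucket')
    for listed in bucket.list():               # size from the bucket index
        fetched = bucket.get_key(listed.name)  # size from a HEAD request
        if fetched.size != listed.size:
            print('mismatch: %s index=%d head=%d'
                  % (listed.name, listed.size, fetched.size))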
On 2015-05-02 03:02, John Spray wrote:
On 30/04/2015 09:21, flisky wrote:
When I read the file through the ceph-fuse, the process crashed.
Here is the log -
terminate called after throwing an instance of
'ceph::buffer::end_of_buffer'
what(): buffer::end_of_buffer
***