Hi,
My environment has a 32-core CPU and 256 GB of memory. The SSD can reach
30k write IOPS when using direct I/O.
I finally figured out the problem: after changing the SSD's I/O scheduler to
noop, the
performance improved noticeably.
Please forgive me; I didn't realize the I/O scheduler could impact performance
so much.
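For reference, a minimal sketch of checking and switching the scheduler at
runtime (assuming the SSD shows up as /dev/sdb; the device name is only an
example, and the echo does not persist across reboots):

    # show the available schedulers; the active one is in brackets
    cat /sys/block/sdb/queue/scheduler

    # switch to noop for this boot only
    echo noop | sudo tee /sys/block/sdb/queue/scheduler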
Hi,
I ran into a strange problem I've never seen before:
esta@storageOne:~$ sudo ceph -s
[sudo] password for esta:
cluster 0b9b05db-98fe-49e6-b12b-1cce0645c015
health HEALTH_WARN
512 pgs stuck unclean
recovery 1440/2160 objects degraded (66.667%)
r
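A sketch of the commands one would typically run next to see why the PGs are
stuck (all of these exist in Hammer-era releases):

    ceph health detail            # which PGs are stuck, and for how long
    ceph pg dump_stuck unclean    # stuck PGs with their acting OSD sets
    ceph osd tree                 # check that CRUSH can actually place all replicas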
The fact is that the journal can help a lot for RBD use cases,
especially for small I/Os. I don't think it will be the bottleneck. If we
just want to reduce the double write, that alone doesn't solve any performance
problem.
For RGW and CephFS, we actually need the journal to keep operations atomic.
On Tue, Oct 20, 2015 at 8:54
> On 20 Oct 2015, at 01:43, Josh Durgin wrote:
>
> On 10/19/2015 02:45 PM, Jan Schermer wrote:
>>
>>> On 19 Oct 2015, at 23:15, Gregory Farnum wrote:
>>>
>>> On Mon, Oct 19, 2015 at 11:18 AM, Jan Schermer wrote:
I'm sorry for appearing a bit dull (on purpose), I was hoping I'd hear
>>>
On 10/19/2015 02:45 PM, Jan Schermer wrote:
On 19 Oct 2015, at 23:15, Gregory Farnum wrote:
On Mon, Oct 19, 2015 at 11:18 AM, Jan Schermer wrote:
I'm sorry for appearing a bit dull (on purpose), I was hoping I'd hear what
other people using Ceph think.
If I were to use RADOS directly in m
> If I were to use RADOS directly in my app I'd probably rejoice at its
> capabilities and how useful and non-legacy it is, but my use is basically
> for RBD volumes with OpenStack (libvirt, qemu...). And for that those
> capabilities are unneeded.
Just to clarify, RBD does utilize librados transa
The classic case is when you are just trying Ceph out on a laptop (e.g.,
using file directories for OSDs, setting the replica size to 2, and
setting osd_crush_chooseleaf_type to 0).
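As an illustration of those laptop-test settings, the corresponding ceph.conf
fragment would look roughly like this (values shown only as an example):

    [global]
    osd pool default size = 2          # replica size of 2
    osd crush chooseleaf type = 0      # allow replicas on the same host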
The statement is a guideline. You could, in fact, create a CRUSH hierarchy
consisting of OSD/journal groups within a
Can the librbd interface provide an abort API for aborting I/O? If yes, can the
abort interface detach the write buffer immediately? I hope to reuse the write
buffer quickly after issuing the abort request, rather than waiting for the I/O
to be aborted on the OSD side.
Thanks.
On Mon, Oct 19, 2015 at 3:26 PM, Erming Pei wrote:
> I see. That's also what I needed.
> Thanks.
>
> Can we only allow a part of the 'namespace' or directory tree to be mounted
> from server end? Just like NFS exporting?
> And even setting of permissions as well?
This just got merged into the mas
I see. That's also what I needed.
Thanks.
Can we only allow a part of the 'namespace' or directory tree to be
mounted from the *server* end, just like NFS exporting?
And even setting of permissions as well?
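For illustration only, a path-restricted client cap of the kind being discussed
(behaving like an NFS export) might look roughly like this; the client name,
path, and pool are made up, and the exact syntax depends on the release where
this support lands:

    ceph auth get-or-create client.restricted \
        mon 'allow r' \
        mds 'allow rw path=/some/small/thing' \
        osd 'allow rw pool=cephfs_data'

A key created this way would then only be able to mount and modify that
subtree.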
Erming
On 10/19/15, 4:07 PM, Gregory Farnum wrote:
On Mon, Oct 19, 2015 at 3:06 PM, Er
Hi, I am curious whether we need the Ceph journal instead of using the file
system's existing journal, at least for the block use case. Can you help
explain more about how Ceph guarantees that data, file states, and leveldb
updates are applied atomically by using the Ceph journal?
On 15 Oct 2015 at 02:29, Somnath Roy wrote: File
On Mon, Oct 19, 2015 at 3:06 PM, Erming Pei wrote:
> Hi,
>
>Is there a way to list the namespaces in cephfs? How to set it up?
>
>From man page of ceph.mount, I see this:
>
> To mount only part of the namespace:
>
> mount.ceph monhost1:/some/small/thing /mnt/thing
>
> But how t
Hi,
Is there a way to list the namespaces in cephfs? How to set it up?
From the man page of mount.ceph, I see this:
To mount only part of the namespace:

    mount.ceph monhost1:/some/small/thing /mnt/thing
But how to know the namespaces at first?
Thanks,
Erming
> On 19 Oct 2015, at 23:15, Gregory Farnum wrote:
>
> On Mon, Oct 19, 2015 at 11:18 AM, Jan Schermer wrote:
>> I'm sorry for appearing a bit dull (on purpose), I was hoping I'd hear what
>> other people using Ceph think.
>>
>> If I were to use RADOS directly in my app I'd probably rejoice at
On Mon, Oct 19, 2015 at 11:18 AM, Jan Schermer wrote:
> I'm sorry for appearing a bit dull (on purpose), I was hoping I'd hear what
> other people using Ceph think.
>
> If I were to use RADOS directly in my app I'd probably rejoice at its
> capabilities and how useful and non-legacy it is, but m
This Hammer point release fixes several important bugs in Hammer, as well as
fixing interoperability issues that are required before an upgrade to
Infernalis. That is, all users of earlier versions of Hammer or any
version of Firefly will first need to upgrade to Hammer v0.94.4 or
later before upgrading to
As the infernalis release notes state, if you're upgrading you first
need to step through the current development hammer branch or the
(not-quite-released) 0.94.4.
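A sketch of how to confirm what each daemon is actually running before
attempting the jump (assumes an admin keyring on the node; the mon ID is only
an example):

    ceph --version            # locally installed packages
    ceph tell osd.* version   # ask every OSD which version it is running
    ceph tell mon.a version   # repeat per monitor (mon.a, mon.b, ...)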
-Greg
On Thu, Oct 15, 2015 at 7:27 AM, German Anders wrote:
> Hi all,
>
> I'm trying to upgrade a ceph cluster (prev hammer release) t
I think if there was a new disk format, we could get away without the
journal. It seems that Ceph is trying to do extra things because
regular file systems don't do exactly what is needed. I can understand
why the developers aren't excited about buil
Sorry about that, I guess newer releases than my Dumpling calculate it
differently, then.
I can take a look tomorrow at the exact numbers I get, but I'm pretty sure it's
just a sum on D.
Jan
> On 19 Oct 2015, at 20:40, John Spray wrote:
>
> On Mon, Oct 19, 2015 at 7:28 PM, Jan Schermer wrote
On Mon, Oct 19, 2015 at 7:28 PM, Jan Schermer wrote:
> Cinder checking free space will not help.
> You will get one full OSD long before you run "out of space" from Ceph
> perspective, and it gets worse with the number of OSDs you have. Using 99%
> of space in Ceph is not the same as having all th
Cinder checking free space will not help.
You will get one full OSD long before you run "out of space" from Ceph
perspective, and it gets worse with the number of OSDs you have. Using 99% of
space in Ceph is not the same as having all the OSDs 99% full because the data
is not distributed in a co
I'm sorry for appearing a bit dull (on purpose), I was hoping I'd hear what
other people using Ceph think.
If I were to use RADOS directly in my app I'd probably rejoice at its
capabilities and how useful and non-legacy it is, but my use is basically for
RBD volumes with OpenStack (libvirt, qem
Cinder will periodically inspect the free space of the volume services and
use this data when determining which one to schedule to when a request is
received. In this case the cinder volume create request may error out in
scheduling. You may also see an error when instantiating a volume from an
ima
Hi John,
Thanks for your explanations.
Actually, clients can. Clients can request fairly complex operations like
"read an xattr, stop if it's not there, now write the following discontinuous
regions of the file...". RADOS executes these transactions atomically.
[James] Could you m
I'm working with some teams who would like not only to create ACLs within
RADOSGW at a tenant level, but also to tailor ACLs to individual users within
that tenant. After trial and error, I can only seem to get ACLs to stick at
the tenant level using the Keystone tenant ID UUID.
Is this expected beh
John Spray 2015-10-19 11:34:
CephFS supports capabilities to manage access to objects, enforce
consistency of data, etc. IMHO a sane way to handle the page cache is to use a
capability to inform the MDS about cached objects; as long as no other
client claims write access to an object or its metada
Hi,
when an OSD gets full, any write operation to the entire cluster will be
disabled.
As a result, creating a single RBD becomes impossible, and all VMs that need
to write to one of their Ceph-backed RBDs will suffer the same pain.
Usually, this ends up badly for the VMs.
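The thresholds involved are tunable; a minimal sketch, with the stock defaults
shown only as examples:

    # ceph.conf, [global] or [mon] section
    mon osd nearfull ratio = 0.85   # HEALTH_WARN once any OSD passes this
    mon osd full ratio = 0.95       # writes are blocked once any OSD passes this

    # or adjusted on a running cluster
    ceph pg set_nearfull_ratio 0.85
    ceph pg set_full_ratio 0.95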
The best
Hi,
It got taken down when there was that security issue on ceph.com a
couple of weeks back. I'll bug the website admins again about getting
it back up.
Mark
On 10/19/2015 06:13 AM, Iezzi, Federico wrote:
Hi there,
The content sharing at http://nhm.ceph.com/ is not anymore reachable on
In
>>> So: the key thing to realise is that caching behaviour is full of
>>> tradeoffs, and this is really something that needs to be tunable, so
>>> that it can be adapted to the differing needs of different workloads.
>>> Having an optional "hold onto caps for N seconds after file close"
>>> sounds
CC-ing ceph-users where this message belongs.
On 10/16/2015 05:41 PM, Michael Joy wrote:
> Hey Everyone,
>
> Is it possible to use Kerberos for authentication vs. the built-in
> Cephx? Does anyone know the process to get it working if it is possible?
No, but it is on the wishlist for Jewel. Let
On Mon, Oct 19, 2015 at 12:52 PM, Dan van der Ster wrote:
> Your assumption doesn't match what I've seen (in high energy physics
> (HEP)). The implicit hint you describe is much more apparent when
> clients use object storage APIs like S3 or one of the oodles of
> network storage systems we use in
On Mon, Oct 19, 2015 at 12:52 PM, Dan van der Ster wrote:
>> So: the key thing to realise is that caching behaviour is full of
>> tradeoffs, and this is really something that needs to be tunable, so
>> that it can be adapted to the differing needs of different workloads.
>> Having an optional "hol
Hi,
On 10/19/2015 12:34 PM, John Spray wrote:
On Mon, Oct 19, 2015 at 8:59 AM, Burkhard Linke
wrote:
Hi,
On 10/19/2015 05:27 AM, Yan, Zheng wrote:
On Sat, Oct 17, 2015 at 1:42 AM, Burkhard Linke
wrote:
Hi,
I've noticed that CephFS (both ceph-fuse and kernel client in version
4.2.3)
remove
On Mon, Oct 19, 2015 at 12:34 PM, John Spray wrote:
> On Mon, Oct 19, 2015 at 8:59 AM, Burkhard Linke
> wrote:
>> Hi,
>>
>> On 10/19/2015 05:27 AM, Yan, Zheng wrote:
>>>
>>> On Sat, Oct 17, 2015 at 1:42 AM, Burkhard Linke
>>> wrote:
Hi,
I've noticed that CephFS (both ceph-fus
Hi there,
The content shared at http://nhm.ceph.com/ is no longer reachable on the
Internet.
Could you please fix it?
Thanks,
F.
Hi all,
I tried upgrading ceph from 0.9.3 to 9.1.0, but ran into some troubles.
I chowned the /var/lib/ceph folder as described in the release notes,
but my journal is on a separate partition, so I get:
Oct 19 11:58:59 ceph001.cubone.os systemd[1]: Started Ceph object
storage daemon.
Oct 19 1
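If this is the usual case of the raw journal device still being owned by root
after the /var/lib/ceph chown, a sketch of one workaround (the device paths
below are only examples):

    # give the journal partition to the ceph user for this boot
    sudo chown ceph:ceph /dev/disk/by-partuuid/<journal-partuuid>

    # make it persistent across reboots, e.g. with a udev rule (illustrative)
    echo 'KERNEL=="sdb2", OWNER="ceph", GROUP="ceph", MODE="0660"' | \
        sudo tee /etc/udev/rules.d/90-ceph-journal.rules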
On Mon, Oct 19, 2015 at 8:55 AM, Jan Schermer wrote:
> I understand this. But the clients can't request something that doesn't fit a
> (POSIX) filesystem capabilities
Actually, clients can. Clients can request fairly complex operations
like "read an xattr, stop if it's not there, now write the
On Mon, Oct 19, 2015 at 8:59 AM, Burkhard Linke
wrote:
> Hi,
>
> On 10/19/2015 05:27 AM, Yan, Zheng wrote:
>>
>> On Sat, Oct 17, 2015 at 1:42 AM, Burkhard Linke
>> wrote:
>>>
>>> Hi,
>>>
>>> I've noticed that CephFS (both ceph-fuse and kernel client in version
>>> 4.2.3)
>>> remove files from pag
Thanks Jan!!
Cheers
Bharath
On 10/19/15, 3:17 PM, "Jan Schermer" wrote:
>It happened to me once but I didn't really have any time to investigate
>how exactly it behaves. Some VMs had to be rebooted, other VMs survived
>but I can't tell if for example rewriting the same block is possible.
>Only
It happened to me once but I didn't really have any time to investigate how
exactly it behaves. Some VMs had to be rebooted, other VMs survived but I can't
tell if for example rewriting the same block is possible.
Only writes should block in any case.
I don't know what happens to Cinder, but I d
I mean the cluster OSDs are physically full.
I understand it's not a pretty way to operate Ceph, allowing it to become full,
but I just wanted to know the boundary condition if it does become full.
Will the Cinder create-volume operation create a new volume at all, or is an
error thrown at the Cinder API level itself stat
Do you mean when the CEPH cluster (OSDs) is physically full or when the quota
is reached?
If CEPH becomes full it just stalls all IO (maybe just write IO, but
effectively same thing) - not pretty and you must never ever let it become full.
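A sketch of how to see how close each OSD actually is to those limits
(ceph osd df is available from Hammer onwards):

    ceph osd df           # per-OSD utilisation (%USE / AVAIL columns)
    ceph df               # per-pool and cluster-wide usage
    ceph health detail    # explicitly lists near full / full OSDs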
Jan
> On 19 Oct 2015, at 11:15, Bharath Krishna wrot
Hi
What happens when the Ceph cluster backing a Cinder service is completely
full?
What would be the outcome of a new Cinder create-volume request?
Will the volume be created even though no space is available for use, or will
an error be thrown from the Cinder API stating no space is available for the
new volume?
I could
Hi,
On 10/19/2015 10:34 AM, Shinobu Kinjo wrote:
What kind of applications are you talking about regarding to applications
for HPC.
Are you talking about like netcdf?
Caching is quite necessary for some applications for computation.
But it's not always the case.
It's not quite related to this
What kind of applications are you talking about with regard to HPC?
Are you talking about something like NetCDF?
Caching is quite necessary for some computational applications,
but it's not always the case.
It's not quite related to this topic, but I'm really interested in your
thought usi
Hi,
On 10/19/2015 05:27 AM, Yan, Zheng wrote:
On Sat, Oct 17, 2015 at 1:42 AM, Burkhard Linke
wrote:
Hi,
I've noticed that CephFS (both ceph-fuse and kernel client in version 4.2.3)
remove files from page cache as soon as they are not in use by a process
anymore.
Is this intended behaviour?
I understand this. But the clients can't request something that doesn't fit
(POSIX) filesystem capabilities. That means the requests can map 1:1 onto the
filestore (O_FSYNC from the client == O_FSYNC on the filestore object...).
Pagecache/io-schedulers are already smart enough to merge requests,