Hello!
We are using the Ceph RBD kernel module, on RHEL 7.0, with Ceph "Firefly" 0.80.5.
Does the RBD kernel module support Cache Tiering in Firefly?
If not, when will the RBD kernel module support Cache Tiering (which Linux
kernel version and Ceph version)?
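(For reference, the cache-tier setup we mean looks like this; a minimal
sketch with placeholder pool names, using the Firefly tiering commands:)

    ceph osd tier add rbd-data rbd-cache          # attach rbd-cache in front of rbd-data
    ceph osd tier cache-mode rbd-cache writeback  # cache absorbs writes, flushes to backing pool later
    ceph osd tier set-overlay rbd-data rbd-cache  # route client I/O through the cache tier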
Regards,
Amit Vijairania | Cisco Systems, Inc.
--*--
Hello!
In a two-rack Ceph cluster, with 15 hosts per rack (10 OSDs per
host, 150 OSDs per rack), is it possible to create a ruleset for a
pool such that the primary and secondary PGs/replicas are placed in
one rack and the tertiary PG/replica is placed in the other rack?
root standard {
id -1 #
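(A sketch of the kind of rule we have in mind; the rack bucket names
rack1/rack2 are placeholders, and note that this pins two replicas to one
fixed rack rather than alternating racks per PG:)

    rule two_plus_one {
            ruleset 1
            type replicated
            min_size 3
            max_size 3
            step take rack1
            step chooseleaf firstn 2 type host   # two replicas on distinct hosts in rack1
            step emit
            step take rack2
            step chooseleaf firstn 1 type host   # third replica on a host in rack2
            step emit
    }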
Hi,
I would like to know which libleveldb version should be used with Firefly.
I'm using Debian wheezy, which provides a really old libleveldb (I don't use it),
and wheezy-backports provides 1.17.
But in the Inktank repositories I see that 1.9 is provided for some distributions.
So, what is the best/tested version?
Hey guys,
While reading the Ceph source code, I found a file named
common/likely.h that implements the macros likely() and unlikely(), which
give the compiler branch-prediction hints for the CPU.
But these two macros aren't used anywhere, so I am curious
about why the developer of
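(For context, these macros usually follow the standard GCC pattern; a
sketch, not necessarily verbatim from common/likely.h:)

    /* Hint to the compiler which way a branch usually goes, so it can
     * lay out the hot path without taken jumps. */
    #define likely(x)   __builtin_expect(!!(x), 1)  /* condition is usually true */
    #define unlikely(x) __builtin_expect(!!(x), 0)  /* condition is rarely true */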
Perhaps this question belongs in ceph-dev ?
Marco Garcês
#sysadmin
Maputo - Mozambique
[Phone] +258 84 4105579
[Skype] marcogarces
On Mon, Sep 15, 2014 at 12:28 PM, Tim Zhang wrote:
> Hey guys,
> After reading ceph source code, I find that there is a file named
> common/likely.h and it
Greg,
So is the consensus that the appropriate way to implement this scenario is
to have the fs created on the EC backing pool vs. the cache pool, but that
the UI check needs to be tweaked to distinguish between this scenario and
just trying to use an EC pool alone?
I'm also interested in the scena
Hi
Does anyone know how to check the basic cache pool stats, i.e. information
like how well the cache layer is working over a recent or historic time frame?
Things like the cache hit ratio would be very helpful.
Thanks
Andrei
Hi,
I have some strange OSD problems. Before the weekend I started some
rsync tests over CephFS, on a cache pool with an underlying EC KV pool.
Today the cluster is completely degraded:
[root@ceph003 ~]# ceph status
cluster 82766e04-585b-49a6-a0ac-c13d9ffd0a7d
health HEALTH_WARN 19
Hi all,
I have no idea why running out of file handles should produce an "out of
memory" error, but well. I've increased the ulimit as you told me, and
nothing changed. I've noticed that the osd init script sets the max open
file handles explicitly, so I was setting the corresponding option in my
ceph.conf
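(i.e. something like the following; the value shown is just an example:)

    [global]
            max open files = 131072   # open-fd limit applied to the daemons at startup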
- Original Message -
> From: "Mark Nelson"
> To: ceph-users@lists.ceph.com
> Sent: Monday, 15 September, 2014 1:13:01 AM
> Subject: Re: [ceph-users] Bcache / Enhanceio with osds
> On 09/14/2014 05:11 PM, Andrei Mikhailovsky wrote:
> > Hello guys,
> >
> > Was wondering if anyone uses or d
On 09/15/2014 07:35 AM, Andrei Mikhailovsky wrote:
*From: *"Mark Nelson"
*To: *ceph-users@lists.ceph.com
*Sent: *Monday, 15 September, 2014 1:13:01 AM
*Subject: *Re: [ceph-users] Bcache / Enhanceio with os
The ceph-devel list always rejects my mail, saying it is spam because it
includes HTML code; actually, it does not.
2014-09-15 19:43 GMT+08:00 Marco Garcês :
> Perhaps this question belongs in ceph-dev ?
>
>
> Marco Garcês
> #sysadmin
> Maputo - Mozambique
> [Phone] +258 84 4105579
> [Skype]
On Mon, 15 Sep 2014 22:48:07 +0800 Tim Zhang wrote:
> The ceph-devel list always rejects my mail, saying it is spam because it
> includes HTML code; actually, it does not.
>
Actually it is, like this very mail from you.
I would think/hope that there is a configuration option in Gmail to turn
that off.
Hi
"ceph daemon osd.x perf dump" will show you the stats, Andrei.
JC
On Monday, September 15, 2014, Andrei Mikhailovsky
wrote:
> Hi
>
> Does anyone know how to check the basic cache pool stats for the
> information like how well the cache layer is working for a recent or
> historic time frame? Thi
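(A concrete sketch of JC's suggestion; osd.0 is just an example, and the
command must run on the host where that OSD lives. The tier_* counters
— promotions, flushes, evictions — are the cache-related ones, if present
in your build:)

    ceph daemon osd.0 perf dump | python -mjson.tool | grep tier_

Note the counters are cumulative since the daemon started, so for a hit
ratio over a given time window you would sample twice and diff.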
Hi Amit,
On Mon, 15 Sep 2014, Amit Vijairania wrote:
> Hello!
>
> In a two (2) rack Ceph cluster, with 15 hosts per rack (10 OSD per
> host / 150 OSDs per rack), is it possible to create a ruleset for a
> pool such that the Primary and Secondary PGs/replicas are placed in
> one rack and Tertiary
Hi there,
I am new to Ceph and, in general, to cloud storage.
I want to know whether there are different Ceph configurations for
different DC storage needs. By DC storage needs I mean, for example, a
reliability-focused or a high-performance storage system.
In other words, is there a different Ceph configuration if I want
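(To make the question concrete, a sketch of the kind of knobs such
trade-offs map to; the pool name is a placeholder:)

    ceph osd pool set mypool size 3       # more replicas: durability over raw capacity
    ceph osd pool set mypool min_size 2   # replicas that must be up before accepting I/O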
I don't know where the file came from, but likely/unlikely markers are the
kind of micro-optimization that isn't worth the cost in Ceph dev resources
right now.
-Greg
On Monday, September 15, 2014, Tim Zhang wrote:
> Hey guys,
> After reading ceph source code, I find that there is a file named
>
I agree with Greg. When dealing with the latencies that we deal with due to
different IO operations (networking, storage), it's mostly not worth the
trouble. I think the main reason we didn't actually put it to use is that
we forgot we've had this macro defined, and it really wasn't worth the
troub
Not sure, but have you checked the clocks on their nodes? Extreme
clock drift often results in strange cephx errors.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Sun, Sep 14, 2014 at 11:03 PM, Florian Haas wrote:
> Hi everyone,
>
> [Keeping this on the -users list for no
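(A quick way to check for the skew Greg mentions; both commands are
standard:)

    ceph health detail | grep -i skew   # monitors warn 'clock skew detected on mon.X'
    ntpq -p                             # on each node: NTP peers and current offsets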
The pidfile bug is already fixed in master/giant branches.
As for the crashing, I'd try killing all the osd processes and turning
them back on again. It might just be that some daemon restart failed, or
your cluster could be sufficiently overloaded that the node disks are
going unresponsive and they're
On Mon, Sep 15, 2014 at 6:32 AM, Berant Lemmenes wrote:
> Greg,
>
> So is the consensus that the appropriate way to implement this scenario is
> to have the fs created on the EC backing pool vs. the cache pool but that
> the UI check needs to be tweaked to distinguish between this scenario and
> j
Thanks, Sage! We will test this and share our observations.
Regards,
Amit
Amit Vijairania | 415.610.9908
--*--
On Mon, Sep 15, 2014 at 8:28 AM, Sage Weil wrote:
> Hi Amit,
>
> On Mon, 15 Sep 2014, Amit Vijairania wrote:
>> Hello!
>>
>> In a two (2) rack Ceph cluster, with 15 hosts per rack
If it's true, is there any other tools I can use to check and repair the
file system?
Thanks,
Brandon
On Mon, Sep 15, 2014 at 3:23 PM, brandon li wrote:
> If it's true, is there any other tools I can use to check and repair the
> file system?
Not much, no. That said, you shouldn't really need an fsck unless the
underlying RADOS store went through some catastrophic event. Is there
anything in part
Thanks for the reply, Greg.
With traditional file-system experience, I have to admit it will take me
some time to get used to the way CephFS works. I consider it part of my
learning curve. :-)
One concern I have is that, without tools like fsck, how could we know
the file system is still
CephFS in general has far fewer metadata structures than traditional
filesystems do; about the only things that could go wrong
without users noticing directly are:
1) The data gets corrupted
2) Files somehow get removed from folders.
Data corruption is something RADOS is responsible for
Hi, I am new to Ceph, and I have a problem with my cluster health. The
following is my ceph status:
# ceph --version
ceph version 0.72.1 (4d923861868f6a15dcb33fef7f50f674997322de)
# ceph -w
cluster 323e974d-ea51-4d10-94e5-8b1ae7a41429
health HEALTH_WARN 305 pgs degraded; 448 pgs stuck uncl
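(The usual first steps for digging into degraded/stuck PGs; a sketch using
standard commands:)

    ceph health detail           # which PGs are degraded/stuck, and why
    ceph pg dump_stuck unclean   # stuck PGs with their up/acting OSD sets
    ceph osd tree                # confirm all OSDs are up and in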
Hi all!
As the documentation says, Ceph has some default pools for a radosgw
instance. These pools are:
* .rgw.root
* .rgw.control
* .rgw.gc
* .rgw.buckets
* .rgw.buckets.index
* .log
* .intent-log
* .usage
* .users
* .users.ema
Great to know you are working on it!
I am new to the mailing list. Is there any reference to the discussion from
last year that I can look into, or any bug number I can watch to keep track
of the development?
Thanks,
Brandon
OK, thank you all.
2014-09-16 0:52 GMT+08:00 Yehuda Sadeh :
> I agree with Greg. When dealing with the latencies that we deal with due
> to different IO operations (networking, storage), it's mostly not worth the
> trouble. I think the main reason we didn't actually put it to use is that
> we for