Any user access to an object promotes it into the cache pool.
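For reference, the knobs being discussed are set per cache pool; a minimal sketch with purely illustrative values, assuming a cache pool named "hot-pool" (not from this thread):

# Sketch only -- pool name and values are illustrative
ceph osd pool set hot-pool hit_set_type bloom      # bloom-filter HitSets
ceph osd pool set hot-pool hit_set_count 1         # how many HitSets to keep
ceph osd pool set hot-pool hit_set_period 3600     # seconds each HitSet covers
ceph osd pool set hot-pool hit_set_fpp 0.05        # bloom false positive probability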
On Wednesday, June 11, 2014, Alexandre DERUMIER wrote:
>>We haven't really quantified that yet. In particular, it's going to
>>depend on how many objects are accessed within a period; the OSD sizes
>>them based on the previous access count and the false positive
>>probability that you give it
Ok, thanks Greg.
Another question: the doc describes how
Success! You nailed it. Thanks, Yehuda.
I can successfully use the second subuser.
Given this success, I also tried the following:
$ rados -p .users.swift get '' tmp
$ rados -p .users.swift put hive_cache:swift tmp
$ rados -p .users.swift rm ''
$ rados -p .users.swift ls
hive_cache:swift2
hive_c
The code checks the pg with the oldest scrub_stamp/deep_scrub_stamp to see
whether the osd_scrub_min_interval/osd_deep_scrub_interval time has elapsed.
So the output you are showing with the very old scrub stamps shouldn’t happen
under default settings. As soon as deep-scrub is re-enabled, t
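For anyone wanting to check this themselves, a rough sketch; the JSON field names below assume a firefly-era "ceph pg dump" and may differ in other versions:

# List the ten PGs with the oldest deep-scrub stamps (sketch; adjust field names per version)
ceph pg dump --format json 2>/dev/null | python -c '
import json, sys
pgs = json.load(sys.stdin)["pg_stats"]
for p in sorted(pgs, key=lambda x: x["last_deep_scrub_stamp"])[:10]:
    print("%s %s" % (p["pgid"], p["last_deep_scrub_stamp"]))
'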
On Mon, Jun 9, 2014 at 3:49 PM, Liu Baogang wrote:
> Dear Sir,
>
> In our test, we use ceph firefly to build a cluster. On a node with kernel
> 3.10.xx, if using the kernel client to mount cephfs, sometimes not all the
> files can be listed when using the 'ls' command. If using ceph-fuse 0.80.x, so far
> it
This http://ceph.com/docs/master/start/os-recommendations/ appears to be a
bit out of date (it only goes to Ceph 0.72). Presumably Ubuntu Trusty should
now be on that list in some form, e.g., for Firefly?
--
Cheers,
~Blairo
(resending also to list)
Right. So basically the swift subuser wasn't created correctly. I created
issue #8587. Can you try creating a second subuser and see if it's created
correctly the second time?
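For reference, creating the swift subuser and its key might look roughly like this; the uid "johndoe" and subuser name are hypothetical:

# Sketch -- uid/subuser names are hypothetical
radosgw-admin subuser create --uid=johndoe --subuser=johndoe:swift2 --access=full
radosgw-admin key create --subuser=johndoe:swift2 --key-type=swift --gen-secret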
On Wed, Jun 11, 2014 at 2:03 PM, David Curtiss
wrote:
> Hmm Using that method, the subuser ob
I'll update the docs to incorporate the term "incomplete." I believe this
is due to an inability to complete backfilling. Your cluster is nearly
full. You indicated that you installed Ceph. Did you store data in the
cluster? Your usage indicates that you have used 111GB of 125GB. So you
only have a
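A couple of quick ways to confirm how full the cluster and individual OSDs are; just a sketch:

ceph df                              # cluster-wide and per-pool usage
ceph health detail | grep -i full    # which OSDs are near/over the full thresholds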
New logs, with debug ms = 1, debug osd = 20.
In this timeline, I started the deep-scrub at 11:04:00; Ceph started
deep-scrubbing at 11:04:03.
osd.11 started consuming 100% CPU around 11:07. Same for osd.0. CPU
usage is all user; iowait is < 0.10%. There is more variance in the
CPU usage now, ran
Hi Dimitri,
It was already resolved; the moderator took a long time to approve my email to
get posted to the mailing list.
Thanks for your solution.
- Karan -
On 12 Jun 2014, at 00:02, Dimitri Maziuk wrote:
> On 06/09/2014 03:08 PM, Karan Singh wrote:
>
>>1. When installing Ceph using package
Hi Eric,
increase the number of PGs in your pool with
Step 1: ceph osd pool set {pool-name} pg_num {value}
Step 2: ceph osd pool set {pool-name} pgp_num {value}
You can check the number of PGs in your pool with ceph osd dump | grep ^pool
See documentation: http://ceph.com/docs/master/rados/operations/pools/
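A worked example with a hypothetical pool named "data", raising it to 512 PGs (the pool name and PG count are illustrative):

ceph osd pool set data pg_num 512
ceph osd pool set data pgp_num 512
ceph osd dump | grep ^pool   # verify the new pg_num/pgp_num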
JC
On Jun 11, 20
On 06/09/2014 03:08 PM, Karan Singh wrote:
> 1. When installing Ceph using the package manager and ceph repositories, the
> package manager, i.e. YUM, does not respect the ceph.repo file and takes the
> ceph package directly from EPEL.
Option 1: install yum-plugin-priorities and add priority = X to
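A sketch of what that might look like in /etc/yum.repos.d/ceph.repo; the baseurl and gpgkey below are illustrative and should match your release and distro:

# /etc/yum.repos.d/ceph.repo -- sketch, adjust baseurl for your release/distro
[ceph]
name=Ceph packages
baseurl=http://ceph.com/rpm-firefly/el6/x86_64/
enabled=1
gpgcheck=1
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
priority=1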
I installed Ceph, and when I run ceph health it gives me the following output:
HEALTH_WARN 384 pgs incomplete; 384 pgs stuck inactive; 384 pgs
stuck unclean; 2 near full osd(s)
This is the output of a single pg when I use ceph health detail
pg 2.2 is incomplete, acting [0] (reduci
Dear Sir,
In our test, we use Ceph Firefly to build a cluster. On a node with kernel
3.10.xx, if using the kernel client to mount cephfs, sometimes not all the files
can be listed when using the 'ls' command. If using ceph-fuse 0.80.x, so far it
seems to work well.
I guess that the kernel 3.10.xx is too
Hi,
I am seeing the following warning on one of my test clusters:
# ceph health detail
HEALTH_WARN pool Ray has too few pgs
pool Ray objects per pg (24) is more than 12 times cluster average (2)
This is a reported issue and is set to "Won't Fix" at:
http://tracker.ceph.com/issues/8103
My test
On Wed, Jun 11, 2014 at 12:44 PM, Alexandre DERUMIER
wrote:
> Hi,
>
> I'm reading tiering doc here
> http://ceph.com/docs/firefly/dev/cache-pool/
>
> "
> The hit_set_count and hit_set_period define how much time each HitSet should
> cover, and how many such HitSets to store. Binning accesses over
Hi,
I'm reading tiering doc here
http://ceph.com/docs/firefly/dev/cache-pool/
"
The hit_set_count and hit_set_period define how much time each HitSet should
cover, and how many such HitSets to store. Binning accesses over time allows
Ceph to independently determine whether an object was accesse
We need to move the Ceph cluster to a different network segment for
interconnectivity between mon and osd; does anybody have the procedure for how
that can be done? Note that the host name references will change, so the osd
host originally referenced as cephnode1 will in the new segment be
cephnod
We have not experienced any downsides to this approach, performance- or
stability-wise. If you prefer, you can experiment with the values, but I see no
real advantage in doing so.
Regards,
Maciej Bonin
Systems Engineer | M247 Limited
M247.com Connected with our Customers
Contact us today to discus
On Wed, Jun 11, 2014 at 5:18 AM, Davide Fanciola wrote:
> Hi,
>
> we have a similar setup where we have SSD and HDD in the same hosts.
> Our very basic crushmap is configured as follows:
>
> # ceph osd tree
> # id weight type name up/down reweight
> -6 3 root ssd
> 3 1 osd.3 up 1
> 4 1 osd.4 up 1
On Wed, Jun 11, 2014 at 4:56 AM, wrote:
> Hi All,
>
>
>
> I have a four node ceph cluster. The metadata service is showing as degraded
> in health. How do I remove the mds service from ceph?
Unfortunately you can't remove it entirely right now, but if you
create a new filesystem using the "newfs"
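For reference, a sketch of that command; the pool IDs below (1 for metadata, 0 for data) are hypothetical, and this discards the existing filesystem metadata:

# Sketch -- pool IDs are hypothetical; this wipes the existing MDS map
ceph mds newfs 1 0 --yes-i-really-mean-it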
Hello, I am writing about an issue I noticed with Ceph 0.72.2 on
Ubuntu 13.10 and with 0.80.1 on Ubuntu 14.04.
Here is what I do:
1) I create an rbd image of 4 TB and format it to ext4 or xfs. The image
has --order 25 and --image-format 2.
2) I create a snapshot of that rbd image.
3) I
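Roughly, steps 1-2 above as commands; the image name is illustrative and the default pool is assumed:

# Sketch -- names are illustrative; size is in MB (4 TB), --order 25 = 32 MB objects
rbd create testimg --size 4194304 --order 25 --image-format 2
rbd map testimg                 # kernel client; device typically appears as /dev/rbd0
mkfs.xfs /dev/rbd0
rbd snap create testimg@snap1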
On Wednesday, June 11, 2014, Florent B wrote:
> Hi every one,
>
> Sometimes my MDS crashes... sometimes after a few hours, sometimes after
> a few days.
>
> I know I could enable debugging and so on to get more information. But
> if it crashes after a few days, it generates gigabytes of debugging
Thanks Bonin. Do you have 48 OSDs in total, or are there 48 OSDs on each storage
node? Do you think "kernel.pid_max = 4194303" is reasonable, since it increases
a lot from the default OS setting?
Wei Cao (Buddy)
-Original Message-
From: Maciej Bonin [mailto:maciej.bo...@m247.com]
Sent:
On Wed, Jun 11, 2014 at 9:29 AM, Markus Goldberg
wrote:
> Hi,
> ceph-deploy-1.5.3 can cause trouble if a reboot is done between preparation
> and activation of an osd:
>
> The osd-disk was /dev/sdb at this time, osd itself should go to sdb1,
> formatted to cleared, journal should go to sdb2, forma
Hello,
The values we use are as follows:
# sysctl -p
net.ipv4.ip_local_port_range = 1024 65535
net.core.netdev_max_backlog = 3
net.core.somaxconn = 16384
net.ipv4.tcp_max_syn_backlog = 252144
net.ipv4.tcp_max_tw_buckets = 36
net.ipv4.tcp_fin_timeout = 3
net.ipv4.tcp_max_orphans = 262144
ne
Hi, what is the recommended value for /proc/sys/kernel/pid_max? Is 32768 enough
for a Ceph cluster with 4 nodes (40 1T OSDs on each node)? My ceph nodes have
already run into a "create thread fail" problem in the osd log, whose root cause
is pid_max.
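Applying the larger value mentioned elsewhere in this thread might look like this (sketch; persisting it via sysctl.conf is assumed):

# Sketch -- value taken from the earlier reply in this thread
echo 'kernel.pid_max = 4194303' >> /etc/sysctl.conf
sysctl -p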
Wei Cao (Buddy)
Hi,
ceph-deploy-1.5.3 can cause trouble if a reboot is done between
preparation and activation of an osd:
The osd-disk was /dev/sdb at this time, osd itself should go to sdb1,
formatted to cleared, journal should go to sdb2, formatted to btrfs
I prepared an osd:
root@bd-a:/etc/ceph# ceph-dep
Hi Craig,
It's hard to say what is going wrong with that level of logs. Can you
reproduce with debug ms = 1 and debug osd = 20?
There were a few things fixed in scrub between emperor and firefly. Are
you planning on upgrading soon?
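One way to bump those debug levels at runtime without restarting the daemons, as a sketch; the osd IDs are the ones mentioned earlier in the thread:

# Sketch -- raise logging on the two busy OSDs at runtime
ceph tell osd.0 injectargs '--debug-ms 1 --debug-osd 20'
ceph tell osd.11 injectargs '--debug-ms 1 --debug-osd 20'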
sage
On Tue, 10 Jun 2014, Craig Lewis wrote:
> Every time
Hi,
we have a similar setup where we have SSD and HDD in the same hosts.
Our very basic crushmap is configured as follows:
# ceph osd tree
# id weight type name up/down reweight
-6 3 root ssd
3 1 osd.3 up 1
4 1 osd.4 up 1
5 1 osd.5 up 1
-5 3 root platters
0 1 osd.0 up 1
1 1 osd.1 up 1
2 1 osd.2 u
Hi All,
I have a four node ceph cluster. The metadata service is showing as degraded in
health. How do I remove the mds service from ceph?
=-
root@cephadmin:/home/oss# ceph -s
cluster 9acd33d7-759b-45f4-b48f-a4682fd6c674
health HEALTH_WARN mds cluster is degraded
monmap
On 06/11/2014 01:23 PM, yalla.gnan.ku...@accenture.com wrote:
I have a four node ceph storage cluster. Ceph -s is showing one monitor
as down. How do I start it, and on which server do I have to start it?
It's cephnode3 which is down. Log in and do:
$ start ceph-mon-all
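Or, to start just that one monitor rather than all of them (sketch; assumes Ubuntu/upstart, with the mon id matching the hostname):

$ start ceph-mon id=cephnode3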
---
roo
On 06/11/2014 12:51 PM, Florent B wrote:
Hi,
I would like to know if Ceph uses the CRUSH algorithm when a read operation
occurs, for example to select the nearest OSD storing the requested object.
CRUSH is used when reading since it's THE algorithm inside Ceph to
determine data placement.
CRUSH doe
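For reference, you can see the placement CRUSH computes for any given object; a sketch with hypothetical pool/object names:

# Sketch -- pool and object names are hypothetical
ceph osd map rbd myobject
# prints the PG and the up/acting OSD set CRUSH chose for that object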
I have a four node ceph storage cluster. Ceph -s is showing one monitor as
down. How do I start it, and on which server do I have to start it?
---
root@cephadmin:/home/oss# ceph -w
cluster 9acd33d7-759b-45f4-b48f-a4682fd6c674
health HEALTH_WARN 1 mons down, quorum 0,1 cephno
On 06/11/2014 08:20 AM, Sebastien Han wrote:
Thanks for your answers.
I have had that for an apt-cache for more than a year now and never had
an issue. Of course, your question is not about having a krbd device
backing an OSD of the same cluster ;-)
After I created 1 mon and prepared 2 osds, I checked and found that the fsid of
the three are the same, but when I ran *ceph-deploy osd activate
node2:/var/local/osd0 node3:/var/local/osd1*, the error output was as follows:
node2][WARNIN] ceph-disk: Error: No cluster conf found in /etc/ceph with fs
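One diagnostic worth running is to compare the fsid in /etc/ceph/ceph.conf on node2/node3 with the one recorded in the prepared OSD data directory; a sketch:

# Sketch -- compare the cluster fsid in ceph.conf with the one in the OSD data dir
grep fsid /etc/ceph/ceph.conf
cat /var/local/osd0/ceph_fsid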
On 10 Jun 2014, at 11:59, Dan Van Der Ster wrote:
> One idea I had was to check the behaviour under different disk io schedulers,
> trying to exploit thread io priorities with cfq. So I have a question for the
> developers about using ionice or ioprio_set to lower the IO priorities of the
> threa
Hi Greg,
This tracker issue is relevant: http://tracker.ceph.com/issues/7288
Cheers, Dan
On 11 Jun 2014, at 00:30, Gregory Farnum wrote:
> Hey Mike, has your manual scheduling resolved this? I think I saw
> another similar-sounding report, so a feature request to improve scrub
> scheduling would