Thanks for your info.
I would like to know how large the I/O you mentioned was, and what kind of
application you used for benchmarking?
Sincerely,
Kinjo
On Tue, Jun 16, 2015 at 12:04 AM, Barclay Jameson
wrote:
> I am currently implementing Ceph into our HPC environment to handle
> SAS temp workspace.
>
Can you tell us when it was fixed, so that we can see the fix on GitHub?
Kinjo
On Sat, Jul 4, 2015 at 8:08 PM, Dan van der Ster wrote:
> Hi,
>
> You should upgrade to the latest firefly release. You're probably suffering
> from the known issue with snapshot trimming.
>
> Cheers, Dan
>
> On Jul 4, 20
Thanks for your reply!! Will there be any updates?
Kinjo
On Sat, Jul 4, 2015 at 9:23 PM, Loic Dachary wrote:
>
>
> On 04/07/2015 13:41, Shinobu Kinjo wrote:
> > Hi, just asking: what was the initial conversation with Ken? I'm just
> confused because this list is, y
> On Jul 4, 2015 1:37 PM, "Shinobu Kinjo" wrote:
>
>> Can you tell us when it was fixed so that we see this fix on github?
>>
>> Kinjo
>>
>> On Sat, Jul 4, 2015 at 8:08 PM, Dan van der Ster
>> wrote:
>>
>>> Hi,
>>>
>>>
That's good!
So was the root cause that the OSD was full? What's your thought
on that?
Was there any reason to delete any files?
Kinjo
On Sun, Jul 5, 2015 at 6:51 PM, Jacek Jarosiewicz <
jjarosiew...@supermedia.pl> wrote:
> ok, I got it working...
>
> first i manually deleted some fi
o about this problem..
>
> I was thinking that maybe - if I upped the near full and full ratio - the
> warning would go away and maybe I would be able to flush the cache pool.
> But that's only a solution for the cache pool - I'd rather not touch the
> normal data on the cold s
Why are you sticking to 32-bit?
Kinjo
On Mon, Jul 13, 2015 at 7:35 PM, Daleep Bais wrote:
> Hi,
>
> I am building a ceph cluster on Arm. Is there any limitation for 32 bit in
> regard to number of nodes, storage capacity etc?
>
> Please suggest..
>
> Thanks.
>
> Daleep Singh Bais
>
> __
Thanks for your quick action!!
- Shinobu
On Fri, Jul 31, 2015 at 11:01 PM, Ilya Dryomov wrote:
> On Fri, Jul 31, 2015 at 2:21 PM, pixelfairy wrote:
> > according to http://ceph.com/docs/master/rbd/rbd-snapshot/#layering,
> > you have two choices,
> >
> > format 1: you can mount with rbd kerne
Hello,
Ceph is not the problem. The problem is that btrfs is still not production-ready.
There are many testing lines in its source code.
But it's really up to you which filesystem you use.
Each filesystem has unique features, so you have to consider
them to get the best performance out of the one you choose.
Meaning that th
Hello,
What are your performance or general requirements?
Because, as you might know, reliability, performance and everything else
are a trade-off.
Sincerely,
Shinobu
On Sat, Aug 8, 2015 at 9:20 PM, Stijn De Weirdt
wrote:
> hi jan,
>
> The answer to this, as well as life, universe and eve
> filestore_fd_cache_random = true
not true
Shinobu
On Fri, Aug 21, 2015 at 10:20 PM, Jan Schermer wrote:
> Thanks for the config,
> few comments inline:, not really related to the issue
>
> > On 21 Aug 2015, at 15:12, J-P Methot wrote:
> >
> > Hi,
> >
> > First of all, we are sure that the r
> IIRC, it only triggers the move (merge or split) when that folder is hit by a
> request, so most likely it happens gradually.
Do you know what causes this?
I would like you to be more specific about "gradually".
Shinobu
- Original Message -
From: "GuangYang"
To: "Ben Hines" , "Nick Fisk"
Cc: "ce
Very nice.
You're my hero!
Shinobu
- Original Message -
From: "GuangYang"
To: "Shinobu Kinjo"
Cc: "Ben Hines" , "Nick Fisk" , "ceph-users"
Sent: Saturday, September 5, 2015 9:40:06 AM
Subject
Since jemalloc tries to create an arena per thread to avoid lock contention,
regardless of how busy the system is, memory usage keeps increasing.
I think we probably need to think a little more carefully about how to make use of:
pthread_create()
tcache
redzone
Etc...
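For what it's worth, a rough sketch of tuning some of this at runtime via
jemalloc's MALLOC_CONF environment variable (the values below are only
illustrative, not a recommendation):
# fewer arenas, no per-thread cache -- trades some speed for lower memory usage
MALLOC_CONF="narenas:4,tcache:false" ./your_application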
And I a
The best answer is:
http://ceph.com/docs/master/rados/configuration/network-config-ref/
That should let you understand how each component communicates with
the others.
And this would be more helpful:
https://ceph.com/docs/v0.79/rados/operations/auth-intro/
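As a minimal ceph.conf sketch of the separation described there (the subnets
are placeholders for your own ranges):
[global]
public network = 192.168.0.0/24
cluster network = 10.0.0.0/24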
Shinobu
How heavy was the network traffic?
Have you tried capturing the traffic on the cluster and public networks
to see where such a large amount of traffic came from?
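Something like this, for example (the interface name is just a placeholder;
Ceph daemons typically listen in the 6800-7300 range):
tcpdump -i eth1 -w /tmp/ceph-cluster.pcap tcp portrange 6800-7300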
Shinobu
- Original Message -
From: "Jan Schermer"
To: "Mariusz Gronczewski"
Cc: ceph-users@lists.ceph.com
Sent: Monday, September 7,
Are you using LACP on the 10G interfaces?
- Original Message -
From: "Mariusz Gronczewski"
To: "Shinobu Kinjo"
Cc: "Jan Schermer" , ceph-users@lists.ceph.com
Sent: Monday, September 7, 2015 9:58:33 PM
Subject: Re: [ceph-users] Huge memory usage spike in OS
> master/slave
Meaning that you are using bonding?
- Original Message -
From: "Mariusz Gronczewski"
To: "Shinobu Kinjo"
Cc: "Jan Schermer" , ceph-users@lists.ceph.com
Sent: Monday, September 7, 2015 10:05:23 PM
Subject: Re: [ceph-users] Huge memory u
O.k., so that's the protocol, 802.3ad.
- Original Message -
From: "Mariusz Gronczewski"
To: "Shinobu Kinjo"
Cc: "Jan Schermer" , ceph-users@lists.ceph.com
Sent: Monday, September 7, 2015 10:19:23 PM
Subject: Re: [ceph-users] Huge memory usage spike in OSD
I have a bunch of questions about Lustre performance,
which should be discussed on the lustre-discuss list.
How many OSTs are you using now?
How did you configure LNET?
How are you using the extra RAM as a read cache?
Shinobu
- Original Message -
From: "Vickey Singh"
To: ceph-users@lists.cep
ment?
Shinobu
- Original Message -
From: "Mariusz Gronczewski"
To: "池信泽"
Cc: "Shinobu Kinjo" , ceph-users@lists.ceph.com
Sent: Tuesday, September 8, 2015 7:09:32 PM
Subject: Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant
For those interested:
Bu
That would be my life saver.
Thanks a lot!
> you simply need to compile qemu with --enable-jemalloc, to enable jemalloc
> support.
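In other words, something along these lines when building from source (the
make invocation is just the usual one, nothing qemu-specific):
./configure --enable-jemalloc
make -j$(nproc)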
- Original Message -
From: "Alexandre DERUMIER"
To: "ceph-users" , "ceph-devel"
Sent: Tuesday, September 8, 2015 7:58:15 PM
Subject: [ceph-users] qemu jem
> >>
> >> The patch https://github.com/ceph/ceph/pull/5656
> >> https://github.com/ceph/ceph/pull/5451 merged into master would fix it, and
> >> it would be backport.
> >>
> >> I think ceph v0.93 or newer version maybe hit this bug
That's good news.
Shinobu
- Original Message -
From: "Sage Weil"
To: "Andras Pataki"
Cc: ceph-users@lists.ceph.com, ceph-de...@vger.kernel.org
Sent: Wednesday, September 9, 2015 3:07:29 AM
Subject: Re: [ceph-users] Inconsistent PGs that ceph pg repair does not fix
On Tue, 8 Sep 2015,
Have you ever tried this?
http://ceph.com/docs/master/rados/troubleshooting/memory-profiling/
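The steps described there boil down to something like the following
(osd.0 is only an example target):
ceph tell osd.0 heap start_profiler
ceph tell osd.0 heap dump
ceph tell osd.0 heap stats
ceph tell osd.0 heap stop_profiler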
Shinobu
- Original Message -
From: "Chad William Seys"
To: "Mariusz Gronczewski" , "Shinobu Kinjo"
, ceph-users@lists.ceph.com
Sent: Wednesday, September 9, 2015 6:14:
I emailed you guys about using jemalloc.
There might be a workaround to use it much more effectively.
I hope some of you saw my email...
Shinobu
- Original Message -
From: "Mark Nelson"
To: "Alexandre DERUMIER" , "ceph-devel"
, "ceph-users"
Sent: Wednesday, September 9, 2015 8:52:35 AM
Sub
Did you try to identify which processes were accessing the filesystem, using
fuser or lsof, and then kill them?
If not, you should do that first.
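For example (the mount point is just a placeholder for yours):
fuser -vm /mnt/cephfs      # list processes using the mount
lsof /mnt/cephfs           # same, with more detail per open file
fuser -km /mnt/cephfs      # kill them, if you are sure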
Shinobu
- Original Message -
From: "Goncalo Borges"
To: ski...@redhat.com
Sent: Wednesday, September 9, 2015 5:04:23 PM
Subject: Re: [ceph-u
Anyhow, this page should help you:
http://ceph.com/docs/master/cephfs/disaster-recovery/
Shinobu
- Original Message -
From: "Shinobu Kinjo"
To: "Goncalo Borges"
Cc: "ceph-users"
Sent: Wednesday, September 9, 2015 5:28:38 PM
Subject: Re: [ceph-user
How many disks does each OSD node have?
How about the networking layer?
There are several factors that can make your cluster much stronger.
You probably need to take a look at other discussions on this mailing list.
There has been a bunch of discussion about performance.
Shinobu
- Original Message
Are you also using that HDD for storing journal data,
or are you using an SSD for that purpose?
Shinobu
- Original Message -
From: "Daleep Bais"
To: "Shinobu Kinjo"
Cc: "Ceph-User"
Sent: Wednesday, September 9, 2015 5:59:33 PM
Subject: Re: [ceph-users] P
These may be more helpful for performance analysis -;
http://ceph.com/docs/master/start/hardware-recommendations/
http://www.sebastien-han.fr/blog/2013/10/03/quick-analysis-of-the-ceph-io-layer/
Shinobu
- Original Message -
From: "Shinobu Kinjo"
To: "D
>>
>> # cephfs-data-scan scan_extents cephfs_dt
>> # cephfs-data-scan scan_inodes cephfs_dt
>>
>> # cephfs-data-scan scan_extents --force-pool cephfs_mt
>> (doesn't seem to work)
>>
>> e./ After running the cephfs too
That's a good point actually.
It'll probably save our lives -;
Shinobu
- Original Message -
From: "Ben Hines"
To: "Mark Kirkwood"
Cc: "ceph-users"
Sent: Thursday, September 10, 2015 8:23:26 AM
Subject: Re: [ceph-users] purpose of different default pools created by radosgw
instance
The Ceph d
Hello,
I'm seeing 859 parameters in the output of:
$ ./ceph --show-config | wc -l
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
859
In:
$ ./ceph --version
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
ceph version 9.0.2-1454-
---
From: "Gregory Farnum"
To: "Shinobu Kinjo"
Cc: "ceph-users" , "ceph-devel"
Sent: Thursday, September 10, 2015 5:57:52 PM
Subject: Re: [ceph-users] Ceph.conf
On Thu, Sep 10, 2015 at 9:44 AM, Shinobu Kinjo wrote:
> Hello,
>
> I'm seeing 859 param
deration.
But you can ignore me anyhow or point out anything to me -;
Shinobu
- Original Message -
From: "Abhishek L"
To: "Shinobu Kinjo"
Cc: "Gregory Farnum" , "ceph-users"
, "ceph-devel"
Sent: Thursday, September 10, 2015 6:35:31 PM
Subje
>> Finally the questions:
>>
>> a./ Under a situation as the one describe above, how can we safely terminate
>> cephfs in the clients? I have had situations where umount simply hangs and
>> there is no real way to unblock the situation unless I reboot the client. If
>> we have hundreds of clients,
>> c./ After recovering the cluster, I though I was in a cephfs situation where
>> I had
>> c.1 files with holes (because of lost PGs and objects in the data pool)
>> c.2 files without metadata (because of lost PGs and objects in the
>> metadata pool)
>
> What does "files without metadata"
If you really want to improve the performance of a *distributed* filesystem
like Ceph, Lustre or GPFS,
you must start from the networking side of the Linux kernel.
L5: Socket
L4: TCP
L3: IP
L2: Queuing
In this discussion, the problem could be in L2, i.e. queuing at the descriptor level.
We may have to take a closer l
And tcpdump and tc would give you more concurrency information to solve
the problem.
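For instance (eth0 is just a placeholder for the interface carrying Ceph traffic):
tc -s qdisc show dev eth0        # per-qdisc queue and drop statistics
ethtool -S eth0 | grep -i drop   # NIC-level drop counters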
Shinobu
- Original Message -
From: "Shinobu Kinjo"
To: "Vickey Singh"
Cc: ceph-users@lists.ceph.com
Sent: Friday, September 11, 2015 10:32:27 PM
Subject: Re: [ceph-users] Ceph clu
There should be some complaints in /var/log/messages.
Can you attach it?
Shinobu
- Original Message -
From: "谷枫"
To: "ceph-users"
Sent: Saturday, September 12, 2015 1:30:49 PM
Subject: [ceph-users] ceph-fuse auto down
Hi,all
My cephfs cluster deploy on three nodes with Ceph Hammer 0.94.3
Ah, you are using Ubuntu, sorry for that.
How about:
/var/log/dmesg
I believe you can attach a file rather than pasting.
Pasting a bunch of logs would not be good for me -;
And when did you notice that cephfs was hanging?
Shinobu
- Original Message -
From: "谷枫"
To: "Shinobu Ki
> In your procedure, the umount problems have nothing to do with
> corruption. It's (sometimes) hanging because the MDS is offline. If
How did you notice that the MDS was offline?
Was it just because the Ceph client could not unmount the filesystem, or something else?
I would like to see the logs of the MDS and OSDs. B
Thank you for the log archives.
I went to the dentist -;
Please do not forget to CC ceph-users from now on, because there are a bunch of
really **awesome** guys there;
Can you re-attach the log files so that they can see them?
Shinobu
- Original Message -
From: "谷枫"
To: "Shinobu Kinjo&q
In _usr_bin_ceph-fuse.0.crash.client2.tar
What I'm seeing now is:
3 Date: Sat Sep 12 06:37:47 2015
...
6 ExecutableTimestamp: 1440614242
...
7 ProcCmdline: ceph-fuse -k /etc/ceph.new/ceph.client.admin.keyring -m
10.3.1.11,10.3.1.12,10.3.1.13 /grdata
...
30 7f32de7fe000-7f32deffe000 rw
From: "谷枫"
To: "Shinobu Kinjo"
Cc: "ceph-users"
Sent: Sunday, September 13, 2015 10:51:35 AM
Subject: Re: [ceph-users] ceph-fuse auto down
sorry Shinobu,
I don't understand what the output you pasted means.
Multiple ceph-fuse instances crashed just now today.
The ceph-fuse complete
can.
Shinobu
- Original Message -
From: "谷枫"
To: "Shinobu Kinjo"
Cc: "ceph-users"
Sent: Sunday, September 13, 2015 11:30:57 AM
Subject: Re: [ceph-users] ceph-fuse auto down
Yes, when a ceph-fuse instance crashes, the mount point is gone and can't be
remo
> Looked a bit more into this, swift apis seem to support the use
> of an admin tenant, user & token for validating the bearer token,
> similar to other openstack service which use a service tenant
> credentials for authenticating.
Yes, it's just working as middleware under Keystone.
> Though it
When did you face this issue?
From the beginning or...?
Shinobu
- Original Message -
From: "谷枫"
To: "Shinobu Kinjo"
Cc: "ceph-users"
Sent: Sunday, September 13, 2015 12:06:25 PM
Subject: Re: [ceph-users] ceph-fuse auto down
All clients use same ceph-fuse
Did you make that script, or is it there by default?
Shinobu
- Original Message -
From: "谷枫"
To: "Shinobu Kinjo"
Cc: "ceph-users"
Sent: Monday, September 14, 2015 10:48:16 AM
Subject: Re: [ceph-users] ceph-fuse auto down
Hi,Shinobu
I found the logrotate s
e to monitor the client to
see if it causes the problem or not?
And can you make sure the module is there now?
Shinobu
- Original Message -
From: "谷枫"
To: "Shinobu Kinjo"
Cc: "ceph-users"
Sent: Monday, September 14, 2015 10:57:31 AM
Subject: Re: [ceph-users] ceph-fuse
Yes, that is exactly what I'm going to do.
Thanks for your follow-up.
Shinobu
- Original Message -
From: "Zheng Yan"
To: "谷枫"
Cc: "Shinobu Kinjo" , "ceph-users"
Sent: Monday, September 14, 2015 12:19:44 PM
Subject: Re: [ceph-users] cep
Before dumping core:
ulimit -c unlimited
After the core is dumped:
gdb <binary> <core file>   # e.g. the ceph-fuse binary and its core
# Just do a backtrace
(gdb) bt
# There should be some signal information there.
Then give us the full output.
Shinobu
- Original Message -
From: "谷枫"
To: "Shinobu Kinjo"
Cc: "Zheng Yan" , "ceph-users
ssage -
From: "Goncalo Borges"
To: "Shinobu Kinjo" , "John Spray"
Cc: ceph-users@lists.ceph.com
Sent: Tuesday, September 15, 2015 12:39:57 PM
Subject: Re: [ceph-users] Question on cephfs recovery tools
Hi Shinobu
>>> c./ After recovering the cluster, I th
I do not think it's best practice to just increase that number at the moment;
that would be a bit careless.
We might need to do that in the end.
But what we should do first is check the current actual AIO usage using:
watch -dc cat /proc/sys/fs/aio-nr
then increase it, if it's necessa
roc/sys/fs/aio-max-nr
Meaning that, since you set 5 for aio-max-nr, you ended up with
a lack of resources.
If you have any questions, concerns or anything else, just let
us know.
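If raising the limit does turn out to be necessary, a sketch (the value is only
an example, not a recommendation):
sysctl fs.aio-nr fs.aio-max-nr    # current usage vs. limit
sysctl -w fs.aio-max-nr=1048576   # raise the limit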
Shinobu
- Original Message -
From: "Peter Sabaini"
To: "Shinobu Kinjo"
Cc:
What's the error message you saw when you tried?
Shinobu
- Original Message -
From: "Abhishek L"
To: "Robert Duncan"
Cc: ceph-us...@ceph.com
Sent: Friday, September 18, 2015 12:29:20 PM
Subject: Re: [ceph-users] radosgw and keystone version 3 domains
On Fri, Sep 18, 2015 at 4:38 AM, Robert
> when you enable the "exclusive-lock" feature, only one RBD client is able to
> modify the image while the lock is held.
means
rbd_lock_exclusive
> However, that won't stop other RBD clients from *requesting* that maintenance
> operations be performed on the image (e.g. snapshot, resize).
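For reference, a rough sketch of checking and enabling the feature (pool and
image names are placeholders; on older releases the feature can only be set at
image create time rather than with "rbd feature enable"):
rbd info <pool>/<image>
rbd feature enable <pool>/<image> exclusive-lock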
Thanks for the info.
Shinobu
- Original Message -
From: "Luis Periquito"
To: "Shinobu Kinjo"
Cc: "Abhishek L" , "Robert Duncan"
, "ceph-users"
Sent: Friday, September 25, 2015 8:52:48 PM
Subject: Re: [ceph-users] radosgw and keystone
> and need to use openstack client.
Yes, you have to for v3 anyway.
Shinobu
- Original Message -
From: "Robert Duncan"
To: "Luis Periquito"
Cc: "Shinobu Kinjo" , "Abhishek L"
, "ceph-users"
Sent: Friday, September 25, 2015 11:29
If any of you could provide keystone.log to me,
it would be very helpful,
along with the output of: keystone --version
Shinobu
- Original Message -
From: "Shinobu Kinjo"
To: "Robert Duncan"
Cc: "Luis Periquito" , "Abhishek L"
, "ceph-users"
Sent: Sa
Hello,
Thanks!!
Anyhow, have you ever tried to access a Swift object using v3?
Shinobu
- Original Message -
From: "Robert Duncan"
To: "Shinobu Kinjo" , ceph-users@lists.ceph.com
Sent: Tuesday, September 29, 2015 8:48:57 PM
Subject: Re: [ceph-users] radosgw and keysto
FYI:
http://docs.ceph.com/docs/giant/install/get-packages/
Shinobu
- Original Message -
From: "MinhTien MinhTien"
To: ceph-users@lists.ceph.com
Sent: Friday, October 2, 2015 11:01:14 AM
Subject: [ceph-users] Can not download from
http://ceph.com/packages/ceph-extras/rpm/centos6.3/
Can you run the same test several times? Not one, two or three times, but
several more.
And check in more detail: for instance, descriptors, the networking statistics
served in /sys/class/net/<interface>/statistics/* and other things.
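For example (eth0 is just a placeholder for your interface):
for f in rx_dropped tx_dropped rx_errors tx_errors; do
    echo -n "$f: "; cat /sys/class/net/eth0/statistics/$f
done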
If every single result is the same, there would be a problem in some layer,
netw
What kind of applications are you talking about with regard to HPC?
Are you talking about something like NetCDF?
Caching is quite necessary for some computational applications,
but that's not always the case.
It's not quite related to this topic, but I'm really interested in your
thoughts usi
Hello,
Have any of you tried to upgrade a Ceph cluster through the following upgrade
path?
Dumpling -> Firefly -> Hammer
* Each version being the newest in its series.
After upgrading from Dumpling through Firefly to Hammer, following this:
http://docs.ceph.com/docs/master/install/upgrading-ceph/
I ended up with hitti
Is there anything we have to do,
or is that upgrade path not doable...?
Shinobu
- Original Message -
From: "Gregory Farnum"
To: "Shinobu Kinjo"
Cc: "ceph-users"
Sent: Tuesday, December 8, 2015 10:36:34 AM
Subject: Re: [ceph-users] [Ceph-Users] Upgrade
Thanks!
- Original Message -
From: "Gregory Farnum"
To: "Shinobu Kinjo"
Cc: "ceph-users"
Sent: Tuesday, December 8, 2015 12:06:51 PM
Subject: Re: [ceph-users] [Ceph-Users] Upgrade Path to Hammer
The warning is informational -- it doesn't harm any
On Sat, Apr 30, 2016 at 5:32 PM, Oliver Dzombic wrote:
> Hi,
>
> there is a memory allocation bug, at least in hammer.
>
Could you give us any pointer?
> Mouting an rbd volume as a block device on a ceph node might run you
> into that. Then your mount wont work, and you will have to restart the
What do the following show you?
ceph pg 12.258 list_unfound   // maybe hung...
ceph pg dump_stuck
And enable debug on osd.4:
debug osd = 20
debug filestore = 20
debug ms = 1
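These can also be injected at runtime without restarting the daemon, along
these lines (osd.4 taken from above):
ceph tell osd.4 injectargs '--debug-osd 20 --debug-filestore 20 --debug-ms 1'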
But honestly, my best bet is to upgrade to the latest release. It would make
your life much easier.
- Shinobu
On Thu, May 26, 20
Would you enable debug for osd.177?
debug osd = 20
debug filestore = 20
debug ms = 1
Cheers,
Shinobu
On Thu, Jun 2, 2016 at 2:31 AM, Jeffrey McDonald wrote:
> Hi,
>
> I just performed a minor ceph upgrade on my ubuntu 14.04 cluster from ceph
> version 0.94.6-1trusty to 0.94.7-1trusty. Upo
What does `ceph pg 6.263 query` show you?
On Thu, Jun 30, 2016 at 12:02 PM, Goncalo Borges <
goncalo.bor...@sydney.edu.au> wrote:
> Dear Cephers...
>
> Today our ceph cluster gave us a couple of scrub errors regarding
> inconsistent pgs. We just upgraded from 9.2.0 to 10.2.2 two days ago.
>
> #
;: "Started\/Primary\/Active",
> "enter_time": "2016-06-27 04:57:36.876639",
> "might_have_unfound": [],
> "recovery_progress": {
> "backfill_targets": [],
> "waiting
clients' write operations to RADOS will be
cancelled (maybe `cancelled` is not the appropriate word in this sentence) until the
full epoch, before touching the same object,
since clients must have the latest OSD map.
Does that make sense?
Anyway, in case I've been missing something, someone will add more.
>
> Do
Reproduce with 'debug mds = 20' and 'debug ms = 20'.
shinobu
On Mon, Jul 4, 2016 at 9:42 PM, Lihang wrote:
> Thank you very much for your advice. The command "ceph mds repaired 0"
> works fine in my cluster; my cluster state became HEALTH_OK and the cephfs
> state became normal as well. but in the
Can you reproduce with debug client = 20?
On Tue, Jul 5, 2016 at 10:16 AM, Goncalo Borges <
goncalo.bor...@sydney.edu.au> wrote:
> Dear All...
>
> We have recently migrated all our ceph infrastructure from 9.2.0 to 10.2.2.
>
> We are currently using ceph-fuse to mount cephfs in a number of client
You may want to change the value of "osd_pool_default_crush_replicated_ruleset".
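For example, in ceph.conf (a sketch; the id has to be a ruleset that actually
exists in your CRUSH map, 0 in the case below):
[global]
osd pool default crush replicated ruleset = 0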
shinobu
On Fri, Jul 15, 2016 at 7:38 AM, Oliver Dzombic
wrote:
> Hi,
>
> wow, figured it out.
>
> If you dont have a ruleset 0 id, you are in trouble.
>
> So the solution is, that you >MUST< have a ruleset id 0.
>
> -
osd_heartbeat_addr must be in the [osd] section.
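Roughly like this (a sketch only; the placeholder has to be an address on the
network you want the heartbeats to use, and per-daemon [osd.N] sections may be
needed if the hosts differ):
[osd]
osd heartbeat addr = <address on the desired network>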
On Thu, Jul 28, 2016 at 4:31 AM, Venkata Manojawa Paritala
wrote:
> Hi,
>
> I have configured the below 2 networks in Ceph.conf.
>
> 1. public network
> 2. cluster_network
>
> Now, the heart beat for the OSDs is happening thru cluster_network. How can
On Sun, Aug 7, 2016 at 6:56 PM, Christian Balzer wrote:
>
> [Reduced to ceph-users, this isn't community related]
>
> Hello,
>
> On Sat, 6 Aug 2016 20:23:41 +0530 Venkata Manojawa Paritala wrote:
>
>> Hi,
>>
>> We have configured single Ceph cluster in a lab with the below
>> specification.
>>
>>
On Mon, Aug 8, 2016 at 8:01 PM, Mykola Dvornik wrote:
> Dear ceph community,
>
> One of the OSDs in my cluster cannot start due to the
>
> ERROR: osd init failed: (28) No space left on device
>
> A while ago it was recommended to manually delete PGs on the OSD to let it
> start.
Who recommended t
:
> @Shinobu
>
> According to
> http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/
>
> "If you cannot start an OSD because it is full, you may delete some data by
> deleting some placement group directories in the full OSD."
>
>
> On 8 Au
It would be much better to explain why, as of today, the object-map feature
is not supported by the kernel client, or to document it.
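For anyone hitting this, a rough sketch of the usual workaround (pool and image
names are placeholders; note that deep-flatten cannot be re-enabled once disabled):
rbd feature disable <pool>/<image> object-map fast-diff deep-flatten
rbd map <pool>/<image>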
On Tue, Aug 15, 2017 at 8:08 PM, Ilya Dryomov wrote:
> On Tue, Aug 15, 2017 at 11:34 AM, moftah moftah wrote:
>> Hi All,
>>
>> I have search everywhere for some sort of t
Just for clarification.
Did you upgrade your cluster from Hammer to Luminous, then hit an assertion?
On Wed, Sep 27, 2017 at 8:15 PM, Richard Hesketh
wrote:
> As the subject says... any ceph fs administrative command I try to run hangs
> forever and kills monitors in the background - sometimes t
Are we going to have the next CDM in an APAC-friendly time slot again?
On Thu, Sep 28, 2017 at 12:08 PM, Leonardo Vaz wrote:
> Hey Cephers,
>
> This is just a friendly reminder that the next Ceph Developer Montly
> meeting is coming up:
>
> http://wiki.ceph.com/Planning
>
> If you have work that y
So the problem you faced has been completely solved?
On Thu, Sep 28, 2017 at 7:51 PM, Richard Hesketh
wrote:
> On 27/09/17 19:35, John Spray wrote:
>> On Wed, Sep 27, 2017 at 1:18 PM, Richard Hesketh
>> wrote:
>>> On 27/09/17 12:32, John Spray wrote:
On Wed, Sep 27, 2017 at 12:15 PM, Richar
On Fri, Oct 13, 2017 at 3:29 PM, Ashley Merrick wrote:
> Hello,
>
>
> Is it possible to limit a cephx user to one image?
>
>
> I have looked and seems it's possible per a pool, but can't find a per image
> option.
What did you look at?
Best reg
Can you open a ticket with the exact version of your Ceph cluster?
http://tracker.ceph.com
Thanks,
On Sun, Dec 10, 2017 at 10:34 PM, Martin Preuss wrote:
> Hi,
>
> I'm new to Ceph. I started a ceph cluster from scratch on DEbian 9,
> consisting of 3 hosts, each host has 3-4 OSDs (using 4TB hdds, cu
On Sat, Nov 19, 2016 at 6:59 AM, Brad Hubbard wrote:
> +ceph-devel
>
> On Fri, Nov 18, 2016 at 8:45 PM, Nick Fisk wrote:
>> Hi All,
>>
>> I want to submit a PR to include fix in this tracker bug, as I have just
>> realised I've been experiencing it.
>>
>> http://tracker.ceph.com/issues/9860
>>
>
On Sat, Dec 10, 2016 at 11:00 PM, Jason Dillaman wrote:
> I should clarify that if the OSD has silently failed (e.g. the TCP
> connection wasn't reset and packets are just silently being dropped /
> not being acked), IO will pause for up to "osd_heartbeat_grace" before
The number is how long an O
On Tue, Dec 13, 2016 at 4:38 PM, JiaJia Zhong
wrote:
> hi cephers:
> we are using ceph hammer 0.94.9, yes, It's not the latest ( jewel),
> with some ssd osds for tiering, cache-mode is set to readproxy,
> everything seems to be as expected,
> but when reading some small files from c
-p ${cache pool} ls
# rados -p ${cache pool} get ${object} /tmp/file
# ls -l /tmp/file
-- Original --
From: "Shinobu Kinjo";
Date: Tue, Dec 13, 2016 06:21 PM
To: "JiaJia Zhong";
Cc: "CEPH list"; "ukernel";
Subject: Re:
Would you give us some outputs?
# getfattr -n ceph.quota.max_bytes /some/dir
and
# ls -l /some/dir
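If the goal is to set the quota, a quick sketch (the size is an arbitrary
example, roughly 100 GB):
# setfattr -n ceph.quota.max_bytes -v 100000000000 /some/dir
# getfattr -n ceph.quota.max_bytes /some/dir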
On Thu, Dec 15, 2016 at 4:41 PM, gjprabu wrote:
>
> Hi Team,
>
> We are using ceph version 10.2.4 (Jewel) and data's are mounted
> with cephfs file system in linux. We are trying to s
ng set-overlay ? we didn't sweep the clients out while setting overlay
>
> -- Original --
> From: "JiaJia Zhong";
> Date: Wed, Dec 14, 2016 11:24 AM
> To: "Shinobu Kinjo";
> Cc: "CEPH list"; "ukernel";
Can you share the exact steps you took to build the cluster?
On Thu, Dec 22, 2016 at 3:39 AM, Aakanksha Pudipeddi
wrote:
> I mean setup a Ceph cluster after compiling from source and make install. I
> usually use the long form to setup the cluster. The mon setup is fine but
> when I create an OSD u
Would you be able to execute ``ceph pg ${PG ID} query`` against that
particular PG?
On Wed, Dec 21, 2016 at 11:44 PM, Andras Pataki
wrote:
> Yes, size = 3, and I have checked that all three replicas are the same zero
> length object on the disk. I think some metadata info is mismatching what
> t
: 2293522445,
> "omap_digest": 4294967295,
> "expected_object_size": 4194304,
> "expected_write_size": 4194304,
> "alloc_hint_flags": 53,
> "watchers": {}
> }
>
> Depending on the output one method for
On Sun, Dec 25, 2016 at 7:33 AM, Brad Hubbard wrote:
> On Sun, Dec 25, 2016 at 3:33 AM, w...@42on.com wrote:
>>
>>
>>> Op 24 dec. 2016 om 17:20 heeft L. Bader het volgende
>>> geschreven:
>>>
>>> Do you have any references on this?
>>>
>>> I searched for something like this quite a lot and did