Re: [ceph-users] anyone using CephFS for HPC?

2015-06-15 Thread Shinobu Kinjo
Thanks for your info. I would like to know how large the I/O you mentioned was, and what kind of application you used for benchmarking. Sincerely, Kinjo On Tue, Jun 16, 2015 at 12:04 AM, Barclay Jameson wrote: > I am currently implementing Ceph into our HPC environment to handle > SAS temp workspace. >

Re: [ceph-users] Slow requests when deleting rbd snapshots

2015-07-04 Thread Shinobu Kinjo
Can you tell us when it was fixed, so that we can see the fix on GitHub? Kinjo On Sat, Jul 4, 2015 at 8:08 PM, Dan van der Ster wrote: > Hi, > > You should upgrade to the latest firefly release. You're probably suffering > from the known issue with snapshot trimming. > > Cheers, Dan > > On Jul 4, 20

Re: [ceph-users] systemd support

2015-07-04 Thread Shinobu Kinjo
Thanks for your reply!! Are there any updates? Kinjo On Sat, Jul 4, 2015 at 9:23 PM, Loic Dachary wrote: > > > On 04/07/2015 13:41, Shinobu Kinjo wrote: > > Hi, just asking you what is the initial conversation with Ken? I'm just > confused because this list is, y

Re: [ceph-users] Slow requests when deleting rbd snapshots

2015-07-04 Thread Shinobu Kinjo
> On Jul 4, 2015 1:37 PM, "Shinobu Kinjo" wrote: > >> Can you tell us when it was fixed, so that we can see the fix on GitHub? >> >> Kinjo >> >> On Sat, Jul 4, 2015 at 8:08 PM, Dan van der Ster >> wrote: >> >>> Hi, >>> >>>

Re: [ceph-users] problem with cache tier

2015-07-05 Thread Shinobu Kinjo
That's good! So was the root cause that the OSD was full? What's your thought on that? Was there any particular reason you had to delete files? Kinjo On Sun, Jul 5, 2015 at 6:51 PM, Jacek Jarosiewicz < jjarosiew...@supermedia.pl> wrote: > ok, I got it working... > > first i manually deleted some fi

Re: [ceph-users] problem with cache tier

2015-07-05 Thread Shinobu Kinjo
o about this problem.. > > I was thinking that maybe - if I upped the near full and full ratio - the > warning would go away and maybe I would be able to flush the cache pool. > But that's only a solution for the cache pool - I'd rather not touch the > normal data on the cold s

Re: [ceph-users] 32 bit limitation for ceph on arm

2015-07-13 Thread Shinobu Kinjo
Why are you sticking with 32-bit? Kinjo On Mon, Jul 13, 2015 at 7:35 PM, Daleep Bais wrote: > Hi, > > I am building a ceph cluster on ARM. Is there any limitation for 32-bit in > regard to number of nodes, storage capacity, etc.? > > Please suggest.. > > Thanks. > > Daleep Singh Bais > > __

Re: [ceph-users] update docs? just mounted a format2 rbd image with client 0.80.8 server 0.87.2

2015-07-31 Thread Shinobu Kinjo
Thanks for your quick action!! - Shinobu On Fri, Jul 31, 2015 at 11:01 PM, Ilya Dryomov wrote: > On Fri, Jul 31, 2015 at 2:21 PM, pixelfairy wrote: > > according to http://ceph.com/docs/master/rbd/rbd-snapshot/#layering, > > you have two choices, > > > > format 1: you can mount with rbd kerne

Re: [ceph-users] btrfs w/ centos 7.1

2015-08-07 Thread Shinobu Kinjo
Hello, Ceph is not the problem. The problem is that btrfs is still not production-ready; there are many testing lines in its source code. But it's really up to you which filesystem you use. Each filesystem has unique features, so you have to weigh them to get the best performance from one of them. Meaning that th

Re: [ceph-users] btrfs w/ centos 7.1

2015-08-08 Thread Shinobu Kinjo
Hello, What are your performance or general requirements? Because, as you might know, reliability, performance, and everything else are a trade-off. Sincerely, Shinobu On Sat, Aug 8, 2015 at 9:20 PM, Stijn De Weirdt wrote: > hi jan, > > The answer to this, as well as life, universe and eve

Re: [ceph-users] Bad performances in recovery

2015-08-21 Thread Shinobu Kinjo
> filestore_fd_cache_random = true not true Shinobu On Fri, Aug 21, 2015 at 10:20 PM, Jan Schermer wrote: > Thanks for the config, > few comments inline:, not really related to the issue > > > On 21 Aug 2015, at 15:12, J-P Methot wrote: > > > > Hi, > > > > First of all, we are sure that the r

Re: [ceph-users] Ceph performance, empty vs part full

2015-09-04 Thread Shinobu Kinjo
> IIRC, it only triggers the move (merge or split) when that folder is hit by a > request, so most likely it happens gradually. Do you know what causes this? I would like to be clearer about what "gradually" means here. Shinobu - Original Message - From: "GuangYang" To: "Ben Hines" , "Nick Fisk" Cc: "ce
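For context, the merge/split being discussed is FileStore's directory splitting and merging, governed by two OSD tunables. A minimal ceph.conf sketch, assuming FileStore-based OSDs (the values shown are the usual defaults, for illustration only):

    [osd]
    # split a PG collection into subdirectories once a folder holds roughly
    # filestore_split_multiple * abs(filestore_merge_threshold) * 16 objects
    filestore split multiple = 2
    # merge subdirectories back when the object count drops below the threshold
    filestore merge threshold = 10

Because the check only runs when a folder is touched by a request, splits and merges trickle in gradually rather than happening all at once.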

Re: [ceph-users] Ceph performance, empty vs part full

2015-09-04 Thread Shinobu Kinjo
Very nice. You're my hero! Shinobu - Original Message - From: "GuangYang" To: "Shinobu Kinjo" Cc: "Ben Hines" , "Nick Fisk" , "ceph-users" Sent: Saturday, September 5, 2015 9:40:06 AM Subject

Re: [ceph-users] RAM usage only very slowly decreases after cluster recovery

2015-09-05 Thread Shinobu Kinjo
Since jemalloc tries to create arenas on different threads to avoid lock contention, no matter how busy the system is, memory usage keeps increasing. I think we probably need to think a little more carefully about how to make use of: pthread_create() tcache redzone Etc... And I a

Re: [ceph-users] Network failure

2015-09-07 Thread Shinobu Kinjo
The best answer is: http://ceph.com/docs/master/rados/configuration/network-config-ref/ That should show you how each component communicates with the others. And this would be of more help: https://ceph.com/docs/v0.79/rados/operations/auth-intro/ Shinobu
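The split that page describes boils down to two ceph.conf settings. A minimal sketch, with placeholder subnets:

    [global]
    # client- and monitor-facing traffic
    public network = 192.168.0.0/24
    # OSD replication, recovery, and backfill traffic
    cluster network = 10.0.0.0/24

OSDs route replication over the cluster network when one is defined; otherwise everything shares the public network.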

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-07 Thread Shinobu Kinjo
How heavy was the network traffic? Have you tried capturing the traffic on the cluster and public networks to see where such a large amount of traffic came from? Shinobu - Original Message - From: "Jan Schermer" To: "Mariusz Gronczewski" Cc: ceph-users@lists.ceph.com Sent: Monday, September 7,

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-07 Thread Shinobu Kinjo
Are you using LACP on the 10G interfaces? - Original Message - From: "Mariusz Gronczewski" To: "Shinobu Kinjo" Cc: "Jan Schermer" , ceph-users@lists.ceph.com Sent: Monday, September 7, 2015 9:58:33 PM Subject: Re: [ceph-users] Huge memory usage spike in OS

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-07 Thread Shinobu Kinjo
> master/slave So that means you are using bonding? - Original Message - From: "Mariusz Gronczewski" To: "Shinobu Kinjo" Cc: "Jan Schermer" , ceph-users@lists.ceph.com Sent: Monday, September 7, 2015 10:05:23 PM Subject: Re: [ceph-users] Huge memory u

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-07 Thread Shinobu Kinjo
OK, that's the protocol: 802.3ad. - Original Message - From: "Mariusz Gronczewski" To: "Shinobu Kinjo" Cc: "Jan Schermer" , ceph-users@lists.ceph.com Sent: Monday, September 7, 2015 10:19:23 PM Subject: Re: [ceph-users] Huge memory usage spike in OSD

Re: [ceph-users] Extra RAM use as Read Cache

2015-09-07 Thread Shinobu Kinjo
I have a bunch of questions about Lustre performance, which should be discussed on the lustre-discuss list. How many OSTs are you using now? How did you configure LNET? How are you using the extra RAM as read cache? Shinobu - Original Message - From: "Vickey Singh" To: ceph-users@lists.cep

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-08 Thread Shinobu Kinjo
ment? Shinobu - Original Message - From: "Mariusz Gronczewski" To: "池信泽" Cc: "Shinobu Kinjo" , ceph-users@lists.ceph.com Sent: Tuesday, September 8, 2015 7:09:32 PM Subject: Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant For those interested: Bu

Re: [ceph-users] qemu jemalloc support soon in master (applied in paolo upstream branch)

2015-09-08 Thread Shinobu Kinjo
That would be a lifesaver. Thanks a lot! > you simply need to compile qemu with --enable-jemalloc, to enable jemalloc > support. - Original Message - From: "Alexandre DERUMIER" To: "ceph-users" , "ceph-devel" Sent: Tuesday, September 8, 2015 7:58:15 PM Subject: [ceph-users] qemu jem

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-08 Thread Shinobu Kinjo
> >> > >> The patch https://github.com/ceph/ceph/pull/5656 > >> https://github.com/ceph/ceph/pull/5451 merged into master would fix it, and > >> it will be backported. > >> > >> I think ceph v0.93 or newer versions may hit this bug

Re: [ceph-users] Inconsistent PGs that ceph pg repair does not fix

2015-09-08 Thread Shinobu Kinjo
That's good news. Shinobu - Original Message - From: "Sage Weil" To: "Andras Pataki" Cc: ceph-users@lists.ceph.com, ceph-de...@vger.kernel.org Sent: Wednesday, September 9, 2015 3:07:29 AM Subject: Re: [ceph-users] Inconsistent PGs that ceph pg repair does not fix On Tue, 8 Sep 2015,

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-08 Thread Shinobu Kinjo
Have you ever tried this? http://ceph.com/docs/master/rados/troubleshooting/memory-profiling/ Shinobu - Original Message - From: "Chad William Seys" To: "Mariusz Gronczewski" , "Shinobu Kinjo" , ceph-users@lists.ceph.com Sent: Wednesday, September 9, 2015 6:14:
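The page linked above drives tcmalloc's built-in heap profiler. A minimal sketch against one OSD (osd.0 and the snapshot path are placeholders):

    ceph tell osd.0 heap start_profiler
    ceph tell osd.0 heap dump            # write a .heap snapshot
    ceph tell osd.0 heap stats           # quick summary on the console
    ceph tell osd.0 heap stop_profiler
    # analyze the snapshot offline
    google-pprof --text /usr/bin/ceph-osd /var/log/ceph/osd.0.profile.0001.heap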

Re: [ceph-users] jemalloc and transparent hugepage

2015-09-08 Thread Shinobu Kinjo
I emailed you all about using jemalloc. There might be a workaround to use it much more effectively. I hope some of you saw my email... Shinobu - Original Message - From: "Mark Nelson" To: "Alexandre DERUMIER" , "ceph-devel" , "ceph-users" Sent: Wednesday, September 9, 2015 8:52:35 AM Sub

Re: [ceph-users] Question on cephfs recovery tools

2015-09-09 Thread Shinobu Kinjo
Did you try to identify which processes were accessing the filesystem, using fuser or lsof, and then kill them? If not, you should do that first. Shinobu - Original Message - From: "Goncalo Borges" To: ski...@redhat.com Sent: Wednesday, September 9, 2015 5:04:23 PM Subject: Re: [ceph-u
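A minimal sketch of that check, assuming the filesystem is mounted at /mnt/cephfs (the path is a placeholder):

    # show which processes hold the mount point open
    fuser -vm /mnt/cephfs
    lsof /mnt/cephfs
    # if they are safe to stop, kill them and retry the unmount
    fuser -km /mnt/cephfs
    umount /mnt/cephfs     # 'umount -l' only as a last resort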

Re: [ceph-users] Question on cephfs recovery tools

2015-09-09 Thread Shinobu Kinjo
Anyhow, this page should help you: http://ceph.com/docs/master/cephfs/disaster-recovery/ Shinobu - Original Message - From: "Shinobu Kinjo" To: "Goncalo Borges" Cc: "ceph-users" Sent: Wednesday, September 9, 2015 5:28:38 PM Subject: Re: [ceph-user

Re: [ceph-users] Poor IOPS performance with Ceph

2015-09-09 Thread Shinobu Kinjo
How many disks does each OSD node have? What about the networking layer? There are several factors that can make your cluster much stronger. You may want to look at other discussions on this mailing list; there has been a lot of discussion about performance. Shinobu - Original Message

Re: [ceph-users] Poor IOPS performance with Ceph

2015-09-09 Thread Shinobu Kinjo
Are you also using that HDD for storing journal data, or are you using an SSD for that purpose? Shinobu - Original Message - From: "Daleep Bais" To: "Shinobu Kinjo" Cc: "Ceph-User" Sent: Wednesday, September 9, 2015 5:59:33 PM Subject: Re: [ceph-users] P

Re: [ceph-users] Poor IOPS performance with Ceph

2015-09-09 Thread Shinobu Kinjo
These may be of more help for performance analysis -; http://ceph.com/docs/master/start/hardware-recommendations/ http://www.sebastien-han.fr/blog/2013/10/03/quick-analysis-of-the-ceph-io-layer/ Shinobu - Original Message - From: "Shinobu Kinjo" To: "D

Re: [ceph-users] Question on cephfs recovery tools

2015-09-09 Thread Shinobu Kinjo
>> >> # cephfs-data-scan scan_extents cephfs_dt >> # cephfs-data-scan scan_inodes cephfs_dt >> >> # cephfs-data-scan scan_extents --force-pool cephfs_mt >> (doesn't seem to work) >> >> e./ After running the cephfs too

Re: [ceph-users] purpose of different default pools created by radosgw instance

2015-09-09 Thread Shinobu Kinjo
That's a good point actually. It would probably save our lives -; Shinobu - Original Message - From: "Ben Hines" To: "Mark Kirkwood" Cc: "ceph-users" Sent: Thursday, September 10, 2015 8:23:26 AM Subject: Re: [ceph-users] purpose of different default pools created by radosgw instance The Ceph d

[ceph-users] Ceph.conf

2015-09-10 Thread Shinobu Kinjo
Hello, I'm seeing 859 parameters in the output of: $ ./ceph --show-config | wc -l *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** 859 In: $ ./ceph --version *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** ceph version 9.0.2-1454-

Re: [ceph-users] Ceph.conf

2015-09-10 Thread Shinobu Kinjo
- Original Message - From: "Gregory Farnum" To: "Shinobu Kinjo" Cc: "ceph-users" , "ceph-devel" Sent: Thursday, September 10, 2015 5:57:52 PM Subject: Re: [ceph-users] Ceph.conf On Thu, Sep 10, 2015 at 9:44 AM, Shinobu Kinjo wrote: > Hello, > > I'm seeing 859 param

Re: [ceph-users] Ceph.conf

2015-09-10 Thread Shinobu Kinjo
deration. But you can ignore me anyhow or point out anything to me -; Shinobu - Original Message - From: "Abhishek L" To: "Shinobu Kinjo" Cc: "Gregory Farnum" , "ceph-users" , "ceph-devel" Sent: Thursday, September 10, 2015 6:35:31 PM Subje

Re: [ceph-users] Question on cephfs recovery tools

2015-09-10 Thread Shinobu Kinjo
>> Finally the questions: >> >> a./ Under a situation like the one described above, how can we safely terminate >> cephfs in the clients? I have had situations where umount simply hangs and >> there is no real way to unblock the situation unless I reboot the client. If >> we have hundreds of clients,

Re: [ceph-users] Question on cephfs recovery tools

2015-09-10 Thread Shinobu Kinjo
>> c./ After recovering the cluster, I though I was in a cephfs situation where >> I had >> c.1 files with holes (because of lost PGs and objects in the data pool) >> c.2 files without metadata (because of lost PGs and objects in the >> metadata pool) > > What does "files without metadata"

Re: [ceph-users] Ceph cluster NO read / write performance :: Ops are blocked

2015-09-11 Thread Shinobu Kinjo
If you really want to improve the performance of a *distributed* filesystem like Ceph, Lustre, or GPFS, you must start from the networking stack of the Linux kernel. L5: Socket L4: TCP L3: IP L2: Queuing In this discussion, the problem could be in L2, which is queuing at the descriptor level. We may have to take a closer l
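A minimal sketch of how to look for L2-level queuing trouble, assuming the interface is eth0 (a placeholder) and the ethtool/tc tools are available:

    # NIC-level drops and errors
    ethtool -S eth0 | grep -i -e drop -e err
    # software queuing discipline statistics
    tc -s qdisc show dev eth0
    # ring buffer sizes; raising RX/TX can help if drops track load
    ethtool -g eth0
    ethtool -G eth0 rx 4096 tx 4096   # 4096 is illustrative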

Re: [ceph-users] Ceph cluster NO read / write performance :: Ops are blocked

2015-09-11 Thread Shinobu Kinjo
and tcpdump, tc would give you more concurrency information to solve the problem. Shinobu - Original Message - From: "Shinobu Kinjo" To: "Vickey Singh" Cc: ceph-users@lists.ceph.com Sent: Friday, September 11, 2015 10:32:27 PM Subject: Re: [ceph-users] Ceph clu

Re: [ceph-users] ceph-fuse auto down

2015-09-11 Thread Shinobu Kinjo
There should be some complaints in /var/log/messages. Can you attach it? Shinobu - Original Message - From: "谷枫" To: "ceph-users" Sent: Saturday, September 12, 2015 1:30:49 PM Subject: [ceph-users] ceph-fuse auto down Hi,all My cephfs cluster is deployed on three nodes with Ceph Hammer 0.94.3

Re: [ceph-users] ceph-fuse auto down

2015-09-11 Thread Shinobu Kinjo
Ah, you are using Ubuntu, sorry about that. How about /var/log/dmesg? I believe you can attach a file rather than pasting it. Pasting a bunch of logs would not be good for me -; And when did you notice that cephfs was hung? Shinobu - Original Message - From: "谷枫" To: "Shinobu Ki

Re: [ceph-users] Question on cephfs recovery tools

2015-09-11 Thread Shinobu Kinjo
> In your procedure, the umount problems have nothing to do with > corruption. It's (sometimes) hanging because the MDS is offline. If How did you notice that the MDS was offline? Was it just because the ceph client could not unmount the filesystem, or something else? I would like to see the MDS and OSD logs. B

Re: [ceph-users] ceph-fuse auto down

2015-09-12 Thread Shinobu Kinjo
Thank you for log archives. I went to dentist -; Please do not forget CCing ceph-users from the next because there is a bunch of really **awesome** guys; Can you re-attach log files again so that they see? Shinobu - Original Message - From: "谷枫" To: "Shinobu Kinjo&q

[ceph-users] ceph-fuse auto down

2015-09-12 Thread Shinobu Kinjo
In _usr_bin_ceph-fuse.0.crash.client2.tar What I'm seeing now is: 3 Date: Sat Sep 12 06:37:47 2015 ... 6 ExecutableTimestamp: 1440614242 ... 7 ProcCmdline: ceph-fuse -k /etc/ceph.new/ceph.client.admin.keyring -m 10.3.1.11,10.3.1.12,10.3.1.13 /grdata ... 30 7f32de7fe000-7f32deffe000 rw

Re: [ceph-users] ceph-fuse auto down

2015-09-12 Thread Shinobu Kinjo
;谷枫" To: "Shinobu Kinjo" Cc: "ceph-users" Sent: Sunday, September 13, 2015 10:51:35 AM Subject: Re: [ceph-users] ceph-fuse auto down sorry Shinobu, I don't understand what's the means what you pasted. Multi ceph-fuse crash just now today. The ceph-fuse complete

Re: [ceph-users] ceph-fuse auto down

2015-09-12 Thread Shinobu Kinjo
can. Shinobu - Original Message - From: "谷枫" To: "Shinobu Kinjo" Cc: "ceph-users" Sent: Sunday, September 13, 2015 11:30:57 AM Subject: Re: [ceph-users] ceph-fuse auto down Yes, when some ceph-fuse crashes, the mount driver has gone, and can't remo

Re: [ceph-users] RGW Keystone interaction (was Ceph.conf)

2015-09-12 Thread Shinobu Kinjo
> Looked a bit more into this, swift apis seem to support the use > of an admin tenant, user & token for validating the bearer token, > similar to other openstack service which use a service tenant > credentials for authenticating. Yes, it's just working as middleware under Keystone. > Though it

Re: [ceph-users] ceph-fuse auto down

2015-09-13 Thread Shinobu Kinjo
When did you face this issue? From the beginning or...? Shinobu - Original Message - From: "谷枫" To: "Shinobu Kinjo" Cc: "ceph-users" Sent: Sunday, September 13, 2015 12:06:25 PM Subject: Re: [ceph-users] ceph-fuse auto down All clients use same ceph-fuse

Re: [ceph-users] ceph-fuse auto down

2015-09-13 Thread Shinobu Kinjo
Did you make that script, or is it there by default? Shinobu - Original Message - From: "谷枫" To: "Shinobu Kinjo" Cc: "ceph-users" Sent: Monday, September 14, 2015 10:48:16 AM Subject: Re: [ceph-users] ceph-fuse auto down Hi,Shinobu I found the logrotate s

Re: [ceph-users] ceph-fuse auto down

2015-09-13 Thread Shinobu Kinjo
e to monitor the client to see if it causes a problem or not? And can you make sure the module is there now? Shinobu - Original Message - From: "谷枫" To: "Shinobu Kinjo" Cc: "ceph-users" Sent: Monday, September 14, 2015 10:57:31 AM Subject: Re: [ceph-users] ceph-fuse

Re: [ceph-users] ceph-fuse auto down

2015-09-13 Thread Shinobu Kinjo
Yes, that is exactly what I'm going to do. Thanks for your follow-up. Shinobu - Original Message - From: "Zheng Yan" To: "谷枫" Cc: "Shinobu Kinjo" , "ceph-users" Sent: Monday, September 14, 2015 12:19:44 PM Subject: Re: [ceph-users] cep

Re: [ceph-users] ceph-fuse auto down

2015-09-13 Thread Shinobu Kinjo
Before dumping core: ulimit -c unlimited. After dumping core: run gdb and just do a backtrace with (gdb) bt; there should be some signal visible. Then give us the full output. Shinobu - Original Message - From: "谷枫" To: "Shinobu Kinjo" Cc: "Zheng Yan" , "ceph-users
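Spelled out, a minimal sketch of that workflow, assuming ceph-fuse is the crashing binary and the core file path is a placeholder:

    # allow core dumps before reproducing the crash
    ulimit -c unlimited
    # after ceph-fuse has crashed and written a core file:
    gdb /usr/bin/ceph-fuse /path/to/core
    (gdb) bt                    # backtrace of the faulting thread
    (gdb) thread apply all bt   # backtraces of all threads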

Re: [ceph-users] Question on cephfs recovery tools

2015-09-14 Thread Shinobu Kinjo
- Original Message - From: "Goncalo Borges" To: "Shinobu Kinjo" , "John Spray" Cc: ceph-users@lists.ceph.com Sent: Tuesday, September 15, 2015 12:39:57 PM Subject: Re: [ceph-users] Question on cephfs recovery tools Hi Shinobu >>> c./ After recovering the cluster, I th

Re: [ceph-users] ceph osd won't boot, resource shortage?

2015-09-18 Thread Shinobu Kinjo
I do not think that it's best practice to increase that number right away; doing so without checking would be careless. We might need to do it in the end, but what we should do first is check the current actual number of AIO requests using: watch -dc cat /proc/sys/fs/aio-nr and then increase it, if it's necessa

Re: [ceph-users] ceph osd won't boot, resource shortage?

2015-09-18 Thread Shinobu Kinjo
/proc/sys/fs/aio-max-nr Meaning that, since you set 5 for aio-max-nr, you ended up with a lack of resources. If you have any questions, concerns, or anything else, just let us know. Shinobu - Original Message - From: "Peter Sabaini" To: "Shinobu Kinjo" Cc:
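A minimal sketch of that check, raising the cap only if aio-nr really is pinned at the limit (the new value is illustrative):

    # in-flight AIO requests vs. the system-wide cap
    watch -dc cat /proc/sys/fs/aio-nr
    cat /proc/sys/fs/aio-max-nr
    # raise the cap now, and persist it across reboots
    sysctl -w fs.aio-max-nr=1048576
    echo 'fs.aio-max-nr = 1048576' > /etc/sysctl.d/99-aio.conf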

Re: [ceph-users] radosgw and keystone version 3 domains

2015-09-18 Thread Shinobu Kinjo
What's the error message you saw when you tried? Shinobu - Original Message - From: "Abhishek L" To: "Robert Duncan" Cc: ceph-us...@ceph.com Sent: Friday, September 18, 2015 12:29:20 PM Subject: Re: [ceph-users] radosgw and keystone version 3 domains On Fri, Sep 18, 2015 at 4:38 AM, Robert

Re: [ceph-users] rbd and exclusive lock feature

2015-09-22 Thread Shinobu Kinjo
> when you enable the "exclusive-lock" feature, only one RBD client is able to > modify the image while the lock is held. means rbd_lock_exclusive > However, that won't stop other RBD clients from *requesting* that maintenance > operations be performed on the image (e.g. snapshot, resize).
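A minimal sketch of working with the feature, assuming a pool/image named rbd/myimage and a release new enough to toggle features on an existing format-2 image:

    # enable exclusive locking on the image
    rbd feature enable rbd/myimage exclusive-lock
    # see who currently holds locks on it
    rbd lock list rbd/myimage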

Re: [ceph-users] radosgw and keystone version 3 domains

2015-09-25 Thread Shinobu Kinjo
Thanks for the info. Shinobu - Original Message - From: "Luis Periquito" To: "Shinobu Kinjo" Cc: "Abhishek L" , "Robert Duncan" , "ceph-users" Sent: Friday, September 25, 2015 8:52:48 PM Subject: Re: [ceph-users] radosgw and keystone

Re: [ceph-users] radosgw and keystone version 3 domains

2015-09-25 Thread Shinobu Kinjo
> and need to use openstack client. Yes, you have to for v3 anyway. Shinobu - Original Message - From: "Robert Duncan" To: "Luis Periquito" Cc: "Shinobu Kinjo" , "Abhishek L" , "ceph-users" Sent: Friday, September 25, 2015 11:29

Re: [ceph-users] radosgw and keystone version 3 domains

2015-09-25 Thread Shinobu Kinjo
If any of you could share keystone.log with me, it would be very helpful, along with the output of: keystone --version Shinobu - Original Message - From: "Shinobu Kinjo" To: "Robert Duncan" Cc: "Luis Periquito" , "Abhishek L" , "ceph-users" Sent: Sa

Re: [ceph-users] radosgw and keystone version 3 domains

2015-09-29 Thread Shinobu Kinjo
Hello, Thanks!! Anyhow, have you ever tried to access a Swift object using v3? Shinobu - Original Message - From: "Robert Duncan" To: "Shinobu Kinjo" , ceph-users@lists.ceph.com Sent: Tuesday, September 29, 2015 8:48:57 PM Subject: Re: [ceph-users] radosgw and keysto

Re: [ceph-users] Can not download from http://ceph.com/packages/ceph-extras/rpm/centos6.3/

2015-10-01 Thread Shinobu Kinjo
FYI: http://docs.ceph.com/docs/giant/install/get-packages/ Shinobu - Original Message - From: "MinhTien MinhTien" To: ceph-users@lists.ceph.com Sent: Friday, October 2, 2015 11:01:14 AM Subject: [ceph-users] Can not download from http://ceph.com/packages/ceph-extras/rpm/centos6.3/

Re: [ceph-users] ceph-fuse and its memory usage

2015-10-02 Thread Shinobu Kinjo
Can you run the same test several times? Not just once, twice, or three times, but many more. And check in more detail: for instance, descriptors, and the networking statistics exposed in /sys/class/net/<iface>/statistics/*, among other things. If every single result is the same, there could be a problem in some layer, netw

Re: [ceph-users] CephFS and page cache

2015-10-19 Thread Shinobu Kinjo
What kind of applications are you talking about with regard to HPC? Something like NetCDF? Caching is quite necessary for some computational applications, but that's not always the case. It's not quite related to this topic, but I'm really interested in your thought usi

[ceph-users] [Ceph-Users] Upgrade Path to Hammer

2015-12-07 Thread Shinobu Kinjo
Hello, Have any of you tried to upgrade a Ceph cluster through the following upgrade path? Dumpling -> Firefly -> Hammer (each at its newest point release) After upgrading from Dumpling through Firefly to Hammer following this: http://docs.ceph.com/docs/master/install/upgrading-ceph/ I ended up hitti

Re: [ceph-users] [Ceph-Users] Upgrade Path to Hammer

2015-12-07 Thread Shinobu Kinjo
Is there anything we have to do, or is that upgrade path simply not doable... Shinobu - Original Message - From: "Gregory Farnum" To: "Shinobu Kinjo" Cc: "ceph-users" Sent: Tuesday, December 8, 2015 10:36:34 AM Subject: Re: [ceph-users] [Ceph-Users] Upgrade

Re: [ceph-users] [Ceph-Users] Upgrade Path to Hammer

2015-12-07 Thread Shinobu Kinjo
Thanks! - Original Message - From: "Gregory Farnum" To: "Shinobu Kinjo" Cc: "ceph-users" Sent: Tuesday, December 8, 2015 12:06:51 PM Subject: Re: [ceph-users] [Ceph-Users] Upgrade Path to Hammer The warning is informational -- it doesn't harm any

Re: [ceph-users] Mapping RBD On Ceph Cluster Node

2016-04-30 Thread Shinobu Kinjo
On Sat, Apr 30, 2016 at 5:32 PM, Oliver Dzombic wrote: > Hi, > > there is a memory allocation bug, at least in hammer. > Could you give us a pointer? > Mounting an rbd volume as a block device on a ceph node might run you > into that. Then your mount won't work, and you will have to restart the

Re: [ceph-users] Mapping RBD On Ceph Cluster Node

2016-04-30 Thread Shinobu Kinjo
i...@ip-interactive.de > > Anschrift: > > IP Interactive UG ( haftungsbeschraenkt ) > Zum Sonnenberg 1-3 > 63571 Gelnhausen > > HRB 93402 beim Amtsgericht Hanau > Geschäftsführung: Oliver Dzombic > > Steuer Nr.: 35 236 3622 1 > UST ID: DE274086107 > > >

Re: [ceph-users] Blocked ops, OSD consuming memory, hammer

2016-05-25 Thread Shinobu Kinjo
What will the following show you? ceph pg 12.258 list_unfound // maybe hung... ceph pg dump_stuck And enable debugging on osd.4: debug osd = 20 debug filestore = 20 debug ms = 1 But honestly, my best bet is to upgrade to the latest release. It would save you a lot of trouble. - Shinobu On Thu, May 26, 20
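A minimal sketch of turning that debugging on at runtime, assuming osd.4 and an admin keyring; injectargs changes are not persisted, so mirror them in ceph.conf if they should survive a restart:

    ceph tell osd.4 injectargs '--debug-osd 20 --debug-filestore 20 --debug-ms 1'
    # and back down once the logs are captured (values illustrative)
    ceph tell osd.4 injectargs '--debug-osd 1 --debug-filestore 1 --debug-ms 0'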

Re: [ceph-users] OSD issue: unable to obtain rotating service keys

2016-06-01 Thread Shinobu Kinjo
Would you enable debugging for osd.177? debug osd = 20 debug filestore = 20 debug ms = 1 Cheers, Shinobu On Thu, Jun 2, 2016 at 2:31 AM, Jeffrey McDonald wrote: > Hi, > > I just performed a minor ceph upgrade on my ubuntu 14.04 cluster from ceph > version 0.94.6-1trusty to 0.94.7-1trusty. Upo

Re: [ceph-users] object size changing after a pg repair

2016-06-29 Thread Shinobu Kinjo
What does `ceph pg 6.263 query` show you? On Thu, Jun 30, 2016 at 12:02 PM, Goncalo Borges < goncalo.bor...@sydney.edu.au> wrote: > Dear Cephers... > > Today our ceph cluster gave us a couple of scrub errors regarding > inconsistent pgs. We just upgraded from 9.2.0 to 10.2.2 two days ago. > > #

Re: [ceph-users] object size changing after a pg repair

2016-06-29 Thread Shinobu Kinjo
;: "Started\/Primary\/Active", > "enter_time": "2016-06-27 04:57:36.876639", > "might_have_unfound": [], > "recovery_progress": { > "backfill_targets": [], > "waiting

Re: [ceph-users] object size changing after a pg repair

2016-06-29 Thread Shinobu Kinjo
clients' write operations to RADOS will be canceled (maybe `canceled` is not the appropriate word in this sentence) until the full epoch, before touching the same object, since clients must have the latest OSD map. Does it make sense? Anyway, in case I've been missing something, someone will add more. > > Do

Re: [ceph-users] 答复: 转发: how to fix the mds damaged issue

2016-07-04 Thread Shinobu Kinjo
Reproduce with 'debug mds = 20' and 'debug ms = 20'. shinobu On Mon, Jul 4, 2016 at 9:42 PM, Lihang wrote: > Thank you very much for your advice. The command "ceph mds repaired 0" > works fine in my cluster; my cluster state became HEALTH_OK and the cephfs > state became normal as well. but in the

Re: [ceph-users] ceph-fuse segfaults ( jewel 10.2.2)

2016-07-04 Thread Shinobu Kinjo
Can you reproduce with debug client = 20? On Tue, Jul 5, 2016 at 10:16 AM, Goncalo Borges < goncalo.bor...@sydney.edu.au> wrote: > Dear All... > > We have recently migrated all our ceph infrastructure from 9.2.0 to 10.2.2. > > We are currently using ceph-fuse to mount cephfs in a number of client

Re: [ceph-users] setting crushmap while creating pool fails

2016-07-14 Thread Shinobu Kinjo
You may want to change the value of "osd_pool_default_crush_replicated_ruleset". shinobu On Fri, Jul 15, 2016 at 7:38 AM, Oliver Dzombic wrote: > Hi, > > wow, figured it out. > > If you don't have a ruleset 0 id, you are in trouble. > > So the solution is that you >MUST< have a ruleset id 0. > > -

Re: [ceph-users] setting crushmap while creating pool fails

2016-07-15 Thread Shinobu Kinjo
t: > > IP Interactive UG ( haftungsbeschraenkt ) > Zum Sonnenberg 1-3 > 63571 Gelnhausen > > HRB 93402 beim Amtsgericht Hanau > Geschäftsführung: Oliver Dzombic > > Steuer Nr.: 35 236 3622 1 > UST ID: DE274086107 > > > Am 15.07.2016 um 06:22 schrieb Shinobu Kinjo:

Re: [ceph-users] How to configure OSD heart beat to happen on public network

2016-08-01 Thread Shinobu Kinjo
osd_heartbeat_addr must be in the [osd] section. On Thu, Jul 28, 2016 at 4:31 AM, Venkata Manojawa Paritala wrote: > Hi, > > I have configured the below 2 networks in Ceph.conf. > > 1. public network > 2. cluster_network > > Now, the heartbeat for the OSDs is happening through the cluster_network. How can
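A minimal ceph.conf sketch of that placement; the address is a placeholder for an IP on the OSD host's public network, and each OSD host would carry its own value:

    [osd]
    # force OSD heartbeats onto the public network
    osd heartbeat addr = 192.168.0.11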

Re: [ceph-users] OSDs going down when we bring down some OSD nodes Or cut-off the cluster network link between OSD nodes

2016-08-07 Thread Shinobu Kinjo
On Sun, Aug 7, 2016 at 6:56 PM, Christian Balzer wrote: > > [Reduced to ceph-users, this isn't community related] > > Hello, > > On Sat, 6 Aug 2016 20:23:41 +0530 Venkata Manojawa Paritala wrote: > >> Hi, >> >> We have configured single Ceph cluster in a lab with the below >> specification. >> >>

Re: [ceph-users] Recovering full OSD

2016-08-08 Thread Shinobu Kinjo
On Mon, Aug 8, 2016 at 8:01 PM, Mykola Dvornik wrote: > Dear ceph community, > > One of the OSDs in my cluster cannot start due to the > > ERROR: osd init failed: (28) No space left on device > > A while ago it was recommended to manually delete PGs on the OSD to let it > start. Who recommended t

Re: [ceph-users] Recovering full OSD

2016-08-08 Thread Shinobu Kinjo
: > @Shinobu > > According to > http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/ > > "If you cannot start an OSD because it is full, you may delete some data by > deleting some placement group directories in the full OSD." > > > On 8 Au

Re: [ceph-users] which kernel version support object-map feature from rbd kernel client

2017-08-15 Thread Shinobu Kinjo
It would be much better to explain why, as of today, the object-map feature is not supported by the kernel client, or to document it. On Tue, Aug 15, 2017 at 8:08 PM, Ilya Dryomov wrote: > On Tue, Aug 15, 2017 at 11:34 AM, moftah moftah wrote: >> Hi All, >> >> I have searched everywhere for some sort of t

Re: [ceph-users] "ceph fs" commands hang forever and kill monitors

2017-09-27 Thread Shinobu Kinjo
Just for clarification: did you upgrade your cluster from Hammer to Luminous and then hit an assertion? On Wed, Sep 27, 2017 at 8:15 PM, Richard Hesketh wrote: > As the subject says... any ceph fs administrative command I try to run hangs > forever and kills monitors in the background - sometimes t

Re: [ceph-users] Ceph Developers Monthly - October

2017-09-28 Thread Shinobu Kinjo
Are we going to have the next CDM in an APAC-friendly time slot again? On Thu, Sep 28, 2017 at 12:08 PM, Leonardo Vaz wrote: > Hey Cephers, > > This is just a friendly reminder that the next Ceph Developer Monthly > meeting is coming up: > > http://wiki.ceph.com/Planning > > If you have work that y

Re: [ceph-users] "ceph fs" commands hang forever and kill monitors

2017-09-28 Thread Shinobu Kinjo
So the problem you faced has been completely solved? On Thu, Sep 28, 2017 at 7:51 PM, Richard Hesketh wrote: > On 27/09/17 19:35, John Spray wrote: >> On Wed, Sep 27, 2017 at 1:18 PM, Richard Hesketh >> wrote: >>> On 27/09/17 12:32, John Spray wrote: On Wed, Sep 27, 2017 at 12:15 PM, Richar

Re: [ceph-users] cephx

2017-10-12 Thread Shinobu Kinjo
On Fri, Oct 13, 2017 at 3:29 PM, Ashley Merrick wrote: > Hello, > > > Is it possible to limit a cephx user to one image? > > > I have looked and it seems it's possible per pool, but I can't find a per-image > option. What did you look at? Best reg

Re: [ceph-users] Random checksum errors (bluestore on Luminous)

2017-12-10 Thread Shinobu Kinjo
Can you open a ticket with the exact version of your Ceph cluster? http://tracker.ceph.com Thanks, On Sun, Dec 10, 2017 at 10:34 PM, Martin Preuss wrote: > Hi, > > I'm new to Ceph. I started a ceph cluster from scratch on Debian 9, > consisting of 3 hosts; each host has 3-4 OSDs (using 4TB hdds, cu

Re: [ceph-users] I want to submit a PR - Can someone guide me

2016-11-18 Thread Shinobu Kinjo
On Sat, Nov 19, 2016 at 6:59 AM, Brad Hubbard wrote: > +ceph-devel > > On Fri, Nov 18, 2016 at 8:45 PM, Nick Fisk wrote: >> Hi All, >> >> I want to submit a PR to include fix in this tracker bug, as I have just >> realised I've been experiencing it. >> >> http://tracker.ceph.com/issues/9860 >> >

Re: [ceph-users] A question about io consistency in osd down case

2016-12-12 Thread Shinobu Kinjo
On Sat, Dec 10, 2016 at 11:00 PM, Jason Dillaman wrote: > I should clarify that if the OSD has silently failed (e.g. the TCP > connection wasn't reset and packets are just silently being dropped / > not being acked), IO will pause for up to "osd_heartbeat_grace" before The number is how long an O

Re: [ceph-users] can cache-mode be set to readproxy for tier cache with ceph 0.94.9 ?

2016-12-13 Thread Shinobu Kinjo
On Tue, Dec 13, 2016 at 4:38 PM, JiaJia Zhong wrote: > hi cephers: > we are using ceph hammer 0.94.9, yes, It's not the latest ( jewel), > with some ssd osds for tiering, cache-mode is set to readproxy, > everything seems to be as expected, > but when reading some small files from c

Re: [ceph-users] can cache-mode be set to readproxy for tier cachewith ceph 0.94.9 ?

2016-12-13 Thread Shinobu Kinjo
# rados -p ${cache pool} ls # rados -p ${cache pool} get ${object} /tmp/file # ls -l /tmp/file -- Original -- From: "Shinobu Kinjo"; Date: Tue, Dec 13, 2016 06:21 PM To: "JiaJia Zhong"; Cc: "CEPH list"; "ukernel"; Subject: Re:

Re: [ceph-users] cephfs quota

2016-12-14 Thread Shinobu Kinjo
Would you give us some output? # getfattr -n ceph.quota.max_bytes /some/dir and # ls -l /some/dir On Thu, Dec 15, 2016 at 4:41 PM, gjprabu wrote: > > Hi Team, > > We are using ceph version 10.2.4 (Jewel) and data is mounted > with the cephfs file system on Linux. We are trying to s
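For context, CephFS quotas are set and read through virtual extended attributes. A minimal sketch, with /some/dir and the limits as placeholders:

    # cap the directory at 100 GiB and 10,000 files
    setfattr -n ceph.quota.max_bytes -v 107374182400 /some/dir
    setfattr -n ceph.quota.max_files -v 10000 /some/dir
    # read the limits back
    getfattr -n ceph.quota.max_bytes /some/dir
    getfattr -n ceph.quota.max_files /some/dir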

Re: [ceph-users] can cache-mode be set to readproxy for tiercachewith ceph 0.94.9 ?

2016-12-17 Thread Shinobu Kinjo
ng set-overlay? we didn't sweep the clients out while setting the overlay > > -- Original -- > From: "JiaJia Zhong"; > Date: Wed, Dec 14, 2016 11:24 AM > To: "Shinobu Kinjo"; > Cc: "CEPH list"; "ukernel";

Re: [ceph-users] Ceph Import Error

2016-12-21 Thread Shinobu Kinjo
Can you share the exact steps you took to build the cluster? On Thu, Dec 22, 2016 at 3:39 AM, Aakanksha Pudipeddi wrote: > I mean setting up a Ceph cluster after compiling from source and make install. I > usually use the long form to set up the cluster. The mon setup is fine but > when I create an OSD u

Re: [ceph-users] Ceph pg active+clean+inconsistent

2016-12-22 Thread Shinobu Kinjo
Would you be able to execute ``ceph pg ${PG ID} query`` against that particular PG? On Wed, Dec 21, 2016 at 11:44 PM, Andras Pataki wrote: > Yes, size = 3, and I have checked that all three replicas are the same zero > length object on the disk. I think some metadata info is mismatching what > t

Re: [ceph-users] Ceph pg active+clean+inconsistent

2016-12-23 Thread Shinobu Kinjo
"data_digest": 2293522445, > "omap_digest": 4294967295, > "expected_object_size": 4194304, > "expected_write_size": 4194304, > "alloc_hint_flags": 53, > "watchers": {} > } > > Depending on the output, one method for

Re: [ceph-users] Recover VM Images from Dead Cluster

2016-12-24 Thread Shinobu Kinjo
On Sun, Dec 25, 2016 at 7:33 AM, Brad Hubbard wrote: > On Sun, Dec 25, 2016 at 3:33 AM, w...@42on.com wrote: >> >> >>> Op 24 dec. 2016 om 17:20 heeft L. Bader het volgende >>> geschreven: >>> >>> Do you have any references on this? >>> >>> I searched for something like this quite a lot and did
