[ceph-users] which kernel version can help avoid kernel client deadlock

2015-07-27 Thread van
ted codes, I wonder if ceph has also made some effort on the kernel client to solve this problem. If ceph did, could anyone help provide the kernel version with the patch? Thanks. van chaofa...@owtware.com ___ ceph-users mailing list ceph-users@lists.c

Re: [ceph-users] which kernel version can help avoid kernel client deadlock

2015-07-28 Thread van
or could help with this? Thanks. > > On Jul 28, 2015, at 3:01 PM, Ilya Dryomov wrote: > > On Tue, Jul 28, 2015 at 9:17 AM, van wrote: >> Hi, list, >> >> I found on the ceph FAQ that the ceph kernel client should not run on >> machines belonging to the ceph cluster.

Re: [ceph-users] which kernel version can help avoid kernel client deadlock

2015-07-28 Thread van
Hi, Ilya, In the dmesg there are also a lot of libceph socket errors, which I think may be caused by my stopping the ceph service without unmapping the rbd. Here is a longer log that contains more info: http://jmp.sh/NcokrfT Thanks for being willing to help

Re: [ceph-users] which kernel version can help avoid kernel client deadlock

2015-07-28 Thread van
> On Jul 28, 2015, at 7:57 PM, Ilya Dryomov wrote: > > On Tue, Jul 28, 2015 at 2:46 PM, van wrote: >> Hi, Ilya, >> >> In the dmesg, there is also a lot of libceph socket error, which I think >> may be caused by my stopping ceph service without unmap rbd.

Re: [ceph-users] which kernel version can help avoid kernel client deadlock

2015-07-29 Thread van
> On Jul 29, 2015, at 12:40 AM, Ilya Dryomov wrote: > > On Tue, Jul 28, 2015 at 7:20 PM, van <mailto:chaofa...@owtware.com>> wrote: >> >>> On Jul 28, 2015, at 7:57 PM, Ilya Dryomov wrote: >>> >>> On Tue, Jul 28, 2015 at 2:46 PM, van wrot

[ceph-users] questions on editing crushmap for ceph cache tier

2015-07-29 Thread van
ht 2.5
        item node1 weight 2.5
}

# typical ruleset
rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}
van chaofa...@owtwa

Re: [ceph-users] which kernel version can help avoid kernel client deadlock

2015-07-29 Thread van
> Subject: Re: [ceph-users] which kernel version can help avoid kernel client > deadlock > > > On Jul 29, 2015, at 12:40 AM, Ilya Dryomov <mailto:idryo...@gmail.com>> wrote: > > On Tue, Jul 28, 2015 at 7:20 PM, van <mailto:chaofa...@owtware.com>> wrote:

Re: [ceph-users] questions on editing crushmap for ceph cache tier

2015-07-30 Thread van
> -----END PGP SIGNATURE----- > > Robert LeBlanc >

[ceph-users] Questions about deploying multiple cluster on same servers

2014-11-24 Thread van
about multiple ceph services on the same server. Could you give some guidance on deploying multiple clusters on the same servers? Best regards, van chaofa...@owtware.com ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] OSD server alternatives to choose

2014-06-03 Thread Robert van Leeuwen
> We are at the end of the process of designing and purchasing storage to > provide Ceph based backend for VM images, VM boot (ephemeral) disks, > persistent volumes (and possibly object storage) for our future Openstack > cloud. > We considered many options and we chose to prefer commodity sto

Re: [ceph-users] OSD server alternatives to choose

2014-06-03 Thread Robert van Leeuwen
> this is a very good point that I totally overlooked. I concentrated more on > the IOPS alignment plus write durability, > and forgot to check the sequential write bandwidth. Again, this totally depends on the expected load. Running lots of VMs usually tends to end up being random IOPS on your

Re: [ceph-users] Poor performance on all SSD cluster

2014-06-23 Thread Robert van Leeuwen
> All of which means that Mysql performance (looking at you, binlog) may > still suffer due to lots of small-block-size sync writes. Which begs the question: is anyone running a reasonably busy Mysql server on Ceph-backed storage? We tried and it did not perform well enough. We have a small ceph clu

[ceph-users] Adding new OSDs fails (hang on ceph-osd -i )

2014-07-08 Thread Robert van Leeuwen
ngs on OSD start. When I manually add the OSD the following process just hangs: ceph-osd -i 10 --osd-journal=/mnt/ceph/journal_vg_sda/journal --mkfs --mkkey Running ceph-0.72.1 on CentOS. Any tips? Thx, Robert van Leeuwen ___ ceph-users mailing list

Re: [ceph-users] Adding new OSDs fails (hang on ceph-osd -i )

2014-07-08 Thread Robert van Leeuwen
> Try to add --debug-osd=20 and --debug-filestore=20 > The logs might tell you more why it isn't going through. Nothing of interest there :( What I do notice is that when I run ceph_deploy it is referencing a keyring that does not exist: --keyring /var/lib/ceph/tmp/mnt.J7nSi0/keyring If I look o

[ceph-users] Hang of ceph-osd -i (adding an OSD)

2014-07-09 Thread Robert van Leeuwen
Hi, I cannot add a new OSD to a current Ceph cluster. It just hangs, here is the debug log: ceph-osd -d --debug-ms=20 --debug-osd=20 --debug-filestore=31 -i 10 --osd-journal=/mnt/ceph/journal_vg_sda/journal0 --mkfs --mkjournal --mkkey 2014-07-09 10:50:28.934959 7f80f6a737a

Re: [ceph-users] Hang of ceph-osd -i (adding an OSD)

2014-07-09 Thread Robert van Leeuwen
> I cannot add a new OSD to a current Ceph cluster. > It just hangs, here is the debug log: > This is ceph 0.72.1 on CentOS. Found the issue: Although I installed the specific ceph (0.72.1) version the latest leveldb was installed. Apparently this breaks stuff... Cheers, Robert va

Re: [ceph-users] Hang of ceph-osd -i (adding an OSD)

2014-07-09 Thread Robert van Leeuwen
> Which leveldb from where? 1.12.0-5 that tends to be in el6/7 repos is broken > for Ceph. > You need to remove the “basho fix” patch. > 1.7.0 is the only readily available version that works, though it is so old > that I suspect it is responsible for various > issues we see. Apparently at some
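
A minimal sketch of working around this on an EL6/EL7 node, assuming the leveldb 1.7.0 package mentioned above is still available in an enabled repository (exact package versions and repo layout are assumptions):

    # downgrade to the known-working leveldb and lock it so a yum update
    # cannot pull the broken 1.12.0-5 build back in
    yum install -y yum-plugin-versionlock
    yum downgrade -y leveldb-1.7.0
    yum versionlock add leveldb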

Re: [ceph-users] question about crushmap

2014-07-11 Thread Robert van Leeuwen
hat in the crush map bucket hierarchy indeed. There is a nice explanation here: http://ceph.com/docs/master/rados/operations/crush-map/ Note that your clients will write in both the local and remote dc so it will impact write latency! Cheers, Robert van Leeuwen __

Re: [ceph-users] HW recommendations for OSD journals?

2014-07-16 Thread Robert van Leeuwen
ous incoming data and lose 2-3 percent of lifetime per week. I would highly recommend monitoring this if you are not doing so already ;) Buying bigger SSDs will help because the writes are spread across more cells, so a 240GB drive should last twice as long as a 120GB drive. Cheers, Robert van Leeuwen
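
A simple way to monitor journal SSD wear, as recommended above, is to poll the SMART endurance attribute. A minimal sketch, assuming smartmontools is installed and the drive exposes an Intel-style Media_Wearout_Indicator (other vendors use attributes such as Wear_Leveling_Count):

    # list SMART attributes and pick out the wear/endurance related ones
    smartctl -A /dev/sdb | egrep -i 'wearout|wear_leveling|percent_lifetime'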

Re: [ceph-users] nf_conntrack overflow crashes OSDs

2014-08-08 Thread Robert van Leeuwen
nusable. It is also possible to specifically not conntrack certain connections, e.g. iptables -t raw -A PREROUTING -p tcp --dport 6789 -j CT --notrack Note that you will have to add rules for both traffic flows: since the connections are no longer tracked, the firewall does not automatically accept the return
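
To illustrate the "both traffic flows" remark: NOTRACK rules are needed for the inbound packets and for the replies, and because untracked packets never match ESTABLISHED they also need explicit ACCEPT rules. A sketch for the monitor port only (6789); OSD ports would need the same treatment:

    # skip conntrack for inbound ceph-mon connections...
    iptables -t raw -A PREROUTING -p tcp --dport 6789 -j CT --notrack
    # ...and for the reply packets leaving this host
    iptables -t raw -A OUTPUT -p tcp --sport 6789 -j CT --notrack
    # untracked packets do not match ESTABLISHED, so accept them explicitly
    iptables -A INPUT  -p tcp --dport 6789 -j ACCEPT
    iptables -A OUTPUT -p tcp --sport 6789 -j ACCEPT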

Re: [ceph-users] best practice of installing ceph(large-scale deployment)

2014-08-11 Thread Robert van Leeuwen
st IO for database like loads I think you should probably go with SSD only (pools). (I'd be happy to hear the numbers about running high random write loads on it :) And another nice hardware scaling PDF from dreamhost... https://objects.dreamhost.com/inktankweb/Inkta

Re: [ceph-users] cache pools on hypervisor servers

2014-08-12 Thread Robert van Leeuwen
hypervisors get above a certain load threshold. I would certainly test a lot with high loads before putting it in production... Cheers, Robert van Leeuwen ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] cache pools on hypervisor servers

2014-08-14 Thread Robert van Leeuwen
and > generally have over 50% of free cpu power. The number of cores does not really matter if they are all busy ;) I honestly do not know how Ceph behaves when it is CPU-starved, but I guess it might not be pretty. Since your whole environment will come crumbling down if your storage becomes un

Re: [ceph-users] Ceph Journal Disk Size

2015-07-03 Thread Van Leeuwen, Robert
ite workload is hitting the disk and it is just very big sequential writes. Cheers, Robert van Leeuwen ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] NVME SSD for journal

2015-07-07 Thread Van Leeuwen, Robert
eries though. E.g. the Intel 750 1.2TB costs more than a 400GB S3700 and has a lot less endurance (about 200GB per day vs 4TB per day). Cheers, Robert van Leeuwen ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

[ceph-users] Ceph Performance Questions with rbd images access by qemu-kvm

2015-08-31 Thread Kenneth Van Alstyne
ack" - rbd_concurrent_management_ops is unset, so it appears the default is “10” Thanks, -- Kenneth Van Alstyne Systems Architect Knight Point Systems, LLC Service-Disabled Veteran-Owned Business 1775 Wiehle Avenue Suite 101 | Reston, VA 20190 c: 228-547-8045 f: 571-266-3106 www.knightpoint.com DHS EAGLE

Re: [ceph-users] Ceph Performance Questions with rbd images access by qemu-kvm

2015-08-31 Thread Kenneth Van Alstyne
I/O coalescing to deal with my crippling IOP limit due to the low number of spindles? Thanks, -- Kenneth Van Alstyne Systems Architect Knight Point Systems, LLC Service-Disabled Veteran-Owned Business 1775 Wiehle Avenue Suite 101 | Reston, VA 20190 c: 228-547-8045 f: 571-266-3106 www.knightpoint.com

Re: [ceph-users] Ceph Performance Questions with rbd images access by qemu-kvm

2015-09-01 Thread Kenneth Van Alstyne
Thanks for the awesome advice folks. Until I can go larger scale (50+ SATA disks), I’m thinking my best option here is to just swap out these 1TB SATA disks with 1TB SSDs. Am I oversimplifying the short term solution? Thanks, -- Kenneth Van Alstyne Systems Architect Knight Point Systems, LLC

Re: [ceph-users] Ceph Performance Questions with rbd images access by qemu-kvm

2015-09-01 Thread Kenneth Van Alstyne
Got it — I’ll keep that in mind. That may just be what I need to “get by” for now. Ultimately, we’re looking to buy at least three nodes of servers that can hold 40+ OSDs backed by 2TB+ SATA disks, Thanks, -- Kenneth Van Alstyne Systems Architect Knight Point Systems, LLC Service-Disabled

[ceph-users] Loss of connectivity when using client caching with libvirt

2013-10-02 Thread Robert van Leeuwen
ffectively losing the network. This is what I set in the ceph client: [client] rbd cache = true rbd cache writethrough until flush = true Anyone else noticed this behaviour before or have some troubleshooting tips? Thx, Robert van Leeuwen ___ ceph

Re: [ceph-users] Loss of connectivity when using client caching with libvirt

2013-10-03 Thread Robert van Leeuwen
> The behavior you both are seeing is fixed by making flush requests > asynchronous in the qemu driver. This was fixed upstream in qemu 1.4.2 > and 1.5.0. If you've installed from ceph-extras, make sure you're using > the .async rpms [1] (we should probably remove the non-async ones at > this point

Re: [ceph-users] Install question

2013-10-07 Thread Robert van Leeuwen
east, serves the Redhat RPM's, maybe the EPEL and Ceph repo's can be added there? If not, make sure you have a lot of time and patience to copy stuff around. Cheers, Robert van Leeuwen ___ ceph-users mailing list ceph-users@lists.ceph.com

Re: [ceph-users] About Ceph SSD and HDD strategy

2013-10-08 Thread Robert van Leeuwen
> I tried putting Flashcache on my spindle OSDs using an Intel SSD and it works > great. > This is getting me read and write SSD caching instead of just write > performance on the journal. > It should also allow me to protect the OSD journal on the same drive as the > OSD data and still get bene
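
For reference, putting a write-back flashcache device in front of a spinning OSD disk looks roughly like the sketch below; the device names and the cache mode are illustrative assumptions, not taken from the thread:

    # create a writeback cache named osd0cache: SSD partition in front of the spinning disk
    flashcache_create -p back osd0cache /dev/sdb1 /dev/sdc
    # the OSD filesystem then goes on the cached device instead of the raw disk
    mkfs.xfs /dev/mapper/osd0cache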

[ceph-users] Distribution of performance under load.

2013-10-17 Thread Robert van Leeuwen
omewhat reduces any need to do this in Ceph but I am curious what Ceph does) I guess it is pretty tricky to handle since load can either be raw bandwidth or number of IOPS. Cheers, Robert van Leeuwen ___ ceph-users mailing list ceph-users@list

[ceph-users] ceph (deploy?) and drive paths / mounting / best practice.

2013-11-18 Thread Robert van Leeuwen
t using /dev/sdX for this instead of the /dev/disk/by-id /by-path given by ceph-deploy. So I am wondering how other people are setting up machines and how things work :) Thx, Robert van Leeuwen ___ ceph-users mailing list ceph-users@lists.ceph.com h

[ceph-users] OSDs marking itself down and reconnecting back to the cluster

2013-11-19 Thread Robert van Leeuwen
803 mon.0 [INF] osd.7 marked itself down 2013-11-19 13:48:40.694596 7f15a6192700 0 monclient: hunting for new mon Thx, Robert van Leeuwen ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] OSDs marking itself down and reconnecting back to the cluster

2013-11-19 Thread Robert van Leeuwen
Ok, probably hitting this: http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/ flapping OSD part... Cheers, Robert From: ceph-users-boun...@lists.ceph.com [ceph-users-boun...@lists.ceph.com] on behalf of Robert van Leeuwen

[ceph-users] No gracefull handling of a maxed out cluster network with noup / nodown set.

2013-11-20 Thread Robert van Leeuwen
Hi, I'm playing with our new Ceph cluster and it seems that Ceph is not gracefully handling a maxed out cluster network. I had some "flapping" nodes once every few minutes when pushing a lot of traffic to the nodes so I decided to set the noup and nodown as described in the docs. http://ceph.c
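
For reference, the flags from the troubleshooting docs are set and cleared cluster-wide like this:

    # keep the monitors from changing OSD up/down state while the network is saturated
    ceph osd set noup
    ceph osd set nodown
    # once the network issue is resolved, remove the flags again
    ceph osd unset noup
    ceph osd unset nodown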

[ceph-users] How to replace a failed OSD

2013-11-20 Thread Robert van Leeuwen
would like to do a partition/format and some ceph commands to get stuff working again... Thx, Robert van Leeuwen ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Adding new OSDs, need to increase PGs?

2013-12-02 Thread Robert van Leeuwen
y and CPU did not seem to be a problem. Since I had the option to recreate the pool and I was not using the recommended settings, I did not really dive into the issue. I will not stray too far from the recommended settings in the future though :) Cheers, Robert van Leeuwen ___

Re: [ceph-users] Adding new OSDs, need to increase PGs?

2013-12-03 Thread Robert van Leeuwen
wntime. What I did see is that IOs will crawl to a halt during pg creation ( 1000 took a few minutes). Also expect reduced performance during the rebalance of the data. The OSDs will be quite busy during that time. I would certainly pick a time with low traffic to do

Re: [ceph-users] Adding new OSDs, need to increase PGs?

2013-12-03 Thread Robert van Leeuwen
d for 100% percent. Also the usage dropped to 0% pretty much immediately after the benchmark so it looks like it's not lagging behind the journal. Did not really test reads yet since we have so much read cache (128 GB per node) I assume we will mostly be write limited. Cheers, Robert v

Re: [ceph-users] Adding new OSDs, need to increase PGs?

2013-12-03 Thread Robert van Leeuwen
sequential writes. Cheers, Robert van Leeuwen Sent from my iPad > On 3 dec. 2013, at 17:02, "Mike Dawson" wrote: > > Robert, > > Do you have rbd writeback cache enabled on these volumes? That could > certainly explain the higher than expected write performance. Any

Re: [ceph-users] Openstack+ceph volume mounting to vm

2013-12-04 Thread Robert van Leeuwen
sume you know this mailing list is a community effort. If you want immediate and official 24x7 support, buy support @ www.inktank.com Cheers, Robert van Leeuwen ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] ceph at "Universite de Lorraine"

2014-10-10 Thread Serge van Ginderachter
On 10 October 2014 16:58, Stéphane DUGRAVOT <stephane.dugra...@univ-lorraine.fr> wrote: > We wonder about the availability of professional support in our project > approach. We were happy to work with Wido Den Hollander https://www.42on.com/ ___ ceph-

Re: [ceph-users] What a maximum theoretical and practical capacity in ceph cluster?

2014-10-28 Thread Robert van Leeuwen
;s is still not very comfortable. (especially if the disks come from the same batch) Cheers, Robert van Leeuwen ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Ceph and Compute on same hardware?

2014-11-12 Thread Robert van Leeuwen
mes more complex having to rule out more potential causes. Not saying it can not work perfectly fine. I'd rather just not take any chances with the storage system... Cheers, Robert van Leeuwen ___ ceph-users mailing list ceph-users@lists

Re: [ceph-users] ceph on peta scale

2015-01-12 Thread Robert van Leeuwen
h of an issue but latency certainly will be. Although bandwidth during a rebalance of data might also be problematic... Cheers, Robert van Leeuwen ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] ceph on peta scale

2015-01-14 Thread Robert van Leeuwen
; instead of strongly consistent. I think Ceph is working on something similar for the Rados gateway. Cheers, Robert van Leeuwen ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] CEPH I/O Performance with OpenStack

2015-01-27 Thread Robert van Leeuwen
> 6. Should I use RAID for the drives on OSD nodes, or is it better to > go without RAID? Without RAID usually makes for better performance. Benchmark your specific workload to be sure. In general I would go for 3 replicas and no RAID. Cheers, Robert van Leeuwen ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
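
For completeness, the replica count is a per-pool setting; a minimal sketch where "volumes" is a hypothetical pool name:

    # keep 3 copies of each object, and require at least 2 to accept writes
    ceph osd pool set volumes size 3
    ceph osd pool set volumes min_size 2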

Re: [ceph-users] CEPH I/O Performance with OpenStack

2015-01-27 Thread Robert van Leeuwen
a single instance, since the data would need to be written spread across the cluster. In my experience it is "good enough" for some low-write instances but not for write-intensive applications like Mysql. Cheers, Robert van Leeuwen

Re: [ceph-users] Journal, SSD and OS

2013-12-06 Thread Robert van Leeuwen
we use the SSD for flashcache instead of journals) Cheers, Robert van Leeuwen ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Impact of fancy striping

2013-12-06 Thread Robert van Leeuwen
making it fully random. I would expect a performance of 100 to 200 IOPS max. Doing an iostat -x or atop should show this bottleneck immediately. This is also the reason to go with SSDs: they have reasonable random IO performance. Cheers, Robert van Leeuwen Sent from my iPad > On 6 dec. 2013,

Re: [ceph-users] how to set up disks in the same host

2013-12-09 Thread Robert van Leeuwen
licas for objects in the pool in order to acknowledge a write operation to the client. If minimum is not met, Ceph will not acknowledge the write to the client. This setting ensures a minimum number of replicas when operating in degraded mode. Cheers, Robert va

Re: [ceph-users] Deleted Objects

2013-12-12 Thread Joel van Velden
In a similar problem, i'm finding that cancelled uploads are not getting freed. "rados df" reports 0 KB in the .rgw.gc pool and "radosgw-admin gc list" gives "[]" Can someone from Inktank provide some light here? -Joel van Velden On 13/12/2013 7:04 a.m., Fa

Re: [ceph-users] Impact of fancy striping

2013-12-13 Thread Robert van Leeuwen
use RAID 10 and do 2 instead of 3 replicas. Cheers, Robert van Leeuwen From: ceph-users-boun...@lists.ceph.com [ceph-users-boun...@lists.ceph.com] on behalf of nicolasc [nicolas.cance...@surfsara.nl] Sent: Thursday, December 12, 2013 5:23 PM To: Craig Lewis

[ceph-users] rgw.multimeta

2013-12-15 Thread Joel van Velden
;: "default.6546.1", "marker": "default.6546.1", "owner": "me", "ver": 393, "master_ver": 0, "mtime": 1386033759, "max_marker": "", "usage": { "rgw.main": { "si

Re: [ceph-users] Failure probability with largish deployments

2013-12-19 Thread Robert van Leeuwen
code base I *think* it should be pretty trivial to change the code to support this and would be a very small change compared to erasure code. ( I looked a bit at crush map Bucket Types but it *seems* that all Bucket types will still stripe the PGs across all nodes within a failure dom

Re: [ceph-users] radosgw daemon stalls on download of some files

2013-12-19 Thread Joel van Velden
Yehuda, Do you have any further detail on this radosgw bug? Does it only apply to emperor? Joel van Velden On 19/12/2013 5:09 a.m., Yehuda Sadeh wrote: We were actually able to find the culprit yesterday. While the nginx workaround might be a valid solution (really depends on who nginx reads

Re: [ceph-users] backfill_toofull issue - can reassign PGs to different server?

2014-01-06 Thread Robert van Leeuwen
full (above 85%). Hi, Could you check / show the weights of all 3 servers and disks? eg. run "ceph osd tree" Also if you use failure domains (e.g. by rack) and the first 2 are in the same domain it will not spread the data according to just the weight but also the failure domai

Re: [ceph-users] Ceph Performance

2014-01-14 Thread Robert van Leeuwen
o: Result jobs=1: iops=297 Result jobs=16: iops=1200 I'm running the fio bench from a KVM virtual. Seems that a single write thread is not able to go above 300 iops (latency?) Ceph can handle more iops if you start more / parallel write threads. Cheers, Robe

Re: [ceph-users] servers advise (dell r515 or supermicro ....)

2014-01-15 Thread Robert van Leeuwen
ave 10Gb copper on board. The above machines just have 2x 1Gb. I think all brands have their own quirks, the question is which one you are the most comfortable to live with. (e.g. we have no support contracts with Supermicro and just have parts on stock) Cheers, Robert van Leeuwen __

Re: [ceph-users] servers advise (dell r515 or supermicro ....)

2014-01-15 Thread Robert van Leeuwen
the m4 is. Up to now, only Intel seems to have done its homework. In general they *seem* to be the most reliable SSD provider. Cheers, Robert van Leeuwen ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo

Re: [ceph-users] XFS tunning on OSD

2014-03-05 Thread Robert van Leeuwen
Hi, We experience something similar with our Openstack Swift setup. You can change the sysctl "vm.vfs_cache_pressure" to make sure more inodes are being kept in cache. (Do not set this to 0 because you will trigger the OOM killer at some point ;) We also decided to go for nodes with more memory
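
A minimal sketch of applying that tunable; the value 10 is only an illustration, not a recommendation from the thread:

    # prefer keeping dentries/inodes in cache (default is 100; never set it to 0)
    sysctl -w vm.vfs_cache_pressure=10
    # make it persistent across reboots
    echo 'vm.vfs_cache_pressure = 10' >> /etc/sysctl.conf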

Re: [ceph-users] Dell H310

2014-03-07 Thread Robert van Leeuwen
> I'm hoping to get some feedback on the Dell H310 (LSI SAS2008 chipset). > Based on searching I'd done previously I got the impression that people > generally recommended avoiding it in favour of the higher specced H710 > (LSI SAS2208 chipset). Purely based on the controller chip it should be OK.

[ceph-users] Write performance from qemu rbd (Mysql)

2014-03-12 Thread Robert van Leeuwen
nyone have experience running Mysql or other real-life heavy workloads from qemu and getting more than 300 IOPS? Thx, Robert van Leeuwen ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] help .--Why the PGS is STUCK UNCLEAN?

2014-03-13 Thread Robert van Leeuwen
across 2 failure domains by default. My guess is the default crush map will see a node as a single failure domain by default. So, edit the crushmap to allow this or add a second node. Cheers, Robert van Leeuwen ___ ceph-users mailing list ceph-users@lists

Re: [ceph-users] Linux kernel module / package / drivers needed for RedHat 6.4 to work with CEPH RBD

2014-03-28 Thread Robert van Leeuwen
On 28.03.14, 12:12, Guang wrote: > Hello ceph-users, > We are trying to play with RBD and I would like to ask if RBD works for > RedHat 6.4 (with kernel version 2.6.32), Unless things changed I think there is no usable kernel client with the Redhat supplied kernel. You could build your own kern

Re: [ceph-users] Ceph and shared backend storage.

2014-04-08 Thread Robert van Leeuwen
something similar. You can get it to work but it is a bit of a PITA. There are also some performance considerations with those filesystems so you should really do some proper testing before any large scale deployments. Cheers, Robert van Leeuwen ___ c

Re: [ceph-users] Ceph and shared backend storage.

2014-04-09 Thread Robert van Leeuwen
> So .. the idea was that ceph would provide the required clustered filesystem > element, > and it was the only FS that provided the required "resize on the fly and > snapshotting" things that were needed. > I can't see it working with one shared lun. In theory I can't see why it > couldn't wor

Re: [ceph-users] Ceph and shared backend storage.

2014-04-09 Thread Robert van Leeuwen
> This is "similar" to ISCSI except that the data is distributed accross x ceph > nodes. > Just as ISCSI you should mount this on two locations unless you run a > clustered filesystem (e.g. GFS / OCFS) Oops I meant, should NOT mount this on two locations unles... :) Cheers, Robert _

Re: [ceph-users] [SOLVED] RE: Ceph 0.72.2 installation on Ubuntu 12.04.4 LTS never got active + сlean

2014-04-30 Thread Robert van Leeuwen
replica counts. When you write something with a replica count of 2 it will show up as using twice the amount of space. So 1 GB usage will result in: 1394 GB / 1396 GB avail Cheers, Robert van Leeuwen ___ ceph-users mailing list ceph-users@lists.ceph.com http

[ceph-users] Storage Multi Tenancy

2014-05-15 Thread Jeroen van Leur
could I please have some assistance in doing so. Has anyone ever done this before? I would like to thank you in advance for reading this lengthy e-mail. If there's anything that is unclear, please feel free to ask. Best Regards, Jeroen van Leur — Infitialis Jeroen van Leur Sent with

Re: [ceph-users] SSD journals killed by VMs generating 500 IOPs (4kB) non-stop for a month, seemingly because of a syslog-ng bug

2015-11-23 Thread Mart van Santen
already >> noticing a massive performance improvement (reduction in write latency, and >> higher IOPs). So I'm not too upset about having unnecessarily killed the 850 >> Pros. But I thought it was worth sharing the experience... >> >> FWIW the OSDs themselves are on

Re: [ceph-users] SSD journals killed by VMs generating 500 IOPs (4kB) non-stop for a month, seemingly because of a syslog-ng bug

2015-11-23 Thread Mart van Santen
On 11/23/2015 10:42 AM, Eneko Lacunza wrote: > Hi Mart, > > On 23/11/15 at 10:29, Mart van Santen wrote: >> >> >> On 11/22/2015 10:01 PM, Robert LeBlanc wrote: >>> There have been numerous reports on the mailing list of the Samsung EVO and >>> Pros fa

Re: [ceph-users] Cannot Issue Ceph Command

2015-11-23 Thread Mart van Santen
ceph-deploy and the ceph command itself are separate packages. It is possible to install *only* the ceph-deploy package without the ceph package. Normally it is as simple as "apt-get install ceph" (depending on your OS). Regards, Mart van Santen On 11/23/2015 05:03 PM, James Galla

Re: [ceph-users] Performance question

2015-11-24 Thread Mart van Santen

Re: [ceph-users] High load during recovery (after disk placement)

2015-11-25 Thread Mart van Santen
as additions please let me know. Regards, Mart van Santen -- Mart van Santen Greenhost E: m...@greenhost.nl T: +31 20 4890444 W: https://greenhost.nl A PGP signature can be attached to this e-mail, you need PGP software to verify it. My public key is available in keyserver(s) see: http://t

Re: [ceph-users] Undersized pgs problem

2015-11-27 Thread Mart van Santen
> }
> >> host slpeah001 {
> >>     id -3 # do not change unnecessarily
> >>     # weight 14.560
> >>     alg straw
> >>     hash 0 # rjenkins1
> >>     item osd.1 weight 3.640
> >>

Re: [ceph-users] dense storage nodes

2016-05-18 Thread Van Leeuwen, Robert
>We've hit issues (twice now) that seem (have not >figured out exactly how to confirm this yet) to be related to kernel >dentry slab cache exhaustion - symptoms were a major slow down in >performance and slow requests all over the place on writes, watching >OSD iostat would show a single drive hitt

Re: [ceph-users] Lessons learned upgrading Hammer -> Jewel

2016-07-15 Thread Mart van Santen
https://github.com/ceph/ceph/pull/9330 > > > > Hope this helps other people with their upgrades to Jewel! > > > > Wido > > _______ > > ceph-users mailing list > > ceph-users@lists.ceph.com &

Re: [ceph-users] Designing ceph cluster

2016-08-17 Thread Mart van Santen
; > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com -- Mart van Santen Greenhost E: m...@greenhost.nl T: +31 20 4890444 W: https://greenhost.nl A PGP signature can be atta

Re: [ceph-users] Designing ceph cluster

2016-08-18 Thread Mart van Santen
mpute1, Can you please share the > steps to split it up using VMs as suggested by you. > We are running kernel rbd on dom0 and osd's in domu, as well as a monitor in domu. Regards, Mart > > Regards > Gaurav Goyal > > > On Wed, Aug 17, 2016 at 9:28 A

[ceph-users] Snapshot cleanup performance impact on client I/O?

2017-06-30 Thread Kenneth Van Alstyne
know if I’ve missed something fundamental. Thanks, -- Kenneth Van Alstyne Systems Architect Knight Point Systems, LLC Service-Disabled Veteran-Owned Business 1775 Wiehle Avenue Suite 101 | Reston, VA 20190 c: 228-547-8045 f: 571-266-3106 www.knightpoint.com DHS EAGLE II Prime Contractor: FC1 S

Re: [ceph-users] New cluster - configuration tips and reccomendation - NVMe

2017-07-05 Thread Van Leeuwen, Robert
memory bandwidth. Having some extra memory for read-cache probably won’t hurt either (unless you know your workload won’t include any cacheable reads) Cheers, Robert van Leeuwen From: ceph-users on behalf of Massimiliano Cuttini Organization: PhoenixWeb Srl Date: Wednesday, July 5, 2017 at 10:54

[ceph-users] Modify user metadata in RGW multi-tenant setup

2017-08-17 Thread Sander van Schie
Hello, I'm trying to modify the metadata of a RGW user in a multi-tenant setup. For a regular user with the default implicit tenant it works fine using the following to get metadata: # radosgw-admin metadata get user: I however can't figure out how to do the same for a user with an explicit tena

Re: [ceph-users] Modify user metadata in RGW multi-tenant setup

2017-08-18 Thread Sander van Schie
I tried using quotes before, which doesn't suffice. Turns out you just need to escape the dollar-sign: radosgw-admin metadata get user:\$ On Thu, Aug 17, 2017 at 10:38 PM, Sander van Schie wrote: > Hello, > > I'm trying to modify the metadata of a RGW user in a multi-te
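
Putting the two messages together: the metadata key for a tenanted user is "tenant$uid", and the dollar sign must be escaped so the shell does not eat it. "testtenant" and "testuser" below are hypothetical names:

    # implicit (default) tenant
    radosgw-admin metadata get user:testuser
    # explicit tenant: escape the $ so it reaches radosgw-admin intact
    radosgw-admin metadata get user:testtenant\$testuser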

[ceph-users] Need some help/advice upgrading Hammer to Jewel - HEALTH_ERR shutting down OSD

2017-09-27 Thread Eric van Blokland
tting down the OSDs. Any help would be greatly appreciated. Kind regards, Eric van Blokland ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Need some help/advice upgrading Hammer to Jewel - HEALTH_ERR shutting down OSD

2017-09-28 Thread Eric van Blokland
p e163: 6 osds: 4 up, 4 in; 14 remapped pgs
     pgmap v2320: 96 pgs, 2 pools, 514 MB data, 141 objects
           1638 MB used, 100707 MB / 102345 MB avail
           28/423 objects degraded (6.619%)
                 53 active+undersized+degraded
                 29 active+clean

Re: [ceph-users] OSD crashed while reparing inconsistent PG luminous

2017-10-17 Thread Mart van Santen
Hi Greg, (I'm a colleague of Ana), Thank you for your reply On 10/17/2017 11:57 PM, Gregory Farnum wrote: > > > On Tue, Oct 17, 2017 at 9:51 AM Ana Aviles > wrote: > > Hello all, > > We had an inconsistent PG on our cluster. While performing PG repair > op

Re: [ceph-users] OSD crashed while reparing inconsistent PG luminous

2017-10-18 Thread Mart van Santen
, mart On 10/18/2017 12:39 PM, Ana Aviles wrote: > > Hello, > > We created a BUG #21827 . Also updated the log file of the OSD with > debug 20. Reference is 6e4dba6f-2c15-4920-b591-fe380bbca200 > > Thanks, > Ana > > On 18/10/17 00:46, Mart van Santen wrote:

Re: [ceph-users] ceph.conf tuning ... please comment

2017-12-06 Thread Van Leeuwen, Robert
should be done in each deployment ;) I am sure some other people have better insights in these specific settings. Cheers, Robert van Leeuwen On 12/6/17, 7:01 AM, "ceph-users on behalf of Stefan Kooman" wrote: Dear list, In a ceph blog post about the new Luminous release

Re: [ceph-users] Linux Meltdown (KPTI) fix and how it affects performance?

2018-01-12 Thread Van Leeuwen, Robert
are a hacker target. How well will you sleep knowing there is a potential hole in security :) Etc. Cheers, Robert van Leeuwen ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] ceph and rsync

2016-12-17 Thread Mart van Santen
Hello, The way Wido explained it is the correct way. I won't deny, however, that last year we had problems with our SSD disks and they did not perform well, so we decided to replace all disks. As the replacement done by Ceph caused high load/downtime on the clients (which was the reason we wanted to repl

Re: [ceph-users] SSDs for data drives

2018-07-11 Thread Mart van Santen

[ceph-users] OSD Crash When Upgrading from Jewel to Luminous?

2018-08-17 Thread Kenneth Van Alstyne
ount of logging and debug information I have available, unfortunately. If it helps, all ceph-mon, ceph-mds, radosgw, and ceph-mgr daemons were running 12.2.7, while 30 of the 50 total ceph-osd daemons were also on 12.2.7 when the remaining 20 ceph-osd daemons (on 10.2.10) crashed. Thanks, -- Ken

Re: [ceph-users] OSD Crash When Upgrading from Jewel to Luminous?

2018-08-21 Thread Kenneth Van Alstyne
duplicate the issue in a lab, but highly suspect this is what happened. Thanks, -- Kenneth Van Alstyne Systems Architect Knight Point Systems, LLC Service-Disabled Veteran-Owned Business 1775 Wiehle Avenue Suite 101 | Reston, VA 20190 c: 228-547-8045 f: 571-266-3106 www.knightpoint.com<h

[ceph-users] Anyone tested Samsung 860 DCT SSDs?

2018-10-12 Thread Kenneth Van Alstyne
Cephers: As the subject suggests, has anyone tested Samsung 860 DCT SSDs? They are really inexpensive and we are considering buying some to test. Thanks, -- Kenneth Van Alstyne Systems Architect Knight Point Systems, LLC Service-Disabled Veteran-Owned Business 1775 Wiehle Avenue Suite

Re: [ceph-users] Anyone tested Samsung 860 DCT SSDs?

2018-10-12 Thread Kenneth Van Alstyne
Van Alstyne Systems Architect Knight Point Systems, LLC Service-Disabled Veteran-Owned Business 1775 Wiehle Avenue Suite 101 | Reston, VA 20190 c: 228-547-8045 f: 571-266-3106 www.knightpoint.com DHS EAGLE II Prime Contractor: FC1 SDVOSB Track GSA Schedule 70 SDVOSB: GS-35F-0646S GSA MOBIS Schedule

Re: [ceph-users] osds with different disk sizes may killing performance (?? ?)

2018-04-18 Thread Van Leeuwen, Robert
proper amount of "hot" data where the IOs happen. But I guess that’s a very hard thing to build properly. Final note: if you only have SSDs in the cluster the problem might not be there because usually bigger SSDs are also faster :) Cheers, Robert van Leeuwen _
