Re: [ceph-users] What is the maximum theoretical and practical capacity of a Ceph cluster?

2014-10-28 Thread Robert van Leeuwen
> By now we have decided to use a SuperMicro SKU with 72 HDD bays = 22 SSDs + 50 SATA drives. > Our racks can hold 10 of these servers, and 50 such racks in the Ceph cluster = 36,000 OSDs. > With 4 TB SATA drives, replica = 2 and a nearfull ratio of 0.8, we have 40 petabytes of useful capacity. > It's too b
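
For reference, the arithmetic behind the 40 PB figure appears to work out as follows (assuming only the 50 SATA bays per server hold data OSDs, while the 36,000 OSD count includes all 72 bays):

    50 racks x 10 servers x 50 SATA drives = 25,000 data drives
    25,000 drives x 4 TB                   = 100 PB raw
    100 PB / 2 (replica = 2) x 0.8 (nearfull ratio) = 40 PB usable
    50 racks x 10 servers x 72 bays        = 36,000 OSDs in total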

Re: [ceph-users] What is the maximum theoretical and practical capacity of a Ceph cluster?

2014-10-28 Thread Dan Van Der Ster
> On 28 Oct 2014, at 08:25, Robert van Leeuwen wrote: >> By now we have decided to use a SuperMicro SKU with 72 HDD bays = 22 SSDs + 50 SATA drives. >> Our racks can hold 10 of these servers, and 50 such racks in the Ceph cluster = 36,000 OSDs. >> With 4 TB SATA drives, replica = 2 and a nearfull r

[ceph-users] Scrub process, IO performance

2014-10-28 Thread Mateusz Skała
Hello, We are using Ceph as a storage backend for KVM, used for hosting MS Windows RDP sessions, Linux web applications with MySQL databases, and file sharing from Linux. When a scrub or deep-scrub process is active, RDP sessions freeze for a few seconds and web applications have high reply latency.

Re: [ceph-users] What is the maximum theoretical and practical capacity of a Ceph cluster?

2014-10-28 Thread Christian Balzer
On Tue, 28 Oct 2014 07:46:30 + Dan Van Der Ster wrote: > > On 28 Oct 2014, at 08:25, Robert van Leeuwen wrote: > >> By now we have decided to use a SuperMicro SKU with 72 HDD bays = 22 SSDs + 50 SATA drives. > >> Our racks can hold 10 of these servers, and 50 such racks in the Ceph cluste

Re: [ceph-users] What is the maximum theoretical and practical capacity of a Ceph cluster?

2014-10-28 Thread Dan Van Der Ster
> On 28 Oct 2014, at 09:30, Christian Balzer wrote: > > On Tue, 28 Oct 2014 07:46:30 + Dan Van Der Ster wrote: > >> >>> On 28 Oct 2014, at 08:25, Robert van Leeuwen wrote: >>> By now we have decided to use a SuperMicro SKU with 72 HDD bays = 22 SSDs + 50 SATA drives. Our r

Re: [ceph-users] Scrub process, IO performance

2014-10-28 Thread Dan Van Der Ster
Hi, You should try the new osd_disk_thread_ioprio_class / osd_disk_thread_ioprio_priority options. Cheers, dan On 28 Oct 2014, at 09:27, Mateusz Skała <mateusz.sk...@budikom.net> wrote: Hello, We are using Ceph as a storage backend for KVM, used for hosting MS Windows RDP sessions, Linux we
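
For anyone reading this in the archives, a minimal sketch of how these options are typically applied (they only have an effect when the OSD data disks use the CFQ I/O scheduler):

    # inject at runtime on all OSDs
    ceph tell osd.* injectargs '--osd_disk_thread_ioprio_class idle --osd_disk_thread_ioprio_priority 7'

    # or persist in ceph.conf
    [osd]
        osd disk thread ioprio class = idle
        osd disk thread ioprio priority = 7

This lowers the I/O priority of the disk thread that performs scrubbing, so client I/O is served first.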

Re: [ceph-users] Filestore throttling

2014-10-28 Thread GuangYang
> Date: Thu, 23 Oct 2014 21:26:07 -0700 > From: s...@newdream.net > To: yguan...@outlook.com > CC: ceph-de...@vger.kernel.org; ceph-users@lists.ceph.com > Subject: RE: Filestore throttling > > On Fri, 24 Oct 2014, GuangYang wrote: >>> commit 44dca5c8c5058a

Re: [ceph-users] What is the maximum theoretical and practical capacity of a Ceph cluster?

2014-10-28 Thread Mariusz Gronczewski
On Tue, 28 Oct 2014 11:32:34 +0900, Christian Balzer wrote: > On Mon, 27 Oct 2014 19:30:23 +0400 Mike wrote: > The fact that they make you buy the complete system with IT mode > controllers also means that if you would want to do something like RAID6, > you'd be forced to do it in software. If

Re: [ceph-users] What is the maximum theoretical and practical capacity of a Ceph cluster?

2014-10-28 Thread Nick Fisk
I've been looking at various categories of disks and how the performance/reliability/cost varies. There seem to be 5 main categories (WD disks given as examples): Budget (WD Green - 5400 RPM, no TLER), Desktop drives (WD Blue - 7200 RPM, no TLER), NAS drives (WD Red - 5400 RPM, TLER), Enterprise Capacity

Re: [ceph-users] Can't start OSD - one OSD is always down.

2014-10-28 Thread tuantb
Hi Craig Lewis, My pool has 300 TB of data, so I can't recreate a new pool and copy the data with "ceph cp pool" (it would take a very long time). I upgraded Ceph to Giant (0.86), but I still get the error :(( I think my problem is "objects misplaced (0.320%)" # ceph pg 23.96 query "num_objects_missing_on_primary"
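
A minimal sketch of the kind of inspection that usually helps here (the PG id 23.96 is taken from the message above):

    ceph health detail                      # lists the PGs behind each warning
    ceph pg dump_stuck unclean              # PGs that are not active+clean
    ceph pg 23.96 query > pg.23.96.json     # full peering/recovery state for one PG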

Re: [ceph-users] What is the maximum theoretical and practical capacity of a Ceph cluster?

2014-10-28 Thread Mariusz Gronczewski
On Tue, 28 Oct 2014 09:58:45 -, Nick Fisk wrote: > The RED drives are interesting, they are very cheap and if performance is > not of top importance (cold storage/archive) they would seem to be a good > choice as they are designed for 24x7 use and support 7s error timeout. WD also have Red
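
As a side note for readers: the 7-second error timeout (TLER / SCT ERC) can usually be inspected and set with smartctl, values being given in tenths of a second; /dev/sdX is a placeholder:

    smartctl -l scterc /dev/sdX          # show current SCT ERC read/write timeouts
    smartctl -l scterc,70,70 /dev/sdX    # set both timeouts to 7.0 s, if the drive supports it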

[ceph-users] error when executing ceph osd pool set foo-hot cache-mode writeback

2014-10-28 Thread Cristian Falcas
Hello, In the documentation about creating a cache pool, you find this: "Cache mode The most important policy is the cache mode: ceph osd pool set foo-hot cache-mode writeback" But when trying to run the above command, I get an error: ceph osd pool set ssd_cache cache-mode writeback Invalid

Re: [ceph-users] Scrub process, IO performance

2014-10-28 Thread Mateusz Skała
Thanks for the reply, we are now using Ceph 0.80.1 (Firefly); are these options available? From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mateusz Skała Sent: Tuesday, October 28, 2014 9:27 AM To: ceph-us...@ceph.com Subject: [ceph-users] Scrub process, IO performance Hello,

Re: [ceph-users] Scrub process, IO performance

2014-10-28 Thread Irek Fasikhov
No. They appeared in 0.80.6, but there is a bug which is corrected in 0.80.8. See: http://tracker.ceph.com/issues/9677 2014-10-28 14:50 GMT+03:00 Mateusz Skała : > Thanks for the reply, we are now using Ceph 0.80.1 (Firefly); are these options > available? > > > > *From:* ceph-users [mailto:ceph-users-boun...@
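
A quick way to confirm what a cluster is actually running before relying on such options (a sketch; output formats vary between releases):

    ceph --version            # version of the local binaries
    ceph tell osd.* version   # version reported by each running OSD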

[ceph-users] Ceph tries to install to root on OSDs

2014-10-28 Thread Support - Avantek
Hi, Ceph always tries to install as the root user on my OSDs, not the Ceph user I created (and it always asks for a password!). I think it must be something to do with editing the ~/.ssh/config file, but this doesn't exist on my monitor, so can I just create one? And also how do you set it u
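
For reference, a minimal ~/.ssh/config sketch on the admin node for this kind of ceph-deploy setup; the hostnames and the user name below are placeholders, not taken from the original mail. The file can simply be created if it does not exist (chmod 600):

    Host osd-node1
        Hostname osd-node1.example.com
        User cephdeploy
    Host osd-node2
        Hostname osd-node2.example.com
        User cephdeploy

Combined with ssh-copy-id cephdeploy@osd-node1 (and node2), ceph-deploy should then connect as that user without prompting for a password.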

Re: [ceph-users] can we deploy multi-rgw on one ceph cluster?

2014-10-28 Thread yuelongguang
clewis: 1. I do not understand the use case of multiple regions, multiple zones and multiple RGWs. In actual use (upload, download), users deal with RGW directly; in this process, what do regions and zones do? 2. It is radosgw-agent that syncs data and metadata. If data synchron

[ceph-users] Adding a monitor to

2014-10-28 Thread Patrick Darley
Hi there, Over the last week or so, I've been trying to connect a Ceph monitor node running on a Baserock system to a simple 3-node Ubuntu Ceph cluster. The 3-node Ubuntu cluster was created by following the documented quick installation guide, using 3 VMs running Ubuntu Trusty. A

[ceph-users] Poor RBD performance as LIO iSCSI target

2014-10-28 Thread Christopher Spearman
I've noticed a pretty steep performance degradation when using RBDs with LIO. I've tried a multitude of configurations to see if there are any changes in performance and I've only found a few that work (sort of). Details about the systems being used: - All network hardware for data is 10gbe, the

Re: [ceph-users] error when executing ceph osd pool set foo-hot cache-mode writeback

2014-10-28 Thread Gregory Farnum
On Tue, Oct 28, 2014 at 3:24 AM, Cristian Falcas wrote: > Hello, > > In the documentation about creating a cache pool, you find this: > > "Cache mode > > The most important policy is the cache mode: > > ceph osd pool set foo-hot cache-mode writeback" > > But when trying to run the above command,

Re: [ceph-users] Adding a monitor to

2014-10-28 Thread Gregory Farnum
On Mon, Oct 27, 2014 at 11:37 AM, Patrick Darley wrote: > Hi there, > > Over the last week or so, I've been trying to connect a Ceph monitor node > running on a Baserock system > to a simple 3-node Ubuntu Ceph cluster. > > The 3-node Ubuntu cluster was created by following the documente

Re: [ceph-users] Poor RBD performance as LIO iSCSI target

2014-10-28 Thread Mike Christie
On 10/27/2014 04:24 PM, Christopher Spearman wrote: > - What tested with bad performance (Reads ~25-50MB/s - Writes ~25-50MB/s): > * RBD setup as target using LIO > * RBD -> LVM -> LIO target > * RBD -> RAID0/1 -> LIO target > - What tested with good performance (Reads ~700-800MB/s - W
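
For readers unfamiliar with the setups being compared, a rough sketch of the plain "RBD as LIO target" case using targetcli; the IQN, device path and portal IP are placeholders, and ACL/authentication setup is omitted:

    targetcli /backstores/block create name=rbd0 dev=/dev/rbd0
    targetcli /iscsi create iqn.2014-10.com.example:rbd0
    targetcli /iscsi/iqn.2014-10.com.example:rbd0/tpg1/luns create /backstores/block/rbd0
    targetcli /iscsi/iqn.2014-10.com.example:rbd0/tpg1/portals create 192.168.0.10 3260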

Re: [ceph-users] can we deploy multi-rgw on one ceph cluster?

2014-10-28 Thread Craig Lewis
On Tue, Oct 28, 2014 at 7:41 AM, yuelongguang wrote: > clewis: > 1. I do not understand the use case of multiple regions, multiple zones > and multiple RGWs. > In actual use (upload, download), users deal with RGW directly; > in this process, what do regions and zones do? > The
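
To see which region and zone a given gateway is actually serving, the configuration can be inspected on the gateway host; a sketch assuming a Firefly-era radosgw-admin and a gateway cephx user named client.radosgw.gateway:

    radosgw-admin region get --name client.radosgw.gateway
    radosgw-admin zone get --name client.radosgw.gateway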

Re: [ceph-users] MDS isn't working anymore after OSDs ran full

2014-10-28 Thread Gregory Farnum
You'll need to gather a log with the offsets visible; you can do this with "debug ms = 1; debug mds = 20; debug journaler = 20". -Greg On Fri, Oct 24, 2014 at 7:03 AM, Jasper Siero wrote: > Hello Greg and John, > > I used the patch on the ceph cluster and tried it again: > /usr/bin/ceph-mds -i t
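
For anyone reproducing this, the settings Greg mentions would look like the following in ceph.conf (restart the MDS afterwards), or they can be injected at runtime; "a" stands in for the actual MDS id:

    [mds]
        debug ms = 1
        debug mds = 20
        debug journaler = 20

    # or, at runtime:
    ceph tell mds.a injectargs '--debug_ms 1 --debug_mds 20 --debug_journaler 20'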

Re: [ceph-users] Adding a monitor to

2014-10-28 Thread Gregory Farnum
I'm sorry, you're right — I misread it. :( But indeed step 6 is the crucial one, which tells the existing monitors to accept the new one into the cluster. You'll need to run it with an admin client keyring that can connect to the existing cluster; that's probably the part that has gone wrong. You d
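
For context, the manual add-a-monitor sequence being referred to looks roughly like this ({mon-id} and {ip} are placeholders, the default cluster name "ceph" is assumed, and the admin keyring must be usable as Greg notes):

    mkdir -p /var/lib/ceph/mon/ceph-{mon-id}
    ceph auth get mon. -o /tmp/mon.keyring
    ceph mon getmap -o /tmp/monmap
    ceph-mon -i {mon-id} --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
    ceph mon add {mon-id} {ip}:6789      # the step that tells the existing quorum about the new mon
    ceph-mon -i {mon-id} --public-addr {ip}:6789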

Re: [ceph-users] Poor RBD performance as LIO iSCSI target

2014-10-28 Thread Christopher Spearman
Sage: That'd be my assumption; performance looked pretty fantastic over loop until it started being used heavily. Mike: The configs you asked for are at the end of this message. I've subtracted & changed some info (iqn/wwn/portal) for security purposes. The raw & loop target configs are all in

Re: [ceph-users] Troubleshooting Incomplete PGs

2014-10-28 Thread Gregory Farnum
On Thu, Oct 23, 2014 at 6:41 AM, Chris Kitzmiller wrote: > On Oct 22, 2014, at 8:22 PM, Craig Lewis wrote: > > Shot in the dark: try manually deep-scrubbing the PG. You could also try > marking various OSDs out, in an attempt to get the acting set to include > osd.25 again, then do the deep-scru
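
The quoted suggestions map to commands along these lines (the PG id and OSD number are placeholders; osd.25 is from the thread):

    ceph pg deep-scrub <pgid>     # manually deep-scrub one PG
    ceph osd out <osd-number>     # take an OSD out to change the acting set
    ceph osd in <osd-number>      # bring it back in afterwards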

Re: [ceph-users] error when executing ceph osd pool set foo-hot cache-mode writeback

2014-10-28 Thread Cristian Falcas
It's from here: https://ceph.com/docs/v0.79/dev/cache-pool/#cache-mode (both commands are present on that page). On Tue, Oct 28, 2014 at 6:03 PM, Gregory Farnum wrote: > On Tue, Oct 28, 2014 at 3:24 AM, Cristian Falcas > wrote: >> Hello, >> >> In the documentation about creating a cache pool, you f
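
For anyone hitting the same error: that URL points at dev-branch docs; on the released Firefly/Giant CLI the cache mode is set through the tier commands instead, roughly as follows (ssd_cache is the cache pool from the thread, "foo" stands in for the base pool):

    ceph osd tier add foo ssd_cache
    ceph osd tier cache-mode ssd_cache writeback
    ceph osd tier set-overlay foo ssd_cache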

Re: [ceph-users] Troubleshooting Incomplete PGs

2014-10-28 Thread Lincoln Bryant
Hi Greg, Loic, I think we have seen this as well (sent a mail to the list a week or so ago about incomplete pgs). I ended up giving up on the data and doing a force_create_pgs after doing a find on my OSDs and deleting the relevant pg dirs. If there are any logs etc you'd like to see for debugg
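
What Lincoln describes corresponds roughly to the following per dead PG; it is destructive and gives up on the data, and the PG id is a placeholder:

    # on every OSD that held a copy, with the OSD stopped:
    find /var/lib/ceph/osd/ceph-*/current -maxdepth 1 -type d -name '<pgid>_head'
    # ...remove the directories found above, then tell the cluster to recreate the PG empty:
    ceph pg force_create_pg <pgid>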

[ceph-users] HTTP Get returns 404 Not Found for Swift API

2014-10-28 Thread Pedro Miranda
Hi, I'm new to Ceph and I have a very basic Ceph cluster with 1 mon on one node and 2 OSDs on two separate nodes (all CentOS 7). I followed the quick-ceph-deploy tutorial. All went well. Then I started the quick-rgw
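
A common first check when the RGW Swift API returns 404 is whether the Swift subuser and key exist and whether the auth URL is right; a sketch with placeholder user name, secret and host:

    radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full
    radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret
    swift -A http://rgw.example.com/auth/1.0 -U testuser:swift -K '<swift_secret>' list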

[ceph-users] Setting ceph username for rbd fuse

2014-10-28 Thread Xavier Trilla
Hi list, We have been using rbd-fuse; it works flawlessly if we use the admin key, but we can't find a way to tell it which username to use to log into Ceph. It always seems to default to client.admin. We even tried setting CEPH_ARGS but it doesn't seem to recognize the user id. So, that's

Re: [ceph-users] Troubleshooting Incomplete PGs

2014-10-28 Thread Loic Dachary
Hi Chris, Would you be so kind as to attach the relevant osdmaps (4663 and 4685 would be helpful) to http://tracker.ceph.com/issues/9752? If my request is unclear I can guide you. Please note that I'm in Paris, France and about to disconnect (11pm here ;-). I'll read you in the morni
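
For anyone wondering how to produce such attachments: old osdmap epochs can be extracted directly from the monitors, e.g.:

    ceph osd getmap 4663 -o osdmap.4663
    ceph osd getmap 4685 -o osdmap.4685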

Re: [ceph-users] Troubleshooting Incomplete PGs

2014-10-28 Thread Loic Dachary
On 28/10/2014 22:20, Lincoln Bryant wrote: > Hi Greg, Loic, > > I think we have seen this as well (sent a mail to the list a week or so ago > about incomplete pgs). I ended up giving up on the data and doing a > force_create_pgs after doing a find on my OSDs and deleting the relevant pg > dir

Re: [ceph-users] What is the maximum theoretical and practical capacity of a Ceph cluster?

2014-10-28 Thread Craig Lewis
> > And finally the SAS drive. For Ceph I don't see this drive making much > sense. Most manufacturers' enterprise SATA drives are identical to the SAS > version, with just a different interface. Performance seems identical in > all comparisons I have seen, apart from the fact that SATA can only qu

Re: [ceph-users] What is the maximum theoretical and practical capacity of a Ceph cluster?

2014-10-28 Thread Christian Balzer
On Tue, 28 Oct 2014 10:33:55 +0100 Mariusz Gronczewski wrote: > On Tue, 28 Oct 2014 11:32:34 +0900, Christian Balzer > wrote: > > > On Mon, 27 Oct 2014 19:30:23 +0400 Mike wrote: > > > The fact that they make you buy the complete system with IT mode > > controllers also means that if you would

Re: [ceph-users] Troubleshooting Incomplete PGs

2014-10-28 Thread Chris Kitzmiller
On Oct 28, 2014, at 5:20 PM, Lincoln Bryant wrote: > Hi Greg, Loic, > > I think we have seen this as well (sent a mail to the list a week or so ago > about incomplete pgs). I ended up giving up on the data and doing a > force_create_pgs after doing a find on my OSDs and deleting the relevant pg