Oh, for some reason I thought you'd mentioned the OSD config earlier here.
Glad you figured it out anyway!
Are you doing any comparison benchmarks with/without compression? There is
precious little (no?) info out there about performance impact...
Cheers,
Blair
On 3 Jul. 2018 03:18, "David Turner
Hi,
Zeros are not a great choice of data for testing a storage system unless
you are specifically testing what it does with zeros. Ceph knows that other
higher layers in the storage stack use zero-fill for certain things and
will probably optimise for it. E.g., it's common for thin-provisioning
sy
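If you're rolling your own test with the python-rados bindings, one easy way to stay off any zero-fill fast path is to write random buffers rather than zeros -- a rough, untested sketch (pool name and conf path are only placeholders):

    import os
    import rados

    # Write incompressible, non-zero data so nothing in the stack can take a
    # zero-fill shortcut; pool name and conf path here are only placeholders.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('testpool')
    try:
        payload = os.urandom(4 * 1024 * 1024)      # one 4 MiB object's worth
        ioctx.write_full('bench_obj_0', payload)   # rather than b'\0' * size
    finally:
        ioctx.close()
        cluster.shutdown()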
This is true, but misses the point that the OP is talking about old
hardware already - you're not going to save much money on removing a 2nd
hand CPU from a system.
On Wed, 20 Jun 2018 at 22:10, Wido den Hollander wrote:
>
>
> On 06/20/2018 02:00 PM, Robert Sander wrote:
> > On 20.06.2018 13:58,
On 19 May 2018 at 09:20, Scottix wrote:
> It would be nice to have an option to have all IO blocked if it hits a
> degraded state until it recovers. Since you are unaware of other MDS state,
> seems like that would be tough to do.
I agree this would be a nice knob to have from the perspective o
> But /proc/cpuinfo never lies (does it?)
>
>
>
>
> On 16 May 2018 at 13:22, Blair Bethwaite
> wrote:
>
>> On 15 May 2018 at 08:45, Wido den Hollander wrote:
>>>
>>> > We've got some Skylake Ubuntu based hypervisors that we can look a
On 15 May 2018 at 08:45, Wido den Hollander wrote:
>
> > We've got some Skylake Ubuntu based hypervisors that we can look at to
> > compare tomorrow...
> >
>
> Awesome!
Ok, so results still inconclusive I'm afraid...
The Ubuntu machines we're looking at (Dell R740s and C6420s running with
Perfo
Sorry, bit late to get back to this...
On Wed., 2 May 2018, 06:19 Nick Fisk, wrote:
> 4.16 required?
>
Looks like it - thanks for pointing that out.
Wido, I don't think you are doing anything wrong here, maybe this is a
bug...
I've got RHEL7 + Broadwell based Ceph nodes here for which the sam
Also curious about this over here. We've got a rack's worth of R740XDs
with Xeon 4114's running RHEL 7.4 and intel_pstate isn't even active
on them, though I don't believe they are any different at the OS level
to our Broadwell nodes (where it is loaded).
Have you tried poking the kernel's pmqos i
On 26 April 2018 at 14:58, Jonathan D. Proulx wrote:
> Those block queue scheduler tips *might* help me squeeze a bit more
> till next budget starts July 1...
Maybe you could pick up some cheap cache from this guy: https://xkcd.com/908/
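For anyone who wants to try the block queue scheduler tweak mentioned above, it's just a sysfs knob -- rough sketch below ('sdb' is only an example device and nothing here persists across reboots):

    # Show the available elevators for a device and switch it to deadline.
    # 'sdb' is only an example; on blk-mq kernels the names differ
    # (mq-deadline etc.) and the change does not survive a reboot.
    dev = "sdb"
    path = "/sys/block/%s/queue/scheduler" % dev
    with open(path) as f:
        print(f.read().strip())        # e.g. "noop deadline [cfq]"
    with open(path, "w") as f:
        f.write("deadline")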
--
Cheers,
~Blairo
Hi Jon,
On 25 April 2018 at 21:20, Jonathan Proulx wrote:
>
> here's a snap of 24hr graph form one server (others are similar in
> general shape):
>
> https://snapshot.raintank.io/dashboard/snapshot/gB3FDPl7uRGWmL17NHNBCuWKGsXdiqlt
That's what, a median IOPS of about 80? Pretty high for spinning
Thanks Ilya,
We can probably handle ~6.2MB for a 100TB volume. Is it reasonable to
expect a librbd client such as QEMU to only hold one object-map per guest?
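For reference, the arithmetic behind that ~6.2MB figure (assuming the default 4MiB object size and 2 bits of object-map state per object) is roughly:

    vol_bytes = 100 * 2**40            # 100 TiB volume
    obj_bytes = 4 * 2**20              # default 4 MiB RBD object size
    objects = vol_bytes // obj_bytes   # 26,214,400 backing objects
    map_bytes = objects * 2 / 8        # 2 bits of state tracked per object
    print(map_bytes / 2**20)           # ~6.25 MiB per mapped image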
Cheers,
On 12 February 2018 at 21:01, Ilya Dryomov wrote:
> On Mon, Feb 12, 2018 at 6:25 AM, Blair Bethwaite
> wrote:
>
Hi all,
Wondering if anyone can clarify whether there are any significant overheads
from rbd features like object-map, fast-diff, etc. I'm interested in both
performance overheads from a latency and space perspective, e.g., can
object-map be sanely deployed on a 100TB volume or does the client try
On 25 January 2018 at 04:53, Warren Wang wrote:
> The other thing I can think of is if you have OSDs locking up and getting
> corrupted, there is a severe XFS bug where the kernel will throw a NULL
> pointer dereference under heavy memory pressure. Again, it's due to memory
> issues, but you wi
+1 to Warren's advice on checking for memory fragmentation. Are you
seeing kmem allocation failures in dmesg on these hosts?
On 24 January 2018 at 10:44, Warren Wang wrote:
> Check /proc/buddyinfo for memory fragmentation. We have some pretty severe
> memory frag issues with Ceph to the point wh
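For anyone else checking this, the interesting part of /proc/buddyinfo is the right-hand columns (free pages at the higher orders) -- a quick way to eyeball it:

    # Print the four highest-order free-page counts per zone; counts stuck at
    # or near zero under load line up with kmem allocation failures in dmesg.
    with open("/proc/buddyinfo") as f:
        for line in f:
            fields = line.split()
            node, zone, counts = fields[1].rstrip(","), fields[3], fields[4:]
            print("node", node, zone, "high-order free:", counts[-4:])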
Firstly, the OP's premise in asking, "Or should there be a difference
of 10x", is fundamentally incorrect. Greater bandwidth does not mean
lower latency, though the latter almost always results in the former.
Unfortunately, changing the speed of light remains a difficult
engineering challenge :-). H
What type of SAS disks, spinners or SSD? You really need to specify
the sustained write throughput of your OSD nodes if you want to figure
out whether your network is sufficient/appropriate.
At 3x replication if you want to sustain e.g. 1 GB/s of write traffic
from clients then you will need 2 GB/
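Spelling that arithmetic out with illustrative numbers:

    client_write_gbs = 1.0       # target client write throughput, GB/s
    replicas = 3
    replication_gbs = (replicas - 1) * client_write_gbs  # 2 GB/s between OSD nodes
    disk_commit_gbs = replicas * client_write_gbs        # 3 GB/s of writes hitting disks
    print(replication_gbs, disk_commit_gbs)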
Please shout out and/or add to
https://etherpad.openstack.org/p/SYD-forum-Ceph-OpenStack-BoF.
Also, hope to see some of the core team there!
Cheers,
On 7 July 2017 at 13:47, Blair Bethwaite wrote:
> Hi all,
>
> Are there any "official" plans to have Ceph events co-hosted with O
ts is the cause of these
> bounces and that setting is beyond your control.
>
> Because Google knows best[tm].
>
> Christian
>
> On Mon, 16 Oct 2017 13:50:43 +1100 Blair Bethwaite wrote:
>
>> Hi all,
>>
>> This is a mailing-list admin issue - I keep being u
Hi all,
This is a mailing-list admin issue - I keep being unsubscribed from
ceph-users with the message:
"Your membership in the mailing list ceph-users has been disabled due
to excessive bounces..."
This seems to be happening on roughly a monthly basis.
Thing is I have no idea what the bounce is
Hi all,
I just submitted an OpenStack Forum proposal for a Ceph BoF session at
OpenStack Sydney. If you're interested in seeing this happen then
please hit up http://forumtopics.openstack.org/cfp/details/46 with
your comments / +1's.
--
Cheers,
~Blairo
On 23 September 2017 at 11:58, Sage Weil wrote:
> I'm *much* happier with 2 :) so no complaint from me. I just heard a lot
> of "2 years" and 2 releases (18 months) doesn't quite cover it. Maybe
> it's best to start with that, though? It's still an improvement over the
> current ~12 months.
FW
>>>           5,641   -76.41%   4.24
>>> 16   24,587    5,643   -77.05%   4.36
>>> RW
>>> 1    20,379   11,166   -45.21%   1.83
>>> 2    34,246    9,525   -72.19%   3.60
>>> 8    33,195    9,300   -71.98%   3.57
>>> 16   31,641    9,762   -69.15
Hi all,
We're looking at readdressing the mons (moving to a different subnet)
on one of our clusters. Most of the existing clients are OpenStack
guests on Libvirt+KVM and we have a major upgrade to do for those in
coming weeks that will mean they have to go down briefly, that will
give us an oppor
You're the OP, so for that, thanks! Our upgrade plan (for Thursday
this week) was modified today to include prep work to double-check the
caps.
On 12 September 2017 at 21:26, Nico Schottelius
wrote:
>
> Well, we basically needed to fix it, that's why did it :-)
>
>
>
(Apologies if this is a double post - I think my phone turned it into
HTML and so bounced from ceph-devel)...
We currently use both upstream and distro (RHCS) versions on different
clusters. Downstream releases are still free to apply their own
models.
I like the idea of a predictable (and more r
On 7 September 2017 at 01:23, Sage Weil wrote:
> * Drop the odd releases, and aim for a ~9 month cadence. This splits the
> difference between the current even/odd pattern we've been doing.
>
> + eliminate the confusing odd releases with dubious value
> + waiting for the next release isn't quite a
Great to see this issue sorted.
I have to say I am quite surprised anyone would implement the
export/import workaround mentioned here without *first* racing to this
ML or IRC and crying out for help. This is a valuable resource, made
more so by people sharing issues.
Cheers,
On 12 September 2017
On 12 September 2017 at 01:15, Blair Bethwaite
wrote:
> Flow-control may well just mask the real problem. Did your throughput
> improve? Also, does that mean flow-control is on for all ports on the
> switch...? IIUC, then such "global pause" flow-control will mean switchpor
Flow-control may well just mask the real problem. Did your throughput
improve? Also, does that mean flow-control is on for all ports on the
switch...? IIUC, then such "global pause" flow-control will mean
switchports with links to upstream network devices will also be paused if
the switch is attemp
Hi all,
(Sorry if this shows up twice - I got auto-unsubscribed and so first
attempt was blocked)
I'm keen to read up on some performance comparisons for replication versus
EC on HDD+SSD based setups. So far the only recent thing I've found is
Sage's Vault17 slides [1], which have a single slide
Hi Brad,
On 22 July 2017 at 09:04, Brad Hubbard wrote:
> Could you share what kernel/distro you are running and also please test
> whether
> the error message can be triggered by running the "blkid" command?
I'm seeing it on RHEL7.3 (3.10.0-514.2.2.el7.x86_64). See Red Hat
support case #0189101
de? And could you give some
>> information of your cephfs's usage pattern, for example, does your client
>> nodes directly mount cephfs or mount it through an NFS, or something like
>> it, running a directory that is mounted with cephfs and are you using
>> ceph-fus
Interesting. Any FUSE client data-points?
On 19 July 2017 at 20:21, Дмитрий Глушенок wrote:
> RBD (via krbd) was in action at the same time - no problems.
>
> On 19 July 2017, at 12:54, Blair Bethwaite
> wrote:
>
> It would be worthwhile repeating the first test (crashin
It would be worthwhile repeating the first test (crashing/killing an
OSD host) again with just plain rados clients (e.g. rados bench)
and/or rbd. It's not clear whether your issue is specifically related
to CephFS or actually something else.
Cheers,
On 19 July 2017 at 19:32, Дмитрий Глушенок wro
We are a data-intensive university, with an increasingly large fleet
of scientific instruments capturing various types of data (mostly
imaging of one kind or another). That data typically needs to be
stored, protected, managed, shared, connected/moved to specialised
compute for analysis. Given the
Brilliant, thanks Marcus. We have just (noticed we've) hit this too
and looks like your script will fix this (will test and report
back...).
On 18 July 2017 at 14:08, Marcus Furlong wrote:
> [ 92.938882] XFS (sdi1): Mounting V5 Filesystem
> [ 93.065393] XFS (sdi1): Ending clean mount
> [ 93.17529
> so-called active-standby mode? And could you give some information of your
> cephfs's usage pattern, for example, does your client nodes directly mount
> cephfs or mount it through an NFS, or something like it, running a directory
> that is mounted with cephfs and are you u
It works and can reasonably be called "production ready". However in
Jewel there are still some features (e.g. directory sharding, multi
active MDS, and some security constraints) that may limit widespread
usage. Also note that userspace client support in e.g. nfs-ganesha and
samba is a mixed bag a
d since I'm kind of a newcomer myself. I'd also like a Ceph
> BoF.
>
> <3 Trilliams
>
> Sent from my iPhone
>
>> On Jul 6, 2017, at 10:50 PM, Blair Bethwaite
>> wrote:
>>
>> Oops, this time plain text...
>>
>>> On 7 July 2017
Hi Greg,
On 12 July 2017 at 03:48, Gregory Farnum wrote:
> I poked at Patrick about this and it sounds like the venue is a little
> smaller than usual (and community planning is a little less
> planned-out for those ranges than usual) so things are still up in the
> air. :/
Yes, it is a smaller
Oops, this time plain text...
On 7 July 2017 at 13:47, Blair Bethwaite wrote:
>
> Hi all,
>
> Are there any "official" plans to have Ceph events co-hosted with OpenStack
> Summit Sydney, like in Boston?
>
> The call for presentations closes in a week. The Forum
Hi all,
Are there any "official" plans to have Ceph events co-hosted with OpenStack
Summit Sydney, like in Boston?
The call for presentations closes in a week. The Forum will be organised
throughout September and (I think) that is the most likely place to have
e.g. Ceph ops sessions like we have
How did you even get 60M objects into the bucket...?! The stuck requests
are only likely to be impacting the PG in which the bucket index is stored.
Hopefully you are not running other pools on those OSDs?
You'll need to upgrade to Jewel and gain the --bypass-gc radosgw-admin
flag, that speeds up
On 5 July 2017 at 19:54, Wido den Hollander wrote:
> I'd probably stick with 2x10Gbit for now and use the money I saved on more
> memory and faster CPUs.
>
On the latency point - you will get an improvement going from 10Gb to
25Gb, but stepping up to 100Gb won't significantly change things as 1
Hi all,
I'm doing some work to evaluate the risks involved in running 2r storage
pools. On the face of it my naive disk failure calculations give me 4-5
nines for a 2r pool of 100 OSDs (no copyset awareness, i.e., secondary disk
failure based purely on chance of any 1 of the remaining 99 OSDs fail
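For the record, the naive model I'm using looks roughly like the below -- the AFR and recovery-window values are illustrative assumptions plugged in for the sake of the sketch, not measurements:

    # Chance that any one of the remaining 99 OSDs fails while the first failed
    # OSD's data is being re-replicated -- no copyset awareness at all.
    osds = 100
    afr = 0.01                 # assumed annual failure rate per disk
    recovery_hours = 0.5       # assumed time to restore full redundancy
    window = recovery_hours / (24 * 365)
    p_second = 1 - (1 - afr * window) ** (osds - 1)
    p_loss_year = osds * afr * p_second   # expected data-loss events per year
    print("durability ~ %.5f" % (1 - p_loss_year))   # ~0.99994 with these inputs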
On 15 May 2017 at 23:21, Danny Al-Gaaf wrote:
> What about moving the event to the next OpenStack Summit in Sydney, let
> say directly following the Summit.
+1!
The Ceph day just gone at the Boston OpenStack Summit felt a lot like
I imagined Cephalocon would be anyway, and as far as I know the
O
ng, or whether documenting this would suffice?
>>
>> Any doc contribution would be welcomed.
>>
>> On Wed, May 3, 2017 at 7:18 PM, Blair Bethwaite
>> wrote:
>>> On 3 May 2017 at 19:07, Dan van der Ster wrote:
>>>> Whether cpu_dma_latency should
On 3 May 2017 at 19:07, Dan van der Ster wrote:
> Whether cpu_dma_latency should be 0 or 1, I'm not sure yet. I assume
> your 30% boost was when going from throughput-performance to
> dma_latency=0, right? I'm trying to understand what is the incremental
> improvement from 1 to 0.
Probably minima
On 3 May 2017 at 18:38, Dan van der Ster wrote:
> Seems to work for me, or?
Yeah now that I read the code more I see it is opening and
manipulating /dev/cpu_dma_latency in response to that option, so the
TODO comment seems to be outdated. I verified tuned
latency-performance _is_ doing this prope
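For anyone poking at this outside of tuned: the PM QoS interface just wants some process to hold /dev/cpu_dma_latency open with the desired value written into it, and the request is dropped as soon as the fd is closed. A minimal stand-in (needs root):

    import os
    import signal
    import struct

    # Hold the cpu_dma_latency PM QoS target at 0us for as long as this
    # process lives; the kernel drops the request when the fd is closed.
    fd = os.open("/dev/cpu_dma_latency", os.O_WRONLY)
    os.write(fd, struct.pack("i", 0))
    try:
        signal.pause()          # sit here until killed
    finally:
        os.close(fd)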
On 3 May 2017 at 18:15, Dan van der Ster wrote:
> It looks like el7's tuned natively supports the pmqos interface in
> plugins/plugin_cpu.py.
Ahha, you are right, but I'm sure I tested tuned and it did not help.
Thanks for pointing out this script, I had not noticed it before and I
can see now wh
Hi Dan,
On 3 May 2017 at 17:43, Dan van der Ster wrote:
> We use cpu_dma_latency=1, because it was in the latency-performance profile.
> And indeed by setting cpu_dma_latency=0 on one of our OSD servers,
> powertop now shows the package as 100% in turbo mode.
I tried both 0 and 1 and didn't noti
On 3 May 2017 at 17:24, Wido den Hollander wrote:
> Is this a HDD or SSD cluster? I assume the latter? Since usually HDDs are
> 100% busy during heavy recovery.
HDD with SSD journals. Our experience at this scale, ~900 OSDs over 33
hosts, is that it takes a fair percentage of PGs to be involved
Hi all,
We recently noticed that despite having BIOS power profiles set to
performance on our RHEL7 Dell R720 Ceph OSD nodes, CPU frequencies
never seemed to be getting into the top of the range, and in fact spent a
lot of time in low C-states despite that BIOS option supposedly disabling
C-s
I suppose the other option here, which I initially dismissed because
Red Hat are not supporting it, is to have a CephFS dir/tree bound to a
cache-tier fronted EC pool. Is anyone having luck with such a setup?
On 3 March 2017 at 21:40, Blair Bethwaite wrote:
> Hi Marc,
>
> Whilst I agr
Thanks for the useful reply Robin and sorry for not getting back sooner...
> On Fri, Mar 03, 2017 at 18:01:00 +, Robin H. Johnson wrote:
> On Fri, Mar 03, 2017 at 10:55:06 +1100, Blair Bethwaite wrote:
>> Does anyone have any recommendations for good tools to perform
>>
speed up, because file information is gotten from the mds
> daemon, so this should save on one rsync file lookup, and we expect that
> we can run more tasks in parallel.
>
>
>
>
>
> -Original Message-
> From: Blair Bethwaite [mailto:blair.bethwa...@gmail.com]
Hi all,
Does anyone have any recommendations for good tools to perform
file-system/tree backups and restores to/from a RGW object store (Swift or
S3 APIs)? Happy to hear about both FOSS and commercial options please.
I'm interested in:
1) tools known to work or not work at all for a basic file-ba
ake default
> step chooseleaf firstn 0 type rack
> step emit
> }
>
>
> On Thu, Feb 16, 2017 at 7:10 AM, Blair Bethwaite <
> blair.bethwa...@gmail.com> wrote:
>
>> Am I going nuts (it is extremely late/early here), or is crushtool
>> totally broken? I'm
Am I going nuts (it is extremely late/early here), or is crushtool
totally broken? I'm trying to configure a ruleset that will place
exactly one replica into three different racks (under each of which
there are 8-10 hosts). crushtool has given me empty mappings for just
about every rule I've tried
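For context, the shape of rule I'm after is roughly the below (names are placeholders), and I'm checking the output with something like "crushtool -i compiled.map --test --rule 1 --num-rep 3 --show-mappings":

    rule one-per-rack {
            ruleset 1
            type replicated
            min_size 3
            max_size 3
            step take default
            step chooseleaf firstn 0 type rack
            step emit
    }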
ption here is that you have enough space in your cluster for
a replicated pool that will temporarily hold the intermediate data.
> On 7 February 2017 at 23:50, Blair Bethwaite
> wrote:
>> 1) insert a large enough temporary replicated pool as a cache tier
The cache-tiering feature is s
On 7 February 2017 at 23:50, Blair Bethwaite wrote:
> 1) insert a large enough temporary replicated pool as a cache tier
> 2) somehow force promotion of every object into the cache (don't see
> any way to do that other than actually read them - but at least some
> creative script
Hi all,
Wondering if anyone has come up with a quick and minimal impact way of
moving data between erasure coded pools? We want to shrink an existing
EC pool (also changing the EC profile at the same time) that backs our
main RGW buckets. Thus far the only successful way I've found of
managing the
Worth considering OpenStack and Ubuntu cloudarchive release cycles
here. Mitaka is the release where all Ubuntu OpenStack users need to
upgrade from Trusty to Xenial - so far Mitaka and now Newton
deployments are still in the minority (see the OpenStack
user/deployment survey for the data) and I ex
>
>
> On Fri, Jul 22, 2016 at 2:18 AM, Yan, Zheng wrote:
>>
>> On Fri, Jul 22, 2016 at 11:15 AM, Blair Bethwaite
>> wrote:
>> > Thanks Zheng,
>> >
>> > On 22 July 2016 at 12:12, Yan, Zheng wrote:
>> >> We actively back-port fi
Thanks Zheng,
On 22 July 2016 at 12:12, Yan, Zheng wrote:
> We actively back-port fixes to RHEL 7.x kernel. When RHCS2.0 release,
> the RHEL kernel should contain fixes up to 3.7 upstream kernel.
You meant 4.7 right?
--
Cheers,
~Blairo
t goes :)
>
> - Ken
>
> On Tue, Jul 19, 2016 at 11:45 PM, Blair Bethwaite
> wrote:
>> Hi all,
>>
>> We've started a CephFS Samba PoC on RHEL but just noticed the Samba
>> Ceph VFS doesn't seem to be included with Samba on RHEL, or we're not
>>
Hi all,
We've started a CephFS Samba PoC on RHEL but just noticed the Samba
Ceph VFS doesn't seem to be included with Samba on RHEL, or we're not
looking in the right place. Trying to avoid needing to build Samba
from source if possible. Any pointers appreciated.
--
Cheers,
~Blairo
On 25 Jun 2016 6:02 PM, "Kyle Bader" wrote:
> fdatasync takes longer when you have more inodes in the slab caches, it's
the double edged sword of vfs_cache_pressure.
That's a bit sad when, iiuc, it's only journals doing fdatasync in the Ceph
write path. I'd have expected the vfs to handle this on
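If anyone wants to compare numbers, a read-only peek at the current setting and how much is sitting in slab:

    # Current vfs_cache_pressure plus the slab counters from /proc/meminfo;
    # when this bites, SReclaimable is typically dominated by dentries/inodes.
    with open("/proc/sys/vm/vfs_cache_pressure") as f:
        print("vfs_cache_pressure:", f.read().strip())
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(("Slab", "SReclaimable", "SUnreclaim")):
                print(line.strip())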
On 23 June 2016 at 12:37, Christian Balzer wrote:
> Case in point, my main cluster (RBD images only) with 18 5+TB OSDs on 3
> servers (64GB RAM each) has 1.8 million 4MB RBD objects using about 7% of
> the available space.
> Don't think I could hit this problem before running out of space.
Perhap
d there. I recall the
UnitedStack folks using 32MB.
Cheers,
On 23 June 2016 at 12:28, Christian Balzer wrote:
> On Thu, 23 Jun 2016 12:01:38 +1000 Blair Bethwaite wrote:
>
>> On 23 June 2016 at 11:41, Wade Holler wrote:
>> > Workload is native librados with python. ALL 4k obje
On 23 June 2016 at 11:41, Wade Holler wrote:
> Workload is native librados with python. ALL 4k objects.
Was that meant to be 4MB?
--
Cheers,
~Blairo
and a write occurs (maybe a read, I forget).
>>>
>> If it's a read a plain scrub might do the trick.
>>
>> Christian
>>> Warren
>>>
>>>
>>> From: ceph-users
>>> mailto:ceph-users-boun...@lists.ceph.com>>
>>>
On 20 June 2016 at 09:21, Blair Bethwaite wrote:
> slow request issues). If you watch your xfs stats you'll likely get
> further confirmation. In my experience xs_dir_lookups balloons (which
> means directory lookups are missing cache and going to disk).
Murphy's a bitch.
Hi Wade,
(Apologies for the slowness - AFK for the weekend).
On 16 June 2016 at 23:38, Wido den Hollander wrote:
>
>> On 16 June 2016 at 14:14, Wade Holler wrote:
>>
>>
>> Hi All,
>>
>> I have a repeatable condition when the object count in a pool gets to
>> 320-330 million the object write ti
Hi Wade,
What IO are you seeing on the OSD devices when this happens (see e.g.
iostat), are there short periods of high read IOPS where (almost) no
writes occur? What does your memory usage look like (including slab)?
Cheers,
On 16 June 2016 at 22:14, Wade Holler wrote:
> Hi All,
>
> I have a r
few mins/secs during the rebalancing
> task. Not sure, these low priority configurations are doing the job as
> its.
>
> Thanks
> Swami
>
> On Thu, Jun 9, 2016 at 5:50 PM, Blair Bethwaite
> wrote:
>> Swami,
>>
>> Run it with the help option for more context:
>= 0.704429) [1.00 -> 0.95]
> 173 (0.817063 >= 0.704429) [1.00 -> 0.95]
> ==
>
> does the above script say to reweight 43 -> 0.95?
>
> Thanks
> Swami
>
> On Wed, Jun 8, 2016 at 10:34 AM, M Ranga Swami Reddy
> wrote:
>> Blair - Thanks for
pt has option for dry run?
>
> Thanks
> Swami
>
> On Wed, Jun 8, 2016 at 6:35 AM, Blair Bethwaite
> wrote:
>> Swami,
>>
>> Try
>> https://github.com/cernceph/ceph-scripts/blob/master/tools/crush-reweight-by-utilization.py,
>> that'll work with Fire
Swami,
Try
https://github.com/cernceph/ceph-scripts/blob/master/tools/crush-reweight-by-utilization.py,
that'll work with Firefly and allow you to only tune down weight of a
specific number of overfull OSDs.
Cheers,
On 7 June 2016 at 23:11, M Ranga Swami Reddy wrote:
> OK, understood...
> To f
Hi all,
What are the densest node configs out there, and what are your
experiences with them and tuning required to make them work? If we can
gather enough info here then I'll volunteer to propose some upstream
docs covering this.
At Monash we currently have some 32-OSD nodes (running RHEL7), tho
Hi all,
IMHO reweight-by-utilization should come with some sort of warning, it
just suddenly reweights everything - no dry run, no confirmation,
apparently no option to see what it's going to do. It also doesn't
appear to consider pools and hence crush rulesets, which I imagine
could result in it
Thanks Jason, thanks Dan,
On 9 March 2016 at 01:34, Jason Dillaman wrote:
> Are you interesting in the max FD count or max thread count? You mention
> both in your email.
True, I did mix the two somewhat incorrectly - I was sort of guessing
there'd be some number of threads per socket or vice
Hi all,
Not getting very far with this query internally (RH), so hoping
someone familiar with the code can spare me the C++ pain...
We've hit soft thread count ulimits a couple of times with different
Ceph clusters. The clients (Qemu/KVM guests on both Ubuntu and RHEL
hosts) have hit the limit th
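For anyone wanting to check what their client processes are actually capped at, the limits are easy to read back (or see /proc/<pid>/limits for a running qemu):

    import resource

    # Soft/hard limits as seen by this process.
    for name, lim in (("nproc", resource.RLIMIT_NPROC),
                      ("nofile", resource.RLIMIT_NOFILE)):
        soft, hard = resource.getrlimit(lim)
        print(name, "soft:", soft, "hard:", hard)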
Hi all,
Does anyone know if RGW supports Keystone's PKIZ tokens, or better yet know
a list of the supported token types?
Cheers,
--
Cheers,
~Blairo
Hi Matt,
(CC'ing in ceph-users too - similar reports there:
http://www.spinics.net/lists/ceph-users/msg23037.html)
We've seen something similar for KVM [lib]RBD clients acting as NFS
gateways within our OpenStack cloud, the NFS services were locking up
and causing client timeouts whenever we star
Hi all,
We're setting up radosgw against a large 8/3 EC pool (across 24 nodes)
with a modest 4 node cache tier in front (those 4 nodes each have 20x
10k SAS drives and 4x Intel DC S3700 journals). With the cache tiering
we're not sure what the best setup is for all the various peripheral
rgw pools
On 13 July 2015 at 21:36, Emmanuel Florac wrote:
> I've benchmarked it and found it has about exactly the same performance
> profile as the He6. Compared to the Seagate 6TB it draws much less
> power (almost half), and that's the main selling point IMO, with
> durability.
That's consistent with t
2015 at 11:01, Christian Balzer wrote:
> On Wed, 8 Jul 2015 10:28:17 +1000 Blair Bethwaite wrote:
>
>> Hi folks,
>>
>> Does anyone have any experience with the newish HGST He8 8TB Helium
>> filled HDDs? Storagereview looked at th
Hi folks,
Does anyone have any experience with the newish HGST He8 8TB Helium
filled HDDs? Storagereview looked at them here:
http://www.storagereview.com/hgst_ultrastar_helium_he8_8tb_enterprise_hard_drive_review.
I'm torn as to the lower read performance shown there than e.g. the
He6 or Seagate
Has anyone had any luck using the radosgw-sync-agent to push or pull
to/from "real" S3?
--
Cheers,
~Blairo
> out CephFS too. Hadoop is a predictable workload that we haven't seen
> break at all in several years and the bindings handle data locality
> and such properly. :)
> -Greg
>
> On Thu, May 21, 2015 at 11:24 PM, Wang, Warren
> > wrote:
> >
> > On 5/21/15,
Hi Warren,
On 20 May 2015 at 23:23, Wang, Warren wrote:
> We've contemplated doing something like that, but we also realized that
> it would result in manual work in Ceph every time we lose a drive or
> server,
> and a pretty bad experience for the customer when we have to do
> maintenance.
Yeah
Hi Warren,
Following our brief chat after the Ceph Ops session at the Vancouver
summit today, I added a few more notes to the etherpad
(https://etherpad.openstack.org/p/YVR-ops-ceph).
I wonder whether you'd considered setting up crush layouts so you can
have multiple cinder AZs or volume-types th
Hi all,
I understand the present pool tiering infrastructure is intended to work
for >2 layers? We're presently considering backup strategies for large
pools and wondered how much of a stretch it would be to have a base tier
sitting in e.g. an S3 store... I imagine a pg in the base+1 tier mapping
Hi Mark,
Cool, that looks handy. Though it'd be even better if it could go a
step further and recommend re-weighting values to balance things out
(or increased PG counts where needed).
Cheers,
On 5 March 2015 at 15:11, Mark Nelson wrote:
> Hi All,
>
> Recently some folks showed interest in gath
Sorry if this is actually documented somewhere, but is it possible to
create and use multiple filesystems on the data and metadata
pools? I'm guessing yes, but requires multiple MDSs?
--
Cheers,
~Blairo
It'd be nice to see a standard/recommended LB and HA approach for RGW
with supporting documentation too.
On 26 February 2015 at 06:31, Sage Weil wrote:
> Hey,
>
> We are considering switching to civetweb (the embedded/standalone rgw web
> server) as the primary supported RGW frontend instead of t
ar non-default namespaces (managed by the cinder rbd driver),
that way leaking secrets from cinder gives less exposure - but I guess
that would be a bit of a change from the current namespace
functionality.
On 13 February 2015 at 05:57, Josh Durgin wrote:
> On 02/10/2015 07:54 PM, Blair Bethwaite w
On 11 February 2015 at 20:43, John Spray wrote:
> Namespaces in CephFS would become useful in conjunction with limiting
> client authorization by sub-mount -- that way subdirectories could be
> assigned a layout with a particular namespace, and a client could be
> limited to that namespace on the
Just came across this in the docs:
"Currently (i.e., firefly), namespaces are only useful for
applications written on top of librados. Ceph clients such as block
device, object storage and file system do not currently support this
feature."
Then found:
https://wiki.ceph.com/Planning/Sideboard/rbd%