Re: [openstack-dev] Gantt project

2014-08-12 Thread John Dickinson
Thanks for the info. It does seem like most OpenStack projects have some 
concept of a "scheduler", as you mentioned. Perhaps that's expected in any 
distributed system.

Is it expected or assumed that Gantt will become the common scheduler for all 
OpenStack projects? That is, is Gantt's plan and/or design goals to provide 
scheduling (or a "scheduling framework") for all OpenStack projects? Perhaps 
this is a question for the TC rather than Don. [1]

Since Gantt is initially intended to be used by Nova, will it be under the 
compute program or will there be a new program created for it?


--John


[1] You'll forgive me, but I've certainly seen OpenStack projects move from 
"you can use it if you want" to "you must start using this" in the past.




On Aug 11, 2014, at 11:09 PM, Dugger, Donald D  
wrote:

> This is to make sure that everyone knows about the Gantt project and to make 
> sure that no one has a strong aversion to what we are doing.
>  
> The basic goal is to split the scheduler out of Nova and create a separate 
> project that, ultimately, can be used by other OpenStack projects that have a 
> need for scheduling services.  Note that we have no intention of forcing 
> people to use Gantt but it seems silly to have a scheduler inside Nova, 
> another scheduler inside Cinder, another scheduler inside Neutron and so 
> forth.  This is clearly predicated on the idea that we can create a common, 
> flexible scheduler that can meet everyone’s needs but, as I said, theirs is 
> no rule that any project has to use Gantt, if we don’t meet your needs you 
> are free to roll your own scheduler.
>  
> We will start out by just splitting the scheduler code out of Nova into a 
> separate project that will initially only be used by Nova.  This will be 
> followed by enhancements, like a common API, that can then be utilized by 
> other projects.
>  
> We are cleaning up the internal interfaces in the Juno release with the 
> expectation that early in the Kilo cycle we will be able to do the split and 
> create a Gantt project that is completely compatible with the current Nova 
> scheduler.
>  
> Hopefully our initial goal (a separate project that is completely compatible 
> with the Nova scheduler) is not too controversial but feel free to reply with 
> any concerns you may have.
>  
> --
> Don Dugger
> "Censeo Toto nos in Kansa esse decisse." - D. Gale
> Ph: 303/443-3786



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-12 Thread John Dickinson

On Aug 12, 2014, at 11:08 AM, Doug Hellmann  wrote:

> 
> On Aug 12, 2014, at 1:44 PM, Dolph Mathews  wrote:
> 
>> 
>> On Tue, Aug 12, 2014 at 12:30 AM, Joe Gordon  wrote:
>> 
>> 
>> 
>> On Fri, Aug 8, 2014 at 6:58 AM, Kyle Mestery  wrote:
>> On Thu, Aug 7, 2014 at 1:26 PM, Joe Gordon  wrote:
>> >
>> >
>> >
>> > On Tue, Aug 5, 2014 at 9:03 AM, Thierry Carrez 
>> > wrote:
>> >>
>> >> Hi everyone,
>> >>
>> >> With the incredible growth of OpenStack, our development community is
>> >> facing complex challenges. How we handle those might determine the
>> >> ultimate success or failure of OpenStack.
>> >>
>> >> With this cycle we hit new limits in our processes, tools and cultural
>> >> setup. This resulted in new limiting factors on our overall velocity,
>> >> which is frustrating for developers. This resulted in the burnout of key
>> >> firefighting resources. This resulted in tension between people who try
>> >> to get specific work done and people who try to keep a handle on the big
>> >> picture.
>> >>
>> >> It all boils down to an imbalance between strategic and tactical
>> >> contributions. At the beginning of this project, we had a strong inner
>> >> group of people dedicated to fixing all loose ends. Then a lot of
>> >> companies got interested in OpenStack and there was a surge in tactical,
>> >> short-term contributions. We put on a call for more resources to be
>> >> dedicated to strategic contributions like critical bugfixing,
>> >> vulnerability management, QA, infrastructure... and that call was
>> >> answered by a lot of companies that are now key members of the OpenStack
>> >> Foundation, and all was fine again. But OpenStack contributors kept on
>> >> growing, and we grew the narrowly-focused population way faster than the
>> >> cross-project population.
>> >>
>> >>
>> >> At the same time, we kept on adding new projects to incubation and to
>> >> the integrated release, which is great... but the new developers you get
>> >> on board with this are much more likely to be tactical than strategic
>> >> contributors. This also contributed to the imbalance. The penalty for
>> >> that imbalance is twofold: we don't have enough resources available to
>> >> solve old, known OpenStack-wide issues; but we also don't have enough
>> >> resources to identify and fix new issues.
>> >>
>> >> We have several efforts under way, like calling for new strategic
>> >> contributors, driving towards in-project functional testing, making
>> >> solving rare issues a more attractive endeavor, or hiring resources
>> >> directly at the Foundation level to help address those. But there is a
>> >> topic we haven't raised yet: should we concentrate on fixing what is
>> >> currently in the integrated release rather than adding new projects ?
>> >
>> >
>> > TL;DR: Our development model is having growing pains. until we sort out the
>> > growing pains adding more projects spreads us too thin.
>> >
>> +100
>> 
>> > In addition to the issues mentioned above, with the scale of OpenStack 
>> > today
>> > we have many major cross project issues to address and no good place to
>> > discuss them.
>> >
>> We do have the ML, as well as the cross-project meeting every Tuesday
>> [1], but we as a project need to do a better job of actually bringing
>> up relevant issues here.
>> 
>> [1] https://wiki.openstack.org/wiki/Meetings/ProjectMeeting
>> 
>> >>
>> >>
>> >> We seem to be unable to address some key issues in the software we
>> >> produce, and part of it is due to strategic contributors (and core
>> >> reviewers) being overwhelmed just trying to stay afloat of what's
>> >> happening. For such projects, is it time for a pause ? Is it time to
>> >> define key cycle goals and defer everything else ?
>> >
>> >
>> >
>> > I really like this idea, as Michael and others alluded to in above, we are
>> > attempting to set cycle goals for Kilo in Nova. but I think it is worth
>> > doing for all of OpenStack. We would like to make a list of key goals 
>> > before
>> > the summit so that we can plan our summit sessions around the goals. On a
>> > really high level one way to look at this is, in Kilo we need to pay down
>> > our technical debt.
>> >
>> > The slots/runway idea is somewhat separate from defining key cycle goals; 
>> > we
>> > can be approve blueprints based on key cycle goals without doing slots.  
>> > But
>> > with so many concurrent blueprints up for review at any given time, the
>> > review teams are doing a lot of multitasking and humans are not very good 
>> > at
>> > multitasking. Hopefully slots can help address this issue, and hopefully
>> > allow us to actually merge more blueprints in a given cycle.
>> >
>> I'm not 100% sold on what the slots idea buys us. What I've seen this
>> cycle in Neutron is that we have a LOT of BPs proposed. We approve
>> them after review. And then we hit one of two issues: Slow review
>> cycles, and slow code turnaround issues. I don't think slots would
>> help this, and in fact may cause more 

Re: [openstack-dev] [Swift] Can gatekeeper middleware be removed from pipeline?

2014-08-19 Thread John Dickinson
If you do not have the gatekeeper explicitly referenced in your proxy pipeline, 
Swift will automatically add it.

--John



On Aug 19, 2014, at 3:09 AM, Daisuke Morita  
wrote:

> 
> Hi,
> 
> Can gatekeeper middleware be removed from pipeline?
> This does not mean that i want to use Swift without gatekeeper because
> it can be security risk, but i just want to make it clear whether it is
> configurable or not.
> 
> 
> Thanks,
> 
> -- 
> Daisuke Morita 
> NTT Software Innovation Center, NTT Corporation
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs

2014-08-22 Thread John Dickinson
I think Anne makes some excellent points about the pattern being proposed being 
unlikely to be commonly implemented across all the programs (or, at best, very 
difficult). Let's not try to formalize another "best practice" that works many 
times and force it to work every time. Here's an alternate proposal:

Let's let PTLs be PTLs and effectively coordinate and manage the activity in 
their respective projects. And let's get the PTLs together for one or two days 
every cycle to discuss project issues. Just PTLs, and let's focus on the 
project management stuff and some cross-project issues.

Getting the PTLs together would allow them to discuss cross-project issues, 
share frustrations and solutions about what does and doesn't work. Basically, 
think of it as a mid-cycle meetup, but for PTLs. (Perhaps we could even ask the 
Foundation to sponsor it.)

--John





On Aug 22, 2014, at 6:02 PM, Anne Gentle  wrote:

> 
> 
> 
> On Fri, Aug 22, 2014 at 6:17 PM, Rochelle.RochelleGrober 
>  wrote:
> /flame-on
> Ok, this is funny to some of us in the community.  The general populace of 
> this community is so against the idea of management that they will use the 
> term for a despotic dictator as a position name rather than "manager".  
> Sorry, but this needed to be said.
> /flame-off
> 
> Specific comments in line:
> 
> Thierry Carrez wrote:
> >
> > Hi everyone,
> >
> > We all know being a project PTL is an extremely busy job. That's
> > because
> > in our structure the PTL is responsible for almost everything in a
> > project:
> >
> > - Release management contact
> > - Work prioritization
> > - Keeping bugs under control
> > - Communicate about work being planned or done
> > - Make sure the gate is not broken
> > - Team logistics (run meetings, organize sprints)
> > - ...
> >
> 
> Point of clarification:  I've heard PTL=Project Technical Lead and 
> PTL=Program Technical Lead. Which is it?  It is kind of important as 
> OpenStack grows, because the first is responsible for *a* project, and the 
> second is responsible for all projects within a program.
> 
> 
> Now Program, formerly Project.
>  
> I'd also like to set out as an example of a Program that is growing to 
> encompass multiple projects, the Neutron Program.  Look at how it is 
> expanding:
> 
> Multiple sub-teams for:  LBAAS, DNAAS, GBP, etc.  This model could be 
> extended such that:
> - the subteam is responsible for code reviews, including the first +2 for 
> design, architecture and code of the sub-project, always also keeping an eye 
> out that the sub-project code continues to both integrate well with the 
> program, and that the program continues to provide the needed code bits, 
> architecture modifications and improvements, etc. to support the sub-project.
> - the final +2/A would be from the Program reviewers to ensure that all 
> integrate nicely together into a single, cohesive program.
> - This would allow sub-projects to have core reviewers, along with the 
> program and be a good separation of duties.  It would also help to increase 
> the number of reviews moving to merged code.
> - Taken to a logical stepping stone, you would have project technical leads 
> for each project, and they would make up a program council, with the program 
> technical lead being the chair of the council.
> 
> This is a way to offload a good chunk of PTL tactical responsibilities and 
> help them focus more on the strategic.
> 
> > They end up being completely drowned in those day-to-day operational
> > duties, miss the big picture, can't help in development that much
> > anymore, get burnt out. Since you're either "the PTL" or "not the PTL",
> > you're very alone and succession planning is not working that great
> > either.
> >
> > There have been a number of experiments to solve that problem. John
> > Garbutt has done an incredible job at helping successive Nova PTLs
> > handling the release management aspect. Tracy Jones took over Nova bug
> > management. Doug Hellmann successfully introduced the concept of Oslo
> > liaisons to get clear point of contacts for Oslo library adoption in
> > projects. It may be time to generalize that solution.
> >
> > The issue is one of responsibility: the PTL is ultimately responsible
> > for everything in a project. If we can more formally delegate that
> > responsibility, we can avoid getting up to the PTL for everything, we
> > can rely on a team of people rather than just one person.
> >
> > Enter the Czar system: each project should have a number of liaisons /
> > official contacts / delegates that are fully responsible to cover one
> > aspect of the project. We need to have Bugs czars, which are
> > responsible
> > for getting bugs under control. We need to have Oslo czars, which serve
> > as liaisons for the Oslo program but also as active project-local oslo
> > advocates. We need Security czars, which the VMT can go to to progress
> > quickly on plugging vulnerabilities. We need release management czars,
> > to h

[openstack-dev] [Swift] 2.1.0-rc tagged

2014-08-25 Thread John Dickinson
Swift 2.1.0.rc1 has been tagged as our release candidate for 2.1.0. The plan is 
to let this RC soak for a week and then do the final release on Sept 1.

Please check it out and report any issues that you find.


Tag applied:
http://git.openstack.org/cgit/openstack/swift/commit/?id=8d02147d04a41477383de8e13bea6ac3fd2cade0

Tarball built:
http://tarballs.openstack.org/swift/swift-2.1.0.rc1.tar.gz

Bugfixes marked released at:
https://launchpad.net/swift/+milestone/2.1.0



--John






signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] Swift 2.1.0 released

2014-09-01 Thread John Dickinson
I'm happy to announce that Swift 2.1.0 has been released. This release includes 
several useful features that I'd like to highlight.

First, Swift's data placement algorithm was slightly changed to improve adding 
capacity. Specifically, now when you add a new region to an existing Swift 
cluster, there will not be a massive migration of data. If you've been wanting 
to expand your Swift cluster into another region, you can now do it painlessly.

Second, we've updated some of the logging and metrics tracking. We removed some 
needless log spam (cleaner logs!), added the process PID to the storage node 
log lines, and no count user errors as errors in StatsD metrics reporting.

We've also improved the object auditing process to allow for multiple processes 
at once. Using the new "concurrency" config value can speed up the overall 
auditor cycle time.

The tempurl middleware default allowed methods has been updated to allow POST 
and DELETE. This means that with no additional configuration, users can create 
tempURLs against any supported verb.

Finally, the list_endpoints middleware now has a v2 response that supports 
storage policies.

Please take a look at the full changelog to see what else has changed. I'd 
encourage everyone to upgrade to this new version of Swift. As always, you can 
upgrade with no end-user downtime.


Changelog:
http://git.openstack.org/cgit/openstack/swift/tree/CHANGELOG

Tarball:
http://tarballs.openstack.org/swift/swift-2.1.0.tar.gz

Launchpad:
https://launchpad.net/swift/+milestone/2.1.0


This release is the result of 28 contributors, including 7 new contributors. 
The first-time contributors to Swift are:

Jing Liuqing
Steve Martinelli
Matthew Oliver
Pawel Palucki
Thiago da Silva
Nirmal Thacker
Lin Yang

Thank you to everyone who contributed to this release, both as a dev and as the 
sysadmins who keep Swift running every day at massive scale around the world.

My vision for Swift is that everyone uses it every day, even if they don't 
realize it. We're well on our way to that goal. Thank you.

--John






signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] (Non-)consistency of the Swift hash ring implementation

2014-09-07 Thread John Dickinson
To test Swift directly, I used the CLI tools that Swift provides for managing 
rings. I wrote the following short script:

$ cat remakerings
#!/bin/bash

swift-ring-builder object.builder create 16 3 0
for zone in {1..4}; do
for server in {200..224}; do
for drive in {1..12}; do
swift-ring-builder object.builder add 
r1z${zone}-10.0.${zone}.${server}:6010/d${drive} 3000
done
done
done
swift-ring-builder object.builder rebalance



This adds 1200 devices. 4 zones, each with 25 servers, each with 12 drives 
(4*25*12=1200). The important thing is that instead of adding 1000 drives in 
one zone or in one server, I'm splaying across the placement hierarchy that 
Swift uses.

After running the script, I added one drive to one server to see what the 
impact would be and rebalanced. The swift-ring-builder tool detected that less 
than 1% of the partitions would change and therefore didn't move anything (just 
to avoid unnecessary data movement).

--John





On Sep 7, 2014, at 11:20 AM, Nejc Saje  wrote:

> Hey guys,
> 
> in Ceilometer we're using consistent hash rings to do workload
> partitioning[1]. We've considered using Ironic's hash ring implementation, 
> but found out it wasn't actually consistent (ML[2], patch[3]). The next thing 
> I noticed that the Ironic implementation is based on Swift's.
> 
> The gist of it is: since you divide your ring into a number of equal sized 
> partitions, instead of hashing hosts onto the ring, when you add a new host, 
> an unbound amount of keys get re-mapped to different hosts (instead of the 
> 1/#nodes remapping guaranteed by hash ring).
> 
> Swift's hash ring implementation is quite complex though, so I took the 
> conceptually similar code from Gregory Holt's blogpost[4] (which I'm guessing 
> is based on Gregory's efforts on Swift's hash ring implementation) and tested 
> that instead. With a simple test (paste[5]) of first having 1000 nodes and 
> then adding 1, 99.91% of the data was moved.
> 
> I have no way to test this in Swift directly, so I'm just throwing this out 
> there, so you guys can figure out whether there actually is a problem or not.
> 
> Cheers,
> Nejc
> 
> [1] https://review.openstack.org/#/c/113549/
> [2] 
> http://lists.openstack.org/pipermail/openstack-dev/2014-September/044566.html
> [3] https://review.openstack.org/#/c/118932/4
> [4] http://greg.brim.net/page/building_a_consistent_hashing_ring.html
> [5] http://paste.openstack.org/show/107782/
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Log Rationalization -- Bring it on!

2014-09-17 Thread John Dickinson

On Sep 17, 2014, at 8:43 PM, Jay Faulkner  wrote:

> Comments inline.
> 
>> -Original Message-
>> From: Monty Taylor [mailto:mord...@inaugust.com]
>> Sent: Wednesday, September 17, 2014 7:34 PM
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] Log Rationalization -- Bring it on!
>> 
>> On 09/17/2014 04:42 PM, Rochelle.RochelleGrober wrote:
>>> TL;DR:  I consider the poor state of log consistency a major
>>> impediment for more widespread adoption of OpenStack and would like to
>>> volunteer to own this cross-functional process to begin to unify and
>>> standardize logging messages and attributes for Kilo while dealing
>>> with the most egregious issues as the community identifies them.
>>> 
>> 
>> I fully support this, and I, for one, welcome our new log-standardization
>> overlords.
>> 
> 
> Something that could be interesting is to see if we can emit metrics 
> everytime a loggable event happens. There's already a spec+code being drafted 
> for Ironic in Kilo (https://review.openstack.org/#/c/100729/ 
> &https://review.openstack.org/#/c/103202/) that we're using downstream to 
> emit metrics from Ironic.

You may be interested to see how Swift has integrated StatsD events into a log 
adapter.

https://github.com/openstack/swift/blob/master/swift/common/utils.py#L1197

See also the StatsdClient class in that same file.

--John





> 
> If we have good organization of logging events, and levels, perhaps there's 
> possibly a way to make it easy for metrics to be emitted at that time as well.
> 
> -
> Jay Faulkner
> 
>> 
>>> 
>>> Recap from some mail threads:
>>> 
>>> 
>>> 
>>> From Sean Dague on Kilo cycle goals:
>>> 
>>> 2. Consistency in southbound interfaces (Logging first)
>>> 
>>> 
>>> 
>>> Logging and notifications are south bound interfaces from OpenStack
>>> providing information to people, or machines, about what is going on.
>>> 
>>> There is also a 3rd proposed south bound with osprofiler.
>>> 
>>> 
>>> 
>>> For Kilo: I think it's reasonable to complete the logging standards
>>> and implement them. I expect notifications (which haven't quite kicked
>>> off) are going to take 2 cycles.
>>> 
>>> 
>>> 
>>> I'd honestly *really* love to see a unification path for all the the
>>> southbound parts, logging, osprofiler, notifications, because there is
>>> quite a bit of overlap in the instrumentation/annotation inside the
>>> main code for all of these.
>>> 
>>> 
>>> And from Doug Hellmann: 1. Sean has done a lot of analysis and started
>>> a spec on standardizing logging guidelines where he is gathering input
>>> from developers, deployers, and operators [1].
>>> Because it is far enough for us to see real progress, it's a good
>>> place for us to start experimenting with how to drive cross-project
>>> initiatives involving code and policy changes from outside of a single
>>> project. We have a couple of potentially related specs in Oslo as part
>>> of the oslo.log graduation work [2] [3], but I think most of the work
>>> will be within the applications.
>>> 
>>> [1] https://review.openstack.org/#/c/91446/ [2]
>>> https://blueprints.launchpad.net/oslo.log/+spec/app-agnostic-logging-p
>>> arameters
>>> 
>>> 
>> [3] https://blueprints.launchpad.net/oslo.log/+spec/remove-context-
>> adapter
>>> 
>>> 
>>> 
>>> And from James Blair:
>>> 
>>> 1) Improve log correlation and utility
>>> 
>>> 
>>> 
>>> If we're going to improve the stability of OpenStack, we have to be
>>> able to understand what's going on when it breaks.  That's both true
>>> as developers when we're trying to diagnose a failure in an
>>> integration test, and it's true for operators who are all too often
>>> diagnosing the same failure in a real deployment.  Consistency in
>>> logging across projects as well as a cross-project request token would
>>> go a long way toward this.
>>> 
>>> While I am not currently managing an OpenStack deployment, writing
>>> tests or code, or debugging the stack, I have spent many years doing
>>> just that.  Through QA, Ops and Customer support, I have come to revel
>>> in good logging and log messages and curse the holes and vagaries in
>>> many systems.
>>> 
>>> Defining/refining logs to be useful and usable is a cross-functional
>>> effort that needs to include:
>>> 
>>> · Operators
>>> 
>>> · QA
>>> 
>>> · End Users
>>> 
>>> · Community managers
>>> 
>>> · Tech Pubs
>>> 
>>> · Translators
>>> 
>>> · Developers
>>> 
>>> · TC (which provides the forum and impetus for all the
>>> projects to cooperate on this)
>>> 
>>> At the moment, I think this effort may best work under the auspices of
>>> Oslo (oslo.log), I'd love to hear other proposals.
>>> 
>>> Here is the beginnings of my proposal of how to attack and subdue the
>>> painful state of logs:
>>> 
>>> 
>>> · Post this email to the MLs (dev, ops, enduser) to get
>>> feedback, garner support and participants in the process (Done;-)
>>> 
>>> · In 

Re: [openstack-dev] [release] client release deadline - Sept 18th

2014-09-17 Thread John Dickinson
I just release python-swiftclient 2.3.0

In addition to some smaller changes and bugfixes, the biggest changes are the 
support for Keystone v3 and a refactoring that allows for better testing and 
extensibility of the functionality exposed by the CLI.

https://pypi.python.org/pypi/python-swiftclient/2.3.0

--John



On Sep 17, 2014, at 8:14 AM, Matt Riedemann  wrote:

> 
> 
> On 9/15/2014 12:57 PM, Matt Riedemann wrote:
>> 
>> 
>> On 9/10/2014 11:08 AM, Kyle Mestery wrote:
>>> On Wed, Sep 10, 2014 at 10:01 AM, Matt Riedemann
>>>  wrote:
 
 
 On 9/9/2014 4:19 PM, Sean Dague wrote:
> 
> As we try to stabilize OpenStack Juno, many server projects need to get
> out final client releases that expose new features of their servers.
> While this seems like not a big deal, each of these clients releases
> ends up having possibly destabilizing impacts on the OpenStack whole
> (as
> the clients do double duty in cross communicating between services).
> 
> As such in the release meeting today it was agreed clients should have
> their final release by Sept 18th. We'll start applying the dependency
> freeze to oslo and clients shortly after that, all other requirements
> should be frozen at this point unless there is a high priority bug
> around them.
> 
> -Sean
> 
 
 Thanks for bringing this up. We do our own packaging and need time
 for legal
 clearances and having the final client releases done in a reasonable
 time
 before rc1 is helpful.  I've been pinging a few projects to do a final
 client release relatively soon.  python-neutronclient has a release this
 week and I think John was planning a python-cinderclient release this
 week
 also.
 
>>> Just a slight correction: python-neutronclient will have a final
>>> release once the L3 HA CLI changes land [1].
>>> 
>>> Thanks,
>>> Kyle
>>> 
>>> [1] https://review.openstack.org/#/c/108378/
>>> 
 --
 
 Thanks,
 
 Matt Riedemann
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>> 
>> python-cinderclient 1.1.0 was released on Saturday:
>> 
>> https://pypi.python.org/pypi/python-cinderclient/1.1.0
>> 
> 
> python-novaclient 2.19.0 was released yesterday [1].
> 
> List of changes:
> 
> mriedem@ubuntu:~/git/python-novaclient$ git log 2.18.1..2.19.0 --oneline 
> --no-merges
> cd56622 Stop using intersphinx
> d96f13d delete python bytecode before every test run
> 4bd0c38 quota delete tenant_id parameter should be required
> 3d68063 Don't display duplicated security groups
> 2a1c07e Updated from global requirements
> 319b61a Fix test mistake with requests-mock
> 392148c Use oslo.utils
> e871bd2 Use Token fixtures from keystoneclient
> aa30c13 Update requirements.txt to include keystoneclient
> bcc009a Updated from global requirements
> f0beb29 Updated from global requirements
> cc4f3df Enhance network-list to allow --fields
> fe95fe4 Adding Nova Client support for auto find host APIv2
> b3da3eb Adding Nova Client support for auto find host APIv3
> 3fa04e6 Add filtering by service to hosts list command
> c204613 Quickstart (README) doc should refer to nova
> 9758ffc Updated from global requirements
> 53be1f4 Fix listing of flavor-list (V1_1) to display swap value
> db6d678 Use adapter from keystoneclient
> 3955440 Fix the return code of the command "delete"
> c55383f Fix variable error for nova --service-type
> caf9f79 Convert to requests-mock
> 33058cb Enable several checks and do not check docs/source/conf.py
> abae04a Updated from global requirements
> 68f357d Enable check for E131
> b6afd59 Add support for security-group-default-rules
> ad9a14a Fix rxtx_factor name for creating a flavor
> ff4af92 Allow selecting the network for doing the ssh with
> 9ce03a9 fix host resource repr to use 'host' attribute
> 4d25867 Enable H233
> 60d1283 Don't log sensitive auth data
> d51b546 Enabled hacking checks H305 and H307
> 8ec2a29 Edits on help strings
> c59a0c8 Add support for new fields in network create
> 67585ab Add "version-list" for listing REST API versions
> 0ff4afc Description is mandatory parameter when creating Security Group
> 6ee0b28 Filter endpoints by region whenever possible
> 32d13a6 Add missing parameters for server rebuild
> f10d8b6 Fixes typo in error message of do_network_create
> 9f1ee12 Mention keystoneclient.Session use in docs
> 58cdcab Fix booting from volume when using api v3
> 52c5ad2 Sync apiclient from oslo-incubator
> 2acfb9b Convert server tests to httpretty
> 762bf69 Adding cornercases for set_metadata
> 313a2f8 Add way to specify key-name from envir

[openstack-dev] [Swift] Goals for Icehouse

2013-11-20 Thread John Dickinson
During the past month, Swift contributors have gathered in Austin,
Hong Kong, and online to discuss projects underway. There are some
major efforts underway, and I hope to lay them out and tie them
together here, so that we all know what the goals for the next six
months are.

The biggest feature set is storage policies. Storage policies will
give deployers and users incredible flexibility in how to manage their
data in the storage cluster. There are three basic parts to storage
policies.

First, given the global set of hardware available in a single Swift
cluster, choose which subset of hardware on which to store data. This
can be done by geography (e.g. US-East vs EU vs APAC vs global) or by
hardware properties (e.g. SATA vs SSDs). An obviously, the combination
can give a lot of flexibility.

Second, given the subset of hardware being used to store the data,
choose how to encode the data across that set of hardware. For
example, perhaps you have 2-replica, 3-replica, or erasure code
policies. Combining this with the hardware possibilities, you get e.g.
US-East reduced redundancy, global triple replicas, and EU erasure
coded.

Third, give the subset of hardware and how to store the data across
that hardware, control how Swift talks to a particular storage volume.
This may be optimized local file systems. This may be Gluster volumes.
This may be non-POSIX volumes like Seagate's new Kinetic drives.

We're well on our way to getting this set of work done. In Hong Kong
there was a demo of the current state of multi-ring support (for parts
one and two). We've also got a great start in refactoring the
interface between Swift and on-disk files and databases (for part
three).

But there is still a ton of work to do.

* we need to finalize the multi-ring work for the WSGI processes
* we need to ensure that large objects work with storage policies
* replication needs to be multi-ring aware
* auditing needs to be multi-ring aware
* updaters need to be multi-ring aware
* we need to write a multi-ring reconciler
* we need to merge all of this multi-ring stuff into master
* we need to finalize the DiskFile refactoring
* we need to properly refactor the DBBrokers
* we need the make sure the daemons can be extended to support different 
DiskFiles

The top-level blueprint for this is at
https://blueprints.launchpad.net/swift/+spec/storage-policies

Our target for the storage policy work is to support erasure coded
data within Swift. After the general storage policy work above is
done, we need to work on refactoring Swift's proxy server to make sure
its interaction with the storage nodes allows for differing storage
schemes.

I'm tremendously excited about the potential for storage policies. I
think it's the most significant development in Swift since the entire
project was open-sourced. Storage policies allow Swift to grow from
being the engine powering the world's largest storage clouds to a
storage platform enabling broader use cases by offering the
flexibility to very specifically match many different deployment
patterns.

Oh and if that's not enough to keep us all busy, there is other work
going on in the community, too. Some has been merged into Swift, and
some will stay in the ecosystem of tools for Swift, but they are all
important to a storage system. We've got an improved replication
platform merged into Swift, and it needs to be thoroughly tested and
polished. Once it's stable, we'll be able to build on it to really
improve Swift around MTTD and MTTR metrics. We're in the process of
refactoring metadata so that it is strongly separate into stuff that a
user can see and change and stuff the user can't see and change.

There is also some serious effort being put forth by a few companies
(including HP and IBM) to provide a way to add powerful metadata
searching into a Swift cluster. The ZeroVM team is interested in
extending Swift to better support large-scale data processing. The
Barbican team is looking in to providing good ways to offer encryption
for data stored in Swift. Others are looking in to how to grow
clusters (changing the partition power) and extend clusters
(transparently federating multiple Swift clusters).

Not all of this is going to be done by the OpenStack Icehouse release.
We cut stable releases of Swift fairly often, and these features will
roll out in those releases as they are done. My goal for Icehouse is
to see the storage policy work done and ready for production.

--John




signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] Goals for Icehouse

2013-11-20 Thread John Dickinson
Please keep those discussing in the open. #openstack-swift would be a great 
place to discuss what you did and figure out a general solution for others.

--john


On Nov 20, 2013, at 2:52 PM, Christian Schwede  wrote:

> Thanks John for the summary - and all contributors for their work!
> 
>> Others are looking in to how to grow clusters (changing the partition power)
> 
> I'm interested who else is also working on this - I successfully increased 
> partition power of several (smaller) clusters and would like to discuss my 
> approach with others. Please feel free to contact me so we can work together 
> on this :)
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] Pete Zaitcev added to core

2013-11-25 Thread John Dickinson
Pete Zaitcev has been involved with Swift for a long time, both by contributing 
patches and reviewing patches. I'm happy to announce that he's accepted the 
responsibility of being a core reviewer for Swift.

Congrats, Pete.

--John




signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] request-id in API response

2013-12-01 Thread John Dickinson
Just to add to the story, Swift uses "X-Trans-Id" and generates it in the 
outer-most "catch_errors" middleware.

Swift's catch errors middleware is responsible for ensuring that the 
transaction id exists on each request, and that all errors previously uncaught, 
anywhere in the pipeline, are caught and logged. If there is not a common way 
to do this, yet, I submit it as a great template for solving this problem. It's 
simple, scalable, and well-tested (ie tests and running in prod for years).

https://github.com/openstack/swift/blob/master/swift/common/middleware/catch_errors.py

Leaving aside error handling and only focusing on the transaction id (or 
request id) generation, since OpenStack services are exposed to untrusted 
clients, how would you propose communicating the appropriate transaction id to 
a different service? I can see great benefit to having a glance transaction ID 
carry through to Swift requests (and so on), but how should the transaction id 
be communicated? It's not sensitive info, but I can imagine a pretty big 
problem when trying to track down errors if a client application decides to set 
eg the X-Set-Transaction-Id header on every request to the same thing.

Thanks for bringing this up, and I'd welcome a patch in Swift that would use a 
common library to generate the transaction id, if it were installed. I can see 
that there would be huge advantage to operators to trace requests through 
multiple systems.

Another option would be for each system that calls an another OpenStack system 
to expect and log the transaction ID for the request that was given. This would 
be looser coupling and be more forgiving for a heterogeneous cluster. Eg when 
Glance makes a call to Swift, Glance cloud log the transaction id that Swift 
used (from the Swift response). Likewise, when Swift makes a call to Keystone, 
Swift could log the Keystone transaction id. This wouldn't result in a single 
transaction id across all systems, but it would provide markers so an admin 
could trace the request.

--John




On Dec 1, 2013, at 5:48 PM, Maru Newby  wrote:

> 
> On Nov 30, 2013, at 1:00 AM, Sean Dague  wrote:
> 
>> On 11/29/2013 10:33 AM, Jay Pipes wrote:
>>> On 11/28/2013 07:45 AM, Akihiro Motoki wrote:
 Hi,
 
 I am working on adding request-id to API response in Neutron.
 After I checked what header is used in other projects
 header name varies project by project.
 It seems there is no consensus what header is recommended
 and it is better to have some consensus.
 
 nova: x-compute-request-id
 cinder:   x-compute-request-id
 glance:   x-openstack-request-id
 neutron:  x-network-request-id  (under review)
 
 request-id is assigned and used inside of each project now,
 so x--request-id looks good. On the other hand,
 if we have a plan to enhance request-id across projects,
 x-openstack-request-id looks better.
>>> 
>>> My vote is for:
>>> 
>>> x-openstack-request-id
>>> 
>>> With an implementation of "create a request UUID if none exists yet" in
>>> some standardized WSGI middleware...
>> 
>> Agreed. I don't think I see any value in having these have different
>> service names, having just x-openstack-request-id across all the
>> services seems a far better idea, and come back through and fix nova and
>> cinder to be that as well.
> 
> +1 
> 
> An openstack request id should be service agnostic to allow tracking of a 
> request across many services (e.g. a call to nova to boot a VM should 
> generate a request id that is provided to other services in requests to 
> provision said VM).  All services would ideally share a facility for 
> generating new request ids and for securely accepting request ids from other 
> services.
> 
> 
> m.
> 
>> 
>>  -Sean
>> 
>> -- 
>> Sean Dague
>> http://dague.net
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Store quotas in Keystone

2013-12-03 Thread John Dickinson
How are you proposing that this integrate with Swift's account and container 
quotas (especially since there may be hundreds of thousands of accounts and 
millions (billions?) of containers in a single Swift cluster)? A centralized 
lookup for quotas doesn't really seem to be a scalable solution.

--John


On Dec 3, 2013, at 6:53 AM, Oleg Gelbukh  wrote:

> Chmouel,
> 
> We reviewed the design of this feature at the summit with CERN and HP teams. 
> Centralized quota storage in Keystone is an anticipated feature, but there 
> are concerns about adding quota enforcement logic for every service to 
> Keystone. The agreed solution is to add quota numbers storage to Keystone, 
> and add mechanism that will notify services about change to the quota. 
> Service, in turn, will update quota cache and apply the new quota value 
> according to its own enforcement rules.
> 
> More detailed capture of the discussion on etherpad:
> https://etherpad.openstack.org/p/CentralizedQuotas
> 
> Re this particular change, we plan to reuse this API extension code, but 
> extended to support domain-level quota as well.
> 
> --
> Best regards,
> Oleg Gelbukh
> Mirantis Labs
> 
> 
> On Mon, Dec 2, 2013 at 5:39 PM, Chmouel Boudjnah  wrote:
> Hello,
> 
> I was wondering what was the status of Keystone being the central place 
> across all OpenStack projects for quotas.
> 
> There is already an implementation from Dmitry here :
> 
> https://review.openstack.org/#/c/40568/
> 
> but hasn't seen activities since october waiting for icehouse development to 
> be started and a few bits to be cleaned and added (i.e: the sqlite migration).
> 
> It would be great if we can get this rekicked to get that for icehouse-2.
> 
> Thanks,
> Chmouel.
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Store quotas in Keystone

2013-12-03 Thread John Dickinson

On Dec 3, 2013, at 8:05 AM, Jay Pipes  wrote:

> On 12/03/2013 10:04 AM, John Dickinson wrote:
>> How are you proposing that this integrate with Swift's account and container 
>> quotas (especially since there may be hundreds of thousands of accounts and 
>> millions (billions?) of containers in a single Swift cluster)? A centralized 
>> lookup for quotas doesn't really seem to be a scalable solution.
> 
> From reading below, it does not look like a centralized lookup is what the 
> design is. A push-change strategy is what is described, where the quota 
> numbers themselves are stored in a canonical location in Keystone, but when 
> those numbers are changed, Keystone would send a notification of that change 
> to subscribing services such as Swift, which would presumably have one or 
> more levels of caching for things like account and container quotas...

Yes, I get that, and there are already methods in Swift to support that. The 
trick, though, is either (1) storing all the canonical info in Keystone and 
scaling that or (2) storing some "boiled down" version, if possible, and 
fanning that out to all of the resources in Swift. Both are difficult and 
require storing the information in the central Keystone store.

> 
> Best,
> -jay
> 
>> --John
>> 
>> 
>> On Dec 3, 2013, at 6:53 AM, Oleg Gelbukh  wrote:
>> 
>>> Chmouel,
>>> 
>>> We reviewed the design of this feature at the summit with CERN and HP 
>>> teams. Centralized quota storage in Keystone is an anticipated feature, but 
>>> there are concerns about adding quota enforcement logic for every service 
>>> to Keystone. The agreed solution is to add quota numbers storage to 
>>> Keystone, and add mechanism that will notify services about change to the 
>>> quota. Service, in turn, will update quota cache and apply the new quota 
>>> value according to its own enforcement rules.
>>> 
>>> More detailed capture of the discussion on etherpad:
>>> https://etherpad.openstack.org/p/CentralizedQuotas
>>> 
>>> Re this particular change, we plan to reuse this API extension code, but 
>>> extended to support domain-level quota as well.
>>> 
>>> --
>>> Best regards,
>>> Oleg Gelbukh
>>> Mirantis Labs
>>> 
>>> 
>>> On Mon, Dec 2, 2013 at 5:39 PM, Chmouel Boudjnah  
>>> wrote:
>>> Hello,
>>> 
>>> I was wondering what was the status of Keystone being the central place 
>>> across all OpenStack projects for quotas.
>>> 
>>> There is already an implementation from Dmitry here :
>>> 
>>> https://review.openstack.org/#/c/40568/
>>> 
>>> but hasn't seen activities since october waiting for icehouse development 
>>> to be started and a few bits to be cleaned and added (i.e: the sqlite 
>>> migration).
>>> 
>>> It would be great if we can get this rekicked to get that for icehouse-2.
>>> 
>>> Thanks,
>>> Chmouel.
>>> 
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>>> 
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] request-id in API response

2013-12-05 Thread John Dickinson

On Dec 5, 2013, at 1:36 AM, Maru Newby  wrote:

> 
> On Dec 3, 2013, at 12:18 AM, Joe Gordon  wrote:
> 
>> 
>> 
>> 
>> On Sun, Dec 1, 2013 at 7:04 PM, John Dickinson  wrote:
>> Just to add to the story, Swift uses "X-Trans-Id" and generates it in the 
>> outer-most "catch_errors" middleware.
>> 
>> Swift's catch errors middleware is responsible for ensuring that the 
>> transaction id exists on each request, and that all errors previously 
>> uncaught, anywhere in the pipeline, are caught and logged. If there is not a 
>> common way to do this, yet, I submit it as a great template for solving this 
>> problem. It's simple, scalable, and well-tested (ie tests and running in 
>> prod for years).
>> 
>> https://github.com/openstack/swift/blob/master/swift/common/middleware/catch_errors.py
>> 
>> Leaving aside error handling and only focusing on the transaction id (or 
>> request id) generation, since OpenStack services are exposed to untrusted 
>> clients, how would you propose communicating the appropriate transaction id 
>> to a different service? I can see great benefit to having a glance 
>> transaction ID carry through to Swift requests (and so on), but how should 
>> the transaction id be communicated? It's not sensitive info, but I can 
>> imagine a pretty big problem when trying to track down errors if a client 
>> application decides to set eg the X-Set-Transaction-Id header on every 
>> request to the same thing.
>> 
>> -1 to cross service request IDs, for the reasons John mentions above.
>> 
>> 
>> Thanks for bringing this up, and I'd welcome a patch in Swift that would use 
>> a common library to generate the transaction id, if it were installed. I can 
>> see that there would be huge advantage to operators to trace requests 
>> through multiple systems.
>> 
>> Another option would be for each system that calls an another OpenStack 
>> system to expect and log the transaction ID for the request that was given. 
>> This would be looser coupling and be more forgiving for a heterogeneous 
>> cluster. Eg when Glance makes a call to Swift, Glance cloud log the 
>> transaction id that Swift used (from the Swift response). Likewise, when 
>> Swift makes a call to Keystone, Swift could log the Keystone transaction id. 
>> This wouldn't result in a single transaction id across all systems, but it 
>> would provide markers so an admin could trace the request.
>> 
>> There was a session on this at the summit, and although the notes are a 
>> little scarce this was the conclusion we came up with.  Every time a cross 
>> service call is made, we will log and send a notification for ceilometer to 
>> consume, with the request-ids of both request ids.  One of the benefits of 
>> this approach is that we can easily generate a tree of all the API calls 
>> that are made (and clearly show when multiple calls are made to the same 
>> service), something that just a cross service request id would have trouble 
>> with.
> 
> Is wise to trust anything a client provides to ensure traceability?  If a 
> user receives a request id back from Nova, then submits that request id in an 
> unrelated request to Neutron, the traceability would be effectively 
> corrupted.  If the consensus is that we don't want to securely deliver 
> request ids for inter-service calls, how about requiring a service to log its 
> request id along with the request id returned from a call to another service 
> to achieve the a similar result?

Yes, this is what I was proposing. I think this is the best path forward.


> The catch is that every call point (or client instantiation?) would have to 
> be modified to pass the request id instead of just logging at one place in 
> each service.  Is that a cost worth paying?

Perhaps this is my ignorance of how other projects work today, but does this 
not already happen? Is it possible to get a response from an API call to an 
OpenStack project that doesn't include a request id?

> 
> 
> m.
> 
> 
>> 
>> https://etherpad.openstack.org/p/icehouse-summit-qa-gate-debugability 
>> 
>> 
>> With that in mind I think having a standard x-openstack-request-id makes 
>> things a little more uniform, and means that adding new services doesn't 
>> require new logic to handle new request ids.
> 
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] Release of Swift 1.11.0

2013-12-12 Thread John Dickinson
I'm happy to announce that we've released Swift 1.11.0. You can find
the high-level Launchpad details (including a link to the tarball) at
https://launchpad.net/swift/icehouse/1.11.0.

As always, you can upgrade to this release without any downtime to your
users.

Swift 1.11.0 is the work of 26 contributors, including the following 5
new contributors to Swift:

Rick Hawkins
Steven Lang
Gonéri Le Bouder
Zhenguo Niu
Aaron Rosen

This release includes some significant new features. I encourage you
to read the change log
(https://github.com/openstack/swift/blob/master/CHANGELOG), and I'll
highlight some of the more significant changes below.

* Discoverable capabilities: The Swift proxy server will now respond
  to /info requests with information about the particular cluster
  being queried. This will allow easy programmatic discovery of limits
  and features implemented in a particular Swift system. The first two
  obvious use cases are for cross-cluster clients (e.g. common client
  between Rackspace, HP, and a private deployment) and for deeper
  functional testing of all parts of the Swift API.

* Early quorum response: On writes, the Swift proxy server will not
  return success unless a quorum of the storage nodes indicate they
  have successfully written data to disk. Previously, the proxy waited
  for all storage nodes to respond, even if it had already heard from
  a quorum of servers. With this change, the proxy node will be able
  to respond to client requests as soon as a quorum of the storage
  nodes indicate a common response. This can help lower response times
  to clients and improve performance of the cluster.

* Retry reads: If a storage server fails during an object read
  request, the proxy will now continue the response stream to the
  client by making a request to a different replica of the data. For
  example, if a client requests a 3GB object and the particular object
  server serving the response fails during the request after 1.25GB,
  the proxy will make a range request to a different replica, asking
  for the data starting at 1.25GB into the file. In this way, Swift
  provides even higher availability to your data in the face of
  hardware failures.

* DiskFile API: The DiskFile abstraction for talking to data on disk
  has been refactored to allow alternate implementations to be
  developed. There is an example in-memory implementation included in
  the codebase. External implementations include one for Gluster and
  one for Seagate Kinetic drives. The DiskFile API is still a work in
  progress and is not yet finalized.

* Object replication ssync (an rsync alternative): A Swift storage
  node can now be configured to use Swift primitives for replication
  transport instead of rsync. Although still being tested at scale,
  this mechanism will allow for future development improving
  replication times and lowering both MTTD and MTTR of errors.

I'd like to publicly thank the Swift contributors and core developers
for their work on Swift. Their diverse experience and viewpoints make
Swift the mature project it is, capable of running the world's largest
storage clouds.

--John





signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] upgrade Swift to Havana

2013-12-15 Thread John Dickinson
The basic upgrade process for all versions of Swift follow the same basic 
pattern:

First for storage nodes:
On a single "canary" node:
1) stop background processes
2) upgrade packages
3) restart/reload main processes
4) start background processes

And if everything goes well there, perform the same process on all the nodes in 
your cluster, probably a zone at a time.

After all storage nodes have been upgraded, it's time to upgrade the proxy 
servers.

For each proxy, first take it out of the load balancer pool (you can use the 
/healthcheck "disable_path" feature for this; see 
http://docs.openstack.org/developer/swift/misc.html#healthcheck). Then, upgrade 
packages and restart/reload the proxy. Again do this on a "canary" node and 
then move to the rest of the cluster in an orderly fashion.

Ok, that's general info for anyone wanting to perform an upgrade of a Swift 
cluster with no downtime to clients. Now to your specific question of Folsom 
(Swift 1.7.4) to Havana (Swift 1.10.0). As a side note, I'd be remiss if I 
didn't recommend that you go ahead and upgrade to Swift 1.11.0, released just 
last week.

I take make an effort to explicitly mention all changes to configs and existing 
defaults in each release's CHANGELOG entry. See 
https://github.com/openstack/swift/blob/master/CHANGELOG for full notes on 
changes between your current version and Swift 1.11 (today's stable version). 
With a quick glance, I see the following things you need to take into account: 
proxy_logging is now in the pipeline twice (as of 1.8.0), conf.d style configs 
are now supported (as of 1.9.0), disk IO threadpools are now used and 
configurable (as of 1.9.0), pooled memcache connections are now configurable 
(as of 1.10.0), and proxy log lines were edited in 1.11.0.

I don't see anything that should cause major disruption to upgrade from circa 
1.7.4 to 1.11.0. There have been some new configs options added, but sane 
defaults are used and no existing defaults were changed.

Good luck!

--John



On Dec 15, 2013, at 7:52 PM, Snider, Tim  wrote:

> What’s the easiest way to upgrade Swift from a Folsom(ish) release to Havana? 
> Any shortcuts or is it best to follow the multinode installation instructions?
> Apologies for the dumb question.
>  
> Thanks,
> Tim
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] meeting times and rotations etc...

2013-12-19 Thread John Dickinson
Another option would be to use what we already have to our benefit. Instead of 
trying to provision two meeting rooms (-meeting and -meeting-alt), use the 
various other IRC channels that we already have for team meetings. This would 
allow for meetings to be at the same time, but it would free up more time slots 
to be scheduled, and those time slots can be scheduled more precisely to fit 
the schedules of those attending.

So what about cross-team concerns? We have the weekly meeting, and if that 
isn't sufficient, then the -meeting and -meeting-alt channels can be scheduled 
for cross-team needs.

--John




On Dec 19, 2013, at 1:20 AM, Robert Collins  wrote:

> So, I'm a little worried about the complexities of organising free
> slots given we're basically about to double the # of entries we have in
> all our calendars.
> 
> Maybe we can do something a little simpler: just have the *whole
> calendar* shift phase 180' each week: it won't be perfect,
> particularly for those projects that currently have a majority of
> members meeting in the middle of their day (12 midday -> 12 midnight),
> but if there's any decent spread already meeting, there will be a
> decent spread for the alter week - and an important thing for
> inclusion is to not be doing votes etc in meetings *anyway* so I think
> it's ok for the PTL (for instance) to not be at every meeting.
> 
> Thoughts?
> 
> -Rob
> 
> -- 
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [All] tagged commit messages

2013-12-29 Thread John Dickinson
I've seen several disconnected messages about tags in commit messages. I've 
seen what is possible with the DocImpact tag, and I'd like to have some more 
flexible tagging things too. I'd like to use tags for things like keeping track 
of config defaults changing, specific ongoing feature work, and tracking 
changes come release time.

I put together a little commit hook this afternoon. 
https://gist.github.com/notmyname/8174779 I think it would be nice to integrate 
this into jeepyb and gerrit. My script isn't much more than the code to parse a 
commit message (and I've only tested it against a test repo), but I think the 
mechanics already present in something like notify_impact.py (in jeepyb) may 
take care of common actions like filing bugs and emailing a list of people.

--John






signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] tagged commit messages

2013-12-29 Thread John Dickinson

On Dec 29, 2013, at 2:05 PM, Michael Still  wrote:

> On Mon, Dec 30, 2013 at 8:12 AM, John Dickinson  wrote:
>> I've seen several disconnected messages about tags in commit messages. I've 
>> seen
>> what is possible with the DocImpact tag, and I'd like to have some more 
>> flexible tagging
>> things too. I'd like to use tags for things like keeping track of config 
>> defaults changing,
>> specific ongoing feature work, and tracking changes come release time.
> 
> I suspect I'm the last person to have touched this code, and I think
> expanding tags is a good idea. However, I'm not sure if its the best
> mechanism possible -- if a reviewer requires a tag to be added or
> changed, it currently requires a git review round trip for the
> developer or their proxy. Is that too onerous if tags become much more
> common?
> 
> I definitely think some more formal way of tracking that a given patch
> needs to be covered by the release notes is a good idea.
> 
> There are currently two hooks that I can see in our gerrit config:
> 
> - patchset-created
> - change-merged
> 
> I suspect some tags should be "executed" at patchset-merged? For
> example a change to flag defaults might cause a notification to be
> sent to interested operators?
> 
> Perhaps step one is to work out what tags we think are useful and at
> what time they should execute?

I think this is exactly what I don't want. I don't want a set of predefined 
tags. We've got that today with DocImpact and SecurityImpact. What I want, for 
very practical examples in Swift, are tags for config changes so deployers can 
notice, tags for things with upgrade procedures, tags for dependency changes, 
tags for "this is a new feature", all in addition to the existing DocImpact and 
SecurityImpact tag. In other words, just like impacted teams get alerted for 
changes that impact docs, I want "patches that impact Swift proxy-server 
configs" to be tracked (and bin scripts, and dependencies, and ring semantics, 
and etc).

I think you're absolutely right that some things should happen at 
patchset-created time and others at change-merged time. 

Like you I'm also concerned that adding a new tag may be too heavyweight if it 
requires a code push/review/gate cycle. Here's an alternative: 

1) Define a very lightweight rule for tagging commits (eg: one line, starts 
with "tags:", is comma-separated)
2) Write an external script to parse the git logs and look for tags. It 
normalizes tags (eg lowercase+remove spaces), and allows simple searches (eg 
"show all commits that are tagged 'configchange'").

That wouldn't require repo changes to add a tag, gives contributors massive 
flexibility in tagging, doesn't add new dependencies to code repos, and is 
lightweight enough to be flexible over time.


Hmmm...actually I like this idea. I may throw together a simple script to do 
this and propose using it for Swift. Thanks Michael!
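
Something along these lines would probably be enough as a first cut (a rough 
sketch of the idea only; the "tags:" line format and normalization rules are 
just the ones floated above):

    #!/usr/bin/env python
    # Rough sketch: list commits whose message has a "tags:" line that
    # includes the tag given on the command line.
    import subprocess
    import sys


    def commit_tags(message):
        """Return the normalized set of tags found in a commit message."""
        tags = set()
        for line in message.splitlines():
            if line.lower().startswith('tags:'):
                for tag in line.split(':', 1)[1].split(','):
                    tags.add(tag.strip().lower().replace(' ', ''))
        return tags


    def main(wanted):
        wanted = wanted.strip().lower().replace(' ', '')
        # one record per commit: abbreviated hash, newline, raw body, NUL
        log = subprocess.check_output(['git', 'log', '--format=%h%n%B%x00'])
        for record in log.split('\x00'):
            record = record.strip()
            if not record:
                continue
            sha, _, message = record.partition('\n')
            if wanted in commit_tags(message):
                print sha


    if __name__ == '__main__':
        main(sys.argv[1])

Run from inside a repo as something like `./tagged-search.py configchange` to 
get the matching commit hashes.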


--John




> 
> Michael
> 
> -- 
> Rackspace Australia
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] tagged commit messages

2013-12-29 Thread John Dickinson

On Dec 29, 2013, at 5:24 PM, Michael Still  wrote:

> On Mon, Dec 30, 2013 at 11:51 AM, John Dickinson  wrote:
>> On Dec 29, 2013, at 2:05 PM, Michael Still  wrote:
> 
> [snip]
> 
>>> Perhaps step one is to work out what tags we think are useful and at
>>> what time they should execute?
>> 
>> I think this is exactly what I don't want. I don't want a set of predefined 
>> tags.
> 
> [snip]
> 
> Super aggressive trimming, because I want to dig into this one bit some 
> more...
> 
> I feel like anything that requires pro-active action from the target
> audience will fail. For example, in nova we've gone through long
> cycles with experimental features where we've asked deployers to turn
> on new features in labs and report problems before we turn it on by
> default. They of course don't.
> 
> So... I feel there is value in a curated list of tags, even if we
> allow additional tags (a bit like launchpad). In fact, the idea of a
> "DeployImpact" tag for example really works for me. I'm very tempted
> to implement that one in notify_impact now.

Yup, I understand and agree with where you are coming from. Let's discuss 
DeployImpact as an example.

First, I like the idea of some set of curated tags (and you'll see why at the 
end of this email). Let's have a way that we can tag a commit as having a 
DeployImpact. Ok, what does that mean? In some manner of speaking, _every_ 
commit has a deployment impact. So maybe just things that affect upgrades? Is 
that changes? New features? Breaking changes only (sidebar: why would these 
sort of changes ever get merged anyway? moving on...)? My point is that a 
curated list of tags ends up being fairly generic to the point of not being too 
useful.

Ok, we figured out the above questions (ie when to use DeployImpact and when to 
not use it). Now I'm a deployer and packager (actually not hypothetical, since 
my employer is both for Swift), so what do I do? Do I have to sign up for some 
sort of thing? Does this mean a gerrit code review cycle to some -infra 
project? That would be a pretty high barrier for getting access to that info. 
Or maybe the change-merged action for a DeployImpact tag simply sends an email 
to a new DeployImpact mailing list or puts a new row in a DB somewhere that is 
shown on some page every time I load it? In that case, I've still got to sign up 
for a new mailing list (and remember to not filter it and get everyone in my 
company who does deployments to check it) or remember to check a particular 
webpage before I do a deploy.

Maybe I'm thinking about this wrong way. Maybe the intended audience is the 
rest of the OpenStack dev community. In that case, sure, now I have a way to 
find DeployImpact commits. That's nice, but what does that get me? I already 
see all the patches in my email and on my gerrit dashboard. Being able to 
filter the commits is nice, but constraining that to an approved list of tags 
seems heavy-handed.

So while I like the idea of a curated list of tags, in general, I don't think 
they lessen the burden for the intended audience (the intended audience being 
people not in the dev/contributor community but rather those deploying and 
using the code). That's why a tool that can parse git commit messages seems 
simple and flexible enough to meet the needs of deployers (eg run `git log 
 | tagged-search deployimpact` before packaging) without requiring the 
overhead of managing a curated tag list via code repo changes (as DocImpact is 
today).

All that being said, I'll poke some holes in my own idea. The problem with my 
idea is letting deployers know what tags they should actually search for. In 
this case, there probably should be some curated list of high-level tags that 
should be used across all OpenStack projects. In other words, if I use 
deploy-change on my patch and you use DeploymentImpact, then what does a 
packager/deployer search for? There should be some set of tags with guidelines 
for their usage on the wiki. I'd propose starting with ConfigDefaultChanged, 
DependencyChanged, and NewFeature.

--John




> 
> Michael
> 
> -- 
> Rackspace Australia
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] tagged commit messages

2013-12-30 Thread John Dickinson

On Dec 30, 2013, at 4:49 PM, Angus Salkeld  wrote:

> On 30/12/13 13:44 -0600, Kevin L. Mitchell wrote:
>> 
>> On Mon, 2013-12-30 at 11:04 +0100, Flavio Percoco wrote:
>>> I like the idea of having custom tags. I'm a bit concerned about the
>>> implications this might have with cross-project collaborations. I
>>> mean, people contributing to more projects will have to be aware of
>>> the many possible differences in this area.
>>> 
>>> That being said, I can think of some cases where we this could be
>>> useful for other projects. However, I'd encourage to keep common tags
>>> documented somewhere, perhaps this common tags shouldn't be part of
>>> the `Tags:` 'field', which you already mentioned above.
>> 
>> If I may be allowed a tangent—should a mechanism external to the commit
>> message be allowed for attaching a tag to a review?  Consider the recent
>> ext3/ext4 change: a reviewer could browse that and say, "This should
>> have a DeploymentImpact tag."  With the tags as so far described in this
>> thread, that has to be something added by the submitter (or a new
>> version of the patch uploaded by a reviewer).  Can we create a mechanism
>> that would allow a reviewer to attach such a tag without having to
>> modify any part of the review?  Can the mechanism allow such an
>> attachment even if the review has already been merged?
> 
> https://www.kernel.org/pub/software/scm/git/docs/git-notes.html

So yeah, that's pretty nice. Thanks.

> 
>> 
>> Just something to think about :)
>> -- 
>> Kevin L. Mitchell 
>> Rackspace
>> 
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Keystone Apache2 WSGI Fails when Token > 8190 Bytes

2014-01-16 Thread John Dickinson
Yep, you should follow https://bugs.launchpad.net/keystone/+bug/1190149 and the 
related patches in each project.

--John



On Jan 16, 2014, at 10:30 PM, Miller, Mark M (EB SW Cloud - R&D - Corvallis) 
 wrote:

> Hello,
> 
> I have come across a bug or limitation when using an Apache2 SSL-WSGI front 
> end for Keystone. If the returned token for a Keystone authenticate request 
> is greater than 8190 bytes, the mod_wsgi code throws an error similar to the 
> following:
> 
> [Thu Jan 16 22:27:47 2014] [info] Initial (No.1) HTTPS request received for 
> child 231 (server d00-50-56-8e-75-82.cloudos.org:5000)
> [Thu Jan 16 22:27:47 2014] [info] [client 192.168.124.2] mod_wsgi (pid=24676, 
> process='keystone', application='d00-50-56-8e-75-82.cloudos.org:5000|'): 
> Loading WSGI script '/etc/apache2/wsgi/keystone/main'.
> [Thu Jan 16 22:27:48 2014] [error] [client 192.168.124.2] malformed header 
> from script. Bad header=mVmOTdhMmUzIn0sIHsidXJsIjogImh: main
> [Thu Jan 16 22:27:48 2014] [debug] mod_deflate.c(615): [client 192.168.124.2] 
> Zlib: Compressed 592 to 377 : URL /v3/auth/tokens
> [Thu Jan 16 22:27:48 2014] [debug] ssl_engine_kernel.c(1884): OpenSSL: Write: 
> SSL negotiation finished successfully
> [Thu Jan 16 22:27:48 2014] [info] [client 192.168.124.2] Connection closed to 
> child 231 with standard shutdown (server d00-50-56-8e-75-82.cloudos.org:5000)
> 
> 
> I really don't think that I am the first one to stumble across this problem. 
> Has anyone else found and solved this?
> 
> Regards,
> 
> Mark
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] release 1.12.0

2014-01-28 Thread John Dickinson
Today I'm happy to announce that we have released Swift 1.12.0. As
always, this is a stable release and you can upgrade to this version
of Swift with no customer downtime.

You can download the code for this release at
https://launchpad.net/swift/icehouse/1.12.0 or bug your package
provider for the updated version.

I've noticed that OpenStack Swift releases tend to cluster around
certain themes. This release is no different. While we've added some
nice end-user updates to the project, this release has a ton of good
stuff for cluster operators.

I'll highlight a few of the major improvements below, but I encourage
you to read the entire change log at
https://github.com/openstack/swift/blob/master/CHANGELOG.

## Security update

**CVE-2014-0006**

Fixed CVE-2014-0006 to avoid a potential timing attack with temp url.
Key validation previously was not using a constant-time string
compare, and therefore it may have been possible for an attacker to
guess tempurl keys if the object name was known and tempurl had been
enabled for that Swift user account. The tempurl key validation now
uses a constant-time string compare to close this potential attack
vector.
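
For anyone curious, the idea behind a constant-time compare is simply to touch
every byte no matter where the first mismatch is, so response time stops
leaking how much of a guessed signature was correct. A sketch of the pattern
(not necessarily the exact code merged into the tempurl middleware):

    def streq_const_time(s1, s2):
        """Compare two strings in time that depends only on their length."""
        if len(s1) != len(s2):
            return False
        result = 0
        for a, b in zip(s1, s2):
            result |= ord(a) ^ ord(b)  # accumulate differences, never exit early
        return result == 0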

## Major End-User Features

**New information added to /info**

We added discoverable capabilities via the /info endpoint in a recent
release. In this release we have added all of the general cluster
constraints to the /info response. This means that a client can
discover the cluster limits on names, metadata, and object sizes.
We've also added information about the supported temp url methods and
large object constraints in the cluster.
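
As a quick illustration, the capabilities are just JSON returned from a GET
against the proxy (the URL and the particular keys below are examples; check
the response from your own cluster):

    # Sketch: discover cluster constraints via the /info endpoint.
    import json
    import urllib2

    info = json.loads(
        urllib2.urlopen('http://proxy.example.com:8080/info').read())

    print info['swift']['max_object_name_length']   # name length limit
    print info['swift']['max_file_size']            # single-object size limit
    print info.get('tempurl', {}).get('methods')    # allowed temp url methods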

**Last-Modified header values**

The Last-Modified header value returned will now be the object's
timestamp rounded up to the next second. This allows subsequent
requests with If-[un]modified-Since to use the Last-Modified value as
expected.

## Major Deployer Features

**Generic means for persisting system metadata**

Swift now supports system-level metadata on accounts and containers.
System metadata provides a means to store internal custom metadata
with associated Swift resources in a safe and secure fashion without
actually having to plumb custom metadata through the core swift
servers. The new gatekeeper middleware prevents this system metadata
from leaking into the request or being set by a client.
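
Conceptually the gatekeeper's job is small: strip system metadata headers from
anything coming from or going to the client, so only internal middleware can
read or write them. A toy version of that filtering (the header prefixes shown
are illustrative; see the gatekeeper middleware in the Swift source for the
real implementation):

    # Toy illustration of the gatekeeper idea: drop sysmeta headers that a
    # client tries to send or that would otherwise leak back out.
    SYSMETA_PREFIXES = ('x-account-sysmeta-', 'x-container-sysmeta-')


    def remove_sysmeta(headers):
        """Return a copy of headers with any system metadata removed."""
        return dict((k, v) for k, v in headers.items()
                    if not k.lower().startswith(SYSMETA_PREFIXES))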

**Middleware changes**

As mentioned above, there is a new "gatekeeper" middleware to guard
the system metadata. In order to ensure that system metadata doesn't
leak into the response, the gatekeeper middleware will be
automatically inserted near the beginning of the proxy pipeline if it
is not explicitly referenced. Similarly, the catch_errors middleware
is also forced to the front of the proxy pipeline if it is not
explicitly referenced. Note that for either of these middlewares, if
they are already in the proxy pipeline, Swift will not reorder the
pipeline.

**New container sync configuration option**

Container sync has new options to better support syncing containers
across multiple clusters without the end-user needing to know the
required endpoint. See
http://swift.openstack.org/overview_container_sync.html for full
information.

**Bulk middleware config default changed**

The bulk middleware allows the client to send a large body of work to
the cluster with just one request. Since this work may take a while to
return, Swift can periodically send back whitespace before the actual
response data in order to keep the client connection alive. The config
parameter to set the minimum frequency of these whitespace characters
is set by the yield_frequency value. The default value was lowered
from 60 seconds to 10 seconds. This change does not affect
deployments, and there is no migration process needed.

**Raise RLIMIT_NPROC**

In order to support denser storage systems, Swift processes will now
attempt to set the RLIMIT_NPROC value to 8192.

**Server exit codes**

Swift processes will now exit with non-zero exit codes on config errors.

**Quarantine logs**

Swift will now log at warn level when an object is quarantined

## Community growth

This release of Swift is the work of twenty-three devs, including eight
first-time contributors to the project:

* Morgan Fainberg
* Zhang Jinnan
* Kiyoung Jung
* Steve Kowalik
* Sushil Kumar
* Cristian A Sanchez
* Jeremy Stanley
* Yuriy Taraday

Thank you to everyone who contributes code, promotes the project, and
facilitates the community. Your contributions are what make this
project successful. 


--John





signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposed Logging Standards

2014-01-30 Thread John Dickinson
I've been keeping an eye on this thread, and it seems I actually have a few 
minutes to spend on a response today.

To first answer the specific question, while there are some minor technical 
concerns about oslo logging, the bigger concerns are non-technical. Some things 
I'm concerned about from a technical perspective are that it's not a separate 
module or package that can be imported, so it would probably currently require 
copy/paste code into the Swift codebase. My second concern is that there are 
log line elements that just don't seem to make sense like "instance". I'd be 
happy to be wrong on both of these items, and I want to make clear that these 
are not long-term issues. They are both solvable.

My bigger concern with using oslo logging in Swift is simply changing the 
request log format is something that cannot be done lightly. Request logs are a 
very real interface into the system, and changing the log format in a breaking 
way can cause major headaches for people relying on those logs for system 
health, billing, and other operational concerns.

One possible solution to this is to keep requests logged the same way, but add 
configuration options for all of the other things that are logged. Having two 
different logging systems (or multiple configurable log handlers) to do this 
seems to add a fair bit of complexity to me, especially when I'm not quite sure 
of the actual problem that's being solved. That said, adding in a different log 
format into Swift isn't a terrible idea by itself, but migration is a big 
concern of any implementation (and I know you'll find very strong feelings on 
this in gerrit if/when something is proposed).




Now back to the original topic of actual logging formats.

Here's (something like) what I'd like to see for a common log standard (ie 
Sean, what I think you were asking for comments on):

log_line = prefix message
prefix = timestamp project log_level
message = bytestream
timestamp = `eg the output of time.time()`
project = `one of {nova,swift,neutron,cinder,glance,etc}`

Now, there's plenty of opportunity to bikeshed what the actual log line would 
look like, but the general idea of what I want to see has 2 major parts:

1) Every log message is one line (ends with \n) and the log fields are 
space-delimited. eg (`log_line = ' '.join(urllib.quote(x) for x in 
log_fields_list)`)

2) The only definition of a log format is the prefix and the message is a set 
of fields defined by the service actually doing the logging.
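
To make that concrete, here's a tiny sketch of producing and parsing such a
line (the fields in the example message are arbitrary):

    # Sketch of the proposed format: space-separated, URL-quoted fields,
    # with a fixed prefix (timestamp, project, log level) before the message.
    import time
    import urllib


    def build_log_line(project, level, *message_fields):
        fields = [repr(time.time()), project, level] + list(message_fields)
        return ' '.join(urllib.quote(str(f)) for f in fields)


    def parse_log_line(line):
        timestamp, project, level, rest = line.split(' ', 3)
        message = [urllib.unquote(f) for f in rest.split(' ')]
        return float(urllib.unquote(timestamp)), project, level, message


    line = build_log_line('swift', 'INFO', 'GET', '/v1/AUTH_test/c/o', '200')
    print line
    print parse_log_line(line)

Because each field is quoted, a message field containing a space or newline
stays a single token, which is what keeps every log message to exactly one
line.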


--John




On Jan 30, 2014, at 10:11 AM, Sanchez, Cristian A 
 wrote:

> Is there any technical reason of why Swift does not use oslo logging?
> If not, I can work on incorporating that to Swift.
> 
> Thanks
> 
> Cristian
> 
> On 30/01/14 11:12, "Sean Dague"  wrote:
> 
>> For all projects that use oslo logging (which is currently everything
>> except swift), this works.
>> 
>>  -Sean
>> 
>> On 01/30/2014 09:07 AM, Macdonald-Wallace, Matthew wrote:
>>> No idea, I only really work on Nova, but as this is in Oslo I expect so!
>>> 
>>> Matt
>>> 
 -Original Message-
 From: Sanchez, Cristian A [mailto:cristian.a.sanc...@intel.com]
 Sent: 30 January 2014 13:44
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] Proposed Logging Standards
 
 Hi Matt,
 What about the rest of the components? Do they also have this
 capability?
 Thanks
 
 Cristian
 
 On 30/01/14 04:59, "Macdonald-Wallace, Matthew"
  wrote:
 
> Hi Cristian,
> 
> The functionality already exists within Openstack (certainly it's
> there
> in Nova) it's just not very well documented (something I keep meaning
> to
> do!)
> 
> Basically you need to add the following to your nova.conf file:
> 
> log_config=/etc/nova/logging.conf
> 
> And then create /etc/nova/logging.conf with the configuration you want
> to use based on the Python Logging Module's "ini" configuration
> format.
> 
> Hope that helps,
> 
> Matt
> 
>> -Original Message-
>> From: Sanchez, Cristian A [mailto:cristian.a.sanc...@intel.com]
>> Sent: 29 January 2014 17:57
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] Proposed Logging Standards
>> 
>> Hi Matthew,
>> I¹m interested to help in this switch to python logging framework for
>> shipping to  logstash/etc. Are you working on a blueprint for this?
>> Cheers,
>> 
>> Cristian
>> 
>> On 27/01/14 11:07, "Macdonald-Wallace, Matthew"
>>  wrote:
>> 
>>> Hi Sean,
>>> 
>>> I'm currently working on moving away from the "built-in" logging to
>>> use log_config= and the python logging framework so that
>>> we can start shipping to logstash/sentry/etc.
>>> 
>>> I'd be very interested in getting involved in this, especially from
>>

Re: [openstack-dev] [nova] vmware minesweeper

2014-02-05 Thread John Dickinson

On Feb 5, 2014, at 4:04 PM, Ryan Hsu  wrote:

> Also, I have added a section noting crucial bugs/patches that are blocking 
> Minesweeper.


Can we just put flags around them and move on?





signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] meeting time updated

2014-02-06 Thread John Dickinson
Historically, the Swift team meetings have been every other week. In order to 
keep better track of things (and hopefully to get more specific attention on 
languishing reviews), we're moving to a weekly meeting schedule.

New meeting time: every Wednesday at 1900UTC in #openstack-meeting

The meeting agenda is tracked at https://wiki.openstack.org/wiki/Meetings/Swift


--John






signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Ready to import Launchpad Answers into Ask OpenStack

2014-02-06 Thread John Dickinson
Sounds like a good plan. My only concern with the import is that the users are 
matched up, and it looks like that's being handled. The only reason I've wanted 
to keep LP Answers open is to not lose that content, and this takes care of 
that. Thanks for doing it, and lgtm.

--John



On Feb 6, 2014, at 9:07 AM, Stefano Maffulli  wrote:

> Hello folks,
> 
> we're ready to import the answers from Launchpad into Ask OpenStack. A
> script will import all questions, answers, comments (and data about user
> accounts) from LP into Ask, tag them as the project of origin (nova,
> swift, etc). You can see the results of the test runs on
> http://ask-staging.openstack.org/en/questions/
> For example, the questions migrated from LP Answers Swift are
> http://ask-staging.openstack.org/en/questions/scope:all/sort:activity-desc/tags:swift/page:1/
> 
> We'll try also to sync accounts already existing on Ask with those
> imported from LP, matching on usernames, OpenID and email addresses as
> exposed by LP API. If there is no match, a new account will be created.
> 
> I'm writing to you to make sure that you're aware of this effort and to
> ask you if you are really, adamantly against closing LP Answers. In case
> you are against, I'll try to convince you otherwise :)
> 
> You can see the history of the effort and its current status on
> 
> https://bugs.launchpad.net/openstack-community/+bug/1212089
> 
> Next step is to set a date to run the import. The process will be:
> 
> 1 - run the import script
> 2 - put Ask down for maintenance
> 3 - import data into Ask
> 4 - check that it run correctly
> 5 - close all LP Answers, reconfigure LP projects to redirect to Ask
> 
> I think we can run this process one project at a time so we minimize
> interruptions. If the PTLs authorize me I think I have the necessary
> permissions to edit LP Answers, remove the archives from the public once
> the data is replicated correctly on Ask, so you can focus on coding.
> 
> Let me know what you think about closing LP Answers, using Ask exclusively
> to handle support requests and about delegating to me closing LP Answers
> for your projects.
> 
> Cheers,
> stef
> 
> -- 
> Ask and answer questions on https://ask.openstack.org



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [All] Fixed recent gate issues

2014-02-14 Thread John Dickinson
As many of you surely noticed, we had some significant
gate issues in the last day. It's fixed now, and I've got the details below.

The root cause of the issue was a lack of proper testing in python-
swiftclient. We've made some improvements here in the last few hours,
but improving this will be a longer-term effort (and one that is being
prioritized).

Here's what happened: In order to get support for TLS certificate
validation, swiftclient was ported to use the Python requests library.
This is a good change, overall, but it was merged with a bug where the
object data was uploaded as a multipart/form-data instead of as the
raw data itself. This issue was resolved with patch
https://review.openstack.org/#/c/73585/. The gate is currently stable,
everyone should be unblocked by this issue now. If you have a patch
that failed a check or gate run, you should recheck/reverify with bug
#1280072.
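
For anyone who hasn't hit this particular requests behavior before, the
difference is roughly the following (illustrative only; the actual fix is in
the review linked above):

    import requests

    url = 'http://proxy.example.com:8080/v1/AUTH_test/c/o'   # example URL
    headers = {'X-Auth-Token': 'token'}

    # Correct for Swift: the request body is the raw object data.
    requests.put(url, headers=headers, data=open('local_file', 'rb'))

    # Wrong for Swift: requests wraps the content in a multipart/form-data
    # envelope, so the boundaries and form headers end up stored as part of
    # the object.
    requests.put(url, headers=headers, files={'file': open('local_file', 'rb')})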

This, of course, raises the question of how this was allowed to
happen. First, there is a lack of functional testing in swiftclient
itself. (And, mea culpa, I should have done better testing before I
merged the initial breaking patch.) These tests are being prioritized
and worked on now.

Second, python-swiftclient did not have a symmetric gate with the
other projects that depend upon it. Although the gate change to make
this happen was proposed quite a while ago, it wasn't merged until
just this morning (https://review.openstack.org/#/c/70378/). Having
these tests earlier should have caught the issues in the original
python-swiftclient patch. Now that it has landed, there is much less
risk of such a problem happening again.

I want to thank Jeremy Stanley on the infra team for helping get
these patches landed quickly. I'd also like to thank Tristan Cacqueray
for helping get the fixes written for python-swiftclient.

--John


signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] python-swiftclient releases

2014-02-14 Thread John Dickinson
I'm pleased to announce a couple of big releases for python-swiftclient:
versions 1.9.0 and 2.0.2. You can find them both on PyPI:

https://pypi.python.org/pypi/python-swiftclient/2.0.2
https://pypi.python.org/pypi/python-swiftclient/1.9.0

So why the two releases? The 2.0.2 release is the result of migrating to
the Python requests library. The 1.9.0 release is the final release of
the 1.X series and includes all unreleased changes before the port to
requests. Below is a summary of the changes included in 1.9.0 and 2.0.2.

1.9.0 new features:

* Add parameter --object-name, which:
1) Sets target object name when upload single file
2) Sets object prefix when upload a directory

* Add capabilities command
This option uses the new /info endpoint to request the
remote capabilities and nicely display it.

* Allow custom headers when using swift download (CLI)
A repeatable option, --header or -H, is added so a user can specify
custom headers such as Range or If-Modified-Since when downloading
an object with the swift CLI.

2.0.2 new features and important info:

* Ported to use the "requests" library to support TLS/SSL certificate
  validation. The certificate validation changes the interpretation
  and usage of the "--insecure" option.

Usage of the requests library has two important caveats:

1) SSL compression is no longer settable with the
"--no-ssl-compression" option. The option is preserved as a
no-op for client compatibility. SSL compression is set by the
system SSL library.

2) The requests library does not currently support Expect
100-continue requests on upload. Users requiring this feature
should use python-swiftclient 1.9.0 until requests supports this
feature or use a different API wrapper.

Please pay special attention to these changes. There are no plans to
maintain ongoing development on the 1.X series. All future work,
including support for Python 3, will happen in the 2.X series.

I'd also like to explicitly thank the eNovance development team,
especially Tristan Cacqueray, Christian Schwede, and Chmouel Boudjnah,
for their work in these releases. In addition to several smaller
features, they led the effort to port python-swiftclient to the
requests library.

Note: the release is 2.0.2 and not simply 2.0 because of a bug that was
discovered after 2.0 was tagged. See
http://lists.openstack.org/pipermail/openstack-dev/2014-February/027172.html
for details.

--John


signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] Swift 1.13.0 released

2014-03-03 Thread John Dickinson
I'm pleased to announce that OpenStack Swift 1.13 has been released.
This release has some important new features (highlighted below), and
it also serves as a good checkpoint before Swift's final release in
the Icehouse cycle.

Launchpad page for this release: https://launchpad.net/swift/icehouse/1.13.0

Highlights from the Swift 1.13 changelog
(https://github.com/openstack/swift/blob/master/CHANGELOG):

* Account-level ACLs and ACL format v2

  Accounts now have a new privileged header to represent ACLs or
  any other form of account-level access control. The value of
  the header is a JSON dictionary string to be interpreted by the
  auth system. A reference implementation is given in TempAuth.
  Please see the full docs at
  http://swift.openstack.org/overview_auth.html

* Moved all DLO functionality into middleware

  The proxy will automatically insert the dlo middleware at an
  appropriate place in the pipeline the same way it does with the
  gatekeeper middleware. Clusters will still support DLOs after upgrade
  even with an old config file that doesn't mention dlo at all.

* Remove python-swiftclient dependency
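
As a side note on the account ACLs above: with TempAuth, setting one boils
down to a POST to the account with the ACL JSON in the privileged header. The
header name and JSON keys below follow the linked auth docs, but treat them as
an example and verify against the auth system you actually run:

    import json
    import requests

    resp = requests.post(
        'http://proxy.example.com:8080/v1/AUTH_test',    # account URL (example)
        headers={
            'X-Auth-Token': 'ADMIN_TOKEN',               # reseller admin token
            'X-Account-Access-Control': json.dumps(
                {'read-only': ['otheraccount:otheruser']}),
        })
    print resp.status_code   # expect a 2xx on success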

Please read the full changelog for a full report of this release. As
always, this release is considered production-ready, and deployers can
upgrade to it with no client downtime in their cluster.

I'd like to thank the following new contributors to Swift, each of
whom contributed to the project for the first time during this
release:

 - Luis de Bethencourt (l...@debethencourt.com)
 - Florent Flament (florent.flament-...@cloudwatt.com)
 - David Moreau Simard (dmsim...@iweb.com)
 - Shane Wang (shane.w...@intel.com)

Thank you to everyone who has contributed.

--John





signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][swift] Importing Launchpad Answers in Ask OpenStack

2014-03-03 Thread John Dickinson
Thanks for doing this, Stef. I've closed LP answers for Swift. All new 
questions should go to ask.openstack.org

--John



On Mar 3, 2014, at 3:26 PM, Stefano Maffulli  wrote:

> And we're done!
> 
> All questions and answers on Launchpad Answers have been imported in Ask
> OpenStack
> 
> Check it out, there are now almost 6,000 questions on
> https://ask.openstack.org/en/questions/
> 
> I realized that contrary to what I thought, I can't edit most of the
> Launchpad projects to close Answers. I'll contact the PTLs to edit the
> projects.
> 
> Cheers,
> stef
> 
> 
> On 01/28/2014 04:38 PM, Stefano Maffulli wrote:
>> Hello folks
>> 
>> we're almost ready to import all questions and answers from LP Answers
>> into Ask OpenStack.  You can see the result of the import from Nova on
>> the staging server http://ask-staging.openstack.org/
>> 
>> There are some formatting issues for the imported questions and I'm
>> trying to evaluate how bad these are.  The questions I see are mostly
>> readable and definitely pop up in search results, with their answers so
>> they are valuable already as is. Some parts, especially the logs, may
>> not look as good though. Fixing the parsers and getting a better rendering
>> for all imported questions would take an extra 3-5 days of work (maybe
>> more) and I'm not sure it's worth it.
>> 
>> Please go ahead and browse the staging site and let me know what you think.
>> 
>> Cheers,
>> stef
>> 
> 
> -- 
> Ask and answer questions on https://ask.openstack.org
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] [qa] which test configs does the swift team find useful

2014-11-25 Thread John Dickinson
This is great!

Sean, I agree with your analysis.

gate-swift-pep8 (yes)
gate-swift-docs (yes)
gate-swift-python27 (yes)
gate-swift-tox-func (yes)
check-swift-dsvm-functional (yes)
check-tempest-dsvm-full (to further ensure glance/heat/cinder checking)
check-grenade-dsvm  (I can go either way on this one, I won't fight for or 
against it)



--John





> On Nov 25, 2014, at 7:03 AM, Sean Dague  wrote:
> 
> As we are trying to do smart disaggregation of tests in the gate, I
> think it's important to figure out which test configurations seem to be
> actually helping, and which aren't. As the swift team has long had a
> functional test job, this seems like a good place to start. (Also the
> field deploy / upgrade story on Swift is probably one of the best of any
> OpenStack project, so removing friction is probably in order.)
> 
> gate-swift-pep8   SUCCESS in 1m 16s
> gate-swift-docs   SUCCESS in 1m 48s
> gate-swift-python27   SUCCESS in 3m 24s
> check-tempest-dsvm-full   SUCCESS in 56m 51s
> check-tempest-dsvm-postgres-full  SUCCESS in 54m 53s
> check-tempest-dsvm-neutron-full   SUCCESS in 1h 06m 09s
> check-tempest-dsvm-neutron-heat-slow  SUCCESS in 31m 18s
> check-grenade-dsvmSUCCESS in 39m 33s
> gate-tempest-dsvm-large-ops   SUCCESS in 29m 34s
> gate-tempest-dsvm-neutron-large-ops   SUCCESS in 22m 11s
> gate-swift-tox-func   SUCCESS in 2m 50s (non-voting)
> check-swift-dsvm-functional   SUCCESS in 17m 12s
> check-devstack-dsvm-cells SUCCESS in 15m 18s
> 
> 
> I think in looking at that it's obvious that:
> * check-devstack-dsvm-cells
> * check-tempest-dsvm-postgres-full
> * gate-tempest-dsvm-large-ops
> * gate-tempest-dsvm-neutron-large-ops
> * check-tempest-dsvm-neutron-full
> 
> Provide nothing new to swift, the access patterns on the glance => swift
> interaction aren't impacted on any of those, neither is the heat / swift
> resource tests or volumes / swift backup tests.
> 
> check-tempest-dsvm-neutron-heat-slow  doesn't touch swift either (it's
> actually remarkably sparse of any content).
> 
> Which kind of leaves us with 1 full stack run, and the grenade job. Have
> those caught real bugs? Does there remain value in them? Have other
> teams that rely on swift found those to block regressions?
> 
> Let's figure out what's helpful, and what's not, and purge out all the
> non helpful stuff.
> 
>   -Sean
> 
> --
> Sean Dague
> http://dague.net
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Swift 2.2.1 rc (err "c") 1 is available

2014-12-15 Thread John Dickinson
All,

I'm happy to say that the Swift 2.2.1 release candidate is available.

http://tarballs.openstack.org/swift/swift-2.2.1c1.tar.gz

Please take a look, and if nothing is found, we'll release this as the final 
2.2.1 version at the end of the week.

This release includes a lot of great improvements for operators. You can see 
the change log at https://github.com/openstack/swift/blob/master/CHANGELOG.


One note about the tag name. The recent release of setuptools has started 
enforcing PEP440. According to that spec, 2.2.1rc1 (ie the old way we tagged 
things) is normalized to 2.2.1c1. See 
https://www.python.org/dev/peps/pep-0440/#pre-releases for the details. Since 
OpenStack infrastructure relies on setuptools parsing to determine the tarball 
name, the tags we use need to be already normalized so that the tag in the repo 
matches the tarball created. Therefore, the new tag name is 2.2.1c1.


--John




signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Swift 2.2.1 released

2014-12-19 Thread John Dickinson
I'm happy to announce the release of Swift 2.2.1. The work of 28 contributors 
(including 8 first-time contributors), this release is definitely 
operator-centric. I recommend that you upgrade; as always you can upgrade to 
this release with no customer downtime.

Get the release: https://launchpad.net/swift/kilo/2.2.1
Full change log: https://github.com/openstack/swift/blob/master/CHANGELOG

Below I've highlighted a few of the more significant updates in this release.

* Swift now rejects object names with unicode surrogates. These unicode code 
points cannot be encoded as UTF-8, so they are now formally rejected.

* Storage node error limits now survive a ring reload. Each Swift proxy server 
tracks errors when talking to a storage node. If a storage node sends too many 
errors, no further requests are sent to that node for a time. However, 
previously this error tracking was cleared with a ring reload, and a ring 
reload could happen frequently if some servers were being gradually added to 
the cluster. Now, the error tracking is not lost on ring reload, and error 
tracking is aggregated across storage policies. Basically, this means that the 
proxy server has a more accurate view of the health of the cluster and your 
cluster will be less stressed when you have failures and capacity adjustments 
at the same time.

* Empty account and container partition directories are now cleaned up. This 
keeps the system healthy and prevents a large number of empty 
directories from (significantly) slowing down the replication process.

* Swift now includes a full translation for Simplified Chinese (zh_CN locale).

I'd like to thank all of the Swift contributors for helping with this release. 
I'd especially like to thank the first-time contributors listed below:

Cedric Dos Santos
Martin Geisler
Filippo Giunchedi
Gregory Haynes
Daisuke Morita
Hisashi Osanai
Shilla Saebi
Pearl Yajing Tan


Thank you, and have a happy holiday season.


John





signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Objects not getting distributed across the swift cluster...

2014-05-01 Thread John Dickinson

On May 1, 2014, at 10:32 AM, Shyam Prasad N  wrote:

> Hi Chuck, 
> Thanks for the reply.
> 
> The reason for such weight distribution seems to do with the ring rebalance 
> command. I've scripted the disk addition (and rebalance) process to the ring 
> using a wrapper command. When I trigger the rebalance after each disk 
> addition, only the first rebalance seems to take effect.
> 
> Is there any other way to adjust the weights other than rebalance? Or is 
> there a way to force a rebalance, even if the frequency of the rebalance (as 
> a part of disk addition) is under an hour (the min_part_hours value in ring 
> creation).

Rebalancing only moves one replica at a time to ensure that your data remains 
available, even if you have a hardware failure while you are adding capacity. 
This is why it may take multiple rebalances to get everything evenly balanced.

The min_part_hours setting (perhaps poorly named) should match how long a 
replication pass takes in your cluster. You can understand this because of what 
I said above. By ensuring that replication has completed before putting another 
partition "in flight", Swift can ensure that you keep your data highly 
available.

For completeness to answer your question, there is an (intentionally) 
undocumented option in swift-ring-builder called 
"pretend_min_part_hours_passed", but it should ALMOST NEVER be used in a 
production cluster, unless you really, really know what you are doing. Using 
that option will very likely cause service interruptions to your users. The 
better option is to correctly set the min_part_hours value to match your 
replication pass time (with set_min_part_hours), and then wait for swift to 
move things around.
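
In practice that looks something like the following (illustrative; pick an
hours value that matches your measured replication pass time, and push the
rebalanced rings out to all nodes afterwards):

    # Sketch: set min_part_hours on each builder, then rebalance.
    import subprocess

    for builder in ('account.builder', 'container.builder', 'object.builder'):
        subprocess.check_call(
            ['swift-ring-builder', builder, 'set_min_part_hours', '24'])
        # rebalance may exit non-zero when it has nothing useful to move, so
        # report the status rather than treating it as automatically fatal
        status = subprocess.call(['swift-ring-builder', builder, 'rebalance'])
        print builder, 'rebalance exit status:', status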

Here's some more info on how and why to add capacity to a running Swift 
cluster: https://swiftstack.com/blog/2012/04/09/swift-capacity-management/

--John





> On May 1, 2014 9:00 PM, "Chuck Thier"  wrote:
> Hi Shyam,
> 
> If I am reading your ring output correctly, it looks like only the devices in 
> node .202 have a weight set, and thus why all of your objects are going to 
> that one node.  You can update the weight of the other devices, and 
> rebalance, and things should get distributed correctly.
> 
> --
> Chuck
> 
> 
> On Thu, May 1, 2014 at 5:28 AM, Shyam Prasad N  wrote:
> Hi,
> 
> I created a swift cluster and configured the rings like this...
> 
> swift-ring-builder object.builder create 10 3 1
> 
> ubuntu-202:/etc/swift$ swift-ring-builder object.builder 
> object.builder, build version 12
> 1024 partitions, 3.00 replicas, 1 regions, 4 zones, 12 devices, 300.00 balance
> The minimum number of hours before a partition can be reassigned is 1
> Devices: id  region  zone  ip address   port  replication ip  replication port  name  weight  partitions  balance  meta
>           0       1     1  10.3.0.202   6010      10.3.0.202              6010  xvdb    1.00        1024   300.00
>           1       1     1  10.3.0.202   6020      10.3.0.202              6020  xvdc    1.00        1024   300.00
>           2       1     1  10.3.0.202   6030      10.3.0.202              6030  xvde    1.00        1024   300.00
>           3       1     2  10.3.0.212   6010      10.3.0.212              6010  xvdb    1.00           0  -100.00
>           4       1     2  10.3.0.212   6020      10.3.0.212              6020  xvdc    1.00           0  -100.00
>           5       1     2  10.3.0.212   6030      10.3.0.212              6030  xvde    1.00           0  -100.00
>           6       1     3  10.3.0.222   6010      10.3.0.222              6010  xvdb    1.00           0  -100.00
>           7       1     3  10.3.0.222   6020      10.3.0.222              6020  xvdc    1.00           0  -100.00
>           8       1     3  10.3.0.222   6030      10.3.0.222              6030  xvde    1.00           0  -100.00
>           9       1     4  10.3.0.232   6010      10.3.0.232              6010  xvdb    1.00           0  -100.00
>          10       1     4  10.3.0.232   6020      10.3.0.232              6020  xvdc    1.00           0  -100.00
>          11       1     4  10.3.0.232   6030      10.3.0.232              6030  xvde    1.00           0  -100.00
> 
> Container and account rings have a similar configuration.
> Once the rings were created and all the disks were added to the rings like 
> above, I ran rebalance on each ring. (I ran rebalance after adding each of 
> the nodes above.)
> Then I immediately scp the rings to all other nodes in the cluster.
> 
> I now observe that the objects are all going to 10.3.0.202. I don't see the 
> objects being replicated to the other nodes. So much so that 202 is 
> approaching 100% disk usage, while other nodes are almost completely empty.
> What am I doing wrong? Am I not supposed to run rebalance operation after 
> addition of each disk/node?
> 

Re: [openstack-dev] [neutron] explanations on the current state of config file handling

2014-05-04 Thread John Dickinson
To add some color, Swift supports both single conf files and conf.d 
directory-based configs. See 
http://docs.openstack.org/developer/swift/deployment_guide.html#general-service-configuration.

The "single config file" pattern is quite useful for simpler configurations, 
but the directory-based ones become especially useful when looking at cluster 
configuration management tools--stuff that auto-generates and composes config 
settings (ie non hand-curated configs). For example, the conf.d configs can 
support each middleware config or background daemon process in a separate file. 
Or server settings in one file and common logging settings in another.

(Also, to answer before it's asked [but I don't want to derail the current 
thread], I'd be happy to look at oslo config parsing if it supports the same 
functionality.)

--John




On May 4, 2014, at 9:49 AM, Armando M.  wrote:

> If the consensus is to unify all the config options into a single
> configuration file, I'd suggest following what the Nova folks did with
> [1], which I think is what Salvatore also hinted at. This will also
> help mitigate needless source code conflicts that would inevitably
> arise when merging competing changes to the same file.
> 
> I personally do not like having a single file with gazillion options
> (the same way I hate source files with gazillion LOC's but I digress
> ;), but I don't like a proliferation of config files either. So I
> think what Mark suggested below makes sense.
> 
> Cheers,
> Armando
> 
> [1] - 
> https://github.com/openstack/nova/blob/master/etc/nova/README-nova.conf.txt
> 
> On 2 May 2014 07:09, Mark McClain  wrote:
>> 
>> On May 2, 2014, at 7:39 AM, Sean Dague  wrote:
>> 
>>> Some non insignificant number of devstack changes related to neutron
>>> seem to be neutron plugins having to do all kinds of manipulation of
>>> extra config files. The grenade upgrade issue in neutron was because of
>>> some placement change on config files. Neutron seems to have *a ton* of
>>> config files and is extremely sensitive to their locations/naming, which
>>> also seems like it ends up in flux.
>> 
>> We have grown in the number of configuration files and I do think some of 
>> the design decisions made several years ago should probably be revisited.  
>> One of the drivers of multiple configuration files is the way that Neutron 
>> is currently packaged [1][2].  We’re packaged significantly different than 
>> the other projects so the thinking in the early years was that each 
>> plugin/service since it was packaged separately needed its own config file.  
>> This causes problems because often it involves changing the init script 
>> invocation if the plugin is changed vs only changing the contents of the 
>> init script.  I’d like to see Neutron changed to be a single package similar 
>> to the way Cinder is packaged with the default config being ML2.
>> 
>>> 
>>> Is there an overview somewhere to explain this design point?
>> 
>> Sadly no.  It’s a historical convention that needs to be reconsidered.
>> 
>>> 
>>> All the other services have a single config config file designation on
>>> startup, but neutron services seem to need a bunch of config files
>>> correct on the cli to function (see this process list from recent
>>> grenade run - http://paste.openstack.org/show/78430/ note you will have
>>> to horiz scroll for some of the neutron services).
>>> 
>>> Mostly it would be good to understand this design point, and if it could
>>> be evolved back to the OpenStack norm of a single config file for the
>>> services.
>>> 
>> 
>> +1 to evolving into a more limited set of files.  The trick is how we 
>> consolidate the agent, server, plugin and/or driver options or maybe we 
>> don’t consolidate and use config-dir more.  In some cases, the files share a 
>> set of common options and in other cases there are divergent options [3][4]. 
>>   Outside of testing the agents are not installed on the same system as the 
>> server, so we need to ensure that the agent configuration files should stand 
>> alone.
>> 
>> To throw something out, what if we moved to using config-dir for optional 
>> configs since it would still support plugin scoped configuration files.
>> 
>> Neutron Servers/Network Nodes
>> /etc/neutron.d
>>     neutron.conf  (Common Options)
>>     server.d      (all plugin/service config files)
>>     service.d     (all service config files)
>> 
>> 
>> Hypervisor Agents
>> /etc/neutron
>>     neutron.conf
>>     agent.d  (Individual agent config files)
>> 
>> 
>> The invocations would then be static:
>> 
>> neutron-server --config-file /etc/neutron/neutron.conf --config-dir 
>> /etc/neutron/server.d
>> 
>> Service Agents:
>> neutron-l3-agent --config-file /etc/neutron/neutron.conf --config-dir 
>> /etc/neutron/service.d
>> 
>> Hypervisors (assuming the consolidates L2 is finished this cycle):
>> neutron-l2-agent --config-file /etc/neutron/neutron.conf --config-dir 
>> /etc/neutron/agent.d
>> 
>> Thoughts?
>> 
>> 

Re: [openstack-dev] Monitoring as a Service

2014-05-04 Thread John Dickinson
One of the advantages of the program concept within OpenStack is that separate 
code projects with complementary goals can be managed under the same program 
without needing to be the same codebase. The most obvious example across every 
program are the "server" and "client" projects under most programs.

This may be something that can be used here, if it doesn't make sense to extend 
the ceilometer codebase itself.

--John





On May 4, 2014, at 12:30 PM, Denis Makogon  wrote:

> Hello to All.
> 
> I also +1 this idea. As I can see, the Telemetry program (according to Launchpad) 
> covers the processing of infrastructure metrics (networking, etc.) and 
> in-compute-instance metrics/monitoring.
> So, the best option, I guess, is to propose adding such a great feature to 
> Ceilometer. In-compute-instance monitoring will be a great value-add to 
> upstream Ceilometer.
> As for me, it's a good chance to integrate well-known production ready 
> monitoring systems that have tons of specific plugins (like Nagios etc.)
> 
> Best regards,
> Denis Makogon
> 
> воскресенье, 4 мая 2014 г. пользователь John Griffith написал:
> 
> 
> 
> On Sun, May 4, 2014 at 9:37 AM, Thomas Goirand  wrote:
> On 05/02/2014 05:17 AM, Alexandre Viau wrote:
> > Hello Everyone!
> >
> > My name is Alexandre Viau from Savoir-Faire Linux.
> >
> > We have submited a Monitoring as a Service blueprint and need feedback.
> >
> > Problem to solve: Ceilometer's purpose is to track and *measure/meter* 
> > usage information collected from OpenStack components (originally for 
> > billing). While Ceilometer is useful for the cloud operators and 
> > infrastructure metering, it is not a *monitoring* solution for the tenants 
> > and their services/applications running in the cloud because it does not 
> > allow for service/application-level monitoring and it ignores detailed and 
> > precise guest system metrics.
> >
> > Proposed solution: We would like to add Monitoring as a Service to Openstack
> >
> > Just like Rackspace's Cloud monitoring, the new monitoring service - lets 
> > call it OpenStackMonitor for now -  would let users/tenants keep track of 
> > their ressources on the cloud and receive instant notifications when they 
> > require attention.
> >
> > This RESTful API would enable users to create multiple monitors with 
> > predefined checks, such as PING, CPU usage, HTTPS and SMTP or custom checks 
> > performed by a Monitoring Agent on the instance they want to monitor.
> >
> > Predefined checks such as CPU and disk usage could be polled from 
> > Ceilometer. Other predefined checks would be performed by the new 
> > monitoring service itself. Checks such as PING could be flagged to be 
> > performed from multiple sites.
> >
> > Custom checks would be performed by an optional Monitoring Agent. Their 
> > results would be polled by the monitoring service and stored in Ceilometer.
> >
> > If you wish to collaborate, feel free to contact me at 
> > alexandre.v...@savoirfairelinux.com
> > The blueprint is available here: 
> > https://blueprints.launchpad.net/openstack-ci/+spec/monitoring-as-a-service
> >
> > Thanks!
> 
> I would prefer if monitoring capabilities were added to Ceilometer rather
> than adding yet-another project to deal with.
> 
> What's the reason for not adding the feature to Ceilometer directly?
> 
> Thomas
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> I'd also be interested in the overlap between your proposal and Ceilometer.  
> It seems at first thought that it would be better to introduce the monitoring 
> functionality into Ceilometer and make that project more diverse as opposed 
> to yet another project.
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





[openstack-dev] [Swift] This week's team meeting: summit prep

2014-05-05 Thread John Dickinson
This week's Swift team meeting will spend time looking at the Swift-related 
conference talks and summit design sessions.

https://wiki.openstack.org/wiki/Meetings/Swift

If you are leading a session topic or giving a conference talk, please attend 
this week's meeting. I want to make sure you have what you need and are 
adequately prepared for the conference.

--John








[openstack-dev] [all] process problem with release tagging

2014-05-05 Thread John Dickinson
tl;dr: (1) the current tag names used don't work and we need
something else. (2) Swift (at least) needs to burn a
release number with a new tag

The current process of release is:

1) branch milestone-proposed (hereafter, m-p) from master
2) tag m-p with an RC tag (eg 1.13.1.rc1)
* note that since there are no commits on m-p,
  this tag is an ancestor of master (effectively on master itself)
3) continue development on master
3.1) backport any changes necessary to m-p
4) after QA, tag m-p with the final version
5) merge m-p into master, thus making the final version tag
   an ancestor of master[0]


This process has 2 flaws:

First (and easiest to fix), the rc tag name sorts after the final
release name (`dpkg --compare-versions 1.13.1.rc1.25 lt 1.13.1`
fails). The practical result is that if someone grabbed a version of
the repo after m-p was created but before the merge and then packaged
and deployed it, their currently-installed version actually sorts
newer than the current version on master[1]. The short-term fix is to
burn a version number to get a newer version on master. The long-term
fix is to use a different template for creating the RC tags on m-p.
For example, `dpkg --compare-versions 1.13.1~rc1.25 lt 1.13.1` works.
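
To make the sort-order problem concrete, here's a small sketch (assuming dpkg
is installed; the wrapper function is only for illustration):

    import subprocess

    def dpkg_compare(a, op, b):
        # True if `dpkg --compare-versions a op b` holds (dpkg exits 0)
        return subprocess.call(["dpkg", "--compare-versions", a, op, b]) == 0

    # the current rc tag template sorts *after* the final release
    print(dpkg_compare("1.13.1.rc1.25", "gt", "1.13.1"))  # True
    # a tilde-style rc tag sorts *before* the final release, as intended
    print(dpkg_compare("1.13.1~rc1.25", "lt", "1.13.1"))  # True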

Second, the process creates a time window where the version number on
master is incorrect. There are a few ways I'd propose to fix this. One
way is to stop using post-release versioning. Set the version number
in a string in the code when development starts so that the first
commit after a release (or creation of m-p) is the version number for
the next release. I'm not a particular fan of this, but it is the way
we used to do things and it does work.

Another option would be to not tag a release until the m-p branch
actually is merged to master. This would eliminate any windows of
wrong versions and keep master always deployable (all tags, except
security backports, would be on master). Another option would be to do
away with the m-p branch altogether and only create it if there is a
patch needed after the RC period starts.

The general idea of keeping release tags on the master branch would
help enable deployers (ie ops) who are tracking master and not just
releasing the distro-packaged versions. We know that some of the
largest and loudest OpenStack deployers are proud that they "follow
master".

What other options are there?


[0] This is the process for Swift, but in both Keystone and Ceilometer
I don't see any merge commits from m-p back to master. This
actually means that for Keystone and Ceilometer, any deployer
packaging master will get bitten by the same issue we've seen in
the Swift community.
[1] In Icehouse, this window of opportunity was exacerbated by the
long time (2 weeks?) it took to get m-p merged back into master.
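
As an aside, a quick way to check the situation described in footnote [0] for
any project (a sketch, assuming a local clone with the release tags fetched) is
to ask git whether the final tag is an ancestor of master:

    import subprocess

    def tag_is_on_master(tag):
        # exits 0 only if `tag` is an ancestor of origin/master, i.e. the
        # milestone-proposed branch was merged back after the release
        return subprocess.call(
            ["git", "merge-base", "--is-ancestor", tag, "origin/master"]) == 0

    print(tag_is_on_master("1.13.1"))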



--John







Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread John Dickinson
Can you explain how PKI info is compressible? I thought it was encrypted, which 
should mean you can't compress it, right?


--John





On May 21, 2014, at 8:32 AM, Morgan Fainberg  wrote:

> The keystone team is also looking at ways to reduce the data contained in the 
> token. Coupled with the compression, this should get the tokens back down to 
> a reasonable size. 
> 
> Cheers,
> Morgan
> 
> Sent via mobile
> 
> On Wednesday, May 21, 2014, Adam Young  wrote:
> On 05/21/2014 11:09 AM, Chuck Thier wrote:
>> There is a review for swift [1] that is requesting to set the max header 
>> size to 16k to be able to support v3 keystone tokens.  That might be fine if 
>> you measure you request rate in requests per minute, but this is continuing 
>> to add significant overhead to swift.  Even if you *only* have 10,000 
>> requests/sec to your swift cluster, an 8k token is adding almost 80MB/sec of 
>> bandwidth.  This will seem to be equally bad (if not worse) for services 
>> like marconi.
>> 
>> When PKI tokens were first introduced, we raised concerns about the 
>> unbounded size of of the token in the header, and were told that uuid style 
>> tokens would still be usable, but all I heard at the summit, was to not use 
>> them and PKI was the future of all things.
>> 
>> At what point do we re-evaluate the decision to go with pki tokens, and that 
>> they may not be the best idea for apis like swift and marconi?
> 
> Keystone tokens were slightly shrunk at the end of the last release cycle by 
> removing unnecessary data from each endpoint entry.
> 
> Compressed PKI tokens are enroute and will be much smaller.
> 
>> 
>> Thanks,
>> 
>> --
>> Chuck
>> 
>> [1] https://review.openstack.org/#/c/93356/
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> 
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread John Dickinson
Thanks Dolph and Lance for the info and links.


What concerns me, in general, about the current length of keystone tokens is 
that they are unbounded. And the proposed solutions don't change that pattern.

My understanding of why PKI tokens are used is so that the system doesn't have 
to call to Keystone to authorize the request. This reduces the load on 
Keystone, but it adds significant overhead for every API request.

Keystone's first system was to use UUID bearer tokens. These are fixed length, 
small, cacheable, and require a call to Keystone once per cache period.

Moving to PKI tokens, we now have multi-kB headers that significantly increase 
the size of each request. Swift deployers commonly have small objects on the 
order of <50kB, so adding another ~10kB to each request, just to save a 
once-a-day call to Keystone (ie uuid tokens) seems to be a really high price to 
pay for not much benefit.

The other benefit to PKI tokens is that services can make calls to other 
systems on behalf of the user (eg nova can call cinder for the user). This is 
great, but it's not the only usage pattern in OpenStack projects, and therefore 
I don't like optimizing for it at the expense of other patterns.

In addition to PKI tokens (ie signed+encoded service catalogs), I'd like to see 
Keystone support and remain committed to fixed-length bearer tokens or a 
signed-with-shared-secret auth mechanism (a la AWS).
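
To be clear about what I mean by that last option, here is a rough sketch of a
shared-secret signing scheme (the header names and helper are hypothetical, not
an existing Keystone or Swift API):

    import hashlib
    import hmac
    import time

    def sign_request(access_key, secret_key, method, path):
        # the client signs the request; the server recomputes the HMAC with its
        # copy of the secret and compares, so no multi-kB bearer token is sent
        expires = str(int(time.time()) + 300)
        string_to_sign = "\n".join([method, path, expires])
        signature = hmac.new(secret_key.encode("utf-8"),
                             string_to_sign.encode("utf-8"),
                             hashlib.sha256).hexdigest()
        return {"X-Auth-Access-Key": access_key,
                "X-Auth-Expires": expires,
                "X-Auth-Signature": signature}

    headers = sign_request("myaccesskey", "mysecret", "GET", "/v1/AUTH_test/c/o")

The request overhead with something like this stays at a few hundred bytes no
matter how large the service catalog grows.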

--John




On May 21, 2014, at 9:09 AM, Dolph Mathews  wrote:

> 
> On Wed, May 21, 2014 at 10:41 AM, John Dickinson  wrote:
> Can you explain how PKI info is compressible? I thought it was encrypted, 
> which should mean you can't compress it right?
> 
> They're not encrypted - just signed and then base64 encoded. The JSON (and 
> especially service catalog) is compressible prior to encoding.
> 
> 
> 
> --John
> 
> 
> 
> 
> 
> On May 21, 2014, at 8:32 AM, Morgan Fainberg  
> wrote:
> 
> > The keystone team is also looking at ways to reduce the data contained in 
> > the token. Coupled with the compression, this should get the tokens back 
> > down to a reasonable size.
> >
> > Cheers,
> > Morgan
> >
> > Sent via mobile
> >
> > On Wednesday, May 21, 2014, Adam Young  wrote:
> > On 05/21/2014 11:09 AM, Chuck Thier wrote:
> >> There is a review for swift [1] that is requesting to set the max header 
> >> size to 16k to be able to support v3 keystone tokens.  That might be fine 
> >> if you measure you request rate in requests per minute, but this is 
> >> continuing to add significant overhead to swift.  Even if you *only* have 
> >> 10,000 requests/sec to your swift cluster, an 8k token is adding almost 
> >> 80MB/sec of bandwidth.  This will seem to be equally bad (if not worse) 
> >> for services like marconi.
> >>
> >> When PKI tokens were first introduced, we raised concerns about the 
> >> unbounded size of of the token in the header, and were told that uuid 
> >> style tokens would still be usable, but all I heard at the summit, was to 
> >> not use them and PKI was the future of all things.
> >>
> >> At what point do we re-evaluate the decision to go with pki tokens, and 
> >> that they may not be the best idea for apis like swift and marconi?
> >
> > Keystone tokens were slightly shrunk at the end of the last release cycle 
> > by removing unnecessary data from each endpoint entry.
> >
> > Compressed PKI tokens are enroute and will be much smaller.
> >
> >>
> >> Thanks,
> >>
> >> --
> >> Chuck
> >>
> >> [1] https://review.openstack.org/#/c/93356/
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >>
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread John Dickinson

On May 21, 2014, at 4:26 PM, Adam Young  wrote:

> On 05/21/2014 03:36 PM, Kurt Griffiths wrote:
>> Good to know, thanks for clarifying. One thing I’m still fuzzy on, however, 
>> is why we want to deprecate use of UUID tokens in the first place? I’m just 
>> trying to understand the history here...
> Because they are wasteful, and because they are the chattiest part of 
> OpenStack.  I can go into it in nauseating detail if you really want, 
> including the plans for future enhancements and the weaknesses of bearer 
> tokens.
> 
> 
> A token is nothing more than a snap shot of the data you get from Keystone 
> distributed.  It is stored in Memcached and in the Horizon session uses the 
> hash of it for a key.
> 
> You can do the same thing.  Once you know the token has been transferred once 
> to a service, assuming that service has caching on, you can pass the hash of 
> the key instead of the whole thing.  

So this would mean that a Swift client would auth against Keystone to get the 
PKI token, send that to Swift, and then get back from Swift a "short" token 
that can be used for subsequent requests? It's an interesting idea to consider, 
but it is a new sort of protocol for clients to implement.


> 
> Actually, you can do that up front, as auth_token middleware will just 
> default to an online lookup. However, we are planning on moving to ephemeral 
> tokens (not saved in the database) and an online lookup won't be possible 
> with those.  The people that manage Keystone will be happy with that, and 
> forcing an online lookup will make them sad.

An "online lookup" is one that calls the Keystone service to validate a token? 
Which implies that by disabling online lookup there is enough info in the token 
to validate it without any call to Keystone?

I understand how it's advantageous to offload token validation away from 
Keystone itself (helps with scaling), but the current "solution" here seems to 
be pushing a lot of pain to consumers and deployers of data APIs (eg Marconi 
and Swift and others).


> 
> Hash is MD5 up through what is released in Icehouse.  The next version of 
> auth_token middleware will support a configurable algorithm.  The default 
> should be updated to sha256 in the near future.

If a service (like Horizon) is hashing the token and using that as a session 
key, then why does it matter what the auth_token middleware supports? Isn't the 
hashing handled in the service itself? I'm thinking in the context of how we 
would implement this idea in Swift (exploring possibilities, not committing to 
a patch).
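
For concreteness, the kind of thing I'm imagining on the Swift side is nothing
more than this (a hypothetical helper, not anything in Swift today):

    import hashlib

    def token_cache_key(token, algorithm="sha256"):
        # derive a short, fixed-length key from an arbitrarily large PKI token,
        # e.g. for a memcache lookup, instead of shipping the full token around
        return hashlib.new(algorithm, token.encode("utf-8")).hexdigest()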

> 
> 
> 
> 
> 
> 
>> 
>> From: Morgan Fainberg 
>> Reply-To: OpenStack Dev 
>> Date: Wednesday, May 21, 2014 at 1:23 PM
>> To: OpenStack Dev 
>> Subject: Re: [openstack-dev] Concerns about the ballooning size of keystone 
>> tokens
>> 
>> This is part of what I was referencing in regards to lightening the data 
>> stored in the token. Ideally, we would like to see an "ID only" token that 
>> only contains the basic information to act. Some initial tests show these 
>> tokens should be able to clock in under 1k in size. However all the details 
>> are not fully defined yet. Coupled with this data reduction there will be 
>> explicit definitions of the data that is meant to go into the tokens. Some 
>> of the data we have now is a result of convenience of accessing the data. 
>> 
>> I hope to have this token change available during Juno development cycle. 
>> 
>> There is a lot of work to be done to ensure this type of change goes 
>> smoothly. But this is absolutely on the list of things we would like to 
>> address. 
>> 
>> Cheers,
>> Morgan
>> 
>> Sent via mobile 
>> 
>> On Wednesday, May 21, 2014, Kurt Griffiths  
>> wrote:
>> > adding another ~10kB to each request, just to save a once-a-day call to
>> >Keystone (ie uuid tokens) seems to be a really high price to pay for not
>> >much benefit.
>> 
>> I have the same concern with respect to Marconi. I feel like PKI tokens
>> are fine for control plane APIs, but don’t work so well for high-volume
>> data APIs where every KB counts.
>> 
>> Just my $0.02...
>> 
>> --Kurt
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> 
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





[openstack-dev] New release of python-swiftclient (now with py3 support)

2014-05-23 Thread John Dickinson
I'm happy to announce that python-swiftclient 2.1.0 has just been released!

https://pypi.python.org/pypi/python-swiftclient/

This release includes support for Python 3.3. I want to specifically thank 
Tristan Cacqueray, Chmouel Boudjnah, Alex Gaynor, and Christian Schwede for 
working on the py3 compatibility.


--John








[openstack-dev] [Swift] storage policies merge plan

2014-05-23 Thread John Dickinson
We've been working for a long time on the feature/ec branch in the swift repo. 
It's now "done" and needs to be merged into master to be generally available.

Here's how the integration is going to work:

1) The feature/ec branch will be refactored into a series of dependent 
reviewable patches
2) The patch chain will be proposed to master, and master will enter a freeze 
until the storage policy patches land
3) The first patch in the chain will be marked as -2 to "plug" the chain
4) The Swift community will review and approve all patches in the chain.
5) When all patches in the chain are approved, the first -2 will be removed and 
the whole chain will be sent to the CI system


There are two things that I'll ask of you during this time. First, please 
commit time to reviewing the storage policy patches. Second, please do not 
deploy a version of Swift that is midway through the storage policy patch 
chain. I don't expect it to break anything, but it's a complicating factor best 
to be avoided.

I will send out another email when the patch chain has been proposed to master 
and to announce the freeze.

--John







[openstack-dev] [Swift] storage policies are upon us; soft freeze in effect

2014-05-28 Thread John Dickinson
The series of patches implementing storage policies in Swift has been proposed 
to master. The first patch set is https://review.openstack.org/#/c/96026/.

This is a major feature in Swift, and it requires a lot of work in reviewing 
and integrating it. In order to focus as reviewers, Swift is under a soft 
freeze until the storage policies patches land.

--John








Re: [openstack-dev] masking X-Auth-Token in debug output - proposed consistency

2014-06-11 Thread John Dickinson
For both the security and the log line length, Swift is by default just 
displaying the first 16 bytes of the token.
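
Roughly, the two styles under discussion look like this (the token value is
made up):

    import hashlib

    token = "MIIKqgYJKoZIhvcNAQcCoIIKmzCCCpc..."  # hypothetical token value

    # what Swift does today: keep a short prefix so requests can be correlated
    masked_prefix = token[:16] + "..."

    # the hash style discussed below: fixed length, still correlatable, and not
    # directly usable as a credential
    masked_hash = "{SHA1}" + hashlib.sha1(token.encode("utf-8")).hexdigest()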

--John



On Jun 11, 2014, at 12:39 PM, Morgan Fainberg  wrote:

> This stems a bit further than just reduction in noise in the logs. Think of 
> this from a case of security (with centralized logging or lower privileged 
> users able to read log files). If we aren’t putting passwords into these log 
> files, we shouldn’t be putting tokens in. The major functional difference 
> between a token and a password is that the token has a fixed life span. 
> Barring running over the TTL of the token, the token grants all rights and 
> privileges that user has (some exceptions, such as trusts), even allowing for 
> a rescope of token to another project/tenant. In this light, tokens
> are only marginally less exposure than a password in a log file.
> 
> I firmly believe that we should avoid putting information that conveys 
> authorization (e.g. username/password or bearer token id) in the logs.
> —
> Morgan Fainberg
> 
> 
> From: Sean Dague s...@dague.net
> Reply: OpenStack Development Mailing List (not for usage questions) 
> openstack-dev@lists.openstack.org
> Date: June 11, 2014 at 12:02:20
> To: OpenStack Development Mailing List (not for usage questions) 
> openstack-dev@lists.openstack.org
> Subject:  [openstack-dev] masking X-Auth-Token in debug output - proposed 
> consistency 
> 
>> We've had a few reviews recently going around to mask out X-Auth-Token 
>> from the python clients in the debug output. Currently there are a mix 
>> of ways this is done. 
>> 
>> In glanceclient (straight stricken) 
>> 
>> X-Auth-Token: *** 
>> 
>> The neutronclient proposal - 
>> https://review.openstack.org/#/c/93866/9/neutronclient/client.py is to 
>> use 'REDACTED' 
>> 
>> There is a novaclient patch in the gate that uses SHA1() - 
>> https://review.openstack.org/#/c/98443/ 
>> 
>> Morgan was working on keystone.session patch - 
>> https://review.openstack.org/#/c/98443/ 
>> 
>> after some back and forth we landed on {SHA1} because 
>> that's actually LDAP standard for such things, and SHA1(...) looks too 
>> much like a function. I think that should probably be our final solution 
>> here. 
>> 
>> Why SHA1? 
>> 
>> While we want to get rid of the token from the logs, for both security 
>> and size reasons (5 - 10% of the logs in a gate run by bytes are 
>> actually keystone tokens), it's actually sometimes important to 
>> understand that *the same* token was used between 2 requests, or that 2 
>> different tokens were used. This is especially true with expiration times 
>> defaulting to 1 hr, and the fact that sometimes we have tests take 
>> longer than that (so we need to debug that we didn't rotate tokens when 
>> we should have). 
>> 
>> Because the keystone token is long (going north of 4k), and variable 
>> data length, and with different site data, these values should not be 
>> susceptible to a generic rainbow attack, so a single SHA1 seems 
>> sufficient. If there are objections to that, we can field something else 
>> there. It also has the advantage of being "batteries included" with all 
>> our supported versions of python. 
>> 
>> I'm hoping we can just ACK this approach, and get folks to start moving 
>> patches through the clients to clean this all up. 
>> 
>> If you have concerns, please bring them up now. 
>> 
>> -Sean 
>> 
>> -- 
>> Sean Dague 
>> http://dague.net 
>> 
>> ___ 
>> OpenStack-dev mailing list 
>> OpenStack-dev@lists.openstack.org 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] Pecan Evaluation for Marconi

2014-03-19 Thread John Dickinson
On Mar 19, 2014, at 12:27 PM, Julien Danjou  wrote:

> On Wed, Mar 19 2014, Kurt Griffiths wrote:
> 
>> That begs the question, *why* is that unlikely to change?
> 
> Because that project is Swift.

If you look at the Swift code, you'll see that swob is not a replacement for 
either Pecan or Falcon. swob was written to replace WebOb, and we documented 
why we did this. 
https://github.com/openstack/swift/blob/master/swift/common/swob.py#L23 It's an 
in-tree module written to remove a recurring pain point. swob has allowed the 
Swift team to focus their time on adding features and fixing bugs in other 
parts of the code.

Why don't we use Pecan or Falcon in Swift? Mostly because we don't need the 
functionality that they provide, and so there is no reason to go add a 
dependency (and thus increase packaging and install requirements on deployers). 
Now if there are other uses for swob outside of Swift, let's have a 
conversation about including it in an external library so we can all benefit.

---

The comparison that Balaji did between Falcon and Pecan looks like a very good 
overview. It gives information necessary to make an informed choice based on 
real data instead of "it's what everybody is doing". If you don't like some 
criteria reported on, I'm sure Balaji would be happy to see your comparison and 
evaluation.

We all want to make informed decisions based on data, not claims. Balaji's 
analysis is a great start on figuring out what the Marconi project should 
choose. As such, it seems that the Marconi team is the responsible party to 
make the right choice for their use case, after weighing all the factors.


--John







[openstack-dev] [Swift] Swift storage policies in Icehouse

2014-03-24 Thread John Dickinson
tl;dr: Icehouse won't have storage policies; merging the work into master will 
be in "logical chunks" after Swift's contribution to the Icehouse release.

Here's a quick update on what's been going on in the Swift community with 
storage policies.

Many Swift contributors have been working on storage policies for quite some 
time now. It's a huge feature and improvement to Swift that enables a ton of 
new use cases. There's been very strong interest in this feature from both 
existing and new users.

As a quick review, storage policies allow objects to be stored across a 
particular subset of hardware (e.g. SLA or geography) and with a particular 
storage algorithm (e.g. different replication parameters and [soon] erasure 
codes). Storage policies allow deployers to specifically tailor their storage 
cluster to match their particular use case. (You can read more at 
https://swiftstack.com/blog/2014/01/27/openstack-swift-storage-policies/.)

When we first started actively working on this set of work, we had a goal of 
including it in the Icehouse release. However, that was an estimate, based on 
some early assumptions on what needed to be done. Work has continued very well, 
and we've learned both what to do and what not to do. As we get down to the 
final pieces of functionality and close to the Icehouse release date, it's 
become obvious that we will not be able to finish the work and provide adequate 
docs and testing in the Icehouse timeframe. This is not due to a lack of effort 
or fault of anyone involved; it's simply a large chunk of work that takes a 
long time to get right.

It's a hard decision, but one we feel is the right one to make.

So what could we do?
1) Just finish up the final bits and force it in right before Icehouse. Not 
only would we look like jerks to the whole community, if some major bug were to 
be revealed, we'd look incompetent. This isn't a good solution.

2) Wait until just after the Icehouse release and shove it into master. Then we 
look like jerks, but we have a few months before the next OpenStack integrated 
release to let things settle. This really isn't a good solution either.

3) Refactor the existing feature/ec branch into logical chunks of functionality 
and propose those to master. This allows for better reviews, since it's more 
digestible than a 5500+ line merge patch, and it also gives non-reviewers a 
clearer understanding of what's changing.

Number three is what we're doing. The patches to enable storage policies in 
Swift are now being prepared for review and will be submitted to gerrit shortly.

Looking at the Icehouse release, I'll cut a Swift release in the OpenStack RC 
window (March 27-April 10). This will not include storage policies. I expect 
that it will take several weeks to review and merge the storage policy code, 
and that will allow us to include storage policies in a Swift release soon 
after our Icehouse contribution.

If you have any questions, please let me know.

--John








Re: [openstack-dev] [Swift] Swift storage policies in Icehouse

2014-03-25 Thread John Dickinson

On Mar 25, 2014, at 12:11 PM, Kurt Griffiths  
wrote:

>> As a quick review, storage policies allow objects to be stored across a
>> particular subset of hardware...and with a particular storage algorithm
> 
> Having worked on backup software in the past, this sounds interesting. :D
> 
> What is the scope of these policies? Are they per-object, per-container,
> and/or per-project? Or do they not work like that?

A storage policy is set on a container when it is created. So, for example, 
create your "photos" container with a global 3-replica scheme and also a 
"thumbnails-west" with 2 replicas in your West Coast region and 
"thumbnails-east" with 2 replicas in your East Coast region. Then make a 
container for "server-backups" that is erasure coded and stored in the EU. And 
all of that is stored and managed in the same logical Swift cluster.

So you can see that this feature set gives deployers and users a ton of 
flexibility.

How will storage policies be exposed? I'm glad you asked... A deployer (ie the 
cluster operator) will configure the storage policies (including which is the 
default). At that point, an end-user can create containers with a particular 
storage policy and start saving objects there. What about automatically moving 
data between storage policies? This is something that is explicitly not in 
scope for this set of work. Maybe someday, but in the meantime, I fully expect 
the Swift ecosystem to create and support tools to handle data lifecycle 
management. For now, that doesn't belong in Swift.
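
To make the container-level exposure concrete, choosing a policy is just one
extra header on the container PUT. A rough sketch (endpoint, token, and the
"silver" policy name are placeholders):

    import requests

    storage_url = "https://swift.example.com/v1/AUTH_test"
    token = "..."

    # pin the new container to the "silver" policy; objects PUT into it will be
    # placed according to that policy's ring and replica count
    resp = requests.put(
        storage_url + "/thumbnails-west",
        headers={"X-Auth-Token": token, "X-Storage-Policy": "silver"},
    )
    resp.raise_for_status()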

--John






[openstack-dev] [Swift] new core members

2014-03-27 Thread John Dickinson
I'm pleased to announce that Alistair Coles and Christian Schwede have both 
joined the Swift core team. They have both been very active in the Swift 
community, contributing both code and reviews. Both Alistair and Christian work 
with large-scale production Swift clusters, and I'm happy to have them on the 
core team.

Alistair and Christian, thanks for your work in the community and for taking on 
more responsibility. I'm glad we'll be able to continue to work together on 
Swift.

--John








[openstack-dev] [Swift] PTL candidacy

2014-03-31 Thread John Dickinson
I'm announcing my candidacy for Swift PTL. I've been involved with Swift 
specifically and OpenStack in general since the beginning. I'd like to continue 
to serve in the role as Swift PTL.

Swift has grown quite a bit over the last 4 years. In this past year, we've 
added major new features and refactored significant areas of the code to improve 
efficiency and extensibility. We've added support for global clusters. We've 
significantly refactored replication to be more efficient. We've cleaned up the 
volume interface to make it much simpler to extend. Swift is a great storage 
engine, powering some of the world's largest storage clouds. Let's keep making 
it better.

Going forward, I'd like to address four things in Swift in the next year:

1) Finish storage policies, including erasure code support. In my opinion, this 
is the biggest feature in Swift since it was open-sourced, and I'm really 
excited by the opportunities it enables. I sent an email earlier this month 
about our current plan on getting storage policies finished up: 
http://lists.openstack.org/pipermail/openstack-dev/2014-March/030937.html

2) Focus on performance and efficiency rather than on a "feature train". We've 
started on several things here, including the "ssync" replication improvements 
and some profiling middleware. I'd also like to see improvement in replication 
bandwidth efficiency (especially with global clusters), time-to-first-byte 
latency improvement, better support of very dense storage, and support higher 
concurrency with fewer resources.

3) Better QA. Swift has always been a very stable system. We need to ensure 
that it remains stable, especially as new feature go in and other parts of the 
codebase change. Examples here include better functional test coverage, testing 
against real clusters, more end-to-end testing of workflows, running probetests 
automatically against submitted changes, and tracking performance metrics 
against patches. 

4) Better community efficiency. As the community has grown, we need to get 
better at offering feedback channels from production deployments, especially 
from non-developers. We need to get better at reducing the patch review time 
and encouraging newer developers to jump in and offer patches.

These are the things that I want to focus on as PTL in the next 6 to 12 months. 
My vision for Swift is that everyone will use it every day, even if they don't 
realize it. Together we can make it happen.

--John








[openstack-dev] TC candidacy

2014-04-16 Thread John Dickinson
I'd like to announce my Technical Committee candidacy.

I've been involved with OpenStack since it began. I'm one of the original 
authors of Swift, and I have been serving as PTL since the position was 
established. I'm employed by SwiftStack, a company building management and 
integration tools for Swift clusters.

OpenStack is a large (and growing) set of projects, unified under a common open 
governance model. The important part about OpenStack is not the pieces that 
make up the "stack"; it's the concept of "open". We, as OpenStack collectively, 
are a set of cooperating projects striving for excellence on our own, but 
stronger when put together.

As OpenStack moves forward, I believe the most important challenges the TC 
faces are:

- Ensuring high-quality, functioning, scalable code is delivered to users.
- Working with the Board of Directors to establish conditions around OpenStack 
trademark usage.
- Ensuring the long-term success of OpenStack by lowering code contribution 
barriers, incorporating feedback from non-developers, and promoting OpenStack 
to new users.

As a member of the TC, I will work to ensure these challenges are addressed. I 
appreciate your vote for me in the TC election.

--John







Re: [openstack-dev] Voting for the TC Election is now open

2014-04-18 Thread John Dickinson
I put together links to every candidate's nomination email at 
https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/candidates

--John




On Apr 18, 2014, at 8:29 AM, Anita Kuno  wrote:

> On 04/18/2014 11:22 AM, Anita Kuno wrote:
>> Voting for the TC Election is now open and will remain open until after
>> 1300 utc April 24 2014.
>> 
>> We are electing 7 positions from a pool of 17 candidates[0].
>> 
>> Follow the instructions that are available when you vote. If you are
>> confused and need more instruction, close the webpage without submitting
>> your vote and then email myself and Tristan[1]. Your ballot will still
>> be enabled to vote until the election is closed, as long as you don't
>> submit your ballot before your close your webpage.
>> 
>> You are eligible to vote if are a Foundation individual member[2] that
>> also has committed to one of the official programs projects[3] over the
>> Havana-Icehouse timeframe (April 4, 2013 06:00 UTC to April 4, 2014
>> 05:59 UTC) Or if you are one of the extra-atcs.[4]
>> 
>> What to do if you don't see the email and have a commit in at least one
>> of the official programs projects[3]:
>> * check the trash of your gerrit Preferred Email address[5], in
>> case it went into trash or spam
>> * wait a bit and check again, in case your email server is a bit slow
>> * find the sha of at least one commit from the program project
>> repos[3] and email me and Tristan[1]. If we can confirm that you are
>> entitled to vote, we will add you to the voters list and you will be
>> emailed a ballot.
>> 
>> Our democratic process is important to the health of OpenStack, please
>> exercise your right to vote.
>> 
>> Candidate statements/platforms can be found linked to Candidate names on
>> this page:
>> https://wiki.openstack.org/wiki/TC_Elections_April_2014#Candidates
>> 
>> Happy voting,
>> Anita. (anteaya)
>> 
>> [0] https://wiki.openstack.org/wiki/TC_Elections_April_2014#Candidates
>> [1] Anita: anteaya at anteaya dot info Tristan: tristan dot cacqueray at
>> enovance dot com
>> [2] http://www.openstack.org/community/members/
>> [3]
>> http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml?id=april-2014-elections
>> [4]
>> http://git.openstack.org/cgit/openstack/governance/tree/reference/extra-atcs
>> [5] Sign into review.openstack.org: Go to Settings > Contact
>> Information. Look at the email listed as your Preferred Email. That is
>> where the ballot has been sent.
>> 
> I have to extend an apology to Flavio Percoco, whose name is spelled
> incorrectly on both the wikipage and on the ballot.
> 
> I can't change the ballot now and will leave the wikipage with the
> spelling mistake so it is consistent to voters, but I do want folks to
> know I am aware of the mistake now, and I do apologize to Flavio for this.
> 
> I'm sorry,
> Anita.
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] Voting for the TC Election is now open

2014-04-18 Thread John Dickinson
I had completely missed the links Anita had put together. Use her list (ie the 
officially updated one).

https://wiki.openstack.org/wiki/TC_Elections_April_2014#Candidates


Sorry about that, Anita!

--John



On Apr 18, 2014, at 11:33 AM, John Dickinson  wrote:

> I put together links to every candidate's nomination email at 
> https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/candidates
> 
> --John
> 
> 
> 
> 
> On Apr 18, 2014, at 8:29 AM, Anita Kuno  wrote:
> 
>> On 04/18/2014 11:22 AM, Anita Kuno wrote:
>>> Voting for the TC Election is now open and will remain open until after
>>> 1300 utc April 24 2014.
>>> 
>>> We are electing 7 positions from a pool of 17 candidates[0].
>>> 
>>> Follow the instructions that are available when you vote. If you are
>>> confused and need more instruction, close the webpage without submitting
>>> your vote and then email myself and Tristan[1]. Your ballot will still
>>> be enabled to vote until the election is closed, as long as you don't
>>> submit your ballot before your close your webpage.
>>> 
>>> You are eligible to vote if are a Foundation individual member[2] that
>>> also has committed to one of the official programs projects[3] over the
>>> Havana-Icehouse timeframe (April 4, 2013 06:00 UTC to April 4, 2014
>>> 05:59 UTC) Or if you are one of the extra-atcs.[4]
>>> 
>>> What to do if you don't see the email and have a commit in at least one
>>> of the official programs projects[3]:
>>>* check the trash of your gerrit Preferred Email address[5], in
>>> case it went into trash or spam
>>>* wait a bit and check again, in case your email server is a bit slow
>>>* find the sha of at least one commit from the program project
>>> repos[3] and email me and Tristan[1]. If we can confirm that you are
>>> entitled to vote, we will add you to the voters list and you will be
>>> emailed a ballot.
>>> 
>>> Our democratic process is important to the health of OpenStack, please
>>> exercise your right to vote.
>>> 
>>> Candidate statements/platforms can be found linked to Candidate names on
>>> this page:
>>> https://wiki.openstack.org/wiki/TC_Elections_April_2014#Candidates
>>> 
>>> Happy voting,
>>> Anita. (anteaya)
>>> 
>>> [0] https://wiki.openstack.org/wiki/TC_Elections_April_2014#Candidates
>>> [1] Anita: anteaya at anteaya dot info Tristan: tristan dot cacqueray at
>>> enovance dot com
>>> [2] http://www.openstack.org/community/members/
>>> [3]
>>> http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml?id=april-2014-elections
>>> [4]
>>> http://git.openstack.org/cgit/openstack/governance/tree/reference/extra-atcs
>>> [5] Sign into review.openstack.org: Go to Settings > Contact
>>> Information. Look at the email listed as your Preferred Email. That is
>>> where the ballot has been sent.
>>> 
>> I have to extend an apology to Flavio Percoco, whose name is spelled
>> incorrectly on both the wikipage and on the ballot.
>> 
>> I can't change the ballot now and will leave the wikipage with the
>> spelling mistake so it is consistent to voters, but I do want folks to
>> know I am aware of the mistake now, and I do apologize to Flavio for this.
>> 
>> I'm sorry,
>> Anita.
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





[openstack-dev] [Swift] Initial summit schedule published

2014-04-22 Thread John Dickinson
I've pushed the initial schedule for the Swift summit sessions to 
http://junodesignsummit.sched.org.

There were 21 sessions proposed for 8 available slots, so most could not be 
selected. (Thanks Cinder for taking one of our proposed ones!)

Swift's sessions are split over two days. When selecting and scheduling the 
sessions, I tried to choose sessions that would apply to a wide range of people 
and fit with the overall flow of the summit. Thursday afternoon scheduled 
sessions are "big picture" sessions about principles, community pain points, 
major features, and ops concerns. Friday morning's sessions are around testing, 
efficiency, and Python3.

I think we're going to have a great summit, and I'm sorry if your session was 
not selected. We simply don't have the time for all of the great proposed 
sessions. However, we do have the Swift project pod (read: a table just for us) 
in room B202, and I hope to have ongoing conversations there all week long, 
especially on the session topics that weren't chosen for the summit sessions.


--John








Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-19 Thread John Dickinson

On Sep 19, 2014, at 5:46 AM, John Griffith  wrote:

> 
> 
> On Fri, Sep 19, 2014 at 4:33 AM, Thierry Carrez  wrote:
> Vishvananda Ishaya wrote:
> > Great writeup. I think there are some great concrete suggestions here.
> >
> > A couple more:
> >
> > 1. I think we need a better name for Layer #1 that actually represents what 
> > the goal of it is: Infrastructure Services?
> > 2. We need to be be open to having other Layer #1s within the community. We 
> > should allow for similar collaborations and group focus to grow up as well. 
> > Storage Services? Platform Services? Computation Services?
> 
> I think that would nullify most of the benefits of Monty's proposal. If
> we keep on blessing "themes" or special groups, we'll soon be back at
> step 0, with projects banging on the TC door to become special, and
> companies not allocating resources to anything that's not special.
> 
> --
> Thierry Carrez (ttx)
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ​Great stuff, mixed on point 2 raised by Vish but honestly I think that's 
> something that could evolve over time, but I looked at that differently as in 
> Cinder, SWIFT and some day Manilla live under a Storage Services umbrella, 
> and ideally at some point there's some convergence there.
> 
> Anyway, I don't want to start a rat-hole on that, it's kind of irrelevant 
> right now.  Bottom line is I think the direction and initial ideas in Monty's 
> post are what a lot of us have been thinking about and looking for.  I'm in!!​


I too am generally supportive of the concept, but I do want to think about the 
vishy/tts/jgriffith points above.

It's interesting that the proposed "layer #1" stuff is very very similar to 
what was originally in OpenStack at the very beginning as Nova. Over time, many 
of these pieces of functionality required for compute were split out (block, 
networking, image, etc), and I think that's why so many people look at these 
pieces and say (rightly), "of course these are required all together and 
tightly coupled". That's how these projects started, and we still see evidence 
of their birth today.

For that reason, I do agree with Vish that there should be similar 
collaborations for other things. While the "layer #1" (or "compute") use case 
is very common, we can all see that it's not the only one that people are 
solving with OpenStack parts. And this is reflected in the products built and 
sold by companies, too. Some sell one subset of openstack stuff as product X 
and maybe a different subset as product Y. (The most common example here is 
"compute" vs "object storage".) This reality has led to a lot of the angst 
around definitions since there is effort to define openstack all as one thing 
(or worse, as a "base" thing that others are defined as built upon).

I propose that we can get the benefits of Monty's proposal and implement all of 
his concrete suggestions (which are fantastic) by slightly adjusting our usage 
of the program/project concepts.

I had originally hoped that the "program" concept would have been a little 
higher-level instead of effectively spelling "project" as "program". I'd love 
to see a hierarchy of openstack->program->project/team->repos. Right now, we 
have added the "program" layer but have effectively mapped it 1:1 to the 
project. For example, we used to have a few repos in the Swift project managed 
by the same group of people, and now we have a few repos in the "object 
storage" program, all managed by the same group of people. And every time 
something is added to OpenStack, it's added as a new program, effectively 
putting us exactly where we were before we called it a program with the same 
governance and management scaling problems.

Today, I'd group existing OpenStack projects into programs as follows:

Compute: nova, sahara, ironic
Storage: swift, cinder, glance, trove
Network: neutron, designate, zaqar
Deployment/management: heat, triple-o, horizon, ceilometer
Identity: keystone, barbican
Support (not user facing): infra, docs, tempest, devstack, oslo
(potentially even) stackforge: lots

I like two things about this. First, it allows people to compose a solution. 
Second, it allows us as a community to think more about the strategic/product 
things. For example, it lets us as a community say, "We think storage is 
important. How are we solving it today? What gaps do we have in that? How can 
the various storage things we have work together better?"

Thierry makes the point that more "themes" will nullify the benefits of Monty's 
proposal. I agree, if we allow the explosion of 
projects/programs/themes to continue. The benefit of what Monty is proposing is 
that it identifies and focusses on a particular use case (deploy a VM, add a 
volume, get an IP, configure a domain) so that we know we have solved it well. 
I think that focus is 

[openstack-dev] [Swift] PTL candidacy

2014-09-25 Thread John Dickinson
I'm announcing my candidacy for Swift PTL. I've been involved with Swift 
specifically and OpenStack in general since the beginning. I'd like to continue 
to serve in the role as Swift PTL.

In my last candidacy email[1], I talked about several things I wanted to focus 
on in Swift.

1) Storage policies. This is done, and we're currently building on it to 
implement erasure code storage in Swift.

2) Focus on performance and efficiency. This is an ongoing thing that is never 
"done", but we have made improvements here, and there are some other 
interesting things in-progress right now (like zero-copy data paths).

3) Better QA. We've added a third-party test cluster to the CI system, but I'd 
like to improve this further, for example by adding our internal integration 
tests (probe tests) to our QA pipeline.

4) Better community efficiency. Again, we've made some small improvements here, 
but we have a ways to go yet. Our review backlog is large, and it takes a while 
for patches to land. We need to continue to improve community efficiency on 
these metrics.

Overall, I want to ensure that Swift continues to provide a stable and robust 
object storage engine. Focusing on the areas listed above will help us do that. 
We'll continue to build functionality that allows applications to rely on Swift 
to take over hard problems of storage so that apps can focus on adding their 
value without worrying about storage.

My vision for Swift is that everyone will use it every day, even if they don't 
realize it. Together we can make it happen.

--John

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-March/031450.html








Re: [openstack-dev] TC election by the numbers

2014-10-29 Thread John Dickinson

> On Oct 29, 2014, at 3:32 PM, Eoghan Glynn  wrote:
> 
> 
> Folks,
> 
> I haven't seen the customary number-crunching on the recent TC election,
> so I quickly ran the numbers myself.
> 
> Voter Turnout
> =
> 
> The turnout rate continues to decline, in this case from 29.7% to 26.7%.
> 
> Here's how the participation rates have shaped up since the first TC2.0
> election:
> 
> Election | Electorate | Voted | Turnout | Change
> 
> 10/2013  | 1106   | 342   | 30.9%   | -8.0% 
> 04/2014  | 1510   | 448   | 29.7%   | -4.1%
> 10/2014  | 1892   | 506   | 26.7%   | -9.9%


The overall percentage of the electorate voting is declining, but the absolute 
number of voters has increased. In fact, the electorate has grown more than the 
turnout has declined: between the last two elections the electorate grew by 
about 25% (1510 to 1892), while the number of ballots cast still rose from 448 
to 506.



> 
> Partisan Voting
> ===
> 
> As per the usual analysis done by ttx, the number of ballots that
> strictly preferred candidates from an individual company (with
> multiple candidates) above all others:
> 
> HP   ahead in 30 ballots (5.93%)
> RHAT ahead in 18 ballots (3.56%)
> RAX  ahead in 8  ballots (1.58%)
> 
> The top 6 pairings strictly preferred above all others were:
> 
> 35 voters (6.92%) preferred Monty Taylor & Doug Hellmann  (HP/HP)
> 34 voters (6.72%) preferred Monty Taylor & Sean Dague (HP/HP)
> 26 voters (5.14%) preferred Anne Gentle & Monty Taylor(RAX/HP)
> 21 voters (4.15%) preferred Russell Bryant & Sean Dague   (RHAT/HP)
> 21 voters (4.15%) preferred Russell Bryant & Eoghan Glynn (RHAT/RHAT)
> 16 voters (3.16%) preferred Doug Hellmann & Sean Dague(HP/HP)
> 
> Conclusion
> ==
> 
> The rate of potentially partisan voting didn't diverge significantly
> from the norms we've seen in previous elections.
> 
> The continuing decline in the turnout rate is a concern however, as
> the small-scale changes tried (blogging on TC activity, standardized
> questions in the TC nomination mails) have not arrested the fall-off
> in participation.
> 
> Cheers,
> Eoghan
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] URLs

2014-11-17 Thread John Dickinson
Adam,

I'm not sure why you've marked Swift URLs as having their own scheme. It's true 
that Swift doesn't have the concept of "admin" URLs, but in general if Swift 
were to assume some URL path prefix, I'm not sure why it wouldn't work (for 
some definition of work).

Other issues might be the fact that you'd have the extra complexity of a broker 
layer for all the OpenStack components, i.e. instead of clients accessing Swift 
directly and the operator scaling that, the new scheme would require the 
operator to manage and scale the broker layer and also the Swift layer.

For the record, Swift would need to be updated since it assumes it's the only 
service running on the domain at that port (Swift does a lot of path parsing).
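
As a rough illustration (the paths here are hypothetical), Swift expects the
first path segment to be the API version, so any broker-added prefix would have
to be stripped, or Swift taught about it, before the request is parsed:

    # roughly what Swift does with every request path today
    # (see swift.common.utils.split_path)
    path = "/v1/AUTH_test/photos/cat.jpg"
    version, account, container, obj = path.lstrip("/").split("/", 3)

    # with a broker-style prefix, the first segment is no longer the version
    prefixed = "/object-store/v1/AUTH_test/photos/cat.jpg"
    first = prefixed.lstrip("/").split("/", 1)[0]  # "object-store", not "v1"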

--John






> On Nov 11, 2014, at 2:35 PM, Adam Young  wrote:
> 
> Recent recurrence of the "Why is everything on its own port" question 
> triggered my desire to take this pattern and put it to rest.
> 
> My suggestion, from a while ago, was to have a naming scheme that deconflicts 
> putting all of the services onto a single server, on port 443.
> 
> I've removed a lot of the cruft, but not added in entries for all the new 
> *aaS services.
> 
> 
> https://wiki.openstack.org/wiki/URLs
> 
> Please add in anything that should be part of OpenStack.  Let's make this a 
> reality, and remove the  specific ports.
> 
> If you are worried about debugging, look into rpdb.  It is a valuable tool 
> for debugging a mod_wsgi based application.
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





[openstack-dev] [Swift] Swift 2.0 release candidate

2014-06-22 Thread John Dickinson
Through extensive work from the entirety of the Swift dev team over the past 
year, storage policies have landed in Swift. Last Friday, we merged commit 
1feaf6e2, which brings storage policies into master.

I especially would like to publicly thank Paul Luse (Intel), Clay Gerrard 
(SwiftStack), and Sam Merritt (SwiftStack) for providing such tremendous focus, 
dedication, awesome ideas, and leadership to getting this feature designed, 
written, and merged.

For those that don't know, storage policies are a way to take the global 
footprint of your Swift cluster and choose what subset of hardware to store the 
data on and how to store it across that subset of hardware. For example, a 
single Swift cluster can now have data segmented by geographic region or 
performance tier. Additionally, each policy can have a different replication 
factor, which enables high replication for local access (e.g. one copy in every 
PoP) or low replication for some data (e.g. image thumbnails or transcoded 
video).

Storage policies provide the necessary building block to allow non-replicated 
storage (i.e. erasure codes) in Swift, a feature that we are continuing to 
develop.

Full documentation, including design, usage, and upgrade notes, can be found at 
http://swift.openstack.org/overview_policies.html.
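
For reference, the policy definitions live in swift.conf. A minimal sketch with
two policies (the names are only examples; each additional policy also needs
its own object ring, e.g. object-1.ring.gz):

    [storage-policy:0]
    name = gold
    default = yes

    [storage-policy:1]
    name = silver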

With this commit landing, we have tagged Swift 2.0.0.rc1. We are now having a 
two week QA period to allow community members to play with it in their labs. At 
the end of the RC period, we'll formally release Swift 2.0. The current target 
for this is Thursday July 3 (although I realize that discovered issues and the 
US holiday may put this at risk).

In addition to participating in the OpenStack integrated release cycle, Swift 
makes semantically-versioned releases throughout the year. Because of the scope 
of the storage policies changes and because you cannot safely downgrade your 
cluster after configuring a second policy (i.e. you'd lose access to that data 
if you go to a pre-storage-policies release), we have chosen to bump the major 
version number to 2.

Note that deployers can still upgrade to this version with no client downtime 
and still safely downgrade until multiple policies are configured.

The full CHANGELOG for the 2.0 release is at 
https://github.com/openstack/swift/blob/master/CHANGELOG.

If you are using Swift, please read over the docs, download the tarball from 
http://tarballs.openstack.org/swift/swift-2.0.0.rc1.tar.gz, and let us know 
what you find.

--John






signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] eventual consistency and lost updates

2014-06-27 Thread John Dickinson
Great questions. I'll answer inline.


On Jun 27, 2014, at 2:54 AM, Eoghan Glynn  wrote:

> 
> Hi Swiftsters!
> 
> A basic question about swift eventual- versus strong-consistency.
> The context is potentially using swift as a store for metric data.
> 
> Say datapoints {p1, p2, ..., p_i} are stored in a swift object.
> Presumably these data are replicated across the object ring in
> an eventually consistent way.

Correct, but let me provide some more depth for those watching at home.

When an object is written into Swift, multiple replicas are durably written 
across different failure domains before the response returns a success to the 
client. For a three-replica cluster, three writes are attempted, and success is 
only returned once two or three durable writes (i.e. flushed all the way to disk) 
have succeeded.

Swift chooses those three replica locations (i.e. drives) based on the ring. 
However, if there is a failure condition in the cluster, one or more of those 
three locations may not be available. In that case, Swift deterministically 
chooses other drives in the cluster until it finds three that are available for 
writing. Then the write happens and success or failure is returned to the 
client depending on how many writes were successful.

Consider the following example:

Time T0:

PUT objectA (content hash H1), and it gets written to drives 1, 2, and 3.

Time T1:

The server that drive 3 is plugged in to fails

Time T2:

PUT objectA (content hash H2), and now it gets written to drives 1, 2, and 4

Time T3:

Access to the server that drive 3 is plugged in to is restored.

At this point we have the following distribution of data:

drive1: content H2
drive2: content H2
drive3: content H1
drive4: content H2

Time T4

GET for objectA -> Swift will (by default) choose a random one of drives 1, 2, 
and 3 and return that data to the client.

You can see how it's possible for Swift to return the old copy of objectA (a 
1/3 chance).

Swift's replication process is continuously running in the background on Swift 
servers. On each of the servers that the drives 1-4 are respectively plugged in 
to, when objectA is found locally, it will query drives 1-3 to ensure than the 
right data is in the right place.

Replication will ensure that drive4's objectA with content H2 is removed (once 
it's known that the right data is on each of drives 1-3), and replication will 
also ensure that drive3's objectA with content H1 is replaced with objectA with 
content H2.

The conflict resolution here is last write wins.


(Note that the above example is only for the failure scenario where a server 
has failed, is busy, or is otherwise incapable of responding to requests. Other 
failure scenarios like a drive failure--which is more common than server 
failure--have slightly different, but similar, behaviors. This example is 
provided as a simple one for explanation purposes.)

Back in January 2013, I gave a full talk on this and other failure scenarios in 
Swift. The recording is at https://www.youtube.com/watch?v=mKZ7kDDPSIU
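
To make those odds concrete, here is a tiny stand-alone sketch of the T0-T4
state above (illustrative only, not Swift code):

    import random
    from collections import Counter

    # State at time T4, before replication has caught up.
    drives = {1: 'H2', 2: 'H2', 3: 'H1', 4: 'H2'}
    primaries = [1, 2, 3]  # the ring's primary locations for objectA

    # Each GET reads from one primary, so roughly 1 in 3 reads is stale.
    reads = Counter(drives[random.choice(primaries)] for _ in range(30000))
    print(reads)  # roughly {'H2': 20000, 'H1': 10000}

    # Replication converges every primary on the newest write (last write wins)
    # and removes the extra copy from the handoff drive 4.
    for d in primaries:
        drives[d] = 'H2'
    del drives[4]
    print(drives)  # {1: 'H2', 2: 'H2', 3: 'H2'}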

> 
> Say I want to read back this blob and update it with a further
> datapoint {p_(i+1)}.
> 
> But eventual consistency tells me that I may end up folding my
> new datapoint into an older version of the object:
> 
>  {p1, p2, ..., p_(i-1)}
> 
> instead of the expected:
> 
>  {p1, p2, ..., p_i}
> 
> i.e. the classic lost update problem.
> 
> So my basic questions are:
> 
> * is read-then-update an acknowledged anti-pattern for swift?

Yes. You cannot guarantee that nothing is happening in between the read and 
write, i.e. you can't perform two Swift API calls in one atomic transaction.

> 
> * if so, what are the recommended strategies for managing non-
>   static data in swift?
> 
>   - e.g. write latest to a fresh object each time, do an async
> delete on the old
> 
>   - store a checksum elsewhere and detect the stale-read case

Both of these are quite acceptable, and it depends on what you are able to do 
on the client side. Yes, the checksum for the content of the object is stored 
with and returned with the object (in the ETag header).

As an extra thing to check out, Netflix is able to work around the eventual 
consistency in S3 by using a consistent DB for tracking what should be where. 
Their project to do this is called S3mper, and it's quite possible to use the 
same strategy for Swift.
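
To make the first strategy concrete, here is a rough sketch with
python-swiftclient (the auth values, container name, and timestamp-suffix
naming are just illustrative):

    import time
    from swiftclient import client

    conn = client.Connection(authurl='http://saio:8080/auth/v1.0',
                             user='test:tester', key='testing')

    SERIES = 'metrics/cpu_util/instance-42'

    def append_point(points):
        # Never overwrite: each update lands in a brand new object, so a stale
        # read of an older object can't silently clobber newer datapoints. Old
        # versions can be deleted asynchronously later.
        name = '%s/%017.6f' % (SERIES, time.time())
        conn.put_object('timeseries', name, contents=repr(points).encode('utf-8'))
        return name

    def latest_points():
        # Listings come back sorted by name; the fixed-width timestamp suffix
        # means the last entry is the newest version of the series.
        _headers, listing = conn.get_container('timeseries', prefix=SERIES + '/')
        if not listing:
            return None
        _headers, body = conn.get_object('timeseries', listing[-1]['name'])
        return body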

> 
> * are the object metadata (ETag, X-Timestamp specifically)
>   actually replicated in the same eventually-consistent way as
>   the object content?
> 
>   (the PUT code[1] suggests data & metadata are stored together,
>but just wanted to be sure I'm reading that correctly)

The metadata for the object is stored with the object. Metadata and object 
content are _not_ replicated separately. They are always kept together.

> 
> Thanks,
> Eoghan
> 
> [1] 
> https://github.com/openstack/swift/blob/master/swift/obj/server.py#L441-455
> 
> ___
> OpenStack-dev mailing lis

Re: [openstack-dev] [swift] add checking daemons existence in Healthcheck middleware

2014-07-07 Thread John Dickinson
In general, you're right. It's pretty important to know what's going on in the 
cluster. However, the checks for these background daemons shouldn't be done in 
the wsgi servers. Generally, we've stayed away from a lot of process monitoring 
in the Swift core. That is, Swift already works around failures, and there is 
already existing ops tooling to monitor if a process is alive.

Check out the swift-recon tool that's included with Swift. It already includes 
some checks like the replication cycle time. While it's not a direct "is this 
process alive" monitoring tool, it does give good information about the health 
of the cluster.

If you've got some other ideas on checks to add to recon or ways to make it 
better or perhaps even some different ways to integrate monitoring systems, let 
us know!

--John



On Jul 7, 2014, at 7:33 PM, Osanai, Hisashi  
wrote:

> 
> Hi,
> 
> The current Healthcheck middleware provides the functionality of monitoring 
> servers such as the Proxy Server, Object Server, Container Server and Account 
> Server. The middleware checks whether each server can handle request/response. 
> My idea for enhancing this middleware is to also check for the existence of 
> daemons such as the replicators, updaters and auditors, in addition to the 
> current checks. 
> If we realize this, the scope of Health would be extended from 
> "a server can handle requests" to "a server and its daemons can work 
> appropriately".
> 
> http://docs.openstack.org/developer/swift/icehouse/middleware.html?highlight=health#healthcheck
> 
> What do you think?
> 
> Best Regards,
> Hisashi Osanai
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Swift 2.0.0 has been released and includes support for storage policies

2014-07-08 Thread John Dickinson
I'm happy to announce that Swift 2.0.0 has been officially released! You can 
get the tarball at http://tarballs.openstack.org/swift/swift-2.0.0.tar.gz.

This release is a huge milestone in the history of Swift. This release includes 
storage policies, a set of features I've often said is the most important thing 
to happen to Swift since it was open-sourced.

What are storage policies, and why are they so significant?

Storage policies allow you to set up your cluster to exactly match your use 
case. From a technical perspective, storage policies allow you to have more 
than one object ring in your cluster. Practically, this means that you can 
do some very important things. First, given the global set of hardware for your 
Swift deployment, you can choose which set of hardware your data is stored on. 
For example, this could be performance-based, like with flash vs spinning 
drives, or geography-based, like Europe vs North America.

Second, once you've chosen the subset of hardware for your data, storage 
policies allow you to choose how the data is stored across that set of 
hardware. You can choose the replication factor independently for each policy. 
For example, you can have a "reduced redundancy tier", a "3x replication tier", 
and also a tier with a replica in every geographic region in the world. 
Combined with the ability to choose the set of hardware, this gives you a huge 
amount of control over how your data is stored.
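
As a hypothetical illustration of what that looks like end to end: the policy
names below are invented, policies themselves are defined by the deployer in
swift.conf, and a client picks one per container with the X-Storage-Policy
header (see the storage policies docs linked below):

    # Deployer side (swift.conf), shown here as comments:
    #   [storage-policy:0]
    #   name = gold
    #   default = yes
    #
    #   [storage-policy:1]
    #   name = reduced-redundancy
    #
    # Client side, selecting the policy when the container is created:
    from swiftclient import client

    conn = client.Connection(authurl='http://saio:8080/auth/v1.0',
                             user='test:tester', key='testing')
    conn.put_container('thumbnails',
                       headers={'X-Storage-Policy': 'reduced-redundancy'})
    conn.put_object('thumbnails', 'cat-small.jpg', contents=b'...jpeg bytes...')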

Looking forward, storage policies are the foundation upon which we are building 
support for non-replicated storage. With this release, we are able to focus on 
building support for an erasure code storage policy, thus giving the ability to 
more efficiently store large data sets.

For more information, start with the developer docs for storage policies at 
http://swift.openstack.org/overview_policies.html.

I gave a talk on storage policies at the Atlanta summit last April. 
https://www.youtube.com/watch?v=mLC1qasklQo

The full changelog for this release is at 
https://github.com/openstack/swift/blob/master/CHANGELOG.

--John






signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone/swift] role-based access cotrol in swift

2014-07-10 Thread John Dickinson
There are a couple of places to look to see the current dev effort in Swift 
around ACLs.

In no particular order:

* Supporting a service token in Swift https://review.openstack.org/#/c/105228/
* Adding policy engine support to Swift https://review.openstack.org/#/c/89568/
* Fixing ACLs to work with Keystone v3+ https://review.openstack.org/#/c/86430/

Some of the above may be in line with what you're looking for.

--John

On Jul 10, 2014, at 8:17 PM, Osanai, Hisashi  
wrote:

> 
> Hi, 
> 
> I looked for info about role-based access control in swift because 
> I would like to prohibit PUT operations to containers like create 
> containers and set ACLs.
> 
> Other services like Nova, Cinder have "policy.json" file but Swift doesn't.
> And I found out the following info.
> - Swift ACL's migration
> - Centralized policy management
> 
> Do you have detail info for above?
> 
> http://dolphm.com/openstack-juno-design-summit-outcomes-for-keystone/
> ---
> Migrate Swift ACL's from a highly flexible Tenant ID/Name basis, which worked 
> reasonably well against Identity API v2, to strictly be based on v3 Project 
> IDs. The driving requirement here is that Project Names are no longer 
> globally unique in v3, as they're only unique within a top-level domain.
> ---
> Centralized policy management
> Keystone currently provides an unused /v3/policies API that can be used to 
> centralize policy blob management across OpenStack.
> 
> 
> Best Regards,
> Hisashi Osanai
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] statsd client opening a new socket everytime a stats is updated

2014-07-15 Thread John Dickinson
We've been chatting in IRC, but for the mailing list archives, yes! we'd love 
to see patches to improve this.
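
For anyone following along at home, the shape of the improvement is roughly
this (a sketch only, not Swift's actual StatsdClient, and the metric name is
made up):

    import socket

    class ReusableStatsdClient(object):
        """Open one UDP socket up front and reuse it for every sample."""

        def __init__(self, host='127.0.0.1', port=8125, prefix='swift.proxy'):
            self.addr = (host, port)
            self.prefix = prefix
            # Created once here, instead of once per emitted sample.
            self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

        def increment(self, metric, n=1):
            payload = '%s.%s:%d|c' % (self.prefix, metric, n)
            # UDP sendto is connectionless, so reusing the socket is cheap.
            self.sock.sendto(payload.encode('utf-8'), self.addr)

    stats = ReusableStatsdClient()
    stats.increment('object.GET.200')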

--John


On Jul 15, 2014, at 1:22 PM, Tatiana Al-Chueyr Martins 
 wrote:

> Hello!
> 
> I'm new to both Swift and OpenStack, I hope you can help me.
> 
> Considering statsd is enabled, each time something is logged, a new socket is 
> being opened.
> 
> At least, this is what I understood from the implementation and usage of 
> StatsdClient at:
> - swift/common/utils.py
> - swift/common/middleware/proxy_logging.py
> 
> If this analysis is correct: is there any special reason for this behavior 
> (open a new socket each request)?
> 
> We could significantly improve performance reusing the same socket.
> 
> Would you be interested in a patch in this regard?
> 
> Best,
> -- 
> Tatiana Al-Chueyr
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] python-swiftclient 2.2.0 release

2014-07-22 Thread John Dickinson
I'm happy to announce that python-swiftclient 2.2.0 has been released.

This release has the following significant features:

* Ability to set a storage policy on container and object upload
* Ability to generate Swift temporary URLs from the CLI and SDK
* Added context-sensitive help to the CLI commands

This release is available on PyPI at 
https://pypi.python.org/pypi/python-swiftclient/2.2.0
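
For the SDK side, a minimal sketch (assuming the generate_temp_url helper in
swiftclient.utils; the host, path, and key below are placeholders, and the key
must already be set on the account as X-Account-Meta-Temp-URL-Key):

    from swiftclient.utils import generate_temp_url

    # /v1/<account>/<container>/<object>, valid for one hour, GET only.
    path = '/v1/AUTH_demo/photos/cat.jpg'
    signed_path = generate_temp_url(path, 3600, 'my-temp-url-key', 'GET')
    print('https://swift.example.com' + signed_path)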

--John






signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Use FQDN in Ring files instead of "ip"

2014-07-23 Thread John Dickinson
Using hostnames instead of IPs is, as mentioned above, something under 
consideration in that patch.

However, note that until now, we've intentionally kept it as just IP addresses 
since using hostnames adds a lot of operational complexity and burden. I 
realize that hostnames may be preferred in some cases, but this places a very 
large strain on DNS systems. So basically, it's a question of do we add the 
feature, knowing that most people who use it will in fact be making their lives 
more difficult, or do we keep it out, knowing that we won't be serving those 
who actually require the feature.

--John



On Jul 23, 2014, at 2:29 AM, Matsuda, Kenichiro 
 wrote:

> Hi,
> 
> Thank you for the info.
> I was able to understand that hostname support is under developing.
> 
> Best Regards,
> Kenichiro Matsuda.
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Use FQDN in Ring files instead of "ip"

2014-07-23 Thread John Dickinson
While you're correct that a chassis replacement can avoid data rebalancing in the 
FQDN case if you update DNS, you can actually do the same today with the 
IP-based system. You can use the set_info command of swift-ring-builder to 
change the IP for existing devices and this avoids any rebalancing in the 
cluster.

--John



On Jul 23, 2014, at 6:27 PM, Osanai, Hisashi  
wrote:

> 
> I would like to discuss this topic more deeply.
> 
> I understand we need to prepare DNS systems and add a lot of operational 
> complexity and burden to use the DNS system when we use FQDN in Ring files.
> 
> However I think most datacenter have DNS systems to manage network resources 
> such as ip addresses and hostnames and it is centralized management.
> And you already pointed out that we can get benefit to use FQDN in Ring files 
> with some scenarios. 
> 
> A scenarios: Corruption of a storage node
> 
> IP case:
> One storage node is corrupted while Swift uses IPs in Ring files. An operator 
> removes the node from the Swift system using the ring-builder command, keeping 
> the node for further investigation. Then the operator adds a new storage node 
> with a different IP address. In this case Swift rebalances all objects.
> 
> FQDN case:
> One storage node is corrupted while Swift uses FQDNs in Ring files. An operator 
> prepares a new storage node with a different IP address, then updates the DNS 
> entry with that IP address. In this case Swift only copies the objects related 
> to that node.
> 
> If the above understanding is true, it would be better to have the ability to 
> use FQDNs in Ring files in addition to IP addresses. What do you think?
> 
> On Thursday, July 24, 2014 12:55 AM, John Dickinson wrote:
> 
>> However, note that until now, we've intentionally kept it as just IP
>> addresses since using hostnames adds a lot of operational complexity and
>> burden. I realize that hostnames may be preferred in some cases, but this
>> places a very large strain on DNS systems. So basically, it's a question
>> of do we add the feature, knowing that most people who use it will in
>> fact be making their lives more difficult, or do we keep it out, knowing
>> that we won't be serving those who actually require the feature.
> 
> Best Regards,
> Hisashi Osanai
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Use FQDN in Ring files instead of "ip"

2014-07-23 Thread John Dickinson
Oh I totally agree with what you are saying. A DNS change may be lower cost 
than running Swift config/management commands. At the very least, ops already 
know how to do DNS updates, regardless of its "cost", whereas they have to learn 
how to do Swift management.

I was simply adding clarity to the trickiness of the situation. As I said 
originally, it's a balance of offering a feature that has a known cost (DNS 
lookups in a large cluster) vs not offering it and potentially making some 
management more difficult. I don't think either solution is all that great, but 
in the absence of a decision, we've so-far defaulted to "less code has less 
bugs" and not yet written or merged it.

--John






On Jul 23, 2014, at 10:07 PM, Osanai, Hisashi  
wrote:

> 
> Thank you for the quick response.
> 
> On Thursday, July 24, 2014 12:51 PM, John Dickinson wrote:
> 
>> you can actually do the same today
>> with the IP-based system. You can use the set_info command of
>> swift-ring-builder to change the IP for existing devices and this avoids
>> any rebalancing in the cluster.
> 
> Thanks for the info. 
> I will check the set_info command of swift-ring-builder.
> 
> My understanding now is 
> - in the FQDN case, an operator has to do DNS related operation. (no whole 
> rebalancing)
> - in the IP case, an operator has to execute swift's command. (no whole 
> rebalancing)
> 
> I think that the point of this discussion is "swift's independency in case of 
> failure" 
> and "adding a lot of operational complexity and burden".
> 
> I think that the recovery procedure in the FQDN case is a common one, so it is 
> better to have the ability to use FQDNs in addition to IP addresses.
> What do you think of this?
> 
> +--+--+---+
> |  | In the FQDN case | In the IP case|
> +--+--+---+
> |Swift's independency  |completely independent|rely on DNS systems|
> +--+--+---+
> |Operational complexity| (1)  | (2)   |
> |(recovery process)| simple   | a bit complex |
> +--+--+---+
> |Operational complexity| DNS and Swift| Swift only|
> |(necessary skills)|  |   |
> +--+--+---+
> 
> (1) in the FQDN case, change DNS info for the node. (no swift related 
> operation)
> (2) in the IP case, execute the swift-ring-builder command on a node then 
> copy it to 
>all related nodes.
> 
> Best Regards,
> Hisashi Osanai
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on the patch test failure rate and moving forward

2014-07-24 Thread John Dickinson

On Jul 24, 2014, at 3:25 PM, Sean Dague  wrote:

> On 07/24/2014 06:15 PM, Angus Salkeld wrote:
>> On Wed, 2014-07-23 at 14:39 -0700, James E. Blair wrote:
>>> OpenStack has a substantial CI system that is core to its development
>>> process.  The goals of the system are to facilitate merging good code,
>>> prevent regressions, and ensure that there is at least one configuration
>>> of upstream OpenStack that we know works as a whole.  The "project
>>> gating" technique that we use is effective at preventing many kinds of
>>> regressions from landing, however more subtle, non-deterministic bugs
>>> can still get through, and these are the bugs that are currently
>>> plaguing developers with seemingly random test failures.
>>> 
>>> Most of these bugs are not failures of the test system; they are real
>>> bugs.  Many of them have even been in OpenStack for a long time, but are
>>> only becoming visible now due to improvements in our tests.  That's not
>>> much help to developers whose patches are being hit with negative test
>>> results from unrelated failures.  We need to find a way to address the
>>> non-deterministic bugs that are lurking in OpenStack without making it
>>> easier for new bugs to creep in.
>>> 
>>> The CI system and project infrastructure are not static.  They have
>>> evolved with the project to get to where they are today, and the
>>> challenge now is to continue to evolve them to address the problems
>>> we're seeing now.  The QA and Infrastructure teams recently hosted a
>>> sprint where we discussed some of these issues in depth.  This post from
>>> Sean Dague goes into a bit of the background: [1].  The rest of this
>>> email outlines the medium and long-term changes we would like to make to
>>> address these problems.
>>> 
>>> [1] https://dague.net/2014/07/22/openstack-failures/
>>> 
>>> ==Things we're already doing==
>>> 
>>> The elastic-recheck tool[2] is used to identify "random" failures in
>>> test runs.  It tries to match failures to known bugs using signatures
>>> created from log messages.  It helps developers prioritize bugs by how
>>> frequently they manifest as test failures.  It also collects information
>>> on unclassified errors -- we can see how many (and which) test runs
>>> failed for an unknown reason and our overall progress on finding
>>> fingerprints for random failures.
>>> 
>>> [2] http://status.openstack.org/elastic-recheck/
>>> 
>>> We added a feature to Zuul that lets us manually "promote" changes to
>>> the top of the Gate pipeline.  When the QA team identifies a change that
>>> fixes a bug that is affecting overall gate stability, we can move that
>>> change to the top of the queue so that it may merge more quickly.
>>> 
>>> We added the clean check facility in reaction to the January gate break
>>> down. While it does mean that any individual patch might see more tests
>>> run on it, it's now largely kept the gate queue at a countable number of
>>> hours, instead of regularly growing to more than a work day in
>>> length. It also means that a developer can Approve a code merge before
>>> tests have returned, and not ruin it for everyone else if there turned
>>> out to be a bug that the tests could catch.
>>> 
>>> ==Future changes==
>>> 
>>> ===Communication===
>>> We used to be better at communicating about the CI system.  As it and
>>> the project grew, we incrementally added to our institutional knowledge,
>>> but we haven't been good about maintaining that information in a form
>>> that new or existing contributors can consume to understand what's going
>>> on and why.
>>> 
>>> We have started on a major effort in that direction that we call the
>>> "infra-manual" project -- it's designed to be a comprehensive "user
>>> manual" for the project infrastructure, including the CI process.  Even
>>> before that project is complete, we will write a document that
>>> summarizes the CI system and ensure it is included in new developer
>>> documentation and linked to from test results.
>>> 
>>> There are also a number of ways for people to get involved in the CI
>>> system, whether focused on Infrastructure or QA, but it is not always
>>> clear how to do so.  We will improve our documentation to highlight how
>>> to contribute.
>>> 
>>> ===Fixing Faster===
>>> 
>>> We introduce bugs to OpenStack at some constant rate, which piles up
>>> over time. Our systems currently treat all changes as equally risky and
>>> important to the health of the system, which makes landing code changes
>>> to fix key bugs slow when we're at a high reset rate. We've got a manual
>>> process of promoting changes today to get around this, but that's
>>> actually quite costly in people time, and takes getting all the right
>>> people together at once to promote changes. You can see a number of the
>>> changes we promoted during the gate storm in June [3], and it was no
>>> small number of fixes to get us back to a reasonably passing gate. We
>>> think that optimizing this system will

[openstack-dev] Paul Luse added to Swift Core

2014-07-30 Thread John Dickinson
I'm happy to share that Paul Luse has joined Swift's core reviewer team. Paul has 
provided a ton of leadership, insight, and code during the past year. He was 
instrumental in getting storage policies written, and he's actively involved in 
the current work around erasure code support.

Welcome, Paul!



--John








signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] specs.openstack.org is live

2014-08-04 Thread John Dickinson
Can you please add Swift as well?

--John



On Aug 4, 2014, at 9:54 AM, Andreas Jaeger  wrote:

> Great, I've updated my patch to add neutron and nova to the index page.
> 
> For now read the specs using:
> http://specs.openstack.org/openstack/neutron-specs/
> http://specs.openstack.org/openstack/nova-specs/
> 
> Andreas
> -- 
> Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
>  SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>   GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
>GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] Protecting the access to memcache

2013-09-15 Thread John Dickinson
Alex,

You raise two issues, so let me address them independently.

First, you discuss protecting memcache from unauthorized access. Yes, this is 
something that every deployer of memcache (whether in conjunction with Swift or 
not) needs to consider. Unchecked access to memcache can allow information 
leaks and potentially cache poisoning. Memcache servers should be restricted in 
access to trusted clients. You describe one such way of doing so, and deployers 
will need to evaluate your proposed method for themselves. I'd love to see 
you release the code around your SLIM implementation for Swift, but I do not 
think it should be in the Swift codebase.

As to the code organization question, swift.common.memcached is a performant 
memcache client (note there are a couple of outstanding patches to improve this 
in various ways). swift.common.middleware.memcache is the cache middleware 
loaded by a Swift WSGI app, and it uses the library module for accessing the 
memcache pool. The memcache client is used by other middleware (eg ratelimit), 
so that's why it's in swift/common. The swift/common/middleware directory is 
for the modules that are available for a WSGI pipeline. (Note that 
swift.common.middleware.acl may be misplaced by this definition, but it's only 
used by tempauth.) I think the placement is right the way it is, and I don't 
think anything should move, especially since there are potentially third-party 
modules using these.

--John




On Sep 15, 2013, at 3:03 PM, Alexandra Shulman-Peleg  
wrote:

> Hi,
> 
> Following the declaration regarding the memcache vulnerability below, I 
> would like to raise a discussion regarding its protection. If we could 
> limit/control the access to memcache it would be easier to confine the 
> damage in case of an attack. For example, in the attached paper we added a 
> gatekeeper to ensure that  the keys/values stored in the memcached of 
> Swift are accessed only by the tenant/domain to which they belong (e.g., a 
> user from domain A can not access the cached data of users belonging to 
> domain B), 
> 
> I suggest to provide a generic mechanism allowing insertion of various 
> memcache protections as dedicated middleware modules. Practically, 
> although in Swift we have a module memcache.py which is part of 
> middleware, the module memcached.py is located under "common". What is the 
> motivation for this code organization? Can we move the module memcached.py 
> to be under "middleware" in Swift? 
> 
> Thank you very much,
> Alex.
> 
> 
> 
> --
> Alexandra Shulman-Peleg, PhD
> Storage Research, Cloud Platforms Dept.
> IBM Haifa Research Lab
> Tel: +972-3-7689530 | Fax: +972-3-7689545
> 
> 
> From:   Thierry Carrez 
> To: openstack-annou...@lists.openstack.org, 
> openst...@lists.openstack.org, 
> Date:   11/09/2013 06:52 PM
> Subject:[Openstack] [OSSA 2013-025] Token revocation failure using 
> Keystone memcache/KVS backends (CVE-2013-4294)
> 
> 
> 
> Signed PGP part
> OpenStack Security Advisory: 2013-025
> CVE: CVE-2013-4294
> Date: September 11, 2013
> Title: Token revocation failure using Keystone memcache/KVS backends
> Reporter: Kieran Spear (University of Melbourne)
> Products: Keystone
> Affects: Folsom, Grizzly
> 
> Description:
> Kieran Spear from the University of Melbourne reported a vulnerability
> in Keystone memcache and KVS token backends. The PKI token revocation
> lists stored the entire token instead of the token ID, triggering
> comparison failures, ultimately resulting in revoked PKI tokens still
> being considered valid. Only Folsom and Grizzly Keystone setups making
> use of PKI tokens with the memcache or KVS token backends are affected.
> Havana setups, setups using UUID tokens, or setups using PKI tokens with
> the SQL token backend are all unaffected.
> 
> Grizzly fix:
> https://review.openstack.org/#/c/46080/
> 
> Folsom fix:
> https://review.openstack.org/#/c/46079/
> 
> References:
> http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-4294
> https://bugs.launchpad.net/keystone/+bug/1202952
> 
> Regards,
> 
> - -- 
> Thierry Carrez
> OpenStack Vulnerability Management Team
> 
> 
> ___
> Mailing list: 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openst...@lists.openstack.org
> Unsubscribe : 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] PTL Candidacy

2013-09-24 Thread John Dickinson
I would like to nominate myself for PTL of Swift.

I've been involved in OpenStack Swift since it started, and I'd like
to share a few of the thins in-progress and where I want to see Swift
go.

Swift has always been a world-class storage system, proven at scale
and production-ready from day one. In the past few years Swift has
been deployed in public and private storage clouds all over the world,
and it is in use at the largest companies in the world.

My goal for Swift is that everyone will use Swift, every day, even if
they don't realize it. And taking a look at where Swift is being used
today, we're well on our way to that goal. We'll continue to move
towards Swift being everywhere as Swift grows to solve more real-world
use cases.

Right now, there is work being done in Swift that will give deployers
a very high degree of flexibility in how they can store data. We're
working on implementing storage policies in Swift. These storage
policies give deployers the ability to choose:

(a) what subset of hardware the data lives on
(b) how the data is stored across that hardware
(c) how communication with an actual storage volume happens.

Supporting (a) allows for storage tiers and isolated storage hardware.
Supporting (b) allows for different replication or non-replication
schemes. Supporting (c) allows for specific optimizations for
particular filesystems or storage hardware. Combined, it's even
feasible to have a Swift cluster take advantage of other storage
systems as a storage policy (imagine an S3 storage policy).

As PTL, I want to help coordinate this work and see it to completion.
Many people from many different companies are working on it, in
addition to the normal day-to-day activity in Swift.

I'm excited by the future of Swift, and would be honored to continue
to serve as Swift PTL.

--John




signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] python-swiftclient 1.7.0 released

2013-09-30 Thread John Dickinson
I tagged and released a new version of python-swiftclient this morning, 
specifically to get around some painful dependency issues (especially on older 
systems). Find the new version at 
https://pypi.python.org/pypi/python-swiftclient/1.7.0 on pypi. Below is the tag 
message:

This release is prompted by a dependency change that should avoid
painful dependency management issues. The main issue was the
requirement for d2to1. By removing it we resolve some issues related
to upstream changes in distribute and pip, especially on older systems
(see http://lists.openstack.org/pipermail/openstack-dev/2013-July/011425.html).


--John





signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] TC candidacy

2013-10-10 Thread John Dickinson
I'd like to announce my candidacy to the OpenStack Technical Committee.

As the Swift PTL, I've been involved in the TC for a while (and the PPB before 
that and the POC before that). I've seen OpenStack grow from the very beginning, and 
I'm very proud to be a part of it.

As we all know, OpenStack has grown tremendously since it started. Open source, 
design, development, and community give people the ability to have ownership of 
their data. These core principles are why I think OpenStack will continue to 
change how people build and use technology for many years to come.

Of course, principles and ideas don't run in production. Code does. Therefore I 
think that a very important role of the TC is to ensure that all of the 
OpenStack projects do work, work well together, and promote the vision of 
OpenStack to provide ubiquitous cloud infrastructure. 

I believe that OpenStack is a unified project that provides independent 
OpenStack solutions to hard problems.

I believe that OpenStack needs to clearly define its scope so that it can stay 
focused on fulfilling its mission.

I believe that OpenStack is good for both public and private clouds, and that 
the private cloud use case (i.e. deploying OpenStack internally for internal 
users only) will be the dominant deployment pattern for OpenStack.

If elected, I will continue to promote these goals for the TC. Thank you for 
your support.

--John





signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] Porting swiftclient to Python 3?

2013-10-12 Thread John Dickinson
Co-reviewing each other's patches and discussing changes in #openstack-swift 
would be good ways to ensure that you are working in the same direction.

--John


On Oct 12, 2013, at 3:49 PM, Brian Curtin  wrote:

> Hi,
> 
> I just had a look at the python-swiftclient reviews in Gerrit and noticed 
> that Kui Shi and I are working on the same stuff, but I'm guessing Kui didn't 
> see that I had proposed a number of Python 3 changes from a few weeks ago. 
> Now that there are reviews and a web of dependent branches being maintained 
> by both of us, how should this proceed?
> 
> I don't want to waste anyone's time with two sets of branches to develop and 
> two sets of patches to review.
> 
> Brian
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] team meeting

2013-10-14 Thread John Dickinson
This week's team meeting is cancelled since most of the active contributors 
will all be together in Austin for the Swift hackathon during the regularly 
scheduled meeting time.

Regular bi-weekly meetings will resume on October 30 at 1900UTC in 
#openstack-meeting

--John





signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Horizon PTL candidacy

2013-11-10 Thread John Dickinson
A random off-the-top-of-my-head use case would be to subscribe to events from 
creating or changing objects in a particular Swift account or container. This 
would allow much more efficient listings in Horizon for active containers (and 
may also be consumed by other listeners too).

--John



On Nov 10, 2013, at 2:45 PM, Dolph Mathews  wrote:

> 
> On Fri, Nov 8, 2013 at 2:38 AM, Matthias Runge  wrote:
> 
> Those are my primary targets I'd like to see addressed in Horizon during
> the cycle. Another thing I'd like to see addressed is the lack of
> listening to a notification service. That's probably an integration
> point with Marconi, and it might be possible, this won't make it until
> Icehouse.
> 
> This bit caught me off guard - what's the use case here? Is there a link to a 
> blueprint? Thanks!
>  
> 
> Matthias
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> -- 
> 
> -Dolph
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][Horizon] The future or pagination

2013-11-13 Thread John Dickinson
Swift uses marker+limit for pagination when listing containers or objects (with 
additional support for prefix, delimiters, and end markers). This is done 
because the total size of the listing may be rather large, and going to a 
correct "page" based on an offset gets expensive and doesn't allow for 
repeatable queries.

Pagination implies some sort of ordering, and I'm guessing (assuming+hoping) 
that your listings are based around something more meaningful that an 
incrementing id. By itself, "metric number 32592" doesn't mean anything, and 
listings like "go to metric 4200 and give me the next 768 items" doesn't 
tell the consumer anything and probably isn't even a very repeatable query. 
Therefore, using a marker+prefix+limit style pagination system is very useful 
(eg "give me up to 1000 metrics that start with 'nova/instance_id/42/'"). Also, 
end_marker queries are very nice (half-closed ranges).

One thing I would suggest (and I hope we change in Swift whenever we update the 
API version) is that you don't promise to return the full page in a response. 
Instead, you should return a "no matches" or "end of listing" token. This 
allows you the flexibility to return responses quickly without consuming too 
many resources on the server side. Clients can then continue to iterate over 
subsequent pages as they are needed.
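
As an illustration of the consumer side of that contract, here is a rough
marker-based paging loop with python-swiftclient (the auth values, container,
and prefix are made up):

    from swiftclient import client

    conn = client.Connection(authurl='http://saio:8080/auth/v1.0',
                             user='test:tester', key='testing')

    def iter_objects(container, prefix=None, page_size=1000):
        marker = ''
        while True:
            _headers, page = conn.get_container(container, marker=marker,
                                                prefix=prefix, limit=page_size)
            if not page:                   # the "end of listing" signal
                return
            for entry in page:
                yield entry
            marker = page[-1]['name']      # resume after the last item seen

    for obj in iter_objects('metrics', prefix='nova/instance_id/42/'):
        print(obj['name'], obj['bytes'])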

Something else that I'd like to see in Swift (it was almost added once) is the 
ability to reverse the order of the listings so you can iterate backwards over 
pages.

--John




On Nov 13, 2013, at 2:58 AM, Julien Danjou  wrote:

> Hi,
> 
> We've been discussing and working for a while on support for pagination
> on our API v2 in Ceilometer. There's a large amount that already been
> done, but that is now stalled because we are not sure about the
> consensus.
> 
> There's mainly two approaches around pagination as far as I know, one
> being using limit/offset and the other one being marker based. As of
> today, we have no clue of which one we should pick, in the case we would
> have a technical choice doable between these two.
> 
> I've added the Horizon tag in the subject because I think it may concern
> Horizon, since it shall be someday in the future one of the main
> consumer of the Ceilometer API.
> 
> I'd be also happy to learn what other projects do in this regard, and
> what has been said and discussed during the summit.
> 
> To a certain extend, we Ceilometer would also be happy to find common
> technical ground on this to some extend so _maybe_ we can generalise
> this into WSME itself for consumption from other projects.
> 
> Cheers,
> -- 
> Julien Danjou
> ;; Free Software hacker ; independent consultant
> ;; http://julien.danjou.info
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][Horizon] The future or pagination

2013-11-13 Thread John Dickinson
marker + end_marker, such that the result is in [marker, end_marker), and a 
reverse parameter allows you to build your UI with next and prev links.

Also, limit+offset has the distinct disadvantage of skipping or repeating 
entries while going to the next or previous page if the listing is being 
changed while it is being paginated.

--John




On Nov 13, 2013, at 9:51 AM, Lyle, David  wrote:

> From a purely UI perspective, the limit/offset is a lot more useful.  Then we 
> can show links to previous page, next page and display the current page 
> number.
> 
> Past mailing list conversations have indicated that limit/offset can be less 
> efficient on the server side.  The marker/limit approach works for paginating 
> UI side, just in a more primitive way.  With that approach, we are generally 
> limited to a next page link only.
> 
> David 
> 
>> -----Original Message-
>> From: John Dickinson [mailto:m...@not.mn]
>> Sent: Wednesday, November 13, 2013 10:09 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Ceilometer][Horizon] The future or
>> pagination
>> 
>> Swift uses marker+limit for pagination when listing containers or objects
>> (with additional support for prefix, delimiters, and end markers). This is
>> done because the total size of the listing may be rather large, and going to 
>> a
>> correct "page" based on an offset gets expensive and doesn't allow for
>> repeatable queries.
>> 
>> Pagination implies some sort of ordering, and I'm guessing
>> (assuming+hoping) that your listings are based around something more
>> meaningful that an incrementing id. By itself, "metric number 32592"
>> doesn't mean anything, and listings like "go to metric 4200 and give me
>> the next 768 items" doesn't tell the consumer anything and probably isn't
>> even a very repeatable query. Therefore, using a marker+prefix+limit style
>> pagination system is very useful (eg "give me up to 1000 metrics that start
>> with 'nova/instance_id/42/'"). Also, end_marker queries are very nice (half-
>> closed ranges).
>> 
>> One thing I would suggest (and I hope we change in Swift whenever we
>> update the API version) is that you don't promise to return the full page in 
>> a
>> response. Instead, you should return a "no matches" or "end of listing"
>> token. This allows you the flexibility to return responses quickly without
>> consuming too many resources on the server side. Clients can then continue
>> to iterate over subsequent pages as they are needed.
>> 
>> Something else that I'd like to see in Swift (it was almost added once) is 
>> the
>> ability to reverse the order of the listings so you can iterate backwards 
>> over
>> pages.
>> 
>> --John
>> 
>> 
>> 
>> 
>> On Nov 13, 2013, at 2:58 AM, Julien Danjou  wrote:
>> 
>>> Hi,
>>> 
>>> We've been discussing and working for a while on support for
>>> pagination on our API v2 in Ceilometer. There's a large amount that
>>> already been done, but that is now stalled because we are not sure
>>> about the consensus.
>>> 
>>> There's mainly two approaches around pagination as far as I know, one
>>> being using limit/offset and the other one being marker based. As of
>>> today, we have no clue of which one we should pick, in the case we
>>> would have a technical choice doable between these two.
>>> 
>>> I've added the Horizon tag in the subject because I think it may
>>> concern Horizon, since it shall be someday in the future one of the
>>> main consumer of the Ceilometer API.
>>> 
>>> I'd be also happy to learn what other projects do in this regard, and
>>> what has been said and discussed during the summit.
>>> 
>>> To a certain extend, we Ceilometer would also be happy to find common
>>> technical ground on this to some extend so _maybe_ we can generalise
>>> this into WSME itself for consumption from other projects.
>>> 
>>> Cheers,
>>> --
>>> Julien Danjou
>>> ;; Free Software hacker ; independent consultant ;;
>>> http://julien.danjou.info
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] added Peter Portante to Swift-core

2013-06-17 Thread John Dickinson
Peter Portante (from Red Hat) has been very active with Swift patches, reviews, 
and community discussion during the last seven months and has been added as a 
member of Swift core.

Peter has been a pleasure to work with, and I'm happy to add him. Welcome, 
Peter!

--John



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] 1.9.0 release candidate

2013-06-26 Thread John Dickinson
The RC for Swift 1.9.0 has been cut, and barring any issues discovered, will be 
finalized next Tuesday.

Please take some time to try it out and run through your tests.

Changelog: https://github.com/openstack/swift/blob/milestone-proposed/CHANGELOG
Launchpad milestone page: https://launchpad.net/swift/+milestone/1.9.0
Code on the milestone-proposed branch: 
https://github.com/openstack/swift/tree/milestone-proposed
Direct download: 
http://tarballs.openstack.org/swift/swift-milestone-proposed.tar.gz

Major new features in this release include full global clusters support, 
improvements to disk performance, and conf.d-style config directory support.

This is a great release that is the result of contributions from a lot of 
people. Thanks to everyone involved.

--John




smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] RFC: Basic definition of OpenStack Programs and first batch

2013-07-01 Thread John Dickinson

On Jul 1, 2013, at 8:03 AM, Mark McLoughlin  wrote:

> Hey Thierry
> 
> I actually didn't notice this go by last week, the other thread got all
> the attention.
> 
> On Wed, 2013-06-26 at 14:51 +0200, Thierry Carrez wrote:
>> Hi everyone,
>> 
>> Yesterday at the TC meeting we agreed that as a first step to
>> establishing programs, we should have a basic definition of them and
>> establish the first (undisputed) ones.
>> 
>> We can solve harder questions (like if "horizontal efforts" should be a
>> program or a separate thing, or where each current official repo exactly
>> falls) as a second step.
>> 
>> So here is my proposal for step 1:
>> 
>> """
>> 'OpenStack Programs' are efforts which are essential to the completion
>> of our mission, but which do not produce deliverables included in the
>> common release of OpenStack 'integrated' projects every 6 months, like
>> Projects do.
> 
> Hmm, this wasn't what I understood our direction to be.
> 
> And maybe this highlights a subtle difference in thinking - as I see it,
> Oslo absolutely is producing release deliverables. For example, in what
> way was oslo.config 1.1.0 *not* a part of the grizzly release?
> 
> The idea that documentation isn't a part of our releases seems a bit off
> too.
> 
> This distinction feels like it's based on an extremely constrained
> definition of what constitutes a release.
> 
>> Programs can create any code repository and produce any deliverable
>> they deem necessary to achieve their goals.
>> 
>> Programs are placed under the oversight of the Technical Committee, and
>> contributing to one of their code repositories grants you ATC status.
>> 
>> Current efforts or teams which want to be recognized as an 'OpenStack
>> Program' should place a request to the Technical Committee, including a
>> clear mission statement describing how they help the OpenStack general
>> mission and how that effort is essential to the completion of our
>> mission. Programs do not need to go through an Incubation period.
>> 
>> The initial Programs are 'Documentation', 'Infrastructure', 'QA' and
>> 'Oslo'. Those programs should retroactively submit a mission statement
>> and initial lead designation, if they don't have one already.
>> """
>> 
>> This motion is expected to be discussed and voted at the next TC
>> meeting, so please comment on this thread.
> 
> It's funny, I think we're all fine with the idea of Programs but can't
> quite explain what distinguishes a program from a project, etc. and
> we're reaching for things like "programs don't produce release
> deliverables" or "projects don't have multiple code repositories". I'm
> nervous of picking a distinguishing characteristic that will
> artificially limit what Programs can do.

I think the concern I have with the current discussions is that the definition 
is becoming so specific that we'll someday have such an over categorization of 
things to the point of "repos created on a Tuesday".

What is the end result here, and what are we trying to promote? I think we want 
to give ATC status to people who contribute to code that is managed as part of 
the OpenStack organization. In that sense, everything (ie nova, swift, neutron, 
cinder, etc) is a program, right? What happens if an existing "project" wants 
to deliver an independent code library? "Just put it in oslo!" may be the 
current answer, but moving a bunch of unrelated deliverables into oslo causes 
review issues (a separate review community) and may slow development. (that's 
actually not the argument I want to have in this email thread)

I'd suggest that everything we have today are "openstack programs". Many have 
multiple deliverables (eg a server side and a client side). As a specific 
example (only because it's what I'm most familiar with, not that this is 
something Swift is considering), if Swift wanted to separately deliver the swob 
library (replacement for WebOb) or our logging stuff, then they simply become 
another deliverable under the "Swift program".

I completely support (going back years now) the idea of having CI, QA, Docs, 
etc as additional "top-level" openstack things.

To reiterate, what are we trying to accomplish with further classification of 
code into programs and projects? What is lacking in the current structure that 
further classification (ie a new name) gets us?

--John




smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] RFC: Basic definition of OpenStack Programs and first batch

2013-07-01 Thread John Dickinson
I really like the Solution 2 proposal.


On Jul 1, 2013, at 12:32 PM, Thierry Carrez  wrote:

> Thierry Carrez wrote:
> 
> Solution (2) is to make everything a "Program". Some have a goal of
> producing an 'integrated' piece and those must go through incubation.
> Something like:
> 
> """
> 'OpenStack Programs' are efforts which are essential to the completion
> of our mission. Programs can create any code repository and produce any
> deliverable they deem necessary to achieve their goals.
> 
> Programs are placed under the oversight of the Technical Committee, and
> contributing to one of their code repositories grants you ATC status.
> 
> Current efforts or teams which want to be recognized as an 'OpenStack
> Program' should place a request to the Technical Committee, including a
> clear mission statement describing how they help the OpenStack general
> mission and how that effort is essential to the completion of our
> mission. Only programs which have a goal that includes the production of
> a server 'integrated' deliverable need to go through an Incubation period.
> 
> The initial Programs are 'Nova', 'Swift', 'Cinder', 'Neutron',
> 'Horizon', 'Glance', 'Keystone', 'Heat', 'Ceilometer', 'Documentation',
> 'Infrastructure', 'QA' and 'Oslo'. 'Trove' and 'Ironic' are in
> incubation. Those programs should retroactively submit a mission
> statement and initial lead designation, if they don't have one already.
> """
> 
> In that scenario, we could also name them "Official Teams"... because
> that's the central piece.
> 
> -- 
> Thierry Carrez (ttx)


smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] Swift 1.9.0 released: global clusters and more

2013-07-02 Thread John Dickinson
I'm pleased to announce that Swift 1.9.0 has been released. This has
been a great release with major features added, thanks to the combined
effort of 37 different contributors.

Full release notes: https://github.com/openstack/swift/blob/master/CHANGELOG
Download: https://launchpad.net/swift/havana/1.9.0

Feature Summary
===============

Full global clusters support


With this release, Swift fully supports global clusters. A single
Swift cluster can now be deployed across a wide geographic area (e.g. spanning an
ocean or continent) and still provide high durability and
availability. This feature has four major parts:

* Region tier for data placement
* Adjustable replica counts
* Separate replication network support
* Affinity on reads and writes
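
For the affinity piece, the knobs live in the proxy config; a hedged example
(the region values are illustrative, not recommendations):

    [app:proxy-server]
    use = egg:swift#proxy
    sorting_method = affinity
    # Prefer replicas in region 1 when reading; fall back to anything else.
    read_affinity = r1=100
    # Send initial writes to region 1 and let replication move copies later.
    write_affinity = r1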

Improvements in disk performance
--------------------------------

The object server can now be configured to use threadpools to increase
performance and smooth out latency on storage nodes. Also, many disk
operations were reordered to increase reliability and improve
performance. This work is a direct result of the design summit
sessions in Portland.
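
The threadpool behavior is controlled per object server. A minimal sketch
(the value here is illustrative, not a recommendation):

# object-server.conf
[app:object-server]
# I/O threads per disk; 0 keeps the previous single-threaded behavior
threads_per_disk = 4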

Support for config directories
------------------------------

Swift now supports conf.d style config directories. This allows config
snippets to be managed independently and composed into the full config
for a Swift process. For example, a deployer can have a config snippet
for each piece of proxy middleware.
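
A hypothetical layout (the file names are entirely up to the deployer):

/etc/swift/proxy-server.conf.d/
    00_base.conf        # [DEFAULT] and [pipeline:main]
    10_proxy.conf       # [app:proxy-server]
    20_tempurl.conf     # [filter:tempurl]

The snippets are combined into one effective config, so each filter or app
section can be managed on its own.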

Multiple TempURL keys
---------------------

The TempURL feature (temporary, signed URLs) now supports two signing
keys. This allows users to safely rotate keys without invalidating
existing signed URLs.
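
A rough sketch of how a client might take advantage of this during a key
rotation (Python 2 style; the account, container, and key values are made up):
sign new URLs with the second key while URLs signed with the first key keep
working, then retire the first key.

import hmac
from hashlib import sha1
from time import time

# the second key is set with the X-Account-Meta-Temp-URL-Key-2 header
key = 'new-secret-key'
method = 'GET'
expires = int(time() + 3600)
path = '/v1/AUTH_demo/container/object'
sig = hmac.new(key, '%s\n%s\n%s' % (method, expires, path), sha1).hexdigest()
print '%s?temp_url_sig=%s&temp_url_expires=%s' % (path, sig, expires)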

Other
-----

There's a ton of "other" stuff in this release including features,
security fixes, general polishing, and bug fixes. I encourage you to
check out the full release notes for more info
(https://github.com/openstack/swift/blob/master/CHANGELOG).

New Contributors
----------------

Twelve of the 37 total contributors are first-time contributors to
Swift. They are:

* Fabien Boucher (fabien.bouc...@enovance.com)
* Brian D. Burns (ios...@gmail.com)
* Alex Gaynor (alex.gay...@gmail.com)
* Edward Hope-Morley (opentas...@gmail.com)
* Matthieu Huin (m...@enovance.com)
* Shri Javadekar (shrin...@maginatics.com)
* Sergey Kraynev (skray...@mirantis.com)
* Dieter Plaetinck (die...@vimeo.com)
* Chuck Short (chuck.sh...@canonical.com)
* Dmitry Ukov (du...@mirantis.com)
* Vladimir Vechkanov (vvechka...@mirantis.com)
* niu-zglinux (niu.zgli...@gmail.com)

Thank you to everyone who contributed.

--John




smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] failure node muting not working

2013-07-03 Thread John Dickinson
Take a look at the proxy config, starting here: 
https://github.com/openstack/swift/blob/master/etc/proxy-server.conf-sample#L70

The error_suppression_interval and error_suppression_limit control the window 
you are looking for. With the default values, 10 errors in 60 seconds will 
prevent the proxy from using that particular storage node for another 60 
seconds.
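
For reference, a minimal sketch of those settings (these mirror the defaults
described above; tune them for your own cluster):

# proxy-server.conf
[app:proxy-server]
# stop using a node once it hits this many errors within the interval
error_suppression_limit = 10
# length of the error window in seconds; a suppressed node is also
# skipped for roughly this long
error_suppression_interval = 60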

--John



On Jul 2, 2013, at 8:57 PM, "Zhou, Yuan"  wrote:

> Hi lists,
>  
> We’re trying to evaluate the node failure performance in Swift.
> According to the docs, Swift should be able to mute failed nodes:
> 'if a storage node does not respond in a reasonable amount of time, the proxy 
> considers it to be unavailable and will not attempt to communicate with it 
> for a while.'
>  
> We did a simple test on a 5-node cluster:
> 1.   Use COSBench to keep downloading files from the cluster.
> 2.   Stop the networking on SN1; lots of 'connection timeout 
> 0.5s' errors occur in the proxy's log.
> 3.   Keep the workload running and wait for about 1 hour.
> 4.   The same errors still occur in the proxy, which means the node is not 
> muted, but we expected SN1 to be muted on the proxy side with no 
> 'connection timeout' errors in the proxy.
>  
> So is there any special work that needs to be done to use this feature?
>  
> Regards, -yuanz
>  
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] erasure codes, digging deeper

2013-07-17 Thread John Dickinson
Last week we wrote a blog post about introducing erasure codes into Swift. 
Today, I'm happy to share more technical details around this feature.

We've posted an overview of our design and some of the tradeoffs in the feature 
at 
http://swiftstack.com/blog/2013/07/17/erasure-codes-with-openstack-swift-digging-deeper/.

I've also posted some slides with a high-level design at 
http://d.not.mn/EC_swift_proxy_design.pdf

Here's the summary of the erasure code design:
* Erasure codes (vs replicas) will be set on a per-container basis
* Data will be erasure coded inline in the proxy server
* API clients will control when data is replicated or erasure coded
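
As a sketch of what the per-container choice could look like to an API client
(the header and policy names here are hypothetical; the final API had not been
settled at the time of writing):

# create a container whose objects will be erasure coded
curl -i -X PUT -H "X-Auth-Token: $TOKEN" \
     -H "X-Storage-Policy: ec_policy_1" \
     https://swift.example.com/v1/AUTH_test/ec_container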

So what's next? How do we get this done?

Of course, there's a lot of coding to be done. At a high level, and to tie 
everything together, I've updated Launchpad with several new blueprints. 
https://blueprints.launchpad.net/swift/+spec/swift-ec will keep track of the 
total work. More specifically, these are the areas that need to be worked on:
* Proxy server (https://blueprints.launchpad.net/swift/+spec/ec-proxy-work)
* EC Ring (https://blueprints.launchpad.net/swift/+spec/ec-ring)
* EC Reconstructor 
(https://blueprints.launchpad.net/swift/+spec/ec-reconstructor)
* EC Auditor (https://blueprints.launchpad.net/swift/+spec/ec-auditor)
* EC Stripe Auditor 
(https://blueprints.launchpad.net/swift/+spec/ec-stripe-auditor)
* EC library interface 
(https://blueprints.launchpad.net/swift/+spec/ec-library-interface)

Of course, as work progresses, there may be other areas too.

To facilitate the community dev work on this, I've done a couple of things.

First, there is now an upstream "feature/ec" branch for this work. The purpose 
of having a separate ec branch is because this is a long-running feature that 
does not have a lot of independent features (eg it doesn't make sense to merge 
an EC Reconstructor into Swift without the rest of the EC work). We will 
frequently merge master back into the ec branch so that we don't get too 
divergent. Here's how to start using this branch locally:

# go to the code and get the latest version
cd swift && git fetch --all
# checkout the upstream ec branch
git checkout origin/feature/ec
# create a local branch called ec to track the upstream ec branch
git branch ec origin/feature/ec && git checkout ec
# type, type, type
# this will push the review to gerrit for review (normal rules apply).
# on this ec branch, it will default to merge into the upstream feature/ec 
branch instead of master
git review

Second, I've also set up a trello board to keep track and discuss designs at 
https://trello.com/board/swift-erasure-codes/51e0814d4ee9022d2b002a2c. Why 
trello? Mostly because Launchpad isn't sufficient to have discussions around 
the design, and gerrit only supports discussion around a particular piece of 
code.

Lastly, I would encourage anyone interested in this effort to participate in 
the #openstack-swift IRC channel on freenode and to attend the every-other-week 
swift meetings in #openstack-meeting (the next meeting is July 24 at 1900UTC).

I'm really looking forward to this feature, and I'm excited to see members of 
the OpenStack community coming together to implement it.

--John

smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] erasure codes, digging deeper

2013-07-18 Thread John Dickinson
Check out the slides I linked. The plan is to enable an EC policy that is then 
set on a container. A cluster may have a replication policy and one or more EC 
policies. Then the user will be able to choose the policy for a particular 
container.

--John




On Jul 18, 2013, at 2:50 AM, Chmouel Boudjnah  wrote:

> On Thu, Jul 18, 2013 at 12:42 AM, John Dickinson  wrote:
>>* Erasure codes (vs replicas) will be set on a per-container basis
> 
> I was wondering if there was any reasons why it couldn't be as
> per-account basis as this would allow an operator to have different
> type of an account and different pricing (i.e: tiered storage).
> 
> Chmouel.



smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] erasure codes, digging deeper

2013-07-18 Thread John Dickinson
Are you talking about the parameters for EC or the fact that something is 
erasure coded vs replicated?

For the first, that's exactly what we're thinking: a deployer sets up one (or 
more) policies and calls them Alice, Bob, or whatever, and then the API client 
can set that on a particular container.

This allows users who know what they are doing (ie those who know the tradeoffs 
and their data characteristics) to make good choices. It also allows deployers 
who want to have an automatic policy to set one up to migrate data.

For example, a deployer may choose to run a migrator process that moved certain 
data from replicated to EC containers over time (and drops a manifest file in 
the replicated tier to point to the EC data so that the URL still works).

Like existing features in Swift (eg large objects), this gives users the 
ability to flexibly store their data with a nice interface yet still have the 
ability to get at some of the pokey bits underneath.

--John



On Jul 18, 2013, at 10:31 AM, Chuck Thier  wrote:

> I'm with Chmouel though.  It seems to me that EC policy should be chosen by 
> the provider and not the client.  For public storage clouds, I don't think 
> you can make the assumption that all users/clients will understand the 
> storage/latency tradeoffs and benefits.
> 
> 
> On Thu, Jul 18, 2013 at 8:11 AM, John Dickinson  wrote:
> Check out the slides I linked. The plan is to enable an EC policy that is 
> then set on a container. A cluster may have a replication policy and one or 
> more EC policies. Then the user will be able to choose the policy for a 
> particular container.
> 
> --John
> 
> 
> 
> 
> On Jul 18, 2013, at 2:50 AM, Chmouel Boudjnah  wrote:
> 
> > On Thu, Jul 18, 2013 at 12:42 AM, John Dickinson  wrote:
> >>* Erasure codes (vs replicas) will be set on a per-container basis
> >
> > I was wondering if there was any reasons why it couldn't be as
> > per-account basis as this would allow an operator to have different
> > type of an account and different pricing (i.e: tiered storage).
> >
> > Chmouel.
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] erasure codes, digging deeper

2013-07-18 Thread John Dickinson
Yes, and I'd imagine that the normal default would be for replicated data.

Moving the granularity from a container to account-based, as Chmouel and Chuck 
said, is interesting too.

--John

On Jul 18, 2013, at 11:32 AM, Christian Schwede  wrote:

> A solution to this might be to set the default policy as a configuration
> setting in the proxy. If you want a replicated swift cluster just allow
> this policy in the proxy and set it to default. The same for EC cluster,
> just set the allowed policy to EC. If you want both (and let your users
> decide which policy to use) simply configure a list of allowed policies
> with the first one in the list as the default policy in case they don't
> set a policy during container creation.
> 
> Am 18.07.13 20:15, schrieb Chuck Thier:
>> I think you are missing the point.  What I'm talking about is who
>> chooses what data is EC and what is not.  The point that I am trying to
>> make is that the operators of swift clusters should decide what data is
>> EC, not the clients/users.  How the data is stored should be totally
>> transparent to the user.
>> 
>> Now if we want to down the road offer user defined classes of storage
>> (like how S3 does reduced redundancy), I'm cool with that, just that it
>> should be orthogonal to the implementation of EC.
>> 
>> --
>> Chuck
>> 
>> 
>> On Thu, Jul 18, 2013 at 12:57 PM, John Dickinson > <mailto:m...@not.mn>> wrote:
>> 
>>Are you talking about the parameters for EC or the fact that
>>something is erasure coded vs replicated?
>> 
>>For the first, that's exactly what we're thinking: a deployer sets
>>up one (or more) policies and calls them Alice, Bob, or whatever,
>>and then the API client can set that on a particular container.
>> 
>>This allows users who know what they are doing (ie those who know
>>the tradeoffs and their data characteristics) to make good choices.
>>It also allows deployers who want to have an automatic policy to set
>>one up to migrate data.
>> 
>>For example, a deployer may choose to run a migrator process that
>>moved certain data from replicated to EC containers over time (and
>>drops a manifest file in the replicated tier to point to the EC data
>>so that the URL still works).
>> 
>>Like existing features in Swift (eg large objects), this gives users
>>the ability to flexibly store their data with a nice interface yet
>>still have the ability to get at some of the pokey bits underneath.
>> 
>>--John
>> 
>> 
>> 
>>On Jul 18, 2013, at 10:31 AM, Chuck Thier ><mailto:cth...@gmail.com>> wrote:
>> 
>>> I'm with Chmouel though.  It seems to me that EC policy should be
>>chosen by the provider and not the client.  For public storage
>>clouds, I don't think you can make the assumption that all
>>users/clients will understand the storage/latency tradeoffs and
>>benefits.
>>> 
>>> 
>>> On Thu, Jul 18, 2013 at 8:11 AM, John Dickinson ><mailto:m...@not.mn>> wrote:
>>> Check out the slides I linked. The plan is to enable an EC policy
>>that is then set on a container. A cluster may have a replication
>>policy and one or more EC policies. Then the user will be able to
>>choose the policy for a particular container.
>>> 
>>> --John
>>> 
>>> 
>>> 
>>> 
>>> On Jul 18, 2013, at 2:50 AM, Chmouel Boudjnah
>>mailto:chmo...@enovance.com>> wrote:
>>> 
>>>> On Thu, Jul 18, 2013 at 12:42 AM, John Dickinson ><mailto:m...@not.mn>> wrote:
>>>>>   * Erasure codes (vs replicas) will be set on a per-container
>>basis
>>>> 
>>>> I was wondering if there was any reasons why it couldn't be as
>>>> per-account basis as this would allow an operator to have different
>>>> type of an account and different pricing (i.e: tiered storage).
>>>> 
>>>> Chmouel.
>>> 
>>> 
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>><mailto:OpenStack-dev@lists.openstack.org>
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>>> 
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>><mailto:OpenStack-dev@l

Re: [openstack-dev] [Swift] erasure codes, digging deeper

2013-07-22 Thread John Dickinson

On Jul 22, 2013, at 9:34 AM, David Hadas  wrote:

> Hi, 
> 
> In Portland, we discussed a somewhat related issue of having multiple 
> replication levels in one Swift cluster. 
> It may be that a provider would not wish to expose the use of EC or the level 
> of replication used. For example a provider may offer a predefined set of 
> services such as "Gold", "Silver" and "Bronze", "Aluminum" which a user can 
> choose from. The provider may decide how each level is implemented (As a 
> silly example: Gold is 4 way replication, Silver is a 3 way replication, 
> Bronze is EC, Aluminum is single replica without EC). 
> 
> Does it make sense to consider EC as an implementation of a certain service 
> level (the same as for example the choice of the number of replicas)? 

yes, that's actually exactly what I'm thinking.

> 
> Now we are back to the question of what is the right unit in which we define 
> this 'level of service' ("Gold", "Silver"...).
> Should the level of service be defined when the account is created or should 
> we allow a user to state:
> "I would like to have a container with Gold to keep my work, Bronze to keep 
> my family pictures and videos and Aluminum to keep a copy of all my music 
> files".
> 
> If we choose to enable container service levels, we need to enable billing 
> per level of service such that a single account could be billed for the 
> actual use it has done per each level of service. Maybe we even need to have 
> all statistics gathered to be grouped by service level.
> I am not sure if we can escape that even with account service levels. 

Either on the account or container level, the billing number generator will 
need to correlate particular bytes with a particular service level. That would 
be in ceilometer, slogging, or whatever other people are using.


> 
> DH
> 
> On Thu, Jul 18, 2013 at 9:37 PM, John Dickinson  wrote:
> Yes, and I'd imagine that the normal default would be for replicated data.
> 
> Moving the granularity from a container to account-based, as Chmouel and 
> Chuck said, is interesting too.
> 
> --John
> 
> On Jul 18, 2013, at 11:32 AM, Christian Schwede  wrote:
> 
> > A solution to this might be to set the default policy as a configuration
> > setting in the proxy. If you want a replicated swift cluster just allow
> > this policy in the proxy and set it to default. The same for EC cluster,
> > just set the allowed policy to EC. If you want both (and let your users
> > decide which policy to use) simply configure a list of allowed policies
> > with the first one in the list as the default policy in case they don't
> > set a policy during container creation.
> >
> > Am 18.07.13 20:15, schrieb Chuck Thier:
> >> I think you are missing the point.  What I'm talking about is who
> >> chooses what data is EC and what is not.  The point that I am trying to
> >> make is that the operators of swift clusters should decide what data is
> >> EC, not the clients/users.  How the data is stored should be totally
> >> transparent to the user.
> >>
> >> Now if we want to down the road offer user defined classes of storage
> >> (like how S3 does reduced redundancy), I'm cool with that, just that it
> >> should be orthogonal to the implementation of EC.
> >>
> >> --
> >> Chuck
> >>
> >>
> >> On Thu, Jul 18, 2013 at 12:57 PM, John Dickinson  >> <mailto:m...@not.mn>> wrote:
> >>
> >>Are you talking about the parameters for EC or the fact that
> >>something is erasure coded vs replicated?
> >>
> >>For the first, that's exactly what we're thinking: a deployer sets
> >>up one (or more) policies and calls them Alice, Bob, or whatever,
> >>and then the API client can set that on a particular container.
> >>
> >>This allows users who know what they are doing (ie those who know
> >>the tradeoffs and their data characteristics) to make good choices.
> >>It also allows deployers who want to have an automatic policy to set
> >>one up to migrate data.
> >>
> >>For example, a deployer may choose to run a migrator process that
> >>moved certain data from replicated to EC containers over time (and
> >>drops a manifest file in the replicated tier to point to the EC data
> >>so that the URL still works).
> >>
> >>Like existing features in Swift (eg large objects), this gives users
> >>the ability to 

Re: [openstack-dev] the performance degradation of swift PUT

2013-08-03 Thread John Dickinson
For those playing along from home, this question has been discussed at 
https://answers.launchpad.net/swift/+question/233444

--John


On Aug 3, 2013, at 10:34 AM, kalrey  wrote:

> hi openstackers,
> I'm learning Swift. I ran some benchmarks against Swift last week and the 
> results were not pleasant.
> When I PUT a large number of small files (4KB) under high concurrency, PUT 
> performance degraded.
> The PUT rate can reach 2000/s at the beginning, but it drops to 600/s 
> after one minute, eventually settling at 100/s, and some errors like '503' 
> occur. But when I flushed all the disks in the cluster it could reach 2000/s again.
> In fact, I also benchmarked GET in the same environment and it 
> works very well (5000/s).
>  
> There are some information which maybe useful:
> Test environment:
> Ubuntu 12.04
> 1 proxy-node : 128GB-ram / CPU 16core / 1Gb NIC*1
> 5 Storage-nodes : each with 128GB-ram / CPU 16core / 2TB*4 / 1Gb NIC*1.
> [bench]
> concurrency = 200
> object_size = 4096
> num_objects = 200
> num_containers = 200
> I have traced the code of the PUT operation to find out what causes the 
> performance degradation while putting objects. Some of the code in 
> ObjectController::PUT (swift/obj/server.py) takes a long time.
>  
> > for chunk in iter(lambda: reader(self.network_chunk_size), ''):
> >     start_time = time.time()
> >     upload_size += len(chunk)
> >     if time.time() > upload_expiration:
> >         self.logger.increment('PUT.timeouts')
> >         return HTTPRequestTimeout(request=request)
> >     etag.update(chunk)
> >     while chunk:
> >         written = os.write(fd, chunk)
> >         chunk = chunk[written:]
> >     sleep()
>  
> 'lambda: reader' costs an average of 600ms per execution, and 'sleep()' 
> costs 500ms per execution. In fact, 'fsync' also spends a lot of time when 
> the file is flushed to disk at the end, and I have already removed it just 
> for testing. I think these times are too long.
> I monitored the cluster's resources while putting objects. Bandwidth usage 
> was very low and CPU load was very light.
> I have tried changing vfs_cache_pressure to a low value and it does not seem 
> to help.
> Is there any advice on how to figure out this problem?
> appreciate~
> kalrey
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] gate on functional tests

2013-08-05 Thread John Dickinson
All,

The Swift functional tests have been running as an advisory for a bit now on 
all Swift patches. Everything appears to be plumbed together correctly, and the 
tests have been added as gating tests for every merge into master.

Thanks to the -infra team for all their hard work.

--John





signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] gate on functional tests

2013-08-06 Thread John Dickinson
They were non-voting. The change is that they are now voting.

--John


On Aug 5, 2013, at 9:17 PM, Chmouel Boudjnah  wrote:

> On Mon, Aug 5, 2013 at 10:59 PM, John Dickinson  wrote:
>> The Swift functional tests have been running as an advisory for a bit now on 
>> all Swift patches. Everything appears to be plumbed together correctly, and 
>> the tests have been added as gating tests for every merge into master.
> 
> would it be too soon to make those gates as voting gates [1]? I
> haven't seen false positive with those yet
> 
> (except we really should fix
> https://bugs.launchpad.net/python-swiftclient/+bug/1201376 so we don't
> get all those keystoneclient cnx messages filling up the logs)
> 
> Chmouel.
> 
> [1] I think they are running as non-voting for now.
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] Swift 1.9.1 RC available

2013-08-07 Thread John Dickinson
Today we have released Swift 1.9.1 (RC1).

The tarball for the RC is at
http://tarballs.openstack.org/swift/swift-milestone-proposed.tar.gz

This release was initially prompted by a bug found by Peter Portante
(https://bugs.launchpad.net/swift/+bug/1196932) and includes a patch
for it. All clusters are recommended to upgrade to this new release.
As always, you can upgrade to this version of Swift with no end-user
downtime.

In addition to the patch mentioned above, this release contains a few
other important features:

* The default worker count has changed from 1 to auto. With the new
  default, the proxy, container, account, and object WSGI servers will
  each spawn as many workers as there are CPU cores.

* A "reveal_sensitive_prefix" config parameter was added to the
  proxy_logging config. This value allows the auth token to be
  obscured in the logs.

* The Keystone middleware will now enforce that the reseller_prefix
  ends in an underscore. Previously, this was a recommendation, and
  now it is enforced.
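
A minimal sketch of those two settings (the reveal value is illustrative,
not the shipped default):

# any of the wsgi server configs, e.g. proxy-server.conf
[DEFAULT]
workers = auto

# proxy-server.conf
[filter:proxy-logging]
# only the first N characters of the auth token appear in the logs
reveal_sensitive_prefix = 12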

There are several other changes in this release. I'd encourage you to
read the full changelog at
https://github.com/openstack/swift/blob/master/CHANGELOG.

On the community side, this release includes the work of 7 new
contributors. They are:

Alistair Coles (alistair.co...@hp.com)
Thomas Leaman (thomas.lea...@hp.com)
Dirk Mueller (d...@dmllr.de)
Newptone (xingc...@unitedstack.com)
Jon Snitow (other...@swiftstack.com)
TheSriram (sri...@klusterkloud.com)
Koert van der Veer (ko...@cloudvps.com)

Thanks to everyone for your hard work. I'm very happy with where Swift
is and where we are going together.

--John




signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] Swift hackathon in October in Austin

2013-08-08 Thread John Dickinson
We (SwiftStack) are hosting a Swift Hackathon in Austin, Texas on October 
15-17. We're bringing together Swift contributors to meet and hack for a few 
days.

http://swifthackathon.eventbrite.com

Our plan is to provide a few days of hacking together with just a few rules: 1) 
No slide decks and 2) No "Intro to Swift" sessions. On the other hand, we 
absolutely want lively discussions, competing code implementations, and code 
merged into master.

There is limited space, so if you would like to attend, please RSVP soon.

--John





signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] Swift 1.9.1 released

2013-08-13 Thread John Dickinson
Swift 1.9.1, as described below, has been released. Download links to the 
tarball are at https://launchpad.net/swift/havana/1.9.1


--John


On Aug 7, 2013, at 10:21 AM, John Dickinson  wrote:

> Today we have released Swift 1.9.1 (RC1).
> 
> The tarball for the RC is at
> http://tarballs.openstack.org/swift/swift-milestone-proposed.tar.gz
> 
> This release was initially prompted by a bug found by Peter Portante
> (https://bugs.launchpad.net/swift/+bug/1196932) and includes a patch
> for it. All clusters are recommended to upgrade to this new release.
> As always, you can upgrade to this version of Swift with no end-user
> downtime.
> 
> In addition to the patch mentioned above, this release contains a few
> other important features:
> 
> * The default worker count has changed from 1 to auto. With the new
>  default, the proxy, container, account, and object WSGI servers will
>  each spawn as many workers as there are CPU cores.
> 
> * A "reveal_sensitive_prefix" config parameter was added to the
>  proxy_logging config. This value allows the auth token to be
>  obscured in the logs.
> 
> * The Keystone middleware will now enforce that the reseller_prefix
>  ends in an underscore. Previously, this was a recommendation, and
>  now it is enforced.
> 
> There are several other changes in this release. I'd encourage you to
> read the full changelog at
> https://github.com/openstack/swift/blob/master/CHANGELOG.
> 
> On the community side, this release includes the work of 7 new
> contributors. They are:
> 
> Alistair Coles (alistair.co...@hp.com)
> Thomas Leaman (thomas.lea...@hp.com)
> Dirk Mueller (d...@dmllr.de)
> Newptone (xingc...@unitedstack.com)
> Jon Snitow (other...@swiftstack.com)
> TheSriram (sri...@klusterkloud.com)
> Koert van der Veer (ko...@cloudvps.com)
> 
> Thanks to everyone for your hard work. I'm very happy with where Swift
> is and where we are going together.
> 
> --John
> 
> 



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   3   >