be involved, if only as a reviewer to ensure the queue abstraction layer
makes it over safely. [I still question if we need an rpc.notify()…]
--
Eric Windisch
On Wednesday, March 14, 2012 at 5:19 AM, Juan Antonio García Lebrijo wrote:
> Hi,
>
> we are thinking to contribute to inc
roMQ RPC driver, I'd rather that Glance/Nova use the existing implementation
than have to develop and maintain a separate driver for the express purpose of
supporting notifications. That would be highly redundant.
I'm sure Russell must feel the same way about maintaining two Qpid dri
since Glance seems to advertise some OVF support.
--
Eric Windisch
On Tuesday, April 10, 2012 at 11:52 AM, Scott Moser wrote:
> On Tue, 10 Apr 2012, Andrew Bogott wrote:
>
> > I'm reviving this ancient thread to ask: Will there be a code summit session
> > about this? And
EC2 API support config-drive. The EC2 metadata service must remain.
The EC2 API is intended to mimic EC2 behavior and provide compatibility. The
OpenStack implementations should not diverge or break that compatibility.
--
Eric Windisch
On Tuesday, April 10, 2012 at 2:05 PM, Justin Sant
I agree that it is important to access the limitations of the OpenStack EC2 API
implementation.
To that end, make sure to take a look at
https://github.com/cloudscaling/aws-compat
--
Eric Windisch
On Tuesday, April 10, 2012 at 7:39 PM, Joshua Harlow wrote:
> Hi all,
>
'm too pedantic not to correct myself.
--
Eric Windisch
On Tuesday, April 10, 2012 at 8:30 PM, Eric Windisch wrote:
> I agree that it is important to access the limitations of the OpenStack EC2
> API implementation.
>
> To that end, make sure to take a look at
> https://github.
internal APIs directly. The RPC and database can be made to scale in Nova, but
a REST endpoint won't be as reliable or scale as well.
--
Eric Windisch
On Monday, April 23, 2012 at 11:15 AM, Justin Santa Barbara wrote:
> > What's the advantage of replacing the native EC2 compatib
incubated projects. There is a strong enough push to maintain these versions
*anyway*.
--
Eric Windisch
On Monday, April 23, 2012 at 3:25 PM, Justin Santa Barbara wrote:
> I think the documented 'private' API should be the OpenStack API and should
> be available to all callers (
On Monday, April 23, 2012 at 3:42 PM, Justin Santa Barbara wrote:
> I didn't realize people were willing to do so.
>
Ah yes, well, that problem might still remain. There certainly seem to be
volunteers to work on the versioning code, but defining, tagging, and adhering
to API contracts
On Monday, April 23, 2012 at 4:00 PM, Joshua Harlow wrote:
> How are REST endpoints not reliable or scalable ;-)
>
> I’d like to know, seeing as the web is built on them :-)
The resiliency of the internet is actually built on BGP. REST endpoints fall
over
>
> Actually, I think JSON schema for our message-bus messages might be a really
> good idea (tm). Violations could just be warnings until we get things locked
> down... maybe I should propose a blueprint? (Although I have enough of a
> blueprint backlog as it is...)
t lands
in Folsom. (It currently pickles, but only because there was a bug in Essex
at one point, breaking JSON serialization.)
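A rough illustration of the schema idea quoted above, as a minimal sketch. It
assumes the third-party jsonschema package and an illustrative envelope format,
not the actual wire format:

import json
import jsonschema

# Hypothetical message schema; the real envelope fields would differ.
MESSAGE_SCHEMA = {
    "type": "object",
    "properties": {
        "method": {"type": "string"},
        "args": {"type": "object"},
        "version": {"type": "string"},
    },
    "required": ["method", "args"],
}

def serialize(message):
    # Raises jsonschema.ValidationError on violations; this could be demoted
    # to a logged warning until the schema is locked down.
    jsonschema.validate(message, MESSAGE_SCHEMA)
    return json.dumps(message)

print(serialize({"method": "ping", "args": {}, "version": "1.0"}))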
--
Eric Windisch
able in rpc.common and we might be able to
refactor some of the stuff in amqp.py to be more generically useful (maybe).
Right now, none of that is a huge concern to me; we can get it integrated and
do the DRY refactoring later.
--
Eric Windisch
native at the moment.
Additionally, each RPC driver can provide a guide to complying with its
protocol, which extends beyond simply the transport (i.e., AMQP or ZeroMQ). This
might be harder than it sounds and might vary between, or even within, releases.
--
Eric Windisch
On Wednesday, April
Sure, but then the contract becomes between the notifier and the client,
presumably? I'm not as familiar with the notification system as I should be.
I haven't written a ZeroMQ notifier yet, figuring that task would be better
delayed until the move to openstack-common.
--
Eric Win
besides the notifier did this. I'd much rather that a
dash or underscore was used here, if possible.
Then, the ZeroMQ driver would "just work" with the existing notifier by
implementing fanout_cast() for notify().
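A minimal sketch of that mapping, with an in-memory stand-in for the real
driver (the class and topic names here are illustrative assumptions):

class InMemoryDriver(object):
    # Stand-in for an RPC driver that already implements fanout_cast().
    def __init__(self):
        self.sent = []

    def fanout_cast(self, context, topic, msg):
        self.sent.append((topic, msg))

def notify(driver, context, topic, msg):
    # 'notifications.info' becomes 'notifications_info', so the dotted
    # notification topics never reach the driver.
    driver.fanout_cast(context, topic.replace('.', '_'), msg)

driver = InMemoryDriver()
notify(driver, {}, 'notifications.info', {'event_type': 'compute.instance.create'})
print(driver.sent)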
--
Eric Windisch
On Wednesday, April 25, 2012 at 6:23 PM, Mo
+1
--
Eric Windisch
On Friday, April 27, 2012 at 11:09 AM, Dan Prince wrote:
> Russell Bryant wrote the Nova Qpid rpc implementation and is a member of the
> Nova security team. He has been helping chip away at reviews and
> contributing to discussions for some time now.
>
in cases where pure Python would be safer.
--
Eric Windisch
On Sunday, April 29, 2012 at 7:41 PM, Andrew Bogott wrote:
> As part of the plugin framework, I'm thinking about facilities for
> adding commands to the nova-rootwrap list without directly editing the
> code in nova-r
i.openstack.org/GerritWorkflow
--
Eric Windisch
On Wednesday, May 2, 2012 at 8:26 AM, Bernhard M. Wiedemann wrote:
>
>
> - Original Message
> Subject: [cloud-devel] Bash Completion Scripts
> Date: Wed, 02 May
ion/driver side, but cleaner on
the unit tests. This is basically what 'fake_rabbit' is now, anyway.
Thoughts?
--
Eric Windisch
then add any flags we
want.
C. not support testing flags on RPC drivers, have a common "testing" flag.
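A sketch of what option C could look like, using present-day oslo.config
naming as an assumption (at the time, the equivalent lived in the projects'
bundled cfg module):

from oslo_config import cfg

# One shared flag that any RPC driver can consult, instead of per-driver
# fake/test flags.
common_opts = [
    cfg.BoolOpt('testing', default=False,
                help='Run RPC drivers in test mode (no real transport).'),
]

CONF = cfg.CONF
CONF.register_opts(common_opts)

CONF([], project='rpc-demo')
CONF.set_override('testing', True)   # what a unit test would do
print('testing mode:', CONF.testing)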
--
Eric Windisch
On Friday, May 4, 2012 at 6:08 PM, Russell Bryant wrote:
> On 05/04/2012 11:53 AM, Eric Windisch wrote:
> > Russell,
> >
> > FYI, w
>
> I guess another question is, why do you need to set ZeroMQ related flags
> in fake_flags? I think those should only be settings that apply for
> *all* unit tests. I would just register your flags in your unit tests.
>
> https://github.com/openstack/nova/blob/master/nova/tests/rpc/test_qpid.py
The nova RPC implementation is moving into openstack-common; I agree with using
this abstraction.
As per ZeroMQ, I'm the author of that plugin. There is a downloadable plugin
for Essex and I'm preparing to make a Folsom merge prop within the next week or
so, if all goes well.
Sent from my iPad
n it,
> but the real question is: which queue should we be using here?
The OpenStack common RPC mechanism, for sure. I'm biased, but I believe that
while the ZeroMQ driver is the newest, it is the only driver that meets all of
the above requirements, aside from the exceptions marked above.
able.
We should consider to what degree dynamic vs. static configuration is
necessary, whether dynamic configuration is truly required, and how a method
like get_workers should behave on a statically configured system.
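For the statically configured case, get_workers() could be as simple as a
lookup in a fixed topic-to-hosts map; a minimal sketch (class and method names
here are illustrative, not the driver's actual API):

class StaticMatchMaker(object):
    def __init__(self, ring):
        # ring: {'topic': ['host1', 'host2', ...]}, e.g. loaded from a file.
        self.ring = ring

    def get_workers(self, topic):
        # Static configuration: return every known worker for the topic;
        # an empty list just means nothing is registered for it.
        return list(self.ring.get(topic, []))

mm = StaticMatchMaker({'compute': ['node-1', 'node-2'], 'scheduler': ['node-3']})
print(mm.get_workers('compute'))   # ['node-1', 'node-2']
print(mm.get_workers('unknown'))   # []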
Regards,
Eric Windisch
ins. It remains to be seen
which, if any, messaging platform will be the /default/ in Nova/OpenStack
long-term. Currently, RabbitMQ is the default, but Essex introduced Qpid
messaging, and we'll have ZeroMQ messaging if we can get it out of review ;-)
Regards,
Eric Windisch
en we raise an ugly
Exception.
Regards,
Eric Windisch
tests
(and a bug-fix for nova/rpc/impl_fake.py)
--
Eric Windisch
matchmaker lands (this
can provide client-side balancing of servers).
--
Eric Windisch
On Friday, May 25, 2012 at 09:18 AM, Stephen Gran wrote:
> Hello,
>
> I am investigating various high availability options for a pending
> deploy of OpenStack. One of the obvious services to make
ions. (i.e. this would be a win for Qpid as well) I'm clearly not
a die-hard RabbitMQ admin -- is there a reason to use clustering over a
decoupled solution for a greenfield application?
--
Eric Windisch
On Friday, May 25, 2012 at 17:54 PM, Sébastien Han wrote:
> Why don't you u
ck.org/#/c/7921/2 # matchmaker
https://review.openstack.org/#/c/7770/ # new common rpc tests and fake_impl.py
bugfix
--
Eric Windisch
On Wednesday, May 23, 2012 at 11:34 AM, Eric Windisch wrote:
> Looking for code reviews of the ZeroMQ driver:
> https://review.openstack.org/#/c/7633/
tly. This would effectively be a mix of options 1/2.
I'm inclined to suggest option #2 as it is a relatively simple improvement that
would give us short-term gains without much friction. This wouldn't exclude the
other options from being worked on and seems to be
ied and re-executed. Without
having any 'sudo' requirements, the nova user would be quite constrained,
relative to the current situation.
--
Eric Windisch
On Tuesday, June 5, 2012 at 21:18 PM, Yun Mao wrote:
> Python is a scripting language. To get setuid work, you usually ha
, but OpenSSH
had reasons to move to an IPC approach… http://lwn.net/Vulnerabilities/3290/
Regards,
Eric Windisch
>
>
>
> What implementation suboption would have your preference ? Is
> nova-rootwrap now universally used ? Should we prefer compatibility or
> absence of confusion ?
There is an issue of how to extend rootwrap from third-party backend drivers.
If this was (is?) addressed, universal use of ro
will be able to
provide the necessary security backports and hotfixes.
For this reason, support for stable kFreeBSD releases may, in a sense, be
considered significantly shorter than for standard Debian releases.
--
Eric Windisch
On Monday, June 11
handle return values
across a relaunched caller)
Anyway, in the ZeroMQ driver, we could have a local queue to track casts and
remove them when the send() coroutine completes. This would provide restart
protection for casts.
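A minimal sketch of that bookkeeping; the send layer is a stand-in, and in a
real driver the pending table would have to live somewhere that survives the
restart:

import uuid

class CastTracker(object):
    def __init__(self, send_func):
        self.send_func = send_func
        self.pending = {}                      # msg_id -> (topic, msg)

    def cast(self, topic, msg):
        msg_id = str(uuid.uuid4())
        self.pending[msg_id] = (topic, msg)    # record before sending
        self.send_func(topic, msg)
        del self.pending[msg_id]               # drop once send() completed

    def replay(self):
        # After a restart: re-send anything recorded but never completed.
        for msg_id, (topic, msg) in list(self.pending.items()):
            self.send_func(topic, msg)
            del self.pending[msg_id]

sent = []
tracker = CastTracker(lambda topic, msg: sent.append((topic, msg)))
tracker.cast('compute', {'method': 'ping'})
print(sent, tracker.pending)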
--
Eric Windisch
On Tuesday, June 12, 2012 at 09:55 AM, Johannes Erdfe
efore that message is consumed, the requesting process can attempt
to resubmit that message for consumption upon relaunch. The requesting process
would track the amount of time waiting for the message to be consumed and would
subtract that time from the remaining timeout.
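The timeout bookkeeping could look roughly like this (illustrative only; the
"consumer went away" signal is a stand-in exception):

import time

def call_with_resubmit(send, timeout, max_attempts=3):
    remaining = timeout
    for _ in range(max_attempts):
        started = time.time()
        try:
            return send(remaining)
        except RuntimeError:                    # stand-in for a lost consumer
            remaining -= time.time() - started  # charge the time already spent
            if remaining <= 0:
                raise
    raise RuntimeError('gave up after %d attempts' % max_attempts)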
Regards,
Eri
A successful PULL is a successful PUSH.
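A minimal pyzmq PUSH/PULL pair, assuming the pyzmq package; if the PULL side
received the message, the PUSH side necessarily delivered it:

import zmq

ctx = zmq.Context()

pull = ctx.socket(zmq.PULL)
pull.bind('tcp://127.0.0.1:5599')

push = ctx.socket(zmq.PUSH)
push.connect('tcp://127.0.0.1:5599')

push.send_json({'method': 'ping', 'args': {}})
print(pull.recv_json())

push.close()
pull.close()
ctx.term()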
Regards,
Eric Windisch
limited support for __future__.
--
Eric Windisch
On Thursday, June 28, 2012 at 13:48 PM, Timothy Daly wrote:
> nova has tools/hacking.py, which looks like it does check some import stuff,
> among other things.
>
> -tim
>
> On Jun 28, 2012, at 10:15 AM, Joshua Harlow wrote
eed up the movement from incubation to library.
Regards,
Eric Windisch
ouble commits.
--
Eric Windisch
On Tuesday, July 3, 2012 at 15:47 PM, Andrew Bogott wrote:
> On 7/3/12 1:59 PM, Gabriel Hurley wrote:
> > The notion that copying code is any protection against APIs that may change
> > is a red herring. It's the exact same effect
ch time as you see fit to properly update that code and *ensure*
> compatibility in your project
Isn't this what we get with git submodules? Sure, that version is just a
commit-id (or tag), but it isn't tracking HEAD, either. For stable releases, we
can tag and update the reference
enstack-common which are not being automatically resolved and included when
you run update.py?
Regards,
Eric Windisch
aintaining this code and would love to see it working again.
There were already bugs filed for this, and changes already in Gerrit for
review that, once committed, should fix the tests.
The bigger issue is getting people to do the reviews...
>
> The bigger issue is getting people to do the reviews...
>
Here is the link for those that want to help:
https://review.openstack.org/#/q/status:open+project:openstack/openstack-common+branch:master+topic:bug/1021459,n,z
Regards,
Eric
progress:
https://review.openstack.org/#/q/status:open+project:openstack/openstack-common+branch:master+topic:bug/1021459,n,z
--
Eric Windisch
've been at all too
rough on you.
I'd just appreciate it if, in the future, even when the build is broken, code
review is not bypassed. Additionally, if there is a reasonable way to approach
the author of code, especially if there is already a patch in review, that
op
e should
be easier to digest from the perspective of the queue-server buffs.
Let me know when you're ready to have a chat about it; it might be better to do
this over the phone or IRC than by email.
--
Eric Windisch
ide based on their feedback whether it is acceptable to cut the
> nova-volume code out for folsom.
>
Finally something I can put a +1 against.
--
Eric Windisch
times.
Perhaps look into whether inotify would solve this. It is platform-dependent,
but it is the best way to solve these problems without race conditions. If
we're calling kpartx, platform independence is unlikely to be an issue anyway.
However, if compatibility is desired,
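A minimal sketch of the inotify suggestion above, using the third-party
pyinotify package (an assumption; path and handler are illustrative), to wait
for a device node to appear instead of polling:

import pyinotify

class OnCreate(pyinotify.ProcessEvent):
    def process_IN_CREATE(self, event):
        print('appeared: %s' % event.pathname)

wm = pyinotify.WatchManager()
wm.add_watch('/dev/mapper', pyinotify.IN_CREATE)

# Wait up to ~10 seconds for a creation event, then hand it to the handler.
notifier = pyinotify.Notifier(wm, OnCreate(), timeout=10 * 1000)
if notifier.check_events():
    notifier.read_events()
    notifier.process_events()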
On Wednesday, July 18, 2012 at 19:10 PM, Vishvananda Ishaya wrote:
> Hello Everyone!
>
> Yun has been putting a lot of effort into cleaning up our state management,
> and has been contributing a lot to reviews[1]. I think he would make a great
> addition to nova-core.
+1
iver loading
> code. I think he would also be a great addition to nova-core.
+1. I've read through the list and gerrit. Sean seems to be doing a great job.
Regards,
Eric Windisch
Generally, you would return
multiple brokers when you're doing fanout messaging.
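In other words, roughly (the lookup and send functions are stand-ins here, not
the real driver functions):

def fanout_cast(get_workers, send, topic, msg):
    # A fanout cast goes to every host returned for the topic;
    # a plain cast would pick just one of them.
    for host in get_workers(topic):
        send(host, topic, msg)

sent = []
fanout_cast(lambda topic: ['host-a', 'host-b'],
            lambda host, topic, msg: sent.append((host, msg)),
            'notifications_info', {'event': 'test'})
print(sent)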
Regards,
Eric Windisch
Mark suggested.
Regards,
Eric Windisch
def stop(self, server):
    """
    Stop the server.
    """
    return self._action('os-stop', server, None)

def start(self, server):
    """
    Start the server.
    """
    self._action('os-start', server, None)
ccess of OpenStack which are
currently lacking (official) leadership. Everyone's problem is nobody's problem.
Consider this my +1 on assigning a PTL for common.
Regards,
Eric Windisch
e code.
In the first pass, the intention is to leave the matchmaker in and introduce
the membership modules. Then, the matchmaker would either use the new
membership modules as a backend or even be replaced entirely.
Regards,
Eric Windisch
t of service
> groups that can monitor response/heartbeat of service daemons.
>
I see. For some additional context, I'm looking to use this for managing
consumers of round-robin and fanout queues with the ZeroMQ driver, instead of
the static hashmap that i
ain with ISO9660". Now, everyone has to live with a serious
technical blunder.
Per the summit discussion Etherpad:
"injecting files into a guest is a very popular desire."
Popular desires are not necessarily smart desires. We sh
is only updating vfat, another option is mtools, which is entirely userspace
and can be run with some safety on the host.
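For example, something along these lines, assuming the mtools package is
installed (paths and names are illustrative):

import subprocess

def inject_into_vfat(image_path, src_path, dest_name):
    # '::' addresses the root of the image passed with -i; no mount, no
    # kernel filesystem code involved on the host.
    subprocess.check_call(
        ['mcopy', '-i', image_path, src_path, '::' + dest_name])

# inject_into_vfat('config-drive.img', 'user-data.txt', 'user-data.txt')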
Regards,
Eric Windisch
I just realized you said Glance… I'm assuming these are probably ext2/3/4 or
other Linux filesystems. Libguestfs might be the best option, besides simply
not having that feature.
Regards,
Eric Windisch
d user and to forgo
any illusions of mounting the filesystem anywhere via the kernel or FUSE.
--
Regards,
Eric Windisch
m that libguestfs could be used securely, but it isn't.
--
Regards,
Eric Windisch
by a guest in the host?
I believe this is more about reading filesystems that were uploaded by users
into Glance. However, it is essentially the same thing.
I don't think we need to do this and don't think we should do this. Clearly,
however, someone somewhere, at some point,
backend configuration value is not set. In Grizzly, we would make the
rpc_backend variable mandatory in the configuration.
Mark McLoughlin wisely suggested this come before the mailing list, as it will
affect a great many people. I welcome feedback and discu
't need
to have a hard default. It could have a soft default, via a prompt on first run
unless defined in the localrc, similar to how passwords are currently handled.
Regards,
Eric Windisch
general recommendation on the mailing list for
those installing Nova on a single node is to use devstack. In that case, the
configuration is prompt-driven, and whatever changes need to be made can be
made.
Regards,
Eric Windisch
won't be a very big thorn. If they're a large deployment and
they ignore all of this, including ignoring any need for testing before doing a
large rollout…
Regards,
Eric Windisch
ian, and Gentoo; not the steps taken by
FreeBSD. The community already has a number of emerging proprietary and/or
corporate-sponsored distributions; it would not do the community a favor for
the foundation to create its own.
Regards,
Eric Windisch
(sent from my iPad)
h ends. The cloud
server software needs to enable a compatible and standards-compliant service
endpoint, enforced or not… and the client API libraries need to be flexible
enough to handle a variety of services that might not be 100% identical. Just
like the
ting with the VFS. There has even been work on creating
user-space character devices. One could also make FUSE work with Unix sockets
as an alternative to character devices…
None of this is out of the box, tested, or even in existence...
Regards,
Eric Windisch
serspace, and write-once-read-never, if at all
possible. However, I'm not too confident of libguestfs, but I understand why
it is attractive in the absence of good userspace filesystem tools. Several
have pointed to mtools as one, and I'll also add debugfs to this list,
o not necessarily remain valid in an evolving
architecture.
I believe that OpenStack requires leaders with experience 'in the trenches' of
operations, implementation, and of course, leadership. I ask for your trust,
and your votes, in this coming Technical Committee election.
Thank you
the effort to
create a reusable consumption pattern.
Regards,
Eric Windisch
rs.
I believe OpenStack should have its own OUI.
Regards,
Eric Windisch
t of the box
without risk of collisions (outside of other Xen hosts' VMs). Colliding with
artifacts of your own software is better than colliding with local operator
configurations and preferences.
Regards,
Eric Windisch
no problem, but does it have OpenStack collateral consequences?
>
>
>
I generally advise not to do this due to potential security concerns.
In practice, your concerns will be with deleting manually created volumes and
creating volumes that match the pattern set in the nova-volumes/cinder
config
ory_mb'.
Note that both of these define how much memory goes to your OS and
applications, rather than how much memory is set aside for Nova / VMs. If you
had 8GB and wanted to give Nova 6GB, you would reserve 2GB for your host OS.
This is a soft limit; your OS will happily take mor
an
a week, tomorrow if I'm smart, lucky, and the store doesn't sell out of
Red Bull. A two-week grace period would give me a nice buffer.
Thanks,
Eric Windisch
-MQ
--
Eric Windisch
On Tuesday, January 24, 2012 at 5:20 PM, Yun Mao wrote:
> Hi I'm curious and unfamiliar with the subject. What's the benefit of
> 0MQ vs Kombu? Thanks,
>
> Yun
>
> On Tue, Jan 24, 2012 at 7:08 PM, Eric Windisch (mailto:e...@cloudscaling.
On Tuesday, January 24, 2012 at 6:05 PM, Zhongyue Luo wrote:
> I assume the messages will be delivered directly to the destination rather
> than piling up on a queue server?
>
>
>
Although the blueprint doesn't specify this level of detail, the intention had
originally been to deliver a
.
It would be an interesting exercise to allow the ZeroMQ driver to defer back to
the Kombu or Qpid driver for those messages which must remain centralized.
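A rough sketch of that deferral, with stand-in driver objects (not the real
Kombu/Qpid/ZeroMQ classes):

class Recorder(object):
    # Stand-in for a real RPC driver.
    def __init__(self, name):
        self.name, self.sent = name, []

    def cast(self, context, topic, msg):
        self.sent.append((topic, msg))

class HybridRPC(object):
    def __init__(self, p2p_driver, broker_driver, centralized_topics):
        self.p2p = p2p_driver
        self.broker = broker_driver
        self.centralized = set(centralized_topics)

    def cast(self, context, topic, msg):
        # Topics that must stay centralized go through the broker-backed
        # driver; everything else goes peer-to-peer.
        driver = self.broker if topic in self.centralized else self.p2p
        driver.cast(context, topic, msg)

rpc = HybridRPC(Recorder('zmq'), Recorder('kombu'), ['scheduler'])
rpc.cast({}, 'scheduler', {'method': 'run_instance'})   # broker-backed
rpc.cast({}, 'compute.node-1', {'method': 'ping'})      # peer-to-peer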
--
Eric Windisch
On Wednesday, January 25, 2012 at 1:18 AM, Alexis Richardson wrote:
> On Wed, Jan 25, 2012 at 4:46 AM, Eric Windisch (
quirement of the serialization
protocol to manage this. Currently, data is simply pickled. Perhaps for
Folsom we can create a blueprint for the signing & verification of messages.
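A minimal sketch of what signing and verification might look like with a
shared secret and HMAC-SHA256 from the standard library (key handling and the
envelope format here are assumptions):

import hashlib
import hmac
import json

SECRET = b'not-a-real-key'

def sign(message):
    payload = json.dumps(message, sort_keys=True)
    digest = hmac.new(SECRET, payload.encode('utf-8'),
                      hashlib.sha256).hexdigest()
    return {'payload': payload, 'hmac': digest}

def verify(envelope):
    expected = hmac.new(SECRET, envelope['payload'].encode('utf-8'),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, envelope['hmac']):
        raise ValueError('message signature mismatch')
    return json.loads(envelope['payload'])

print(verify(sign({'method': 'ping', 'args': {}})))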
--
Regards,
Eric Windisch
The ZeroMQ RPC driver is now feature-complete. I'm cleaning up for a
merge-proposal!
--
Eric Windisch
.
It might be an interesting exercise to provide log messages in two languages
(one always being English), if we don't simply standardize on English.
--
Eric Windisch
On Monday, February 13, 2012 at 12:50 PM, Joshua Harlow wrote:
> Hi all,
>
> I was just w
useful for debugging
purposes.
--
Eric Windisch
On Monday, February 13, 2012 at 1:15 PM, Joshua Harlow wrote:
> Sure, but to contribute they have to
> understand Python, which itself is English-based?
> I can understand for sys-ops people t
ve rather
than a support and operations perspective. Developers will understand English,
but the operations and especially the support team may not. Having native
language log messages has the potential to significantly decrease support costs
for users both domestic and abroad (where domestic users might
both ways...
--
Eric Windisch
On Monday, February 13, 2012 at 2:41 PM, Joshua Harlow wrote:
> Agreed, I do that as well.
>
> But I’m also a biased yankee, now a californian (not hippie/ster yet, haha).
>
> On 2/13/12 2:37 PM, "Andrew Bog
e. I've been working with Russell Bryant and his helpful reviews
this morning to polish up for inclusion, if it can get the approvals.
--
Eric Windisch
On Tuesday, February 7, 2012 at 4:05 PM, Eric Windisch wrote:
> The ZeroMQ RPC driver is now feature-complete. I'm cl
y. Breaking out drivers, while easier,
would fracture the community in potentially devastating ways.
--
Eric Windisch
he path of least resistance as
long as we're committed to eventlet.
--
Eric Windisch
On Thursday, March 1, 2012 at 3:36 PM, Vishvananda Ishaya wrote:
> Yes it does. We actually tried to use a pool at diablo release and it was
> very broken. There was discussion about moving over t
s place to switch coroutines
via monkey-patching.
That said, it shouldn't be necessary to "sprinkle" sleep(0) calls. They should
be strategically placed, as necessary.
"race-conditions" around coroutine switching sounds more like thread-safety
issues...
--
Eric Windi
must admit this has largely been
due to my use of a C library (libzmq).
--
Eric Windisch
durability in RPC. I've done quite
a bit of analysis of this requirement and it simply isn't necessary. There is
some need in AMQP for this due to implementation-specific issues, but those
are not necessarily unsolvable. However, these problems simply do not exist for
all RPC
implement
o propose that it might be more
agreeable to push this to 24:00 UTC if the Europeans do not protest too
badly. The time in the US would remain within or close to business hours on
both coasts while making the meetings more reasonable for those in Asia.
--
Regar
+1 for Berlin or Seoul for my own convenience, but I like the idea of Brussels.
--
Eric Windisch
t a response from the PPB.
Regards,
Eric Windisch
e...@cloudscaling.com
contributions and new members. I suppose it still rubs
many of us the wrong way when Rackers use affirmative terms when discussing
matters that require governance.
Daniel, see what you got yourself into? Welcome!
Regards,
Eric Windisch
e...@cloudscaling.com
internally, or for deployments where their smart
filers have edge-cases preventing or breaking the use of these features.
Regards,
Eric Windisch
e...@cloudscaling.com
>
> You're involved in the tgt project and it is the tgt project's purgative to
> add features as seen fit, but are you sure that you want to support this
> feature?
Major spell check fail: prerogative ;-)
Regards,
Eric Windisch