Hi,
We're looking at setting up a geographically distributed OpenStack
installation, and we're considering either cells or regions. We'd like to
share a single Glance install between our regions (or cells), so the same
images can be spawned anywhere. From here:
http://docs.openstack.org/trun
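Roughly what we're picturing (just a sketch, and the endpoint name below is made
up): each region's nova.conf would point at the one shared Glance API (via
glance_api_servers, if I have the option name right), and a quick
python-glanceclient check like the following could confirm that every region
sees the same image catalogue.

    # Sketch only: the shared endpoint and the token handling are hypothetical.
    from glanceclient import Client

    SHARED_GLANCE = 'http://glance.shared.example.org:9292'  # made-up endpoint

    def list_shared_images(auth_token):
        """List images from the single Glance service every region points at."""
        glance = Client('1', endpoint=SHARED_GLANCE, token=auth_token)
        return [(image.id, image.name) for image in glance.images.list()]

If both regions return the same list, spawning the same image anywhere should
mostly come down to network reachability between the compute sites and that
Glance endpoint.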
ack as a way to build a shared RC
> infrastructure at Harvard at a level below the scheduler, and would love to
> hear what others are
>
>
>
> On Mon, Apr 15, 2013 at 10:32 AM, John Paul Walters wrote:
Hi,
We missed the deadline for an HPC-related design summit session, but there's
still time to sign up for an unconference session. Is anyone interested? I'd
propose one of the Wednesday slots.
best,
JP
nova has the right
> to write within the mounted directory?
>
> Razique Mahroua - Nuage & Co
> razique.mahr...@gmail.com
> Tel : +33 9 72 37 94 15
>
>
>
> On 11 Apr 2013, at 16:36, John Paul Walters wrote:
Hi,
We've started implementing a GlusterFS-based solution for instance storage in
order to provide live migration. I've run into a strange problem when using a
multi-node Gluster setup, and I hope someone has a suggestion for resolving it.
I have a 12-node distributed/replicated Gluster cluster. I
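For context, the setup on each compute node is basically this (a sketch; the
volume name is made up, and /var/lib/nova/instances is just nova's default
instances_path):

    # Sketch: mount the shared Gluster volume where nova keeps instance files,
    # so every compute node sees the same disks and live migration can work.
    import subprocess

    GLUSTER_VOLUME = 'gluster-node1:/nova-instances'   # hypothetical volume
    INSTANCES_PATH = '/var/lib/nova/instances'          # nova default

    def mount_instance_store():
        """Mount the GlusterFS volume at nova's instances path."""
        subprocess.check_call(
            ['mount', '-t', 'glusterfs', GLUSTER_VOLUME, INSTANCES_PATH])
        # nova must be able to write here (the permissions question Razique
        # raised earlier in the thread).
        subprocess.check_call(['chown', 'nova:nova', INSTANCES_PATH])

With that in place on every node, the problem only shows up once the cluster
spans multiple Gluster nodes.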
Hi All,
The monthly HPC telecon is postponed until we have a better sense of the
sessions that will be scheduled for the Grizzly Design Summit. We'll shoot for
holding the telecon on Monday, Oct. 8 in the hopes that we'll know whether our
HPC sessions were accepted. If the decisions aren't ye
adding to the Grizzly release.
If anyone has any other specific agenda items, they're welcome to propose them.
I'm unable to attend, so my colleague David Kang will be hosting this meeting.
We look forward to talking to you!
best,
JP
John Paul Walters invites you to attend
Hi,
The HPC telecon, normally scheduled for the first Monday of the month, will
instead be held on Monday, Sep. 10 due to the Labor Day holiday in the US.
It'll be held at 12:00 noon Eastern Time. I'll follow up with an agenda near
the end of next week. If there's anything that others would
Hi Boris,
We have GPU passthrough working with NVIDIA GPUs in Xen 4.1.2, if I recall
correctly. We don't yet have a stable Xen + Libvirt installation working, but
we're looking at it. Perhaps it would be worth collaborating since it sounds
like this could be a win for both of us.
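For reference, the Xen-only side of our setup is the usual PCI passthrough
recipe (hide the GPU behind xen-pciback, then list it in the guest config). The
piece we still have to sort out for a libvirt-managed setup is the <hostdev>
element libvirt expects for a passed-through PCI device. A rough sketch of just
that piece, with a made-up PCI address:

    # Sketch: render the libvirt <hostdev> XML for one passed-through GPU.
    # The PCI address used in the example is hypothetical.
    HOSTDEV_TEMPLATE = """<hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x{domain:04x}' bus='0x{bus:02x}'
                 slot='0x{slot:02x}' function='0x{function:x}'/>
      </source>
    </hostdev>"""

    def hostdev_xml(domain, bus, slot, function):
        """Build the hostdev element for a GPU at the given PCI address."""
        return HOSTDEV_TEMPLATE.format(domain=domain, bus=bus,
                                       slot=slot, function=function)

    # e.g. hostdev_xml(0, 0x05, 0x00, 0) for a GPU at 0000:05:00.0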
best,
JP
ctices
I hope to hear from you on Monday!
best,
JP
--
John Paul Walters invites you to attend this online meeting.
Topic: HPC Monthly Telecon
Date: Monday, August 6, 2012
Time: 12:00 pm, Eastern Daylight Time (New York, GMT-04:00)
Meeting Number: 923 25
ion. What do others think?
Would others be interested in attending?
JP
On Jul 22, 2012, at 9:12 PM, Lorin Hochstein wrote:
> On Jul 6, 2012, at 1:28 PM, John Paul Walters wrote:
>
>> I'm strongly considering putting together a proposal for a BoF (birds of a
>>
Trinath,
Do you have a /tftpboot/ directory on your proxy compute server, and if so, are
there any error logs in there? Please send us any logs that you find and
we'll try to get you fixed up.
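If it helps, here's a trivial way to sweep that directory for candidates (the
filename patterns are just guesses at what might be left behind):

    # Guesswork helper: list anything under /tftpboot that looks like a log.
    import fnmatch
    import os

    def collect_tftpboot_logs(root='/tftpboot'):
        """Return paths of files that look like error logs under /tftpboot."""
        patterns = ('*.log', '*error*', '*.err')
        matches = []
        for dirpath, _dirs, filenames in os.walk(root):
            for name in filenames:
                if any(fnmatch.fnmatch(name, p) for p in patterns):
                    matches.append(os.path.join(dirpath, name))
        return sorted(matches)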
JP
On Jul 6, 2012, at 6:51 AM, Trinath Somanchi wrote:
> Hi-
>
> I'm currently trying/testing
I'm strongly considering putting together a proposal for a BoF (birds of a
feather) session at this year's Supercomputing in Salt Lake City. For those of
you who are likely to attend, is anyone else interested? It's not a huge
amount of time invested on my end to put together the proposal, but
Stefano,
>
> This is a great idea. HPC is an interesting topic and discussions around
> it may be useful for lots of people. As a format, I would suggest
> something that can be recorded and played back conveniently after it
> happened. Casual Hangouts on Google are very trendy lately (it's the n
Europe so it
> would limit the participation.
>
> Tim
>
>> -Original Message-
>> From: openstack-bounces+tim.bell=cern...@lists.launchpad.net
>> [mailto:openstack-bounces+tim.bell=cern...@lists.launchpad.net] On Behalf
>> Of Narayan Desai
>> Sent: 0
Hi All,
One of the outputs of the design summit was that folks are interested in
participating in a monthly (or so) telecon to discuss feature requests, best
practices, etc. I'd like to get this process started. For those of you who
are interested, what's the preferred format? IRC, telephone
node either tftp boots or nfs boots
(I'm sorry, I don't have the details immediately available).
best,
JP
- Original Message -
From: "Trinath Somanchi"
To: "John Paul Walters"
Cc: openstack@lists.launchpad.net
Sent: Wednesday, July 4, 2012 4:54:16 AM
Subject:
Matt,
I agree with almost everything that you're saying, though I'd add that we hope
to change things. I hope that our work at ISI is moving in that direction.
But you're right, hypervisors add some overhead, network performance isn't
always great, etc. Things are changing, albeit slowly, but
Hi,
I'm not sure that I fully understand the security angle that you're getting at
here, but you and Jay are right that we're focusing on adding heterogeneity to
OpenStack. Right now we support large shared-memory x86 machines, like SGI
UVs, GPUs, and Tilera systems. The blueprints you linked
kes Proxy
> nova-compute.
>
> On Mon, Jul 2, 2012 at 9:32 PM, John Paul Walters wrote:
Hi Trinath,
Our baremetal experts are on vacation for the next week or so, so I'll take a
stab at answering in their absence. First, just to be clear, right now the
baremetal work that's present in Essex supports ONLY the Tilera architecture.
We're working with the NTT folks to add additional
Hi,
On May 24, 2012, at 5:45 AM, Thierry Carrez wrote:
>
>
>> OpenNebula also has the advantage, for me, that it's designed to
>> provide a scientific cloud, and it's used by a few research centres and even
>> supercomputing centres. How about OpenStack? Has anyone tried deploying it in
>> supercompu
What about PCI passthrough? I'm not certain, because I've never tried it
without KVM, but I'd be surprised if it worked outside of KVM.
JP
On May 8, 2012, at 4:30 PM, Razique Mahroua wrote:
> Hi Lorin,
> not that I'm aware of. In fact, even outside of Nova, both have similar
> features.
David,
We're currently in the process of building Essex-3/Essex-4 RPMs locally at
USC/ISI for our heterogeneous OpenStack builds. When I looked at the EPEL
testing repo, it looked like the packages that are currently available are
right around Essex-1. Are there any plans to update to the mor