I think most are missing the point a bit. The question that should really
be asked is: what is the right path for Swift to continue to scale? Since
the inception of Openstack, Swift has had to solve problems of scale that
generally are not shared with the rest of Openstack.
When we first set out to w
r journals, and thus
provides very misleading results.
--
Chuck
On Mon, Jun 16, 2014 at 4:03 AM, Vincenzo Pii wrote:
> Hi Chuck,
>
> Many thanks for your comments!
> I have replied on the blog.
>
> Best regards,
> Vincenzo.
>
>
> 2014-06-12 21:10 GMT+02:00 Chuck
Just an FYI for those interested in the next eventlet version. It also
looks like they have a Python 3 branch ready to start testing with.
--
Chuck
-- Forwarded message --
From: Sergey Shepelev
Date: Fri, Jun 13, 2014 at 1:18 PM
Subject: [Eventletdev] Eventlet 0.15 pre-release te
-for-object-storage-on-small-clusters/
>
>
> 2014-06-06 20:19 GMT+02:00 Matthew Farrellee :
>
>> On 06/02/2014 02:52 PM, Chuck Thier wrote:
>>
>> I have heard that there has been some work to integrate Hadoop with
>>> Swift, but know very little about it. Int
s that handle?
>
> Thanks for your sharing, I'm sure everyone will take a good lesson.
>
> Ciao
>
> Sent from iPhone ()
>
> On May 29, 2014, at 22:39, Chuck Thier
> wrote:
>
> Hello Remo,
>
> That is quite an open ended question :) If you c
Hello Remo,
That is quite an open ended question :) If you could share a bit more
about your use case, then it would be easier to provide more detailed
information, but I'll try to cover some of the basics.
First, a disclaimer. I am one of the original Openstack Swift developers,
so I *may* be
There is a review for swift [1] that is requesting to set the max header
size to 16k to be able to support v3 keystone tokens. That might be fine
if you measure your request rate in requests per minute, but this is
continuing to add significant overhead to swift. Even if you *only* have
10,000 req
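Sketching the arithmetic behind that concern (the request rate is an assumed figure for illustration, not a measurement):

```python
# Rough cost of larger headers per request (illustrative numbers only).
header_size = 16 * 1024      # bytes: the proposed max header size
req_per_sec = 10_000         # assumed request rate

bytes_per_sec = header_size * req_per_sec
print(f"{bytes_per_sec / 1024**2:.0f} MiB/s of header traffic")  # 156 MiB/s
```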
Hi Shyam,
If I am reading your ring output correctly, it looks like only the devices
in node .202 have a weight set, and thus why all of your objects are going
to that one node. You can update the weight of the other devices, and
rebalance, and things should get distributed correctly.
--
Chuck
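The weight-proportional behavior described above can be sketched as follows (a toy model; the real ring builder also accounts for zones and balance):

```python
# Sketch: why devices with weight 0 receive no partitions.
# Partitions are assigned in proportion to each device's weight.
def desired_partitions(weights, total_parts):
    total_weight = sum(weights.values())
    return {dev: round(total_parts * w / total_weight)
            for dev, w in weights.items()}

# Only node .202's device has a weight set:
print(desired_partitions({"202/sdb": 100, "203/sdb": 0, "204/sdb": 0}, 1024))
# all 1024 partitions land on 202/sdb
```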
Hello Ankit,
The easiest way is to create a new loopback device that is 40GB in the same
way the 10GB device is created. This will create a new empty device, and
you will lose your data. This usually isn't a problem since the SAIO is
specifically for development and learning-- not for production
Well the short answer to that question is that it is generally a best
practice to run a disk controller card with a persistent cache in front of
your storage drives. When this is the case you want to turn barriers off,
otherwise they would render your cache ineffective.
If you are not running a
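For reference, the mount options suggested in the Swift deployment guides of that era look like this fstab entry (device and mount point are placeholders; only use nobarrier when the controller cache is battery-backed):

```
/dev/sdb1  /srv/node/sdb1  xfs  noatime,nodiratime,nobarrier,logbufs=8  0 0
```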
to integrate swift with azure. Or how can I solve this problem?
>
> Regards.
>
>
> 2014-04-15 20:51 GMT+03:00 Chuck Thier :
>
> I'm not aware of any integration between Swift and Azure. Could you
>> explain more what problem you are trying to solve?
>>
>> T
I'm not aware of any integration between Swift and Azure. Could you
explain more what problem you are trying to solve?
Thanks,
--
Chuck
On Tue, Apr 15, 2014 at 6:55 AM, mehmet hacısalihoğlu
wrote:
> Hi All,
>
> I want to integrate swift with azure. But I don't find a related document to
> this s
else looked into that?
--
Chuck
On Fri, Apr 4, 2014 at 9:41 AM, Chuck Thier wrote:
> Howdy,
>
> Now that swift has aligned with the other projects to use requests in
> python-swiftclient, we have lost a couple of features.
>
> 1. Requests doesn't support expect: 100-
On Fri, Apr 4, 2014 at 11:18 AM, Donald Stufft wrote:
>
> On Apr 4, 2014, at 10:56 AM, Chuck Thier wrote:
>
> On Fri, Apr 4, 2014 at 9:44 AM, Donald Stufft wrote:
>
>> requests should work fine if you used eventlet to monkey patch the
>> socket module prior to imp
On Fri, Apr 4, 2014 at 9:44 AM, Donald Stufft wrote:
> requests should work fine if you used eventlet to monkey patch the
> socket module prior to importing requests.
>
That's what I had hoped as well (and is what swift-bench did already), but
it performs the same if I monkey patch or not.
--
Ch
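The import-order pitfall behind this can be demonstrated with only the standard library (no eventlet needed; `fakelib` below is a hypothetical stand-in for any module that grabs `socket.socket` at import time):

```python
# Sketch of why patch order matters: a module that grabs a reference at
# import time keeps the unpatched object, no matter what you patch later.
import socket
import sys
import types

# Simulate a library that does `from socket import socket` at import time.
lib = types.ModuleType("fakelib")
exec("from socket import socket as sock", lib.__dict__)
sys.modules["fakelib"] = lib

original = socket.socket
socket.socket = lambda *a, **kw: "patched"   # monkey patch -- but too late

import fakelib
print(fakelib.sock is original)   # True: fakelib kept the original object
socket.socket = original          # undo the patch
```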
Howdy,
Now that swift has aligned with the other projects to use requests in
python-swiftclient, we have lost a couple of features.
1. Requests doesn't support expect: 100-continue. This is very useful for
services like swift or glance where you want to make sure a request can
continue before y
>
>
> I agree this is quite an issue but I also think that pretending that
> we'll be able to let OpenStack grow with a minimum set of databases,
> brokers and web servers is a bit unrealistic. The set of supported
> technologies won't be able to fulfill the needs of all the
> yet-to-be-discovered
Hi Shrinand,
For your use case, it would certainly lower the overall latency, and likely
increase throughput. The downside is that the client has to track all of
the objects, and you lose some of the other features, like usage.
Is it viable for swift? That's a tough question. I think it could be
Concurrency is hard, let's blame the tools!
Any lib that we use in python is going to have a set of trade-offs.
Looking at a couple of the options on the table:
1. Threads: Great! code doesn't have to change too much, but now that
code *will* be preempted at any time, so now we have to worry a
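A minimal sketch of the locking concern with preemptive threads (pure stdlib; the counter and thread counts are arbitrary):

```python
# Sketch: with preemptive threads, shared state needs explicit locking,
# because `counter += 1` is a read-modify-write that can interleave.
import threading

counter = 0
lock = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        with lock:              # remove this and the total becomes unpredictable
            counter += 1

threads = [threading.Thread(target=bump, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 with the lock held around each increment
```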
My first guess is that the Redhat kernel in 5.8 may not have as many xfs
improvements and may require that the inode size set to 1024 instead of the
default.
That would be the first thing I would try.
--
Chuck
On Tue, Dec 10, 2013 at 5:57 PM, John Smith wrote:
> On Wed, Dec 11, 2013 at 12:53
Hi Shri,
On Thu, Oct 10, 2013 at 1:31 PM, Shrinand Javadekar wrote:
> Thanks for the inputs Chuck. Please see my responses inline.
>
>
> On Thu, Oct 10, 2013 at 7:56 AM, Chuck Thier wrote:
>
>> Hi Shri,
>>
>> I think your observations are fairly spot on.
otational) of the servers running swift. But the
> relative numbers give a better picture of the benefits of:
>
> i) Sharding across containers to increase throughput
> ii) Restricting the number of objects per container
>
> Let me know if I have missed out on anything or if the
At a minimum, you want 100 partitions per disk. Having more partitions
doesn't really matter unless you get way too many (for example, 100K partitions on a
disk is probably a bad idea). And yes, since there are 3 replicas, there
will be 3 copies of every partition.
When determining the part power, you r
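The rule of thumb above can be turned into a quick part-power calculation (a sketch; `max_disks` is the largest number of disks you ever expect the cluster to reach):

```python
# Part power sketch: enough partitions for ~100 per disk at the cluster's
# maximum expected size, rounded up to the next power of two.
import math

def part_power(max_disks, parts_per_disk=100):
    return math.ceil(math.log2(max_disks * parts_per_disk))

print(part_power(100))   # 14, i.e. 2**14 = 16384 partitions
```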
Hi Shri,
The short answer is that sharding your data across containers in swift is
generally a good idea.
The limitations with containers has a lot more to do with overall
concurrency rather than total objects in a container. The number of
objects in a container can have an effect on that, but w
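A sketch of the container sharding being discussed (the naming scheme and shard count are hypothetical, not a Swift API):

```python
# Sketch: spread objects across N containers by a stable hash of the
# object name, so no single container DB takes all the writes.
import hashlib

def shard_container(base, name, shards=16):
    h = hashlib.md5(name.encode()).hexdigest()
    return f"{base}_{int(h, 16) % shards:02d}"

print(shard_container("images", "photo-0001.jpg"))
```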
Hi Pangj,
First, make sure you have an updated version of swift-bench. There was a
bug where it was relying on python-swiftclient to setup eventlet, but when
eventlet was removed from swiftclient, that caused swift-bench to not run
requests concurrently.
There are quite a few things that should
Hi Steve,
The services start with the ports defined in the configuration files.
Since it looks like you want to run several servers on the same machine, I
would suggest looking at the all in one docs (
http://docs.openstack.org/developer/swift/development_saio.html) as it is
set up in a similar w
happy to accept pull requests or comments.
>
> What clients do you use against your swift implementation? Have you gotten
> pyrax to work with it?
>
> -- Kyle
>
> On Tue, Jul 30, 2013 at 10:36 AM, Chuck Thier wrote:
>
>> Hey Kyle,
>>
>> I'm interested
ves users the
> ability to flexibly store their data with a nice interface yet still have
> the ability to get at some of the pokey bits underneath.
>
> --John
>
>
>
> On Jul 18, 2013, at 10:31 AM, Chuck Thier wrote:
>
> > I'm with Chmouel though. It seems
I'm with Chmouel though. It seems to me that EC policy should be chosen by
the provider and not the client. For public storage clouds, I don't think
you can make the assumption that all users/clients will understand the
storage/latency tradeoffs and benefits.
On Thu, Jul 18, 2013 at 8:11 AM, Jo
Hello,
If you followed the rsyslog instructions in the SAIO, then the proxy logs
will be in /var/log/swift/proxy.log and proxy.error. If not, then it will
be either in /var/log/syslog or /var/log/messages, depending on your server
distro.
--
Chuck
On Tue, Jul 16, 2013 at 4:57 AM, CHABANI Moham
Swift stores object metadata in the xattrs of the file on disk, and XFS
stores xattrs in the inodes. When swift was first developed, there were
performance issues with using the default inode size in XFS, which led
to us recommending a larger inode size when creating XFS filesystems.
In the pas
On Wed, May 22, 2013 at 7:54 PM, Mark Brown wrote:
> Thanks Chuck.
>
> Just one more question about rebalancing. Have there been measurements on
> how much it affects performance when a rebalance is in progress? I would
> assume its an operation that puts some load on the system, while also
> keep
Hey Mark,
On Wed, May 22, 2013 at 8:59 AM, Mark Brown wrote:
> Thank you for the responses Chuck.
>
> As part of a rebalance, the replicator, I would assume, copies the object
> from the old partition to the new partition, and then deletes it from the
> old partition. Is that a fair assumption?
>
Hi Mark,
On Tue, May 21, 2013 at 6:46 PM, Mark Brown wrote:
> Hello,
> I had a few more basic Swift questions..
>
> 1. In Swift, when a rebalance is happening, does the client have write
> access to the object? Does Swift have a mechanism to lock down one copy
> which it is moving, and allow upda
The important lines are the **FINAL** lines (the others just print
status): you did 1000 PUTs at 9.1 PUTs per second average, 52.9 per
second for GET, and 7.6 for DEL.
--
Chuck
On Tue, Apr 16, 2013 at 3:28 PM, Sujay M wrote:
> Hi all,
>
> Can you please let me know how one can inte
brought to the TC I will continue to support these
ideals. I deeply care for Openstack and its future success, so please
consider me for this position.
Thanks,
--
Chuck Thier
@creiht
___
Mailing list: https://launchpad.net/~openstack
Post to :
Hi Giuseppe,
The first thing you can do is use the swift-get-nodes utility to find
out where those objects would normally be located. In your case it
will look something like:
swift-get-nodes /etc/swift/object.ring.gz AUTH_ACCOUNTHASH images
8ab06434-5152-4563-b122-f293fd9af465
Of course substi
Howdy,
The scripts are generated when setup.py is run (either as `setup.py
install` or `setup.py develop`
--
Chuck
On Mon, Feb 11, 2013 at 11:02 AM, Kun Huang wrote:
> Hi, swift developers
>
> I found the script /usr/local/bin/swift is:
>
> #!/usr/bin/python -u
> # EASY-INSTALL-DEV-SCRIPT: 'py
Hi John,
It would be difficult to recommend a specific drive, because things
change so often. New drives are being introduced all the time.
Manufacturers buy their competition and cancel their awesome products.
So the short answer is that you really need to test the drives out in
your environme
> [container-updater]
> concurrency = 8
>
> [container-auditor]
>
> #4 We dont use SSL for swift so, no latency over there.
>
> Hope you guys can shed some light.
>
>
> *Alejandro Comisario
> #melicloud CloudBuilders*
> Arias 3751, Pi
any logs. I'm not sure what
> to do :S
>
>
> On Mon, Jan 14, 2013 at 6:50 PM, Chuck Thier wrote:
>>
>> You would have to look at the proxy log to see if a request is being
>> made. The results from the swift command line are just the calls that
>> the clien
>>> token cache? If so I've already added the configuration line and have not
>>> noticed any speedup :/
>>>
>>>
>>>
>>>
>>> On Mon, Jan 14, 2013 at 5:19 PM, Leander Bessa Beernaert
>>> wrote:
>>>>
>>>
On Mon, Jan 14, 2013 at 11:03 AM, Leander Bessa Beernaert
wrote:
> Also, I'm unable to run the swift-bench with keystone.
>
Hrm... That was supposed to be fixed with this bug:
https://bugs.launchpad.net/swift/+bug/1011727
My keystone dev instance isn't working at the moment, but I'll see if
I ca
On Mon, Jan 14, 2013 at 11:01 AM, Leander Bessa Beernaert
wrote:
> I currently have 4 machines running 10 clients each uploading 1/40th of the
> data. More than 40 simultaneous clients starts to severely affect
> Keystone's ability to handle these operations.
You might also double check that you
you recommend
> another approach?
>
>
> On Mon, Jan 14, 2013 at 4:43 PM, Chuck Thier wrote:
>>
>> Using swift stat probably isn't the best way to determine cluster
>> performance, as those stats are updated async, and could be delayed
>> quite a bit as
d some
> calculation based on those values to get to the end result.
>
> Currently I'm resetting swift with a node size of 64, since 90% of the files
> are less than 70KB in size. I think that might help.
>
>
> On Mon, Jan 14, 2013 at 4:34 PM, Chuck Thier wrote:
>>
>
Hey Leander,
Can you post what performance you are getting? If they are all
sharing the same GigE network, you might also check that the links
aren't being saturated, as it is pretty easy to saturate pushing 200k
files around.
--
Chuck
On Mon, Jan 14, 2013 at 10:15 AM, Leander Bessa Beernaert
Hi Leander,
The following assumes that the cluster isn't in production yet:
1. Stop all services on all machines
2. Format and remount all storage devices
3. Re-create rings with the correct partition size
4. Push new rings out to all servers
5. Start services back up and test.
--
Chuck
On
oblem, other than from the
> datanodes.
>
> Maybe worth pasting our config over here?
> Thanks in advance.
>
> alejandro
>
> On 12 Jan 2013 02:01, "Chuck Thier" wrote:
>>
>> Looking at this from a different perspective. Having 2500 partitions
>>
Looking at this from a different perspective. Having 2500 partitions
per drive shouldn't be an absolutely horrible thing either. Do you
know how many objects you have per partition? What types of problems
are you seeing?
--
Chuck
On Fri, Jan 11, 2013 at 3:28 PM, John Dickinson wrote:
> If eff
and leave replication and
> distribution to higher level of Swift.
>
>
>
> Chuck Thier wrote on 2012-12-20 at 12:35 AM:
There are a couple of things to think about when using RAID (or more
specifically parity RAID) with swift.
The first has already been identified in that the workload for swift
is very write heavy with small random IO, which is very bad for most
parity RAID. In our testing, under heavy workloads,
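The small-write penalty for parity RAID can be put into rough numbers (illustrative figures; real controllers and caches change the picture):

```python
# Illustrative small-write penalty for RAID 5: each small random write
# costs read-old-data + read-old-parity + write-data + write-parity,
# i.e. roughly 4 disk IOs per logical write.
def raid5_write_iops(disks, iops_per_disk, penalty=4):
    return disks * iops_per_disk // penalty

print(raid5_write_iops(disks=8, iops_per_disk=150))  # 300 IOPS
```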
The metadata for objects is stored at the object level, not in the
container dbs. Reporting metadata information for container listings
would require the server to HEAD every object in the container, which
would cause too much work on the backend.
--
Chuck
On Wed, Dec 12, 2012 at 7:01 AM, Morten
Top posting to give some general history and some thoughts.
Snapshots, as implemented currently in cinder, are derived from the
EBS definition of a snapshot. This is more of a consistent block
level backup of a volume. New volumes can be created from any given
snapshot. This is *not* usually wh
Hi Javier,
On Tue, Nov 6, 2012 at 5:07 AM, Javier Fontan wrote:
> Hello,
>
> We recently had interest from some of our enterprise users to use
> Swift Object Store as the backend for the VM images. I have been
> researching on a possible integration with OpenNebula but I have some
> questions.
>
Hey Vish,
First, thanks for bringing this up for discussion. Coincidentally a
similar discussion had come up with our teams, but I had pushed it
aside at the time due to time constraints. It is a tricky problem to
solve generally for all hypervisors. See my comments inline:
On Mon, Aug 13, 201
We currently have a large deployment that is based on nova-volume as it is
in trunk today, and just ripping it out will be quite painful. For us,
option #2 is the only suitable option.
We need a smooth migration path, and time to successfully migrate to Cinder.
Since there is no clear migration pa
On Wed, Jun 20, 2012 at 12:16 PM, Jay Pipes wrote:
> On 06/20/2012 11:52 AM, Lars Kellogg-Stedman wrote:
>>>
>>> A strategy we are making in Nova (WIP) is to allow instance
>>> termination no matter what. Perhaps a similar strategy could be
>>> adopted for volumes too? Thanks,
>>
>>
>> The 'nova-m
Hey Chmouel,
The first easy step would be to by default not start the aux services
(like replication). And if someone wants to test those, they can run
them manually (similarly to how we do dev with the saio).
--
Chuck
On Mon, May 14, 2012 at 10:17 AM, Chmouel Boudjnah wrote:
> Hello,
>
> devs
Hi Sally,
I don't know if we have the code for the original rings, but gholt has
a good series of blog posts that hits on several of the different
stages we went through when designing the ring in swift:
http://www.tlohg.com/p/building-consistent-hashing-ring.html
--
Chuck
2012/4/19 Sally Cong
Some general notes for consistency and swift (all of the below assumes
3 replicas):
Objects:
When swift PUTs an object, it attempts to write to all 3 replicas
and only returns success if 2 or more replicas were written
successfully. When a new object is created, it has a fairly strong
consiste
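The write quorum described above boils down to a majority calculation (a sketch of the rule, not Swift's actual code):

```python
# Sketch: a PUT succeeds once a majority of replicas are written.
def quorum(replicas):
    return replicas // 2 + 1

print(quorum(3))  # 2: with 3 replicas, 2 writes must succeed
```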
Hi Vladimir,
I agree that we need a volume-type aware scheduler, and thanks for
taking this on. I had envisioned it a bit different though. I was
thinking that the cluster operator would define the volume types
(similar to how they define vm flavors). Each type would have a
mapping to a driver,
Hi Fabrice,
The design of Swift has always assumed that the backend services are
running on a secured, private network. If this is not going to be the
case, or you would like to provide more security on that network, a
lot more work needs to be done than just rsync. That said, I don't
think it
Hi Mark,
I just wanted to clarify the reasoning for why we use POST for metadata
modification in Swift. In general I totally agree that PUT/POST
should be used for creation (PUT when you know the identification of
the representation, POST when you do not). And PUT should be used
when modifying th
Howdy,
In general Nginx is really good, and we like it a lot, but it has one
design flaw that causes it to not work well with swift. Nginx spools
all requests, so if you are getting a lot of large (say 5GB) uploads, it
can be problematic. In our testing a while ago, Pound proved to have
the best SS
Hi,
Each container server sqlite db is replicated to 3 of your container
nodes. Container replication (which operates a bit differently than
object replication) ensures that they stay in sync. The container
nodes can be run either on the same nodes as your storage nodes, or on
separate nodes. T
Taking a bit of a step back, it seems to me that the biggest thing
that prevents us from using a pure github workflow is the absolute
requirement of a "gated" trunk. Perhaps a better question to ask
weather or not this should be an absolute requirement. For me, it is
a nice to have, but shouldn't
Hi Caitlin,
Right now the best source of what S3 features are available through
the S3 compatibility layer are here:
http://swift.openstack.org/misc.html#module-swift.common.middleware.swift3
--
Chuck
On Fri, Sep 2, 2011 at 2:59 PM, Caitlin Bestler
wrote:
> Joshua Harlow asked:
>
> < Is there
I would like to see one-way CHAP support added to Nova Volume. Not a
whole lot more to add, but would be interested in any feedback.
Blueprint: https://blueprints.launchpad.net/nova/+spec/isci-chap
Spec: http://etherpad.openstack.org/iscsi-chap
--
Chuck
;
> p.s - typing on a real keyboard is so much easier than an iPad, and leads to
> much better grammar...
> On Thu, Jul 21, 2011 at 12:19 PM, Chuck Thier wrote:
>>
>> Hey Andi,
>>
>> Perhaps it would be better to re-frame the question.
>>
>> What should t
is rewinding to so a historic state some time in the future.
>
> That said, with the prereqs met, both can probably be used to mount a new
> volume.
> Reasonable?
>
> On Jul 20, 2011, at 5:27 PM, Chuck Thier wrote:
>
>> Yeah, I think you are illustrating how this generate
>
> It seems like backup and snapshot are kind of interchangable. This is quite
> confusing, perhaps we should refer to them as:
>
> partial-snapshot
>
> whole-snapshot
>
> or something along those lines that conveys that one is a differencing image
> and one is a
At the last developers summit, it was noted by many, that the idea of
a volume snaphsot in the cloud is highly overloaded. EBS uses the
notion of snapshots for making point in time backups of a volume that
can be used to create a new volume from. These are not true snapshots
though from a storage
,
> protection policies, availability, is it cached, cloned, deduplicated,
> compressed, encrypted, et cetera? Nova volume needs to support some notion
> of those characteristics as well.
>
> Thanks,
> Rob Esker
>
>
> On Jul 18, 2011, at 6:05 PM, Chuck Thier wrote:
>
this code may differ from vendor to
> vendor.
>
> Regards,
> -Vladimir
>
>
> -Original Message-
> From: openstack-bounces+vladimir=zadarastorage@lists.launchpad.net
> [mailto:openstack-bounces+vladimir=zadarastorage@lists.launchpad.net]
> On Behalf O
There are two concepts that I would like Nova Volumes to support:
1. Allow different storage classes within a storage driver. For
example, in our case we will have some nodes with high iops
capabilities and other nodes for cheaper/larger volumes.
2. Allow for different storage backends to be
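A sketch of the operator-defined type-to-driver mapping idea (all names below are hypothetical, not an actual Nova Volume API):

```python
# Sketch: operator-defined volume types mapped to storage backends,
# the way vm flavors map operator choices to instance sizes.
VOLUME_TYPES = {
    "high-iops": {"driver": "fast_ssd_driver", "iops": 10_000},
    "bulk":      {"driver": "big_sata_driver", "iops": 500},
}

def pick_backend(volume_type):
    return VOLUME_TYPES[volume_type]["driver"]

print(pick_backend("bulk"))  # big_sata_driver
```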
xtensions, and that still makes sense, but will there be a separate,
> independent block service and API?
>
> Erik
>
> From: Chuck Thier
> Date: Fri, 8 Jul 2011 13:15:56 -0500
> To: Jorge Williams
> Cc: ""
> Subject: Re: [Openstack] Refocusing the Lunr Proje
> What does this mean in terms of APIs? Will there be a separate Volume
> API? Will volumes be embedded in the compute API?
>
> -jOrGe W.
>
>
> On Jul 8, 2011, at 10:40 AM, Chuck Thier wrote:
>
> Openstack Community,
>
> Through the last few months the Lunr team
volume service.
I believe that this new direction will ensure a bright future for storage
in Nova, and look forward to continuing to work with everyone in making this
possible.
Sincerely,
Chuck Thier (@creiht)
Lunr Team Lead
ese pointers (and maybe other results from the design summit,
> which alas I missed) ?
>
> a.
>
>
> On Tue, Jun 14, 2011 at 2:05 PM, Chuck Thier wrote:
>
>> Hi Andi,
>>
>> There was the initial blue print at:
>> https://blueprints.launchpad.net/nova/+
pointers to
> whatever material is out there?
>
> thx
>
>
>
> On Tue, May 31, 2011 at 7:16 PM, Chuck Thier wrote:
>
>> Howdy Stackers,
>>
>> It has been a while, so I thought I would send out an update.
>>
>> We are still in the process of doin
It saddens me that this is what OpenStack is becoming. There is no reason
that the swift team couldn't right now just fork to github, and leave the
pieces for you to figure out. Instead, they are trying to do the right
thing, work within the system and get things done in a way that will work
for
> vote *separately* on the API and the project incubation status. I was
> saying that the two don't have to be done at the same time...
>
> Apologies for any confusion...
>
> -jay
>
> On Wed, Jun 1, 2011 at 2:01 PM, Chuck Thier wrote:
> > While, I'm not on the bo
While I'm not on the board any more, I would just like to chime in a bit:
The proposal for API for block storage was presented both at the design
summit, and on the mailing list afterwards. All I received from that was
good feedback, and enough to continue our effort. But, before we try to
offe
Howdy Stackers,
It has been a while, so I thought I would send out an update.
We are still in the process of doing initial R&D work, and hope to have some
code available for people to poke at and comment on in the next few weeks.
This will include a first rough cut of our proposed volume API, dr
Hi Thomas,
The swift-init thing is just a useful tool that we use to manage the
services in dev, and while at one time we had init scripts, our ops guys
just started using the swift-init tool out of convenience.
That said, it should be easy to create other init scripts. The format for
starting a
Hey Soren,
We've asked similar questions before :)
Ever since the packaging was pulled out of the source tree, we have been
mostly out of the packaging loop. Since then most of the packaging details
have been handled by monty and you.
We build our own packages for production, so we have mostly
We have no current plans to make an iSCSI target for swift. Not only would
there be performance issues, but also consistency issues among other things.
For Lunr, swift will only be a target for backups from block devices.
I think some of this confusion stems from the confusion around snapshots,
On Mon, May 2, 2011 at 2:45 PM, Eric Windisch wrote:
>
> On May 2, 2011, at 12:50 PM, FUJITA Tomonori wrote:
>
> > Hello,
> >
> > Chuck told me at the conference that lunr team are still working on
> > the reference iSCSI target driver design and a possible design might
> > exploit device mapper
> endpoints?
>
> In other words am i creating a volume with a PUT /
> provider.com/high-perf-volumes/account/volumes/
> or just a /provider.com/account/volumes/ and a X-High-Perf header ?
>
> Vish
>
> On Apr 22, 2011, at 2:40 PM, Chuck Thier wrote:
>
> > One of the fir
service. It is also undecided
if
this should be a publicly available api, or just used by backend
services.
The exports endpoint is the biggest change that we are proposing, so we
would
like to solicit feedback on this idea.
--
Chuck Thier (@creiht
to the Openstack community?
Are we able to discuss this at next week's design summit?
We are working on getting the following done before the design summit
next week:
* Choose a project name - DONE (Lunr)
* Identify a project lead - DONE (Chuck Thier @creiht)
* Set up a pr
ion of the project, answer initial questions, and have
initial review
of a draft API prior to the design summit.
--
Chuck Thier (@creiht)
Hi Jon,
I'm not familiar with the C# bindings, but the fact that you can do
some operations sounds promising. A 503 return code from the server means
that something wrong happened server side, so you might check the server
logs to see if they provide any useful information. Another useful test
w
rking installation would be to follow the
all in one instructions (http://swift.openstack.org/development_saio.html).
This will allow you to also run the suite of functional tests.
--
Chuck
On Tue, Apr 5, 2011 at 4:18 AM, Thomas Goirand wrote:
> On 04/05/2011 05:27 AM, Chuck Thier wrote:
&g
> > I also worked on swift. Can you have a look? I'm not so sure what I did
> > is fully correct yet, because I didn't succeed in running everything
> > fully. It seems that swift doesn't like using device-mapper as
> > partitions, is that correct? Which leads me to reinstall my test server
> > fro
>
>
> It would be a problem if you start to make upgrades hard. If someone
> wanted to upgrade once a year, then a monthly release cycle means that they
> will have to upgrade from version N to N+12. Do you think that would work?
> Is the Swift QA process good enough that it checks all upgrades
>
>
> Thanks,
>
>
>
> Ewan.
>
>
>
> *From:*
> openstack-poc-bounces+ewan.mellor=citrix@lists.launchpad.net[mailto:
> openstack-poc-bounces+ewan.mellor=citrix....@lists.launchpad.net] *On
> Behalf Of *Chuck Thier
> *Sent:* 24 March 2011 17:26
> *T
On Thu, Mar 24, 2011 at 1:35 PM, Jesse Andrews wrote:
>
> On Mar 24, 2011, at 10:26 AM, Chuck Thier wrote:
>
> Since we are not having the meeting this week, I would like to bring up a
> couple of things for discussion before the next meeting.
>
> 1. We have a need at