There are a few things you can look at.
First, the slowdowns you are seeing may be due to the increased number of
files on disk. This causes the background processes to do more work, which can
slow down the server processes.
Second (and perhaps more likely), how is your data ar
The auth system and how you organize the data are separate. You can certainly
store all objects in one account (although I'd recommend you spread the objects
across many containers). You could also not use any auth at all (by removing
tempauth or keystone from the pipeline in the proxy server co
It depends on your cluster, but there are good reasons for doing so. Since
Swift's background processes are constantly scanning the drives to check for
errors, processes like the object-auditor and object-replicator can clear out
the system's page cache simply by scanning the local filesystem's
This seems to be a popular question this morning... Take a look at
http://lists.openstack.org/pipermail/openstack/2013-August/000849.html and
https://answers.launchpad.net/swift/+question/234691 as well.
The page cache usage means that the system has to go to disk to get the data
when it is req
swift-init looks for a "--once" argument. If it's present, then the once kwarg
will be set in the call to swift.common.daemon.run_daemon(). This in turn calls
the appropriate run_once or run_forever.
https://github.com/openstack/swift/blob/master/swift/common/daemon.py#L55
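In condensed form, the dispatch looks roughly like this (a sketch of the
behavior described above, not the actual source):

    # Sketch of the Daemon.run() dispatch in swift.common.daemon
    class Daemon(object):
        def run(self, once=False, **kwargs):
            if once:
                self.run_once(**kwargs)     # one pass, then exit
            else:
                self.run_forever(**kwargs)  # loop until killed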
--John
On Oct 7, 2
> On Nov 27, 2014, at 1:31 AM, Chris wrote:
>
> Hello
>
> I have a question regarding regions. We have the initial “RegionOne” in our
> current setup and want to add a second one, “RegionTwo”.
> It seems like the shared services between different regions are Keystone and
> Horizon. The separat
Great questions, but there isn't a short answer. Let me do my best to give you
a concise answer.
> On Dec 10, 2014, at 11:00 PM, dhanesh1212121212 wrote:
>
> Hi All,
>
> Below details are mentioned in the OpenStack document to configure Object Storage.
>
> For simplicity, this guide uses one regi
Sounds like you're looking for a global cluster. You don't need multiple rings
for this. Swift can support this. When you add a new device to a ring, you add
it in a different region, and Swift takes care of it for you.
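For example, here's a minimal Python sketch using the ring builder API (the
builder path, address, and weight are hypothetical; the swift-ring-builder
CLI does the same thing):

    from swift.common.ring import RingBuilder

    builder = RingBuilder.load('/etc/swift/object.builder')
    # adding the device in region 2 is all it takes; placement follows
    builder.add_dev({'region': 2, 'zone': 1, 'ip': '192.168.2.10',
                     'port': 6000, 'device': 'sdb1', 'weight': 100.0})
    builder.rebalance()
    builder.save('/etc/swift/object.builder')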
Here's some more information:
http://docs.openstack.org/developer/swift/adm
ll three have -33.33 (container, object, account) for their balance. Is
> this normal or did I do something incorrect? It doesn't seem to be replicating
> the data to the new nodes (or at least it looks like it stopped?) but I am
> not sure. Would appreciate any insight. Thanks!
>
one and one copy in region 2.
>
> On Tue, Dec 16, 2014 at 12:35 PM, John Dickinson wrote:
> That's normal. See the "...or none can be due to min_part_hours". Swift is
> refusing to move more data until the stuff likely currently in flight has
> settled. See https:
e version of Swiftstack available per chance do
> you :-)
Yes, we do. https://swiftstack.com/customer/signup/
>
>
>
>
> On Tue, Dec 16, 2014 at 12:48 PM, John Dickinson wrote:
> Assuming your regions are pretty close to the same size, that's exactly what
> you'll get
Michael,
In Swift, I've created https://wiki.openstack.org/wiki/Swift/ideas to document
some ideas that are relatively low-impact, isolated from major ongoing work,
and still pretty nice to have. And although I haven't created it yet, I'm working
on some other tools to have a much better, holistic
There's a ton of info available at https://swiftstack.com/openstack-swift/.
Specifically, take a look at
https://swiftstack.com/openstack-swift/architecture/ for how Swift solves some
of these issues.
You might also find the info at http://swift.openstack.org useful, especially
http://docs.ope
ssages
(swift-drive-audit), detecting full or failed drives with mount checks and
fallocate calls, and checksum calculations for detecting bit rot.
--John
> On Jan 30, 2015, at 1:21 PM, John Dickinson wrote:
>
> There's a ton of info available at https://swiftstack.com/opens
You can test Swift functionality with Swift's included functional tests.
In the Swift source tree, you can run the .functests script
(https://github.com/openstack/swift/blob/master/.functests). This will look for
/etc/swift/test.conf (sample at
https://github.com/openstack/swift/blob/master/tes
Sounds like what you're looking for are Swift's probe tests. These take a
cluster of a known configuration and test that the cluster components work
together as expected. eg PUT some data, delete one replica, run replication,
and ensure that the missing replica is restored.
The auditors and rep
Well, it's complicated ;-)
Start with http://docs.openstack.org/developer/swift/overview_ring.html
Think about your scenario. If you have differently sized failure domains, then
there are 2 options: limit capacity to the smallest failure domain or have some
capacity overweighted (ie more than
Swift does not use RC4
--John
> On Apr 22, 2015, at 11:41 PM, Mohammed, Allauddin
> wrote:
>
> Hi All,
> Is OpenStack Swift impacted by CVE-2015-2808 (RC4 vulnerability)?
>
> https://access.redhat.com/security/cve/CVE-2015-2808
>
> Any ideas?
>
> Regards,
> Allauddin
>
> On Apr 24, 2015, at 12:42 AM, Shrinand Javadekar
> wrote:
>
> Hi,
>
> I observe that while placing data, the object server creates a
> directory structure:
>
> /srv/node/r0/objects/<partition>/<3 byte hash suffix>/<obj hash>/<timestamp>.data.
>
> Is there a reason for the directory to be created? Couldn't
> this just ha
Great, thanks. Sounds like a pretty interesting performance improvement.
--John
> On Apr 30, 2015, at 11:27 AM, Shrinand Javadekar
> wrote:
>
> I was able to make the code change to create the tmp directory in the
> 3-byte hash directory and fix the unit tests to get this to work. I
> will fi
Yes. Write operations are atomically written to disk. Simultaneous reads and
writes will not interfere with one another.
--John
> On Jun 7, 2015, at 5:56 AM, Binan AL Halabi wrote:
>
> Hi all
>
> Does Openstack object store provide atomic operations ?
> If multiple parties update an objec
The python-dnspython package is only used in Swift for the cname_lookup
middleware. If you are not using that middleware, you do not need
python-dnspython.
--john
> On Jun 30, 2015, at 9:54 PM, Mark Kirkwood
> wrote:
>
> On 01/07/15 13:10, Mark Kirkwood wrote:
>> I ran into this a f
No. There are no plans for Swift to implement a file system.
--John
> On Jul 16, 2015, at 10:44 AM, Brent Troge wrote:
>
> Does the swift project plan to have a file system kernel module in the same
> manner as ceph's file system?
>
>
>
>
m/wp-content/uploads/2015/02/20140710_Datasheet_Filesystem_Gateway.pdf
> ?
>
> I never tried it myself...
>
> On 16 July 2015 at 15:09, John Dickinson wrote:
> No. There are no plans for Swift to implement a file system.
>
> --John
>
>
>
> > On Jul 16, 2015, at 10:44 AM, Brent Trog
You can also use the swiftclient.service module. It is a wrapper on top of the
low-level swiftclient.client and is what the CLI tool actually uses for some
of its higher-order functions (like splitting large local files and making
manifest objects).
You'd have something like:
results = swiftcl
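A hedged sketch of how that might look in full (the exact options here are
assumptions; check the swiftclient.service docs):

    from swiftclient.service import SwiftService

    with SwiftService() as swift:
        # upload() splits large local files into segments and creates the
        # manifest object when segment_size is set
        for result in swift.upload('mycontainer', ['bigfile.bin'],
                                   options={'segment_size': 100 * 1024 * 1024}):
            if not result['success']:
                print(result.get('error'))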
I'm the Project Technical Lead for Swift, and I'd be happy to look over a
summary of your work about monitoring Swift. Feel free to email me directly or
find me in #openstack-swift on IRC (I'm notmyname).
--John
On 8 Sep 2015, at 3:20, pragya jain wrote:
> Hello all
> Me and my colleague,
I've heard of a few ways:
1) use existing puppet or chef or fabric or salt or dsh scripts to treat them
like normal config files
2) use rsync to sync the ring files every so often (eg via cron)
3) host a web server on the admin box where you made the scripts and wget or
curl them in the swift
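As a minimal sketch of option 3 (the admin host and paths are hypothetical):

    import urllib.request

    # fetch the current rings from the box where the builders live
    for ring in ('account.ring.gz', 'container.ring.gz', 'object.ring.gz'):
        url = 'http://admin.example.com/rings/' + ring
        urllib.request.urlretrieve(url, '/etc/swift/' + ring)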
Swift has a modular design, and this allows you the flexibility to deploy to
match your needs.
Common deployment patterns are (1) proxy, account, container, object all
running on the same hardware (2) proxy on one SKU and account+container+object
on another SKU (3) proxy on one SKU, account+con
While a Swift-All-In-One certainly isn't something you should run in
production, the SAIO document does have some guidance on how to configure
rsyslogd to split out log messages.
http://docs.openstack.org/developer/swift/development_saio.html#optional-setting-up-rsyslog-for-individual-logging
The guys over at Rackspace put together a nice little ring part power
calculator.
http://rackerlabs.github.io/swift-ppc/
--John
On Dec 4, 2013, at 12:17 PM, Stephen Wood wrote:
> I'm wondering if I have the partition number correct for the best performance
> for our setup (we have 23 hosts
Correct. A single container is replicated like other data in the system
(typically 3x). This means that a single container is on only 3 spindles, and
any number of concurrent writes to objects in that container will attempt to
update the container listing (with graceful failure handling). This me
What version of Swift are you using? You can find it with `python -c 'import
swift; print swift.__version__'`.
If you have a version later than 1.10.0 (ie the Havana release), you may be
running into https://bugs.launchpad.net/swift/+bug/1257330 (for which we have a
patch in progress at https:/
Yes, you can absolutely do this.
As mentioned in another reply, Swift's API is built on HTTP, so it works
natively with web browsers.
There are a couple of ways to read data stored in Swift via a web browser.
1) For public content:
If you've marked a container as public (ie X-Container-Read set
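For example, a hedged sketch of marking a container public with
python-swiftclient (the auth details are hypothetical):

    from swiftclient.client import Connection

    conn = Connection(authurl='http://example.com/auth/v1.0',
                      user='test:tester', key='testing')
    # '.r:*' allows anyone to GET objects in the container
    conn.post_container('public-stuff', headers={'X-Container-Read': '.r:*'})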
I don't have any personal experience with ownCloud, but I've heard of others
using it. I will absolutely vouch for Swift as a good solution for file
sharing, especially as the aggregate size of your content grows.
--John
On Dec 16, 2013, at 9:14 PM, Frans Thamura wrote:
> Hi all
>
>
> I
You should try both and choose the one that best meets your needs. Chmouel at
eNovance wrote a great summary of the two systems at
http://techs.enovance.com/6427/ceph-and-swift-why-we-are-not-fighting.
--John
On Dec 23, 2013, at 9:24 AM, Cedric Lemarchand wrote:
> Hello,
>
> I am evaluatin
I think you'll find that object storage is new enough in the storage world
that there isn't an international standard (or at least not one that is widely
used).
I have seen three things, primarily.
1) Amazon S3: huge because the AWS ecosystem is huge.
2) CDMI: a standard promoted by SNIA
You are asking for practical examples, so here are some links to get started:
https://blog.wikimedia.org/2012/02/09/scaling-media-storage-at-wikimedia-with-swift/
http://www.openstack.org/summit/openstack-summit-hong-kong-2013/session-videos/presentation/case-study-concur-delivering-a-saas-applic
In the output you pasted, you don't have any successful response. I'd suggest
looking at the tempauth stanza in the proxy server conf to make sure the users
are set up correctly.
--John
On Feb 7, 2014, at 4:55 PM, Adam Lawson wrote:
> To help with troubleshooting, here is what I've executed
Moving your container servers to SSDs is the fastest way to get better
performance out of them.
Although there have been many discussions about how to do internal container
sharding, I'm not aware of any current ongoing work there (although I'd love to
see someone work on it).
--John
On Fe
Look on your existing drives in your cluster and see how much space the
"account" and "container" directories are using (eg `du -sh
/srv/node/*/{accounts,containers}`). Sum that across all the servers in your
cluster, and you've got it.
As a general rule of thumb (ie HUGE ASSUMPTIONS MADE HERE)
Swift absolutely and without a doubt participates in OpenStack releases.
OpenStack releases on a six-month cadence, and Swift participates in this
integrated release (as do all other OpenStack projects).
In between the OpenStack integrated releases, Swift differs from some other
OpenStack proje
There are 2 reasons you don't want to run Swift on VMs:
1) (minor reason, depending on the use case) performance. Virtualized IO is
going to have worse performance for storage than bare metal. In some cases this
may not matter, and in others you may use Super Awesome Virt (tm) that doesn't
have
Yep, you're right. Doing a HEAD request before every PUT gets expensive,
especially for small files.
But don't despair! There's some good news.
First, realize that swiftclient is written for a pretty general use case. If
you have more knowledge about how your system works, then you can write
s
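For example, a hedged sketch that PUTs unconditionally with the low-level
client (connection details are hypothetical):

    from swiftclient.client import Connection

    conn = Connection(authurl='http://example.com/auth/v1.0',
                      user='test:tester', key='testing')
    # Swift overwrites objects atomically, so if your application doesn't
    # need to check for an existing object first, just PUT -- no HEAD needed
    with open('local.dat', 'rb') as f:
        conn.put_object('mycontainer', 'remote.dat', contents=f)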
That's not supported today, but it's one of the exact use cases being
implemented with storage policies.
https://swiftstack.com/blog/2014/01/27/openstack-swift-storage-policies/
http://lists.openstack.org/pipermail/openstack-dev/2014-March/030937.html
--John
On Apr 21, 2014, at 1:20 PM, Adam
Alejandro,
We blogged about this recently at
https://swiftstack.com/blog/2014/04/24/about-those-public-cloud-price-drops/.
I've asked around the office here at SwiftStack to see if we can put together
some more public cost details (specifically in response to your question), so I
hope we can sh
On Apr 30, 2014, at 11:59 AM, Dimitri Maziuk wrote:
> I'd say an interesting question is how many users want storage that only
> lets you put, get, and delete a file. A private cloud storage is
> trivially re-exportable as a filesystem, how easy is that with
> commercial offerings?
1) "storage t
Off the top of my head, I don't know any middleware that supports this. If you
do find or write one, I'd love for you to add it to
http://docs.openstack.org/developer/swift/associated_projects.html.
I've long thought that WebDAV support would be cool[1]. Not that I've seen a
lot of people deman
That's fantastic news! Hearing about more companies and applications
supporting Swift is great for the community, and it gives us as contributors
yet another use
case to reference.
I work with Hugo, so I'll peek over his shoulder to check it out. Meanwhile, if
there's anything I can do to help answer
As Pete mentioned, Swift can use a lot of sockets and fds when the system is
under load. Take a look at
http://docs.openstack.org/developer/swift/deployment_guide.html#general-system-tuning
for some sysctl settings that can help. Also, note that if you start Swift as
root (it can drop per
Semantic versioning is bigger than just API changes. In fact, since we
separately version the API, we can actually be quite explicit about what's in a
particular release. In general, we do semantic versioning in Swift for a few
reasons, but the main one is that it's been requested and confirmed
Adam, that sounds like you're talking about the API versions. So a user is consuming
eg Keystone v2, Nova v3, and Swift v1 (not to mention all the other projects).
Are you asking for a singular "OpenStack API"? Or maybe a common API version
number across projects (similar to how the requirements are
> moment, but that dialog just happens over and over again.
>
>
> Adam Lawson
> AQORN, Inc.
> 427 North Tatnall Street
> Ste. 58461
> Wilmington, Delaware 19801-2230
> Toll-free: (844) 4-AQORN-NOW ext. 101
> International: +1 302-387-4660
> Direct: +1 916-246-2072
On Jul 30, 2014, at 10:57 AM, Samuel Merritt wrote:
> On 7/30/14, 10:18 AM, Shrinand Javadekar wrote:
>> Hi,
>>
>> Swift v1 allowed for geo replication using read and write affinity
>> rules. Now, Swift v2 allows setting storage policies (which can affect
>> replication) per container. I wanted
You are describing one of the ways that Swift does eventual consistency. In the
scenario you describe, it is indeed possible to get the older version of the
object on a read. There is no whole-cluster invalidation of an object. Swift's
behavior here gives you high availability of your data even
I honestly have no idea how this works (the actual logistics), but speaking
from the Swift perspective within OpenStack, I'd love to help out in any way I
can.
--John
On Aug 8, 2014, at 7:41 PM, Mahati C wrote:
> Hi,
>
> I'm interested in participating in OPW (Outreach Program for Women
inline
On Aug 11, 2014, at 10:38 AM, Brent Troge wrote:
>
> By default the maximum object size is 5G. Outside of increased replication
> times, would there be any impacts if I increase that value to 10G? The reason
> is I have to store upto 10G media files. Using the large file manifest just
Mahati,
I chatted with Anita (anteaya) in IRC this afternoon about the OPW. I'd be
happy to continue working with you and either be a mentor for Swift in
OpenStack or help find one.
--John
On Aug 8, 2014, at 8:13 PM, John Dickinson wrote:
> I honestly have no idea how this wo
1) Swift doesn't currently support erasure codes. It's something we're
actively working on now.
2) It depends. Swift will allow you to choose your EC algorithm and the
parameters for it, so that will be what determines how much raw storage space
is needed for a given amount of usable storag
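As a back-of-the-envelope sketch of that math (the scheme and numbers are
hypothetical):

    # with k data fragments and m parity fragments,
    # raw bytes = usable bytes * (k + m) / k
    k, m = 10, 4                       # e.g. a 10+4 erasure coding scheme
    usable_tb = 100.0                  # usable capacity you want to offer
    raw_tb = usable_tb * (k + m) / k   # 140 TB of raw disk for 100 TB usable
    print(raw_tb)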
If the IP for a storage node changes, you'll need to update the rings where
that server's drives are. You can update the IP with the `swift-ring-builder
set_info ...` command and then use "write_ring" to serialize it. Doing this
will not cause any data movement in the cluster. Removing the serve
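A hedged Python sketch of those two steps (paths and addresses are
hypothetical; the swift-ring-builder CLI is the usual way to do this):

    from swift.common.ring import RingBuilder

    builder = RingBuilder.load('/etc/swift/object.builder')
    for dev in builder.devs:
        if dev and dev['ip'] == '10.0.0.5':  # the node whose IP changed
            dev['ip'] = '10.0.1.5'           # info-only change; no rebalance
    builder.save('/etc/swift/object.builder')
    # the write_ring step: serialize the ring so it can be pushed to nodes
    builder.get_ring().save('/etc/swift/object.ring.gz')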
(assuming their
> addresses didn't change) and then back again after the rings reflect the new
> addresses of the primary(ies).
>
> -Paul
>
> -Original Message-
> From: John Dickinson [mailto:m...@not.mn]
> Sent: Monday, August 18, 2014 11:54 AM
> To:
https://swiftstack.com/blog/2012/11/21/how-the-ring-works-in-openstack-swift/
is something that should give you a pretty complete overview of how
the ring works in Swift and how data placement works.
Let me know if you have more questions after you watch that video.
--John
On A
ed very
> much.
>
> So with respect to my example numbers, I am guessing that each partition will
> land on every '41538374868278621028243970633760768' of the md5 space.
>
> 2^(128 - 13)
>
> or
>
> 2^(128)/8192
>
> Thanks!
>
>
>
> O
You've actually identified the issues involved. Here's a writeup on how you can
do it, and the general best-practice for capacity management in Swift:
https://swiftstack.com/blog/2012/04/09/swift-capacity-management/
--John
On Aug 22, 2014, at 11:50 AM, Lillie Ross-CDSR11
wrote:
> All,
>
Just to clarify the first answer. Everything Paul said is correct: with 3
replicas, Swift requires that 2 (a quorum) are successful before a success can
be returned to the client. But, assuming there isn't any current issue in the
cluster (ie it's healthy), then all replicas will be written befo
Yup, this is a totally possible failure scenario, and Swift will merge the data
(using last-write-wins for overwrites) automatically when the partition is
restored. But you'll still have full durability on writes, even with a
partitioned global cluster.
--John
On Aug 27, 2014, at 10:49 AM, M
my understanding, but I'm not an expert on Keystone options or
deployment patterns.
>
> TIA,
>
> MW
>
>
> On Wed, Aug 27, 2014 at 11:37 PM, John Dickinson wrote:
>> Yup, this is a totally possible failure scenario, and Swift will merge the
>> data (us
redentials and tokens.
>
> Just confused a bit on the deployment basics.
Good luck! And feel free to ask more as needed.
>
> MW.
>
>
> On Fri, Aug 29, 2014 at 7:51 PM, John Dickinson wrote:
>>
>> On Aug 29, 2014, at 2:43 AM, Marcus White wrote:
>>
I'm not sure what the question is.
If you are looking to have a successful response after it's written twice in a
cluster with 4 replicas, no. Swift's quorum calculation is (replicas DIV 2 +
1). This means that for 4 replicas, you have a quorum size of 3. What I would
suggest you look into is
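To illustrate the quorum math above (a toy snippet, not Swift source):

    def quorum_size(replicas):
        return replicas // 2 + 1  # replicas DIV 2 + 1

    assert quorum_size(3) == 2
    assert quorum_size(4) == 3  # four replicas still need three acks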
equirement, an object will be available in each region once I receive
> a 200 upon file ingest.
>
>
>
>
>
> On Tue, Sep 9, 2014 at 10:02 AM, John Dickinson wrote:
> I'm not sure what the question is.
>
> If you are looking to have a successful respons
Actually, it's even easier than that. Swift provides, out of the box, a
crossdomain middleware so that you can return the appropriate domain-wide
policy for your content.
See
http://docs.openstack.org/developer/swift/middleware.html#module-swift.common.middleware.crossdomain
--John
On Sep
Currently, no.
However, we've talked about (and IIRC there's been a little work on) a sort of
"defaulter" functionality that would (among other things) allow for setting a
per-account default storage policy. It's something that I think would be a
great addition to Swift.
If you want to talk ab
The short answer is that no, you cannot specify a storage node for a given
object. Data placement is done with the hash ring, so placement is based on the
hash of the object name and the current state of the drives in the cluster.
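For example, you can see the placement for any object with the ring API (the
ring path and names here are hypothetical):

    from swift.common.ring import Ring

    ring = Ring('/etc/swift/object.ring.gz')
    part, nodes = ring.get_nodes('AUTH_test', 'mycontainer', 'myobject')
    # placement falls out of the name hash and the current device set;
    # there is no way to pin an object to a chosen node
    for node in nodes:
        print(part, node['ip'], node['device'])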
--John
On 22 Feb 2016, at 4:06, pgh pgh wrote:
> Hello All,
>
>
inline
On 22 Feb 2016, at 5:06, Kévin Bernard-Allies wrote:
> Hello,
>
> I've some questions about the handling of concurrent requests by Swift.
> Any help would be great:)
>
>
> If I understand correctly, a PUT request on swift-proxy will be transmitted
> to at least half the storage nodes plus
On 3 Mar 2016, at 13:32, Mark Kirkwood wrote:
> On 03/03/16 03:57, Peter Brouwer wrote:
>> Hello
>>
>> I am trying to find some information on the relationship of the ring
>> (builder) structure of swift and the size of disks used, i.e. how is
>> capacity of a disk managed.
>> I've seen many doc
Your code review is correct.
There's some ideas on how to make things more secure that I expect to be
tackled relatively soon, but for now it's all HTTP.
In single-site deployments, the internal Swift network (i.e. proxy to storage
and storage to storage) should be on a private network. And an
The "bind_port" setting in Swift's config files is required to be explicitly
set. It doesn't matter what the number is. If you have a port conflict with
another service, feel free to change it.
As to why port 6000 was used initially... The truth is probably lost to the
mists of time. However, I
...and we just landed a patch to use the 6200 range for examples.
https://review.openstack.org/#/c/274840/
--John
On 29 Apr 2016, at 9:26, John Dickinson wrote:
> The "bind_port" setting in Swift's config files is required to be explicitly
> set. It doesn't mat
That's an interesting error, and a little different than what I had seen in my
tests. I'd be really interested in your experience with patching eventlet
(directly or monkey-patching from swift) to add those methods.
--John
On 5 May 2016, at 13:01, Shrinand Javadekar wrote:
> Hi,
>
> Base
On 20 May 2016, at 10:27, Shrinand Javadekar wrote:
> Hi,
>
> I am troubleshooting a test setup where Swift returned a 201 for
> objects that were put in it but later when I tried to read it, I got
> back 404s.
>
> The system has been under load. I see lots of connection errors,
> lock-timeouts,
A few years ago, I gave this talk at LCA which covers a lot of these details.
https://www.youtube.com/watch?v=_sUvfGKhaMo&list=PLIr7I80Leee5NpoYTd9ffNvWq0pG18CN3&index=9
--John
On 27 Jun 2016, at 17:36, Mark Kirkwood wrote:
> Hi,
>
> I'm in the process of documenting failure modes (for ops d
Yes!
http://docs.openstack.org/developer/swift/overview_ring.html#fractional-replicas
Set the replica count to 2.1 then 2.2 then 2.3 etc until you get to 3. That
will allow you to gradually change the replica count over time to prevent the
cluster from immediately trying to create a new replica
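A hedged sketch of one such step with the builder API (paths and values are
illustrative; the swift-ring-builder CLI works too):

    from swift.common.ring import RingBuilder

    builder = RingBuilder.load('/etc/swift/object.builder')
    builder.set_replicas(2.1)  # next step; repeat later with 2.2, 2.3, ...
    builder.rebalance()
    builder.save('/etc/swift/object.builder')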
Great to hear that you're looking at Swift. Answers inline...
--John
On 8 Aug 2016, at 7:56, Chris wrote:
> Hello,
>
> We are currently playing with swift and try to find out if it would be useful
> for our needs. We would use the latest mitaka release. It would be a
> multi-region deployme
Any way we could get the stats you need upstream in swift (emitted via statsd)
and then use something like
https://github.com/danslimmon/statsd-opentsdb-backend to get it in to opentsdb?
--John
On 9 Aug 2016, at 8:59, Alexandr Porunov wrote:
> Hello,
>
> I need to collect different metrics
There's 2 ways you can do this:
1) Use chunked transfer encoding
2) Use Swift's large object manifest objects
For the first, it's the standard HTTP semantics. You can send chunks of data to
Swift to be stored as one object without knowing the full object size up front.
Note that the per-object
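A hedged sketch of option 1 with python-swiftclient (connection details are
hypothetical):

    import io
    from swiftclient.client import Connection

    conn = Connection(authurl='http://example.com/auth/v1.0',
                      user='test:tester', key='testing')

    # any file-like object works; this BytesIO is a stand-in for a real
    # stream whose total length you don't know up front
    stream = io.BytesIO(b'x' * (10 * 65536))
    # leaving content_length unset makes swiftclient send the body with
    # Transfer-Encoding: chunked, read in chunk_size pieces
    conn.put_object('mycontainer', 'myobject', contents=stream,
                    chunk_size=65536)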
xprofile is a module that can be used to profile the running code in a swift
proxy. I would not recommend running it in a production environment. It's
mostly for a developer use case.
--John
On 25 Aug 2016, at 4:28, Chris wrote:
> Hello Hamza,
>
>
>
> I don’t even have the “filter-xprofile”
On 13 Sep 2016, at 15:47, Michael Yoon wrote:
> Hello Alex and Clay,
>
> Pardon my jumping in here, we do something similar and have a similar use
> case (we chose 1MB segments for mobile, and 10MB segments for desktop).
> One question I have in comparing copy/bulk-delete vs keeping the manifest
Tempauth is really only for internal use and for removing dependencies when
testing. It does not support the v2 or v3 auth APIs. I'd suggest using
Keystone instead.
--John
> On Sep 19, 2016, at 8:46 AM, Alexandr Porunov
> wrote:
>
> Hello,
>
> I am using tempauth for authentication. Here what I us
Unfortunately, no (as far as I know). It's a proprietary library used by NTT.
However, if you update to the latest version of liberasurecode, that warning
message is suppressed.
--John
On 12 Oct 2016, at 19:58, Yu Watanabe wrote:
> Hello.
>
> I would like to ask question related to swift obj
That method comes from
https://github.com/openstack/swift/commit/3ff94cb785867382ff6c37cb256d1b0f5381abaa.
The commit message should give some context. Basically, it was added as a way
to update existing clusters that needed an emergency fix.
What you're likely interested in is https://review.o
On 28 Nov 2016, at 19:49, Mark Kirkwood wrote:
> I'm seeing quite a lot of this sort of thing in the object server log:
>
> Nov 29 12:59:34 cat-wgtn-ostor001 object-server: Unexpected file
> /srv/node/obj01/objects/1485492/33d/b555a56c0d8e5cc4c146bbe08788d33d/.1479625672.77617.data.xefqO8:
> I
I'd suggest monitoring overall replications status with a combination of log
monitoring and swift-dispersion-report. If you find something that is
under-replicated, you can run the replicator process and give it a list of
partitions to prioritize.
http://docs.openstack.org/developer/swift/admin
's a great idea! Now it's
just a matter of prioritization...
--John
>
> regards
>
>
> Mark
>
>
> On 06/12/16 05:41, John Dickinson wrote:
>> I'd suggest monitoring overall replications status with a combination of log
>> monitoring and swift-di
On 5 Dec 2016, at 12:56, John Dickinson wrote:
> On 5 Dec 2016, at 12:39, Mark Kirkwood wrote:
>
>> Thanks John - increasing the partition coverage is a great idea (I hadn't
>> considered doing that).
>>
>>
>> Now with respect to the lack of durabilit
On 13 Dec 2016, at 0:21, Shyam Prasad N wrote:
> Hi,
>
> I have an openstack swift cluster with 2 nodes, and a replication count of
> 2.
> So, theoretically, during a PUT request, both replicas are updated
> synchronously. Only then the request will return a success. Please correct
> me if I'm w
On 4 Jan 2017, at 20:44, Sameer Kulkarni wrote:
> Hi All,
> I was eager to know the data-flow of an object on Openstack Swift PUT
> command. I know a highlevel overview from various posts, which I am not
> sure of.
>
>- Client Initiates PUT command specifying the object path on local
>st
If you're running 100% coverage with dispersion report, then running it once
per day seems reasonable.
If you're running something smaller, like 1-10%, then doing it once per hour
might be reasonable.
The point is, make it automatic and integrate it into your normal ops metrics.
Track it over
The ring is the beating heart of Swift. It's what defines cluster membership
and ensures dispersed and balanced placement. Therefore, understanding the ring
means you understand a whole lot of how Swift works.
Good luck on the journey!
There's been quite a few words written and spoken about the
On 13 Feb 2017, at 17:16, Sai Vishwas wrote:
> Hi,
>
> I was reading about the replicator process that runs at regular
> intervals. I would like to know if my understanding of it is correct .
Yep, you've got a good understanding.
>
> This is my understanding : Since it is partitions
Great question. There's not a simple yes/no answer, so let me go into some
detail about the different ways storage nodes can be selected to handle a
request.
There are a few config settings in the proxy server config that can affect how
nodes are selected for reads. Instead of describing these
On 24 May 2017, at 5:47, Christian Baun wrote:
> Hello all,
>
> I tried again to install Swift only ontop of a Raspberry Pi 3 with
So, first off, I think that's really cool. I tried something like this a while
back, too. https://github.com/notmyname/swift_on_pi
However, you can see that since