Obviously a re-balance will cost some IO, but it's normally imperceptible to
the client unless you were already running on a razor-thin margin.
Two config options seem like obvious candidates to experiment with:
You could decrease the node_timeout and let the proxy try to write more to
handoffs (a sketch of where that lives follows below)
You could try
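For concreteness, a minimal sketch of the kind of proxy-server.conf tweak being
described - the value here is illustrative, not a recommendation:

[app:proxy-server]
use = egg:swift#proxy
node_timeout = 3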
Sure! python swiftclient's upload command has a --changed option:
https://docs.openstack.org/python-swiftclient/latest/cli/index.html#swift-upload
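A hedged example of what that looks like (container and directory names are
made up); roughly, files that already match what's in the container get skipped:

swift upload --changed my_container /path/to/local/dir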
But you might be happier with something more sophisticated like rclone:
https://rclone.org/
Nice thing about object storage is you can access it fr
Swift containers can certainly have underscores in them... almost any
character is valid.
But I guess s3api thinks that's maybe not a valid bucket name?
https://github.com/openstack/swift/blob/master/test/unit/common/middleware/s3api/test_utils.py#L38
-Clay
On Thu, Jun 21, 2018 at 3:27 AM, Shy
On Tue, Mar 13, 2018 at 3:05 PM, Mark Kirkwood <
mark.kirkw...@catalyst.net.nz> wrote:
> To me this suggests that a certain minimum number of *hosts* per region is
> needed for a given EC policy to be durable in the event of host outage (or
> destruction). Is this correct - or have I flubbed the
On Thu, Mar 8, 2018 at 8:07 PM, Mark Kirkwood wrote:
> Are we supposed to do a bit of python for ourselves to use these? (rubs
> hands ready to hack...)...
>
Maybe you could dust off this? https://review.openstack.org/#/c/451507/
I know kota & acoles would feel great about seeing that merge - ma
One replica is a little strange. Do the uploads *always* fail - in the
same way? Or is this just one example of a PUT that returned 503? Are you
doing a lot of concurrent PUTs to the same object/name/disk?
The error from the log (EPIPE) means the object-server closed the
connection as the proxy
Probably the "devices" option in the object server is misconfigured?
On my lab and production servers I configure the object-server.conf with
[DEFAULT]
devices = /srv/node
And then I make sure my mounted devices appear at:
/srv/node/d1
/srv/node/d2
/srv/node/d3
etc
The path in the error messa
Pretty sure that's true and mostly optimistic on the part of the db
replicator, which is more in the data path for replication than, say, rsync
object replication.
If you look at ssync or the ec reconstructor you'll see it's quite possible for
them to trip built-in DiskFile quarantine behavior during r
Swift is a fun project!
Project Docs:
https://docs.openstack.org/developer/swift/index.html
Bugs to get started:
https://bugs.launchpad.net/swift/+bugs?field.assignee=&field.tag=low-hanging-fruit&field.status%3Alist=NEW&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist
I've heard of people using the seed feature of rebalance to try to make it
repeatable enough to seem deterministic.
But I always found the idea bonkers.
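For reference, the seed is just an optional argument to the rebalance command -
a sketch with an arbitrary seed value:

swift-ring-builder object.builder rebalance 42

The premise is that two builders with identical devices and the same seed
should come out the same way, which is what makes the idea tempting.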
Maybe just upload them into swift? Let the provisioning of a new node
checkout the builders - add itself - rebalance - upload modified builder
and
No, it's not store and forward - the proxy is write-through: the proxy takes a
chunk off the network from the client and immediately sends the chunk out to
the backend storage nodes.
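A minimal illustrative sketch of the write-through idea (this is not Swift's
proxy code, just the shape of it):

def write_through(client_sock, backend_socks, chunk_size=65536):
    # read a chunk from the client and immediately relay it to every
    # backend connection - nothing is spooled to disk in between
    while True:
        chunk = client_sock.recv(chunk_size)
        if not chunk:
            break
        for sock in backend_socks:
            sock.sendall(chunk)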
-Clay
On Fri, Jan 20, 2017 at 7:09 AM, Sameer Kulkarni
wrote:
> Hi All,
>
> When a PUT operation is issued by Client, I
We track and prominently display the time since the last replication cycle
completed some minutes after a ring was deployed (the raw data is available
in recon data [1]) and also monitor counts of handoff partitions per device
(aggregated per node and cluster wide) [2].
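If you want to poke at the same data by hand, swift-recon exposes the last
replication pass per server type (output will obviously vary by cluster):

swift-recon object --replication
swift-recon container --replication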
You could also try to confi
You don't need to run swift-dispersion-populate more than once - all it
does is put a bunch of objects in some % of your ring's partitions. The
number of partitions in a ring is fixed at creation [1] - only which devices
each partition is assigned to will change with a rebalance.
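For example (illustrative part power and replica count), a builder created
like this has 2^14 = 16384 partitions forever, no matter how many times you
rebalance:

swift-ring-builder object.builder create 14 3 1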
swift-dispersi
Is this *really* the default chunk_size?
http://docs.python-requests.org/en/master/api/#requests.Response.iter_content
Because, that'd be like a *lot* of read calls for a large object ;)
https://github.com/openstack/python-openstackclient/blob/master/openstackclient/api/object_store_v1.py#L383
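A hedged sketch of the difference on the client side (the URL and token are
hypothetical):

import requests

url = 'http://proxy.example.com:8080/v1/AUTH_test/c/o'
resp = requests.get(url, headers={'X-Auth-Token': 'XXX'}, stream=True)
with open('o.download', 'wb') as f:
    # the default chunk_size=1 would mean one iteration per byte;
    # a larger chunk keeps the number of read calls sane
    for chunk in resp.iter_content(chunk_size=65536):
        f.write(chunk)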
Can you find the transaction-id of the GET request for the download in
the logs and inspect the object-server response? Can you duplicate the
results with any other client (e.g. web browser, curl, python-swiftclient,
etc)?
-Clay
On Wed, Jan 4, 2017 at 12:59 AM, don...@ahope.com.cn
wrote:
>
I strongly prefer to configure rsyncd with a module per disk:
https://github.com/openstack/swift/blob/c0640f87107d84d262c20bdc1250b805ae8f9482/etc/rsyncd.conf-sample#L25
and then tune the per-disk connection limit to 2-4
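Roughly along these lines - see the linked sample for the exact layout, and the
device names here are hypothetical:

[object_sda]
max connections = 2
path = /srv/node
read only = false
lock file = /var/lock/object_sda.lock

... and so on, one [object_<device>] section per disk.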
There's not really a hard and fast rule; in some sense it's related to
replic
To look at current availability of a single object you can use
`swift-get-nodes` and check all of the primary locations - or if you have
the `.data` file handy already you can use `swift-object-info`
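For example (ring path, account/container/object names, and the .data path are
all placeholders):

swift-get-nodes /etc/swift/object.ring.gz AUTH_test my_container my_object
swift-object-info /srv/node/sda/objects/1234/abc/<hash>/<timestamp>.data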
Either of these options will tell you where the object should be, and also
where it might be if the
heh, didn't see this one when responding to the other message :P
My favorite client bindings for the java's is JOSS ->
http://joss.javaswift.org/
For streaming upload; maybe here:
https://github.com/javaswift/tutorial-joss-streaming/blob/e97a302e42b8964b4c87749fc2a5d28a9bb4d32a/src/main/java/org
On Tue, Sep 13, 2016 at 10:37 AM, Alexandr Porunov <
alexandr.poru...@gmail.com> wrote:
>
> Correct me if I am wrong. The algorithm is as follows:
> 1. Upload 1MB sub-segments (up to 500 sub-segments per segment). After
> they will be uploaded I will use "copy" request to create one 500MB
> segment. Aft
On Thu, Aug 25, 2016 at 1:28 AM, Chris wrote:
>
>
> I don’t even have the “filter-xprofile” option in my config what is it for?
>
>
>
http://docs.openstack.org/developer/swift/middleware.html#module-swift.common.middleware.xprofile
Those are just rsync temporary files; they normally get cleaned up when
rsync exits - you should make sure you are running the latest version of
rsync.
The messages are fairly new; we had some issues with propagation of rsync
tempfiles that led to a few changes down there. We should probably have
swift-ring-builder <builder_file> set_replicas <replicas>
    Changes the replica count to the given <replicas>. <replicas> may
    be a floating-point value, in which case some partitions will have
    floor(<replicas>) replicas and some will have ceiling(<replicas>)
    in the correct proportions.
    A rebalance is needed to make the change take effect.
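For example, to move a builder to three full replicas and apply it:

swift-ring-builder object.builder set_replicas 3
swift-ring-builder object.builder rebalance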
At the risk of repeating myself:
On Tue, May 24, 2016 at 5:30 PM, Clay Gerrard
wrote:
>
> This inconsistency in search depth based on the per-worker error limiting
> may be something worth looking into generally - but it's probably mostly
> hidden on clusters that are goin
On Tue, May 24, 2016 at 5:38 PM, Shrinand Javadekar wrote:
> Here's my test setup:
>
> - Single node
> - Single replica
> - 4 disks: /srv/node/r1, r2, r3 and r4.
> - Backed by SSDs
>
>
I think in a four device single node single replica setup I'd probably just
run request_node_count = 4 and call
more nodes deep into handoffs in the
default case.
If you're able to reliably reproduce the failure you might try increasing
your [app:proxy_server] request_node_count setting to a fixed value of
perhaps 3, and see if that works better for you.
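A sketch of where that setting lives (the value is just the one suggested
above):

[app:proxy-server]
use = egg:swift#proxy
request_node_count = 3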
-Clay
On Tue, May 24, 2016 at 5:14 PM, Cl
On Tue, May 24, 2016 at 4:51 PM, Shrinand Javadekar wrote:
> Thanks for the detailed explanation...
>
Well, you're welcome - my apologies if it was overly verbose.
> This is unlike what I've seen in this setup. I have some code that
> tried to read the object 5 times from Swift with exponentia
On Tue, May 24, 2016 at 4:56 PM, Shrinand Javadekar wrote:
> Sorry... I missed the first question
>
>
no worries
>
> Yes, I was running on a single replica system.
Ah! That's great information. I have *zero* experience with single
replica systems. The logs should be even *more* interesting
On Tue, May 24, 2016 at 11:59 AM, Shrinand Javadekar <
shrin...@maginatics.com> wrote:
>
> I found the object written into the second handoff node.
>
Are you running only a single replica!? Was the object data *only* on the
second handoff?! If the original PUT request did not return success it'
On Mon, May 23, 2016 at 1:49 PM, Shrinand Javadekar wrote:
>
> If objects are placed on different devices than the computed ones,
> they will be unavailable until the replication places them at the
> correct location.
This part doesn't sound quite right to me, but the transaction logs will
tell
On Fri, May 20, 2016 at 1:08 AM, Pete Zaitcev wrote:
>
> Look at cf_xxx functions here:
> https://git.fedorahosted.org/cgit/iwhd.git/tree/backend.c
> Clone
> git://git.fedorahosted.org/iwhd.git
>
>
^ should *also* go on the associated projects list!
Yeah, those are a few undesirable behaviors there.
https://bugs.launchpad.net/swift/+bug/1583305
#willfix
On Tue, May 17, 2016 at 11:04 PM, Mark Kirkwood <
mark.kirkw...@catalyst.net.nz> wrote:
> On 17/05/16 17:43, Mark Kirkwood wrote:
>
>>
>> I'm seeing some replication errors in the object server
I haven't heard much about folks using Swift bindings for C - there are no C
bindings listed on the associated projects page [1]. I'm sure just using
the library functions directly within the application would be at least as
good as trying to build a command line utility and calling it with a
subp
Honestly I'm not sure fronting Swift with Apache is a particularly popular
deployment configuration - I'm not sure many folks have much experience
with it - you could try to find David Hadas (original author of Apache
stuff). I'm sure the author of swift informant never tested with apache -
but it
Idk about a chestnut, but there's this:
http://lists.openstack.org/pipermail/openstack-operators/2011-October/000297.html
-Clay
On Mon, Sep 14, 2015 at 10:18 PM, Chris Friesen wrote:
> On 09/14/2015 08:59 PM, Mark Kirkwood wrote:
>
>> Hi,
>>
>> Sorry if this is a bit of a chestnut, but I notic
Neither container-sync nor global replication support that.
I'm not sure if quiesce would be a good fit for the project, honestly, since
you have to pause writes with an unbounded window, and coordinating that
would defeat some of swift's goals of scalability and availability.
Honestly, something about
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_4983776e190c8dbc
how is the top pick not the author of the book of five rings [1]
-Clay
1. https://en.wikipedia.org/wiki/The_Book_of_Five_Rings
On Mon, Jun 22, 2015 at 7:07 AM, Monty Taylor wrote:
> Hey all!
>
> The M naming poll has conclu
What a well timed question!
A swift core maintainer recently did some analysis on this very question
and the results strongly favored using multiple workers on different ports
each handling only a single physical filesystem device.
To make it easier to achieve that configuration there's a patch t
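Assuming the patch being referred to is the one that eventually landed as the
servers-per-port option, the configuration looks roughly like this - port
numbers and weights are illustrative, and each device in the ring gets its own
port:

# object-server.conf
[DEFAULT]
servers_per_port = 1

swift-ring-builder object.builder add r1z1-10.0.0.1:6200/sda 100
swift-ring-builder object.builder add r1z1-10.0.0.1:6201/sdb 100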
It would be helpful to identify which SDK?
You're probably aware the auth service and the object storage service are
different http endpoints - you get a token from auth, you provide the token
to swift with the request - swift validates the token and authorizes the
request. If the token provided
maybe a mis-matched hash_path_(prefix|suffix) in swift.conf on the account
node?
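Those values live in /etc/swift/swift.conf and have to be identical on every
node in the cluster - a sketch (the values are obviously placeholders):

[swift-hash]
swift_hash_path_prefix = changeme
swift_hash_path_suffix = changeme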
On Tue, Apr 21, 2015 at 2:14 AM, Kévin Bernard-Allies <
kbernard-all...@bajoo.fr> wrote:
> Hi,
>
> I've set a new Swift installation for development.
> When I try to create an account or a container, swift-proxy give
On a single node where network transfers are cheaper, with a small-object,
request-rate-oriented workload, a good load generator should be able
to reach cpu limits with enough concurrency. If you're targeting a
disk-saturating, throughput-oriented workload - larger object sizes (1-10MB) is
the
if it's in a public container (read/referer acl ?) - just point a
webbrowser (or curl) at the url.
you can get the url of any entity in swift with `swift stat -v` (e.g.
object - `swift stat -v `)
-Clay
On Sun, Mar 15, 2015 at 8:29 PM, Sandhya S wrote:
> Hi all,
>
> I uploaded my object in Swi
Another possibility is a large number of client disconnects - which could
happen serving mostly media content for progressive download; or maybe lots
of X-Newest requests.
I've seen situations where the proxy could try harder to close down backend
connections - https://bugs.launchpad.net/swift/+bu
On Tue, Mar 10, 2015 at 10:30 PM, Shrinand Javadekar <
shrin...@maginatics.com> wrote:
> Has there been any analysis done of the cost incurred by
> this index?
No formal analysis beyond some benchmarking back when gholt, redbo and
wreese were trying to figure out what to do about crappy performa
It's really more about filtering objects with deleted = 1. The index used to
just be name - but that was not too efficient for sqlite when a container
had a lot of deleted objects (it'd have to page through a bunch of deleted
rows that matched the name index to apply the filter on deleted = 0). Adding
th
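In other words, something shaped like this composite index - a sketch, the
exact name/DDL in Swift's container schema may differ slightly:

CREATE INDEX ix_object_deleted_name ON object (deleted, name);

which lets a listing query filter on deleted = 0 and walk names in order
without paging past tombstone rows.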
On Mon, Dec 8, 2014 at 11:54 AM, Shrinand Javadekar wrote:
> Hi,
>
> I am exploring an option where the physical hardware on which Swift
> will be installed can have an nvram. Has anyone explored putting the
> Swift container and account dbs on nvram?
Not to my knowledge. Unless... well excuse
On Fri, Dec 5, 2014 at 11:47 AM, Shrinand Javadekar wrote:
>
> If it is less than N, the swift-drive-audit tool could potentially
> unmount an already recovered drive.
>
> If it is > N, it is possible to miss some messages in the log file.
>
> Is the above analysis correct?
>
You're probably not
If you make the container public [1] you can just point an HTML5 capable
browser at the url of the video in most cases and it'll HTTP pseudo stream
like a boss.
Otherwise you could perhaps generate tempurls [2]
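A hedged example of both approaches (account, container, object, and key are
all made up):

swift post -r '.r:*' videos
swift post -m 'Temp-URL-Key:mysecretkey'
swift tempurl GET 86400 /v1/AUTH_test/videos/clip.mp4 mysecretkey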
If you need any help you could pop your IRC client into #openstack-swift on
freenode.
401 seems to indicate the token you're getting from keystone isn't able to
be validated by the swift proxy.
Maybe the proxy logs or keystone logs will indicate if keystone thinks AUTH_
ca382398c06749f483d4adfa6cecd868 is a valid token?
-Clay
On Fri, Nov 14, 2014 at 8:31 PM, Vivek Varghese Cheria
Swift is natively instrumented to emit generic statsd metrics that would
cover PUTs/GETs [1]. But I would recommend offline log processing if you
need this data to be reliable and audit-able; slogging [2] is a
third-party opensource tool for processing swift access logs that might
point you in
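If the statsd route is enough, the knobs look roughly like this in the
[DEFAULT] section of the proxy (and other) server configs - host and prefix
are placeholders:

log_statsd_host = statsd.example.com
log_statsd_port = 8125
log_statsd_default_sample_rate = 1.0
log_statsd_metric_prefix = proxy01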
On Thu, Oct 30, 2014 at 12:28 PM, Amit Anand wrote:
> Thanks Clay. Yeah this is swift version 2.1.0. I've done write_ring and
> copied the new .gz files to the object node still getting same error. I
> have no clue where port 6003 is coming from its driving me crazy :-)
>
>
>
Well the device ports can only come from the rings. There is no default.
You're right the backend request is for an object - but maybe it's a
policy-1 ring? You said icehouse though so maybe this isn't swift 2.0?
The swift-init error for proxy is troubling - you should figure out if
maybe the prox
The container server has its own reclaim_age setting - make sure you fix
that up to clear out those container dbs (do they have three replicas?)
the .ts files might be more tricky -
https://bugs.launchpad.net/swift/+bug/1301728
How's replica count of 1 working out for you? Are you raid backed or
SWIFT_LOOPBACK_DISK_SIZE_DEFAULT=1G
looks promising...
I found it in ./lib/swift
There's probably some way to plumb it through your config - you could always
hack it in place and see what happens?
-Clay
On Thu, Oct 16, 2014 at 7:38 AM, Ali Nazemian wrote:
> Hi,
> I tried to install openst
Rackspace devs wrote, open-sourced, and deploy the Swift Origin Service
(SOS) middleware:
https://github.com/dpgoetz/sos
So you could look at that ;)
-Clay
On Fri, Sep 12, 2014 at 12:41 PM, Brent Troge
wrote:
>
> Is there an open source middleware/api-extender that supports setting CDN
> cach
If you're truly not concerned about the replication overhead, the rebalance
will require the minimal amount of data movement if you just remove and
re-add the devices in their new zone with a single rebalance and ring-push.
That may or may not be a good idea to recommend depending on how much
Correct, best-effort. There is no guarantee or time boxing on cross-region
replication. The best way to manage cross site replication is by tuning
your replica count to ensure you have primary copies in each region -
eventually. Possibly evaluate if you need write_affinity at all (you can
always
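For reference, the write affinity knobs live in the proxy config and look
something like this (region name and node count are illustrative):

[app:proxy-server]
write_affinity = r1
write_affinity_node_count = 2 * replicas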
awww man, that is so crappy - I read that code wrong; I thought we fixed
that :'(
default_allowed_headers = '''
content-disposition,
content-encoding,
x-delete-at,
x-object-manifest,
x-static-large-object,
'''
extr
crunchyroll open-sourced this:
https://github.com/crunchyroll/swiftmp4
.. which last I checked supported time based offset streaming directly
through the swift proxies using a cool buffering hack. But IIRC, it's
pretty tied to the mp4 format. I think it was based on maybe...
http://h264.co
No, I was thinking of "allowed_headers"; when I add:
[DEFAULT]
allowed_headers = expires
to my object server config and restart I can get an expires on there:
vagrant@kinetic-swift:~$ curl -H 'x-auth-token: XXX'
http://localhost:8080/v1/AUTH_test/test/test -XHEAD -I
HTTP/1.1 200 OK
Content-
I thought there was a config option for setting which arbitrary http
headers (non-meta) you allow to be stored with objects, but I can't find it
on:
http://docs.openstack.org/developer/swift/deployment_guide.html
On Thu, Aug 7, 2014 at 9:25 AM, Brent Troge
wrote:
>
> Hello.
>
> I am considering
Ah, your part power is too high for that few devices. I would have
recommended 14 rather than 18, which is good enough to scale up to ~500
devices before you might worry about maybe running into balancing issues
that *could* make it difficult to run your cluster more than 70-80% full,
but at that p
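To put rough numbers on it (the device count here is hypothetical): with part
power 18, 3 replicas, and a dozen disks you get 2^18 * 3 / 12 = 65,536
partitions per disk; with part power 14 that's 2^14 * 3 / 12 = 4,096, and even
at ~500 disks you'd still have 2^14 * 3 / 500 ≈ 98 partitions per disk, which
is roughly the point where people start watching balance closely.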
The object servers should only be talking to each other during replication.
They should not talk to the proxy, and probably not the load balancer.
Can you provide the output of "swift-ring-builder
/etc/swift/object.builder" and more details on the network configuration of
this system.
Generally
The ECONNREFUSED is probably saying the endpoint for keystone is
misconfigured in your proxy?
The link you pasted for your proxy conf looks like an rsync conf - the
keystone section of your proxy could be very telling - you have to
configure all that service account mumbo jumbo.
You should also t
Well, once you remove the oldhost from all of your proxy-server.conf's
"memcache_servers" lists in their [filter:cache] sections, and restart
those services, they won't make any more connections to the unreferenced
memcached server. Internal proxy might... you should double check your
/etc/swift
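i.e. the list being referred to, with the old host dropped (addresses are
placeholders):

[filter:cache]
use = egg:swift#memcache
memcache_servers = 10.0.0.2:11211,10.0.0.3:11211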
because it's on repeat.
On Thu, Mar 27, 2014 at 10:20 AM, Кравец Роман wrote:
> Hello.
>
> I installed Openstack Swift to test server and upload 50 gb data.
> Now I see it in the log:
> root@storage1:/var/staff/softded# tail -n 1000 -f /var/log/syslog |
> grep replicated
> Mar 27 19:44:24 sto
ze, sorry, looking at those docs, it seems maybe I was mistaken and
> am using Dynamic Large Objects because I uploaded the file using swift -S.
> Is DLO also optional? Was it added at a certain version?
>
> Thanks so much for the help!
> -Ben
>
>
> On Thu, Mar 13, 2014 at 10
You should check with your deployer to see what version of Swift they are
running.
The /info (capabilities) feature was added in 1.11 and I think Havana
shipped with 1.10.
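You can also check from the client side - an unauthenticated GET on /info
returns the cluster's capabilities as JSON, including whether slo is enabled
(the host is a placeholder):

curl http://proxy.example.com:8080/info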
But I think SLO support has been around since 1.5 which should be in
Havana, maybe even Grizzly. [1]
Either way SLO support h
That formula doesn't make sense to me? 4097 seems to be a magic number in
there? what does 2 ^ 12 have to do with anything?
You can't *really* say how many objects will be in a partition without
thinking about how many objects will be in the cluster. But number of
partitions per disk (2^part_po
it. But that didn't work. I'll update this thread if I find something in
> jclouds that is the problem.
>
>
>
> On Thu, Jan 23, 2014 at 9:23 PM, Clay Gerrard wrote:
>
>> ah ok, so are these two clusters setup as part of a single swauth
>> installation? Are
, swauth.
>
>
> On Thu, Jan 23, 2014 at 1:52 PM, Clay Gerrard wrote:
>
>> Is SwiftAuth... like Swauth?
>>
>> https://github.com/gholt/swauth/search?q=SwiftAuth&ref=cmdform
>>
>> or something else???
>>
>>
>> On Thu, Jan 23, 2014 at 10:
Is SwiftAuth... like Swauth?
https://github.com/gholt/swauth/search?q=SwiftAuth&ref=cmdform
or something else???
On Thu, Jan 23, 2014 at 10:44 AM, Shrinand Javadekar <
shrin...@maginatics.com> wrote:
> Hi,
>
> I am trying to debug a swift auth problem. There are two swift clusters
> using Swif
It's not synchronous; each request/eventlet coroutine will yield/trampoline
back to the reactor/hub on every socket operation that raises EWOULDBLOCK.
In cases where there's a tight, long-running read/write loop you'll
normally find a call to eventlet.sleep (or in at least one case a queue) to
avoi
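A toy illustration of that cooperative pattern (this is not Swift code, just
the eventlet idiom):

import eventlet
import hashlib

def checksum_chunks(chunks):
    md5 = hashlib.md5()
    for i, chunk in enumerate(chunks):
        md5.update(chunk)
        if i % 100 == 0:
            # nothing in this loop blocks on a socket, so explicitly
            # give the hub (and every other coroutine) a turn
            eventlet.sleep(0)
    return md5.hexdigest()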
Really can't say what's misconfigured without the logs.
Since the container is returning a 404, it's something with the account
databases.
The container create will try to update the account databases; with
account_autocreate the first container create will try to create the
account database. It's that part
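For completeness, the option being referred to lives in the proxy app section:

[app:proxy-server]
use = egg:swift#proxy
account_autocreate = true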
Try this:
sudo apt-get install python-pip
pip install -U pip
sudo apt-get purge python-pip
cd
sudo pip install -e .
GL,
-clayg
On Tue, Nov 19, 2013 at 2:04 AM, Razique Mahroua
wrote:
> Hi,
> you can follow that link:
> https://ask.openstack.org/en/question/4996/importerror-running-swift/
>
>
The token may not be (is probably not) deterministically created. You give a
username and password to the auth system and it returns the token for you
to associate with future requests.
The request for the token (the auth request) seems to be missing some
headers:
curl -i http://ictp-R2C4-Contro
Run `swift-ring-builder /etc/swift/object.builder validate` - it should
have no errors and exit 0. Can you provide a paste of the output from
`swift-ring-builder /etc/swift/object.builder` as well - it should list
some general info about the ring (number of replicas, and list of devices).
Rebalan
into those.
>
> At the moment the zones represent different servers (but same rack, DC).
> Is that a bad idea?
>
> Mvh / Best regards
> Morten Møller Riis
> Gigahost ApS
> m...@gigahost.dk
>
>
>
>
> On Sep 17, 2013, at 4:11 AM, Clay Gerrard wrote:
>
>
;: "password"}}}' -H
> "Content-type: application/json" http://IP:35357/v2.0/tokens
>
> Thanks
> Raghavendra
>
> --------
> On Tue, 9/17/13, Raghavendra Rangrej wrote:
>
> Subject: Re: [Openstack] Object versioning not working
> To: "
If you look at the raw API response from a HEAD on the "testing" container
(maybe with curl) I think you'll see that you've set the wrong metadata key.
You should set "X-Versions-Location: test_cont" instead of
"X-Container-Meta-X-Versions-Location: test_cont"
The `-m` option for `swift post` is on
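A hedged example of setting the header directly with the CLI (container names
are the ones from the thread):

swift post -H 'X-Versions-Location: test_cont' testing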
Those two statements do seem in contrast - run `swift-ring-builder
account.builder` and check what the current balance is. Can you paste
the output? Maybe you have an unbalanced region/zone/server and it just
can't do any better than it is?
-Clay
On Thu, Sep 12, 2013 at 11:53 PM, Morten Møl
Might this be the issue:
https://ask.openstack.org/en/question/3664/when-i-use-swift-to-perform-a-head-i-am-getting-401-unauthorized/
On Tue, Sep 10, 2013 at 9:55 PM, Mahardhika
wrote:
> Hi, following this guide
> http://docs.openstack.org/developer/swift/development_saio.html#partition-sectio
If you installed via packages from RDO or Cloud Archive - they probably
have 1.8 "grizzly" available.
If you installed from source (or build your own packages) you can just
fetch the updates from github [3] and checkout the 1.9.2 tag [4].
1. http://openstack.redhat.com/
2. https://wiki.ubuntu.com
Looks like the ubuntu package for dnspython is 1.9.4, and the Swift package
you have requires dnspython 1.10.0.
Try to install the newer dnspython with pip:
pip install "dnspython==1.10.0"
-Clay
On Sat, Aug 17, 2013 at 3:10 AM, pragya jain wrote:
> hello all!
>
>
> I am installing and co
I think you're running an older version of swift, but I'm pretty sure that
loadapp line is right after drop_privileges in run_wsgi.
So the process starts as root, and can read the config - then it drops
privileges to the run user configured in the conf and can't access the file
anymore.
... but e