Re: RiakCS large file uploads fail with 403/AccessDenied and 400/InvalidDigest

2015-03-16 Thread Niels O
Let's see my environment:

node v0.10.20
npm 1.3.11
├─┬ aws-sdk@2.1.8
│ ├─┬ xml2js@0.2.6
│ │ └── sax@0.4.2
│ └── xmlbuilder@0.4.2

riak-cs 1.5.3
riak 1.4.10

and YES, upgrading the AWS SDK to 2.1.16 helped!

Thanks, solved!


On Wed, Mar 11, 2015 at 4:43 AM, Shunichi Shinohara  wrote:

> Niels,
>
> I tested PUT object by your script (slightly modified for keys etc.) and
> succeeded.
> My environment:
> - Riak CS, both 1.5 branch and develop branch
> - node.js v0.10.25
> - npm 1.3.10
> - % npm ls
> /home/shino/b/g/riak_cs-2.0
> └─┬ aws-sdk@2.1.16
>   ├─┬ xml2js@0.2.6
>   │ └── sax@0.4.2
>   └── xmlbuilder@0.4.2
> - script https://gist.github.com/shino/36f02377a687f8312631
>
> Maybe a version difference of node or the aws sdk?
>
> Thanks,
> Shino
>
> On Wed, Mar 11, 2015 at 11:13 AM, Shunichi Shinohara 
> wrote:
> > ngrep does not show some bytes. tcpdump can dump network data in pcap
> > format.
> >
> > ex: sudo tcpdump -s 65535 -w /tmp/out.pcap -i eth0 'port 8080'
> > --
> > Shunichi Shinohara
> > Basho Japan KK
> >
> >
> > On Tue, Mar 10, 2015 at 7:30 PM, Niels O  wrote:
> >> Hello Shino,
> >>
> >> I was uploading the attached file to Riak CS, so the correct MD5
> >> digest should be calculable.
> >>
> >> I don't know how to generate a pcap-formatted file on Linux, but I
> >> made an ngrep capture, which might also do the job ...
> >>
> >> the ngrep below:
> >>
> >> interface: eth0 (172.16.0.0/255.255.248.0)
> >> filter: (ip or ip6) and ( host 172.16.3.21 )
> >> 
> >> T 172.16.2.99:35151 -> 172.16.3.21:8080 [AP]
> >>   PUT http://testje.s3.amazonaws.com:443/4096k HTTP/1.1..User-Agent:
> >> aws-sdk-nodejs/2.1.8 linux/v0.10.20..Content-Type:
> >> application/octet-stream..Content-MD5:
> >> 0BsQLab2tMEzr8IWoS2m5w==..Content-Length: 4194304..Host: testje.s3.
> >>   amazonaws.com..Expect: 100-continue..X-Amz-Date: Tue, 10 Mar 2015
> 10:16:36
> >> GMT..Authorization: AWS
> >> GHSEZVCH4NYD359IZUEX:E2Ur8OR+po687h1c6/PUBx6gYzQ=..Connection: close
> >> ##
> >> T 172.16.2.99:35151 -> 172.16.3.21:8080 [AP]
> >>   PUT http://testje.s3.amazonaws.com:443/4096k HTTP/1.1..User-Agent:
> >> aws-sdk-nodejs/2.1.8 linux/v0.10.20..Content-Type:
> >> application/octet-stream..Content-MD5:
> >> 0BsQLab2tMEzr8IWoS2m5w==..Content-Length: 4194304..Host: testje.s3.
> >>   amazonaws.com..Expect: 100-continue..X-Amz-Date: Tue, 10 Mar 2015
> 10:16:36
> >> GMT..Authorization: AWS
> >> GHSEZVCH4NYD359IZUEX:E2Ur8OR+po687h1c6/PUBx6gYzQ=..Connection: close
> >> ##
> >> T 172.16.3.21:8080 -> 172.16.2.99:35151 [AP]
> >>   HTTP/1.1 100 Continue
> >> ##
> >> T 172.16.2.99:35151 -> 172.16.3.21:8080 [A]
> >>   [~4 MiB request body of repeating "2." payload bytes omitted]
> >>

Re: RiakCS large file uploads fail with 403/AccessDenied and 400/InvalidDigest

2015-03-16 Thread Niels O
Hi Shino,

Quite a late reaction, but YES, this patch works!
So problem solved :-) Thanks!

Niels
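The size split Niels reports (400s for files of 1024-8191K, 403s from 8192K up) is consistent with the SDK switching from a single PUT to Multipart Upload once the body crosses a part-size threshold; the PR below fixes signing of the Part requests. A rough sketch of that boundary (the 8 MiB threshold here is inferred from the reported sizes, not a verified aws-sdk constant):

```javascript
// Illustration only: the threshold is an inference from the file sizes
// reported in this thread, not a documented aws-sdk default.
const THRESHOLD = 8 * 1024 * 1024; // 8 MiB

function uploadMode(sizeBytes) {
  return sizeBytes >= THRESHOLD
    ? 'multipart (403 before the signing fix)'
    : 'single PUT (400 before the 2.1.16 upgrade)';
}

console.log(uploadMode(4 * 1024 * 1024)); // the 4096K file from the trace
```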


On Thu, Mar 12, 2015 at 10:00 AM, Shunichi Shinohara 
wrote:

> Hi Niels,
>
> I made a PR to aws-sdk-js that fixes the 403 in Multipart Upload Part
> requests [1].
> I hope you can patch your aws sdk installation with its diff.
>
> [1] https://github.com/aws/aws-sdk-js/pull/530
>
> Thanks,
> Shino
>
> On Wed, Mar 11, 2015 at 5:29 PM, Niels O  wrote:
> > I was testing some more, and now the 400 issue (files from 1024-8191K)
> > is solved. The 403 issue is indeed not yet solved (files > 8192K).
> >
> > so indeed still an issue :-(
> >
> > Here is a pcap of the 403 issue (with the -w option this time :-)
> > http://we.tl/AFhslBBhGo
> >
> > On Wed, Mar 11, 2015 at 8:02 AM, Shunichi Shinohara 
> wrote:
> >>
> >> Congrats :)
> >>
> >> Just my two cents,
> >>
> >> > tcpdump 'host 172.16.3.21'  -s 65535 -i eth0 > /opt/dump.pcap
> >>
> >> tcpdump's "-w file.pcap" option is helpful because the dump contains
> >> not only header information but also the raw packet data.
> >>
> >> How about the "403 - AccessDenied" case? Is it also solved by the
> >> version upgrade, or still an issue?
> >>
> >> Thanks,
> >> Shino
> >
> >
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Advice on making a Riak middleware easy to configure

2015-03-16 Thread Matthew Brender
Marc,
Our side conversation was a start, but I'm sure there's more to discuss
here. I believe you want to know: what configuration assumptions about Riak
can be made before connecting it to apiman, versus what functionality
apiman has to support?

If you're looking to start simple, I'd say beginning from the assumption
that you interface with a load balancer in front of a Riak cluster is a
good move.

Perhaps other members can chime in on what a "typical" configuration looks
like.

Best,
Matt




Hi Matt,

It would seem that I fell on the wrong side of the divide between
providing enough context and making my query clear!

In short, apiman consists of two main parts: runtime (gateway) and
design-time (manager). The manager pushes configuration to the gateway
which then enforces policies on transiting traffic - that might be
something like rate-limiting, authentication, or metrics collection on
HTTP traffic.

The gateway is horizontally scalable, so it requires a data store to
facilitate shared state functionality (e.g. rate-limit applies across
the whole cluster of gateways). We have pluggability at the data-store
level - so, by implementing a few interfaces we can use a given
data-store in an abstracted manner (i.e. zero knowledge about the
underlying specifics when a policy uses a shared state component).

Users can choose whichever data-store suits their needs; simply select
it and provide relevant configuration information to the plug-in (via a
config file, or whatever).

The issue I have is simply: what configuration options should we provide
for our plug-in so it can connect to typical Riak set-ups (given that I
have zero knowledge of users' set-ups and Riak conventions).

For instance in the config file, do I:

- Accept a list of Riak nodes and try to *construct* a cluster for them;
or is it safe to assume they've done this in advance?

- Try to define buckets & associated data-types, or should I assume this
is done in advance?

- Just assume everyone uses Riak behind a load-balancer, and I just need
to accept a single URI?

Some of these scenarios run into idempotence issues, so it may be that
it's unsafe or poor for performance to allow those.

I'm happy to support multiple configurations; I'm just not sure which ones
are typical, given the large number of possible permutations.
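To make the permutations concrete, here is a sketch of the two config shapes in question (all field names are hypothetical, not an existing apiman or Riak client schema): either a single load-balancer URI, or an explicit node list with the cluster assumed to be pre-joined, which sidesteps the idempotence problem of many gateways trying to join nodes at once.

```javascript
// Hypothetical plug-in config normalization -- field names illustrative.
function normalizeRiakConfig(cfg) {
  if (cfg.uri) {
    // Scenario: everything behind a load-balancer; one URI to connect to.
    return { endpoints: [cfg.uri], manageCluster: false };
  }
  // Scenario: explicit node list; the cluster is assumed to be joined in
  // advance, so the plug-in never mutates cluster membership.
  return { endpoints: cfg.nodes || [], manageCluster: false };
}

console.log(normalizeRiakConfig({ uri: 'http://riak-lb:8098' }));
```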

I hope I've been a bit clearer this time; please let me know if I haven't!

Appreciate your assistance.

Regards,
Marc

*Matt Brender | Developer Advocacy Lead*
Basho Technologies
t: @mjbrender 
c: +1 617.817.3195

On Sat, Mar 7, 2015 at 10:35 AM, Marc Savy  wrote:

> Hi All,
>
> I'm involved in a FOSS API management project (apiman), and I've been
> thinking about providing a Riak implementation of its gateway components
> in the community (where we already have ElasticSearch and Infinispan).
> These components provide the distributed storage for tasks like
> rate-limiting counters, IP white-listing, black-listing, etc and are
> applied by a horizontally scalable, async gateway (to vastly
> oversimplify!).
>
> I'm in need of advice principally in regards to configuration and
> set-up. Namely, what assumptions can I safely make about a Riak user's
> set-up, and which settings I should expose in the component's
> configuration. Note that many gateways can exist, and hence any set-up
> ideally needs to exist already in advance, or be idempotent in case
> multiple nodes attempt it at once (or otherwise be lockable/exclusive).
>
> To be more concrete: should I, for example, expect the user to have
> already set up and joined together their Riak cluster a priori, with
> everything behind a load-balancer (just give me a single URI to connect
> to)? Or should I expect a list of FQDNs/IPs and attempt to join them
> together into a cluster on the user's behalf - or will there be
> idempotence issues if I do that multiple times?
>
> As far as I can tell, there is no node discovery/sharing
> implementation[1], so I take it there's no way, for instance, to hit a
> single node (which has already been joined with other nodes), and
> thereby automatically gain knowledge of all cluster members?
>
> A couple of other configuration issues: Given the introduction of Riak
> Data Types on buckets, whom should I expect to set up the data types[2]?
> Should I create them automatically if they don't exist? Same for the
> bucket itself.
>
> I'm very interested to know how to present a convenient set of options
> that will allow a typical development and deployment environment to be
> supported.
>
> Regards,
> Marc
>
> [0] With the usual consistency limitations
> [1] https://github.com/basho/riak/issues/356
> [2] http://docs.basho.com/riak/latest/dev/using/data-types/#Setting-Up-Buckets-to-Use-Riak-Data-Types
>