2020-11-02 01:52:35 UTC - Houman Kamali: Fantastic, thanks!

So in case I expect short bursts of parallel requests and want to optimize for 
cost, I’ll need an additional layer to perhaps batch/throttle the requests; 
otherwise I might end up with as many containers as there are requests.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1604281955207000?thread_ts=1604249854.198100&cid=C3TPCAQG1
----
2020-11-02 02:05:00 UTC - Rodric Rabbah: That’s the likely scenario. However, 
optimizing for cost in this regard is a no-op. You’re billed for total compute 
time × memory by most FaaS providers, so you’re paying the same regardless of 
whether you run sequentially or in parallel.
+1 : Houman Kamali
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1604282700209400?thread_ts=1604249854.198100&cid=C3TPCAQG1
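A quick sanity check of that billing model (the memory size and request counts below are made-up numbers, not any particular provider’s pricing):

```python
# Hypothetical FaaS billing: cost is proportional to memory (GB) x duration (s),
# summed over all invocations. Numbers here are illustrative only.
def gb_seconds(invocations):
    """Total billable GB-seconds for a list of (memory_gb, duration_s) pairs."""
    return sum(mem * dur for mem, dur in invocations)

# 100 requests of 0.5s each in a 256 MB function:
sequential = gb_seconds([(0.25, 0.5 * 100)])   # one container, back to back
parallel = gb_seconds([(0.25, 0.5)] * 100)     # 100 containers at once

assert sequential == parallel == 12.5          # same bill either way
```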
----
2020-11-02 02:06:10 UTC - Rodric Rabbah: If you have short requests that don’t 
consume the full billable quantum, then you can optimize to reduce waste within 
the quantum (e.g. 100ms). This is harder to do, and unless your functions are 
really short-running, it’s not worth it.
+1 : Houman Kamali
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1604282770211100?thread_ts=1604249854.198100&cid=C3TPCAQG1
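A sketch of the quantum rounding being described, assuming a 100ms billing quantum (verify the actual value with your provider):

```python
import math

def billed_ms(actual_ms, quantum_ms=100):
    """Round execution time up to the provider's billing quantum
    (100 ms is an assumed value, not a universal one)."""
    return math.ceil(actual_ms / quantum_ms) * quantum_ms

# A 10 ms function is billed as 100 ms: 90% of the quantum is waste.
assert billed_ms(10) == 100
# Batching ten 10 ms requests into one invocation removes that waste:
assert billed_ms(10 * 10) == 100   # one quantum instead of ten
```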
----
2020-11-02 02:14:32 UTC - Houman Kamali: I see your point. The specific use 
case I have has a throughput of hundreds of requests per second; processing the 
requests is mostly I/O-intensive, with occasional CPU-intensive ones arriving 
(they make external calls, but parse variable amounts of JSON internally), so 
I’m assuming even if all of them go into different containers, they’ll still 
each take e.g. 0.5s to execute. Perhaps I can optimize the cost by batching the 
requests, e.g. over windows of 50ms.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1604283272211400?thread_ts=1604249854.198100&cid=C3TPCAQG1
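A minimal sketch of what such a batching layer could look like, assuming a 50ms window; `invoke_action` is a hypothetical stand-in for whatever call actually dispatches the batch to the platform:

```python
# Micro-batching sketch: collect requests for one window, then dispatch
# them together as a single invocation instead of one invocation each.
import threading
import time

class MicroBatcher:
    def __init__(self, invoke_action, window_s=0.05):
        self.invoke_action = invoke_action  # hypothetical dispatch callable
        self.window_s = window_s
        self.pending = []
        self.lock = threading.Lock()
        self.timer = None

    def submit(self, request):
        with self.lock:
            self.pending.append(request)
            if self.timer is None:  # first request opens a new window
                self.timer = threading.Timer(self.window_s, self._flush)
                self.timer.start()

    def _flush(self):
        with self.lock:
            batch, self.pending, self.timer = self.pending, [], None
        self.invoke_action(batch)  # one invocation for the whole window

batches = []
b = MicroBatcher(batches.append)
for i in range(5):
    b.submit(i)
time.sleep(0.2)                       # wait past the 50 ms window
assert batches == [[0, 1, 2, 3, 4]]   # five requests, one invocation
```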
----
2020-11-02 02:15:01 UTC - Houman Kamali: TIL that there is a minimum billable 
quantum! Should verify that with our FaaS provider to see how much it is, thanks!
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1604283301211700?thread_ts=1604249854.198100&cid=C3TPCAQG1
----
2020-11-02 02:24:19 UTC - Joshua Dunham: Hey Everyone. Can anyone give a thumbs 
up towards an OW worker as a DB client in a (usually microservice-oriented) 
use case? I want SQL to JSON using ~3 workers in a chain (Auth, DB Client, JSON 
serialization). Is this written up somewhere that I haven't come across yet?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1604283859214600?thread_ts=1604283859.214600&cid=C3TPCAQG1
----
2020-11-02 03:48:00 UTC - Houman Kamali: If by worker you mean just a 
serverless function instance, we do use them for DB operations. You can cache 
your client between invocations of the same instance as long as it is kept 
warm (IIRC, if you have requests at least every 30mins or so, it’ll remain hot), 
so it’s an efficient solution in that regard.

You need to make sure your DB can handle all the connections that are made from 
the functions though, or implement your own throttling mechanism. OW doesn’t 
come with throttling OOTB, so we had cases where thousands of connections were 
opened on our DB when we had high throughput (my post above might actually 
be relevant).
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1604288880216000?thread_ts=1604283859.214600&cid=C3TPCAQG1
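The warm-container caching pattern described above could look roughly like this in a Python action; `make_db_client` is a hypothetical factory standing in for a real driver’s connect call:

```python
# Sketch of caching a DB client across warm invocations of the same
# container: module-level state survives between runs until the container
# is recycled, so the connection cost is paid only on cold start.
_client = None

def get_client(make_db_client):
    global _client
    if _client is None:            # cold start: connect once
        _client = make_db_client()
    return _client                 # warm invocations reuse the same client

def main(params, make_db_client=lambda: object()):
    client = get_client(make_db_client)
    # ... run your query with `client` here ...
    return {"cached": client is get_client(make_db_client)}

calls = []
def fake_factory():
    calls.append("connect")
    return object()

main({}, fake_factory)
main({}, fake_factory)
assert calls == ["connect"]  # connected once, reused on the warm call
```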
----
2020-11-02 03:56:24 UTC - Joshua Dunham: In this case there is really only one 
request per day for this chain. The serverless use comes in another chain that 
takes each of the rows and scales up some workers.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1604289384216500?thread_ts=1604283859.214600&cid=C3TPCAQG1
----
2020-11-02 03:56:45 UTC - Joshua Dunham: Liked your post above, read it right 
before posting!
pray : Houman Kamali
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1604289405216700?thread_ts=1604283859.214600&cid=C3TPCAQG1
----
2020-11-02 03:57:46 UTC - Joshua Dunham: Do you have any tips for lang or 
approach to DB client? It's SQLServer which IMHO is probably the most difficult 
to work with out of the common ones. :confused:
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1604289466216900?thread_ts=1604283859.214600&cid=C3TPCAQG1
----
2020-11-02 06:28:02 UTC - Houman Kamali: One limitation is that each invocation 
has a runtime limit, IIRC it’s 600s? So that might be a limiting factor.

If you’re doing heavy number crunching, then maybe a compiled language that’s 
more performant?

If you’re making multiple calls to the DB, then maybe optimizing the pool size?

I haven’t worked with SQLServer, so if there are any specific considerations 
with that, I wouldn’t know, sorry
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1604298482217200?thread_ts=1604283859.214600&cid=C3TPCAQG1
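On the pool-size point, a toy illustration of bounding DB connections with a small per-container pool (the size and the connection factory are hypothetical, not a real driver API):

```python
# Tiny connection-pool sketch: a bounded queue caps how many connections
# one container can open, no matter how many calls the function makes.
import queue

class TinyPool:
    def __init__(self, make_conn, size=2):
        self._q = queue.Queue()
        for _ in range(size):          # open `size` connections up front
            self._q.put(make_conn())

    def acquire(self):
        return self._q.get()           # blocks while all connections are in use

    def release(self, conn):
        self._q.put(conn)

made = []
pool = TinyPool(lambda: made.append("conn") or object(), size=2)
c1, c2 = pool.acquire(), pool.acquire()
pool.release(c1)
assert len(made) == 2                  # never more than `size` connections
```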
----
2020-11-02 13:41:21 UTC - Joshua Dunham: No prob, thx for the tips! Is there 
any way I can turn off the 600s invocation time limit? It should finish when it 
finishes.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1604324481217500?thread_ts=1604283859.214600&cid=C3TPCAQG1
----
2020-11-02 14:54:04 UTC - Houman Kamali: AFAIK, there isn’t; maybe if you have 
a self-managed deployment?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1604328844217700?thread_ts=1604283859.214600&cid=C3TPCAQG1
----
2020-11-02 15:33:45 UTC - Joshua Dunham: I do manage the K8S based deployment. 
The other half of this proof of concept is to utilize it as an autoscaling 
cluster.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1604331225217900?thread_ts=1604283859.214600&cid=C3TPCAQG1
----