On Tue, 4 Jul 2017 at 23:22 Chris Travers <chris.trav...@gmail.com> wrote:

> Having done a lot of SQL optimisation stuff, I have doubts that this is
> possible.  The problem is that it is much easier to go from a declarative
> to an imperative plan than it is to go the other way.  In fact sometimes we
> use SQL the way your first code works and then it is often a problem.
>
> For example, consider the difference between an EXISTS and an IN query, or
> between an INNER JOIN and a LATERAL JOIN.  PostgreSQL's optimiser is
> amazing at identifying cases where these are equivalent and planning
> accordingly, but it is extremely easy to get just outside the envelope
> where the optimiser gives up and has to default back to an imperative
> interpretation of these.  Proving that two imperative approaches are
> equivalent is a lot harder than proving that two different imperative
> approaches implement the same declarative request.  In other words, going
> imperative -> declarative strikes me as a far, far harder problem than the
> other way.
>
> Also I have done a little bit of work on Apache Spark and there it is
> extremely important to understand the imperative side of the data flow in
> that case (what is partitioned and what is not).
>

I cannot argue these points with you, but Fortress is a good example of
imperative-looking code that translates to a functional/declarative core,
as indeed is monadic or applicative code. LINQ is a more recent and
widespread example -- though not encompassing an entire language -- of
something that has an imperative form while being declarative under the
hood. Scala's for comprehensions -- more or less monad comprehensions -- are
another.
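
To make that concrete, here is a minimal Scala sketch of the
for-comprehension case (the case classes, data, and the lookupOrders helper
are invented purely for illustration):

object ForDesugarSketch extends App {
  // Invented domain types, purely for illustration.
  case class User(id: Int, name: String)
  case class Order(userId: Int, total: Double)

  val users  = List(User(1, "Ada"), User(2, "Grace"))
  val orders = List(Order(1, 10.0), Order(1, 5.0), Order(2, 7.5))

  // Hypothetical helper: all orders belonging to a user.
  def lookupOrders(u: User): List[Order] =
    orders.filter(_.userId == u.id)

  // Surface syntax: reads like nested imperative loops.
  val report = for {
    u <- users
    o <- lookupOrders(u)
  } yield (u.name, o.total)

  // What the compiler actually emits: a pipeline of flatMap/map calls,
  // i.e. a monad comprehension -- the declarative/functional core.
  val desugared =
    users.flatMap(u => lookupOrders(u).map(o => (u.name, o.total)))

  assert(report == desugared)
  println(report)
}

The same rewrite is what makes the syntax work over Option, Future, parser
combinators, and so on: anything that exposes map and flatMap.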

With regard to Spark, I assume for comprehensions are an important part of
the interface?
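
My guess, sketched below (assuming a local SparkSession; the data and names
are made up), is that they work simply because an RDD exposes map and
flatMap, so an ordinary for comprehension compiles against it while Spark
still sees a plain flatMap over the partitioned data:

import org.apache.spark.sql.SparkSession

object SparkForSketch {
  def main(args: Array[String]): Unit = {
    // Local session purely for illustration.
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("for-comprehension-sketch")
      .getOrCreate()

    val numbers = spark.sparkContext.parallelize(Seq(1, 2, 3, 4))

    // Imperative-looking surface syntax over a distributed collection...
    val pairs = for {
      n <- numbers            // generator over the RDD
      m <- Seq(n, n * 10)     // generator over a local Seq
    } yield (n, m)

    // ...which desugars to numbers.flatMap(n => Seq(n, n * 10).map(m => (n, m))),
    // so Spark still plans an ordinary flatMap over the partitioned data.
    pairs.collect().foreach(println)

    spark.stop()
  }
}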

Kind Regards,
  Jason
