Bhattula Rajesh Kumar
> To: Richard Hillegas/San Francisco/IBM@IBMUS
> Cc: "u...@spark.incubator.apache.org", "user@spark.apache.org"
>
> Date: 11/05/2015 06:35 PM
> Subject: Re: Spark sql jdbc fails for Oracle NUMBER type columns
>
> Hi Richard,
> Th
Or you may be referring to
https://issues.apache.org/jira/browse/SPARK-10648. That issue has a couple
pull requests but I think that the limited bandwidth of the committers
still applies.
Thanks,
Rick
Richard Hillegas/San Francisco/IBM@IBMUS wrote on 11/05/2015 09:16:42 AM:
> From: Richard Hillegas
Hi Rajesh,
I think that you may be referring to
https://issues.apache.org/jira/browse/SPARK-10909. A pull request on that
issue was submitted more than a month ago but it has not been committed. I
think that the committers are busy working on issues which were targeted
for 1.6 and I doubt that th
Hi Kishor,
Spark doesn't currently support subqueries in the WHERE clause. However, it
looks as though someone is working on this right now:
https://issues.apache.org/jira/browse/SPARK-4226
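Until that work lands, a common workaround is to rewrite the WHERE-clause subquery as a join. A rough sketch in the spark shell, where the table and column names are illustrative placeholders, not taken from this thread:

```scala
// Instead of the unsupported form:
//   SELECT * FROM orders
//   WHERE customer_id IN (SELECT id FROM customers WHERE country = 'US')
// express the subquery as a join (orders and customers are assumed to be
// registered as temporary tables):
val result = sqlContext.sql("""
  SELECT o.*
  FROM orders o
  JOIN customers c ON o.customer_id = c.id
  WHERE c.country = 'US'
""")
```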
Hope this helps,
Rick Hillegas
Kishor Bachhav wrote on 10/28/2015 05:52:50 AM:
> From: Kishor Bachhav
Note that embedded Derby supports multiple, simultaneous connections, that
is, multiple simultaneous users. But a Derby database is owned by the
process which boots it. Only one process can boot a Derby database at a
given time. The creation of multiple SQL contexts must be spawning multiple
attempts to boot the same database.
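One way around the single-booting-process restriction is to run the Derby network server, so that every process connects as a client instead of trying to boot the database itself. A sketch, where the database path is a hypothetical placeholder (1527 is Derby's default server port):

```scala
// Embedded URL: the connecting JVM boots, and therefore owns, the database.
// Only one process at a time can use this form against a given database.
val embeddedUrl = "jdbc:derby:/path/to/mydb"

// Client URL: the Derby network server owns the database, and any number of
// processes can connect through it. Start the server separately, e.g.:
//   java -jar derbyrun.jar server start
val clientUrl = "jdbc:derby://localhost:1527//path/to/mydb"

val conn = java.sql.DriverManager.getConnection(clientUrl)
```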
As an academic aside, note that all datatypes are nullable according to the
SQL Standard. NOT NULL is modelled in the Standard as a constraint on data
values, not as a parallel universe of special data types. However, very few
databases implement NOT NULL via integrity constraints. Instead, almost all
of them treat nullability as part of the column's definition.
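Spark itself takes the "part of the column" view: nullability is a per-field flag in the schema, not a distinct data type and not an integrity constraint. A minimal illustration using Spark's schema API:

```scala
import org.apache.spark.sql.types._

// Nullability is just a boolean flag on each StructField.
val schema = StructType(Seq(
  StructField("id", IntegerType, nullable = false),
  StructField("name", StringType, nullable = true)
))
```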
Hi Jeff,
Hard to say what's going on. I have had problems subscribing to the Apache
lists in the past. My problems, which may be different than yours, were
caused by replying to the confirmation request from a different email
account than the account I was trying to subscribe from. It was easy fo
A crude workaround may be to run your spark shell with a sudo command.
Hope this helps,
Rick Hillegas
Sourav Mazumder wrote on 10/15/2015 09:59:02 AM:
> From: Sourav Mazumder
> To: user
> Date: 10/15/2015 09:59 AM
> Subject: SQL Context error in 1.5.1 - any work around ?
>
> I keep on gettin
Hi Ravi,
If you build Spark with Hive support, then your sqlContext variable will be
an instance of HiveContext and you will enjoy the full capabilities of the
Hive query language rather than the more limited capabilities of Spark SQL.
However, even Hive QL does not support the OFFSET clause, at
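Until OFFSET is supported, one crude workaround is to number the rows yourself and filter on the index. A sketch, assuming a hypothetical DataFrame `df` that has already been sorted into a deterministic order (zipWithIndex assigns indices in partition order, so ordering matters):

```scala
// Hypothetical page boundaries; df is assumed to be sorted already.
val offset = 100L
val limit = 20L

val page = df.rdd
  .zipWithIndex()                                    // pair each Row with its position
  .filter { case (_, i) => i >= offset && i < offset + limit }
  .map { case (row, _) => row }                      // drop the index again
```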
Hi Akhandeshi,
It may be that you are not seeing your own posts because you are sending
from a gmail account. See for instance
https://support.google.com/a/answer/1703601?hl=en
Hope this helps,
Rick Hillegas
STSM, IBM Analytics, Platform - IBM USA
akhandeshi wrote on 10/07/2015 08:10:32 AM:
Hi Ruslan,
Here is some sample code which writes a DataFrame to a table in a Derby
database:
import org.apache.spark.sql._
import org.apache.spark.sql.types._
val binaryVal = Array[Byte] ( 1, 2, 3, 4 )
val timestampVal = java.sql.Timestamp.valueOf("1996-01-01 03:30:36")
val dateVal = java.sql.Date.valueOf("1996-01-01") // date literal reconstructed; the original value was truncated
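The rest of the example is truncated above. A hedged reconstruction of how such values might be assembled into a DataFrame and written over JDBC (the table name, URL, and schema below are assumptions, not the original message's code):

```scala
// Illustrative continuation; connection values and names are placeholders.
import java.util.Properties

val schema = StructType(Seq(
  StructField("bin_col", BinaryType, true),
  StructField("ts_col", TimestampType, true),
  StructField("date_col", DateType, true)
))
val rows = sc.parallelize(Seq(Row(binaryVal, timestampVal, dateVal)))
val df = sqlContext.createDataFrame(rows, schema)

// Write to an in-memory Derby database (created on first connection).
df.write.jdbc("jdbc:derby:memory:demo;create=true", "MY_TABLE", new Properties())
```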
Hi Sukesh,
To unsubscribe from the dev list, please send a message to
dev-unsubscr...@spark.apache.org. To unsubscribe from the user list, please
send a message to user-unsubscr...@spark.apache.org. Please see:
http://spark.apache.org/community.html#mailing-lists.
Thanks,
-Rick
sukesh kumar wrote
Hi Ntale,
To unsubscribe from the user list, please send a message to
user-unsubscr...@spark.apache.org as described here:
http://spark.apache.org/community.html#mailing-lists.
Thanks,
-Rick
Ntale Lukama wrote on 09/23/2015 04:34:48 AM:
> From: Ntale Lukama
> To: user
> Date: 09/23/2015 04:
For what it's worth, I get the expected result that "filter" behaves like
"group by" when I run the same experiment against a DataFrame which was
loaded from a relational store:
import org.apache.spark.sql._
import org.apache.spark.sql.types._
val df = sqlContext.read.format("jdbc").options(
  // connection values are illustrative placeholders; the originals were truncated
  Map(
    "url" -> "jdbc:derby:memory:demo;create=true",
    "dbtable" -> "MY_TABLE",
    "driver" -> "org.apache.derby.jdbc.EmbeddedDriver"
  )).load()
To unsubscribe from the user list, please send a message to
user-unsubscr...@spark.apache.org as described here:
http://spark.apache.org/community.html#mailing-lists.
Thanks,
-Rick
The latest Derby SQL Reference manual (version 10.11) can be found here:
https://db.apache.org/derby/docs/10.11/ref/index.html. It is, indeed, very
useful to have a comprehensive reference guide. The Derby build scripts can
also produce a BNF description of the grammar--but that is not part of the