Hi Mich,
thanks a ton for your kind response, but this error was happening because
the Derby classes were being loaded more than once.

In my second email I mentioned the steps that I took in order to resolve
the issue.
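
For anyone who hits the same thing but builds with sbt rather than by importing
$SPARK_HOME/lib/*.jar by hand, a rough sketch of an equivalent fix (the Spark
coordinates and version below are illustrative, not the exact ones from this
project) is to exclude Derby from whichever artifacts pull it in, so that only
one copy of its classes can reach the classpath:

// build.sbt (illustrative): keep a single copy of Derby on the classpath
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.6.0",
  ("org.apache.spark" %% "spark-hive" % "1.6.0").exclude("org.apache.derby", "derby")
)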


Thanks and Regards,
Gourav

On Tue, Mar 1, 2016 at 8:54 PM, Mich Talebzadeh <mich.talebza...@gmail.com>
wrote:

> Hi Gourav,
>
> Did you modify the following line in your code
>
>  val conf = new SparkConf().setAppName("IdeaProjects").setMaster("local[*]").set("spark.driver.allowMultipleContexts", "true")
>
> I checked every line in your code; they all work fine in spark-shell with the
> following package added
>
> spark-shell --master spark://50.140.197.217:7077 --packages amplab:succinct:0.1.6
>
> Can you explain how it worked?
>
> Thanks
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
>
> On 1 March 2016 at 18:20, Gourav Sengupta <gourav.sengu...@gmail.com>
> wrote:
>
>> Hi,
>>
>> FIRST ATTEMPT:
>> Used build.sbt in IntelliJ, but it gave me nightmares with several
>> incompatibility and library issues, even though the sbt version was
>> compatible with the Scala version.
>>
>> SECOND ATTEMPT:
>> Created a new project with no entries in the build.sbt file and imported all
>> the jars in $SPARK_HOME/lib/*.jar into the project. This started causing the
>> issues I reported earlier.
>>
>> FINAL ATTEMPT:
>> Removed from the imported dependencies all the jars whose names contained the
>> word "derby", and this resolved the issue (a quick classpath check is sketched
>> below).
>>
>> Please note that the following jars were included in the library folder in
>> addition to the ones usually supplied with the Spark distribution:
>> 1. ojdbc7.jar
>> 2. spark-csv***jar file
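>>
>> As a quick way to verify that only one copy of Derby is left on the classpath,
>> here is a minimal diagnostic sketch (not part of the project code; it looks up
>> a well-known Derby class, org.apache.derby.jdbc.EmbeddedDriver):
>>
>> import scala.collection.JavaConverters._
>>
>> // every URL printed is a separate classpath location the class can be loaded
>> // from; more than one line means Derby is still duplicated
>> getClass.getClassLoader
>>   .getResources("org/apache/derby/jdbc/EmbeddedDriver.class")
>>   .asScala
>>   .foreach(println)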
>>
>>
>> Regards,
>> Gourav Sengupta
>>
>> On Tue, Mar 1, 2016 at 5:19 PM, Gourav Sengupta <
>> gourav.sengu...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I am getting the error "java.lang.SecurityException: sealing violation: can't
>>> seal package org.apache.derby.impl.services.locks: already loaded" after
>>> running the following code in Scala.
>>>
>>> I do not have any other instances of SparkContext running on my system.
>>>
>>> I would be grateful if anyone could kindly help me out.
>>>
>>>
>>> Environment:
>>> Spark: 1.6
>>> OS: Mac OS X
>>>
>>> ------------
>>>
>>> import org.apache.spark.SparkContext
>>> import org.apache.spark.SparkConf
>>> import org.apache.spark.sql.Row
>>> import org.apache.spark.sql.hive.HiveContext
>>> import org.apache.spark.sql.types._
>>> import org.apache.spark.sql.SQLContext
>>>
>>> // Import SuccinctRDD
>>> import edu.berkeley.cs.succinct._
>>>
>>> object test1 {
>>>   def main(args: Array[String]) {
>>>     //the below line returns nothing
>>>     println(SparkContext.jarOfClass(this.getClass).toString())
>>>     val logFile = "/tmp/README.md" // Should be some file on your system
>>>
>>>     val conf = new SparkConf().setAppName("IdeaProjects").setMaster("local[*]")
>>>     val sc = new SparkContext(conf)
>>>     val logData = sc.textFile(logFile, 2).cache()
>>>     val numAs = logData.filter(line => line.contains("a")).count()
>>>     val numBs = logData.filter(line => line.contains("b")).count()
>>>     println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
>>>
>>>
>>>     // Create a Spark RDD as a collection of articles; sc is the SparkContext
>>>     val articlesRDD = sc.textFile("/tmp/README.md").map(_.getBytes)
>>>
>>>     // Compress the Spark RDD into a Succinct Spark RDD, and persist it in memory.
>>>     // Note that this is a time-consuming step (usually at 8GB/hour/core)
>>>     // since the data needs to be compressed.
>>>     // We are actively working on making this step faster.
>>>     val succinctRDD = articlesRDD.succinct.persist()
>>>
>>>
>>>     // SuccinctRDD supports a set of powerful primitives directly on the compressed RDD.
>>>     // Let us start by counting the number of occurrences of "the" in the file.
>>>     val count = succinctRDD.count("the")
>>>
>>>     // Now suppose we want to find all offsets in the collection at which "and" occurs,
>>>     // and create an RDD containing all the resulting offsets
>>>     val offsetsRDD = succinctRDD.search("and")
>>>
>>>     // Let us look at the first ten results in the above RDD
>>>     val offsets = offsetsRDD.take(10)
>>>
>>>     // Finally, let us extract 20 bytes before and after one of the occurrences of "and"
>>>     val offset = offsets(0)
>>>     val data = succinctRDD.extract(offset - 20, 40)
>>>
>>>     println(data)
>>>     println(">>>")
>>>
>>>
>>>     // Create a schema
>>>     val citySchema = StructType(Seq(
>>>       StructField("Name", StringType, false),
>>>       StructField("Length", IntegerType, true),
>>>       StructField("Area", DoubleType, false),
>>>       StructField("Airport", BooleanType, true)))
>>>
>>>     // Create an RDD of Rows with some data
>>>     val cityRDD = sc.parallelize(Seq(
>>>       Row("San Francisco", 12, 44.52, true),
>>>       Row("Palo Alto", 12, 22.33, false),
>>>       Row("Munich", 8, 3.14, true)))
>>>
>>>
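>>>     // HiveContext uses an embedded Derby metastore by default, so a duplicated
>>>     // derby jar on the classpath can surface here as the "sealing violation:
>>>     // ... already loaded" SecurityException.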
>>>     val hiveContext = new HiveContext(sc)
>>>
>>>     //val sqlContext = new org.apache.spark.sql.SQLContext(sc)
>>>
>>>   }
>>> }
>>>
>>>
>>> -------------
>>>
>>>
>>>
>>> Regards,
>>> Gourav Sengupta
>>>
>>
>>
>
