
I am very new to Spark and Cassandra. I am trying to run a simple Java program that inserts new rows into a Cassandra table using the spark-cassandra-connector provided by DataStax, but when I run it against my local Spark with the connector, the Spark cluster appears to be down.

I am running DSE on my laptop. Using Java, I am trying to save data to the Cassandra database through Spark.

It seems as if I need to fix my Spark configuration, because when I run the code below:

    Map<String, String> extra = new HashMap<String, String>();
    extra.put("city", "bangalore");
    extra.put("dept", "software");
    List<User> products = Arrays.asList(new User(1, "vamsi", extra));
    JavaRDD<User> productsRDD = sc.parallelize(products);
    javaFunctions(productsRDD, User.class).saveToCassandra("test", "users");
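
For reference, the snippet uses a JavaSparkContext named sc that is not shown. Below is a minimal sketch of how such a context is typically created for the spark-cassandra-connector; the application name is hypothetical, the master URL is taken from the log further down, and the spark.cassandra.connection.host value is an assumption (a local DSE node), not the actual configuration in use.

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class ContextSetup {
        public static JavaSparkContext createContext() {
            // Assumed values: the master URL matches the one in the error log below,
            // and the Cassandra contact point is the local DSE node.
            SparkConf conf = new SparkConf()
                    .setAppName("spark-cassandra-save-example")            // hypothetical app name
                    .setMaster("spark://127.0.0.1:7077")                   // Spark master URL from the log
                    .set("spark.cassandra.connection.host", "127.0.0.1");  // local Cassandra node (assumed)
            return new JavaSparkContext(conf);
        }
    }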

I get this error:

    16/03/26 20:57:31 INFO client.AppClient$ClientActor: Connecting to master spark://127.0.0.1:7077...
    16/03/26 20:57:44 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
    16/03/26 20:57:51 INFO client.AppClient$ClientActor: Connecting to master spark://127.0.0.1:7077...
    16/03/26 20:57:59 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
    16/03/26 20:58:11 ERROR client.AppClient$ClientActor: All masters are unresponsive! Giving up.
    16/03/26 20:58:11 ERROR cluster.SparkDeploySchedulerBackend: Spark cluster looks dead, giving up.
    16/03/26 20:58:11 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
    16/03/26 20:58:11 INFO scheduler.DAGScheduler: Failed to run runJob at RDDFunctions.scala:48
    Exception in thread "main" org.apache.spark.SparkException: Job aborted: Spark cluster looks down
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1020)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1018)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$abortStage(DAGScheduler.scala:1018)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:604)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:604)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:604)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$start$1$$anon$2$$anonfun$receive$1.applyOrElse(DAGScheduler.scala:190)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
        at akka.actor.ActorCell.invoke(ActorCell.scala:456)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
        at akka.dispatch.Mailbox.run(Mailbox.scala:219)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

Answer