Spark "package.TreeNodeException" error in Python: "java.lang.RuntimeException: Couldn't find pythonUDF"
I am using PySpark 2.1 on Databricks.
I wrote a UDF to generate a unique uuid for each row of a PySpark DataFrame. The DataFrames I work with are relatively small, fewer than 10,000 rows, and should never grow beyond that. I know Spark has the built-in RDD functions zipWithIndex() and zipWithUniqueId() for generating row indexes, but I have been specifically asked to use uuids for this particular project. The UDF udf_insert_uuid works fine on a small dataset, but it appears to conflict with Spark's built-in subtract function. What is causing this error:
package.TreeNodeException: Binding attribute, tree: pythonUDF0#104830
Deeper in the driver stack trace, it also says:
Caused by: java.lang.RuntimeException: Couldn’t find pythonUDF0#104830
Here is the code I ran:
Create a function that generates unique ids
import pandas
from pyspark.sql.functions import *
from pyspark.sql.types import *
import uuid

# define a python function
def insert_uuid():
    user_created_uuid = str(uuid.uuid1())
    return user_created_uuid

# register the python function for use in dataframes
udf_insert_uuid = udf(insert_uuid, StringType())
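One caveat worth noting: uuid.uuid1() returns a different value on every call, so this UDF is nondeterministic, and Spark 2.1 gives you no way to declare that. A minimal sketch, assuming a Spark 2.3+ runtime (the Spark 2.1 used in this question does not have this method):

# Spark 2.3+ only: declare the UDF nondeterministic so the optimizer
# will not assume repeated calls return the same value
udf_insert_uuid = udf(insert_uuid, StringType()).asNondeterministic()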
Create a dataframe with 50 elements
import pandas
from pyspark.sql.functions import *
from pyspark.sql.types import *

list_of_numbers = range(1000, 1050)
temp_pandasDF = pandas.DataFrame(list_of_numbers, index=None)

sparkDF = (
    spark
    .createDataFrame(temp_pandasDF, ["data_points"])
    .withColumn("labels", when(col("data_points") < 1025, "a").otherwise("b"))  # "a" if data_points < 1025, else "b"
    .repartition("labels")
)
sparkDF.createOrReplaceTempView("temp_spark_table")

# add a unique id for each row
# the udf works fine in this line of code
sparkDF = sparkDF.withColumn("id", lit(udf_insert_uuid()))
sparkDF.show(20, False)
sparkDF output:
+-----------+------+------------------------------------+
|data_points|labels|id |
+-----------+------+------------------------------------+
|1029 |b |d3bb91e0-9cc8-11e7-9b70-00163e9986ba|
|1030 |b |d3bb95e6-9cc8-11e7-9b70-00163e9986ba|
|1035 |b |d3bb982a-9cc8-11e7-9b70-00163e9986ba|
|1036 |b |d3bb9a50-9cc8-11e7-9b70-00163e9986ba|
|1042 |b |d3bb9c6c-9cc8-11e7-9b70-00163e9986ba|
+-----------+------+------------------------------------+
only showing top 5 rows
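As an aside, wrapping the UDF call in lit() should be redundant here: applied to a Column, lit() returns that same Column, and udf_insert_uuid() already returns a Column. So the withColumn line above could presumably be written as:

# equivalent, since a UDF call already yields a Column
sparkDF = sparkDF.withColumn("id", udf_insert_uuid())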
Create another DF with different values than sparkDF
list_of_numbers = range(1025, 1075)
temp_pandasDF = pandas.DataFrame(list_of_numbers, index=None)

new_DF = (
    spark
    .createDataFrame(temp_pandasDF, ["data_points"])
    .withColumn("labels", when(col("data_points") < 1025, "a").otherwise("b"))  # "a" if data_points < 1025, else "b"
    .repartition("labels")
)
new_DF.show(5, False)
new_DF output:
+-----------+------+
|data_points|labels|
+-----------+------+
|1029 |b |
|1030 |b |
|1035 |b |
|1036 |b |
|1042 |b |
+-----------+------+
only showing top 5 rows
Compare the values in new_DF with those in sparkDF
values_not_in_new_DF = (new_DF.subtract(sparkDF.drop("id")))
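Since subtract() is a row-level set difference, values_not_in_new_DF should hold the rows of new_DF that do not appear in sparkDF — with the ranges above, data_points 1050 through 1074, all labelled "b". A quick sanity check:

# expected: 25 rows, data_points 1050-1074, every label "b"
values_not_in_new_DF.show(5, False)
print(values_not_in_new_DF.count())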
Add a uuid to each row with the UDF and display it
display(values_not_in_new_DF
    .withColumn("id", lit(udf_insert_uuid()))  # add a column of unique uuids
)
The following error results:
package.TreeNodeException: Binding attribute, tree: pythonUDF0#104830
org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Binding attribute, tree: pythonUDF0#104830
  at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:56)
  at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1.applyOrElse(BoundAttribute.scala:88)
  at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1.applyOrElse(BoundAttribute.scala:87)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:268)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:268)
  at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:267)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:273)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:273)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:307)
  at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:188)
  at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:305)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:273)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:257)
  at org.apache.spark.sql.catalyst.expressions.BindReferences$.bindReference(BoundAttribute.scala:87)
  at org.apache.spark.sql.execution.aggregate.HashAggregateExec$$anonfun$33.apply(HashAggregateExec.scala:473)
  at org.apache.spark.sql.execution.aggregate.HashAggregateExec$$anonfun$33.apply(HashAggregateExec.scala:472)
  at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
  at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
  at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
  at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
  at scala.collection.AbstractTraversable.map(Traversable.scala:105)
  at org.apache.spark.sql.execution.aggregate.HashAggregateExec.generateResultCode(HashAggregateExec.scala:472)
  at org.apache.spark.sql.execution.aggregate.HashAggregateExec.doProduceWithKeys(HashAggregateExec.scala:610)
  at org.apache.spark.sql.execution.aggregate.HashAggregateExec.doProduce(HashAggregateExec.scala:148)
  at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:83)
  at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:78)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
  at org.apache.spark.sql.execution.CodegenSupport$class.produce(WholeStageCodegenExec.scala:78)
  at org.apache.spark.sql.execution.aggregate.HashAggregateExec.produce(HashAggregateExec.scala:38)
  at org.apache.spark.sql.execution.WholeStageCodegenExec.doCodeGen(WholeStageCodegenExec.scala:313)
  at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:354)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
  at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:225)
  at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:308)
  at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
  at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:2807)
  at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2132)
  at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2132)
  at org.apache.spark.sql.Dataset$$anonfun$60.apply(Dataset.scala:2791)
  at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:87)
  at org.apache.spark.sql.execution.SQLExecution$.withFileAccessAudit(SQLExecution.scala:53)
  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:70)
  at org.apache.spark.sql.Dataset.withAction(Dataset.scala:2790)
  at org.apache.spark.sql.Dataset.head(Dataset.scala:2132)
  at org.apache.spark.sql.Dataset.take(Dataset.scala:2345)
  at com.databricks.backend.daemon.driver.OutputAggregator$.withOutputAggregation0(OutputAggregator.scala:81)
  at com.databricks.backend.daemon.driver.OutputAggregator$.withOutputAggregation(OutputAggregator.scala:42)
  at com.databricks.backend.daemon.driver.PythonDriverLocal$$anonfun$getResultBuffer$1.apply(PythonDriverLocal.scala:461)
  at com.databricks.backend.daemon.driver.PythonDriverLocal$$anonfun$getResultBuffer$1.apply(PythonDriverLocal.scala:441)
  at com.databricks.backend.daemon.driver.PythonDriverLocal.withInterpLock(PythonDriverLocal.scala:394)
  at com.databricks.backend.daemon.driver.PythonDriverLocal.getResultBuffer(PythonDriverLocal.scala:441)
  at com.databricks.backend.daemon.driver.PythonDriverLocal.com$databricks$backend$daemon$driver$PythonDriverLocal$$outputSuccess(PythonDriverLocal.scala:428)
  at com.databricks.backend.daemon.driver.PythonDriverLocal$$anonfun$repl$3.apply(PythonDriverLocal.scala:178)
  at com.databricks.backend.daemon.driver.PythonDriverLocal$$anonfun$repl$3.apply(PythonDriverLocal.scala:175)
  at com.databricks.backend.daemon.driver.PythonDriverLocal.withInterpLock(PythonDriverLocal.scala:394)
  at com.databricks.backend.daemon.driver.PythonDriverLocal.repl(PythonDriverLocal.scala:175)
  at com.databricks.backend.daemon.driver.DriverLocal$$anonfun$execute$2.apply(DriverLocal.scala:230)
  at com.databricks.backend.daemon.driver.DriverLocal$$anonfun$execute$2.apply(DriverLocal.scala:211)
  at com.databricks.logging.UsageLogging$$anonfun$withAttributionContext$1.apply(UsageLogging.scala:173)
  at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
  at com.databricks.logging.UsageLogging$class.withAttributionContext(UsageLogging.scala:168)
  at com.databricks.backend.daemon.driver.DriverLocal.withAttributionContext(DriverLocal.scala:39)
  at com.databricks.logging.UsageLogging$class.withAttributionTags(UsageLogging.scala:206)
  at com.databricks.backend.daemon.driver.DriverLocal.withAttributionTags(DriverLocal.scala:39)
  at com.databricks.backend.daemon.driver.DriverLocal.execute(DriverLocal.scala:211)
  at com.databricks.backend.daemon.driver.DriverWrapper$$anonfun$tryExecutingCommand$2.apply(DriverWrapper.scala:589)
  at com.databricks.backend.daemon.driver.DriverWrapper$$anonfun$tryExecutingCommand$2.apply(DriverWrapper.scala:589)
  at scala.util.Try$.apply(Try.scala:161)
  at com.databricks.backend.daemon.driver.DriverWrapper.tryExecutingCommand(DriverWrapper.scala:584)
  at com.databricks.backend.daemon.driver.DriverWrapper.executeCommand(DriverWrapper.scala:488)
  at com.databricks.backend.daemon.driver.DriverWrapper.runInnerLoop(DriverWrapper.scala:391)
  at com.databricks.backend.daemon.driver.DriverWrapper.runInner(DriverWrapper.scala:348)
  at com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:215)
  at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Couldn't find pythonUDF0#104830 in [data_points#104799L,labels#104802]
  at scala.sys.package$.error(package.scala:27)
  at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1$$anonfun$applyOrElse$1.apply(BoundAttribute.scala:94)
  at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1$$anonfun$applyOrElse$1.apply(BoundAttribute.scala:88)
  at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:52)
  ... 82 more
Solution:
I ran into the same error when running this script. The only way I found to make it work was to pass a column to the UDF rather than calling it with no arguments. (Judging from the stack trace, the subtract is planned as a HashAggregateExec, and the argument-less pythonUDF0 expression cannot be bound to the aggregate's output attributes; giving the UDF a real input column appears to avoid that.)
def insert_uuid(col):
    user_created_uuid = str(uuid.uuid1())
    return user_created_uuid

udf_insert_uuid = udf(insert_uuid, StringType())
Then call it on a column, such as labels:
values_not_in_new_DF\
    .withColumn("id", udf_insert_uuid("labels"))\
    .show()
There is no need to use lit() here; the UDF call itself already produces a Column.
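For reference, a minimal end-to-end sketch of the fix, on the assumption that the passed-in column is ignored inside the function and exists only so the Python UDF expression has an input attribute to bind to:

import uuid
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

# the argument is unused; it only gives the UDF expression a real input
# column to bind against when the plan contains the subtract/aggregate
def insert_uuid(col):
    return str(uuid.uuid1())

udf_insert_uuid = udf(insert_uuid, StringType())

values_not_in_new_DF = new_DF.subtract(sparkDF.drop("id"))
values_not_in_new_DF.withColumn("id", udf_insert_uuid("labels")).show(5, False)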