Today I ran a Spark job after adjusting its launch parameters.
About halfway through an otherwise normal run it failed with:
Job aborted due to stage failure: Task 20 in stage 3.0 failed 1 times, most recent failure: Lost task 20.0 in stage 3.0 (TID 240, localhost, executor driver): com.alibaba.druid.pool.GetConnectionTimeoutException: wait millis 60000, active 20, maxActive 20, creating 0
at com.alibaba.druid.pool.DruidDataSource.getConnectionInternal(DruidDataSource.java:1619)
at com.alibaba.druid.pool.DruidDataSource.getConnectionDirect(DruidDataSource.java:1337)
at com.alibaba.druid.pool.DruidDataSource.getConnection(DruidDataSource.java:1317)
at com.alibaba.druid.pool.DruidDataSource.getConnection(DruidDataSource.java:1307)
at com.alibaba.druid.pool.DruidDataSource.getConnection(DruidDataSource.java:109)
at com.aisino.util.DataSourceUtil.getConnection(DataSourceUtil.java:48)
at com.aisino.service.DwdDataService$$anonfun$monthlyStatistics_1$1.apply(DwdDataService.scala:67)
at
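For context (this explanation is mine, not the original post's): the numbers in the exception map directly onto Druid's pool settings. maxActive caps the number of checked-out connections, and maxWait bounds how long getConnection() blocks before throwing GetConnectionTimeoutException; "active 20, maxActive 20, creating 0" means every pooled connection was already in use and no new one could be created, so the request timed out after 60 s. A hedged sketch of the two relevant settings, assuming a Properties-based configuration as accepted by Druid's DruidDataSourceFactory:

```properties
# Hypothetical Druid pool config; the values 20 / 60000 match the exception above.
maxActive=20    # hard cap on checked-out connections ("active 20, maxActive 20")
maxWait=60000   # ms getConnection() blocks before GetConnectionTimeoutException
```

Raising maxActive only helps if the database can sustain more connections; otherwise the fix is to reduce how many tasks hold a connection at once.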
After investigating, the job ran normally once --executor-memory was reduced.
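The post doesn't explain why a smaller --executor-memory helped, but pool exhaustion in Spark jobs usually comes down to how many concurrent tasks hold a connection, and for how long. A common mitigation is to check out one connection per partition and return it promptly, rather than one per record. A sketch, not the author's code: DataSourceUtil.getConnection comes from the stack trace, while the RDD name, table, and schema below are hypothetical.

```scala
// Sketch: one pooled connection per partition, returned in a finally block.
// Assumes an RDD[(String, Long)] named `rows` and the DataSourceUtil from the trace.
rows.foreachPartition { it =>
  val conn = DataSourceUtil.getConnection()   // one checkout per task, not per row
  try {
    // hypothetical target table
    val stmt = conn.prepareStatement("INSERT INTO dwd_stats(k, v) VALUES (?, ?)")
    it.foreach { case (k, v) =>
      stmt.setString(1, k)
      stmt.setLong(2, v)
      stmt.addBatch()
    }
    stmt.executeBatch()
  } finally {
    conn.close()   // Druid's close() returns the connection to the pool
  }
}
```

With this pattern the number of simultaneously held connections is bounded by the number of concurrently running tasks, which is what tuning executor resources effectively changed.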
Source: https://m.elefans.com/xitong/1728785251a1173241.html