Relates to: https://issues.apache.org/jira/browse/SPARK-17810
I was trying to run the example from https://wiki.apache.org/tika/AgeDetectionParser on Windows 8.1, but it throws an exception:
```
Exception in thread "main" java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: file:C:/examples/AgePredictor/spark-warehouse
	at org.apache.hadoop.fs.Path.initialize(Path.java:206)
	at org.apache.hadoop.fs.Path.<init>(Path.java:172)
	at org.apache.spark.sql.catalyst.catalog.SessionCatalog.makeQualifiedPath(SessionCatalog.scala:114)
	at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createDatabase(SessionCatalog.scala:145)
	at org.apache.spark.sql.catalyst.catalog.SessionCatalog.<init>(SessionCatalog.scala:89)
	at org.apache.spark.sql.internal.SessionState.catalog$lzycompute(SessionState.scala:95)
	at org.apache.spark.sql.internal.SessionState.catalog(SessionState.scala:95)
	at org.apache.spark.sql.internal.SessionState$$anon$1.<init>(SessionState.scala:112)
	at org.apache.spark.sql.internal.SessionState.analyzer$lzycompute(SessionState.scala:112)
	at org.apache.spark.sql.internal.SessionState.analyzer(SessionState.scala:111)
	at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:49)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
	at org.apache.spark.sql.SparkSession.createDataFrame(SparkSession.scala:328)
	at edu.usc.irds.agepredictor.authorage.AgePredicterLocal.predictAge(AgePredicterLocal.java:108)
	at edu.usc.irds.agepredictor.authorage.AgePredicterLocal.main(AgePredicterLocal.java:141)
Caused by: java.net.URISyntaxException: Relative path in absolute URI: file:C:/examples/AgePredictor/spark-warehouse
	at java.net.URI.checkPath(Unknown Source)
	at java.net.URI.<init>(Unknown Source)
	at org.apache.hadoop.fs.Path.initialize(Path.java:203)
	... 14 more
```
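The root cause can be reproduced without Spark or Hadoop at all: on Windows, Hadoop's `Path` ends up passing a path like `C:/examples/AgePredictor/spark-warehouse` to a multi-argument `java.net.URI` constructor, and because the path has no leading `/`, the constructor rejects it as a relative path inside an absolute (scheme-qualified) URI. A minimal sketch (paths taken from the trace, purely for illustration):

```java
import java.net.URI;
import java.net.URISyntaxException;

public class UriRepro {
    public static void main(String[] args) {
        // Windows-style path without a leading slash: rejected, because a URI
        // with a scheme ("file") must have an absolute path.
        try {
            new URI("file", null, "C:/examples/AgePredictor/spark-warehouse", null);
            System.out.println("no exception");
        } catch (URISyntaxException e) {
            System.out.println(e.getReason()); // "Relative path in absolute URI"
        }

        // The same path with a leading slash parses fine, which is why giving
        // Spark an explicit warehouse URI avoids the error.
        try {
            URI ok = new URI("file", null, "/C:/examples/AgePredictor/spark-warehouse", null);
            System.out.println(ok);
        } catch (URISyntaxException e) {
            System.out.println("unexpected: " + e.getReason());
        }
    }
}
```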
A simple solution from https://issues.apache.org/jira/browse/SPARK-17810 is to override the Spark warehouse directory with the `spark.sql.warehouse.dir` option, e.g. as a JVM system property:

```
-Dspark.sql.warehouse.dir="file:/tmp/spark-warehouse"
```
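The same setting can also be applied programmatically when the `SparkSession` is built, instead of via a `-D` flag. A sketch, assuming a Spark 2.x `SparkSession` builder (the app name and path are illustrative, not taken from the example's source):

```java
import org.apache.spark.sql.SparkSession;

// Sketch: override the warehouse location in code. Note the leading slash in
// file:///C:/..., which makes the URI path absolute even for a Windows drive letter.
SparkSession spark = SparkSession.builder()
        .appName("AgePredictor")   // illustrative app name
        .master("local[*]")
        .config("spark.sql.warehouse.dir", "file:///C:/tmp/spark-warehouse")
        .getOrCreate();
```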