Subject: [jira] [Comment Edited] (SPARK-14172) Hive table partition predicate not passed down correctly
From: Saktheesh Balaraj (JIRA) (ji@apache.org)
Date: Oct 3, 2017 3:07:00 am
List: org.apache.spark.issues

[ https://issues.apache.org/jira/browse/SPARK-14172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16189481#comment-16189481 ]

Saktheesh Balaraj edited comment on SPARK-14172 at 10/3/17 10:06 AM:

---------------------------------------------------------------------

A similar problem is observed when joining two Hive tables on their partition columns in Spark.

*Example:* Table A has 1000 partitions (a date partition with an hour sub-partition) and Table B has 2 partitions. When the two tables are joined on the partition columns, Spark does a full table scan of Table A, i.e. it reads all 1000 partitions instead of pruning Table A down to the 2 partitions that match Table B.

{noformat}
sqlContext.sql("select * from tableA a, tableB b where a.trans_date=b.trans_date and a.trans_hour=b.trans_hour")
{noformat}

(Here trans_date is the partition column and trans_hour is the sub-partition column in both tables.)

*Workaround:* select the 2 partition values from Table B first, then use them to look up Table A.

Step 1:
{noformat}
select trans_date, trans_hour from tableB
{noformat}

Step 2:
{noformat}
select * from tableA where trans_date = <substitute from step1> and trans_hour = <substitute from step1>
{noformat}
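The two-step workaround above can also be sketched in driver code. This is a minimal sketch, not from the issue itself: it assumes Spark 1.6 with a HiveContext bound to {{sqlContext}} and the tableA/tableB column names used above.

{code}
// Step 1: collect the (trans_date, trans_hour) pairs actually present in tableB.
val partitions = sqlContext
  .sql("select distinct trans_date, trans_hour from tableB")
  .collect()

// Step 2: turn those pairs into a literal predicate so the partition filter
// reaches the HiveTableScan directly and only the matching partitions of
// tableA are read.
val predicate = partitions
  .map(r => s"(trans_date = '${r.get(0)}' and trans_hour = '${r.get(1)}')")
  .mkString(" or ")

val result = sqlContext.sql(s"select * from tableA where $predicate")
{code}

Because the predicate is a literal at the time the second query is planned, partition pruning can apply even though a join on partition columns would not be pruned.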


Hive table partition predicate not passed down correctly

--------------------------------------------------------

Key: SPARK-14172
URL: https://issues.apache.org/jira/browse/SPARK-14172
Project: Spark
Issue Type: Bug
Components: SQL
Affects Versions: 1.6.1
Reporter: Yingji Zhang
Priority: Critical

When the Hive SQL contains nondeterministic expressions, the Spark plan does not push the partition predicate down to the HiveTableScan. For example:

{code}
-- consider the following query, which uses a random function to sample rows
SELECT * FROM table_a WHERE partition_col = 'some_value' AND rand() < 0.01;
{code}

Because the partition predicate is not pushed down to the HiveTableScan, the query ends up scanning the data from all partitions of the table.
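One way to restate such a query, suggested here as a sketch rather than a fix confirmed in this issue, is to isolate the deterministic partition filter in a subquery so it can be pushed down on its own, keeping the nondeterministic sampling predicate outside:

{code}
-- Hypothetical rewrite (not from the issue itself): the partition filter sits
-- directly on table_a inside the subquery, so pruning can apply there, while
-- rand() < 0.01 is evaluated only on the rows of the pruned partitions.
SELECT * FROM (
  SELECT * FROM table_a WHERE partition_col = 'some_value'
) t
WHERE rand() < 0.01;
{code}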