
Hi,

I was trying out selection queries with varying selectivity factors on a 1-million-tuple relation. The predicates selected 0.1%, 1%, and 10% of the entries in the column, and the column's data type is int. What I'm seeing is that execution time increases as the selectivity factor goes from 0.1% to 1% to 10%. I'm not sure why this is happening, though, considering it's a column-store system. Should this be attributed to an increase in CPU time, disk I/Os, or some other factor? My understanding of column-store systems is that execution time should remain constant across varying selectivity factors on the same column.

Any help would be appreciated.

Thanks,
Mayank

Mayank Maheshwari
mayank@cs.wisc.edu
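
To illustrate the shape of the experiment I mean (this is just a minimal in-memory Python sketch, not my actual system or queries; the column values and thresholds are made up), note that even with a full-column scan whose cost is fixed, the work to materialize the qualifying tuples grows with selectivity:

```python
import time

N = 1_000_000
col = list(range(N))  # hypothetical int column, values 0..N-1

def select(threshold):
    # Full column scan: scan cost is the same for every predicate,
    # but the materialized result list grows with selectivity.
    return [v for v in col if v < threshold]

for pct in (0.001, 0.01, 0.10):
    threshold = int(N * pct)
    t0 = time.perf_counter()
    result = select(threshold)
    elapsed = time.perf_counter() - t0
    print(f"selectivity {pct:.1%}: {len(result)} tuples in {elapsed:.4f}s")
```

On my machine the per-query time still creeps up with selectivity even though every run scans the same column, which is the behavior I'm asking about.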