Many of the operators will be auto-vectorised by the compiler because, in many cases, the C code unambiguously operates on contiguous values in memory.

I do think it would be interesting to study the generated code to see what operators are being vectorised.

One approach may be to try to compile the core with the Intel C compiler. Versions 2015 and later can output extremely thorough optimisation reports.

Alastair

Sent from my Sony Xperia™ smartphone



---- Xu,Wenjian wrote ----

Hi,

We know that MonetDB excels in applications where the database hot set can be largely held in main memory, since the physical operators (e.g., hashjoin, sort) are highly optimized for main memory. But it seems strange to me that these operators do not utilize SIMD, which is a performance-critical feature of modern CPUs. For example, MonetDB uses *timsort* as its underlying stable sort algorithm (do_ssort() in gdk/gdk_ssort_impl.h), but I cannot find any SIMD instructions there. Why doesn't MonetDB exploit SIMD features? Or did I miss something?

In the paper 'Vectorwise: Beyond Column Stores', the authors state that X100/Vectorwise uses SIMD instructions. So my follow-up question is whether any of X100/Vectorwise's technologies/features have been integrated into MonetDB's code base.

Thank you.

Best regards,
Xu, Wenjian