Another alternative we have considered is to store the UUID split into two columns of type BIGINT (UUID1 holding the 64 most significant bits of the UUID, and UUID2 holding the 64 least significant bits) and to do all our queries against both halves, along the lines of SELECT * FROM t WHERE UUID1 = <high half> AND UUID2 = <low half>.

On a 64-bit system this still represents a 64-bit storage penalty per value compared to a single column of type VARBINARY(16), but it is still a 64-bit saving compared to storing the UUID as a 32-byte string (separators removed):

Alternative 1. VARBINARY(16): storage = 64-bit OID + 128-bit value = 192 bits total
Alternative 2. BIGINT x 2: storage = 64-bit OID + 64-bit value + 64-bit OID + 64-bit value = 256 bits total
Alternative 3. VARCHAR(32): storage = 64-bit OID + 256-bit value = 320 bits total

Would there be a significant performance penalty for queries on the UUID1/UUID2 pair as compared to a single VARCHAR(32) column?

Thanks
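For concreteness, a minimal sketch of the two-column scheme (the table name "objects" and the parameter placeholders are only illustrative):

    CREATE TABLE objects (
        uuid1 BIGINT NOT NULL,     -- 64 most significant bits of the UUID
        uuid2 BIGINT NOT NULL,     -- 64 least significant bits of the UUID
        PRIMARY KEY (uuid1, uuid2)
    );

    -- every equality lookup must supply both halves
    SELECT * FROM objects
    WHERE uuid1 = ? AND uuid2 = ?;

On Sunday 17 February 2008 10:16:49 McKennirey.Matthew wrote: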
In our application we use a UUID (128 bits) to identify objects. Many of our columns are simply storing these UUIDs.
At the moment our only alternative seems to be to cast these 16-byte binary strings as character strings, increasing the size to 36 bytes (or 32 bytes if we remove the '-' separators). This has obvious drawbacks in terms of size, and will often mean the difference between being able to read the column into memory or not, with the obvious performance degradation.
Ideally we would like to define a column type of VARBINARY(16) (using the MySQL datatype name as an example; actual datatype names vary across database technologies: Oracle has RAW, PostgreSQL has BYTEA or BIT, etc.). Operations on such a column would treat the content simply as a sequence of bits, not interpreting it as characters, integers, etc.
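For illustration, the kind of definition we have in mind, written here in MySQL syntax (the table and column names are made up for the example):

    CREATE TABLE objects (
        id VARBINARY(16) NOT NULL  -- raw 128-bit UUID, treated as a plain byte sequence
    );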
Has anyone attempted something like this with MonetDB, or are we missing some obvious solution?
Thanks very much.