I am currently trying to update my code, which uses the Mapi API, to retry connections if an application starts while MonetDB is down. Our programs do not run interactively, so detecting the failure, terminating the program, and restarting the application is problematic. There are a couple of issues here that I could use help with.

What we are doing is calling mapi_connect(). mapi_connect() returns what appears to be a valid pointer (it is non-NULL) even when the connection fails. So we call mapi_error() with this return value. If we are not connected, we get an MERROR, not the MTIMEOUT I would have expected.

On an MERROR failure, we call mapi_error_str() to see what kind of error occurred. We then must do a string compare (actually a strstr() call looking for a "Connection refused" pattern inside the string). Is there a different function call that returns an enum or #define value instead? Sort of like checking errno rather than calling strerror(errno) and doing a string compare on the result. That would be far more efficient. Yes, errors should be rare, but determining the type of error (and protecting against the message text changing on an RPM update) shouldn't be expensive.

We then call mapi_explain(), which writes to stderr. Is there a routine that returns the explain() output as a std::string or a "const char *"? We have a logger that processes the strings before logging (timestamping, __LINE__/__FILE__, adding error codes, etc.), and having the multiline explain() output available as a string would be very useful for us.

The above process is repeated until we connect. Since the failed connects return what appears to be a pointer, do we need to do anything to free those handles? It doesn't appear that we have a memory leak (I ran valgrind). Note that if the connect fails, calling mapi_destroy() crashes. The documentation says mapi_destroy() will "Free handle resources", so it seems reasonable to call it after a failed (but non-NULL) mapi_connect().
Calling mapi_destroy() on such a handle should probably do nothing; it should not crash.

Also, it appears that if I connect multiple times without disconnecting (I had a bug in my code), I can't open more than 65 simultaneous connections. Is this a client API limit or a server limit? The reason I ask is that if it is a client issue, we will eventually have a single client that runs queries against hundreds of servers and aggregates the results. If it is a server issue, is there a parameter we can tweak to increase the limit? We do run multiple queries in parallel, each on its own connection.

Thanks,
Dave