Hello -

I had a working installation of MonetDB on my Mac (OS X 10.9.5). It was initially installed with the Jul2015 release, and I then updated to Jul2015-SP1. With the original version I was able to load a large (10 GB) file without problems.

For reasons related to debugging a MonetDB install on a Red Hat machine, I deleted the entries in the table on my Mac and tried to re-add them. My commands were:

> delete from annosites;
> select count(*) from annosites;

The db showed 0 items as expected.
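One thing I have not tried yet is dropping and recreating the table instead of only deleting the rows; my (possibly wrong) understanding is that DELETE in MonetDB marks rows as deleted rather than freeing the underlying storage, so a fresh table would rule that out. Roughly:

sql> DROP TABLE annosites;
sql> CREATE TABLE "testjeff"."annosites" ( ... same definition as below ... );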

sql>\d annosites
CREATE TABLE "testjeff"."annosites" (
"chr"                                     INTEGER,
"pos"                                     INTEGER,
"hapmap31_total_depth"                    INTEGER,
"hapmap31_num_taxa"                       SMALLINT,
"hapmap31_num_alleles"                    SMALLINT,
"hapmap31_minor_allele_avg_depth"         REAL,
"hapmap31_minor_allele_avg_phred"         REAL,
"hapmap31_num_hets"                       SMALLINT,
"hapmap31_ed_factor"                      REAL,
"hapmap31_seg_test_p_value"               REAL,
"hapmap31_ibd_one_allele"                 BOOLEAN,
"hapmap31_in_local_ld"                    BOOLEAN,
"hapmap31_maf"                            REAL,
"hapmap31_near_indel"                     BOOLEAN,
"hapmap31_first_alt_allele_is_ins_or_del" BOOLEAN,
"snpeff40e_effect_hapmap31"               CHARACTER LARGE OBJECT,
"snpeff40e_effectimpact_hapmap31"         CHARACTER LARGE OBJECT,
"snpeff40e_functionalclass_hapmap31"      CHARACTER LARGE OBJECT,
"gerp_neutral_tree_length"                REAL,
"gerp_score"                              REAL,
"gerp_conserved"                          BOOLEAN,
"mnase_low_minus_high_rpm_shoots"         REAL,
"mnase_bayes_factor_shoots"               REAL,
"mnase_hotspot_shoots"                    BOOLEAN,
"mnase_low_minus_high_rpm_roots"          REAL,
"mnase_bayes_factor_roots"                REAL,
"mnase_hotspot_roots"                     BOOLEAN,
"within_gene"                             BOOLEAN,
"within_transcript"                       BOOLEAN,
"within_exon"                             BOOLEAN,
"within_cds"                              BOOLEAN,
"within_cds_from_gff3"                    BOOLEAN,
"within_five_prime_utr"                   BOOLEAN,
"within_three_prime_utr"                  BOOLEAN,
"codon_position"                          SMALLINT,
"go_term_accession"                       CHARACTER LARGE OBJECT,
"go_term_name"                            CHARACTER LARGE OBJECT
);



But now a "COPY INTO" either hangs with no error in merovingian.log, or fails with the error shown below in merovingian.log. Most often it hangs. I have tried COPY INTO both with and without the RECORDS option:

sql> COPY INTO annosites from '/Users/lcj34/notes_files/machineLearningDB/annoDB_related/siteAnnoNoHdrsCol35Fixed_20151011.txt' USING DELIMITERS '\t','\n';
sql>

sql> COPY 61000000 RECORDS INTO annosites from '/Users/lcj34/notes_files/machineLearningDB/annoDB_related/siteAnnoNoHdrsCol35Fixed_20151011.txt' USING DELIMITERS '\t','\n';
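The next thing I plan to try is loading just the first chunk of the file with BEST EFFORT, to see whether the hang or crash is tied to a particular region of the data. This is only a sketch, assuming the Jul2015 release supports BEST EFFORT and the sys.rejects table (the record count is an arbitrary placeholder):

sql> COPY 1000000 RECORDS INTO annosites from '/Users/lcj34/notes_files/machineLearningDB/annoDB_related/siteAnnoNoHdrsCol35Fixed_20151011.txt' USING DELIMITERS '\t','\n' BEST EFFORT;
sql> SELECT * FROM sys.rejects;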


This is what merovingian.log showed the one time it produced an error:


2015-11-23 15:02:55 MSG testJeff[752]: # loading sql script: 99_system.sql
2015-11-23 15:15:49 ERR testJeff[752]: mserver5(752,0x112908000) malloc: *** error for object 0x7fbe21a08208: incorrect checksum for freed object - object was probably modified after being freed.
2015-11-23 15:15:49 ERR testJeff[752]: *** set a breakpoint in malloc_error_break to debug
2015-11-23 15:15:50 MSG merovingian[747]: database 'testJeff' (752) was killed by signal SIGABRT


Any idea what could be wrong? I tried stopping the database, destroying it, and starting over; this didn't help. I then created a new dbfarm and a new database, pointed monetdbd at it, and tried again. While I can create the 37-column table, I am unable to load the file that previously loaded successfully.
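For reference, the steps I used to set up the new dbfarm and database were roughly the following (the dbfarm path is just an example, not the real one):

$ monetdbd create /Users/lcj34/new_dbfarm
$ monetdbd start /Users/lcj34/new_dbfarm
$ monetdb create testJeff
$ monetdb release testJeff
$ mclient -d testJeff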

Is this a Mac issue? What else can I try on the MonetDB side? Is there any reason to re-install everything? I'm still in the testing stage, so I would not be losing data.

Thanks - Lynn