Please disregard.  I have narrowed the problem down to the July2015SP1 rpm and will send a separate message with a more clearly stated problem.

From: users-list <users-list-bounces+lcj34=cornell.edu@monetdb.org> on behalf of Lynn Carol Johnson <lcj34@cornell.edu>
Reply-To: Communication channel for MonetDB users <users-list@monetdb.org>
Date: Tuesday, November 24, 2015 at 7:43 AM
To: Communication channel for MonetDB users <users-list@monetdb.org>
Subject: Re: err incorrect checksum for freed object

Update:

I tried running this again this morning.  I'm trying to remember whether I have successfully loaded this large file since applying July2015SP1; I might not have.  Regardless, when I attempted the load anew this morning, my merovingian.log file shows:

  2015-11-24 07:28:33 MSG merovingian[18398]: database 'jeffTest' (24708) was killed by signal 8

Signal 8 indicates a floating-point exception occurred in the program.  Are there known issues with floating-point handling?  I was able to load this file successfully before the upgrade to SP1.

Since nothing was loaded into the database (select count(*) on the table shows 0 records), I tried another “COPY INTO” with a smaller version of the input file.  This time it fails with a data error:

sql>COPY 61000000 records INTO annosites FROM '/home/lcj34/monetdbFiles/sites10000Jeff.txt' USING DELIMITERS '\t','\n';

Failed to import table Leftover data 'componentof nuclearinner membrane;molecular_function;biological_process;endoplasmicreticulum'

sql>



These strings are not column headers in my table.  Does this error point to a particular place in the file, or in the software, where the import ran into trouble?
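My understanding is that a “Leftover data” message usually means a line contained more tab-separated fields than the table has columns.  Assuming my build supports the BEST EFFORT clause and the sys.rejects view (I have not confirmed this on July2015SP1), something like the sketch below should load the good rows and report which input lines were refused; the file name is just the one from my test above:

-- Hedged sketch: tolerant load, then inspect the rejected lines.
COPY INTO annosites FROM '/home/lcj34/monetdbFiles/sites10000Jeff.txt'
USING DELIMITERS '\t','\n' BEST EFFORT;

-- Each rejected record should appear here with a row id and a message.
SELECT * FROM sys.rejects;

-- Clear the rejects log before the next attempt.
CALL sys.clearrejects();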

Thanks – Lynn

From: users-list <users-list-bounces+lcj34=cornell.edu@monetdb.org> on behalf of Lynn Carol Johnson <lcj34@cornell.edu>
Reply-To: Communication channel for MonetDB users <users-list@monetdb.org>
Date: Monday, November 23, 2015 at 3:48 PM
To: Communication channel for MonetDB users <users-list@monetdb.org>
Subject: err incorrect checksum for freed object

Hello -

I had a working installation of MonetDB on my Mac (OS X 10.9.5).  It was initially installed with the July2015 release, and I then updated to July2015SP1.  With the original version I was able to load a large (10 GB) file without problems.

For reasons related to debugging a MonetDB install on a Red Hat machine, I deleted the entries in the DB on my Mac and tried to re-add them.  My commands were:

> delete from annosites;
> select count(*) from annosites;

The table showed 0 rows, as expected.

sql>\d annosites

CREATE TABLE "testjeff"."annosites" (
"chr"                                     INTEGER,
"pos"                                     INTEGER,
"hapmap31_total_depth"                    INTEGER,
"hapmap31_num_taxa"                       SMALLINT,
"hapmap31_num_alleles"                    SMALLINT,
"hapmap31_minor_allele_avg_depth"         REAL,
"hapmap31_minor_allele_avg_phred"         REAL,
"hapmap31_num_hets"                       SMALLINT,
"hapmap31_ed_factor"                      REAL,
"hapmap31_seg_test_p_value"               REAL,
"hapmap31_ibd_one_allele"                 BOOLEAN,
"hapmap31_in_local_ld"                    BOOLEAN,
"hapmap31_maf"                            REAL,
"hapmap31_near_indel"                     BOOLEAN,
"hapmap31_first_alt_allele_is_ins_or_del" BOOLEAN,
"snpeff40e_effect_hapmap31"               CHARACTER LARGE OBJECT,
"snpeff40e_effectimpact_hapmap31"         CHARACTER LARGE OBJECT,
"snpeff40e_functionalclass_hapmap31"      CHARACTER LARGE OBJECT,
"gerp_neutral_tree_length"                REAL,
"gerp_score"                              REAL,
"gerp_conserved"                          BOOLEAN,
"mnase_low_minus_high_rpm_shoots"         REAL,
"mnase_bayes_factor_shoots"               REAL,
"mnase_hotspot_shoots"                    BOOLEAN,
"mnase_low_minus_high_rpm_roots"          REAL,
"mnase_bayes_factor_roots"                REAL,
"mnase_hotspot_roots"                     BOOLEAN,
"within_gene"                             BOOLEAN,
"within_transcript"                       BOOLEAN,
"within_exon"                             BOOLEAN,
"within_cds"                              BOOLEAN,
"within_cds_from_gff3"                    BOOLEAN,
"within_five_prime_utr"                   BOOLEAN,
"within_three_prime_utr"                  BOOLEAN,
"codon_position"                          SMALLINT,
"go_term_accession"                       CHARACTER LARGE OBJECT,
"go_term_name"                            CHARACTER LARGE OBJECT
);



But now, doing a “COPY INTO” either hangs with nothing written to the merovingian.log file, or fails with the merovingian.log error shown further below; most often it hangs.  I have tried COPY INTO with and without the RECORDS count (a chunked-load sketch follows the two commands):

sql> COPY INTO annosites from '/Users/lcj34/notes_files/machineLearningDB/annoDB_related/siteAnnoNoHdrsCol35Fixed_20151011.txt' USING DELIMITERS '\t','\n';

sql>


sql>COPY 61000000 records INTO annosites from '/Users/lcj34/notes_files/machineLearningDB/annoDB_related/siteAnnoNoHdrsCol35Fixed_20151011.txt' USING DELIMITERS '\t','\n';
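
One thing I plan to try next, assuming this release supports the OFFSET form of COPY INTO, is loading the file in slices so I can tell roughly where it stalls; the slice sizes below are just illustrative:

-- Hedged sketch: load only the first million records.
COPY 1000000 RECORDS INTO annosites
FROM '/Users/lcj34/notes_files/machineLearningDB/annoDB_related/siteAnnoNoHdrsCol35Fixed_20151011.txt'
USING DELIMITERS '\t','\n';

-- Then continue slice by slice (I believe OFFSET counts records starting at 1),
-- which should narrow down which part of the file makes the server hang.
COPY 1000000 OFFSET 1000001 RECORDS INTO annosites
FROM '/Users/lcj34/notes_files/machineLearningDB/annoDB_related/siteAnnoNoHdrsCol35Fixed_20151011.txt'
USING DELIMITERS '\t','\n';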


The merovingian.log file, from the one time it did report an error:


2015-11-23 15:02:55 MSG testJeff[752]: # loading sql script: 99_system.sql

2015-11-23 15:15:49 ERR testJeff[752]: mserver5(752,0x112908000) malloc: *** error for object 0x7fbe21a08208: incorrect checksum for freed object - object was probably modified after being freed.

2015-11-23 15:15:49 ERR testJeff[752]: *** set a breakpoint in malloc_error_break to debug

2015-11-23 15:15:50 MSG merovingian[747]: database 'testJeff' (752) was killed by signal SIGABRT


Any idea what could be wrong?  I tried stopping and then destroying the database and starting over; that didn't help.  I then created a new dbfarm and a new database, connected monetdbd to it, and tried again.  While I can create this 37-column table, I am unable to load the file that previously loaded successfully.

Is this a Mac issue?  What else can I try on the MonetDB side?  Is there any reason to re-install everything?  I'm still in the testing stage, so I would not be losing data.

Thanks - Lynn