[MonetDB-users] predictable crashes of MonetDB5 SQL server under Win32bit

Hi,

I experience predictable crashes of the MonetDB5 SQL server under 32-bit Windows when loading via COPY INTO in batches of 1 million rows: the server crashes once more than ~9 million rows have been loaded, which corresponds to a database size of ~1.5 GB.

Any settings to avoid that? Is this a bug? Is the MonetDB database size limited to the available RAM? Or is this a known limit only under 32-bit Windows?

Logs of the SQL client error messages and details for replication below.

Regards
Jens Oehlschlägel

Lenovo ThinkCentre with 3 GB RAM under Windows XP Professional SP2
RAM consumption of m5server.exe during load: ~300 MB according to Task Manager
Free RAM: 2.1 GB
Final size of dbfarm: 1.9 GB
DiskFreeSpace: 1.8 TB

[System Summary]
OS Name: Microsoft Windows XP Professional
Version: 5.1.2600 Service Pack 2 Build 2600
OS Manufacturer: Microsoft Corporation
System Name: GEMUN-38396-10-
System Manufacturer: LENOVO
System Model: 6073AA7
System Type: X86-based PC
Processor: x86 Family 6 Model 15 Stepping 11 GenuineIntel ~2992 Mhz
Processor: x86 Family 6 Model 15 Stepping 11 GenuineIntel ~2992 Mhz
BIOS Version/Date: LENOVO 2RKT41AUS, 3/20/2008
SMBIOS Version: 2.5
Windows Directory: C:\WINDOWS
System Directory: C:\WINDOWS\system32
Boot Device: \Device\HarddiskVolume1
Locale: Germany
Hardware Abstraction Layer Version: "5.1.2600.3023 (xpsp_sp2_qfe.061030-0020)"
Time Zone: W. Europe Standard Time
Total Physical Memory: 3,072.00 MB
Available Physical Memory: 1.70 GB
Total Virtual Memory: 2.00 GB
Available Virtual Memory: 1.96 GB
Page File Space: 5.79 GB
Page File: D:\pagefile.sys

-- mclient session log (console line wraps in the file name joined)
sql>START TRANSACTION;
0 tuples  Timer 0.000 msec  0 rows
sql>COPY 1000000 OFFSET 2 RECORDS INTO "voc"."databasetest" FROM 'F:/Periscope/databasetest1mio.csv' USING DELIMITERS ',';
Rows affected 1000000  Timer 67515.000 msec  0 rows
sql>COMMIT;
0 tuples  Timer 1255.000 msec  0 rows
sql>START TRANSACTION;
0 tuples  Timer 0.000 msec  0 rows
sql>COPY 1000000 OFFSET 2 RECORDS INTO "voc"."databasetest" FROM 'F:/Periscope/databasetest1mio.csv' USING DELIMITERS ',';
Rows affected 1000000  Timer 79141.000 msec  0 rows
sql>COMMIT;
0 tuples  Timer 6041.000 msec  0 rows
sql>START TRANSACTION;
0 tuples  Timer 0.000 msec  0 rows
sql>COPY 1000000 OFFSET 2 RECORDS INTO "voc"."databasetest" FROM 'F:/Periscope/databasetest1mio.csv' USING DELIMITERS ',';
Rows affected 1000000  Timer 74227.000 msec  0 rows
sql>COMMIT;
0 tuples  Timer 6943.000 msec  0 rows
sql>START TRANSACTION;
0 tuples  Timer 0.000 msec  0 rows
sql>COPY 1000000 OFFSET 2 RECORDS INTO "voc"."databasetest" FROM 'F:/Periscope/databasetest1mio.csv' USING DELIMITERS ',';
Rows affected 1000000  Timer 73084.000 msec  0 rows
sql>COMMIT;
0 tuples  Timer 6599.000 msec  0 rows
sql>START TRANSACTION;
0 tuples  Timer 0.000 msec  0 rows
sql>COPY 1000000 OFFSET 2 RECORDS INTO "voc"."databasetest" FROM 'F:/Periscope/databasetest1mio.csv' USING DELIMITERS ',';
Rows affected 1000000  Timer 67158.000 msec  0 rows
sql>COMMIT;
0 tuples  Timer 6520.000 msec  0 rows
sql>START TRANSACTION;
0 tuples  Timer 18529.000 msec  0 rows
sql>COPY 1000000 OFFSET 2 RECORDS INTO "voc"."databasetest" FROM 'F:/Periscope/databasetest1mio.csv' USING DELIMITERS ',';
Rows affected 1000000  Timer 65106.000 msec  0 rows
sql>COMMIT;
0 tuples  Timer 5184.000 msec  0 rows
sql>START TRANSACTION;
0 tuples  Timer 0.000 msec  0 rows
sql>COPY 1000000 OFFSET 2 RECORDS INTO "voc"."databasetest" FROM 'F:/Periscope/databasetest1mio.csv' USING DELIMITERS ',';
Rows affected 1000000  Timer 71136.000 msec  0 rows
sql>COMMIT;
0 tuples  Timer 9104.000 msec  0 rows
sql>START TRANSACTION;
0 tuples  Timer 21021.000 msec  0 rows
sql>COPY 1000000 OFFSET 2 RECORDS INTO "voc"."databasetest" FROM 'F:/Periscope/databasetest1mio.csv' USING DELIMITERS ',';
Rows affected 1000000  Timer 69761.000 msec  0 rows
sql>COMMIT;
0 tuples
!MALException: MT_mmap: MapViewOfFileEx(1558, 2, 0, 0, 14680064, 0) failed
!OS: Not enough storage is available to process this command.
!ERROR: MT_mmap: MapViewOfFileEx(1554, 2, 0, 0, 14680064, 0) failed
!OS: Not enough storage is available to process this command.
!ERROR: GDKload: cannot mmap(): name=15\1511, ext=tail
!OS: Not enough storage is available to process this command.
!ERROR: GDKload: failed name=15\1511, ext=tail
!ERROR: MT_mmap: MapViewOfFileEx(1558, 2, 0, 0, 14680064, 0) failed
!OS: Not enough storage is available to process this command.
!ERROR: MT_mmap: MapViewOfFileEx(1554, 2, 0, 0, 14680064, 0) failed
!OS: Not enough storage is available to process this command.
!ERROR: GDKload: cannot mmap(): name=15\1511, ext=tail
!OS: Not enough storage is available to process this command.
!ERROR: GDKload: failed name=15\1511, ext=tail
!ERROR: MT_mmap: MapViewOfFileEx(1558, 2, 0, 0, 14680064, 0) failed
!OS: Not enough storage is available to process this command.
!ERROR: MT_mmap: MapViewOfFileEx(1554, 2, 0, 0, 14680064, 0) failed
!OS: Not enough storage is available to process this command.
!ERROR: GDKload: cannot mmap(): name=15\1511, ext=tail
!OS: Not enough storage is available to process this command.
!ERROR: GDKload: failed name=15\1511, ext=tail
!ERROR: MT_mmap: MapViewOfFileEx(1558, 2, 0, 0, 34603008, 0) failed
!OS: Not enough storage is available to process this command.
!ERROR: MT_mmap: MapViewOfFileEx(1554, 2, 0, 0, 34603008, 0) failed
!OS: Not enough storage is available to process this command.
!ERROR: GDKload: cannot mmap(): name=11\1117, ext=tail
!OS: Not enough storage is available to process this command.
!ERROR: GDKload: failed name=11\1117, ext=tail
!ERROR: MT_mmap: MapViewOfFileEx(1558, 2, 0, 0, 69206016, 0) failed
!OS: Not enough storage is available to process this command.
!ERROR: MT_mmap: MapViewOfFileEx(1554, 2, 0, 0, 70254592, 0) failed
!OS: Not enough storage is available to process this command.
!ERROR: MT_mmap: MapViewOfFileEx(1558, 2, 0, 0, 70254592, 0) failed
!OS: Not enough storage is available to process this command.
!ERROR: MT_mmap: MapViewOfFileEx(1554, 2, 0, 0, 70254592, 0) failed
!OS: Not enough storage is available to process this command.
!ERROR: GDKload: cannot mmap(): name=15\1534, ext=tail
!OS: Not enough storage is available to process this command.
!ERROR: GDKload: failed name=15\1534, ext=tail
0 tuples  Timer 9276.000 msec  0 rows
sql>START TRANSACTION;
0 tuples  Timer 0.000 msec  0 rows
sql>COPY 1000000 OFFSET 2 RECORDS INTO "voc"."databasetest" FROM 'F:/Periscope/databasetest1mio.csv' USING DELIMITERS ',';
Rows affected 1000000  Timer 96373.000 msec  0 rows
sql>COMMIT;
MAPI  = voc@localhost:50000
ACTION= read_line
QUERY = COMMIT;
ERROR = Connection terminated
Timer 44089.000 msec  0 rows

-- restarting the crashed server looks good, but
# MonetDB server v5.8.2, based on kernel v1.26.2
# Serving database 'demo', using 1 thread
# Compiled for i686-pc-win32/32bit with 32bit OIDs dynamically linked
# Copyright (c) 1993-July 2008 CWI.
# Copyright (c) August 2008- MonetDB B.V., all rights reserved
# Visit http://monetdb.cwi.nl/ for further information
#warning: please don't forget to set your vault key!
#(see D:\MonetDB\MonetDB5\etc\monetdb5.conf)
# Listening for connection requests on mapi:monetdb://127.0.0.1:50000/
# MonetDB/SQL module v2.26.2 loaded

-- when starting mclient the server says
!ERROR: MT_mmap: MapViewOfFileEx(1528, 2, 0, 0, 69206016, 0) failed
!OS: Not enough storage is available to process this command.
#GDKmmap(69206016) fail => BBPtrim(enter) usage[mem=165285248,vm=1495793664]
#GDKmmap(69206016) fail => BBPtrim(ready) usage[mem=165284840,vm=574619648]
#GDKmmap: recovery ok. Continuing..

-- replication instructions
-- log in to freshly installed MonetDB and create user
CREATE USER "voc" WITH PASSWORD 'voc' NAME 'VOC Explorer' SCHEMA "sys";
CREATE SCHEMA "voc" AUTHORIZATION "voc";
ALTER USER "voc" SET SCHEMA "voc";

-- log in as voc and execute the following statements
CREATE TABLE databasetest (
  "id" INTEGER,
  "ubyte1" SMALLINT, "ubyte2" SMALLINT, "ubyte3" SMALLINT, "ubyte4" SMALLINT, "ubyte5" SMALLINT,
  "ubyte6" SMALLINT, "ubyte7" SMALLINT, "ubyte8" SMALLINT, "ubyte9" SMALLINT, "ubyte10" SMALLINT,
  "ubyte11" SMALLINT, "ubyte12" SMALLINT, "ubyte13" SMALLINT, "ubyte14" SMALLINT, "ubyte15" SMALLINT,
  "smallint1" SMALLINT, "smallint2" SMALLINT, "smallint3" SMALLINT, "smallint4" SMALLINT, "smallint5" SMALLINT,
  "smallint6" SMALLINT, "smallint7" SMALLINT, "smallint8" SMALLINT, "smallint9" SMALLINT, "smallint10" SMALLINT,
  "smallint11" SMALLINT, "smallint12" SMALLINT, "smallint13" SMALLINT, "smallint14" SMALLINT, "smallint15" SMALLINT,
  "float1" DOUBLE, "float2" DOUBLE, "float3" DOUBLE, "float4" DOUBLE, "float5" DOUBLE,
  "float6" DOUBLE, "float7" DOUBLE, "float8" DOUBLE, "float9" DOUBLE, "float10" DOUBLE,
  "float11" DOUBLE, "float12" DOUBLE, "float13" DOUBLE, "float14" DOUBLE, "float15" DOUBLE,
  "float16" DOUBLE, "float17" DOUBLE, "float18" DOUBLE, "float19" DOUBLE, "float20" DOUBLE
);

-- then load the data, one transaction per batch of 1 million rows;
-- repeat the same transaction for each further batch until the crash occurs
-- (after ~9 batches on my machine):
START TRANSACTION;
COPY 1000000 OFFSET 2 RECORDS INTO "voc"."databasetest" FROM 'F:/Periscope/databasetest1mio.csv' USING DELIMITERS ',';
COMMIT;

-- the file databasetest1mio.csv can be created with the following R script
-- (# starts an R comment; R is available from CRAN.r-project.org)
filename <- "F:/Periscope/databasetest1mio.csv"
# create 1 million rows in 1,000 batches of 1,000; ~174 MB csv file, takes ~2 minutes
system.time({
  b <- 1000L  # number of batches
  n <- 1000L  # batch size
  csvfile <- file(filename, open="w")
  for (i in 1:b){
    x <- data.frame(
      id = ((i-1L)*n+1L):(i*n)                          # 4-byte integer record identifier
    , ubyte1  = sample(0:35, n, TRUE, prob=NULL)        # unsigned byte columns (1 byte)
    , ubyte2  = sample(0:1, n, TRUE, prob=NULL)
    , ubyte3  = sample(0:3, n, TRUE, prob=NULL)
    , ubyte4  = sample(0:9, n, TRUE, prob=NULL)
    , ubyte5  = sample(0:24, n, TRUE, prob=NULL)
    , ubyte6  = sample(0:99, n, TRUE, prob=NULL)
    , ubyte7  = sample(-127:127, n, TRUE, prob=NULL)
    , ubyte8  = sample(0:1, n, TRUE, prob=c(0.75, 0.25))
    , ubyte9  = sample(0:1, n, TRUE, prob=c(0.9, 0.1))
    , ubyte10 = sample(0:1, n, TRUE, prob=c(0.96, 0.04))
    , ubyte11 = sample(0:1, n, TRUE, prob=c(0.99, 0.01))
    , ubyte12 = sample(0:1, n, TRUE, prob=c(0.996, 0.004))
    , ubyte13 = sample(0:1, n, TRUE, prob=c(0.999, 0.001))
    , ubyte14 = sample(0:1, n, TRUE, prob=c(0.9996, 0.0004))
    , ubyte15 = sample(0:1, n, TRUE, prob=c(0.9999, 0.0001))
    , smallint1  = sample(1:4000, n, TRUE, prob=NULL)   # smallint columns (2 bytes)
    , smallint2  = sample(1:2, n, TRUE, prob=NULL)
    , smallint3  = sample(1:4, n, TRUE, prob=NULL)
    , smallint4  = sample(1:10, n, TRUE, prob=NULL)
    , smallint5  = sample(1:25, n, TRUE, prob=NULL)
    , smallint6  = sample(1:100, n, TRUE, prob=NULL)
    , smallint7  = sample(1:256, n, TRUE, prob=NULL)
    , smallint8  = sample(1:1000, n, TRUE, prob=NULL)
    , smallint9  = sample(1:32000, n, TRUE, prob=NULL)
    , smallint10 = sample(0:1, n, TRUE, prob=c(0.75, 0.25))
    , smallint11 = sample(0:1, n, TRUE, prob=c(0.9, 0.1))
    , smallint12 = sample(0:1, n, TRUE, prob=c(0.96, 0.04))
    , smallint13 = sample(0:1, n, TRUE, prob=c(0.99, 0.01))
    , smallint14 = sample(0:1, n, TRUE, prob=c(0.996, 0.004))
    , smallint15 = sample(0:1, n, TRUE, prob=c(0.999, 0.001))
    , float1  = round(runif(n, 0, 100), 1)              # float columns (loaded into DOUBLE)
    , float2  = round(runif(n, 0, 100), 1)
    , float3  = round(runif(n, 0, 100), 1)
    , float4  = round(runif(n, 0, 100), 1)
    , float5  = round(runif(n, 0, 100), 1)
    , float6  = round(runif(n, 0, 100), 1)
    , float7  = round(runif(n, 0, 100), 1)
    , float8  = round(runif(n, 0, 100), 1)
    , float9  = round(runif(n, 0, 100), 1)
    , float10 = round(runif(n, 0, 100), 1)
    , float11 = round(runif(n, 0, 100), 1)
    , float12 = round(runif(n, 0, 100), 1)
    , float13 = round(runif(n, 0, 100), 1)
    , float14 = round(runif(n, 0, 100), 1)
    , float15 = round(runif(n, 0, 100), 1)
    , float16 = round(runif(n, 0, 100), 1)
    , float17 = round(runif(n, 0, 100), 1)
    , float18 = round(runif(n, 0, 100), 1)
    , float19 = round(runif(n, 0, 100), 1)
    , float20 = round(runif(n, 0, 100), 1)
    )
    write.table(x, file=csvfile, row.names=FALSE, col.names=(i==1L), append=i!=1L, dec=".", sep=",")
  }
  close(csvfile)
})
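The repeated load transactions above can be generated rather than pasted by hand. A small sketch (Python for convenience; the path, table, and batch size are taken from the report above, and `batches` is simply however many million-row batches you want to attempt):

```python
# Emit the batch-load script used in the replication instructions:
# the same 1-million-row COPY, each wrapped in its own transaction,
# repeated once per batch. Redirect stdout to a file and feed it to mclient.
copy_stmt = (
    'COPY 1000000 OFFSET 2 RECORDS INTO "voc"."databasetest" '
    "FROM 'F:/Periscope/databasetest1mio.csv' USING DELIMITERS ',';"
)
batches = 10  # the reported crash happened around the 9th-10th batch
script = "\n".join(
    "START TRANSACTION;\n" + copy_stmt + "\nCOMMIT;" for _ in range(batches)
)
print(script)
```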

Hi Jens,

when loading ~1.5 GB on a 32-bit machine, there is a reasonable chance that the loading process (which needs to be able to address all data concurrently) comes close to, or even hits, the 32-bit address space limit (max. 4 GB, possibly only 2 GB). Given that your machine has sufficient memory and more than sufficient disk space (on the partition / "drive" that your dbfarm and sql_log directories reside on, I assume), I assume that the mmap fails not due to a full disk, but due to address space limits. Because of address space fragmentation, this can happen before the complete address space is used: in your case, loading eventually cannot proceed because mmap fails to find a *consecutive* free address space range of ~14 MB.

There is no solution to this address space limitation other than resorting to a 64-bit platform.

Obviously, even when the address space is exceeded, MonetDB should not crash, but continue running after reporting the problem. We currently do not have a suitable Windows machine to test and analyse the crash, but we could try to reproduce it on a 32-bit Linux machine. Would it be possible for you to provide us with your data? That would help us a lot to analyze, locate and hopefully fix the crash...

Regards,
Stefan

On Fri, Jan 23, 2009 at 11:13:08AM +0100, Jens Oehlschlägel wrote:
_______________________________________________
MonetDB-users mailing list
MonetDB-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/monetdb-users
--
| Dr. Stefan Manegold | mailto:Stefan.Manegold@cwi.nl |
| CWI, P.O.Box 94079  | http://www.cwi.nl/~manegold/  |
| 1090 GB Amsterdam   | Tel.: +31 (20) 592-4212       |
| The Netherlands     | Fax : +31 (20) 592-4312       |
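Stefan's address-space argument can be checked with quick arithmetic. A back-of-the-envelope sketch (Python for convenience; it assumes MonetDB maps each column at its declared width, uncompressed, and the default 2 GB user-mode address space of a 32-bit Windows process, so the real footprint is only higher once headers, heaps, and the executable are counted):

```python
# Estimate how much address space the mapped columns of "databasetest"
# consume after ~9 million loaded rows. Column widths follow the
# CREATE TABLE in the report: 1 INTEGER, 30 SMALLINT, 20 DOUBLE.
INT, SMALLINT, DOUBLE = 4, 2, 8

row_bytes = 1 * INT + 30 * SMALLINT + 20 * DOUBLE  # bytes per row
print(row_bytes)                                   # 224

rows = 9_000_000
total = rows * row_bytes
print(f"{total / 2**30:.2f} GiB")                  # 1.88 GiB

# A 32-bit Windows process gets 2 GiB of user-mode address space by
# default; the column files alone nearly exhaust it before
# fragmentation, stacks, and the CSV reader take their share.
address_space = 2 * 2**30
print(f"{100 * total / address_space:.0f}%")       # 94%
```

With ~224 bytes per row, nine million rows put roughly 1.9 GB behind mmap, which matches the reported final dbfarm size and leaves little room for a consecutive free range of even ~14 MB.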

Hi Jens - this is probably one of the most frequently asked questions. Stefan Manegold
recently wrote up a nice explanation in this thread on the developers list:
http://www.nabble.com/mclient-running-out-of-memory%2C-crashing-mserver5-td2...
In short, everything you are loading needs to be able to fit into
addressable memory.
I'm not qualified to say whether this would be an easy bit of code, but it seems
like the m5server should abort the load and return a friendly "hey, I ran out
of memory" error.
-Ross
On Fri, Jan 23, 2009 at 4:13 AM, Jens Oehlschlägel wrote:
Hi,
I experience predictable crashes of MonetDB5 SQLserver under Win32bit when COPY INTO in batches of 1 Mio more than ~9Mio rows, which is ~1.5 GB DB size.
Any settings to avoid that? Is this a bug? Is MonetDB size limited to available RAM? Or is this a known limit just under Win32?
Logs of sql client error messages and details for replication below.
Regards
Jens Oehlschlägel
Lenovo Thinkcentre with 3 GB RAM under Windows XP Professional SP2 RAM consumption of m5server.exe during load ~300MB according to task manager Free RAM 2.1 GB Size of dbfarm finally 1.9 GB DiskFreeSpace 1.8 TB
[System Summary]
Item Value OS Name Microsoft Windows XP Professional Version 5.1.2600 Service Pack 2 Build 2600 OS Manufacturer Microsoft Corporation System Name GEMUN-38396-10- System Manufacturer LENOVO System Model 6073AA7 System Type X86-based PC Processor x86 Family 6 Model 15 Stepping 11 GenuineIntel ~2992 Mhz Processor x86 Family 6 Model 15 Stepping 11 GenuineIntel ~2992 Mhz BIOS Version/Date LENOVO 2RKT41AUS, 3/20/2008 SMBIOS Version 2.5 Windows Directory C:\WINDOWS System Directory C:\WINDOWS\system32 Boot Device \Device\HarddiskVolume1 Locale Germany Hardware Abstraction Layer Version = "5.1.2600.3023 (xpsp_sp2_qfe.061030-0020)" Time Zone W. Europe Standard Time Total Physical Memory 3,072.00 MB Available Physical Memory 1.70 GB Total Virtual Memory 2.00 GB Available Virtual Memory 1.96 GB Page File Space 5.79 GB Page File D:\pagefile.sys
sql>START TRANSACTION;
0 tuples Timer 0.000 msec 0 rows
sql>COPY 1000000 OFFSET 2 RECORDS INTO "voc"."databasetest" FROM 'F:/Periscope/databasetest1mio.csv' USING DELIMITERS ',';
Rows affected 1000000 Timer 67515.000 msec 0 rows
sql>COMMIT;
0 tuples Timer 1255.000 msec 0 rows
sql>START TRANSACTION;
0 tuples Timer 0.000 msec 0 rows
sql>COPY 1000000 OFFSET 2 RECORDS INTO "voc"."databasetest" FROM 'F:/Periscope/databasetest1mio.csv' USING DELIMITERS ',';
Rows affected 1000000 Timer 79141.000 msec 0 rows
sql>COMMIT;
0 tuples Timer 6041.000 msec 0 rows
sql>START TRANSACTION;
0 tuples Timer 0.000 msec 0 rows
sql>COPY 1000000 OFFSET 2 RECORDS INTO "voc"."databasetest" FROM 'F:/Periscope/databasetest1mio.csv' USING DELIMITERS ',';
Rows affected 1000000 Timer 74227.000 msec 0 rows
sql>COMMIT;
0 tuples Timer 6943.000 msec 0 rows
sql>START TRANSACTION;
0 tuples Timer 0.000 msec 0 rows
sql>COPY 1000000 OFFSET 2 RECORDS INTO "voc"."databasetest" FROM 'F:/Periscope/databasetest1mio.csv' USING DELIMITERS ',';
Rows affected 1000000 Timer 73084.000 msec 0 rows
sql>COMMIT;
0 tuples Timer 6599.000 msec 0 rows
sql>START TRANSACTION;
0 tuples Timer 0.000 msec 0 rows
sql>COPY 1000000 OFFSET 2 RECORDS INTO "voc"."databasetest" FROM 'F:/Periscope/databasetest1mio.csv' USING DELIMITERS ',';
Rows affected 1000000 Timer 67158.000 msec 0 rows
sql>COMMIT;
0 tuples Timer 6520.000 msec 0 rows
sql>START TRANSACTION;
0 tuples Timer 18529.000 msec 0 rows
sql>COPY 1000000 OFFSET 2 RECORDS INTO "voc"."databasetest" FROM 'F:/Periscope/databasetest1mio.csv' USING DELIMITERS ',';
Rows affected 1000000 Timer 65106.000 msec 0 rows
sql>COMMIT;
0 tuples Timer 5184.000 msec 0 rows
sql>START TRANSACTION;
0 tuples Timer 0.000 msec 0 rows
sql>COPY 1000000 OFFSET 2 RECORDS INTO "voc"."databasetest" FROM 'F:/Periscope/databasetest1mio.csv' USING DELIMITERS ',';
Rows affected 1000000 Timer 71136.000 msec 0 rows
sql>COMMIT;
0 tuples Timer 9104.000 msec 0 rows
sql>START TRANSACTION;
0 tuples Timer 21021.000 msec 0 rows
sql>COPY 1000000 OFFSET 2 RECORDS INTO "voc"."databasetest" FROM 'F:/Periscope/databasetest1mio.csv' USING DELIMITERS ',';
Rows affected 1000000 Timer 69761.000 msec 0 rows
sql>COMMIT;
0 tuples
!MALException: MT_mmap: MapViewOfFileEx(1558, 2, 0, 0, 14680064, 0) failed
!OS: Not enough storage is available to process this command.
!ERROR: MT_mmap: MapViewOfFileEx(1554, 2, 0, 0, 14680064, 0) failed
!OS: Not enough storage is available to process this command.
!ERROR: GDKload: cannot mmap(): name=15\1511, ext=tail
!OS: Not enough storage is available to process this command.
!ERROR: GDKload: failed name=15\1511, ext=tail
!ERROR: MT_mmap: MapViewOfFileEx(1558, 2, 0, 0, 14680064, 0) failed
!OS: Not enough storage is available to process this command.
!ERROR: MT_mmap: MapViewOfFileEx(1554, 2, 0, 0, 14680064, 0) failed
!OS: Not enough storage is available to process this command.
!ERROR: GDKload: cannot mmap(): name=15\1511, ext=tail
!OS: Not enough storage is available to process this command.
!ERROR: GDKload: failed name=15\1511, ext=tail
!ERROR: MT_mmap: MapViewOfFileEx(1558, 2, 0, 0, 14680064, 0) failed
!OS: Not enough storage is available to process this command.
!ERROR: MT_mmap: MapViewOfFileEx(1554, 2, 0, 0, 14680064, 0) failed
!OS: Not enough storage is available to process this command.
!ERROR: GDKload: cannot mmap(): name=15\1511, ext=tail
!OS: Not enough storage is available to process this command.
!ERROR: GDKload: failed name=15\1511, ext=tail
!ERROR: MT_mmap: MapViewOfFileEx(1558, 2, 0, 0, 34603008, 0) failed
!OS: Not enough storage is available to process this command.
!ERROR: MT_mmap: MapViewOfFileEx(1554, 2, 0, 0, 34603008, 0) failed
!OS: Not enough storage is available to process this command.
!ERROR: GDKload: cannot mmap(): name=11\1117, ext=tail
!OS: Not enough storage is available to process this command.
!ERROR: GDKload: failed name=11\1117, ext=tail
!ERROR: MT_mmap: MapViewOfFileEx(1558, 2, 0, 0, 69206016, 0) failed
!OS: Not enough storage is available to process this command.
!ERROR: MT_mmap: MapViewOfFileEx(1554, 2, 0, 0, 70254592, 0) failed
!OS: Not enough storage is available to process this command.
!ERROR: MT_mmap: MapViewOfFileEx(1558, 2, 0, 0, 70254592, 0) failed
!OS: Not enough storage is available to process this command.
!ERROR: MT_mmap: MapViewOfFileEx(1554, 2, 0, 0, 70254592, 0) failed
!OS: Not enough storage is available to process this command.
!ERROR: GDKload: cannot mmap(): name=15\1534, ext=tail
!OS: Not enough storage is available to process this command.
!ERROR: GDKload: failed name=15\1534, ext=tail
0 tuples Timer 9276.000 msec 0 rows
sql>START TRANSACTION;
0 tuples Timer 0.000 msec 0 rows
sql>COPY 1000000 OFFSET 2 RECORDS INTO "voc"."databasetest" FROM 'F:/Periscope/databasetest1mio.csv' USING DELIMITERS ',';
Rows affected 1000000 Timer 96373.000 msec 0 rows
sql>COMMIT;
MAPI  = voc@localhost:50000
ACTION= read_line
QUERY = COMMIT;
ERROR = Connection terminated
Timer 44089.000 msec 0 rows
-- restarting the crashed server looks good, but
# MonetDB server v5.8.2, based on kernel v1.26.2
# Serving database 'demo', using 1 thread
# Compiled for i686-pc-win32/32bit with 32bit OIDs dynamically linked
# Copyright (c) 1993-July 2008 CWI.
# Copyright (c) August 2008- MonetDB B.V., all rights reserved
# Visit http://monetdb.cwi.nl/ for further information
#warning: please don't forget to set your vault key!
#(see D:\MonetDB\MonetDB5\etc\monetdb5.conf)
# Listening for connection requests on mapi:monetdb://127.0.0.1:50000/
# MonetDB/SQL module v2.26.2 loaded
-- when starting mclient the server says
!ERROR: MT_mmap: MapViewOfFileEx(1528, 2, 0, 0, 69206016, 0) failed
!OS: Not enough storage is available to process this command.
#GDKmmap(69206016) fail => BBPtrim(enter) usage[mem=165285248,vm=1495793664]
#GDKmmap(69206016) fail => BBPtrim(ready) usage[mem=165284840,vm=574619648]
#GDKmmap: recovery ok. Continuing..
-- replication instructions
-- log in to freshly installed MonetDB and create user
CREATE USER "voc" WITH PASSWORD 'voc' NAME 'VOC Explorer' SCHEMA "sys";
CREATE SCHEMA "voc" AUTHORIZATION "voc";
ALTER USER "voc" SET SCHEMA "voc";
-- log in as voc and execute the following statements
CREATE TABLE databasetest (
  "id" INTEGER
, "ubyte1" SMALLINT, "ubyte2" SMALLINT, "ubyte3" SMALLINT, "ubyte4" SMALLINT, "ubyte5" SMALLINT
, "ubyte6" SMALLINT, "ubyte7" SMALLINT, "ubyte8" SMALLINT, "ubyte9" SMALLINT, "ubyte10" SMALLINT
, "ubyte11" SMALLINT, "ubyte12" SMALLINT, "ubyte13" SMALLINT, "ubyte14" SMALLINT, "ubyte15" SMALLINT
, "smallint1" SMALLINT, "smallint2" SMALLINT, "smallint3" SMALLINT, "smallint4" SMALLINT, "smallint5" SMALLINT
, "smallint6" SMALLINT, "smallint7" SMALLINT, "smallint8" SMALLINT, "smallint9" SMALLINT, "smallint10" SMALLINT
, "smallint11" SMALLINT, "smallint12" SMALLINT, "smallint13" SMALLINT, "smallint14" SMALLINT, "smallint15" SMALLINT
, "float1" DOUBLE, "float2" DOUBLE, "float3" DOUBLE, "float4" DOUBLE, "float5" DOUBLE
, "float6" DOUBLE, "float7" DOUBLE, "float8" DOUBLE, "float9" DOUBLE, "float10" DOUBLE
, "float11" DOUBLE, "float12" DOUBLE, "float13" DOUBLE, "float14" DOUBLE, "float15" DOUBLE
, "float16" DOUBLE, "float17" DOUBLE, "float18" DOUBLE, "float19" DOUBLE, "float20" DOUBLE
);
START TRANSACTION;
COPY 1000000 OFFSET 2 RECORDS INTO "voc"."databasetest" FROM 'F:/Periscope/databasetest1mio.csv' USING DELIMITERS ',';
COMMIT;
-- the original mail repeats this three-statement block 16 times back to back;
-- per the transcript above, the server failed during roughly the 9th/10th batch
-- the file databasetest1mio.csv can be created with the following R script
# this is an R comment; you can find R at CRAN.r-project.org
filename <- "F:/Periscope/databasetest1mio.csv"
# create 1 Mio rows in 1,000 batches of 1,000; total of ~174 MB csv file, takes ~2 minutes
system.time({
  b <- 1000L # number of batches
  n <- 1000L # batch size
  csvfile <- file(filename, open="w")
  for (i in 1:b){
    x <- data.frame(
      id = ((i-1L)*n+1L):(i*n) # 4-byte integer record identifier
    , ubyte1 = sample(0:35, n, TRUE, prob=NULL) # unsigned byte columns (1 byte)
    , ubyte2 = sample(0:1, n, TRUE, prob=NULL)
    , ubyte3 = sample(0:3, n, TRUE, prob=NULL)
    , ubyte4 = sample(0:9, n, TRUE, prob=NULL)
    , ubyte5 = sample(0:24, n, TRUE, prob=NULL)
    , ubyte6 = sample(0:99, n, TRUE, prob=NULL)
    , ubyte7 = sample(-127:127, n, TRUE, prob=NULL)
    , ubyte8 = sample(0:1, n, TRUE, prob=c(0.75, 0.25))
    , ubyte9 = sample(0:1, n, TRUE, prob=c(0.9, 0.1))
    , ubyte10 = sample(0:1, n, TRUE, prob=c(0.96, 0.04))
    , ubyte11 = sample(0:1, n, TRUE, prob=c(0.99, 0.01))
    , ubyte12 = sample(0:1, n, TRUE, prob=c(0.996, 0.004))
    , ubyte13 = sample(0:1, n, TRUE, prob=c(0.999, 0.001))
    , ubyte14 = sample(0:1, n, TRUE, prob=c(0.9996, 0.0004))
    , ubyte15 = sample(0:1, n, TRUE, prob=c(0.9999, 0.0001))
    , smallint1 = sample(1:4000, n, TRUE, prob=NULL) # smallint columns (2 bytes)
    , smallint2 = sample(1:2, n, TRUE, prob=NULL)
    , smallint3 = sample(1:4, n, TRUE, prob=NULL)
    , smallint4 = sample(1:10, n, TRUE, prob=NULL)
    , smallint5 = sample(1:25, n, TRUE, prob=NULL)
    , smallint6 = sample(1:100, n, TRUE, prob=NULL)
    , smallint7 = sample(1:256, n, TRUE, prob=NULL)
    , smallint8 = sample(1:1000, n, TRUE, prob=NULL)
    , smallint9 = sample(1:32000, n, TRUE, prob=NULL)
    , smallint10 = sample(0:1, n, TRUE, prob=c(0.75, 0.25))
    , smallint11 = sample(0:1, n, TRUE, prob=c(0.9, 0.1))
    , smallint12 = sample(0:1, n, TRUE, prob=c(0.96, 0.04))
    , smallint13 = sample(0:1, n, TRUE, prob=c(0.99, 0.01))
    , smallint14 = sample(0:1, n, TRUE, prob=c(0.996, 0.004))
    , smallint15 = sample(0:1, n, TRUE, prob=c(0.999, 0.001))
    , float1 = round(runif(n, 0, 100), 1) # float columns (4 bytes)
    , float2 = round(runif(n, 0, 100), 1)
    , float3 = round(runif(n, 0, 100), 1)
    , float4 = round(runif(n, 0, 100), 1)
    , float5 = round(runif(n, 0, 100), 1)
    , float6 = round(runif(n, 0, 100), 1)
    , float7 = round(runif(n, 0, 100), 1)
    , float8 = round(runif(n, 0, 100), 1)
    , float9 = round(runif(n, 0, 100), 1)
    , float10 = round(runif(n, 0, 100), 1)
    , float11 = round(runif(n, 0, 100), 1)
    , float12 = round(runif(n, 0, 100), 1)
    , float13 = round(runif(n, 0, 100), 1)
    , float14 = round(runif(n, 0, 100), 1)
    , float15 = round(runif(n, 0, 100), 1)
    , float16 = round(runif(n, 0, 100), 1)
    , float17 = round(runif(n, 0, 100), 1)
    , float18 = round(runif(n, 0, 100), 1)
    , float19 = round(runif(n, 0, 100), 1)
    , float20 = round(runif(n, 0, 100), 1)
    )
    write.table(x, file=csvfile, row.names=FALSE, col.names=(i==1L), append=i!=1L, dec=".", sep=",")
  }
  close(csvfile)
})
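For readers without R, a similar test file can be produced in Python. This is my own sketch, not from the thread, and for brevity the ubyte/smallint columns below draw from simplified value ranges rather than reproducing the exact per-column distributions of the R script:

```python
import csv
import random

def write_testfile(filename, batches=1000, batch_size=1000, seed=0):
    """Write a CSV shaped like databasetest1mio.csv: an id column,
    15 'ubyte', 15 smallint and 20 one-decimal float columns
    (51 columns total; value ranges are simplified)."""
    rng = random.Random(seed)
    header = (["id"]
              + [f"ubyte{i}" for i in range(1, 16)]
              + [f"smallint{i}" for i in range(1, 16)]
              + [f"float{i}" for i in range(1, 21)])
    with open(filename, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(header)
        rowid = 0
        for _ in range(batches):
            for _ in range(batch_size):
                rowid += 1
                row = [rowid]
                row += [rng.randint(0, 255) for _ in range(15)]    # 'ubyte' columns
                row += [rng.randint(0, 32000) for _ in range(15)]  # smallint columns
                row += [round(rng.uniform(0, 100), 1) for _ in range(20)]
                w.writerow(row)

# e.g. write_testfile("databasetest1mio.csv")  # 1,000,000 rows
```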