[MonetDB-users] Bulk load problem
Hi,

I am a new user and I am trying to bulk load a very large file. The following is intended for testing only:

$ 7z e -so data.7z | head -n 100 | query "copy 100 RECORDS into data_table FROM STDIN USING DELIMITERS tuple_seperator '\r\n' record_seperator '\t';"

And I am getting the following error:

MAPI  = rk@localhost:50000
QUERY = copy 100 RECORDS into data_table FROM STDIN USING DELIMITERS tuple_seperator '\r\n' record_seperator '\t';
ERROR = !syntax error, unexpected IDENT, expecting STRING in: "copy 100 records into data_table from stdin using delimiters tuple_seperator"
Timer 0.278 msec

Can you tell me where I am going wrong?

With Regards,
Raghav.
Hi Raghav,

On 11-12-2009 22:40:00 -0800, Raghav Kumar Gautam wrote:
I am a new user and I am trying to bulk load a very large file. The following is intended for testing only:
$ 7z e -so data.7z | head -n 100 | query "copy 100 RECORDS into data_table FROM STDIN USING DELIMITERS tuple_seperator '\r\n' record_seperator '\t';"
Can you show us the contents of the query script?
Hi Raghav, "tuple_seperator" & "record_seperator" are no SQL key words, but merely place holders for the tuples separator and tuple separator, respectively, in the EBNF (Extended Backus–Naur Form; cf. e.g., http://en.wikipedia.org/wiki/Extended_Backus–Naur_Form) of the SQL-2003 syntax description. Hence, your query should rather look as follows: copy 100 RECORDS into data_table FROM STDIN USING DELIMITERS '\r\n', '\t'; Stefan On Fri, Dec 11, 2009 at 10:40:00PM -0800, Raghav Kumar Gautam wrote:
Hi,
I am a new user and I am trying to bulk load a very large file. The following is intended for testing only:
$ 7z e -so data.7z | head -n 100 | query "copy 100 RECORDS into data_table FROM STDIN USING DELIMITERS tuple_seperator '\r\n' record_seperator '\t';"
And I am getting the following error:

MAPI  = rk@localhost:50000
QUERY = copy 100 RECORDS into data_table FROM STDIN USING DELIMITERS tuple_seperator '\r\n' record_seperator '\t';
ERROR = !syntax error, unexpected IDENT, expecting STRING in: "copy 100 records into data_table from stdin using delimiters tuple_seperator"
Timer 0.278 msec

Can you tell me where I am going wrong?

With Regards,
Raghav.
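For reference, a minimal sketch of the corrected form against a hypothetical two-column table (the schema below is invented for illustration; adjust columns and types to the real data). Note that the delimiter strings in COPY INTO are positional, with the field separator given first and the record separator second, so for tab-separated columns and \r\n line endings the order would be '\t', '\r\n':

-- hypothetical test table; the real schema will differ
CREATE TABLE data_table (id INTEGER, val VARCHAR(255));

-- field separator ('\t') first, record separator ('\r\n') second
COPY 100 RECORDS INTO data_table FROM STDIN USING DELIMITERS '\t', '\r\n';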
--
| Dr. Stefan Manegold | mailto:Stefan.Manegold@cwi.nl |
| CWI, P.O.Box 94079  | http://www.cwi.nl/~manegold/  |
| 1090 GB Amsterdam   | Tel.: +31 (20) 592-4212       |
| The Netherlands     | Fax : +31 (20) 592-4199       |
Thanks everyone. I figured that out. Now the problem that I have run into is the 2 GB limitation on a 32-bit machine. It seems there is no solution for this; is there one?

With Regards,
Raghav.
On Mon, Dec 14, 2009 at 12:13:53PM -0800, Raghav Kumar Gautam wrote:
Thanks everyone. I figured that out. Now the problem that I have run into is the 2 GB limitation on a 32-bit machine. It seems there is no solution for this; is there one?
Well, for handling (loading) huge amounts of data, I guess the only "solution" is to resort to a 64-bit system, unless your data provides room for normalizing the schema to break a huge table into multiple smaller ones ...

Stefan
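As a sketch of that idea (table and column names invented for illustration): if a wide table repeats long string values in many rows, moving those strings into a small lookup table and referencing them by an integer key can shrink the data footprint considerably:

-- hypothetical wide table: the long 'host' string is repeated in every row
-- CREATE TABLE events (ts TIMESTAMP, host VARCHAR(200), metric DOUBLE);

-- normalized variant: each distinct host string is stored only once
CREATE TABLE hosts  (host_id INTEGER, host VARCHAR(200));
CREATE TABLE events (ts TIMESTAMP, host_id INTEGER, metric DOUBLE);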
With Regards, Raghav.
--
| Dr. Stefan Manegold | mailto:Stefan.Manegold@cwi.nl |
| CWI, P.O.Box 94079  | http://www.cwi.nl/~manegold/  |
| 1090 GB Amsterdam   | Tel.: +31 (20) 592-4212       |
| The Netherlands     | Fax : +31 (20) 592-4199       |
Raghav Kumar Gautam wrote:
Hi,
I am a new user and I am trying to bulk load a very large file. The following is intended for testing only:
$ 7z e -so data.7z | head -n 100 | query "copy 100 RECORDS into data_table FROM STDIN USING DELIMITERS tuple_seperator '\r\n' record_seperator '\t';"
And I am getting the following error:

MAPI  = rk@localhost:50000
QUERY = copy 100 RECORDS into data_table FROM STDIN USING DELIMITERS tuple_seperator '\r\n' record_seperator '\t';
ERROR = !syntax error, unexpected IDENT, expecting STRING in: "copy 100 records into data_table from stdin using delimiters tuple_seperator"
Timer 0.278 msec

Can you tell me where I am going wrong?

With Regards,
Raghav.
You're using incorrect syntax for MonetDB. The copy command should be

copy 100 records into data_table from stdin using delimiters '\r\n','\t';

And if you're using quotes around strings, you also have to specify those:

copy 100 records into data_table from stdin using delimiters '\r\n','\t','"';

-- Sjoerd Mullender
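To make the three-delimiter form concrete, here is a hypothetical statement with descriptive comments (column layout and record count invented for illustration; the delimiter order follows the field-separator-first convention of COPY INTO):

-- expects input lines like:  1<TAB>"some text"<NEWLINE>
-- the three delimiters are: field separator, record separator, string quote
COPY 2 RECORDS INTO data_table FROM STDIN USING DELIMITERS '\t', '\n', '"';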
participants (5)

- Fabian Groffen
- Raghav Kumar Gautam
- Raghav Kumar Gautam
- Sjoerd Mullender
- Stefan Manegold