Hello,

Scaling up a database is best done in steps: use a 10M-record sample, followed by 20M and 50M. This gives some indication of what to expect when moving to the real-life database, and a small sample also gives you an opportunity to test on different DBMS platforms. I guess the target application is heavily dominated by simple scanning and aggregation; at this scale the individual columns become quite large to handle, and you can easily fall into the disk I/O thrashing pitfall. For the fixed-length fields, see the conversion sketch below your message.

JLP00993@correo.aeat.es wrote:
Hello:
I want to load about 300,000,000 records with 80 fields each (150 GBytes) into a Monet V4.12.0 database.
I saw the following bulk load method in the mailing lists:
COPY <number> RECORDS INTO <table> FROM stdin USING DELIMITERS '\t';
I have my data in an ASCII file, but my fields are not delimited by any character; they are fixed-length.
These are my questions:
Where can I find full documentation on the COPY command?
In my case, what is the most efficient way to do the bulk load?
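
Regarding the fixed-length fields: COPY expects delimited input, so one practical route is to preprocess the file into tab-separated form and feed that to the COPY ... FROM stdin statement. Below is a minimal sketch in Python; the field widths and file names are placeholders (your real layout has 80 fields), and it assumes one record per line. The optional record cap is also a cheap way to produce the 10M/20M/50M samples suggested above.

# fixed2tsv.py -- convert fixed-width records to tab-delimited text
# suitable for: COPY <n> RECORDS INTO <table> FROM stdin USING DELIMITERS '\t';
# The field widths are placeholders; substitute the real 80-field layout.
import sys

FIELD_WIDTHS = [10, 8, 12, 20]   # hypothetical widths, not the real layout

def convert(infile, outfile, max_records=None):
    # Precompute (start, end) offsets once instead of per record.
    offsets, pos = [], 0
    for w in FIELD_WIDTHS:
        offsets.append((pos, pos + w))
        pos += w
    for count, line in enumerate(infile):
        if max_records is not None and count >= max_records:
            break  # stop early when building a test sample
        line = line.rstrip('\n')
        fields = [line[s:e].strip() for (s, e) in offsets]
        outfile.write('\t'.join(fields) + '\n')

if __name__ == '__main__':
    # Optional argument caps the output, e.g. 10000000 for a 10M sample.
    limit = int(sys.argv[1]) if len(sys.argv) > 1 else None
    convert(sys.stdin, sys.stdout, limit)

You would run it as, say, python fixed2tsv.py 10000000 < data.fixed > sample.tsv (file names hypothetical) and pipe the result into the SQL client together with the COPY statement. Keep the record count in COPY <n> RECORDS in sync with the sample size; stating it up front lets the server preallocate. If your file has no newlines at all (pure fixed-length records), read it in fixed-size chunks instead of lines.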