
Import Fixed-Width file into a Database

Posted: Tue Jun 16, 2009 10:29 am
by Support_Julie
Fixed-Width file to Database

Fixed-width file being imported:
Code:
1111111111aaaaaaaaaaaaaaaaaaaaaaaaa 11111111.11
2222222222bbbbbbbbbbbbbbbbbbbbbbbbb-22222222.22
3333333333ccccccccccccccccccccccccc 33333333.33
Field layout of database table receiving data:
Code:
PARTNBR             Character                   10      
DESCRIPTION         Character                   25       
PRICE               Packed Dec                  10,2 

This project gets a file from an FTP server, reads the fixed-width file, and then inserts the data into a database table. Please note that the SQL INSERT statement needs one question mark placeholder for each field you are inserting into. In this example, three fields are being inserted, so the insert statement is: INSERT INTO DEMO.TEST3 (PARTNBR, DESCRIPTION, PRICE) VALUES(?, ?, ?)
When inserting into two fields you would have two question marks in the insert statement, e.g.: INSERT INTO LIB.FILE VALUES(?,?)
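These placeholders follow the same convention JDBC uses for parameterized statements. As a minimal sketch of the same insert in plain Java (the JDBC URL and credentials below are hypothetical, not part of the project):
Code:
import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class InsertDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; substitute your own database.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:db2://host:446/DEMO", "user", "password");
             // One "?" per column being inserted, matching the statement above.
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO DEMO.TEST3 (PARTNBR, DESCRIPTION, PRICE) VALUES(?, ?, ?)")) {
            ps.setString(1, "1111111111");                      // PARTNBR
            ps.setString(2, "sample description");              // DESCRIPTION
            ps.setBigDecimal(3, new BigDecimal("11111111.11")); // PRICE
            ps.executeUpdate();
        }
    }
}
The full project definition follows: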
Code:
<project name="Fixed-width to database" mainModule="Main" version="1.0" logLevel="debug">

	<module name="Main">
		<createWorkspace />

		<ftp label="Connect to FTP server" resourceId="Dev 54">
			<get label="Get Fixed-width File" sourceFile="/Demo/TEST1.TXT" destinationDir="${system.job.workspace}" type="binary" destinationFilesVariable="fixedwidthFile" />
		</ftp>

		<readFixedWidth label="Read Fixedwidth File" inputFile="${fixedwidthFile}" outputRowSetVariable="fixedwidthData" skipFirstRow="false" recordDelimiter="CRLF">
			<data>
				<column index="1" name="PARTNBR" type="CHAR" size="10" alignment="left" />
				<column index="2" name="DESCRIPTION" type="CHAR" size="25" alignment="left" />
				<column index="3" name="PRICE" size="12" alignment="right" />
			</data>
		</readFixedWidth>


		<sql label="Connect to DB" resourceId="DEV54">
			<query label="clear out file">
				<statement>DELETE 
FROM 
	DEMO.TEST3
</statement>
			</query>
			<query label="Import Data" inputRowSetVariable="${fixedwidthData}">
				<statement>INSERT INTO DEMO.TEST3
	(PARTNBR, DESCRIPTION, PRICE)
VALUES
	(?, ?, ?)</statement>
			</query>
		</sql>


		<deleteWorkspace />

	</module>

	<description>FTP Get Fixed-width file and insert into Database</description>
</project>
* When doing an FTP GET of a fixed-width file, change the transfer type to BINARY to preserve the CRLF end-of-record markers.
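As a side note, the column offsets in the readFixedWidth task map onto fixed character positions within each record (10 + 25 + 12 characters per line above). A rough sketch of the same parsing in plain Java, assuming a local copy of the downloaded file (the file name here is illustrative):
Code:
import java.io.BufferedReader;
import java.io.FileReader;
import java.math.BigDecimal;

public class FixedWidthDemo {
    public static void main(String[] args) throws Exception {
        // Illustrative local file name; in the project this comes from the FTP get.
        try (BufferedReader in = new BufferedReader(new FileReader("TEST1.TXT"))) {
            String line;
            while ((line = in.readLine()) != null) {
                // Offsets match the column layout above: 10 + 25 + 12 characters.
                String partNbr     = line.substring(0, 10).trim();  // PARTNBR
                String description = line.substring(10, 35).trim(); // DESCRIPTION
                BigDecimal price   = new BigDecimal(line.substring(35, 47).trim()); // PRICE
                System.out.printf("%s | %s | %s%n", partNbr, description, price);
            }
        }
    }
}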

Re: Import Fixed-Width file into a Database

Posted: Fri Mar 03, 2017 7:41 am
by cmayil
Hello Team,
Could you please let me know how to do a bulk upload from a CSV/text file to a database?
I have text files with 2,274,700 records. Executing the SQL insert query as described in this topic is taking longer than expected.
It has been executing for around 2 hours.

Please help me with this.
Thanks

Re: Import Fixed-Width file into a Database

Posted: Fri Mar 03, 2017 4:34 pm
by Support_Rick
Please look at the Advanced tab under the SQL/Query task... it will give you the "batch size" option as below:
[Attachment: Batch Size.png — screenshot of the Batch Size option on the Advanced tab]

Re: Import Fixed-Width file into a Database

Posted: Tue Mar 21, 2017 3:04 am
by cmayil
Hi Rick,
Thank you for your reply.
It doesn't drastically improve the performance.
I set the batch size to 1024; I am afraid my server would hang if I set it any higher.
Without a batch size set, the job took 79 minutes to complete. With the batch size set to 1024, it took 63 minutes.
Please refer to the attached execution summary. Is there any other way to improve execution performance?

Thanks

Re: Import Fixed-Width file into a Database

Posted: Tue Mar 21, 2017 8:50 am
by Support_Rick
You could increment up from the 1024 ... even try 2048 and see what happens.

If you're afraid your server will hang, then you might have to look into why you feel that way about your server. Remember, this is a JDBC URL connection to your server. Making sure those connections have plenty of resources will make this run smoother. We're just giving you an option to increase the slice of rows sent with each transfer: instead of 1 row at a time, sending 2048 per batch could significantly decrease your total run time.
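For anyone curious what the Batch Size option corresponds to at the JDBC level, it is essentially statement batching: rows are queued client-side and sent to the database in one round trip per batch. A minimal sketch, assuming a standard JDBC driver (the connection details and stand-in row loop are hypothetical):
Code:
import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchInsertDemo {
    public static void main(String[] args) throws Exception {
        final int batchSize = 2048; // the value being tuned in this thread

        // Hypothetical connection details; substitute your own database.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:db2://host:446/DEMO", "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO DEMO.TEST3 (PARTNBR, DESCRIPTION, PRICE) VALUES(?, ?, ?)")) {

            conn.setAutoCommit(false); // commit per batch rather than per row
            int pending = 0;

            for (int i = 0; i < 2_274_700; i++) { // stand-in loop for the real rows
                ps.setString(1, String.format("%010d", i));  // PARTNBR
                ps.setString(2, "sample description");       // DESCRIPTION
                ps.setBigDecimal(3, new BigDecimal("1.00")); // PRICE
                ps.addBatch();                               // queue locally, don't send yet

                if (++pending == batchSize) {
                    ps.executeBatch(); // one round trip for the whole batch
                    conn.commit();
                    pending = 0;
                }
            }
            if (pending > 0) { // flush the final partial batch
                ps.executeBatch();
                conn.commit();
            }
        }
    }
}
Each addBatch call only queues the row on the client; executeBatch sends the whole slice at once, which is where the time savings over row-at-a-time inserts come from.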