Parsing Huge Text Files Using Java and JSapar

Last week a friend and I decided to parse a huge text file containing reports from legacy devices. After a few attempts we noticed that opening and parsing huge text files in Java is very time- and resource-consuming. We started with a 35MB log file. We had never worked with text files of this size, so we set out to find a suitable solution. Admittedly, Java is not the best fit for this kind of problem; I believe Python or Perl could handle it with better performance. However, because of later development plans and project requirements, we decided to stick with Java. After some searching on the web we found a brilliant tool. Tigris hosts some valuable open source projects, and JSapar is one of them. JSapar is a Java library providing a schema-based parser/producer of CSV (comma-separated values) and flat files. The goal of the project is to create a Java library that contains a parser for flat files and CSV files.
The file is imported into an object-oriented model that we call telegrams. The parser produces a Document class representing the content of the file, or you can choose to receive events for each line that has been successfully parsed. Tigris claims that JSapar can handle huge files without loading everything into memory.
The library is simple to use and easy to extend. Our log file consists of thousands of lines just like the sample line below:

948853 : 47 E6 18 FF 04 CD 0B 1D B1 C1 D1 1E ;
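To make the structure of such a line concrete, here is a minimal, hand-rolled split using only the standard library. The class and field names are ours, purely for illustration; this is not part of JSapar:

```java
// Minimal sketch: split one telegram line into its row number and body bytes.
// Example input: "948853 : 47 E6 18 FF 04 CD 0B 1D B1 C1 D1 1E ;"
public class TelegramLine {
    final long rowNo;
    final String[] bytes;

    TelegramLine(String line) {
        // Split into row number and body on the first ":".
        String[] parts = line.split(":", 2);
        this.rowNo = Long.parseLong(parts[0].trim());
        // Drop the trailing ";" and split the body into hex byte tokens.
        this.bytes = parts[1].replace(";", "").trim().split("\\s+");
    }
}
```

Doing this for every line works, but as noted below, feeding millions of lines through ad-hoc splitting or `Scanner` quickly becomes slow and fragile, which is where a schema-driven parser helps.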

This is a telegram. The first part is the row number (948853) and the following bytes contain a message. The two parts are separated by a ":". At first sight it looks like a straightforward procedure, but it is not as easy as it seems: millions of these lines make for a really slow and unstable application if you use the standard Java Scanner or similar parsers. First we defined a schema for the CSV file:

<?xml version="1.0" encoding="UTF-8"?>
<schema xmlns="">
  <csvschema lineseparator="\n">
    <line occurs="*" linetype="Telegram" cellseparator=":">
      <cell name="Row No" />
      <cell name="Body" />
    </line>
  </csvschema>
</schema>
Then we used a simple piece of Java code to read a 40MB text file into memory in less than ten seconds:

public final void loadTelegrams() throws SchemaException, IOException, JSaParException {
    Reader schemaReader = new FileReader("schema/schema.xml");
    Xml2SchemaBuilder xmlBuilder = new Xml2SchemaBuilder();
    Reader fileReader = new FileReader("repo/dat.txt");
    Parser parser = new Parser(xmlBuilder.build(schemaReader));
    telegrams = parser.build(fileReader);
}

With this code we can move through the whole file cell by cell very quickly.
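That cell-by-cell traversal can be sketched as follows, assuming the classic JSapar 1.x object model (a Document holding Lines, each Line holding named Cells). The method names here are assumptions to verify against the JSapar version you use:

```java
import org.jsapar.Cell;
import org.jsapar.Document;
import org.jsapar.Line;

// Sketch only: walk the parsed Document cell by cell.
// Assumes the classic JSapar 1.x model; verify method names for your version.
public void dumpTelegrams(Document telegrams) {
    for (Line line : telegrams.getLines()) {
        // Cells are addressed by the names declared in the schema.
        String rowNo = line.getCell("Row No").getStringValue();
        String body  = line.getCell("Body").getStringValue();
        System.out.println(rowNo + " -> " + body);
    }
}
```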



3 Responses to Parsing Huge Text Files Using Java and JSapar

  1. DHARA says:

    This example was very useful. Thanks a lot for the detailed explanation. It would be great if you could provide some more sample code if you have used features like building Java objects (using org.jsapar.Parser.buildJava()), explored the API for converting dates, or done some validation while parsing the text.


    • admin says:

      Have you tried the following way?

      Reader schemaReader = new FileReader("samples/CsvToJavaSampleSchema.xml");
      Xml2SchemaBuilder xmlBuilder = new Xml2SchemaBuilder();
      Reader fileReader = new FileReader("samples/names.csv");
      Parser parser = new Parser(xmlBuilder.build(schemaReader));
      List parseErrors = new LinkedList();
      List people = parser.buildJava(fileReader, parseErrors);

      The code returns a list of Person bean objects, each one mapped according to the Person definition in the XML schema.
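      For reference, the Person type above is just a plain Java bean whose property names match the cell names in the schema, so buildJava can populate it through its setters. The fields below are assumptions based on the sample, not taken from the JSapar distribution:

      ```java
      // Hypothetical bean matching a schema with "firstName" and "lastName" cells.
      public class Person {
          private String firstName;
          private String lastName;

          public String getFirstName() { return firstName; }
          public void setFirstName(String firstName) { this.firstName = firstName; }
          public String getLastName() { return lastName; }
          public void setLastName(String lastName) { this.lastName = lastName; }
      }
      ```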
