Convert XML with Spark to Parquet

Chinmay Sinha Spark, XML

Spark makes it easy to convert XML to Parquet and then query and analyse the output data.

As I have outlined in a previous post, XML processing can be painful especially when you need to convert large volumes of complex XML files. Apache Spark has various features that make it a perfect fit for processing XML files. It supports batch and streaming modes, can cache datasets in memory, and most importantly it can scale beyond a single server. These are some of the reasons why we have built our XML converter Flexter on top of Spark.

What is Flexter XML Converter?

Flexter is an enterprise XML converter. It is written in Scala and runs on Apache Spark. Flexter automatically converts XML to Hadoop formats (Parquet, Avro, ORC), Text (CSV, TSV etc.), or a database (Oracle, SQL Server, PostgreSQL etc.). You don’t have to write a single line of code. Everything happens automagically and you will be up and running in a day or two. As it runs on Spark it scales linearly with your XML volumes.

If you want to find out more about Flexter visit the product pages and our XML converter FAQ.

Show me how it works

Now that we have a good understanding of what Flexter does, let’s have a look at how it works.

In this example we will use Flexter to convert an XML file to Parquet. We then query and analyse the output with Spark.

How does Flexter generate the target schema?
We generate the target schema based on the information from the XML, the XSD, or a combination of the two. If you can't provide an XSD we generate the target schema from a statistically significant sample of the XML files. In summary you have three options to generate the target: (1) XML only (2) XSD only (3) Combination of XML and XSD.

When we generate the target schema we also provide various optional optimisations, e.g. we can influence the level of denormalisation of the target schema, and we can optionally eliminate redundant reference data by merging it into a single entity.

Flexter can generate a target schema from an XML file or a combination of XML and XML schema (XSD) files. In our example we process airline data based on the OTA standard. Both the XML files and the XSD are available and we use the information from both files to generate the target schema.

Now that we have gathered statistics from our XML sample, we can generate the logical target schema with the xsd2er command line tool using the -k switch (-k is a shortcut for --use-stats).
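As a sketch, the schema-generation step might look like this on the command line (the XSD path is a placeholder for this example):

```shell
# Generate the logical target schema with xsd2er.
# -k (shortcut for --use-stats) reuses the statistics
# gathered from the XML sample.
xsd2er -k path/to/ota.xsd
```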

Let’s go through the steps

Happy days. Now we use the Logical Schema ID (origin: 6) to convert the XML data to Parquet.

Once the command is ready, removing the --skip (or -s) switch allows us to process the data. We direct the Parquet output to an output directory for the data.xml file. Let’s first create a folder “output_dir” as the location for the generated output; the location is passed via the -o parameter when extracting data with the xml2er command.
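Sketched on the command line (paths are placeholders, and any further switches, such as the one selecting logical schema 6, follow the Flexter documentation):

```shell
# Create the output folder for the conversion.
mkdir output_dir
# -o sets the output location; with --skip (-s) removed,
# xml2er writes the Parquet output instead of doing a dry run.
xml2er -o output_dir data.xml
```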

We can find the extracted Parquet files in the output folder. Parquet output is a directory structure, which you can find in the current directory. We can run ls to see the contents of the .parquet folder as shown below.
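A quick look at the output directory might go like this (directory names are illustrative, based on the entities in this example):

```shell
# Parquet output is written as directories, one per entity.
ls output_dir
# Inspect the files inside one of the .parquet folders.
ls output_dir/Ticketing.parquet
```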

In order to look inside the Parquet files, let’s launch the spark-shell and create a DataFrame to load the Parquet tables parsed by Flexter.

Once we have started the spark-shell, we can read the generated Parquet files and import them as DataFrames in Spark.
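Inside the spark-shell, reading the output could look like the following sketch; the file names are assumptions based on the entities in this example:

```scala
// spark-shell provides the SparkSession as `spark`.
// Load the Parquet outputs generated by Flexter as DataFrames.
val df1 = spark.read.parquet("output_dir/AirTraveler.parquet")
val df2 = spark.read.parquet("output_dir/Ticketing.parquet")
```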

We can take a look at the schema of the DataFrames generated and do some preliminary analysis before digging deeper into the parsed data. For example, let’s look at the “Ticketing” and “Air Traveler” data created above.

We can see the headers and data types of the various columns. We can also perform some basic analysis on the dataset in Scala and look at the various variables present.
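A first inspection in the spark-shell might be sketched as follows, reusing the DataFrames loaded from the Flexter output:

```scala
// Print the schema (column names and data types) of each DataFrame.
df1.printSchema()
df2.printSchema()

// Preview a few rows and compute basic summary statistics.
df2.show(5)
df2.describe().show()
```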

Various basic data processing operations can be performed on the DataFrames generated in the steps above. The sql function on a SparkSession enables applications to run SQL queries programmatically and returns the result as a DataFrame.

Let's take the df2 DataFrame, which contains the Ticketing.parquet output, and query the rows that contain non-null values of the TravelerRefNumber.
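Assuming the view name below, the query could be sketched as:

```scala
// Register the Ticketing DataFrame as a temporary view so it can
// be queried with SQL, then filter out null TravelerRefNumber rows.
df2.createOrReplaceTempView("ticketing")
val ticketed = spark.sql(
  "SELECT * FROM ticketing WHERE TravelerRefNumber IS NOT NULL")
ticketed.show()
```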

Temporary views in Spark SQL are session-scoped and will disappear if the session that created them terminates. If you want a temporary view that is shared among all sessions and stays alive until the Spark application terminates, you can create a global temporary view. Global temporary views are tied to the system-preserved database global_temp, and we must use the qualified name to refer to them, e.g. SELECT * FROM global_temp.view1.
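A minimal sketch of a global temporary view, reusing the df2 DataFrame from above (the view name is illustrative):

```scala
// Global temporary views live in the system database global_temp
// and remain visible until the Spark application terminates.
df2.createGlobalTempView("ticketing_global")
spark.sql("SELECT * FROM global_temp.ticketing_global").show()

// Even a brand-new session can see the global view.
spark.newSession()
  .sql("SELECT * FROM global_temp.ticketing_global")
  .show()
```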

We can also perform other SQL queries on the DataFrames. Let’s take an example and perform a join on the two datasets loaded from the Parquet files.
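As a sketch, assuming the two DataFrames are linked by a parent/child key pair (the column names PK_AirTraveler and FK_AirTraveler are hypothetical, used here for illustration only):

```scala
// Join the Air Traveler and Ticketing DataFrames on the
// (hypothetical) surrogate key linking the two entities.
val joined = df1.join(df2,
  df1("PK_AirTraveler") === df2("FK_AirTraveler"))
joined.show()
```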


Are there other options for Spark XML conversion?

One option that can make the life of a Spark XML developer a little bit easier is an open source library. However, there are various issues with this library.


The library does not convert the XML hierarchy into a normalised representation of the data. For very simple XML files this may be ok. However, if you have slightly more complex XML files then this will be an issue. As an analogy think of dumping a complex ERP or CRM system with hundreds of tables into one flat table.

The library doesn’t accept multiple tables, it can’t handle complex trees, and it can’t work with an unknown XML tag structure.

Manual Development

In order to get to a properly normalised representation of your data you still have to go through all of the manual effort of normalising your data. This will likely require multiple passes over your data, which will eat up resources on your cluster and affect performance. Using a columnar compressed format can address this issue to some degree.


The library does not provide support for XSDs.

Schema Evolution

Changes to XML files, e.g. deleting or adding an attribute, are not handled gracefully.

A unified schema across multiple different versions of an XML schema is not handled.

In summary, the library works for simple conversion cases. If you have something a bit more complex you are better off with an enterprise tool like Flexter.