Parse HTML in Java with XPath and Jsoup

In this tutorial, we will explain how to parse and extract content from HTML source code. First we will download some real HTML with the Apache HTTP client, and then we will parse it with a handy Java library called Xsoup, which combines Jsoup with XPath and is better suited to extracting content from HTML than Jsoup's own selectors alone.

1) Download some HTML source code from internet

Let’s download the HTML source code of our website’s sitemap. For this, we need an HTTP client such as the one from Apache, so add these dependencies to your pom.xml:
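The dependency block was lost in this copy of the post. For the 4.x Apache HttpClient API used below (HttpClients.createDefault), the pom.xml entry typically looks like this (the version shown is an assumption; check Maven Central for the latest 4.x release):

```xml
<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpclient</artifactId>
    <!-- assumed version; use the latest 4.x release -->
    <version>4.5.13</version>
</dependency>
```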



Then to download the HTML source code into a String variable, do:

String url = "";  // URL of the sitemap to download

CloseableHttpClient client = HttpClients.createDefault();
String sitemap = EntityUtils.toString(client.execute(new HttpGet(url)).getEntity());

The sitemap looks like this:

	<url>
		<loc></loc>
	</url>
	<url>
		<loc></loc>
	</url>

2) Extract some content from the HTML source code using XPath

Let’s say we want to extract the list of our blog post URLs from the sitemap HTML we just downloaded. Notice in the HTML above that every URL is text wrapped first in a <loc> tag and then in a <url> tag. No other data in the source follows this pattern, so matching it is a reliable way to get all the blog post URLs and nothing else.

Using XPath, this pattern can be translated as:

//url/loc/text()

Now, let’s apply this pattern to the HTML code to extract the URLs. For this, we need a library called Xsoup. So add this dependency to your pom.xml:
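As with the HTTP client above, the dependency block is missing here; the pom.xml entry for Xsoup typically looks like this (the version is an assumption; check Maven Central for the latest release):

```xml
<dependency>
    <groupId>us.codecraft</groupId>
    <artifactId>xsoup</artifactId>
    <!-- assumed version; check Maven Central for the latest -->
    <version>0.3.1</version>
</dependency>
```

Xsoup declares Jsoup as a dependency, so Jsoup is normally pulled in transitively.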


Finally, the extraction code is:

String xpath = "//url/loc/text()";
Document document = Jsoup.parse(sitemap);
List<String> urls = Xsoup.compile(xpath).evaluate(document).list();

If you print the urls list to the console, you will get:

which is exactly what we were looking for.
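As an aside: since a sitemap is well-formed XML, the same //url/loc/text() expression also works with the JDK's built-in XPath support, with no extra dependency. Here is a minimal, self-contained sketch using a made-up example sitemap (the class name and sample URLs are ours, not from the original post):

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class SitemapXPath {

    // Extract all <url><loc> text values from a sitemap string
    // using the JDK's standard XPath API.
    public static List<String> extractUrls(String sitemap) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(sitemap)));
        NodeList nodes = (NodeList) XPathFactory.newInstance().newXPath()
                .evaluate("//url/loc/text()", doc, XPathConstants.NODESET);
        List<String> urls = new ArrayList<>();
        for (int i = 0; i < nodes.getLength(); i++) {
            urls.add(nodes.item(i).getNodeValue());
        }
        return urls;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical sitemap content for illustration.
        String sitemap = "<urlset>"
                + "<url><loc>https://example.com/post-1</loc></url>"
                + "<url><loc>https://example.com/post-2</loc></url>"
                + "</urlset>";
        System.out.println(extractUrls(sitemap));
        // prints [https://example.com/post-1, https://example.com/post-2]
    }
}
```

This avoids a third-party library when the input is strict XML, but for messy real-world HTML the Jsoup-based parsing shown above is the more forgiving choice.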

That’s it for this tutorial! If you have any questions, leave us a reply below; we reply within 24 hours.
