SPARQL engine for Node.js?


So, is there a SPARQL engine implemented in JS that runs in Node, for running queries against an in-memory graph loaded from a Turtle file? I can find lots of SPARQL clients written in JS, but I'm not sure about an actual SPARQL engine. Should I be looking at Comunica?


There’s this project that looks very promising: , though I haven’t tried it myself.


Oh wow, I did not know that. I wonder if this is on the RDFJS specs already or N3.js-only (old version). I asked Ruben to comment on Comunica as well; he has some super interesting ideas for it.


Comunica can indeed be used for this.
However, note that Comunica is a modular meta query engine, which means that it is a framework to build query engines.
This allows you to configure your (SPARQL) query engine with the features and algorithms you need, making it as lightweight or heavyweight as you want. This is especially useful if you need a lightweight query engine for specific tasks in a web app, without having to download megabytes of JS code.

We do however offer a couple of default configurations of Comunica, such as Comunica SPARQL, which supports (almost) the full SPARQL spec, and can query from a variety of sources (raw RDF files, SPARQL endpoints, TPF), and even federate over them. Next to that, we also offer a default config for querying over in-memory RDFJS sources, or querying over local HDT files.
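As a rough sketch of the in-memory case (untested; this assumes the `@comunica/query-sparql` and `n3` npm packages, and the API names are from memory and may differ between Comunica versions):

```javascript
// Hedged sketch: query an in-memory RDFJS store with Comunica.
const { QueryEngine } = require('@comunica/query-sparql');
const N3 = require('n3');

async function main() {
  // Load a Turtle string into an in-memory N3 store (an RDFJS Source).
  const store = new N3.Store();
  const parser = new N3.Parser();
  store.addQuads(parser.parse(`
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    <http://example.org/alice> foaf:name "Alice" .
  `));

  // Run a SPARQL query directly against the store.
  const engine = new QueryEngine();
  const bindingsStream = await engine.queryBindings(
    'SELECT ?name WHERE { ?s <http://xmlns.com/foaf/0.1/name> ?name }',
    { sources: [store] },
  );
  const bindings = await bindingsStream.toArray();
  console.log(bindings.map((b) => b.get('name').value));
}

main();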

An example that shows the usefulness of Comunica’s modularity is the Solid HTTP actor. It replaces Comunica’s default HTTP actor, and runs it through Solid’s authenticated HTTP request layer, which allows you to query Solid data pods that require authentication. This is for example being used in LDflex for Solid.

As far as I know, Comunica is the only engine that supports the RDFJS spec, which makes it directly compatible with all of the RDFJS tooling.

(I wanted to add some more links, but apparently there’s a limit on that for new users ;-))


Thanks @rubensworks, that’s informative and helpful. Comunica SPARQL indeed looks like what I want.

Comunica’s web presence and documentation are very polished, but not as effective as they could be, in my opinion. They focus very much on Comunica’s most advanced capabilities. I understand why that is—these are the features that most of the work went into, and that the contributors are most proud of. But most users have rather simple needs (at least at first!), and if, after some clicking around and skimming, everything seems to be about very advanced use cases that they don’t really understand, then they might go off and look for some simpler solution.

The first example on the Comunica SPARQL website is about querying Linked Data Fragments. It might be better to start off with something really simple, like querying a local file.

Describing Comunica as a “framework to build query engines” is not ideal for the same reason. I just need a query engine that works out of the box, not a framework to build my own :slight_smile: The next sentences explained in more detail what you meant, and that put me at ease again.

Anyway, thanks again for sharing and keep up the great work.


Thanks for your comment @cygri! I agree that the documentation and examples may be a bit too advanced at the moment. I’ll try to work in some simpler examples first, such as querying a local file.


Does Comunica now handle dereferencing of URI variables and URI constants in the body of a SPARQL query? This is crucial regarding the notion of crawling the Web using what I’ve started referring to as the “Small Data” pattern.

Due to link constraints here, I’ll refer you to my recent blog post about Small Data, which also includes links and references to other tools (beyond our Virtuoso product) that have long adhered to this pattern, e.g., the Semantic Web Client Library (@cygri worked on that) and Tabulator (the default Solid Data Browser).


@kidehen no, Comunica does not support crawling LD documents yet unfortunately (it does for SERVICE clauses).
I am working on some prototypes though (as it’s one of my main research goals), but none of those work well enough to be part of the public release.

I definitely agree that this is going to become essential in the future, especially when LD becomes distributed over many sources like Solid data pods.


Okay re your future deliverables regarding Small Data etc…

Regarding SERVICE clauses, are you saying Comunica supports SPARQL-FED as-is, or does it add something beyond basic SPARQL-FED functionality?


@kidehen Indeed, SPARQL federation using SERVICE clauses is supported in Comunica. While the SPARQL spec only supports SPARQL endpoints as a target, we also allow other source types (TPF, RDF documents).
Here’s a simple example of such a query.
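For illustration (a hedged sketch with a placeholder URL, not taken from the Comunica docs), such a federated query over a plain RDF document might look like:

```sparql
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

SELECT ?name WHERE {
  # The SERVICE target here is a plain RDF document, not a SPARQL endpoint.
  SERVICE <https://example.org/people.ttl> {
    ?person foaf:name ?name.
  }
}
```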


@rubensworks do we have an issue for something like that in SPARQL-1.2? That looks super useful. I also love the idea of doing API calls to non-RDF resources that way, although that is a bit more difficult as many stores validate the query sent there.


@ktk Yes, I created an issue for it here:

I also love the idea of doing API calls to non-RDF resources that way, although that is a bit more difficult as many stores validate the query sent there.

That would be very useful indeed. But next to the source URL, you’d also need some kind of (RML?) mapping file so that the source can be RDF-ified on the fly.


@rubensworks for JSON APIs, a JSON-LD context would be enough IMO. I don’t think that’s something that will go into SPARQL anytime soon, but we should play around with it anyway to get some ideas.
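For instance, a minimal hypothetical context for a JSON API that returns objects with `name` and `homepage` fields might be (the vocabulary choice here is just an illustration):

```json
{
  "@context": {
    "name": "http://xmlns.com/foaf/0.1/name",
    "homepage": { "@id": "http://xmlns.com/foaf/0.1/homepage", "@type": "@id" }
  }
}
```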


For URLs that respond with JSON, you wouldn’t even need a JSON-LD context, as it’s straightforward to translate JSON into triples in a generic way. The graph pattern inside the SERVICE clause would then be evaluated against the triples like for a local file.
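A generic JSON-to-triples translation could be sketched like this (a toy illustration, not an established mapping: subjects are path-derived blank-node-style labels, and predicates reuse the JSON keys under a made-up `BASE` namespace):

```javascript
// Hedged sketch of a generic JSON-to-triples translation.
const BASE = 'http://example.org/json#'; // hypothetical namespace

function jsonToTriples(value, subject = '_:root', triples = []) {
  if (Array.isArray(value)) {
    // Arrays become item links, one per element, indexed by position.
    value.forEach((item, i) => {
      const child = `${subject}.${i}`;
      triples.push([subject, BASE + 'item', child]);
      jsonToTriples(item, child, triples);
    });
  } else if (value !== null && typeof value === 'object') {
    for (const [key, val] of Object.entries(value)) {
      if (val !== null && typeof val === 'object') {
        const child = `${subject}.${key}`;
        triples.push([subject, BASE + key, child]);
        jsonToTriples(val, child, triples);
      } else {
        // Literals are kept as JSON-encoded strings for simplicity.
        triples.push([subject, BASE + key, JSON.stringify(val)]);
      }
    }
  } else {
    triples.push([subject, BASE + 'value', JSON.stringify(value)]);
  }
  return triples;
}

const triples = jsonToTriples({ name: 'Alice', tags: ['x', 'y'] });
console.log(triples.length); // 6 triples for this input
```

A real implementation would also have to decide how to handle identifiers and datatypes, which is where a JSON-LD context (as mentioned above) earns its keep.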

For CSV and other formats that already have a tabular structure, the VALUES OF proposal here could be the way to go:


I just tried the following, but it didn’t work. Do you see anything wrong with the query etc?

  PREFIX wd: <>
  PREFIX wdt: <>
  PREFIX wdtn: <>
  PREFIX pq: <>
  PREFIX ps: <>
  PREFIX wikibase: <>
  PREFIX foaf: <>
  PREFIX dbo: <>
  PREFIX bd: <>
  PREFIX dct: <>

  SELECT DISTINCT ?item ?dbpediaID ?itemLabel ?source ?sourceLabel ?image ?subjectName
  WHERE {
    SERVICE <> {
      SELECT DISTINCT ?item ("00FFFF" AS ?rgb) ?itemLabel ?source ?sourceLabel
      WHERE {
        ?item wdt:P361 wd:Q81414.
        ?item wdt:P1343 ?source.
        SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". }
      }
    }
    SERVICE <> {
      SELECT DISTINCT ?dbpediaID ?image ?subject ?subjectName
      FROM <>
      WHERE {
        ?dbpediaID owl:sameAs ?item ;
                   dct:subject ?subject .

        OPTIONAL { ?subject rdfs:label ?subjectName } .
        OPTIONAL { ?dbpediaID foaf:depiction ?image } .
        FILTER (LANG(?subjectName) = "en")
      }
    }
  }

That looks like a valid query to me. The parser seems to (incorrectly) complain about it. I created an issue for it here:


Okay, great!

I’ll track the github issue :slight_smile: