I am new around here.
I have made a proposal to improve WDQS (and ultimately Wikidata). Here is the relevant part from the pre-print I am working on at Wikiversity's WikiJournal:
The idea that could be adopted to scale WDQS, while remaining easy to set up and future-proof, is to rely on a thick-client, thin-server paradigm built on the nstore. In this paradigm, the nomunofu nstore server only knows how to do pattern matching, and it is very fast at that particular task. Meanwhile, the client of nomunofu, a middleware, must translate SPARQL queries into the format understood by the nomunofu nstore. At this point, two routes are possible:
- Wikimedia only hosts nomunofu servers
- Wikimedia hosts both nomunofu servers and the SPARQL middlewares
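To make the split concrete, here is a minimal sketch of the server-side primitive described above: a store that only answers tuple pattern matches, with the middleware translating SPARQL into such calls. All names and the data are illustrative assumptions, not nomunofu's actual API.

```python
# Hypothetical sketch of the thick-client, thin-server split.
# The "server" side only does tuple pattern matching over a set of
# (subject, predicate, object) triples; a SPARQL middleware on the
# client side would compile queries down to these calls.

TRIPLES = {
    ("Q42", "P31", "Q5"),       # example: Q42 — instance of — human
    ("Q42", "P106", "Q36180"),  # example: Q42 — occupation — writer
    ("Q1", "P31", "Q36906466"), # example: Q1 — instance of — class
}

class Var:
    """A variable slot in a pattern, e.g. Var('s')."""
    def __init__(self, name):
        self.name = name

def match(triples, pattern):
    """Server-side primitive: yield variable bindings for one pattern."""
    for triple in triples:
        bindings = {}
        for slot, value in zip(pattern, triple):
            if isinstance(slot, Var):
                bindings[slot.name] = value
            elif slot != value:
                break  # constant in the pattern does not match
        else:
            yield bindings

# The middleware would translate a SPARQL query such as
#   SELECT ?s WHERE { ?s wdt:P31 wd:Q5 }
# into a pattern-matching call against the store:
humans = [b["s"] for b in match(TRIPLES, (Var("s"), "P31", "Q5"))]
print(humans)  # ["Q42"]
```

In the first route above, this `match` step runs on Wikimedia's servers while the SPARQL translation runs on the client; in the second, both run on Wikimedia infrastructure.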
The first solution is the least costly for Wikimedia. The second solution will, in theory and with proper provisioning, lead to faster end-user query times, because less data will travel over the Internet between WDQS and the end-user.
The grant project can be found at https://meta.wikimedia.org/wiki/Grants:Project/Future-proof_WDQS
Please give me some feedback.