I wonder what a distributed search engine would look like. Basically, the index would be sharded across users' machines, and each query would hit some representative sample of that index. This means:
- hosting costs are very low - you just need a way to proxy requests into the network
- search times should improve as more people use the service, since each new node adds capacity
- no risk of the service logging anything - individual nodes don't need to know who requested the data, only who to send the response to
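To make the sampling idea concrete, here's a minimal in-process sketch. It's purely illustrative: shards are just dicts simulating remote nodes, the index is a bare term-to-count inverted index, and all the names (build_shards, query_sample) are mine, not from any existing project.

```python
import random
from collections import defaultdict

def build_shards(docs, n_shards):
    """Partition documents round-robin across shards.
    Each shard is an inverted index: term -> {doc_id: term_count}."""
    shards = [defaultdict(dict) for _ in range(n_shards)]
    for i, (doc_id, text) in enumerate(docs.items()):
        shard = shards[i % n_shards]
        for term in text.lower().split():
            shard[term][doc_id] = shard[term].get(doc_id, 0) + 1
    return shards

def query_sample(shards, term, sample_size, rng):
    """Query only a random sample of shards and merge the scores.
    With sample_size < len(shards), results are approximate -
    that's the trade-off the comment describes."""
    hits = {}
    for shard in rng.sample(shards, min(sample_size, len(shards))):
        for doc_id, score in shard.get(term, {}).items():
            hits[doc_id] = hits.get(doc_id, 0) + score
    return sorted(hits.items(), key=lambda kv: -kv[1])
```

Querying a sample rather than every shard is what keeps latency flat as the network grows, at the cost of recall: a document living on an unsampled shard simply won't show up for that query.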
My biggest concern is how to build the index, but if OP is willing to share that, I might start hacking on a distributed version.