Day 3: Upgrades and Stack Traces

Today, I bought a virtual private server for everything plantdata.io that I’m not planning on running out of a container, including this blog. I had been running it all on shared hosting. It took a bit to move everything over to the VPS, but things should be noticeably faster as a result. I had hoped it would be a while longer before I had to upgrade my hosting, but I noticed it was occasionally taking longer than 10 seconds to load simple pages or reach the MediaWiki API. Not only was that getting on my nerves; with response times like that, it would certainly cruise straight past the updater script’s timeouts. This should be far less of a problem on the new VPS.

Once that was completed, I got back into the minor explosion I created at the end of Day 2 (which was not technically yesterday). I have so far been unable to get the containerized Wikidata Query Service to pull data correctly from plantdata’s Wikibase instance, but I think I now know why: Docker is doing some things I wasn’t expecting. This makes sense, as my expectations for what Docker does and does not do are roughly three days old at this point. It would be pretty weird if I already knew all the surprises after three days.

I did manage to catch a pretty good ‘gotcha’ on behalf of the crowd hoping to do similar things with a containerized WDQS pointing at an existing Wikibase instance. Running runUpdate.sh in verbose mode revealed that the script assumes you’ve configured your Wikibase instance to run at [domain]/w/. So, if your MediaWiki API is configured to live somewhere other than [domain]/w/api.php, you will either need to set up a redirect on the Wikibase end or hack your actual directory structure into runUpdate.sh. After reading some documented reasons why you probably don’t want to run MediaWiki straight from the web root (mine was), I opted to change the MediaWiki configuration to match the script.
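For anyone making the same change, the MediaWiki side of it is small. Here is a minimal sketch of the relevant LocalSettings.php entries, assuming a fairly standard install; the values shown are illustrative, not copied from my config:

    # LocalSettings.php (illustrative values; adjust for your own install)
    $wgServer      = "https://plantdata.io";  # the wiki's public domain
    $wgScriptPath  = "/w";                    # api.php now answers at /w/api.php, which runUpdate.sh expects
    $wgArticlePath = "/wiki/$1";              # optional short page URLs, kept out of /w/

Note that $wgScriptPath only tells MediaWiki what URL path to advertise; the web server still has to map /w/ to the MediaWiki install directory (an Apache Alias or an nginx location block).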

Tomorrow’s plan is identical to yesterday’s plan in every way, including the part where it’s clearly a multi-day plan that couldn’t fit into a single day, no matter how great a day it ends up being:

  • Continue learning things about Docker, presumably by exploding and unexploding all the test containers until I stop being surprised by its behavior.
  • Finish writing my own compose file to forgo the containerized Wikibase instance, and instead point all the query service containers at my real, pre-existing plantdata Wikibase install (see the compose sketch after this list)
  • Verify that the instances are communicating the way I think they should be, or learn enough to alter my expectations
  • Start populating data in the real plantdata Wikibase instance (Data In)
  • Get comfortable with SPARQL and write some sample queries (Data Out; a starter query is sketched below)
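For the compose-file item above, the shape I’m aiming for is roughly the following: only the query service containers, with the updater pointed at the existing wiki rather than at a bundled Wikibase container. This is a sketch, not my working file; the image names, the script paths, and the environment variables (WIKIBASE_HOST, WDQS_HOST, WDQS_PORT) are assumptions modeled on the wikibase-docker examples, so check the documentation for whichever images you actually run:

    version: "3"
    services:
      wdqs:
        image: wikibase/wdqs            # assumed image; pin a specific tag in practice
        command: /runBlazegraph.sh      # assumed entry point for the triple store itself
      wdqs-updater:
        image: wikibase/wdqs            # the updater runs runUpdate.sh from the same image
        command: /runUpdate.sh          # assumed entry point; verify the path inside the image
        depends_on:
          - wdqs
        environment:
          WIKIBASE_HOST: plantdata.io   # the real, pre-existing wiki, not a container
          WDQS_HOST: wdqs               # the query service container above
          WDQS_PORT: "9999"

A wdqs-frontend or wdqs-proxy service would slot in alongside these, but the updater is the piece that actually has to find the wiki’s API at [domain]/w/api.php.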
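And for the ‘Data Out’ item, the first query doesn’t need to be clever. Something like this, run against the query service’s SPARQL endpoint, is enough to confirm that triples are actually arriving:

    # List any ten triples currently in the store. If this comes back empty,
    # the updater and the wiki still aren't talking to each other.
    SELECT ?subject ?predicate ?object
    WHERE {
      ?subject ?predicate ?object .
    }
    LIMIT 10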
