So you lost your Java Keystore and you need to update your APK on Google Play?

Well… not saying I ever did anything as stupid as this, or spent hours fixing things. Yesterday. But say I had, this is exactly what I would have done to fix it.

Disclaimer: These keys were for a pet project of mine and got lost during a reinstall of my OS. I’m much more careful with things like this for actual clients. Anyway.

So you’ve uploaded a build of your app to the Play store some time ago, then come to upload the next version and it complains that the SHA1 fingerprints don’t match. You’ve signed it with the wrong key!

I wonder where the right one is? Your keys could have a number of file extensions: .jks (which stands for Java KeyStore), .keystore or .cer, for instance. Find your backup drive, or whatever volume said keys are likely to be on, and:

find /Volumes/YourDrive/ -name "*.jks"

Repeat for other extensions until you get something. Or, to save yourself the repetition, a single find along these lines should catch the common ones in one go:
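find /Volumes/YourDrive/ \( -name "*.jks" -o -name "*.keystore" -o -name "*.cer" \)

Got a hit? Let’s see what SHA1 fingerprints those files have: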

keytool -v -list -keystore ~/Code/Keystores/Android/your.keystore

Keystores will be password-protected. If they’re “debug” keystores, they’ll have been autogenerated by the Android SDK and the password will be “android”. If not, happy guessing…

Any matching strings of numbers and letters there? No? Well, let’s see what the APK was actually signed with… presuming you have a copy of it. If not, I think there are various ways to get it direct from the Play Store if you have a Google…

APKs are zip files so:

unzip your.apk
keytool -printcert -file META-INF/CERT.RSA

What does this show you? You’ll see the SHA1 fingerprint you’ve been nagged about to start with, and also the owner info. That will hopefully give you some clues as to where to search next for your keystore. In my case I had Owner: C=US, O=Android, CN=Android Debug because I was a complete idiot and had signed the production build with my debug key. I found it in an old ~/.android/ folder on a backup disk. For some reason, the Play Store was happy to accept the cert, despite what people say elsewhere. Congratulations: your debug cert is now your production cert!
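If you suspect you’ve made the same mistake, the Android SDK’s debug keystore normally lives at ~/.android/debug.keystore, with the store password “android” and the alias “androiddebugkey”, so you can list it directly:

keytool -v -list -keystore ~/.android/debug.keystore -storepass android -alias androiddebugkey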

Hope that helps someone anyway.


The perils of “nulled” WordPress themes

WordPress has the largest ecosystem of any CMS, so its users have a lot of choice when it comes to themes. There are thousands of open-source offerings on WordPress.org, as well as many more paid-for ones on marketplaces such as Envato.

I recently wanted to assess the code quality of a few different paid themes before deciding on one to buy, so I downloaded some “not quite legitimate” copies from one of the many shady “nulled theme” sites, to run in a sandbox. Obviously you’d have to be a) quite immoral and b) an idiot to use such things in production, but I figured: what’s the harm if the winning one’s going to be bought anyway?

After choosing the winner I thought I’d diff the two copies I had. Just how bad was the malware going to be?

Continue reading “The perils of “nulled” WordPress themes”


Setting permissions for your GitLab CI Runner & W3 Total Cache

So in the last post on this, I looked at setting up auto-deploy for a WordPress site using GitLab’s CI runner. I also wanted the W3TC cache to be cleared, and thanks to WP-CLI, that was possible by adding:

- sudo -u www-data wp w3-total-cache flush

to the end of the script node in .gitlab-ci.yml.
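For context, the relevant bit of .gitlab-ci.yml ends up looking something like this (the job name and the other steps are illustrative, based on the build from the last post):

deploy:
  script:
    - git pull
    - gulp build
    - sudo -u www-data wp w3-total-cache flush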

Now. “sudo?!?!” I hear you say? A security risk if ever there was one. Luckily it doesn’t have to be, if things are set up right: make a separate user for GitLab Runner and limit it to sudo-ing only as www-data, and only running WP-CLI while doing so.

It’s a good idea to set up a separate user in general too, for security.

There wasn’t an awful lot of info on that when I Googled, though, so (from initial user creation):

adduser gitlabrunner
usermod -a -G www-data gitlabrunner
passwd gitlabrunner # set a password for your new user
# ...then ssh into the box with that user & pass and:
ssh-keygen -t rsa
# Set a new deploy key in GitLab's admin settings using your new id_rsa.pub so your new user can "git pull"
vi /home/gitlabrunner/.ssh/authorized_keys
# copy the public key from your GitLab box in, for instance from /root/.ssh/id_rsa.pub
chown -R gitlabrunner:gitlabrunner /home/gitlabrunner/.ssh
chmod -R go-rwx /home/gitlabrunner/.ssh
# Edit your sudoers file
visudo
# add:
gitlabrunner ALL=(www-data:www-data) NOPASSWD: /usr/local/bin/wp
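
To sanity-check it, log in as gitlabrunner and run the command by hand (the --path here is just an example; point it at your site’s web root):

sudo -u www-data /usr/local/bin/wp w3-total-cache flush --path=/var/www/yoursite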

Done! Your CI script will now be able to run WP-CLI commands as www-data. No root access, no entering passwords.


Using GitLab CI to Deploy WordPress Sites

So I work with a team that use Git to manage all the theme code for a large WordPress site. It’s a corporate, and as is the way with big organisations, they wanted their version control in-house. Hence I set them up a GitLab box. Want to make an alteration to CSS or whatever?

  1. Code & test on your local dev stack
  2. Commit via git & push up to Gitlab
  3. SSH into the server, `git pull` the new code down
  4. Run `gulp build` to update the compiled assets

Job done. Wouldn’t it be great if you didn’t need steps 3 & 4 though? They don’t take long in the short term, but get very tedious after a while. They also introduce an element of risk if you have less technical people needing to deploy code: people who shouldn’t be let loose on the terminal of your production web server… Hence: GitLab CI to the rescue!

Continue reading “Using GitLab CI to Deploy WordPress Sites”



Local Theme Development on WordPress MultiSite

So this blog runs as part of my WP Multisite network. Multisite is great – one click to upgrade the WP core for all sites, one setup for caching etc.

One problem is local dev though. You don’t want to pull the whole network down to your local machine to work on a theme (it’d take ages), but you still want to develop against the posts and pages that are on the live site.

Interesting solution to this:

WP Multisite stores DB tables with a different prefix for each site. So for instance you’d have wp_2_posts, wp_2_options etc. for site No. 2.
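You can see this for yourself from the MySQL shell (using the DB name and user from the config below):

mysql -u your_multisite_user -p your_multisite_db -e "SHOW TABLES LIKE 'wp_2_%'"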

Let’s try altering a few bits in wp-config.php to make use of this:

define('WP_HOME', 'http://localsite.loc');    // your local dev URL
define('WP_SITEURL', 'http://localsite.loc');
define('DB_NAME', 'your_multisite_db');       // the live Multisite DB...
define('DB_USER', 'your_multisite_user');
define('DB_PASSWORD', 'yourpw');
define('DB_HOST', '127.0.0.1:3307');          // ...reached via the SSH tunnel below
$table_prefix = 'wp_2_';                      // point at site No. 2's tables

Finally, create an SSH tunnel so you can access the DB running on your server via local port 3307 (your local MySQL will already be on 3306)… presuming you need to do this, of course (there’s an article all about this here):

ssh -L 127.0.0.1:3307:127.0.0.1:3306 user@yourserver -N
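
Add -f if you’d rather the tunnel run in the background instead of hogging a terminal:

ssh -f -N -L 127.0.0.1:3307:127.0.0.1:3306 user@yourserver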

That done, you should be good to go! One important caveat with this method: the wp_users and wp_usermeta tables are global to all sites in the Multisite network (along with a few others), so you won’t be able to log in on your local copy, and anything involving users on the front end will be screwed (like displaying post authors, for instance). That really wasn’t much of a problem for my purposes on this site though, so I thought I’d share.

Hope this helps someone!


Browser Sync, WordPress and Access-Control-Allow-Origin Errors

Well hopefully this will save someone half an hour….

A common way to make front end JS interact with WordPress’s AJAX functions (/wp-admin/admin-ajax.php) is to use something like:

[code lang=php]
wp_localize_script( 'my-script', 'myAjax', array( 'ajaxurl' => admin_url( 'admin-ajax.php' ) ) );
[/code]

Which is great. Unfortunately you’ll run into problems if you’re using Browser Sync for testing, in the form of a load of Access-Control-Allow-Origin errors in your console.

This is because wp_localize_script will escape the URL, scuppering the clever search & replace stuff Browser Sync does in its middleware / proxy functions.
To get round this, do something like:

[code lang=php]
function my_ajax_url() {
?>
<script type="text/javascript">
var ajaxurl = '<?php echo admin_url( 'admin-ajax.php' ); ?>';
</script>
<?php
}
add_action( 'wp_head', 'my_ajax_url', 3 );
[/code]
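
Your JavaScript can then use the global as normal. A minimal sketch, with ‘my_action’ standing in for whatever AJAX action you’ve registered:

[code lang=javascript]
jQuery.post( ajaxurl, { action: 'my_action' }, function( response ) {
	console.log( response );
});
[/code]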

Sorted. Also, don’t go down the first route I did of making all WP’s URLs root-relative, because:
1. You can’t filter admin_url()
2. There are good reasons not to.


The best way to spider & download an entire website

There seem to be a lot of GUI tools out there that purportedly download entire websites. The last one of these I tried using was SiteSucker, which sucked pretty badly. What you really need is this:

wget --recursive --no-clobber --page-requisites --html-extension --convert-links --domains mywebsite.com --no-parent www.mywebsite.com

run from the terminal. I put this here in the interests of me remembering all the options, and hopefully saving someone else hours of fruitless messing with crap applications that don’t work.
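For the record, here’s what each of those options does:

# --recursive        follow links and download the whole site
# --no-clobber       don't re-download files that already exist locally
# --page-requisites  also grab the CSS, JS and images each page needs
# --html-extension   save pages with a .html extension
# --convert-links    rewrite links so the local copy works offline
# --domains          don't follow links off the given domain
# --no-parent        never ascend above the starting directory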

Continue reading “The best way to spider & download an entire website”


Setting up SSH public key authentication

Tired of typing a password every time you log in to your server? You’d be needing some SSH keys then. DSA keys used to be the standard recommendation, but they’re deprecated in modern OpenSSH, so go with RSA instead. On your local machine:

[bash]ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub[/bash]

copy the key you’ve just made to the clipboard, and on your remote machine paste it into the file ~/.ssh/authorized_keys
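
Or, if your local machine has ssh-copy-id, it’ll append the key and sort the file permissions for you:

[bash]ssh-copy-id user@yourserver[/bash]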

Simple as that. No more typing that password every time.


Fixing slow wp-cron requests on servers behind NAT routing

So we’ve been having a weird problem with WordPress installs on supposedly fast Linux VMs running very slowly. It was mainly noticeable on server requests for the Dashboard by logged-in users. After a bit of analysis using New Relic, we narrowed it down to external web requests taking a long time.

The trouble was, these requests were mainly pointed back to the server itself. For instance:

http://myserver.com/wp-cron.php?doing_wp_cron=12345678

The actual load on Nginx & PHP processes was practically non-existent. PHP was just waiting for its cURL requests to come back. How strange.

As it turns out, the server was receiving all its network traffic via NAT, which meant that while myserver.com resolved to, say, 194.100.99.98 (an external IP), all the box itself was aware of was its internal IP, say 10.11.12.13. The external IP was never directly configured on any of its network interfaces. This meant that requests like the above had to make a round trip all around the network and back again before they’d get processed. Not good.

Our solution was to set up an alias for the loopback interface so the external IP would behave exactly like 127.0.0.1 when requested internally, and so never leave the box:

[bash]
vi /etc/sysconfig/network-scripts/ifcfg-lo:1

# ...and give it these contents:
DEVICE=lo:1
IPADDR=194.100.99.98
NETMASK=255.255.255.255
NETWORK=194.100.99.98
ONBOOT=yes
NAME=loopback194
[/bash]

A quick “service network restart” or a reboot, and job done!
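
To check the alias took, list it with ifconfig and ping the external IP; round trips should now be near zero, since they never leave the box:

[bash]ifconfig lo:1
ping -c 3 194.100.99.98[/bash]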

This was on CentOS 6.5 btw, but I’m sure it’s pretty similar for Ubuntu or any other flavour of Linux.