Steve Grunwell

Open-source contributor, speaker, and electronics tinkerer

Keeping WordPress Under [Version] Control with Git

Update 6/16/16: In the years since I wrote this blog post, there have absolutely been changes to my workflow. I no longer recommend versioning WordPress core, and prefer to install dependencies via dependency managers like Composer.

Update 9/26/12: Based on some excellent points made by Scott in the comments, I’ve updated the default .gitignore file.

Over the last year or so I’ve been deploying my sites and applications almost exclusively through Git. It took a while to get used to, but pushing all of my code through Git has forced me to think through my code before committing (lest I get git blame'd), kept me focused on the task at hand, and made collaborating with other developers much easier.

There are a number of different ways developers like to keep WordPress sites in Git. Some people commit everything while others may only track the theme(s), excluding core, plugins, uploads, etc. My preferred way of tracking/deploying WordPress sites through Git requires a little configuration; this article outlines my personal WordPress-Git workflow.

WTH is Git?!

I’m assuming (or at least hoping) that all web developers today have at least heard of Git, but here’s the nickel tour: Git is a distributed source-control system developed largely by Linus Torvalds (creator of Linux). Git serves a similar function as other Version Control Systems (VCSs) in that it allows development teams to track code revisions, share code, and manage multiple software versions from a single codebase. Unlike centralized systems (Subversion, CVS, etc.), Git is decentralized; every instance of the repo contains the full history of the project.

Using a system like Git, a development team can easily track changes in projects. As a developer, I can see that Bob added three files to the repo and that Alice changed four lines in one of the stylesheets. I can do a git pull to merge their changes into my local copy of the repo. If next week a site goes down after new code is deployed, I can roll back changes and do a git blame to see when a particular line was changed, who made it, and what other files were changed at that time.
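As a toy illustration of that history-inspection workflow, here is a sketch you can run in a throwaway repository (the file name and identity are hypothetical):

```shell
# Build a throwaway repo, make two commits, then inspect the history.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"  # placeholder identity
git config user.name "Dev"

echo "body { color: red; }" > style.css
git add style.css
git commit -qm "Add stylesheet"

echo "body { color: blue; }" > style.css
git commit -aqm "Change body color"

# Which commit last touched each line of the stylesheet?
git blame -s style.css

# Compact history for just that file
git log --oneline -- style.css
```

The same `git blame` and `git log` invocations work against any tracked file, which is what makes rolling back a bad deploy so much less stressful.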

Additional resources

There are plenty of great resources for learning to use Git:

Using Git with WordPress

When I start tracking a new WordPress site in Git, my first step is creating a .gitignore file in the repo to control what does and, more importantly, what does not get into the repository.

My default .gitignore file looks something like this:
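A sketch of what that file might contain, assembled from the entries discussed in this post and its comments (the sitemap filenames are assumptions; adjust for your project):

```gitignore
# Generated/cache files
wp-content/cache/
wp-content/upgrade/
wp-content/backup/*
sitemap.xml
sitemap.xml.gz

# Environment-specific configuration (contains credentials)
wp-config.php

# Uploaded media
wp-content/uploads/*
# Include these files in previously blocked directories
!wp-content/uploads/.htaccess

# Operating-system files
.DS_Store
[Tt]humbs.db
```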

Most of the entries are pretty self-explanatory: we don’t want cache files, upgrade files, or dynamic sitemaps in the repository. I also like to keep out non-essential system files, preventing Desktop Services Store (.DS_Store) files or, in Windows environments, Thumbs.db files from being committed, as they serve no purpose outside of my local environment.

I don’t keep the wp-content/uploads directory in the repository for several reasons:

  1. As soon as a client starts uploading media on production, your repo will be out of date.
  2. By default, WordPress generates multiple copies of every uploaded image (thumbnail, medium, large). If all of those images are in your repo, everyone who clones it will be downloading every version of every image that’s been uploaded. That means more bandwidth, larger repos, and no real benefit.
  3. Remember: version control !== site backup. Just like anything else, repos can be deleted, corrupted, or generally messed up. There are plenty of automated ways to back up your site, WordPress-based or otherwise (I’ve been using WordPress Backup to Dropbox on this site and have no complaints).

Up until a few months ago, whenever I needed to make updates to a site theme I would manually FTP the files from production into my local repo (thus preventing my local copy of the site from breaking). That got old really quickly, so I put together this mod_rewrite rule to simplify the process:
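A sketch of such a rule, placed in wp-content/uploads/.htaccess on development environments only (hypothetical-production.example is a placeholder for your real production domain):

```apache
# wp-content/uploads/.htaccess (development environments only)
<IfModule mod_rewrite.c>
    RewriteEngine on
    # Skip the rewrite when the request resolves to an existing
    # local file or directory...
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteCond %{REQUEST_FILENAME} !-f
    # ...otherwise, send the request to the production copy of uploads.
    RewriteRule (.*) http://hypothetical-production.example/wp-content/uploads/$1 [NC,L]
</IfModule>
```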

This checks your local wp-content/uploads directory for the requested file and, if it doesn’t exist, attempts to load it from production (you’ll obviously need to substitute your own domain/paths in the snippet). This makes it much easier to work on a local copy of a production site, since you won’t need your own copy of the wp-content/uploads directory.

You may have noticed that I keep wp-config.php out of my repos. Other developers may disagree, but I try to avoid committing passwords wherever possible. One drawback of Git keeping track of every change to a codebase is that a password committed even once can be recovered later (unless you purge it from the repo’s history). When deploying to a new environment, I think it’s easy enough to cp wp-config-sample.php wp-config.php and then edit the newly-created wp-config.php in the editor of your choice. Keeping your config file out of the repo also reduces the risk of another developer accidentally overwriting your local (or worse, production) configuration upon deployment.

What’s your workflow like?

This workflow seems to work pretty well for me. I’m able to work locally, push it up to a remote Git server (usually Github, Bitbucket, or Gitorious), then pull the latest version of the repo into the staging and/or production environments. I don’t need to make a copy of my uploads directory, worry about my configuration being overwritten, or worry about polluting the repo with any of my environment-specific configurations or system files.

What about you? How are you using Git in your WordPress workflow? I’m especially curious to see how other developers handle moving the database between environments!




  1. Great read. I’m still refining my git workflow, but so far have been limiting version control to just the theme folder. Definitely going to give your htaccess trick a try. My only concern with including other directories (namely /plugins/) is that some plugins will make changes to the database during updates, so I’m not sure if that could cause problems if the databases aren’t in sync. Syncing the DB seems to be the biggest unresolved issue with WP + git. I’ve read that Capistrano may help but haven’t really dived into that yet.

    • Agreed, there’s definitely a void in the whole multi-environment WordPress database realm. Could be a good plugin opportunity.

  2. Thanks for posting!

    I like what you are doing. I’d add not to include your database backups if you are using any of the DB backup tools.


    Also, I’d add windows users leave these files: [Tt]humbs.db

    • That’s an excellent point. I messed up on the capitalization on “Thumbs.db” in the original post (“Thumbs.DB”), so I’ve gone ahead and fixed that and added wp-content/backup/* to my defaults. I also took your advice on the square brackets for [Tt]humbs.db, though I don’t think I’ve encountered any lower-case “T”s in the wild (then again I don’t work with many developers who use PCs, so my exposure is limited).

  3. Excellent post covering the same setup/work flow I also use. Good job!

  4. I read about ignoring the uploads directory, but it didn’t tell ‘why’! But thanks to you, now I know … for the reasons you cited.

    And I like your proposed solution of utilizing a .htaccess inside uploads directory to delegate file requests to the live website. Brilliant solution!

    • Thanks, Sawant! The Htaccess trick was the result of way too many development/staging environments falling out of sync with production – so far I haven’t run into any issues with it.

  5. Great idea really, thanks for the excellent instructions. I made the switch to git a couple of months ago but wasn’t really happy with having to sync the uploaded media data. Your htaccess workaround is simple, easy and just brilliant. Saves me a lot of bandwith and time, thanks!

  6. The .htaccess in the content uploads folder is an interesting trick.

    I think the proposed article, overall, isn’t practical, however.

    If you aren’t including plugins into the Repo, what happens if the client has installed a plugin on the production server which directly affects the (visual) outcome of the theme in development? Or, a user has changed a password? Is it expected that the production site would shut down or freeze until after local development is done?

    • I think you may have misinterpreted the article. I do keep plugins (and core) in the repository (some other articles I’ve read in the past have advocated against this to keep the repository limited to custom code) but I’ve found in practice it’s useful to keep the plugins in the repo.

  7. Bredon

    Great article, Steve.

    Been struggling with version control on WP and the .htaccess trick is a life saver. Thanks.

  8. Thanks for sharing your approach. How do you keep your local dev environment (and staging or test environments) aware of plugins being installed/uninstalled on the server?

    • That’s a great question! Historically the work that I’ve done has been as part of an agency where we’re generally in charge of general maintenance/updates for the site. Of course we want the client to be able to modify their site as they see fit but once they start getting in and changing any of the source it’s generally at their own risk.

      Depending on the project, client, and any SLAs you have in place it might make sense to have an SSH key on your production server that’s able to push to the repository. You could then merge client changes into the repo (as a committer identified as the client, lest you get git blamed) when you notice modified or untracked files on production. Alternately you could generate patches from the modified files and apply those patches to your own local repo – this would prevent production from having push access but would be a little more work on your end.

      Do you have any thoughts on how best to approach it? The conversation’s been coming up at work lately so I’ll probably have to come up with something clever sooner rather than later (I’ll be sure to write a follow-up once I do).

      • I’m mainly asking because I don’t have much devOps experience and I’m looking into a strategy for moving a WP website from FTP free-for-all to local/staging/production environments with version control (MAMP Pro for local dev, a GitHub repo, and deployments to staging and production via

        Pushing to the repo from the production server hadn’t occurred to me. It sounds workable, but I’d be interested to hear from others if there are any concerns with this approach.

        What I’m thinking I might do is write a bash script (or scripts) for the local environment that will copy the prod db to the current environment and sync the files in the uploads and plugins directories from prod to the dev environment (probably with rsync). Again, with little devOps experience, this may not be the best approach, but I think it will work for us. Whether or not to keep the uploads and plugins directories in git with this approach is probably just a preference, but I think I will opt to keep them in git. I’d love to get feedback on this.

        One more thing… Your .htaccess trick for getting files from production if they’re not available in the dev environment is very clever. But I see 2 possible drawbacks. First, this could throw off your stats/bandwidth for the production site (probably not by much, but could be a concern on some – but probably not many – sites). Second, what happens when developing locally, but *gasp* not connected to the internet?

        • That’s a good point – pulling production assets into development could throw off statistics if you’re using a tool like Webalizer that parses access logs (though I’d imagine stats would be more concerned with page loads rather than individual media assets). Loading production media shouldn’t impact tools like Google Analytics as it won’t trigger the tracking script. If you’re really concerned with access stats for each of the media files it might make sense to rsync your uploaded assets to a second directory that isn’t tracked and use that single location for dev/testing environments to pull from.

          I have yet to find a really good solution for keeping databases synchronized across environments – a bash script is rather rudimentary but could certainly be effective for a one-way (i.e. pull the current production database into test) transfer. When you start getting into synchronizing and merging databases you’re opening a can of worms. If you do find a good cross-environment database solution please be sure to share it!

          • Excellent point about the Google Analytics script not being triggered.

            I ended up getting an account on DeployHQ and setting up a webhook in GitHub that notifies deployhq any time we push to our master or dev branch. When we push to the dev branch, deployhq takes care of getting the files in that branch to our staging server (which is a subdomain of the production site and on the same linux server). With deployhq, you can set up SSH commands to run before and/or after the main push.

            Since we don’t have the uploads or plugins directories in version control, we have deployhq run a few commands after getting all the files from the GitHub repo.

            We have it rsync the production uploads and plugins directories to the staging site like so:

            rsync -rt /var/www/vhosts/ /var/www/vhosts/ && rsync -rt /var/www/vhosts/ /var/www/vhosts/

            And then dump the production db and import it into the staging db:

            mkdir /var/www/vhosts/ && mysqldump -u PROD_DB_USERNAME -pPROD_DB_PASSWORD prod_db_name > /var/www/vhosts/ && mysql -u STAGING_DB_USERNAME -pSTAGING_DB_PASSWORD staging_db_name < /var/www/vhosts/ && rm /var/www/vhosts/ -rf

            Probably not the most elegant solution, but it works for us.

            To get our local environments in sync with production, we run some similar scripts. Since we have some designers that aren't very comfortable with the command line, the scripts are executed via PHP's shell_exec() when you visit /sync.php (only in the local dev environment of course).

            Once I do myself the favor of getting my own site off of WordPress and on to Middleman, I'll write a post covering this in a little more detail.

  9. Hey Steve,

    Just wanted to thank you for this .htaccess snippet:
    RewriteRule (.*)$1

    I have this pointing to my S3 bucket mirror.

    Local and production share the same database, so when the client uploads things, I am not getting broken images on my local end.

    Totally amazing!


  10. I just started using WP Migrate DB Pro – works pretty well for pulling and pushing the DB between environments. The only thing is it doesn’t really handle situations where both the staging and production environments databases change well, i.e. it’s not merging the DBs, it just drops the one and replaces it with the other, but does do the search and replace so that it works when it gets there.

    It works fine for me because I’m basically just managing my own site by myself, but could be problematic if there are DB changes on both sides, as one of the changes will have to get overwritten.

    – James

    • Thanks for sharing your experience – I’ve been meaning to look into WP Migrate DB Pro for a few weeks but haven’t found the time. I suppose it makes sense that they’re not just straight merging the databases (it could be difficult to determine what content should be treated as production-ready) but that means a “stage content in one environment and push to another” workflow is still less than elegant.

  11. Hi Steve, thanks for a great article. I’m new to this and am just about to make the jump and keep my whole site under version control. My question is how do you manage plugin updates? Do you update plugins in a batch or 1 by 1? How do you commit these changes?

    • Hi David,

      I typically do plugin updates one at a time. It can be a bit cumbersome if you’re updating a ton of plugins at once but it’s really handy to be able to quickly revert a single plugin upgrade in those rare instances that a new version of a plugin totally breaks your site.

      My process is usually something like this:

      1. Find the plugin in the list, read the release notes to catch any incompatibility issues
      2. Upgrade the plugin on staging, test the site to make sure nothing was broken in the upgrade
      3. git add -A wp-content/plugins/{plugin} && git commit -m "Upgraded {plugin name} to version x.x.x"

      When I have large numbers of plugins in need of update (typically inherited sites) I’ll occasionally batch the plugin updates into logical groups (inactive plugins either get removed or updated in one commit, Gravity Forms and related plugins could be one commit, etc.) but I try to do separate commits whenever practical.

      • Hi Steve, appreciate you taking time to reply. Ok, so from your post and others I’ve read it seems that the main benefit of having the entire site under version control is so that we can easily roll back plugin updates (rolling back entire WP versions is possible but very unlikely). And I guess if you are in a team it is very useful in case somebody deletes some assets off the server. Are there any benefits that I’m missing?

        Also, switching to this workflow would mean that I think of my git repo as the canonical version of the site. The live site has to wait for the git repo to be updated before it gets updated too.

        Probably missing something very obvious here. Appreciate your patience to explain. And happy July 4!

          • Ease of deployment is the biggest benefit. When I inherit a site that’s set up in the way I’ve described in this post, firing up a new instance is as simple as a git clone, copying wp-config-sample.php to wp-config.php and adding the relevant settings, and importing a SQL dump into my local database (editing ‘siteurl’ and ‘home’ in wp_options to match my environment). Not counting database import/export time, the instance takes all of two minutes to set up.

          I don’t think it’s bad to think of your repo as the canonical version of the site; I treat the repo as the canonical codebase and production as the canonical database. Even if your site is a single-developer, deploy-straight-from-local-to-production type of setup (like this one), it’s beneficial to have the repository in place and up to date. Make the changes locally, commit them, then SSH into production and do a git pull. If you introduce a dedicated staging server, you pull and test there first. If your site gets popular, you’re able to scale horizontally by deploying new copies of the site across a fleet of servers. Regardless of the number of developers or servers, a process like this eliminates the need for FTP, prevents version mismatches between environments, and, if you use a tool like Capistrano for deployments, can even avoid the need to run through the whole SSH / cd into directory / git pull process.
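That local-commit, push, then pull-on-production flow can be sketched end to end with a local bare repository standing in for the hosted remote (all paths, file names, and the identity below are hypothetical):

```shell
# Simulate local -> remote -> production with three directories.
set -e
work=$(mktemp -d)

# "Local" development repository.
mkdir "$work/local" && cd "$work/local"
git init -q
git checkout -qb main
git config user.email "dev@example.com"  # placeholder identity
git config user.name "Dev"
echo "<?php // theme tweak" > functions.php
git add functions.php
git commit -qm "Update theme"

# A bare repository stands in for the hosted remote (GitHub, Bitbucket, ...).
git init -q --bare "$work/remote.git"
git -C "$work/remote.git" symbolic-ref HEAD refs/heads/main
git remote add origin "$work/remote.git"
git push -q origin main

# "Production" clones (and later pulls) the same history from the remote.
git clone -q "$work/remote.git" "$work/production"
cd "$work/production"
git log --oneline
```

On a real server the last step would be a one-time `git clone` followed by `git pull` for each subsequent deployment.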

          Perhaps it would be helpful to take a look at the source for this site on Github. It’s not perfect as a WordPress site (I need to abstract some of the theme functionality into a plugin sometime soon) but you can see what lives in and, perhaps more importantly, what stays out of the repository. Core, plugin, and theme updates are all made on a local copy of the site and pushed to Github. When I’m ready to deploy to production I simply SSH into the server and do a git pull.

          Does that answer your questions?

          • Hi Steve, yes that answered all my questions. Thank you so much for sharing your knowledge and your site on Github too. I really appreciate your help, please let me know if I can return the favor!

  12. Thanks for sharing your approach. I have a similar approach as well, though I don’t store the WordPress core in Git.

    It’s interesting to combine the knowledge from your post with what’s been contributed to GitHub’s gitignore project:

  13. Hi Steve, firstly great article! I actually develop custom WordPress with git in team environment. But we always have problem when work in team with source control WordPress. How about site settings and environment in database? How do you manage this in team with source control?

    Thanks :)

  14. Alfred

    Hey Steve. Great post. Just wondering about using .gitignore on an already-pushed site. And when making changes now it wants to move everything it shouldn’t. How can I implement this on an already-live server with git set up? I’m a git noob and trying to solve this problem!! Thanks a lot !!

    • Ah, this is always a fun one. If you haven’t started using Git on the site at all yet I’d recommend the following:
      $ cd (your-local-copy)

      # Initialize the new Git repo
      $ git init

      # Put your gitignore file in place (sub ‘vi’ for your favorite editor)
      $ vi .gitignore # …actually write the file, of course

      # Add the site to the repo
      $ git add . # You may also want to do it more selectively to ensure you’re not putting anything in that doesn’t belong

      # Commit your site, push to remote
      $ git commit -m "Keeping WordPress under [version] control with git; initial commit"
      $ git push

      # SSH into your remote server and clone the repository
      # I find it easiest to create my git clone right next to the existing web root, then swap them out.
      # Alternately you could change the document root to point to your new directory.
      # In this example my existing document root is /var/www/httpdocs
      $ ssh
      $ cd /var/www
      $ git clone git@github.com:username/my-repository-name.git ./httpdocs-git

      # We now have two directories in /var/www – httpdocs/ and httpdocs-git/
      # Copy assets from httpdocs/ to httpdocs-git/
      $ cp httpdocs/wp-config.php httpdocs-git/
      $ cp -r httpdocs/wp-content/uploads httpdocs-git/wp-content/uploads # If this directory is too large you may actually want to move it when we switch the directories in a minute
      # …etc.

      # After setting permissions and ensuring that everything we need is in place, we’ll flip the switch
      # This could also be done by changing your document root within your web server configuration
      $ mv httpdocs httpdocs-old && mv httpdocs-git httpdocs

      If all went well your site should look and function the same but you’ll be able to make updates by SSH-ing into the remote server and doing a git pull in your /var/www/httpdocs directory.

      I really need to get around to writing a WordPress-Capistrano article – I’ve been using that at work and at home and it’s really made (re-)deployments easier than ever.

      • Thanks so much for the reply. I WILL be giving this a try as I’m moving a bunch of sites from FTP school server to my git. Thanks again!!

  15. Zac

    Great tutorial!

    One thing – what does this mean?
    # Include these files in previously blocked directories

    Apparently it makes GIT say that there are untracked files on “wp-content/uploads/” even though I added wp-content/uploads to .gitignore file.

    I already removed cached content from that folder (because I guess I added that folder to .gitignore after commiting it):
    $ git rm --cached wp-content/uploads/ -r

    How to deal with that?

    • The bang (exclamation mark) before a path in .gitignore says “actually, never mind, you can include this one”. There can be any number of files in wp-content/uploads that aren’t tracked in your repository, but my .gitignore structure basically says “ignore everything in there except the .htaccess”.

      If you’re using a tool like Capistrano (or just don’t want to mess with that Htaccess file), you could do something like this in your main Htaccess file and/or a virtual host:

      <IfModule mod_rewrite.c>
      RewriteEngine on
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteRule wp-content/uploads/(.*) http://{PRODUCTION_URL}/wp-content/uploads/$1 [NC,L]
      </IfModule>

      • Jon

        Great post… thanks for the lesson!

        But still, why are you telling git to un-ignore the .htaccess file? If you’re not ignoring it, then the .htaccess file is moved into the production environment, isn’t it? Wouldn’t that create some kind of loop on the production server if a file isn’t found there?

        Maybe I’m missing something…. please enlighten this lost soul.

        Thanks again.


        • The Htaccess file being un-ignored is the one in wp-content/uploads that contains the rewrite rule that proxies assets from production. Admittedly, I’ve been putting this rewrite rule in the site’s main Htaccess file (or, better yet, the virtual host configuration file for Apache or Nginx) as of late :)

  16. Christopher Brown

    Great post Steve and I have a similar workflow. The htaccess trick is great but does not appear to work with timthumb. Any ideas?

  17. Sean Hudson

    Good introduction to a difficult topic! I have to point out that you mention keeping the wp-config.php file out of repos, but I can’t fathom why other developers would commit passwords to a public forum? For context, there is no mention here of creating a (paid) private repo, which some green developers might create and not understand the importance of leaving passwords out of the public repo! Am I off base here?

    • Public or private, it’s usually best to avoid committing sensitive information to any repository. Your wp-config.php file also contains environment-specific settings (for instance, I disable post revisions on development and staging but sometimes keep them enabled once it’s in the client’s hands on production). I find that it’s best to keep wp-config.php out entirely; it’s easy to set up but equally easy to mess up, especially when you’re moving between environments. Does that help clarify my reasoning?

  18. This is good stuff. That uploads folder .htaccess trick is gold!

    I set up WP similarly with a few differences.
    1) I set up WP as a git submodule so it’s not actually in my repo and I can easily roll back if an update blows something up
    2) I include wp-config.php but it points at a local-config.php which has adjustments for each environment (and is ignored by git)
    3) I put uploads in /media so the URLs are shorter.

    You can see my basic setup here:

    Also: wp-cli and a tiny bit of bash scripting can go an awful long way to providing solutions to many of the questions within the comments on this post. Syncing databases is 2 or 3 lines (ssh in, wp db export, scp the export down, wp db import!)

  19. Thank you very much for this article! Even now, two years later after it was written, I found it still one of the most useful resources in starting with git for me.

    I would appreciate some more information about your git workflow with updates and custom code. I was looking at your repository but can see only one branch. What are you doing to make sure you don’t lose any custom code if you update core or a plugin?

    • Hey Konrad, thanks for the kind words!

      You’re correct, I’m able to run off a single branch because I make a point to not edit plugins or core directly (making it a non-issue). Instead, if I really need to edit a plugin’s functionality, I look for hooks or filters (documented or not) to try to inject my changes without touching the plugins’ code (plugins like Gravity Forms and WooCommerce are great at both providing and documenting these hooks and filters), which you can see if you look at grunwell_format_tweet(). Sometimes you can also get by calling public methods of the plugin directly (I typically write a quick function in my theme that includes a function_exists() check before calling it, just in case that plugin or function ever goes away).

      We’re also fortunate to be working with a platform with such a strong community around it – if the plugin I’ve selected doesn’t give me hooks, filters, or other ways to manipulate the code in the ways I need to, it’s usually pretty easy to find another plugin that does. If one doesn’t exist, get involved with the plugins that are out there to make them more developer friendly, or write your own plugin and release it.

      Sorry, that kind of skirted the question, but I’ve only edited plugin files directly twice in the last few years, and both were bugs in the then-current releases of the plugins; both of those edits ended up getting submitted as pull requests to the plugin authors.

  20. r2evans


    Great article, thanks for posting it.

    Do you have any experience where you want or need to make the production server the master repo and clone locally?

    I’m intending to keep my local installation in a NAT’ed network. I’d prefer to *not* open a port-forward in the firewall just so that the production server can do a pull (even if it’s a temporary hole), so it seems I have two options:

    (1) Make the production server the master and clone to the dev system. With this, I only “push” (over ssh) to prod when ready.

    (2) Using ssh’s remote port forwarding, ala ssh -R 9191:localhost:443 prod. Not having tested it yet, I’m assuming this means the initial setup would be something like git clone https://localhost:9191/path/something.git so that subsequent pulls (with proper remote port forwarding) would work.

    The simplicity of #1 appeals to me, but I’m wondering if there are other gotchas with regards to reverse the intended roles in practice (of master vs. clone).

    A slightly different question: you said that you maintain your site on a single branch. Have you seen situations where using git branches would have simplified or solved specific problems?

    I’m fluent enough in git to hurt myself but not enough to recognize and mitigate unintended consequences.

    BTW: have you been following VersionPress? It looks promising.

    • A simple solution (unless I’m missing something huge) would be to host the master repository on a service like GitHub or Bitbucket, not your local machine (I suppose I didn’t explicitly call that out in this post, but it’s where all my repos live). Develop locally, push (via SSH) to GitHub, then production is pulling over SSH (be sure to set up your production server’s public key in your repository settings as a “Deploy Key”, meaning it can pull but can’t push). Hosting the “canonical” copy of the repository locally makes collaboration difficult and hosting it on the production server makes scaling difficult (plus, if anything happens to that server it’s bad news bears).

      As far as branching goes, any major development gets built in a feature branch, but then merged back with master for releases. I’ve worked with teams who opted to have separate “staging” and “production” branches, but that introduces far more merging, cherry-picking, rebasing, and other fun Git management that I don’t find myself needing on most smaller projects.

      Does that help?

      • r2evans

        It does, thanks. I’m generally reluctant to host company-related stuff on GitHub (or similar), but I suspect my rationale is based on paranoia. However, as long as I don’t accidentally commit wp-config.php, there shouldn’t be any other files with passwords or other personal information where doing this would be problematic.

        I hadn’t considered using a public repository, to be honest, always assuming that I’d unintentionally publicize some form of authentication or personal information. It’s definitely something to consider, conditioned on your suggestion to ignore wp-config.php.

        As far as collaboration, the good news is that the NAT’ed network is available via VPN, so anybody who needs access to website development (a very small pool) also has VPN access, so that’s not an issue. You can argue that I’ve already punched through the firewall for the VPN, “but that’s completely different.” (Alright, better stated: I don’t want to punch *another* hole in the FW. Sigh.)

        One more question: in your slides, you mention defining WP_SITEURL and WP_HOME in wp-config.php, but you don’t mention them in this blog post, nor am I able to hear your speaker’s notes. I think this has something to do with syncing the SQL database as well, not just the Git repo. Am I close?

        • Remember, there are “public” repos, then there are public (e.g. “hey everyone, look at my code!”) repos. Either way you won’t want sensitive info (wp-config.php or any other config-type files your site may use) in the repository, but private repos on GitHub and/or Bitbucket should (in theory) only be visible to people on your team. If you’re really paranoid you can run something like GitLab to get the benefits of source control hosting but on a server you control (my old agency had an installation of Gitorious for this reason).
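
          A minimal .gitignore along these lines keeps the sensitive stuff out of the repo no matter where it’s hosted (just a sketch; adjust the paths for your own setup):

          ```
          # Environment-specific credentials — never commit these
          wp-config.php

          # Media uploads are content, not code
          wp-content/uploads/
          ```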

          As for WP_SITEURL and WP_HOME, setting these constants in your wp-config.php prevents you from having to do the whole “import the database, update ‘siteurl’ and ‘home’ in wp_options” game, which gets old really quickly. I didn’t learn about them until after I published the blog post (close to two years ago), but they’ve been included the last few times I’ve given the talk.
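
          For reference, the constants just get hard-coded in each environment’s wp-config.php (the URLs here are placeholders):

          ```php
          <?php
          // In wp-config.php: pin the URLs per environment so the 'siteurl' and
          // 'home' rows in wp_options no longer need updating after a DB import.
          define( 'WP_HOME',    'http://local.example.test' ); // front-end URL
          define( 'WP_SITEURL', 'http://local.example.test' ); // WordPress core URL
          ```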

          • r2evans

            I have followed your blog and it’s been quite helpful! At least temporarily, I’ve set up the prod server as the master and my dev the clone. I’m using the wp-config.php definition for WP_SITEURL and WP_HOME.

            On the prod server, SITEURL and HOME are not the same; we’re using a rewrite rule so that the WP installation is in a subdir but appears as the root. On the dev host, I do not need this masquerading, and in fact it might be useful to have multiple subdirs for testing.

            I think this difference is causing a problem, though, since none of the other links are working (either directly from menus or via blog permalinks using, e.g., the “day and name” format).

            What do I need to understand about how redirection is done to get these permalinks to work?

          • r2evans

            Quick workaround for the permalink problem: setting the permalink structure to “default” instead of “day and name”. Sorry for the spam, should have checked that before asking.

            BTW: for anyone thinking of repeating what I’ve done so far, one of the hurdles I hadn’t foreseen: by making the prod server the “master” in the git hierarchy, pushing to it is problematic, since git will not push to a checked-out branch of a non-bare repo. Huh, the things you learn by trying and failing.

            I am now trying hard to get the prod server to be a clone of a firewalled master repo; problematic since GoDaddy doesn’t have git (I copied a static binary), doesn’t have an ssh binary (though I can ssh *in*), and does not allow any tcp port forwarding (it’s disallowed in sshd_config). I may have to resort to your “much easier” solution of keeping my website in a public (e.g., GitHub) repo! Resistance is futile …

            One solution (I’m *still* resisting the public repo *grin*) is to set up a bare repo outside of the html directories and clone it inside the html directories as well as on my firewalled dev system. So much for “simple setup.”
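
            In commands, that setup looks roughly like this (paths are illustrative, and it runs in a scratch directory here rather than on the real server):

            ```shell
            # Rough sketch of the "bare hub repo" layout (paths are illustrative;
            # this runs in a scratch directory instead of the real server).
            SITE=$(mktemp -d)

            # On the production host: a bare repository outside the web root.
            git init --bare "$SITE/repos/site.git"

            # Clone it into the web root; this checkout is what the server serves.
            git clone "$SITE/repos/site.git" "$SITE/public_html"

            # The firewalled dev box clones the same hub over SSH, e.g.:
            #   git clone ssh://user@example.com/~/repos/site.git
            # Deploys are then: push from dev to the hub, and on production
            #   cd ~/public_html && git pull origin master
            ```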

  21. Hi,

    Thanks a lot for this post. I hope this helps someone else: I tried the .htaccess fix on a server but it wasn’t working at first, so my hosting company’s support helped me out.

    I had to change the config to this:


    RewriteEngine On
    RewriteBase /
    RewriteRule ^index\.php$ - [L]
    RewriteRule ^login/?$ /wp-login.php [QSA,L]
    RewriteRule ^register/?$ /wp-login.php [QSA,L]
    RewriteRule ^logout/?$ /wp-login.php [QSA,L]

    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule ^wp-content/uploads/(.*)$ http://production-site.example/wp-content/uploads/$1 [NC,L]

    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule . /index.php [L]


    This is a great little trick. We have about 280,000 files and 45GB worth of images in that folder!

    • RewriteEngine On
      RewriteBase /

      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^wp-content/uploads/(.*)$ http://production-site.example/wp-content/uploads/$1 [NC,L]

      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /index.php [L]

      This code saved me a bunch! Thanks!

  22. Matthias

    Just here to leave a big thank you for this article. Love the redirect trick with the uploads folder!

  23. Matthias

    And to add something productive to this, one might add

    RewriteCond %{REQUEST_URI} ^/wp-content/uploads/

    to the uploads redirect rules, so it doesn’t blindly redirect every request that turns out to be a 404.
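
    Put together, the uploads block would then look something like this (the production URL is a placeholder):

    ```apache
    # Only rewrite requests that were actually for the uploads folder...
    RewriteCond %{REQUEST_URI} ^/wp-content/uploads/
    # ...and that don't exist on the local filesystem.
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule ^wp-content/uploads/(.*)$ http://production-site.example/wp-content/uploads/$1 [NC,L]
    ```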

  24. Frank

    This article was really helpful, but I’d like to know your latest thoughts about “version control != backup”. If one did not want to use Dropbox, what else would you recommend?

  25. andrew slack

    Hi Steve, great post.

    We follow this practice at the moment, but we’re struggling with plugins: I want site admins to be able to update plugins on production (once tested on dev, of course), but this would make our GitHub repo outdated.

    Do you have any good resources/examples on how I can push any changed files from the /plugins/ folder back into our GitHub master repo?

    This would be fab for monitoring changes (and rollback) plus we could possibly track who updated what.

    Maybe a WP plugin which monitors file changes locally and commits them to a linked GitHub account.
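
    Something like this cron script might work as a stopgap (the path, remote, and branch names are guesses on my part):

    ```shell
    #!/bin/sh
    # Hypothetical cron script for the production server: commit plugin
    # files changed through the WP admin and push them back upstream.
    SITE_DIR="${SITE_DIR:-/var/www/site}"
    cd "$SITE_DIR" || exit 0   # nothing to do if the site isn't here

    # Stage anything that changed under the plugins directory.
    git add --all wp-content/plugins

    # Commit and push only when something actually changed.
    if ! git diff --cached --quiet; then
        git commit -m "Plugin updates made on production ($(date +%F))"
        git push origin master
    fi
    ```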

  26. Awesome post. Very helpful for synchronising a local host with a development server.


  27. Mackan

    If you want to serve media files from the production server but you are using nginx instead of Apache on your local server, the same trick still works.

    Instead of editing a .htaccess file, you will have to add the equivalent rules to your nginx *.conf file.
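
    A rough nginx equivalent might look like this (the production host name is a placeholder):

    ```nginx
    # Serve uploads locally when the file exists; otherwise send the
    # request to the production server.
    location /wp-content/uploads/ {
        try_files $uri @production_uploads;
    }

    location @production_uploads {
        return 302 http://production-site.example$request_uri;
    }
    ```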

  28. Hey! Did you find a good way of dealing with databases?

Comments are closed.

Be excellent to each other.