I know, this third part of the series on WordPress and OpsWorks comes a little bit late.
Between this article and the previous one of the series I wrote a lot of other posts spanning many different topics.
But now I’m here to pick up right where I left off 😉
Before we begin, here are the first and the second articles of the series.
Now, just to recap: last time I tried to explain a quirk involving the compile and converge phases of a Chef run.
To illustrate it, I relied on a recipe I wrote to “set up” the database during the deploy of a WP (WordPress) application.
Anyone accustomed to WP development knows that, due to the nature of WP itself, it is very often difficult to keep everything updated and working as expected.
Many functionalities depend not only on the code but also on the state of the database, where most of the configuration is stored. The problem with WP is that there is no recognized, out-of-the-box standard for handling the aforementioned database state.
No Rails-like migrations, sorry.
A simple (but bad) way to solve this problem is to push a dump of the database into the project repository together with the code. This way its state can be tracked and, above all, restored (imported) whenever needed.
Obviously, this solution doesn’t fit cases where there is sensitive information that can’t simply be shared among all the project contributors.
Anyway, assuming this is not the case, if you plan to deploy a WP project through OpsWorks you may end up needing to automatically import a dump during a deploy.
This is exactly the purpose of the recipe used as an example in the last article of this series.
But hey, as Linus Torvalds says, “Talk is cheap. Show me the code”. So, here it is:
script 'load_database' do
  # The guard doubles as a logger: Chef::Log.info returns a truthy value,
  # so the message is logged only when the dump file actually exists and
  # the resource is about to run (this ties into the compile/converge
  # quirk covered in the previous article).
  only_if { File.file?(db_path) and Chef::Log.info('Load WordPress database...') }
  interpreter 'bash'
  user 'root'
  code <<-MIKA
    mysql -h #{deploy['database']['host']} -u #{deploy['database']['username']} #{deploy['database']['password'].blank? ? '' : "-p#{deploy['database']['password']}"} #{deploy['database']['database']} < #{db_path};
  MIKA
end
What I do here is simply rely on the script resource to invoke the mysql command-line interface (CLI) and tell it to import the dump located at the path stored in the db_path variable into the proper database.
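Just to make the interpolation concrete: with made-up values standing in for the host, user, password, database name and db_path (all hypothetical, for illustration only), the generated command would look something like this:

mysql -h localhost -u wp_user -pwp_secret wp_production < /srv/www/wp/dump.sql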
This is done by relying on the bash interpreter and by using the info OpsWorks gives us through a JSON object (one per deployed application) embedded inside the deploy attribute of the JSON associated with the deploy event:
{
  "deploy" : {
    "app1" : {
      "database" : {
        "database" : "...",
        "host" : "...",
        "username" : "...",
        "password" : "..."
      }
    },
    "app2" : {
      ...
    },
    ...
  }
}
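As a side note, a recipe typically reaches these values by iterating over node[:deploy], which is keyed by application short name. Here’s a minimal sketch of that pattern (the log message is mine; the structure is the standard OpsWorks one):

node[:deploy].each do |application, deploy|
  # Each per-app hash carries the configuration shown above,
  # e.g. deploy['database']['host'], deploy['database']['username'], etc.
  Chef::Log.info("Found deploy configuration for application #{application}")
end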
The aforementioned info is picked up by OpsWorks (and, under the hood, Chef) right from the app configuration that can be accessed through the OpsWorks dashboard.
The complete list of the deploy attributes can be found right here.
As a side note, and as already stated in the previous article, the database import gets triggered only if the dump is actually present and if a proper flag (i.e. “import_database”) is set inside the aforementioned JSON.
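That flag check isn’t visible in the snippet above, but a guard combining the two conditions might look something like this (where exactly import_database lives inside the custom JSON is an assumption on my part):

# Hypothetical guard: run the import only when the flag is set
# and the dump file actually exists.
only_if do
  deploy['import_database'] && File.file?(db_path)
end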
Next time I will talk about…well…I don’t know! There are really many things to say about OpsWorks (and WP), so just stay tuned! 😉
Cheers!