One of the biggest challenges a digital development firm faces is getting a developer working on a new project quickly and securely. Aside from bringing the developer up to speed on the specifics of the project, a large amount of time can be spent setting up an environment to work in (one that, ideally, closely mirrors the production environment the site is or will be hosted on). You also need to make sure that the developer is not inadvertently sending emails to users, triggering cron jobs that could charge people’s credit cards, interacting with third-party systems to update accounts, and so on from their development environment.
Over the years, many developers have tackled this problem with varying success. Initially, local development environments were built on Mac using MAMP (Mac, Apache, MySQL, PHP) and on Windows using WAMP (Windows, Apache, MySQL, PHP). These were relatively easy for a developer to spin up, but they were very different from production. Furthermore, Macs and Windows PCs come in many versions that are constantly changing and run on different hardware, so even two developers running MAMP environments can have very different setups. There were also complexities in running multiple projects that required different versions of MySQL, Apache, or PHP. All of this is a big challenge for management and IT to handle, and it results in a bunch of “unicorn” development environments that are hard to maintain and debug.
The next evolution of local development came about with the introduction of virtual machines. Virtual machines are provisioned on your local computer using software like VirtualBox and can run a different operating system than the host computer. This means that you can run a Linux “machine” on a Mac or Windows computer, or even a Windows “machine” on a Linux box. The advantage here is that your local environment can run the same operating system as the production environment, and even the same programs and extensions. The disadvantage was that virtual machines were difficult to fully provision, took up a lot of space on your computer, and interacting with their files was sometimes slow.
To help overcome these issues, Vagrant was developed. Vagrant allows a user to specify not only a machine and its operating system but also the various programs and tools to be installed on a virtual machine. It can also configure things like PHP and Apache from a single configuration file, which means that IT can track those files in a repository and get all developers working off the same environment for each project. This drastically sped up the time needed to get a developer started on a project, but the same issues remained: the virtual machine still took up a lot of hard drive space, and interacting with its files was still sometimes difficult.
The next major evolution in development operations has been spurred on by Docker containers. The details of what Docker is and how it operates are a bit complex. A good way to understand Docker is to think of each container as its own virtual machine, but without a full copy of the underlying operating system. Multiple Docker containers run together on one Docker host, and all of the containers share the host’s operating system kernel. The parts of each container that differ are the specific programs and configurations required for that container to work. For example, a PHP container would have only the PHP code and extensions and none of the basic operating system files. An Apache container similarly contains only Apache and nothing else. This means that containers are much smaller than virtual machines and are usually created to do one specific task as leanly as possible. Also, a configured container can be saved as an image, which can then be stored and distributed to teams so that all team members are using the exact same version and configuration of things like PHP, MySQL, and Apache.
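As a small illustration of that single-purpose design, the commands below run an official PHP image and then save a configured container as a shareable image. The image names and registry URL are examples, not a prescribed setup, and assume Docker is installed:

```
# Run a PHP container: only PHP and its extensions are in the
# image, not a full operating system. --rm removes it on exit.
docker run --rm php:8.2-cli php -v

# Save a configured container as an image and share it with the team
# (container name, registry, and tag are hypothetical).
docker commit my-php-container registry.example.com/team/php:custom
docker push registry.example.com/team/php:custom
```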
While Docker containers are very powerful, lightweight, and portable, configuring and managing multiple containers on a single computer can be complicated. To help combat this problem, an open source project called Docksal was started in the Drupal community. The Docksal project allows developers to take advantage of Docker containers in a simple way. A single configuration file can specify the exact versions of PHP, MySQL, and Apache required for a project to work. In addition, adding new services like Memcache, Apache Solr, MailHog, and more is easy to do. All of this is controlled via a command-line tool called fin, which wraps Docker commands and also supports custom project commands.
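To give a feel for what that configuration looks like, here is a minimal sketch of a Docksal environment file; the image tags are illustrative, not pinned recommendations:

```
# .docksal/docksal.env -- a minimal sketch
DOCKSAL_STACK=default

# Pin the exact service versions this project needs:
CLI_IMAGE='docksal/cli:php7.4'
DB_IMAGE='docksal/db:mysql-5.7'
```

Extra services such as Solr or MailHog are added in a similar declarative fashion, so the whole environment definition lives in the repository alongside the code.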
So now that we can easily provision the services required for a project and track that configuration in a few files, the next step is to integrate repository code and database backups into the development process. Each of our repositories is arranged so that we can track the development configuration files (Docksal files), Drupal files (a combination of a drupal.make or composer.json file and custom code), deployment files (GitLab CI runner files), as well as testing files, READMEs, and more.
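Laid out on disk, a standardized repository along these lines might look like the following (the names are illustrative, not our exact layout):

```
project/
├── .docksal/            # Docksal environment config and custom commands
│   ├── docksal.env
│   └── commands/init
├── .gitlab-ci.yml       # GitLab CI deployment pipeline
├── composer.json        # Drupal core and contributed modules
├── custom/              # custom modules and themes
├── tests/
└── README.md
```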
The most important file in each project is the “init” command, which is what developers use to get started on a project. In our case, the init command downloads all the Drupal core and contributed files necessary for the project to run. In the case of maintenance work, it will also download a recent database backup from the production site. It then runs some post-initialization tasks, which might include things like switching credit card processing to TEST mode, enabling the stage_file_proxy module to download production assets on demand, changing admin logins to use a common password, and much more. The developer now only has to do two steps to start working on a project: clone the repository and run its init command.
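In Docksal, a project's init command is a plain bash script stored at .docksal/commands/init and invoked with fin init. A skeletal version of the flow described above might look like this; the backup path, password, and exact Drush invocations are assumptions, not our actual script:

```
#!/usr/bin/env bash
# .docksal/commands/init -- runs via "fin init"
set -e

# Build the codebase: Drupal core and contributed modules via Composer.
fin exec composer install

# For maintenance projects: load a recent production database backup
# (the backup location here is hypothetical).
fin db import backups/latest.sql.gz

# Post-init tasks: keep the local site from touching live systems.
fin drush en stage_file_proxy -y          # fetch prod assets on demand
fin drush upwd admin --password="admin"   # common local admin password
```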
After this is done, the developer has a fully functioning Drupal site on their local computer, ready to start committing code to. Merges to the staging branch on each project auto-deploy to test environments for QA teams and clients to see. Commits to the master branch are handled through merge requests, which are approved by lead developers and manually deployed to production when ready. By standardizing all project repositories and using Docksal, we have drastically sped up the time it takes to get a developer working on a new project and can focus on what we do best: writing amazing code!
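That branch-based deployment flow can be sketched in a minimal .gitlab-ci.yml; the stage name, deploy script, and job names are assumptions for illustration, not our actual pipeline:

```
stages:
  - deploy

deploy_staging:
  stage: deploy
  script: ./scripts/deploy.sh staging   # hypothetical deploy script
  only:
    - staging                           # merges to staging auto-deploy

deploy_production:
  stage: deploy
  script: ./scripts/deploy.sh production
  only:
    - master
  when: manual                          # deployed by hand when ready
```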
This incremental progression from local development stacks, to virtual machines, to specialized containers led us to the middle of 2019, where Kubernetes allows automating, deploying, and scaling containerized applications across many data centers. Fruition’s hosting infrastructure was updated to take advantage of these improvements and is now cloud agnostic, allowing for hosting arbitrage and other advanced techniques.
Drew Michael is Fruition’s Vice President of Technology. After graduating from Duke in 2000, Drew and two partners started Tribalectic — an online body piercing community and store. He then started his own web development company in 2008 and worked with several agencies, including Fruition. When Drew came to Fruition to work full time, he brought a few of his employees with him. Since then, Drew has mentored teams and helped shape the direction and future of Fruition. When he’s not at work, Drew is out seeing live music and skiing.