Sunday, 7 May 2017

The Bus Factor

I’ve been meaning to write this for quite a while, as the bus factor is something I’ve (literally) run into in my career. For those of you not familiar with it, the “Bus Factor” is an informal measure of a project’s resiliency to the loss of one or more key members. It’s basically the programming version of the old adage “Don’t put all your eggs in one basket”.

Story Time

Some years ago I was a software development intern at a large company in Milwaukee, Wisconsin. The team I was on was broken into a U.S. development team, an offshore dev team in India, and an offshore QA team in China. We had daily scrum meetings at 8 AM so that the U.S. and Indian teams could all participate together. One day we got word that one of the senior-most developers had literally been hit by a bus while crossing the street (thankfully he made a full recovery, but it certainly slowed down that part of the team, as he was out for 8 weeks or so).

How to Reduce the Bus Factor

I’m sure people could (and probably have) written entire books on the subject of reducing the bus factor and spreading knowledge around the entire team. Spreading knowledge is really the key element in bus factor reduction.

How many people currently work on a team where one or two people are basically the wizards whose secret spells make critical things happen (like deployments, or provisioning infrastructure assets, or SSL certificates, or any of the other million things that need to be done in order to make software work)? I know I’ve worked on several teams where that happened. I’ve also worked with people who wanted to increase the bus factor as they thought it gave them better job security (a notion I strongly disagree with).

In my experience one of the best ways to reduce the bus factor is to maintain an internal wiki where developers and administrators can document processes for anything which they are going to do more than once (and sometimes it’s good to document things that are being done once as well). Another great idea is to regularly schedule cross training (n+1 isn’t only a good idea for infrastructure, developers and admins should have a bit of redundancy as well).

Ethics

I personally feel that there is an ethical responsibility for all engineers to be transparent in what they do. I never want to be the only person capable of doing something; instead, I do my best to make sure that anything I do which may ever need to be done again is documented at least well enough that someone can probably piece it together. Doing this ensures that if I am ever hit by a bus, the rest of my team won’t have to try to figure out the magical incantations I have developed in order to do a number of things.

Conclusion

At the end of the day, reducing the bus factor is good for your team. You never know when you or one of your colleagues will suddenly no longer be available to work (it might be a literal bus, or something more mundane like taking a new job, or leaving for a few months for a sabbatical or maternity/paternity leave). As an engineer and a member of a team you have an ethical obligation to ensure that you are both sharing the processes and techniques you’ve developed with your colleagues, and also trying to learn those processes and techniques from your colleagues.

Saturday, 6 May 2017

YASC: Yet Another SSL Checker

I wanted to do a little side project in less than 24 hours: something fairly simple, but enough to really get my feet wet with ASP.NET Core MVC. I decided to build a little tool which can examine the current details of an SSL certificate, and which can also send proactive emails when a certificate is 30 days from expiring. This was also a great opportunity to play with Bootstrap 4 and C# 7.

Technologies Used

  1. Bootstrap 4
  2. ASP.NET Core MVC
  3. Entity Framework Core
  4. SendGrid + SendGrid Transactional Templates
  5. Hangfire.IO

Before I start getting long-winded, if you’d like to just see the code, head over to my github.

High Level Approach

This application was kind of fun in that the core logic which actually “does” the important part of checking the SSL certificates is only a couple dozen lines of code. The way it works is to open up a connection to a server and then take a look at the SSL certificate that was associated with the response. There are a few “limitations” around what can be inspected currently, as anything other than a non-expired, trusted certificate will throw an exception. I figured it would be fun to play around with Hangfire and SendGrid as well in order to do a nice background batch email process.
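The heart of that check might look something like the sketch below (this isn’t the exact code from the repo; the class name, hostname, and port are placeholders):

using System;
using System.Net.Security;
using System.Net.Sockets;
using System.Security.Cryptography.X509Certificates;

public static class CertificateInspector
{
    public static X509Certificate2 GetCertificate(string host, int port = 443)
    {
        using (var client = new TcpClient(host, port))
        using (var ssl = new SslStream(client.GetStream(), leaveInnerStreamOpen: false))
        {
            // Performs the TLS handshake; an untrusted or expired certificate throws here,
            // which is where the "limitations" mentioned above come from.
            ssl.AuthenticateAsClient(host);
            return new X509Certificate2(ssl.RemoteCertificate);
        }
    }
}

// Example usage: warn when the certificate is within 30 days of expiring.
// var cert = CertificateInspector.GetCertificate("example.com");
// var daysLeft = (cert.NotAfter - DateTime.Now).TotalDays;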

Hosting On Azure

If you want to play around with it, the application is running on a free Azure App Service here. Note that these free services turn themselves off after periods of inactivity, so (unless this service becomes incredibly popular) the cron tasks that Hangfire is executing almost certainly will not happen, as that would require the server to be running.

Thoughts

This was a really fun one-day project which let me play around with a few technologies I had previously only touched in a very limited fashion. There are a number of bugs I noticed, so this could really use a bit of polish and TLC, but that probably won’t happen anytime soon. Please poke around at the code and let me know if you see anything really “surprising” or anything which would be an easy improvement.

Thanks for reading!

Wednesday, 12 April 2017

HOWTO: Migrating from a VPS to GCP Compute Engine

This post is going to be a brief guide on how to migrate a wordpress site from an existing host to Google Cloud’s Compute Engine service. Note that this guide assumes that multiple sites are being migrated; if only one is, that should make things slightly simpler.

At the end of this guide the site will be migrated, an SSL certificate from Let’s Encrypt will be provisioned, and Apache will be doing its thing. I’ll leave it as an exercise to the reader to put the site behind CloudFlare (hint: if you are already there the only thing you probably have to change is your A records). For my purposes this guide will also include migrating all images and attachments to Google Cloud Storage.

Step 1: Back up EVERYTHING

In order to make this all work you are going to need backups of everything: the databases, the existing wordpress installs, any other assets you are using, etc. The easy way of doing this is to ssh to your web server and just tar cvzf my_site.tgz /var/www/my_site (this creates a tarball). Then use something like sftp or scp in order to copy the tarball to your local machine. Rinse and repeat for each site.

Repeat this for the database server. Assuming you have one database per site (my preferred way of doing it) you can just mysqldump -u root -p mydb > mydb.sql (you will be prompted for the password). If your database is huge it might be prudent to zip or otherwise compress the dump file (mine are only a few megabytes, so I didn’t bother). Once you have the dump file, copy it to your local machine. Keep it up until you have all your databases.
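Put together, the backup for a single site looks roughly like this (host names, paths, and database names are placeholders to adapt):

# On the old server: archive the wordpress install and dump its database
tar cvzf my_site.tgz /var/www/my_site
mysqldump -u root -p mydb > mydb.sql

# From your local machine: pull both files down
scp user@old-server:my_site.tgz user@old-server:mydb.sql ./backups/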

Step 2: Make the Cloud a thing

Now that you have all of the goods on your local machine, it’s time to make a place for them to live. To that end you’ll want to provision a brand new VM on Compute Engine. I’m not going to walk you through that process as it is pretty well documented (and also just consists of pressing a few buttons). The one thing to watch out for is to make sure that the VM has all of the API Access Scopes that you will need, as in order to change them later you first have to power off the virtual machine (which is stupid and lots of people have complained about it, but that is how it is).

While that is booting up, let’s get the MySQL stuff setup.

If you’re moving to Google Cloud, you might as well move in fully, so for MySQL we’ll use Google’s Cloud SQL – MySQL Second Generation. The only really weird part here is making sure you whitelist the IP address of the VM you set up. Alternatively you can go through the process of using the Cloud SQL Proxy, but IP whitelisting is a lot easier (and has fewer moving parts).

Step 3: To the CLOUD

Now that your data has a place to live, it’s time to start pushing those bytes to google. This is a 2 step process:
1. Upload the wordpress tarballs to the VM (assuming it has finished booting by now). The easiest way I’ve found to do this is to install gcloud and then execute gcloud compute copy-files ~/local/path <vm_name>:~/, replacing <vm_name> (chevrons and all) with the name of the instance you created (mine is web1 because I’m all sorts of imaginative when it comes to naming servers).
2. Navigate to Google Cloud Storage and create a new bucket which is not publicly accessible. After the bucket is created upload all of the sql dumps (in an uncompressed form).

Step 4: Make it Live!

All of the parts are in place; they just need to be configured properly and you’ll have all your blogs running.

Web Server

Decompress the tarballs so that you have the wordpress directories again: tar xvzf my_site.tgz. Copy the resultant directory to wherever you want your blog to live (I just throw them in /var/www/). Next up you’ll need to set up Apache so it knows about your site.

Since we are only going to support SSL, we will only configure virtual host files for SSL. Your configuration should look something like this:

<IfModule mod_ssl.c>
        <VirtualHost _default_:443>
                ServerAdmin [email protected]
                ServerName lukebearl.com
                ServerAlias www.lukebearl.com

                DocumentRoot /path/to/lukebearl.com

                <Directory />
                        Options FollowSymLinks
                        AllowOverride None
                </Directory>

                <Directory /path/to/lukebearl.com>
                        Options Indexes FollowSymLinks MultiViews
                        AllowOverride All
                        Require all granted
                </Directory>

                # Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
                # error, crit, alert, emerg.
                # It is also possible to configure the loglevel for particular
                # modules, e.g.
                #LogLevel info ssl:warn

                ErrorLog ${APACHE_LOG_DIR}/error.log
                CustomLog ${APACHE_LOG_DIR}/access.log combined

                # For most configuration files from conf-available/, which are
                # enabled or disabled at a global level, it is possible to
                # include a line for only one particular virtual host. For example the
                # following line enables the CGI configuration for this host only
                # after it has been globally disabled with "a2disconf".
                #Include conf-available/serve-cgi-bin.conf

                #   SSL Engine Switch:
                #   Enable/Disable SSL for this virtual host.
                SSLEngine on

                #   A self-signed (snakeoil) certificate can be created by installing
                #   the ssl-cert package. See
                #   /usr/share/doc/apache2/README.Debian.gz for more info.
                #   If both key and certificate are stored in the same file, only the
                #   SSLCertificateFile directive is needed.
                SSLCertificateFile    /etc/letsencrypt/live/lukebearl.com/cert.pem
                SSLCertificateKeyFile /etc/letsencrypt/live/lukebearl.com/privkey.pem
                SSLCertificateChainFile /etc/letsencrypt/live/lukebearl.com/fullchain.pem

                # ... <snipsnip> ...

        </VirtualHost>
</IfModule>

Once you have that set up, execute sudo service apache2 reload and then set up the Let’s Encrypt certificate.

In order to do that you’ll first need to make sure that you have certbot installed. After that, run this command: sudo certbot certonly --webroot --webroot-path /var/www/html/ --renew-by-default --email [email protected] --text --agree-tos -d lukebearl.com -d www.lukebearl.com. The webroot is /var/www/html because Apache’s default site (which we never disabled) is still serving the default document on port 80.

You’ll also want to make sure that the renewal is scheduled in a crontab entry.
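Something along these lines in root’s crontab would do it (the schedule is arbitrary; certbot only actually renews certificates that are close to expiring):

# Attempt renewal twice a day, then reload Apache to pick up any new certificate
0 3,15 * * * certbot renew --quiet && service apache2 reload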

At this point you should be able to navigate to your site (and get a database connection error).

SQL

From the Cloud SQL control panel in console.cloud.google.com, click the “Import” button and then, in the dialog that appears, select each of the dump files in turn. You’ll also want to use the SQL console to create a user with rights to all of those wordpress databases, but who isn’t root. Make sure the password is decently strong.
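The user creation might look something like this (the user name, password, and database name are placeholders):

CREATE USER 'wp_user'@'%' IDENTIFIED BY 'a-long-random-password';
GRANT ALL PRIVILEGES ON lukebearl_wp.* TO 'wp_user'@'%';
FLUSH PRIVILEGES;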

After you are done importing all of the databases, go back to the web server for the final configuration task before your site is live.

Editing wp-config.php

cd into the wordpress directory and then open the wp-config.php file in your text editor of choice (like vim). You’ll need to look for and edit all of the DB_* settings to reflect your new MySQL instance. Pay attention to the DB_HOST as that should be the IPv4 address from the SQL management pane.
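The relevant lines in wp-config.php end up looking something like this (all of the values here are placeholders; use your own database name, user, password, and the instance’s actual IPv4 address):

define('DB_NAME', 'lukebearl_wp');           // the database you imported
define('DB_USER', 'wp_user');                // the non-root user you created
define('DB_PASSWORD', 'a-long-random-password');
define('DB_HOST', '130.211.0.10');           // the Cloud SQL instance's IPv4 address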

Extra Credit: Images

If you run an image heavy blog (which this blog obviously is an example of), you’ll notice a considerable speed-up if you make use of the Google Cloud Storage wordpress plugin. One big gotcha that I found is that php5-curl must be installed or this plugin breaks. A big thanks to the developers who work on that plugin as they quickly helped resolve the issue.

Moving the images is roughly a three-step process:
1. Create the bucket (and make sure to assign the allUsers user “Read” access before you upload anything)
– You probably want to create the bucket as “Multi-Regional” so that images get cached to edge locations.
– Also create a new folder in the bucket named “1”.
2. Copy all of the files from the wp-content/uploads directory to the bucket. When doing this I have found it easiest to cd into the wp-content/uploads directory and then execute the following: gsutil -m cp -r . gs://<bucket_name>/1/
– This may take a few minutes to complete. The -m flag will make it multithreaded.
3. Login to the WordPress Admin and install the plugin.
– After installing, configure the plugin to point to the bucket you just copied everything into. Also make sure that the “Use Secure URLs for serving” flag is checked or you’ll end up with mixed-content errors.

At this point you should have your blog setup on Google Compute Engine using the Cloud MySQL instance and all of your images should be hosted through Google Storage. Let me know how it goes! @lukebearl

Tuesday, 11 April 2017

Does it even exist if it isn’t in source control?

I was working on a little side project at work to make my employer’s data backup processes a bit more robust which involved writing a small console utility. I was asked if the code for the utility was already in source control (this code represented about 10 hours of effort on my part), and I almost replied “Does it even exist if it isn’t in source control?” Instead, of course, I simply sent the link to the repo where the code lives.

This made me start thinking about what really goes into a project, small or large. How many small little side projects have I started, and abandoned within a few hours, simply because whatever I was working on wasn’t interesting enough to hold my attention? Would it be logical to start every side project by setting up source control?

At the end of the day I think there is a real judgement call that has to be made about what is appropriate to set up a repository for and what can just live (and likely die) on your local disks. When learning new technologies I generally work through a few different types of “Hello World” type applications that are very simple and then I’ll work through a tutorial or two that is a bit more advanced. Generally the bulk of the code for those things is heavily derived from whatever resource I’m using, so there is minimal to no benefit to putting it in source control. As soon as you move beyond the tutorials: enter source control, stage left.

Modern day software professionals have no excuse not to use source control. GitHub has free public repositories (and private repositories at $7/mo, an expense most folks in software can afford), Bitbucket has free public and private repositories, and those are just the two options that I have used extensively. If you don’t trust anyone else with your code, standing up your own git instance on a VPS is possible as well.

If you really don’t like git for some reason (I’d highly encourage you to learn it though), there are plenty of other options:
1. Bitbucket offers mercurial hosting
2. Fossil SCM (with hosting available here) – note I’ve never used Fossil, but I’ve heard many good things about it
3. If you want to kick it old school, there are plenty of SVN hosting options available as well

In the end, the adage “Does it even exist if it isn’t in source control” rings true. There are so many different options suitable for just about any skill level, personal preference, and cost bracket that there is no excuse not to simply use source control.

Monday, 3 April 2017

VPS Migration to GCP

I’ve had all of the blogs that I manage on several VPSs hosted through RamNode for the past couple of years. While there is nothing wrong with RamNode, I figured it was about time to try out something new. I originally was going to move to AWS, since their MySQL RDS offering looked pretty compelling, but then I discovered Google pretty much had feature parity with the parts of AWS I was interested in (plus they have a much nicer value proposition than AWS does for hosting costs).

Out With the Old

In order to keep things reasonably fast, I ran 4 separate VMs: web1, web2, db1, and cache1. The initial idea was that web1 and web2 would be replicas of each other and cache1 (which ran Varnish) would be responsible for balancing load. I never got around to any of that; instead, web1 (which had about 90% of the traffic) and web2 ran as completely independent machines. Both talked to cache1, which was running Varnish and acted as the public entry point to the blogs. db1 was just a database server, running some recentish version of MariaDB. Also, in order to help secure the database server, all of the VMs ran OpenVPN so that all internal communication happened on a private network.

Unfortunately, this was probably a bit over-engineered for the amount of traffic that the blogs actually get in aggregate (under 200k uniques/month), plus it was a maintenance nightmare since I needed to watch over 4 separate servers (but it was a fun learning experience).

In With the New

Now that I have moved to Google Compute Engine, I just have one moderately beefy VM running which hosts all of the blogs. Instead of running a dedicated database VM (which I would have to administer), I’m using Google’s Cloud SQL MySQL offering (a db-n1-standard-1 instance). Eventually all of the images will be moved to Google Cloud Storage (this is one area where AWS is light years ahead of Google); in order for that to work I need the Google-sponsored wordpress plugin to actually work properly.

Future Plans

The reason I wanted to migrate to either AWS or Google is to support future growth. While volume is moderately low right now, hopefully one or more of the hosted blogs will eventually see considerable amounts of traffic. If that happens, being able to reconfigure a few things to support GCP’s load balancing will be critical. The only thing preventing me from setting that up right now is that I would need an SSL certificate which can terminate all of the blogs I manage, which would be a bit expensive (especially since I currently just use LetsEncrypt for all my certificates).

Overall we’ll have to see if this ends up being more reliable and at least as fast as what I previously had configured. I have enough CPU and memory budget where I can probably implement a caching strategy again if the performance isn’t quite where it needs to be (or I’ll just end up setting up a second VM in order to handle all of the caching duties). I’m still running everything through CloudFlare, so that makes sizing everything much more forgiving.

Saturday, 7 January 2017

XUnit Test Lifecycles

You are unit testing right? I hope so. If you are you may have run into some scenarios where things are not working quite right. One issue I’ve personally run into is things which attempt to maintain state between tests (eww, I know). Unfortunately, while it is good practice to make sure all of the tests you are writing are completely independent from each other, sometimes shared state will creep in due to other factors (like using an in-memory database because Microsoft kills kittens* and didn’t make Entity Framework easy to mock out).

XUnit Test Context

How many times does a constructor get called for a class in C#? If you answered one, you’d be completely wrong when it comes to XUnit (it kind of surprised me too when I first learned about it). It turns out that as part of the XUnit lifecycle the constructor is called before each test is run; likewise, you can implement IDisposable and have Dispose() called after each test is run. While it runs kind of counter to expectations, no state set in any instance variables will be shared between any tests. If you need the ability to share state, XUnit provides not one but two separate options: 1) Class Fixtures and 2) Collection Fixtures. The choice of which to use is entirely dependent upon the scope of what needs to share state. As you’ll see below, by default a class is a collection, although you can also build collections composed of the tests of multiple classes.
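As a quick illustration (the class and the list it manipulates are made up for this example), each [Fact] below gets its own fresh instance:

using System;
using System.Collections.Generic;
using Xunit;

public class LifecycleTests : IDisposable
{
    private readonly List<int> _numbers;

    public LifecycleTests()
    {
        // Runs before *each* test in this class, not once per class.
        _numbers = new List<int> { 1, 2, 3 };
    }

    [Fact]
    public void Removing_An_Item_Shrinks_The_List()
    {
        _numbers.Remove(1);
        Assert.Equal(2, _numbers.Count);
    }

    [Fact]
    public void The_List_Always_Starts_With_Three_Items()
    {
        // Passes regardless of what other tests did, because the constructor rebuilt the list.
        Assert.Equal(3, _numbers.Count);
    }

    public void Dispose()
    {
        // Runs after each test; put per-test cleanup here.
    }
}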

Class Fixtures

In order to create a class fixture a small amount of setup needs to be done: a new class needs to be created which will maintain the shared state across all runs of all tests that are part of the class. The test class in question needs to implement IClassFixture<YourFixtureName>, and your fixture will be passed in as a constructor argument.

An example fixture might be something like:

public class MyFixture
{
    private MySuperObject _myInstanceVar;

    public MyFixture()
    {
        _myInstanceVar = new MySuperObject();
    }
}

while consuming the fixture would look like this:

public class FixtureTests : IClassFixture<MyFixture>
{
    private MyFixture _fixture;

    public FixtureTests(MyFixture fixture)
    {
        _fixture = fixture;
    }
}

From a test writer’s perspective the fixture mechanism almost looks like a dependency injection engine (except with absolutely zero magic in it).

Collection Fixtures

Collection fixtures operate pretty similarly to class fixtures, although they do require a slightly larger amount of setup. In order to set up a collection fixture, much like a class fixture, you must first define the actual fixture. The wiring logic, however, is a bit different:
First define a collection:

[CollectionDefinition("Awesome Collection")]
public class MyCollection : ICollectionFixture<MyFixture>
{
    // Intentionally left empty. 
}

The actual test class would look like the below:

[Collection("Awesome Collection")]
public class MemberOfMyCollectionTests
{
    private MyFixture _fixture;
    public FixtureTests(MyFixture fixture)
    {
         _fixture = fixture;
    }
}

Note on best practice: it is probably almost always best to pull the collection name out to a constants file (see the sketch below) to prevent simple typos from dramatically changing the behavior of your tests. But I’m in favor of just using constants everywhere unless there is a compelling reason not to…
Second note: the fixtures must all be in the same assembly. XUnit doesn’t let us get fancy with collection definitions spanning multiple assemblies (not that there would be any big benefit from that).
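A minimal sketch of that constants approach (the class and constant names are made up):

public static class TestCollections
{
    public const string Awesome = "Awesome Collection";
}

[CollectionDefinition(TestCollections.Awesome)]
public class MyCollection : ICollectionFixture<MyFixture>
{
    // Intentionally left empty.
}

[Collection(TestCollections.Awesome)]
public class MemberOfMyCollectionTests
{
    // Tests consuming the shared fixture go here.
}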

Fixture Lifetimes

For either of the possible fixture types the lifetime logic is basically the same: the fixture is created immediately prior to the first invocation of the first test in the collection (be it a class-collection or a collection-collection). The fixture is destroyed immediately after the last test in the collection.

XUnit Test Collections

The lifecycle and parallelization of tests is largely driven by the concept of Test Collections. The official documentation does a good job of giving a quick overview. To summarize: by default all tests in a given class are part of the same collection and thus will run in a serial fashion. Classes in the same assembly will run their collections in parallel. This behavior can be customized by specifying a couple of assembly-level attributes:

  1. Forces all tests in all classes to be in a single collection – i.e. force serial execution of all tests in the assembly.
    [assembly: CollectionBehavior(CollectionBehavior.CollectionPerAssembly)]
  2. Determines the maximum parallelization of tests. Setting this to 1 only allows one test to execute at a time, although multiple tests may be executed in an interleaved fashion. By default this is equal to the number of virtual CPUs on the PC (which seems to be a good default unless you have some very specific use-cases).
    [assembly: CollectionBehavior(MaxParallelThreads = n)]
  3. Disables test parallelization assembly-wide. Note that this is different than MaxParallelThreads since it actually turns off the parallelization infrastructure in XUnit (for the assembly this attribute decorates).
    [assembly: CollectionBehavior(DisableTestParallelization = true)]


*Just kidding. No kittens were harmed in the writing of this post.

Sunday, 1 January 2017

Goals for 2017

It’s a new year, so in the tradition of setting a few resolutions, here are a few of my professional goals for 2017:

  1. More Networking
  2. More Side Projects
  3. More Reading
  4. More Writing
  5. Take Better Care of Myself

All of those things should be pretty easy, but there is the all important question of “Why?”

Networking

I work as a senior full-stack engineer at my full-time job, and also work as (the only) infrastructure engineer for my wife’s businesses (1, 2), both of which keep me fairly busy, but as a professional I’m always curious what other people are working on and what other stacks are out there. In order to start getting more exposure to other ways people are doing things, I’m going to try to attend at least one meetup a month just to chat with other developers and engineers. A stretch goal would be to find one or more conferences (ideally within the Greater Los Angeles area) to attend.

More Side Projects

This one is kind of an easy resolution as I’ve already started working on another side project. Historically my problem with side projects is that I’m great at working on something for one or two days, but staying focused on it long-term has been an issue. The goal with this resolution isn’t so much to just start a bunch of projects, it’s to work on actually shipping things. I did ship a couple of (very small) side projects last year (both are .NET centric):

  1. BlockListChecker – A tool to check if an email address is on any of many ESPs suppression lists.
  2. NuGetVersionChecker – A tool to generate a (very simplistic) report of packages in a package.config file.

I would like to ship at least 6 projects this year (assuming some mix of very simple to somewhat complex).

More Reading

I’ve always been a bit of a bookworm, although I’ve gravitated towards fiction. I recently got a Kindle Paperwhite and have found it to be very easy to read on (even for technical books). I’ve already started reading “The Pragmatic Programmer”, and while the book itself is a bit dated, I’m finding there to be a lot of great content. At the tail end of last year I also signed up as a volunteer book reviewer for Manning Publications, and have already reviewed one book (which was a fantastic read; I’m excited for the remainder of the book to be released so I can finish it). I’ve been adding programming books to an Amazon wish list for a few years now, and 2017 will be the year I’ll try to get through at least a few of them. To steal an idea from “The Pragmatic Programmer” I’m going to try to get through at least one book per quarter (ideally even quicker than that).

More Writing

I’m going to assume that as of the writing of this post I have exactly zero regular readers on this blog. There are probably two major reasons for that: a) I don’t write on a regular basis; and b) I need to become a better writer. The nice thing is that the resolution to both of those issues is the same: write more. Writing is a skill like any other, where the more it is done, the better it will become (getting feedback is helpful in that as well).

Take Better Care of Myself

I think every person has this resolution every year. I spend 60+ hours a week in front of a computer between my job, my wife’s businesses, and working on side projects/playing games/etc. I do a few things which help: I ride my bicycle to work most days, and try to either go for a long hike or a long bike ride every weekend. There are a few low-hanging fruit and a few habit changes I can make which will make me healthier and also prevent RSI in my hands/wrists (something I periodically struggle with).

  1. Stop slouching – proper posture would go light-years to keeping me in good shape.
  2. Drink more water – as my coworkers can attest, I drink way too much coffee (3-5 cups most days), and in the afternoon I like carbonated and heavily caffeinated drinks. To start, I am going to try to drink one glass of water for every cup of coffee I have in the mornings. For my afternoon drinks I’m going to stop with the energy drinks and stick to sugar-free soda (which obviously has less sugar, and also much less caffeine).
  3. Eat healthier – While I get a fair amount of cardio in every day (bike rides and afternoon walks), most things I’ve read lately show that weight control is really a function of your diet (i.e. working out is great for your heart, but if you want to lose or maintain your current weight, calorie restriction is the only real sure-fire way to do it).

Let’s Make 2017 a Great Year

I’ve never tried publicly blogging my resolutions before; let’s see if it makes me a bit more accountable. If nothing else, I’ll be able to point my browser at this page anytime I need a bit of motivation to keep on track with my goals for the year. These are all fairly conservative goals, so I’m optimistic about being able to both meet and exceed them, which should put me in a great spot for 2018.

Tuesday, 18 October 2016

A Gentle Introduction to Onion Architecture in ASP.NET MVC – Part 2

In part 1 of this series we discussed what an onion architecture application would look like and the technologies that we can leverage in .Net 4 in order to make that work. In this section we’ll go over how the project is structured, including spending a bit of time looking at how the IoC container is configured. Since this is a simple application, the configuration is significantly easier to understand than it can be in more complex applications.

Project Structure

The application consists of 4 projects: Core, Infrastructure, Infrastructure.Tests, and Web. Each one of these projects has a unique purpose and it behooves all developers to ensure that they don’t mix concerns between projects.

Core Project

The core project is responsible for defining the interfaces for all services which will be implemented in the infrastructure layer, and it is also responsible for holding all domain models. In Entity Framework Code First projects, the EF entity models can exist in Core. The models that exist in the sample application are not “true” domain models; instead, they are just plain POCO representations.

Infrastructure and Test Project

The infrastructure project is responsible for the implementation of all of the services defined in the Core project. One of the critical distinctions between onion architecture and traditional layered applications is that the data access code (if there is any) lives in an infrastructure-style project instead of living in the base/core layer. In the sample application the Infrastructure project only calls out to third-party services. The test project only tests the behavior of the composite service, as I have not written this application in a sufficiently decoupled fashion to pass in the RestClient; ideally, an abstraction would also be built around the ConfigurationManager (see the sketch below).
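Such an abstraction might look roughly like this (the names are made up; this isn’t in the sample code):

public interface IAppSettingsReader
{
    string Get(string key);
}

public class ConfigurationManagerSettingsReader : IAppSettingsReader
{
    // Thin wrapper so services depend on the interface and tests can supply a fake.
    public string Get(string key) => System.Configuration.ConfigurationManager.AppSettings[key];
}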

Web Project

As the name probably implies, this is where the “web” parts of the application go, including controllers, views, front-end assets, etc. For this application front end assets are just managed by downloading and saving them into /Scripts, and then everything is manually wired up using the BundleConfig.cs in App_Start. The interactivity within the application is achieved by using a little bit of jQuery.

Dependency Resolution

This application exposes 4 services in total, but only has 2 interfaces. This is due to the fact that the CompositeBounceCheckerService is composed of both MailgunService and SendGridService, hence all three of them share the same interface. The final service, the SuppressionListCheckService, just consumes the CompositeBounceCheckerService. This final layer of indirection isn’t, strictly speaking, necessary; however, it does afford the ability to easily pass one of the service-specific IThirdPartyBounceService implementations as its dependency if we only wanted to check for suppression in a single ESP. The DefaultRegistry below shows how to get that all set up.

            For<IThirdPartyBounceService>().Use<CompositeBounceCheckerService>()
                .EnumerableOf<IThirdPartyBounceService>().Contains(x =>
                {
                    x.Type<SendGridService>();
                    x.Type<MailgunService>();
                });

This code basically tells StructureMap to scan all registered assemblies (all assemblies listed in the Scan call above this) and register the SendGridService and MailgunService as services within the composite service.

StructureMap is capable of doing a lot more, such as having custom life-cycles for certain services or handling weird object hierarchies (you can do a lot more than just have interfaces and services).
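For example, inside the same registry you could pin lifecycles down per service. The two services below are made-up examples (not part of the sample app), but the For/Use/lifecycle syntax is the standard StructureMap style used above:

            // One shared instance for the lifetime of the container.
            For<ISettingsCache>().Use<InMemorySettingsCache>().Singleton();

            // A brand new instance every time the dependency is resolved.
            For<IEmailAuditor>().Use<EmailAuditor>().Transient();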

Testing

Testing is fairly direct as we just mock out the services that feed into the service we want to test (generally known as the “SUT” or the System Under Test). One easy example of a test is this:

        [Fact]
        public void Get_Bounces_Returns_A_List_Of_View_Models()
        {
            // Arrange
            var returnList = new List<SuppressedEmailViewModel>
            {
                new SuppressedEmailViewModel
                {
                    AddedOn = DateTime.Now,
                    EmailAddress = "[email protected]",
                    EmailServiceProvider = EspEnum.UNKNOWN,
                    ErrorCode = string.Empty,
                    ErrorText = string.Empty
                }
            };

            var mockService1 = new Mock<IThirdPartyBounceService>();
            mockService1.Setup(x => x.GetBounces()).Returns(returnList);
            // Initialize the composite service with an array of one third party service.
            var compositeService = new CompositeBounceCheckerService(new[] { mockService1.Object });

            //Act and Assert
            var result = compositeService.GetBounces();

            Assert.Equal(returnList, result);
            mockService1.Verify(x => x.GetBounces(), Times.Once);
        }

The code above will first create data for the mock to return, then create and set up the mock, then inject it into the service, then call the service, and finally make sure that the behavior of the service was correct. Having a robust suite of tests allows us to change the implementation of any of the services and still verify that the output is correct.

When writing tests, I generally follow the AAA pattern (Arrange, Act, Assert) and leave comments in the code where those things are happening as an easy way to make sure that my tests are structured in a consistent fashion.

That’s all for now folks. I hope that this two part series on the onion architecture made the benefits of using it a bit more clear.

Please, check out the code on Github, and drop me a line if you have any questions!

Sunday, 18 September 2016

A Gentle Introduction to Onion Architecture in ASP.NET MVC – Part 1

Welcome to part one of a multi-part series on enterprise applications in .Net! In this series we’ll go over everything that is necessary to build a best-in-breed enterprise application with the Onion Architecture at its core. The word enterprise is integral here, as this is going to be software which is capable of being extended year after year in a clean fashion while allowing things to stay DRY and testable. We’ll cover the following:

  1. System to Design
  2. Initial Architecture and Architectural Considerations
  3. Proper Abstractions Around Each Layer of the “Onion”
  4. Unit Testing

This series of posts isn’t going to exhaustively walk you through every decision that may need to be considered (i.e. we’ll only very briefly discuss localization). My intention with these posts is to put forth my ideas of what a solid architecture looks like. With that being said, the application we are going to design is called “Block List Checker”, an application which allows one or more Email Service Providers (ESPs) such as Mailgun, SendGrid, etc. to be queried simultaneously to return a list of all email addresses which are on a suppression list. This is a very simple application which is intentionally somewhat over-engineered for the purposes of illustrating the various components of the system.

Inversion of Control and Dependency Injection

In order to design this software appropriately there are several tools that will need to be incorporated into the solution. The first (and arguably the most important) of these is StructureMap, a .Net-friendly IoC/DI container. For those not familiar: IoC (Inversion of Control) and DI (Dependency Injection) are design patterns which help make code much more testable. They do this by “inverting” the dependency chain. The way this plays out in MVC applications is that controllers are injected with the services they depend on, and the container in turn resolves whatever dependencies those services require. The IoC container basically maintains a mapping of interfaces to services and, at runtime, provides the controller with an instance of the service requested. Because this effectively requires loose coupling between components, writing unit tests becomes much easier (note that the description of IoC here is just barely skimming the surface; entire books have been written on this subject).
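As a rough illustration of what that looks like in a controller (the controller itself is a placeholder; IThirdPartyBounceService and GetBounces() mirror the bounce-checking service used elsewhere in this series):

using System.Web.Mvc;

public class SuppressionController : Controller
{
    private readonly IThirdPartyBounceService _bounceService;

    // The IoC container supplies the implementation at runtime;
    // a unit test can pass in a mock instead.
    public SuppressionController(IThirdPartyBounceService bounceService)
    {
        _bounceService = bounceService;
    }

    public ActionResult Index()
    {
        var bounces = _bounceService.GetBounces();
        return View(bounces);
    }
}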

Unit Testing

Because this application is so simple, I have opted not to write comprehensive tests around it; however, I have written a few tests just to demonstrate using xunit with this application and to show how the loose coupling makes that testing achievable.

Known Issues

This application is very basic and the UI could use a bit of love. It currently targets Asp.Net MVC5 instead of .Net Core MVC6 due to the fact that I prefer writing API access code using RestSharp, and that currently doesn’t seem compatible with .Net Core. I may release another sample project in the not distant future which targets .Net Core. It also currently doesn’t make use of either a proper front end framework, or of most of the functionality that MVC5 exposes on the front end. I haven’t written this application to be overly robust, and due to the nature of how it consumes API keys without any authentication I wouldn’t expose it on a publicly facing web server.

Get the code

With all that being said, the code is available here: https://github.com/lbearl/BlockListChecker.

Part Two of the series is available here

Monday, 25 April 2016

PucciThe.Dog – Python/Flask on an RPI

Overview

Flask is an amazing little Python web micro-framework, for those who aren’t familiar with it. It allowed me to build out this entire application in about 8 hours of total dev time (including a whole bunch of time just not quite grokking flask-login). The minimal goals I wanted to achieve were to expose a very basic web presence for Pucci (the dog) and to build a very basic puppy cam. Since I had an old Raspberry Pi lying around, I figured that this might be the ideal project to use it for.

Please feel free to take a look at the code

Bill of Materials

  1. Raspberry Pi
  2. Microsoft LifeCam NX-3000
  3. Sweetbox Case (optional)

The Plumbing

In the interest of making this work as quickly as possible, the actual picture-taking logic is just a shell script run by cron (specifically once every 5 minutes). It also examines all of the files in the directory and deletes any that are more than a day old. The real magic is actually done by fswebcam as documented here.

DATE=$(date +%Y-%m-%d_%H%M)  # filename timestamp (the exact format is a guess)
fswebcam -r 320x240 --jpeg 80 -D 3 -S 13 \
    /home/pi/poochpics/$DATE.jpg

Technically the webcam should support higher resolution pictures, but I suspect that it isn’t quite as compatible as I was led to believe. The -D 3 -S 13 flags are very important for me, as the camera was corrupting 70+% of the images that it was capturing. These arguments will first delay the capture for 3 seconds and then skip the first 13 frames captured, finally generating a photo based on the 14th frame. These numbers were very scientifically found by simply playing with the webcam until it was returning reliable results 100% of the time (it is possible that others will be able to operate the webcam without any delay or skipping any frames).
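For reference, the cron wiring and the purge step might look roughly like this (the paths, script name, and schedule are assumptions based on the description above):

# /etc/cron.d/poochpics (sketch): take a photo every 5 minutes
*/5 * * * * pi /home/pi/take_pooch_pic.sh

# Inside the script, after the capture: delete images more than a day old
find /home/pi/poochpics -name '*.jpg' -mtime +0 -delete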

Now that all of the photo generation and automatic purging of old images is handled the actual responsibilities for the web application are pretty slim: basically just a regular old mostly static web page with the ability to login to a secure area which will have the actual photos on display.

I made the jump to almost 100% Windows at home (since I have been working on Windows machines for pretty much my entire career), and decided to use this project as an excuse to try out the Python Tools now available in Visual Studio (spoiler alert: they are awesome). I can’t say that I have ever really used an IDE for Python development before (traditionally I’ve done it in a mix of Vim and Sublime Text), so I can’t compare it to some of the other Python IDEs out there, but as someone who gets paid to do C# in Visual Studio 2015, the experience was very nice.

Implementing the Web Application

In order to actually get things working I just started with the default template that comes with VS for Flask + Jinja2 templating (I contemplated doing this project as an Angular app with a Flask RESTful backend, but decided against it). The code is pretty basic, especially considering the entire application only consists of 4 routes and doesn’t really do any magic (there isn’t even a CRUD component to it). The one thing that isn’t exactly standard is my choice for authentication. As I mentioned earlier, this application does use flask-login, and while my original thought was to just hardcode the credentials, I ended up not going down that path as I couldn’t come up with a non-hacky way to persist the credentials that was any simpler than just adding a SQLite database with some credentials tossed into it. To that end, if you look at __init__.py you’ll notice the need to have a SQLite db called test.db. This database just has a single table called “user” which stores ids, usernames, emails, and bcrypt hashed passwords:
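The schema is roughly along these lines (a sketch; the exact column names and types in the repo may differ):

CREATE TABLE user (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    username TEXT NOT NULL UNIQUE,
    email TEXT NOT NULL,
    password TEXT NOT NULL  -- bcrypt hash, never the plain-text password
);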

Deployment

It really is an exciting time to be alive when I can build something on a Core i7 desktop with 32 GB of RAM, and then relatively effortlessly deploy it onto a computer the size of a microcontroller with a whopping 512MB of RAM and a sub-1GHz ARM processor. The deployment is actually pretty standard: it’s just Apache2 with mod_wsgi and the appropriate wiring as described here. The most interesting part of all this is probably the way that DNS is being handled: Namecheap now offers Dynamic DNS, so I have ddclient running on the Pi, automatically updating my home IP address to Namecheap’s DNS servers. If you want to try it out for yourself (and give me a little boost in the wallet as well), sign up here.

The only thing I’m a little nervous about with this setup is whether or not the relatively underpowered RPi is really going to do that well connected to the public internet. That being said, it is pretty heavily firewalled, with only port 80 exposed, so it’s a fairly limited attack surface for any of those internet hooligans.

I hope you enjoyed reading about my very small flask application on a RPi, please reach out to me on Twitter @lukebearl or head over to Github.