Matthew Lindfield Seager

Today I Learned that to include a Ruby symbol in a YAML file it needs to be prefixed with !ruby/symbol

For example:

my_list_of_symbols:
  - !ruby/symbol one
  - !ruby/symbol two

becomes:

{ my_list_of_symbols: [:one, :two] }
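Round-tripping this from Ruby's standard library works too, though with safe loading the Symbol class has to be explicitly permitted (a minimal sketch; note the top-level key comes back as a string unless you symbolise it separately):

```ruby
require 'yaml'

yaml = <<~YAML
  my_list_of_symbols:
    - !ruby/symbol one
    - !ruby/symbol two
YAML

# Psych's safe loading refuses non-basic types unless they're permitted
data = YAML.safe_load(yaml, permitted_classes: [Symbol])
# data => {"my_list_of_symbols" => [:one, :two]}
```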

Windows usage Venn Diagram

Sometimes I’m happy to be a late adopter of Ruby and Rails, other times I feel like I missed out on all the fun.

Anyway, today I heard of Rails Metal for the first time.

Yep, I’m linking to a blog post from 10.5 years ago! How’s that for a hot take?

Today I started learning about ActiveJob.

Thanks to a deprecation warning I’m also learning about Sidekiq, Redis and how (not) to store data in Redis :)

Short version:

Various (but widespread) adoption problems with Teams, SharePoint and OneDrive are a good reminder that less is more…

Google’s offerings may have a shorter feature checklist but in my mind that’s a good thing. The fundamentals are solid and the features they do have, work.

Nate Berkopec’s email series on practical Sidekiq has been really good.

The most recent one delved into Ruby memory usage in Sidekiq and in general and I found it really informative!

Ruby 2.6.2 is out (and 2.5.4) with some security fixes.

My upgrade steps (fish, homebrew and rbenv) were:

  • brew update; and brew upgrade ruby-build
  • rbenv install 2.6.2

I’m still amazed at how much effort people pour into these open source languages and tools!

Matching Bundler Version with Heroku

Bundler is a very helpful tool for managing third party dependencies in Ruby.

Bundler takes a “Gemfile” where you specify which gems you want to use (and potentially which version). When you run bundle (or bundle install) it reads the Gemfile and automagically figures out dependencies, sub-dependencies, sub-sub-dependencies (you get the idea) and then tries to find mutually compatible versions of each of them. Once it’s done that it downloads them all, installs them and then records which versions it chose in a Gemfile.lock. You then commit your Gemfile and Gemfile.lock to version control to make sure your collaborators (and your deployed application!) are all using the same trusted versions.
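As a sketch, a minimal Gemfile might look like this (the gem names and version constraints are purely illustrative):

```ruby
# Gemfile
source 'https://rubygems.org'

gem 'rails', '~> 5.2'   # "pessimistic" constraint: >= 5.2, < 6.0
gem 'pg'                # no constraint: any mutually compatible version
```

Running bundle install against this resolves concrete versions of each gem (and its dependencies) and records them in Gemfile.lock.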

Bundler is itself a gem so you do need to install it before you can use it to install (and manage) all your other gems. Assuming you already have ruby installed it’s as simple as:
gem install bundler


Recently, I was deploying an application to Heroku and I noticed a warning from the remote system. It’s only a warning but wherever possible I like to treat warnings as errors, otherwise they tend to accumulate and hide real problems in the noise. Anyway, the warning was:

Warning: the running version of Bundler (1.15.2) is older than the version that created the lockfile (1.17.1). We suggest you upgrade to the latest version of Bundler by running gem install bundler

On a local machine that message (and the following two lines I snipped) is quite helpful but not so much when it’s coming from a server I can’t control. Thankfully, Heroku have documented all this very thoroughly. You should definitely read their documentation but the short version is that they only support a limited number of “carefully curated” bundler versions.

The best way around such warnings is to match your local version of Bundler to Heroku’s carefully curated version(s). That page above links to another page with the currently supported versions:
https://devcenter.heroku.com/articles/ruby-support#libraries

As of today that’s version 2.0.1 for Gemfile.locks bundled with 2.x and 1.15.2 for everything else.


I ended up upgrading to Bundler 2.0.1 but I could just as easily have reverted to 1.15.2.

Below are some instructions on how to manage which version(s) of Bundler you have installed and how to massage your environment to use a particular version of Bundler.

# To check which version(s) of bundler you currently have installed:
$ gem list | grep bundler
bundler (1.17.1)

# To install an older version
$ gem install bundler -v 1.15.2
Fetching: bundler-1.15.2.gem (100%)
...
1 gem installed

# To install currently supported 2.x version (currently 2.0.1)
$ gem install bundler -v 2.0.1
Fetching: bundler-2.0.1.gem (100%)
...
1 gem installed

# Check again:
$ gem list | grep bundler
bundler (2.0.1, 1.17.1, 1.15.2)

# Bundle with the latest installed version (now 2.0.1)
$ bundle install

# Try to bundle with an older version (may break if your Gemfile.lock was built with 2.x)
$ bundle _1.15.2_ install
Traceback...
Could not find 'bundler' (2.0.1) required by your Gemfile.lock (Gem::GemNotFoundException)

# Actually bundle with an older version
$ rm Gemfile.lock
$ bundle _1.15.2_ install

Deploying a Rails app to Heroku

Prompted by Ruby Rogues episode 403 (Overcast link) I finally deployed my very unfinished Parkrun tracking app to Heroku today.

The premise of the episode is that Rails needs an “Active Deployment” gem built in to ease deployment of new apps to a variety of different services.

Our apps at work run on AWS EC2. To spin up a new one we currently need to create a new VM in AWS, somehow bootstrap the SSH certificates for Ansible, run Ansible to turn it into a Rails server and then run Capistrano to deploy the application. It’s all documented in Confluence but I wouldn’t even know where to begin trying to create a similar setup for my own app(s).

The Heroku process was quite painless, their documentation is very thorough, but there were still a few sharp edges.

One thing that tripped me up for a while was getting Bundler versions in sync. I wrote up what I did on StackOverflow but I’ll cross post a lightly edited version on here for posterity.

Yesterday I read (listened to) The Fox by Frederick Forsyth (read by David Rintoul) 📚

It was an easy listen and a nice way to while away some hours. Thankfully, it didn’t distract by trying to be too technical with its description of firewalls and security 🙂

Model View Controller and Rails Apps

Model View Controller (MVC) is a design pattern in which an application’s code is divided by responsibility.

The “Model” refers to the underlying objects and code that represent (model) the business processes and logic. This includes the actual business objects themselves, stored data, schemas (including relationships, constraints/validations and indices) and operations specific to manipulating and storing the data. In a Rails app this usually consists of the model objects (each of which inherits from ActiveRecord::Base), the schema and the migrations.

The “View” refers to the parts of the application a user sees and interacts with. Pages, forms, tables, visualisations, feeds and summaries all form part of the view layer. In a Rails app the user usually interacts with HTML pages (with supporting CSS and JS) but other “views” of the application could include JSON/XML feeds or APIs. In Rails, these various views are often delivered using ERB templates & partials and the supporting assets are delivered using Sprockets and Webpack/Webpacker.

The “Controller” is responsible for mediating between the Model & the View and between the user & the application as a whole. In a Rails app the former is accomplished with “controllers”, which inherit from Action Controller, while the latter is handled by the routes file and routing system. Additional responsibilities usually handled at the Controller layer include caching and session management.

Some of the benefits of separating these responsibilities include the ability to test and refactor different parts in isolation, the option to reuse objects or even use the same Model (backend) with different Views (e.g. a web app, a RESTful API and an iOS app) and better designed interfaces which make for easier collaboration within (or between) teams.
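A framework-free sketch of the same split (all names here are hypothetical; a real Rails app would use ActiveRecord, ERB templates and Action Controller for these roles):

```ruby
# Model: holds the data and the business rules
class Article
  attr_reader :title, :body

  def initialize(title, body)
    @title = title
    @body = body
  end

  def valid?
    !title.to_s.empty?
  end
end

# View: presentation only, knows nothing about storage or routing
module ArticleView
  def self.render(article)
    "<h1>#{article.title}</h1><p>#{article.body}</p>"
  end
end

# Controller: mediates between the user's request and the Model/View
class ArticlesController
  def show(params)
    article = Article.new(params[:title], params[:body])
    article.valid? ? ArticleView.render(article) : "404 Not Found"
  end
end

html = ArticlesController.new.show(title: "Hello", body: "World")
# html => "<h1>Hello</h1><p>World</p>"
```

Because each layer only talks to the others through a narrow interface, you could swap ArticleView for a JSON renderer (or test Article on its own) without touching the rest.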

I feel confident with Git but I keep learning new things. Today:

  • using git add --patch to choose individual hunks (or smaller). Now I know what Atom is doing behind the scenes when I use the GUI
  • using :/string to specify a commit by its message rather than its hexadecimal identifier
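A quick demo of the :/string syntax in a throwaway repository (the commit messages are made up; git add --patch is interactive so it isn't shown here):

```shell
set -e
# Create a throwaway repo with two commits
dir=$(mktemp -d)
cd "$dir"
git init -q .
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "Fix flaky test"
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "Add feature"

# :/string resolves to the youngest commit whose message matches the string
git log -1 --format=%s ':/Fix flaky test'   # prints "Fix flaky test"
```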

I was surprised to learn recently that web browsers also cache DNS lookups (in addition to the OS network stack doing so).

It’s usually only for a short time (up to about 70 seconds according to that article) but something to be aware of when troubleshooting.

Shortening the Feedback Loop - Automatic PDF Refresh on Source Change

I’ve been exploring ways to generate nicely formatted PDFs from a Ruby on Rails app (without trying to convert HTML to PDF). As part of that exploration I’ve been looking into Prawn, a fast and powerful PDF generator written in Ruby.

I find one of the most important parts of learning a new technology is to shorten the feedback loop between making a change and seeing the result of that change. When using Prawn out of the box the feedback loop looks a little bit like this:

  1. Write some Ruby code in my text editor, e.g. in hello.rb: require 'prawn'; Prawn::Document.generate("hello.pdf") { text "Hello World!!" }
  2. Switch to Terminal to run the code and compile the PDF
  3. Open the PDF (or switch to Preview to view the changes)
  4. Switch back to my text editor

Rinse and repeat.

That’s not terrible, particularly since the Preview app on macOS automatically refreshes the document when you bring it to the front, but all that keyboarding/mousing/track-padding adds up when you’re trying to learn a technology and making lots of little changes. In addition to being a little slow, most of those steps never change. In fact, steps 2 through 4 are identical every time. This process is an ideal candidate for automation, for making steps 2-4 happen automatically every time I complete step 1. Here’s how I did it (with caveats).


I started with a very basic Prawn script (based on the README) for testing:

# hello.rb
require 'prawn'

script_name_sans_extension = File.basename(__FILE__, '.rb')
Prawn::Document.generate("#{script_name_sans_extension}.pdf") do
  text "Hello World!!"
end

The first step is creating a trigger that detects when the Ruby script(s) are saved. For this I chose Guard, a command line tool for responding to file system modifications. Guard is very handy for Test Driven Development, you can set it up to run tests automatically when your code changes. That’s pretty much exactly what I want to do here!

Since I already have Ruby and Bundler configured on my machine this step was as simple as:

  1. Adding a Gemfile to my Prawn script folder:

    # Gemfile
    source 'https://rubygems.org'
    
    gem 'prawn'
    
    group :development do
      gem 'guard'
      gem 'guard-shell'
    end
    
  2. Installing the gems by running bundle (or bundle install)

  3. Creating a basic guard file template with bundle exec guard init shell

  4. Tweaking the Guardfile to execute any ruby scripts whenever they change

    # Guardfile
    guard :shell do
      watch(/(.*)\.rb/) do |m|
        `ruby #{m[0]}`
      end
    end
    
  5. Leaving guard running while I work on the scripts with bundle exec guard

Now whenever I save the Ruby script, Guard detects the change and immediately compiles it (the `ruby #{m[0]}` line in the script above). So that’s step 2 taken care of, now for steps 3 & 4…


Step 3 is easy on its own. You can just add a simple `open "#{m[1]}.pdf"` inside the watch block. Then, every time you save, a few moments later Preview will be brought to the front and the PDF reloaded. If you’re working on a single screen, you might want to stop the script there. Your workflow will now be:

  1. Save [,compile and view]
  2. Switch back to text editor to make more changes

Rinse and repeat.


If you’re working with multiple screens (or have one very large screen) there is a way to mostly automate step 4 as well. The main problem is we need to take note of the current frontmost application, open Preview, and then reopen the previous application.

To dynamically take note of the current application and switch back to it requires AppleScript. AppleScript can be run from Ruby by piping it to osascript. We’ll also need to remove the open call we made for Step 3. Our Guardfile then becomes:

    # Guardfile
    guard :shell do
      watch(/(.*)\.rb/) do |m|
        `ruby #{m[0]}`
        script = <<~APPLESCRIPT
          tell application "System Events"
            set frontApp to name of first application process whose frontmost is true
          end tell
          activate application "Preview"
          activate application frontApp
        APPLESCRIPT
        IO.popen('osascript', 'w') { |io| io.write(script) }
      end
    end

If you’re happy to hardcode your editor, you can skip all that malarkey and just add one more open line after the one we added for step 3: `open -a "/Applications/TextEdit.app"` (replacing TextEdit.app with your editor of choice). Your Guardfile will then look like:

    # Guardfile
    guard :shell do
      watch(/(.*)\.rb/) do |m|
        `ruby #{m[0]}`
        `open "#{m[1]}.pdf"`
        `open -a "/Applications/TextEdit.app"`
      end
    end

I said “mostly” above because there is a downside (or two) with both these options. The main downside is that bringing your text editor back to the front brings all its windows to the front too. You’ll have to arrange it so that the Preview window isn’t being covered by another text editor window when the app is brought back to the foreground (hence the need for plenty of screen real estate).

Another thing I noticed is that sometimes (but not always) a different text editor window would become uppermost when the focus came back. I can’t be sure but it seemed to happen more often when the editor window I was using wasn’t on the “main” screen. Moving it over to my main display seemed to fix the issue.

Another option would be to either close all your other text editor windows or muck around with trying to specify which window to bring to the foreground. I decided not to spend any more time on it since it was working well enough for me. If you want to try it out, take a look at the accessibility methods to figure out which window is frontmost. According to one StackOverflow comment it’s something along the lines of tell (1st window whose value of attribute "AXMain" is true)


Hopefully this proves interesting to someone. Even if you don’t care about Prawn you can adapt this technique to any format that requires a compilation step. Markdown and Latex spring to mind. You could even trigger a complete publishing workflow!

First their video lessons, now their books… Thoughtbot are giving away all their knowledge for free!

At this stage they’re still charging for their wisdom though!

Every time I hear Tim Riley speak it all makes so much sense…

But then when I think about starting down the dry-rb path I get confused where to start and anxious about becoming a beginner all over again (even though I know it’s all Just Ruby™)

The more I hear about Facebook’s product, spying & corporate culture the gladder I am to have ditched it years ago.

I recently deleted my Instagram account & now I’m trying to figure out how to eliminate WhatsApp… I may just have to miss out on some group messages.

I ran my monthly 10K Sydney Striders race this morning at North Head… beautiful spot and I went a full minute quicker than the same event last year 🎉

Online Payment Redirects - Proof of Concept

As I mentioned yesterday, I recently needed to make an API call as part of the request-response cycle in order to fetch a temporary token to embed in a redirect URL. Having satisfied myself in the console that I could get the token I needed, it was time to turn this into a Proof of Concept.


I’ve recently been enjoying applying the strict red-green-refactor cycle of TDD to code. I first saw it done this way in an Upcase tutorial (which are all now free!!! 🎉) and I’ve come to appreciate the approach of fixing one error at a time until the test passes. It breaks tasks down into stupidly simple chunks and I find it helps me zero in on errors much faster, after all there’s usually only one place to look. I used a similar approach here, just without the formal tests to verify it as this was just a proof of concept. I’d perform the “test” manually (visit the URL), write a line of code, refresh the browser, add another line of code, etc.

First off, I visited the page I wanted to see and got a 404 error, as expected. To fix that error I added the route (get 'payments/:code' => 'bank_payments#show') and tried again. No more 404 error (excellent), but I did get an “uninitialized constant BankPaymentsController” error, again as expected.


And here’s where the step by step approach paid off. To fix that error I created the app/controllers/bank_paymemts_controller.rb file with an otherwise empty class definition. But when I tested, I got the same error as before! I was actually expecting an error about the “show” action not being found. 🤔

Because I found the error straight away I was able to quickly figure out that I spelt the file name wrong (did you notice that?) and therefore Rails didn’t know where to find it. Past me would have written a whole bunch of other code in that file (and maybe even extracted some into another file or object) before testing it and seeing it blow up. There’s no way of knowing for sure but there’s a good chance I would have looked in all the wrong places for a while, trying to figure out what I’d done wrong. So thank you Thoughtbot/Upcase!


Anyway, I got back to fixing one error at a time until I had something that looked a fair bit like this:

# routes.rb
get 'payments/:code' => 'bank_payments#show'

# app/controllers/bank_payments_controller.rb
class BankPaymentsController < ApplicationController
  def show
    uri = URI('https://bank.example.com/foo/bar/TokenRequestServlet').to_s
    request = Typhoeus::Request.new(
      uri,
      method: :post,
      params: {
        username:        'customer',
        password:        'P@$$w0rD',
        supplierCode:    params[:code],
        connectionType:  'bank_product',
        product:         'bank_product',
        returnUrl:       'https://customer.example.org/',
        cancelUrl:       'https://customer.example.org/',
      },
    )
    result = request.run
    token = result.response_body.split('=').each_slice(2).to_h["token"]
    uri = URI("https://bank.example.com/PaymentServlet")
    uri.query = URI.encode_www_form({communityCode: 'customer', token: token})
    redirect_to uri.to_s
  end
end

To be clear, this is ugly, poorly structured code. I would not deploy this to a production app. But the “test” passes. When I load the page I now get redirected to the appropriate payments page. And I can change which page just by passing in the right supplier code in the URL, e.g. our_app.example.org/payments/MATTSRETIREMENTFUND.


The next step was to refactor the code, test edge cases, add error-handling, etc, etc… you know, the final 20% of the product that takes 80% of the time. But early in that process some new information came to light which made it clear that this feature was not going to be necessary.

That particular discovery reinforced another lesson! As I mentioned yesterday, I’m trying to get better at not letting perfect be the enemy of good, to first build something Good Enough™ and then later to refine and improve it, if that proves necessary. By building this Proof of Concept and showing it to someone else in the business early, we collectively went on a journey of discovery. The new information we surfaced would have been valuable for rebuilding this feature properly but it proved even more valuable by revealing to us that building the feature was unnecessary in the first place!

Past me probably would have spent two or three days trying to craft an elegant, perfectly architected, Practical Object Oriented Design exemplar. I never would have got to that standard of course, but I would have tried. As a result I would have been heavily invested in the sunk cost of my “solution” and probably would have either been crushed to discover the feature is no longer needed or I might have been blinded to the reality of the situation and tried to justify the need for what I’d already built.

Instead, I only spent a few hours wrestling with the documentation, spiking out a demo and bringing the business on the journey with me. It was uncomfortable in the moment (that code is really ugly to share publicly) but it has been valuable. Hopefully by writing this down it will solidify the lesson and make it a little bit easier to take a similar path next time.

Online Payment Redirects - Initial Experiment

Recently I came across the need, on our server, to fetch a secure token from another server run by a bank and then redirect the client to a payment URL containing that token as a parameter, all as part of the request cycle. This is quite straightforward in Ruby on Rails but I thought I’d summarise the thinking and discovery process I went through.


After adding my public IP to the whitelist with the bank, the first thing I did was to attempt to fetch the token from my console to see what sort of format it came back in, using Typhoeus because it’s already in use in our app. I started with a request that didn’t have all the mandatory parameters:

uri = URI('https://bank.example.com/foo/bar/TokenRequestServlet').to_s
request = Typhoeus::Request.new(
            uri,
            method: :post,
            params: {},
          )
result = request.run

I was disappointed, but not really surprised, to get a 200 OK response with a body that told me there was an error. SIGH If only there were meaningful HTTP Status Codes that could be used to communicate errors… At least the error message was helpful!

After providing the required params from the documentation I tried again. Again I got a 200 OK but this time the body of the response just contained token=zk_BxvIFDTifsevfc-W_QhAKCdd2zEFZxbDfpXtJ230. Success!


Turning the string into an array of the two parts I was after was easy with .split('=') but I figured I probably wanted a hash to verify the key and value and I wasn’t sure how to turn that array into a hash. Thankfully I have a couple of friends called Duck Duck Go and StackOverflow.

According to an accepted and much upvoted answer, simply calling .to_h should work but it didn’t work for me and the documentation linked to from that answer suggested that to_h needs to be called on an array of two item arrays. Thankfully, someone else had commented with the suggestion of a.each_slice(2).to_h which worked a treat. Time to move on, but not before upvoting the helpful comment… and adding my own to say that the accepted answer is (no longer) correct. As they say, duty calls!


Since this was just an experiment I threw caution to the wind and wrote this beautiful train wreck:
result.response_body.split('=').each_slice(2).to_h["token"]
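Unpacked step by step, that one-liner does something like this (using a hypothetical response body shaped like the bank's key=value reply):

```ruby
# Hypothetical response body in the bank's "key=value" format
body = "token=zk_BxvIFDTifsevfc-W_QhAKCdd2zEFZxbDfpXtJ230"

parts = body.split('=')       # ["token", "zk_BxvIF..."]
pairs = parts.each_slice(2)   # enumerator over [["token", "zk_BxvIF..."]]
token_hash = pairs.to_h       # {"token" => "zk_BxvIF..."}
token = token_hash["token"]
```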

Yes it’s ugly, but I’m trying to learn to fight my perfectionist tendencies and not let perfect be the enemy of good. Besides, that’s what the “refactor” step is for in the red-green-refactor cycle. More on that tomorrow!

I think Deep Learning (starting with Keras) might be my next holiday learning project…

How I Moved from GitHub Pages to Micro.blog

Yesterday I described why I moved from GitHub Pages to Micro.blog so today I wanted to cover how I moved. Moving my domain and static pages was trivial but migrating the content has proved harder than I thought it would be!

Domain and Static Pages

As I mentioned yesterday, my previous site was hosted by GitHub Pages which has a very similar architecture to Micro.blog.

The first thing I did was update my DNS entries (hosted by Cloudflare on the free plan) as per the succinct and clear instructions on the Micro.blog help site. In my case I set up both the A record and the CNAME record (and had to delete one of my existing A records as GitHub Pages requires two).
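For illustration, the resulting records look something like this (both the IP address and the CNAME target below are placeholders, not Micro.blog's real values; always copy the actual values from the Micro.blog help site):

```
@    A      203.0.113.10          ; apex record pointing at Micro.blog
www  CNAME  example.micro.blog.   ; www subdomain delegated via CNAME
```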

With that done I set my Domain name in Micro.blog as www.matt17r.com, with the www so Micro.blog would work with both matt17r.com and www.matt17r.com.

The last thing I did for this part of the migration was move my about page (I only have one static page). Within Micro.blog I first had to delete the standard about page (which takes content from the “About Me” box on your profile page) and create a new page called “About”. I then copied the Markdown content from matt17r/about.md in my git repository (less the YAML Front Matter) into the new page.

Previous Content

This bit was surprisingly tricky! I already have my posts in Jekyll format so I thought there’d be a simple Jekyll (or at least Markdown) importer. I searched around and even asked on the Indie Microblogging Slack and found out there’s nothing publicly available.

I moved one article by pasting the content into MarsEdit and editing the date before posting it… but even that proved difficult as I made a mistake with the date and there was no way to edit the (automatically generated) URL after I posted it. In the end I had to delete it and repost it more carefully.

I’m currently experimenting with writing a Jekyll plugin that uses a post_render hook to write every post to M.b using the Micropub API. If I’m successful I’ll post more here. In the meantime my old posts are all 404ing.

Why I Moved from GitHub Pages to Micro.blog

My first attempt at blogging regularly was on my Squarespace site while I was trying to “go indie” in 2016. If you’ve ever listened to a podcast you’ve probably already heard the benefits of Squarespace, but what drew me in was getting a high quality, mobile-optimised site without having to worry about servers, backups, downtime and deployments.

A few months later I was back in a real job and so when it came time to renew my indie “hire me” site I decided to cancel with Squarespace and find somewhere cheaper (preferably free) to host my writing.

In August 2017 I decided to migrate my posts from Squarespace to a static Jekyll site hosted on Github Pages. In addition to being free, I was really enjoying learning how to use Git more and more in my day job and I had grown to like GitHub. The fact my content was now portable (more portable than Squarespace anyway) and I could write in a real text editor instead of Squarespace’s web UI (not optimised for free form writing) was the icing on the cake.

There’s a lot to recommend about a Jekyll site on GitHub Pages including:

  • You can use your own domain (as I did)
  • You own your own content, you aren’t just contributing to the content mill of Facebook/Medium/LinkedIn/whoever
  • Your content is very portable, especially if you use Markdown for formatting (I was already using Markdown for nearly all of my Squarespace posts)
  • You have fine grained control over appearance, dates, comments, URLs etc
  • You can write in any text editor you like, including native Mac/iOS clients

That being said, several of those strengths have flip-sides that I discovered actually turned them into weaknesses for my wants and needs. For example, being able to use any text editor you like is nice but that flexibility also means you need to use a separate tool (git) for the actual publishing. And having fine grained control over appearance, dates, etc can be nice from time to time but the rest of the time I’d rather not have to manually name & move files and type out the Front Matter by hand.

They aren’t deal breakers but they were impediments to me writing regularly and so I’ve decided to move all my writing to Micro.blog. It feels like a really nice blend of what I liked about Squarespace and what I like about Github Pages:

  • Someone else takes care of all the plumbing… but at $5 (US) per month it’s a lot more affordable than Squarespace and I still have good control over the content (including mirroring it to Github if I want to)
  • I can use a variety of apps (I’m not limited to the Squarespace web UI) but, if I choose one that supports the MetaWeblog API (I’m writing this in MarsEdit), I don’t need to manually manage all the metadata or switch tools to publish

Tomorrow I plan to write about the process of moving my domain name and old posts from GitHub pages to Micro.blog (if I’ve figured it out by then :)).

Trying to Build a Blogging Habit

A classic is something everybody wants to have read, but no one wants to read.
Mark Twain

I love that quote because it’s funny, it’s true and it can be adapted to so many different situations. Right now, the situation I’m thinking of is blogging:

A blog is something everyone wants to have written, but no one wants to write.
Me

It doesn’t have quite the same punch, but you get the idea. I’ve started many blogs over the years but none of them really stuck. Except that makes it sound like it was the blogs’ fault. A more accurate way to say it would be that I’ve started many blogs over the years but I’ve never really stuck at any of them.

The closest I got was a patch in 2016 when I wrote sixteen(!) posts over the course of a month and a half. I was on fire!!! But then it took me a year and a half to write the next two posts. :(

There are many contributing factors to my frequent failings in this area but I think the main two are my fickleness, I frequently go through fads, and my lazy-perfectionism, a tendency to want to do things perfectly or else not at all. I’m not particularly pleased with either of those tendencies and so I am working to change them.

To combat my fickleness I am working to slowly but surely build up good habits, starting with small achievable things (like cleaning my teeth every night, not just when I feel like it) and slowly building on them. I’ve found tracking my activity on Apple Watch, and trying to maintain streaks, has led to some better physical habits over the past few years so now I’ve started tracking other streaks using the Streaks app.

So far my goals are:

  • brush my teeth every night (current streak 2, best streak 39)
  • brush my teeth every morning, minimum 5 days a week (current streak 47)
  • do at least five minutes of coding per day, minimum 6 days a week (current streak 2, best streak 25)

I’m also working on being less of a perfectionist, to not only give myself permission to miss a day or two here and there (see the minimums above), but also the permission to put things out there, even if they’re incomplete or embarrassingly bad.

So today I am officially adding another item to my list of regular tasks; to “Post Something to Blog”, at least 6 days a week… even if that’s just a short “Today I Learned” tidbit.

Listening to an old episode of All Things Git (hosted by two Microsoft employees).

Fascinating moment when the guest points out the future of Github is uncertain as they’ll need to either change biz model or be acquired.

1 month later MS announced they’d be acquiring Github!