I was surprised to learn recently that web browsers also cache DNS lookups (in addition to the OS network stack doing so).
It’s usually only for a short time (up to about 70 seconds according to that article) but something to be aware of when troubleshooting.
I’ve been exploring ways to generate nicely formatted PDFs from a Ruby on Rails app (without trying to convert HTML to PDF). As part of that exploration I’ve been looking into Prawn, a fast and powerful PDF generator written in Ruby.
I find one of the most important parts of learning a new technology is to shorten the feedback loop between making a change and seeing the result of that change. When using Prawn out of the box the feedback loop looks a little bit like this:
require 'prawn'; Prawn::Document.generate("hello.pdf") { text "Hello World!!" }
Rinse and repeat.
That’s not terrible, particularly since the Preview app on macOS automatically refreshes the document when you bring it to the front, but all that keyboarding/mousing/track-padding adds up when you’re trying to learn a technology and making lots of little changes. In addition to being a little slow, most of those steps never change. In fact, steps 2 through 4 are identical every time. This process is an ideal candidate for automation, for making steps 2-4 happen automatically every time I complete step 1. Here’s how I did it (with caveats).
I started with a very basic Prawn script (based on the README) for testing:
# hello.rb
require 'prawn'

script_name_sans_extension = File.basename(__FILE__, '.rb')

Prawn::Document.generate("#{script_name_sans_extension}.pdf") do
  text "Hello World!!"
end
The first step is creating a trigger that detects when the Ruby script(s) are saved. For this I chose Guard, a command-line tool for responding to file system modifications. Guard is very handy for Test Driven Development: you can set it up to run tests automatically when your code changes. That’s pretty much exactly what I want to do here!
Since I already have Ruby and Bundler configured on my machine this step was as simple as:
Adding a Gemfile to my Prawn script folder:
# Gemfile
source 'https://rubygems.org'

gem 'prawn'

group :development do
  gem 'guard'
  gem 'guard-shell'
end
Installing the gems by running `bundle` (or `bundle install`)
Creating a basic Guardfile template with `bundle exec guard init shell`
Tweaking the Guardfile to execute any ruby scripts whenever they change
# Guardfile
guard :shell do
  watch(/(.*)\.rb/) do |m|
    `ruby #{m[0]}`
  end
end
Leaving guard running while I work on the scripts with `bundle exec guard`
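The capture group in the watch pattern is what makes `m[0]` and `m[1]` available inside the block. A quick sketch of what they hold for a hypothetical file named hello.rb (with the dot escaped for safety):

```ruby
# The Guardfile's watch pattern captures the script name without its extension.
m = "hello.rb".match(/(.*)\.rb/)

m[0]  # => "hello.rb" (the whole match — what gets passed to `ruby`)
m[1]  # => "hello" (the capture group — handy for naming the output PDF)
```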
Now whenever I save the Ruby script, Guard detects the change and immediately runs it (the `ruby #{m[0]}` line in the Guardfile above). So that’s step 2 taken care of; now for steps 3 & 4…
Step 3 is easy on its own. You can just add a simple `open "#{m[1]}.pdf"` inside the `watch` block. Then, every time you save, a few moments later Preview will be brought to the front and the PDF reloaded. If you’re working on a single screen, you might want to stop the script there. Your workflow will now be:
Rinse and repeat.
If you’re working with multiple screens (or have one very large screen) there is a way to mostly automate step 4 as well. The main problem is that we need to take note of the current frontmost application, open Preview, and then switch back to the previous application.
Dynamically taking note of the current application and switching back to it requires AppleScript. To call AppleScript from the shell, just use `osascript`. We’ll also need to remove the `open` call we made for step 3. Our Guardfile then becomes:
# Guardfile
guard :shell do
  watch(/(.*)\.rb/) do |m|
    `ruby #{m[0]}`
    `osascript <<EOF
tell application "System Events"
  set frontApp to name of first application process whose frontmost is true
end tell
activate application "Preview"
activate application frontApp
EOF`
  end
end
If you’re happy to hardcode your editor, you can skip all that malarkey and just add one more `open` line after the one we added for step 3: `open -a "/Applications/TextEdit.app"` (replacing TextEdit.app with your editor of choice). Your Guardfile will then look like:
# Guardfile
guard :shell do
  watch(/(.*)\.rb/) do |m|
    `ruby #{m[0]}`
    `open "#{m[1]}.pdf"`
    `open -a "/Applications/TextEdit.app"`
  end
end
I said “mostly” above because there is a downside (or two) with both these options. The main downside is that bringing your text editor back to the front brings all its windows to the front too. You’ll have to arrange things so that the Preview window isn’t covered by another text editor window when the app is brought back to the foreground (hence the need for plenty of screen real estate).
Another thing I noticed is that sometimes (but not always) a different text editor window would become uppermost when the focus came back. I can’t be sure but it seemed to happen more often when the editor window I was using wasn’t on the “main” screen. Moving it over to my main display seemed to fix the issue.
Another option would be to either close all your other text editor windows or muck around with trying to specify which window to bring to the foreground. I decided not to spend any more time on it since it was working well enough for me. If you want to try it out, take a look at the accessibility methods to figure out which window is frontmost. According to one StackOverflow comment it’s something along the lines of tell (1st window whose value of attribute "AXMain" is true)
…
Hopefully this proves interesting to someone. Even if you don’t care about Prawn you can adapt this technique to any format that requires a compilation step. Markdown and LaTeX spring to mind. You could even trigger a complete publishing workflow!
First their video lessons, now their books… Thoughtbot are giving away all their knowledge for free!
At this stage they’re still charging for their wisdom though!
Every time I hear Tim Riley speak it all makes so much sense…
But then when I think about starting down the dry-rb path I get confused where to start and anxious about becoming a beginner all over again (even though I know it’s all Just Ruby™)
The more I hear about Facebook’s product, spying & corporate culture the gladder I am to have ditched it years ago.
I recently deleted my Instagram account & now I’m trying to figure out how to eliminate WhatsApp… I may just have to miss out on some group messages.
I ran my monthly 10K Sydney Striders race this morning at North Head… beautiful spot and I went a full minute quicker than the same event last year 🎉
As I mentioned yesterday, I recently needed to make an API call as part of the request-response cycle in order to fetch a temporary token to embed in a redirect URL. Having satisfied myself in the console that I could get the token I needed, it was time to turn this into a Proof of Concept.
I’ve recently been enjoying applying the strict red-green-refactor cycle of TDD to code. I first saw it done this way in an Upcase tutorial (which are all now free!!! 🎉) and I’ve come to appreciate the approach of fixing one error at a time until the test passes. It breaks tasks down into stupidly simple chunks and I find it helps me zero in on errors much faster, after all there’s usually only one place to look. I used a similar approach here, just without the formal tests to verify it as this was just a proof of concept. I’d perform the “test” manually (visit the URL), write a line of code, refresh the browser, add another line of code, etc.
First off, I visited the page I wanted to see and got a 404 error, as expected. To fix that error I added the route (`get 'payments/:code' => 'bank_payments#show'`) and tried again. No more 404 error (excellent), but I did get an “uninitialized constant BankPaymentsController” error, again as expected.
And here’s where the step by step approach paid off. To fix that error I created the `app/controllers/bank_paymemts_controller.rb` file with an otherwise empty class definition. But when I tested, I got the same error as before! I was actually expecting an error about the “show” action not being found. 🤔
Because I found the error straight away I was able to quickly figure out that I spelt the file name wrong (did you notice that?) and therefore Rails didn’t know where to find it. Past me would have written a whole bunch of other code in that file (and maybe even extracted some into another file or object) before testing it and seeing it blow up. There’s no way of knowing for sure but there’s a good chance I would have looked in all the wrong places for a while, trying to figure out what I’d done wrong. So thank you Thoughtbot/Upcase!
Anyway, I got back to fixing one error at a time until I had something that looked a fair bit like this:
# routes.rb
get 'payments/:code' => 'bank_payments#show'

# app/controllers/bank_payments_controller.rb
class BankPaymentsController < ApplicationController
  def show
    uri = URI('https://bank.example.com/foo/bar/TokenRequestServlet').to_s
    request = Typhoeus::Request.new(
      uri,
      method: :post,
      params: {
        username: 'customer',
        password: 'P@$$w0rD',
        supplierCode: params[:code],
        connectionType: 'bank_product',
        product: 'bank_product',
        returnUrl: 'https://customer.example.org/',
        cancelUrl: 'https://customer.example.org/',
      },
    )
    result = request.run
    token = result.response_body.split('=').each_slice(2).to_h["token"]

    uri = URI("https://bank.example.com/PaymentServlet")
    uri.query = URI.encode_www_form({communityCode: 'customer', token: token})
    redirect_to uri.to_s
  end
end
To be clear, this is ugly, poorly structured code. I would not deploy this to a production app. But the “test” passes. When I load the page I now get redirected to the appropriate payments page. And I can change which page just by passing in the right supplier code in the URL, e.g. our_app.example.org/payments/MATTSRETIREMENTFUND.
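As a sanity check, the redirect URL built at the end of that action can be reproduced in isolation with Ruby’s standard `uri` library (the token value here is made up):

```ruby
require 'uri'

# Build the payment redirect URL the same way the controller does,
# using a hypothetical token value.
uri = URI("https://bank.example.com/PaymentServlet")
uri.query = URI.encode_www_form(communityCode: 'customer', token: 'abc123')

uri.to_s
# => "https://bank.example.com/PaymentServlet?communityCode=customer&token=abc123"
```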
The next step was to refactor the code, test edge cases, add error-handling, etc, etc… you know, the final 20% of the product that takes 80% of the time. But early in that process some new information came to light which made it clear that this feature was not going to be necessary.
That particular discovery reinforced another lesson! As I mentioned yesterday, I’m trying to get better at not letting perfect be the enemy of good, to first build something Good Enough™ and then later to refine and improve it, if that proves necessary. By building this Proof of Concept and showing it to someone else in the business early, we collectively went on a journey of discovery. The new information we surfaced would have been valuable for rebuilding this feature properly but it proved even more valuable by revealing to us that building the feature was unnecessary in the first place!
Past me probably would have spent two or three days trying to craft an elegant, perfectly architected, Practical Object Oriented Design exemplar. I never would have got to that standard of course, but I would have tried. As a result I would have been heavily invested in the sunk cost of my “solution” and probably would have either been crushed to discover the feature was no longer needed, or been blinded to the reality of the situation and tried to justify the need for what I’d already built.
Instead, I only spent a few hours wrestling with the documentation, spiking out a demo and bringing the business on the journey with me. It was uncomfortable in the moment (that code is really ugly to share publicly) but it has been valuable. Hopefully by writing this down it will solidify the lesson and make it a little bit easier to take a similar path next time.
Recently I came across the need, on our server, to fetch a secure token from another server run by a bank and then redirect the client to a payment URL containing that token as a parameter, all as part of the request cycle. This is quite straightforward in Ruby on Rails but I thought I’d summarise the thinking and discovery process I went through.
After adding my public IP to the whitelist with the bank, the first thing I did was to attempt to fetch the token from my console to see what sort of format it came back in, using Typhoeus because it’s already in use in our app. I started with a request that didn’t have all the mandatory parameters:
uri = URI('https://bank.example.com/foo/bar/TokenRequestServlet').to_s
request = Typhoeus::Request.new(
  uri,
  method: :post,
  params: {},
)
result = request.run
I was disappointed, but not really surprised, to get a `200 OK` response with a body that told me there was an error. SIGH If only there were meaningful HTTP Status Codes that could be used to communicate errors… At least the error message was helpful!
After providing the required params from the documentation I tried again. Again I got a `200 OK`, but this time the body of the response just contained `token=zk_BxvIFDTifsevfc-W_QhAKCdd2zEFZxbDfpXtJ230`. Success!
Turning the string into an array of the two parts I was after was easy with `.split('=')`, but I figured I probably wanted a hash to verify the key and value, and I wasn’t sure how to turn that array into a hash. Thankfully I have a couple of friends called ~~Google~~ Duck Duck Go and StackOverflow.
According to an accepted and much-upvoted answer, simply calling `.to_h` should work, but it didn’t work for me, and the documentation linked from that answer suggested that `to_h` needs to be called on an array of two-item arrays. Thankfully, someone else had commented with the suggestion of `a.each_slice(2).to_h`, which worked a treat. Time to move on, but not before upvoting the helpful comment… and adding my own to say that the accepted answer is (no longer) correct. As they say, duty calls!
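To make the two steps concrete, here’s what each stage returns for a hypothetical response body:

```ruby
body = "token=abc123"  # shaped like the bank's response, with a made-up value

body.split('=')                      # => ["token", "abc123"]
body.split('=').each_slice(2).to_h   # => {"token"=>"abc123"}
```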
Since this was just an experiment I threw caution to the wind and wrote this beautiful train wreck:
result.response_body.split('=').each_slice(2).to_h["token"]
Yes it’s ugly, but I’m trying to learn to fight my perfectionist tendencies and not let perfect be the enemy of good. Besides, that’s what the “refactor” step is for in the red-green-refactor cycle. More on that tomorrow!
I think Deep Learning (starting with Keras) might be my next holiday learning project…
Yesterday I described why I moved from Github Pages to Micro.blog so today I wanted to cover how I moved. Moving my domain and static pages was trivial but migrating the content has proved harder than I thought it would be!
As I mentioned yesterday, my previous site was hosted by GitHub Pages which has a very similar architecture to Micro.blog.
The first thing I did was update my DNS entries (hosted by Cloudflare on the free plan) as per the succinct and clear instructions on the Micro.blog help site. In my case I set up both the A record and the CNAME record (and had to delete one of my existing A records as GitHub Pages requires two).
With that done I set my Domain name in Micro.blog as `www.matt17r.com`, with the `www`, so Micro.blog would work with both matt17r.com and www.matt17r.com.
The last thing I did for this part of the migration was move my about page (I only have one static page). Within Micro.blog I first had to delete the standard `about` page (which takes content from the “About Me” box on your profile page) and create a new page called “About”. I then copied the Markdown content from `matt17r/about.md` in my git repository (less the YAML Front Matter) into the new page.
This bit was surprisingly tricky! I already have my posts in Jekyll format so I thought there’d be a simple Jekyll (or at least Markdown) importer. I searched around and even asked on the Indie Microblogging Slack and found out there’s nothing publicly available.
I moved one article by pasting the content into MarsEdit and editing the date before posting it… but even that proved difficult, as I made a mistake with the date and there was no way to edit the (automatically generated) URL after I posted it. In the end I had to delete it and repost it more carefully.
I’m currently experimenting with writing a Jekyll plugin with a `post_render` hook that could maybe write every post to M.b using the Micropub API. If I’m successful I’ll post more here. In the meantime my old posts are all 404ing.
My first attempt at blogging regularly was on my Squarespace site while I was trying to “go indie” in 2016. If you’ve ever heard a podcast episode you probably already know the benefits of Squarespace but what drew me in was getting a high quality, mobile-optimised site without having to worry about servers, backups, downtime and deployments.
A few months later I was back in a real job and so when it came time to renew my indie “hire me” site I decided to cancel with Squarespace and find somewhere cheaper (preferably free) to host my writing.
In August 2017 I decided to migrate my posts from Squarespace to a static Jekyll site hosted on Github Pages. In addition to being free, I was really enjoying learning how to use Git more and more in my day job and I had grown to like GitHub. The fact my content was now portable (more portable than Squarespace anyway) and I could write in a real text editor instead of Squarespace’s web UI (not optimised for free form writing) was the icing on the cake.
There’s a lot to recommend about a Jekyll site on GitHub Pages including:
That being said, several of those strengths have flip-sides that I discovered actually turned them into weaknesses for my wants and needs. For example, being able to use any text editor you like is nice, but that flexibility also means you need to use a separate tool (`git`) for the actual publishing. And having fine-grained control over appearance, dates, etc. can be nice from time to time, but the rest of the time I’d rather not have to manually name & move files and type out the Front Matter by hand.
They aren’t deal breakers but they were impediments to me writing regularly and so I’ve decided to move all my writing to Micro.blog. It feels like a really nice blend of what I liked about Squarespace and what I like about Github Pages:
Tomorrow I plan to write about the process of moving my domain name and old posts from GitHub pages to Micro.blog (if I’ve figured it out by then :)).
A classic is something everybody wants to have read, but no one wants to read.
– Mark Twain
I love that quote because it’s funny, it’s true and it can be adapted to so many different situations. Right now, the situation I’m thinking of is blogging:
A blog is something everyone wants to have written, but no one wants to write.
– Me
It doesn’t have quite the same punch, but you get the idea. I’ve started many blogs over the years but none of them really stuck. Except that makes it sound like it was the blogs’ fault. A more accurate way to say it would be that I’ve started many blogs over the years but I’ve never really stuck at any of them.
The closest I got was a patch in 2016 when I wrote sixteen(!) posts over the course of a month and a half. I was on fire!!! But then it took me a year and a half to write the next two posts. :(
There are many contributing factors to my frequent failings in this area but I think the main two are my fickleness, I frequently go through fads, and my lazy-perfectionism, a tendency to want to do things perfectly or else not at all. I’m not particularly pleased with either of those tendencies and so I am working to change them.
To combat my fickleness I am working to slowly but surely build up good habits, starting with small achievable things (like cleaning my teeth every night, not just when I feel like it) and slowly building on them. I’ve found tracking my activity on Apple Watch, and trying to maintain streaks, has led to some better physical habits over the past few years so now I’ve started tracking other streaks using the Streaks app.
So far my goals are:
I’m also working on being less of a perfectionist, to not only give myself permission to miss a day or two here and there (see the minimums above), but also the permission to put things out there, even if they’re incomplete or embarrassingly bad.
So today I am officially adding another item to my list of regular tasks; to “Post Something to Blog”, at least 6 days a week… even if that’s just a short “Today I Learned” tidbit.
Listening to an old episode of All Things Git (hosted by two Microsoft employees).
Fascinating moment when the guest points out the future of Github is uncertain as they’ll need to either change biz model or be acquired.
1 month later MS announced they’d be acquiring Github!
Enjoyed this article on the bigger picture considerations around Implementing Impersonation.
I particularly liked the idea of posting a notice in Slack. Makes the audit trail so much more visible.
Today I learned that `require: false` in a Ruby Gemfile turns off auto-require for that gem. Bundler still downloads the code but doesn’t automatically load it into your main app.
If you want to use it somewhere (e.g. in a rake task) you’ll need a `require` statement in that file.
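For example (the gem name and rake task path here are hypothetical):

```ruby
# Gemfile — Bundler installs the gem but skips loading it at boot
gem 'rubocop', require: false

# lib/tasks/lint.rake — load it explicitly where it's actually used
require 'rubocop'
```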
Idle thought: Could the Chaos Monkey/Resilience Engineering approach be applied to people & teams?
On a random day each fortnight nominate a random team member to take the day off (and not respond to calls or emails).
novice designers are best served by writing test-first code. Their lack of design skills may make this bafflingly difficult but if they persevere they will at least have testable code — Sandi Metz in POODR
I think I’m just starting to get past the bafflingly difficult stage!
Reason #1,562 that I love Ruby (and the frameworks it has fostered):
ActiveSupport::Duration has plurals of duration methods, so `1.hour` and `2.hours` both work.
Enjoyed hearing the emphasis on simplicity and speed of deployment on the latest Ruby Rogues episode.
I love using Heroku to speed up deployment and feedback but I’m also keen to learn more about where Dokku and Beanstalk (different to the AWS product!!!) might fit in.
I run up a new Rails app often enough that I have a certain way I like to do things, but infrequently enough that I normally have to spend a bit of time on Google/Stack Overflow remembering how to do it that way. This checklist is my attempt to remedy that.
If I’m starting a new project I want to be starting with the latest (usually stable) ruby.
brew upgrade rbenv ruby-build
rbenv install -l
rbenv install 2.5.3
rbenv global 2.5.3
I also want it to be on the latest (again, stable) Rails.
`gem list rails --remote -e --all`
(`-e` for exact match; `--all` shows all versions, not just the latest, and is optional)
`gem install rails -v 5.2.1`
Alternatively, `gem install rails` to get the latest, or `gem install rails -v '~> 5.0'` to get the latest 5.x.y version.
My philosophy (as a relative newbie) is to pretty much use Rails “The Rails Way” (the Basecamp way?). At this stage that means I don’t monkey with the default gems (with one exception), I just use the default test framework, javascript framework, etc. initially. I do change my dev database to Postgres however.
rails new --skip-bundle --database=postgresql <app-name>
bundle
rails haml:erb2haml
I use Github mainly due to inertia/convenience. I have tried Bitbucket, and like being able to hide my work in private repositories for free, but it’s probably good for me to develop “in the open” a bit more.
`git add .`
`git commit`
Then:
`git remote add origin git@github.com:<user>/<repo-name>.git`
`git push -u origin master`
Next steps should probably be along the lines of (not necessarily in this order):
These are some things I might want to think about enhancing next time I use this checklist (or review this process).
curl -H "Authorization: token 12d3fd45f6ac7c89012345678db901ac2a3456f7" 'https://api.github.com/user/repos' -d '{"name":"test-repo"}'
Most people know the basic keyboard shortcuts ⌘X, ⌘C and ⌘V for cut, copy and paste but, if you like to keep your hands on the keyboard as much as possible, an important related Mac keyboard shortcut to know is ⌘⌥⇧V (command-option-shift-v) for “Paste and Match Style” (or “Paste as Text”, which is how I think of it).
Under the hood, Paste and Match Style simply pastes in the text only version of whatever is on your clipboard, thereby stripping out fonts, colours, sizes and other rich text formatting.
This can be useful when pasting styled text into Pages or Mail (for example), when you just want the raw text, not all the styles.
Another time it comes in handy is when copying and pasting links. If you right click on the Micro.blog “Timeline” link above and choose “Copy Link” you’ll get a rich text link on your clipboard. If you try and paste it into the body of an email message you’ll get the word “Timeline” linked to the URL. If you just want the URL you can use ⌘⌥⇧V to “Paste and Match Style”, pasting just the unformatted URL.
Note: some apps use ⌘⇧V (no option key) instead.

Nearly finished the Upcase Intermediate Rails course. Last lesson is on search, and in addition to mentioning Elasticsearch (which I’ve heard of but not yet used) they discussed Solr (via Sunspot), which is completely new to me.
Much to learn!
I’m going through the (now free 🎉) Upcase course by Thoughtbot. In lesson 3 I just learned about `rails db:migrate:redo` to test rollback and migration in one go.
(On that particular exercise I still chose to run them separately so I could examine the DB in between though)
Listening to episode 10 of the Ruby Testing Podcast and Zach mentioned Page Object Model, a way to make integration tests more robust against minor interface changes.
Keen to investigate it more.
Update: A more Ruby focused link
In a REST API I was writing, I wanted certain unlikely failures affecting customers to get logged to BugSnag as warnings so we would know if there was a pattern of failures, even if customers didn’t follow the instructions on the page to let us know.
From what I could tell reading the docs, BugSnag wanted me to pass it an error (exception) of some kind but these failures weren’t raising exceptions, they were just returning appropriate 4xx HTTP error codes.
There’s probably a better way of doing it but my eventual solution involved creating, but not raising, an error:
e = StandardError.new("Verified successfully but couldn't fetch any records")
Bugsnag.notify(e)
By the way, I would normally use a more descriptive variable name but I think this is one of those rare exceptions (pun not intended) where the meaning is so obvious and the variable is so short-lived that it’s acceptable. A bit like using `i` and `j` as variables in a loop.
I tested this code from the production console to make sure it worked and that notifications came through to our internal chat app. What I noticed is that, perhaps because I didn’t `raise` the errors, the Bugsnags didn’t include helpful backtrace information like file names, line numbers or method names. The docs revealed a `set_backtrace` method and StackOverflow pointed me in the direction of [caller](https://ruby-doc.org/core-2.5.3/Kernel.html).
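A minimal sketch of that pattern (the messages are hypothetical), showing that an error you never raise has no backtrace until you attach one:

```ruby
# An exception object that is never raised carries no stack information.
plain = RuntimeError.new("never raised")
plain.backtrace  # => nil

def build_notifiable_error(message)
  e = RuntimeError.new(message)
  e.set_backtrace(caller)  # attach the caller's stack so Bugsnag has something to show
  e
end

err = build_notifiable_error("couldn't fetch any records")
err.backtrace  # an Array of "file:line:in `method'" strings
```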
Of course I found myself using this same code 4 times in the same file, an obvious candidate to split into a method. Of those 4 times, they were split evenly between warnings and errors so the method needed to allow for that. I also wanted to be able to add a tab to Bugsnag with arbitrary information. Skipping to the finished product, the end result was:
def notify_via_bugsnag(message:, severity: 'warning', additional_info: {})
  e = RuntimeError.new message
  e.set_backtrace caller(2)
  Bugsnag.notify(e) do |report|
    report.severity = severity
    report.add_tab(:appname_API, additional_info) if additional_info.present?
  end
end
The main thing to note is the addition of `(2)` to `caller`. Because I’m setting the backtrace from within a called method, I want it to start one frame higher in the stack.
I then consumed this method in the controller with code like this:
notify_via_bugsnag(message: 'Requestor was verified but student data wasn\'t saved',
                   severity: 'error',
                   additional_info: {
                     student_id: params[:id],
                   })

head :unprocessable_entity