How I stay organised using Vim

I spend all day in Vim, whether editing configuration, writing plans or using it as a full-blown IDE.

I use nvim-metals to get IDE features like showing variable types when editing Scala code.

Every morning I give a standup update, but throughout the day I keep all my notes in a text file. It’s just a text file and looks roughly like this:

Wherever I am in Vim, and whatever I’m doing, I can immediately open the file with `G. I’m using the feature called “marks”. In short, m will mark the current cursor position, so ma will remember the cursor position and associate it with the letter a. Typing `a will take you back to the line your mark was on. (The backtick character is under Escape on a UK keyboard.)

It’s an amazingly useful feature. I use it most when I’m half-way through writing some code and need to jump to the top of the file to add an import. I mark the cursor position, jump to the top and then return once I’m done. Lowercase letters are local to a file, so you can store 26 unique positions per file. Uppercase letters operate at the global level, which is how `G opens my notes file regardless of where I am, even if the notes file isn’t open yet. It just opens the file and puts me in.
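The whole marks workflow is only a handful of keystrokes. A quick reference (the local mark letter a is just an example, use whatever letter you like):

```vim
ma        " set local mark a at the cursor position
gg        " jump to the top of the file (e.g. to add an import)
`a        " jump back to the exact cursor position of mark a
'a        " jump back to the first character of mark a's line
mG        " set global mark G (I point this at my notes file)
`G        " from any file, jump straight to mark G
```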

If I need to write an essay on a particular topic, or the notes get long and ugly, the second trick I use is to create a specific text file for that thing and keep the filename in my notes file. In Vim, gf opens the file path under the cursor, so it takes me hardly any time to navigate through my collection of notes, drilling in and out when necessary.
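My notes file ends up looking something like this (the paths here are made-up examples); with the cursor anywhere on a path, gf drills straight into that file:

```
standup: finish the invoice bug, reply to support email

essay on IF statements:
notes/if-statements.txt

keycloak tutorial draft:
notes/keycloak.txt
```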

Vim’s Ctrl+o command moves me back to where I was when I’m finished with the notes, speeding up navigation. I also have this binding in my vimrc:

noremap <silent><leader>bd :MBEbd<CR>

When I type \bd the file I’m looking at is closed and vanishes. This is my trick for keeping my workspace clean and my tab list short and focused. It’s not uncommon for me to spend a minute every now and again closing unused files like this, just as you might periodically close those browser tabs you weren’t using.

Finally, the benefit of using a plain text file instead of some plugin solution is flexibility. It’s in my synced cloud files, so it’s backed up and can be opened in other programs. Sometimes I open it in Kate if I’m not in Vim at that particular moment.

Open In JIRA – 2021 Edition

If you’re a developer who uses Linux for work and has an appreciation for command line tools, it’s likely you spend all day in apps like IRC and Mutt, use Vim or Emacs, and run git commands. All of which are probably littered with references to your project management codes. My company uses Jira but yours may vary. Six or seven years ago I “hacked” KDE’s Konsole to add an “open in Jira” link, and it was like enriching dozens of programs at once: IRC, Vim and git all benefited from a single change. See the images for screenshots.

Konsole augments email
Konsole augments bash
Konsole augments IRC Chat
Konsole augments git log command

While checking out the latest dev version of Konsole on my way to reimplementing the feature, I spotted something new. The feature allows selected text to be used to open a URL:

KDE Konsole’s new search option

It has a simple configuration for building simple URLs.

The feature is called Web Shortcuts and appears in Konsole 20.12.2 (and possibly earlier).

Huge thanks to the KDE Konsole team for this awesome feature! As a Debian Stable user I have no idea how long this has been around, but I love it! I don’t know how much publicity Konsole gets but I wanted to share!

As a serious command line user, like many Linux users, I find Konsole’s 3 killer features are:

  1. This open in JIRA thing
  2. Infinite scrollback
  3. Search output

Tour of my working environment

User interface: Tiling window manager DWM

Resizeable, overlapping windows that you drag around the screen help people using a computer for the first time to recognise that opening a new window doesn’t lose the old one, and that the computer can do multiple things at once, and so can they.

Once you know that, however, moving and resizing windows is fiddly and unnecessary. DWM manages and resizes windows for me, meaning I use the mouse less, which makes me more efficient too. Essentially I have 9 workspaces. If I have one app per workspace then each app is full screen. I keep my web browser on Alt+1, my editor on Alt+2 and chat on Alt+7. If I open a second window I get a vertical split, which helps me do things side-by-side if I need to; if I open more, they stack up in the right-hand column, automatically sized, and I can still cycle between windows with Alt+j and Alt+k, much like a Windows user switches between windows with Alt+Tab.

The DWM development team has its quirks, in that DWM provides no separate configuration. You edit the source code, apply any patches directly to the source code and compile it into a binary yourself, keeping the size and complexity of the window manager small. The custom config I’ve applied (clients per tag, some colour changes and some keybindings) was first committed to a git repo I created in 2013 and still use to this day.

Operating System: Debian Stable

I don’t have much to say about Debian Stable. It’s stable, and the Debian team takes care to make sure things don’t break. Aptitude is straightforward and the documentation is good. Many people complain that Debian Stable isn’t very “up to date”, but whatever is released brand new today will be in Debian in two years. After five of these two-year upgrade cycles in my ten years of using Debian, there’s nothing new or looming that makes me feel like I’m missing out.

Vertical tabs instead of horizontal ones

One thing that is important to keep up to date on Debian is your web browser. I downloaded Firefox directly from Mozilla and unpacked the tarball into ~/opt in my home directory; because it lives outside of root’s permissions it can keep itself up to date and doesn’t affect the stability of my machine. I use the tree-tabs plugin since it provides cleaner visibility when you have 30+ tabs open at once.

To keep my machine clean in other ways, I use docker heavily to build the applications I develop. I haven’t installed perl, nodejs or mongodb onto my system (all of which I need for work). Instead I maintain a bunch of docker images that can be thrown away or reset whenever I fancy.


One thing that will no doubt stick out as odd or niche in my setup is my music player. Created as a little treat by the Spotify team in 2013, this Winamp clone connects to Spotify to play music and has a lot of nice functionality, including, yes, the visualisations you may remember. Winamp made them remove the “amp”, hence it’s now “Spotiamb”. It hasn’t been easy to maintain this application but I love it so much I persist. It’s only written for Windows and it’s a 32-bit application. To keep it alive I’ve held on to the original installer and Milkdrop plugin all these years. I tracked down a docker image designed to do X11 and audio forwarding, so I didn’t have to add 32-bit architecture support to my main desktop. I hope to share this in a blog post soon.

Terminal: Hacked KDE Konsole

I still use a patched copy of KDE Konsole with hardcoded regexes embedded inside it so it integrates with JIRA, the program that tracks the work I do, or should be doing. Read the full post for more information, but I feel like it’s a win for open source when end users can make their own little changes.

Editor, IDE: Vim / Neovim

Neovim with Scala-metals

I wrote Python and Perl professionally from around 2008 to 2018, and honestly vim is a fantastic editor. I used to believe in the slogan “Unix is my IDE”, often using Ctrl-Z to put vim into the background, returning to bash and using find, grep etc. to help me develop the code. For those lightweight languages that don’t benefit from large IDEs, vim became the editor I used all day. I had to adopt JetBrains IDEA when I learnt Scala, just because of the complexity of the type system. I had tried and failed to readopt vim for Scala work, and it didn’t really happen until scala-metals came out a year ago. To be honest I only really wanted accurate “go to definition”, “find references” and “show type”, and it gives me those. I use a separate install of neovim as an IDE to keep open all day, and the regular lightweight default vim for quick editing sessions everywhere else.

Physical Setup

Desk with Chair, light, background Bookcase for show
Work laptop on desk, Gaming desktop underneath
So much more keyboard space and comfort when a monitor stand is replaced with a mount. Get one!
  • Monitor: 24″ Asus 144Hz Gaming Monitor, 1080p
  • Programming Keyboard: Steel Series M400 UK – Blue Switches
  • CSGO Keyboard: HyperX Origins Alloy Core – Red Switches
  • Mouse: Roccat Kone Pure
  • Gaming Desktop: Core i7 6700K (4GHz) + Geforce 1660Ti
  • My desk arm gives me tons of space to push the keyboard around and be comfortable. I literally get the whole desk. See the before/after pictures. I can’t recommend it enough!

One thing I want to draw attention to (the pictures are too old to show it) is a USB-C hub costing £20. My mouse, keyboard and monitor are connected to the USB-C hub, and so is a power charger. It means I have just one cable to plug into my laptop, and it connects all the devices and even charges from the same cable. I can freely come and go between my desk and my living room with only a single cable to connect or remove. Super convenient.

16TB Synology Diskstation, Fibre Internet Terminator, 1Gbps Switch

Supporting Equipment

I use Nextcloud sync to my self-hosted Nextcloud instance to back up a dozen files and folders, although I don’t back up the operating system itself. My work laptop is a Dell Inspiron 13 7000 2-in-1 I bought in 2018. I’ve upgraded continually from Debian 9 without issues, and if I need to reinstall, so be it, but it’s likely I’d be moving to a new machine at the same time anyway since my environment seems to be very reliable. In my house I have a Synology Diskstation with 2x8TB disks (mirrored), also backed up using the Synology cloud backup service. That provides three copies of my important files.

How I got banned from Reddit’s r/Linux forum

I wake up one morning to find this drivel “stickied” to the top of r/Linux: a post from the moderators stating:

  • Readers should stop using social media (such as Reddit)
  • Your usage of social media spreads misinformation even if you don’t share misinformation

The headline alone is confusing to me. I’ve been a Reddit user for 14 years and always considered it a “news aggregator” rather than a social network. I don’t have any friends on the platform; it’s just a good place to see critiques of news stories. In fact, if anything, it’s THE place to make sure you don’t swallow misinformation.

I’m not on Facebook, Instagram or Twitter. I stopped using WhatsApp after Facebook bought it. It’s always interesting when someone on social media tells others to drop it, but why Reddit allows a moderator to hold such anti-Reddit views is a bigger question.

So the author wants to stop people using Reddit, stop people using social media, and wants to control misinformation. Why then suggest people move to decentralised channels, where misinformation is even harder to control?

In a moment of not caring I drop this insignificant comment into the chat:

Then I receive my ban

Pathetically authoritarian, if you ask me. Sure, you decide what’s appropriate for r/Linux; I just threw out an opinion. Suspecting this was a single mod acting in bad faith, I try to appeal my ban, but it seems to be handled by the same person.

When you ban people for no reason and act like this, you embarrass yourself and destroy the community. I think Reddit has a problem with this sort of moderator: people who don’t actually contribute to a community, have no investment in it, and rate their own contribution by the number of bans they hand out.

One week later they stickied a “read the rules” post with this gem:

Top violations of this rule are trolling, starting a flamewar, or not “Remembering the human” aka being hostile or incredibly impolite.

I’d classify calling someone “a waste of time” as incredibly impolite. I’m feeling very much like the forgotten human at the end of this ordeal.

Easily Format JSON using KDE Kate (easy technique)

For a long time I’ve wanted to format confidential JSON so I could inspect it easily without pasting it into online sites. I’m not keen on installing huge tools for such a simple action, and as a Debian Stable user, finding KDE documentation that works for me isn’t always simple, so this method is very low complexity.

I had tried copying scripts and custom commands, but for some reason that hasn’t worked with my version of KDE, so I present an alternative method that works:


  • Install the lightweight command line jq program
  • Enable the Text Filter app that already ships with Kate in Debian

First, check you have the plugin available and enable it if it isn’t already.

Enable the Text Filter plugin

Paste your JSON payload into Kate, select it all, and then go to Tools > Text Filter.

Remember to use SELECT ALL

Enter the command jq '.' into the dialogue and press enter. Voila, it’s formatted.
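If you want to sanity-check that jq is installed and behaving before wiring it into Kate, the same filter works from a terminal (the sample payload here is invented):

```shell
# pretty-print a JSON payload locally; nothing is pasted into online sites
echo '{"user":{"id":7,"name":"phillip"}}' | jq '.'
```

The '.' filter is jq’s identity filter: it outputs its input unchanged, just pretty-printed.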

The result

Phillip Taylor

Quickly configure a keycloak server for Single Sign On

I’m writing a tutorial on how to make Single Sign-On work with the Play Framework in Scala, and how to integrate the Silhouette authentication library with Keycloak (this isn’t published yet, though). One part of that tutorial is spinning up a Keycloak server you can run your app against. These are the minimal steps required to get something running:

  1. Spin up a keycloak server in a local docker instance
  2. Add a client app with a secret
  3. Add a test user with a username/password combo

1. Start and configure a local Keycloak server for testing.

The following command requires docker. It will start Keycloak locally, listening on port 8080 with the username and password admin/admin. It uses an in-memory database (H2), so be aware that changes are lost when the container is stopped. For this tutorial, however, that is acceptable.

docker run -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin -e DB_VENDOR=H2 -p 8080:8080 -p 9990:9990 jboss/keycloak

Once it’s running you should be able to navigate to http://localhost:8080 to see the main page (shown below).

Click on the “Administration Console” link and use the username admin and the password admin to log in. Below is a screenshot of the main screen, “The Master Realm”. It’s too much work for me to explain how Keycloak and realms work, except to say that one Keycloak instance can manage dozens of upstream and downstream auth providers and applications, and each realm is a segregation of users and permissions in some way. The master realm typically controls the auth for the Keycloak instance itself, and each app would have its own realm. For this tutorial we are going to breeze over this and do the minimum.

Click on “Clients” in the left-hand navigation menu. A client is an application that can use the realm. Think of Keycloak as a database of users: clients are programs that log in to query it. (So clients are accounts used to log in to the “user database”, whereas users are just data.) We’re going to write a Keycloak client app in the next tutorial, so we need to tell Keycloak who we are and how we’re going to connect. Click on Clients and then click Create in the top right corner of the table. See the image below.

Here I’ve entered the ClientId. Make a note of this! I’ve also set the Root URL for the project; for Play applications it’s port 9000 by default. Keycloak needs this because it checks “referers” [sic] and redirects users to and from our site. Therefore it needs to be http://localhost:9000 if that is where your app is running and you’re following my Scala guide. Once you save this new client you will be taken to a screen for configuring it (shown below):

When this page opens you will not have a “Credentials” tab, but you need one! Change the Access Type from public to confidential and hit Save; the Credentials tab will then appear. The credentials page is shown below:

On the credentials tab we can now see the secret. Make a note of this secret.

Now we have a working Keycloak server with a clientId and secret that let another program log in to this server. What we need next is an actual user account we can use for testing and logging in to the app we write ourselves. Let’s click on the “Users” tab (under Manage) and click “Add User” in the top right corner of the users table.

You can fill this form in however you want, I don’t really care, but make a note of the username and email! Save the user, then go to the Credentials tab shown below.

First turn off the Temporary option, then enter a password. I recommend “pwd”, but whatever you choose, make a note of it. Then click Reset Password and confirm the prompt when it opens.

Once you’ve done this, I highly recommend you also put an email address against the admin user; people who are both configuring the app and testing it at the same time can run into confusion, and doing this helps. For the Scala tutorial you might want to create a user called Sinclair who has the exact email “” in order to make the Scala example work right out of the box, but this isn’t that important.

Your system is now minimally functional for the Scala tutorial (which I haven’t written yet, so please wait). If you wander off and change other settings, remember that everything is lost when you stop the container! Now that we’ve finished setting up Keycloak, we need to take away some details from it to use in our application. So far we should have:

  • A client id (keycloak-seed)
  • A client secret (different for everyone, mine is 45cb055e-d93c-4a14-a4ce-43c2bc0c1414)
  • A user account with a username and password (mine are sinclair/pwd)

What we need are the special keycloak urls to connect to. Click on Realm under Configure to go back to the main page.

See the link next to Endpoints that says OpenID Endpoint Configuration? Click it and read the JSON (use a formatter to help you if your browser sucks).

We need and care about the following URLs that we’re going to use in our app:

authorization_endpoint: http://localhost:8080/auth/realms/master/protocol/openid-connect/auth
token_endpoint:         http://localhost:8080/auth/realms/master/protocol/openid-connect/token
userinfo_endpoint:      http://localhost:8080/auth/realms/master/protocol/openid-connect/userinfo
end_session_endpoint:   http://localhost:8080/auth/realms/master/protocol/openid-connect/logout
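In the app itself, these details typically end up in configuration together. A sketch of what I mean (the key names here are my own invention, not Silhouette’s actual settings):

```
keycloak {
  clientId     = "keycloak-seed"
  clientSecret = "45cb055e-d93c-4a14-a4ce-43c2bc0c1414"  # yours will differ

  authorizationEndpoint = "http://localhost:8080/auth/realms/master/protocol/openid-connect/auth"
  tokenEndpoint         = "http://localhost:8080/auth/realms/master/protocol/openid-connect/token"
  userinfoEndpoint      = "http://localhost:8080/auth/realms/master/protocol/openid-connect/userinfo"
  endSessionEndpoint    = "http://localhost:8080/auth/realms/master/protocol/openid-connect/logout"
}
```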

We now have a running server and the following information:

  • Authorization Endpoint
  • Token Endpoint
  • UserInfo Endpoint
  • End Session Endpoint
  • ClientId
  • ClientSecret

This is everything we’re going to need in our application, so now we’re ready to move to the Scala part of my tutorial (which is not available yet but will be published soon).


This post is essentially a write-up of a talk my friend Johan Lindstrom gave years and years ago, which in turn was built on ideas borrowed from other people. The advice is aimed at fairly novice programmers who rely heavily on the initial pieces of knowledge they pick up when starting out. I don’t see this advice shared a lot online despite it being common knowledge in some circles, so please forgive me if you think it’s overly simplified beginner stuff.

IF statements in programming are bad. Johan and I worked on a warehouse backend system, one that involved taking orders, reserving stock, doing stock checks and so on. At the time we had two warehouses, DC1 in England and DC2 in America, so code would often look like this (examples are transposed from Perl into Scala):

if (warehouse == DC1)

Our code was absolutely full of these bad boys. Hundreds upon hundreds of separate statements throughout an enormous legacy monstrosity, a code base that will celebrate its 20-year anniversary next year. A typical function looked like this:

def printInvoice(warehouse :String) = {
    val address = if (warehouse == "DC1") "England" else "America"
    val papersize = if (warehouse == "DC2") USLetter else A4
    val invoice = generateInvoice(address, papersize)
}
Of course, when we added a third warehouse nothing worked, and it took an enormous effort to isolate all the behaviour and fix it. Some of the changes were in little blocks that went together: an IF <something> that assumed a key existed in a map, or that a function had already been called. Adding the third DC didn’t result in a random blend of features, just unpredictable crashes and a world of pain.

The way == or != were used shaped how the default behaviour played out. Stringification and easy regexes in Perl also made it harder to track down where comparisons or warehouse-specific logic even resided.

warehouse.toLowercase == "dc1"    // lowercased alternatives

wh == "DC2"                       // alternative names

warehouse.matches("1")            // regexes are seamless in Perl so
                                  // they aren't noticeably odd

if (letterSizeIsUSLegal)          // warehouse derived from something
                                  // set earlier and not passed through

Perl doesn’t have the support of rich IDEs to help track references, and all these different programming styles that have grown over 20 years mean the process of finding these errors involves dozens of greps, lots of testing and a lot of code base inspection.

It didn’t take too long to realise that our IF statements should be based on small reusable features (i.e. modular reusable components) and not switch on a global “whole warehouse” value. This code would have been much easier to manage:

if (warehouseHasConveyorBelts)

if (shipmentRequiresInvoice) {
   val invoice = getInvoiceTemplateForCountry(

Ultimately, however, the problem extends past this modularity to the realisation that IF statements themselves are bad. Necessary in a few places, and possibly the simplest fundamental building block of all programs… but still bad. Let’s look at a comparison to find out why.

The history of goto

Many languages like C, C++, Java, VB and Perl support the GOTO keyword, a language construct that allows you to jump around a function by providing a label next to a statement. GOTO will jump to the named label. Here is an example:

#include <stdio.h>

int main(void) {
	int someNumber = 0;
	int stop = 0;
BEGIN:
	if (someNumber < 23)
		goto SKIP;
	printf("hello. app finished with someNumber = %d", someNumber);
	stop = 1;
SKIP:
	someNumber += 13;
	if (stop == 0)
		goto BEGIN;
	return 0;
}
The code is really difficult to read, since execution jumps around all over the place. You may have difficulty even following the simple example above, and tracking the state of the variables is really hard. Pretty much everyone agrees that GOTO statements are too low-level and difficult to use, and that IF, FOR/WHILE/DO loops and good use of function calls make GOTOs redundant and bad practice.

Foreach loops are so much more elegant than GOTO statements because it’s obvious that you’re visiting each element once. It really speaks to the intent of the programmer or algorithm. Do-while loops make it obvious the loop will always execute at least once. Scala supports .map, .filter, .headOption, .dropWhile and .foldLeft, which all perform very simple, well-defined operations that convey intent to other people reading in a way that GOTO simply can’t.
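A tiny, self-contained illustration of that intent (the numbers are arbitrary, not from the warehouse system):

```scala
// each step names exactly what it does; no labels, no jumps, no indices
val nums = List(1, 2, 3, 4, 5, 6)

val evens  = nums.filter(_ % 2 == 0)     // visit every element, keep the evens
val scaled = evens.map(_ * 10)           // transform each survivor
val total  = scaled.foldLeft(0)(_ + _)   // collapse to a single value

println(total) // 120
```

Compare that with expressing the same loop via GOTO and a counter: the reader would have to simulate the jumps in their head to discover the intent.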

So if a construct like GOTO is confusing, leads to spaghetti code, and can be replaced with more elegant alternatives, should we not prefer those alternatives? Of course! IF statements scatter your business logic into disjointed locations across your code base that are hard to track, follow and change. They make refactoring hard. IF statements are bad for the same reasons that GOTO statements are bad, and that’s why we should aim to use them as little as possible.

Switching it up

Here’s a collection of constructs that can be used instead of IF statements to keep your application more readable, and easier to follow and maintain.

Switch Statements

Not much of an improvement in most languages, but Scala’s can be. If your choices extend a sealed trait, Scala can warn you which match statements aren’t exhaustive. No DC3 slipping into DC2’s warehouse code paths!

sealed trait Warehouse
case object DC1 extends Warehouse
case object DC2 extends Warehouse
case object DC3 extends Warehouse

val myWarehouse :Warehouse = DC1

myWarehouse match {
   case DC1 => println("europe")
   case DC2 => println("america")
}

// scala reports: warning: match may not be exhaustive.
// It would fail on the following input: DC3

Map

A super common one, especially in Scala, is to map over an optional value, only doing something if it exists and doing nothing if it doesn’t. This is the functional equivalent of an “if null” check:

maybeInvoice.map { invoice => invoice.print() }

Map is far more generic than this: it applies a function to a value inside a monad and is commonly used to manipulate lists. Please don’t punish my brevity; it’s just an example for my own ends.


Inheritance

Inheritance allows you to override the behaviour of an existing object to do many specific things, so it’s absolutely perfect for reducing the use of IF.

trait Warehouse {
  def hasAutomation :Boolean
  def address :String
  def isInEurope :Boolean
}

class DC1 extends Warehouse {
  override def hasAutomation = true
  override def address = "England"
  override def isInEurope = true
}

class DC2 extends Warehouse {
  override def hasAutomation = false
  override def address = "America"
  override def isInEurope = false
}

class DC3 extends Warehouse {
  override def hasAutomation = false
  override def address = "Europe"
  override def isInEurope = true
}

// App is set up once (warehouseName would come from config).
val warehouse :Warehouse = if (warehouseName == "DC1") new DC1 else new DC2

// use in code
if (warehouse.hasAutomation && warehouse.isInEurope)


When it comes to adding DC3, we have an interface to extend, so we know exactly which methods we need to define in order to specify how a warehouse behaves. Our behaviour is vastly centralised. We also only have to extend the initial warehouse setup in one place, since we’ve brought everything together.

We can also go a step further and make the Warehouse class responsible for doing things. This removes even more IF statements!

object Printer { def print() = ??? }
object Browser { def handle() = ??? }
case class RoutingInstruction(destination :String)
val REDIRECT = 303
type Invoice = String

trait Warehouse {
  def packItem() :Either[String, Boolean]
  def generateInvoice() :List[Invoice]
  def maybeRouteItem() :Option[RoutingInstruction]
  def getNextWebpage() :Option[(Int, String)]
}

class DC1 extends Warehouse {
  override def packItem() = Right(true)
  override def generateInvoice() = List.empty // no invoice since we are in england
  override def maybeRouteItem() = Some(RoutingInstruction("PackingArea11")) // we have automation
  override def getNextWebpage() = Some((REDIRECT, "/confirmation/place-on-conveyor"))
}

val warehouse :Warehouse = new DC1

// look, no if statements yet lots of diverse functionality
// being used.

warehouse.packItem()
warehouse.generateInvoice().foreach { invoice => Printer.print() }
warehouse.getNextWebpage().foreach { page => Browser.handle() }

There are some variations on inheritance I won’t cover, such as mixins, traits and interfaces. They all follow the same theme, so I won’t list them individually. The code might be a little crap here because I’m trying to be slightly language-independent in my samples.

Function Pointer Tables

You can get cheap object orientation by keeping a Hash/Map of functions and passing around whole “collections of decisions” together.

def accessGranted() = println("granted!")
def accessDenied() = println("denied!")
val permission = "allowed"

// old, redundant.
if (permission == "allowed") accessGranted() else accessDenied()

// single place for logic.
val mapOfAnswers = Map(
    "allowed" -> accessGranted _,
    "denied" -> accessDenied _
)

val func = mapOfAnswers(permission) // no if here

func() // executes the function, which causes println to run

Partial Functions / Closures

Partially applying functions allows us to build new functions by composition, which can help select the appropriate logic without actually having to use IF statements.

def makeAddress(inEurope :Boolean)(country :String)(addressLines :String) =
    println(s"$addressLines\n$country\ninEurope: $inEurope")

val europeanFactory = makeAddress(true) _    // this variable's type
                                             // refers to a function
val britishFactory = europeanFactory("UK")


Closures are functions that reference variables outside of their direct scope. They allow you to do something like this:

def setTimeout(timeMs :Int, onTimeout :() => Unit) = ???

val myVariable = 66
def doingMyThing() = println(myVariable)

setTimeout(500, doingMyThing) // setTimeout doesn't have any logic
                              // but does the right thing

Lambdas are typically shorthand syntax for functions, so this general class of ideas can be used to encapsulate decision-making without callers having to use IF statements everywhere.

Dependency Injection

Dependency injection is generally a technique for removing global variables from an application, and to a certain degree it’s just an application of inheritance, but it’s perfect for dynamically changing the behaviour of code without repetitive IF statements.

// Old code with embedded IF statements

class FetchData {
   def fetchOrders() :List[Order] = {
      if (testMode == true)
        List(sampleOrder1, sampleOrder2)
      else if (DC == 1)
        // ... and so on, one branch per warehouse
   }
}

// New version simply trusts whatever is passed in.

class FetchData(httpLibrary :DCSpecificHttpLibrary, convertor :Option[Order => Order] = None) {

    def fetchOrders() :List[Order] = {
       val order = httpLibrary.httpGet() // was built knowing which DC { c => c(order) }.getOrElse(order)
    }
}

// testing code would make a fake httpLibrary and pass it in before the test. Real code would use the real one.


I’m going to stop listing alternatives now, but hopefully you’ll go away with some interesting thoughts on the subject, and possibly the idea that IF statements can sometimes be detrimental if overused.

Some of my examples are really poor, especially my inheritance one. I was going to model lots of subprocesses of a warehouse, like ScreenFlowChartManager and StockCheckManager, and make a warehouse point to them, but the code was getting too big for a simple example.

I would accept the criticism that some IF statements can’t be avoided, and that some of these alternatives only move the IF statement to another place in the code base; certainly dependency injection only moves things to application start-up. Still, armed with this knowledge, you can write applications that are easier to maintain, and move your variables and mutable state into places that make them easier to work with.

Devops often don’t understand logging

My job involves writing software: working on bug fixes, adding new features and generally making the software better. That could mean easier to use, so less training time for users. It could mean faster, so our users can do more of their other work. It could mean safer, so we cause less frustration and upset to the general public. This all fits into an end goal we call “delivering value”. Value is an incredibly loose term, not necessarily related to money, though commonly it is. It can also simply be called “improvements to the product”. It’s not a science, but we identify pain points and try to smooth them out.

Businesses should try to use data in all their decision making and move away from gut-based decisions, because the latter are significantly flawed. I can name dozens of examples from my own experience where assumptions wasted money and introduced avoidable technical debt and other complexities. As one example, at the place I currently work, someone moved all the Mongo database backups to a new Mongo replica instead of the master because the backups were supposedly slowing down the production applications. That turned out to be a waste of two months, since the backups never had an impact on application speed. In another example, the business asks for dozens of reports, each more meaningless than the last unless truly challenged. Maybe look at an actual report once and decide if it’s useful before I code it into the application and have to support it forever. It’s always best practice to use data to prove your beliefs, and numerous companies exist to help other companies understand their own data better. In short, we should use the data we have to identify and assign value to work when we prioritise it, instead of just guessing at what will improve the product.

Data warehousing is a very old discipline used by many companies: you collect ad-hoc and unprocessed data from across your business, then practise combining it in different ways to try to understand your customers and objectives in new ways. Personally, I see my application logs as a huge data warehousing effort. My boss and I will discuss a problem, such as how long it takes to do some task in the system, and we’ll start looking at our logs and our database. Maybe the “edits” a user makes to a page indicate how many mistakes other users are making. Perhaps comparing two URLs lets us see how long a mistake goes unnoticed for. Perhaps if we quantify this mistake rate, we can prove our work yields improvements by measuring how many fewer edits are made after the change; we can measure it before and after in order to demonstrate value. One thing we do in our department is count the number of emails to our support bucket and ask ourselves which changes will reduce that expensive and annoying correspondence the most. However, I don’t know what metric or check is going to be useful until I am looking at the JIRA tickets on my backlog. It could be the distance between log lines, it could be URLs, it could be times, or it could be a mixture. It’s incredibly situational.
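This kind of before-and-after measurement can be prototyped very cheaply. As a sketch (the log format, the field names and the “edit” action are all invented for illustration, not from a real system), a few lines of scripting can count edit actions per day from raw, unstructured log lines:

```python
import re
from collections import Counter

# Hypothetical raw log line, e.g.:
# "2021-03-04T10:15:00 INFO user=alice action=edit page=/orders/42"
EDIT_LINE = re.compile(r"^(\d{4}-\d{2}-\d{2})T\S+ .*\baction=edit\b")

def edits_per_day(log_lines):
    """Count how many 'edit' actions appear on each day."""
    counts = Counter()
    for line in log_lines:
        m = EDIT_LINE.match(line)
        if m:
            counts[m.group(1)] += 1  # group 1 is the date part
    return dict(counts)
```

Run it over the logs from before and after a release and you have a crude, but demonstrable, measure of whether the mistake rate actually dropped.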

Perhaps you think the work of attaching costs and value, or of parsing logs, is for the product owners or managers. I would argue it’s a shared responsibility across all levels, and that we should challenge work and enrich requests with real stats rather than blindly implementing meaningless change for a pay cheque.

In order to enrich JIRA tickets with provable estimates and data, I need access to an ad-hoc, dynamic tool where I can make meaning out of unstructured data with no upfront planning. I can do this over the logs with Splunk. Splunk lets me perform a free-form, regex-like search over my logs, then draw graphs from the results and derive averages, maximums, trends and deviations. However, if I have to define a fixed parsing pipeline to turn ad-hoc logs into structured JSON, or add triggers to my code for sysdig, that immediately means I cannot evaluate any historic data. It also means I have to do upfront, expensive development work just to find out whether another piece of work is worth doing. That is expensive in terms of time, effort and efficiency, especially since it’s not a science and could be meaningless. I need to be able to experiment very cheaply (i.e. with a regex or a SQL query), and writing data to sysdig manually is not cheap: it means waiting two weeks to find out the answer to my question, assuming two weeks’ data is even enough to make an informed decision. It’s better to have a tool that runs like dogshit but answers business questions on demand with no upfront planning than a tool that draws graphs from extracted data but requires forethought when configuring it.
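To illustrate the “cheap experiment” point, here is the sort of throwaway script I mean: it pairs a “create” log line with the first “edit” of the same record to estimate how long a mistake went unnoticed, after which you can take averages or maximums. The log format, the field names and the create/edit actions are hypothetical:

```python
import re
from datetime import datetime

# Hypothetical log line, e.g.:
# "2021-03-04T10:00:00 INFO id=7 action=create page=/orders"
LINE = re.compile(r"^(\S+) \S+ .*\bid=(\d+)\b.*\baction=(create|edit)\b")

def unnoticed_durations(log_lines):
    """Seconds between 'create' and the FIRST 'edit' for each record id."""
    created, durations = {}, {}
    for line in log_lines:
        m = LINE.match(line)
        if not m:
            continue  # unparseable lines are simply skipped
        ts = datetime.fromisoformat(m.group(1))
        rec, action = m.group(2), m.group(3)
        if action == "create":
            created[rec] = ts
        elif rec in created and rec not in durations:
            durations[rec] = (ts - created[rec]).total_seconds()
    return durations
```

No pipeline, no schema, no waiting two weeks: if the numbers look interesting you can justify the proper work, and if not you’ve lost ten minutes.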

People who think Kibana and logs are only useful for finding errors and should only keep data short-term, and people who think Kibana should only be fed parsed, structured JSON, are ignoring enormous amounts of useful information that would make them better developers. I hate to generalise, but at every company I go to I find that the DevOps members I run into tend to fall into the former group. Kibana and Splunk have similar-looking UIs, but since one opens up a world of business intelligence and the other doesn’t, that’s where the similarities end. I also advise you to keep logs forever, as you may later want to do “year-on-year” analysis of growth and the like.

The closed source Scala code problem

Java touts itself as the write-once, run-anywhere programming language. Unfortunately, Scala is not. It’s write once, but when you publish a library it must be compiled against a specific known major version of Scala, such as 2.11 or 2.12. The Scala version goes into the library’s artifact name (e.g. mylib_2.12). If you upgrade your applications from Scala 2.11 to 2.12, you will need to recompile your libraries against the matching version as well.

This page of the sbt documentation explains how you can build and publish a library for multiple versions of Scala, for instance 2.10, 2.11 and 2.12, in a single instruction. However, you can’t compile the library against future versions, which obviously do not exist yet.
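For reference, the build.sbt of a cross-built library looks roughly like this (the names and version numbers are placeholders, not from a real project):

```scala
// build.sbt -- a sketch; organisation, name and versions are placeholders.
// `crossScalaVersions` lists every Scala version the library is built for;
// running `sbt +publish` then compiles and publishes one artifact per
// version, e.g. mylib_2.10, mylib_2.11 and mylib_2.12.
name := "mylib"
organization := "com.example"
scalaVersion := "2.12.8"
crossScalaVersions := Seq("2.10.7", "2.11.12", "2.12.8")

// Consumers use %% so sbt appends the right suffix automatically:
// libraryDependencies += "com.example" %% "mylib" % "1.0.0"
```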

The underlying reason the library needs recompiling is to allow “breaking changes” to the bytecode between versions, so the Scala team can improve the compiler more aggressively with fewer concerns about emitting backwards-compatible bytecode. This makes a lot of sense for them, at the cost of a minor inconvenience on the user side, but it has a larger implication for the community.

I recently upgraded a Play application from Scala 2.11 to 2.12 and ran across a few dependencies that hadn’t been upgraded to 2.12, such as play2-auth and stackable-controller. Fortunately the code was open source and someone had been able to create a working 2.12 fork. Yay for open source! The compiled version wasn’t published anywhere, though, so I had to fork it again and publish it to my organisation’s internal Artifactory repository. This was an inconvenient pain (configuring the Drone pipeline and so on), but what concerns me more is that if this library had been closed source, the fix would not have been possible at all.

Our application would have been locked to Scala 2.11 until the library author felt like upgrading, or until we managed to rewrite the dependency out. For this reason, I strongly suggest you don’t make your application depend on closed-source libraries.

Job Security Y2K

I see a lot of folks advising young people that job security is important and that they should pick a career path or skill set that provides it. I consider this bad advice and will outline why below.

Job security, the likelihood of holding on to your job, is incredibly important, and especially so once you reach the age where you are responsible for others as well as yourself, and going home to your parents is no longer an option. However, it is not the end goal. The true security you want is financial security: money to live on, even if you’re unable to work. It’s an important distinction, and the terms are not interchangeable.

Whoever you work for, everyone is expendable and companies just do not give a fuck about you. They never will, and they are probably actively seeking to replace you behind your back; they have teams and projects designed to do exactly that. There is no such thing as job security. People do get made redundant from government jobs, regardless of what is claimed. That threat is always there.

Everyone in a company falls into two categories: back-office “workers”, cost centres to be reduced via offshoring or automation, and front-office staff, to be replaced by self-service websites. I’ve seen jobs like accountancy morph into “work pipelines” staffed by unskilled, minimum-wage people who escalate to a limited number of real accountants for actual issues. This “process-driven” approach takes the demand off expensive skilled employees and can be seen in every sector.

Nurses do the most work and escalate to doctors, who in turn escalate to consultants and specialists.

I know many people who have been made redundant from jobs and it can cause some incredibly difficult problems for them, especially if their job or skill or being a provider is what gave them their self-worth. Who doesn’t define themselves by their work just a little?

That’s why I say never tie yourself to a single employer like the government (teacher, NHS, admin etc.): despite the claims of unions, they can sack you, and you’ve nowhere to take your skills when they do. Can you really dodge being a political scapegoat for 40 years, or somehow play out 40 years without taking on at least some responsibility? We can all be fired, and you shouldn’t consider yourself an exception.

People worry that computers and robots, self-service checkout tills and vacuum cleaners are going to replace their jobs, and they’re probably right. They also believe us IT guys are completely safe, building those replacements, and that we’ve got the better end of the job-security situation. Unfortunately, they couldn’t be more wrong.

I work in IT, and whilst “job security” for a job in general is high because it’s an in-demand skill, in any given company that’s not true. For example, I worked for a website founded in 2001 that now makes over £1 billion per year selling clothes. It’s privately owned and can splash its cash anywhere. I worked on complex warehouse software that I believe helped the company edge out its luxury-customer unique selling point. Only we got bought by a rival. They already had a competing warehouse, so you know how that went down: send us your customer database to load into our system, ship your stock to our warehouse locations and go home. (OK, it wasn’t quite like that at all, but that’s a real thing.)

As a population we need to understand that “job security” is a meaningless term, and that we should instead aim for employability, changing careers to suit demand. That’s just how we have to view life now, because it’s the only way to survive in the real world of uncaring companies.

Even if you hate your job you still need the money so don’t confuse job security with your real concern: financial security.

If you have the opportunity to be a contractor, on twice the salary for only half the time, I’d even go so far as to recommend that personally. The purpose of this article is only to make you think twice about job security as a metric.

When talking about groups being made redundant, all this “go and get another job” stuff is meaningless, because almost no one can afford to re-educate or reskill themselves and take a zero-experience entry role, even if they had the motivation to. The companies laying them off or replacing them with machines certainly aren’t footing the bill.

I don’t know what the solution is; it just seems to be the government that’s left to pick up the pieces. Regardless of what we say, companies will do what they want, and we have to live with that. Fighting the technology doesn’t work either. How we can help people stay in their jobs, which would certainly help those people, I honestly don’t know. You’re fighting the employers themselves, something only unions or the government can successfully do.