A Stupidly Simple PHP Dependency Injection Container

I’ve already written at length about dependency injection. And in the months since it’s only proven to be more helpful. But just because I got over some of my hangups about SOLID doesn’t mean I got rid of all of my bad habits.

Particularly the bad habit of deciding that even though there’s a perfectly serviceable library I’m already using, I can’t ignore the persistent thought that I can do better.

So I did. I got irked by something and ended up writing my own dependency injection container.

The lay of the land

The way I’m building Smolblog’s core library at the moment separates objects into two categories:

  • Value objects contain strongly-typed information. They are (mostly) read-only, as mutation should only happen in certain places. Any methods in a Value object should be self-contained; they do not call out to other objects, Services, or dependencies. Value objects are state, not code.
  • Service objects perform actions. They can have dependencies on other services and objects and should be given those dependencies at construction. They can act on information in Value objects; those should be given to the Service when the particular method is called. Services should typically not contain data; they should be able to function as singletons (regardless of whether they actually are). Service objects are code, not state.

Having this separation has actually really helped me focus the architecture in Smolblog, and it’s kept me from making any one class too “big.”

Dependency injection containers are classes that store dependencies for other classes and can provide instances of them. Essentially, instead of creating new instances, you get new instances from the container.

Where normal code might create a service like this:

$service = new Service(db: new Database(), fetch: new HttpClient());

Using a dependency injection container, it would look like this:

$service = $container->get(Service::class);

This takes all the responsibility for knowing how to instantiate a service away from the classes throughout the application and centralizes it into one place.

Containers are a common enough pattern that there is a standard interface for them: PSR-11. This way, different frameworks and libraries can define containers, and other libraries can use them without having to depend on the specific behavior of specific containers. For a while, I was using Container from The League of Extraordinary Packages as the container for Smolblog.

Until I wasn’t.

Preoccupied with whether I could

Let me state two things first:

  1. I could not have built my own container at the outset. I needed to fully grasp the concept first, and that could only happen by writing my own code against another library.
  2. Nothing in this article is a dig against the League’s Container. I want to be absolutely clear on this. I’m not interested in starting drama or picking fights.

But as I got more and more into using dependency injection, especially in the very specific ways I was using it for Smolblog, I realized how… simple the concept was.

In PHP, every class has a class constant (accessed as ::class) that is simply a string of the fully-qualified class name:

namespace oddEvan\Example\Simple;

class StupidSimple {}

echo StupidSimple::class;
// oddEvan\Example\Simple\StupidSimple

Passing that string into a container’s get method will typically return an instance of that class.
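
That get method, along with a has method, is the entire PSR-11 surface; stripped of its doc comments, the interface looks like this:

namespace Psr\Container;

interface ContainerInterface {
  public function get(string $id);
  public function has(string $id): bool;
}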

Now let’s consider the constraints I have for Smolblog:

  1. The only classes with dependencies are Services.
  2. Services should be given all dependencies at construction.
  3. Services should function as singletons.

This makes our container’s logic… actually pretty simple:

  1. Have a configuration of classes and dependencies.
  2. Given a class name, check for an existing instance and skip to step 6 if there is one.
  3. If no instance exists, retrieve that class' dependencies.
  4. For each class in those dependencies, call step 2 with the dependency’s class.
  5. Use the dependencies to create an instance of the class and store it.
  6. Return the instance to the caller.

…I think we can do this.

Considered whether I should

That’s cool and all, but replacing an established library with my own implementation is not something to be done lightly. Well-built libraries, like the ones from the League, are well-tested and well-maintained by a group of people. I’m just me.

By rolling my own solution, I’m eschewing the time and effort put into the existing library. Sometimes it can look like “bloat” or “unnecessary” code, but often that code covers edge cases that aren’t immediately obvious. Some of those potential bugs can even be security concerns.

In this specific case, a lot of the code in the League’s container involves different ways to load classes into the container. Because it is a general-purpose library, it has to handle several different scenarios:

  • Singleton classes (return the same instance every time)
  • Multiple classes (return a new instance every time)
  • Set dependencies in the constructor
  • Set dependencies by method calls after construction
  • Store classes by name
  • Store classes by aliases
  • Receive an initial configuration
  • Accept changes at any time
  • Determine if a dependency is another class or a value

With Smolblog’s constraints, this list is a lot shorter:

  • Singleton classes
  • Set dependencies in the constructor
  • Store classes by name
  • Receive an initial configuration
  • Determine if a dependency is another class or a value
  • Use named arguments

That last point is what tipped me over to writing my own container. In PHP 8, you can now use named arguments. This is a language construct I first saw in Objective-C that Apple carried over into Swift, and understandably so. It makes method calls much more readable, especially if they have many optional parameters. Let’s start with an opaque function call:

make_superhero('Larry', 'Larry-Boy', 'Archibald', 3, false);

With named arguments, not only is it clearer what argument is what, but the order is no longer significant:

make_superhero(
  super_name: 'Larry-Boy',
  num_episodes: 3,
  citizen_name: 'Larry',
  assistant: 'Archibald',
  can_super_size: false,
);

I’ve been using named arguments extensively in Smolblog, and I wanted that flexibility in my container. And wanting that feature is ultimately what let me give myself permission to write my own container. It wasn’t—and isn’t!—enough just to want “less code”; there has to be a reason for me to write my code.

So let’s get to it.

Level 1: it begins

We’ll start with a naive implementation just to get an idea of where we are, a simple configuration and handler.

Let’s set up some pretend services first:

class DatabaseService {
  private DatabaseConnection $connection;

  public function __construct() {
    $this->connection = new DatabaseConnection('db://user:pass@server/db');
  }
  //...
}

class UserService {
  public function __construct(private DatabaseService $db) {}
  //...
}

class UserApiService {
  public function __construct(private UserService $users) {}
  //...
}

For configuration, we’ll create an array of arrays. Each array will contain a class' dependencies, and we’ll key that array to the class' name:

$config = [
  UserApiService::class => [
    'users' => UserService::class,
  ],
  UserService::class => [
    'db' => DatabaseService::class,
  ],
  DatabaseService::class => [],
];

And now, our container:

class Container implements Psr\Container\ContainerInterface {
  private array $instances = [];
  
  public function __construct(private array $config) {}

  public function has(string $id): bool {
    return array_key_exists($id, $this->config);
  }
  
  public function get(string $id) {
    // Check if $id is in the configuration.
    if (!$this->has($id)) { throw new ServiceNotFoundException($id); }
    
    // If we don't already have an instance, create one.
    $this->instances[$id] ??= $this->instantiateService($id);
    
    // Return the instance.
    return $this->instances[$id];
  }
  
  private function instantiateService(string $id) {
    // Get the listed dependencies from the container.
    $args = array_map(
      fn($dependency) => $this->get($dependency),
      $this->config[$id]
    );

    return new $id(...$args);
  }
}
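
Wiring it together looks like this:

$container = new Container($config);

// The container creates DatabaseService, then UserService, then UserApiService.
$api = $container->get(UserApiService::class);

// Later calls return the same instance; our services act as singletons.
$sameApi = $container->get(UserApiService::class);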

Simple! But these are hardly real-world conditions.

Level 2: Other Parameters

Now let’s say we want to make DatabaseService more resilient. Instead of having a hard-coded database connection string, we’ll pass one into the constructor:

class DatabaseService {
  private DatabaseConnection $connection;

  public function __construct(string $connectionString) {
    $this->connection = new DatabaseConnection($connectionString);
  }
  //...
}

Now we just add that string to our configuration… wait…

$config = [
  //...
  DatabaseService::class => [
    'connectionString' => 'db://user:pass@server/db', // This is ambiguous
  ]
];

Remember that the class constants are just strings. How is our container going to tell the difference between a class string like oddEvan\Thing\DatabaseService and db://user:pass@server/db?

  • We could check class_exists or $this->has() to see if the given string represents a class or a value.
  • We could have some way of tagging an entry as a value.

Right now, I prefer explicit signals over trying to “figure out” a programmer’s intent. So to explicitly tag this as a value, we’ll use a callable (such as an arrow function) that will return the value we want. Let’s revisit our configuration with this:

$config = [
  //...
  DatabaseService::class => [
    'connectionString' => fn() => 'db://user:pass@server/db', // This is clearer.
  ]
];

Then we’ll look for callables in the container:

class Container implements Psr\Container\ContainerInterface {
  //...
  private function instantiateService(string $id) {
    // Get the listed dependencies from the container.
    $args = array_map(
      fn($dependency) => is_callable($dependency)
        ? call_user_func($dependency)
        : $this->get($dependency),
      $this->config[$id]
    );

    return new $id(...$args);
  }
}

Level 3: Interfaces

What about when a class takes an interface as a dependency (which it should)? Let’s add a PSR-18 HTTP client to one of our services:

class UserService {
  public function __construct(
    private DatabaseService $db,
    private \Psr\Http\Client\ClientInterface $http,
  ) {}
  //...
}

Updating the UserService configuration is easy enough since an interface also has a class constant:

$config = [
  //...
  UserService::class => [
    'db' => DatabaseService::class,
    'http' => \Psr\Http\Client\ClientInterface::class,
  ],
];

But now we need to add ClientInterface to our container somehow. We need to have some way to give an implementation in the configuration; otherwise our container will (unsuccessfully) try to instantiate an interface!

Going back to the idea of explicit signals, we actually can use strings here:

$config = [
  //...
  \Psr\Http\Client\ClientInterface::class => MyHttpClient::class,
];

Now we check the type of the class' configuration: if it’s a string, we get that class.

class Container implements Psr\Container\ContainerInterface {
  //...
  private function instantiateService(string $id) {
    $config = $this->config[$id];

    if (is_string($config)) {
      // This is an alias.
      return $this->get($config);
    }

    //...
  }
}

Note that we are very specifically not checking if $id is an interface. We want to be able to alias any class in here in case we want to replace a particular dependency with a subclass.

We kind of handwaved an implementation of that class. What if we wanted to use something specific?

Level 4: Factories

What if, instead of rolling our own HTTP client, we used an off-the-shelf library like Guzzle?

$config = [
  //...
  \Psr\Http\Client\ClientInterface::class => \GuzzleHttp\Client::class,
];

According to the Guzzle docs, a Client only needs a configuration array. We could do this with our existing config structure:

$config = [
  //...
  \GuzzleHttp\Client::class => [
    'config' => fn() => ['connect_timeout' => 30],
  ],
];

And this would work! But there’s a small assumption here that could turn into technical debt.

Remember that our container splats the configuration into the parameters of the class' constructor. If the maintainers of Guzzle ever change the name of the parameter from $config to something else, our container would break. One way to avoid this would be to remove the key from the dependency array, but that still feels fragile to me. What we need is a way to create an instance of Client without assuming it will have the same constraints our services have.

We can do something similar to aliases: provide a callable function that returns the entire object.

$config = [
  //...
  \GuzzleHttp\Client::class =>
    fn() => new \GuzzleHttp\Client(['connect_timeout' => 30]),
];

Then we check for those in the container:

class Container implements Psr\Container\ContainerInterface {
  //...
  private function instantiateService(string $id) {
    $config = $this->config[$id];

    if (is_callable($config)) {
      // The config is a factory function.
      return call_user_func($config);
    }

    //...
  }
}

Finishing up

At this point, we’ve hit all the use cases I have for a dependency injection container:

  • Lazy instantiation
  • One instance per class
  • Aliases (replacing one class/interface with another)
  • Dependencies can be other classes or functions returning a value
  • Factory methods to manually create instances

There are a few places we could go from here. We could use the Reflection API to automatically determine configuration for some simple cases. We could (should!) add more error handling for when the configuration doesn’t match the code. And if you need those features, you can build them! Or just use something off-the-shelf that already does it.
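
As a taste of that Reflection idea, here’s a minimal, untested sketch of a helper our container could use to read dependencies straight from a constructor’s type hints (it only handles class- and interface-typed parameters):

private function reflectDependencies(string $id): array {
  $params = (new \ReflectionClass($id))->getConstructor()?->getParameters() ?? [];

  $dependencies = [];
  foreach ($params as $param) {
    $type = $param->getType();
    // Only single, non-builtin type hints can be resolved from the container.
    if ($type instanceof \ReflectionNamedType && !$type->isBuiltin()) {
      $dependencies[$param->getName()] = $type->getName();
    }
  }

  return $dependencies;
}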

Anyway, here’s our completed configuration and container:

// Service Classes //

class DatabaseService {
  private DatabaseConnection $connection;

  public function __construct(string $connectionString) {
    $this->connection = new DatabaseConnection($connectionString);
  }
  //...
}

class UserService {
  public function __construct(
    private DatabaseService $db,
    private \Psr\Http\Client\ClientInterface $http,
  ) {}
  //...
}

class UserApiService {
  public function __construct(private UserService $users) {}
  //...
}

// Configuration //

$config = [
  UserApiService::class => [
    'users' => UserService::class,
  ],
  UserService::class => [
    'db' => DatabaseService::class,
    'http' => \Psr\Http\Client\ClientInterface::class,
  ],
  DatabaseService::class => [
    'connectionString' => fn() => 'db://user:pass@server/db',
  ],
  \Psr\Http\Client\ClientInterface::class => \GuzzleHttp\Client::class,
  \GuzzleHttp\Client::class =>
    fn() => new \GuzzleHttp\Client(['connect_timeout' => 30]),
];

// Dependency Injection Container //

class Container implements Psr\Container\ContainerInterface {
  private array $instances = [];
  
  public function __construct(private array $config) {}

  public function has(string $id): bool {
    return array_key_exists($id, $this->config);
  }
  
  public function get(string $id) {
    // Check if $id is in the configuration.
    if (!$this->has($id)) { throw new ServiceNotFoundException($id); }
    
    // If we don't already have an instance, create one.
    $this->instances[$id] ??= $this->instantiateService($id);
    
    // Return the instance.
    return $this->instances[$id];
  }
  
  private function instantiateService(string $id) {
    $config = $this->config[$id];

    if (is_callable($config)) {
      // The config is a factory function.
      return call_user_func($config);
    }

    if (is_string($config)) {
      // This is an alias.
      return $this->get($config);
    }

    // Get the listed dependencies from the container.
    $args = array_map(
      fn($dependency) => is_callable($dependency)
        ? call_user_func($dependency)
        : $this->get($dependency),
      $config
    );

    return new $id(...$args);
  }
}

I’ll leave you with this last comment. You’ll note that our simple container still adheres to the Psr\Container\ContainerInterface interface. When I’m building a service that needs a container, I’m depending on this interface, not my specific container. The only part of Smolblog that really cares about how the container works is this configuration. And because this configuration is itself so simple, I could adapt it to a different container if and when I need to.

Which is really the whole point of this exercise: loosely couple things together. Using standard interfaces and a dependency injection container means that many of the key libraries Smolblog depends on can be swapped out. And that includes the container itself.

Thanks for reading; I’ll see y’all next time.


Use Laravel’s Illuminate Database Query Builder With WordPress

I’ve been working on Smolblog, a social web blogging app. To help me get to a minimally viable product sooner, I’ve been building it on top of WordPress. However, WordPress is built exclusively for the MySQL database, and I eventually want Smolblog to work with many different databases, especially SQLite. This means, for my own code, I need to abstract the database away.

The first pass I had at this was to simply have Query objects and services to handle those. This would effectively abstract away the entire data layer, making it completely system-agnostic. It wouldn’t even need to be a traditional database. But as I built this system out, I was making more and more assumptions about what the database and data code would look like. And while the database code was all abstracted away, I still had to write it. A lot of it. And every line I wrote using $wpdb was another line I’d have to rewrite someday.

I’ve been looking at other frameworks to use, and Laravel is by far the strongest contender. Their approach to dependency injection and services seems to line up well with how I’ve organically built Smolblog to this point. So when I found out that their database abstraction layer also included a way to use the library without taking on the entire Laravel framework, I decided to make “someday” today.

Prerequisites

  • Composer: While you can use this library without using Composer, it’s very much not recommended. That being said, if you’re using this in a plugin for general use or otherwise don’t have control over your entire WordPress environment, be sure to use Mozart or some other tool to isolate the namespaces of your dependencies.
  • Populated database constants: Some of the more modern WordPress setups use a connection string or other way to connect to MySQL. I didn’t find a way to get that information out of the $wpdb global, so this code relies on having DB_HOST and the other constants from wp-config.php defined.
  • PDO_MySQL: Illuminate DB uses PDO to handle databases, so you’ll need to make sure your PHP server has the pdo_mysql extension installed. I’m using the official PHP image, so I needed to add these two lines to my Dockerfile:
RUN docker-php-ext-install pdo_mysql  
RUN docker-php-ext-enable pdo_mysql

Step 1: Dependency Injection

We’re going to use dependency injection to separate creating the database connection from using the database connection. This way, the connection can change without much of the surrounding code changing.

The documentation for Laravel’s query builder involves calling their DB facade, a global class that calls a singleton instance. Digging through the documentation and code, it looks like the underlying class conforms to the Illuminate\Database\ConnectionInterface interface. So that’s what we’ll use in our service’s constructor:

use Illuminate\Database\ConnectionInterface;

class ContentEventStream implements Listener {
	public function __construct(
		private ConnectionInterface $db,
	) {
	}
}

Inside the service, we’ll follow the documentation, replacing any use of the DB facade with our $db object:

$this->db->table('content_events')->insert(['column' => 'value']);
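
Fleshed out, a handler on that service might look something like this. It’s a sketch based on the schema and test later in this post, so treat the field and method names as illustrative:

public function onContentEvent(ContentEvent $event): void {
	$this->db->table('content_events')->insert([
		'event_uuid' => (string) $event->id, // assumes the Identifier casts to string
		'event_time' => $event->timestamp->format(\DateTimeInterface::RFC3339_EXTENDED),
		'payload' => json_encode($event->getPayload()),
	]);
}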

Step 2: Connection Factory

Now that we know what we need, we need to create it.

The README for the Illuminate Database package has good starting instructions. We’ll combine those with data from wp-config.php and $wpdb:

use Illuminate\Database\Capsule\Manager;
use Illuminate\Database\ConnectionInterface;

function getLaravelConnection(): ConnectionInterface {
	global $wpdb;

	$capsule = new Manager();
	$capsule->addConnection( [
		'driver' => 'mysql',
		'host' => DB_HOST,
		'database' => DB_NAME,
		'username' => DB_USER,
		'password' => DB_PASSWORD,
		'charset' => DB_CHARSET,
		'prefix' => $wpdb->prefix,
	] );

	return $capsule->getConnection();
}

(As mentioned, we’re pulling the connection information straight from configuration. If you know how to get it from $wpdb, let me know!)

The prefix property on the connection works much the same way as WordPress' table prefix. Since we’re using the connection object to also build our queries, it will add the prefix to our queries automatically. Using this property will also use the correct tables for blogs in multisite, so data from one blog doesn’t leak into another.

For Smolblog, I only want one set of tables regardless of multisite. I also want to prefix the Smolblog-specific tables, mostly so they’re all in one place when I’m scrolling. So my prefix property looks like this:

$capsule->addConnection( [
	// ...
	'prefix' => $wpdb->base_prefix . 'sb_',
] );

Because I don’t want a global object or the Eloquent ORM, I can ignore the rest of the setup from the project README.

Finally, we’ll want to store this created object somewhere central. Smolblog uses a simple dependency injection container, so we’ll store it there. The first time a service that needs a database connection is created, the container will run this function and provide the object.
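
With a simple PSR-11 container like the one described above, that registration might look something like this (a sketch using that container’s configuration format):

$config = [
	// The factory runs once, the first time something asks for a connection.
	\Illuminate\Database\ConnectionInterface::class => fn() => getLaravelConnection(),

	// Services list the interface as a dependency and stay connection-agnostic.
	ContentEventStream::class => [
		'db' => \Illuminate\Database\ConnectionInterface::class,
	],
];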

(Honestly, the container probably deserves a blog post of its own; you can look at the source code in the meantime.)

Step 3: Update the Schema

We have our code to build queries. We have our connection to the database. The only thing we need now is the actual tables for the database.

Here is where we can use WordPress to its full extent. We will be using the dbDelta function in particular. This will tie into WordPress' existing system for updating the database structure alongside WordPress itself.

Some plugins tie this migration code to an activation hook, but we want to be able to modify the tables even after the plugin is activated. So our process will look like this:

  1. Loop through the different tables we will need.
  2. Check the blog options for a schema version.
  3. If the version matches what we have in code, we’re up-to-date. Skip to the next table.
  4. Pass the new table schema to dbDelta and let it do its thing.
  5. Save the schema version to blog options.
  6. Rinse and repeat for each table.

At this point, I should bring up some of the caveats with the dbDelta function. The comments on the WordPress documentation are invaluable here, especially as they point out a few things that need to be consistent with our schemas.

Because there are so many things that need to be consistent, we’ll isolate the unique parts of our table schemas to two things:

  1. A name. Because every table needs one. We will declare it without the prefix.
  2. The fields excluding the primary key. We can have UNIQUE indexes on other fields for a similar effect, but every table will have an auto-incrementing id field.

A series of values keyed to short strings? That sounds like an array! Here’s part of what Smolblog’s schema array looks like:

class DatabaseHelper {
	public const SCHEMA = [
		'content_events' => <<<EOF
			event_uuid varchar(40) NOT NULL UNIQUE,
			event_time varchar(30) NOT NULL,
			content_uuid varchar(40) NOT NULL,
			site_uuid varchar(40) NOT NULL,
			user_uuid varchar(40) NOT NULL,
			event_type varchar(255) NOT NULL,
			payload text,
		EOF,
		'notes' => <<<EOF
			content_uuid varchar(40) NOT NULL UNIQUE,
			markdown text NOT NULL,
			html text,
		EOF,
	];

	public static function update_schema(): void {
		foreach ( self::SCHEMA as $table => $fields ) {
			self::table_delta( $table, $fields );
		}
	}

	//...
}

A brief aside: Smolblog uses UUIDs for its unique identifiers, and they’re stored here as full strings in fields ending with _uuid. I ran into trouble storing them as bytes, and something in WordPress would frequently mess with my queries when I had fields named things like user_id and site_id. I’m noting this here in case you run into the same things I did.

When WordPress loads the plugin, it will call the update_schema function declared here. That function loops through the array, extracts the table name and fields, and passes them to this function:

public static function table_delta( string $table, string $fields ): void {
	global $wpdb;

	$table_name      = $wpdb->base_prefix . 'sb_' . $table;
	$charset_collate = $wpdb->get_charset_collate();

	$sql = "CREATE TABLE $table_name (
		id bigint(20) NOT NULL AUTO_INCREMENT,
		$fields
		PRIMARY KEY  (id)
	) $charset_collate;";

	if ( md5( $sql ) === get_option( $table . '_schemaver', '' ) ) {
		return;
	}

	require_once ABSPATH . 'wp-admin/includes/upgrade.php';
	dbDelta( $sql );

	update_option( $table . '_schemaver', md5( $sql ) );
}

This function takes care of the boilerplate we talked about earlier and runs the steps:

  1. It creates the table name using the same pattern as before: the base prefix plus sb_.
  2. It creates a CREATE TABLE SQL statement using the table name and fields. (It’s okay to build a SQL query this way because all of the data is coming from constants inside the PHP file; none of it is coming from form data or other untrusted sources.)
  3. It takes the MD5 hash of the SQL statement and compares that to the saved option for this table. The hash will change when the code changes, so this is a quick way to keep our code and database in sync.
  4. If the database needs to be updated, it requires the correct file from WordPress Core and runs the dbDelta function.
  5. Finally, it saves the MD5 hash to the blog options so we know what version the database is on.
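
For completeness, wiring update_schema into WordPress can be a single action hook. Something like this works (I’m using plugins_loaded here, but any early hook will do, since the hash check makes repeat runs cheap):

add_action( 'plugins_loaded', [ DatabaseHelper::class, 'update_schema' ] );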

By calculating the version using the hash of the actual SQL, we don’t have to worry about whether some other version number has been updated. This may or may not be the approach you want to take in a production application, but it has proven very useful in development. This is the same idea as using the filemtime function as the “version number” of static CSS and JavaScript in your theme.
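
That trick, for reference, looks something like this:

wp_enqueue_style(
	'theme-style',
	get_stylesheet_uri(),
	[],
	filemtime( get_stylesheet_directory() . '/style.css' )
);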


So there we have it. We’ve used the connection information in WordPress to hook up a Laravel database connection. And at some point in the future, it’ll be that much easier to let Smolblog work with SQLite, which will in turn let Smolblog work on even more web hosts. And you can use this to do whatever you want! Maybe you just wanted to transfer some skills from Laravel to WordPress. Maybe you’re just in it for the academic challenge.

One thing you can do with this is unit-test your services using an in-memory SQLite database… and I’ll leave you with that.

final class DatabaseServiceUnitTest extends \PHPUnit\Framework\TestCase {
	private \Illuminate\Database\Connection $db;
	private DatabaseService $subject;

	protected function setUp(): void {
		$manager = new \Illuminate\Database\Capsule\Manager();
		$manager->addConnection([
			'driver' => 'sqlite',
			'database' => ':memory:',
			'prefix' => '',
		]);
		$manager->getConnection()->getSchemaBuilder()->create(
			'content_events',
			function(\Illuminate\Database\Schema\Blueprint $table) {
				$table->uuid('event_uuid')->primary();
				$table->dateTimeTz('event_time');
				$table->text('payload');
			}
		);

		$this->db = $manager->getConnection();
		$this->subject = new DatabaseService(db: $this->db);
	}

	public function testItPersistsAContentEvent() {
		$event = new class() extends ContentEvent {
			public function __construct() {
				parent::__construct(
					id: Identifier::fromString('8289a96d-e8c7-4c6a-8d6e-143436c59ec2'),
					timestamp: new \DateTimeImmutable('2022-02-22 02:02:02+00:00'),
				);
			}

			public function getPayload(): array {
				return ['one' => 'two', 'three' => 'four'];
			}
		};

		$this->subject->onContentEvent($event);

		$expected = [
			'event_uuid' => '8289a96d-e8c7-4c6a-8d6e-143436c59ec2',
			'event_time' => '2022-02-22T02:02:02.000+00:00',
			'payload' => '{"one":"two","three":"four"}',
		];

		$this->assertEquals((object)$expected, $this->db->table('content_events')->first());
		$this->assertEquals(1, $this->db->table('content_events')->count());
	}
}

Building Smolblog: Separation

My first exposure to the SOLID principles for object-oriented programming came from the senior developers at the WordPress agency I was working at. It was a bit of a big change to how I had normally approached object-oriented programming. I was used to objects encapsulating broad swaths of functionality related to a particular thing. SOLID demanded that my objects be smaller, more focused, and much more numerous.

SOLID was hard for me to get the hang of, and in reality, I never got the hang of it while I was working for the agency. It felt like overkill: why should I create five different classes and boilerplate a ton of infrastructure just to end up calling register_post_type? Why go to all the trouble of dependency injection when the functions I need to call are right there?

A few months removed from agency work (and knee-deep in the Ruby world), I’m finally starting to get it. And in a way, I was right: it is overkill… if I’m only building a WordPress plugin.

But Smolblog is not only a WordPress plugin.

SOLID-ly Overkill

SOLID is an acronym which stands for five principles of object-oriented programming:

  1. Each class has a single responsibility.
  2. A class is open to use by others but closed to changes.
  3. A class can be replaced by any of its subclasses according to the Liskov Substitution Principle.
  4. A class' interface is a promise to other classes to behave a certain way.
  5. A class' dependencies should be given to it according to the other principles.

Points 2, 3, and 4 are ideas I take for granted. Classes have public and private properties and functions, and while the private aspects can change as much as anyone wants, the public aspects are a contract with the rest of the program. And interfaces and inheritance are amazing ways to keep similar logic in one place and change things where they’re different. I learned this back in my C++ days. No big.

It was the first and last points that tripped me up, badly. My idea of a single responsibility was “This class handles everything to do with this thing.” The seniors said a single responsibility was “This class handles this one task.” I thought that sounded more like a function. I also struggled with dependency injection. What was the point of encapsulating logic inside of an object if you had to give the object the logic?

Trying to implement these principles just to create a post type simply wasn’t worth it. It made the code bigger and more complex than it needed to be. Combine that with the fact that there were no real testing practices in place, and writing fully-idealized SOLID code felt like all of the hassle and none of the payoff.

What the senior devs were aiming for was more than a couple of hooks; it was a future of much more complex functionality that would need to be picked up by a rotating squad of developers. It was a potential for writing tests on client-specific business logic.

SOLID principles aren’t overkill when you’re building an application; they’re essential.

Stop Trying To Do Everything

The first hurdle I had to get over was personal. I’m a people pleaser. I want to do everything for everyone all the time so that maybe people will like me. What I didn’t realize was that (toxic) idea had spread to my coding style: my “everything” classes were made in my own image.

I wanted to encapsulate logic into neat little packages that I could hide from the rest of the application. For example, I would want creating a new post to be (essentially) one line of code:

$post = new Post(
	id: null,
	title: 'My first blog post!',
	author_id: 5,
	content: 'This is so cool, I wish I knew how to blog.'
);

Behind the scenes, though, there would be too much happening:

class Post {
	function __construct(
		$id,
		$title,
		$author_id,
		$content,
	) {
		global $db;

		if (
			!isset($title) ||
			empty($title) ||
			!isset($content) ||
			empty($content) ||
			!isset($author_id) ||
			$author_id <= 0
		) {
			throw new Exception('Bad post data!');
		}

		if (isset($id)) {
			$db->query(
				"UPDATE `posts` SET `title`=?, `author_id`=?, `content`=? WHERE `id`=?",
				$title,
				$author_id,
				$content,
				$id
			);
		} else {
			$id = $db->query(
				"INSERT INTO `posts` SET `title`=?, `author_id`=?, `content`=?",
				$title,
				$author_id,
				$content
			);
		}

		$this->id = $id;
		$this->title = $title;
		$this->author = $db->query('SELECT * FROM `users` WHERE id=?', $author_id);
		$this->content = $content;
	}

	// Other helper methods and such...
}

This pretend class is written with these requirements:

  • Every Post object should correspond to something in the database.
  • Every Post object should have a title, author, and content.
  • Every Post object should have an ID; we can infer that a new post will not have an ID and get one when the post is created in the database.

Right off the bat, though, we’ve coded some big assumptions into our class:

  • The global $db object exists.
  • The global $db object has a query method.
  • Posts are stored in the posts table.
  • Authors are stored in the users table.

Here’s the thing that took me so long to grok: even though these assumptions are probably true now, they may not be true later. On some level I understood this, but I figured if that day came I would spend a day poring over the codebase making the necessary changes.

People pleaser, remember?

If we were to make this code more SOLID, we’d have a few different classes. First, we’ll pare down the Post class to just one responsibility: data integrity. If the class is given bad data, it should not create the object. So now our class can look like this:

class Post {
	function __construct(
		string $title,
		Author $author,
		string $content,
		?int $id = null,
	) {
		if (
			empty($title) ||
			empty($content)
		) {
			throw new Exception('Bad post data!');
		}

		$this->id = $id;
		$this->title = $title;
		$this->author = $author;
		$this->content = $content;
	}

	// Get/set methods...
}

Not only did we take out all the database code, we also added type hints to the constructor’s parameters. This way, PHP itself can check that title, author, and content are set and of the right type, throwing an error if not.
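
For example, both of these calls will now blow up, which is exactly what we want:

new Post(title: 'My first post!', content: 'Hello!');
// ArgumentCountError: $author was never given.

new Post(title: 'My first post!', author: 42, content: 'Hello!');
// TypeError: $author must be of type Author, int given.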

Saving $post to the database and turning some author_id into an Author object with data are not the responsibility of a Post.

Creating a Dependency

Let’s go back to our hypothetical post creation and put that code in context. We’ll say we’re getting a request through the API to create a new post. With our old do-everything Post class, that endpoint class could look like this:

class NewPostApiEndpoint {
	public function __construct() {}

	public function run(WebRequest $request) {
		$post = new Post(
			id: $request['id'] ?? null,
			title: $request['title'],
			author_id: $request['author_id'],
			content: $request['content'],
		);

		return new WebResponse(200, $post);
	}
}

Short, sweet, and to-the-point. Beautiful. Except now we know what horrors once lay beneath that innocuous new Post call. We could bring all those database calls into our endpoint class, but that wouldn’t fix the underlying issue: what happens when the database needs to change?

Really, the first question we should ask is, “What is the responsibility of NewPostApiEndpoint?” Our short-and-sweet class helps us answer that question: to save a Post with the data from the web request.

What’s not included: knowing how the Post is stored. “But we know it’s a database!” Yes, we know it’s a database; the class should only know what it needs to do its job. So let’s start writing our new endpoint but leave comments where we have missing information:

class NewPostApiEndpoint {
	public function __construct() {}

	public function run(WebRequest $request) {
		$post = new Post(
			id: $request['id'] ?? null,
			title: $request['title'],
			author: // TODO We have author_id, need object
			content: $request['content'],
		);

		// TODO Have post, need to save

		return new WebResponse(200, $post);
	}
}

We’ve identified two outside responsibilities: getting an Author object and saving a Post object. Those sound like single responsibilities to me!

Here’s where the power comes in: our endpoint object doesn’t need a specific object for these jobs, just an object that can do the job. So instead of writing new classes, we’ll create two interfaces:

interface AuthorGetter {
	public function getAuthor(int $author_id): Author;
}

interface PostSaver {
	public function savePost(Post $post): void;
}

Now that we have those interfaces defined, we can finish our endpoint:

class NewPostApiEndpoint {
	public function __construct(
		private AuthorGetter $authors,
		private PostSaver $posts,
	) {}

	public function run(WebRequest $request) {
		$post = new Post(
			id: $request['id'] ?? null,
			title: $request['title'],
			author: $this->authors->getAuthor($request['author_id']),
			content: $request['content'],
		);

		$this->posts->savePost($post);

		return new WebResponse(200, $post);
	}
}

And that’s Dependency Injection in a nutshell! Cool, right?

Yeah…except, again, we’ve only moved the complexity. We still have to make those database calls at some point. And when it comes time to finally assemble the application, we have to keep track of what classes have which dependency, and… ugh.

Can’t See the Trees For the Forest

This was my other problem with the SOLID principles: how is Dependency Injection supposed to actually make things easier? I understood the idea of passing in what an object needs, but I got overwhelmed trying to picture doing that for an entire application. Trying to keep track of all the dependencies for an object also meant keeping track of those dependencies' dependencies, and it didn’t take long for the infinite recursion to crash my brain.

What I failed to grasp was that knowing objects' dependencies counts as a single responsibility. So why not let computers do what they do best?

The established pattern here is known as a Dependency Injection Container. The PHP Framework Interop Group has an established interface for these containers that is widely accepted across the industry. These objects store a mapping of classes and dependencies and create properly-initialized objects.

To complete our example, we’ll use the Container package from The League Of Extraordinary Packages:

use League\Container\Container;

$container = new Container();
$container->add(NewPostApiEndpoint::class)
          ->addArgument(AuthorGetter::class)
          ->addArgument(PostSaver::class);

// Later on...

$newPostEndpoint = $container->get(NewPostApiEndpoint::class);

And that’s pretty much it! We set up our classes to accept the dependencies they need, then we set up the container to get those dependencies to the classes. If those dependencies have dependencies, the container will take care of them too.

The only thing we have left to do is actually set up our two dependencies. We added the interfaces as arguments, but we haven’t given any concrete implementations to the container. We’ll skip writing out those classes and just show how it could work here:

$container->add(AuthorRepo::class)
	->addArgument(DbConnector::class);
$container->add(PostRepo::class)
	->addArgument(DbConnector::class);

$container->add(AuthorGetter::class, AuthorRepo::class);
$container->add(PostSaver::class, PostRepo::class);

This tells our container that anything that depends on AuthorGetter should be given an instance of AuthorRepo, and anything that needs PostSaver should be given PostRepo.

But why go to all this trouble? We’ve taken a few lines of code and spread them out over three classes (five if you count the interfaces) and introduced an entirely new library to our code. While the individual pieces of the code may be easier to follow, the flow of code through the entire application is now much more complex. What does all this extra work actually get us?

Know What You (Don’t) Need

I could say “it makes things easier to change” and leave it at that. By isolating different responsibilities in the code, it’s easier to find and change that specific code when circumstances change (which they inevitably will). But this truth can be hard to visualize when those changes seem unlikely.

I’m writing about these principles because I’m using them to rewrite Smolblog. The version currently running here is a WordPress plugin top-to-bottom, albeit with some efforts at SOLID principles. It gets the job done, but it doesn’t feel maintainable. There’s a lot of code that should be abstracted out, but I didn’t see a good way to do it.

For the rewrite, my guiding principle was “leave your options open.” I wasn’t sure what was going to be best for Smolblog in the long term despite feeling very sure that WordPress was the best option in the short term. I didn’t want to box Smolblog into using WordPress longer than it needed to, but I also didn’t want to bet on a PHP framework only to discover it was a bad fit and have to rewrite large swaths of code again.

About a month into the rewrite I realized I had stumbled backwards into SOLID programming. In order to isolate my code from the outside platform, I had to write interfaces for anything I needed from the platform. I started with several overly-complicated setups before everything finally clicked enough to…

Well, it’s clicked enough that I finally feel confident enough to write this blog post and introduce you to Smolblog\Core. This is where all of the Smolblog-y logic will be that makes Smolblog what it is. And the single biggest change is that while the code still depends on an outside platform, that platform doesn’t have to be WordPress.

There’s more complexity here, sure. Version one has just under 1600 lines of code while Smolblog\Core has 1800, and it can’t do everything version one can yet! But with that added complexity comes a better separation of concerns.

In the container example above, I noted that we could define interfaces in one area then set the concrete implementations in another. That’s the principle I’m using to keep the WordPress-specific code separate from the core code. This way, the WordPress-backed PostSaver class might look something like this:

class WordPressPostSaver implements PostSaver {
	public function savePost(Post $post): void {
		wp_insert_post( [
			'ID' => $post->id ?? 0,
			'post_title' => $post->title,
			// et cetera...
		] );
	}
}

One that used the database directly could look more like this:

class DatabasePostSaver implements PostSaver {
	//...
	public function savePost(Post $post): void {
		$this->db->query(
			"INSERT INTO `posts` SET `title`=?, `author_id`=?, `content`=?",
			$post->title,
			$post->author->id,
			$post->content
		);
	}
}

And because of the abstractions we’re using (namely the PostSaver interface), nothing else has to change.
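
Swapping one for the other is a one-line change where we configure the container, just like the aliasing above:

// WordPress-backed build:
$container->add(PostSaver::class, WordPressPostSaver::class);

// Database-backed build:
$container->add(PostSaver::class, DatabasePostSaver::class);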

Smolblog is still being built on top of WordPress. This time, though, all of the WordPress-specific code is in its own project. All of the WordPress idioms and functions get passed through their own set of classes to match the interfaces that Smolblog requires.

Now, instead of different WordPress functions being sprinkled throughout the project, they’re centralized and cataloged. We know what Smolblog can do and what it needs WordPress (or some other framework) to do.

SOLID-ly understood

I genuinely think part of the reason SOLID never clicked for me at the agency was the simple fact that we were always going to be using WordPress. I didn’t see a difference between using a WordPress function and using a built-in PHP function; both were always going to be there, so why bother isolating them as a dependency? Now that I’m working on a project—a big one!—that doesn’t have that constraint, I’m beginning to see the value even if we were staying with WordPress.

I still maintain that if you know the plugin will never be more than a few calls to WordPress-specific functionality, like custom post types and taxonomies, then it’s best to use some namespaces and just get the job done. But I should also admit that it’s taken a lot of experience to know which is which.

It’s not lost on me that at some point in this project I’ll have 90% of a framework for using these principles in a WordPress plugin if not 90% of a framework in general. Combining these principles with Domain-Driven Design and Command-Query Responsibility Segregation almost guarantees it… but that’s another blog post.

For now, I’ll just go ahead and admit it: y’all were right. As usual.


Building Smolblog: Open

I’ve been meaning to blog more as I’ve been working on the actual Smolblog code. And, with one of my other side projects finally shipping, I feel like I can start putting down some thoughts here. So here I am.

And the first thing I want to talk about isn’t just code, it’s about what specific words mean. I specifically want to start with something that isn’t a programming or coding problem. It’s really easy for us developers to try to solve all sorts of problems with code. But while well-built software in the right hands can do amazing things, the biggest problems we will solve are social, not technical.

So when I say I want Smolblog to be “open,” this is a question that is more social than technical.

You Know What Else Means Open?

Lots of things like to say that they are “open.” Google repeatedly calls Android open. Epic Games has been called “champions for a free and open internet.” Cryptocurrency and blockchain projects are often touted as decentralized and open. And WordPress, the system that Smolblog is currently using as a foundation, is famously open. But these all mean different things.

Epic Games advocates for free and open systems where anyone can install anything they want, especially their own Epic Games Store. That store, at least currently, does not have the freedom for anyone to sell whatever they want. By the same token, Android is a freely downloadable project that can be used by any phone manufacturer, but it is heavily tied to the Google Play store that has its own approval process. And while anyone can get into cryptocurrency and make transactions, the resources required to actually participate in “mining” on popular blockchains are prohibitive to all but a few.

So when I say Smolblog is open, what do I mean? How about this:

Smolblog’s Definition Of Open

  1. The Smolblog software is freely available to use, modify, and share.
  2. Interactions do not require blogs or users to be on the same instance of the Smolblog software.
  3. Users can reasonably expect to take their data from one instance of the Smolblog software to another with no change in functionality.

A Brief Aside About Free and/or Open Source

The first point is one well-known in the software world. It corresponds to the freedoms championed by Free Software and Open Source advocates. Though the two groups have philosophical differences, they agree in practice: software should be free to use, free to change, and free to share (both modified and unmodified).

This is often found in libraries, frameworks, and infrastructure for web apps. Most web apps are written in scripting languages where there is no way to run the app without having the source code. And as companies base more and more of their existence on the web, the level of control that freely usable and modifiable software provides is essential.

While the source code is available for free, and anyone can search on their preferred search engine for help, companies with the budget to do so often buy official support from the vendor. Vendors often also provide fully-hosted versions of their products as a subscription offering. Discourse and Gitlab are two examples of projects like this.

This approach hasn’t worked for everyone, though. Elasticsearch used to be an Open Source project with an official hosted solution. However, in the mid-2010s, their paid hosting was undercut by other vendors that offered the open source project on their systems, not Elastic’s. Elastic eventually changed their license to prohibit this, but in doing so violated the “freedom to use.”

While I don’t envy Elastic (and other similar companies) for the decisions they had to make, it highlights the key tradeoff of Free Software: the freedoms apply to everyone, including competitors. If Smolblog is going to be an open system, it has to be open for everyone. Any plan to make money from Smolblog has to take this into account.

How Do We Want To Do This?

First, some technical background. Smolblog is currently using WordPress as its foundation. I use those specific words because while Smolblog currently exists as a WordPress plugin, it is being built as its own product. Not everything in WordPress may be used or supported by Smolblog in the long term, but by making use of WordPress, Smolblog is able to be a complete product sooner.

So, for our definition of open, we have three basic pillars: software, interactions, and data. Let’s tackle them in reverse order.

Open Data

This is a technical problem, and a relatively easy one at that. Most systems and web apps have a way of exporting a user’s data for download. This has been helped along by privacy laws in some parts of the world.

Smolblog will need a feature that allows users to download their data in a standard format. Smolblog will also need a feature that allows users to upload their data export.

This feature should be as self-contained as possible. The downloaded export should contain everything needed to load the data into a new server with minimal setup. This includes not just posts and images but also information on connected social media accounts and plugin-specific data. Another Smolblog server should be able to take this downloaded export and re-create the user’s data from it.

By making this feature robust, it would provide end-users the freedom to leave a server for whatever reason they need, whether social, technical, or financial. It would also provide server maintainers the social freedom to remove unwanted users: with easy data portability, removing a user becomes less a case of “freedom of speech” and more a case of “should this speech be on this platform?”

WordPress currently has basic functionality in this area, but based on my time in a professional WordPress agency, it lacks the robustness this feature would require.

Open Interactions

Smolblog is intended to be as personal as a blog and as fun as social media. Part of social media’s appeal is the ease of interactions between people, such as replies or likes.

Smolblog features involving interactions will need to work identically whether the blogs are on the same server or different servers. No core features should rely on a central server.

The clearest example I can give of this is email. No single company “owns” email. Email works the same whether a user is on Gmail, Outlook, or iCloud (extensions, plugins, and other add-ons notwithstanding). Most importantly, emails can be sent between users on the same server (bob@gmail.com to alice@gmail.com) or users on different servers (bob@outlook.com to alice@icloud.com).

Social interactions on Smolblog need to work the same way. A blog on smolblog.com needs to be able to interact with a self-hosted blog (say, oddevan.com) just as easily as another blog on smolblog.com. We don’t know what these interactions will look like yet, but this will be a requirement.

Some interactions, like following and reblogging, can be handled through existing standards like RSS/JSON Feed and oEmbed. This can open these features beyond Smolblog and extend Smolblog’s “openness” to other sites and apps.

Open Source™

This is more than just making the source code available. To embrace this as a principle and not just a bullet-point, Smolblog needs to not only have an Open Source license but be written in a way that is truly open.

The majority of the Smolblog project will be released with a strong copyleft license. Exemptions to this can be made in the interest of supporting the project and its openness.

I see three tiers to this:

Tier One: Copyleft through the GNU Affero General Public License

The Affero General Public License (AGPL) is possibly the strongest (or most restrictive) open source license. It requires the full source code of the application to be made available for sharing and modification to all users of the application, including users that only use it as a web app. It is called a “copyleft” license because any changes or derivative works must also be covered by the AGPL. For most cases, this will ensure that a Smolblog user can get not just the “official” source code but the source code to the specific server they are on.

WordPress currently uses an older copyleft license that provides most of these freedoms, but there is one key exception. Code for a web app is never “distributed” to its users, only to those running the server. Automattic, the company behind WordPress, is able to use this exception to make products built on WordPress, like P2, exclusive to their own services. While they say they are committed to data portability and open source (and they have been), the Elasticsearch feud has shown that many companies will do everything they legally can.

We want to avoid any Smolblog or Smolblog-derived products from falling into this trap. The AGPL provides legal coverage for this.

Tier Two: Permissive through the Apache License

Licenses that do not require derivative works to be covered by the same license are sometimes called “permissive” licenses. These are especially useful for libraries and frameworks since they can be used by developers in commercial or private projects without involving the company lawyers.

Some of the code written for Smolblog will have a general purpose outside of the project. These could include tools for working with a social media site’s API, a library for putting data into a standard format, or a framework that enables a particular programming style. As part of being in a community of developers, sharing this code with a permissive license will enable Smolblog to benefit people beyond its users.

The Apache License is a recommended permissive license as it includes permissions and rights related to software patents.

Tier Three: Proprietary through individual commercial licenses

Wait, what? Hear me out.

This comes back to the definition of “open” I mentioned at the beginning. Smolblog being open means data portability and decentralized interactions as much as it means Open Source. Of those three principles, Open Source is the one least valuable to the average user (despite its necessity for the other two). There may be times where compromising a little on Open Source can enable uses for Smolblog that make it useful for even more people.

I don’t expect these situations to manifest anytime soon if ever. But putting this option on the table now means that anyone contributing to Smolblog’s source code is aware of it and can agree to it. Asking contributors to assign full copyright to their contributions, while reasonable, has the potential for abuse. Instead, I would prefer that any contribution agreement for Smolblog list the ways the contribution can be used.

One benefit to commercial licenses is being able to custom-tailor them to each business. For example, say a hosting business wants to offer managed Smolblog hosting. Their “secret sauce” is a caching layer that requires a custom-built plugin. This plugin wouldn’t enable any user-facing features, and it would not work without the host’s custom software. This business could get a commercial license limited to their integration code that would exempt their plugin from the AGPL requirements in exchange for a commission on their Smolblog service.

I choose these two examples specifically: Licensing Smolblog under the AGPL is intended to prevent someone building a product or feature locked to a specific provider. Users of Automattic’s P2 cannot move to a different WordPress and keep the same experience; the data is not truly portable in that sense. The hosting company example does not involve any impact to true data portability or use, since the user experience (and the data created by the users) is indistinguishable from the main project. The openness of Smolblog is not impacted in any meaningful way, and the project gets a source of funding that is not dependent on user-hostile advertising.

But as I said, this is all philosophy. None of it matters until Smolblog is actually built. And so we build. You’re welcome to join along.

Take care of each other; I’ll see you next time.


Introducing Grimoire

TL;DR: I'm building Deckbox but for Pokémon cards. Headless WordPress app with a Next.js/React frontend. You can browse the catalog now; you can also request a beta invite if you want to try it out. Want to learn more? Read on!


My first job out of college was for Blackbaud, working on their next-generation platform. It was a software-as-a-service platform built with an API-first design: every action taken by a user, even through the official app, went through the API. During my time there, the primary app went from a Windows application to a JavaScript application (something that made my Mac-loving heart happy), and this was possible because the user interface was decoupled from the application logic.

I think this architecture has stuck with me more than I realized. As headless WordPress took off, I had the chance to learn how to properly build an API-based application. Now all I needed was a problem to solve...

A problem like the massive amount of Pokémon cards in my collection. I've started selling some of them on TCGplayer, and while they have a decent app, it didn't quite fit my needs. I needed an application I could store my catalog in and quickly check it for cards that have increased in value. It also needed to be able to tag different versions of the same card for when it came time to build a deck.

I'd worked on something for this before, even wrote a blog post about it, but now it's time to finish the job. To that end, let me introduce Grimoire.

Yeah, it doesn't look like much. In the interest of finishing, this is a minimum viable product. In this case, that means lots of Bootstrap. But let me show you what there is!

The Features

On one level, Grimoire is just a catalog of Pokémon cards. It uses the TCGplayer API to get the different cards. TCGplayer is already invested in having an extensive catalog of all the Pokémon cards printed, so that is thankfully work I do not have to do. For Grimoire, I wanted to add two things to their catalog:

Unique, Discoverable IDs

A Grimoire ID (pkm-evs-49-r in the screenshot) consists of up to 4 parts:

  • The game the card is from. In this case, pkm denotes a Pokémon card. This part is mostly in place for when I inevitably add support for Magic: The Gathering.
  • The set the card is from. This card is from the Evolving Skies set, so it's abbreviated evs.
  • The card number, including any descriptors but ignoring the printed set total.
  • One last part for any extra modifiers that are not part of the card number. The card in the screenshot is a reverse holographic printing, so its ID has an extra r.
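
As a concrete sketch, here's how an ID like pkm-evs-49-r might come apart in code. The function name is hypothetical and it assumes no part contains its own hyphen; the real process is messier:

function parseGrimoireId(string $id): array {
    $parts = explode('-', $id);

    return [
        'game'     => $parts[0],          // 'pkm'
        'set'      => $parts[1],          // 'evs'
        'number'   => $parts[2],          // '49'
        'modifier' => $parts[3] ?? null,  // 'r'; absent on a normal printing
    ];
}

print_r(parseGrimoireId('pkm-evs-49-r'));
// Array ( [game] => pkm [set] => evs [number] => 49 [modifier] => r )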

The idea is that by looking at the card, you can infer the ID as long as you know the patterns. This is the part of the project that's going to take the longest, as much of the process is manual. Most cards fit the pattern, but there are always exceptions: deck-exclusive variants, league staff variants, and a bunch of other cards that will have to be assigned IDs by hand.

It's okay. It's not like I have a full-time job or anything.

Identify Alternate Printings

The card in the screenshot above is a reverse-holographic printing. There's also a normal, non-holographic printing. These cards are the same from a gameplay perspective, but they have different collection values. With Grimoire, alternate printings are all linked together:

Different card, different price, but same text. The two versions of this card link to each other. This is largely in place so that, in the future, it can be easier to find out which cards you have as you're building a deck. Some desirable cards may have less expensive versions. That's why it was important for this feature to work not just within a set, as shown for the Pikachu card, but across different sets.

One of the headline cards for Evolving Skies was Umbreon VMAX, shown in this screenshot:

There was also a "secret" version of this card with alternate artwork:

Very cool! And very expensive. But that wasn't the last time they printed this card. In the Brilliant Stars set, there is a special collection called the Trainer Gallery featuring Pokémon posing with their trainers. And here's the gigantic Umbreon:

All three of these are different cards with (very!) different prices. But when building a deck, all three are functionally the same.

Personal Collections

But I set out to build a personal catalog, not just a list. So once I've logged in, how does that change things?

At the bottom of each card's page, there is a list of the different collections I've made. I can change the quantity of this card in each of those collections. In this case, it's a pretty rare card, so I've only got one.

On my profile page, I can see all my collections and the current value of those cards:

And because entering this data can take a long time, it was important for me to have a CSV export so that I can download my cards and their quantities in a standard format.
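
The export itself doesn't need to be fancy. A sketch of the idea in PHP, with made-up field names standing in for whatever the real endpoint uses, is just a header row and a loop:

// Hypothetical card data; the real export pulls this from a collection.
$cards = [
    ['grimoire_id' => 'pkm-evs-49-r', 'name' => 'Example Card', 'quantity' => 1],
];

header('Content-Type: text/csv');
header('Content-Disposition: attachment; filename="collection.csv"');

$out = fopen('php://output', 'w');
fputcsv($out, ['grimoire_id', 'name', 'quantity']); // header row

foreach ($cards as $card) {
    fputcsv($out, [$card['grimoire_id'], $card['name'], $card['quantity']]);
}

fclose($out);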

Tech Specs

I could write several blog posts about the tech problems I solved making this app. And in fact, I probably will, sometime in the next... time. If you want to see a writeup on any of the features, leave a comment!

At a high level, the frontend website is a fully static Next.js application. This means that the website is written in React and TypeScript with anything that can be rendered ahead of time written to static HTML. It's currently hosted on Vercel, but I could just as easily host it anywhere else because, again, it's static HTML. If Geocities was still around, I could host it there.

That would be a bad idea, I would not host it there.

The backend is a WordPress theme hosted on Smolblog. Remember that? The static rendering uses GraphQL to get the cards and sets, while the more interactive features use custom endpoints in the WordPress REST API. The only reason for the separation is... that... I couldn't figure out how to make the custom endpoints I wanted in GraphQL and I didn't feel like taking the time to learn it just yet.
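
For the curious, registering a custom endpoint in WordPress looks roughly like this. The grimoire/v1 namespace and the callback body are hypothetical stand-ins, not the app's actual code:

add_action('rest_api_init', function () {
    register_rest_route('grimoire/v1', '/collections/(?P<id>\d+)', [
        'methods'             => 'GET',
        'callback'            => function (WP_REST_Request $request) {
            // Fetch the collection however the app stores it.
            return new WP_REST_Response(['id' => (int) $request['id']]);
        },
        // Interactive features require a logged-in user.
        'permission_callback' => 'is_user_logged_in',
    ]);
});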

But there were plenty of fun problems I did solve, including:

  • Excluding browser-only JavaScript from static rendering in Next
  • Setting up OAuth so that it works with WordPress multisite correctly
  • Writing TypeScript types for a React component that didn't include them
  • Using basic card data and an MD5 hash to find different printings (see the sketch after this list)
  • Storing authentication credentials in a cookie
  • Using React context to access authentication details throughout the application
  • Setting up custom tables in WordPress and using them with the REST API and GraphQL
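
Here's the rough shape of that MD5 trick. The field names are illustrative, but the idea is to hash only the gameplay-relevant data, so two printings of the same card hash identically even across sets:

function printingHash(array $card): string {
    // Set, number, and finish are deliberately left out of the hash.
    return md5(strtolower(implode('|', [
        $card['name'],
        $card['card_type'],
        $card['card_text'],
    ])));
}

$holo    = ['name' => 'Pikachu', 'card_type' => 'Basic Pokémon', 'card_text' => 'Gnaw: 10'];
$reverse = $holo; // same gameplay data, different finish

var_dump(printingHash($holo) === printingHash($reverse)); // bool(true)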

The Future

As I got to the end of the first version of this project, I learned an important lesson:

WordPress was a bad choice.

I don't say that lightly. I've spent the last few years of my life immersed in the WordPress world, and I truly believe it can be used for almost anything.

But in the case of Grimoire, the data does not lend itself to custom post types as easily as custom tables. While sets and cards could conceivably be custom post types, they would rely heavily on custom metadata and taxonomies. The data is much more suited for a traditional relational database. At this point in the project, WordPress is only being used for authentication and as an administration tool. For the future of Grimoire, the benefits of a fully-featured platform like WordPress are outweighed by the difficulties in working directly with the database.
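
To give a sense of what "working directly with the database" means in WordPress terms, here's the level Grimoire has to operate at, via $wpdb. The table and column names are illustrative:

global $wpdb;

// Custom tables live alongside the WordPress tables and share its prefix.
$table = $wpdb->prefix . 'grimoire_cards';

// Every query like this is hand-written SQL; none of the usual
// WP_Query conveniences apply to custom tables.
$cards = $wpdb->get_results(
    $wpdb->prepare("SELECT * FROM {$table} WHERE set_code = %s", 'evs')
);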

I have a few plans for Grimoire moving forward:

  1. Rewrite the backend in a modern framework like Laravel or Ruby on Rails. This will make working with the database much easier.
  2. Consider using Next.js' server-side capabilities. This could take some pressure off of the backend by moving some functionality closer to the edge.
  3. Add detailed card information. This will only need to be stored once per printing and can enable some fun features like finding recent cards that work well together.
  4. Sync inventory to TCGplayer. I'd love to use Grimoire to manage the inventory I have for sale.
  5. Offer a "Pro" subscription with access to historical pricing data and store inventory management. Because the people most willing to pay for something are the ones making money.

I've rambled long enough. Go check out Grimoire and let me know what you think!


Introducing Smolblog

Around the end of last year, I wrote an essay about what made Tumblr unique in the blogging world, followed by another essay about different technologies that can be used by a blog platform. And then I did nothing.

Well, not nothing. I went and got a new job. I also started sketching out some more concrete ideas. And while I want to be farther along in the actual development of things, I also want to start getting feedback on the ideas themselves.

Full disclosure: I'm great at talking about ideas, but I'm still learning to actually execute on them. Which is kinda disappointing, since the execution is where so many ideas go from "good" to "awesome." So, bear in mind, this is an idea. It may not get very far, it may not get very good, it may crash and burn spectacularly. But these are problems I have wanted to solve for myself, and if I can help solve them for others, then I feel that I must try. So with that, let me announce...

Smolblog

The name was carefully considered and chosen for the following reasons:

  • “Smol” is one of my favorite “internet words.” It’s small, but more adorable. More comfortable. “Small” isn’t big enough, but “Smol” is just right.
  • It’s a blogging platform.
  • It’s for the space in between a micro-sized blog and a medium-sized blog.
  • Most importantly, smolblog.com was available.

Side note: it’s honestly ridiculous how hard it is to get a good dot-com these days.

“Smol” blogging is something I want to emphasize with this platform. Blogging on platforms like WordPress and Medium can feel intimidating. You have a blank page and a site that encourages posting about “big” ideas. What Tumblr excelled at was encouraging small posts of just a picture. Just a sentence. Just a link.

It’s no coincidence that I’ve probably posted more on my Tumblr blog in a year than I did on my WordPress blog in five. While Medium has become a home for presenting big ideas, Tumblr was a home for just... being yourself. That’s the kind of atmosphere I want to build on Smolblog.

The Mission

A project needs guiding principles, a central problem to solve. Focusing on these can help determine which features need to be built now and which can wait for later. They can also help set the tone for interactions between people within and around the project.

Keep the gates open

  • Anyone should be able to set up a technically identical server. While some design elements and trademarks may be reserved for the “canonical” site, there should be almost no difference between using sites hosted on different servers.
  • Individual blogs should be easily moved (import/export) between servers or saved offline
  • Use open protocols for interactions

The end result is something like Mastodon: you don’t need to be on the same server as someone in order to interact with them.

Play well with others

  • Allow synchronization from and syndication to other social networks
  • Use oEmbed instead of copying others’ posts

I'm going to be much more willing to try something new if it means I don't lose the social connections I've made on existing services. I'm shooting for Twitter and Tumblr crossposting for phase one as these are the services I use most.

Enable self-expression

  • Allow multiple blogs on multiple domains
  • Allow user-installed themes
  • Make it easy to post small posts and reblogs

There is a time and a place for standardized, beautiful web design. Your personal site should only be that if you want it to be.

Phase One

Spoiler alert: it's WordPress. It's always been WordPress. Why?

  • It's easily deployable on inexpensive web servers.
  • It's well-supported and actively maintained.
  • It comes with several key features for Smolblog out of the box, including but not limited to
    • Multi-user support
    • Multi-site support
    • Image management and manipulation
    • REST API
    • oEmbed provider and consumer support
    • Standard format for import/export
  • Lots of people are invested in extending WordPress for custom purposes. I work with some of them.

So while I talk about Smolblog as its own thing, the first phase (at least) will be delivered as a WordPress plugin. If the project ever outgrows WordPress, then it will need to be at least as easy to deploy as vanilla WordPress is currently.

Building on top of WordPress, I plan on adding Tumblr and Twitter crossposting. I've already worked on a large part of the logic through a previous project of mine. By the end of phase one, I'm hoping to have the following features in addition to a standard WordPress Multisite install:

  • Import a full Twitter archive
  • Authorize against Twitter as a single account
  • Pull tweets from that account on a regular basis if they do not already exist on the site
  • Pull Retweets and Retweet-with-comments as embedded tweets to clearly delineate original and reposted content
  • Push new posts to Twitter, either in full or as links back to the site
  • Authorize against Tumblr as an account and indicate a blog
  • Pull posts from that blog, both historical and on a regular basis if they do not already exist on the site
  • Pull reblogs as embedded posts to clearly delineate original and reposted content
  • Push new posts to Tumblr in as native a format as possible

This should lay the groundwork for adding more services as time and available methods allow.

Phase Later

Some other ideas that will have to come later, after the basic version is working:

  • Posting natively with ActivityPub
  • Cross-posting to Facebook Pages (depends on API support from Facebook)
  • Cross-posting to Instagram (currently being privately tested by Facebook, will depend on Facebook being kind and benevolent and honestly I don't expect this to ever be possible)
  • Cross-posting to YouTube/Vimeo
  • Cross-posting to DeviantArt
  • Dashboard for following other sites/people (use RSS/ActivityPub to "follow anyone")
  • Easy reblogging-as-oembed
  • Supporting Micropub APIs well
  • Native post editor for when Gutenberg is too much
  • Allow end-user editable custom theme
  • Easy podcasting
  • Asks

There's a lot here. I'm not going to be able to do this myself. But I'm going to try. If you want to follow along, the best place to do that is here on this blog (see email widget at the bottom of the page). If you want to see or contribute code, check out the GitHub repo for the plugin.

I have lots of hope and plans. I hope to have more. Thanks for reading, everyone.


Technology Cannot Make a Platform, But It Does Help

The web literally exists to share content. The first web browser was also a web editor. And ever since then, programmers have been working on ways to make publishing easier and better. As such, there’s no shortage of existing technologies that a new platform can build off of.

A brief aside about the nature of technology and its place as a part of a whole

It’s easy to think that the right technology will change everything. That somehow, the right code will make all the problems with Old Blue go away and we will live happily ever after in our new paradise.

It’s easy to forget that Posterous existed around the time of Old Blue’s ascendancy. It was blessed with better technology, including a dedicated URL shortener and the ability to post via email. Old Blue arguably had inferior technology. But it won. The right technology came together with the right design and the right people at the right time, and the lightning in a bottle struck.

It takes more than good technology to change things. It takes good design, good timing, and a good understanding of the problems being solved. But the right technology can enable change. And as we talk about the technologies that can enable a new platform, it’s important to remember this.

The Interface Is Hot

So, for this essay, let’s look at some interfaces. These are also called “protocols” or “standards.” The general idea here is a group of people have written down, in technical language, how a thing should be accomplished. The most obvious of these would be the HTTP standard that governs how web browsers and servers talk to each other.

We’re not talking about code yet, just the ways we can use it.

oEmbed

This is what turns https://www.youtube.com/watch?v=dQw4w9WgXcQ into the embed code that makes all your friends hate you. It involves a few steps:

  1. Blog gets URL from user.
  2. Blog looks up oEmbed endpoint for the URL, either
    • Matching the URL to a list of known endpoints, or
    • Looking for a particular link tag in the page’s head.
  3. Blog hits the oEmbed endpoint and gets back the code required to embed the content from the URL into a page.
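
In rough PHP, steps 2 and 3 might look something like this sketch. It assumes the page advertises a JSON endpoint in a link tag and skips the error handling (and the XML variant) that real code would need:

$url = 'https://www.youtube.com/watch?v=dQw4w9WgXcQ';

// Step 2: look for the oEmbed link tag in the page's head.
// (A real implementation would use a proper HTML parser.)
$html = file_get_contents($url);
preg_match(
    '/<link[^>]+type="application\/json\+oembed"[^>]+href="([^"]+)"/',
    $html,
    $matches
);
$endpoint = html_entity_decode($matches[1]);

// Step 3: the endpoint returns the embed code, ready to drop into a page.
$response = json_decode(file_get_contents($endpoint), true);
echo $response['html'];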

While this was likely originally intended for video websites, it has since grown to encompass all manner of sites, including Old Blue herself. The maintainers of the standard have a (non-comprehensive) list of sites using oEmbed on their website.

The takeaway: Reblogs worked on Old Blue because everything was still happening within the platform. Old Blue was able to enforce attribution and keep social information flowing back to the original poster of the content, no matter how far from the original it traveled. On the open internet, however, the distinction between “reblogging” and simple plagiarism can be hard to see. The ability to embed posts from other blogs, however, can re-create the idea of reblogging while maintaining attribution and social information.

RSS / JSON Feed

RSS has been used for nearly 20 years to allow other sites and programs to read updates from blogs and other regularly-updated websites. It’s evolved slowly, but its simplicity has allowed it to remain relevant even as most internet users don’t realize they’re using it.

Today, though Google Reader has shut down, RSS readers can still be found in services like Feedly and Feed Wrangler. It’s also used to populate stories in the Apple News app. Most prevalently, though, it’s used to deliver every podcast episode to their many listeners.

JSON Feed takes the same principle as RSS but uses JSON instead of XML as its primary syntax. This makes the format easier to understand at a glance, and it helps make the format more resilient in some edge cases.
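
To show the shape of the format, here's a minimal feed built in PHP; version, title, and items are the required top-level fields:

$feed = [
    'version' => 'https://jsonfeed.org/version/1',
    'title'   => 'My Smol Blog',
    'items'   => [
        [
            'id'           => 'https://example.com/posts/1',
            'url'          => 'https://example.com/posts/1',
            'content_html' => '<p>Just a sentence.</p>',
        ],
    ],
];

header('Content-Type: application/json');
echo json_encode($feed, JSON_PRETTY_PRINT | JSON_UNESCAPED_SLASHES);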

The takeaway: Old Blue’s dashboard allowed you to follow other blogs on the platform. A decentralized platform would need a standard way to follow other blogs, and these feed formats already provide it.

OAuth

This is the authentication flow that allows external, third-party apps to tie back into a platform. By now, it’s hard to exist on the internet without using this to connect one app or website to another. Whether it’s signing into a mobile game with Facebook, or connecting Old Blue to Twitter, everyone’s familiar with “An app would like permission to connect.”

The takeaway: No social network can exist in a vacuum, at least not anymore. Any new platform is going to need to exploit connections to other networks, even if only for cross-publishing posts.

Webmention

Webmention is a new standard that allows posts that are responses to others to link to each other automatically. It is patterned after the similar functionality found on social networks.
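
Sending one is refreshingly simple: you POST your page's URL and the page you're responding to, form-encoded, to the target's advertised endpoint. A sketch:

$source = 'https://myblog.example/replies/42';    // my response post
$target = 'https://other.example/posts/original'; // the post I'm responding to

// The endpoint is normally discovered from the target's HTTP Link header
// or a <link rel="webmention"> tag; it's hard-coded for this sketch.
$endpoint = 'https://other.example/webmention';

file_get_contents($endpoint, false, stream_context_create([
    'http' => [
        'method'  => 'POST',
        'header'  => 'Content-Type: application/x-www-form-urlencoded',
        'content' => http_build_query(['source' => $source, 'target' => $target]),
    ],
]));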

The takeaway: Once again, getting the same social information normally found on a monolithic platform would be key to making a decentralized platform feel like a “normal” social network. This is a relatively new standard, and care would have to be taken to make sure that spam and harassment wouldn’t overwhelm the system.

When a Plan Comes Together

None of these technologies alone will make a new platform successful. Even all of them together doesn’t guarantee success; in fact, if the different parts are not integrated well, the end result will be worse. Much worse.

Many of these technologies are in use by the IndieWeb, and all of them have open-source code that can be used by any platform. There is work being done to make these technologies more accessible and usable. And I am particularly impressed by the Micro.blog platform that has taken many of these technologies and others and made them into a plausible alternative to Twitter.

A new platform has to be aware of how these technologies interact. As I mentioned earlier, Old Blue won not on the strength of its technology but in how it used that technology to meet the goals of its users. Any potential replacement for Old Blue will need to take the same path: choosing the right technology and presenting it in the best way to allow people to understand it and use it effectively.

Design is a hard problem.


What Makes A Platform, or How Do We Recreate Old Blue

It’s not enough to just make something. It’s got to be worthwhile. So if we’re going to do this, we’re going to do this right. Let’s start with the past.

What made Old Blue so good?

Old Blue (the site I will not name for fear of Big Red) was lightning in a bottle. There’s no way any site can hope to recreate the same success. It was the right parts at the right time, and whatever truly takes its place will be something unexpected. So what were the right parts?

The Easiest Way To Actually Blog

Old Blue removed a lot of the friction of blogging. These weren’t just technical challenges, though it took care of those as well. There were no servers to configure, no software to download. You picked a username and boom! You had a blog.

Big deal, other services (like WordPress and Blogger) were that easy. Where Old Blue really excelled was in getting content onto your blog. You were allowed and even encouraged to post content you found, not just content you wrote yourself. This was emphasized further by the “reblog” functionality that allowed you to easily repost content from another’s blog onto your own, giving you content for your own blog while attributing it to the original poster.

The problem of starting a blog is easily solved. Old Blue solved the much harder problem of how to easily get content onto a blog.

Dashboard Confessions

Even with the reblog button, though, there was still the matter of finding blogs to reblog from. For this, Old Blue took a page from the then-new Twitter and added the ability to “follow” other blogs. Their posts would then show up in a standard format on your “Dashboard.”

While this took away a large portion of the customization, it made keeping up with blogs easier than ever. There were no RSS feed readers or poorly-configured Google Analytics to worry about; readers got to read and bloggers got their consistent audience.

Mid-2000s Geocities-Style Self-Expression

Purists will complain about the single-column layout of most Old Blue blogs. They will decry the lack of responsiveness, complaining in tandem that the owner has heard of neither smartphones nor twenty-seven-inch monitors. One comment complained that the state of web design on Old Blue was similar to Geocities in the mid-2000s. I agree wholeheartedly, but I see it as a positive.

Self-expression has always been a part of the social internet. It started with Geocities sites, migrated to MySpace profiles, and eventually settled on Old Blue blogs. All of these allowed mostly unrestricted styles, letting site owners pick and choose random HTML, CSS, and JavaScript snippets from across the internet and blend them together into a miasma that was unmistakably them. Old Blue took it a step further, allowing custom domain names for free. If you didn’t want Old Blue’s name anywhere on your public blog, you didn’t need it.

Did it look ugly? To some. Did it sometimes break? Yes. But it gave people ownership over their blogs, allowing them to feel like their space was truly theirs.

Anything Goes

Everyone “knows” that Old Blue was full of illicit/NSFW material. And, let’s be honest, it’s made it hard for many to take the service seriously. In a professional context, the last thing a service needs is something work-related showing up next to something, well, not safe for work! This is doubly true when it comes to advertising, a sad fact that has robbed the service of much-needed revenue.

And yet, this exceptionally permissive content policy had a side-benefit. Content creators were free to post without fear of their content being removed for a nebulous “terms of service violation.” This was especially relevant in the wake of other online communities like LiveJournal and FanFiction.net nominally “cracking down” on adult content. These crackdowns were, at best, selectively enforced and relied heavily on community reports; the end result was that illicit material, nominally disallowed but either tolerated or unknown to the wider community, was able to survive on those sites.

Content creators whose work was illicit (or even objectionable in other ways) could post freely on Old Blue without worrying about their content suddenly disappearing. This drove more people to the platform, in turn making it more attractive to other content creators with “safer” material. The network effects took over and made Old Blue a force to be reckoned with.

Hyper-specific Hyperfixations. Or not.

Old Blue made it incredibly easy to sign up and start a blog. That blog could be as specific or general as you wanted. And when you got to the point where you needed a different space, you could start another blog. And another. And another.

Content creators could make different blogs for different fandoms, different levels of content safety, or just different ideas in general. This gave rise to creatively-named specific blogs, like the notable “effyeah” named blogs, or particularly specific names like “picsthatmakeyougohmm.”

What Would We Need?

So, using these principles, what features would a potential replacement for Old Blue need?

  • Low-friction signups
  • Easy to find and post content
  • Easy to make multiple blogs
  • Easy to follow interesting blogs
  • Open-ended theming
  • Custom domain option
  • Clearly-defined (if not permissive) content policy

All but the last of these are technical problems. Good programming and good design can make those features sing. The issue is that last, social problem: the content policy.

The only site of any significant size that has survived with a permissive content policy is Archive Of Our Own. It’s run by the Organization for Transformative Works, a nonprofit dedicated to making a space for works that would not otherwise have a home. As such, they have devoted significant resources to ensuring their policy can withstand legal challenges, and they rely on tax-deductible donations to fund the site instead of skittish advertisers. Any platform that would truly wish to fill the shoes of Old Blue would probably need to take a similar approach.

An alternative is the one taken by WordPress. Savvy web citizens know that there are two sides to WordPress: the free website where anyone can sign up for a blog, and the open-source software anyone can install for free on their own web server. While downloading and installing WordPress is not necessarily for the faint of heart (it requires some technical knowledge of web servers and how to maintain them), WordPress is widely considered one of the easiest pieces of web software to install and use.

This ease of deployment allows the free website WordPress.com to have a stricter content policy, since anyone adversely affected can take their content to a self-hosted blog with a little effort. This is more than simply offering a blog “backup”; WordPress has built-in mechanisms to move content from one WordPress-powered blog to another with few changes. A blog hosted on WordPress.com with a custom domain can be changed to a self-hosted WordPress blog with few to no visible changes to visitors.

While the WordPress method doesn’t eliminate the social problem of a content policy, it does reduce the stakes. If a group of users find the content policy onerous, they can set up (and pay for) their own WordPress-powered platform.

What next?

And here is where I will cut this off. I humbly submit this for comment, knowing I’ve left some things out that may not have been integral to my experience on Old Blue but essential to others.

I’ll also be working on a follow-up to discuss particular technologies that could be used to create a new platform in this vein, so if you have any suggestions there, I’m all ears.

But I do want to close with this: these are ideas. These are thoughts. And that’s all they are. Building a platform takes a lot of work, not just in the programming but also in how it is socially maintained. And as Facebook, Twitter, Google, and Big Red are learning, the rules you choose to have and how you enforce them can have dramatic consequences for the community that builds up around your platform. This is not something I can tackle on my own, and it is not something I would ask anyone to volunteer for.

This is a thought exercise, a way of getting these ideas out of my head. I hope you find it useful, or at least a little informative. And if it helps shape whatever platforms come next, I’ll be even more happy. Thanks for reading; I’ll see you next time.