Unexpected Unpermitted Parameters In Rails

Based on a true story; I’ve reframed this to be about a personal project to protect identities involved.

Some surprises are good; this one is just annoying. I recently had some unexpected “unpermitted parameters” show up in my Rails app logs during development. Nothing was breaking; it was just noise. Annoying noise. Turns out, there were two issues at play:

First: Unexpected Parameter

Let’s say I had a POST endpoint at /some_things/:session_id that expected a JSON payload like this:

{
	"some_param": "one",
	"some_other_param": "two"
}

Then in my controller, I permitted the parameters like so:

permitted_params = params.permit(:session_id, :some_param, :some_other_param)

And yet, without fail, my logs would show “Unpermitted parameter: :some_thing”. What gives?

Turns out, by default Rails wraps a JSON payload in an extra parameter named after the controller, which turned my payload into this:

{
	"some_param" => "one",
	"some_other_param" => "two",
	"some_thing" => {
		"some_param" => "one",
		"some_other_param" => "two"
	}
}

There are reasons for this that mostly have to do with building create/update routes for resources. Reasons that have nothing to do with my controller!

The solution

Fixing this meant adding this single line to my controller to opt-out of this default Rails behavior:

wrap_parameters false
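
In context, that ends up looking something like this (the controller name here is my guess based on the /some_things route):

class SomeThingsController < ApplicationController
  # Opt out of params wrapping for this controller only.
  wrap_parameters false

  def create
    permitted_params = params.permit(:session_id, :some_param, :some_other_param)
    # ...
  end
end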

But I was still getting “Unpermitted parameter” in my logs, this time from a different source.

Second: Premature Permitting

The log entry was pretty sparse, and I wanted to know what line was responsible! The quickest way was to change the behavior so that Rails raises an exception when it encounters an unpermitted parameter instead of just logging it. This should not be done in production!

ActionController::Parameters.action_on_unpermitted_parameters = :raise
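
If you want to make sure this setting never follows you to production, one option is to scope it to the development environment, something like this in config/environments/development.rb:

# config/environments/development.rb
Rails.application.configure do
  # Raise on unpermitted parameters in development only.
  config.action_controller.action_on_unpermitted_parameters = :raise
end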

Unless you need to have your application come to a full stop if the input is even the slightest bit off, you don’t want to do this in production. But doing this locally, it will pinpoint exactly where the issue is happening. And sure enough, I found the culprit:

class SomeConcern
	# ...
	def parameter_is_set?
		params.permit(:override)[:override] == "yes"
	end
	# ...
end

Somewhere deep in the controller stack, I was doing what I was “supposed” to do and only checking “permitted” parameters. Since this was a relatively safe check, I removed the “permit” check:

params[:override] == "yes"

This left the params.permit call in my main controller as the only one.

And the logs were quiet again. For now…


A Stupidly Simple PHP Dependency Injection Container

I’ve already written at length about dependency injection. And in the months since it’s only proven to be more helpful. But just because I got over some of my hangups about SOLID doesn’t mean I got rid of all of my bad habits.

Particularly the bad habit of deciding that even though there’s a perfectly serviceable library I’m already using, I can’t ignore the persistent thought that I can do better.

So I did. I got irked by something and ended up writing my own dependency injection container.

The lay of the land

The way I’m building Smolblog’s core library at the moment separates objects into two categories:

  • Value objects contain strongly-typed information. They are (mostly) read-only, as mutation should only happen in certain places. Any methods in a Value object should be self-contained; they do not call out to other objects, Services, or dependencies. Value objects are state, not code.
  • Service objects perform actions. They can have dependencies on other services and objects and should be given those dependencies at construction. They can act on information in Value objects; those should be given to the Service when the particular method is called. Services should typically not contain data; they should be able to function as singletons (regardless of whether they actually are). Service objects are code, not state.

Having this separation has actually really helped me focus the architecture in Smolblog, and it’s kept me from making any one class too “big.”

Dependency injection containers are classes that store dependencies for other classes and can provide instances of them. Essentially, instead of creating new instances, you get new instances from the container.

Where normal code might create a service like this:

$service = new Service(db: new Database(), fetch: new HttpClient());

Using a dependency injection container, it would look like this:

$service = $container->get(Service::class);

This takes all the responsibility for knowing how to instantiate a service away from the classes throughout the application and centralizes it into one place.

Containers are such a common pattern that there is a common interface for them to implement: PSR-11. This way, different frameworks and libraries can define containers, and other libraries can use them without having to depend on the specific behavior of specific containers. For a while, I was using Container from The League of Extraordinary Packages as the container for Smolblog.

Until I wasn’t.

Preoccupied with whether I could

Let me state two things first:

  1. I could not have built my own container at the outset. I needed to fully grasp the concept first, and that could only happen by writing my own code against another library.
  2. Nothing in this article is a dig against the League’s Container. I want to be absolutely clear on this. I’m not interested in starting drama or picking fights.

But as I got more and more into using dependency injection, especially in the very specific ways I was using it for Smolblog, I realized how… simple the concept was.

In PHP, every class has a constant named class that is simply a string of the fully-qualified class name:

namespace oddEvan\Example\Simple;

class StupidSimple {}

echo StupidSimple::class;
// oddEvan\Example\Simple\StupidSimple

Passing that string into a container’s get method will typically return an instance of that class.

Now let’s consider the constraints I have for Smolblog:

  1. The only classes with dependencies are Services.
  2. Services should be given all dependencies at construction.
  3. Services should function as singletons.

This makes our container’s logic… actually pretty simple:

  1. Have a configuration of classes and dependencies.
  2. Given a class name, check for an existing instance and skip to step 6 if there is one.
  3. If there is no existing instance, retrieve that class’ dependencies.
  4. For each class in those dependencies, repeat from step 2 with the dependency’s class.
  5. Use the dependencies to create an instance of the class and store it.
  6. Return the instance to the caller.

…I think we can do this.

Considered whether I should

That’s cool and all, but replacing an established library with my own implementation is not something to be done lightly. A well-built library, like the ones from the League, is well-tested and well-maintained by a group of people. I’m just me.

By rolling my own solution, I’m eschewing the time and effort put into the existing library. Sometimes it can look like “bloat” or “unnecessary” code, but often that code covers edge cases that aren’t immediately obvious. Some of those potential bugs can even be security concerns.

In this specific case, a lot of the code in the League’s container involves different ways to load classes into the container. Because it is a general-purpose library, it has to handle several different scenarios:

  • Singleton classes (return the same instance every time)
  • Multiple classes (return a new instance every time)
  • Set dependencies in the constructor
  • Set dependencies by method calls after construction
  • Store classes by name
  • Store classes by aliases
  • Receive an initial configuration
  • Accept changes at any time
  • Determine if a dependency is another class or a value

With Smolblog’s constraints, this list is a lot shorter:

  • Singleton classes
  • Set dependencies in the constructor
  • Store classes by name
  • Receive an initial configuration
  • Determine if a dependency is another class or a value
  • Use named arguments

That last point is what tipped me over to writing my own container. In PHP 8, you can now use named arguments. This is a language construct I first saw in Objective-C that Apple carried over into Swift, and understandably so. It makes method calls much more readable, especially if they have many optional parameters. Let’s start with an obtuse function:

make_superhero('Larry', 'Larry-Boy', 'Archibald', 3, false);

With named arguments, not only is it clearer what argument is what, but the order is no longer significant:

make_superhero(
  super_name: 'Larry-Boy',
  num_episodes: 3,
  citizen_name: 'Larry',
  assistant: 'Archibald',
  can_super_size: false,
);

I’ve been using named arguments extensively in Smolblog, and I wanted that flexibility in my container. And wanting that feature is ultimately what let me give myself permission to write my own container. It wasn’t—and isn’t!—enough just to want “less code”; there has to be a reason for me to write my code.

So let’s get to it.

Level 1: it begins

We’ll start with a naive implementation just to get an idea of where we are, a simple configuration and handler.

Let’s set up some pretend services first:

class DatabaseService {
  public function __construct() {
    $this->connection = new DatabaseConnection('db://user:pass@server/db');
  }
  //...
}

class UserService {
  public function __construct(private DatabaseService $db) {}
  //...
}

class UserApiService {
  public function __construct(private UserService $users) {}
  //...
}

For configuration, we’ll create an array of arrays. Each array will contain a class' dependencies, and we’ll key that array to the class' name:

$config = [
  UserApiService::class => [
    'users' => UserService::class,
  ],
  UserService::class => [
    'db' => DatabaseService::class,
  ],
  DatabaseService::class => [],
];

And now, our container:

class Container implements Psr\Container\ContainerInterface {
  private array $instances = [];
  
  public function __construct(private array $config) {}

  public function has(string $id): bool {
    return array_key_exists($id, $this->config);
  }
  
  public function get(string $id) {
    // Check if $id is in the configuration.
    if (!$this->has($id)) { throw new ServiceNotFoundException($id); }
    
    // If we don't already have an instance, create one.
    $this->instances[$id] ??= $this->instantiateService($id);
    
    // Return the instance.
    return $this->instances[$id];
  }
  
  private function instantiateService(string $id) {
    // Get the listed dependencies from the container.
    $args = array_map(
      fn($dependency) => $this->get($dependency),
      $this->config[$id]
    );

    return new $id(...$args);
  }
}
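
Wiring it up with the configuration from above looks something like this:

$container = new Container($config);

// Builds UserService and DatabaseService along the way.
$api = $container->get(UserApiService::class);

// Instances are cached, so asking again returns the same object.
var_dump($api === $container->get(UserApiService::class)); // bool(true)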

Simple! But these are hardly real-world conditions.

Level 2: Other Parameters

Now let’s say we want to make DatabaseService more resilient. Instead of having a hard-coded database connection string, we’ll pass one into the constructor:

class DatabaseService {
  public function __construct(string $connectionString) {
    $this->connection = new DatabaseConnection($connectionString);
  }
  //...
}

Now we just add that string to our configuration… wait…

$config = [
  //...
  DatabaseService::class => [
    'connectionString' => 'db://user:pass@server/db', // This is ambiguous
  ]
];

Remember that the class constants are just strings. How is our container going to tell the difference between a class string like oddEvan\Thing\DatabaseService and db://user:pass@server/db?

  • We could check class_exists or $this->has() to see if the given string represents a class or a value.
  • We could have some way of tagging an entry as a value.

Right now, I prefer explicit signals over trying to “figure out” a programmer’s intent. So to explicitly tag this as a value, we’ll use a callable (such as an arrow function) that will return the value we want. Let’s revisit our configuration with this:

$config = [
  //...
  DatabaseService::class => [
    'connectionString' => fn() => 'db://user:pass@server/db', // This is clearer.
  ]
];

Then we’ll look for callables in the container:

class Container implements Psr\Container\ContainerInterface {
  //...
  private function instantiateService(string $id) {
    // Get the listed dependencies from the container.
    $args = array_map(
      fn($dependency) => is_callable($dependency)
        ? call_user_func($dependency)
        : $this->get($dependency),
      $this->config[$id]
    );

    return new $id(...$args);
  }
}

Level 3: Interfaces

What about when a class takes an interface as a dependency (which it should)? Let’s add a PSR-18 HTTP client to one of our services:

class UserService {
  public function __construct(
    private DatabaseService $db,
    private \Psr\Http\Client\ClientInterface $http,
  ) {}
  //...
}

Updating the UserService configuration is easy enough since an interface also has a class constant:

$config = [
  //...
  UserService::class => [
    'db' => DatabaseService::class,
    'http' => \Psr\Http\Client\ClientInterface::class,
  ],
];

But now we need to add ClientInterface to our container somehow. We need to have some way to give an implementation in the configuration; otherwise our container will (unsuccessfully) try to instantiate an interface!

Going back to the idea of explicit signals, we actually can use strings here:

$config = [
  //...
  \Psr\Http\Client\ClientInterface::class => MyHttpClient::class,
];

Now we check the type of the class' configuration: if it’s a string, we get that class.

class Container implements Psr\Container\ContainerInterface {
  //...
  private function instantiateService(string $id) {
    $config = $this->config[$id];

    if (is_string($config)) {
      // This is an alias.
      return $this->get($config);
    }

    //...
  }
}

Note that we are very specifically not checking if $id is an interface. We want to be able to alias any class in here in case we want to replace a particular dependency with a subclass.
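
As a quick illustration (assuming MyHttpClient is itself registered in the configuration with its own dependencies), the alias and the concrete class resolve to the same shared instance:

$client = $container->get(\Psr\Http\Client\ClientInterface::class);
$same = $container->get(MyHttpClient::class);

var_dump($client === $same); // bool(true)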

We kind of handwaved an implementation of that class. What if we wanted to use something specific?

Level 4: Factories

Let’s say that instead of rolling our own HTTP client, we want to use an off-the-shelf library like Guzzle:

$config = [
  //...
	\Psr\Http\Client\ClientInterface::class => \GuzzleHttp\Client::class,
];

According to the Guzzle docs, a Client only needs a configuration array. We could do this with our existing config structure:

$config = [
  //...
	\GuzzleHttp\Client::class => [
    'config' => fn() => ['connect_timeout' => 30],
  ],
];

And this would work! But there’s a small assumption here that could turn into technical debt.

Remember that our container splats the configuration into the parameters of the class' constructor. If the maintainers of Guzzle ever change the name of the parameter from $config to something else, our container would break. One way to avoid this would be to remove the key from the dependency array, but that still feels fragile to me. What we need is a way to create an instance of Client without assuming it will have the same constraints our services have.

We can do something similar to aliases: provide a callable function that returns the entire object.

$config = [
  //...
	\GuzzleHttp\Client::class =>
  	fn() => new \GuzzleHttp\Client(['connect_timeout' => 30]),
];

Then we check for those in the container:

class Container implements Psr\Container\ContainerInterface {
  //...
  private function instantiateService(string $id) {
    $config = $this->config[$id];

    if (is_callable($config)) {
      // The config is a factory function.
      return call_user_func($config);
    }

    //...
  }
}

Finishing up

At this point, we’ve hit all the use cases I have for a dependency injection container:

  • Lazy instantiation
  • One instance per class
  • Aliases (replacing one class/interface with another)
  • Dependencies can be other classes or functions returning a value
  • Factory methods to manually create instances

There are a few places we could go from here. We could use the Reflection API to automatically determine configuration for some simple cases. We could (should!) add more error handling for when the configuration doesn’t match the code. And if you need those features, you can build them! Or just use something off-the-shelf that already does it.
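
If you’re curious about the Reflection route, a helper might look something like this rough sketch; it only handles constructor parameters with class type hints and ignores everything else:

function configFromReflection(string $className): array {
  $constructor = (new ReflectionClass($className))->getConstructor();
  $dependencies = [];

  foreach ($constructor?->getParameters() ?? [] as $param) {
    $type = $param->getType();

    // Only map parameters type-hinted with a single class or interface.
    if ($type instanceof ReflectionNamedType && !$type->isBuiltin()) {
      $dependencies[$param->getName()] = $type->getName();
    }
  }

  return $dependencies;
}

// configFromReflection(UserService::class)
// => ['db' => DatabaseService::class, 'http' => \Psr\Http\Client\ClientInterface::class]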

Anyway, here’s our completed configuration and container:

// Service Classes //

class DatabaseService {
  public function __construct(string $connectionString) {
    $this->connection = new DatabaseConnection($connectionString);
  }
  //...
}

class UserService {
  public function __construct(
    private DatabaseService $db,
    private \Psr\Http\Client\ClientInterface $http,
  ) {}
  //...
}

class UserApiService {
  public function __construct(private UserService $users) {}
  //...
}

// Configuration //

$config = [
  UserApiService::class => [
    'users' => UserService::class,
  ],
  UserService::class => [
    'db' => DatabaseService::class,
    'http' => \Psr\Http\Client\ClientInterface::class,
  ],
  DatabaseService::class => [
    'connectionString' => fn() => 'db://user:pass@server/db',
  ],
  \Psr\Http\Client\ClientInterface::class => \GuzzleHttp\Client::class,
  \GuzzleHttp\Client::class =>
    fn() => new \GuzzleHttp\Client(['connect_timeout' => 30]),
];

// Dependency Injection Container //

class Container implements Psr\Container\ContainerInterface {
  private array $instances = [];
  
  public function __construct(private array $config) {}

  public function has(string $id): bool {
    return array_key_exists($id, $this->config);
  }
  
  public function get(string $id) {
    // Check if $id is in the configuration.
    if (!$this->has($id)) { throw new ServiceNotFoundException($id); }
    
    // If we don't already have an instance, create one.
    $this->instances[$id] ??= $this->instantiateService($id);
    
    // Return the instance.
    return $this->instances[$id];
  }
  
  private function instantiateService(string $id) {
    $config = $this->config[$id];

    if (is_callable($config)) {
      // The config is a factory function.
      return call_user_func($config);
    }

    if (is_string($config)) {
      // This is an alias.
      return $this->get($config);
    }

    // Get the listed dependencies from the container.
    $args = array_map(
      fn($dependency) => is_callable($dependency)
        ? call_user_func($dependency)
        : $this->get($dependency),
      $config
    );

    return new $id(...$args);
  }
}

I’ll leave you with this last comment. You’ll note that our simple container still adheres to the Psr\Container\ContainerInterface interface. When I’m building a service that needs a container, I’m depending on this interface, not my specific container. The only part of Smolblog that really cares about how the container works is this configuration. And because this configuration is itself so simple, I could adapt it to a different container if and when I need to.

Which is really the whole point of this exercise: loosely couple things together. Using standard interfaces and a dependency injection container means that many of the key libraries Smolblog depends on can be swapped out. And that includes the container itself.

Thanks for reading; I’ll see y’all next time.


Use Laravel’s Illuminate Database Query Builder With WordPress

I’ve been working on Smolblog, a social web blogging app. To help me get to a minimally viable product sooner, I’ve been building it on top of WordPress. However, WordPress is built exclusively for the MySQL database, and I eventually want Smolblog to work with many different databases, especially SQLite. This means, for my own code, I need to abstract the database away.

The first pass I had at this was to simply have Query objects and services to handle those. This would effectively abstract away the entire data layer, making it completely system-agnostic. It wouldn’t even need to be a traditional database. But as I built this system out, I was making more and more assumptions about what the database and data code would look like. And while the database code was all abstracted away, I still had to write it. A lot of it. And every line I wrote using $wpdb was another line I’d have to rewrite someday.

I’ve been looking at other frameworks to use, and Laravel is by far the strongest contender. Their approach to dependency injection and services seems to line up well with how I’ve organically built Smolblog to this point. So when I found out that their database abstraction layer also included a way to use the library without taking on the entire Laravel framework, I decided to make “someday” today.

Prerequisites

  • Composer: While you can use this library without using Composer, it’s very much not recommended. That being said, if you’re using this in a plugin for general use or otherwise don’t have control over your entire WordPress environment, be sure to use Mozart or some other tool to isolate the namespaces of your dependencies.
  • Populated database constants: Some of the more modern WordPress setups use a connection string or other way to connect to MySQL. I didn’t find a way to get that information out of the $wpdb global, so this code relies on having DB_HOST and the other constants from wp-config.php defined.
  • PDO::MySQL: Illuminate DB uses PDO to handle databases, so you’ll need to make sure your PHP server has the PDO::MySQL extension installed. I’m using the official PHP image, so I needed to add these two lines to my Dockerfile:
RUN docker-php-ext-install pdo_mysql  
RUN docker-php-ext-enable pdo_mysql

Step 1: Dependency Injection

We’re going to use dependency injection to separate creating the database connection from using the database connection. This way the database connection can change without as much code changing.

The documentation for Laravel’s query builder involves calling their DB facade, a global class that calls a singleton instance. Digging through the documentation and code, it looks like the underlying class conforms to the Illuminate\Database\ConnectionInterface interface. So that’s what we’ll use in our service’s constructor:

use Illuminate\Database\ConnectionInterface;

class ContentEventStream implements Listener {
	public function __construct(
		private ConnectionInterface $db,
	) {
	}
}

Inside the service, we’ll follow the documentation, replacing any use of the DB facade with our $db object:

$this->db->table('content_events')->insert(['column' => 'value']);

Step 2: Connection Factory

Now that we know what we need, we need to create it.

The README for the Illuminate Database package has good starting instructions. We’ll combine those with data from wp-config.php and $wpdb:

use Illuminate\Database\Capsule\Manager;
use Illuminate\Database\ConnectionInterface;

function getLaravelConnection(): ConnectionInterface {
	global $wpdb;

	$capsule = new Manager();
	$capsule->addConnection( [
		'driver' => 'mysql',
		'host' => DB_HOST,
		'database' => DB_NAME,
		'username' => DB_USER,
		'password' => DB_PASSWORD,
		'charset' => DB_CHARSET,
		'prefix' => $wpdb->prefix,
	] );

	return $capsule->getConnection();
}

(As mentioned, we’re pulling the connection information straight from configuration. If you know how to get it from $wpdb, let me know!)

The prefix property on the connection works much the same way as WordPress' table prefix. Since we’re using the connection object to also build our queries, it will add the prefix to our queries automatically. Using this property will also use the correct tables for blogs in multisite, so data from one blog doesn’t leak into another.

For Smolblog, I only want one set of tables regardless of multisite. I also want to prefix the Smolblog-specific tables, mostly so they’re all in one place when I’m scrolling. So my prefix property looks like this:

$capsule->addConnection( [
	// ...
	'prefix' => $wpdb->base_prefix . 'sb_',
] );

Because I don’t want a global object or the Eloquent ORM, I can ignore the rest of the setup from the project README.

Finally, we’ll want to store this created object somewhere central. Smolblog uses a simple dependency injection container, so we’ll store it there. The first time a service that needs a database connection is created, the container will run this function and provide the object.

(Honestly, the container probably deserves a blog post of its own; you can look at the source code in the meantime.)
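
In practice, that means a factory entry in the container configuration keyed to the interface, something along these lines (a sketch in the style of the configuration array my container uses; your container’s registration API may differ):

use Illuminate\Database\ConnectionInterface;

$config = [
	// ...other services...
	ConnectionInterface::class => fn() => getLaravelConnection(),
	ContentEventStream::class => [
		'db' => ConnectionInterface::class,
	],
];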

Step 3: Update the Schema

We have our code to build queries. We have our connection to the database. The only thing we need now is the actual tables for the database.

Here is where we can use WordPress to its full extent. We will be using the dbDelta function in particular. This will tie into WordPress' existing system for updating the database structure alongside WordPress itself.

Some plugins tie this migration code to an activation hook, but we want to be able to modify the tables even after the plugin is activated. So our process will look like this:

  1. Loop through the different tables we will need.
  2. Check the blog options for a schema version.
  3. If the version matches what we have in code, we’re up-to-date. Skip to the next table.
  4. Pass the new table schema to dbDelta and let it do its thing.
  5. Save the schema version to blog options.
  6. Rinse and repeat for each table.

At this point, I should bring up some of the caveats with the dbDelta function. The comments on the WordPress documentation are invaluable here, especially as they point out a few things that need to be consistent with our schemas.

Because there are so many things that need to be consistent, we’ll isolate the unique parts of our table schemas to two things:

  1. A name. Because every table needs one. We will declare it without the prefix.
  2. The fields excluding the primary key. We can have UNIQUE indexes on other fields for a similar effect, but every table will have an auto-incrementing id field.

A series of values keyed to short strings? That sounds like an array! Here’s part of what Smolblog’s schema array looks like:

class DatabaseHelper {
	public const SCHEMA = [
		'content_events' => <<<EOF
			event_uuid varchar(40) NOT NULL UNIQUE,
			event_time varchar(30) NOT NULL,
			content_uuid varchar(40) NOT NULL,
			site_uuid varchar(40) NOT NULL,
			user_uuid varchar(40) NOT NULL,
			event_type varchar(255) NOT NULL,
			payload text,
		EOF,
		'notes' => <<<EOF
			content_uuid varchar(40) NOT NULL UNIQUE,
			markdown text NOT NULL,
			html text,
		EOF,
	];

	public static function update_schema(): void {
		foreach ( self::SCHEMA as $table => $fields ) {
			self::table_delta( $table, $fields );
		}
	}

	//...
}

A brief aside: Smolblog uses UUIDs for its unique identifiers, and they’re stored here as full strings in fields ending with _uuid. I ran into trouble storing them as bytes, and something in WordPress would frequently mess with my queries when I had fields named things like user_id and site_id. I’m noting this here in case you run into the same things I did.

When WordPress loads the plugin, it will call the update_schema function declared here. That function loops through the array, extracts the table name and fields, and passes them to this function:

public static function table_delta( string $table, string $fields ): void {
	global $wpdb;

	$table_name      = $wpdb->base_prefix . 'sb_' . $table;
	$charset_collate = $wpdb->get_charset_collate();

	$sql = "CREATE TABLE $table_name (
		id bigint(20) NOT NULL AUTO_INCREMENT,
		$fields
		PRIMARY KEY  (id)
	) $charset_collate;";

	if ( md5( $sql ) === get_option( $table . '_schemaver', '' ) ) {
		return;
	}

	require_once ABSPATH . 'wp-admin/includes/upgrade.php';
	dbDelta( $sql );

	update_option( $table . '_schemaver', md5( $sql ) );
}

This function takes care of the boilerplate we talked about earlier and runs the steps:

  1. It creates the table name using the same pattern as before: the base prefix plus sb_.
  2. It creates a CREATE TABLE SQL statement using the table name and fields. (It’s okay to build a SQL query this way because all of the data is coming from constants inside the PHP file; none of it is coming from form data or other untrusted sources.)
  3. It takes the MD5 hash of the SQL statement and compares that to the saved option for this table. The hash will change when the code changes, so this is a quick way to keep our code and database in-sync.
  4. If the database needs to be updated, it requires the correct file from WordPress Core and runs the dbDelta function.
  5. Finally, it saves the MD5 hash to the blog options so we know what version the database is on.

By calculating the version using the hash of the actual SQL, we don’t have to worry about whether some other version number has been updated. This may or may not be the approach you want to take in a production application, but it has proven very useful in development. This is the same idea as using the filemtime function as the “version number” of static CSS and JavaScript in your theme.


So there we have it. We’ve used the connection information in WordPress to hook up a Laravel database connection. And at some point in the future, it’ll be that much easier to let Smolblog work with SQLite, which will in turn let Smolblog work on even more web hosts. And you can use this to do whatever you want! Maybe you just wanted to transfer some skills from Laravel to WordPress. Maybe you’re just in it for the academic challenge.

One thing you can do with this is unit-test your services using an in-memory SQLite database… and I’ll leave you with that.

final class DatabaseServiceUnitTest extends \PHPUnit\Framework\TestCase {
	private \Illuminate\Database\Connection $db;
	private DatabaseService $subject;

	protected function setUp(): void {
		$manager = new \Illuminate\Database\Capsule\Manager();
		$manager->addConnection([
			'driver' => 'sqlite',
			'database' => ':memory:',
			'prefix' => '',
		]);
		$manager->getConnection()->getSchemaBuilder()->create(
			'content_events',
			function(\Illuminate\Database\Schema\Blueprint $table) {
				$table->uuid('event_uuid')->primary();
				$table->dateTimeTz('event_time');
				$table->text('payload');
			}
		);

		$this->db = $manager->getConnection();
		$this->subject = new DatabaseService(db: $this->db);
	}

	public function testItPersistsAContentEvent() {
		$event = new class() extends ContentEvent {
			public function __construct() {
				parent::__construct(
					id: Identifier::fromString('8289a96d-e8c7-4c6a-8d6e-143436c59ec2'),
					timestamp: new \DateTimeImmutable('2022-02-22 02:02:02+00:00'),
				);
			}

			public function getPayload(): array {
				return ['one' => 'two', 'three' => 'four'];
			}
		};

		$this->subject->onContentEvent($event);

		$expected = [
			'event_uuid' => '8289a96d-e8c7-4c6a-8d6e-143436c59ec2',
			'event_time' => '2022-02-22T02:02:02.000+00:00',
			'payload' => '{"one":"two","three":"four"}',
		];

		$this->assertEquals((object)$expected, $this->db->table('content_events')->first());
		$this->assertEquals(1, $this->db->table('content_events')->count());
	}
}

Building Smolblog: Separation

My first exposure to the SOLID principles for object-oriented programming came from the senior developers at the WordPress agency I was working at. It was a bit of a big change to how I had normally approached object-oriented programming. I was used to objects encapsulating broad swaths of functionality related to a particular thing. SOLID demanded that my objects be smaller, more focused, and much more numerous.

SOLID was hard for me to get the hang of, and in reality, I never got the hang of it while I was working for the agency. It felt like overkill: why should I create five different classes and boilerplate a ton of infrastructure just to end up calling register_post_type? Why go to all the trouble of dependency injection when the functions I need to call are right there?

A few months removed from agency work (and knee-deep in the Ruby world), I’m finally starting to get it. And in a way, I was right: it is overkill… if I’m only building a WordPress plugin.

But Smolblog is not only a WordPress plugin.

SOLID-ly Overkill

SOLID is an acronym which stands for five principles of object-oriented programming:

  1. Each class has a single responsibility.
  2. A class is open for extension but closed to modification.
  3. A class can be replaced by any of its subclasses according to the Liskov Substitution Principle.
  4. A class' interface is a promise to other classes to behave a certain way.
  5. A class should depend on abstractions, and those dependencies should be given to it rather than created by it.

Points 2, 3, and 4 are ideas I take for granted. Classes have public and private properties and functions, and while the private aspects can change as much as anyone wants, the public aspects are a contract with the rest of the program. And interfaces and inheritance are amazing ways to keep similar logic in one place and change things where they’re different. I learned this back in my C++ days. No big.

It was the first and last points that tripped me up, badly. My idea of a single responsibility was “This class handles everything to do with this thing.” The seniors said a single responsibility was “This class handles this one task.” I thought that sounded more like a function. I also struggled with dependency injection. What was the point of encapsulating logic inside of an object if you had to give the object the logic?

Trying to implement these principles just to create a post type simply wasn’t worth it. It made the code bigger and more complex than it needed to be. Combined with the fact that there were no real testing practices in place, trying to write fully-idealized SOLID code felt like all of the hassle and none of the payoff.

What the senior devs were aiming for was more than a couple of hooks; it was a future of much more complex functionality that would need to be picked up by a rotating squad of developers. It was a potential for writing tests on client-specific business logic.

SOLID principles aren’t overkill when you’re building an application; they’re essential.

Stop Trying To Do Everything

The first hurdle I had to get over was personal. I’m a people pleaser. I want to do everything for everyone all the time so that maybe people will like me. What I didn’t realize was that (toxic) idea had spread to my coding style: my “everything” classes were made in my own image.

I wanted to encapsulate logic into neat little packages that I could hide from the rest of the application. For example, I would want creating a new post to be (essentially) one line of code:

$post = new Post(
	id: null,
	title: 'My first blog post!',
	author_id: 5,
	content: 'This is so cool, I wish I knew how to blog.'
);

Behind the scenes, though, there would be too much happening:

class Post {
	function __construct(
		$id,
		$title,
		$author_id,
		$content,
	) {
		global $db;

		if (
			!isset($title) ||
			empty($title) ||
			!isset($content) ||
			empty($content) ||
			!isset($author_id) ||
			$author_id <= 0
		) {
			throw new Exception('Bad post data!');
		}

		if (isset($id)) {
			$db->query(
				"UPDATE `posts` SET `title`=?, `author_id`=?, `content`=? WHERE `id`=?",
				$title,
				$author_id,
				$content,
				$id
			);
		} else {
			$id = $db->query(
				"INSERT INTO `posts` SET `title`=?, `author_id`=?, `content`=?",
				$title,
				$author_id,
				$content
			);
		}

		$this->id = $id;
		$this->title = $title;
		$this->author = $db->query('SELECT * FROM `users` WHERE id=?', $author_id);
		$this->content = $content;
	}

	// Other helper methods and such...
}

This pretend class is written with these requirements:

  • Every Post object should correspond to something in the database.
  • Every Post object should have a title, author, and content.
  • Every Post object should have an ID; we can infer that a new post will not have an ID and get one when the post is created in the database.

Right off the bat, though, we’ve coded some big assumptions into our class:

  • The global $db object exists.
  • The global $db object has a query method.
  • Posts are stored in the posts table.
  • Authors are stored in the users table.

Here’s the thing that took me so long to grok: even though these assumptions are probably true now, they may not be true later. On some level I understood this, but I figured if that day came I would spend a day poring through the codebase making the necessary changes.

People pleaser, remember?

If we were to make this code more SOLID, we’d have a few different classes. First, we’ll pare down the Post class to just one responsibility: data integrity. If the class is given bad data, it should not create the object. So now our class can look like this:

class Post {
	function __construct(
		?int $id = null,
		string $title,
		Author $author,
		string $content,
	) {
		if (
			empty($title) ||
			empty($content)
		) {
			throw new Exception('Bad post data!');
		}

		$this->id = $id;
		$this->title = $title;
		$this->author = $author;
		$this->content = $content;
	}

	// Get/set methods...
}

Not only did we take out all the database code, we also added type hints to the constructor’s parameters. This way, PHP itself can check if title, author, and content are set and throw an error if not.

Saving $post to the database and turning some author_id into an Author object with data are not the responsibility of a Post.

Creating a Dependency

Let’s go back to our hypothetical post creation and put that code in context. We’ll say we’re getting a request through the API to create a new post. With our old do-everything Post class, that endpoint class could look like this:

class NewPostApiEndpoint {
	public function __construct() {}

	public function run(WebRequest $request) {
		$post = new Post(
			id: $request['id'] ?? null,
			title: $request['title'],
			author_id: $request['author_id'],
			content: $request['content'],
		);

		return new WebResponse(200, $post);
	}
}

Short, sweet, and to-the-point. Beautiful. Except now we know what horrors once lay beneath that innocuous new Post call. We could bring all those database calls into our endpoint class, but that wouldn’t fix the underlying issue: what happens when the database needs to change?

Really, the first question we should ask is, “What is the responsibility of NewPostApiEndpoint?” Our short-and-sweet class helps us answer that question: to save a Post with the data from the web request.

What’s not included: knowing how the Post is stored. “But we know it’s a database!” Yes, we know it’s a database; the class should only know what it needs to do its job. So let’s start writing our new endpoint but leave comments where we have missing information:

class NewPostApiEndpoint {
	public function __construct() {}

	public function run(WebRequest $request) {
		$post = new Post(
			id: $request['id'] ?? null,
			title: $request['title'],
			author: // TODO We have author_id, need object
			content: $request['content'],
		);

		// TODO Have post, need to save

		return new WebResponse(200, $post);
	}
}

We’ve identified two outside responsibilities: getting an Author object and saving a Post object. Those sound like single responsibilities to me!

Here’s where the power comes in: our endpoint object doesn’t need a specific object for these jobs, just an object that can do the job. So instead of writing new classes, we’ll create two interfaces:

interface AuthorGetter {
	public function getAuthor(int $author_id): Author;
}

interface PostSaver {
	public function savePost(Post $post): void;
}

Now that we have those interfaces defined, we can finish our endpoint:

class NewPostApiEndpoint {
	public function __construct(
		private AuthorGetter $authors,
		private PostSaver $posts,
	) {}

	public function run(WebRequest $request) {
		$post = new Post(
			id: $request['id'] ?? null,
			title: $request['title'],
			author: $this->authors->getAuthor($request['author_id']),
			content: $request['content'],
		);

		$this->posts->savePost($post);

		return new WebResponse(200, $post);
	}
}

And that’s Dependency Injection in a nutshell! Cool, right?

Yeah…except, again, we’ve only moved the complexity. We still have to make those database calls at some point. And when it comes time to finally assemble the application, we have to keep track of what classes have which dependency, and… ugh.

Can’t See the Trees For the Forest

This was my other problem with the SOLID principles: how is Dependency Injection supposed to actually make things easier? I understood the idea of passing in what an object needs, but I got overwhelmed trying to picture doing that for an entire application. Trying to keep track of all the dependencies for an object also meant keeping track of those dependencies’ dependencies, and it didn’t take long for the infinite recursion to crash my brain.

What I failed to grasp was that knowing objects' dependencies counts as a single responsibility. So why not let computers do what they do best?

The established pattern here is known as a Dependency Injection Container. The PHP Framework Interop Group has an established interface for these containers that is widely accepted across the industry. These objects store a mapping of classes and dependencies and create properly-initialized objects.

To complete our example, we’ll use the Container package from The League Of Extraordinary Packages:

use League\Container\Container;

$container = new Container();
$container->add(NewPostApiEndpoint::class)
          ->addArgument(AuthorGetter::class)
          ->addArgument(PostSaver::class);

// Later on...

$newPostEndpoint = $container->get(NewPostApiEndpoint::class);

And that’s pretty much it! We set up our classes to accept the dependencies they need, then we set up the container to get those dependencies to the classes. If those dependencies have dependencies, the container will take care of them too.

The only thing we have left to do is actually set up our two dependencies. We added the interfaces as arguments, but we haven’t given any concrete implementations to the container. We’ll skip writing out those classes and just show how it could work here:

$container->add(AuthorRepo::class)
	->addArgument(DbConnector::class);
$container->add(PostRepo::class)
	->addArgument(DbConnector::class);

$container->add(AuthorGetter::class, AuthorRepo::class);
$container->add(PostSaver::class, PostRepo::class);

This tells our container that anything that depends on AuthorGetter should be given an instance of AuthorRepo, and anything that needs PostSaver should be given PostRepo.

But why go to all this trouble? We’ve taken a few lines of code and spread them out over three classes (five if you count the interfaces) and introduced an entirely new library to our code. While the individual pieces of the code may be easier to follow, the flow of code through the entire application is now much more complex. What does all this extra work actually get us?

Know What You (Don’t) Need

I could say “it makes things easier to change” and leave it at that. By isolating different responsibilities in the code, it’s easier to find and change that specific code when circumstances change (which they inevitably will). But this truth can be hard to visualize when those changes seem unlikely.

I’m writing about these principles because I’m using them to rewrite Smolblog. The version currently running here is a WordPress plugin top-to-bottom, albeit with some efforts at SOLID principles. It gets the job done, but it doesn’t feel maintainable. There’s a lot of code that should be abstracted out, but I didn’t see a good way to.

For the rewrite, my guiding principle was “leave your options open.” I wasn’t sure what was going to be best for Smolblog in the long term despite feeling very sure that WordPress was the best option in the short term. I didn’t want to box Smolblog into using WordPress longer than it needed to, but I also didn’t want to bet on a PHP framework only to discover it was a bad fit and have to rewrite large swaths of code again.

About a month into the rewrite I realized I had stumbled backwards into SOLID programming. In order to isolate my code from the outside platform, I had to write interfaces for anything I needed from the platform. I started with several overly-complicated setups before everything finally clicked enough to…

Well, it’s clicked enough that I finally feel confident enough to write this blog post and introduce you to Smolblog\Core. This is where all of the Smolblog-y logic will be that makes Smolblog what it is. And the single biggest change is that while the code still depends on an outside platform, that platform doesn’t have to be WordPress.

There’s more complexity here, sure. Version one has just under 1600 lines of code while Smolblog\Core has 1800, and it can’t do everything version one can yet! But with that added complexity comes a better separation of concerns.

In the container example above, I noted that we could define interfaces in one area then set the concrete implementations in another. That’s the principle I’m using to keep the WordPress-specific code separate from the core code. This way, the WordPress-backed PostSaver class might look something like this:

class WordPressPostSaver implements PostSaver {
	public function savePost(Post $post): void {
		wp_insert_post( [
			'ID' => $post->id ?? 0,
			'post_title' => $post->title,
			// et cetera...
		] );
	}
}

One that used the database directly could look more like this:

class DatabasePostSaver implements PostSaver {
	//...
	public function savePost(Post $post): void {
		$this->db->query(
			"INSERT INTO `posts` SET `title`=?, `author_id`=?, `content`=?",
			$post->title,
			$post->author->id,
			$post->content
		);
	}
}

And because of the abstractions we’re using (namely the PostSaver interface), nothing else has to change.
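
Swapping between them is then just a matter of telling the container which concrete class to hand out, something like this with the same League Container calls from earlier:

// Running on WordPress:
$container->add(PostSaver::class, WordPressPostSaver::class);

// Running against a database directly:
// $container->add(PostSaver::class, DatabasePostSaver::class);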

Smolblog is still being built on top of WordPress. This time, though, all of the WordPress-specific code is in its own project. All of the WordPress idioms and functions get passed through their own set of classes to match the interfaces that Smolblog requires.

Now, instead of different WordPress functions being sprinkled throughout the project, they’re centralized and cataloged. We know what Smolblog can do and what it needs WordPress (or some other framework) to do.

SOLID-ly understood

I genuinely think part of the reason SOLID never clicked for me at the agency was the simple fact that we were always going to be using WordPress. I didn’t see a difference between using a WordPress function and using a built-in PHP function; both were always going to be there, so why bother isolating them as a dependency? Now that I’m working on a project—a big one!—that doesn’t have that constraint, I’m beginning to see the value even if we were staying with WordPress.

I still maintain that if you know the plugin will never be more than a few calls to WordPress-specific functionality, like custom post types and taxonomies, then it’s best to use some namespaces and just get the job done. But I should also admit that it’s taken a lot of experience to know which is which.

It’s not lost on me that at some point in this project I’ll have 90% of a framework for using these principles in a WordPress plugin, if not 90% of a framework in general. Combining these principles with Domain-Driven Design and Command Query Responsibility Segregation almost guarantees it… but that’s another blog post.

For now, I’ll just go ahead and admit it: y’all were right. As usual.


Looping Background Music With Intro in Godot Engine

Title is a mouthful, right? But this is something I've been trying and failing to find a clear answer to for months. Now that I've found the answer, time to take Sal's advice and write a blog post.

The Basic Idea

You're making a video game. You're using Godot because you like open source and free stuff. You've got some kick-awesome background music because, well, reasons. And while you can get a good loop out of the music, you'd still like to have an introduction to the music.

Background music with an intro is pretty standard in most video game music. Take a listen to "Green Hill Zone" from... well, every Sonic game ever. But particularly Sonic Mania.

https://www.youtube.com/watch?v=8lkTRBywniw

Notice how when the music loops at 0:53, it doesn't go all the way back to the beginning but instead loops from partway into the music? This helps set the stage for the level's atmosphere and provide a more natural feel to the music.

It's also a feature we've come to expect as players, so if you can do it in your game it's a good idea. So how do we get it in Godot?

Preparing Your Music

For my game, I'm using "1977" by Adam Young as the background music. I will, of course, need to replace it with something officially licensed (or original) for the finished product. I used my copy of Ableton Live to process the audio in order to get a clean-ish loop and saved it as an Ogg Vorbis file to be imported into Godot.

Import into Godot

The key info for the loop is kept in the file's import settings. We want the file to loop (obviously), and we want to set the loop offset to the point where we want the looping portion of the file to begin. This value is in seconds!

Now for the unintuitive part: do not auto-play the music.

Selecting Autoplay on the AudioStreamPlayer node will start playback at the loop offset, which will skip the intro. That's the exact thing we're trying to avoid!

Instead of autoplaying, we need to add a line to the root node's script:
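
Something like this, assuming the AudioStreamPlayer is a child node named BackgroundMusic:

func _ready():
	# Play from the very start of the file; the loop offset
	# only kicks in once the track loops back around.
	$BackgroundMusic.play(0.0)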

What's the difference? We're using the play method in the AudioStreamPlayer class. This method uses an optional parameter to indicate where to start playing from. We want to start playing from the beginning, so we pass 0 to the method.

The end result should be background audio that starts playing at the beginning of the file but loops over only one specific part of it.

Hope this helps! Leave a comment if you've got feedback.


Retroactively Sign Git Commits

It's a classic situation. You're contributing to a project; you've filed your pull request and gotten it approved by your peers. You're all ready to merge your code and add your humble contribution to the project when, out of nowhere, GitHub gives you this:

GitHub error: Merging is blocked: The base branch requires all commits to be signed.

How do you go about doing this, especially when you've already committed your work and pushed it to the server? How do you retroactively sign your Git commits? We'll do this in six steps:

  1. Gather Information
  2. Install GPG
  3. Create or use a key
  4. Set up Git to sign commits using GPG
  5. Rebase your commits
  6. Overwrite your branch with your newly signed commits

There's a lot to unpack. Also, these instructions are for macOS; Windows and Linux users may have different commands.
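
To give you a taste, the heart of steps 4 through 6 boils down to something like this (assuming your signing key ID is ABC123 and your branch is based on origin/main; the full article walks through each step and its pitfalls):

# Step 4: tell Git which key to use and to sign commits by default.
git config user.signingkey ABC123
git config commit.gpgsign true

# Step 5: rebase your branch, re-signing each commit along the way.
git rebase --exec 'git commit --amend --no-edit -S' origin/main

# Step 6: overwrite the remote branch with the newly signed commits.
git push --force-with-lease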

Read more on WebDevStudios.com »


Fixing oEmbed With a Custom Provider

The story of my blogging career over the past few years could be summed up as, "How I Learned to Stop Worrying and Love the Reblog." There's something that's so much fun about finding something cool and sharing it on my blog without worrying about saying something poignant or ultimately meaningless in an effort to "add to the conversation." Sometimes it's enough just to amplify the conversation, or share that video.

This is why I'm so excited about oEmbed. It's a standard API that takes a given link and turns it into an embeddable… thing. It's most commonly known as "that thing that turns a YouTube link into a video," but the underlying technology can be used for any web page. This includes sites like Reddit, Twitter, and Imgur, as well as blogging sites like Tumblr and, yes, WordPress.

Read the rest on WebDevStudios »


Hashing a Custom Taxonomy

I have a confession. I like Pokémon.

It'd really be more accurate to say I never stopped liking it. Sure, there was that point when it wasn't "cool" anymore and I was too busy trying to be "cool," but this is the kind of thing that just sticks with me. So, naturally, I've built up a collection of Pokémon cards. And what does someone like me do when they have a collection of something?

Build a WordPress-based database site for cataloging it, of course! This is where hashing a custom taxonomy comes into the picture.

Read more on WebDevStudios.com »