Just stop, Apple.

I’m with John Gruber on this one:

Whatever revenue Apple would lose to non-commissioned web sales (for non-games) is not worth the hit they are taking to the company’s brand and reputation — this move reeks of greed and avarice — nor the increased ire and scrutiny of regulators and legislators on the “anti-Big-Tech” hunt.

I find it incredibly ironic that the flag allowing—allowing!—an app to have an external payment link is called an “entitlement.”1 Apple is a company with a valuation matched only by oil companies, and here they are reaching for 12-27% of app developers’ revenue even when developers use their own payment and purchase-tracking systems. And all this still requires apps to offer in-app purchases; external payments can only be a second option.

I want to blame some of this on Wall Street. The stock market expects infinite growth, and while Apple’s profits are legendary, growth has slowed. This is why Apple has been so keen to start services like Apple TV+ and Fitness+. Why they keep pushing iCloud+ and Apple Music even at the expense of the user experience. Why they didn’t want the details of their search deal with Google getting out.

But the more I think about it, the more I worry that this is just in Apple’s character. This is still very much the company that got pantsed by Microsoft in the 1980s and refuses to forget. Now that they’re on top of the world, I worry that they don’t know how to stop. And they need to.

Apple is certainly entitled to reap the rewards of its hard work. As John Gruber (again) put it:

Apple’s 30/15 percent commissions from App Store purchases and subscriptions are not payment processing fees. They include payment processing fees, but most of those commissions are, in Apple’s view, their way of monetizing their intellectual property. And they see the entire iOS platform as their IP.

We all pay for the phones. Developers pay $100/year to get and stay on the store. We give a 15-30% commission to tie into the App Store’s payment infrastructure (which includes purchase management and verification, asset hosting, and a bunch of other stuff that makes it definitely worth it for my little app). But this demand for 12-27% of outside-app purchases just because? Grow up.


  1. (Yes, I know all app permissions like this, including ones for iCloud and Health data, are called entitlements. Go bug Mike Trapp.) ↩︎


Building Smolblog: Open

I’ve been meaning to blog more as I’ve been working on the actual Smolblog code. And, with one of my other side projects finally shipping, I feel like I can start putting down some thoughts here. So here I am.

And the first thing I want to talk about isn’t code at all; it’s about what specific words mean. I want to start with something that isn’t a programming problem. It’s really easy for us developers to try to solve all sorts of problems with code, but while well-built software in the right hands can do amazing things, the biggest problems we will solve are social, not technical.

So when I say I want Smolblog to be “open,” this is a question that is more social than technical.

You Know What Else Means Open?

Lots of things like to say that they are “open.” Google repeatedly calls Android open. Epic Games has been called “champions for a free and open internet.” Cryptocurrency and blockchain projects are often touted as decentralized and open. And WordPress, the system that Smolblog is currently using as a foundation, is famously open. But these all mean different things.

Epic Games advocates for free and open systems where anyone can install anything they want, especially their own Epic Games Store. That store, at least currently, does not have the freedom for anyone to sell whatever they want. By the same token, Android is a freely downloadable project that can be used by any phone manufacturer, but it is heavily tied to the Google Play store that has its own approval process. And while anyone can get into cryptocurrency and make transactions, the resources required to actually participate in “mining” on popular blockchains are prohibitive to all but a few.

So when I say Smolblog is open, what do I mean? How about this:

Smolblog’s Definition Of Open

  1. The Smolblog software is freely available to use, modify, and share.
  2. Interactions do not require blogs or users to be on the same instance of the Smolblog software.
  3. Users can reasonably expect to take their data from one instance of the Smolblog software to another with no change in functionality.

A Brief Aside About Free and/or Open Source

The first point is one well-known in the software world. It corresponds to the freedoms championed by Free Software and Open Source advocates. Though the two groups have philosophical differences, they agree in practice: software should be free to use, free to change, and free to share (both modified and unmodified).

This model is often found in libraries, frameworks, and infrastructure for web apps. Most web apps are written in scripting languages where there is no way to run the app without having the source code. And as companies base more and more of their existence on the web, the level of control that freely usable and modifiable software provides is essential.

While the source code is available for free, and anyone can search on their preferred search engine for help, companies with the budget to do so often buy official support from the vendor. Vendors often also provide fully-hosted versions of their products as a subscription offering. Discourse and Gitlab are two examples of projects like this.

This approach hasn’t worked for everyone, though. Elasticsearch used to be an open source project with an official hosted solution. However, in the mid-2010s, their paid hosting was undercut by other vendors that offered the open source project on their own systems instead of Elastic’s. Elastic eventually changed their license to prohibit this, but in doing so violated the “freedom to use.”

While I don’t envy Elastic (and other similar companies) for the decisions they had to make, it highlights the key tradeoff of Free Software: the freedoms apply to everyone, including competitors. If Smolblog is going to be an open system, it has to be open for everyone. Any plan to make money from Smolblog has to take this into account.

How Do We Want To Do This?

First, some technical background. Smolblog is currently using WordPress as its foundation. I use those specific words because while Smolblog currently exists as a WordPress plugin, it is being built as its own product. Not everything in WordPress may be used or supported by Smolblog in the long term, but by making use of WordPress Smolblog is able to be a complete product sooner.

So, for our definition of open, we have three basic pillars: software, interactions, and data. Let’s tackle them in reverse order.

Open Data

This is a technical problem, and a relatively easy one at that. Most systems and web apps have a way of exporting a user’s data for download. This has been helped along by privacy laws in some parts of the world.

Smolblog will need a feature that allows users to download their data in a standard format, and a matching feature that allows them to upload that export to another instance.

This feature should be as self-contained as possible. The downloaded export should contain everything needed to load the data into a new server with minimal setup. This includes not just posts and images but also information on connected social media accounts and plugin-specific data. Another Smolblog server should be able to take this downloaded export and re-create the user’s data from it.
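
To make that concrete, here is a minimal sketch of what assembling such an export could look like. This is not Smolblog code, and every field name is a placeholder I made up; the point is only that posts, media, social connections, and plugin data all travel together in one self-contained file.

```python
import json

# Hypothetical sketch of a self-contained export. None of these field names
# are final; they only illustrate the idea that everything a user created
# travels together in one file.
def build_export(user):
    return {
        "export_version": 1,
        "profile": {
            "username": user["username"],
            "display_name": user["display_name"],
        },
        "posts": user["posts"],                     # full post content, not excerpts
        "media": user["media"],                     # original files or stable URLs
        "connections": user["social_connections"],  # linked accounts (tokens omitted)
        "plugin_data": user["plugin_data"],         # namespaced per plugin
    }

def write_export(user, path):
    with open(path, "w", encoding="utf-8") as f:
        json.dump(build_export(user), f, indent=2)
```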

By making this feature robust, it would provide end-users the freedom to leave a server for whatever reason they need, whether social, technical, or financial. It would also provide server maintainers the social freedom to remove unwanted users: with easy data portability, removing a user becomes less a case of “freedom of speech” and more a case of “should this speech be on this platform?”

WordPress currently has basic functionality in this area, but based on my time in a professional WordPress agency, it lacks the robustness this feature would require.

Open Interactions

Smolblog is intended to be as personal as a blog and as fun as social media. Part of social media’s appeal is the ease of interactions between people, such as replies or likes.

Smolblog features involving interactions will need to work identically whether the blogs are on the same server or different servers. No core features should rely on a central server.

The clearest example I can give of this is email. No single company “owns” email. Email works the same whether a user is on Gmail, Outlook, or iCloud (extensions, plugins, and other add-ons notwithstanding). Most importantly, emails can be sent between users on the same server (bob@gmail.com to alice@gmail.com) or users on different servers (bob@outlook.com to alice@icloud.com).

Social interactions on Smolblog need to work the same way. A blog on smolblog.com needs to be able to interact with a self-hosted blog (say, oddevan.com) just as easily as another blog on smolblog.com. We don’t know what these interactions will look like yet, but this will be a requirement.

Some interactions, like following and reblogging, can be handled through existing standards like RSS/JSON Feed and oEmbed. This can open these features beyond Smolblog and extend Smolblog’s “openness” to other sites and apps.

Open Source™

This is more than just making the source code available. To embrace this as a principle and not just a bullet-point, Smolblog needs to not only have an Open Source license but be written in a way that is truly open.

The majority of the Smolblog project will be released with a strong copyleft license. Exemptions to this can be made in the interest of supporting the project and its openness.

I see three tiers to this:

Tier One: Copyleft through the GNU Affero General Public License

The Affero General Public License (AGPL) is possibly the strongest (or most restrictive) open source license. It requires the full source code of the application to be made available for sharing and modification to all users of the application, including users that only use it as a web app. It is called a “copyleft” license because any changes or derivative works must also be covered by the AGPL. For most cases, this will ensure that a Smolblog user can get not just the “official” source code but the source code to the specific server they are on.

WordPress currently uses an older copyleft license that provides most of these freedoms, but there is one key exception: code for a web app is never “distributed” to its users, only to those running the server. Automattic, the company behind WordPress, is able to use this exception to make products built on WordPress, like P2, exclusive to their own services. While they say they are committed to data portability and open source (and they have been), the Elasticsearch feud has shown that many companies will do everything they legally can.

We want to keep Smolblog and any Smolblog-derived products from falling into this trap. The AGPL provides legal coverage for this.

Tier Two: Permissive through the Apache License

Licenses that do not require derivative works to be covered by the same license are sometimes called “permissive” licenses. These are especially useful for libraries and frameworks since they can be used by developers in commercial or private projects without involving the company lawyers.

Some of the code written for Smolblog will have a general purpose outside of the project. These could include tools for working with a social media site’s API, a library for putting data into a standard format, or a framework that enables a particular programming style. As part of being in a community of developers, sharing this code with a permissive license will enable Smolblog to benefit people beyond its users.

The Apache License is the permissive license of choice here because it includes explicit permissions and rights related to software patents.

Tier Three: Proprietary through individual commercial licenses

Wait, what? Hear me out.

This comes back to the definition of “open” I laid out at the beginning. Smolblog being open means data portability and decentralized interactions as much as it means Open Source. Of those three principles, Open Source is the one least valuable to the average user (despite being necessary for the other two). There may be times when compromising a little on Open Source can enable uses for Smolblog that make it useful to even more people.

I don’t expect these situations to manifest anytime soon, if ever. But putting this option on the table now means that anyone contributing to Smolblog’s source code is aware of it and can agree to it. Asking contributors to assign full copyright to their contributions, while reasonable, has the potential for abuse. Instead, I would prefer that any contribution agreement for Smolblog list the ways the contribution can be used.

One benefit to commercial licenses is being able to custom-tailor them to each business. For example, say a hosting business wants to offer managed Smolblog hosting. Their “secret sauce” is a caching layer that requires a custom-built plugin. This plugin wouldn’t enable any user-facing features, and it would not work without the host’s custom software. This business could get a commercial license limited to their integration code that would exempt their plugin from the AGPL requirements in exchange for a commission on their Smolblog service.

I chose these two examples deliberately. Licensing Smolblog under the AGPL is intended to prevent someone from building a product or feature locked to a specific provider. Users of Automattic’s P2 cannot move to a different WordPress install and keep the same experience; the data is not truly portable in that sense. The hosting company example does not involve any impact to true data portability or use, since the user experience (and the data created by the users) is indistinguishable from the main project. The openness of Smolblog is not impacted in any meaningful way, and the project gets a source of funding that is not dependent on user-hostile advertising.

But as I said, this is all philosophy. None of it matters until Smolblog is actually built. And so we build. You’re welcome to join along.

Take care of each other; I’ll see you next time.


Technology Cannot Make a Platform, But It Does Help

The web literally exists to share content. The first web browser was also a web editor. And ever since then, programmers have been working on ways to make publishing easier and better. As such, there’s no shortage of existing technologies that a new platform can build off of.

A brief aside about the nature of technology and its place as a part of a whole

It’s easy to think that the right technology will change everything. That somehow, the right code will make all the problems with Old Blue go away and we will live happily ever after in our new paradise.

It’s easy to forget that Posterous existed around the time of Old Blue’s ascendancy. It was blessed with better technology, including a dedicated URL shortener and the ability to post via email. Old Blue arguably had inferior technology. But it won. The right technology came together with the right design and the right people at the right time, and the lightning in a bottle struck.

It takes more than good technology to change things. It takes good design, good timing, and a good understanding of the problems being solved. But the right technology can enable change. And as we talk about the technologies that can enable a new platform, it’s important to remember this.

The Interface Is Hot

So, for this essay, let’s look at some interfaces. These are also called “protocols” or “standards.” The general idea here is a group of people have written down, in technical language, how a thing should be accomplished. The most obvious of these would be the HTTP standard that governs how web browsers and servers talk to each other.

We’re not talking about code yet, just the ways we can use it.

oEmbed

This is what turns https://www.youtube.com/watch?v=dQw4w9WgXcQ into the embed code that makes all your friends hate you. It involves a few steps (there’s a rough code sketch after the list):

  1. Blog gets URL from user.
  2. Blog looks up oEmbed endpoint for the URL, either
    • Matching the URL to a list of known endpoints, or
    • Looking for a particular link tag in the page’s head.
  3. Blog hits the oEmbed endpoint and gets back the code required to embed the content from the URL into a page.
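
As an illustration of steps 2 and 3 (and not code from any real oEmbed consumer), the discovery-based flow might look something like this in Python. The regular-expression parsing is deliberately simplified; a real implementation would use a proper HTML parser and also consult the known-endpoints list.

```python
import re
import requests

def discover_oembed_endpoint(url):
    """Step 2: look for the oEmbed <link> tag advertised in the page's head."""
    html = requests.get(url, timeout=10).text
    # Deliberately simplified: matches <link ... type="application/json+oembed" href="...">
    match = re.search(
        r'<link[^>]+type="application/json\+oembed"[^>]+href="([^"]+)"', html
    )
    return match.group(1) if match else None

def fetch_embed_html(url):
    """Step 3: ask the endpoint for the code needed to embed the content."""
    endpoint = discover_oembed_endpoint(url)
    if endpoint is None:
        return None
    # The discovered href usually already encodes the target URL and format;
    # with a known-endpoints list you would append ?url=...&format=json yourself.
    data = requests.get(endpoint, timeout=10).json()
    return data.get("html")  # the markup the blog drops into the post

# Example: fetch_embed_html("https://www.youtube.com/watch?v=dQw4w9WgXcQ")
```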

While this was likely originally intended for video websites, it has since grown to encompass all manner of sites, including Old Blue herself. The maintainers of the standard have a (non-comprehensive) list of sites using oEmbed on their website.

The takeaway: Reblogs worked on Old Blue because everything was still happening within the platform. Old Blue was able to enforce attribution and keep social information flowing back to the original poster of the content, no matter how far from the original it traveled. On the open internet, however, the distinction between “reblogging” and simple plagiarism can be hard to see. The ability to embed posts from other blogs, however, can re-create the idea of reblogging while maintaining attribution and social information.

RSS / JSON Feed

RSS has been used for nearly 20 years to allow other sites and programs to read updates from blogs and other regularly-updated websites. It’s evolved slowly, but its simplicity has allowed it to remain relevant even as most internet users don’t realize they’re using it.

Today, though Google Reader has shut down, RSS readers can still be found in services like Feedly and Feed Wrangler. It’s also used to populate stories in the Apple News app. Most prevalently, though, it’s used to deliver every podcast episode to their many listeners.

JSON Feed takes the same principle as RSS but uses JSON instead of XML as its primary syntax. This makes the format easier to understand at a glance, and it helps make the format more resilient in some edge cases.
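
To show how approachable the format is, here is a minimal feed for a hypothetical blog, built and printed in a few lines of Python. It sticks to the required fields of the JSON Feed 1.1 spec plus a few common optional ones, and all of the URLs are made up.

```python
import json

# A minimal JSON Feed (version 1.1) for a hypothetical blog. Only the required
# fields plus a few common optional ones are shown; the URLs are placeholders.
feed = {
    "version": "https://jsonfeed.org/version/1.1",
    "title": "An Example Blog",
    "home_page_url": "https://blog.example.com/",
    "feed_url": "https://blog.example.com/feed.json",
    "items": [
        {
            "id": "https://blog.example.com/2022/hello-world",
            "url": "https://blog.example.com/2022/hello-world",
            "content_html": "<p>Hello, world!</p>",
            "date_published": "2022-01-01T12:00:00Z",
        }
    ],
}

print(json.dumps(feed, indent=2))
```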

The takeaway: Old Blue’s dashboard allowed you to follow other blogs on the platform. A decentralized platform would need a standard way to follow other blogs, and these feed formats already provide it.

OAuth

This is the authorization flow that allows external, third-party apps to tie back into a platform. By now, it’s hard to exist on the internet without using it to connect one app or website to another. Whether it’s signing into a mobile game with Facebook, or connecting Old Blue to Twitter, everyone’s familiar with “An app would like permission to connect.”
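
The most common variant today is OAuth 2.0’s authorization code flow, and a stripped-down sketch of it looks something like this. The endpoints, client ID, and secret below are entirely hypothetical; every real provider publishes its own, and production code would also verify the returned state value and handle errors.

```python
from urllib.parse import urlencode
import requests

# Everything below is hypothetical: real providers publish their own endpoints,
# and a registered app gets its own client ID and secret.
AUTHORIZE_URL = "https://social.example.com/oauth/authorize"
TOKEN_URL = "https://social.example.com/oauth/token"
CLIENT_ID = "my-blog-platform"
CLIENT_SECRET = "keep-this-on-the-server"
REDIRECT_URI = "https://myblog.example.com/oauth/callback"

def authorization_url(state):
    """Step 1: send the user to the provider to approve the connection."""
    return AUTHORIZE_URL + "?" + urlencode({
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "read write",
        "state": state,  # random value, checked on return to prevent CSRF
    })

def exchange_code_for_token(code):
    """Step 2: after the redirect back, trade the one-time code for a token."""
    response = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    }, timeout=10)
    return response.json()["access_token"]
```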

The takeaway: No social network can exist in a vacuum, at least not anymore. Any new platform is going to need to exploit connections to other networks, even if only for cross-publishing posts.

Webmention

Webmention is a newer standard that lets a post notify the post it responds to, so the two can link to each other automatically. It is patterned after the similar functionality found on social networks.
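
The sending half of the protocol is small enough to sketch. Per the spec, the sender discovers the target page’s Webmention endpoint (via an HTTP Link header or a rel="webmention" element) and then POSTs two form-encoded values, source and target. The code below is a simplified illustration rather than a robust client; real implementations need proper HTML parsing, verification, and retries.

```python
import re
import requests

def discover_webmention_endpoint(target):
    """Find the target page's Webmention endpoint (simplified discovery)."""
    response = requests.get(target, timeout=10)
    # The endpoint can be advertised in an HTTP Link header...
    header_match = re.search(
        r'<([^>]+)>;\s*rel="?webmention"?', response.headers.get("Link", "")
    )
    if header_match:
        return header_match.group(1)
    # ...or in a <link> or <a> element with rel="webmention" in the HTML.
    html_match = re.search(
        r'<(?:link|a)[^>]+rel="webmention"[^>]+href="([^"]+)"', response.text
    )
    return html_match.group(1) if html_match else None

def send_webmention(source, target):
    """Notify the target post that `source` links to (responds to) it."""
    endpoint = discover_webmention_endpoint(target)
    if endpoint is None:
        return False
    reply = requests.post(
        endpoint, data={"source": source, "target": target}, timeout=10
    )
    # 200, 201, or 202 all mean the receiver accepted the mention for processing.
    return reply.status_code in (200, 201, 202)
```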

The takeaway: Once again, getting the same social information normally found on a monolithic platform would be key for making a decentralized platform feel like a “normal” social network. This is a relatively new standard, and care would have to be taken to make sure that spam and harassment wouldn’t overwhelm the system.

When a Plan Comes Together

None of these technologies alone will make a new platform successful. Even all of them together doesn’t guarantee success; in fact, if the different parts are not integrated well, the end result will be worse. Much worse.

Many of these technologies are in use by the IndieWeb, and all of them have open-source code that can be used by any platform. There is work being done to make these technologies more accessible and usable. And I am particularly impressed by the Micro.blog platform that has taken many of these technologies and others and made them into a plausible alternative to Twitter.

A new platform has to be aware of how these technologies interact. As I mentioned earlier, Old Blue won not on the strength of its technology but in how it used that technology to meet the goals of its users. Any potential replacement for Old Blue will need to take the same path: choosing the right technology and presenting it in the best way to allow people to understand it and use it effectively.

Design is a hard problem.


What Makes A Platform, or How Do We Recreate Old Blue

It’s not enough to just make something. It’s got to be worthwhile. So if we’re going to do this, we’re going to do this right. Let’s start with the past.

What made Old Blue so good?

Old Blue (the site I will not name for fear of Big Red) was lightning in a bottle. There’s no way any site can hope to recreate the same success. It was the right parts at the right time, and whatever truly takes its place will be something unexpected. So what were the right parts?

The Easiest Way To Actually Blog

Old Blue removed a lot of the friction of blogging. These weren’t just technical challenges, though it took care of those as well. There were no servers to configure, no software to download. You picked a username and boom! You had a blog.

Big deal, other services (like WordPress and Blogger) were that easy. Where Old Blue really excelled was in getting content onto your blog. You were allowed and even encouraged to post content you found, not just content you wrote yourself. This was emphasized further by the “reblog” functionality that allowed you to easily repost content from another’s blog onto your own, giving you content for your own blog while attributing it to the original poster.

The problem of starting a blog is easily solved. Old Blue solved the much harder problem of how to easily get content onto a blog.

Dashboard Confessions

Even with the reblog button, though, there was still the matter of finding blogs to reblog from. For this, Old Blue took a page from the then-new Twitter and added the ability to “follow” other blogs. Their posts would then show up in a standard format on your “Dashboard.”

While this took away a large portion of the customization, it made keeping up with blogs easier than ever. There were no RSS feed readers or poorly-configured Google Analytics to worry about; readers got to read and bloggers got their consistent audience.

Mid–2000s Geocities-Style Self-Expression

Purists will complain about the single-column layout of most Old Blue blogs. They will decry the lack of responsiveness, complaining in tandem that the owner has heard of neither smartphones nor twenty-seven-inch monitors. One comment complained that the state of web design on Old Blue was similar to Geocities in the mid–2000s. I agree wholeheartedly, but I see it as a positive.

Self-expression has always been a part of the social internet. It started with Geocities sites, migrated to MySpace profiles, and eventually settled on Old Blue blogs. All of these allowed mostly unrestricted styles, letting site owners pick and choose random HTML, CSS, and JavaScript snippets from across the internet and blend them together into a miasma that was unmistakably them. Old Blue took it a step further, allowing custom domain names for free. If you didn’t want Old Blue’s name anywhere on your public blog, you didn’t need it.

Did it look ugly? To some. Did it sometimes break? Yes. But it gave people ownership over their blogs, allowing them to feel like their space was truly theirs.

Anything Goes

Everyone “knows” that Old Blue was full of illicit/NSFW material. And, let’s be honest, it’s made it hard for many to take the service seriously. In a professional context, the last thing a service needs is something work-related showing up next to something, well, not safe for work! This is doubly true when it comes to advertising, a sad fact that has robbed the service of much-needed revenue.

And yet, this exceptionally permissive content policy had a side benefit. Content creators were free to post without fear of their content being removed for a nebulous “terms of service violation.” This was especially relevant in the wake of other online communities like LiveJournal and FanFiction.net nominally “cracking down” on adult content. These crackdowns were, at best, selectively enforced and relied heavily on community reports; the end result was that illicit material, nominally disallowed but somehow acceptable to or unknown to the wider community, was able to survive on those sites.

Content creators whose work was illicit (or even objectionable in other ways) could post freely on Old Blue without worrying about their content suddenly disappearing. This drove more people to the platform, in turn making it more attractive to other content creators with “safer” material. The network effects took over and made Old Blue a force to be reckoned with.

Hyper-specific Hyperfixations. Or not.

Old Blue made it incredibly easy to sign up and start a blog. That blog could be as specific or general as you wanted. And when you got to the point where you needed a different space, you could start another blog. And another. And another.

Content creators could make different blogs for different fandoms, different levels of content safety, or just different ideas in general. This gave rise to creatively-named specific blogs, like the notable “effyeah” named blogs, or particularly specific names like “picsthatmakeyougohmm.”

What Would We Need?

So, using these principles, what features would a potential replacement for Old Blue need?

  • Low-friction signups
  • Easy to find and post content
  • Easy to make multiple blogs
  • Easy to follow interesting blogs
  • Open-ended theming
  • Custom domain option
  • Clearly-defined (if not permissive) content policy

All but one of these are technical problems. Good programming and good design can make those features sing. The issue is the last, social problem: the content policy.

The only site of any significant size that has survived with a permissive content policy is Archive Of Our Own. It’s run by the Organization for Transformative Works, a nonprofit dedicated to making a space for works that would not otherwise have a home. As such, they have devoted significant resources to ensuring their policy can withstand legal challenges, and they rely on tax-deductible donations to fund the site instead of skittish advertisers. Any platform that would truly wish to fill the shoes of Old Blue would probably need to take a similar approach.

An alternative is the one taken by WordPress. Savvy web citizens know that there are two sides to WordPress: the free website where anyone can sign up for a blog, and the open-source software anyone can install for free on their own web server. While downloading and installing WordPress is not necessarily for the faint of heart (it requires some technical knowledge of web servers and how to maintain them), WordPress is widely considered one of the easiest pieces of web software to install and use.

This ease of deployment allows the free website WordPress.com to have a stricter content policy, since anyone adversely affected can take their content to a self-hosted blog with a little effort. This is more than simply offering a blog “backup”; WordPress has built-in mechanisms to move content from one WordPress-powered blog to another with few changes. A blog hosted on WordPress.com with a custom domain can be changed to a self-hosted WordPress blog with few to no visible changes to visitors.

While the WordPress method doesn’t eliminate the social problem of a content policy, it does reduce the stakes. If a group of users find the content policy onerous, they can set up (and pay for) their own WordPress-powered platform.

What next?

And here is where I will cut this off. I humbly submit this for comment, knowing I’ve left out things that, while not integral to my own experience on Old Blue, were essential to others’.

I’ll also be working on a follow-up to discuss particular technologies that could be used to create a new platform in this vein, so if you have any suggestions there, I’m all ears.

But I do want to close with this: these are ideas. These are thoughts. And that’s all they are. Building a platform takes a lot of work, both in the programming and in how it is socially maintained. And as Facebook, Twitter, Google, and Big Red are learning, the rules you choose to have and how you enforce them can have dramatic consequences for the community that builds up around your platform. This is not something I can tackle on my own, and it is not something I would ask anyone to volunteer for.

This is a thought exercise, a way of getting these ideas out of my head. I hope you find it useful, or at least a little informative. And if it helps shape whatever platforms come next, I’ll be even more happy. Thanks for reading; I’ll see you next time.


Sonic Mania and the Triumph of Fan Culture

The livestream was cutting in and out. There was a constant buzzing that replaced most of the audio. And instead of a trailer, the in-house audience saw a brief Sonic Mania logo followed by a Mac desktop. But after many fits and starts, the trailer finally played.

[youtu.be/xmkT113ML...](http://youtu.be/xmkT113MLYI)

There’s a level of excitement around Sonic Mania that Sega hasn’t seen in roughly five years, the time since Sonic Generations was released. Fans are excited about the chance to play a new game in the vein of the classic Sega Genesis games many grew up playing. Polygon described it as “finding a long-lost Sonic title from the mid–90’s.”

But what makes Sonic Mania any different from previous Sonic games, introduced with much hype and fanfare only to be revealed as spectacularly mediocre at best? After all, this is the same company that has had many attempts to “reboot” the franchise or “return it to its roots,” often with lackluster (if not horrible) results. Is the anticipation justified this time?

The proof, as they say, is in the pudding. But this time, Sega’s investing in the right chefs. And before I try to string this metaphor out too far, let’s talk about The Avengers.

Precedent: Marvel’s The Avengers

Side-by-side picture of the six Avengers
via Wallpapers Wide

No, Sonic Mania isn’t about crossing over with other video game franchises; I’m talking about the making of the film itself. It’s easy to forget, not even ten years after it came out, that The Avengers really was the culmination of an unprecedented project.

Most of what I could say about The Avengers has, honestly, already been said more eloquently by MovieBob in his forty-minute video on how The Avengers is “really that good.” Go ahead and watch it if you’ve got time. If you don’t have time, find the time and watch it.

For our discussion, the biggest takeaway is the impact of Joss Whedon. Whedon loved comics, he loved the characters, and he was already emotionally invested in making their big-screen outing the best it could be. In other words, he was a fanboy.

But the simple fact of him being a fanboy wasn’t enough to qualify him to direct a multimillion dollar motion picture. After all, the world is filled with half-finished projects of people with an abundance of passion but a lack of skill. Whedon, however, had already proven himself able to handle a “band of misfits comes together to form a team” story before.

Promotional image for Firefly
via Geek League of America

In the case of Joss Whedon, Marvel could have reasonable confidence in their director based on his prior work. While The Avengers is very obviously different from Firefly and Serenity, the “feel” of the two is similar enough that Whedon could apply lessons from one to the other. Only this time, he got to do it using Marvel’s money.

What does this have to do with Sonic? Fanboys.

Labors of Love

Title screen from Sonic 2 HD
via Sonic Retro

There’s a difference between being “a fan” and being “in a fandom.” For myself, I’m a fan of movies like The Princess Bride and The Matrix and video games like Super Mario Bros: I enjoy them and will talk at length about them. But I’m not just a fan of Sonic; I’m in the Sonic the Hedgehog fandom: in addition to talking about and analyzing the series, I also participate in fan-created works based on the series. Many Sonic fans–myself included–use the characters and concepts from the series as a jumping-off point for their own creative works.

These works can be as simple as a short story featuring Sonic and Tails or a drawing of the antagonist Dr. Eggman. Sometimes, though, these works can get a little more complex. Short stories blossom into novels, simple drawings turn into detailed comics, and transcriptions of the songs turn into complete re-interpretations.

And then there’s fan games.

The Sonic fan game community is helped by what many consider to be a lack of quality Sonic games in recent years. Since fans can’t get the game they want from Sega, they decide to program it themselves:

  • Christian Whitehead, a.k.a. Taxman, programmed a game called Retro Sonic using his own game engine. Fans loved how it captured the same feel of the original games. Whitehead currently works as an independent developer.
  • Simon Thomley, a.k.a. Stealth, made a name for himself by modifying the code of the original games to create entirely new games, one of which was the incredibly ambitious project Sonic the Hedgehog Megamix, a modification of the already-complex Sonic CD. He is also working as an independent developer under the name Headcannon.
  • A massive fan effort was undertaken to create a high-resolution remake of Sonic the Hedgehog 2, aptly titled Sonic the Hedgehog 2 HD. Two of the designers involved went on to found the mobile games studio PagodaWest Games, bringing along the musician for the ride.

A Brief But Important Aside About Sonic 4

It is worth mentioning that Sonic Mania is not the first time Sega has attempted a “Classic Sonic” game in the modern era. The most notable of these attempts is Sonic the Hedgehog 4, a series of episodic games merging the modern Sonic art style with the classic Sonic gameplay.

To do this, Sonic Team brought on Dimps, the development team responsible for the Sonic Advance and Sonic Rush series. The Sonic Advance series was itself a series of two-dimensional Sonic games that, while not incredibly popular, held their own. In the end, Sonic 4 played very similarly to those games, and that wasn’t a good thing.

If the game had been called Sonic Advance 4, it would have been fine. If it had been called Sonic Blitz, Sonic Island Adventure, or even Sonic Mania, it would have been fine. Fans would have lamented the inconsistent physics, but the game would have ultimately been forgotten on the pile of not-Sonic Sonic games.

However, by calling the game Sonic the Hedgehog 4, Sega called up images of the original Genesis games and all the nostalgia and expectations that come with them. In this environment, inconsistent physics and slightly-off gameplay became unforgivable errors, and Sonic 4 was never able to gain the fan support it needed to continue past two episodes.

Welcome the Fans

So how do we know that Sonic Mania isn’t going to be another Sonic 4? Because the new developers have the right amount of reverence for the series, and we’ve already seen their work.

Christian Whitehead’s Retro Engine is already being used to re-create classic Sonic games. If you’ve played Sonic CD on a modern platform, you’ve played his version. If you’ve played Sonic the Hedgehog or Sonic the Hedgehog 2 on Android or iOS/tvOS, you’ve played the version made by him and Simon Thomley. The feel of the game is so accurate, Sega is comfortable using his engine with some of the most important Sonic games.

Meanwhile, the team at PagodaWest found success with Major Magnet. At first glance, Magnet is nothing like a Sonic game, but the art and musical style have the same flavor if one knows what to look for.

And as for the music, one only needs to browse through Tee Lopes’ YouTube page to hear his love for the original songs.

These fans know what makes a good Sonic game. And, arguably more importantly, they know how to make a good Sonic game.

For once, Sonic fans may actually be looking at a Good Future.

"Future" signpost from Sonic CD

The Lines Worth Reading Between

Apple’s latest earnings have an interesting note: their research spending is the highest it’s been since 2006.

Research and development is a fancy business way of saying “doing new things.” When my previous employer entered the Great Recession of 2008, the plan to weather the storm was to double down on R&D. By investing in new products when the market was slow, the company would have those products ready when the market was ready to buy. Our part of the company, tasked with entering a new market, was one of the few areas allowed to hire new employees.

The economy’s recovery in general is up for debate, but the advice is sound: research and development is a key investment for any company, particularly product-based companies (which any software company is, SaaS notwithstanding). It’s almost too obvious: as annoying as the constant calls for Apple to release a new product are, they do have a point. New products and innovations are the lifeblood of these companies. Which is why every company invests in research and development.

To provide some context, simple research is a constant cost. A company sets aside a certain amount every year to pay a certain number of people to spend a certain amount of time exploring new ideas. Google is (or at least was) famous for its “20% time” that allows any engineer in the company to spend time exploring. Microsoft has an entire division devoted to exploration. Apple obviously has its own research and development; they would be unable to update their products annually without it.

When a project is close to completion, things change. Completing a project means investing in quality assurance and testing. It means finalizing all of the little details that make the difference between a “good” product and a “great” one. What was once a fixed cost suddenly becomes more variable.

Which brings me back to the initial observation: the last time Apple’s R&D spending was this high was in 2006. Apple’s most important product of the last 10 years–the iPhone–was introduced in 2007.

By now there’s too much smoke for there not to be some kind of watch-like device from Apple, most likely to be introduced this Tuesday. And given Apple’s increase in research spending, it’s going to be a big deal.

Personally, I can’t wait. You?


The New Television

It was a little over a year ago that Netflix CEO Reed Hastings laid out their strategy:

“The goal is to become HBO faster than HBO can become us.”

I would argue that this has happened. They’ve surpassed HBO in number of (paying) subscribers, essentially proving the market for streaming internet television separate from a traditional pay TV (cable, satellite, IPTV) subscription.

So let’s have some fun. If we take the assumption that television will move online at face value,1 what options could a television viewer have 5-10 years from now?

The New HBO: Netflix

Netflix is leading the way in premium online video, both in marketshare and mindshare. They see themselves as maintaining this premium brand, and their long-term manifesto specifically mentions having a top-tier viewing experience, including no commercials.

Today, if you ask people what the best channel on cable is, they’ll probably say “HBO,” even if they don’t subscribe. Everyone knows HBO is where high-quality television like Game of Thrones and True Detective is shown. Likewise, if you ask people where the best online video is, they’ll probably say “Netflix,” thanks to original shows like House of Cards and Orange Is the New Black.

Netflix has the market share now, and they’re also doing everything they can to stay foremost in people’s minds when it comes to television. I don’t see them losing this position as more and more video services pop up; their huge head start in content and technology should keep them in the lead. Provided, of course, they don’t do anything stupid.

The New Showtime: HBO

The first name people think of in premium television today is HBO, but the second is Showtime. They win their fair share of awards and attention with shows like Dexter and Homeland; but they’re usually thought of in the same sentence as HBO, not on their own. This is the place I see HBO occupying: excellent in their own right, but always in relation to Netflix.

This doesn’t seem like it should happen. HBO is part of a much larger corporate behemoth and has had many profitable years of existence to build its content abilities. Also, according to numbers from SNL Kagan, HBO’s wholesale price (the price paid to HBO after the cable company takes its cut) is roughly what Netflix charges its end users. In other words, if HBO were to instantly switch to a direct-to-customer model, they would only need to match Netflix on price to bring in the same revenue.

I see two major obstacles for HBO going forward. The first is their ties to the cable industry and the status quo. While the current system allows them to arbitrarily raise their prices without immediately alerting the end-users (a problem Netflix is running into), it also ties them closer to the existing pay-TV market and gives them less time to establish themselves firmly in the streaming market. The second problem is that their current forays into the streaming market have been met with technical glitches at the worst possible times. Normally for a tech-savvy media company, the technical problems are easy and the content problems are hard, but at the scale HBO would need to operate to compete on Netflix’s own turf, the technical problems are quite hard and could impact HBO’s bottom line more than they realize.

The New Network: Hulu

Network television is often decried for being bland, unoriginal, and all-around mainstream. But what every television hipster (of which I am one) knows in their heart is that this is where the eyeballs are. It used to be that broadcast television–and by extension network TV–was the only way to reach most Americans. Today, cable’s reach has grown to the extent that massive audiences are possible for shows like Breaking Bad, but the original networks still command a powerful presence in the television world.

Hulu is best known for making that network TV readily available to internet viewers. Viewers can easily catch the last couple of episodes of their favorite network dramas and late-night talk shows for free on their computers, as long as they are willing to tolerate a few commercials to do so. They also offer a premium service, Hulu Plus, that allows access to more episodes and shows as well as viewing through smartphones, tablets, and set-top boxes. Unlike Netflix and HBO, however, Hulu Plus still contains commercials. While this seems antithetical to a premium service, it is practically no different from nearly every single channel available on cable.

I expect Hulu to continue to invest in its original programming, much like HBO and Netflix. Its focus on network-style programming gives it the ability to become the next mainstream-focused network. It remains to be seen, however, whether its decision to keep advertisements in its subscription offering will affect its ability to keep subscribers over the long term.

The New ESPN: ESPN

ESPN describes themselves as “the worldwide leader in sports,” and they have done their best to live up to that description, especially when it comes to online video. ESPN has offered live events via their ESPN360 website since 2007, relaunching it as ESPN3 in 2010. ESPN3 is not a free service, however, as it is only available to internet users whose service providers have agreed to pay ESPN for access to the service. This is in addition to ESPN’s recently launched video platform, appropriately titled WatchESPN. Similar to HBO GO, this service is only available to subscribers of participating cable providers.

Other major sports providers, like NBC and FOX, have their own streaming video websites and apps. Unlike ESPN, however, these are relatively recent developments, and ESPN’s head start in building out its live streaming infrastructure shows. Throw in ESPN’s overwhelming mindshare in sports broadcasting, and they won’t be going anywhere in the new television world.

The New Cable: TV Everywhere

TV Everywhere is an initiative by the existing cable/satellite companies to tie online streaming to existing cable subscriptions. For example, to use the March Madness app to watch the NCAA Men’s Basketball tournament, you must be an existing cable subscriber to watch any game not broadcast on CBS.

In the future, it’s not hard to imagine a “virtual cable” operator that has access to these apps as its primary service as opposed to a secondary add-on. This service probably wouldn’t be any cheaper than existing cable, but it could easily compete in other aspects such as ease-of-use, customer service, and a general awareness of its place in the new world that other cable providers would not have.

So why is this listed separately from HBO and ESPN? In actuality, it’s not that different, and those channels could easily be part of this “virtual cable” company. The difference with HBO and ESPN is the simple fact that those channels have the sheer brand power to break away from cable. It’s unlikely that TBS or Animal Planet could sell their channels outside of a bundle, but HBO and ESPN have such strong brands that not only could they easily sell access to their apps on their own, they could break their existing cable contracts to do so and not lose many (if any) cable affiliates in the process since no cable company wants to offer a service without those channels.

So what?

Nothing, really. While I could see a lot of these things playing out as I proposed here, anything can change when there’s technology involved. The mythical Apple Television (separate from or a reboot of the current Apple TV) could be just as game-changing as everyone wants it to be. Netflix could have another Qwikster moment or find that its original content strategy is unsustainable. The ongoing net neutrality debate could actually affect things.

There’s a lot of what-ifs ahead in the world of television, but personally, I can’t wait to see what happens.

    1. I know I’m asking a lot here. The incumbent providers, many of whom own channels, will do everything they can to protect the status quo. But let’s face it, saying “everything will stay the same because the de facto monopolies given to the current television providers allow them to prevent this future at all costs” is about as interesting as sending the eagles to Mordor.

Why Apple Should Not Buy Nintendo

They really shouldn’t. I want to set expectations up front, and when you’re talking about either Apple or Nintendo, people (myself included) are going to have Opinions. But let’s back up a bit first.

The ideal

A good merger is one where the whole is greater than the sum of its parts. By that, I mean that the two companies coming together amplify each other’s strengths and compensate for each other’s weaknesses.1 The best mergers will emphasize the second more than the first.

Let’s look at the Apple-NeXT merger in 1996 as an example of a successful merger. NeXT was a small company that made what they considered to be the best operating system in the world, NeXTSTEP. Their computer, the NeXTcube, was used for a variety of advanced computer uses, most notably by Tim Berners-Lee to create the first web server and web browser. They also had Steve Jobs, arguably one of the best business leaders in the industry. By 1996, though, they had dropped their hardware division and were only selling their software to run on conventional PCs as a replacement for Windows.

Apple at this time was in trouble. They made what they considered to be the best computers in the world, but they were lagging behind in the software race. They had tried and failed to develop their own modern operating system, and were in serious danger of losing the personal computer market to Windows 95. Their management was unfocused and unable to bring the different factions within Apple to work together.

Apple and NeXT shared the common goal of making the best products they could. NeXT had a solid operating system but couldn't convince people to give up Windows to use it. Apple had strong hardware but their software hadn't evolved enough to take advantage of that hardware. At its core, the merger brought the two companies together on their common goal, with Apple supplying the hardware NeXT needed and NeXT supplying the software and management Apple needed.

The end result? The immediate change with Apple was the influx of good management from NeXT, particularly Steve Jobs. The software team at Apple immediately began work on newer versions of the existing Mac OS (versions 8 and 9) that bought Apple enough time to get the new, NeXTSTEP-based Mac OS out the door as Mac OS X. The advanced operating system not only improved performance on Apple's existing hardware, but allowed them to switch to a completely different type of hardware when they needed to. On top of that, OS X was versatile enough that Apple would eventually use it to power the iPhone and iPad.

Over 15 years later, that $400 million investment is still paying off. That's a good merger.

The reality

So where are Apple and Nintendo today? What are their strengths and weaknesses that would make or break our hypothetical merger?

The Apple of today prides itself on a--dare I say it--magical marriage of hardware and software. The design ideal is that when you see their work, whether hardware or software, it is beautiful; but when you need to actually do work, the hardware and software become almost invisible compared to what you are doing.

However, Apple has traditionally not been good at games. Not many people know of their attempt at making a game console with Bandai, and for good reason: it wasn't good. Gaming on the Mac has always been a second-class citizen, and companies like Valve have only begun targeting the Mac in the past few years. Games are very popular on iOS devices, but those games are not significantly tied to iOS itself. As Ben Thompson writes:

  • Games take over the whole screen; this means that tailoring a game to fit a particular platform’s look and feel isn’t important
  • There is an entire industry devoted to providing cross-platform compatibility from day one. Most game developers are targeting game engines such as Unity, not iOS or Android. This is acceptable because of point one

Nintendo is committed to making the best gaming experiences possible, then making them better. In the past, this has led them to create some of the most beloved franchises in the video game world, including Super Mario, Zelda, Kirby, and Pokémon. In recent years, it has meant pursuing new hardware: not only the motion controls for the Wii and the touchscreen for the DS but also things like the DK Bongos for the GameCube, the microphone for the DS, and the stereo camera on the 3DS. For every hardware feature Nintendo releases, there is a game like Wii Sports, WarioWare, or Donkey Konga designed specifically to get the most fun from that feature.

The current Nintendo is a victim of a changing landscape. They lost mindshare and marketshare of hobbyist2 gamers to Sony and Microsoft, and their (smart) response was to pursue the mainstream market with the Wii and DS. This strategy paid off until iOS and Android devices began capturing mindshare and marketshare in the mainstream with free-to-play casual games among other benefits. Their efforts to woo both markets with the Wii U and 3DS have been decent, but some worry it won’t be enough to keep the company around.

The dream

So what happens if we bring the two companies together? Let's assume for the sake of argument that Apple uses some of its cash hoard and buys Nintendo outright.

From day one, Apple has a large library of exclusive games for its platform, games that are fun and that people want to play; and Nintendo is essentially guaranteed a place in the new smartphone world. Nintendo can, with some effort, create versions of classic games from its library for iOS, accessible to a massive audience that would easily pay the current asking prices of $3-5 each. These games would obviously not be available for any other smartphone or tablet platform, increasing the value of iOS both to consumers who want to play Nintendo games and to developers who want to reach those consumers.

Going forward, Nintendo can help Apple to move its platform forward much like they have with their own platforms in the past. Possibilities dreamed up by the iOS platform team can be made concrete by Nintendo's game team. Both companies thrived in the past by pushing the integration of hardware and software, and having both companies push each other could bring out the absolute best in both. If Apple does release an app-capable Apple TV as rumored, a library of Nintendo games would only help sell the device.

Let's not leave hardware out of the equation. A Jony Ive-designed game console would be great for publicity, but Nintendo could gain more immediate benefits from Apple's supply chain. Apple has incredible buying power when it comes to quality components, especially solid-state storage and touch screens. A (relatively) quick update to the 3DS and Wii U touch screens would elevate the quality of those devices, and that is an area that Apple has made itself an expert in.

This, of course, assumes there are no cultural issues with the merger. Part of what made the Apple-NeXT merger so successful was the understanding that NeXT management was essentially taking over Apple. If the hardware or software teams at the two companies aren't able to find common ground with each other, the best talent could walk out the door and the resulting company would be far worse off than either company would have been separate. This could be a moot point; desperation on either side has a way of forcing compromise where it wasn't thought possible.

But that's not why I think it wouldn't work.

The problem

The best mergers amplify shared strengths and compensate for weaknesses. The worst mergers amplify shared weaknesses. And Apple and Nintendo share a similar weakness: online services.

One of Apple's biggest competitors moving forward is Google. Google was born on the web, and as such, Google understands the web on a near-instinctual level. Servers talking to servers talking to phones is a baseline requirement for a product, not an idea tacked on halfway through the process. More importantly, they know how people use the web. They know how many people leave if search slows down by even a fraction of a second. They know how to give users email, file storage, and online document editing, and keep it all in sync. Apple's previous online service, MobileMe, was not well received. Their new service, iCloud, is an attempt to modernize the service and make it more reliable, but the reality falls short of the ideal.

One of Nintendo's biggest competitors moving forward is Microsoft. The Xbox is a powerful game machine on its own, but its biggest strength is the Xbox Live service. Every Xbox Live game ties into the same online infrastructure, allowing individual players to define their friends once (using easy-to-remember names) and play with them in every game. This is something Microsoft has fought for from the beginning of the service. Most importantly, the interactions and purchases in Xbox Live are defined around people. Contrast this with Nintendo, which bases its interactions around devices. Social connections are made by exchanging device-specific friend codes which have all the joy and personality of 16-digit phone numbers. Purchases and friend lists are device-dependent, so replacing a console outside of a warranty repair means losing your entire library of downloaded games.

Knowing all this, how appealing does it sound to know that the company that brought you MobileMe is merging with the company that brought you friend codes?

To be fair, both Apple and Nintendo are learning in this area. Apple's iCloud service is getting better, but it will be some time before developers (and their users) begin to trust the service. Nintendo is slowly making improvements to their online service, but they would still rather shut down a service than see it misused.

Both companies still approach the internet the same way they approach products: slowly and deliberately. This often leads them to miss a crucial part of what their customers actually want. A merger would make this worse, not better, since companies (just like people) lean on what they know during times of transition. A merger would deny both companies the opportunity to truly learn and understand online services in the modern, connected world.

Which stinks because I really want to play Pokémon on my iPhone.

  1. It’s a lot like marriage in that regard, but that’s a topic for another day. 

  2. Some places call them “hardcore” or “serious” gamers, but the basic idea is people who pursue gaming like most people pursue hobbies: investing more time and money than average and knowing more about the subject than most people. 


There's a Difference, Guys.

Apple didn’t sue Samsung because they had a touchscreen phone. They didn’t sue because of rounded rectangles. They didn’t sue because of icons arranged in a grid.

Palm added all those together to get WebOS, which was easily distinguishable from iOS.

Microsoft added all those together (save rounded corners) to get Windows Phone 7, which was easily distinguishable from iOS.

Google added all those together and made the Nexus series of Android-powered phones that were easily distinguishable from iOS-powered iPhones.

Samsung added all those things together and arranged them to be as similar to the iPhone as possible. They ignored advice from Google warning them not to do so. Instead of pouring creative effort into improving on what iOS had to offer, they focused on copying what iOS had to offer.

That is why Apple took them to court, and that is why Samsung lost.


File Sharing Rant

I’ve largely taken a back seat on the whole file sharing debate. However, now that I actually have a self-published work I feel it is time for me to make a public stance. Here goes…

I’m going to have to agree with John Gruber’s assessment of Richard Stallman’s latest essay:

I waver between rolling my eyes at Stallman’s kookiness and admiring his singleminded determination.

In my view, however,1 Stallman’s kookiness extends to a large portion of the Free Software Foundation’s philosophies. Above all else, the FSF champions the right to modify and redistribute software. I have no problem with this goal; I will often promote a free or open-source program (which, apparently, are not the same thing) when it is a viable alternative to a commercial one. I use WordPress instead of ExpressionEngine. I use The GIMP instead of Photoshop. But I use Safari instead of Firefox because I find Safari to be faster on my Mac. In my case, I am willing to give up a “freedom” that I don’t really use (the ability to modify the source code) in exchange for a more pleasant computing experience.

It is Richard Stallman’s opinion on creative works that I find unacceptable.2 Never mind that he refuses to endorse any Creative Commons license because not all of them are free (he, of course, suggests the GPL). What is dangerous is that he equates creative works such as movies and music with information, and file sharing with “sharing” in the general sense. In doing so, Stallman shows his background as a computer scientist. A program is written to solve a problem; the FSF’s arguments that there are more benefits to releasing the source are valid here largely because the program can benefit from the scientific method. Information wants to be free, and the solution to the problem (the program) is simply another form of information.

A creative work, however, is not simply information. It does not consist of simple facts or present a solution to an established problem. It is, when done properly, a reflection of the author or artist’s heart. It can be anything from a commentary on society to a rewrite of a poorly done movie to an attempt to reconcile temporal existence with eternal life. As such, creative works cannot be held to the same standards as computer programs, and vice versa.

Equating creative works to information reduces the author’s creative expression to its digital format, an act of language that cheapens the work even more than the term “content” does. And distributing digital creative works over file sharing is not simply sharing; it is copying. Like anything distributed over the internet, the digital information is copied, not moved, from one computer to another. Loaning a CD or a book to a friend is sharing: while one person has it, the other does not. File sharing creates copies, so that both have the work at the same time. While not necessarily the same as theft, this cannot, by any reasonable definition, be considered sharing.
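For what it’s worth, here is a minimal sketch in Swift (with hypothetical file paths) of why a network transfer is a copy rather than a move: the receiving side writes new bytes while the sender’s file stays exactly where it was.

```swift
import Foundation

// Hypothetical paths, for illustration only.
let original = URL(fileURLWithPath: "/tmp/song.mp3")        // the uploader's file
let downloaded = URL(fileURLWithPath: "/tmp/song-copy.mp3") // what the downloader ends up with

do {
    let bytes = try Data(contentsOf: original)  // reading does not remove the original
    try bytes.write(to: downloaded)             // the downloader now has an independent copy
    // Both files exist at once -- unlike loaning a CD, where only one person has it.
} catch {
    print("Transfer failed: \(error)")
}
```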

This is not to say I am against file sharing as a whole. There are hundreds of out-of-print and hard-to-find works that can benefit from file sharing to preserve their value to society. It can also be used by lesser-known artists to encourage the viral word-of-mouth growth that is essential to building a fanbase. This is the aim of Creative Commons, and I am disappointed that a man committed to “freedom” refuses to acknowledge the benefits of such a system.

  1. John Gruber may agree with me, but I won’t presume to speak for him.

  2. Yes, it’s a Wayback Machine link. The post as linked from the original Slashdot article no longer exists.


Scientific Voting

Comments like this chicken-or-crap essay pointed out by John Gruber notwithstanding, I’m still somewhat on the fence about this election. I’ve appreciated John McCain’s willingness to go against the party establishment over the past few years, a reason I’ll be voting for our current senator as well. On the other hand, Barack Obama appears to have a solid technology platform, and it’s undeniable that he’s inspired a lot of people to take an interest in politics.

At the end of the day, I want to make an informed decision and choose the candidate that aligns most closely with what I believe. Now, my most closely held beliefs may or may not be held by the candidates; in today’s political environment it’s almost impossible to tell what beliefs are genuine. (Not completely impossible, mind you, but those guys don’t typically win, endorsements or not.) As far as political beliefs go, it’s sometimes hard for me to tell just what I believe. Less government spending is good, abortion is bad, morals in general are kinda… not sure. After this policy and that boycott, I’m wondering how right the “Christian right” really is.

Sounds like a job for… a political quiz! Or rather, several. First stop is The Compass, a multi-part quiz that places you on a two-dimensional graph comparing social and economic issues. The postmodernist in me is actually quite proud of my position, but other people are not impressed. And it doesn’t help me pick a candidate.

Enter Glassbooth (found via TechCrunch). They’re supposedly nonpartisan and unbiased, and I’m inclined to believe it. You first pick your most important issues and then rate your position on each. What I like most is that you have the option of remaining neutral on an issue if you so choose. It then compares your positions with the candidates’ positions on those same issues, and it gives you quotes and voting records to back up the comparison.

So who am I voting for? Should I really have wondered in the first place? Actually, I’ve got more in common with Obama than the Libertarian candidate. Wonder how I would have compared with Ron Paul…