Introducing Smolblog

Around the end of last year, I wrote an essay about what made Tumblr unique in the blogging world, followed by another essay about different technologies that can be used by a blog platform. And then I did nothing.

Well, not nothing. I went and got a new job. I also started sketching out some more concrete ideas. And while I want to be farther along in the actual development of things, I also want to start getting feedback on the ideas themselves.

Full disclosure: I'm great at talking about ideas, but I'm still learning to actually execute on them. Which is kinda disappointing, since the execution is where so many ideas go from "good" to "awesome." So, bear in mind, this is an idea. It may not get very far, it may not get very good, it may crash and burn spectacularly. But these are problems I have wanted to solve for myself, and if I can help solve them for others, then I feel that I must try. So with that, let me announce...

Smolblog

The name was carefully considered and chosen for the following reasons:

  • “Smol” is one of my favorite “internet words.” It’s small, but more adorable. More comfortable. “Small” isn’t big enough, but “Smol” is just right.
  • It’s a blogging platform.
  • It’s for the space in between a micro-sized blog and a medium-sized blog.
  • Most importantly, smolblog.com was available.

Side note: it’s honestly ridiculous how hard it is to get a good dot-com these days.

“Smol” blogging is something I want to emphasize with this platform. Blogging on platforms like WordPress and Medium can feel intimidating. You have a blank page and a site that encourages posting about “big” ideas. What Tumblr excelled at was encouraging small posts of just a picture. Just a sentence. Just a link.

It’s no coincidence that I’ve probably posted more on my Tumblr blog in a year than I did on my WordPress blog in five. While Medium has become a home for presenting big ideas, Tumblr was a home for just... being yourself. That’s the kind of atmosphere I want to build on Smolblog.

The Mission

A project needs to have guiding principles, a central problem to solve. Focusing on these can help determine which features need to be built and which ones can wait for later. They can also help set the tone for interactions between people within and around the project.

Keep the gates open

  • Anyone should be able to set up a technically identical server. While some design elements and trademarks may be reserved for the “canonical” site, there should be almost no difference between using sites hosted on different servers.
  • Individual blogs should be easily moved (import/export) between servers or saved offline
  • Use open protocols for interactions

The end result is something like Mastodon: you don’t need to be on the same server as someone in order to interact with them.

Play well with others

  • Allow synchronization from and syndication to other social networks
  • Use oEmbed instead of copying others’ posts (sketched at the end of this section)

I'm going to be much more willing to try something new if it means I don't lose the social connections I've made on existing services. I'm shooting for Twitter and Tumblr crossposting for phase one as these are the services I use most.
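
On the oEmbed point: WordPress already ships with oEmbed discovery, so a reblog could store nothing but the source URL and let the embed do the talking. Here's a minimal sketch of that idea; the smolblog_render_reblog function and the smolblog_source_url meta key are names I'm making up purely for illustration.

```php
<?php
/**
 * Minimal sketch: render a "reblog" as an oEmbed of the source post
 * instead of copying its content. The function name and meta key
 * (smolblog_render_reblog, smolblog_source_url) are placeholders.
 */
function smolblog_render_reblog( $content ) {
	// Only act on posts that were saved with a source URL.
	$source_url = get_post_meta( get_the_ID(), 'smolblog_source_url', true );
	if ( empty( $source_url ) ) {
		return $content;
	}

	// Let WordPress resolve the URL through its oEmbed discovery.
	$embed = wp_oembed_get( $source_url );

	// If the remote site offers no oEmbed data, fall back to a plain link.
	if ( false === $embed ) {
		$embed = sprintf( '<a href="%s">%s</a>', esc_url( $source_url ), esc_html( $source_url ) );
	}

	// Show the embedded original above any commentary the reblogger added.
	return $embed . $content;
}
add_filter( 'the_content', 'smolblog_render_reblog' );
```

Even the failure case keeps attribution intact: at worst, the reblog degrades to a plain link back to the original instead of a pasted copy.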

Enable self-expression

  • Allow multiple blogs on multiple domains
  • Allow user-installed themes
  • Make it easy to post small posts and reblogs

There is a time and a place for standardized, beautiful web design. Your personal site should only be that if you want it to be.

Phase One

Spoiler alert: it's WordPress. It's always been WordPress. Why?

  • It's easily deployable on inexpensive web servers.
  • It's well-supported and actively maintained.
  • It comes with several key features for Smolblog out of the box, including but not limited to:
    • Multi-user support
    • Multi-site support
    • Image management and manipulation
    • REST API
    • oEmbed provider and consumer support
    • Standard format for import/export
  • Lots of people are invested in extending WordPress for custom purposes. I work with some of them.

So while I talk about Smolblog as its own thing, the first phase (at least) will be delivered as a WordPress plugin. If the project ever outgrows WordPress, then it will need to be at least as easy to deploy as vanilla WordPress is currently.
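
As a small taste of what the built-in REST API buys the project, here's a sketch of reading recent posts from any other WordPress site over that API. The function name is made up, but the /wp-json/wp/v2/posts route ships with core.

```php
<?php
/**
 * Minimal sketch: read recent posts from another WordPress site over its
 * built-in REST API. The site URL is whatever blog you want to read from;
 * the function name is illustrative only.
 */
function smolblog_fetch_remote_posts( $site_url ) {
	$response = wp_remote_get( trailingslashit( $site_url ) . 'wp-json/wp/v2/posts?per_page=5' );
	if ( is_wp_error( $response ) ) {
		return array(); // Network error: act as if the remote site has nothing new.
	}

	$posts = json_decode( wp_remote_retrieve_body( $response ), true );
	return is_array( $posts ) ? $posts : array();
}

// Example: $posts = smolblog_fetch_remote_posts( 'https://example.com' );
```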

Building on top of WordPress, I plan on adding Tumblr and Twitter crossposting. I've already worked on a large part of the logic through a previous project of mine. By the end of phase one, I'm hoping to have the following features in addition to a standard WordPress Multisite install:

  • Import a full Twitter archive
  • Authorize against Twitter as a single account
  • Pull tweets from that account on a regular basis if they do not already exist on the site (sketched below)
  • Pull Retweets and Retweet-with-comments as embedded tweets to clearly delineate original and reposted content
  • Push new posts to Twitter, either in full or as links back to the site
  • Authorize against Tumblr as an account and indicate a blog
  • Pull posts from that blog, both historical and on a regular basis, if they do not already exist on the site
  • Pull reblogs as embedded posts to clearly delineate original and reposted content
  • Push new posts to Tumblr in as native a format as possible

This should lay the groundwork for adding more services as time and available methods allow.
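
To make the "pull tweets if they don't already exist" item concrete, here's a minimal sketch of the dedup-and-insert logic. The smolblog_fetch_recent_tweets() helper is hypothetical (it stands in for whatever Twitter API call the plugin ends up making), and the smolblog_tweet_id meta key is likewise just for illustration.

```php
<?php
/**
 * Minimal sketch: import recent tweets as posts, skipping any tweet that
 * has already been imported. smolblog_fetch_recent_tweets() is a
 * hypothetical helper wrapping the Twitter API; smolblog_tweet_id is an
 * illustrative meta key used to detect duplicates.
 */
function smolblog_sync_tweets() {
	// Assume each tweet comes back as [ 'id' => ..., 'text' => ... ].
	$tweets = smolblog_fetch_recent_tweets();

	foreach ( $tweets as $tweet ) {
		// Skip tweets that have already been turned into posts.
		$existing = get_posts( array(
			'post_type'   => 'post',
			'post_status' => 'any',
			'meta_key'    => 'smolblog_tweet_id',
			'meta_value'  => $tweet['id'],
			'fields'      => 'ids',
		) );
		if ( ! empty( $existing ) ) {
			continue;
		}

		// Create a small, title-less post, Tumblr-style.
		wp_insert_post( array(
			'post_content' => $tweet['text'],
			'post_status'  => 'publish',
			'meta_input'   => array( 'smolblog_tweet_id' => $tweet['id'] ),
		) );
	}
}

// "On a regular basis": run the sync hourly via WP-Cron.
add_action( 'smolblog_sync_tweets_event', 'smolblog_sync_tweets' );
add_action( 'init', function () {
	if ( ! wp_next_scheduled( 'smolblog_sync_tweets_event' ) ) {
		wp_schedule_event( time(), 'hourly', 'smolblog_sync_tweets_event' );
	}
} );
```

Storing the tweet ID as post meta is what makes the sync safe to repeat: run it as often as you like, and each tweet only becomes a post once.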

Phase Later

Some other ideas that will have to come later, after the basic version is working:

  • Posting natively with ActivityPub
  • Cross-posting to Facebook Pages (depends on API support from Facebook)
  • Cross-posting to Instagram (currently being privately tested by Facebook, will depend on Facebook being kind and benevolent and honestly I don't expect this to ever be possible)
  • Cross-posting to YouTube/Vimeo
  • Cross-posting to DeviantArt
  • Dashboard for following other sites/people (use RSS/ActivityPub to "follow anyone"; see the sketch after this list)
  • Easy reblogging-as-oEmbed
  • Supporting Micropub APIs well
  • Native post editor for when Gutenberg is too much
  • Allow end-user editable custom theme
  • Easy podcasting
  • Asks
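
For the "follow anyone" dashboard idea above, the RSS half is already within reach: WordPress bundles SimplePie behind fetch_feed(). Here's a minimal sketch of pulling the latest items from a list of followed feeds; where that list comes from (and how ActivityPub sources would feed into it) is left open.

```php
<?php
/**
 * Minimal sketch: build a simple "dashboard" list from followed RSS feeds
 * using WordPress's bundled fetch_feed() (SimplePie). The list of feed
 * URLs would come from wherever the plugin ends up storing follows.
 */
function smolblog_dashboard_items( array $followed_feeds, $per_feed = 5 ) {
	$items = array();

	foreach ( $followed_feeds as $feed_url ) {
		$feed = fetch_feed( $feed_url );
		if ( is_wp_error( $feed ) ) {
			continue; // Skip feeds that fail to load or parse.
		}

		foreach ( $feed->get_items( 0, $per_feed ) as $item ) {
			$items[] = array(
				'title'     => $item->get_title(),
				'permalink' => $item->get_permalink(),
				'date'      => (int) $item->get_date( 'U' ),
			);
		}
	}

	// Newest first, like a dashboard should be.
	usort( $items, function ( $a, $b ) {
		return $b['date'] - $a['date'];
	} );

	return $items;
}
```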

There's a lot here. I'm not going to be able to do this myself. But I'm going to try. If you want to follow along, the best place to do that is here on this blog (see email widget at the bottom of the page). If you want to see or contribute code, check out the GitHub repo for the plugin.

I have lots of hope and plans. I hope to have more. Thanks for reading, everyone.


What Makes A Platform, or How Do We Recreate Old Blue

It’s not enough to just make something. It’s got to be worthwhile. So if we’re going to do this, we’re going to do this right. Let’s start with the past.

What made Old Blue so good?

Old Blue (the site I will not name for fear of Big Red) was lightning in a bottle. There’s no way any site can hope to recreate the same success. It was the right parts at the right time, and whatever truly takes its place will be something unexpected. So what were the right parts?

The Easiest Way To Actually Blog

Old Blue removed a lot of the friction of blogging. These weren’t just technical challenges, though it took care of those as well. There were no servers to configure, no software to download. You picked a username and boom! You had a blog.

Big deal, other services (like WordPress and Blogger) were that easy. Where Old Blue really excelled was in getting content onto your blog. You were allowed and even encouraged to post content you found, not just content you wrote yourself. This was emphasized further by the “reblog” functionality that allowed you to easily repost content from another’s blog onto your own, giving you content for your own blog while attributing it to the original poster.

The problem of starting a blog is easily solved. Old Blue solved the much harder problem of how to easily get content onto a blog.

Dashboard Confessions

Even with the reblog button, though, there was still the matter of finding blogs to reblog from. For this, Old Blue took a page from the then-new Twitter and added the ability to “follow” other blogs. Their posts would then show up in a standard format on your “Dashboard.”

While this took away a large portion of the customization, it made keeping up with blogs easier than ever. There were no RSS feed readers or poorly configured Google Analytics to worry about; readers got to read and bloggers got their consistent audience.

Mid-2000s Geocities-Style Self-Expression

Purists will complain about the single-column layout of most Old Blue blogs. They will decry the lack of responsiveness, complaining in tandem that the owner has heard of neither smartphones nor twenty-seven-inch monitors. One comment complained that the state of web design on Old Blue was similar to Geocities in the mid-2000s. I agree wholeheartedly, but I see it as a positive.

Self-expression has always been a part of the social internet. It started with Geocities sites, migrated to MySpace profiles, and eventually settled on Old Blue blogs. All of these allowed mostly unrestricted styles, letting site owners pick and choose random HTML, CSS, and JavaScript snippets from across the internet and blend them together into a miasma that was unmistakably them. Old Blue took it a step further, allowing custom domain names for free. If you didn’t want Old Blue’s name anywhere on your public blog, you didn’t need it.

Did it look ugly? To some. Did it sometimes break? Yes. But it gave people ownership over their blogs, allowing them to feel like their space was truly theirs.

Anything Goes

Everyone “knows” that Old Blue was full of illicit/NSFW material. And, let’s be honest, it’s made it hard for many to take the service seriously. In a professional context, the last thing a service needs is something work-related showing up next to something, well, not safe for work! This is doubly true when it comes to advertising, a sad fact that has robbed the service of much-needed revenue.

And yet, this exceptionally permissive content policy had a side benefit. Content creators were free to post without fear of their content being removed for a nebulous “terms of service violation.” This was especially relevant in the wake of other online communities like LiveJournal and FanFiction.net nominally “cracking down” on adult content. These crackdowns were, at best, selectively enforced and relied heavily on community reports; the end result was that illicit material that was nominally disallowed, but either tolerated by or unknown to the wider community, survived on those sites anyway.

Content creators whose work was illicit (or even objectionable in other ways) could post freely on Old Blue without worrying about their content suddenly disappearing. This drove more people to the platform, in turn making it more attractive to other content creators with “safer” material. The network effects took over and made Old Blue a force to be reckoned with.

Hyper-specific Hyperfixations. Or not.

Old Blue made it incredibly easy to sign up and start a blog. That blog could be as specific or general as you wanted. And when you got to the point where you needed a different space, you could start another blog. And another. And another.

Content creators could make different blogs for different fandoms, different levels of content safety, or just different ideas in general. This gave rise to creatively named, highly specific blogs, like the notable “effyeah”-named blogs or names like “picsthatmakeyougohmm.”

What Would We Need?

So, using these principles, what features would a potential replacement for Old Blue need?

  • Low-friction signups
  • Easy to find and post content
  • Easy to make multiple blogs
  • Easy to follow interesting blogs
  • Open-ended theming
  • Custom domain option
  • Clearly-defined (if not permissive) content policy

Six of these are technical problems. Good programming and good design can make these features sing. The issue is the last, social problem: the content policy.

The only site of any significant size that has survived with a permissive content policy is Archive Of Our Own. It’s run by the Organization for Transformative Works, a nonprofit dedicated to making a space for works that would not otherwise have a home. As such, they have devoted significant resources to ensuring their policy can withstand legal challenges, and they rely on true tax-deductible donations to fund the site instead of skittish advertisers. Any platform that would truly wish to fill the shoes of Old Blue would probably need to take a similar approach.

An alternative is the one taken by WordPress. Savvy web citizens know that there are two sides to WordPress: the free website where anyone can sign up for a blog, and the open-source software anyone can install for free on their own web server. While downloading and installing WordPress is not necessarily for the faint of heart (it requires some technical knowledge of web servers and how to maintain them), WordPress is widely considered one of the easiest pieces of web software to install and use.

This ease of deployment allows the free website WordPress.com to have a stricter content policy, since anyone adversely affected can take their content to a self-hosted blog with a little effort. This is more than simply offering a blog “backup”; WordPress has built-in mechanisms to move content from one WordPress-powered blog to another with few changes. A blog hosted on WordPress.com with a custom domain can be changed to a self-hosted WordPress blog with few to no visible changes to visitors.

While the WordPress method doesn’t eliminate the social problem of a content policy, it does reduce the stakes. If a group of users find the content policy onerous, they can set up (and pay for) their own WordPress-powered platform.

What next?

And here is where I will cut this off. I humbly submit this for comment, knowing I’ve left out some things that weren’t integral to my own experience on Old Blue but may be essential to others.

I’ll also be working on a follow-up to discuss particular technologies that could be used to create a new platform in this vein, so if you have any suggestions there, I’m all ears.

But I do want to close with this: these are ideas. These are thoughts. And that’s all they are. Building a platform takes a lot of work, both in the programming and in how it is socially maintained. And as Facebook, Twitter, Google, and Big Red are learning, the rules you choose to have and how you enforce them can have dramatic consequences for the community that builds up around your platform. This is not something I can tackle on my own, and it is not something I would ask anyone to volunteer for.

This is a thought exercise, a way of getting these ideas out of my head. I hope you find it useful, or at least a little informative. And if it helps shape whatever platforms come next, I’ll be even more happy. Thanks for reading; I’ll see you next time.


Sonic Mania and the Triumph of Fan Culture

The livestream was cutting in and out. There was a constant buzzing that replaced most of the audio. And instead of a trailer, the in-house audience saw a brief Sonic Mania logo followed by a Mac desktop. But after many fits and starts, the trailer finally played.

[youtu.be/xmkT113MLYI](http://youtu.be/xmkT113MLYI)

There’s a level of excitement around Sonic Mania that Sega hasn’t seen in roughly five years, the time since Sonic Generations was released. Fans are excited about the chance to play a new game in the vein of the classic Sega Genesis games many grew up playing. Polygon described it as "finding a long-lost Sonic title from the mid-'90s."

But what makes Sonic Mania any different from previous Sonic games, introduced with much hype and fanfare only to be revealed as spectacularly mediocre at best? After all, this is the same company that has had many attempts to “reboot” the franchise or “return it to its roots,” often with lackluster (if not horrible) results. Is the anticipation justified this time?

The proof, as they say, is in the pudding. But this time, Sega’s investing in the right chefs. And before I try to string this metaphor out too far, let’s talk about The Avengers.

Precedence: Marvel’s The Avengers

[Image: Side-by-side picture of the six Avengers, via Wallpapers Wide]

No, Sonic Mania isn’t about crossing over with other video game franchises; I’m talking about the making of the film itself. It’s easy to forget, not even ten years after it came out, that The Avengers really was the culmination of an unprecedented project.

Most of what I could say about The Avengers has, honestly, already been said more eloquently by MovieBob in his forty-minute video on how The Avengers is “really that good.” Go ahead and watch it if you’ve got time. If you don’t have time, find the time and watch it.

For our discussion, the biggest takeaway is the impact of Joss Whedon. Whedon loved comics, he loved the characters, and he was already emotionally invested in making their big-screen outing the best it could be. In other words, he was a fanboy.

But the simple fact of him being a fanboy wasn’t enough to qualify him to direct a multimillion-dollar motion picture. After all, the world is filled with the half-finished projects of people with an abundance of passion but a lack of skill. Whedon, however, had already proven he could handle a “band of misfits comes together to form a team” story.

[Image: Promotional image for Firefly, via Geek League of America]

In the case of Joss Whedon, Marvel could have reasonable confidence in their director based on his prior work. While The Avengers is very obviously different from Firefly and Serenity, the “feel” of the two is similar enough that Whedon could apply lessons from one to the other. Only this time, he got to do it using Marvel’s money.

What does this have to do with Sonic? Fanboys.

Labors of Love

[Image: Title screen from Sonic 2 HD, via Sonic Retro]

There’s a difference between being “a fan” and being “in a fandom.” For myself, I’m a fan of movies like The Princess Bride and The Matrix and video games like Super Mario Bros: I enjoy them and will talk at length about them. But I’m not just a fan of Sonic; I’m in the Sonic the Hedgehog fandom: in addition to talking about and analyzing the series, I also participate in fan-created works based on the series. Many Sonic fans, myself included, use the characters and concepts from the series as a jumping-off point for their own creative works.

These works can be as simple as a short story featuring Sonic and Tails or a drawing of the antagonist Dr. Eggman. Sometimes, though, these works can get a little more complex. Short stories blossom into novels, simple drawings turn into detailed comics, and transcriptions of the songs turn into complete re-interpretations.

And then there are fan games.

The Sonic fan game community is helped by what many consider to be a lack of quality Sonic games in recent years. Since fans can’t get the game they want from Sega, they decide to program it themselves:

  • Christian Whitehead, a.k.a. Taxman, programmed a game called Retro Sonic using his own game engine. Fans loved how it captured the feel of the original games. Whitehead currently works as an independent developer.
  • Simon Thomley, a.k.a. Stealth, made a name for himself by modifying the code of the original games to create entirely new games, one of which was the incredibly ambitious project Sonic the Hedgehog Megamix, a modification of the already-complex Sonic CD. He is also working as an independent developer under the name Headcannon.
  • A massive fan effort was undertaken to create a high-resolution remake of Sonic the Hedgehog 2, aptly titled Sonic the Hedgehog 2 HD. Two of the designers involved went on to found the mobile games studio PagodaWest Games, bringing along the musician for the ride.

A Brief But Important Aside About Sonic 4

It is worth mentioning that Sonic Mania is not the first time Sega has attempted a “Classic Sonic” game in the modern era. The most notable of these attempts is Sonic the Hedgehog 4, a series of episodic games merging the modern Sonic art style with the classic Sonic gameplay.

To do this, Sonic Team brought on Dimps, the development team responsible for the Sonic Advance and Sonic Rush series. The Sonic Advance games were two-dimensional Sonic titles that, while not incredibly popular, held their own. In the end, Sonic 4 played very similarly to those games, and that wasn’t a good thing.

If the game had been called Sonic Advance 4, it would have been fine. If it had been called Sonic Blitz, Sonic Island Adventure, or even Sonic Mania, it would have been fine. Fans would have lamented the inconsistent physics, but the game would have ultimately been forgotten on the pile of not-Sonic Sonic games.

However, by calling the game Sonic the Hedgehog 4, Sega called up images of the original Genesis games and all the nostalgia and expectations that come with them. In this environment, inconsistent physics and slightly-off gameplay became unforgivable errors, and Sonic 4 was never able to gain the fan support it needed to continue past two episodes.

Welcome the Fans

So how do we know that Sonic Mania isn’t going to be another Sonic 4? Because the new developers have the right amount of reverence for the series, and we’ve already seen their work.

Christian Whitehead’s Retro Engine is already being used to re-create classic Sonic games. If you’ve played Sonic CD on a modern platform, you’ve played his version. If you’ve played Sonic the Hedgehog or Sonic the Hedgehog 2 on Android or iOS/tvOS, you’ve played the version made by him and Simon Thomley. The feel of the game is so accurate, Sega is comfortable using his engine with some of the most important Sonic games.

Meanwhile, the team at PagodaWest found success with Major Magnet. At first glance, Magnet is nothing like a Sonic game, but the art and musical style have the same flavor if one knows what to look for.

And as for the music, one only needs to browse through Tee Lopes’ YouTube page to hear his love for the original songs.

These fans know what makes a good Sonic game. And, arguably more importantly, they know how to make a good Sonic game.

For once, Sonic fans may actually be looking at a Good Future.

"Future" signpost from Sonic CD

The New Television

It was a little over a year ago that Netflix CEO Reed Hastings laid out their strategy:

“The goal is to become HBO faster than HBO can become us.”

I would argue that this has happened. They’ve surpassed HBO in the number of (paying) subscribers, essentially proving there is a market for streaming internet television separate from a traditional pay-TV (cable, satellite, IPTV) subscription.

So let’s have some fun. If we take the assumption that television will move online at face value,1 what options could a television viewer have 5-10 years from now?

The New HBO: Netflix

Netflix is leading the way in premium online video, both in marketshare and mindshare. They see themselves as maintaining this premium brand, and their long-term manifesto specifically mentions having a top-tier viewing experience, including no commercials.

Today, if you ask people what the best channel on cable is, they’ll probably say “HBO,” even if they don’t subscribe. Everyone knows HBO is where high-quality television like Game of Thrones and True Detective is shown. Likewise, if you ask people where the best online video is, they’ll probably say “Netflix,” thanks to original shows like House of Cards and Orange Is the New Black.

Netflix has the market share now, and they’re also doing everything they can to stay foremost in people’s minds when it comes to television. I don’t see them losing this position as more and more video services pop up; their huge head start in content and technology should keep them in the lead. Provided, of course, they don’t do anything stupid.

The New Showtime: HBO

The first name people think of in premium television today is HBO, but the second is Showtime. They win their fair share of awards and attention with shows like Dexter and Homeland; but they’re usually thought of in the same sentence as HBO, not on their own. This is the place I see HBO occupying: excellent in their own right, but always in relation to Netflix.

This doesn’t seem like it should happen. HBO is part of a much larger corporate behemoth and has had many profitable years of existence to build its content abilities. Also, according to numbers from SNL Kagan, HBO’s wholesale price (what HBO actually receives after the cable company takes its cut) is about the same as what Netflix charges its end users. In other words, if HBO were to instantly switch to a direct-to-customer model, they would only need to match Netflix on price to bring in the same revenue.

I see two major obstacles for HBO going forward. The first is their ties to the cable industry and the status quo. While the current system allows them to arbitrarily raise their prices without immediately alerting the end-users (a problem Netflix is running into), it also ties them closer to the existing pay-TV market and gives them less time to establish themselves firmly in the streaming market. The second problem is that their current forays into the streaming market have been met with technical glitches at the worst possible times. Normally for a tech-savvy media company, the technical problems are easy and the content problems are hard, but at the scale HBO would need to operate to compete on Netflix’s own turf, the technical problems are quite hard and could impact HBO’s bottom line more than they realize.

The New Network: Hulu

Network television is often decried for being bland, unoriginal, and all-around mainstream. But what every television hipster (of which I am one) knows in their heart is that this is where the eyeballs are. It used to be that broadcast television (and by extension network TV) was the only way to reach most Americans. Today, cable’s reach has grown to the extent that massive audiences are possible for shows like Breaking Bad, but the original networks still command a powerful presence in the television world.

Hulu is best known for making that network TV readily available to internet viewers. Viewers can easily catch the last couple of episodes of their favorite network dramas and late-night talk shows for free on their computers, as long as they are willing to tolerate a few commercials to do so. They also offer a premium service, Hulu Plus, that allows access to more episodes and shows as well as viewing through smartphones, tablets, and set-top boxes. Unlike Netflix and HBO, however, Hulu Plus still contains commercials. While this seems antithetical to a premium service, it is practically no different from nearly every single channel available on cable.

I expect Hulu to continue to invest in its original programming, much like HBO and Netflix. Its focus on network-style programming gives it the ability to become the next mainstream-focused network. It remains to be seen, however, whether its decision to keep advertisements in its subscription offering will affect its ability to keep subscribers over the long term.

The New ESPN: ESPN

ESPN describes themselves as “the worldwide leader in sports,” and they have done their best to live up to that description, especially when it comes to online video. ESPN has offered live events via their ESPN360 website since 2007, relaunching it as ESPN3 in 2010. ESPN3 is not a free service, however, as it is only available to internet users whose service providers have agreed to pay ESPN for access to the service. This is in addition to ESPN’s recently launched video platform, appropriately titled WatchESPN. Similar to HBO GO, this service is only available to subscribers of participating cable providers.

Other major sports providers, like NBC and FOX, have their own streaming video websites and apps. Unlike ESPN, however, these are relatively recent developments, and ESPN’s head start in building out its live streaming infrastructure shows. Throw in ESPN’s overwhelming mindshare in sports broadcasting, and they won’t be going anywhere in the new television world.

The New Cable: TV Everywhere

TV Everywhere is an initiative by the existing cable/satellite companies to tie online streaming to existing cable subscriptions. For example, to use the March Madness app to watch the NCAA Men’s Basketball tournament, you must be an existing cable subscriber to watch any game not broadcast on CBS.

In the future, it’s not hard to imagine a “virtual cable” operator that has access to these apps as its primary service as opposed to a secondary add-on. This service probably wouldn’t be any cheaper than existing cable, but it could easily compete in other aspects such as ease-of-use, customer service, and a general awareness of its place in the new world that other cable providers would not have.

So why is this listed separately from HBO and ESPN? In actuality, it’s not that different, and those channels could easily be part of this “virtual cable” company. The difference with HBO and ESPN is the simple fact that those channels have the sheer brand power to break away from cable. It’s unlikely that TBS or Animal Planet could sell their channels outside of a bundle, but HBO and ESPN have such strong brands that not only could they easily sell access to their apps on their own, they could break their existing cable contracts to do so and not lose many (if any) cable affiliates in the process since no cable company wants to offer a service without those channels.

So what?

Nothing, really. While I could see a lot of these things playing out as I proposed here, anything can change when there’s technology involved. The mythical Apple Television (separate from or a reboot of the current Apple TV) could be just as game-changing as everyone wants it to be. Netflix could have another Qwikster moment or find that its original content strategy is unsustainable. The ongoing net neutrality debate could actually affect things.

There’s a lot of what-ifs ahead in the world of television, but personally, I can’t wait to see what happens.

    1. I know I’m asking a lot here. The incumbent providers, many of whom own channels, will do everything they can to protect the status quo. But let’s face it, saying “everything will stay the same because the de facto monopolies given to the current television providers allow them to prevent this future at all costs” is about as interesting as sending the eagles to Mordor.

To Learn and Understand

I am a big picture thinker and a perpetual dreamer. I love taking an idea and fleshing out the concept, and I’m continually inspired by the potential that today’s technology affords. If I’m in a room with a like-minded person, things can really take off.

So what would happen if you put me in a room with over one hundred?

My wife and I attended Greenville Grok, a conference designed around conversations and bringing people together. It’s put on by The Iron Yard, a local startup accelerator / code academy / coworking space, and was started by Matthew Smith, who realized he enjoyed the conversations and hangout times at conferences more than the keynotes and formal talks.

While there are a couple of keynote speakers at Grok, the emphasis is on what it calls “10-20s”: ten- or twenty-minute discussions on one topic in groups of eight to ten. My groups included software developers like me, graphic designers, artists, managers, and others; and we came from hot startups, established players, and freelance work alike. While we all shared an interest in and affinity for technology, the similarities stopped there.

The opening keynote by Kristian Andersen set the tone for the discussions to follow. He started by dispelling the notion that people need to “find what they’re passionate about,” reminding us that the actual root of the word means “to suffer.” Finding what we are built for and willing to suffer for should be the real goal, not simply picking a topic we are excited about. That theme wove its way into the discussions that followed; we introduced ourselves to our groups with “What’s your name, what do you do, and what do you suffer for?”

There were a number of good questions addressed in the breakout groups, including

  • How does one deal with the transient nature of digital work?
  • What can a developer do to keep his skills polished?
  • If the internet disappeared tomorrow, what would you do?
  • What's the place for liberal arts education in learning to code?

I was also able to talk through some of my thoughts on Netflix and television, but my biggest personal insights came from bouncing an idea I’ve had for an educational video series off my groups. Being able to get quick feedback on my abstract idea has helped give me direction for this venture. (The actual execution, of course, is still up to me.)

The fringe aspects of the conference were good too. We elected not to participate in the BMW test track activity, but the Squarespace-sponsored “hangover lounge” had Mario Kart set up, so that helped. The Vagabond Barista had a pop-up shop set up, and it was nice to have his (very caffeinated) coffee in the mornings. Even the conference t-shirt was different: local print shop Dapper Ink brought a silkscreen rig to the conference and let attendees print their own copies of the t-shirt.

This is the second Grok I’ve attended, and I will continue to attend any chance I get. I learn best by participating (or maybe I just love running my mouth), so the format of this conference means I get much more than my money’s worth. The fact that it’s put on by cool people, has cool stuff, and was held in a cool building just makes it all the more enjoyable.

See you next year, everyone!


Why Apple Should Not Buy Nintendo

They really shouldn’t. I want to set expectations up front, and when you’re talking about either Apple or Nintendo, people (myself included) are going to have Opinions. But let’s back up a bit first.

The ideal

A good merger is one where the whole is greater than the sum of its parts. By that, I mean that the two companies coming together amplify each other’s strengths and compensate for each other’s weaknesses.1 The best mergers will emphasize the second more than the first.

Let’s look at the Apple-NeXT merger in 1996 as an example of a successful merger. NeXT was a small company that made what they considered to be the best operating system in the world, NeXTSTEP. Their computer, the NeXTcube, was used for a variety of advanced computer uses, most notably by Tim Berners-Lee to create the first web server and web browser. They also had Steve Jobs, arguably one of the best business leaders in the industry. By 1996, though, they had dropped their hardware division and were only selling their software to run on conventional PCs as a replacement for Windows.

Apple at this time was in trouble. They made what they considered to be the best computers in the world, but they were lagging behind in the software race. They had tried and failed to develop their own modern operating system, and were in serious danger of losing the personal computer market to Windows 95. Their management was unfocused and unable to bring the different factions within Apple to work together.

Apple and NeXT shared the common goal of making the best products they could. NeXT had a solid operating system but couldn't convince people to give up Windows to use it. Apple had strong hardware but their software hadn't evolved enough to take advantage of that hardware. At its core, the merger brought the two companies together on their common goal, with Apple supplying the hardware NeXT needed and NeXT supplying the software and management Apple needed.

The end result? The immediate change with Apple was the influx of good management from NeXT, particularly Steve Jobs. The software team at Apple immediately began work on newer versions of the existing Mac OS (versions 8 and 9) that bought Apple enough time to get the new, NeXTSTEP-based Mac OS out the door as Mac OS X. The advanced operating system not only improved performance on Apple's existing hardware, but allowed them to switch to a completely different type of hardware when they needed to. On top of that, OS X was versatile enough that Apple would eventually use it to power the iPhone and iPad.

Over 15 years later, that $400 million investment is still paying off. That's a good merger.

The reality

So where are Apple and Nintendo today? What are their strengths and weaknesses that would make or break our hypothetical merger?

The Apple of today prides itself on a (dare I say it) magical marriage of hardware and software. The design ideal is that when you see their work, whether hardware or software, it is beautiful; but when you need to actually do work, the hardware and software become almost invisible compared to what you are doing.

However, Apple has traditionally not been good at games. Not many people know of their attempt at making a game console with Bandai, and for good reason: it wasn't good. Gaming on the Mac has always been a second-class citizen, and companies like Valve have only begun targeting the Mac in the past few years. Games are very popular on iOS devices, but those games are not significantly tied to iOS itself. As Ben Thompson writes:

  • Games take over the whole screen; this means that tailoring a game to fit a particular platform’s look and feel isn’t important
  • There is an entire industry devoted to providing cross-platform compatibility from day one. Most game developers are targeting game engines such as Unity, not iOS or Android. This is acceptable because of point one

Nintendo is committed to making the best gaming experiences possible, then making them better. In the past, this has led them to create some of the most beloved franchises in the video game world, including Super Mario, Zelda, Kirby, and Pokémon. In recent years, this has meant pursuing new hardware: not only the motion controls for the Wii and the touchscreen for the DS but also things like the DK Bongos for the GameCube, the microphone for the DS, and the stereo camera on the 3DS. For every hardware feature Nintendo releases, there is a game like Wii Sports, WarioWare, or Donkey Konga designed specifically to get the most fun from that feature.

The current Nintendo is a victim of a changing landscape. They lost mindshare and marketshare among hobbyist2 gamers to Sony and Microsoft, and their (smart) response was to pursue the mainstream market with the Wii and DS. This strategy paid off until iOS and Android devices began capturing mindshare and marketshare in the mainstream with free-to-play casual games among other benefits. Their efforts to woo both markets with the Wii U and 3DS have been decent, but some worry it won’t be enough to keep the company around.

The dream

So what happens if we bring the two companies together? Let's assume for the sake of argument that Apple uses some of its cash hoard and buys Nintendo outright.

From day one, Apple has a large library of exclusive games for its platform, games that are fun and that people want to play; and Nintendo is essentially guaranteed a place in the new smartphone world. Nintendo can, with some effort, create versions of classic games from its library for iOS, accessible to a massive audience that would easily pay the current asking prices of $3-5 each. These games would obviously not be available for any other smartphone or tablet platform, increasing the value of iOS both to consumers who want to play Nintendo games and to developers who want to reach those consumers.

Going forward, Nintendo can help Apple to move its platform forward much like they have with their own platforms in the past. Possibilities dreamed up by the iOS platform team can be made concrete by Nintendo's game team. Both companies thrived in the past by pushing the integration of hardware and software, and having both companies push each other could bring out the absolute best in both. If Apple does release an app-capable Apple TV as rumored, a library of Nintendo games would only help sell the device.

Let's not leave hardware out of the equation. A Jony Ive-designed game console would be great for publicity, but Nintendo could gain more immediate benefits from Apple's supply chain. Apple has incredible buying power when it comes to quality components, especially solid-state storage and touch screens. A (relatively) quick update to the 3DS and Wii U touch screens would elevate the quality of those devices, and that is an area that Apple has made itself an expert in.

This, of course, assumes there are no cultural issues with the merger. Part of what made the Apple-NeXT merger so successful was the understanding that NeXT management was essentially taking over Apple. If the hardware or software teams at the two companies aren't able to find common ground with each other, the best talent could walk out the door and the resulting company would be far worse off than either company would have been separate. This could be a moot point; desperation on either side has a way of forcing compromise where it wasn't thought possible.

But that's not why I think it wouldn't work.

The problem

The best mergers amplify shared strengths and compensate for weaknesses. The worst mergers amplify shared weaknesses. And Apple and Nintendo share a similar weakness: online services.

One of Apple's biggest competitors moving forward is Google. Google was born on the web, and as such, Google understands the web on a near-instinctual level. Servers talking to servers talking to phones is a beginning requirement for a product, not an idea tacked on halfway through the process. More importantly, they know how people use the web. They know how many people leave if search slows down by even a fraction of a second. They know how to give users email, file storage, and online document editing, and keep it all in sync. Apple's previous online service, MobileMe, was not well received. Their new service, iCloud, is an attempt to modernize the service and make it more reliable, but the reality falls short of the ideal.

One of Nintendo's biggest competitors moving forward is Microsoft. The Xbox is a powerful game machine on its own, but its biggest strength is the Xbox Live service. Every Xbox Live game ties into the same online infrastructure, allowing individual players to define their friends once (using easy-to-remember names) and play with them in every game. This is something Microsoft has fought for from the beginning of the service. Most importantly, the interactions and purchases in Xbox Live are defined around people. Contrast this with Nintendo, which bases its interactions around devices. Social connections are made by exchanging device-specific friend codes which have all the joy and personality of 16-digit phone numbers. Purchases and friend lists are device-dependent, so replacing a console outside of a warranty repair means losing your entire library of downloaded games.

Knowing all this, how appealing does it sound to know that the company that brought you MobileMe is merging with the company that brought you friend codes?

To be fair, both Apple and Nintendo are learning in this area. Apple's iCloud service is getting better, but it will be some time before developers (and their users) begin to trust the service. Nintendo is slowly making improvements to their online service, but they would still rather shut down a service than see it misused.

Both companies still approach the internet the same way they approach products: slowly and deliberately. This often leads to them missing a crucial component of what their customers actually want. A merger would make this worse, not better, as companies (just like people) lean on what they know during times of transition. A merger would deny both companies the opportunity to truly learn and understand online services in the modern, connected world.

Which stinks because I really want to play Pokémon on my iPhone.

  1. It’s a lot like marriage in that regard, but that’s a topic for another day. 

  2. Some places call them “hardcore” or “serious” gamers, but the basic idea is people who pursue gaming like most people pursue hobbies: investing more time and money than average and knowing more about the subject than most people. 


The Voice

It was the middle of the night in the middle of winter my freshman year when God spoke to me.

I was skirting the edge of depression and worrying about the future. In this particular case I had worked up the courage to walk across campus to see if some girls I had been hanging out with were around. They weren't. On the way back to my side of campus I stopped at the lake to calm myself. The part of my brain that I should never listen to (yet always do) was yelling again about how much trouble my future was in. In this case, it was how my fear of approaching women and my general personality and just absolutely everything about me was going to mean that I was not going to find my wife at college even though most people do and that meant I was never going to find a wife in general and so on.

So I went down to the lake to pray.

Now, when I say "pray," you should read "talked and sometimes yelled out loud at God because there was no one else to listen." It was more than a little irreverent, but it was what I needed. I poured out everything: how anxious I was about the future, how I was afraid that even if God brought the right person into my life I'd be too stupid to notice her, how lonely I was, and how afraid I was that I'd always be lonely. And while I didn't hear a voice, my thoughts went in a direction that was completely different from where they were going.

In that moment, it was like God took the scared, freaking out child that I was, took him gently by the shoulders, knelt down, looked him in the eyes, and said, "Evan, I have been watching out for you your entire life. Why would I stop now, especially on something that is this important to you?"

I was still scared. But a lot less freaked out. And—spoiler alert—I found her.

This week, I've been skirting the edge of depression (maybe more than skirting, to be honest) and worrying about the future. In this particular case, I've been without a job for three months now. I've been searching and interviewing, and I've been subject to the usual delays and pitfalls of a job search. Despite my relative success at keeping myself busy with a nice side project, I've been giving into panic more than I care to admit. The part of my brain that I should never listen to (yet always do) is yelling again about how much trouble my future is in. In this case, it's how my lack of what I perceive as a robust background is going to mean I can't get a job and if I do get a job is it going to be one that I will enjoy and not just show up to and will I really be able to do the job if I do get it and so on.

Time to go down to the park to pray, but somehow I don't think the message has changed.


File Sharing Rant

I’ve largely taken a back seat on the whole file sharing debate. However, now that I actually have a self-published work, I feel it is time for me to take a public stance. Here goes…

I’m going to have to agree with John Gruber’s assessment of Richard Stallman’s latest essay:

I waver between rolling my eyes at Stallman’s kookiness and admiring his singleminded determination.

In my case, however1, Stallman’s kookiness extends to a large portion of the Free Software Foundation’s philosophies. Above all else, the FSF champions the right to modify and redistribute software. I have no problem with this goal as I will often promote a free or open source program (which apparently are not the same) when it is a viable alternative to a commercial program. I use WordPress instead of ExpressionEngine. I use The GIMP instead of Photoshop. But I use Safari instead of Firefox because I find Safari to be faster on my Mac. In my case, I am willing to give up a “freedom” that I don’t really use (the ability to modify the source code) in exchange for a more pleasant computing experience.

It is Richard Stallman’s opinion on creative works that I find unacceptable2. Never mind that he refuses to endorse any of the Creative Commons licenses because not all of them are free (he, of course, suggests the GPL). What is dangerous is that he equates creative works such as movies and music with information, and file sharing with the general term “sharing.” In doing so, Stallman shows his background as a computer scientist. A program is written to solve a problem; the FSF’s arguments that there are more benefits to releasing the source are valid here largely because the program can benefit from the scientific method. Information wants to be free, and the solution to the problem (the program) is simply another form of information.

A creative work, however, is not simply information. It does not consist of simple facts or present a solution to an established problem. It is, when done properly, a reflection of the author or artist’s heart. It can be anything from a commentary on society to a rewrite of a poorly done movie to an attempt to reconcile temporal existence with eternal life. As such, creative works cannot be held to the same standards as computer programs, and vice versa.

Equating creative works to information reduces the author’s creative expression to its digital format, an act of language that cheapens the work even more than the term content. And distributing digital creative works over file sharing is not simply sharing, it is copying. Like anything distributed over the internet, the digital information is copied, not moved, from one computer to another. Loaning a CD or a book to a friend is sharing, since while one is in possession of it the other is not. File sharing creates copies, so that both are in possession at the same time. While not necessarily the same as theft, this cannot, by any reasonable definition, be considered sharing.

This is not to say I am against file sharing as a whole. There are hundreds of out-of-print and hard-to-find works that can benefit from file sharing in order to preserve their value to society. Also, it can be used by lesser known artists to encourage the viral word-of-mouth growth that is essential to growing a fanbase. This is the aim of Creative Commons, and I am disappointed that a man committed to “freedom” refuses to acknowledge the benefits of such a system.

1 John Gruber may agree with me, but I won’t presume to speak for him.

2 Yes, it’s a Wayback Machine link. The post as linked from the original Slashdot article no longer exists.