Cheating at school is like pirating music

Education needs to change. For an industry to change, it must either change itself or be forced to change by its customers.

Top-down changes are driven by an industry itself, for example via innovation. When Apple introduced the iPhone, the phone industry changed from the top down. Bottom-up changes are driven by the customers of an industry, for example when their behaviors change. As more people discovered the convenience of pirating music online, the music industry changed from the bottom up.

Industries can cause customers to change, or customers can cause industries to change. The outcome is the same in both cases—an industry and its customers have changed—but the direction of causality is reversed.

Bottom-up changes can be particularly potent in entrenched industries where baggage and bureaucracy breed complacency. Education is arguably one of our most important yet entrenched industries. Like the music industry before it, it’s ripe for some bottom-up change.

What if some forms of cheating at school are like pirating music: unfair, unsustainable, but ultimately disruptive? Consider some of the ways people can cheat these days:

  • Groups secretly work together on assignments despite being instructed to work individually. People want to collaborate. Education technology should facilitate collaboration.
  • Online sources are plagiarized or solutions are purchased. People want to remix, not recreate. Education technology should automate attribution and celebrate remixing. Original content creation is still important, but it can come later after students have learned more.
  • Google and Wikipedia can defeat questions in online tests. People want to search, not memorize. Education technology should embrace instant searching. Regurgitation sucks.

Students, cheaters, and entrepreneurs

I don’t condone cheating, but I think I understand it. It’s lazy, but it’s also efficient. Time is a valuable resource. Cheaters shift that resource out of low-productivity work and into work with higher productivity and greater yield (if only in the short term). This is also one way to define an entrepreneur.

Yet the conditions under which students work bear no resemblance to those of entrepreneurs, employees, or anyone else associated with companies.

Imagine a company whose employees have the ultimate goal of solving a certain problem. While solving this problem (which is made up by their boss), they’re forbidden from working together or communicating in any way. They can’t outsource or use other people’s technologies or ideas. They can’t use any technology or tools at all, actually. Except for a pen and pencil.

That’s the typical environment into which we thrust students when we send them to be judged by their exam solutions. We give them a contrived problem with absurd rules; it seems unsurprising that some of them cheat.

Adapt or die

Bottom-up change has a certain Darwinian inevitability about it. But if cheaters are indeed the new pirates, it might take a while for us to spot the trend. The first sign will be underground technologies that put the students who use them in a grey area: questionable, yet somehow right. This could already be happening somewhere.

These tools, the Napsters of the education industry, will offer paths to learning that are so efficient, low cost, and convenient that the education industry as we know it will be forced to adapt or die.

Recommendation Technologies Are Still “Very Stupid”

You’ve probably seen automatic suggestions at services like Amazon or iTunes (“You might also like…”). These are powered by recommendation technologies. Ashton Kutcher had a lot to say about social recommendation technologies at TechCrunch Disrupt yesterday (around 14 minutes into the video):

As social recommendation is being built, I think we’re being shown a lot of things that are like the things that we already like. In some ways the technology is still very stupid. We haven’t been able to isolate that which would be the diametric opposite of what we like, and thereby create challenge or conflict, which then creates ownership, which then creates appreciation, which then creates retention of users. That kind of conflict actually creates connectivity and creates social engagement… Conflict is enticing.

Insightful. I think recommendation technologies also shelter us from internal conflict. Exposure to news, books, and music that are in conflict with our current beliefs or interests can force us out of our comfort zone. It’s there, on the periphery of our present self-conception, where extremely rewarding growth and learning can occur.

Great recommendation technologies should push our limits, not keep us locked within them.

He Started Programming When He Was 6!

We’ve all seen it: “So-and-so started programming when he was only 6!” Or 12, or 15, or whatever. You see it in the media, but it’s often individuals talking about themselves.

The age you started programming doesn’t matter.

How silly would it be if we bragged this way about other things? “I’ve been talking since I was 2!” Most of us have been speaking for the majority of our lives, yet some people are much better speakers than others. The amount of time that has passed from the moment we started doing something until now is meaningless. Who knows what we did in the meantime?

Even the total amount of time spent doing something is a dubious measure of skill. A 20-year-old might be a far more captivating speaker than a 40-year-old despite having much less experience. Granted, Malcolm Gladwell’s 10,000-hour rule for the mastery of skills rings true in some cases, maybe even for programming. So say that. “I spent 10,000 hours programming by the time I was 12!”

You might believe programming is more like composing music than it is like talking. “Mozart started composing music when he was only 5!” Mozart was a prodigy whose father was already a successful teacher, musician, and composer. Mozart’s biology and upbringing were stacked. Are yours?

Were you programming non-trivial apps soon after you wrote your first line of code? Did you immediately proceed to program lots of apps, even complex ones, all by yourself, and collect payment for your work? Early in your career, did you meet and work with some of the world’s greatest programmers of your time? Mozart did all these things (with his music).

Maybe you did too. Or maybe music isn’t a great comparison after all. In any case, note that Mozart’s brilliance lies not in the age at which he started, but in the quality and quantity of his output. Show us what you’ve done.

My concern isn’t so much about the hollow bragging, though. I feel that programming should be started at an early age, much like the fundamentals of talking and reading and writing. And in that sense, the age we start programming does matter. My concern with celebrating the young age at which some people started programming is that it sends the wrong message. It says: “Programming is so hard that in order to be good at it, you should’ve started when you were young.”

And that’s false. If you’re not a programmer, but you’re curious what it’s like, try Codecademy. They’re a fellow Y Combinator startup whose site makes it easy and fun to learn the basics of programming, regardless of age and experience. If you can read and type and think, then you can program.

Update: this post generated some good discussion on Hacker News.

The Infinite Journey

They say that life is all about the journey, not the destination. My journey is that of an entrepreneur.

We start with nothing and work towards an imagined destination: a new product or service that adds value to the world. This destination is a moving target, subject to the winds of change that are markets, trends, customers, and competitors. We narrow the distance by building towards our goal while simultaneously pulling our goal towards us by validating markets, talking to customers, and differentiating from competitors.

In the best case, we’ll catch sight of our destination without much delay. But it likely won’t look as we had originally imagined it. And when we reach it, we’ll discover that it’s merely a stepping stone. One leg of our journey ends, only to reveal more possibilities ahead; our journey continues.

I love this potentially infinite journey. But what gets me up in the morning isn’t so much the journey itself, but the destination, the prospect of accomplishing something that matters and ultimately bringing my imagination to life. En route, while we’re building, I often feel impatient, restless, and dissatisfied, punctuated with alternating feelings of glory and despair, depending on the day.

I know I’m not alone.

As entrepreneurs, we’re never entirely happy with our journey. There are always more destinations in sight. On some level, contrary to conventional wisdom, the life of an entrepreneur isn’t about the journey; it’s about the destinations we pursue along the way. I believe this is intrinsic to entrepreneurship. Where we’re at is never good enough. Our perpetual dissatisfaction with the present is what motivates us to keep improving it.

There are other journeys we can enjoy: family, friends, and sipping a cold beverage on a summer afternoon. Life, in its totality, probably is about the journey. But at work, my sights are set on the next destination and those that follow, knowing this is a journey that won’t end.

Entrepreneur or otherwise, if you’re a fellow seeker of destinations, agitator of the status quo, and perpetuator of improvement, I raise my glass to you and your infinite journey.

Granularity, Attribution, and Intentionality

Communication involves passing chunks of information among participants. Each medium has a typical granularity of information:

  • Book: single, massive chunk delivered from author to reader.
  • Newspaper: medium-sized chunks by different authors.
  • Conversation: smaller chunks exchanged back and forth.
  • Twitter, SMS, messaging: tiny conversational chunks.

Collaboration tools like Google Docs and Simplenote can be seen through this lens of granularity as well. Google Docs, like its cousins Google Wave and Etherpad before it, is extremely fine-grained in that every single letter that people type is a chunk of information that is sent to all participants. Simplenote, meanwhile, sends chunks that tend to be more Twitter-sized.
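To make the granularity contrast concrete, here’s a toy sketch of the two extremes (my own illustration, not code from any of these products; all names are made up): sending every keystroke as its own chunk versus batching keystrokes into a coarser chunk whenever the typist pauses.

```python
def sync_per_keystroke(keystrokes, send):
    """Finest granularity: every typed character becomes its own chunk."""
    for char in keystrokes:
        send(char)

def sync_batched(keystrokes, send, pause_threshold=1.0):
    """Coarser granularity: characters typed in quick succession are
    merged into one chunk; a pause in typing flushes the buffer."""
    buffer = []
    last_time = None
    for char, t in keystrokes:  # (character, timestamp) pairs
        if last_time is not None and t - last_time > pause_threshold:
            send("".join(buffer))
            buffer = []
        buffer.append(char)
        last_time = t
    if buffer:
        send("".join(buffer))  # flush whatever remains at the end

# Typing "hi ", pausing, then typing "there":
sent = []
typed = [("h", 0.0), ("i", 0.1), (" ", 0.2),
         ("t", 2.0), ("h", 2.1), ("e", 2.2), ("r", 2.3), ("e", 2.4)]
sync_batched(typed, sent.append)
# sent == ["hi ", "there"]
```

The per-keystroke version would have produced eight separate chunks for the same input. The pause threshold is the knob that slides a tool along the spectrum from Google Docs-style to Simplenote-style granularity.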

I think that extremely fine-grained communication, while cool from a technical perspective, can undermine the very purpose of communication itself, namely to understand and be understood.

If you’ve used Google Docs, Google Wave or early versions of ICQ, you might already agree. Simultaneous communication at the finest granularity of text is akin to everyone talking over each other during a conversation: distracting (at best), inefficient, and lacking in structure and pacing.

Nonetheless, it’s still a straightforward way for people to write text together, and in some ways, Google Docs is better than Simplenote. For example, when collaborating in Google Docs, each person is assigned a uniquely colored cursor so you can see who is responsible for making a given change. This attribution of content is extremely important; it gives each participant an identity.

Finally, consider apps like BlackBerry Messenger and Apple’s iMessage. They have a granularity of communication that feels natural and conversational, and they attribute content to participants by effectively using colors, portraits, and positioning of text. But they go a step further by trying to communicate the intentions of participants. When I’m in the process of typing a message, you see “Mike is typing…” on your screen, and it’s an effective yet appropriately subtle cue that I intend to say something; it’s an opening of the mouth, a motion of the hands, a sudden inhalation of breath.
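The mechanics behind such a cue are simple: broadcast a lightweight presence event when a participant transitions from idle to typing, and let it expire after a few seconds of silence. Here’s a minimal sketch of that state machine (hypothetical names, not any real app’s protocol):

```python
class TypingIndicator:
    """Tracks whether a participant 'intends to say something':
    a keystroke starts the indicator, and it expires after a
    few seconds of silence."""

    def __init__(self, name, timeout=3.0, broadcast=print):
        self.name = name
        self.timeout = timeout
        self.broadcast = broadcast  # how presence events reach others
        self.last_keystroke = None

    def on_keystroke(self, now):
        # Only announce on the idle-to-typing transition, so other
        # participants aren't spammed with an event per keystroke.
        if self.last_keystroke is None or now - self.last_keystroke > self.timeout:
            self.broadcast(f"{self.name} is typing…")
        self.last_keystroke = now

    def is_typing(self, now):
        return (self.last_keystroke is not None
                and now - self.last_keystroke <= self.timeout)
```

Note that the indicator itself is coarse-grained on purpose: it conveys intent without leaking the content of what’s being typed, which is part of why it feels subtle rather than distracting.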

It’s becoming increasingly easy for developers to add communication features like these to their apps using tools like Simperium. In the coming years, I expect we’ll see vastly improved collaborative apps whose careful attention to granularity, attribution and intentionality will result in more visceral experiences. We’ll see collaborative apps that feel more collaborative, empowering their users to achieve heightened, synergistic states of flow.

Files Are Like Fax Machines

Like fax machines, I think files are going to be around for a long time. But a growing number of people aren’t going to enjoy using them.

I first made this comparison in front of fellow startup companies at a Y Combinator event in summer 2010. Feeling underprepared in a room full of high achievers who had been told not to prepare, but had anyway, I summarized our new company Simperium in sixty seconds of train-of-thought pitch. Out came the comparison of files to fax machines.

It’s a comparison worth revisiting. Fred Wilson’s recent declaration that There Will Be No Files In The Cloud is a good place to start:

This is why I love Google Docs so much. I just create a document and email a link. Nobody downloads anything. There are no attachments in the email. Just a link. Just like the web, following links, getting shit done. I love it.

I agree, and others have expressed a similarly pessimistic outlook for files. We’re betting our business on this idea. The greatest strength of our app Simplenote is transparent, file-less syncing, and Simperium makes it easy for other developers to accomplish the same with their own apps.

I believe that in recent years, the frontend development of apps and devices has begun to outpace the backend progress of databases and networking. We’ve got these wonderful, intimate user experiences backed by comparatively clunky storage and transfer systems. The backend needs to catch up to the front; it’s dragging behind, necessarily attached, but weighty and unwieldy.

Only recently has a complete vision for ubiquitous apps begun to coalesce. It’s a holistic vision that includes frontend, backend, and hardware developments.

It’s not a crystal clear vision, though. We know the goal is some kind of mobile-friendly, multi-user, multi-app system for the storage and transfer of data across multiple devices. But, for example, Apple’s iCloud is a vendor-centric solution that preserves support for documents as files, whereas Simperium potentially supports any vendor’s devices and is a more radical departure from files. That each solution will also differ in its storage mechanisms, conflict resolution, account system, social features, and ease of use is apparent, and important tradeoffs among these differences will abound.

Fred closes with:

And how do you elegantly morph from a file centric model to a document centric model? It won’t be easy, I’m sure of that.

That’s the crux of the problem, with elegance being a great yardstick for any solution. At Simperium, we certainly have our take on it, but there’s a surge of other activity in this space as well.

The exact fate of files in the coming decades is uncertain, but I predict we’ll hear a growing number of people complaining about them as they do about fax machines today. More likely, rather than complaining about files themselves, they’ll complain about apps that still rely on files in the same way we complain about organizations that still rely on fax machines.