I believe that schools and districts waste a lot of time and money on initiatives that never go anywhere.

Perhaps it happens when a district spends over a billion dollars on new iPads and then changes its mind. Perhaps it’s less publicized: teachers are sent to a conference, students get expensive software, or policies change district-wide. Oftentimes it begins with high hopes… but nothing really seems to change, and after a while people stop talking about it or doing anything with it.

Thinking about how much time and money get spent on the latest shiny gadget or hip training that goes nowhere can be mind-blowing. Yet this pattern keeps happening with no end in sight. I don’t think it has to be this way, though, and I think that applying some business principles to education may be a start.


The Lean Startup
Imagine that you have an idea for a new phone app. You think it could be a game changer, but you’re not sure; everyone hopes that their new app will be a game changer. What should you do? Should you spend $50,000+ and build your dream app? Or is there a way to get a better sense of whether your idea will even work before you jump right in?

I’m guessing that many of you would want to figure out if it would work before you jump in. However, change the context to being a new app for students at a school district, and it seems like too many districts would just cut a check and maybe later figure out if it made a difference.

This is really scary to me, as it wastes huge amounts of money and leads to burned-out teachers. So, let’s talk about two ideas I learned from the book The Lean Startup that have applications in education.


Minimum Viable Product
Before schools and districts implement a big initiative, they should think about the assumptions they’re making that have to be true for the initiative to work. For example, with the Los Angeles USD iPad fiasco, I’m sure that the idea of every student having a device sounded amazing, but for it to work correctly, there were many assumptions that would have to prove true, including:

  • Infrastructure like WiFi would have to be installed and working in schools
  • The bandwidth would have to be high enough to support all the devices being used at the same time
  • Teachers would have to find the iPads helpful, or they wouldn’t use them
  • Teachers would have to be trained on how to use them with their students
  • The iPads would need to be highly reliable, or they would require never-ending maintenance
  • Students would have to keep them in good condition so they don’t break
  • There would have to be apps that could do what teachers wanted
  • There would have to be money to buy those apps
  • The devices would need a lifespan long enough to be worthwhile

I could go on and on, but the point is that if even one of those assumptions turned out not to be true, the whole project would fail. What good are iPads you can’t connect to the internet? What good are iPads that no teachers want to use?

The process of testing assumptions is absolutely critical, and in the Lean Startup model, the idea of a minimum viable product (MVP) allows you to test assumptions before moving on. Here’s a real example of it from business.

Zappos founder Nick Swinmurn wanted to test the hypothesis that customers were ready and willing to buy shoes online. Instead of building a website and a large database of footwear, Swinmurn approached local shoe stores, took pictures of their inventory, posted the pictures online, bought the shoes from the stores at full price after he’d made a sale, and then shipped them directly to customers. Swinmurn deduced that customer demand was present, and Zappos would eventually grow into a billion dollar business based on the model of selling shoes online.

The idea that 20 years ago people were skeptical about selling shoes online may be hard to understand today, but the reality is that it would have been very risky to build a whole company out of an idea that relied on many assumptions. Instead, Swinmurn minimized his risk by testing his assumptions in the “minimum viable” way.

Sure, it was very inefficient to take pictures of shoes and then ship them directly to customers. However, by doing this, he could find out whether his assumptions were true. If they weren’t, then he could figure it out before he wasted lots of time and money.

How do we do this in education? Can we pilot a program at a grade level or school to see how it works before we scale it far and wide? What valuable issues or concerns might we uncover? What we have to realize is that this is a process.

Going back to the LAUSD example, for me to commit to such an enormous expenditure, virtually every assumption would have to be tested. Maybe I’d begin by rolling out iPads at a single school. How does the WiFi hold up? How much training do teachers need? How often are the iPads utilized? What apps are purchased but not used? What evidence shows that the iPads make a difference?

Once those issues were resolved, then I’d scale it up and see what new issues come up. I’d repeat this until it was clear it was set up for success. While I’ve never been an administrator, it’s hard for me to understand how some of these decisions are allowed to happen.


Split Testing
Here’s a tough question to ask: how do we know that we’re making the right decisions before we commit to them? How do we know whether the decision we’re making is better than the alternatives?

This is something that happens in the online world all the time without you ever noticing. For example, let’s say that a website wants people to click on a button. They have some ideas about what the button could say (maybe “BUY NOW!” or “GET YOURS!”) and what color the button should be (red or blue), but they’re not really sure which is best. This is where split testing comes in.

They will set up their website to randomly pick one of the two phrases and one of the two colors. Then they will sit back and collect data on which phrase and color lead to the most clicks. If one combination of color and text results in more clicks, then they will make that phrase and color the permanent version of the button.

It doesn’t have to end there. They could even test the button’s shape, placement, or any other factor. The process of split testing provides data that people can use to turn their hunches into something quantifiable. So how do we do this in education?
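To make the mechanics concrete, here is a minimal sketch of a split test in Python. The phrases, colors, and click rates are all invented for illustration; a real site would record actual visitor behavior instead of simulating it.

```python
import random
from collections import defaultdict

# Hypothetical variants a site might test (made up for illustration)
phrases = ["BUY NOW!", "GET YOURS!"]
colors = ["red", "blue"]

views = defaultdict(int)   # how many visitors saw each combination
clicks = defaultdict(int)  # how many of them clicked

def serve_button():
    """Randomly assign a visitor one phrase/color combination."""
    variant = (random.choice(phrases), random.choice(colors))
    views[variant] += 1
    return variant

# Simulate 10,000 visitors; pretend one combination truly converts better.
random.seed(42)
true_rates = {("BUY NOW!", "red"): 0.05, ("BUY NOW!", "blue"): 0.04,
              ("GET YOURS!", "red"): 0.03, ("GET YOURS!", "blue"): 0.03}
for _ in range(10_000):
    v = serve_button()
    if random.random() < true_rates[v]:  # did this visitor click?
        clicks[v] += 1

# Pick the combination with the best click-through rate
best = max(views, key=lambda v: clicks[v] / views[v])
print(best, round(clicks[best] / views[best], 3))
```

The key design choice is the random assignment: because visitors are split by chance rather than by who they are, any difference in click-through rate can be attributed to the button itself.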

I saw it done in my own school district when piloting a new textbook. Teachers who were unsure about which textbook was better took their two favorite books and tested them out for a nine-week period. Half the teachers got book A while half got book B. After the nine weeks, they switched and got an additional nine weeks with the other book. After each nine-week period, we collected data to quantify what teachers learned. This helped teachers make a more informed decision.
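The analysis of a crossover pilot like this can be very simple. A sketch, with entirely invented scores: since every group of teachers used both books, differences between the groups themselves wash out, and you can compare average outcomes per book directly.

```python
from statistics import mean

# Hypothetical scores pooled across both nine-week periods
# (numbers invented for illustration only)
scores = {
    "Book A": [78, 81, 75, 80, 79, 83],
    "Book B": [74, 77, 72, 76, 75, 78],
}

# Rank the books by average outcome, best first
for book, s in sorted(scores.items(), key=lambda kv: -mean(kv[1])):
    print(f"{book}: mean score {mean(s):.1f}")
```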

Continuing with the iPad example, perhaps Los Angeles USD could have given some teachers online training and others in-person training and examined the results. Maybe some schools could have had one app while others had another to see which was more effective. I’m not saying that I have a perfect way of setting this up, but by testing our options, we’re more likely to make a decision we are happy with.


Unfortunately, I believe that people make worse decisions when they’re spending other people’s money. I think that we need to be more practical about the big choices we make and look for ways to better ensure success ahead of time. My hope is that this blog post provides two options to consider if you find yourself in a similar position.

I’d like your pushback though. What am I naive about? What am I missing? What other ways could these techniques be applied? Please let me know in the comments.


  1. I agree with some of your points here. I also think there’s always an assumption that if they (the districts) don’t provide training, educators won’t learn. Many of us know things and have done our own research and testing, or used social media to learn. I also believe the best PD is driven by PLCs that are willing and provided the time to collaborate in a trusting, healthy, and sustainable way for growth. If we don’t invest in educators, we can continue to get the same result we’ve always gotten, which is more of the same repackaged in a bow and a nice price tag. Providing coaching or mentoring is a great way to invest in educators, as is inviting them to share feedback on what they need just in time.

    • I absolutely get this. That’s sort of the whole idea behind #observeme. It’s not my intention to say that teachers should only learn from experts.
