Users of my iOS Game Teach Me a Lesson MIT Didn’t (2011) (aaroniba.net)
130 points by sherm8n on March 28, 2013 | hide | past | favorite | 58 comments


I think it's interesting that people who make games often don't think of their players as, essentially, a giant parallel computer. You're sending a puzzle off to hundreds, thousands, or millions of 'cores' that will, through a combination of brute force, intelligence, and a large but complicated network of communication, come to an optimal solution in a time span you as an individual probably can't even comprehend. They will not only solve your puzzle, they will solve for how to solve it, and then spread that knowledge.

Thus I found it hilarious when Blizzard thought it would take people months to completely finish Diablo 3. Real time? A week or two.


It's human nature. When your goal is to challenge or stump someone you think there's no way they can be smarter than you. You designed it so you should know it best.


Schneier's Law: "Any person can invent a security system so clever that he or she can't imagine a way of breaking it."


Took me two months to get to 60


I think this illustrates one of the biggest reasons many of the most popular mobile games have variable solutions and some sort of "star" rating for the solution. A lot of these games work on the idea that people ripping through it achieve the 1 star solution and go to puzzle 2. It has a "secondary" or "advanced" mode built right in for the person who wants to get 3 stars on each.

Common things to grade with stars:

Speed

Fewest 'widgets' to solve

Total 'impact' (think damage à la Angry Birds)

etc.

We can also combine these sorts of things in the rating system. There is a great game, "Cargo-Bot", that is essentially a programming game. The rating system there only takes into account the number of instructions used, but it could also factor in the run time of the "program" produced.
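A combined rating along those lines could be sketched like so (a hypothetical scheme, not Cargo-Bot's actual one; the weights, thresholds, and designer "par" values are all made up):

```python
def star_rating(instructions, runtime, par_instructions, par_runtime):
    """Hypothetical 1-3 star rating combining program length and
    execution time, each measured against a designer 'par' value."""
    # Normalize each metric so 1.0 means "matched par or better".
    length_score = min(par_instructions / instructions, 1.0)
    speed_score = min(par_runtime / runtime, 1.0)
    combined = 0.6 * length_score + 0.4 * speed_score  # weights are arbitrary
    if combined >= 0.95:
        return 3
    if combined >= 0.75:
        return 2
    return 1
```

The point is only that once you score along several axes, the "advanced mode" falls out for free: a 1-star solve and a 3-star solve are different puzzles.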

I guess my point is, designing games with singular solutions is probably cheating the gamer out of some additional fun (go back and finish on hard) and cheating the designer out of exploring some interesting concepts around "scoring".


The completionist in me doesn't like the star system. I feel like I'm being punished for being "dumb". (I stopped playing "Feed Me Oil" because I couldn't figure out the optimal solution to a level.)


A game based on high scores or fastest times has much more replayability (and incentive to replay) than a puzzle game with just one solution.


This reminds me a lot of the section on constraint-satisfaction problems from an AI class I took a while back. You can solve constraint-satisfaction problems using backtracking, but that can take exponential time in the worst case. However, for certain problems (like the well-known n-queens problem), a far more effective approach is to start with a random configuration and jiggle it around.
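That "jiggle it around" strategy is essentially the min-conflicts heuristic. A rough Python sketch for n-queens (an illustration only, not code from the article or any particular course):

```python
import random

def min_conflicts_queens(n, max_steps=100_000):
    """Solve n-queens by iterative repair: start from a random
    configuration and keep moving the most-conflicted queens."""
    # cols[r] = column of the queen in row r
    cols = [random.randrange(n) for _ in range(n)]

    def conflicts(row, col):
        # Count queens attacking square (row, col), ignoring row's own queen.
        return sum(
            1
            for r in range(n)
            if r != row
            and (cols[r] == col or abs(cols[r] - col) == abs(r - row))
        )

    for _ in range(max_steps):
        conflicted = [r for r in range(n) if conflicts(r, cols[r]) > 0]
        if not conflicted:
            return cols  # no queen is attacked: solved
        row = random.choice(conflicted)
        # Move that queen to a minimum-conflict column (ties broken randomly).
        scores = [conflicts(row, c) for c in range(n)]
        best = min(scores)
        cols[row] = random.choice([c for c in range(n) if scores[c] == best])
    return None  # give up, like pressing the game's "clear" button
```

For large n this solves instances that make naive backtracking hopeless, which is exactly the players' discovery: repair beats deduction here.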

This is the same thing but with people instead of CSP-solving algorithms.


I wanted to learn about constraint-satisfaction algorithms, so I implemented dancing links in javascript. If you're interested, you can see it at http://dancing-links.herokuapp.com/ and the source is on github https://github.com/kybernetikos/dancing-links


Omg, do you remember solving shortest path graphs by hand?


"No, they had discovered a superior strategy."

I had a similar experience when I published an iOS game in 2011. I was still in high school, so publishing a game available on the App Store was quite a big deal to the other people at school. I became known by a bunch of people for making this game. One day, someone I'd never spoken to came up to me and showed me 'a bug'. It turned out this guy had developed an optimal strategy that would win the game every single time, with an almost perfect score. I totally failed to anticipate this way of playing the game. In retrospect, it was quite naive to assume that players would play how I would. When I played my own game, I had the technical knowledge of how the game functioned - and it seems as if this context clouded my vision of more creative ways to play this game. Players didn't think about collision detection, trajectories, the scene graph, parallax scrolling and z-indexed sprites. Instead, they saw the game as a world to interact with, and having zero knowledge of what was going on behind the scenes afforded them a lot of room for creative exploration (and exploitation).


"When I played my own game, I had the technical knowledge of how the game functioned"

I have this same problem when I test my own software.


This is why I tend to be a strong advocate of a "test group" for any software shop: basically a group of people running a combination of manual and automated tests against the software as a black box.

Unit tests never catch everything. The problem is that they can only cover what the coder thought to test. There is value in testing all of those things the coders never thought of.


Are you willing to give details on that strategy?


I hope this doesn't come across as snarky, but this is very apropos for me.

I'm currently taking 6.856 at MIT, Randomized Algorithms, and one of the core lessons from the class is that random guessing is frequently a superior strategy. You can have Las Vegas algorithms which find a correct solution and probably execute quickly, or Monte Carlo algorithms which execute quickly and probably find a correct solution. The users are employing a Las Vegas strategy here.

So, perhaps he just didn't take the right classes at MIT :-).

For a neat example, check out finding a min-cut of a graph in a randomized fashion: http://en.wikipedia.org/wiki/Karger%27s_algorithm
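A rough sketch of Karger's contraction algorithm, using a small union-find to track merged vertices (illustrative only; the trial count is arbitrary, and the real Karger-Stein refinement is cleverer about how it repeats trials):

```python
import random

def karger_min_cut(edges, trials=200):
    """Monte Carlo min-cut (Karger): repeatedly contract random edges
    until two super-nodes remain; keep the smallest cut seen."""
    def one_trial():
        parent = {}

        def find(v):
            parent.setdefault(v, v)
            while parent[v] != v:
                parent[v] = parent[parent[v]]  # path halving
                v = parent[v]
            return v

        remaining = len({v for e in edges for v in e})
        pool = list(edges)
        random.shuffle(pool)
        for u, v in pool:
            if remaining == 2:
                break
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv  # contract edge (u, v) into one super-node
                remaining -= 1
        # The cut is the set of edges crossing the two super-nodes.
        return sum(1 for u, v in edges if find(u) != find(v))

    return min(one_trial() for _ in range(trials))
```

Each trial is fast and only *probably* right (Monte Carlo), but repeating it drives the failure probability down exponentially.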


"I thought there was a mistake because some users were solving puzzles faster than I was! Were they cheating?"

This made me chuckle. I think this says more about the MIT mindset than about the users. Also, the total surprise at the guessing strategy is characteristic of a mathematician.

Nevertheless a fun little article to read.


This is something that I've noticed myself. I find that it's much easier to write a paper for school by just writing something, no matter how simple, and using it as a framework for the next iterations. It makes it much easier for me to think of the paper one piece at a time. I've started trying to apply this strategy more rigorously lately. For a four page paper I had to write yesterday, I started by just writing out the structure of the paper:

"Start with a relevant 'hook.' Now lay down your thesis statement. I got to my thesis by starting here. Then I went and saw this. That turned into the other, and here you have the thesis."

Then I took each bit and replaced it with what I actually wanted to say there and continued to iterate from there. In this case, my professor has a very specific structure that he looks for in our papers so putting that first iteration together didn't take much effort.

Similarly, when I'm working on a programming project, I find it's much easier to start with a working iteration, even by stretching the definition of "working," and improving it until I'm happy with it.

It is, of course, important to look at the paper (or other work) holistically to make sure that it all flows together once the pieces are in place.


> It turns out to be easier and faster to iterate from an existing but wrong solution, than to deduce a correct solution from scratch. Even if you have to occasionally press the “clear” button to start over.

it's probably easier when, as the author designed his game,

> each puzzle has only one possible solution-path given the starting conditions

the risk of iteration is having gone down the "wrong" branch at some point and being stuck in a local maximum: it works, but it does not work as well as it could. This may or may not matter given the constraints of the situation (and in most situations, it will probably work well: the goal is to get there rather than to exhaustively maximize "brownie points")


> "It turns out to be easier and faster to iterate from an existing but wrong solution, than to deduce a correct solution from scratch. "

It is if you have no additional cost in building/destroying something, as in this game. If you had a pot of money and every piece you placed/destroyed cost you money (or even more than a fraction of a second in time) - as is often the case in real life - then the picture could well be very different.


That's true if you are building something tangible. You can't simply start building something (like a bridge) and fix the mistakes you make while you are making them.

But for programming? You don't have those kinds of costs.


>But for programming? You don't have those kinds of costs.

Where did you get this strange idea from? You have customers, backwards compatibility, programmers time, and tons of other "cost" stuff.


> You have customers, backwards compatibility, programmers time, and tons of other "cost" stuff.

None of which matter. The idea is that if you are working on a component for version 2.0, even with all these "costs", it's better to iterate quickly. The only thing that comes close to mattering is programmers' time, but even that is addressed in the article: it's simply faster to fail fast and iterate until you get the correct solution than to spend all that time planning.


I think the truth is somewhere in the middle. Even fast iterations must be planned and thought through. [1]

[1] http://www.infoq.com/interviews/agile-software-architecture-...


You don't have to ship until you have actually solved whatever problem you are working on, though. It still might be faster to get to the point of shipping code by this method, in some instances.


You do. The scale of those costs, and whether those costs outweigh the cost of planning everything before you start, may vary depending on a variety of things, such as the structure of your development teams. But there's always a cost.

At the simplest level, 5 days of coding something that you realise doesn't do what you want is 5 days you'll never get back.

At places like mine, where you've got multiple third parties that you're interfacing with, sending them off to develop something before being 100% certain that they are doing the right thing could - and reasonably frequently does - have many thousands of pounds of cost implications, and many months of delay, when you realise that a particular call needs to be real time instead of batch, for example.


>That's true if you are building something tangible.

In product (prototype) development, there's a maxim that's followed by a lot of people "Plan to throw the first one away" or something to that effect. Sure, it may not work for every case. But, for lots of projects, it's useful to accept the notion that you will learn enough building the first one to get more value out of throwing it away than by keeping it. There are just too many things that can't be seen without a (probably) unattainable level of diligence.


I'm pretty sure MIT teaches some sort of AI 101.* CSP and local search (and genetic algorithms) are what you're looking for. Also make sure to check out the paper by Wolpert and Macready on "no free lunch" in search and optimization for a perspective on there being a "best way".

*I also think it's a non-trivial connection to make that these types of problems are covered in AI or at least that's the feedback I get from students. "Oh didn't expect this search stuff thought it was about brainnnnnnz"

Pretty cool read though, thx for posting.


I think a lot of that classic work on search, heuristics, and problem-solving has been overshadowed lately by the emphasis on more 'modern' statistical machine learning. There's definitely value to both perspectives.


It's funny how these things work. I'm the developer of an iOS snake game called Space Snake (http://www.shiftingmind.com/spacesnake/). When I launched the game I quickly got to the top of the high scores, since I had been playing it myself for quite some time during development. I was very happy when someone finally beat my high score. It meant there was someone out there who liked my game enough to beat me at it.


Well, viva human-brain versions of Monte Carlo Tree Search algorithms. Your game was at least solvable deductively in practice. If you were talking about solving something like the game of Go, i.e. high branching factor and a deep tree, then Monte Carlo algorithms would be your first choice instead of trying to do deductive reasoning. The Upper Confidence bounds applied to Trees (UCT) algorithm is a good one to read about and understand if you have a spare evening.
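For the curious, the selection rule at the heart of UCT is the UCB1 score, w_i/n_i + c*sqrt(ln N / n_i): exploit moves that have won, but give rarely tried moves a bonus. A minimal sketch (the function name and exploration constant are just illustrative):

```python
import math

def ucb1_pick(wins, visits, c=math.sqrt(2)):
    """Return the index of the child maximizing the UCB1 score:
    win rate plus an exploration bonus that shrinks as a child is
    visited more. Unvisited children are always tried first."""
    total = sum(visits)

    def score(i):
        if visits[i] == 0:
            return float("inf")  # explore every move at least once
        return wins[i] / visits[i] + c * math.sqrt(math.log(total) / visits[i])

    return max(range(len(visits)), key=score)
```

MCTS runs this selection from the root down, plays a random rollout from the leaf, and backs the result up the tree; the randomness does the work that deduction can't.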


Very satisfying post.

It confirms one of my opinions about proficiency in general. Proficiency has a lot to do with pattern recognition and having access to multiple tools, each tailored to solving a particular problem. Figuring out which tool to use for a given problem is a key aspect of proficiency, but the time it takes to carefully select the proper tool can often be trumped by always choosing the same tool and succeeding or failing really fast. More often than not, the pattern matching after the first pass will be simpler because parts of the problem will already be solved.

I started thinking about it when I was a kid discussing the best football (as in soccer) players with my friends. I always favoured players who could use both feet and their head equally, able to figure out the path of least resistance every time, but one of my friends would say that the very best players would only use one foot, their best foot, and focus on their placement to maximize its use on every occasion. In other words, when the stars align they are unmatched, and their whole strategy revolves around making those stars align as often as possible, rather than on figuring out how to play from a given spot.

It made a lot of sense back then, as the one-trick-ponies could easily and consistently out-play the jack-of-all-trades if they had a whole team dedicated to serve their right foot as often as possible. I find it eerie that it still does make sense in the context of your game, and many others. When time is of the essence and wrong moves only cost the time they take to perform, you can either spend your precious time thinking of the correct play, or spend it doing something that doesn't require you to think at all, and adjust as you go using pattern recognition on smaller subsets.


Talk about having an unfair advantage. Why wouldn't you make use of your best skill and continue to exploit it?

That's why startup founders need to be experts in their domain. It takes someone new to the space too long to gain the deep understanding that becomes their unfair advantage.


Some people have the "unfair advantage" of being able to pick up a domain of expertise and excel at it really fast. As the startup grows in reach and complexity, those guys really stand out. The reason might be that they can spin every problem into proper nails for the expert to hammer.


This reminds me of the Marshmallow Challenge experiment. It turns out the best results come from the most spontaneous and fearless groups :), using the same sort of build-and-repair approach. http://marshmallowchallenge.com


Very interesting and not at all surprising. Humans are very good at developing intuition about a problem space, given that they can interactively tinker with different solutions and get immediate feedback about their validity. Part of my research relies on this premise: I try to build tools that provide the right "knobs" for someone to build intuition in some design space and take it further than what I, as the tool maker, could have originally envisioned.


This really made me reflect on how careful logical deductions affected my life. Being careful and safe wasn't as rewarding as jumping into something and learning from it.

I was inspired to write this post about my experience - https://news.ycombinator.com/item?id=5453761


It's called the "Throwaway Model". It's the second thing you learn about after the "Waterfall Model" in Software Engineering 101.

http://en.wikipedia.org/wiki/Software_prototyping#Throwaway_...

The game looks good though. I'll probably try it out.


We were supposed to learn about more than the waterfall model?

I don't think my version of that class was very useful, TBH.


This made me remember competing against my research advisor for the fastest times on the various levels of Minesweeper. The beginning was the key; after a few weeks I could tell within 1-3 clicks whether any particular run had a competitive chance. I hit the reset button a lot.


For what it's worth, the algorithm is called backtracking (http://en.wikipedia.org/wiki/Backtracking), and I'm pretty sure it's taught at most respectable CS departments in the world.
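As a minimal illustration of backtracking (using n-queens as the textbook example; not tied to any particular course): place queens row by row, and undo as soon as a partial placement conflicts.

```python
def backtrack_queens(n, cols=()):
    """Depth-first backtracking: cols holds the column chosen for
    each row placed so far; retreat whenever a placement conflicts."""
    row = len(cols)
    if row == n:
        return list(cols)  # all rows filled: a full solution
    for col in range(n):
        # Legal iff no earlier queen shares this column or a diagonal.
        if all(c != col and abs(c - col) != row - r
               for r, c in enumerate(cols)):
            solution = backtrack_queens(n, cols + (col,))
            if solution:
                return solution
    return None  # dead end: the caller tries its next column
```

This is the exhaustive, deductive counterpart to the players' repair strategy: guaranteed to find a solution if one exists, but potentially exponential in the worst case.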


The hard part here is encoding a heuristic that is capable of improving the time complexity of the naive exponential algorithm asymptotically so that it becomes useful. People are really impressive. They're capable of realizing where they broke something and seeing when continuing down a path won't lead to a solution, which lets them mentally prune the search tree dramatically. Getting a computer to do the same...

For fun: if you can encode a human's cognition and spatial awareness for things like this, you can probably do the same for NLP, then write a tl;dr app that massively outperforms summly and sell it to Google for more than $30M. :)


It's not surprising that this is the better solution. Humans do things the same way in the real world: they repeat something until they get it right, and that's the learning process. The same goes for neural nets. You don't get anything perfect on the first try.


> It turns out to be easier and faster to iterate from an existing but wrong solution, than to deduce a correct solution from scratch.

Indeed, many things in real life follow this pattern.


Hey, you have discovered that, to make a game, you have to give it to other people and see how they play...

Well done!

If anyone is interested in game design, I'd recommend this book [1]. Yep, the most important part is to make something, let players fiddle around with it, and iterate to the next version.

[1] http://artofgamedesign.com/


Erm... isn't this actually called "trial and error", a fairly common method? No?


Reminds me of the first time I made something with lisp.


MIT didn't teach you about hill climbing algorithms?


And that's the difference at MIT between a math degree and a CS degree, methinks.

Though, I don't think a hill-climbing algorithm is going to be too helpful when there's only one solution.


Yes, that's a good point. Aaron mentioned some users would restart "if things ever look hopelessly messed up". A better analogy might be simulated annealing, where solvers randomly switch to another configuration.
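A generic simulated-annealing loop, for comparison (a sketch only; the cooling schedule and constants are arbitrary):

```python
import math
import random

def anneal(cost, neighbor, state, t0=1.0, cooling=0.995, steps=10_000):
    """Simulated annealing: always accept improvements, and accept a
    worse neighbor with probability exp(-delta/T), so the search can
    escape local optima while the temperature T is still high."""
    best = state
    t = t0
    for _ in range(steps):
        cand = neighbor(state)
        delta = cost(cand) - cost(state)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            state = cand
            if cost(state) < cost(best):
                best = state
        t *= cooling  # cool down: fewer uphill moves over time
    return best
```

The players' occasional "clear" press maps onto the high-temperature phase: accept a drastic jump away from a hopeless configuration rather than keep polishing it.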


Intelligence is knowing an algorithm. Wisdom is knowing when to apply it.


Huuuuuuge jump up to his denouement.


Elaborate?


tl;dr Dynamic programming is often better than greedy.


More like hill climbing is often better than depth-first search.


The HN post of the day for me (GMT+1), thanks. These posts are the reason why I like HN.


What are you talking about? I don't get it... :)


(just to clarify, I was confused with the (GMT+1) part)


Linear variation method in QM + an SCF cycle ;)



