Xbox doesn't work outside or in the classroom. Now, I wouldn't advocate playing computer games outside -- but this might be a nice gimmick for children's birthdays celebrated in the garden.
A friend of mine, a primary school teacher, uses these games in the classroom. Kids here all have a laptop, so when they get restless, it's a neat way for them to take a break & move a bit. Apparently, this works well for kids with hyperactive tendencies. Kinect would be too complicated in that setting, as you would need to purchase & maintain 20+ Kinects.
If you want to run the code manually, install it in a virtual environment. Then you can simply run the commands `ski-leu` or `pop-that-balloon`.
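For example (a minimal sketch; it assumes a local checkout of the repository and that the games are packaged with console entry points):

```
python3 -m venv venv
source venv/bin/activate
pip install .        # run from the repository checkout
ski-leu              # or: pop-that-balloon
```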
Btw., I use pop-that-balloon for basketball dribbling training, and a friend uses it for boxing. Have fun & please let me know if you use it for yet another sport :).
Just in case you are looking for an alternative approach: if you write contracts in your code, you might also consider crosshair [1] or icontract-hypothesis [2]. If your function/method does not need any pre-conditions, then the type annotations can be used directly.
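To illustrate, a minimal sketch of what crosshair can check via PEP 316-style docstring contracts (the function is made up; crosshair also understands icontract contracts and plain asserts):

```python
from typing import List

def average(numbers: List[float]) -> float:
    """
    pre: len(numbers) > 0
    post: min(numbers) <= __return__ <= max(numbers)
    """
    return sum(numbers) / len(numbers)
```

Running `crosshair check` on the file then searches for inputs that satisfy the pre-condition but violate the post-condition.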
Coding with contracts has always been interesting to me, but I haven't had the time/opportunity to try it seriously in a project. I assume you've had experience with it: how much productivity/code maintainability do you gain compared to not using it (or using only type annotations)?
In my anecdotal experience, it takes very little time for juniors to pick up adding contracts to their code. You need to grasp implication, equivalence, exclusive or, and get used to declarative code, but then it's easy. (I often compare it to SQL.)
I personally find contracts super useful, as I can express a lot of relationships in the code trivially and have them verified automatically. For example, when this input is None, then that output needs to be positive. Or, if you delete an item, it should no longer exist in this and that registry, and some related items should be gone as well.
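A minimal sketch of that first "implication" example with icontract (the function `scale` is made up for illustration):

```python
from typing import Optional

import icontract

# Implication "x is None => result > 0", written as "not A or B".
@icontract.ensure(lambda x, result: x is not None or result > 0)
def scale(x: Optional[int] = None) -> int:
    return 1 if x is None else x * 2
```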
My email is in the commits of the repository, feel free to contact me and we can have a chat if you are interested in a larger picture and more details.
P.S. I think the important bit is not to be religious about contracts and tests. Sometimes it's easy to write the contracts and have the function automatically tested, sometimes unit tests are clearer.
I tend to limit myself to simple contracts and write table tests for the complex cases, to reap the benefits of both approaches.
Sorry, I did not express myself clearly. For certain functions you can express all the properties in contracts and have them automatically tested.
For other functions, you write some obvious contracts (so that they are also checked during integration tests or end-to-end tests). But since writing all the contracts would be too tedious or unmaintainable, you additionally test these functions with, say, table-driven tests, where you specifically pick data points for which you could not write the pre-conditions, or where you check the post-conditions manually given the input data. For example, sometimes it is easier to generate the input, manually inspect the results, and write the expected results into the test table.
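A minimal sketch of such a table-driven test with pytest (the function `parse_size` and the expected values are made up; in practice the expected column is often generated once and inspected by hand):

```python
import pytest

def parse_size(text: str) -> int:
    """Parse sizes such as '2k' or '3M' into bytes (hypothetical example)."""
    units = {"k": 1024, "M": 1024 ** 2}
    if text and text[-1] in units:
        return int(text[:-1]) * units[text[-1]]
    return int(text)

# Expected values were computed once, inspected manually, and frozen here.
@pytest.mark.parametrize(
    "text, expected",
    [
        ("1", 1),
        ("2k", 2048),
        ("3M", 3145728),
    ],
)
def test_parse_size(text: str, expected: int) -> None:
    assert parse_size(text) == expected
```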
> [...] turn contracts into tests [...]
icontract-hypothesis allows you to ghostwrite Hypothesis strategies, which you can further refine yourself.
You can already do that with a family of predicates if you write pre-conditions in your Python code (see my previous comment [1]). There is an ongoing discussion about how to bring this into Hypothesis itself (see the issue [2]).
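To illustrate (a minimal sketch going from memory of the icontract-hypothesis readme; `reciprocal` is a made-up function):

```python
import icontract
import icontract_hypothesis

@icontract.require(lambda n: n > 0)
def reciprocal(n: int) -> float:
    return 1.0 / n

# Infer a Hypothesis strategy from the type annotation and the pre-condition...
print(icontract_hypothesis.infer_strategy(reciprocal))

# ... or directly run the function against generated valid inputs.
icontract_hypothesis.test_with_inferred_strategy(reciprocal)
```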
Use contracts and property-based testing. See the Hypothesis library in Python (and similar libraries in other languages). I wrote a library for contracts in Python (http://github.com/Parquery/icontract; see its readme for further references to other libraries).
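For example, a minimal property-based test with Hypothesis (the property here, idempotence of sorting, is just an illustration):

```python
from hypothesis import given, strategies as st

@given(st.lists(st.integers()))
def test_sorting_is_idempotent(xs):
    once = sorted(xs)
    assert sorted(once) == once
```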
I find the combination of unit tests, component tests and contracts the best. When enabled in component and end-to-end tests, contracts let you test with tons of data you would never have generated manually.
Contracts often obviate the need to write extensive unit tests. Since they are part of the interface, we thus avoid rewriting many unit tests when the interfaces change during a refactoring.
I have always wondered where this multiplicative factor of "several times" comes from. In my experience, writing correct software was only marginally slower than writing sloppy software, as long as most of the thinking was done with pen & paper. Would you mind elaborating a bit more?
I'm guessing: because of cheap labor. Writing correct and/or fast software involves quite a bit of "don't do stupid things" across the whole development process. Doing these things right doesn't add much time to development, but one first has to learn how to do them right. A fresh, inexperienced developer, or the "one year of experience repeated 10 times" person who has only worked on "move fast and break things" projects, won't have this knowledge, but will be cheaper to hire.
Not OP/GP, but I think it's mostly about the definition of correct.
Does it correctly handle every possible sequence of inputs? For the vast majority of software in use today, the answer is "no". The follow-up question is "does it matter?", and (luckily or unluckily) for the vast majority of software in the vast majority of use cases, the answer is "no" as well.
>>> The failure occurred only when a particular nonstandard sequence of keystrokes was entered on the VT-100 terminal which controlled the PDP-11 computer: an "X" to (erroneously) select 25 MeV photon mode followed by "cursor up", "E" to (correctly) select 25 MeV Electron mode, then "Enter", all within eight seconds
The software was not obviously correct or incorrect; in fact, it had been acceptable in an earlier model (which had hardware protections missing from the newer one). Reaching the incorrect state required the race described above, which did, in fact, happen in practice a handful of times.
You can work very hard to formally prove your implementation correct, only to find out that the compiler has a bug that breaks your software. Or the CPU does. Or all the designs are fine, but there's a bit flip due to electromigration or cosmic rays. Many people consider this "force majeure", an act of god one cannot anticipate, but cosmic rays are in fact an expected -- and hard to avoid -- input to many systems.
You are in control of a logical model which you can, with extra work, get (provably) correct. But that IS, from experience, 10x to 100x more expensive, and unless you also go for 10x-100x more expensive hardware, it reduces and moves the sloppiness around rather than eliminating it.