The video below, courtesy of the Brick Experiment Channel, caught my eye as a nifty example of iterative and accumulative design problem solving. This is not constrained to the tangible world. In systems design, a problem with a basic starting point, answered with a simple and effective solution, often spawns a series of step-up demands and higher-complexity problems over time.
There can be a few reasons for this, such as:
By design – an iterative/incremental approach to problem solving beginning with a ‘minimum viable product’ (MVP) which is tested, released, and then informs subsequent design in a conscious, methodical and step-wise fashion
By evolving requirement – the subsequent problems did not present themselves until the initial problem had been resolved due to evolution of the system itself
By naivety of requirement – the clients of the solution did not realise they had additional problems to solve, or believed that the initial problem was the only one to be solved
By low capability awareness – the clients of the solution were not aware that the problem was able to be solved and therefore had not applied resources to solve it
By status quo bias – the clients of the solution were emotionally anchored on the current system, despite inadequacies, and so avoided attempts to solve the problem
“Fate of the Nameless Child” began as a longer piece of work several years ago; a multi-chapter, multi-perspective narrative and commentary on how people fear the unknown ‘thing’, the unknown ‘one’. Themes of attempting to understand, contain, and destroy are explored more deeply in the original.
I shopped it around to several publishing houses to no avail, though I received some lovely feedback on the tone of voice and the themes of the longer-form piece. I am sure I will publish it here one day so you can appreciate the full story and more of the characters who only get a passing mention in this version.
Who is the nameless child, and what is their fate?
Alistair, October 2020
p.s. Read my previous piece “Fallen Angel” also published by Antipodean SF.
I came across this article by Peter High [Twitter], published in Forbes in 2015. It remains topical, particularly when discussing governance, accountability and responsibility for information systems and technology in any organisation.
This is the twelfth in a series of blog posts I’ve been working on during COVID isolation. It started with the idea of refreshing my systems design and software engineering skills, and grew in the making of it.
Get in touch on Twitter at @mr_al if you’d like to chat.
A Brief History of Randomness
As mentioned earlier, there were a few thoughts I had early in the game design and coding process which may explain the differences in results across parallel runs of the same algorithm in different languages. In the black box of each language’s include files and libraries lies a variety of methods for generating pseudo-random numbers. There are several, often quite emotive, debates amongst mathematicians and software engineers as to how to generate a truly ‘random’ number.
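The ‘pseudo’ part is easy to demonstrate. A quick Python sketch: two generators fed the same seed produce identical “random” sequences, because the output is entirely deterministic once the seed is fixed.

```python
import random

# Two generators given the same seed walk through the same internal states,
# so they produce the same "random" sequence -- deterministic, hence only
# *pseudo*-random.
a = random.Random(42)
b = random.Random(42)

seq_a = [a.randint(1, 6) for _ in range(5)]
seq_b = [b.randint(1, 6) for _ in range(5)]

print(seq_a == seq_b)  # → True
```

This is exactly why the seed source (clock, OS entropy, lava lamps) matters so much: the generator itself never surprises you.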
Famously, the engineers at Cloudflare came up with a nifty solution derived from a patented idea by Silicon Graphics. This involved a physical implementation of a set of lava lamps, generating non-digital entropy which is then used to seed random number generation. My mind is drawn back to Douglas Adams’ description of the Infinite Improbability Drive from the Hitchhiker’s novels, which used a strong Brownian motion producer (say, a nice hot cup of tea) as an alternative to a wall of lava lamps.
However, for the average software engineer, it is a black box. So, my next thought was that something had slipped into the code base – particularly when I was wrestling through the spacing and grammar markers that plagued me in Python – which had perverted the logic. Could an errant semi-colon or un-closed conditional mean that the code for Python and Ruby was ‘behaving’ differently to the earlier versions?
Time for a visual grep. You know what a ‘grep’ is, right? It is a ye olde and popular Unix/Linux command line utility for searching for pattern strings. Highly efficient and rich with options that have been tacked onto it over time. However, it’s only as good as the definition of what you’re looking for, and the structure of what you’re looking “in”.
When it comes to comparing code of different languages, things are a bit trickier. Yes, there are advanced search and replace style converters out there (some of which turn out some truly barbaric chunks of code, as witnessed during far-too-many Y2K code migrations). But sometimes, it’s easier to do a ‘visual grep’ – namely, print out both sets of code, and sit down with a glass of something Scottish and highly alcoholic, and go at it line by line, side by side.
Which I did.
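(For the curious: a rough programmatic cousin of the ‘visual grep’ is a unified diff. The snippets below are made-up stand-ins, not the real Game code, but Python’s standard difflib shows the shape of the exercise.)

```python
import difflib

# Hypothetical stand-ins for the C++ and Python versions of the game loop;
# a unified diff marks removed lines with '-' and added lines with '+'.
cpp_lines = ["while (moves < limit) {", "    roll = rand() % 6 + 1;", "}"]
py_lines = ["while moves < limit:", "    roll = random.randint(1, 6)"]

diff = list(difflib.unified_diff(cpp_lines, py_lines, lineterm=""))
for line in diff:
    print(line)
```

Of course, across two different languages almost every line differs, which is why the glass of something Scottish and the side-by-side printouts remain the better tool.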
I chose the C++ version and the Python version of the Game class, mainly as it is the most complex and also because it encapsulates almost all of the game flow logic.
A few weeks had passed between producing these blocks of code, and my struggles with debugging the Python version meant I was very familiar with it.
What I was looking for was obvious typos, omissions or lines of code which varied markedly from method to method. My hope was that, as the old techo joke goes, the problem “existed between the keyboard and the chair”, and that a slip of the wrist had resulted in a change in the logic flow which was causing the Python code to run very differently. (The Ruby code was a transcription of the Python code, so my working assumption was that if a mistake had been made, it would be in the Python.)
That assumption turned out to be unfounded. I did come across some redundant code in the C++ version, which doesn’t affect game play. However, the Python version was clean. I would have to look more closely at why games played through that language ‘took longer’ and produced different results than their earlier counterparts.
Looking for a Smoking Gun
Again, my suspicions fell on the pseudo-random number generation (or PRNG for those who enjoy a good acronym) engine type used by each language. Each language typically has a base/common PRNG algorithm it uses to generate numbers. These in turn use one of a handful of known and trusted mathematical formulae to produce output that looks ‘close enough’ to being random for most decisions. (‘Most’ generally excludes secure encryption.)
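Python is a case in point: its random module documents that the core generator is the Mersenne Twister. A quick sketch of what ‘close enough to random’ means in practice: many die rolls should land roughly evenly across the faces.

```python
import random
from collections import Counter

# Python's random module is built on the Mersenne Twister (per its docs).
# 60,000 die rolls should land near 10,000 per face -- 'close enough' to
# uniform for game logic, though never good enough for cryptography.
rng = random.Random(1)
rolls = Counter(rng.randint(1, 6) for _ in range(60_000))

for face in sorted(rolls):
    print(face, rolls[face])
```

The counts wobble around 10,000 by a few hundred, which is exactly the statistical texture you want from a PRNG driving game decisions.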
PHP’s own documentation on the random_int() function goes so far as to advise how different underpinning PRNG methods are applied depending on the platform:
That looks pretty good… but it wasn’t the default PRNG engine I’d used.
So, I took a version of the PHP code and went through it, replacing instances of rand() with mt_rand(). I re-ran the 49,999-run test… and the results were pretty similar to the original run.
Why? Turns out that from PHP 7.1 the code for array_rand was changed to use – you guessed it – the Mersenne Twister formula.
Outside of dice rolls, choosing an element from an array is the most-used random selection that the Auto120 game goes through. So – what if the PHP version was rolled back (along with the use of mt_rand) and the code re-run? That was next.
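To illustrate the mechanism in question (in Python rather than PHP, and with illustrative constants rather than any real library’s internals): element selection looks the same on the surface whichever generator sits underneath, but the stream of numbers driving it differs. Here a toy linear congruential generator – the style of many classic rand() implementations – is contrasted with Python’s Mersenne Twister-backed choice().

```python
import random

def lcg_sequence(seed, a=1103515245, c=12345, m=2**31):
    """Minimal linear congruential generator, the style of classic rand().
    The constants are illustrative, not any specific language's values."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state

suits = ["hearts", "diamonds", "clubs", "spades"]

# Element selection driven by the toy LCG...
gen = lcg_sequence(seed=7)
lcg_pick = suits[next(gen) % len(suits)]

# ...versus the Mersenne Twister behind Python's choice().
mt_pick = random.Random(7).choice(suits)

print(lcg_pick, mt_pick)
```

Same API shape, different underlying stream – which is why swapping the engine under array_rand can shift the statistics of tens of thousands of games without changing a line of game logic.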
The Difference between Data Science and Data Analysis
I’ve been around a while. Not as long as some, but long enough to see the repeating patterns that emerge over time when it comes to Information Technology. They keep coming, too. The ‘cloud’ of today is the ‘outsourcing’ of the 90s. The “Big Data” of today is the “BI” of the 00s. And so on. Everything’s stolen, nowadays.
There are a lot of people around who use the term ‘data science’ without understanding what it really means. It’s a pretty simple thing to consider, yet those who publish on the subject argue that it is either another name for statistics, or is not… that it takes into account qualitative observations, or does not… and so on.
I’m reminded of two of my favourite paradigms – one being the parable of the Blind Men and the Elephant (TL/DR: each individual, having never actually ‘seen’ an elephant, voices a strongly held opinion about what an elephant is based on a single point of data), and Niels Bohr’s reuse of an old Danish proverb “Det er vanskeligt at spaa, især naar det gælder Fremtiden” that roughly translates as: “Prediction is difficult, especially about the future”.
To wit: analysing data can tell us “what happened” but rarely tells us, precisely, what is “going to happen”.
Science is a simple discipline, made difficult by the complexities of the subject matter. The joy of marrying the words “data” and “science” is that you get a self-explanatory statement. (Despite that, I’ll explain.) The scientific method begins with making a conjecture – a hypothesis – and then designing, executing and recording the results from repeatable experiments which provide the evidence that proves this hypothesis through predictable results.
This is what I was attempting to do in developing the broader framework of the Auto120 game. Namely: if you design and build an instance of an algorithm, you SHOULD get repeatably consistent results in the output REGARDLESS of the language it is implemented in.
Well. Let’s break that down.
Back to Random
So, my working hypothesis was this – the random number generator in use by PHP – being either the original algorithm or the Mersenne Twister version – could cause the data to skew. To test it, the further experiment was this – 1) run a version of the 49,999-game batch on PHP 7.0, which pre-dated the integration of the Mersenne Twister into array_rand, and 2) run a version of the same batch on PHP 5.6, AND replace the use of the array_rand method with a random integer selection instead.
My expectation was to see four different-looking sets of data, including the original run and the follow-up run where I’d substituted mt_rand.
The reality was this:
Yep. They’re the same. Almost. Look in the table below at the MODE for the two runs that DON’T use the Mersenne Twister for random number generation:
Table 1 – Comparison of key statistical measures across the four PHP runs
The Mode, the most frequently occurring move count, is 20 moves MORE than the other runs. (But, still, 40 moves FEWER than for Ruby and Python.)
So, the random number engine did play a part. But did it play such a significant part as to impact the Python and Ruby results the way they did? It was time to run another test to find out.
Next post… having ruled out the random number engine as the significant contribution to different languages’ game behaviour, it was time to deep dive into what may have changed in the code…
This is the eleventh in a series of blog posts I’ve been working on during COVID isolation. It started with the idea of refreshing my systems design and software engineering skills, and grew in the making of it.
In Part 11 this analysis continues, with some weird stuff showing up when comparing the outputs of the coded algorithms side by side.
Get in touch on Twitter at @mr_al if you’d like to chat.
Weird Data Science
Depending on the extent of outliers, some of the statistical measures returned from a truly stochastic data set may be ridiculously large. But, in the case of the Auto120 algorithm, they shouldn’t be. This is not a true integer random number generation exercise, despite some of the game moves being based on decisions made on random numbers. It is a rules-based algorithm with a degree of randomness baked into it.
To further complicate matters, each language, each interpreter and compiler – even the platform that the code was run on – adds another noise factor. A different internal algorithm for generating random number results, a different random ‘seed’ generator, a different millisecond time stamp to feed the seed generator, etc.
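The seeding factor alone is visible in a few lines of Python (here standing in for any of the six languages): left unseeded, each fresh generator draws its starting point from OS entropy, or the clock on some platforms, so parallel runs diverge from the very first draw.

```python
import random

# With no explicit seed, each generator is seeded from OS entropy (or the
# clock, on some platforms) -- so two 'identical' runs of the same code
# almost always produce different sequences from the first roll onward.
run_a = random.Random()
run_b = random.Random()

rolls_a = [run_a.randint(1, 6) for _ in range(5)]
rolls_b = [run_b.randint(1, 6) for _ in range(5)]

print(rolls_a)
print(rolls_b)
```

Every language, interpreter and platform makes its own choices here, which is the noise factor described above.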
This is James Gleick’s Chaos Theory writ on a macro scale. Each drop of water from the tap will fall in a different way, of a different size, in a different rhythm and frequency, based on several environmental and physical factors.
So, let’s look at the graphs: [ Y axes are the frequency count of moves, X axes are the count of game moves ]
As you scroll through these, the difference in the final two languages – Python and Ruby – compared to the earlier four languages is obvious. If it isn’t, then here are all six mapped on top of each other:
Some quick observations:
PHP follows close to these, however its peak is not as high and the skew to the right is more apparent
Ruby and Python show a more symmetrical, albeit still right-skewed, distribution. The peak is significantly lower than that of the other language runs.
On removing outliers
A quick note on the C++ graph. The 49,999 game run produced five games where the move count was a clear outlier – values ranging between 3,000 and over 6,000 moves – which were at odds with the rest of the data. I’d seen this occur during testing as well.
Every now and then the C++ version of the program got into a rhythm of moves that looked as though it was ‘looping’ for several thousand iterations. It was one of the reasons I put a hard-coded exit in the iterations of the main program, so that it did not run ad infinitum and so I could trap logical errors.
In the case of this batch, there ended up being eight data points that I removed – those where games ran over 750 moves – in order to produce a ‘corrected’ data set.
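The correction itself is a one-liner. A sketch with made-up move counts (not the actual batch data), using the same 750-move cutoff:

```python
# Hypothetical move counts standing in for one batch of C++ results; games
# over the 750-move cutoff are treated as 'looping' outliers and dropped.
move_counts = [118, 132, 97, 3061, 145, 6204, 110, 801]

CUTOFF = 750
corrected = [m for m in move_counts if m <= CUTOFF]
removed = [m for m in move_counts if m > CUTOFF]

print(f"{len(removed)} outliers removed, {len(corrected)} games kept")
```

A hard cutoff is a blunt instrument compared to, say, trimming beyond a number of standard deviations, but for a distribution with a handful of clear multi-thousand-move runaways it does the job.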
By the numbers
The following table, using the calculated statistical indicators rather than relying on a visual interpretation, echoes what we see in the composite graph:
I’ve colour coded the language columns to match the colours in the composite graph, with the exception of C++ ‘uncorrected’ which is not on the graph. Some interesting observations:
Ruby and Python are so close as to be almost identical. We’re talking 0.5% difference across most indicators.
C++ seemed to be the most efficient ‘player’, with the lowest number of average moves across all games – even taking into account the uncorrected run. However, Java had the lowest mode, indicating that there were a higher number of lower-move games.
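For anyone wanting to reproduce the indicators themselves, Python’s standard library covers all four. The move counts below are made up for illustration, not the blog’s data:

```python
import statistics

# Illustrative move counts for one run; mean, median, mode and standard
# deviation are the indicators compared across the language columns.
run = [120, 140, 140, 160, 180, 200, 260]

print("mean:  ", round(statistics.mean(run), 1))
print("median:", statistics.median(run))
print("mode:  ", statistics.mode(run))
print("stdev: ", round(statistics.stdev(run), 1))
```

The mode is the one to watch here: as the table shows, it is the indicator that separated the Mersenne Twister runs from the rest.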
I expected the runs to vary a little – but did not expect them to vary with such significance. The next step was to try and figure out why.