Incremental Design and Problem Solving

via Kyle Wendland on Twitter

The video below, courtesy of the Brick Experiment Channel, caught my eye as a nifty example of iterative and accumulative design problem solving. This is not constrained to the tangible world. In systems design, a basic starting problem that is met with a simple and effective solution often spawns a series of stepped-up demands and higher-complexity problems over time.

There can be a few reasons for this, such as:

By design – an iterative/incremental approach to problem solving beginning with a ‘minimal viable product’ (MVP) which is tested, released, and then informs subsequent design in a conscious, methodical and step-wise fashion

By evolving requirement – the subsequent problems did not present themselves until the initial problem had been resolved due to evolution of the system itself

By naivety of requirement – the clients of the solution did not realise they had additional problems to solve, or believed the initial problem was the only one

By low capability awareness – the clients of the solution were not aware that the problem was able to be solved and therefore had not applied resources to solve it

By status quo bias – the clients of the solution were emotionally anchored on the current system, despite inadequacies, and so avoided attempts to solve the problem

Courtesy of Brick Experiment Channel on YouTube

Writing: Fate of the Nameless Child

My latest piece of flash-fiction “Fate of the Nameless Child” has been published by Australian zine “Antipodean Sci Fi”. Narration to follow soon!

Photo credit: Peter H at https://pixabay.com/users/tama66-1032521/

A bit of background on the story.

“Fate of the Nameless Child” began as a longer piece of work several years ago; a multi-chapter, multi-perspective narrative and commentary on how people fear the unknown ‘thing’, the unknown ‘one’. Themes of attempting to understand, contain and destroy are explored more deeply in the original.

I shopped it around to several publishing houses to no avail, though I received some lovely feedback on the tone of voice and the themes of the longer-form piece. I am sure I will publish it here one day so you can appreciate the full story and more of the characters who only get a passing mention in this version.

Who is the nameless child, and what is their fate?

Alistair, October 2020

p.s. Read my previous piece “Fallen Angel” also published by Antipodean SF.

Design and Code – Part 12 – The Bake Off – Apples and Pears

This is the twelfth in a series of blog posts I’ve been working on during COVID isolation. It started with the idea of refreshing my systems design and software engineering skills, and grew in the making of it.

Part 1 describes ‘the problem’. A mathematics game, modelled on the board game Ludo, designed to help children understand factors and limits.

Part 2 takes you through the process I followed to break this problem down to understand the constraints and rules that players follow as a precursor to implementation design.

Part 3 shows how the building blocks for the design of the classes and methods are laid out in preparation for understanding each of the functions and attributes needed to encapsulate the game.

In Part 4 we get down to business… producing the game using the programming language PHP with an object-orientated approach.

Part 5 stepped it up a notch and walked through a translation of the code to object-orientated C++.

Part 6 delves into the world of translating the code into object-orientated Java.

In Part 7 the same code is then translated into Javascript.

In Part 8 I slide into the sinuous world of Python and learn a thing or two about tidy coding as I go.

Part 9 explored the multi-faceted world of Ruby.

In Part 10 I started down the path of comparative analysis by asking the question – did all of these languages implement the algorithm consistently, and how would we know if they did?

This is continued in Part 11, with some weird stuff showing up when comparing the outputs of the coded algorithms side by side.

Get in touch on Twitter at @mr_al if you’d like to chat.


One of these things is close to, but not exactly like, the other.

A Brief History of Randomness

As mentioned earlier, there were a few thoughts I had early in the game design and coding process which may explain the differences in results across parallel runs of the same algorithm in different languages. In the black box of each language’s include files and libraries are a variety of methods for generating pseudo-random numbers. There are several, often quite emotive, debates amongst mathematicians and software engineers as to how to generate a truly ‘random’ number.

Famously, the engineers at Cloudflare came up with a nifty solution derived from a patented idea by Silicon Graphics. This involved a physical installation of a wall of lava lamps, generating non-digital entropy which is then used to seed random number generation. My mind is drawn back to Douglas Adams’ description of the Infinite Improbability Drive from the Hitchhiker’s Guide novels, which used a strong Brownian motion producer (say, a nice hot cup of tea) as an alternative to a wall of lava lamps.

However, for the average software engineer, it is a black box. So, my next thought was that something had slipped into the code base – particularly when I was wrestling through the spacing and grammar markers that plagued me in Python – which had perverted the logic. Could an errant semi-colon or un-closed conditional mean that the code for Python and Ruby was ‘behaving’ differently to the earlier versions?

Code Walking

Time for a visual grep. You know what a ‘grep’ is, right? It is a ye olde and popular Unix/Linux command-line utility for searching text for pattern strings, highly efficient and rich with options that have been tacked onto it over time. However, it’s only as good as the definition of what you’re looking for, and the structure of what you’re looking “in”.

When it comes to comparing code of different languages, things are a bit trickier. Yes, there are advanced search and replace style converters out there (some of which turn out some truly barbaric chunks of code, as witnessed during far-too-many Y2K code migrations). But sometimes, it’s easier to do a ‘visual grep’ – namely, print out both sets of code, and sit down with a glass of something Scottish and highly alcoholic, and go at it line by line, side by side.

Which I did.

Figure 1 – Walking through code line by line, C++ at the top, Python below

I chose the C++ version and the Python version of the Game class, mainly as it is the most complex and also because it encapsulates almost all of the game flow logic.

There had been a few weeks between producing these two blocks of code, and my struggles with debugging the Python code meant I was very familiar with it.

What I was looking for was obvious typos, omissions or lines of code which varied markedly from method to method. My hope was that, as the old techo joke goes, the problem “existed between the keyboard and the chair”, and that a slip of the wrist had resulted in a change in the logic flow which was causing the Python code to run very differently. (The Ruby code was a transcription of the Python code, so my working assumption was that if a mistake had been made, it would be in the Python.)

That assumption turned out to be unfounded. I did come across some redundant code in the C++ version, which doesn’t affect game play. However, the Python version was clean. I would have to look more closely at why games played through that language ‘took longer’ and produced different results than their earlier counterparts.
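For those who’d rather spare the printer (and the whisky), Python’s standard difflib can produce a machine-made version of the same line-by-line comparison. The two snippets below are invented stand-ins, not the actual Game class code:

```python
# Sketch of a programmatic "visual grep": a unified diff of two versions of
# the same routine. The snippet contents and file names are hypothetical.
import difflib

def code_diff(a_lines, b_lines):
    """Return unified-diff lines highlighting where two versions diverge."""
    return list(difflib.unified_diff(a_lines, b_lines,
                                     fromfile="game.cpp", tofile="game.py",
                                     lineterm=""))

cpp = ["roll = rand() % 6 + 1", "if roll == 6: extra_turn()"]
py  = ["roll = rand() % 6 + 1", "if roll == 6: extra_turn()",
       "score += roll"]  # an extra line the diff should flag

for line in code_diff(cpp, py):
    print(line)  # lines prefixed '+' or '-' mark the divergences
```

Only the logic-flow differences stand out; identical lines print with a leading space, which makes a side-by-side read much faster than eyeballing raw printouts.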

Looking for a Smoking Gun

Again, my suspicions fell on the pseudo-random number generation (or PRNG for those who enjoy a good acronym) engine type used by each language. Each language typically has a base/common PRNG algorithm it uses to generate numbers. These in turn use one of a handful of known and trusted mathematical formulae to produce output that looks ‘close enough’ to being random for most decisions. (‘Most’ generally excludes secure encryption.)
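To make the ‘black box’ point concrete, here is a minimal Python sketch. Python’s documented base PRNG is the Mersenne Twister, and it is fully deterministic once seeded, so any difference between simulation runs comes from the engine and the seed, not from ‘luck’:

```python
# Python's random module is documented to use the Mersenne Twister (MT19937).
# Two generators seeded identically produce identical "random" streams.
import random

a = random.Random(2020)
b = random.Random(2020)

rolls_a = [a.randint(1, 6) for _ in range(10)]  # ten six-sided dice rolls
rolls_b = [b.randint(1, 6) for _ in range(10)]

print(rolls_a == rolls_b)  # prints True: same seed, same stream
```

The corollary is the interesting part for this series: two languages whose PRNGs use different formulae (or different seeding strategies) will drift apart statistically even when the game logic is identical.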

PHP’s own documentation on the random_int() function goes so far as to advise how different underpinning PRNG methods are applied depending on the platform:

Figure 2 – Extract of the PHP language description for random_int

That looks pretty good… but it wasn’t the default PRNG engine I’d used.

There are plenty of articles written about the use of PRNGs in simulation or gaming software. With the benefit of hindsight (and a lot of simulation runs) it now seems obvious that the formula behind the PRNG used in the C++, Java, Javascript (and probably PHP) code was the same. Ruby and Python are most likely using a formula with the intriguing name – “Mersenne Twister” – which, up to now, I’d taken to be some sort of burlesque dance move.

So, I took a version of the PHP code and went through it, replacing instances of rand() with mt_rand(). I re-ran the 49,999-run test… and the results were pretty similar to the original run.

Why? Turns out that from PHP 7.1 the code for array_rand was changed to use – you guessed it – the Mersenne Twister formula.

Figure 3 – Extract of the PHP language description for array_rand

Outside of dice rolls, choosing an element from an array is the most-used random selection in the Auto120 game. So – what if the PHP version was rolled back (along with the use of mt_rand) and the code re-run? That was next.

The Difference between Data Science and Data Analysis

I’ve been around a while. Not as long as some, but long enough to see the repeating patterns that emerge over time when it comes to Information Technology. They keep coming, too. The ‘cloud’ of today is the ‘outsourcing’ of the 90s. The ‘Big Data’ of today is the ‘BI’ of the 00s. And so on. Everything’s stolen, nowadays.

Figure 3 – Who uses fax machines these days? (Except for doctors, apparently.)

There are a lot of people around who use the term ‘data science’ without understanding what it really means. It’s a pretty simple thing to consider, and yet those who publish on the subject argue that it is either another name for statistics, or is not… that it takes qualitative observations into account, or does not… and so on.

I’m reminded of two of my favourite parables – one being the Blind Men and the Elephant (TL;DR: each individual, having never actually ‘seen’ an elephant, voices a strongly held opinion about what an elephant is based on a single point of data), and Niels Bohr’s reuse of an old Danish proverb “Det er vanskeligt at spaa, især naar det gælder Fremtiden” that roughly translates as: “Prediction is difficult, especially about the future”.

To wit: analysing data can tell us “what happened” but rarely tells us, precisely, what is “going to happen”.

Science is a simple discipline, made difficult by the complexities of the subject matter. The joy of marrying the words “data” and “science” is that you get a self-explanatory statement. (Despite that, I’ll explain.) The scientific method begins with making a conjecture – a hypothesis – and then designing, executing and recording the results from repeatable experiments which provide the evidence to support or refute that hypothesis through predictable results.

This is what I was attempting to do with developing the broader framework of the Auto120 game. Namely: if you design and build an instance of an algorithm, you SHOULD get repeatably consistent results in the output REGARDLESS of the language it is implemented in.

Well. Let’s break that down.

Yes, the results are consistent… but not as consistent as I would have expected. Yes, they are generally consistent across different languages. But, they’re not precisely the same. Like the wobble in Uranus’ orbit which led astronomers to discover the planet Neptune, there is something ‘different’ between two subsets of languages – with Python and Ruby in one camp, and PHP, C++, Java and Javascript in the other. And even PHP is slightly skewed.

Back to Random

So, my working hypothesis was this – the random number generator in use by PHP – being either the original algorithm or the Mersenne Twister version – could cause the data to skew. To test it, the further experiment was this: 1) run a version of the 49,999-game simulation on PHP 7.0, which pre-dates the integration of the Mersenne Twister, and 2) run a version of the 49,999-game simulation on PHP 5.6, AND replace the use of the array_rand method with a random integer selection instead.

My expectation was to see four different-looking sets of data, including the original run and the follow-up run where I’d substituted mt_rand.

The reality was this:

Figure 4 – Consolidated frequency plot of four different PHP runs, n=49,999

Yep. They’re the same. Almost. Look in the table below at the MODE for the two runs that DON’T use the Mersenne Twister for random number generation:

           PHP      PHP-MT   PHP-5.6  PHP-7    Std Dev (of the PHP runs)
Average    129.06   128.83   128.99   129.58    0.33
Median     118.00   117.00   117.00   118.00    0.58
Mode        83.00    86.00   102.00   103.00   10.47
STDEV       63.48    63.75    63.40    63.64    0.15
Kurtosis     2.05     1.90     2.02     1.95    0.07
Skewness     1.15     1.14     1.15     1.14    0.01
Table 1 – Comparison of key statistical measures across the four PHP runs

The Mode, the most frequently occurring move count, is 20 moves MORE than in the other runs. (But still 40 moves FEWER than for Ruby and Python.)

So, the random number engine did play a part. But did it play such a significant part as to impact the Python and Ruby results the way they did? It was time to run another test to find out.

Next post… having ruled out the random number engine as the significant contribution to different languages’ game behaviour, it was time to deep dive into what may have changed in the code…

Design and Code – Part 11 – The Bake Off – Statistical Comparison, Part II

This is the eleventh in a series of blog posts I’ve been working on during COVID isolation. It started with the idea of refreshing my systems design and software engineering skills, and grew in the making of it.

Part 1 describes ‘the problem’. A mathematics game, modelled on the board game Ludo, designed to help children understand factors and limits.

Part 2 takes you through the process I followed to break this problem down to understand the constraints and rules that players follow as a precursor to implementation design.

Part 3 shows how the building blocks for the design of the classes and methods are laid out in preparation for understanding each of the functions and attributes needed to encapsulate the game.

In Part 4 we get down to business… producing the game using the programming language PHP with an object-orientated approach.

Part 5 stepped it up a notch and walked through a translation of the code to object-orientated C++.

Part 6 delves into the world of translating the code into object-orientated Java.

In Part 7 the same code is then translated into Javascript.

In Part 8 I slide into the sinuous world of Python and learn a thing or two about tidy coding as I go.

Part 9 explored the multi-faceted world of Ruby.

In Part 10 I started down the path of comparative analysis by asking the question – did all of these languages implement the algorithm consistently, and how would we know if they did?

In Part 11 this analysis continues, with some weird stuff showing up when comparing the outputs of the coded algorithms side by side.

Get in touch on Twitter at @mr_al if you’d like to chat.


Things are about to get weird and miscellaneous…

Weird Data Science

Depending on the extent of outliers, some of the statistical measures returned from a truly stochastic data set may be ridiculously large. But in the case of the Auto120 algorithm, they shouldn’t be. This is not a true random integer generation exercise, despite some of the game moves being based on decisions made on random numbers. It is a rules-based algorithm with a degree of randomness baked into it.

To further complicate matters, each language, each interpreter and compiler – even the platform that the code was run on – adds another noise factor. A different internal algorithm for generating random number results, a different random ‘seed’ generator, a different millisecond time stamp to feed the seed generator, etc.

This is the chaos theory that James Gleick wrote about, writ on a macro scale. Each drop of water from the tap will fall in a different way, of a different size, in a different rhythm and frequency, based on several environmental and physical factors.

So, let’s look at the graphs: [ Y axes show the frequency of games, X axes show the number of moves per game ]

Figure 1 – Output from PHP [n=49,999]
Figure 2 – Output from Java [n=49,999]
Figure 3 – Output from C++ [n=49,999]
Figure 4 – Output from Javascript [n=49,999]
Figure 5 – Output from Ruby [n=49,999]
Figure 6 – Output from Python [n=49,999]

As you scroll through these, the difference in the final two languages – Python and Ruby – compared to the earlier four languages, is obvious. If it isn’t, then here are all six mapped on top of each other:

Figure 7 – Combined plot, all languages [n=49,999] {click for large version}

Some quick observations:

  • Java, C++ and Javascript appear to show a near identical pattern
  • PHP follows close to these, however its peak is not as high and the skew to the right is more apparent
  • Ruby and Python show a more symmetrical, albeit still right-skewed distribution. The peak is significantly lower than that of the other language runs.

On removing outliers

A quick note on the C++ graph. The 49,999-game run produced five games where the move count was a clear outlier – values ranging between 3,000 and over 6,000 moves – at odds with the rest of the data. I’d seen this occur during testing as well.

Every now and then the C++ version of the program got into a rhythm of moves that looked as though it was ‘looping’ for several thousand iterations. It was one of the reasons I put a hard-coded exit in the iterations of the main program, so that it did not run ad infinitum and so I could trap logical errors.

In the case of this batch, there ended up being eight data points that I removed – games of over 750 moves – in order to produce a ‘corrected’ data set.
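The trim itself is trivial to express in code. A minimal sketch, with invented sample values standing in for the real move counts:

```python
# Split a set of game move counts on a cutoff: everything above the cutoff
# is treated as an outlier and excluded from the 'corrected' data set.
def trim_outliers(move_counts, cutoff=750):
    """Return (kept, removed) move counts, splitting on the cutoff."""
    kept = [m for m in move_counts if m <= cutoff]
    removed = [m for m in move_counts if m > cutoff]
    return kept, removed

# Hypothetical example values - the real run had 49,999 games.
sample = [120, 98, 143, 6021, 88, 3120, 240]
kept, removed = trim_outliers(sample)
print(len(kept), len(removed))  # prints: 5 2
```

A fixed cutoff is a blunt instrument compared to, say, an interquartile-range rule, but for a distribution with a known ‘looping’ failure mode it has the virtue of being easy to explain and reproduce.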

By the numbers

The following table, using the calculated statistical indicators rather than relying on a visual interpretation, echoes what we see in the composite graph:

Figure 8 – Statistical measures, all languages

I’ve colour coded the language columns to match the colours in the composite graph, with the exception of C++ ‘uncorrected’ which is not on the graph. Some interesting observations:

  1. The skewness of Java and JavaScript are identical, but their kurtosis and mode vary
  2. Ruby and Python are so close as to be almost identical. We’re talking 0.5% difference across most indicators.
  3. The Java, Javascript and C++ distributions, indicated by the Standard Deviation, are also very similar.
  4. C++ seemed to be the most efficient ‘player’, with the lowest number of average moves across all games – even taking into account the uncorrected run. However, Java had the lowest mode, indicating that there were a higher number of lower-move games.
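Assuming the measures in the table were computed spreadsheet-style, the two less familiar ones – skewness and kurtosis – can be reproduced in pure Python. These are the adjusted sample formulas that match Excel’s SKEW and KURT functions; the input list is a stand-in for a column of game move counts:

```python
# Sample skewness and excess kurtosis, matching the adjusted
# (Excel SKEW/KURT-style) formulas commonly used in spreadsheets.
import statistics

def skewness(xs):
    """Adjusted Fisher-Pearson sample skewness."""
    n, mean, sd = len(xs), statistics.mean(xs), statistics.stdev(xs)
    return (n / ((n - 1) * (n - 2))) * sum(((x - mean) / sd) ** 3 for x in xs)

def excess_kurtosis(xs):
    """Sample excess kurtosis (0 for a normal distribution)."""
    n, mean, sd = len(xs), statistics.mean(xs), statistics.stdev(xs)
    term = (n * (n + 1)) / ((n - 1) * (n - 2) * (n - 3))
    fourth = sum(((x - mean) / sd) ** 4 for x in xs)
    return term * fourth - (3 * (n - 1) ** 2) / ((n - 2) * (n - 3))

# A symmetric sample has zero skew; a long right tail pushes skew positive,
# which is the pattern every language's move-count distribution showed.
print(skewness([1, 1, 1, 2, 10]))  # positive: right-skewed
```

Positive skewness for every run is exactly what the composite graph shows: a hump of short games with a long tail of marathon ones.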

I expected the runs to vary a little – but did not expect them to vary so significantly. The next step was to try and figure out why.

Next post… looking for ways of understanding what was causing the difference in output.