☛ Of the original Prince of Persia video game

If you’re old enough to have played — or watched! — any version of the original Prince of Persia computer game, or if you take an interest in design and/or programming, this YouTube video is for you. What a wonderfully charming retelling of how the game was developed, working within the frugal memory limits of the Apple ][, the platform it was originally written for.

And no, I’m not old enough either to have played the original on an Apple platform. My first sighting of this piece of art was in the late 1990s, on the computer of one of my dad’s friends. For me, these were the really early days: personal computers were starting to enter the mass market, and the teenage me was buckling in for the ride. If I remember correctly, I only played this game on Mridul kaka’s (translation: Mridul uncle’s) computer a couple of times, and I remember being completely awestruck by how fluid the character movements looked. This game was gorgeous. Those were the days when games were blocky at best, and Prince of Persia was just something else. As I said, a work of art.

And now I find out those fluid animations were based on real motion capture! In the 1980s! How unbelievably cool is that? Seriously, go watch, you won’t regret it.


☛ Bertrand Serlet — ‘Why AI works’

If you take an interest in why — not how — modern large language models work, this is a great 30-minute video lecture from Bertrand Serlet, former Senior Vice President of Software Engineering at Apple. This is a well-crafted, jargon-free talk to take in; at the very least, it’s a lesson in how to communicate a complex, mathematical topic in incremental, bite-size pieces without devolving into multi-layer flowcharts and equations.

I started watching because I remember Serlet from a couple of his hilarious presentations from his Apple days (watch here, then here) poking fun at Microsoft, and I’m glad I didn’t skip this one.

If, after watching the video, you would like to follow up on the topic with some technical reading, I have you covered. Transformers are a type of deep neural network, specifically trained to make predictions based on chains of previous input, using the inherent contexts and relationships therein.

So, while an image classifier takes in a single image and predicts probabilities that the image contains specific objects, a transformer takes in a sequence of information pieces, chained together, and makes a contextual prediction. The prediction in this case tries to extend the input chain, which can be the next logical word, next logical pixel, next logical musical note, etc.
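To make that extend-the-chain idea concrete, here is a deliberately tiny sketch — not a transformer, just a bigram lookup over a toy corpus (all names and the corpus itself are my own invention for illustration). It shows the autoregressive loop itself: predict the next word, append it to the chain, feed the chain back in. A real transformer replaces the naive lookup with a learned, context-aware prediction over the whole sequence.

```python
# Toy autoregressive generation. The "model" here is just counts of
# which word follows which; a transformer learns a far richer,
# whole-context version of this next-token prediction.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count each word's successors (the simplest possible "context").
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word`, or None."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

# The autoregressive loop: each prediction extends the input chain
# and is fed back in to produce the next prediction.
chain = ["the"]
for _ in range(3):
    nxt = predict_next(chain[-1])
    if nxt is None:
        break
    chain.append(nxt)

print(" ".join(chain))
```

Swap the word corpus for pixels or musical notes and the same loop sketches image or music generation — only the prediction step changes.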

If you want to take a deep dive into building a simple LLM from scratch, you may start with Andrej Karpathy’s tutorial. Karpathy is one of the co-founders of OpenAI (makers of ChatGPT), and this is a very well put-together lecture. He also has an hour-long “busy-person’s intro to LLMs”.

Finally, if you really want to go down the rabbit hole, this paper is what started the transformer revolution: Attention is all you need. The View PDF link will take you to the PDF version of the paper. This LitMaps link will show you how influential that paper has been.

But, seriously, forget all of the technical stuff. Go watch Bertrand’s video lecture.

P.S.: I should note, this is not an unconditional endorsement of AI in general and LLMs in particular. These technologies are being used, and may continue to be used, in ways that are unsavory, short-sighted and dangerous. We need to be circumspect and judicious in how we deploy these extremely powerful technologies, so that we don’t incur unacceptable costs in the long term. We should aim to bolster our creative crafts with AI, not foolishly attempt to replace human creativity.


☛ Miss Marple makes a comeback

The Guardian reports:

The collection, titled Marple, marks the first time anyone other than [Agatha] Christie has written “official” (as recognised by the Christie estate) Miss Marple stories. The 12 women who contributed to the collection include award-winning crime writers Val McDermid and Dreda Say Mitchell, historical novelist Kate Mosse, classicist and writer Natalie Haynes and New York Times bestselling author Lucy Foley.

(If the Guardian link above doesn’t work for any reason, here is an alternative link from Smithsonian Magazine quoting The Guardian.)

This is great! Always room for more Marple mysteries for avid Christie readers such as me!

Several new “official” Poirot novels, written by Sophie Hannah and also sanctioned by the Christie estate, have been published in the last few years. I have read a couple of them, and they are pretty good reads! The author’s voice seems just that bit different — of course, that is to be expected, and indeed hoped for — and that’s a little jarring after years of reading Christie, but the plots and the characters are quite well-thought-out and well-fleshed-out. They won’t feel out of place amongst Christie’s Poirot mysteries.

If these new Marple stories are anywhere near as good, then they will be worth looking out for.


☛ Ancient DNA traces origin of Black Death

A Silk Road stopover might have been the epicentre of one of humanity’s most destructive pandemics.

People who died in a fourteenth-century outbreak in what is now Kyrgyzstan were killed by strains of the plague-causing bacterium Yersinia pestis that gave rise to the pathogens responsible several years later for the Black Death, shows a study of ancient genomes.

“It is like finding the place where all the strains come together, like with coronavirus where we have Alpha, Delta, Omicron all coming from this strain in Wuhan,” says Johannes Krause, a palaeogeneticist at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, who co-led the study, published on 15 June in Nature.

Fascinating read on new research into the origins of the Black Death. As you can imagine, it’s not an easy task to find genomic data from the plague bacteria several centuries after the pandemic. Then, like now, how the pandemic spread depended a great deal on how and where large numbers of humans came together and then dispersed, carrying the deadly disease with them.


☛ Of Cricket, and How Fast Bowling is About More Than Speed

It has been too long on this website with not a mention of cricket. To remedy that, here is essential reading by Cameron Ponsonby at ESPNCricinfo on how fast bowling speeds are only a portion of the feel of a fast bowler’s pace:

It is very easy to think of facing fast bowling as primarily a reactive skill. In fact, read any article on quick bowling and it will invariably say you only have 0.4 seconds to react to a 90mph delivery.

But what does that mean? No one can compute information in 0.4 seconds. It’s beyond our realm of thinking in the same way that looking out of an aeroplane window doesn’t give you vertigo because you’re simply too high up for your brain to process it.

However, the reason it’s possible is because, whilst you may only have 0.4 seconds to react, you have a lot longer than that to plan. And the best in the world plan exceptionally well.

When the ball arrives literally faster than you, the batter, can react to it, how fast a delivery feels has far more to do with the differences between bowlers than with the raw pace on the ball.

Excellent and insightful read.

Another interesting piece by Ponsonby talks about data analytics in cricket. As Ponsonby mentions in his fast bowling article, cricket only dabbles in data analytics when compared to, say, baseball, where the analytics have been taken to another level altogether.

I think I’m okay with the balance that cricket has with its data analytics: I would rather have the analytics being fascinating reads for the fan, and an influence on the coaches/players, without their becoming all that anyone cares or talks about. I sometimes feel like the innate skill and art of sport gets lost in baseball. Makes for great reading though!