☛ Finding our galactic home (Youtube)

What a beautiful simulated rendition of our place in the universe, framed around the hypothetical question: ‘how would you find your way back home if you were stranded 1 billion light-years from Earth?’ By the way, note the ‘every dot is a galaxy’ and ‘every dot is a star’ notations as the video zooms in. Also, if you’re curious about the Sun’s description as a ‘yellow dwarf’, yes, that’s the standard nomenclature, although it’s a bit of a misnomer, and, as the video says towards the end, our Sun is perfectly unremarkable and ordinary.

To me, videos like these are a reminder of how minuscule we are in the context of how big space is. For all our progress and technology, the farthest our probes have ventured is juuust outside of the solar system.

See also: Pale Blue Dot, by Carl Sagan.


☛ Of the original Prince of Persia video game (Youtube)

If you’re old enough to have played — or watched! — any version of the original Prince of Persia computer game, or if you take an interest in design and/or programming, this Youtube video is for you. What a wonderfully charming retelling of how the game was developed, working within the frugal memory limits of the Apple ][ for which it was originally written.

And no, I’m not old enough either to have played the original on an Apple platform. My first sighting of this piece of art was in the late 1990s, on the computer of one of my dad’s friends. For me, these were the really early days: personal computers were starting to enter the mass market, and the teenage me was buckling in for the ride. If I remember correctly, I only played this game on Mridul kaka’s (translation: Mridul uncle’s) computer a couple of times, and I remember being completely awestruck by how fluid the character movements looked. This game was gorgeous. Those were the days when games were blocky at best, and Prince of Persia was just something else. As I said, a work of art.

And now I find out those fluid animations were based on real motion capture! In the 1980s! How unbelievably cool is that? Seriously, go watch, you won’t regret it.


☛ Bertrand Serlet — ‘Why AI works’

If you take an interest in why — not how — modern large language models work, this is a great 30-minute video lecture from Bertrand Serlet, former Senior Vice President of Software Engineering at Apple. This is a well-crafted, jargon-free talk to take in; at the very least, it’s a lesson in how to communicate a complex, mathematical topic in incremental, bite-size pieces without devolving into multi-layer flowcharts and equations.

I started watching because I remember Serlet from a couple of his hilarious presentations from his Apple days (watch here, then here) poking fun at Microsoft, and I’m glad I didn’t skip this one.

If, after watching the video, you would like to follow up on the topic with some technical reading, I have you covered. Transformers are a type of deep neural network, specifically trained to make predictions based on chains of previous input, using the inherent contexts and relationships therein.

So, while an image classifier takes in a single image and predicts probabilities that the image contains specific objects, a transformer takes in a sequence of information pieces, chained together, and makes a contextual prediction. The prediction in this case tries to extend the input chain, which can be the next logical word, next logical pixel, next logical musical note, etc.
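To make the ‘extend the input chain’ idea concrete, here’s a minimal, purely illustrative sketch. It is not a transformer — just a toy bigram frequency model I’ve made up for illustration — but it shows the same basic shape of the task: given previous words, predict the next one.

```python
from collections import Counter, defaultdict

# A toy corpus; a real LLM trains on vastly more text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word. A bigram model only looks one
# word back; a transformer learns far richer, longer-range context.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word given the previous one."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat" -- it follows "the" most often
```

A transformer does this same next-piece prediction, but with attention over the whole preceding sequence instead of a single previous word, which is what lets it use long-range context and relationships.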

If you want to take a deep dive into building a simple LLM from scratch, you may start with Andrej Karpathy’s tutorial. Karpathy is one of the co-founders of OpenAI (makers of ChatGPT), and this is a very well put-together lecture. He also has an hour-long “busy-person’s intro to LLMs”.

Finally, if you really want to go down the rabbit hole, this paper is what started the transformer revolution: Attention is all you need. The View PDF link will take you to the PDF version of the paper. This LitMaps link will show you how influential that paper has been.

But, seriously, forget all of the technical stuff. Go watch Bertrand’s video lecture.

P.S.: I should note, this is not an unconditional endorsement of AI in general and LLMs in particular. These technologies are being used, and may continue to be used, in ways that are unsavory, short-sighted and dangerous. We need to be circumspect and judicious in how we deploy these extremely powerful technologies, so that we don’t incur unacceptable costs in the long term. We should aim to bolster our creative crafts with AI, not foolishly attempt to replace human creativity.


☛ Miss Marple makes a comeback

The Guardian reports:

The collection, titled Marple, marks the first time anyone other than [Agatha] Christie has written “official” (as recognised by the Christie estate) Miss Marple stories. The 12 women who contributed to the collection include award-winning crime writers Val McDermid and Dreda Say Mitchell, historical novelist Kate Mosse, classicist and writer Natalie Haynes and New York Times bestselling author Lucy Foley.

(If the Guardian link above doesn’t work for any reason, here is an alternative link from Smithsonian Magazine quoting The Guardian.)

This is great! Always room for more Marple mysteries for avid Christie readers such as me!

Several new “official” Poirot novels, written by Sophie Hannah and also sanctioned by the Christie estate, have already been published in the last few years. I have read a couple of them, and they are pretty good reads! The author’s voice seems just that bit different — of course, that is to be expected, and indeed hoped for — and that’s a little jarring after years of reading Christie, but the plots and the characters are quite well-thought-written-fleshed-out. They won’t feel out of place amongst Christie’s Poirot mysteries.

If these new Marple stories are anywhere as good, then they will be worth looking out for.


☛ Ancient DNA traces origin of Black Death

A Silk Road stopover might have been the epicentre of one of humanity’s most destructive pandemics.

People who died in a fourteenth-century outbreak in what is now Kyrgyzstan were killed by strains of the plague-causing bacterium Yersinia pestis that gave rise to the pathogens responsible several years later for the Black Death, shows a study of ancient genomes.

“It is like finding the place where all the strains come together, like with coronavirus where we have Alpha, Delta, Omicron all coming from this strain in Wuhan,” says Johannes Krause, a palaeogeneticist at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, who co-led the study, published on 15 June in Nature.

Fascinating read on new research on the origins of the Black Death. As you can imagine, it’s not an easy task to find genomic data from the plague bacteria several centuries after the pandemic. Then, like now, how the pandemic spread depended quite a lot on how and where large numbers of humans came together and then dispersed, carrying the deadly disease with them.