Photographer Raghu Rai passes away

Raghu Rai, amongst the very best in photo-journalism, passed away a few days ago at the age of 83. If you don’t know who he is or what he is famous for, do yourself a favor and check out his archives. In particular, see his portrayal of Kolkata, his black and white work, and his haunting, heart-wrenching, and yet completely matter-of-fact capture of Bhopal and its victims in the aftermath of the gas tragedy.

My first exposure to his work was during my undergrad days at the Jadavpur University Photographic Club (JUPC), when as a budding photographer I went to a couple of his exhibitions in Kolkata. One was his follow-up coverage, seeing and showing the Bhopal victims a couple of decades on. I had seen nothing like it, and it left a deep impression on me. He was capturing tragedy and horror, he was telling people’s stories; there was something serene and poignant in his imagery, and yet something captivatingly prosaic. I felt like I was there.

Photo-journalism is inherently opinionated, and always wants to do two things: tell a story, and capture reality. Any event can be captured in photos from multiple perspectives, and every photographer has to choose one perspective over another. The good photo-journalist knows how to capture and portray the core truth (in their opinion) of the event in their lens, while telling a story and making impactful photographs. There was no one better at it than Raghu Rai. In every story he covered, you will find his opinion and his perspective, and feel the story in your mind’s eye and on your heartstrings.

Alas, I never had the fortune to meet him in person during my JUPC days, but as I said above, his photography had an outsize impact on how I thought about my own photography.

Rest in peace, sir.


☛ Finding our galactic home (YouTube)

What a beautiful simulated rendition of our place in the universe, built around the hypothetical question: ‘how would you find your way back home if you were stranded 1 billion light-years from Earth?’ By the way, note the ‘every dot is a galaxy’ and ‘every dot is a star’ notations as the video zooms in. Also, if you’re curious about the Sun’s description as a ‘yellow dwarf’: yes, that’s the nomenclature, although it’s a misnomer, and, as the video says towards the end, our Sun is perfectly unremarkable and ordinary.

To me, videos like these are a reminder of how minuscule we are in the context of how big space is. For all our progress and technology, the farthest our probes have ventured is juuust outside of the solar system.

See also: Pale Blue Dot, by Carl Sagan.


☛ Of the original Prince of Persia video game (YouTube)

If you’re old enough to have played — or watched! — any version of the original Prince of Persia computer game, or if you take an interest in design and/or programming, this YouTube video is for you. What a wonderfully charming retelling of how the game was developed, working within the frugal memory limits of the Apple ][ for which it was originally written.

And no, I’m not old enough either to have played the original on an Apple platform. My first sighting of this piece of art was in the late 1990s, on the computer of one of my dad’s friends. For me, these were the really early days: personal computers were starting to enter the mass market, and the teenage me was buckling in for the ride. If I remember correctly, I only played this game on Mridul kaka’s (translation: Mridul uncle’s) computer a couple of times, and I remember being completely awestruck by how fluid the character movements looked. This game was gorgeous. Those were the days when games were blocky at best, and Prince of Persia was just something else. As I said, a work of art.

And now I find out those fluid animations were rotoscoped from footage of real human movement! In the 1980s! How unbelievably cool is that? Seriously, go watch, you won’t regret it.


☛ Bertrand Serlet — ‘Why AI works’

If you take an interest in why — not how — modern large language models work, this is a great 30-minute video lecture from Bertrand Serlet, former Senior Vice President of Software Engineering at Apple. It is a well-crafted, jargon-free talk to take in; at the very least, it’s a lesson in how to communicate a complex, mathematical topic in incremental, bite-size pieces without devolving into multi-layer flowcharts and equations.

I started watching because I remember Serlet from a couple of his hilarious presentations from his Apple days (watch here, then here) poking fun at Microsoft, and I’m glad I didn’t skip this one.

If, after watching the video, you would like to follow up on the topic with some technical reading, I have you covered. Transformers are a type of deep neural network, specifically trained to make predictions based on chains of previous input, using the inherent contexts and relationships therein.

So, while an image classifier takes in a single image and predicts probabilities that the image contains specific objects, a transformer takes in a sequence of information pieces, chained together, and makes a contextual prediction. The prediction in this case tries to extend the input chain, which can be the next logical word, next logical pixel, next logical musical note, etc.
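If you like seeing the idea in code, here is a minimal sketch of that ‘extend the chain’ contract in plain Python. To keep it self-contained, it uses a toy bigram frequency model — just counting which word follows which — instead of an actual neural network. A real transformer replaces the counting with learned attention layers, but the interface is the same: a sequence goes in, the most likely next token comes out.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count how often each word follows each other word in the text."""
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, sequence):
    """Given a sequence of words, return the most likely next word.

    A transformer would attend over the whole sequence; this toy model
    only looks at the last word, which is exactly its limitation.
    """
    last = sequence[-1]
    if last not in counts:
        return None
    return counts[last].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(predict_next(model, ["on", "the"]))  # prints "cat" — it follows "the" most often
```

The corpus and function names here are made up for illustration; the point is only the shape of the problem — contextual prediction that extends the input chain — not how any real model computes it.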

If you want to take a deep dive into building a simple LLM from scratch, you may start with Andrej Karpathy’s tutorial. Karpathy is one of the co-founders of OpenAI (makers of ChatGPT), and this is a very well put-together lecture. He also has an hour-long “busy-person’s intro to LLMs”.

Finally, if you really want to go down the rabbit hole, this paper is what started the transformer revolution: Attention is all you need. The View PDF link will take you to the PDF version of the paper. This LitMaps link will show you how influential that paper has been.

But, seriously, forget all of the technical stuff. Go watch Bertrand’s video lecture.

P.S.: I should note, this is not an unconditional endorsement of AI in general and LLMs in particular. These technologies are being used, and may continue to be used, in ways that are unsavory, short-sighted and dangerous. We need to be circumspect and judicious in how we deploy these extremely powerful technologies, so that we don’t incur unacceptable costs in the long term. We should aim to bolster our creative crafts with AI, not foolishly attempt to replace human creativity.


☛ Miss Marple makes a comeback

The Guardian reports:

The collection, titled Marple, marks the first time anyone other than [Agatha] Christie has written “official” (as recognised by the Christie estate) Miss Marple stories. The 12 women who contributed to the collection include award-winning crime writers Val McDermid and Dreda Say Mitchell, historical novelist Kate Mosse, classicist and writer Natalie Haynes and New York Times bestselling author Lucy Foley.

(If the Guardian link above doesn’t work for any reason, here is an alternative link from Smithsonian Magazine quoting The Guardian.)

This is great! Always room for more Marple mysteries for avid Christie readers such as me!

Several new “official” Poirot novels, written by Sophie Hannah and likewise sanctioned by the Christie estate, have already been published in the last few years. I have read a couple of them, and they are pretty good reads! The author’s voice seems just that bit different — of course, that is to be expected, and indeed hoped for — and that’s a little jarring after years of reading Christie, but the plots and the characters are quite well thought out and fleshed out. They won’t feel out of place amongst Christie’s Poirot mysteries.

If these new Marple stories are anywhere near as good, they will be worth looking out for.