☛ Bertrand Serlet — ‘Why AI works’

If you're interested in why — not how — modern large language models work, this is a great 30-minute video lecture from Bertrand Serlet, former Senior Vice President of Software Engineering at Apple. It's a well-crafted, jargon-free talk; at the very least, it's a lesson in how to communicate a complex, mathematical topic in incremental, bite-size pieces without devolving into multi-layer flowcharts and equations.

I started watching because I remembered Serlet from a couple of hilarious presentations from his Apple days poking fun at Microsoft (watch here, then here), and I'm glad I didn't skip this one.

If, after watching the video, you would like to follow up on the topic with some technical reading, I have you covered. Transformers are a type of deep neural network trained to make predictions from chains of previous input, exploiting the contexts and relationships inherent in those chains.

So, while an image classifier takes in a single image and predicts the probability that it contains specific objects, a transformer takes in a chained sequence of pieces of information and makes a contextual prediction that extends the chain: the next logical word, the next logical pixel, the next logical musical note, and so on.
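If a few lines of code make that concrete, here is a minimal NumPy sketch of the attention mechanism at the core of a transformer. It's my own illustration, not something from Serlet's talk, and it cuts corners: a real transformer derives the queries, keys, and values from learned projections, stacks many such layers, and adds positional information.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max before exponentiating for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: every position scores every other
    # position (Q @ K.T), the scores are normalized into weights, and
    # the weights mix the value vectors V into context-aware outputs.
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)
    return weights @ V

# Toy "sequence" of 4 tokens, each embedded in 8 dimensions.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))

# Reusing x as Q, K, and V is a simplification, just to show the mechanics.
out = attention(x, x, x)
print(out.shape)  # (4, 8): one context-aware vector per input token
```

Each output row is a blend of the whole input sequence, weighted by relevance; that blending is what lets the model use context when it predicts the next element of the chain.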

If you want to take a deep dive into building a simple LLM from scratch, you may start with Andrej Karpathy’s tutorial. Karpathy is one of the co-founders of OpenAI (makers of ChatGPT), and this is a very well put-together lecture. He also has an hour-long “busy-person’s intro to LLMs”.

Finally, if you really want to go down the rabbit hole, this paper is what started the transformer revolution: Attention Is All You Need. The View PDF link will take you to the PDF version of the paper. This LitMaps link will show you how influential that paper has been.

But, seriously, forget all of the technical stuff. Go watch Bertrand’s video lecture.

P.S.: I should note, this is not an unconditional endorsement of AI in general and LLMs in particular. These technologies are being used, and may continue to be used, in ways that are unsavory, short-sighted and dangerous. We need to be circumspect and judicious in how we deploy these extremely powerful technologies, so that we don’t incur unacceptable costs in the long term. We should aim to bolster our creative crafts with AI, not foolishly attempt to replace human creativity.


☛ How Bayesian inference works

Bayesian inference is a way to get sharper predictions from your data. It’s particularly useful when you don’t have as much data as you would like and want to juice every last bit of predictive strength from it.

Although it is sometimes described with reverence, Bayesian inference isn’t magic or mystical. And even though the math under the hood can get dense, the concepts behind it are completely accessible. In brief, Bayesian inference lets you draw stronger conclusions from your data by folding in what you already know about the answer.

An excellent, simple introduction to Bayesian inference that uses practical examples and an abundance of visual guides; it's especially useful if you don't have an extensive math background.
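If you'd like to see the "folding in what you already know" idea in a few lines of code, here is a minimal Python sketch. It's my own toy example, not taken from the article, and it uses the textbook coin-flip setup with a conjugate Beta prior so the update is a one-liner.

```python
# We flip a coin of unknown bias p and observe 7 heads in 10 flips.
# A Beta(a, b) prior encodes what we already believe about p; with
# binomial data the posterior is simply Beta(a + heads, b + tails).
heads, tails = 7, 3

# Prior 1: we know nothing about the coin -> uniform Beta(1, 1).
# Prior 2: we strongly believe the coin is fair -> Beta(20, 20).
for a, b in [(1, 1), (20, 20)]:
    post_a, post_b = a + heads, b + tails
    mean = post_a / (post_a + post_b)
    print(f"prior Beta({a},{b}) -> posterior Beta({post_a},{post_b}), "
          f"posterior mean of p = {mean:.3f}")
```

With the know-nothing prior, the estimate of p lands near 0.667, driven entirely by the data; with the strong fair-coin prior, the same ten flips only nudge it to about 0.540. That pull toward what you already believed is exactly the "folding in what you already know" effect described above.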