☛ Bertrand Serlet — ‘Why AI works’

If you take an interest in why, not how, modern large language models work, this is a great 30-minute video lecture from Bertrand Serlet, former Senior Vice President of Software Engineering at Apple. It is a well-crafted, jargon-free talk; at the very least, it’s a lesson in how to communicate a complex, mathematical topic in incremental, bite-size pieces without devolving into multi-layer flowcharts and equations.

I started watching because I remember Serlet from a couple of his hilarious presentations from his Apple days (watch here, then here) poking fun at Microsoft, and I’m glad I didn’t skip this one.

If, after watching the video, you would like to follow up on the topic with some technical reading, I have you covered. Transformers are a type of deep neural network, trained specifically to make predictions from a chain of previous inputs, using the contexts and relationships inherent in that chain.

So, while an image classifier takes in a single image and predicts the probabilities that the image contains specific objects, a transformer takes in a sequence of pieces of information, chained together, and makes a contextual prediction. That prediction extends the input chain: the next logical word, the next logical pixel, the next logical musical note, and so on.
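
If the “predict the next item in the chain” idea feels abstract, here is a minimal sketch of asking a transformer for its most likely next words. It uses the small, freely available GPT-2 model via the Hugging Face transformers library; the prompt and the choice of model are purely illustrative.

```python
# Minimal next-token prediction sketch (GPT-2 via Hugging Face transformers).
# The prompt and model here are illustrative; any causal language model would do.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The quick brown fox"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocabulary_size)

# The last position holds the model's prediction for the *next* token,
# conditioned on the entire input sequence.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: {prob:.3f}")
```

The key point is that the model’s only job is to produce a probability distribution over what comes next; everything else is built on top of that.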

If you want to take a deep dive into building a simple LLM from scratch, you may start with Andrej Karpathy’s tutorial. Karpathy is one of the co-founders of OpenAI (makers of ChatGPT), and this is a very well put-together lecture. He also has an hour-long “busy-person’s intro to LLMs”.

Finally, if you really want to go down the rabbit hole, this paper is what started the transformer revolution: Attention Is All You Need. The View PDF link will take you to the PDF version of the paper. This LitMaps link will show you how influential that paper has been.
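
For reference, the operation that gives the paper its title is scaled dot-product attention. In the paper’s notation, with queries Q, keys K, values V, and key dimension d_k:

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V
```

Each token’s query is compared against every other token’s key, and the resulting weights decide how much of each token’s value flows into the prediction; that is how the model weighs the context around each piece of the input chain.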

But, seriously, forget all of the technical stuff. Go watch Bertrand’s video lecture.

P.S.: I should note, this is not an unconditional endorsement of AI in general and LLMs in particular. These technologies are being used, and may continue to be used, in ways that are unsavory, short-sighted and dangerous. We need to be circumspect and judicious in how we deploy these extremely powerful technologies, so that we don’t incur unacceptable costs in the long term. We should aim to bolster our creative crafts with AI, not foolishly attempt to replace human creativity.


☛ Disposable America — A history of modern capitalism from the perspective of the straw. Seriously.

By Alexis Madrigal for The Atlantic:

The invention of American industrialism, the creation of urban life, changing gender relations, public-health reform, suburbia and its hamburger-loving teens, better living through plastics, and the financialization of the economy: The straw was there for all these things—rolled out of extrusion machines, dispensed, pushed through lids, bent, dropped into the abyss.

You can learn a lot about this country, and the dilemmas of contemporary capitalism, by taking a straw-eyed view.

This is a very well-researched article on the humble drinking straw and how its history tracks the evolution of the American societal outlook. Given how pervasive the drinking straw is in this society, it’s probably a pretty good correlation to draw.

Go read it; it’s quite an interesting, albeit long, read. (I did not know, for example, that the original straw was made from actual straw.)


☛ Indian banks contemplate ‘face reading’ to spot doubtful loan seekers

From the Times of India:

Private banks in the western coastal state [Gujarat] have approached the Gujarat Forensics Science University to prepare a facial micro-expressions manual, to train its employees in recognising doubtful high net-worth customers like fugitive liquor baron Vijay Mallya demanding loans.

This is straight out of the American TV series Lie to Me (IMDb link):

In the show, Dr. Cal Lightman (Tim Roth) and his colleagues in The Lightman Group accept assignments from third parties (commonly local and federal law enforcement), and assist in investigations, reaching the truth through applied psychology: interpreting microexpressions, through the Facial Action Coding System, and body language.

Have the Indian bankers in question seriously been watching too many TV reruns? In the show, the protagonists use micro-expressions to evaluate suspects and their testimony in order to solve crimes. That’s slightly different from the real-world case of deciding whether to give out large loans, no? (For context, India has had a slew of recent large loan frauds.)

I am completely bewildered by this. If there have been large loan frauds, shouldn’t the most important step be a complete overhaul and re-evaluation of how the credit-worthiness of prospective clients is determined? In a financial sense? In a risk-assessment and cost-benefit-analysis sense? In an available-collateral sense? Especially given that investigations into bank employees have been called for, that a bank CEO has allegedly “failed to initiate steps” to prevent the fraud after there were red flags, and that bank officials have been charged?

Do the bankers really believe there is nothing to improve on the financial-evaluation side and on the employee-honesty side? Or is this a case of sticking their heads in the sand and going ‘la-la-la’? Are the bankers too entrenched in their current practices and workflows, unwilling to go through the trouble and expense of actually re-evaluating their own businesses, and looking for guises to exculpate themselves?

I mean, seriously, if the banks want to go for next-generation methods, artificial intelligence and machine learning would be an actual avenue to explore. Examples can be found here and here. There are even courses and available computer code (here and here) to get people started!
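
To make that concrete, here is a toy sketch of what a data-driven credit-risk model can look like, using scikit-learn. The features, labels, and data below are entirely made up for illustration; a real bank would train on its own historical loan records, with far more careful feature engineering and validation.

```python
# Toy sketch of data-driven credit-risk scoring with scikit-learn.
# All features and data here are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000

# Hypothetical features: income, existing debt, collateral value, years in business.
X = rng.normal(size=(n, 4))
# Hypothetical label: 1 = defaulted, 0 = repaid (synthesized for the example).
y = (X[:, 1] - X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]  # predicted probability of default
print("ROC AUC:", round(roc_auc_score(y_test, scores), 3))
```

The point is not this particular model, but that loan decisions can be scored against actual repayment history, rather than against how an applicant’s face twitches.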

Come now, bankers in question: get real, find real solutions to your real problems, and stop with the hand-wavy TV-show inspirations.


☛ UK Election: Interesting logistics of the Queen’s speech

In light of the recent election in the UK, the Queen is, of course, supposed to deliver a speech marking the formation of a government by the party that has won a majority. Now, however, after the surprising results of the election, the Queen’s Speech has been delayed, and the reason for the delay is fascinating.

The Telegraph UK reports:

The Queen’s Speech is going to be delayed because it has to be written on goatskin paper and the ink takes days to dry.

Apparently, the British monarchy cares more than most about the archival quality of the paper it uses.

[…] goatskin paper is not actually made from goatskin.

The material is in fact high-quality archival paper which is guaranteed to last for at least 500 years.

Well, okay, but still, why the delay?

Well, ink on this special paper takes a few days to dry. The monarchy had “ready to go” versions of the speech for (a) a Conservative party majority and (b) a Labour party majority. But the election’s result, a hung parliament, has thrown all pre-made plans into disarray. Since the political parties themselves don’t yet know how the government will be formed, the Queen’s Speech isn’t finalized yet either.

Once the details are set in stone they can be committed to the goatskin paper and sent away for binding before being presented to the Queen.

I love how even the most apparently mundane things become fascinating just by being associated with the British monarchy.


☛ Recent ISRO satellite launch carried special imaging constellation

From the website of the company ‘Planet’, published the same day the ISRO satellites were launched:

Today Planet successfully launched 88 Dove satellites to orbit — the largest satellite constellation ever to reach orbit. This is not just a launch (or a world record, for that matter!); for our team this is a major milestone. With these satellites in orbit, Planet will reach its Mission 1: the ability to image all of Earth’s landmass every day.

This constellation therefore formed the majority (88 of the 104 satellites launched) of the payload carried by the latest ISRO launch. As of this launch, Planet is operating 149 satellites in Earth orbit, which is no mean feat.

Also, an interesting side note: ISRO’s previous largest payload, which I referred to in my last post (20 satellites launched in June 2016), also seems to have been for this same company:

This is our 15th launch of Dove satellites and second aboard India’s PSLV. The launch of Flock 3p comes off the successful launch of Flock 2p on the PSLV in June 2016