Links #12
Randomizing linear algebra, how autonomous vehicles will change the world, and more.
1.
Most of the computational cost of machine learning and AI algorithms comes from numerical linear algebra. Some of these algorithms explicitly involve random numbers, and in practice they’re all subject to numerical errors. In other words, if two different people query ChatGPT with the same prompt, the responses will generally be slightly different.
In the presence of all this noise, what is the point of trying to perform matrix operations exactly? Can’t we settle for operations that are probably, approximately correct in exchange for a computational speedup?
Is the Future of Linear Algebra.. Random? goes over an interesting review paper by heavyweights in the field discussing opportunities to randomize important linear algebraic operations. One thing they don’t cover, though, is randomized matrix multiplication.
Fortunately for us, the first part of this Davis Blalock post covers a hardware implementation of his randomized matrix multiplication algorithm. The headline claim is that the new implementation results in 25x better power efficiency with some loss of accuracy.
I wonder how much modern AI could be improved if these sorts of randomized algorithms were employed from hardware through to deployment.
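To make the trade-off concrete, here is a minimal sketch of one classic flavor of randomized matrix multiplication: norm-proportional column/row sampling. To be clear, this is not the specific algorithm from Blalock’s post, just an illustration of swapping exactness for speed, and the power-law-weighted test matrices are my own made-up example of a regime where sampling pays off.

```python
import numpy as np

def sampled_matmul(A, B, k, rng):
    """Unbiased estimate of A @ B from k sampled column/row pairs.

    Column i of A (with the matching row i of B) is drawn with
    probability proportional to ||A[:, i]|| * ||B[i, :]||, then each
    sampled term is rescaled by 1 / (k * p_i) so the estimate is
    unbiased in expectation.
    """
    norms = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = norms / norms.sum()
    idx = rng.choice(A.shape[1], size=k, p=p)
    return (A[:, idx] / (k * p[idx])) @ B[idx, :]

rng = np.random.default_rng(0)
# Matrices with power-law column/row norms -- a regime where a few
# column/row pairs dominate the product and sampling works well.
w = 1.0 / np.arange(1, 501)
A = rng.standard_normal((100, 500)) * w
B = w[:, None] * rng.standard_normal((500, 80))

exact = A @ B
approx = sampled_matmul(A, B, k=200, rng=rng)
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
```

Only 200 of the 500 inner-dimension terms are touched, yet the relative error stays small because the sampling probabilities concentrate on the heavy columns; that is the “probably, approximately correct” bargain in miniature.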
2.
Eli Dourado asks what the second-order effects of autonomous vehicles (AVs) will be. He notes that car ownership may fall and that people may have little need for a garage, a driveway, or street parking. Improved safety and cheap at-home delivery are other big benefits.
Things I want to add:
It’s hard to overstate how much this would change shipping. Trucks that carry cargo containers could swarm incoming cargo ships and drive day and night across the country, stopping momentarily to pass their cargo on to a freshly charged truck. Individual items could be shipped directly to someone’s home, leading to an internet of physical goods.
Getting rid of parking is a huge benefit, as it takes up a large fraction of valuable land in cities. With AVs, people never need to bother with parking again: their car can drop them off and head to a very dense parking garage nearby.
With parking garages filled with high-tech cars, I wonder if they could be put to use for grid-scale energy storage or computation. Electric AVs are essentially batteries on wheels and could deliver power wherever users need it.
With very safe AVs, cars can drive faster. Perhaps people will eventually be comfortable with their car going over 100 mph on the highway. For higher speeds, we could build specialized roads for AVs, especially for vehicles without any people in them.
Lots of little benefits flow from ubiquitous AVs. For example, AVs can make moving easier and cheaper: just load a few extra cars up with stuff and meet them at your destination, without driving a single mile yourself. Energy becomes one of the only major costs.
If we have cars with cameras, onboard computers and some degree of situational awareness, we might as well add facial recognition to unlock the car for approved users. Cars could also be witnesses to crimes, report accidents, make emergency calls, and automatically pay tolls.
3.
I came across this video of Bret Victor giving an update about his project Dynamicland. If you’re not familiar with his work, he’s got some very cool ideas about tools for thought and human-computer interaction. I highly recommend watching his talk Inventing on Principle.
The whole “tools for thought” community has some captivating ideas, but progress has stalled. Perhaps there just isn’t as much opportunity there as we thought, but there’s a chance that the tools for thought community just came a little early.
Watching Victor’s 2019 talk, I’m struck by how much it needs augmented reality. Replace all the whiteboards and paper and projectors with augmented reality glasses, and suddenly you have a new medium for thought that’s easy to set up anywhere in the world. I think this is the sort of foundation you need to build a JARVIS-like digital assistant with LLMs. People iterate and communicate fastest when they can just do things; conscious vocalization is clunkier than subconscious action. Enabling people to “work with their hands” on a digital medium is key.
Everything Else
Developing Palo Alto is worth $1 trillion makes the argument that homeowners aren’t opposing new development to keep their home prices high: the local gains from development are so large that homeowners could be compensated handsomely enough to buy their votes. NIMBYs probably have other reasons for blocking development. As an aside, I think the example illustrates the value of just moving people to productive places. While we can’t (and shouldn’t) actually go through with this plan, it’s clear that fixing this sort of thing is orders of magnitude more important than a lot of other economic issues.
Yishan estimates that it would take $2 trillion to terraform the Australian outback. I think the same result could be achieved at a lower cost by creating freshwater lakes and letting ecosystems crop up around them.
Solid-state Far-UVC Roadmapping Workshop Report. This is something I really wanted to exist, and now it does! A detailed review on far-UVC lighting, its potential to prevent airborne diseases, and the challenge of scaling it to every building.
A (partially paywalled) post on a new lithography technique known as directed self-assembly, which uses special polymers that self-organize to clean up the lines in photolithography, improving resolution.
The solar industrial revolution is the biggest investment opportunity in history. Casey Handmer is continuing to bang the drum on the solar industrial revolution. Notice the similarities with my previous post on the topic.
the friendship theory of everything points out that who you spend time with shapes who you are. Having close friends live physically close to you is more important than the specific place you live.
Adeno-associated-virus-mediated gene delivery to ovaries restores fertility in congenitally infertile mice. A proof-of-concept gene therapy for infertility that targets cells other than egg cells. The therapy shouldn’t carry over to the offspring, which is nice.
KL is All You Need. “Modern machine learning is a sea of initialisms: VAE, VIB, VDM, BBB, VB, etc. But, the more time I spend working in this field the more I come to appreciate that the core of essentially all modern machine learning methods is a single universal objective: Kullback-Leibler (KL) divergence minimization.”
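The quoted claim is easy to check in the simplest setting: for discrete distributions, the everyday cross-entropy loss is exactly KL divergence plus a constant, so minimizing one minimizes the other. A tiny numpy sketch (the two distributions here are made up for illustration):

```python
import numpy as np

def kl(p, q):
    """KL divergence between two discrete distributions p and q."""
    return float(np.sum(p * np.log(p / q)))

p = np.array([0.7, 0.2, 0.1])  # "data" distribution
q = np.array([0.5, 0.3, 0.2])  # model distribution

cross_entropy = -float(np.sum(p * np.log(q)))
entropy = -float(np.sum(p * np.log(p)))

# Cross-entropy decomposes as H(p) + KL(p || q). Since H(p) is fixed
# by the data, training q with a cross-entropy loss is exactly
# KL(p || q) minimization.
gap = cross_entropy - entropy
```

The `gap` equals `kl(p, q)` up to floating-point error, and it only vanishes when `q` matches `p`, which is the sense in which maximum-likelihood training is KL minimization in disguise.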
Thread and GitHub repo of some guy reimplementing every deep learning advance and discussing what he learned from it.
Manufactured homes never took off because transportation costs would eat up all the cost savings. But with electric self-driving trucks, transportation costs decline rapidly and continue to decline over time. That might finally enable the industrial revolution for construction.