I have some thoughts on AI that don’t each warrant a post of their own, so I’ve collected them here.

Privacy

I think we’re mostly past the point where people would seriously say that they don’t need privacy if they have nothing to hide. But I think there is a core there that many still find persuasive:

It’s okay for my personal information to be collected. Targeted ads aren’t that persuasive really, and nobody with power cares enough about me to trawl through all the information that exists about me. I’m not a terrorist, I like free things, so why should I worry?

I think this is fundamentally flawed. Even if you trust the current establishment to respect your rights1, if you have any2: Governments change, and power will inevitably be abused.

But even if you don’t agree, I want to point out that trawling through all the data collected about everyone is something AI is making economically viable. Also, future AIs could be persuasive at a superhuman level.

Disclosing personal information has always been a vulnerability. The cost of exploiting it is rapidly dropping.

Even if you believe that the surveillance state loves you, AI is also making hacking easier, so you should at least think carefully about protecting yourself from cybercrime, which has a lot of overlap with strengthening your privacy posture.

If you want to take better care of your privacy, this seems like a valuable resource. I think the following are good steps to take, roughly ordered from easiest to hardest:

  • Use an Ad Blocker
  • Use Firefox over Chrome
  • Use a web browser that resists fingerprinting (both on your PC and on mobile)
  • VPN
  • Use an E2E encrypted messenger app with as many contacts as you can persuade (Signal > Telegram > WhatsApp > Facebook Messenger)
  • E2E encrypted email provider2
  • Get off social media as much as you possibly can
  • Privacy-focused, FOSS Phone OS
  • FOSS Home PC OS (you can dual-boot if you need to game, just keep anything other than gaming off Windows)

Vol

Shortening AI timelines are causing a relatively small group of people a relatively large amount of anxiety.

For me, I noticed an instinct to wait a bit, see how things develop, and then reassess what trajectory we’re on. I don’t think, in general, that’s correct. As the rate of technological change accelerates, the amount of uncertainty we have about the future increases too!3

All else being equal, if the rate of technological change keeps rising, you’ll never again be as certain about the answer to the question “What will the world look like in a year?” as you are now.

Note that this is very different to saying that the future is most likely to look like the present. That’s definitely not true.
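
To make that concrete, here is a toy sketch (the random walk, the unit step size, and the exact doubling are illustrative assumptions, not a model of anything real): if the typical size of each day’s change doubles, the error bars on a one-year forecast double with it.

    import numpy as np

    rng = np.random.default_rng(0)
    n_paths = 20_000      # number of simulated futures
    horizon_days = 365    # "What will the world look like in a year?"

    def one_year_spread(daily_step):
        # Model the state of the world as a random walk: each day adds an
        # independent change of typical size `daily_step`. The standard
        # deviation of the one-year total is the width of the error bar.
        steps = rng.normal(0.0, daily_step, size=(n_paths, horizon_days))
        return steps.sum(axis=1).std()

    print(one_year_spread(daily_step=1.0))  # ~19: forecasting from today
    print(one_year_spread(daily_step=2.0))  # ~38: after the daily rate of change doubles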

This implies a strategy of locking in short-term gains and doing things that are likely to be useful in a wide range of different worlds further down the line.

Taste

When coding LLMs first made an appearance and nobody knew how capable they were, people, especially the SV crowd, started talking about this quality called taste. The idea is that anyone can write code, but it takes an expert engineer’s taste to decide how to compose a system. There’s an analogous story for research taste.

While taste is a real thing, there is a spurious implication: taste is supposed to be this ineffable quality that machines will never learn to imitate. Therefore we will always have to keep engineers around to make those tasty decisions for us. Job security guaranteed.

As far as I can tell, this is just cope. I see no reason why a sufficiently powerful AI wouldn’t have, or be able to develop, taste. Don’t be fooled.

Linguistic cooling

This is pretty low on the list of big important issues to care about, but dear to me nonetheless. I would really like for orthography to change in some ways:

though -> tho
neighbour -> naybor
Leicester -> Lester

and many more. While the “correct” spellings are historically interesting to some, and a great way for others to demonstrate a superior education, they are impractical: a waste of time for people, mostly young children, studying the language, a source of confusion for non-natives, and a cause of endless irritation to people with dyslexia. While British English is a particularly egregious case, the problem extends at least to French, Polish, and German4, roughly in that order, and presumably many other languages too.

I don’t think this is something individuals can change. Being judged for your language is very real, and having a consistent spelling does allow for more efficient reading. However, over time I would expect spellings to naturally become more efficient.

There are a few small barriers and one big barrier to this process.

The small barriers are spellcheck and autocorrect. Even in casual texting, if I try to type “tho”, it will warn me in red that that’s not a “real word”, and maybe even change it to “the” or something else that I will have to tediously change manually. Both of these mechanisms, on the margin, discourage the use of certain kinds of casual language in writing. I think at the scale of a language community, this slows down language change (and thus, I claim, language progress).

The big barrier could be generative AI. Beyond just fixing spelling in place, I can imagine a future where whole language patterns ossify into helpful, honest, and harmless corporate-speak. As more and more written text is AI-generated, one possible future has language getting trapped at this point, with humans adopting the mannerisms of AI. In a different one, humans develop their own registers to distinguish real interactions from AI ones. Perhaps we’ll see something different entirely.

Whatever it ends up being, it will be interesting to track. If you have linguistic quirks that you don’t want to accidentally lose, you should pay attention here. For example, by using a system prompt that encourages a style you choose yourself. Or by adding the abbreviations and idiosyncrasies you like to your mobile keyboard dictionary.
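
For the system-prompt route, here is a minimal sketch, assuming the OpenAI Python client; the prompt wording and the model name are placeholders, and any chat API that accepts a system message works the same way.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

    # The wording here (and the model name below) are placeholders; the point
    # is just that you, not the default, decide what style gets reinforced.
    STYLE_PROMPT = (
        "When drafting or editing text for me, keep my voice: informal "
        "spellings like 'tho' are fine, keep my abbreviations, and never "
        "smooth my phrasing into generic corporate-speak."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": STYLE_PROMPT},
            {"role": "user", "content": "Tidy up this draft without changing its tone: ..."},
        ],
    )
    print(response.choices[0].message.content)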


  1. And you probably shouldn’t.

  2. You probably don’t if you’re not a US citizen.

  3. One way to understand why: imagine trying to guess what the weather will be one day from now. You can probably make a reasonable guess, with pretty wide error bars. If you now try to guess the weather two days from now, the error bars will be wider. Now, if we assume that the rate of weather change doubles overnight, the weather will change in one day as much as it previously would have in two. So your new one-day error bars should be as wide as your old two-day error bars.

  4. I feel obliged to link this