Yup. AI kinda sucks. There are many things that AI models are good at, but sometimes they’re just plain bad. They mess up text in images. They hallucinate information out of nowhere. They suggest jumping off a bridge when you feel depressed (please don't actually do that).
These days, it can feel like the hype around AI is nothing more than just that — hype. Companies are pouring billions of dollars a year into building better models, but are any of their wild claims actually going to come true? Is AI really going to transform society as we know it? Or is the bubble about to burst?
Personally, I don’t think anyone really knows. People will try to extrapolate and make predictions. Some of them will be quite sure of themselves. But even the most rational, evidence-based arguments, I think, will succumb to the sobering reality of how unprecedented and unpredictable this all is.
So instead of endlessly quibbling over what AI will or won’t be able to do, perhaps it would be more productive to shift our perspective a bit. After all, I think we can agree that no matter how capable AI becomes, we should want it to be a good thing for society. And that means managing the risks associated with it, ensuring that it is safe, and ensuring that it is aligned with what humans care about. It’s not as sexy as trying to make AI superhuman, but it is nevertheless important work that must go hand in hand with the development of more powerful AIs.
Companies such as OpenAI have made some progress on this front; for example, using a technique called RLHF (reinforcement learning from human feedback) to align their models with human preferences and keep them from generating dangerous or inappropriate content. However, these safeguards can be jailbroken, and the technique itself can introduce new problems such as sycophancy. That's not to mention the countless other issues: deepfakes, misinformation, discrimination, copyright infringement, data poisoning, the list goes on.
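If you're curious what "aligning with human preferences" actually looks like under the hood, here's a rough sketch of the reward-modeling step at the heart of RLHF. To be clear, this is a toy PyTorch example with made-up data, not anyone's production code: a small model learns to give higher scores to responses that humans preferred over responses they rejected.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy reward model: maps a response "embedding" to a single scalar score.
class RewardModel(nn.Module):
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in data: "embeddings" of responses humans preferred vs. rejected.
# (In real RLHF these would come from a language model and human labelers.)
preferred = torch.randn(64, 16)
rejected = torch.randn(64, 16)

for step in range(200):
    # Bradley-Terry-style loss: the preferred response should score higher.
    loss = -F.logsigmoid(model(preferred) - model(rejected)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The trained reward model is then used as the training signal for the language model itself (that's the "RL" part of RLHF), and you can already see how things might go sideways: the model learns to maximize whatever humans happened to reward, which is not always the same as what's true or good for them. Hence, sycophancy.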
Now, if you have any confidence at all that AI will reach or surpass human abilities, then you’ll also have to consider a slew of more serious risks: AI displacing the labor force, escalating wars, facilitating bioterrorism, enabling authoritarian regimes, taking over human society, or completely wiping us out. As crazy as that might sound, hundreds of leading AI experts are already worried that AI could become an extinction risk on the same level as pandemics or nuclear war.
Sure, they might be wrong. But still, it's better to be safe than sorry. And the best way to be safe is to put in the work (and the resources) to tackle these challenges in AI safety. Of course, it won't be easy, especially if we are to ensure that AI is aligned with our intrinsic human values (not just our surface-level instructions & preferences).
It will take a lot of work and a lot of research, but the fact of the matter is that we just aren't doing enough of it right now. There's a lot of talking but not a lot of walking, especially by major players such as OpenAI. As of April 2024, AI safety research still accounted for only 2% of all AI research. And according to a report published in June 2024, only $1 is spent on AI safety for every $250 spent on making AI more powerful.
The truth is, you don’t have to buy into the AI hype to recognize that AI safety just isn’t getting the attention that it deserves. And you certainly don’t have to be working on AI to care about what this means (case in point, I’m an aerospace engineering student). Whether you believe that AI will revolutionize our lives, or only somewhat affect them, you should care that it does so for the better.
That doesn't mean you have to work on AI safety, but at the very least, you should stay informed and understand the risks. There are plenty of cool (and free!) resources available online; for example, aisafety.com, or this intro video by Robert Miles. Or, if you have the time, you could try this AI Safety Fundamentals course that I took over the summer. Whatever it is that you do, just remember: if we want AI to be a good thing for all of us, then it's up to all of us to hold governments & companies accountable, to advocate for AI safety, and to make sure that AI is, on the whole, a force for good.
P.S. I have a confession to make. Remember that DALL-E-generated penguin image I used to show that AI messes up text in images? Well, DALL-E actually didn't mess up. I tried several times to get it to do so, but it kept giving me the right text ("AI KINDA SUCKS"). So I gave up and told it to generate an image that said "AI KINDA SUCKKS", which it did correctly, first try. A year ago, it almost certainly would've messed up. But now, it works flawlessly. Take from that what you will.