When technology becomes indistinguishable from magic, it’s very easy to get excited, carried away, or even irrational.
Why irrational, you might ask? Because when expectations of a technology or system extend well beyond what is known to be feasible, that is irrational behaviour.
Sadly, as a result of completely unjustified expectations, financing, hype and marketing, that’s precisely what is happening across the globe following the recent developments in AI. The irrationality to:
- Expect AI systems to completely fulfil your obligations at work, school etc. The expectation that this technology will make you “look good” at whatever you’re doing, without you doing or knowing anything.
- Expect AI systems to replace you, i.e. render you redundant, and become defensive about it and hostile to the technology itself. Yes, AI systems will almost certainly make a very significant percentage of the population redundant, not because of the systems’ sheer skill, but due to massive productivity gains that will enable employers to perform the same work with far fewer people. But it won’t be, at least not in the short term, because AI systems can fully replace humans across the wide range of skills the latter possess.
- Expect AGI systems to appear within months. LLMs may have, unexpectedly, demonstrated some basic emergent reasoning capabilities, but that is very different from the level of reasoning necessary for AGI. They also lack fundamental mechanisms for learning. Even if AGI eventually incorporates LLMs in some way, it will almost certainly require additional, very substantial technology we haven’t invented yet.
- Consider recent developments in AI technology as just ‘hype’ and ignore or downplay them.
People will always use technology badly. Yet the frauds, the bozos, the fearmongering doomers (or, equally, the self-serving tech-bro optimists, a.k.a. the ‘accels’), those with vested interests or simply hardened ideology, do not and should not single-handedly define or steer our use of technology.
There are huge risks to society from the use of AI technology as it exists today. Societal risks, not existential risks (a.k.a. Superintelligence etc.). The problem isn’t the technology itself; it’s the humans, and the groups and power structures, that make use of it. The politicians, the entrepreneurs, the financiers. There has been exceptionally little mention of these immediate risks in the West, and I bet even less in the developing world. There seem to be no plans, no discussions, no strategy. A very large percentage of the population in the developed world is expected to be affected, about 60% according to the World Bank, yet few governments seem even interested in addressing this, at least not publicly. Young adults graduating from university will increasingly find it difficult to get a job and to learn their craft, science or trade. Increased dependence on AI technology will erode human knowledge, as people will increasingly expect ‘immediate solutions and answers’ to their problems rather than put in the time to learn, understand and synthesise new knowledge. Education will be severely affected (it already is!).
All of those risks are addressable. Some more easily than others, perhaps, but they can all be addressed by the institutions, processes and structures we already have in place. But they aren’t.
On the other hand, dismissing or banning AI would be a huge mistake. First off, you simply cannot do it. You have to assume that AI technology is here, that it will evolve, and that humanity has to evolve with it, even if you think nothing good can come of it.
But AI also brings with it massive opportunities for good. In software engineering, the field most exposed to AI chiefly due to its proximity to it, chances are that you can save up to 80% of your day-to-day workload with this technology today. This translates into astounding productivity gains: think of applications being implemented and released in days rather than weeks or months, by half the people it would previously take. Mind you, you still have to do the remaining 20%+, as much as many may be tempted not to. And in some areas of software engineering, such as algorithm design, research, systems programming, and high-performance scientific or low-level programming, existing AI tools will not be able to help you as much. But they can certainly help, and they will do so if you treat them as tools.
Yet even when people do use these tools, I find they are often too lazy to put in that 20%, to keep their brains engaged and reap the enormous benefits AI is bringing to their lives. This results in mediocre output, disappointment and complaints. At the other extreme, many, perhaps after seeing those mediocre results, or after trying the tools with outsized expectations, end up dismissing the technology entirely.
Yet as the technology progresses, AI will gradually encroach on every other field where human knowledge and expertise are valued. I doubt anyone will be able to avoid interacting with such systems in the coming 2-3 years, in both personal and professional settings, whether they like it or not.
And while the benefits are there, we don’t hear much about them. When we do, ironically, they are largely touted either by people not qualified to speak about them or by those who use the technology and its potential benefits for humanity as punchlines: the frauds, the bozos, the doomers, the ‘accels’.
Don’t let them define what AI is. AI can be used for good. Whether it does is (still) up to us.