The idea is that you can imagine a machine that manipulates atoms and molecules at the molecular level. Since everything around us really is just an arrangement of molecules, a sufficiently advanced machine would be able to rearrange those molecules into… well, any shape you desire. The raw materials for most of the things around us aren’t that complicated, atomically – you could buy the trace metals as powders, for example, or find many of them in dirt or other items. Feed the supply into the nanofabricator, it pulls apart the input materials, processes them, and spits out whatever you want. All you need is those raw materials, the energy to power the machine, and a blueprint of whatever you need to manufacture.
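To make that accounting concrete, here’s a purely hypothetical toy model – every name and number below is mine, not from any real system. Treat a blueprint as a tally of atoms, and fabrication as possible whenever the feedstock covers that tally:

```python
# Toy model of the nanofabricator idea: a "blueprint" is just a tally of
# atoms, and fabrication is possible when the raw-material feedstock
# contains enough of every element. Entirely hypothetical, for illustration.
from collections import Counter

def can_fabricate(blueprint: Counter, feedstock: Counter) -> bool:
    """True if the feedstock has at least as many atoms of each element
    as the blueprint requires (missing elements count as zero)."""
    return all(feedstock[element] >= count
               for element, count in blueprint.items())

# A made-up blueprint: a thousand water molecules.
water = Counter({"H": 2, "O": 1})
blueprint = Counter({element: count * 1000 for element, count in water.items()})

feedstock = Counter({"H": 5000, "O": 800, "Fe": 100})
print(can_fabricate(blueprint, feedstock))  # False – not enough oxygen
```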
It may well be that making such a machine is impossible, or centuries away. We can manipulate individual molecules now, but it’s very slow and on a very small scale – no nanofabricator could be built today. But you can imagine similar technology being used by drug-design companies to synthesise new molecules that might be useful as active ingredients in drugs. And, of course, if it were invented, things would change hugely. Just as the internet liberated information, making it widely accessible, so you might imagine that nanofabricators – for the lucky few who own them – would make physical objects, things, stuff, accessible to many. I guess the only valuable commodities would be energy, raw materials, blueprints for the nanofabricator, and the Santa Claus machines themselves. (Unless they can self-replicate – then you can imagine all kinds of doomsday scenarios where they go out of control!)
Even if something specifically like a nanofabricator can’t be made, we can hope that new advances in 3D printing (they can even 3D print houses these days…) are pointing the way towards manufacturing processes that are smarter and more versatile than ever before. If physical objects suddenly become next to worthless – if we can attain true abundance in society – then many of the social structures around us no longer need to exist. But people have been dreaming of this for a long time (that machines would replace us and leave us all free to live lives of leisure), and it hasn’t happened yet.
One of the main developments has been how much better we are at detecting exoplanets – planets orbiting stars other than our own. We can now work out things about their atmospheric composition, to try to determine how likely they are to host life. I discuss the many problems, questions and concerns with detecting alien life – which should, by many people’s reckoning, be all around us – in the Fermi and Drake episode of my podcast.
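As an aside, one of the workhorse detection techniques – the transit method, which I haven’t named above – watches for the tiny dip in a star’s brightness when a planet crosses in front of it. The dip is roughly the square of the planet-to-star radius ratio, which a quick sketch with textbook radii makes vivid:

```python
# Rough transit-depth estimate: when a planet passes in front of its star,
# the fractional dip in brightness is about (planet radius / star radius)**2.
# Radii below are approximate textbook values in kilometres.
SUN_RADIUS_KM = 696_000
EARTH_RADIUS_KM = 6_371
JUPITER_RADIUS_KM = 69_911

def transit_depth(planet_radius_km: float, star_radius_km: float) -> float:
    """Fraction of starlight blocked during a central transit."""
    return (planet_radius_km / star_radius_km) ** 2

print(f"Earth-like:   {transit_depth(EARTH_RADIUS_KM, SUN_RADIUS_KM):.6%}")    # ~0.0084%
print(f"Jupiter-like: {transit_depth(JUPITER_RADIUS_KM, SUN_RADIUS_KM):.4%}")  # ~1.01%
```

That hundredth-of-a-percent dip for an Earth-like planet is why these detections are so hard, and why the improvement in our instruments matters so much.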
It’s a little tricky for us to define superintelligence because we don’t really have a great definition of intelligence. Your calculator can do arithmetic many millions of times faster than you – it can perform in minutes calculations that would take you years. In many ways, it’s already superhuman. But you wouldn’t call your calculator superintelligent, because it can’t do anything else.
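To put a rough number on that “millions of times faster” claim (the million-operation figure below is arbitrary, just an illustration): even slow, interpreted Python on an ordinary laptop grinds through arithmetic at a rate no human could approach.

```python
# Crude illustration of the speed gap: time a million multiply-adds.
# A practised human might manage roughly one of these per second; even a
# slow interpreted language does millions per second.
import time

start = time.perf_counter()
total = 0
for i in range(1, 1_000_001):
    total += i * i  # one multiply and one add per iteration
elapsed = time.perf_counter() - start

print(f"1,000,000 multiply-adds in {elapsed:.3f} s "
      f"(~{1_000_000 / elapsed:,.0f} per second)")
```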
Equally, I think we judge things on human terms. When you try to picture talking to a superintelligent AI, you imagine talking to some other incredibly smart person – Einstein, da Vinci, anyone you like. (This is another bias in our assessment of intelligence – or maybe it’s just me – that we think of the smartest people as scientists by default!) But the reality is that talking to a superintelligence isn’t like talking to another human. There’s a spectrum of intelligence for humans, and we imagine superintelligence as sitting at the top end of that spectrum. In reality, it could be thousands, millions, billions of times more intelligent than that. Talking to a superintelligence is less like you or me talking to Stephen Hawking, and more like an ant trying to talk to a human.
The technological singularity is the hypothetical point at which new technologies emerge so quickly that we can no longer predict or keep up with the change. People like Ray Kurzweil link it to the ability to modify our own genomes, and maybe even merge with computers using brain-computer interfaces. It’s also sometimes used as shorthand for an “intelligence explosion.”
Let’s step back a minute. Imagine I develop the world’s first AI by simulating the human brain. It might only be as smart as a human, but on good enough hardware it could run a hundred times faster than a biological brain.
Then, that AI is like a hundred humans working at the same time, 24 hours a day, with absolute focus – and a single mind – on a single topic. If it begins to improve itself, maybe it can make a few software upgrades, get some better hardware in, and it’s like a thousand humans working together. Maybe soon that’s a million. Eventually, through this self-improvement, the AI can become superintelligent.
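As a toy model of that feedback loop – the tenfold gain per upgrade cycle is an invented parameter, chosen only to match the 100 → 1,000 progression above:

```python
# Toy model of recursive self-improvement: start at 100 human-equivalents
# and assume (purely for illustration) that each upgrade cycle multiplies
# the effective speed tenfold. The growth factor is a made-up parameter.
speedup = 100            # initial advantage: 100x a human brain
GROWTH_PER_CYCLE = 10    # assumed gain from each round of self-improvement

for cycle in range(1, 5):
    speedup *= GROWTH_PER_CYCLE
    print(f"After upgrade cycle {cycle}: ~{speedup:,} human-equivalents")
# Four cycles take it from 100 to 1,000,000 human-equivalents – the whole
# argument rests on whether each cycle really can keep paying off like this.
```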
People are concerned about what would happen if such an AI were to exist – mainly that it might not share our goals. Maybe the AI sees humans as a distraction, or even a threat (we might still have the power to turn it off!).
That said, you asked how soon to expect the Singularity. I can’t really answer that question. Some people will give you an exact date – I think Kurzweil usually says around 2045, which seems *incredibly* close by these standards. Maybe by the end of the century we will have human-level AI, which I think would likely trigger something like a Singularity. But we don’t even know if it’s possible. Exponential growth in technology doesn’t usually continue forever, just as exponential growth in population doesn’t continue forever. The populations in Europe were growing exponentially once – now they’re in decline. So I don’t think you can just extrapolate from where we are today and imagine the sci-fi future of your dreams coming to pass – and we certainly can’t put a date on it!
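To see why extrapolation is risky, compare pure exponential growth with logistic growth, the kind populations actually follow (all parameters below are invented for illustration). The two curves are nearly indistinguishable early on – and then the logistic one flattens out:

```python
# Exponential vs logistic growth: indistinguishable at first, then the
# logistic curve saturates at its ceiling. All numbers invented.
import math

CAPACITY = 1000.0   # the ceiling the logistic curve approaches
RATE = 0.5          # shared growth rate
X0 = 1.0            # shared starting value

for t in range(0, 21, 4):
    exponential = X0 * math.exp(RATE * t)
    logistic = CAPACITY / (1 + ((CAPACITY - X0) / X0) * math.exp(-RATE * t))
    print(f"t={t:2d}  exponential={exponential:10.1f}  logistic={logistic:8.1f}")
# The curves track each other early on; by t=20 the exponential is past
# 22,000 while the logistic has levelled off near its ceiling of 1000.
# If you only ever see the early data, you can't tell which world you're in.
```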
But the AI problem is fascinating to think about and many respected scholars do spend a lot of time thinking about it. You can listen to my episodes on the Singularity here and here, and I’ve also interviewed a couple of people about it.