Cave of Plato goes caving with physicist Thomas Hornigold in search of answers to topics ranging from Santa Claus Machines to the Singularity.
What do you find most intriguing about physics, and how does it help you explore the larger questions of life?
The most interesting thing for me about physics is how we can infer so much information from such seemingly small amounts of data. Take astrophysics as an example – all we really have is the light that falls on our little planet, Earth, with occasional extra pieces of information arising from cosmic rays or the debris that lands here. But from that, we’ve been able to map out the Galaxy – and the Universe to a certain extent. We’ve been able to identify thousands of galaxies and even guess at the large-scale structure of the Universe itself. We’ve been able to figure out how stars work with the physics that we understand on Earth, and match a lot of the predictions we can make to the things that we see in outer space. There’s so much left that we don’t understand, but I still find it truly astonishing that we’re able to understand anything at all.

Please suggest three interesting books or films that blend concepts in physics with mystery and thrills.
In science fiction, I think Blood Music by Greg Bear is an excellent book. It’s an examination of what would occur in a future where nanotechnology runs rampant and gradually takes over the world – but it combines this dystopian scenario from science fiction with a healthy dose of philosophy as well, so that comes highly recommended.
Life 3.0 by Max Tegmark is principally a book about artificial intelligence, so it relates to topics like the Singularity, but there’s actually a lot of physics in there as well – and he wraps it up in a pretty fun storyline in places that I enjoyed reading through.
On the visual side, you can do a lot worse than the TV series Cosmos by Carl Sagan for an understanding of outer space; most of it isn’t out of date yet.
And, despite what some people say, most of the physicists I know really enjoyed Interstellar. Sometimes suspension of disbelief is a good thing!

Tell us about the Santa Claus Machine, or the nanofabricator. If invented, how will it rewrite the future of our civilization?
So the nanofabricator is really just a metaphor, in a lot of ways, for the potential people dream about in molecular manufacturing. It’s called the Santa Claus machine partially as a joke – people say that it can give you more or less everything you want, and so it seems too good to be true. My editors and I over at Singularity Hub thought the Santa Claus machine would make a good article for Christmas Day.

The idea is that you can imagine a machine that would manipulate individual atoms and molecules. Since everything around us really is just an arrangement of molecules, a sufficiently advanced machine would be able to rearrange those molecules into… well, any shape you desire. The raw materials for most of the things around us aren’t that complicated, atomically – you could buy the trace metals as powders, for example, or find many of them in dirt or other items. Feed the supply into the nanofabricator; it pulls apart the input materials, processes them, and spits out what you want it to spit out. All you need is those raw materials, the energy to power the machine, and a blueprint of whatever you need to manufacture.
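To make those inputs and outputs concrete, here is a purely hypothetical sketch in Python of the nanofabricator as a bookkeeping function over raw materials, energy, and a blueprint. Every name and number in it is invented for illustration; it encodes only the constraint described above – the machine can’t output atoms it wasn’t fed.

```python
# Purely hypothetical sketch: a nanofabricator as a mass-bookkeeping function.
# The machine can only rearrange atoms it was fed, so fabrication fails if the
# blueprint calls for more of any element than the feedstock supplies.

from collections import Counter

def fabricate(feedstock: Counter, blueprint: Counter,
              energy_joules: float, energy_needed: float) -> Counter:
    """Return the leftover feedstock after building the blueprinted object."""
    if energy_joules < energy_needed:
        raise ValueError("not enough energy to drive the assembly")
    for element, atoms_needed in blueprint.items():
        if feedstock[element] < atoms_needed:
            raise ValueError(f"feedstock short on {element}")
    return feedstock - blueprint  # atoms are conserved: output + leftovers = input

# Example: 'dirt' rich in iron, silicon and oxygen; blueprint for an iron part.
leftovers = fabricate(
    feedstock=Counter({"Fe": 1_000, "Si": 500, "O": 2_000}),
    blueprint=Counter({"Fe": 600, "O": 100}),
    energy_joules=5e6, energy_needed=1e6,
)
print(leftovers)  # Counter({'O': 1900, 'Si': 500, 'Fe': 400})
```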

It may well be that making such a machine is impossible, or centuries away. We can manipulate individual molecules now, but only very slowly and on a very small scale – no nanofabricator could be built today. But you can imagine similar technology being used by drug design companies to synthesise new molecules that might be useful as active ingredients in drugs. And, of course, if it were invented, things would change hugely. Just as the internet liberated information, making it widely accessible, so you might imagine that nanofabricators – for the lucky few who own them – would make physical objects, things, stuff, just as accessible. I guess the only valuable commodities would be energy, raw materials, blueprints for the nanofabricator, and the Santa Claus machines themselves. (Unless they can self-replicate – then you can imagine all kinds of doomsday scenarios where they go out of control!)

Even if something specifically like a nanofabricator can’t be made, we can hope that new advances in 3D printing (they can even 3D print houses these days…) are showing the way towards manufacturing processes that are smarter and more versatile than ever before. If physical objects suddenly become next to worthless – if we can attain true abundance in society – then many of the social structures around us no longer need to exist. But people have been dreaming of this for a long time (that machines would replace us and leave us all free to live lives of leisure), and it hasn’t happened yet.

What latest, groundbreaking research is being done by physicists to look for extraterrestrial life, and what about research on parallel worlds, the multiverse, etc.?
I would say that the main research is being done in the search for extraterrestrial life. Parallel worlds and the multiverse are very interesting theoretically, but we don’t really have any idea how to do experiments that would confirm or rule out their existence – and, at any rate, the parallel worlds we’re talking about aren’t the kind where you can just hop into the neighbouring dimension and see a whole different world. On the other hand, there are lots of ways to look for extraterrestrial life that we can try right now – mostly by scanning likely regions of space, where there are lots of stars, with radio telescopes, listening for their signals.

One of the main developments has been how much better we are at detecting exoplanets – planets that orbit stars in other solar systems. We can now work out things about their atmospheric composition, to try to determine whether they’re likely to host life or not. I discuss the many problems, questions and concerns with detecting alien life – which should, by many people’s reckoning, be all around us – in the Fermi and Drake episode of my podcast.
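For context, the Drake equation that episode is named after is just a product of seven factors – the rate of star formation, the fraction of stars with planets, the fraction of planets that develop life, and so on, down to the lifetime of a signalling civilisation. A short sketch with illustrative (not measured) values shows how wildly the answer swings with the guesses you feed in:

```python
# The Drake equation: N = R* * fp * ne * fl * fi * fc * L.
# The parameter values below are illustrative guesses, not measurements.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Estimated number of detectable civilisations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

optimistic  = drake(R_star=3, f_p=1.0, n_e=0.5, f_l=1.0, f_i=0.5, f_c=0.5, L=1e6)
pessimistic = drake(R_star=1, f_p=0.5, n_e=0.1, f_l=0.01, f_i=0.01, f_c=0.1, L=1e3)
print(optimistic, pessimistic)  # ~375000 vs ~0.0005: the guesses dominate
```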

What is superintelligence? Are natural and artificial intelligence (and possibly extraterrestrial intelligence) part of superintelligence?
People use “superintelligence” in a lot of different ways, but when I use it, I’m referring to an intelligence that’s far, far greater than that of humans – and specifically to the idea that we could create an artificial intelligence that becomes superintelligent.

It’s a little tricky for us to define superintelligence because we don’t really have a great definition of intelligence. Your calculator can do arithmetic many millions of times faster than you – it can perform, in minutes, calculations that would take you years. In many ways, it’s already superhuman. But you wouldn’t call your calculator superintelligent, because it can’t do anything else.

Equally, I think we judge things on human terms. When you try to picture talking to a superintelligent AI, you imagine that you’re talking to some other incredibly smart person – Einstein, da Vinci, anyone you like. (This is another bias in our assessment of intelligence – or maybe it’s just me – we usually think of the smartest people as scientists by default!) But the reality is that talking to a superintelligence isn’t like talking to another human. There’s a spectrum of intelligence for humans, and we imagine superintelligence as sitting at the top end of that spectrum. In reality, it would be thousands, millions, billions of times more intelligent than that. Talking to a superintelligence is less like you or me talking to Stephen Hawking, and more like an ant trying to talk to a human.

Simplify ‘Singularity’ for us. How soon are we likely to experience a singularity and is this going to be a threat to our survival?
So a singularity in physics is a region of space, or perhaps a point, with infinite density. In this context, we’re talking about something like an intelligence singularity – an intelligence explosion that creates a superintelligence that’s arbitrarily clever. Lots of people, including Ray Kurzweil, who has a lot to answer for in popularising the idea, refer to it as a technological singularity. He points out that the rate of technological development just keeps accelerating, in an exponential fashion – a classic example is Moore’s Law, where the processing power of our best computer chips doubles every eighteen months or so.
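To see what that doubling claim implies, here is the arithmetic in a few lines of Python – a minimal sketch, with the eighteen-month doubling period taken from above and an arbitrary starting value:

```python
# Moore's-Law-style growth: processing power doubles every 18 months.
# The starting value p0 is arbitrary; only the growth rate matters here.

def power_after(years: float, doubling_months: float = 18, p0: float = 1.0) -> float:
    return p0 * 2 ** (years * 12 / doubling_months)

for years in (3, 9, 15):
    print(years, "years ->", power_after(years), "x")
# 3 years -> 4x, 9 years -> 64x, 15 years -> 1024x
```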

The technological singularity happens when new technologies emerge very, very quickly. People like Ray Kurzweil link it to the ability to modify our own genomes, and maybe even merge with computers using brain-computer interfaces. It’s also sometimes used as shorthand for an “intelligence explosion.”

Let’s step back a minute. Imagine I develop the world’s first AI by simulating the human brain. It might only be as smart as a human, but I can put it on really good hardware and run it a hundred times faster than a biological brain.

Then, that AI is like a hundred humans working at the same time, 24 hours a day, with absolute focus – and a single mind – on a single topic. If it begins to improve itself, maybe it can make a few software upgrades, get some better hardware in, and it’s like a thousand humans working together. Maybe soon that’s a million. Eventually, through this self-improvement, the AI can become superintelligent.
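Here’s a toy model of that feedback loop, just to make the arithmetic concrete – every number in it is invented for illustration, since no such AI exists:

```python
# Toy model of recursive self-improvement: the AI starts at 100x human speed
# and each self-upgrade multiplies its effective speed. All numbers invented.

speed = 100.0            # "like a hundred humans working at once"
gain_per_upgrade = 10.0  # assume each round of improvements is a 10x multiplier

for upgrade in range(1, 5):
    speed *= gain_per_upgrade
    print(f"after upgrade {upgrade}: ~{speed:,.0f} human-equivalents")
# after upgrade 1: ~1,000 ... after upgrade 4: ~1,000,000
```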

People are concerned about what would happen if such an AI were to exist. They’re mainly concerned that it might not have the same goals as us. Maybe the AI sees humans as a distraction, or even a threat (we might still have the power to turn it off!).

Perhaps we try to use the AI for some goal, but it doesn’t quite understand what we mean: say people order the AI to make them rich, and it works out that the best way to get rich is to establish a totalitarian dictatorship over the entire world and put everyone to work. If it’s superintelligent, it might be able to engineer this. Then we have a pretty terrible outcome. Or maybe competing AIs get involved in a destructive arms race that results in a colossal war. The really scary thing about the Singularity, I think, is that we just have no idea what would happen – because these AIs haven’t been invented yet, we can’t know how they’d respond. Lots of scenarios are very bleak.

That said, you asked how soon to expect the Singularity. I can’t really answer that question. Some people will give you an exact date – I think Kurzweil usually says around 2040, which seems *incredibly* close by any standard. Maybe by the end of the century we will have human-level AI, which I think would likely trigger something like a Singularity in many scenarios. But we don’t even know if it’s possible. Exponential growth in technology doesn’t usually continue forever, just as exponential growth in population doesn’t. The populations of Europe were growing exponentially once – now they’re in decline. So I don’t think you can just extrapolate from where we are today and imagine the sci-fi future of your dreams coming to pass – and we certainly can’t put a date on it!
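That last point is easy to see numerically: an exponential curve and a logistic (capped) curve are nearly indistinguishable early on, which is exactly why extrapolating from early data misleads. A small sketch with invented parameters:

```python
# Exponential vs. logistic growth: indistinguishable early on, wildly
# different later - which is why naive extrapolation fails. Invented params.
import math

def exponential(t, x0=1.0, r=0.5):
    return x0 * math.exp(r * t)

def logistic(t, x0=1.0, r=0.5, K=100.0):  # K = carrying capacity
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

for t in (1, 5, 15, 30):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
# t=1: 1.6 vs 1.6 (identical); t=30: ~3.3 million vs 100.0 (capped)
```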

But the AI problem is fascinating to think about and many respected scholars do spend a lot of time thinking about it. You can listen to my episodes on the Singularity here and here, and I’ve also interviewed a couple of people about it.