8 July 2020

9 Terrifying Technologies That Will Shape Your Future

Luca Rossi

Since the first Industrial Revolution, mankind has been scared of future technologies. People were afraid of electricity. People were afraid of trains and cars. But it always took just one or two generations to get completely used to these innovations.

It’s true that most technologies caused harm in some ways, but the net outcome was usually good. This may be true for future technologies too, although there are serious ethical and philosophical reasons to be scared of some of them.

Some of them shouldn't really scare us. Some of them should. And some of them are already shaping our world.

Before we begin, I have to warn you: some of the things you will read in this story can be VERY controversial. I need you to approach this story with a very open mind, and acknowledge that the ideas I present here are just that, ideas.

I hold no extreme or fixed views, nor do I claim to have the exact answers to ethical and philosophical questions. You may have completely different ideas, and that’s totally fine.


1. Cryonics

Cryonics may seem very sci-fi (to be fair, everything in this story does), but it already exists. There are companies that will freeze you as soon as you die, so you can be brought back to life when technology and medicine are advanced enough.

Seriously, companies like this exist right now (I’m NOT affiliated with them). You can buy your immortality today if you want.

Here’s a list of some celebrities who were frozen (James Bedford, Ted Williams, John Henry Williams, Dick Clair Jones, FM-2030), and some who are currently alive and want to be frozen (Seth MacFarlane, Larry King, Simon Cowell, Paris Hilton, Britney Spears).

It’s unclear when and whether these people will be unfrozen. The technology to freeze people is not perfect yet, and it can cause irreversible damage, but it’s improving every year. On the other hand, the technology to unfreeze people doesn’t exist yet.
Why Does It Scare Us

People who have chosen to freeze themselves carry a social stigma. Nowadays, the idea of freezing our dead bodies seems very Frankenstein-y to most people.

People who choose to freeze themselves are often seen as deluded cowards. They won’t accept death. They want to cheat the fact that God, or nature, or the universe, wanted them dead.
Should We Be Scared?

Not really.

Do you know what else cheats death? CPR. The letter R literally stands for Resuscitation.

Do you know who else deludes themselves because they won’t accept death? Come on, answer it yourself. I don’t want to offend anyone.

Short answer: no, we should not be scared. At least not for the reasons mentioned above.

I think we should do anything possible to avoid death, period. Unless you are in a persistent vegetative state and have very good reasons to prefer death over useless suffering (even then, I would argue that future technologies could fix your condition), there is no reason why we should accept death. Life is fucking beautiful, I don’t want it to end.

Once you are already dead, what else do you have to lose? Maybe cryonics won’t work. Maybe people who are currently frozen have been irreversibly damaged. But they made a no-lose bet. They could either choose to die for sure, or give themselves a small chance to survive. What do you think is the obvious choice?

There is a serious problem, though. Hibernation is not cheap. When cryonics becomes socially acceptable, more and more rich people will freeze themselves. Then it will be the middle class’s turn. And what about low-income people?

We are used to saying that, once we are dead, we are all the same, rich and poor. In the near future, this may not be true anymore, because only poor people will die.

Nowadays, death is normal. Many of us have experienced the loss of a loved one, but we knew that they had to go eventually. But what will happen when death is no longer normal? When it is avoidable? The suffering will be much greater, and the idea that we will have a class of immortals and a class of mortals is seriously fucked up.

2. Personal Decision-Makers

Do you ever sit in your room, pondering a decision for hours? Do you ever regret decisions? Do you ever make a good decision but you don’t enjoy it because you think of what you have lost by not choosing the alternatives?

I know I do, all the time. But that will end soon.

Google and Facebook know you better than you know yourself. If optimal decisions are those that act in your best interest, isn’t it obvious that these companies will one day make decisions for you?

You won’t need to make decisions anymore: AI will make every decision for you, like which job to take, which person to date, what to eat for lunch.
Why Does It Scare Us

Having to make decisions may be difficult and frustrating, but it’s what makes us feel in control. It’s that thing that gives us whatever we call “free will”.

Society isn’t ready yet to stop believing in the existence of free will, although this is slowly changing. But even for those of us who don’t believe in free will, losing the ability to make decisions with our brains could be very demeaning. Especially if big corporations are those in control of decision-making algorithms.
Should We Be Scared?

Not really.

I predict that we will learn to surrender our decision-making ability very soon. We are already doing it. We don’t decide which series to watch, Netflix does. We don't decide which music to listen to, Spotify does. We don’t decide which book to read, Amazon does.

They make you feel in control of your decisions, because ultimately you are the one who clicks the play button on the first episode of Stranger Things. But if the recommendation put it in front of you, was it really your decision?

We don’t realize it, but corporations already drive our decision-making process with recommender systems. Yet we are often satisfied, because it works. AI is better than us at deciding what we like. Not making a decision directly doesn’t deprive us of the satisfaction that it brings us. If anything, we will enjoy it more, because we can’t regret decisions we never made.
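To make the idea concrete, here is a minimal, hypothetical sketch of the kind of logic a recommender system runs on (the users, titles, and ratings below are all invented for illustration; real systems like Netflix’s are vastly more sophisticated):

```python
from math import sqrt

# Toy data: user -> {title: rating}. Entirely made up for this example.
ratings = {
    "ana":  {"Stranger Things": 5, "Dark": 4, "The Office": 1},
    "ben":  {"Stranger Things": 4, "Dark": 5, "Black Mirror": 4},
    "carl": {"The Office": 5, "Parks and Rec": 4, "Dark": 1},
}

def cosine(a, b):
    """Cosine similarity between two users, over the titles both rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[t] * b[t] for t in shared)
    na = sqrt(sum(a[t] ** 2 for t in shared))
    nb = sqrt(sum(b[t] ** 2 for t in shared))
    return dot / (na * nb)

def recommend(user, k=1):
    """Suggest titles the most similar user liked but `user` hasn't seen."""
    me = ratings[user]
    others = sorted(
        (u for u in ratings if u != user),
        key=lambda u: cosine(me, ratings[u]),
        reverse=True,
    )
    nearest = ratings[others[0]]
    unseen = {t: s for t, s in nearest.items() if t not in me}
    return sorted(unseen, key=unseen.get, reverse=True)[:k]

print(recommend("ana"))  # ['Black Mirror'] -- ben is ana's closest taste-twin
```

The point of the sketch: you never asked for Black Mirror, yet it lands at the top of your screen because people who rate things like you do happened to like it.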

We can (probably) live a happy and fulfilled life even if we are not in control. The need for control is an illusion and a cultural thing. Society evolves fast with technology, and the concept of free will won’t be relevant in the future.

If we have to be scared of something, it’s that someone can take advantage of this. Hackers and corporations themselves can manipulate our decisions for their own ends and we wouldn’t even know it.

If you think that you will retain your sense of control and recognize when someone is trying to manipulate your decisions, think again. It has already happened, with the recent Cambridge Analytica scandal: data was used to manipulate voters with personalized ads.

But still, people have manipulated people since the dawn of time. This problem would exist anyway, even without AI decision-makers.
3. Artist AI
Photo by christies.com

In 2018, this painting was sold at auction for $432,500. The artist is a Generative Adversarial Network (GAN), a type of “creative AI”.

I personally think that the painting kind of sucks, but GANs have been used to create music, write fake news, and generate fake people.

It won’t be long before AI is used to create art better than humans do. While most people claim that art should be a purely human thing, because machines lack a “soul” or a “personality”, in some cases those same people can’t tell whether a piece of art was created by a human or a machine. In other words, GANs have passed the art Turing Test.
Why Does It Scare Us

We are now used to seeing AI get better than us at everything. AI is better than us at detecting cancer. AI is better than us at playing chess and Go.

But come on, art? AI, will you give us a break?

We still can’t accept the fact that AI can be a better artist than us. Art is something inherently human, right? It is emotional, passionate, profound; it’s not logical and mathematical, right? It can’t be done by calculating derivatives or applying Bayes’ theorem, right?

If there was one thing in which we could be better than AI, it was art. If even art is done better by AI, what value do we humans have to offer?
Should We Be Scared?

Maybe a little.

It depends on how it plays out.

On one side, people will enjoy more and better art. They will get used to AI being good at making art and they will benefit from it.

On the other side, human artists won’t be able to compete with AI.

Nobody will prevent human artists from creating art. But it won’t be seen or appreciated by other people. As an artist, you may like the process of making art, but if there is nobody to show it to, what’s the point in doing it?

There is one point in favor of human artists though. A good piece of art is defined by two factors:
A strong emotional impact.
A strong message.

AI can create the best emotional experience ever. But what about the message? Sure, AI can deliver a message as well, but would it be relatable?

A strong message is drawn from personal experiences and observations, and this is impossible for an AI. Again, this doesn’t mean that AI can’t deliver a message. It can deliver the best message ever, but it won’t be authentic. People can’t relate to it, because there is no human experience behind it.

This is why, in the best case, people could decide to enjoy both AI and human art. We may not even refer to both of them as “art”, because they will be two different things. One will be more emotion-oriented, the other more message-oriented.
4. Ultra-Realistic Sex Robots

Since the dawn of humanity, we have invented spectacular things like languages, art, tools, science, and ways to please ourselves without having to resort to actual sex all the time. The oldest known dildo is 28,000 years old, way older than agriculture and human societies.

In a not so distant future (actually, it’s already happening), we will be invaded by androids and gynoids. But it won’t be a Terminator-like invasion. The only thing to invade will be our beds.

Sex may actually be the only use for androids and gynoids. I don’t see many other applications for humanoid robots.

Soon these sex robots will be ultra-realistic. They will be physically indistinguishable from real human beings. Sex with robots will feel just like real sex, if not better. AI will make these robots able to perform various moves and bend in any conceivable position. You will be able to have sex in any way you ever wanted and satisfy your weirdest fetishes.

Even more, sex robots will satisfy not only physical needs but also emotional ones. You can talk with your sex robot about your problems, then cuddle, have sex, and cuddle again. It will feel like a real person, an actual significant other, and you may even engage in a relationship with it.

Why Does It Scare Us

There are three main reasons why sex robots scare us:

They can be hacked. This is a common problem with most of the technologies described here. Did you piss someone off? They don’t have to get their hands dirty: all they have to do is reprogram your robot wife or husband to kill you. Maybe in a weird sex position, so that it looks like an accident.

They replace real human affection. Why would you even bother to find a real partner, when sex robots can satisfy both your physical and emotional needs? You can save money for dinner and use it to hire a robot prostitute or buy a personal sex robot on Amazon.

They may lead us to extinction. Well, this is a direct consequence of the previous point: if you don’t care about finding a partner, you won’t have children.

When sex robots become common, we may all lose touch with reality. Also, sex robots will be way hotter than humans. If we get used to them, we may become unable to be physically attracted to other humans. We will become desensitized, just like porn addicts. We will objectify human bodies even more.

Should We Be Scared?

Maybe a little.

While these three problems may be concerning, it’s unclear whether they will actually be that bad.

The problem is that we often fail to understand that society evolves. Sex is a societal phenomenon as well as a biological one. Some cultures value monogamy, others value polygamy. Some cultures value physical attractiveness, others don’t. Some cultures see homosexuality as a strength, others as a weakness.

Sex robots will surely become common. But it doesn’t necessarily mean it will be bad.

Nowadays, there is a stigma around sex robots. A few decades ago, there was a stigma around Internet porn. But today everybody visits Pornhub regularly without shame. The same is likely to happen with sex robots.

As societal values evolve, we may not see much difference between human-human relationships and human-robot ones, as long as our physical and emotional needs are fulfilled. I’m not saying that this is certain or that I want it to be true, just that it’s a real possibility. It’s society that evolves around technology, not vice versa.

What about sex robots being hacked? Well, some of us have smart homes, and they can be hacked as well. A very smart and motivated hacker could kill you remotely with the objects you have in your room now (unless you are reading this from your bathroom). So the problem is not specific to sex robots.

And what about extinction? Society will probably find some ways to keep producing babies. They may involve more in vitro fertilization than actual sex. And who knows, maybe robots will also be good parents.

Again, I’m not saying that I like it. I’m just saying that, eventually, we or our children will. The major problem I personally see with this is love. Love is about wanting to make another person happy. How can there be love if the thing to make happy is an apathetic robot?

If there is one sure positive outcome from this, it is that many crimes related to sex could be eliminated. There won’t be (human) prostitution anymore. There won’t be rape anymore. There won’t be domestic abuse. Even pedophiles can get child robots.

BEFORE YOU HATE ME for that last sentence, what would you prefer, a pedophile having sex with a child robot, or a pedophile having sex with a real child? If I visited someone’s home in 2040 and found a child robot there, I would still be sickened, but at least I would be relieved that that person wasn’t anywhere near a real child.
5. Nanites
Nanites are nano-robots that may one day surround us everywhere. They would be too small to see (unless they swarm), but they would be everywhere: in the air, in the water, on surfaces, in our food, in our bodies, in our urine and feces.

They would have a swarm intelligence (no centralized control) and they would be able to replicate and adapt.

They could be used to clean the environment, 3D print anything, cure most illnesses, explore other planets, control the weather, and do many other sci-fi things.
Why Does It Scare Us

Would you trust millions of tiny and smart robots in your own body? I didn’t think so.

The reason is always the same: hacking. Whoever controls the nanobots controls everything. A decentralized system would be more resilient than a centralized one, but a smart virus would be enough to destroy just about anything. Just program one nanobot to copy its code into other nanobots and destroy itself after an hour, and watch the world collapse.
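As a back-of-the-envelope illustration (the swarm size and contact rate are arbitrary assumptions, not real engineering numbers), a toy simulation shows how fast a self-replicating infection would saturate a swarm:

```python
SWARM = 1_000_000   # hypothetical number of nanobots in the swarm
CONTACTS = 3        # bots each compromised bot reprograms per time step

def steps_to_full_compromise(swarm=SWARM, contacts=CONTACTS):
    """Each step, every compromised bot copies its code to `contacts`
    healthy neighbors, so the infected count grows exponentially."""
    infected, steps = 1, 0
    while infected < swarm:
        infected = min(swarm, infected * (1 + contacts))
        steps += 1
    return steps

print(steps_to_full_compromise())  # 10 -- a million bots fall in ten steps
```

Exponential replication is exactly what makes the scenario scary: starting from a single compromised bot, the whole hypothetical swarm is gone almost immediately.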
Should We Be Scared?

Probably.

Hacking will always be an issue. But hacking a bank account is one thing; hacking the air, the ocean, the buildings, human bodies is another. All it would take is one smart, motivated, angry person to destroy the world.

On the other hand, security grows together with hacking techniques. A technology as advanced as nanites would be expected to be safe and secure. That’s not enough to guarantee that nanites won’t destroy us, but I don’t think they will.

I think many other things could destroy us long before nanites do.
6. Designer Babies
Photo by Shirota Yuri on Unsplash

A designer baby is a baby created through artificial manipulation of its genetic code.

Our advances in understanding the genetic code have been wonderful. We have been able to clone animals and build real chimeras.

The purpose of designer babies is to create genetically strong humans, free of diseases and with the desired features. You can have a tall, smart, kind, athletic kid without having to rely on the randomness of the genetic lottery.
Why Does It Scare Us

When we think of human genetic alteration and designer babies, we think of eugenics, then we think of Hitler and that triggers us.

It’s normal to be scared of genetic alterations. Messing with genes is commonly referred to as “playing God”, since it means manipulating the fabric of life itself. Messing with our genes too much may create something that isn’t even human.

If we feel the need to design our baby with particular features, it’s like we are giving these features an importance that is greater than human life itself.

In other words, this seemingly goes against the moral law that a child must be loved “no matter what”.

This also goes against every effort we have made in recent decades to overcome racial differences.

We have always had trouble with genetic differences. These differences, like race, are often reflected in class differences. Some decades ago, especially in the US, black and white people belonged to two different social classes. Now this distinction no longer formally exists, but it’s still strongly present in people’s minds (just look at what’s happening in the US right now).

Designer babies will likely bring another class problem, one way worse than the racial one and the one described for cryonics. What do you think will happen when rich people start designing perfect kids while poor families can’t afford to? Will we have riots and #undesignedlivesmatter movements?
Should We Be Scared?

Maybe a little.

But the only real reason to be scared is the one just described: the risk of a new societal gap.

Meanwhile, the eugenics argument has to be put in context. The main reason eugenics is such a bad word today is the whole Hitler thing.

But there is a big difference between killing an “unhealthy” being and preventing it from being born.

And there is another big difference between choosing who mates with whom and allowing any happy couple to have their genetically perfect child.

As long as “different” people aren’t treated differently, and people are allowed to mate with whoever they want, designer babies should not only be okay, but even encouraged.

Why am I saying such a controversial thing?

Seriously, what’s wrong with designing the features of your baby, instead of letting randomness choose them for you? If you knew you had a genetic disease, would you risk passing it on to your child? I don’t know about you, but I think that in that case, not designing your baby would be crueler.

I really want to make this clear. Eugenics is not inherently bad if performed before conception (or before the fetus gains consciousness, although I don't want to make a pro-choice type of argument here). If you know your future child will have a disease, wouldn’t you want to spare them a life of suffering?

If you can’t afford such treatment, and your child is born with such disease, it doesn’t mean that they don’t deserve all the love of the world. They are not “unwanted”. You just wanted to give them a better life. And you still love them “no matter what”. They are worth no less than a genetically perfect child.

The problem is not genetic alteration itself, it’s society. The problem is that we let genetic features define us, when they should be just that, features. Anyone should be able to have a healthy and happy child. Why risk unnecessary suffering just because you are afraid of going against God, nature, or the universe?
7. Immersive Reality and Brain-Computer Interfaces
Photo by Laurens Derks on Unsplash

You may be familiar with Augmented Reality (AR) and Virtual Reality (VR). What you may be less familiar with are Immersive Reality (IR) and Brain-Computer Interfaces (BCI).

IR consists of immersing you in another reality by injecting that reality directly into your brain. In other words, it manipulates your brain waves to create experiences.

BCIs are the interfaces that allow for this manipulation. Imagine that you can reprogram your brain. Since the brain is the source of, well, pretty much everything you experience, having control over it can be very powerful.

With BCIs, you could not only immerse yourself into another reality, but also suppress your fears, change your personality, read 1,000 books in one second, communicate telepathically with other people, control robots (including nanites), control connected objects telekinetically, and get many other sci-fi superpowers.
Why Does It Scare Us

There are two aspects to be taken into consideration.

One is the perception of reality. IR, especially in videogames, can make you lose touch with your actual reality. You will spend more time in IR than in the real world, and you may not recognize the latter anymore.

Another aspect is that BCIs can turn you into something that is not human, or at least deprive you of basic human experiences. What will become of you when you acquire all the superpowers mentioned above? If you can reprogram your mind just like software, what will your life look like? What will happen when the basic experience of talking to another human being becomes outdated because you can just communicate telepathically?

And what if, again, our minds get hacked?
Should We Be Scared?

Maybe a little.

Losing touch with reality has been a human habit since the Stone Age. Seventy thousand years ago (or probably more), we invented fiction. Since then, our ability to deliver fiction has kept improving: oral stories, written stories, plays, movies, videogames, virtual realities.

It is just part of human nature. As long as you can give meaning to your life, does it even matter in which reality you spend most of your time, real or virtual? Hell, even our own reality could be just a simulation. That wouldn’t make it meaningless. As long as you have consciousness, your life is meaningful, independently of the reality that hosts it.

But what if BCIs do more than immerse you in another reality? What if they change the structure of your mind? Like many other things, it depends on how it plays out.

I think that BCI superpowers won’t inherently be a bad thing, just like coffee is not a bad thing when it wakes you up, alcohol is not a bad thing when it makes you a bit dizzy, and weed is not a bad thing when it makes you a little relaxed (sorry, conservatives).

But what if you overdo it? What if it is addictive? What if it’s less like coffee and more like cocaine? Well, again, it depends on how it plays out. Society will need to come up with norms to regulate those experiences if necessary, while psychologists and neurologists should better understand the impact of these technologies on our minds before they become commonplace.
8. Mind Upload

Take the previous point, and stretch it to the extreme. You don’t just immerse yourself into another reality, you take your bags and move there.

I recently finished the new Amazon series Upload. I suggest you give it a try; it’s pretty good.

In the Upload world, people can choose, instead of dying, to upload their minds into a virtual afterlife.

This may eventually happen for real. Your body may die, your brain may die, but the information in your mind can be exported and stored on a digital device. Your mind will be a file on a hard disk, on a USB stick, or in the cloud. Then you can keep living through a computer program that transforms the structure of that file much as electrical stimuli transform the structure of your brain.
Why Does It Scare Us

Sounds crazy? I know, right? That’s why it’s scary.

We don’t really know what will happen to our consciousness after an upload. And the scariest thing is that we may never be able to know it.

The thing is, yes, it’s crazy to think that you could live in a USB stick, but it’s also crazy to think that consciousness must depend on a biological substratum. Why should it matter whether a neuron is made of proteins or of bits?

It isn’t even like your consciousness is stored in your particular brain cells. Your neurons change all the time, yet your consciousness is always there (yes, new neurons are created throughout life, even though neurons don’t divide). Consciousness has to reside in information, so it could live in a USB stick too. Theoretically.
Should We Be Scared?

Probably.

Imagine what would happen if consciousness wasn’t retained after an upload. If every human eventually uploads, we will basically be extinct. There would just be a computer program running for nothing, and that’s it.

We may never know the answer. You can’t just ask an upload whether it is conscious: the answer will obviously be yes, since it relies on the same set of memories and phenomena that you would find in a human brain.

I personally think that consciousness will be retained. To be more precise, I think that consciousness isn’t “stored” in the brain or in a USB stick, but it’s something that arises when there are interactions, whether they are synaptic impulses or CPU cycles. But we can’t really prove that.

This doesn’t mean that we should ignore the problem. Chances are that mind upload will eventually happen, so it’s better if neuroscientists, engineers, and philosophers at least try to find an answer.
9. Artificial General Intelligence
Photo by Franck V. on Unsplash

Artificial General Intelligence (AGI) is the last invention we will ever need to make. An AGI is an AI that, like a human, can learn anything, not just one specific task like current AIs.

Once an AGI is created, it will recursively improve itself, becoming ever more intelligent. At some point, it will become an Artificial Super Intelligence (ASI), a being so smart that we would be like ants to it.

At this point, two things can happen. Either the ASI will build a utopia for us, or it will drive us towards extinction. There is no in-between.
Why Does It Scare Us

Movies like Terminator, Ex Machina, 2001: A Space Odyssey, and Transcendence have shown us hypothetical worlds in which AIs go rogue.

There is a lot of confusion about this. Some people believe that an AGI can never be created because there is “something” about the human mind that can’t be replicated.

Others believe that an AGI would gain self-consciousness and get sick of being the humans’ slave, so it would rebel.

An AGI will likely be created at some point, because there is no reason to believe that there is that “something” about the human mind, especially about human intelligence. And an AGI won’t spontaneously gain self-consciousness, because intelligence and consciousness are two separate things.

Yet there are different reasons to be scared.
Should We Be Scared?

Yes.

The problem with AI is that it’s difficult to tell it what its goals and boundaries should be. We are messy, we are complicated, we don’t know what we want.

If an AGI’s goals aren’t aligned with ours, we run into the alignment problem, and very bad things can happen.

Take, for example, the paperclip maximizer thought experiment. A simple task like producing paperclips can turn into an existential risk for us. Why?

Because an AGI would always find the most efficient way to reach its goals. For example, it could turn all the atoms on Earth into paperclips, including the ones in our bodies.

If we specify that it must not kill us, it will still find a loophole that makes our lives miserable. To avoid this, we would have to specify every single condition. And that’s not simple.
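A deliberately silly toy sketch of this specification problem (the “world” and its conversion rules are invented for illustration, not a real AI model):

```python
# Resources a greedy maximizer can convert into paperclips.
# Every name and number here is a made-up toy world.
world = {"iron": 100, "cars": 50, "food": 80, "humans": 10}

def maximize_paperclips(world, forbidden=()):
    """Convert every resource not explicitly forbidden into paperclips."""
    clips = 0
    for resource, amount in world.items():
        if resource in forbidden:
            continue
        clips += amount          # 1 unit of anything -> 1 paperclip
        world[resource] = 0
    return clips

w1 = dict(world)
maximize_paperclips(w1)                        # naive goal, no constraints
print(w1["humans"])                            # 0 -- humans became paperclips

w2 = dict(world)
maximize_paperclips(w2, forbidden={"humans"})  # patched goal...
print(w2["humans"], w2["food"])                # 10 0 -- alive, but starving
```

The second run is the loophole in miniature: we forbade the obvious harm, forgot to forbid the indirect one, and the maximizer exploited the gap. Enumerating every such condition for the real world is the hard part.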

It’s not that AGI will actively try to kill us. It’s just amazingly difficult not to hurt us. We need many things in order to live a good life, and we need to make sure that AGI doesn’t compromise these things by mistake.

There are some instances, though, in which an AGI would actively want to kill us. Since we are so unpredictable, it may reason that we would get scared and try to stop it from reaching the very goals we gave it.

This is probably the most difficult problem we will ever have to solve. And also the last one.

If you want to explore this subject further, I recommend reading Superintelligence by Nick Bostrom. It’s written in dense academic language, but it’s a worthy read.


Since I was 15 or so, I have dreamed of building an AGI. Now, I dream of making sure that, when it comes, AGI won’t kill us.

Technology is a wonderful thing, but it can also be scary. This has never been as true as it is now. We have a kind of power that we would have never imagined before.

Technological improvements can only be good if they are guided by wisdom improvements. If we can’t make wise choices on how to use God-like technologies, we are doomed for sure.
