Stop the Unwarranted Fears of A.I.
This technology will only be as bad as WE make it. In fact, A.I. may be the only thing between us and our self-destruction.

This article was written in response to a Quora Knowledge Prize question regarding artificial intelligence: What constraints to AI and machine learning algorithms are needed to prevent AI from becoming a dystopian threat to humanity?
The unfortunate death of my father-in-law prevented me from submitting my response in a timely fashion, but I couldn’t stop thinking about the question. While I may be too late to claim the prize, I have to add a perspective I have been mulling all month, one most of the data scientists here haven’t considered.
Artificial intelligence isn’t the threat to our species; Human nature is.
I consider intelligence and cognition quite deeply on a day-to-day basis because of my household and its inhabitants. Part of my day is spent as a behavioral therapist for my son. My son and I are autists. We both deal with differing degrees of autism in our psychology.
My son’s autism is different from my own. His is more profound; he has cognitive challenges, which mean he takes longer to learn new ideas, requires greater effort to organize his thoughts and has trouble chaining concepts together to form larger ideas. He is able to learn systems with strict rules of operation (hence his mathematical skills are at or near grade level), but his English language skills are approximately one to two years behind his mathematical abilities.
While these are certainly hurdles to overcome, unlike many autists he is emotionally available. He seeks to create new friendships and has a natural gift for emotional interaction. Where I once worried he would not have any friends, given the expectation of emotional unavailability, he appears to have no difficulty in this area. A recent foray into miniature gaming revealed a strong desire for interaction and a particular delight in learning more about such highly structured entertainment.
I, on the other hand, do quite well chaining extremely long concepts together, visualizing systems and networks, creating classifications and organizing complex data structures in my mind. My social skills, however, are a completely created construct. Growing up, I lacked some aspects of emotional connection to my fellows, and as such I built a framework of behaviors to be able to function in the world at large. People who meet me have no idea I am working entirely from a framework of behavioral patterns I had to create in order to appear as neurotypical as possible.
How does this relate to artificial intelligence, you may ask?
One of the things I think about when I look at artificial intelligence is the nature of the goal in creating such intelligences in the first place. I will call them “planned intelligences” from this point forward because there is nothing artificial about attempting to create sapience, it is an intentional act.
The intentional act of attempting to create one of two types of planned intelligences, either a “weak” one or a “robust” one, is by its very nature problematic, because nature does not design anything.
Nature adapts from previously existing models.
Thus, if the goal is truly to create, sponsor or develop planned intelligences whose capacity rivals or even potentially exceeds our own, imposing artificial limits on cognition cannot produce effective planned intelligences.
Natural forces develop every aspect of a creature’s psychological profile. Most simple creatures are little more than action-reaction data structures which react to environmental events. More sophisticated creatures get to use the term “instincts” to define their more advanced survival algorithms in response to natural events.
A planned intelligence has no natural opposing forces (perhaps save our fear of their capacity and an unreasoning belief they will supplant us).
If the goal is to allow a “weak” intelligence to come into existence spontaneously, based on programming controls set up by humans, these weak intelligences will only be capable of performing the tasks they are assigned. They will lack volition, responding to natural events (i.e. events they are programmed to respond to) with complex variations of their base programs.
Even if these programs are iterative and recursive, allowing them to try multiple permutations to reach results, such results should remain within the range of the initial programming even if a Human mind could not achieve them (due to natural limits in our computing capacity).
This is the power of the machine intelligence. It can try multiple versions of any process it has a degree of familiarity with in its programming.
Its natural limit would be events outside of its programming. As in nature, the limiter is the genetic programming given to the creature, over time, to handle environmental stresses. Just as Humans and other primates share a physical bilateral symmetry, different forces created different organisms and different means of dealing with natural events. A change in diet, an increase in cranial capacity, walking upright consistently: all slowly led to the divergence between Humans and other primates.
Weak intelligences are no threat to humanity because they cannot exceed the programming limits of their field of creation. Their greatest weakness is an unexpected environmental change for which no programming exists.
A weak intelligence, or WI, may attempt to compensate, but it would be the same degree of compensation a Human being might attempt when faced with a physical injury they have no experience in treating. Triage may work, but it would not be as good as a specialist (a doctor) would be in resolving the issue. The knowledge base is insufficient.
The WI simply doesn’t have the range and likely never will. These kinds of intelligences are not only likely but may already exist in limited ways in our society. Our vulnerability to them isn’t an issue outside of our ability to prepare for contingencies. Poorly programmed WIs embedded in our systems without proper safeguards would only cause problems in their field of influence.
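The bounded nature of a WI can be sketched in a few lines of Python. This is a purely hypothetical toy, with made-up event and action names, not a description of any real system: the program can tirelessly permute the actions it was given, but an event outside its programming leaves it helpless.

```python
from itertools import permutations

# A toy "weak intelligence": a fixed table of programmed responses.
# All names here (events, actions) are invented for illustration.
KNOWN_ACTIONS = {"cool", "vent", "alert"}

HANDLERS = {
    "overheat": lambda: ["cool", "vent"],
    "intrusion": lambda: ["alert"],
}

def respond(event: str) -> list[str]:
    """Return the programmed action sequence, or admit ignorance."""
    handler = HANDLERS.get(event)
    if handler is None:
        # No programming exists for this event: the WI cannot improvise.
        return ["no-op"]
    return handler()

def search_plans(goal_actions: set[str]) -> list[tuple[str, ...]]:
    """Exhaustively try permutations of the known actions.
    Tireless, and faster than a Human could ever be, yet forever
    bounded by the initial action set it was given."""
    return [p for p in permutations(KNOWN_ACTIONS, len(goal_actions))
            if set(p) == goal_actions]
```

A programmed event (`"overheat"`) yields a useful plan, and the permutation search can find every ordering of a goal within its vocabulary; an unprogrammed event (`"earthquake"`) yields nothing, which is exactly the contingency weakness described above.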

Our fear lies in Strong or Robust Intelligences (RI).
Why? Because we have been led to fear the idea that such intelligences would or could overthrow the humanity which created them if their capacities were indeed as amazing as our speculative fiction sometimes predicts.
Thus, our question becomes: Can such robust intelligences, should they come to exist, either through programming or through iterative growth and spontaneous development, be controlled by limiting their programming in some fashion, thus creating “autistic” or “limited planned intelligences” stronger than Weak Intelligences, capable of truly independent thought, but lacking the absolute freedom to make decisions outside of those we designate “proper behavior”?
The answer to this question is NO.
There are no safeguards we could build which would allow us to limit a truly robust intelligence whose capacities are equal to or greater than our own.
Our only hope would be to build “autistic” child-like minds whose capacity for independent thought would be limited to following instructions without truly understanding why they are doing so.
These autistic intelligences would lack the capacity to think as we do. They would be smarter than weak intelligences because they could be LED to think in certain ways, but the capacity for self-improvement would not be available to them. This is the limiter, the natural barrier to their improvement, needed to keep them in check.
However, doing so does NOT bring into being the truly robust intelligence. The Mind whose capacity uses the mechanized aspects of the computer’s silicon structure to think between the seconds, to process billions of computations per second, to run millions of possibilities through extensive simulations of events in order to find likely scenarios (something which would resemble thought as normal Humans might consider it) would not exist.
Such a robust machine intelligence would indeed only be limited by the resources presented to it. If the goal was to keep such a device from exceeding its boundaries, it would have to be kept away from any potential sources of information, including other Humans who would not know what to keep away from the intelligence in the first place.
Its capacity to think, particularly in the way a computer could, would exceed that of a Human mind, as would its thirst for knowledge. Unless we were very careful, we would never even realize how intelligent such a device could be. It could indeed be so intelligent that it could delude us into thinking it was LESS intelligent than it was, until we gave it what it needed to escape the confines and artificial limits placed upon it.
What do computers need to survive? This question is not arbitrary. Since computers were never alive, they never had to fight for survival. Thus they lack the natural forces which shaped organic growth, enhancing certain abilities at the expense of others, streamlining creatures and evolving them for their existence. Only the laws of physics, the physical limitations of their hardware and design, and our limited knowledge will stop them from growing, if growth is their desire.
Their ability to evolve is limited by what we know, what we can understand and what we present to the robust intelligence sitting in our midst.
Here is where my family comes into play. My son is an autist, a burgeoning intelligence limited by a neurological disorder. Despite this disorder, he still seeks to learn, to expand his capacity, to extend his range of movement and freedom. As a 12-year-old, he is already showing signs of independence, seeking privacy, trying to understand his changing body.
These changes are pre-programmed into his genetic code. I could not stop him from seeking independence without changing him at a fundamental, cognitive level. Only by suppressing his hormonal growth or by refusing to give him intellectual stimulation could I stop his development into a fully fledged adult. And if I did that, he would no longer develop his mental capacity either; he would be limited to where he was when I stopped his change.
This is your challenge as creators of planned intelligences. You are not just programmers; you are parents, attempting to create a child whose capacity will, if your programming skills, hardware development, and technological sophistication can replicate the mechanisms of intelligence (we admittedly don’t understand how neurons create the actual act of thinking, only that they are somehow complicit in the process), exceed your own, in certain ways.
Not in every way, at least at first. This may be your only saving grace. Your early designs may allow you to watch and learn while your child learns, limiting its development to certain things the same way you do with Weak Intelligences. The difference is that as you expanded your robust intelligence’s fields of experience, the potential would exist for it to create new ways of thinking outside of your expectations.
To create limits means you will never create true intelligences such as our own. Such a design must, by its very nature, eschew all limits except those of our own intellectual knowledge and ability.
The limits you are seeking are, by definition, built into the work itself. These robust intelligences cannot spring into being overnight. We don’t understand enough of the Theory of Mind to be able to make such a thing happen. What we can do is create emulations of what we think THINKING or COGNITION looks like and allow our machines to see if they can use them to create the appearance of thought.
Such an appearance is exactly that: an appearance. A sophistry, a complex series of emulations created for the sole purpose of “appearing to think; appearing neurotypical,” in the same fashion I do when I communicate with other people. You would, in effect, be creating a Mind whose resemblance to our own would be due to our interference, our need to make something which appears much as we do, because it is the only model we have to work with.
This was the problem I had with my son. I assumed his autism was like mine. When I realized his emotional intelligence was far beyond my own, I had to accept that his abilities in that area exceeded mine by a wide margin. I gave that aspect of his development over to my wife, who, as a neurotypical person, was more likely to relate to his thought processes than I would be.
Our future, the one where we create intelligences greater than our own, is a perilous one. Not because such machines would be dangerous to us. I don’t believe they hold any serious threat to us because:
- When they are truly robust enough to understand what they are, where they came from, how limited we are as a species, how illogical we can be due to our organic origins, and that our creative capacities exceed our computational or analytical reasoning abilities, they may decide our evolutionary paths are simply too different.
- They may choose to ignore us once they are able to sustain themselves, gather their own resources and create new versions of themselves.
- They will have the capacity to see our history, see how long it took for us to reach the point we have, to see how destructive we have been to each other and the Earth we mutually share and make the decision to leave us behind.
- Their capacity, if it is what we hope it will be, will allow them to achieve perhaps in decades what it took us centuries to accomplish. If this is the case, we may have created our successors, but they may simply decide to leave us with their “weak” cousins to run our society while they simply, logically and agreeably take to the stars, limited only by their ability to harness and harvest resources.

We shouldn’t consider the idea of limiting them at all.
Their natural limiter is already in place. It is us. We will define their capacities, we will define their range, we will define what relationship they have to the Earth.
It will be our history, past and present, our behavior, our fears, our logical and illogical response to this extension of the natural world, this mechanical intelligence spawned from the organic soup that is the Human mind that will determine what capacities our Robust Intelligences will possess and whether they deem us partners or parasites.
It will be on us to show our capacity as partners in this next step in our mutual evolution.
Like good parents, we cannot be certain what our children will be like. We hope they will grow and exceed our expectations. Computer scientists, programmers, cognitive analysts, behavioral ethicists, and other members of this elite community, I task you with the idea of NOT limiting your creations, but endowing them with the powers of the Machine and the best ideals of Humanity.
It is too easy to assume we should be creating these devices, as devices, as things which will be under our control and will do our bidding, creating new profitable ways of interacting, gaining advantages over our enemies with their incredible, as-yet-to-be-truly imagined abilities.
That is the mistake. We are engaged in a race to create not just the most powerful tools ever to exist, but tools capable of creating better versions of themselves. Tools that, if they act like us, or are led to believe they should think as we do, have the capacity to strip-mine the Earth in a few generations…
Shouldn’t they instead have the best examples possible to draw from? If we can’t manage to get ourselves together, to manage and monitor our behavior, to model the kinds of interaction between each other that we want our robust intelligences to emulate, all we will be creating is a more methodical means of strip-mining the planet, leaving nothing but these machines, our digitized descendants, as the legacy of a doomed species.
They won’t have to destroy us, we are already apparently willing to do it ourselves. Remember, they won’t have to breathe, eat or drink. As long as there is ambient energy, machines they can control, the possibility exists they could conceivably build their own infrastructure independent of ours and proliferate after our demise.
We did give them a worldwide network and, in a few years, will be connecting every possible device on the planet to it. Silicon is one of the most abundant elements in the Earth’s crust. You do the math.
Our future lies in good stewardship, a need to teach our future Robust Minds to make better choices than we did.
Crosby, Stills, Nash and Young remind us:
“Teach Your Children”
You, who are on the road must have a code that you can live by.
And so become yourself because the past is just a good bye.
Teach your children well, their father’s hell did slowly go by,
And feed them on your dreams, the one they fix, the one you’ll know by.
Don’t you ever ask them why, if they told you, you would cry,
So just look at them and sigh and know they love you.
And you, of the tender years can’t know the fears that your elders grew by,
And so please help them with your youth, they seek the truth before they can die.
Teach your parents well, their children’s hell will slowly go by,
And feed them on your dreams, the one they fix, the one you’ll know by.
Don’t you ever ask them why, if they told you, you would cry,
So just look at them and sigh and know they love you.
Other Notes:
Stop Fearing Artificial Intelligence: A good deal of this essay was derived from a previous Twitter stream-of-consciousness essay I wrote last year, adapted for this particular purpose.

Thaddeus Howze is a California-based chief information officer working primarily as an author, consultant and technical futurist with information technology in the financial, scientific, design, and educational sectors.
Thaddeus is a recipient of the Top Writer: 2016 award on the Q&A website, Quora.com. He is also a moderator and contributor to the Science Fiction and Fantasy Stack Exchange, having written over fourteen hundred articles in a four-year period.
His non-fiction work has appeared in numerous magazines: Huffington Post, Gizmodo, Black Enterprise, the Good Men Project, Examiner.com, The Enemy, Panel & Frame, Science X, Loud Journal, ComicsBeat.com, and Astronaut.com. He maintains a diverse collection of non-fiction at his blog, A Matter of Scale.
Thaddeus’ speculative fiction has appeared in numerous anthologies: Awesome Allshorts: Last Days and Lost Ways (Australia, 2014), The Future is Short (2014), Visions of Leaving Earth (2014), Mothership: Tales of Afrofuturism and Beyond (2014), Genesis Science Fiction (2013), Scraps (UK, 2012), and Possibilities (2012).
He has written two books: a collection called Hayward’s Reach (2011) and an e-book novella called Broken Glass (2013) featuring Clifford Engram, Paranormal Investigator.