Designing Human Offshoots That Won't Wipe Out Humanity

Discussion regarding the Outsider webcomic, science, technology and science fiction.

Moderator: Outsider Moderators

Kage Sama
Posts: 8
Joined: Fri Aug 13, 2021 11:27 pm
Location: Central MD, USA

Re: Designing Human Offshoots That Won't Wipe Out Humanity

Post by Kage Sama »

Demarquis wrote:
Sat Dec 25, 2021 3:38 am
If I had the power to create human offshoots, I would go for something better adapted for what we consider an extreme, hostile environment. The deep sea, deserts, Titan, that sort of thing. But make them poorly adapted to the type of environment we are already best adapted to--that way there is no competition over territory.
À la the original Guardians of the Galaxy concept: instead of terraforming worlds where doing so was physically or financially impossible, they adapted humans to be able to survive on or near the various planets (Mercury, Jupiter, Pluto, etc.). Sure, Jim Starlin did some serious hand-wavium in creating the various offshoot races, but they were well suited to their new homes and weren't reliant on, nor envious of, the original Terrans.

User avatar
Cthulhu
Posts: 910
Joined: Sat Dec 01, 2012 6:15 pm

Re: Designing Human Offshoots That Won't Wipe Out Humanity

Post by Cthulhu »

Werra wrote:
Sat Dec 25, 2021 12:17 pm
It's not quite as bad. To design a species from scratch, you need to know what you are designing. While omniscience is unachievable, it's also not needed. The research institutions that can produce a species will also be capable of readjusting it after its genesis. The same thing happens in city planning, software development or any other design process.

Also, there is the ethical question of why you would need to, want to or should be able to decide every single detail of what another species is to be like.
So, you want to start another discussion about race and racism? Count me out.
Humans cannot even accommodate the minuscule differences within our own species, and adding a second, engineered race would increase the problems by an order of magnitude. With the mindset that a newly created race is the same as a piece of software or a building that can simply be adjusted, massive interspecies tensions would be a given and an uprising would be bound to happen. This is a prime example of what I meant by being ethically and morally mature.
Bamax wrote:
Sat Dec 25, 2021 2:14 pm
Yes, Werra is right.

Even in Judeo-Christian religion, God does not always get what he wants, or even predict the outcome of every event that occurs in the encyclopedia/record book that is the Bible. He even changes his mind on occasion based on what happens.


If many gods, including popular ones, are not omniscient, then an alien race engineering offshoots wouldn't need to be either.

This is hyperbole, since we all know God/gods > sci-fi civilization.

It's an ultimate example.
God doesn't exist in the same plane of reality, so to speak, and thus doesn't need to fear an uprising. On the contrary, he can even destroy all of his creations for mere disobedience, not even an actual threat.
But the topic of this thread is the creation of an offshoot race that won't be a threat to us. So, the question whether we are even mature enough for such a responsibility is far more important. So far, I must say that we certainly aren't.
Demarquis wrote:
Sat Dec 25, 2021 3:43 pm
Hey! It's the singularity problem, in another guise! Instead of a hyper-intelligent artificial intelligence, we have a self-aware sub-species of humanity. How do we know they won't turn on us? What safeguards could we build in? What are the steps on the path toward "friendly species"?

As I tell my computer nerd friends, we take this chance every time we raise children. And the solution is the same... If you want an intelligence to be friendly to you, be friendly to it. They tend to reciprocate.

As any parent can tell you, you ain't gettin' control.
With the number of irresponsible parents out there, our ability to be friendly should be seriously questioned. But even if a child rises up against such a bad parent, it won't affect the human race as a whole, since both of them belong to it. With a different species, however, the chances of a "Planet of the Apes" scenario are quite high.

In conclusion, I must say that we are most probably not able to create a subspecies that won't be a threat to us, especially if it is designed after us, since we are threat number one to ourselves. Such a race would need to be either totally subservient or completely unlike us, so as not to be a rival and competitor.

User avatar
Werra
Posts: 840
Joined: Wed Jun 06, 2018 8:27 pm

Re: Designing Human Offshoots That Won't Wipe Out Humanity

Post by Werra »

@Cthulhu
A strong emotional response to an argument is a telltale sign of an inability to formulate a rational refutation. You are the only one talking about race here.

Now, you seem to have misunderstood my post. I gently brought up real-world examples of design processes that work fine without full knowledge of the details or all resulting consequences. If city zoning is too abstract an example for you, then consider animal husbandry and farming. Humanity has, for more than ten thousand years, already worked to create new species, with delicious results.

So, the moral sin you are preaching about was already committed in the Stone Age. Phrasing it as an actual design process is merely taking the principle and implementing it with (future) modern means. If it irks your tender idealism to speak of changing a species' genome like some software, rest assured that once the means to do so are real, morals will change to accommodate those means. Progressivism will trample the pearls you are clutching under its march forward.

Bamax
Posts: 1040
Joined: Sat May 22, 2021 11:23 am

Re: Designing Human Offshoots That Won't Wipe Out Humanity

Post by Bamax »

Yeah a new morality to match capabilities makes sense.

Actually the Loroi already have this, although in their case it may be necessary as it only speeds up the inevitable.

Examples?
Loroi cull those who are chronically mentally unwell (crazy). I would not be surprised if deformed babies were dealt with the same way.

While this may seem cold and cruel to us, remember that Loroi are telepaths, and sooner or later a crazy Loroi would irk the rest until they wanted to get rid of him or her anyway... yeah.

That said... I am not sure why you must sound so harsh with Cthulhu.

Did he kill someone or a pet you know lol?

User avatar
Cthulhu
Posts: 910
Joined: Sat Dec 01, 2012 6:15 pm

Re: Designing Human Offshoots That Won't Wipe Out Humanity

Post by Cthulhu »

Werra wrote:
Sun Dec 26, 2021 7:05 pm
A strong emotional response to an argument is a telltale sign of an inability to formulate a rational refutation. You are the only one talking about race here.
Good thing that I wasn't even remotely emotional.
Werra wrote:
Sun Dec 26, 2021 7:05 pm
Now, you seem to have misunderstood my post. I gently brought up real-world examples of design processes that work fine without full knowledge of the details or all resulting consequences. If city zoning is too abstract an example for you, then consider animal husbandry and farming. Humanity has, for more than ten thousand years, already worked to create new species, with delicious results.
No, it seems that you misunderstood my response instead. There is a very important distinction between designing cities, or even raising new breeds of cattle, and the creation of a new Human subspecies. The factor is called sapience. Can we truly deal with the questions and issues this will produce? The strictly utilitarian approach of whether it's delicious or useful won't work on someone who can actually voice their objections. Therefore, we would need to be mature enough to handle this completely new and unknown development.
Werra wrote:
Sun Dec 26, 2021 7:05 pm
So, the moral sin you are preaching about was already committed in the Stone Age. Phrasing it as an actual design process is merely taking the principle and implementing it with (future) modern means. If it irks your tender idealism to speak of changing a species' genome like some software, rest assured that once the means to do so are real, morals will change to accommodate those means. Progressivism will trample the pearls you are clutching under its march forward.
On the contrary, I got rid of my tender idealism some 20 years ago, after seeing enough examples. Instead, I'm merely warning that progressivism without responsibility is exactly what might cause the topic of this thread to happen. The issue is not whether we can do it, or whether morals would change to accommodate it, which is, of course, certain to happen. Your overly emotional argumentation aside, my experience has taught me that we are simply not yet ready to shoulder this responsibility.

Since I don't want to drag my work into this (I'm currently on vacation), let's use not so much a political as a technological example of why we can't have nice things. A concept that's quite widespread in science fiction, the orbital solar collector, would be a very interesting way to provide nearly limitless and ecologically friendly power. Collect it in space, where the sun always shines, and send it down via lasers or microwaves. Sounds good, right?
Except that if such a technology were available, we'd rather use it for orbital cannons instead. The ability to grill an entire city without the radioactive fallout of a nuke would be far too tempting, most probably enough for it to see real action. Seeing how narrowly we avoided a nuclear WW3, this time we might not be as lucky.
A completely new species, on par with or even better than us, would be far more dangerous, especially since we'd probably design a "warrior race" first. The temptation to use them for the "greater good" against the "forces of evil" would be overwhelming. But since they are sapient, sooner or later they will realize who the greater evil is from their point of view, and the outcome might not be in our favor. Ultimately, we'd end up with an uprising of war thralls instead of machines, just as per the thread's topic.
Bamax wrote:
Sun Dec 26, 2021 10:32 pm
Yeah a new morality to match capabilities makes sense.

Actually the Loroi already have this, although in their case it may be necessary as it only speeds up the inevitable.

Examples?
Loroi cull those who are chronically mentally unwell (crazy). I would not be surprised if deformed babies were dealt with the same way.

While this may seem cold and cruel to us, remember that Loroi are telepaths, and sooner or later a crazy Loroi would irk the rest until they wanted to get rid of him or her anyway... yeah.
That's what I mean: we can't ever rein in ourselves, so how could we educate an offshoot (which is our progeny, so to speak) to rein themselves in? This new species needs a corresponding moral code, philosophy, societal structures, a reason for being. Without Soia guidance, Loroi society collapsed after the Fall, and it took them an immense amount of time to create a new one.
Bamax wrote:
Sun Dec 26, 2021 10:32 pm
That said... I am not sure why you must sound so harsh with Cthulhu.

Did he kill someone or a pet you know lol?
He just really likes to argue, not for the outcome but solely for the thrill of the argument itself. Besides, using such flowery expressions as "trampling the pearls" while accusing me of being emotional? I really don't understand whether he's serious or just trolling.

User avatar
Werra
Posts: 840
Joined: Wed Jun 06, 2018 8:27 pm

Re: Designing Human Offshoots That Won't Wipe Out Humanity

Post by Werra »

The strictly utilitarian approach of whether it's delicious or useful won't work on someone who can actually voice their objections.
Sapience does not invalidate a utilitarian perspective. Have you never negotiated employment?
Therefore, we would need to be mature enough to handle this completely new and unknown development.
That does not follow. Only if the species designers were to stick to your personal, idealistic ethics would they need to be "mature" enough. They could simply not care about the ethical ramifications.
Your overly emotional argumentation aside, my experience has taught me that we are simply not yet ready to shoulder this responsibility.
Using evocative language =/= making an emotional argument. Your continuous attempts at going "No U!" are duly noted. Predictably enough, you have conceded my "ethics, shmethics" argument and yet still attacked me on emotional grounds.

Also, since humanity does not have either the scientific or technological basis to create a new species yet, its current degree of responsibility is also of no concern. Furthermore, responsibility is not a prerequisite to the implementation of any new technology, as you have detailed already.
A completely new species, on par with or even better than us, would be far more dangerous, especially since we'd probably design a "warrior race" first.
To find out which design decisions would stop such a threat from arising is why Bamax started this discussion. In my view, the only reason to create a new species is to fill an evolutionary niche that your own species can't fill, like living on the ocean floor. If the species you create does not share your own niche, there shouldn't be a lot of evolutionary competition going on. Since we are talking about designing a sapient species, I believe we can also factor in a desire of this new species to cooperate with humanity:
1. The species will be entirely at our mercy for at least as long as it takes to build numbers and infrastructure.
2. Ethical considerations about the treatment of a sapient species go both ways.
3. Humanity doesn't share their niche, and both species therefore have something unique to offer.
Without Soia guidance, Loroi society collapsed after the Fall, and it took them an immense amount of time to create a new one.
The bubble-wide orbital bombardment of all known settlements might have played a part in that. Arioch is also not a historian and really just wants to write about tepid babes going pewpew in space. Bless his little heart!

@Bamax
The Loroi are full-blown eugenicists. They decide who gets to reproduce based on rational criteria, a lot of which even come down to raw genetics. Their morals and ethics concerning the value of the individual should be their most alien feature to us.

Bamax
Posts: 1040
Joined: Sat May 22, 2021 11:23 am

Re: Designing Human Offshoots That Won't Wipe Out Humanity

Post by Bamax »

Werra wrote:
Mon Dec 27, 2021 3:40 pm
The bubble-wide orbital bombardment of all known settlements might have played a part in that. Arioch is also not a historian and really just wants to write about tepid babes going pewpew in space. Bless his little heart!

@Bamax
The Loroi are full-blown eugenicists. They decide who gets to reproduce based on rational criteria, a lot of which even come down to raw genetics. Their morals and ethics concerning the value of the individual should be their most alien feature to us.

Spiral would respond, "Not so true!"

Hahaha! Green Girl is the antithesis of tepid.

User avatar
Cthulhu
Posts: 910
Joined: Sat Dec 01, 2012 6:15 pm

Re: Designing Human Offshoots That Won't Wipe Out Humanity

Post by Cthulhu »

Werra wrote:
Mon Dec 27, 2021 3:40 pm
Sapience does not invalidate a utilitarian perspective. Have you never negotiated employment?
An employer doesn't create his workers from scratch, so it's not a particularly good argument.
Werra wrote:
Mon Dec 27, 2021 3:40 pm
That does not follow. Only if the species designers were to stick to your personal, idealistic ethics would they need to be "mature" enough. They could simply not care about the ethical ramifications.
But weren't we trying to avoid a "wipeout" scenario? Not accounting for the possible ethical ramifications may certainly speed it up.
Werra wrote:
Mon Dec 27, 2021 3:40 pm
Using evocative language =/= making an emotional argument. Your continuous attempts at going "No U!" are duly noted. Predictably enough, you have conceded my "ethics, shmethics" argument and yet still attacked me on emotional grounds.
I haven't even attacked you, merely provided my answers in a slightly humorous way to counter your inexplicably aggressive way of debating. This is not a life-or-death scenario (yet), just some minor banter.
Werra wrote:
Mon Dec 27, 2021 3:40 pm
Also, since humanity does not have either the scientific or technological basis to create a new species yet, its current degree of responsibility is also of no concern. Furthermore, responsibility is not a prerequisite to the implementation of any new technology, as you have detailed already.
It may not be necessary for the implementation, but handling the responsibility is crucial to ensure Humanity's survival.


At first, I wanted to provide some proper counter-arguments, but your overly aggressive way of debating a purely fictional scenario spoiled any entertainment that the discussion could've provided. For the second time, no less. Since you try your utmost to make arguing with you difficult, let's stop right here. It serves absolutely no purpose, and I have better things to do. Please enjoy your utilitarian opinion without my interference.

User avatar
Werra
Posts: 840
Joined: Wed Jun 06, 2018 8:27 pm

Re: Designing Human Offshoots That Won't Wipe Out Humanity

Post by Werra »

Cthulhu wrote:
Tue Dec 28, 2021 3:32 pm
An employer doesn't create his workers from scratch, so it's not a particularly good argument.
Neither would tweaking an existing species be creating one from scratch. Negotiating employment is an everyday example of how a utilitarian way of thinking can be used in mutually beneficial ways. You seem to believe that genetic modification is automatically unethical and that it will happen without the consent of the modified. But why? Once we are talking about creating entire species, gene modification will just be a tool to use. Why would a new species not be open to further modifications that better help them exist in their niche or reduce their risk of coming into conflict with another species?
Cthulhu wrote:
Tue Dec 28, 2021 3:32 pm
But weren't we trying to avoid a "wipeout" scenario? Not accounting for the possible ethical ramifications may certainly speed it up.
It may, or it may not. You'd have to work with a specific scenario to justifiably make as strong a claim as you did. Being ethical and enjoying evolutionary success are certainly two independent states.
Cthulhu wrote:
Tue Dec 28, 2021 3:32 pm
I haven't even attacked you, merely provided my answers in a slightly humorous way to counter your inexplicably aggressive way of debating.
In this thread, my first post was neutral in tone, yet you chose to immediately attack my motives instead of contending with my position. In return, I mostly try to limit my answers to the minimum necessary to demonstrate the flaws in your argumentation. Please take note that I make an effort to use examples that aren't from science fiction.
It may not be necessary for the implementation, but handling the responsibility is crucial to ensure Humanity's survival.
When you say "ready to shoulder responsibility", you mean sophisticated ethics and behaviours that ensure peace and harmony between the created species and mankind. That's different from the responsibility of being able to deal with the consequences of one's decisions. Only the latter responsibility is needed to ensure humanity's survival.
Cthulhu wrote:
Tue Dec 28, 2021 3:32 pm
At first, I wanted to provide some proper counter-arguments, but your overly aggressive way of debating a purely fictional scenario spoiled any entertainment that the discussion could've provided. For the second time, no less. Since you try your utmost to make arguing with you difficult, let's stop right here. It serves absolutely no purpose, and I have better things to do. Please enjoy your utilitarian opinion without my interference.
It's been weeks since we discussed whether aliens can be racist and you're still not over that? Sorry man, I didn't mean to push you this deep into cognitive dissonance.

Demarquis
Posts: 437
Joined: Mon Aug 16, 2021 9:03 pm

Re: Designing Human Offshoots That Won't Wipe Out Humanity

Post by Demarquis »

@Cthulhu: "With the number of irresponsible parents out there, our ability to be friendly should be seriously questioned. But even if a child rises up against such a bad parent, it won't affect the human race as a whole, since both of them belong to it. With a different species, however, the chances of a "Planet of the Apes" scenario are quite high.

In conclusion, I must say that we are most probably not able to create a subspecies that won't be a threat to us, especially if it is designed after us, since we are threat number one to ourselves. Such a race would need to be either totally subservient or completely unlike us, so as not to be a rival and competitor."

I guess that depends on how much of an existential threat one thinks we are to ourselves. To continue with my children analogy: Human children are the greatest threat to the survival of humanity we know of! Every war ever fought has been fought by someone's children! They have carried out genocides, exploited other people for profit, and are now destroying the climate!

The flaw in this argument is obvious. "Children" are not a monolithic population, all of whom have the same agenda or cooperate with each other toward the same goals. They behave and interact as individuals, and so have to be engaged and judged as individuals. Whenever someone commits a destructive act, there are specific reasons why that one individual acted the way they did. Even in the case of large-scale populations following an evil leader, there are other people not following that leader. And in every individual case, kind treatment by trusted friends probably prevents more destructive evil than any other action. Someday, someone's child may end the human race, but that still won't be an argument against having children.

I see no significant difference with a deliberately designed species (or AI, for that matter). They won't act as a monolithic species, all of whom share the same goals or act together as one. They will be individuals, people like us. And like us, some of them will act destructively, and some cooperatively, and the majority probably both. In every case, they will have to be understood, interacted with, and judged as unique individuals. And I have no doubt that reciprocation with them would go a long way.

There is no argument against a designed species that doesn't also apply to the human race. We will be able to trust them to the exact same extent that we can trust each other.

Whether you regard that as an argument in favor of or against is a matter of individual opinion.

BTW: I think this may be my 100th post. Woo Hoo!

User avatar
Cthulhu
Posts: 910
Joined: Sat Dec 01, 2012 6:15 pm

Re: Designing Human Offshoots That Won't Wipe Out Humanity

Post by Cthulhu »

Demarquis wrote:
Fri Dec 31, 2021 2:07 am
I guess that depends on how much of an existential threat one thinks we are to ourselves. To continue with my children analogy: Human children are the greatest threat to the survival of humanity we know of! Every war ever fought has been fought by someone's children! They have carried out genocides, exploited other people for profit, and are now destroying the climate!

The flaw in this argument is obvious. "Children" are not a monolithic population, all of whom have the same agenda or cooperate with each other toward the same goals. They behave and interact as individuals, and so have to be engaged and judged as individuals. Whenever someone commits a destructive act, there are specific reasons why that one individual acted the way they did. Even in the case of large-scale populations following an evil leader, there are other people not following that leader. And in every individual case, kind treatment by trusted friends probably prevents more destructive evil than any other action. Someday, someone's child may end the human race, but that still won't be an argument against having children.
Except that our children belong to the same race, so even in the case of a catastrophic, world-ending failure, we'll sink or swim as a species. Even if there are fortunate (or not) survivors, those will still be humans only, radioactive mutants aside. With a different race thrown into the equation, however, a wholly new dimension of problems will be introduced. After all, interspecies conflict is quite a widespread topic in science fiction: Arioch's Tizik-tik vs. Hal-tik Umiaks, H. G. Wells' Morlocks vs. Eloi, or, in the case of the deep-sea adaptation, Frank Schätzing's "The Swarm" vs. Humanity.
Demarquis wrote:
Fri Dec 31, 2021 2:07 am
I see no significant difference with a deliberately designed species (or AI, for that matter). They won't act as a monolithic species, all of whom share the same goals or act together as one. They will be individuals, people like us. And like us, some of them will act destructively, and some cooperatively, and the majority probably both. In every case, they will have to be understood, interacted with, and judged as unique individuals. And I have no doubt that reciprocation with them would go a long way.
The hostility could be triggered by a multitude of factors: a perceived threat, resources, speciesism, or the attempt of a human faction or state to instrumentalize the subspecies (or parts thereof) against a rival. The engineered race might not even have had any sort of unity before, but conflicts are a fast track to constructing powerful group biases.
Demarquis wrote:
Fri Dec 31, 2021 2:07 am
There is no argument against a designed species that doesn't also apply to the human race. We will be able to trust them to the exact same extent that we can trust each other.

Whether you regard that as an argument in favor of or against is a matter of individual opinion.
Unfortunately, my experience has led me to the conclusion that Humans aren't ready for this. We are already too much of a threat to ourselves, and we are very good at inventing new ones; there's no real need to add such an exceptional factor on top of it.

However, since we are also not mature enough to forbid and prevent it entirely, I've proposed that such a race should either be subservient or occupy a wholly different habitat. If they should live alongside us, then something like the "the hammer cannot question the will of its maker" approach of the genejack factories from SMAC, where the "workers" are simply unable to think of anything but their work. Otherwise, different worlds may be settled with beings designed for mutually exclusive environments: methane-breathers for super-cold ones, or silicon-based "living dragons" for the opposite.

In my humble opinion, that would be the best way to prevent a wipe-out scenario, as per the thread's topic.

User avatar
Werra
Posts: 840
Joined: Wed Jun 06, 2018 8:27 pm

Re: Designing Human Offshoots That Won't Wipe Out Humanity

Post by Werra »

Cthulhu wrote:
Fri Dec 31, 2021 11:47 am
Demarquis wrote:
Fri Dec 31, 2021 2:07 am
I see no significant difference with a deliberately designed species (or AI, for that matter). They won't act as a monolithic species, all of whom share the same goals or act together as one. They will be individuals, people like us. And like us, some of them will act destructively, and some cooperatively, and the majority probably both. In every case, they will have to be understood, interacted with, and judged as unique individuals. And I have no doubt that reciprocation with them would go a long way.
The hostility could be triggered by a multitude of factors: a perceived threat, resources, speciesism, or the attempt of a human faction or state to instrumentalize the subspecies (or parts thereof) against a rival. The engineered race might not even have had any sort of unity before, but conflicts are a fast track to constructing powerful group biases.
Demarquis paints the more realistic picture. A species that can extinguish humanity needs to rival our numbers and possess the means to field well-equipped armies. Since humans already inhabit every part of the globe, that means this won't be achievable without extensive prior cooperation and contact between the species. This practically guarantees that the new species' reaction won't be monolithic.
Likewise, if the species doesn't already have a strong, global group identity, a few points of conflict with man won't make the entire species willing to commit genocide. Group biases are almost never constructed. They grow over long periods of time. Even the biases that propagandists have fanned were always amplifications of preexisting biases. At least, I can't think of any wholly artificial biases in human history.
All in all, the chances of a created species just deciding to get rid of mankind, and then succeeding, seem very slim. Due to the aforementioned requirements of such an undertaking, this species would practically have to first displace large parts of humanity through evolutionary pressure to build the necessary power base. And if that is the trajectory of population development, the proactive genocide of humanity seems a mere formality.
Cthulhu wrote:
Fri Dec 31, 2021 11:47 am
Unfortunately, my experience has led me to the conclusion that Humans aren't ready for this. We are already too much of a threat to ourselves, and we are very good at inventing new ones; there's no real need to add such an exceptional factor on top of it.
Don't be so glum; humanity has seen nearly uninterrupted growth for thousands of years. The times when human actions decreased the world population enough to stop humanity from growing are very few. Right now, I can only think of the Mongols using plague-infested corpses in a siege as an alleged example. Maybe a couple of the Chinese civil wars managed it too.

Also, you misjudge human restraint. We have a great survival instinct, both as individuals and in groups. Measures that have a high chance of backfiring or pose a serious risk of severe consequences are almost never used. Adverse scenarios with a long warning period we regularly ward off with great efficiency, or we change our approach in time. For a potentially genocidal artificial species, this means that humanity can shift its modus operandi when necessary. For example, in addition to making sure not to share an ecosystem with the species, it could also be created with severe limitations: infertility, mental inferiority, or the need for a specific diet. It's not currently ethical, but when has that ever stopped anybody?

Bamax
Posts: 1040
Joined: Sat May 22, 2021 11:23 am

Re: Designing Human Offshoots That Won't Wipe Out Humanity

Post by Bamax »

Werra wrote:
Sat Jan 01, 2022 2:48 am
Also, you misjudge human restraint. We have a great survival instinct, both as individuals and in groups. Measures that have a high chance of backfiring or pose a serious risk of severe consequences are almost never used. Adverse scenarios with a long warning period we regularly ward off with great efficiency, or we change our approach in time. For a potentially genocidal artificial species, this means that humanity can shift its modus operandi when necessary. For example, in addition to making sure not to share an ecosystem with the species, it could also be created with severe limitations: infertility, mental inferiority, or the need for a specific diet. It's not currently ethical, but when has that ever stopped anybody?

I guess the conclusion is twofold:

1. If you don't want an offshoot to overthrow you, then make them so they cannot. For example, it can safely be said that animals have as much chance as a snowball in a volcano of overthrowing us, because the ability and intelligence discrepancy is just that huge.

2. If you want a race that can defeat humans, then you design for that too... fully aware they might turn on you, so you build in a kind of failsafe: a virus or something that is like their kryptonite. Since it would be lethal mostly to them alone, that would be your countermeasure.

Want an example of a blatant attempt to supplant the human race?

Anytime you have a species that is stronger but just as smart and capable, and that can reproduce, you have a problem.

After all... that is the very basis of male dominance LOL.

gaerzi
Posts: 246
Joined: Thu Jan 30, 2020 5:14 pm

Re: Designing Human Offshoots That Won't Wipe Out Humanity

Post by gaerzi »

Werra wrote:
Sat Jan 01, 2022 2:48 am
Also, you misjudge human restraint. We have a great survival instinct, both as individuals and in groups. Measures that have a high chance of backfiring or pose a serious risk of severe consequences are almost never used. Adverse scenarios with a long warning period we regularly ward off with great efficiency, or we change our approach in time.
Global warming says hi! Scientists have warned about it for half a century, and we're still not doing anything to seriously tackle the issue of carbon emissions. Some countries (like Germany or Belgium) have even decided that the first priority was getting rid of nuclear power and falling back on coal and gas instead. And it's in this context, where we should be lowering our energy consumption as much as possible, that some people have decided to start a craze over extremely energy-wasteful cryptocurrencies and NFTs.

Demarquis
Posts: 437
Joined: Mon Aug 16, 2021 9:03 pm

Re: Designing Human Offshoots That Won't Wipe Out Humanity

Post by Demarquis »

"Anytime you have a species that is stronger but just as smart and capable that can reproduce you have a problem."

The solution is to design them to interbreed with us. Blood is thicker than water.

User avatar
Werra
Posts: 840
Joined: Wed Jun 06, 2018 8:27 pm

Re: Designing Human Offshoots That Won't Wipe Out Humanity

Post by Werra »

Demarquis wrote:
Sat Jan 01, 2022 4:39 pm
"Anytime you have a species that is stronger but just as smart and capable that can reproduce you have a problem."

The solution is to design them to interbreed with us. Blood is thicker than water.
So they have even more opportunity to outbreed the human genes? :|

gaerzi
Posts: 246
Joined: Thu Jan 30, 2020 5:14 pm

Re: Designing Human Offshoots That Won't Wipe Out Humanity

Post by gaerzi »

If they can breed with humans, they are humans. That's biology 101 stuff.

Demarquis
Posts: 437
Joined: Mon Aug 16, 2021 9:03 pm

Re: Designing Human Offshoots That Won't Wipe Out Humanity

Post by Demarquis »

Also, I'm not sure I care if they gradually assimilate us over the course of a couple of tens of thousands of years. I find that I have little or no loyalty to the human species as such. I care most about the people I know, and after that I just care about people. And I think of any self-aware sophont as a "person." So as long as they don't, you know, slaughter us all in an apocalypse scenario, I think I'm cool with it.

User avatar
Werra
Posts: 840
Joined: Wed Jun 06, 2018 8:27 pm

Re: Designing Human Offshoots That Won't Wipe Out Humanity

Post by Werra »

@gaerzi
https://en.wikipedia.org/wiki/Liger

@Demarquis
If they replace us by breeding with us, most people would probably agree with that. But what about a species that outbreeds us but does not interbreed with us?
