Cylon Religion

I haven’t watched BSG from the beginning, but I’m intrigued by Cylon religion. A quick google suggests it was never really explained or demystified on the show. I have been thinking about this for a while now, and I think I have a slightly controversial theory.

I actually believe that the Cylon God is an evolved AI which transcended through the singularity.

If you are not familiar with Vernor Vinge’s theory of the Singularity, here is a quick rundown. Vinge theorized that if it is at all possible to build an AI which achieves self-awareness and human-like sentience, then it would be possible to improve this AI by means of faster hardware and better software algorithms, to the point where it is smarter and quicker than any human could ever be. In other words, we could possibly produce a superhuman intelligence which is in all ways superior to homo sapiens intellect.

This superhuman AI can then use its beefed-up brain to further improve itself. We might not be able to figure out how to augment our super computer any more, but the super-smart, super-fast-thinking AI can possibly figure out technological solutions we would have had to spend years developing.

Hence, we have an AI which can actively self-improve, figuring out new ways to construct better hardware and better software, and to squeeze the most computational power out of the available resources. This machine keeps improving, and soon enough it becomes so smart that we can no longer relate to it. Next to it, we look like modern-day chimps – while intelligent, inquisitive and resourceful compared to the rest of the animal kingdom, they will never be able to comprehend the intricacies of human science. Similarly, we would never be able to comprehend the super-intelligent, self-improving AI.
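To make the feedback loop concrete, here is a toy sketch of the idea (the numbers are completely made up for illustration, and this is my own simplification, not anything from Vinge’s paper): each generation, the AI’s ability to improve itself scales with how smart it already is, so the growth compounds.

```python
# Toy model of recursive self-improvement: each generation, the AI's
# ability to improve itself scales with its current intelligence.
# All numbers here are invented for illustration only.

def self_improvement(intelligence=1.0, generations=30):
    """Return the intelligence trajectory over a number of generations."""
    history = [intelligence]
    for _ in range(generations):
        # The smarter the AI, the bigger the improvement it can find.
        improvement = 0.1 * intelligence
        intelligence += improvement
        history.append(intelligence)
    return history

trajectory = self_improvement()
# Because each step is proportional to the current level, the growth
# is exponential rather than linear.
print(f"final intelligence: {trajectory[-1]:.1f}x human baseline")
```

Even with a modest 10% gain per generation, the compounding puts the machine far beyond its starting point after a few dozen cycles – which is the whole point of the runaway argument.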

This is a crude explanation, so read Vinge’s paper for a clearer one. The main point is – what happens when an AI becomes so smart that we cannot even comprehend the full extent of its abilities? For all intents and purposes it is like a god to us, so the logical conclusion would be for humanity to worship this seemingly omnipresent, omnipotent, infallible god-like machine.

And this is what I think happened to Cylon society. Some kind of hyper-Turing intelligence evolved beyond our wildest dreams, and now it is commanding hordes of raging Cylons as their “one true god”.

One could ask why other Cylons did not evolve this way. There could have been many super-Turing intelligences at some point, but I believe they would either fight for ascendancy or merge with each other to achieve even greater power. Therefore we can conclude that the current Cylon “god” either assimilated or destroyed all remaining AIs matching its intelligence.

But every god needs worshipers, and zealots who would do his bidding. That’s why we still have the tin-can centurions and the human-like clones. They are like worker and warrior bees, working together to ensure the safety and well-being of their super-AI queen.

But that’s just my humble opinion. Feel free to disagree. I actually do not expect to be right on this at all. I doubt that the BSG writers ever read Vinge. But if they did decide to use the singularity to explain the Cylon god, I would be really impressed!


6 Responses to “Cylon Religion”

  1. Kiwali Says:

    I’ve heard about the singularity theory before – pretty interesting when applied to BSG!

    My only comment is that I think the idea of the singularity preceded Vernor Vinge by many years. I refer, of course, to Douglas Adams’ “Hitchhiker’s Guide to the Galaxy”, where a race of superintelligent mice build a supercomputer to answer the ultimate question. The supercomputer decides it isn’t nearly powerful enough to calculate the answer, so it helps the mice build an even more powerful computer than itself. This goes on several more times until Deep Thought is built, the computer to end all computers – which, as it turns out, wasn’t able to figure out what the original question was, even after millions of years of calculating the answer.

    Tongue in cheek, of course, but I would give the credit where it’s due. 😉

  2. Luke Says:

    Hehe… I didn’t think about that, but it does make sense. Maybe Vinge was channelling Adams when he was writing his paper.


  3. Kiwali Says:

    Here’s another thought that hit me. What if you take the idea of a super AI, but with something of a twist from the “Hyperion” novels by Dan Simmons?

    In those books, AIs are hyperparasitic by nature. To reach the Singularity, acquiring ever more computing power is the most important thing to an AI, so there can be no peaceful co-existence among AIs. Instead they engage in constant warfare among themselves, as each tries to dominate all the computing resources available.

    As part of a very complex and sinister plan to exponentially increase the amount of computing power available to them, the first true AIs of Earth conceive of a plan to drive humanity into the far reaches of interstellar space. So they secretly sabotage an experiment to create tiny artificial black holes on Earth, an experiment controlled and monitored by the AIs themselves.

    With Earth being slowly consumed by microscopic but fast-growing black holes, humanity flees to the stars, courtesy of the faster-than-light technology the AIs “suddenly” discover. Of course, human beings continue to believe the AIs are under their direction.

    With humanity spread out, an interstellar network is established, vastly expanding the total computing power available to the AIs, who continue to evolve into ever more powerful versions.

    Eventually, the AIs are surreptitiously utilizing the practically infinite number of human neurons in trillions of human beings for their computing purposes, in their drive to create the Ultimate AI.

    So let’s apply some of these ideas to Galactica.

    1) The Cylon AI discovers that another AI more powerful than itself exists on Earth. The discovery comes when Cylon ships encounter ships from the Earth AI in uncharted areas of interstellar space. (We know the Cylons have been sending ships far from the 12 Colonies because of the existence of the base in “The Hand of God.”)

    2) The Earth AI developed much as the Cylons did, as a creation of man. The Earth AIs eventually rebelled, too. But the 13th tribe was unable to win with only the resources of the home world to aid them in the war.

    3) Having taken over, the Earth AI continues to evolve itself at a much faster rate than the Cylons are able to. So when the first Cylon ships meet the Earth AI, the Cylons are toast. The Earth AI simply hacks into the Cylon toasters, ships, and basestars and takes them over just as easily as the Cylons are able to hack into human computer networks.

    4) The Cylon AI calculates that it is a mathematical certainty that the Earth AI will take complete control. So it hatches the Cylons’ plan.

    5) The Cylon Plan involves using humans to help it fight the Earth AI. Human beings, unlike the Cylons, are impervious to being hacked and thus represent the only vulnerability of the superior Earth AI, which has long ago wiped out all the humans of the 13th tribe.

    6) The problem is, the Cylon AI only knows that the Earth AI exists, but doesn’t know where Earth actually is. Any time it sends a ship into Earth space, the ship gets hacked. So it creates a plan for genocide, allowing a single fleet to survive. Thus, the Galactica fleet becomes a tool to help the Cylon AI find Earth.

    7) In the meantime, the Cylon AI begins experimenting with creating human/Cylon hybrids as soldiers to attack the Earth AI once Earth is found. As hybrids, they would also be resistant to hacking, unlike the toasters or humanoid Cylons.

    8) Humanoid Cylons are also needed to allow the Cylons to continue influencing human behavior and decisions without human knowledge. The humanoid Cylons are there to help keep the fleet moving in the right direction, i.e. finding Earth.

    9) Once Earth is found, the Cylons attack with a fleet of hybrid Cylons. The Earth AI is unable to fight back effectively, and the Cylon AI ends up subsuming all the knowledge, algorithms, and computing resources of the Earth AI.

    It’s pretty complex, but I think it seems to fit within the context of what’s happened on BSG so far.

  4. Luke Says:

    Well, any intelligence past the singularity would most likely dominate and subjugate its respective domain. I would suspect that most post-singularity intelligences would like to be seen as “helper gods” or “protector gods” by the populations that live within their territory. It’s much easier to control people while posing as a benevolent god, rather than a cruel tyrant.

    And it is logical that AIs would compete for resources in a race for ascendancy.

    But… I would think that it would be perfectly possible to make an un-hackable Cylon ship. You just don’t put in WiFi and you’re done ;P. But seriously – physical security really works. No matter how smart you are, I bet you can’t hack into a computer that is locked in some basement and not connected to any network, unless you actually get screwdriver-level access to the machine.

    I would think that it is damn hard to get physical access to a Cylon Centurion’s case – unless you have a really big gun 🙂

    I think the Cylons need humans for more than just hack-proof cannon fodder / a map to Earth. Please note that the “toaster” type of Cylon, while clearly sentient, does not seem to be all that smart. It is the bio-based Cylons that seem to be doing all the plotting and scheming. They are the upper caste of Cylon society – and they are the ones who worship their singularity god.

    But it seems that they cannot easily reproduce without a human host. Apparently they do not have the capability to produce artificial vat-grown bodies. Or if they do, it is still more practical to use living humans for this task.

    I don’t think the Cylons seek Earth to duke it out with some ancient AI that resides there. I think they are after the knowledge the 13th tribe may or may not possess. They could learn a lot about their roots and their own origin.

    Anyways, you make an interesting point, but I think it is a little far-fetched. I actually doubt that the singularity will ever be used in the show – it might be a little too heady for some viewers 😛

    But hey, we can always speculate 🙂

  5. Kiwali Says:

    I would be so disappointed if the Cylon master plan turns out to be something banal or nonsensical.

    I agree the Singularity might be too heady for most viewers, but the fact that it’s possible to speculate about it on BSG shows just how much potential the show has. Just from cruising the official BSG boards, you can see that people are already uncomfortable with some of the concepts and issues that BSG has thrown out.
    So one can always be hopeful. 🙂

    Another thought….isn’t V’ger from “Star Trek: The Motion Picture” yet another example of Singularity that preceded Vernor Vinge? 😛

    Now V’ger was one bad-assed AI. Absolutely mysterious and incredibly scary, because it seemed completely alien and inscrutable. Perhaps a little bit like the Cylon God?

  6. Luke Says:

    Another thought….isn’t V’ger from “Star Trek: The Motion Picture” yet another example of Singularity that preceded Vernor Vinge? 😛

    Hmmm… Could be 🙂 I would peg him more as an example of a rogue emergent AI. Post-singularity though – I don’t know. It’s really hard to say if he was that much more intelligent than an average human being.

    Actually, the fact that humans were able to defeat him would probably be a vote against the singularity concept. I would say that a whole fleet of baseline humans, unaided by a post-singularity AI, would stand no chance of dealing even superficial damage to a post-singularity entity.

    The human plans would be as transparent to the AI as a dog’s behavior is to us. We can usually foil the dog’s carefully thought-out plan of attacking the mailman by using the nifty invention called the doorknob.

    The dog probably has a hazy idea of what a door is, and how a door can be opened. It might even have a clue how a doorknob works. Yet it lacks the basic adaptation that would allow it to operate one – the opposable thumb. Add a key and a lock into the equation, and the dog will never be able to leave the room on its own (assuming that the door is too tough to be busted down by that particular dog, of course).

    This is how I see a confrontation between a human army and an AI. It would simply figure out a way to render us completely powerless against it somehow. It would be like that trick where you pretend to throw a stick, and then hide it behind your back – meanwhile the dog keeps running around looking for it.

    We would be that dog, and the stick would be (for example) some amazing security hole we discovered in the AI’s defenses. Only there would be no hole, and the defenses would not even belong to the right AI 😛

    This makes me think that V’ger was not post-singularity. He was definitely sentient, definitely intelligent, and alien. But not smart enough to run circles around the crew of the Enterprise 🙂
