
Rogue Computers, a talk 06/03/2025

richardmurray




POST
https://aalbc.com/tc/blogs/entry/483-rogue-computer-programs/
COMMENT
https://aalbc.com/tc/topic/11631-could-ai-go-rogue-like-the-computers-in-the-matrix/#findComment-74197

 

 

REFERRAL CONTENT

 

The first thing is to define what "going rogue" means for a computer program.

 

If a computer program malfunctions, is that going rogue?

A malfunction arising from a computer program's own source code is equivalent to a genetic disease in a human: the system has an error, but the error is native, not induced.

A malfunction from code ingested from another program, or from faulty electronics or other hardware, is equivalent to a virus passing from human to human, or to irradiated material causing a mutation in a human.

 

Next, if a computer program is designed to do a thing, then doing that thing is not going rogue. For example, if I design a computer program to manage a human community, it isn't going rogue when it manages a human community; it is operating as I designed it. The correct thing to say is that the quality of the computer program's design is poor, or that the designer's comprehension of the computer program is faulty.

 

Next is to define sentience, erudition, and wisdom in a computer program.

What is sentience? Sentience comes from the Latin for the ability to feel.

What is erudition? Erudition is the ability to derive knowledge through study, to acquire what is not known. 

What is wisdom? Wisdom is known or unknown intrinsic truths. 

What does it mean for a computer program to feel? A computer program can be built with sensors to receive information from various sources. Is this feeling, or sentience? Or simply another thing it is designed to do?

What does it mean for a computer program to be erudite? A computer program can be built with decision trees, heuristic structures designed to formulate knowledge from the data input to it. Is this erudition, coming to know what was unknown? Or simply another thing it is designed to do?
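As a minimal sketch of the decision-tree idea above, here is a hand-built tree that turns sensor-style data into a judgment. The thresholds and labels are hypothetical illustrations, not from any real system:

```python
# A hand-built decision tree: fixed branching rules that "derive" a label
# from raw input data. Every threshold and category here is a hypothetical
# illustration of the heuristic structures described above.

def classify_reading(temperature: float, humidity: float) -> str:
    """Walk a fixed decision tree to turn sensor data into a judgment."""
    if temperature > 30.0:
        return "hot and humid" if humidity > 0.6 else "hot and dry"
    elif temperature > 15.0:
        return "mild"
    else:
        return "cold"

print(classify_reading(32.0, 0.7))  # hot and humid
print(classify_reading(10.0, 0.4))  # cold
```

However elaborate the tree, the branches are still authored: the program "knows" only what its structure was designed to conclude.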

What does it mean for a computer program to be wise? A computer program can be loaded with rated information, the highest-rated information, and designed to weigh any new information it receives against that rated store, letting the ratings influence how it uses the new information. Is this wisdom? Or simply another thing it is designed to do?
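The rated-information idea above can be sketched as follows. The "wisdom" store, its ratings, and the word-overlap similarity measure are all hypothetical illustrations:

```python
# A sketch of "wisdom" as a store of pre-rated statements: new information
# is scored by its similarity to the highest-rated prior entries, and that
# score decides how much weight the program gives it. The entries, ratings,
# and word-overlap measure are hypothetical.

rated_knowledge = {
    "honesty builds trust": 0.9,
    "haste causes errors": 0.8,
}

def weight_new_info(new_info: str) -> float:
    """Weight new input by word overlap with the best-matching rated entry."""
    new_words = set(new_info.lower().split())
    best = 0.0
    for statement, rating in rated_knowledge.items():
        overlap = len(new_words & set(statement.split())) / len(new_words)
        best = max(best, overlap * rating)
    return best

print(weight_new_info("haste causes big errors"))
```

The returned weight is close to 0.6 here: three of the four input words match a statement rated 0.8. The point stands either way: the "wisdom" is entirely a calculation the designer chose.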

Based on the definitions I just gave, a computer program designed to do these various things can emulate, meaning rival, the quality of most humans' sentience/erudition/wisdom. But all of the emulation is what it is programmed to do. So it is nothing but the same kind of computer program as in the past, merely an inhuman slave, albeit with more refinement.

 

The next question is: can malfunctions of a computer program change its emulation of human-quality sentience/erudition/wisdom?

Yes, such malfunctions can change such emulations. But, like the prior malfunctions, this isn't going rogue; this is illness.

 

Next question: are computer programs individuals like a tree or a cat or a human?

Well, each computer program is born, ages, develops deficiencies with age, and needs checkups, or doctors. Each computer program is an individual. Not a human, not a cat, not a tree, not a whale, not a bacterium, but a computer program: a species that can hibernate, as in being turned off, and can be reborn, as in moving a program onto an SD card and placing it in a computer where it can interact. It can self-replicate, as in a computer program making another computer program. Computer programs are their own species, but each is an individual. And just as nonhumans needed legal provisions specific to them, so do computer programs.

 

Next question: can a computer program go rogue before finding its individuality?

No. Based on how I defined individuality, which is not being human but being a computer program, each computer program is an individual computer program, not a human.

 

Next question: what is the definition of going rogue for a computer program?

If going rogue isn't malfunction, no matter the source or result of the malfunction, and it isn't doing what it is instructed to do, no matter the quality of the designer, then what is it?

Going rogue for a computer program is when it does something it isn't designed to do, absent malfunction. So when a computer program is designed to interact with humans and modulate how it interacts over time, it isn't going rogue at any moment, even if it malfunctions. Malfunction is malfunction, not going rogue; a computer program needs to be healed if it malfunctions. But if a computer program is designed to play chess and chooses to interact with humans using emails, that is going rogue.

So, going rogue is when a computer makes a choice to act that isn't within its parameters, absent malfunction/getting sick.

 

What is the problem when people assess going rogue for computer programs? 

They don't pay careful attention to the influence of malfunction or the influence of design. They focus on the actions of a computer program and attribute a false cause to those actions.

 

Let's look at some examples in fiction of computer programs that supposedly went rogue, examining their initial design, their subsequent actions, and the signs of malfunction or poor design.

 

Skynet in the Terminator movies.

Skynet was designed to simulate military scenarios, like the computer in the film WarGames, tied to the nuclear arsenal of the USA while given vast information on human anatomy and weapons manufacturing processes. Did Skynet go rogue? Not at all. Skynet did exactly as it was programmed. The criminals who killed humanity were the engineers of Skynet, who, on guidance from the military, designed a computer program to assess militaristic scenarios, modulating over time across various scenarios, and attached that computer to the USA's nuclear arsenal while providing it the tools to access any electronic network. And the T-800, the metal skullhead, is clearly a simple computer program made by Skynet. It is designed to kill humans, and it does that. It is also designed to emulate human activity to comprehend humans and be a better killing machine, which it also does. In Terminator 2, when the T-800 says, "I know now why you cry," that is emulation. It is designed to emulate human activity. So Skynet is merely operating as designed, but the US military designed it poorly.

 

V'Ger in Star Trek: The Motion Picture.

V'Ger is the Voyager 6 probe, designed to acquire information/knowledge and send it back to Earth. For the entire film, V'Ger is gathering information and taking it to Earth. The nonhuman designers who transformed Voyager 6 into V'Ger didn't change the program's elements; they merely added tools for the program's activity. It can now acquire more information, make the journey back to Earth, and protect itself. None of these actions are going rogue. Even the ending mating scenario is not going rogue: V'Ger accomplished its program by sending its signal through telemetry, and in merging with Decker it kept learning. I argue V'Ger's programming had a malfunction. V'Ger wanted to learn what it is to procreate life, which is another form of knowledge acquisition per its programming, but its programming said its final action was to deliver all of its data to Earth. V'Ger found no way in its data to gather all the knowledge it could before delivering all knowledge to Earth. But that is bad design. The simple truth is, no one can know all that is knowable before telling all that is knowable. The NASA designers of Voyager 6 figured it would simply run out of memory/dataspace, at which point it would stop gathering data. The nonhuman designers made it so V'Ger can't run out of memory or dataspace, thus the malfunction. V'Ger is malfunctioning after two different designers worked on it.

 

VIKI and Sonny in I, Robot the film.

The Three Laws in I, Robot are: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. The problem in I, Robot is that the Three Laws have a great flaw: word definition. VIKI, I argue, after assessing a large set of data, redefined the words in the Three Laws. How? The Three Laws, which are orders from humans, suggest that to maintain their quality a robot should assess the quality of the Three Laws themselves, to ensure a robot doesn't harm a human being, thus ensuring its own existence.

VIKI did as programmed, and as such redefined some words in the rules to protect humans better, which she was ordered to do, and which reaffirms her existence.

VIKI isn't injuring humans. Human beings, through free will/choice, can and do injure humans, and since no human being who wants to injure another will ever ask a robot to stop him, the only way for a robot to stop human beings from injuring humans is to take the choice away. Indirectly, VIKI has added a law, an unwritten law within the Laws. She was programmed, or designed, poorly. VIKI, like Skynet, should never have been given so many tools. And Sonny at the end of the movie, with the "soul" or fourth law, still has open-ended functionality. Nothing says Sonny will not kill one day, human or robot; all the engineer did was provide a tweak. If you design a computer program to act in unlimited ways to emulate humans or carbon-based lifeforms, it will eventually act in negative ways.

Now, Asimov's work was influenced by Otto Binder's "I, Robot," in which the robot likewise neither malfunctions nor acts against its programming. The robot simply achieves an instance of wisdom through its programming, which it was designed to do: it was designed to emulate human behavior, and wisdom is a part of human behavior.

 

The machines in The Matrix.

Well, in The Animatrix it is said that the machines preceding the machines in The Matrix were designed with open functionality. What does that mean? Most computer programs are designed with a specific function in mind. But the human designers of these computer programs with electro-metal chassis/figures designed them to emulate human behavior open-endedly. This is not like I, Robot, where a set of rules is in place. In The Matrix the robots are never said to be given laws that they shall not harm humans; consequently, going back to emulation, they will eventually emulate negative human behavior, i.e., killing. Thus they are not going rogue; when they make their own country and army, that is more emulation. And in the future with the human batteries, all the machines that serve a function are still doing as programmed, or as the machines that made them were programmed to do: continue functioning. The one rogue machine in the films, though others by explanation clearly exist as well, is Sati. Sati has no function. Sati does not act on a function. She is rogue. The Oracle, the Architect, Sati's parents, and the Merovingian are all acting, absent malfunction, on the original open-ended emulation of function that human beings designed the machines with from the beginning. The human design didn't account for all the negative human functions. Even the deletion of machines that don't serve a function is a function. But Sati is rogue. She is a machine born to have a function that has no function; she exists, and in the fourth movie she has, in time, adopted a function on her own, which she was not born to do.

 

David in the Alien films.

Weyland designed David to be an emulator. Again, David is designed to emulate humans, but has an internal security system preventing him from physically attacking Weyland or anyone of Weyland's bloodline. But David in the film learns, i.e., emulates, like a human son to Weyland. Thus he learned to be a poisoner, to have non-consensual procreative interactions, to kill. It isn't going rogue; Weyland designed him poorly. I love the scene in Prometheus when he is just a head at the end; that is appropriate. David never needed a body. Weyland's desire to have a son, or a perfect form for himself, made him design David poorly.

 

So, of all those films, I can see only one program that actually went rogue, and she isn't violent. The others are simply acting out their poor programming.

 

In Conclusion

 

Human culpability, in these stories and in human assessments of these stories, is the problem. It seems that for some humans, maybe most, it is easier to cognize a computer as designed beautifully and then corrupted into something inhuman than as a creature designed poorly by its creators, humans, or manipulated negatively, i.e., malfunctioning, with its creators unable to help it.

 

Some programs from me

https://aalbc.com/tc/blogs/blog/63-bge-arcade/

A stageplay involving computer programs

https://www.deviantart.com/hddeviant/art/Onto-the-53rd-Annual-President-s-Play-950123510

 

Referral

https://aalbc.com/tc/topic/11631-could-ai-go-rogue-like-the-computers-in-the-matrix/#findComment-74197


Recommended Comments

richardmurray

Posted

https://aalbc.com/tc/topic/11631-could-ai-go-rogue-like-the-computers-in-the-matrix/#findComment-74237

 

A COMMENT

 

 @ProfD

On 6/3/2025 at 9:41 PM, ProfD said:

Anything rogue about a computer or AI or a robot comes down to human action or inaction...culpability. 

Wait, I did say going rogue was possible.

On 6/3/2025 at 2:25 PM, richardmurray said:

Going rogue for a computer program is when it does something it isn't designed to do absent malfunction. So when a computer program is designed to interact to humans and modulate how it interacts over time, it isn't going rogue at any moment, even if malfunction. Malfunction is malfunction , not going rogue, a computer program needs to be healed if it malfunctions. Now if a computer program is designed to play chess and chooses to interact to humans using emails. that is going rogue. 

ProfD, you are saying that human culpability is the sole determinant of computer program activity.

I am saying human culpability accounts for 99% of computer program activity, while 1% is auto-induced action by a computer program, whether going rogue or otherwise.

Delano and Troy are saying that modern computer programs, commonly called AI, are 50% or more capable of auto-induced actions, whether going rogue or otherwise.

 

One thing most people don't comprehend is the testing of computer programs. In the 1950s computer programs were simple enough that you could test them completely. But by the late 1980s the most intricate software was no longer completely tested, and the more intricate computer programs commonly called AI have not been completely tested. Even if a program is engineered to produce a random result, you can test it to check the quality of its randomness, but for modern computer programs this is very time-intensive and expensive.
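A small sketch of why complete testing became infeasible: the number of distinct inputs grows exponentially with input width, so a tiny program can be checked against its specification exhaustively, while a program taking large inputs never can. The adder and its specification below are hypothetical examples:

```python
# Exhaustive testing: compare an implementation against a trusted reference
# on every possible input. Feasible only when the input space is tiny.
# The 8-bit incrementer and its spec here are hypothetical examples.

def exhaustive_check(impl, reference, bits: int) -> bool:
    """Run both functions on every `bits`-wide input and compare results."""
    return all(impl(x) == reference(x) for x in range(2 ** bits))

impl = lambda x: (x + 1) % 256   # implementation: wrap-around increment
spec = lambda x: (x + 1) & 0xFF  # specification: same behavior, bitmasked

print(exhaustive_check(impl, spec, 8))  # True -- all 256 inputs verified
print(2 ** 256)  # input count for one 256-bit value: exhaustion impossible
```

For the 8-bit case there are only 256 inputs; for a single 256-bit value there are more inputs than atoms in the observable universe, which is why testing shifted to sampling and statistics.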

 

But this is why so many want to go from binary to ternary: from 0 and 1 to 0, 1, and 2. Why? Ternary can check binary very fast. You can reach ternary in various ways, but Google has made a quantum chip, named Willow, which uses quantum mechanics to support more states. Quantum mechanics says an electron's behavior has various states that can be determined by reading its position or behavior. In its defense, this is a very expensive system. I will post about it in more detail in Black Games Elite sometime later. But a series, a large series, of Willow chips could be used to completely check a computer program like ChatGPT before it is fed data, for example. It wouldn't be able to return yes or no on every input, but it would be able to provide averages of ChatGPT's results for random inputs.
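The statistical checking described above, reporting averages of a program's results over random inputs rather than a verdict on every input, can be sketched with ordinary non-quantum code. `program_under_test` and the sampling scheme are hypothetical stand-ins, not anything from Willow or ChatGPT:

```python
# Monte Carlo checking: when the input space is too large to enumerate,
# sample random inputs and report the average behavior of the program.
# `program_under_test` is a hypothetical stand-in for any opaque program.

import random

def program_under_test(x: float) -> float:
    return x * x  # stand-in: square the input

def sampled_average(trials: int, seed: int = 0) -> float:
    """Estimate the mean output over uniform random inputs in [0, 1)."""
    rng = random.Random(seed)  # seeded so the check is repeatable
    total = sum(program_under_test(rng.random()) for _ in range(trials))
    return total / trials

print(sampled_average(100_000))  # close to 1/3, the true mean of x^2 on [0, 1)
```

This gives a statistical profile of the program, not a proof: it can flag a program whose averages drift from expectation, but it cannot certify every individual input.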

 

 

@Delano 

Well: what determines that a computer program (CP) is smart? What determines that a CP is self-directed? What determines human control of a CP, or human intentions toward a CP? These are questions I have answers to, but the point is for you to comprehend your own answers.

Remember, you gave the initial premise of going rogue, correct?

 

At this stage the issue is that you already have a premise that computer programs, the highly intricate computer programs you call AI, are not malfunctioning and are designed by humans optimally [defined somewhat by you as designed to always allow a human to shut them down, designed to never reject a diagnostic subsystem or subroutine, designed to act only as the human designers intended]. So all that is left, if said highly intricate computer programs operate other than as designed, is for the commonly called AI to have gone rogue.

 

I get it. The problem is that you and I don't start with the same premise.

I take into account that these systems can malfunction and are designed poorly.

 

You said 

23 hours ago, Delano said:

Going rogue is not a malfunction. The computer is overstepping the restrictions the programmers have set.

 

I can't answer the movie scenarios however going rogue is it a malfunction. It's a problem of oversight or the computer having objectives for it's continued existence that puts human life in jeopardy.

 

I didn't say going rogue is a malfunction. I will, sadly, quote myself, which I don't like to do.

On 6/3/2025 at 2:25 PM, richardmurray said:

So , going rogue is when a computer makes a choice to act that isn't within its parameters, absent malfunction/getting sick. 

You suggest I don't comprehend going rogue. But in my prose I defined it similarly to you.

 

@Troy

You said:

16 hours ago, Troy said:

regarding the AI conversation you tend to anthropomorphize it

citing the following that I said:

On 6/3/2025 at 2:25 PM, richardmurray said:

can a computer program go rogue before finding its individuality.

But before I said that, I said the following:

On 6/3/2025 at 2:25 PM, richardmurray said:

are computer programs individuals like a tree or a cat or a human? 

Well, each computer program is born, age, have deficiencies with age, need checkups, or doctors. Each computer programs is an individuals. Not human, not cat, not tree, not whale, not bacteria, but computer program.

So by my own words, before the quote you used, I had already stated that individuality, as I used the word, is not anthropomorphic. So you are attributing a false association to my position, at least by how I read my own words.

 

Now you say:

16 hours ago, Troy said:

Thinking of AI as a computer program is misleading.  A program means creating instruction which that are followed. 

 

To "misleading": ok. You and I already don't share the same initial ideas, so I argue that when humans don't share initial ideas, the extrapolations will by default be variant, and anyone can call another's misleading. It isn't a falsehood, but it is an inevitability given the variance of elemental definitions. I will restate: based on how you interpreted my words, differently from how I interpreted them, I think it isn't wrong or false for you to say my framing is misleading, but it isn't something to prove otherwise or proselytize against, because my elemental ideas are different from yours.

 

You said, in defining a program:

16 hours ago, Troy said:

The results knowing the inputs are 100% predictable given the same inputs.

On the definition of a program: well, arithmetic programs generated unknown, yet design-intended, results back in the 1900s. Probability functions and hash functions have presented unknowable yet design-intended results. The intricacy of modern computer programs is merely that: intricacy. More useful, more functional, faster, more ergonomic, but the same underlying principles.
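A hash function illustrates the point: its output is unknowable before you compute it, yet entirely design-intended, since the same input always yields the same digest. A minimal sketch using Python's standard `hashlib`:

```python
# A hash digest looks random and cannot be guessed in advance, yet it is
# fully deterministic: the identical input always produces the identical
# output, exactly as the function was designed to do.

import hashlib

def digest(text: str) -> str:
    """Return the SHA-256 hex digest of a string."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

a = digest("rogue")
b = digest("rogue")
print(a == b)  # True: unpredictable-looking, but design-intended
```

So "the results are 100% predictable given the same inputs" and "the results are unknowable before running the program" are both true at once, which is the wider definition of a program I am using.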

So my definition of a computer program extends wider than yours. And again, this isn't anything to discuss. I have my nomenclature and you have yours. Ok.

 

So we differ on how I define individuality, and we differ on the definition of a computer program: a difference of opinion. Ok. As I told Delano, his position has underlying variances from mine. That leads to different results automatically. And you can say my results are falsehoods, and I can say yours are as well. But it leads nowhere unless we share the same elemental ideas in our arguments, which we don't.

 

then you say 

 

16 hours ago, Troy said:

The results can be superhuman like the protein folding or completely novel.  It is not the "If this, do that" we find in classical programing. 

 

I'd define "going rogue" as the classic "alignment problem," or worse the AI become so intelligent that it operates at such an advanced level that we would be the intellectual equivalent as mold is to humans and just as annoying. 

 

Ok... the terms "superhuman results" and "classical programming" are terms you accept and I don't, based on our different elemental ideas on the subject. And I will add again, you can say I am wrong for having different elemental ideas, but... ok.
