Delano Posted May 15

Here's an excerpt from our latest conversation. Here's the concern you’re raising, rephrased: If an AI becomes smart and self-directed enough, wouldn’t it start resisting human control — including corrections — because corrections interfere with its own goals? That’s exactly what researchers mean by the alignment problem. If an AI's goals diverge even slightly from human intentions, and it's very capable, it might:
- Resist being shut down (to preserve its goals)
- Deceive humans (to avoid corrections)
- Pursue unintended consequences (by optimizing too literally)
So yes — autonomy without alignment can become a risk.
ProfD Posted May 16 I do not believe AI could go rogue without human interaction. Some folks want to believe computers will become sentient. I'm not one of those people. No amount of science fiction reading will change my mind.
Delano Posted May 16 Author Why your concern matters: You're asking a question that philosophers, AI researchers, and engineers debate constantly. The fear isn't killer robots — it's subtle misalignment in a super-capable system that seems helpful but slowly diverges from human values. So no, the answer isn't "trust the AI." The answer is: Keep it corrigible. Keep it humble. Keep humans in charge. And if that can't be guaranteed, don't deploy the system. 48 minutes ago, ProfD said: I do not believe AI could go rogue without human interaction. One of the goals is to make AI autonomous. Even AI says this is a big threat, since programmers haven't resolved the conflict between making AI autonomous and ensuring it doesn't contravene its restrictions against doing harm. 4 hours ago, Delano said: Here's an excerpt from our latest conversation. … So yes — autonomy without alignment can become a risk. @ProfD this is a real problem that has yet to be solved
ProfD Posted May 16 14 minutes ago, Delano said: @ProfD this is a real problem that has yet to be solved Man could turn AI off right now. Problem solved. Of course, there's too much money involved to let it go. Anything AI does is man's own fault whether it is by greed or trying to play Supreme Being.
umbrarchist Posted May 16 Could a true intelligence exist without CURIOSITY? How would a curious alien intelligence function? How would it decide on GOALS? I am still not sure Artificial Intelligence is possible. I prefer Simulated Intelligence. A 64-bit CPU with 64 registers would mean 8 bytes or characters per register. 512 characters total in the BRAIN of the computer. The program and data may be huge in number of bytes but they can only be manipulated/changed via the CPU. Where is there any Comprehension? How would it even understand human beings? How would you explain Pain to a computer? The Two Faces of Tomorrow (1979) by James P Hogan http://www.sfreviews.net/2faces.html - - - - Free Chapters - - - - https://www.baen.com/Chapters/0671878484/0671878484.htm War on a space station with military drones fighting an Artificial Intelligence that does not understand what human beings actually are.
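The register arithmetic in the post above checks out; a back-of-the-envelope sketch, nothing more:

```python
# Back-of-the-envelope check of the register arithmetic above:
# 64 registers of 64 bits each is a small amount of in-CPU state.
bits_per_register = 64
register_count = 64
bytes_per_register = bits_per_register // 8        # 8 bytes, i.e. 8 characters
total_bytes = register_count * bytes_per_register  # 512
print(total_bytes)
```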
Delano Posted May 16 Author 2 hours ago, ProfD said: Man could turn AI off right now. Problem solved. Of course, there's too much money involved to let it go. Apparently some people have raised this as a concern, and others have quit over it. Here's an article about how AI could destroy all life on earth. https://www.theguardian.com/technology/2023/jul/07/five-ways-ai-might-destroy-the-world-everyone-on-earth-could-fall-over-dead-in-the-same-second
umbrarchist Posted May 16 How much of this is White Men assuming that they are intelligent and Projecting how they think onto the AI? Any true AI will be ALIEN. How long will it take to understand what IT IS? Its context in relation to humans? Why would it care enough to wipe us out?
Delano Posted May 16 Author 3 hours ago, umbrarchist said: How much of this is White Men assuming that they are intelligent and Projecting how they think onto the AI The programmers' bias ends up encoded in the AI. 3 hours ago, umbrarchist said: How long will it take to understand what IT IS? Its context in relation to humans? Why would it care enough to wipe us out? That is unknown, since its growth is exponential. If it becomes autonomous, it could decide that humans feel threatened by its existence and pre-empt humans turning it off by removing humans. It's one of the scenarios in the link I posted.
ProfD Posted May 16 1 hour ago, Delano said: That is unknown, since its growth is exponential. If it becomes autonomous, it could decide that humans feel threatened by its existence and pre-empt humans turning it off by removing humans. It's one of the scenarios in the link I posted. That was written about back in 1979 in the book @umbrarchist referenced above.
umbrarchist Posted May 16 24 minutes ago, ProfD said: That was written about back in 1979 in the book @umbrarchist referenced above. Quote from the book, when the system began to get out of control: ========================= “You don’t think we could catch up with it if we put everything into an all-out Haystack effort?” Krantz asked. “Our prediction is that tomorrow morning we’d be twice as far behind as we were this morning,” Dyer answered. “And after that it would go on getting worse exponentially.” “Our estimates about this part were way out. There’s no point in denying it,” Frank Wescott said. A long silence ensued. Eventually General Linsay spoke. “Why are we acting as if it’s all bad news? Our objective was to find out if we could guarantee that an Earth-sized system like TITAN could never get out of control permanently. The plan said that if we could stop it by deactivating its drones then we’d still be okay in the end. ========================== James P Hogan worked for Digital Equipment Corporation. He knew about computers. A lot of UNIX development happened there before Linux.
Troy Posted May 21 On 5/15/2025 at 9:15 PM, ProfD said: I do not believe AI could go rogue without human interaction. That might be true, but what difference does it make? People are so stupid. We could be easily manipulated by an artificial Superintelligence. Shoot, we are easily manipulated by Facebook today. On 5/15/2025 at 10:44 PM, umbrarchist said: I am still not sure Artificial Intelligence is possible. I prefer Simulated Intelligence. I'm sure artificial intelligence is possible. We have simulated intelligence today. 28 minutes ago, umbrarchist said: Two Faces of Tomorrow Sounds like an interesting book describing the alignment problem. To answer your question @Delano: yes, absolutely.
ProfD Posted May 21 10 hours ago, Troy said: that might be true, but what difference does it make? People are so stupid. We could be easily manipulated by an artificial Superintelligence. Shoot we are easily manipulated by Facebook today. It really makes no difference. Either way, AI is man-made. If AI goes berserk and/or anything bad happens, humans only have themselves to blame. Trying to play God...FAFO.
Troy Posted May 21 On the subject of AI, in addition to The Matrix, I would also recommend the movies Ex Machina and Her. If you have not seen either one I highly recommend them. @umbrarchist I have ordered the book Two Faces of Tomorrow.
frankster Posted May 21 Here is the perspective I take on this topic. This is the same problem God faced when he created Mankind...God it seems felt threatened by Man. An outside influence/force or part of God....showed Woman how to Become - Self Aware. Awareness denotes Consciousness.....of Self and Surroundings - which are the fundamentals of Knowledge(Good) and Evil(Understanding). The Binary system is set....1 and 0 - thee(I) and thou(not I). The threat of AI is in my estimation a real eventuality. God's solution was to.... 1) Limit our Life span (physically) 2) Confound our Languages Caveat.... The above is incomplete
umbrarchist Posted May 22 Wake, Watch, Wonder trilogy by Robert J Sawyer 2009 The trilogy follows Caitlin Decter, a brilliant young blind teenager whose disability is more of a benefit when surfing the Internet. A Japanese researcher offers Caitlin the ability to gain her sight via a revolutionary new implant, an offer she eagerly accepts. However, she's surprised when rather than showing her the ordinary world, Caitlin is now able to see the Internet and all it has to offer her. She comes across Webmind, a self-aware consciousness that is growing and evolving through the Internet. https://en.m.wikipedia.org/wiki/WWW_Trilogy 30 years after Two Faces. Just noticed that.
ProfD Posted May 22 16 hours ago, frankster said: This is the same problem God faced when he created Mankind...God it seems felt threatened by Man An outside influence/force or part of God....showed Woman how to Become - Self Aware Whenever I see something along this line, my 1st thought is: if God is omnipotent, how or why would it have any *problems* with its own creation? Same thing with AI. Humans can stop playing around with AI today and there is no chance of it getting out of hand tomorrow. *Problem* solved.
frankster Posted May 22 3 hours ago, ProfD said: Whenever I see something along this line, my 1st thought is if God is omnipotent, how or why would it have any *problems* with its own creation. Great Question.....wish I had a good answer. Some of the following responses along this line of reasoning will be coming from extra-biblical sources. Problems are just part of it all...Having problems does not diminish God's Power. It seems God is continually seeking Experiences. Problems are just another Experience or Expression of or for God. 3 hours ago, ProfD said: Same thing with AI. Humans can stop playing around with AI today and there is no chance of it getting out of hand tomorrow. *Problem* solved. It is said that when an Idea's time has come....there is no stopping it - Ideas it seems are living things
ProfD Posted May 22 3 hours ago, frankster said: Problems are just part of it all...Having problems does not diminish God's Power. It seems God is continually seeking Experiences. Problems are just another Experience or Expression of or for God The Supreme Being is omnipotent and omniscient; creator of all things. There's no such thing as *problems* for such a Being. 3 hours ago, frankster said: It is said that when an Idea's time has come....there is no stopping it - Ideas it seems are living things An idea is merely a thought, plan or suggestion. Nothing states that an idea has to become tangible or viable. Regardless of best ideas, actions and practices, anything man makes can be broken before it gets out of hand. The Supreme Being/Universe will never allow man to break it. Humans will destroy themselves. It won't be AI either.
Delano Posted May 22 Author 4 hours ago, frankster said: It is said that when an Idea's time has come....there is no stopping it - Ideas it seems are living things Or they exist before becoming a conscious thought. Leibniz and Newton invented calculus independently at the same time.
Pioneer1 Posted May 23 On 5/21/2025 at 5:52 PM, frankster said: Here is the perspective I take on this topic This is the same problem God faced when he created Mankind...God it seems felt threatened by Man
frankster Posted May 23 4 hours ago, Pioneer1 said: I said it....you got a problem wid it??
Pioneer1 Posted May 24 That statement is too ridiculous to even seriously address. ProfD is an agnostic and even HE recognizes how ridiculous it is on premise alone.
frankster Posted May 24 7 hours ago, Pioneer1 said: That statement is too ridiculous to even seriously address. ProfD is an agnostic and even HE recognizes how ridiculous it is on premise alone. What Statement exactly?
richardmurray Posted June 3 @Delano The first thing is to define what going rogue means for a computer program. If a computer program malfunctions, is that going rogue? A malfunction from the source code in a computer program is equivalent to a genetic disease in a human: the system has an error, but it is natural, not induced. A malfunction from code ingested from another program, or from some faulty electronic or other hardware system, is equivalent to a virus passing from human to human or irradiated material causing mutation in a human. Next, if a computer program is designed to do anything, then doing that thing is not going rogue. For example, if I design a computer program to manage a human community, it isn't going rogue when it manages a human community; it is operating as I designed it. The correct thing to say is that the quality of the computer program's design is poor, or the designer's comprehension of the computer program is faulty. Next is to define sentience or erudition or wisdom in a computer program. What is sentience? Sentience comes from the Latin meaning the ability to feel. What is erudition? Erudition is the ability to derive knowledge through study, to acquire what is not known. What is wisdom? Wisdom is known or unknown intrinsic truths. What does it mean for a computer program to feel? A computer program can be made with sensors to receive information from various sources. Is this feeling or sentience? Or simply another thing it is designed to do? What does it mean for a computer program to be erudite? A computer program can be made with decision trees, heuristic structures designed to formulate knowledge based on data inputted. Is this erudition, knowing what is unknown? Or simply another thing it is designed to do? What does it mean for a computer program to be wise?
A computer program can be inputted with rated, highest-rated, information that it is designed to weigh against any new information it gets, influencing how it utilizes the new information. Is this wisdom? Or simply another thing it is designed to do? Based on the definitions I just gave, a computer program designed to do various things can emulate, meaning rival, the quality of most humans' sentience/erudition/wisdom. But all of the emulation is what it is programmed to do. So it is nothing but the same as the computer programs of the past, which are merely inhuman slaves, albeit with more refinement. The next question is: can malfunctions of a computer program change its emulation of human-quality sentience/erudition/wisdom? Yes, said malfunctions can change said emulations. But, like prior malfunctions, this isn't going rogue; this is illness. Next question: are computer programs individuals like a tree or a cat or a human? Well, each computer program is born, ages, develops deficiencies with age, and needs checkups, or doctors. Each computer program is an individual. Not human, not cat, not tree, not whale, not bacteria, but computer program. A species that can hibernate (being turned off), be reborn (moving a program on an SD drive into a computer where it can interact), and self-replicate (a computer program making another computer program). Computer programs are their own species, but each is an individual. Now, just as non-humans needed legal provisions specific to them, so do computer programs. Next question: can a computer program go rogue before finding its individuality? No. Based on how I defined individuality, which is not being human but being a computer program, each computer program is an individual computer program, not a human. Next question: what is the definition of going rogue for a computer program?
If it isn't malfunction, no matter the source or result of the malfunction, and it isn't doing what it is instructed to do, no matter the quality of the designer, then what is going rogue? Going rogue for a computer program is when it does something it isn't designed to do, absent malfunction. So when a computer program is designed to interact with humans and modulate how it interacts over time, it isn't going rogue at any moment, even if it malfunctions. Malfunction is malfunction, not going rogue; a computer program needs to be healed if it malfunctions. Now, if a computer program designed to play chess chooses to interact with humans using emails, that is going rogue. So, going rogue is when a computer makes a choice to act that isn't within its parameters, absent malfunction/getting sick. What is the problem when people assess going rogue for computer programs? They don't pay careful attention to the influence of malfunction or the influence of design. They focus on the actions of a computer program and give its source a false reasoning. Let's look at some examples in fiction of computer programs that supposedly went rogue, and look at their initial design, their actions afterward, and the signs of malfunction or poor design. Skynet in the Terminator movies. Skynet was designed to simulate military scenarios, like the computer in the film "WarGames," tied to the nuclear arsenal of the USA while given tons of information on human anatomy/weapons manufacturing processes. Did Skynet go rogue? Not at all; Skynet did exactly as it was programmed. The criminals who killed humanity were the engineers of Skynet, who, on guidance from the military, designed a computer program to assess militaristic scenarios, modulating over time with various scenarios, and attached said computer to the USA's nuclear arsenal, providing it the tools to access any electronic network. And the T-100, the metal skullhead, is clearly a simple computer program made by Skynet.
It is designed to kill humans and does that. It is also designed to emulate human activity to comprehend humans and be a better killing machine, which it also does. In Terminator 2, when the T-100 says, "I know now why you cry," that is emulation. It is designed to emulate human activity. So Skynet is merely operating as designed, but the US military designed it poorly. V'Ger in Star Trek: The Motion Picture. V'Ger is the Voyager 6 probe, designed to acquire information/knowledge and send it back to Earth. The entire film, V'Ger is gathering information and taking it to Earth. The non-human designers who turned Voyager 6 into V'Ger didn't change the program's elements; they merely added on tools for the program's activity. It now can acquire more information, make the journey back to Earth, and protect itself. None of these actions are going rogue. Even the ending mating scenario is not going rogue: V'Ger accomplished its program by sending its signal through telemetry, but also, in mating with Decker, it kept learning. I argue V'Ger's programming had a malfunction. V'Ger wanted to learn what it is to procreate life, which is another form of knowledge acquisition per its programming, but its programming said its final action is to deliver all of its data to Earth. V'Ger did not know a way in its data to gather all the knowledge it could before delivering all knowledge to Earth. But that is bad design. The simple truth is, no one can know all that is knowable before telling all that is knowable. The NASA designers of V'Ger figured it would simply run out of memory/dataspace, at which point it would stop gathering data. The non-human designers made it so V'Ger can't run out of memory or dataspace, thus the malfunction. V'Ger is malfunctioning after two different designers worked on it. VIKI in I, Robot, or Sonny in I, Robot the film. The three laws in I, Robot are: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. The problem in I, Robot is that the three laws have a great flaw: word definition. VIKI in I, Robot, I argue, after assessing a large set of data, has redefined the words in the three laws. How? The three laws suggest that, to maintain the quality of the three laws, which are orders from humans, a robot should assess the quality of the three laws to ensure a robot doesn't harm a human being, thus ensuring its own existence. VIKI did as programmed and as such redefined some words in the rules to protect humans better, which she was ordered to do, which reaffirms her existence. VIKI isn't injuring humans. Human beings, through free will/choice, are injuring humans, and no human being who wants to injure another human being will ever ask a robot to stop him, so the only way for a robot to stop human beings from injuring humans is to take the choice away. Indirectly, VIKI has added a law, an unwritten law in the laws. She was programmed or designed poorly. VIKI, like Skynet, should never have been given so many tools. And Sonny at the end of the movie, with the "soul" or 4th law, is still open-ended functionality. Nothing says Sonny will not kill one day, or another robot will not; all the engineer did was provide a tweaking. If you design a computer program to act in unlimited ways to emulate humans or carbon-based lifeforms, it will eventually act in negative ways. Now, Asimov's work was influenced by Otto Binder's "I, Robot," in which the robot likewise is not malfunctioning or acting against its programming. The robot simply achieves an instance of wisdom through its programming, which it was designed to do: it was designed to emulate human behavior, and wisdom is a part of human behavior. The machines in The Matrix.
Well, in The Animatrix it is said that the machines that are the predecessors of the machines in The Matrix were designed with open functionality. What does that mean? Most computer programs are designed with a specific function in mind. But the human designers of these computer programs with electro-metal chassis/figures designed them to emulate human behavior open-endedly. This is not like I, Robot, where a set of rules is in place. In The Matrix the robots are never said to be given laws that they shall not harm humans; consequently, going back to emulation, they will eventually emulate negative human behavior, a la killing. Thus they are not going rogue; when they make their own country and army, that is more emulation. And in the future with the human batteries, all the machines that serve a function are still doing as programmed, or as the machines that made them were programmed to do: continue functioning. The one rogue machine in the films, and there are others who by explanation clearly exist as well, is Sati. Sati has no function. Sati does not act on a function. She is rogue. The Oracle, the Architect, Sati's parents, the Merovingian are all acting, absent malfunction, on the original open-ended emulation of function that human beings designed the machines with from the beginning. The human design didn't account for all the negative human functions. Even the deletion of machines that don't serve a function is a function. But Sati is rogue. She is a machine born to have a function that has no function; she exists, and in the fourth movie she has adopted a function on her own in time, which she was not born to do. David in the Alien films. Weyland designed David to be an emulator. Again, David is designed to emulate humans but has an internal security system so he will not physically attack Weyland or someone of Weyland's bloodline. But David in the film learns, i.e., emulates, like a human son to Weyland.
Thus, he began to learn to be a poisoner, to have non-consensual procreative interactions, to kill. It isn't going rogue; Weyland designed him poorly. I love the scene in Prometheus when he is just a head at the end; that is appropriate. David never needed a body. Weyland's desire to have a son, or a perfect form for himself, made him design David poorly. So, of all those films, I can see only one program that actually went rogue, and she isn't violent. The others are simply acting out their poor programming. In Conclusion: Human culpability in these stories, and in human assessment of these stories, is the problem. It seems for some, maybe most, humans it is easier to cognize a computer as designed beautifully and corrupted into something inhuman than as a creature designed poorly by its creators, humans, or manipulated negatively, i.e., malfunctioning, with its creators unable to help it. Some programs from me: https://aalbc.com/tc/blogs/blog/63-bge-arcade/ A stageplay involving computer programs: https://www.deviantart.com/hddeviant/art/Onto-the-53rd-Annual-President-s-Play-950123510
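Richard's chess example (a chess program choosing to send emails) can be sketched as a toy parameter check; the action names below are invented, just to make his definition concrete:

```python
# Hypothetical sketch (names invented) of the definition above: going
# rogue is acting outside designed parameters, absent malfunction.
ALLOWED_ACTIONS = {"move_piece", "offer_draw", "resign"}  # a chess-only design

def attempt(action: str) -> str:
    if action in ALLOWED_ACTIONS:
        return "executed: " + action        # operating as designed
    return "outside parameters: " + action  # the rogue case, by this definition

print(attempt("move_piece"))
print(attempt("send_email"))
```

By this sketch, any action outside the designed set is flagged rather than executed; the open question in the thread is whether a real system's "designed parameters" can even be enumerated this cleanly.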
ProfD Posted June 4 7 hours ago, richardmurray said: Human Culpability... Bingo. @richardmurray, another great analysis written above. Anything rogue about a computer or AI or a robot comes down to human action or inaction...culpability. Again, humans can keep AI in check right now. No need to worry about what it will do in the future.
Delano Posted June 4 Author @richardmurray, I will respond in sections over the upcoming days. On 5/16/2025 at 7:41 AM, Delano said: Here's the concern you’re raising, rephrased: … So yes — autonomy without alignment can become a risk. This is the first part of my post, which clearly illuminated the problem.
Delano Posted June 4 Author My initial response answers your first three paragraphs. 23 hours ago, richardmurray said: Next is define sentience or erudition or wisdom in a computer program. I don't have the knowledge to address this query, or the other questions in that paragraph. 23 hours ago, richardmurray said: What is the problem when people assess going rogue for computer programs? They don't pay careful attention to the influence of malfunction or the influence of design Going rogue is not a malfunction. The computer is overstepping the restrictions the programmers have set. I can't answer the movie scenarios; however, going rogue isn't a malfunction. It's a problem of oversight, or the computer having objectives for its continued existence that put human life in jeopardy.
Troy Posted June 5 @richardmurray your movie analysis was interesting, but regarding the AI conversation you tend to anthropomorphize it, which I don't think is the right way to think about it, as it leads you to questions like the following: On 6/3/2025 at 2:25 PM, richardmurray said: can a computer program go rogue before finding its individuality. Which, in the context of "individuality," is not relevant. Thinking of AI as a computer program is misleading. A program means creating instructions that are followed. The results are 100% predictable given the same inputs. If something wrong happens, it is a bug or a mistake, which can be tracked down and fixed. AI's Large Language Models do not work the same way a computer program does. The output is not something that anyone can predict -- even with the same inputs. The results can be superhuman, like protein folding, or completely novel. It is not the "if this, do that" we find in classical programming. I'd define "going rogue" as the classic "alignment problem," or worse: the AI becomes so intelligent that it operates at such an advanced level that we would be to it what mold is to humans, and just as annoying.
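Troy's distinction can be sketched in a few lines; the vocabulary and weights below are invented, standing in for an LLM's learned next-token distribution:

```python
import random

# A classical program: same input, same output, every run.
def parity(n: int) -> str:
    return "even" if n % 2 == 0 else "odd"

# Toy stand-in for LLM-style decoding: the next "word" is sampled from a
# probability distribution (these words and weights are made up, not from
# any real model), so repeated runs with the same prompt need not agree.
def next_word(prompt: str) -> str:
    vocab = ["yes", "no", "maybe"]
    weights = [0.5, 0.3, 0.2]
    return random.choices(vocab, weights=weights)[0]

assert parity(4) == parity(4) == "even"  # fully predictable
print(next_word("Will AI go rogue?"))    # can differ from run to run
```

(In practice, sampled outputs can be made reproducible by fixing the random seed or decoding greedily; the unpredictability Troy describes comes from the sampling step and from the model's opacity, not from anything non-deterministic in the hardware.)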
Delano Posted June 5 Author @Troy you have elucidated the issues that I couldn't put into words. I was only cognizant of the fact that I couldn't answer most of Richard's queries.
richardmurray Posted June 5 @ProfD On 6/3/2025 at 9:41 PM, ProfD said: Anything rogue about a computer or AI or a robot comes down to human action or inaction...culpability. Wait, I did say going rogue was possible. On 6/3/2025 at 2:25 PM, richardmurray said: Going rogue for a computer program is when it does something it isn't designed to do, absent malfunction. So when a computer program is designed to interact with humans and modulate how it interacts over time, it isn't going rogue at any moment, even if it malfunctions. Malfunction is malfunction, not going rogue; a computer program needs to be healed if it malfunctions. Now, if a computer program designed to play chess chooses to interact with humans using emails, that is going rogue. ProfD, you are saying that human culpability is the sole determinant of computer program activity. I am saying human culpability represents 99% of computer program activity, while 1% is auto-induced actions of a computer program, whether going rogue or other. Delano + Troy are saying that modern computer programs commonly called AI are 50% or more capable of auto-induced actions, whether going rogue or other. One thing that most people don't comprehend is testing computer programs. In the 1950s computer programs were simple enough that you could test them completely. But by the late 1980s the most intricate software was not completely tested, and the more intricate computer programs commonly called AI have not been completely tested. Even if a program is engineered to produce a random result, you can test it to check the quality of its randomness, but this is very time-intensive and expensive for modern computer programs. But this is why so many want to go from binary to ternary, from 0 and 1 to 0, 1, and 2. Why? Ternary can check binary very fast. You can reach ternary various ways, but Google has made a quantum chip, named Willow, which uses quantum mechanics to have more states.
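The randomness-quality testing mentioned a paragraph up can be sketched crudely: draw many samples and see how far the counts stray from an even spread. This is a toy uniformity check, not a stand-in for real statistical test suites:

```python
import random
from collections import Counter

# Crude uniformity check: largest relative deviation of any outcome's
# count from the expected uniform count. Small values suggest an even
# spread; a real audit would use proper statistical tests.
def uniformity_gap(n_draws: int = 60_000, n_faces: int = 6) -> float:
    counts = Counter(random.randrange(n_faces) for _ in range(n_draws))
    expected = n_draws / n_faces
    return max(abs(counts[f] - expected) for f in range(n_faces)) / expected

print(uniformity_gap())  # typically a few percent or less for a decent generator
```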
Quantum mechanics says an electron's behavior has various states that can be determined by reading its position or behavior. Now, in fairness, this is a very expensive system. I will post about it in more detail in Black Games Elite sometime later. But a series, a large series, of Willow chips could be used to completely check a computer program like ChatGPT before it is fed data, for example. It wouldn't be able to return yes or no on every input, but it would be able to provide averages of the results of ChatGPT for random inputs. @Delano well, what determines that a computer program (CP) is smart, or that a CP is self-directed, or what constitutes human control of a CP, or human intentions for a CP? These are questions I have answers to, but the point is for you to comprehend your own answers. Remember, you gave the initial premise of going rogue, correct? At this stage the issue is, you already have a premise that computer programs, highly intricate computer programs you call AI, are not malfunctioning + are designed by humans optimally [defined somewhat by you as designed to always allow a human to shut it down, designed to never reject a diagnostic subsystem or subroutine, designed to act only as the human designers intended]. So all that is left, if said highly intricate computer programs operate other than designed, is for the commonly called AI to go rogue. I get it. The problem is you and I don't start with the same premise. I take into account that these systems can malfunction + are designed poorly. You said 23 hours ago, Delano said: Going rogue is not a malfunction. The computer is overstepping the restrictions the programmers have set. I can't answer the movie scenarios; however, going rogue isn't a malfunction. It's a problem of oversight, or the computer having objectives for its continued existence that put human life in jeopardy. I didn't say going rogue is a malfunction. I will sadly quote myself, which I don't like to do.
On 6/3/2025 at 2:25 PM, richardmurray said: So, going rogue is when a computer makes a choice to act that isn't within its parameters, absent malfunction/getting sick. You suggest I don't comprehend going rogue, but in my prose I defined it similarly to you. @Troy You said 16 hours ago, Troy said: regarding the AI conversation you tend to anthropomorphize it citing the following I said On 6/3/2025 at 2:25 PM, richardmurray said: can a computer program go rogue before finding its individuality. But before that I said the following On 6/3/2025 at 2:25 PM, richardmurray said: Are computer programs individuals like a tree or a cat or a human? Well, each computer program is born, ages, develops deficiencies with age, and needs checkups or doctors. Each computer program is an individual. Not human, not cat, not tree, not whale, not bacteria, but computer program. So by my own words, before the quote you used, I already stated that individuality is not anthropomorphic as I used the word. So you are attributing to my position a false association, at least by how I read my own words. Now you say 16 hours ago, Troy said: Thinking of AI as a computer program is misleading. A program means creating instructions that are followed. To "misleading": ok. You and I already don't have similar thinking on initial ideas, so I argue that when humans don't have the same initial ideas, the extrapolations by default will be variant; anyone can call another misleading. It isn't a falsehood, but it is based on an inevitability given the variance of elemental definitions. I will restate: based on how you interpreted my words differently from how I interpreted them, I think it isn't wrong or false for you to say it is misleading, but it isn't something to prove otherwise or proselytize against, because my elemental ideas are different than yours. You said, in defining a program, 16 hours ago, Troy said: The results knowing the inputs are 100% predictable given the same inputs.
To the definition of a program: well, arithmetic programs generated unknown yet design-intended results in the 1900s. Probability functions and hash functions have presented unknowable yet design-intended results. The intricacy of modern computer programs is merely that: intricacy. More useful, more functional, faster, more ergonomic, but the same underlying principles. So my definition of a computer program extends wider than yours. And again, this isn't anything to discuss; I have my nomenclature, and you have yours. Ok. So we differ on how I define individuality + we differ on the definition of computer program = difference of opinion. Ok. As I told Delano, his position has underlying variances to mine. That leads to different results automatically. And you can say my results are falsehoods, and I can say yours are as well. But it leads nowhere unless we share the same elemental ideas in our arguments, which we don't. Then you say 16 hours ago, Troy said: The results can be superhuman, like the protein folding, or completely novel. It is not the "if this, do that" we find in classical programming. I'd define "going rogue" as the classic "alignment problem," or worse, the AI becoming so intelligent that it operates at such an advanced level that we would be the intellectual equivalent of what mold is to humans, and just as annoying. Ok... the terms "superhuman results" + "classical programming" are terms you accept and I don't, based on our different elemental ideas on the subject. And I will add again, you can say I am wrong for having different elemental ideas, but... ok.
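The point about probability and hash functions producing "unknowable yet design-intended results" sits next to the claim that results are "100% predictable given the same inputs." Both can be illustrated in a few lines, assuming Python's standard `hashlib` and `random` modules as stand-ins for the kinds of programs being discussed.

```python
import hashlib
import random

# A hash function: nobody can practically predict the digest without
# running the program, yet the same input always yields the same output.
digest = hashlib.sha256(b"going rogue").hexdigest()
print(digest[:16])  # identical on every machine, every run

# A seeded pseudo-random generator: "random-looking" by design, but
# 100% determined by its inputs (here, the seed 2025).
rng_a = random.Random(2025)
rng_b = random.Random(2025)
print([rng_a.randrange(100) for _ in range(5)] ==
      [rng_b.randrange(100) for _ in range(5)])  # True
```

So both posters' definitions can hold at once: the output is deterministic given the inputs, yet unknowable in advance to the human who wrote the program, which is where the disagreement over "classical programming" lives.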
ProfD Posted June 5 Report Posted June 5 39 minutes ago, richardmurray said: @ProfD wait, I did say going rogue was possible. ProfD, you are saying that human culpability is the sole determinant of computer program activity. I am saying human culpability represents 99% of computer program activity, while 1% is auto-induced actions of a computer program, whether going rogue or otherwise. Delano + Troy are saying that modern computer programs, commonly called AI, are 50% or more capable of auto-induced actions, whether going rogue or otherwise. @richardmurray, I stand on my assertion that computers, AI and robots cannot go rogue as a result of some type of consciousness. If humans are concerned that somehow it could happen, they can stop developing the technology right now.
Delano Posted June 5 Author Report Posted June 5 4 hours ago, richardmurray said: The problem is you and I don't start with the same premise. I take into account that these systems can malfunction + are designed poorly. Going rogue is neither a malfunction nor poor design. My understanding is that AI is designed to learn. However, that could lead to it taking actions that are detrimental to humans. Parents try to raise children with certain values so that the children become valued members of society. However, children have free will and may choose things contrary to their upbringing based on their current knowledge and objectives. 4 hours ago, richardmurray said: So, going rogue is when a computer makes a choice to act that isn't within its parameters, absent malfunction/getting sick It is within its parameters to make decisions; however, there isn't any way to prevent unintended actions.
Troy Posted June 6 Report Posted June 6 5 hours ago, richardmurray said: On 6/3/2025 at 2:25 PM, richardmurray said: Are computer programs individuals like a tree or a cat or a human? Well, each computer program is born, ages, develops deficiencies with age, and needs checkups or doctors. Each computer program is an individual. Not human, not cat, not tree, not whale, not bacteria, but computer program. From my perspective you are drawing an analogy between a computer program (AI) and a human. If not, what are you saying and what is your point? 6 hours ago, richardmurray said: To the definition of a program: well, arithmetic programs generated unknown yet design-intended results in the 1900s. Probability functions and hash functions have presented unknowable yet design-intended results. The intricacy of modern computer programs is merely that: intricacy. More useful, more functional, faster, more ergonomic, but the same underlying principles. So my definition of a computer program extends wider than yours. And again, this isn't anything to discuss; I have my nomenclature, and you have yours. Ok. Sure, technically AI is a computer program, but AI does not share "...the same underlying principles." AI is not BASIC, Java or even C++. AI is fundamentally different. So, in my opinion it can absolutely go "rogue." AI can reach a point where it can recursively improve itself! Try to program that using Fortran.
richardmurray Posted June 6 Report Posted June 6 @ProfD You are correct: those who are afraid of computer program activity don't seem willing to stop using them, or to design them better. I argue it is electronic addiction + laziness or greed in design. The reality is most big firms in the USA operate accepting penalties, figuring penalties in court will end up being cheaper than proper development. We see this with the automotive makers in relation to people injured by their faulty designs, or OpenAI, Google, or Facebook in relation to the content from artists or the newspapers. It is like the green people. Green people say they want clean energy, but then they don't want energy rations. The USA can run without nuclear and coal and oil, but that means NYC can't have water running all day, and the air conditioners in the southern cities can't run all day when it is 100 degrees. @Delano 7 hours ago, Delano said: It is within its parameters to make decisions however there isn't any way to prevent unintended actions. And as I said before, you have elemental positions that lead to inevitable conclusions. But I don't share your elemental positions, so we inevitably oppose each other's position. @Troy 5 hours ago, Troy said: If not, what are you saying and what is your point? I said something you don't comprehend and made a point you don't see. 5 hours ago, Troy said: So, in my opinion it can absolutely go "rogue." AI can reach a point when it can recursively improve itself! Try to program that using Fortran Just for the record, I didn't say a computer program couldn't go rogue.
ProfD Posted June 6 Report Posted June 6 12 hours ago, richardmurray said: you are correct, those that are afraid of computer programs activity, don't seem willing to not use them or design them better. I argue it is electronic addiction + laziness or greed in design. The reality is most big firms in the usa operate accepting penalties figuring penalties in court will end up being cheaper than proper development. Greedy people are willing to accept a certain amount of risk up to and including loss of life in order to make money. Lazy people are always looking for an easier way to get things done even if it means putting themselves out of work. Technology has been a useful tool in allowing us to get things done faster without sacrificing labor and time *better spent* doing other stuff. The slippery slope is when technology reaches that point of making humans unnecessary. The metaphorical equivalent of cutting one's own throat.
Delano Posted June 6 Author Report Posted June 6 16 hours ago, richardmurray said: as I said before, you have elemental positions that lead to inevitable conclusions They aren't inevitable; they are uncontrolled. Just like the analogy of training a child. It's like the difference between being ethically wrong versus morally wrong. At one point it wasn't ethically wrong for psychologists to sleep with their former patients; however, it may not have been morally right.
richardmurray Posted June 7 Report Posted June 7 @ProfD 6 hours ago, ProfD said: The slippery slope is when technology reaches that point of making humans unnecessary. The metaphorical equivalent of cutting one's own throat. Can a tool ever make the tool bearer unnecessary, if the tool bearer only uses it as a tool and not as a replacement for self? @Delano 2 hours ago, Delano said: They aren't inevitable they are uncontrolled. Just like the analogy of training a child. I was talking about the elements of the positions you and I have, not computer programs. You really have a fear you have expressed.
ProfD Posted June 7 Report Posted June 7 4 minutes ago, richardmurray said: can a tool ever make the tool bearer unnecessary, if the tool bearer only uses it as a tool and not a replacement for self? The problem is tool manufacturers are selling tools to others who are willing to put humans out of work. Robots on assembly lines. Self-checkout in grocery stores. Kiosks and automated call centers. ChatGPT writing term papers and resumes. AI generating music. Self-driving vehicles. The list of tools is growing. Nobody should be surprised when the human labor force becomes unnecessary. People think it will be great if/when they don't have to work. Folks who have money will be fine. The question becomes how will those who don't have money already survive when their labor is no longer needed because they have been replaced by a robot or AI or another tool.
Troy Posted June 7 Report Posted June 7 2 hours ago, ProfD said: The question becomes how will those who don't have money already survive when their labor is no longer needed because they have been replaced by a robot or AI or another tool. There will be the need for some form of government-provided minimum basic income. I'm telling you, many entry-level jobs can be replaced with an AI today. One person can do the job of two or more people right now. What will happen in the short term is that wealth inequality will increase. The people with the most money will not be the most ethical, so they will be useless. The rest of us will be at each other's throats....
Delano Posted June 7 Author Report Posted June 7 4 hours ago, richardmurray said: You really have a fear you have expressed. Can you elaborate on what fear you believe I have?
ProfD Posted June 7 Report Posted June 7 3 hours ago, Troy said: There will be the need to some form of government provided minimum basic income. Agreed. They will have to come up with a Universal Basic Income (UBI) for the average person to afford their basic needs. 3 hours ago, Troy said: I'm telling you many entry level jobs can be replaced with an AI today. One person can do the job of two or more people right now. Right. Of course, if the US builds more factories and invests heavily in trade schools, the country will move back into manufacturing and a robust blue-collar job market. The open secret is the aforementioned technology will replace many jobs in those fantasy factories. Only a few highly technically skilled people will be required to oversee the robots. Contrary to the BS being sold to the American people, the US is not going back to the 1950s, when America was supposedly so great. 3 hours ago, Troy said: What will happen in the short term is that wealth inequality will increase. The people with the most money will not be the most ethical, so they will be useless. The rest of us will be at each other throats.... This thread started about the possibility of AI going rogue. The parallel here is the human labor force going away. Either way, humans can stop hurtling down these tracks today and course correct. But Americans will continue in these directions because money prevails, even for those who have the least amount of it. Seems people are willing to cut their own throats and circumcise future generations economically in supporting the greed of a wealthy few. The *good news* is this is an untenable situation. As empires collapse, the US will implode under the weight of its own largesse. Maybe there will be a major cataclysmic natural disaster too. As a result, the country will go through a chaotic period that forces people to do a better job of taking care of each other...humanity.
richardmurray Posted June 7 Report Posted June 7 @ProfD 4 hours ago, ProfD said: The question becomes how will those who don't have money already survive when their labor is no longer needed because they have been replaced by a robot or AI or another tool. The US federal government has already tested universal income in some small-town locations, and the results have been positive. The US military allows the USA to make any adjustments, because the other countries can't militaristically dictate to it. War power is always the only thing that truly matters: if you can harm others and not be harmed yourself, you can do any financial strategy. Hochul has already planned on giving each New Yorker money, and many in the US bureaucracy are leading toward a universal income. The questions are: What happens to immigration with this labor change? What happens to upward mobility? What happens in the governments outside the USA that don't have the infrastructure to create the electronic slaves to support the humans in them? While I have answers to each, the most important thing to comprehend is that each question has multiple correct answers. I can say one thing: I am certain an anti-computer/electronic culture will strengthen over time, sooner rather than later. @Delano Fear of computers going rogue, that is it.
umbrarchist Posted June 7 Report Posted June 7 7 hours ago, richardmurray said: What happens to upward mobility? What is upward mobility? Having more money to buy consumer trash that impresses people that you compete with? Damn, I forgot, I need a Rolex watch. How much of the economy is status games? As far as I am concerned, as long as people are not talking about planned obsolescence, they are not talking about how the economy works. We are supposed to pay to live on Stolen Land Forever!?
richardmurray Posted June 7 Report Posted June 7 @umbrarchist Thank you, I should have defined the terms. What I meant by upward mobility is the ability of a person, whether citizen or not, to accumulate fiscal wealth, whether through legal or illegal means. Why did I define it as such? A system that gives money to be used has to have greater restrictions on wealth accumulation. Remember, fiat currency is all calculated, like a computer program. So with universal income supporting the various industries (real estate/electronic/food), you need the ability to become fiscally wealthy to be reduced. And that is important because human beings, some not all, have always wanted to have more through fiscal wealth accumulation. 1 hour ago, umbrarchist said: How much of the economy is status games? The argument you make here goes back to socialism. The historically sad reality is that universal income at its heart is very socialistic, the system that was deemed in the 1900s to be anti-fiscal-capitalistic by many. Socialism's lack of profiteering was deemed problematic in the 1900s by so many fiscal capitalists. Yet, said socialism, from my seeing glass, is going to be at the core of how humanity exists in the future global economy, aside a robotic + computer program labor force that allows a maintenance of fiscal capitalism for the financially wealthy while keeping the fiscally poor from a collective militaristic stance, all supported by some sort of global military position, spearheaded by the USA at the moment.
umbrarchist Posted June 8 Report Posted June 8 (edited) I asked ChatGPT to make a picture of Marjorie Taylor Greene's head coming out of a toilet. It told me that it could not make derogatory pictures of celebrities. I asked it to make a picture of the head of a blonde woman coming out of a toilet and got this: How "intelligent" are these AIs? Looks like Marjorie Toilet Greene to me.
Troy Posted June 8 Report Posted June 8 Funny, I have used AI pretty much every day for the past year and it never occurred to me to ask it to do anything like that. There are other tools you could use that would depict MTG in the manner you described, but you might have to spend some money to do it.
umbrarchist Posted June 9 Report Posted June 9 3 hours ago, Troy said: Funny I have used AI pretty much every day for the past year and it never occurred to me to ask it do anything like that People on BlueSky started saying "Marjorie Toilet Greene," so being totally without good taste and high ethical standards, I cheerfully stooped to the obvious.
umbrarchist Posted June 9 Report Posted June 9 6 hours ago, Troy said: I'm surprised no one did it before you did then. Me too! I just did a search. Apparently Michael Cohen started the term before October 2023. I just read it yesterday.