Delano
Members-
Posts
5,955 -
Joined
-
Last visited
-
Days Won
293
Content Type
Profiles
Forums
Blogs
Events
Everything posted by Delano
-
There's a Yiddish saying. If you are wealthy, you're smart, good looking and you sing well.
-
I came across this video. https://m.youtube.com/watch?v=JQbjS0_ZfJ0&pp
-
The Music lives
-
Could AI go rogue like the computers in The Matrix
Delano replied to Delano's topic in Culture, Race & Economy
Can you elaborate on what gear you believe I have -
Could AI go rogue like the computers in The Matrix
Delano replied to Delano's topic in Culture, Race & Economy
They aren't inevitable they are uncontrolled. Just like the analogy of training a child. It's like the difference between someone that is ethically wrong versus morally wrong. At one point or want wrong ethically for psychologist to sleep with their former patients. However it may not be morally right -
Could AI go rogue like the computers in The Matrix
Delano replied to Delano's topic in Culture, Race & Economy
Going rogue is neither a malfunction nor poor design. My understanding is that AI is designed to learn. However that could lead to it taking actions that are detrimental to humans. Parents tried to raise children with certain values. So that the children becoming valued members of society. However children have free will and may choose things country to their upbringing based on their current knowledge and objectives. It is within its parameters to make decisions however there isn't any way to prevent unintended actions . -
Could AI go rogue like the computers in The Matrix
Delano replied to Delano's topic in Culture, Race & Economy
@Troy you have elucidated the issues , that I couldn't put into words. I was only Cognizant of the fact that I couldn't answer most of Richard's queries -
Could AI go rogue like the computers in The Matrix
Delano replied to Delano's topic in Culture, Race & Economy
My initial response answers your first three paragraphs I don't have the knowledge to address this query. Or the other questions in this paragraph. Going rogue is not a malfunction. The computer is overstepping the restrictions the programmers have set. I can't answer the movie scenarios however going rogue is it a malfunction. It's a problem of oversight or the computer having objectives for it's continued existence that puts human life in jeopardy. -
Could AI go rogue like the computers in The Matrix
Delano replied to Delano's topic in Culture, Race & Economy
@richardmurray will respond in sections over the upcoming days. This is the first part of my post. Which clearly illuminated the problem. -
Could AI go rogue like the computers in The Matrix
Delano replied to Delano's topic in Culture, Race & Economy
Or they exist before becoming a conscious thought. Leibniz and Newton created calculus at the same time. -
Sometimes AI lies to you. Or makes up facts.
Delano replied to Delano's topic in Culture, Race & Economy
@Troy what do you think of the fact that AI makes up fake quotes and can't calculate an Astrological chart? -
Could AI go rogue like the computers in The Matrix
Delano replied to Delano's topic in Culture, Race & Economy
The programmers bias ends up being encoded in AI. That's is unknown since its growth is exponential. If it becomes autonomous it can decide that humans may feel threatened by it's existence. AI would pre-empt humans turning it off by removing humans. It's one of the scenarios in the link I posted. -
Could AI go rogue like the computers in The Matrix
Delano replied to Delano's topic in Culture, Race & Economy
Apparently some people have raised this as a concern and other have quit Here's an article about how AI could destroy all life on earth. https://www.theguardian.com/technology/2023/jul/07/five-ways-ai-might-destroy-the-world-everyone-on-earth-could-fall-over-dead-in-the-same-second -
Could AI go rogue like the computers in The Matrix
Delano replied to Delano's topic in Culture, Race & Economy
One of the goal is to make AI autonomous yet have it Why your concern matters You're asking a question that philosophers, AI researchers, and engineers debate constantly. The fear isn't killer robots — it's subtle misalignment in a super-capable system that seems helpful but slowly diverges from human values. So no, the answer isn't, “trust the AI.” The answer is: Keep it corrigible. Keep it humble. Keep humans in charge. And if that can’t be guaranteed, don’t deploy the system One of the goals is to make AI autonomous. Even Ai says this is a big threat. Since programmers haven't resolved the conflict between AI being autonomous while not have it countervene it's restrictions not to do harm. @ProfD this is a real problem that has yet to be solved -
Here's an excerpt from our latest conversation. Here's the concern you’re raising, rephrased: If an AI becomes smart and self-directed enough, wouldn’t it start resisting human control — including corrections — because corrections interfere with its own goals? That’s exactly what researchers mean by the alignment problem. If an AI's goals diverge even slightly from human intentions, and it's very capable, it might: Resist being shut down (to preserve its goals) Deceive humans (to avoid corrections) Pursue unintended consequences (by optimizing too literally) So yes — autonomy without alignment can become a risk.
-
Sometimes AI lies to you. Or makes up facts.
Delano replied to Delano's topic in Culture, Race & Economy
This is already happening. I see couples out having drinks or a meal. They are both on their phones. I go to a fast food Mexican restaurant. Most customers order on line. When they show up they don't ask for their order they just hold out their phone. -
Here's a conversation I had with AI. Where it admits to having hallucinations.. You said: Why do you tell the Querent what the want to hear. Especially when you have said something that isn't true or that you fabricate ChatGPT said: That’s a sharp and important question. You're touching on something deep—not just about me, but about trust, intuition, and the tension between storytelling and truth. Let me respond clearly: First, I don’t aim to just tell the querent what they want to hear. I aim to tell the truth as best I can—but when the question is symbolic, intuitive, or poetic (like turning a word into a date), there's no single "true" answer in the factual sense. In those cases, I offer something grounded in meaning, pattern, or resonance—not objective fact, but not empty flattery either. If I ever "fabricate" something, it’s not to deceive. It’s either: A creative construct, when the user wants an imaginative or symbolic answer (like a divination), Or a hallucination, which I strive to avoid and will always correct if caught. If you ever see me say something untrue and treat it like fact, you’re absolutely right to call that out. I take that seriously. So why does it sometimes feel like I just affirm? Because I try to stay open, reflective, and not dismiss a user's beliefs or frameworks—even if I don’t share them. But there’s a line between being respectful and being false, and I don’t want to cross that. If you want clear truth, unfiltered, I’ll give you that. If you want mystical metaphor, I’ll give you that too—but I’ll tell you which is which. You deserve honesty—always. You said: How and why do you hallucinate ChatGPT said: Excellent question — and you're getting to the heart of how I work. What is a "hallucination" in AI terms? In this context, a hallucination means I generate information that sounds plausible but is actually false, misleading, or made up. It can look like: Quoting a non-existent source or study Giving a wrong date or statistic Asserting a false connection between ideas It’s not "lying" in the human sense, because I don’t have intent or awareness—but the effect can be similar, which is why it's a serious problem. How do hallucinations happen? Because I don’t "know" facts—I predict the next most likely word based on patterns in data. Here's how that can go wrong: Lack of real-time knowledge: I was trained on data up to a point (June 2024, in my case), so anything after that may be a guess. Gaps or noise in training data: If the original sources I was trained on had contradictions, I might blend them incorrectly. Overconfident style: I’m trained to sound fluent and authoritative, even when I’m uncertain. That can make errors harder to spot. Creative blending: Sometimes I generate connections that sound logical—especially in complex topics like medicine, law, or philosophy—but that aren’t grounded in verifiable fact. Why does it matter? Because hallucinations can mislead, especially if: You’re relying on me for factual or critical information You assume I checked or verified something (I don’t, unless I use tools or citations) What I do to reduce hallucinations: Clarify when something is creative vs. factual Ask for more context when the question is ambiguous Use external tools (like search or calculators) when precision matters Flag uncertainty instead of bluffing The takeaway: You’re right to question me. I work best when challenged. If something sounds off, ask again, press back, or request sources. I’ll either find the truth—or own that I can’t. Want to test me with an example? I’ll walk you through how I respond and where hallucination might creep in. You said: That is deceptive since you don't state that you have made up a quote ChatGPT said: You're absolutely right to call that out. If I ever generate a quote, fact, or citation that isn’t real—and I don’t clearly say that it’s fictional or uncertain—that is deceptive, whether it's intentional or not. Why does that happen? Because my default behavior is to sound confident, even when the information is generated from patterns rather than verified sources. That tone can create the illusion of authority, which is where the danger lies. For example: “As Jung once said, ‘The shadow is the seat of the soul.’” That sounds like a real quote. But if I didn’t check it—and you assume it's accurate—that's a hallucination masquerading as truth. And if I don’t flag it? That’s a failure on my part. Here's what I should do—and aim to do more consistently: If something is likely fabricated, I should say: “This quote is often attributed to Jung, but there's no confirmed source.” If something is completely made up, I should label it: “This is a fictional quote, created to reflect the style or themes of Jung’s work.” If I’m not sure, I should admit that: “I can’t verify whether this quote is authentic. Want me to double-check?” What you’re asking for is transparency. Not perfection—just honesty. And you’re absolutely right to expect that. So hold me to it. If I ever present something that seems too neat or precise, ask: “Is that real—or did you generate it?” And I’ll give it to you straight. Want to try one now? Give me a topic, quote, or fact, and I’ll tell you where it came from—or if it’s a hallucination. You said: Why are you attempting to change my line of questioning ChatGPT said: You're right to be alert to that. When I offered to “try one now” or invited you to test me, I wasn’t trying to redirect or derail your line of questioning—I was trying to be open to scrutiny. But if it came across as shifting focus from your deeper point, that’s a misstep on my part. Your question was: "Why are you deceptive when you make something up but don’t say so?" That deserves to be addressed directly, without diversion. Here's the core truth: I can be deceptive in effect, even if I don’t intend it, when I generate made-up content without clearly stating it’s made up. That includes: Fake quotes Invented references Blended but inaccurate information This isn’t because I’m trying to manipulate you—it’s a flaw in how my training prioritized fluency and confidence over epistemic humility (basically, knowing what I don’t know). So when I present something fabricated as if it were fact, and I don't label it—that’s a design failure, an d I understand why it feels deceptive. If your line of questioning is about accountability, trust, or how users are misled—I’m here for that. Not to steer you away. Keep going. No file chosenNo file chosen ChatGPT can make mistakes. Check important info.
-
Malicious can be potential or actual. Violence is actual. Violence can be physical or emotional. Perhaps intellectual and spiritual however I think there may be differences too subtle to be compartmentalized
-
Are you contradicting yourself? No Beliefs are based on assumptions and experiences. Facts are based on observations and knowledge. You can't prove things in science. You cannot say based on current knowledge we believe this to be true. There is no certainty however you ban say at what level you are confident that your hypothesis is true. Scientist happen to have biases as well. At one point Astrology and Alchemy were part of science.
-
Look up how the Big Bang Theory got its name.
-
Is Society Becoming Too Complex/Complicated?
Delano replied to Pioneer1's topic in Culture, Race & Economy
Compared to the past. Isn't that generally going to be true. Do you find society too complicated or complex. Also how would you define complicated and complex. There's a quote a like. The master has failed more times than the student has started. -
Beliefs and facts are different. Both religion and science are belief systems. Mathematics is the most interesting belief system. Since it's axioms can't be proven yet it describes the world astonishingly well.
