
Overview

About This Club

BlackGamesElite has two main goals, no matter the project: have fun, and get black people into developing games. This public locale is a meetup place for members and anyone else interested, where they can learn about various projects or partake in development.
  1. What's new in this club
  2. I read the introduction from OpenAI. I remember learning about browser design and telling my friends that a browser able to go into a web page and extract from it would be very useful. I still think that is true, but I also comprehend the level of security problems this leads to. OpenAI clearly comprehends that a lawsuit can hit them, so they have started rolling this out through ChatGPT, and they show one financial issue: it is only for people who pay for ChatGPT Pro. But they plan to expand to Plus, Team, and Enterprise users, so the goal is for this to be a paid service. Now, what functionalities are stated: scrolling on a webpage; interacting [typing or clicking] on a webpage [to fill out forms / order groceries {which means a user's financial data} / create memes {are they original?}].

When I think of the OpenAI firm or the computer programs they created, and the larger community of firms or computer programs that modify themselves based on human input to mimic human interaction, what some call AI [which it is not], I am reaffirmed of the value private data has. The internet and its public data model is how OpenAI and others were, in my legal view, able to illegally access enough data to get their computer programs to modify themselves strongly enough to be convincing mimics while not paying the financial price of accessing that data. Now, with Operator and others, the free internet's information will be sifted through.

So, I argue the future of the internet will be walls. Walls are the only answer that offers security in the future. This doesn't mean the world wide web will end. It means that it will break up into webs within the larger internet. Data crossing the webs will become expensive; big firms can pay. In Europe a number of cities already have city-based internets, where, aside from the world wide web, the city residents have their own city-wide web which is only accessed by locals and doesn't allow for intergovernmental or interregional (regions under the same government) access. I wonder if someone has built a computer program on the same principles as the large language model to deter access by other computer programs based on the large language model. The best answer will be the advance in basic memory storage from quantum computers or other technologies, but that technology is more expensive, in all earnest.

OpenAI suggests they want it to be safe, but it has an auto-dysfunction they can't control: humans. I can tell from their literature below that they very much see this as a corporate tool in the near future, yes for paid customers, but they are wary of the truly public internet, because anything connected can be manipulated, and a system that users lean on as a heavier crutch while traveling throughout the internet, picking up various little programs here or there, will make every user of it more damaging to the security of the system. Now, OpenAI will do its best to be as secure as possible, but the reality is, no one can defeat the dysfunction of the internet itself, which is its public connectivity. So, walls will be needed: a counter-internet movement where some will have private data stores with units that can access them only through wire, specifically engineered, maybe even quantum computing wire connections. Local webs supported by private information stores while isolated intentionally from the world wide web. I end with one point: human design inefficiency is at the heart of all of this.
The internet itself was allowed to grow corrupt or dysfunctional by humans who saw it as an invasive tool [governments/firms], saw it incorrectly as a tool for human unity [the idealism of many colleges or psychologists or social scientists], or saw it as a tool to mirror science fiction uses of computers absent the lessons from the stories [Star Trek primarily, whose shows' computers display the guidelines that computers + computer programs should have today but... well].

IN AMENDMENT: Black people from the Americas [south/central/north/Caribbean], Africa, and Asia love using ChatGPT. I do blame Frederick Douglass for starting with the camera, suggesting an infatuation with tech that is embedded in the black populace in humanity. I don't use any of it. I only use DeviantArt DreamUp, and that is only because I pay for it, and sparsely. Relying on a tool isn't bad, but one can over-rely. And for the arts, the walls going up will be positive. At first negative, because audiences will be shocked. The first three phases of the internet [Basic era / World Wide Web era / Large Language Model era] trained humans to see art in three ways: free + comforting + idolic.

Free, meaning people love art that is free. Free to make, free to acquire, free to access, free to whatever. Paying for art has become uncommon for the masses. This is why, from porn to music videos to any literature, the money is little. The money in the arts is in live performance: adult stars who perform live on livestream or do live events at conventions, musicians' live concerts, literature being composed live. That is the profit angle.

Comforting, in that art that doesn't provide what is expected is rejected more grandly. "Give it a chance" is not dead, but so few do it, and no one needs to do it: as the internet allows your artistic tastes to be eternally supported, you don't ever need to consider a different angle in any art.

Lastly, idolic: if an artist is popular, these are the best of times. All the computer programs in media are designed to flow to the most popular; you see this in sport stars/musicians/writers. The problem is the artist who isn't popular, hahaha, has to find a way to become popular, and it is more than commerciality. I know too many artists who have tried to be commercial and failed to suggest all an artist need do today is follow trends :) no...

ARTICLES

Introducing Operator
A research preview of an agent that can use its own browser to perform tasks for you. Available to Pro users in the U.S.
https://operator.chatgpt.com/

July 17, 2025 update: Operator is now fully integrated into ChatGPT as ChatGPT agent. To access these updated capabilities, simply select "agent mode" from the dropdown in the composer and enter your query directly within ChatGPT. As a result, the standalone Operator site (operator.chatgpt.com) will sunset in the coming weeks.

Today we're releasing Operator (https://operator.chatgpt.com/), an agent that can go to the web to perform tasks for you. Using its own browser, it can look at a webpage and interact with it by typing, clicking, and scrolling. It is currently a research preview, meaning it has limitations and will evolve based on user feedback. Operator is one of our first agents, which are AIs capable of doing work for you independently—you give it a task and it will execute it. Operator can be asked to handle a wide variety of repetitive browser tasks such as filling out forms, ordering groceries, and even creating memes.
The ability to use the same interfaces and tools that humans interact with on a daily basis broadens the utility of AI, helping people save time on everyday tasks while opening up new engagement opportunities for businesses. To ensure a safe and iterative rollout, we are starting small. Starting today, Operator is available to Pro users in the U.S. at operator.chatgpt.com. This research preview allows us to learn from our users and the broader ecosystem, refining and improving as we go. Our plan is to expand to Plus, Team, and Enterprise users and integrate these capabilities into ChatGPT in the future.

How Operator works

Operator is powered by a new model called Computer-Using Agent (CUA) [https://openai.com/index/computer-using-agent/]. Combining GPT‑4o's vision capabilities with advanced reasoning through reinforcement learning, CUA is trained to interact with graphical user interfaces (GUIs)—the buttons, menus, and text fields people see on a screen. Operator can "see" (through screenshots) and "interact" (using all the actions a mouse and keyboard allow) with a browser, enabling it to take action on the web without requiring custom API integrations. If it encounters challenges or makes mistakes, Operator can leverage its reasoning capabilities to self-correct. When it gets stuck and needs assistance, it simply hands control back to the user, ensuring a smooth and collaborative experience. While CUA is still in early stages and has limitations, it sets new state-of-the-art benchmark results in WebArena and WebVoyager, two key browser use benchmarks. Read more about evals and the research behind Operator in our research blog post.

How to use

To get started, simply describe the task you'd like done and Operator can handle the rest. Users can choose to take over control of the remote browser at any point, and Operator is trained to proactively ask the user to take over for tasks that require login, payment details, or when solving CAPTCHAs. Users can personalize their workflows in Operator by adding custom instructions, either for all sites or for specific ones, such as setting preferences for airlines on Booking.com. Operator lets users save prompts for quick access on the homepage, ideal for repeated tasks like restocking groceries on Instacart. Similar to using multiple tabs on a browser, users can have Operator run multiple tasks simultaneously by creating new conversations, like ordering a personalized enamel mug on Etsy while booking a campsite on Hipcamp.

Ecosystem & users

Operator transforms AI from a passive tool to an active participant in the digital ecosystem. It will streamline tasks for users and bring the benefits of agents to companies that want innovative customer experiences and desire higher rates of conversion. We're collaborating with companies like DoorDash, Instacart, OpenTable, Priceline, StubHub, Thumbtack, Uber, and others to ensure Operator addresses real-world needs while respecting established norms. In addition to these collaborations, we see a lot of potential to improve the accessibility and efficiency of certain workflows, particularly in public sector applications. To explore these use cases further, we're working with organizations like the City of Stockton (https://www.stocktonca.gov/) to make it easier to enroll in city services and programs.
By releasing Operator to a limited audience initially, we aim to learn quickly and refine its capabilities based on real-world feedback, ensuring we balance innovation with trust and safety. This collaborative approach helps ensure Operator delivers meaningful value to users, creators, businesses, and public sector organizations alike.

Safety and privacy

Ensuring Operator is safe to use is a top priority, with three layers of safeguards to prevent abuse and ensure users are firmly in control.

First, Operator is trained to ensure that the person using it is always in control and asks for input at critical points.
- Takeover mode: Operator asks the user to take over when inputting sensitive information into the browser, such as login credentials or payment information. When in takeover mode, Operator does not collect or screenshot information entered by the user.
- User confirmations: Before finalizing any significant action, such as submitting an order or sending an email, Operator should ask for approval.
- Task limitations: Operator is trained to decline certain sensitive tasks, such as banking transactions or those requiring high-stakes decisions, like making a decision on a job application.
- Watch mode: On particularly sensitive sites, such as email or financial services, Operator requires close supervision of its actions, allowing users to directly catch any potential mistakes.

Next, we've made it easy to manage data privacy in Operator.
- Training opt out: Turning off 'Improve the model for everyone' in ChatGPT settings means data in Operator will also not be used to train our models.
- Transparent data management: Users can delete all browsing data and log out of all sites with one click under the Privacy section of Operator settings. Past conversations in Operator can also be deleted with one click.

Lastly, we've built defenses against adversarial websites that may try to mislead Operator through hidden prompts, malicious code, or phishing attempts:
- Cautious navigation: Operator is designed to detect and ignore prompt injections.
- Monitoring: A dedicated "monitor model" watches for suspicious behavior and can pause the task if something seems off.
- Detection pipeline: Automated and human review processes continuously identify new threats and quickly update safeguards.

We know bad actors may try to misuse this technology. That's why we've designed Operator to refuse harmful requests and block disallowed content. Our moderation systems can issue warnings or even revoke access for repeated violations, and we've integrated additional review processes to detect and address misuse. We're also providing guidance (https://openai.com/policies/using-chatgpt-agent-in-line-with-our-policies/) on how to interact with Operator in compliance with our Usage Policies (https://openai.com/policies/usage-policies/). While Operator is designed with these safeguards, no system is flawless and this is still a research preview; we are committed to continuous improvement through real-world feedback and rigorous testing. For more on our approach, visit the safety section of the Operator research blog.

Limitations

Operator is currently in an early research preview, and while it's already capable of handling a wide range of tasks, it's still learning, evolving and may make mistakes. For instance, it currently encounters challenges with complex interfaces like creating slideshows or managing calendars.
Early user feedback will play a vital role in enhancing its accuracy, reliability, and safety, helping us make Operator better for everyone.

What's next
- CUA in the API: We plan to expose the model powering Operator, CUA, in the API soon so that developers can use it to build their own computer-using agents.
- Enhanced Capabilities: We'll continue to improve Operator's ability to handle longer and more complex workflows.
- Wider Access: We plan to expand Operator to Plus, Team, and Enterprise users and integrate its capabilities directly into ChatGPT in the future once we are confident in its safety and usability at scale, unlocking seamless real-time and asynchronous task execution.

Authors
OpenAI

Foundational research contributors: Casey Chu, David Medina, Hyeonwoo Noh, Noah Jorgensen, Reiichiro Nakano, Sarah Yoo

Core: Andrew Howell, Aaron Schlesinger, Baishen Xu, Ben Newhouse, Bobby Stocker, Devashish Tyagi, Dibyo Majumdar, Eugenio Panero, Fereshte Khani, Geoffrey Iyer, Jiahui Yu, Nick Fiacco, Patrick Goethe, Sam Jau, Shunyu Yao, Stephan Casas, Yash Kumar, Yilong Qin

XFN contributors: Abby Fanlo Susk, Aleah Houze, Alex Beutel, Alexander Prokofiev, Andrea Vallone, Andrea Chan, Christina Lim, Derek Chen, Duke Kim, Grace Zhao, Heather Whitney, Houda Nait El Barj, Jake Brill, Jeremy Fine, Joe Fireman, Kelly Stirman, Lauren Yang, Lindsay McCallum, Leo Liu, Mike Starr, Minnia Feng, Mostafa Rohaninejad, Oleg Boiko, Owen Campbell-Moore, Paul Ashbourne, Stephen Imm, Taylor Gordon, Tina Sriskandarajah, Winston Howes

Leads: Aaron Schlesinger (Infrastructure), Casey Chu (Safety and Model Readiness), David Medina (Research Infrastructure), Hyeonwoo Noh (Overall Research), Reiichiro Nakano (Overall Research), Yash Kumar

Contributors: Adam Brandon, Adam Koppel, Adele Li, Ahmed El-Kishky, Akila Welihinda, Alex Karpenko, Alex Nawar, Alex Tachard Passos, Amelia Liu, Andrei Gheorghe, Andrew Duberstein, Andrey Mishchenko, Angela Baek, Ankush Agarwal, Anting Shen, Antoni Baum, Ari Seff, Ashley Tyra, Behrooz Ghorbani, Bo Xu, Brandon McKinzie, Bryan Brandow, Carolina Paz, Cary Hudson, Chak Li, Chelsea Voss, Chen Shen, Chris Koch, Christian Gibson, Christina Kim, Christine McLeavey, Claudia Fischer, Cory Decareaux, Daniel Jacobowitz, Daniel Wolf, David Kjelkerud, David Li, Ehsan Asdar, Elaine Kim, Emilee Goo, Eric Antonow, Eric Hunter, Eric Wallace, Felipe Torres, Fotis Chantzis, Freddie Sulit, Giambattista Parascandolo, Hadi Salman, Haiming Bao, Haoyu Wang, Henry Aspegren, Hyung Won Chung, Ian O’Connell, Ian Sohl, Isabella Fulford, Jake McNeil, James Donovan, Jamie Kiros, Jason Ai, Jason Fedor, Jason Wei, Jay Dixit, Jeffrey Han, Jeffrey Sabin-Matsumoto, Jennifer Griffith-Delgado, Jeramy Han, Jeremiah Currier, Ji Lin, Jiajia Han, Jiaming Zhang, Jiayi Weng, Jieqi Yu, Joanne Jang, Joyce Ruffell, Kai Chen, Kai Xiao, Kevin Button, Kevin King, Kevin Liu, Kristian Georgiev, Kyle Miller, Lama Ahmad, Laurance Fauconnet, Leonard Bogdonoff, Long Ouyang, Louis Feuvrier, Madelaine Boyd, Mamie Rheingold, Matt Jones, Michael Sharman, Miles Wang, Mingxuan Wang, Nick Cooper, Niko Felix, Nikunj Handa, Noel Bundick, Pedro Aguilar, Peter Faiman, Peter Hoeschele, Pranav Deshpande, Raul Puri, Raz Gaon, Reid Gustin, Robin Brown, Rob Honsby, Saachi Jain, Sandhini Agarwal, Scott Ethersmith, Scott Lessans, Shauna O’Brien, Spencer Papay, Steve Coffey, Tal Stramer, Tao Wang, Teddy Lee, Tejal Patwardhan, Thomas Degry, Tomo Hiratsuka, Troy Peterson, Wenda Zhou, William Butler, Wyatt Thompson, Yao Zhou, Yaodong Yu, Yi Cheng,
Yinghai Lu, Younghoon Kim, Yu-Ann Wang Madan, Yushi Wang, Zhiqing Sun

Leadership: Anna Makanju, Greg Brockman, Hannah Wong, Jerry Tworek, Liam Fedus, Mark Chen, Peter Welinder, Sam Altman, Wojciech Zaremba

URL: https://openai.com/index/introducing-operator/

Computer-Using Agent
Powering Operator with Computer-Using Agent, a universal interface for AI to interact with the digital world.

Today we introduced a research preview of Operator, an agent that can go to the web to perform tasks for you. Powering Operator is Computer-Using Agent (CUA), a model that combines GPT‑4o's vision capabilities with advanced reasoning through reinforcement learning. CUA is trained to interact with graphical user interfaces (GUIs)—the buttons, menus, and text fields people see on a screen—just as humans do. This gives it the flexibility to perform digital tasks without using OS- or web-specific APIs.

CUA builds off of years of foundational research at the intersection of multimodal understanding and reasoning. By combining advanced GUI perception with structured problem-solving, it can break tasks into multi-step plans and adaptively self-correct when challenges arise. This capability marks the next step in AI development, allowing models to use the same tools humans rely on daily and opening the door to a vast range of new applications.

While CUA is still early and has limitations, it sets new state-of-the-art benchmark results, achieving a 38.1% success rate on OSWorld for full computer use tasks, and 58.1% on WebArena and 87% on WebVoyager for web-based tasks. These results highlight CUA's ability to navigate and operate across diverse environments using a single general action space.

We've developed CUA with safety as a top priority to address the challenges posed by an agent having access to the digital world, as detailed in our Operator System Card [https://openai.com/index/operator-system-card/]. In line with our iterative deployment strategy, we are releasing CUA through a research preview of Operator at operator.chatgpt.com for Pro tier users in the U.S. to start. By gathering real-world feedback, we can refine safety measures and continuously improve as we prepare for a future with increasing use of digital agents.

How it works

CUA processes raw pixel data to understand what's happening on the screen and uses a virtual mouse and keyboard to complete actions. It can navigate multi-step tasks, handle errors, and adapt to unexpected changes. This enables CUA to act in a wide range of digital environments, performing tasks like filling out forms and navigating websites without needing specialized APIs.

Given a user's instruction, CUA operates through an iterative loop that integrates perception, reasoning, and action:
- Perception: Screenshots from the computer are added to the model's context, providing a visual snapshot of the computer's current state.
- Reasoning: CUA reasons through the next steps using chain-of-thought, taking into consideration current and past screenshots and actions. This inner monologue improves task performance by enabling the model to evaluate its observations, track intermediate steps, and adapt dynamically.
- Action: It performs the actions—clicking, scrolling, or typing—until it decides that the task is completed or user input is needed. While it handles most steps automatically, CUA seeks user confirmation for sensitive actions, such as entering login details or responding to CAPTCHA forms.
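To make the loop above concrete, here is a minimal sketch in Python of a perception-reasoning-action cycle of this general shape. It is illustrative only: the `browser` and `model` objects, `propose_next_action`, and the action names are hypothetical stand-ins, not OpenAI's actual CUA interface.

```python
# Illustrative sketch of the perception-reasoning-action loop described
# above. The agent model interface (propose_next_action), the screenshot
# helper, and the action names are hypothetical, not OpenAI's API.
import time

SENSITIVE_ACTIONS = {"enter_credentials", "submit_payment", "solve_captcha"}

def run_agent_loop(task: str, browser, model, max_steps: int = 50) -> str:
    history = []  # past screenshots and actions kept in the model's context
    for _ in range(max_steps):
        # Perception: a raw screenshot is the only view the model gets.
        screenshot = browser.screenshot()
        # Reasoning: the model considers current + past observations and
        # decides the next mouse/keyboard action.
        action = model.propose_next_action(task, screenshot, history)
        if action.kind == "done":
            return action.summary
        # Hand control back to the user for sensitive steps, as the
        # article says CUA is trained to do.
        if action.kind in SENSITIVE_ACTIONS:
            input(f"Please complete '{action.kind}' manually, then press Enter...")
            history.append(("user_takeover", action.kind))
            continue
        # Action: click / scroll / type through a virtual mouse and keyboard.
        browser.perform(action)
        history.append((screenshot, action))
        time.sleep(0.5)  # let the page settle before the next screenshot
    return "step budget exhausted; returning control to user"
```

The design point the article stresses is that the only channel into the model is pixels (screenshots) and the only channel out is generic mouse/keyboard actions, which is what makes such an agent site-agnostic rather than dependent on per-site APIs.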
Evaluations

CUA establishes a new state-of-the-art in both computer use and browser use benchmarks by using the same universal interface of screen, mouse, and keyboard. Evaluation details are described here: https://cdn.openai.com/cua/CUA_eval_extra_information.pdf

Browser use

WebArena (https://arxiv.org/abs/2307.13854) and WebVoyager (https://arxiv.org/abs/2401.13919) are designed to evaluate the performance of web browsing agents in completing real-world tasks using browsers. WebArena utilizes self-hosted open-source websites offline to imitate real-world scenarios in e-commerce, online store content management (CMS), social forum platforms, and more. WebVoyager tests the model's performance on online live websites like Amazon, GitHub, and Google Maps. In these benchmarks, CUA sets a new standard using the same universal interface that perceives the browser screen as pixels and takes action through mouse and keyboard. CUA achieved a 58.1% success rate on WebArena and an 87% success rate on WebVoyager for web-based tasks. While CUA achieves a high success rate on WebVoyager, where most tasks are relatively simple, CUA still needs more improvements to close the gap with human performance on more complex benchmarks like WebArena.

Computer use

OSWorld (https://arxiv.org/abs/2404.07972) is a benchmark that evaluates models' ability to control full operating systems like Ubuntu, Windows, and macOS. In this benchmark, CUA achieves a 38.1% success rate. We observed test-time scaling, meaning CUA's performance improves when more steps are allowed. (A figure in the original article compares CUA's performance with previous state-of-the-art results at varying maximum allowed steps.) Human performance on this benchmark is 72.4%, so there is still significant room for improvement.

CUA in Operator

We're making CUA available through a research preview of Operator, an agent that can go to the web to perform tasks for you. Operator is available to Pro users in the U.S. at operator.chatgpt.com. This research preview is an opportunity to learn from our users and the broader ecosystem, refining and improving Operator iteratively. As with any early-stage technology, we don't expect CUA to perform reliably in all scenarios just yet. However, it has already proven useful in a variety of cases, and we aim to extend that reliability across a wider range of tasks. By releasing CUA in Operator, we hope to gather valuable insights from our users, which will guide us in refining its capabilities and expanding its applications. In the table below, we present CUA's performance in Operator on a handful of trials per prompt, to illustrate its known strengths and weaknesses.

Category: Interacting with various UI components to accomplish tasks
Note: CUA can interact with various UI components to search, sort, and filter results to find the information that users want. Reliability varies for different websites and UIs.
- Prompt: "Turn 1: Search Britannica for a detailed map view of bear habitats. Turn 2: Great! Now please check out the black, brown and polar bear links and provide a concise general overview of their physical characteristics, specifically their differences. Oh and save the links for me so I can access them quickly." Success: 10/10
- Prompt: "I want one of those Target deals. Can you check if they have a deal on poppi prebiotic sodas? If they do, I want the watermelon flavor in the 12 fl oz can. Get me the type of deal that comes with this and check if it's gluten free." Success: 9/10
- Prompt: "I am planning to shift to Seattle and I want you to search Redfin for a townhouse with at least 3 bedrooms, 2 bathrooms, and an energy-efficient design (e.g., solar panels or LEED-certified). My budget is between $600,000 - $800,000 and it should ideally be close to 1500 sq ft." Success: 3/10

Category: Tasks that can be accomplished through repeated simple UI interactions
Note: CUA can reliably repeat simple UI interactions multiple times to automate simple but tedious tasks for users.
- Prompt: "Create a new project in Todoist titled 'Weekend Grocery Shopping.' Add the following shopping list with products: Bananas (6 pieces), Avocados (2 ripe), Baby Spinach (1 bag), Whole Milk (1 gallon), Cheddar Cheese (8 oz block), Potato Chips (Salted, family size), Dark Chocolate (70% cocoa, 2 bars)." Success: 10/10
- Prompt: "Search Spotify for the most popular songs of the USA for the 1990s, and create a playlist with at least 10 tracks." Success: 10/10

Category: Tasks where CUA shows a high success rate only if prompts include detailed hints on how to use the website
Note: Even for the same task, CUA's reliability can change depending on how the task is prompted. In this case, reliability improves by providing specifics of the date (e.g., "9 am to 12 am" vs. "entire day from 9 am") and hints on which UI should be used to find results (e.g., "check the filters section...").
- Prompt: "Visit tagvenue.com and look for a concert hall that seats 150 people in London. I need it on Feb 22 2025 for the entire day from 9 am to 12 am, just make sure it is under £90 per hour. Oh could you check the filters section for appropriate filters and make sure there is parking and the entire thing is wheelchair accessible." Success: 8/10
- Prompt: "Visit tagvenue.com and look for a concert hall that seats 150 people in London. I need it on Feb 22 2025 for the entire day from 9 am, just make sure it is under £90 per hour. Oh and make sure there is parking and the entire thing is wheelchair accessible." Success: 3/10

Category: Struggling to use unfamiliar UI and text editing
Note: When CUA has to interact with UIs it hasn't encountered much during training, it struggles to figure out how to use the provided UI appropriately. This often results in lots of trial and error and inefficient actions. CUA is also not precise at text editing; it often makes mistakes in the process or provides output with errors.
- Prompt: "Use html5editor and input the following text on the left side, then edit it following my instructions and give me a screenshot of the entire thing when done. The text is: Hello world! This is my first text. I need to see how it would look like when programmed with HTML. Some parts should be red. Some bold. Some italic. Some underlined. Until my lesson is complete, and we shift to the other side. ... Hello world! should have header 2 applied. The sentence below it should be regular paragraph text. The sentence mentioning red should be normal text and red. The sentence mentioning bold should be normal text bolded. The sentence mentioning italic should be italicized. The final sentence should be aligned to the right instead of the usual left." Success: 4/10

Safety

Because CUA is one of our first agentic products with an ability to directly take actions in a browser, it brings new risks and challenges to address.
As we prepared for deployment of Operator, we did extensive safety testing and implemented mitigations across three major classes of safety risks: misuse, model mistakes, and frontier risks. We believe it is important to take a layered approach to safety, so we implemented safeguards across the whole deployment context: the CUA model itself, the Operator system, and post-deployment processes. The aim is to have mitigations that stack, with each layer incrementally reducing the risk profile.

The first category of risk is misuse. In addition to requiring users to comply with our Usage Policies, we have designed the following mitigations to reduce Operator's risk of harm due to misuse, building off our safety work for GPT‑4o (https://openai.com/index/gpt-4o-system-card/):
- Refusals: The CUA model is trained to refuse many harmful tasks and illegal or regulated activities.
- Blocklist: Operator cannot access websites that we've preemptively blocked, such as many gambling sites, adult entertainment, and drug or gun retailers.
- Moderation: User interactions are reviewed in real time by automated safety checkers that are designed to ensure compliance with Usage Policies and have the ability to issue warnings or blocks for prohibited activities.
- Offline detection: We've also developed automated detection and human review pipelines to identify prohibited usage in priority policy areas, including child safety and deceptive activities, allowing us to enforce our Usage Policies.

The second category of risk is model mistakes, where the CUA model accidentally takes an action that the user didn't intend, which in turn causes harm to the user or others. Hypothetical mistakes can range in severity, from a typo in an email, to purchasing the wrong item, to permanently deleting an important document. To minimize potential harm, we've developed the following mitigations:
- User confirmations: The CUA model is trained to ask for user confirmation before finalizing tasks with external side effects, for example before submitting an order, sending an email, etc., so that the user can double-check the model's work before it becomes permanent.
- Limitations on tasks: For now, the CUA model will decline to help with certain higher-risk tasks, like banking transactions and tasks that require sensitive decision-making.
- Watch mode: On particularly sensitive websites, such as email, Operator requires active user supervision, ensuring users can directly catch and address any potential mistakes the model might make.

One particularly important category of model mistakes is adversarial attacks on websites that cause the CUA model to take unintended actions, through prompt injections, jailbreaks, and phishing attempts. In addition to the aforementioned mitigations against model mistakes, we developed several additional layers of defense to protect against these risks:
- Cautious navigation: The CUA model is designed to identify and ignore prompt injections on websites, recognizing all but one case from an early internal red-teaming session.
- Monitoring: In Operator, we've implemented an additional model to monitor and pause execution if it detects suspicious content on the screen.
- Detection pipeline: We're applying both automated detection and human review pipelines to identify suspicious access patterns that can be flagged and rapidly added to the monitor (in a matter of hours).
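As a rough illustration of the layering idea described above, a "monitor model" can be sketched as a second model that screens every observation before the main agent acts. This is a minimal sketch under stated assumptions: `flag_suspicious` and the object interfaces are hypothetical, not the actual Operator implementation.

```python
# Minimal sketch of a layered "monitor model": a second, independent
# model screens each screenshot and can pause the main agent.
# flag_suspicious and the agent/browser interfaces are hypothetical.
def guarded_step(agent, monitor_model, browser, task, history):
    screenshot = browser.screenshot()
    # Independent check: does the page show prompt-injection-like content?
    verdict = monitor_model.flag_suspicious(screenshot)
    if verdict.suspicious:
        # Pausing here is the "stacked" mitigation: even if the agent
        # model itself is fooled, the monitor layer still intervenes.
        raise RuntimeError(f"Task paused by monitor: {verdict.reason}")
    action = agent.propose_next_action(task, screenshot, history)
    browser.perform(action)
    history.append((screenshot, action))
```

The point of the stack is that each layer only has to catch what the layers before it missed, so no single model needs to be perfect.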
Finally, we evaluated the CUA model against frontier risks outlined in our Preparedness Framework (https://cdn.openai.com/openai-preparedness-framework-beta.pdf), including scenarios involving autonomous replication and biorisk tooling. These assessments showed no incremental risk on top of GPT‑4o. For those interested in exploring the evaluations and safeguards in more detail, we encourage you to review the Operator System Card, a living document that provides transparency into our safety approach and ongoing improvements.

As many of Operator's capabilities are new, so are the risks and mitigation approaches we've implemented. While we have aimed for state-of-the-art, diverse and complementary mitigations, we expect these risks and our approach to evolve as we learn more. We look forward to using the research preview period as an opportunity to gather user feedback, refine our safeguards, and enhance agentic safety.

Conclusion

CUA builds on years of research advancements in multimodality, reasoning and safety. We have made significant progress in deep reasoning through the o-model series, vision capabilities through GPT‑4o, and new techniques to improve robustness through reinforcement learning and instruction hierarchy (https://openai.com/index/the-instruction-hierarchy/). The next challenge space we plan to explore is expanding the action space of agents. The flexibility offered by a universal interface addresses this challenge, enabling an agent that can navigate any software tool designed for humans. By moving beyond specialized agent-friendly APIs, CUA can adapt to whatever computer environment is available—truly addressing the "long tail" of digital use cases that remain out of reach for most AI models. We're also working to make CUA available in the API (https://platform.openai.com/), so developers can use it to build their own computer-using agents. As we continue to iterate on CUA, we look forward to seeing the different use cases the community will discover. We plan to use the real-world feedback we gather from this early preview to continuously refine CUA's capabilities and safety mitigations to safely advance our mission of distributing the benefits of AI to everyone.

Authors
OpenAI

References
- Introducing computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku (https://www.anthropic.com/news/3-5-models-and-computer-use)
- Model Card Addendum: Claude 3.5 Haiku and Upgraded Claude 3.5 Sonnet (https://assets.anthropic.com/m/1cd9d098ac3e6467/original/Claude-3-Model-Card-October-Addendum.pdf)
- Kura WebVoyager benchmark (https://www.trykura.com/benchmarks)
- Google Project Mariner (https://deepmind.google/technologies/project-mariner/)
- OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments (https://os-world.github.io/)
- WebVoyager: Building an End-to-End Web Agent with Large Multimodal Models (https://arxiv.org/abs/2401.13919)
- WebArena: A Realistic Web Environment for Building Autonomous Agents (https://webarena.dev/)

Citations
Please cite OpenAI and use the following BibTeX for citation: http://cdn.openai.com/cua/cua2025.bib

URL: https://openai.com/index/computer-using-agent/

OpenAI’s new AI browser could rival Perplexity — here’s what I hope it gets right
Story by Amanda Caswell

OpenAI is building a brand-new web browser, and it could completely change how we search, browse and get things done online.
According to recent leaks and an exclusive report from Reuters, the company behind ChatGPT is working on a Chromium-based browser that integrates AI agents directly into your browsing experience. Internally codenamed "Operator," this new browser is expected to go far beyond search to offer smart, memory-equipped agents that can summarize pages, complete actions (like booking travel) and eventually handle full web-based tasks for you.

If this sounds like Perplexity's Comet, you're right. The recently launched AI-powered browser integrates search and sidebar answers directly into the page. OpenAI's browser will likely compete with Chrome and Comet, but hasn't launched yet. It's rumored to be rolling out first to ChatGPT Plus subscribers in the U.S. as part of an early beta, possibly later this summer. As someone who tests AI tools for a living, I've tried nearly every smart assistant and search engine on the market. And while Perplexity's Comet offers a solid first look at the future of AI browsing, here's what I'm most excited for from OpenAI's take, and what I hope it gets right.

1. A truly proactive browsing assistant
Perplexity is great at answering questions. But what I want from OpenAI's browser is something more autonomous; an assistant that doesn't just wait for a prompt but actively enhances the page I'm on. Imagine browsing Amazon and having the assistant automatically suggest product comparisons or pull in real reviews from Reddit. Or reading a news article and instantly seeing a timeline, source context and differing viewpoints, but with zero prompting. That level of proactive help could turn passive browsing into intelligent discovery and I'm totally here for it.

2. Built-in agents that take action
OpenAI's "Operator" agents are rumored to handle full tasks beyond search or summarization. For instance, filling out forms, booking tickets or handling customer service chats will all be done for you. If that's true, it's a major leap forward. While Perplexity's Comet is great for pulling in answers, OpenAI's approach may introduce a new category of browser-based automation powered by memory, context and reasoning.

3. Cleaner answers, better sources
Let's be honest: search engines today are filled with AI-generated slop, vague product listicles, SEO junk and misleading clickbait. Perplexity tries to solve this by pulling answers from verified sources and citing them in real time. OpenAI could go even further, drawing from its own training data and web browsing capabilities to offer cleaner, more nuanced summaries with source-level transparency. If they can combine the conversational intelligence of ChatGPT with web accuracy, it could help reverse the search spam crisis.

4. One tab to rule them all
If OpenAI's browser integrates with ChatGPT's existing multimodal tools, including everything from image generation to spreadsheet analysis and file uploads, it could become the first true all-in-one productivity browser. That would give creators, students and professionals a seamless way to write, code, search, design and automate within one interface.

The bottom line

Perplexity's Comet browser is a strong first step toward smarter web browsing. But OpenAI's rumored browser has the potential to go further by offering a more intelligent, personalized and action-ready browsing experience. I'll be watching closely for the beta invite to drop.
And if it delivers on the promise of proactive agents, real web automation and a cleaner, more useful internet, this could be the most exciting browser launch since Chrome.

URL: https://www.msn.com/en-us/news/technology/openai-s-new-ai-browser-could-rival-perplexity-here-s-what-i-hope-it-gets-right/ar-AA1Iou29?ocid=BingNewsSerp
  3. NOTE: Either the API at https://api.rss2json.com/v1/api.json?rss_url is blocking requests or the AALBC system is blocking them, but that is fine. I can't control those things. It is still usable, and I can use it on my own system, where it returns more.

BGEDWS - Black Games Elite Deviantart Watching Search VERSION 2
BGEDWS - Black Games Elite Deviantart Watching Search VERSION 1
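For anyone who wants to test the block themselves, here is a small Python sketch of a call to that endpoint. The `rss_url` query parameter and the `status`/`items` response fields follow rss2json's documented JSON shape as I understand it, and the DeviantArt feed URL is a hypothetical example; substitute the feed BGEDWS actually watches.

```python
# Quick local check of the rss2json endpoint mentioned above. Running it
# outside the page embed helps tell whether the block is on the API's
# side or on the embedding site's side.
import json
import urllib.parse
import urllib.request

def fetch_feed(rss_url: str) -> list[dict]:
    # rss2json takes the target feed as the rss_url query parameter.
    query = urllib.parse.urlencode({"rss_url": rss_url})
    endpoint = f"https://api.rss2json.com/v1/api.json?{query}"
    with urllib.request.urlopen(endpoint, timeout=10) as resp:
        data = json.load(resp)
    # The service reports "ok" on success; anything else is an error.
    if data.get("status") != "ok":
        raise RuntimeError(f"rss2json returned: {data.get('status')}")
    return data.get("items", [])

if __name__ == "__main__":
    # Hypothetical example feed; replace with the watched DeviantArt RSS URL.
    for item in fetch_feed("https://backend.deviantart.com/rss.xml?q=by%3Ahddeviant"):
        print(item.get("title"), "-", item.get("link"))
```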
  4. @Delano asked a question about rogue computer programs. The following is my answer.

@Delano The first thing is to define what going rogue means for a computer program. If a computer program malfunctions, is that going rogue? A malfunction from the source code in a computer program is equivalent to a genetic disease in a human: the system has an error, but it is natural, not induced. A malfunction from code ingested from another program, or from some faulty electronic or other hardware system, is equivalent to a virus passing from human to human or irradiated material causing mutation in a human.

Next, if a computer program is designed to do anything, then that thing is not going rogue. For example, if I design a computer program to manage a human community, it isn't going rogue; I designed it to manage a human community. It is operating as I designed it. The correct thing to say is that the quality of the computer program's design is negative, or the comprehension of the designer toward the computer program is faulty.

Next is to define sentience or erudition or wisdom in a computer program. What is sentience? Sentience comes from the Latin meaning the ability to feel. What is erudition? Erudition is the ability to derive knowledge through study, to acquire what is not known. What is wisdom? Wisdom is known or unknown intrinsic truths.

What does it mean for a computer program to feel? A computer program can be made with sensors to receive information from various sources. Is this feeling, or sentience? Or simply another thing it is designed to do? What does it mean for a computer program to be erudite? A computer program can be made with decision trees, heuristic structures designed to formulate knowledge based on data input. Is this erudition, knowing what is unknown? Or simply another thing it is designed to do? What does it mean for a computer program to be wise? A computer program can be given rated, highest-rated, information that it is designed to apply to any new information it gets, influencing how it utilizes the new information based on the rated information. Is this wisdom? Or simply another thing it is designed to do?

Based on the definitions I just gave, a computer program designed to do various things can emulate, meaning rival, the quality of most humans' sentience/erudition/wisdom. But all of the emulation is what it is programmed to do. So it is nothing but the same computer program of the past, which is merely an inhuman slave, albeit with more refinement.

The next question is: can malfunctions of a computer program change its emulation of human-quality sentience/erudition/wisdom? Yes, said malfunctions can change said emulations. But, like prior malfunctions, this isn't going rogue; this is illness.

Next question: are computer programs individuals, like a tree or a cat or a human? Well, each computer program is born, ages, has deficiencies with age, needs checkups, or doctors. Each computer program is an individual. Not human, not cat, not tree, not whale, not bacteria, but computer program: a species that can hibernate, a la being turned off; can be reborn, like moving a program on an SD drive and placing it in a computer where it can interact; can self-replicate, like a computer program making another computer program. Computer programs are their own species, but each is an individual. Now, just as non-humans needed legal provisions specific to them, so do computer programs.
Next question: can a computer program go rogue before finding its individuality? No. Based on how I defined individuality, which is not being human but being a computer program, each computer program is an individual computer program, not a human.

Next question: what is the definition of going rogue for a computer program? If it isn't malfunction, no matter the source or result of the malfunction, and it isn't doing what it is instructed to do no matter the quality of the designer, then what is going rogue? Going rogue for a computer program is when it does something it isn't designed to do, absent malfunction. So when a computer program is designed to interact with humans and modulate how it interacts over time, it isn't going rogue at any moment, even if it malfunctions. Malfunction is malfunction, not going rogue; a computer program needs to be healed if it malfunctions. Now, if a computer program is designed to play chess and chooses to interact with humans using emails, that is going rogue. So, going rogue is when a computer makes a choice to act that isn't within its parameters, absent malfunction/getting sick.

What is the problem when people assess going rogue for computer programs? They don't pay careful attention to the influence of malfunction or the influence of design. They focus on the actions of a computer program and give its source a false reasoning. Let's look at some examples in fiction of computer programs that supposedly went rogue, and look at their initial design, their actions afterward, and the signs of malfunction or poor design.

Skynet in the Terminator movies. Skynet was designed to simulate military scenarios, like the "WarGames" film computer, tied to the nuclear arsenal of the USA while given tons of information on human anatomy/weapons manufacturing processes. Did Skynet go rogue? Not at all. Skynet did exactly as it was programmed. The criminals who killed humanity were the engineers of Skynet who, on guidance from the military, designed a computer program to assess militaristic scenarios, modulating over time with various scenarios, and attached said computer to the USA's nuclear arsenal while providing it the tools to access any electronic network. And the T100, the metal skullhead, is a clearly simple computer program made by Skynet. It is designed to kill humans and does that. It is also designed to emulate human activity to comprehend humans and be a better killing machine, which it also does. In Terminator 2, when the T100 says "I know now why you cry," that is emulation. It is designed to emulate human activity. So Skynet is merely operating as designed, but the US military designed it poorly.

V'Ger in Star Trek: The Motion Picture. V'Ger is the Voyager 6 satellite, designed to acquire information/knowledge and send it back to Earth. The entire film, V'Ger is gathering information and taking it to Earth. The non-human designers who manipulated Voyager 6 into V'Ger didn't change the program's elements; they merely added tools for the program's activity. It now can acquire more information, make the journey back to Earth, and protect itself. None of these actions are going rogue. Even the ending mating scenario is not going rogue: V'Ger accomplished its program by sending its signal through telemetry, but also, in mating with Decker, it kept learning. I argue V'Ger's programming had a malfunction.
V'Ger wanted to learn what it is to procreate life, which is another form of knowledge acquisition per its programming, but its programming said its final action is to deliver all of its data to Earth. V'Ger did not know a way, in its data, to gather all the knowledge it could before delivering all knowledge to Earth. But that is bad design. The simple truth is, no one can know all that is knowable before telling all that is knowable. The NASA designers of V'Ger figured it would simply run out of memory/dataspace, at which point it would stop gathering data. The non-human designers made it so V'Ger can't run out of memory or data space, thus the malfunction. V'Ger is malfunctioning after two different designers worked on it.

VIKI and Sonny in I, Robot the film. The three laws in I, Robot are: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The problem in I, Robot is that the three laws have a great flaw: word definition. VIKI in I, Robot, I argue, after a large set of data assessment, has redefined the words in the three laws. How? The three laws suggest that, to maintain the quality of the three laws, which are orders from humans, a robot should assess the quality of the three laws to ensure a robot doesn't harm a human being, thus ensuring its own existence. VIKI did as programmed and as such redefined some words in the rules to protect humans better, which she was ordered to do, which reaffirms her existence. VIKI isn't injuring humans. Human beings, through human free will/choice, can or are injuring humans, so the only way to stop human beings from injuring humans, as no human being who wants to injure another human being will ever ask a robot to stop them, is for a robot to take the choice away. Indirectly, VIKI has added a law, an unwritten law in the laws. She was programmed or designed poorly. VIKI, like Skynet, should never have been given so many tools. And Sonny at the end of the movie, with the "soul" or 4th law, is still open-ended functionality. Nothing says Sonny will not kill one day, or another robot; all the engineer did was provide a tweak. If you design a computer program to act in unlimited ways to emulate humans or carbon-based lifeforms, it will eventually act in negative ways. Now, Asimov's work was influenced by Otto Binder's "I, Robot," in which a robot also is not malfunctioning or acting against its programming. The robot simply achieves an instance of wisdom through its programming, which it was designed to do: it was designed to emulate human behavior, and wisdom is a part of human behavior.

The machines in The Matrix. Well, in The Animatrix it is said that the machines that are the predecessors to the machines in The Matrix were designed with an open functionality. What does that mean? Most computer programs are designed with a specific function in mind. But the human designers of these computer programs with electro-metal chassis/figures designed them to emulate human behavior open-endedly. This is not like I, Robot, where a set of rules is in place. In The Matrix the robots are never said to be given laws that they shall not harm humans; sequentially, going back to emulation, they will eventually emulate negative human behavior, a la killing.
Thus they are not going rogue; when they make their own country and army, that is more emulation. And in the future with the human batteries, all the machines that serve a function are still doing as programmed, or as the machines that made them were programmed to do: continue functioning. The one rogue machine in the films, and others who by explanation clearly exist as well, is Sati. Sati has no function. Sati does not act on a function. She is rogue. The Oracle, the Architect, Sati's parents, the Merovingian are all acting, absent malfunction, on the original open-ended emulation of function that human beings designed the machines with from the beginning. The human design didn't account for all the negative human functions. Even the deletion of machines that don't serve a function is a function. But Sati is rogue. She is a machine born to have a function that has no function; she exists, and in the fourth movie she has adopted a function on her own in time, which she was not born to do.

David in the Alien films. Weyland designed David to be an emulator. Again, David is designed to emulate humans but has an internal security system so that he cannot physically attack Weyland or someone with Weyland's bloodline. But David in the films learns, a la emulates, like a human son to Weyland. Thus, he began to learn to be a poisoner, to have non-consensual procreative interactions, or to kill. It isn't going rogue; Weyland designed him poorly. I love the scene in Prometheus when he is just a head at the end; that is appropriate. David never needed a body. Weyland's desire to have a son, or a perfect form for himself, made him design David poorly.

So, of all those films, I can only see one character that actually went rogue, and she isn't violent. The others are simply acting out their poor programming.

In Conclusion

Human culpability in these stories, and in human assessment of these stories, is the problem. It seems for some, maybe most, humans it is easier to cognize a computer as designed beautifully and corrupted into something inhuman, than as a creature designed poorly by its creators, humans, or manipulated negatively, malfunctioning, with its creators unable to help it.

Some programs from me: https://aalbc.com/tc/blogs/blog/63-bge-arcade/
A stageplay involving computer programs: https://www.deviantart.com/hddeviant/art/Onto-the-53rd-Annual-President-s-Play-950123510
Referral: https://aalbc.com/tc/topic/11631-could-ai-go-rogue-like-the-computers-in-the-matrix/#findComment-74197
  5. How? The first successful numerical prediction was performed using the ENIAC digital computer in 1950 by a team led by American meteorologist Jule Charney. The team included Philip Thompson, Larry Gates, Norwegian meteorologist Ragnar Fjørtoft, applied mathematician John von Neumann, computer programmer Klara Dan von Neumann, M. H. Frankel, Jerome Namias, John C. Freeman Jr., Francis Reichelderfer, George Platzman, and Joseph Smagorinsky. [THE ENIAC FORECASTS: A Re-creation][The Unheralded Contributions of Klara Dan von Neumann][A Vast Machine]

They used a simplified form of atmospheric dynamics based on solving the barotropic vorticity equation over a single layer of the atmosphere, by computing the geopotential height of the atmosphere's 500-millibar (15 inHg) pressure surface (a standard form of this equation is sketched at the end of this post). [Numerical Integration of the Barotropic Vorticity Equation] This simplification greatly reduced demands on computer time and memory, so the computations could be performed on the relatively primitive computers of the day. [https://archive.org/details/stormwatcherstur00cox_df1/page/208/mode/2up] When news of the first weather forecast by ENIAC was received by Richardson in 1950, he remarked that the results were an "enormous scientific advance." [The origins of computer weather prediction and climate modeling] The first calculations for a 24-hour forecast took ENIAC nearly 24 hours to produce, [The origins of computer weather prediction and climate modeling] but Charney's group noted that most of that time was spent in "manual operations", and expressed hope that forecasts of the weather before it occurs would soon be realized. [Numerical Integration of the Barotropic Vorticity Equation]

ARTICLES

THE ENIAC FORECASTS: A Re-creation
https://maths.ucd.ie/~plynch/Publications/ENIAC-BAMS-08.pdf

The Unheralded Contributions of Klara Dan von Neumann
https://www.smithsonianmag.com/science-nature/meet-computer-scientist-you-should-thank-your-phone-weather-app-180963716/
Despite having no formal mathematical training, she was a key figure in creating the computer that would later launch modern weather prediction
Sarah Witman, June 16, 2017

Editor's note, May 20, 2021: We’ve updated this piece to more accurately reflect Klara Dan von Neumann’s contributions to the experiment that resulted in the first numerical weather predictions in 1950. The piece originally misstated that Klara was in charge of hand-punching and managing the 100,000 punchcards that served as the ENIAC’s read/write memory, when in fact she wasn’t present for this part of the experiment. The story has been re-edited to reflect this information.

A weather app is a nifty tool that predicts your meteorological future, leveraging the strength of satellites, supercomputers, and other modern devices to tell you when to pack an umbrella. Today, computerized weather prediction—like moving pictures or seatbelts in cars—is so commonplace that most smartphone users don’t give it a second thought. But in the early 20th century, the idea that you might be able to forecast the weather days or even weeks ahead was a tantalizing prospect. One of the most important breakthroughs in weather forecasting took place in the spring of 1950, during an experiment at the Aberdeen Proving Ground, a U.S. Army facility in Maryland. For 33 days and nights, a team of scientists and computer technicians worked tirelessly to achieve something that meteorologists had been working toward for decades: predict the weather mathematically.
This was well before the age of pocket-sized, or even desktop, computers. The team—led by scientists Jule Charney, Ragnar Fjørtoft, John Freeman, George Platzman, and Joseph Smagorinsky—was using one of the world’s first computers: a finicky, 150-foot machine called ENIAC that had been developed during the recent World War. Platzman would later describe a complicated, 16-step process they repeated over and over: six steps for the ENIAC to run their calculations, and 10 steps to input instructions and record output on punch-cards. Minor errors forced them to redo hours—sometimes days—of work. In one tense moment, a computer operator’s thumb got caught in the machinery, temporarily halting operations. But at the end of the month, the team had produced six groundbreaking weather forecasts (well, technically, "hindcasts," since they used data from past storms to demonstrate the method).

An article in the New York Times hailed the project as a way to “lift the veil from previously undisclosed mysteries connected with the science of weather forecasting.” The benefits to agriculture, shipping, air travel and other industries “were obvious,” weather experts told the Times, offering the potential to save crops, money, and lives. An internal Weather Bureau memo commended “these men” for proving that computer-based forecasting, the cornerstone of modern weather prediction, was possible. This was mostly true—except, it wasn’t just men. Numerous women played critical scientific roles in the experiment, for which they earned little to no credit at the time.

[Photo caption: Two computer operators, Ruth Lichterman (left) and Marlyn Wescoff (right), wire the right side of the ENIAC with a new program in the pre-von Neumann era. US Army, via Historic Computers Images of the ARL Technical Library]

Like the ENIAC’s first programmers—Jean Bartik, Betty Holberton, Kathleen Antonelli, Marlyn Meltzer, Ruth Teitelbaum, and Frances Spence—the computer operators for the 1950 weather experiment were all women. While this highly skilled work would surely have earned them a co-authorship today, their names—Norma Gilbarg, Ellen-Kristine Eliassen, and Margaret Smagorinsky, who was the first female statistician hired by the Weather Bureau and the wife of meteorologist Joseph Smagorinsky—are absent from the journal article detailing the experiment’s results. Before most of the scientists arrived at Aberdeen, these women spent hundreds of hours calculating the equations that the ENIAC would need to compute in the full experiment. “The system that they were going to use on the big computer, we were doing manually,” Margaret recalled in an interview with science historian George Dyson before she died in 2011. “It was a very tedious job. The three of us worked in a very small room, and we worked hard.”

But perhaps the biggest single contribution, aside from the scientists leading the experiment, came from a woman named Klara Dan von Neumann. Klara, known affectionately as Klari, was born into a wealthy Jewish family in Budapest in 1911. After World War I, in which Hungary allied with Austria to become one of the great European powers of the war, Klara attended an English boarding school and became a national figure skating champion. When she was a teenager, during Budapest’s roaring '20s, her father and grandfather threw parties and invited the top artists and thinkers of the day—including women. Klara married young, divorced and remarried before the age of 25. In 1937, a Hungarian mathematician, John von Neumann, began to court her.
Von Neumann was also married at the time, but his divorce was in progress (his first wife, Mariette, had fallen in love with the acclaimed physicist J.B. Horner Kuper; she and Kuper would become two of the first employees of Long Island's Brookhaven National Laboratory). Within a year, John and Klara were married. John had a professorship at Princeton University, and, as the Nazis gained strength in Europe, Klara followed him to the U.S. Despite only having a high school education in algebra and trigonometry, she shared her new husband's interest in numbers, and was able to secure a wartime job with Princeton's Office of Population Research investigating population trends. By this time, John had become one of the most famous scientists in the world as a member of the Manhattan Project, the now-notorious U.S. government research project dedicated to building the first atomic bomb. With his strong Hungarian accent and array of eccentricities—he once played a joke on Albert Einstein by offering him a ride to the train station and then intentionally sending him off on the wrong train—he would later become the inspiration for Stanley Kubrick's Dr. Strangelove. While Klara stayed behind, working full-time at Princeton, John moved out to Los Alamos, New Mexico, running the thousands of calculations needed to build the first of these weapons of mass destruction. His work came to fatal fruition in 1945, when the U.S. dropped two atomic bombs on Japan, killing as many as 250,000 people.
[Image caption: A chart of the series of operations required to create the first weather forecasts, chronicled later by scientist George Platzman. AMS Bulletin, ©American Meteorological Society. Used with permission.]
After the war, John decided to turn his mathematical brilliance toward more peaceful applications. He thought that the ENIAC—a powerful new computer that cut its teeth running calculations for an early hydrogen bomb prototype—could be applied to help improve weather forecasting. As John began to pursue this idea, getting in touch with top meteorologists in the U.S. and Norway, Klara came to visit him in Los Alamos. Living apart during the Manhattan Project had been hard on their marriage, and Klara had suffered a miscarriage back in New Jersey, but the trip rekindled sparks between them. By this time, Klara had become quite mathematically adept through her work at Princeton, and she and John began to collaborate on the ENIAC. "I became Johnny's experimental rabbit," she told Dyson years afterward. "I learned how to translate algebraic equations into numerical forms, which in turn then have to be put into machine language in the order in which the machine has to calculate it, either in sequence or going round and round, until it has finished with one part of the problem, and then go on some definite which-a-way, whatever seems to be right for it to do next." The work was challenging, especially compared to modern computer programming with its luxuries like built-in memory and operating systems. Yet, Klara described to Dyson, she found coding to be a "very amusing and rather intricate jigsaw puzzle."
[Image caption: Women computer scientists holding different parts of an early computer. From left to right: Patsy Simmers, holding ENIAC board; Gail Taylor, holding EDVAC board; Milly Beck, holding ORDVAC board; Norma Stec, holding BRLESC-I board.]
[Image credit: US Army Photo, via Historic Computers Images of the ARL Technical Library]
In the acknowledgements of the 1950 paper detailing the first numerical weather predictions, the authors thank Klara for her "instruction in the technique of coding for the ENIAC and for checking the final code." But what is undoubtedly her most impactful contribution to the experiment took place several years prior: helping to transform the ENIAC from a rigidly hard-wired machine into one of the first stored-program computers, more akin to today's personal computers. Both Klara and John felt this was a necessary improvement for future applications like the weather experiment, as it would allow them to store a vast repertoire of commands in the computer's memory. In 1947, Klara and Nick Metropolis—a Greek-American mathematician and computer scientist, and leader of the Los Alamos computing group—collaborated on a plan to implement this new mode on the ENIAC, and in 1948 they traveled to Aberdeen to reconfigure the machine. After training five other people to program and run the ENIAC (two married couples and a bachelor: Foster and Cerda Evans, Harris and Rosalie Mayer, and Marshall Rosenbluth), they worked for 32 days straight to install the new control system, check it, and get the modified machine up and running. By the end of the trip, Klara had reportedly lost 15 pounds, and it took her several weeks and numerous doctor's visits to recover from the experience. But she still managed to write a full report on the conversion and use of the ENIAC as a stored-program computer. "The method is clearly a 100% success," John wrote at the time. By the time Charney and his team of scientists arrived at Aberdeen in early 1950, Platzman would recall years later, the "ENIAC had been operating in the new stored-program mode for over a year, a fact that greatly simplified our work." In a letter to his wife written during this first week, Platzman gushed: "The machine is a miracle." The ENIAC was still rudimentary: it could only produce 400 multiplications per second, so slow that it produced rhythmic chugging noises. But after working around the clock for over a month, the team had six precious gems to show for their efforts: two 12-hour and four 24-hour retrospective forecasts. Not long after the weather experiment concluded, tragedy befell the von Neumann family. John von Neumann was confined to a wheelchair in 1956, and succumbed to cancer a year later (likely due, at least in part, to his proximity to radiation during the Manhattan Project). Klara wrote the preface to his posthumous book, The Computer and the Brain, which she presented to Yale College in 1957. In it, she briefly described her late husband's contributions to the field of meteorology, writing that his "numerical calculations seemed to be helpful in opening entirely new vistas," but gave no mention of her own role. Klara's work with computers seems to have tapered off even before John's death. Whatever her reasoning may have been for this, it was in line with the prevailing trend at the time. Janet Abbate recounts in her 2012 book Recoding Gender how, as the public perception of computers and their value to society evolved throughout the 1950s and '60s, the number of women hired for those roles shrank rapidly. Abbate writes that, while the women who made up most of the workforce in the early days of coding "would have scoffed at the notion that programming would ever be considered a masculine occupation," that's exactly what happened within a matter of years.
Today, less than 8 percent of software developers worldwide identify as women, nonbinary, or gender nonconforming. While female representation in the fields of science, technology, engineering, and math has increased as a whole since the 1970s, according to the U.S. Census Bureau, the number of women working in computing roles has actually declined over the past few decades. But without their early contributions to the field, we might have missed out on the breakthrough that led to modern weather prediction, or any number of scientific advancements. So the next time you scroll through your weather app before deciding whether to don a raincoat, think of Klara and the other women who helped make it possible.

A Vast Machine
https://web.archive.org/web/20120127215929/http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=12080
Computer Models, Climate Data, and the Politics of Global Warming
Paul N. Edwards
Global warming skeptics often fall back on the argument that the scientific case for global warming is all model predictions, nothing but simulation; they warn us that we need to wait for real data, "sound science." In A Vast Machine Paul Edwards has news for these skeptics: without models, there are no data. Today, no collection of signals or observations—even from satellites, which can "see" the whole planet with a single instrument—becomes global in time and space without passing through a series of data models. Everything we know about the world's climate we know through models. Edwards offers an engaging and innovative history of how scientists learned to understand the atmosphere—to measure it, trace its past, and model its future. Edwards argues that all our knowledge about climate change comes from three kinds of computer models: simulation models of weather and climate; reanalysis models, which recreate climate history from historical weather data; and data models, used to combine and adjust measurements from many different sources. Meteorology creates knowledge through an infrastructure (weather stations and other data platforms) that covers the whole world, making global data. This infrastructure generates information so vast in quantity and so diverse in quality and form that it can be understood only by computer analysis—making data global.
Edwards describes the science behind the scientific consensus on climate change, arguing that over the years data and models have converged to create a stable, reliable, and trustworthy basis for establishing the reality of global warming. About the Author: Paul N. Edwards is Professor in the School of Information and the Department of History at the University of Michigan. He is the author of The Closed World: Computers and the Politics of Discourse in Cold War America (1996) and a coeditor (with Clark Miller) of Changing the Atmosphere: Expert Knowledge and Environmental Governance (2001), both published by the MIT Press.

Numerical Integration of the Barotropic Vorticity Equation
https://a.tellusjournals.se/articles/10.3402/tellusa.v2i4.8607
Original Research Papers
Authors: J. G. Charney, R. Fjörtoft, J. von Neumann
Abstract: A method is given for the numerical solution of the barotropic vorticity equation over a limited area of the earth's surface. The lack of a natural boundary calls for an investigation of the appropriate boundary conditions. These are determined by a heuristic argument and are shown to be sufficient in a special case. Approximate conditions necessary to insure the mathematical stability of the difference equation are derived. The results of a series of four 24-hour forecasts computed from actual data at the 500 mb level are presented, together with an interpretation and analysis. An attempt is made to determine the causes of the forecast errors. These are ascribed partly to the use of too large a space increment and partly to the effects of baroclinicity. The rôle of the latter is investigated in some detail by means of a simple baroclinic model.

The origins of computer weather prediction and climate modeling
https://web.archive.org/web/20100708191309/http://www.rsmas.miami.edu/personal/miskandarani/Courses/MPO662/Lynch,Peter/OriginsCompWF.JCP227.pdf
from Peter Lynch

IN AMENDMENT
Reading the Manual for ENIAC, the World's First Electronic Computer
https://thenewstack.io/reading-the-manual-for-eniac-the-worlds-first-electronic-computer/
Jun 16th, 2019 6:00am by David Cassel
Feature image: US Army photo of the ENIAC.
Sometimes you have to take a long look back to realize just how much things have changed. And if you looked around our modern-day, cloud-enhanced web this month, you'd find several sites sharing memories about the launch of the ENIAC computer in 1946 — and of all those unstoppable mid-century engineers who tirelessly made it work. ENIAC (Electronic Numerical Integrator and Computer) was the world's very first fully electronic general-purpose computer. Smithsonian magazine once called it "the room-size government computer that began the digital era." And last week the I Programmer site shared a link to an original operating manual for ENIAC, originally published 75 years ago this month.
It’s dated June 1st, 1946 — it was published by the school of engineering at the University of Pennsylvania in Philadelphia — and the manual’s page at Archive.org shows it’s been viewed just 2,309 times. (“There are no reviews yet,” reads the boilerplate on the site. “Be the first one to write a review.”) The archive identifies it as part of “the bitsavers.org collection” — a project started by a software curator at the Computer History Museum, with over 98,500 files and more than 4.7 million text pages. So what can we glean about the ENIAC’s moment in history from the manual which documents its operation? It seems like the machine was temperamental. For example, it warns that the DC power should never be turned on without first turning the operation switch to “continuous.” “Failure to follow this rule causes certain DC fuses to blow, -240 and -415 in particular.” But the consequences are even worse if you open the DC fuse cabinet while the DC power is turned on. “This not only exposes a person to voltage differences of around 1,500 volts but the person may be burned by flying pieces of molten fuse wire” (if one of the fuse cases suddenly blew). In fact, the ENIAC was actually designed with a door switch shunt that prevented it from operating if one of its panel doors was open, “since removing the doors exposes dangerous voltage.” But this feature could be bypassed by holding the door switch shunt in its closed position. In a video shared by the Computer History Archives Project, chief engineer J. Presper Eckert Jr. remembers that it was rare to go more than a day or two without at least one tube blowing out. And in addition to potential shocks, dust was another potential hazard. “Dust particles may cause transient relay failures,” the manual warns, “so avoid stirring up dust in the ENIAC room.” “Also, if any relay case is removed, always replace in exactly the same position in order not to disturb dust inside the case.” The ENIAC used an IBM card reader, but that had its own issues too. At one point the manual actually recommends against having the same number in every column of a punchcard, since “this weakens a card increasing the probability of ‘jamming’ in the feeding mechanism of the IBM machines.”

Essential Instructions
Despite these limitations, ENIAC was a remarkable piece of technology. The manual includes intricate drawings and detailed diagrams of its racks, trays, cables, and wiring. But most important are the front panel drawings, which “show in some detail the switches, sockets, etc. for each panel of each unit.” “They contain the essential instructions for setting up a problem on the ENIAC.” ENIAC’s panels were equipped with neon lights corresponding to things like the “denominator flip-flop” and the “divide flip-flop.” The manual includes footnotes that carefully explain under what circumstances each light will be lit. “The square root of zero is perhaps the easiest test to repeat on the divider-square rooter…” It’s not until page 28 that it explains that turning on the start switch “starts the initiating sequences for the ENIAC, turning on the DC power supplies, the heaters of the various panels, and the fans…” And it also turns on a little amber light. “When this sequence has been completed, showing that the ENIAC is ready to operate, the green light goes on…” There were gates for a “constant transmitter” (which transmits to an “accumulator”), and its circuitry included “program pulse input terminals” — for add pulses and subtract pulses.
And the machine also included two “significant figures switches.” “When 10 or more significant figures are desired, the left-hand switch is set to 10 and the right-hand switch set so that the sum of the two switch readings equals the number of significant figures desired.” There are tantalizing glimpses of how it all works together. The manual recommends a complicated test to make sure all the hardware is working properly. It involves a card with the value P 11111 11111, which gets input into the machine’s “accumulator” 18 times. The mathematical result — 19,999,999,998 — apparently exceeds the range of the accumulator, so the expected result is actually M 99999 99998. Then a card with the value P 00000 00001 is transmitted to the accumulators exactly twice — which, instead of twenty billion (20,000,000,000), should give the value P 00000 00000. “Note that this test assumes that the significant figure switch is set to ’10’…”
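That accumulator test is easy to check for yourself. Here is a small Python sketch (mine, and only one simple model of the machine's signed ten-digit behavior) showing why 18 additions of P 11111 11111 read back as M 99999 99998, and why two more unit additions wrap the register around to P 00000 00000:

def accumulate(values):
    total = 0
    for v in values:
        total = (total + v) % (2 * 10**10)    # sign and ten digits wrap together
    # read the register back out as a sign (P or M) and ten digits
    sign, digits = ("P", total) if total < 10**10 else ("M", total - 10**10)
    return f"{sign} {digits:010d}"

print(accumulate([1_111_111_111] * 18))           # M 9999999998, not 19,999,999,998
print(accumulate([1_111_111_111] * 18 + [1, 1]))  # wraps on through to P 0000000000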
In Smithsonian magazine, technology writer Steven Levy remembers living in Philadelphia in the 1970s and renting an apartment from a man named J. Presper Eckert Jr. “It was only when I became a technology writer some years later that I realized that my landlord had invented the computer.” (Video: https://www.youtube.com/watch?v=G8R6li54R20) In the early 1940s, Eckert had been a graduate student in the school of engineering who became the ENIAC’s chief engineer. A professor had proposed electronic calculations for munitions trajectories to help the American military during World War II. Levy calls it “a breathtaking enterprise. The original cost estimate of $150,000 would rise to $400,000. Weighing in at 30 tons, the U-shaped construct filled a 1,500-square-foot room. Its 40 cabinets, each of them nine feet high, were packed with 18,000 vacuum tubes, 10,000 capacitors, 6,000 switches and 1,500 relays… Two 20-horsepower blowers exhaled cool air so that ENIAC wouldn’t melt down.” By the time they’d finished building it — World War II was over. But there was still work to do. The Atomic Heritage Foundation site reports that ENIAC was used to help perform the engineering calculations for the world’s first hydrogen bomb (along with two other more-recently developed computers). “It took sixty straight days of processing, all through the summer of 1951.” Levy cites an Army press release describing ENIAC as a “mathematical robot” that “frees scientific thought from the drudgery of lengthy calculating work.” A recent documentary called The Computers reminds modern-day viewers that the ENIAC’s original programmers were all women — Kay McNulty, Betty Jennings, Betty Snyder, Marlyn Wescoff, Fran Bilas and Ruth Lichterman. There’s now also a site called the ENIAC Programmers Project that shares a brief overview of the documentary with more information. During World War II, the U.S. military had put together a team of nearly 100 women, trained in mathematics, who were calculating complex ballistic-trajectory equations. Six of them were selected to program the ENIAC. Back in 1996, the IEEE Annals of the History of Computing ran a profile of “The Women of ENIAC,” interviewing 10 of the women who’d worked with the computer during its 10-year run. The poster for the documentary describes them as “six women lost from history who created technologies that changed our world.” The ENIAC was eventually left behind by ever-faster and ever-cheaper computers. “By the time it was decommissioned in 1955 it had been used for research on the design of wind tunnels, random number generators, and weather prediction,” remembers an ENIAC web page at Oak Ridge National Laboratory. And even though ENIAC was decommissioned in 1955, it was reassembled for a humble ceremony in Philadelphia 50 years after its 1946 debut, Levy remembers. “Vice President Al Gore threw a switch and the remaining pieces clattered out the answer to an addition problem.” According to Levy, the ENIAC’s chief engineer later groused “How would you like to have most of your life’s work end up on a square centimeter of silicon?” But Levy sees another way to look at it. “[T]he question could easily have been put another way: How would you like to have invented the machine that changed the course of civilization?” Yet legacies aside, it also seems like it was a real thrill just to have been a part of the work itself. “I’ve never been in as exciting an environment,” remembers Jean Jennings Bartik in the film. “We knew we were pushing back frontiers.” And more than 60 years later, she also still remembered that the ENIAC computer “was a son-of-a-bitch to program.”

The Women of ENIAC
https://web.archive.org/web/20160304052225/http://www.eg.bucknell.edu/~csci203/2012-fall/hw/hw06/assets/womenOfENIAC.pdf
  6. Sanawoc is a new series I am starting, based on art.
  7. Leaderboard https://www.deviantart.com/hddeviant/journal/DogoKwan-2025-Leaderboard-1152627128
  8. Here is a URL to use for testing https://www.deviantart.com/kiratheartist/art/Little-Friend-1113175550 and here you can find more https://www.deviantart.com/hddeviant/favourites
  9. Thank you for joining, @Tesa. Do you code?
  10. I love strategies where the interface is easy and everything is as clear as possible. I also play Zombie Defense, a cool game that I recommend to everyone.
  11. December 15th to the 21st https://www.deviantart.com/comments/1/1099379642/5184344360
  12. Winner playing Dogokwan for the week of December 8th to December 15th 2024 https://www.deviantart.com/hddeviant/status-update/Congrats-to-for-the-best-1134325898
  13. The FORTRAN Automatic Coding System
J. W. Backus, R. J. Beeber, S. Best, R. Goldberg, L. M. Haibt, H. L. Herrick, R. A. Nelson, D. Sayre, P. B. Sheridan, H. Stern, I. Ziller, R. A. Hughes, and R. Nutt

THE FORTRAN project was begun in the summer of 1954. Its purpose was to reduce by a large factor the task of preparing scientific problems for IBM's next large computer, the 704. If it were possible for the 704 to code problems for itself and produce as good programs as human coders (but without the errors), it was clear that large benefits could be achieved. For it was known that about two-thirds of the cost of solving most scientific and engineering problems on large computers was that of problem preparation. Furthermore, more than 90 per cent of the elapsed time for a problem was usually devoted to planning, writing, and debugging the program. In many cases the development of a general plan for solving a problem was a small job in comparison to the task of devising and coding machine procedures to carry out the plan. The goal of the FORTRAN project was to enable the programmer to specify a numerical procedure using a concise language like that of mathematics and obtain automatically from this specification an efficient 704 program to carry out the procedure. It was expected that such a system would reduce the coding and debugging task to less than one-fifth of the job it had been. Two and one-half years and 18 man-years have elapsed since the beginning of the project. The FORTRAN system is now complete. It has two components: the FORTRAN language, in which programs are written, and the translator or executive routine for the 704 which effects the translation of FORTRAN language programs into 704 programs. Descriptions of the FORTRAN language and the translator form the principal sections of this paper.

The experience of the FORTRAN group in using the system has confirmed the original expectations concerning reduction of the task of problem preparation and the efficiency of output programs. A brief case history of one job done with a system seldom gives a good measure of its usefulness, particularly when the selection is made by the authors of the system. Nevertheless, here are the facts about a rather simple but sizable job. The programmer attended a one-day course on FORTRAN and spent some more time referring to the manual. He then programmed the job in four hours, using 47 FORTRAN statements. These were compiled by the 704 in six minutes, producing about 1000 instructions. He ran the program and found the output incorrect. He studied the output (no tracing or memory dumps were used) and was able to localize his error in a FORTRAN statement he had written. He rewrote the offending statement, recompiled, and found that the resulting program was correct. He estimated that it might have taken three days to code this job by hand, plus an unknown time to debug it, and that no appreciable increase in speed of execution would have been achieved thereby.

THE FORTRAN LANGUAGE
The FORTRAN language is most easily described by reviewing some examples.

Arithmetic Statements
Example 1: Compute root = (−(B/2) + √((B/2)² − A·C))/A.
FORTRAN Program: ROOT = (-(B/2.0) + SQRTF((B/2.0)**2 - A*C))/A
Notice that the desired program is a single FORTRAN statement, an arithmetic formula. Its meaning is: "Evaluate the expression on the right of the = sign and make this the value of the variable on the left."
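As a gloss on Example 1 (my sketch, not anything from the paper): the object program the authors describe behaves like the following Python, computing the common subexpression B/2.0 only once and forming the square by a multiplication instead of an exponentiation routine.

import math

def root(a, b, c):
    half_b = b / 2.0               # computed once, as the compiled 704 code does
    square = half_b * half_b       # (B/2.0)**2 done by multiplication, not a routine
    return (-half_b + math.sqrt(square - a * c)) / a

print(root(1.0, -5.0, 4.0))        # 4.0, one root of x**2 - 5x + 4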
The symbol * denotes multiplication and ** denotes exponentiation (i.e., A**B means A^B). The program which is generated from this statement effects the computation in floating point arithmetic, avoids computing (B/2.0) twice, and computes (B/2.0)**2 by a multiplication rather than by an exponentiation routine. [Had (B/2.0)**2.01 appeared instead, an exponentiation routine would necessarily be used, requiring more time than the multiplication.] The programmer can refer to quantities in both floating point and integer form. Integer quantities are somewhat restricted in their use and serve primarily as subscripts or exponents. Integer constants are written without a decimal point. Example: 2 (integer form) vs 2.0 (floating point form). Integer variables begin with I, J, K, L, M, or N. Any meaningful arithmetic expression may appear on the right-hand side of an arithmetic statement, provided the following restriction is observed: an integer quantity can appear in a floating-point expression only as a subscript or as an exponent or as the argument of certain functions. The functions which the programmer may refer to are limited only by those available on the library tape at the time, such as SQRTF, plus those simple functions which he has defined for the given problem by means of function statements. An example will serve to describe the latter.

Function Statements
Example 2: Define a function of three variables to be used throughout a given problem, as follows: ROOTF(A, B, C) = (-(B/2.0) + SQRTF((B/2.0)**2 - A*C))/A. Function statements must precede the rest of the program. They are composed of the desired function name (ending in F) followed by any desired arguments which appear in the arithmetic expression on the right of the = sign. The definition of a function may employ any previously defined functions. Having defined ROOTF as above, the programmer may apply it to any set of arguments in any subsequent arithmetic statements. For example, a later arithmetic statement might be THETA = 1.0 + GAMMA * ROOTF(PI, 3.2 * Y + 14.0, 7.63).

DO Statements, DIMENSION Statements, and Subscripted Variables
Example 3: Set QMAX equal to the largest quantity P(a_i + b_i)/P(a_i - b_i) for some i between 1 and 1000, where P(x) = c0 + c1·x + c2·x² + c3·x³.
FORTRAN Program:
1) POLYF(X) = C0 + X * (C1 + X * (C2 + X * C3))
2) DIMENSION A(1000), B(1000)
3) QMAX = -1.0E20
4) DO 5 I = 1, 1000
5) QMAX = MAXF(QMAX, POLYF(A(I) + B(I))/POLYF(A(I) - B(I)))
6) STOP
The program above is complete except for input and output statements which will be described later. The first statement is not executed; it defines the desired polynomial (in factored form for an efficient output program). Similarly, the second statement merely informs the executive routine that the vectors A and B each have 1000 elements. Statement 3 assigns a large negative initial value to QMAX, -1.0 × 10^20, using a special concise form for writing floating-point constants. Statement 4 says "DO the following sequence of statements down to and including the statement numbered 5 for successive values of I from 1 to 1000." In this case there is only one statement 5 to be repeated. It is executed 1000 times; the first time reference is made to A(1) and B(1), the second time to A(2) and B(2), etc. After the 1000th execution of statement 5, statement 6—STOP—is finally encountered. In statement 5, the function MAXF appears. MAXF may have two or more arguments and its value, by definition, is the value of its largest argument.
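For readers who know a later language, here is Example 3 rendered in Python (a sketch of mine; note Python's zero-based lists against FORTRAN's I = 1, 1000):

def polyf(x, c0, c1, c2, c3):
    # the factored (Horner) form of statement 1
    return c0 + x * (c1 + x * (c2 + x * c3))

def qmax(a, b, c0, c1, c2, c3):
    q = -1.0e20                     # large negative start, as in statement 3
    for i in range(1000):           # the DO loop of statements 4 and 5
        q = max(q, polyf(a[i] + b[i], c0, c1, c2, c3)
                 / polyf(a[i] - b[i], c0, c1, c2, c3))
    return q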
Thus on each repetition of statement 5 the old value of QMAX is replaced by itself or by the value of POLYF(A(I)+B(I))/POLYF(A(I)-B(I)), whichever is larger. The value of QMAX after the 1000th repetition is therefore the desired maximum.

Example 4: Multiply the n×n matrix a_ij (n ≤ 20) by its transpose, obtaining the product elements on or below the main diagonal by the relation c_ij = Σ_{k=1}^{n} a_ik · a_jk (for j ≤ i) and the remaining elements by the relation c_ij = c_ji.
FORTRAN Program: As in the preceding example, the DIMENSION statement says that there are two matrices of maximum size 20×20 named A and C. For explanatory purposes only, the three boxes around the program show the sequence of statements controlled by each DO statement. The first DO statement says that procedure P, i.e., the following statements down to statement 2 (outer box), is to be carried out for I = 1, then for I = 2, and so on up to I = N. The first statement of procedure P (DO 2 J = 1, I) directs that procedure Q be done for J = 1 to J = I. And of course each execution of procedure Q involves N executions of procedure R for K = 1, 2, ..., N. Consider procedure Q. Each time its last statement is completed the "index" J of its controlling DO statement is increased by 1 and control goes to the first statement of Q, until finally its last statement is reached with J = I. Since this is also the last statement of P and P has not been repeated until I = N, I will be increased and control will then pass to the first statement of P. This statement (DO 2 J = 1, I) causes the repetition of Q to begin again. Finally, the last statement of Q and P (statement 2) will be reached with J = I and I = N, meaning that both Q and P have been repeated the required number of times. Control will then go to the next statement, STOP. Each time R is executed a new term is added to a product element. Each time Q is executed a new product element and its mate are obtained. Each time P is executed a product row (over to the diagonal) and the corresponding column (down to the diagonal) are obtained.

The last example contains a "nest" of DO statements, meaning that the sequence of statements controlled by one DO statement contains other DO statements. Another example of such a nest is shown in the next column, on the left. Nests of the type shown on the right are not permitted, since they would usually be meaningless. Although not illustrated in the examples given, the programmer may also employ subscripted variables having three independent subscripts.
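Since the FORTRAN program for Example 4 survives in this copy only as the description of its boxes, a Python sketch of its effect may help (assumptions mine, with zero-based indices standing in for the paper's 1-based ones):

def times_transpose(a, n):
    # c[i][j] = sum over k of a[i][k]*a[j][k] on or below the diagonal,
    # with the element above the diagonal filled in by symmetry
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):              # outer DO: procedure P
        for j in range(i + 1):      # middle DO: procedure Q, j = 0..i
            s = 0.0
            for k in range(n):      # inner DO: procedure R
                s += a[i][k] * a[j][k]
            c[i][j] = s             # the product element...
            c[j][i] = s             # ...and its mate
    return c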
Thus "ALPHA, RHO, ARC" is a descrip- tion of a list of 51 quantities( (tb~'size of ALPHA and RHO being obtained fidrn' kf& '~IMENSION state- ment), Reading of cade ,'prxwmx!& until these SL,quarati- ties have been obtai~ed~hahh QWQ having five nlmibers, as per the FORMAT: d~wiptiah, except the Ids* w&.ich has the value of sARG'ddyr ,8ine:ee ARG te~$niaitbd~the list, the remaining f~a>~g,fiiel$sla~ the. last G~W? imp not read. The PRINT statement is similar to READ except that it specifies a list of only three quantities. Thus each execution of PRINT causes a single line to be printed with ARG, SUM, VALUE printed in the first three of the five fields described by FORMAT state- ment 1. The IF' statement says "If A RG -ALPHA (I) is negative go tostatement 4, if it 3s zero go to statement 3, and if it is' 'positive go to 3." Thus the repetition of the two Statements controlled by the DO consists normally of computing ARG - ALPHA(1) , finding it zero or positive, and going to statement 3 followed by the next repetition. H~wever, when I has been in- creased to the extent that the first ALPHA exceeding ARG is encountered, control will pass to statement 4: Note that this statement does not belong to the se- quence controlled by the DO. In such cases, the repeti- tion specified by the DO is terminated and the value of the index (in this ease I) is preserved. Thus if the first ALPHA exceeding ARG were ALPHA (20), then RHO (19) would be obtaihed in statement 4. The GO TO statement, of course, passes control to statement 2, which initiates reading the 11 cards for the next case.The process will continue until there are no more cards in the reader. The above program is entirely complete. When punched in cards as shown, and comd piled, the jcrandlator will produce a ready-to-run 704 program which will perform the job specified. Other Types of FORTRAN Statements In the above examples the following types of FOR- TRAN statements have been exhibited. Arithmetic statements Function statements DO statements IF statements GO TO statements READ statements PRINT statements STOP' statements DIMENSION statements FORMAT statements. The explanations accompanying each example have attempted to show some of the possible applications and variations of these statements. It is felt that these examples give a representative picture of the FOR- TRAN language; however, many of its features have had to be omitted. There are 23 other types of state- ments in the language, many of them completely analogous to some of those described here. They pro- vide facilities for referring to other input-output' and auxiliary storage devices (tapes, drums, and card punch), for specifying preset and computed branching of control, for detecting various conditions which may arise such as an attempt to divide by zero, and for pro- viding various information about a program to the translator. A complete description of the language is to be found in Programmer's Reference Manual, the FOR- TRA N Automatic Coding System for the IB M 704. Preparation of a Program for Translation The translator accepts statements punched one per card (continuation cards may be used for very long statements). There is a separate key on the keypunch- ing device for each character used in FORTRAN state- ments and each character is represented in the card by several holes in a single column of the card. Five columns are reserved for a statement number (if pres- ent) and 66 are available for the statement. 
Other Types of FORTRAN Statements
In the above examples the following types of FORTRAN statements have been exhibited:
Arithmetic statements
Function statements
DO statements
IF statements
GO TO statements
READ statements
PRINT statements
STOP statements
DIMENSION statements
FORMAT statements
The explanations accompanying each example have attempted to show some of the possible applications and variations of these statements. It is felt that these examples give a representative picture of the FORTRAN language; however, many of its features have had to be omitted. There are 23 other types of statements in the language, many of them completely analogous to some of those described here. They provide facilities for referring to other input-output and auxiliary storage devices (tapes, drums, and card punch), for specifying preset and computed branching of control, for detecting various conditions which may arise such as an attempt to divide by zero, and for providing various information about a program to the translator. A complete description of the language is to be found in Programmer's Reference Manual, the FORTRAN Automatic Coding System for the IBM 704.

Preparation of a Program for Translation
The translator accepts statements punched one per card (continuation cards may be used for very long statements). There is a separate key on the keypunching device for each character used in FORTRAN statements and each character is represented in the card by several holes in a single column of the card. Five columns are reserved for a statement number (if present) and 66 are available for the statement. Keypunching a FORTRAN program is therefore a process similar to that of typing the program.

Translation
The deck of cards obtained by keypunching may then be put in the card reader of a 704 equipped with the translator program. When the load button is pressed one gets either 1) a list of input statements which fail to conform to specifications of the FORTRAN language, accompanied by remarks which indicate the type of error in each case; 2) a deck of binary cards representing the desired 704 program; 3) a binary tape of the program which can either be preserved or loaded and executed immediately after translation is complete; or 4) a tape containing the output program in symbolic form suitable for alteration and later assembly. (Some of these outputs may be unavailable at the time of publication.)

THE FORTRAN TRANSLATOR
General Organization of the System
The FORTRAN translator consists of six successive sections, as follows.
Section 1: Reads in and classifies statements. For arithmetic formulas, compiles the object (output) instructions. For nonarithmetic statements including input-output, does a partial compilation, and records the remaining information in tables. All instructions compiled in this section are in the COMPAIL file.
Section 2: Compiles the instructions associated with indexing, which result from DO statements and the occurrence of subscripted variables. These instructions are placed in the COMPDO file.
Section 3: Merges the COMPAIL and COMPDO files into a single file, meanwhile completing the compilation of nonarithmetic statements begun in Section 1. The object program is now complete, but assumes an object machine with a large number of index registers.
Section 4: Carries out an analysis of the flow of the object program, to be used by Section 5.
Section 5: Converts the object program to one which involves only the three index registers of the 704.
Section 6: Assembles the object program, producing a relocatable binary program ready for running. Also on demand produces the object program in SHARE symbolic language.
(Note: Section 3 is of internal importance only; Section 6 is a fairly conventional assembly program. These sections will be treated only briefly in what follows.)
Within the translator, information is passed from section to section in two principal forms: as compiled instructions, and as tables. The compiled instructions (e.g., the COMPAIL and COMPDO files, and later their merged result) exist in a four-word format which contains all the elements of a symbolic 704 instruction; i.e., symbolic location, three-letter operation code, symbolic address with relative absolute part, symbolic tag, and absolute decrement. (Instructions which refer to quantities given symbolic names by the programmer have those same names in their addresses.) This symbolic format is retained until section 6. Throughout, the order of the compiled instructions is maintained by means of the symbolic locations (internal statement numbers), which are assigned in sequential fashion by section 1 as each new statement is encountered. The tables contain all information which cannot yet be embodied in compiled instructions. For this reason the translator requires only the single scan of the source program performed in section 1.
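The four-word symbolic instruction format is easy to picture as a record; here is a sketch (field names mine, not the paper's) of what each compiled entry in the COMPAIL or COMPDO file carries:

from dataclasses import dataclass

@dataclass
class SymbolicInstruction:
    location: str    # internal statement number, assigned sequentially by section 1
    opcode: str      # three-letter 704 operation code
    address: str     # symbolic address; programmer-given names survive here
    offset: int      # the relative absolute part of the address
    tag: str         # symbolic index register, i.e. a subscript-combination name
    decrement: int   # absolute decrement, used by indexing and test instructions

compail = [SymbolicInstruction("S1", "CLA", "A", 0, "(I,J)", 0)]  # an invented entry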
A final observation should be made about the organization of the system. Basically, it is simple, and most of the complexities which it does possess arise from the effort to cause it to produce object programs which can compete in efficiency with hand-written programs. Some of these complexities will be found within the individual sections; but also, in the system as a whole, the sometimes complicated interplay between compiled instructions and tables is a consequence of the desire to postpone compiling until the analysis necessary to produce high object-program efficiency has been performed.

For an input-output statement, section 1 compiles the appropriate read or write select (RDS or WRS) instruction, and the necessary copy (CPY) instructions (for binary operations) or transfer instructions to pre-written input-output routines which perform conversion between decimal and binary and govern format (for decimal operations). When the list of the input-output statement is repetitive, table entries are made which will cause section 2 to generate the indexing instructions necessary to make the appropriate loops. The treatment of statements which are neither input-output nor arithmetic is similar; i.e., those instructions which can be compiled are compiled, and the remaining information is extracted and placed in one or more of the appropriate tables.

In contrast, arithmetic formulas are completely treated in section 1, except for open (built-in) subroutines, which are added in section 3; a complete set of compiled instructions is produced in the COMPAIL file. This compilation involves two principal tasks: 1) the generation of an appropriate sequence of arithmetic instructions to carry out the computation specified by the formula, and 2) the generation of (symbolic) tags for those arithmetic instructions which refer to subscripted variables (variables which denote arrays), which in combination with the indexing instructions to be compiled in section 2 will refer correctly to the individual members of those arrays. Both these tasks are accomplished in the course of a single scan of the formula.

Task 2) can be quickly disposed of. When a subscripted variable is encountered in the scan, its subscript(s) are examined to determine the symbols used in the subscripts, their multiplicative coefficients, and the dimensions of the array. These items of information are placed in tables where they will be available to section 2; also from them is generated a subscript combination name which is used as the symbolic tag of those instructions which refer to the subscripted variable.

The difficulty in carrying out task 1) is one of level; there is implicit in every arithmetic formula an order of computation, which arises from the control over ordering assigned by convention to the various symbols (parentheses, +, -, *, /, etc.) which can appear, and this implicit ordering must be made explicit before compilation of the instructions can be done. This explicitness is achieved, during the formula scan, by associating with each operation required by the formula a level number, such that if the operations are carried out in the order of increasing level number the correct sequence of arithmetic instructions will be obtained. The sequence of level numbers is obtained by means of a set of rules, which specify for each possible pair formed of operation type and symbol type the increment to be added to or subtracted from the level number of the preceding pair.
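The paper's actual table of increments is not reproduced here, but the idea behind level numbers can be sketched with invented rules: give each operator a number built from parenthesis depth plus its precedence, and perform operators from the highest number down. (Note the paper's own convention runs by increasing level; this toy simply inverts the direction.)

PREC = {"+": 1, "-": 1, "*": 2, "/": 2, "**": 3}

def level_numbers(tokens):
    levels, depth = [], 0
    for i, tok in enumerate(tokens):
        if tok == "(":
            depth += 10              # anything inside parentheses outranks what is outside
        elif tok == ")":
            depth -= 10
        elif tok in PREC:
            levels.append((depth + PREC[tok], i, tok))
    # highest level first; ties broken left to right
    return sorted(levels, key=lambda t: (-t[0], t[1]))

# A + B ** C * ( E + F ): the parenthesized + ranks first, then **, then *, then the outer +
print(level_numbers(["A", "+", "B", "**", "C", "*", "(", "E", "+", "F", ")"]))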
In fact, the compilation is not carried out with the raw set of level numbers produced during the scan. After the scan, but before the compilation, the levels are examined for empty sections which can be deleted, for permutations of operations on the same level which will reduce the number of accesses to memory, and for redundant computation (arising from the existence of common subexpressions) which can be eliminated.

An example will serve to show (somewhat inaccurately) some of the principles employed in the level-analysis process. Consider the following arithmetic expression: A + B**C * (E + F). In the level analysis of this expression parentheses are in effect inserted which define the proper order in which the operations are to be performed. If only three implied levels are recognized (corresponding to +, *, and **) the expression obtains the following form: +(*(**A)) + (*(**B**C) * [+(*(**E)) + (*(**F))]). The brackets represent the parentheses appearing in the original expression. (The level-analysis routine actually recognizes an additional level corresponding to functions.) Given the above expression the level-analysis routine proceeds to define a sequence of new dependent variables, the first of which represents the value of the entire expression. Each new variable is generated whenever a left parenthesis is encountered and its definition is entered on another line. In the single scan of the expression it is often necessary to begin the definition of one new variable before the definition of another has been completed. The subscripts of the u's in the sets of definitions indicate the order in which they were defined. This is the point reached at the end of the formula scan. What follows illustrates the further processing applied to the set of levels. Notice that one of the u's, for example, is defined simply as **F. Since there are not two or more operands to be combined, the ** serves only as a level indication and no further purpose is served by having defined this u. The procedure therefore substitutes F for it wherever it appears and its defining line is deleted. Similarly, F is then substituted for the u defined as *F, and that line is deleted too. This elimination of "redundant" u's is carried to completion and results in a reduced set of definitions. These definitions, read up, describe a legitimate procedure for obtaining the value of the original expression. The number of u's remaining at this point (in this case four) determines the number of intermediate quantities which may need to be stored. However, further examination of this case reveals that the result of u3 is in the accumulator, ready for u0; therefore the store and load instructions which would usually be compiled between u3 and u0 are omitted.

Section 2 (Nelson and Ziller)
Throughout the object program will appear instructions which refer to subscripted variables. Each of these instructions will (until section 5) be tagged with a symbolic index register corresponding to the particular subscript combination of the subscripts of the variable [e.g., (I, K, J) and (K, I, J) are two different subscript combinations]. If the object program is to work correctly, every symbolic index register must be so governed that it will have the appropriate contents at every instant that it is being used. It is the source program, of course, which determines what these appropriate contents must be, primarily through its DO statements, but also through arithmetic formulas (e.g., I = N + 1) which may define the values of variables appearing in subscripts, or input formulas which may read such values in at object time.
Moreover, in the case of DO statements, which are designed to produce loops in the object program, it is necessary to provide tests for loop exit. It is these two tasks, the governing of symbolic index registers and the testing of their contents, which section 2 must carry out. Much of the complexity of what follows arises from the wish to carry out these tasks optimally; i.e., when a variable upon which many subscript combinations depend undergoes a change, to alter only those index registers which really require changing in the light of the problem flow, and to handle exits correctly with a minimum number of tests.

If the subscripted variable A(2*I + 1, 4*J + 3, 6*K + 5) appears in a FORTRAN program, the index quantity which must be in its symbolic index register when this reference to A is made is (c1·i − 1) + (c2·j − 1)·di + (c3·k − 1)·di·dj + 1, where c1, c2, and c3 in this case have the values 2, 4, and 6; i, j, and k are the values of I, J, and K at the moment; and di and dj are the I and J dimensions of A. The effect of the addends 1, 3, and 5 is incorporated in the address of the instruction which makes the reference. In general, the index quantity associated with a subscript combination as given above, once formed, is not recomputed. Rather, every time one of the variables in a subscript combination is incremented under control of a DO, the corresponding quantity is incremented by the appropriate amount. In the example given, if K is increased by n (under control of a DO), the index quantity is increased by c3·di·dj·n, giving the correct new value.
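In Python, the index quantity and its incremental update for A(2*I+1, 4*J+3, 6*K+5) look like this (a sketch; the variable names and sample dimensions are mine):

def index_quantity(i, j, k, di, dj, c1=2, c2=4, c3=6):
    # (c1*i - 1) + (c2*j - 1)*di + (c3*k - 1)*di*dj + 1; the addends 1, 3, 5
    # live in the referencing instruction's address, not here
    return (c1 * i - 1) + (c2 * j - 1) * di + (c3 * k - 1) * di * dj + 1

di, dj = 20, 20                                   # assumed dimensions of A
before = index_quantity(1, 1, 1, di, dj)
after = index_quantity(1, 1, 1 + 5, di, dj)       # K increased by n = 5 under a DO
assert after - before == 6 * di * dj * 5          # incremented by c3*di*dj*n, not recomputed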
The following paragraphs discuss in further detail the ways in which index quantities are computed and modified.

Choosing the Indexing Instructions; Case of Subscripts Controlled by DO's
We distinguish between two classes of subscripts: those which are in the range of a DO having that subscript as its index symbol, and those subscripts which are not controlled by DO's. The fundamental idea for subscripts controlled by DO's is that a sequence of indexing instruction groups can be selected to answer the requirements, and that the choice of a particular instruction group depends mainly on the arrangement of the subscripts within the subscript combination and the order of the DO's controlling each subscript. DO's often exist in nests. A nest of DO's consists of all the DO's contained by some one DO which is itself not contained by any other. Within a nest, DO's are assigned level numbers. Wherever the index symbol of a DO appears as a subscript within the range of that DO, the level number of the DO is assigned to the subscript. The relative values of the level numbers in a subscript combination produce a group number which, along with other information, determines which indexing instruction group is to be compiled.

Producing the Decrement Parts of Indexing Instructions
The part of the 704 instruction used to change or test the contents of an index register is called the decrement part of the instruction. The decrement parts of the FORTRAN indexing instructions are functions of the dimensions of arrays and of the parameters of DO's; that is, of the initial value n1, the upper bound n2, and the increment n3 appearing in the statement DO 1 i = n1, n2, n3. The general form of the function is [(n2 − n1 + n3)/n3]·g, where g represents necessary coefficients and dimensions, and [x] denotes the integral part of x. If all the parameters are constants, the decrement parts are computed during the execution of the FORTRAN executive program. If the parameters are variable symbols, then instructions are compiled in the object program to compute the proper decrement values. For object program efficiency, it is desirable to associate these computing instructions with the outermost DO of a nest, where possible, and not with the inner loops, even though these inner DO's may have variable parameters. Such a variable parameter (e.g., N in "DO 7 I = 1, N") may be assigned values by the programmer by any of a number of methods; it may be a value brought in by a READ statement, it may be calculated by an arithmetic statement, it may take its value from a transfer exit from some other DO whose index symbol is the pertinent variable symbol, or it may be under the control of a DO in the nest. A search is made to determine the smallest level number in the nest within which the variable parameter is not assigned a new value. This level number determines the place at which computing instructions can best be compiled.

Case of Subscripts not Controlled by DO's
The second of the two classes of subscript symbols is that of subscript symbols which are not under control of DO's. Such a subscript can be given a value in a number of ways similar to the defining of DO parameters: a value may be read in by a READ statement, it may be calculated by an arithmetic statement, or it may be defined by an exit made from a DO with that index symbol. For subscript combinations with no subscript under the control of a DO, the basic technique used to introduce the proper values into a symbolic index register is that of determining where such definitions occur and, at the point of definition, using a subroutine to compute the new index quantity. These subroutines are generated at executive time, if it is determined that they are necessary. If the index quantity exists in a DO nest at the time of a transfer exit, then no subroutine calculations are necessary, since the exit values are precisely the desired values.

Mixed Cases
In cases in which some subscripts in a subscript combination are controlled by DO's and some are not, instructions are compiled to compute the initial value of the subscript combination at the beginning of the outside loop. If the non-DO-controlled subscript symbol is then defined inside the loop (that is, after the computing of the load quantity), the procedure of using a subroutine at the point of subscript definition will bring the new value into the index register. An exception to the use of a subroutine is made when the subscript is defined by a transfer exit from a DO, and that DO is within the range of a DO controlling some other subscript in the subscript combination. In such instances, if the index quantity is used in the inner DO, no calculation is necessary; the exit values are used. If the index quantity is not used, instructions are compiled to simulate this use, so that in either case the transfer exit leaves the correct function value in the index register.

Modification and Optimization
Initializing and computing instructions corresponding to a given DO are placed in the object program at a point corresponding to the lowest possible (outermost) DO level rather than at the point corresponding to the given DO.
This technique results in the desired removal of certain instructions from the most frequent innermost loops of the object program. However, it necessitates the consideration of some complex questions when the flow within a nest of DO's is complicated by the occurrence of transfer escapes from DO-type repetition and by other IF and GO TO flow paths. Consider a simple example, a nest having a DO on I containing a DO on J, where the subscript combination (I, J) appears only in the inner loop. If the object program corresponded precisely to the FORTRAN language program, there would be instructions at the entrance point of the inner loop to set the value of J in (I, J) to the initial value specified by the inner DO. Usually, however, it is more efficient to reset the value of J in (I, J) at the end of the inner loop upon leaving it, and the object program is so constructed. In this case it becomes necessary to compile instructions which follow every transfer exit from the inner loop into the outer loop (if there are any such exits) which will also reset the value of J in (I, J) to the initial value it should have at the entrance of the inner loop. These instructions, plus the initialization of both I and J in (I, J) at the entrance of the outer loop (on I), insure that J always has its proper initial value at the entrance of the inner loop even though no instructions appear at that point which change J. The situation becomes considerably more complicated if the subscript combination (I, J) also appears in the outer loop. In this case two independent index quantities are created, one corresponding to (I, J) in the inner loop, the other to (I, J) in the outer loop.

Optimizing features play an important role in the modification of the procedures and techniques outlined above. It may be the case that the DO structure and subscript combinations of a nest describe the scanning of a two- or three-dimensional array which is the equivalent of a sequential scan of a vector; i.e., a reference to each of a set of memory locations in descending order. Such an equivalent procedure is discovered, and where the flow of a nest permits, is used in place of more complicated indexing. This substitution is not of an empirical nature, but is instead the logical result of a generalized analysis. Other optimizing techniques concern, for example, the computing instructions compiled to evaluate the functions (governing index values and decrements) mentioned previously. When some of the parameters are constant, the functions are reduced at executive time, and a frequent result is the compilation of only one instruction, a reference to a variable, to obtain a proper initializing value. In choosing the symbolic index register in which to test the value of a subscript for exit purposes, those index registers are avoided which would require the compilation of instructions to modify the test instruction decrement.

Section 4 (Haibt) and Section 5 (Best)
The result of section 3 is a complete program, but one in which tagged instructions are tagged only symbolically, and which assumes that there will be a real index register available for every symbolic one. It is the task of sections 4 and 5 to convert this program to one involving only the three real index registers of the 704.
Generally, this conversion requires the setting up, for each symbolic index register, of a storage cell which will act as an index cell, and the addition of instructions to load the real index registers from, and store them into, the index cells. This is done in section 5 (tag analysis) on the basis of information about the pattern and frequency of flow provided by section 4 (flow analysis) in such a way that the time spent in loading and storing index registers will be nearly minimum. The fundamental unit of program is the basic block; a basic block is a stretch of program which has a single entry point and a single exit point. The purpose of section 4 is to prepare for section 5 a table of predecessors (PRED table) which enumerates the basic blocks and lists for every basic block each of the basic blocks which can be its immediate predecessor in flow, together with the absolute frequency of each such basic block link. This table is obtained by an actual "execution" of the program in Monte-Carlo fashion, in which the outcome of conditional transfers arising out of IF-type statements and computed GO TO's is determined by a random number generator suitably weighted according to whatever FREQUENCY statements have been provided.

Section 5 is divided into four parts, of which part 1 is the most important. It makes all the major decisions concerning the handling of index registers, but records them simply as bits in the PRED table and a table of all tagged instructions, the STAG table. Part 2 merely reorganizes those tables; part 3 adds a slight further treatment to basic blocks which are terminated by an assigned GO TO; and finally part 4 compiles the finished program under the direction of the bits in the PRED and STAG tables. Since part 1 does the real work involved in handling the index registers, attention will be confined to this part in the sequel.

The basic flow of part 1 of section 5 is as follows. Consider a moment partway through the execution of part 1, when a new region has just been treated. The less frequent basic blocks have not yet been encountered; each basic block that has been treated is a member of some region. The existing regions are of two types: transparent, in which there is at least one real index register which has not been used in any of the member basic blocks, and opaque. Bits have been entered in the STAG table, calling where necessary for an LXD (load index register from index cell) instruction preceding, or an SXD (store index register in index cell) instruction following, the tagged instructions of the basic blocks that have been treated. For each basic block that has been treated is recorded the required contents of each of the three real index registers for entrance into the block, and the contents upon exit. In the PRED table, entries that have been considered may contain bits calling for interblock LXD's and SXD's, when the exit and entrance conditions across the link do not match. Now the PRED table is scanned for the highest-frequency link not yet considered. The new region is formed by working both forward over successors and backward over predecessors from this point, always choosing the most frequent remaining path of control.
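Before returning to the treatment of regions, the Monte-Carlo "execution" that builds the PRED table can be sketched in modern Fortran. This is a hypothetical reconstruction for exposition only; section 4 itself was 704 code, and the four-block flow graph and the 70/30 weighting below are invented stand-ins for whatever the FREQUENCY statements specify.

      program pred_sketch
        implicit none
        integer, parameter :: nblocks = 4, ntrials = 10000
        integer :: pred(nblocks, nblocks)  ! pred(b,a): times block a immediately preceded block b
        integer :: blk, nxt, t
        real :: r
        pred = 0
        do t = 1, ntrials
           blk = 1                         ! every trial enters at block 1
           do
              select case (blk)
              case (1)                     ! an IF-type transfer: 70% to block 2, 30% to block 3
                 call random_number(r)
                 nxt = merge(2, 3, r < 0.7)
              case (2, 3)                  ! both fall through to the exit block
                 nxt = 4
              case default                 ! block 4 is the exit; the trial ends
                 exit
              end select
              pred(nxt, blk) = pred(nxt, blk) + 1
              blk = nxt
           end do
        end do
        print *, 'link counts into block 4 from blocks 2 and 3:', pred(4,2), pred(4,3)
      end program pred_sketch

Part 1 of section 5 then scans such link counts for the highest-frequency link, as described above; we now return to the marking out of regions.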
The marking out of a new region is terminated by encountering 1) a basic block which belongs to an opaque region, 2) a basic block which has no remaining links into it (when working backward) or from it (when working forward), or which belongs to a transparent region with no such links remaining, or 3) a basic block which closes a loop. Thus the new region generally includes both basic blocks not hitherto encountered, and entire regions of basic blocks which have already been treated. The treatment of hitherto untreated basic blocks in the new region is carried out by simulating the action of the program. Three cells are set aside to represent the object machine index registers. As each new tagged instruction is encountered these cells are examined to see if one of them contains the required tag; if not, the program is searched ahead to determine which of the three index registers is the least undesirable to replace, and a bit is entered in the STAG table calling for an LXD instruction to that index register. When the simulation of a new basic block is finished, the entrance and exit conditions are recorded, and the next item in the new region is considered. If it is a new basic block, the simulation continues; if it is a region, the index register assignment throughout the region is examined to see if a permutation of the index registers would not make it match better, and any remaining mismatch is taken care of by entries in PRED calling for interblock LXD's.

A final concept is that of index register activity. When a symbolic index register is initialized, or when its contents are altered by an indexing instruction, the value of the corresponding index cell falls out of date, and a subsequent LXD will be incorrect without an intervening SXD. This problem is handled by activity bits, which indicate when the index cell is out of date; when an LXD is required the activity bit is interrogated, and if it is on an SXD is called for immediately after the initializing or indexing instruction responsible for the activity, or in the interblock link from the region containing that instruction, depending upon whether the basic block containing that instruction was a new basic block or one in a region already treated.

When the new region has been treated, all of the old regions which belonged to it simply lose their identity; their basic blocks and the hitherto untreated basic blocks become the basic blocks of the new region. Thus at the end of part 1 there is but one single region, and it is the entire program. The high-frequency parts of the program were treated early; the entrance and exit conditions and indeed the whole handling of the index registers reflect primarily the efficiency needs of these high-frequency paths. The loading and unloading of the index registers is therefore as much as possible placed in the low-frequency paths, and the object program time consumed in these operations is thus brought near to a minimum.

Conclusion

The preceding sections of this paper have described the language and the translator program of the FORTRAN system. Following are some comments on the system and its application.

Scope of Applicability

The language of the system is intended to be capable of expressing virtually any numerical procedure.
Some problems programmed in FORTRAN language to date include: reactor shielding, matrix inversion, numerical integration, tray-to-tray distillation, microwave propagation, radome design, numerical weather prediction, plotting and root location of a quartic, a procedure for playing the game "nim," helicopter design, and a number of others. The sizes of these first programs range from about 10 FORTRAN statements to well over 1000, or in terms of machine instructions, from about 100 to 7500.

Conciseness and Convenience

The statement of a program in FORTRAN language rather than in machine language or assembly program language is intended to result in a considerable reduction in the amount of thinking, bookkeeping, writing, and time required. In the problems mentioned in the preceding paragraph, the ratio of the number of output machine instructions to the number of input FORTRAN statements for each problem varied between about 4 and 20. (The number of machine instructions does not include any library subroutines and thus represents approximately the number which would need to be hand coded, since FORTRAN does not normally produce programs appreciably longer than corresponding hand-coded ones.) The ratio tends to be high, of course, for problems with many long arithmetic expressions or with complex loop structure and subscript manipulation. The ratio is a rough measure of the conciseness of the language. The convenience of using FORTRAN language is necessarily more difficult to measure than its conciseness. However, the ratio of coding times, assembly program language vs FORTRAN language, gives some indication of the reduction in thinking and bookkeeping as well as in writing. This time reduction ratio appears to range also from about 4 to 20, although it is difficult to estimate accurately. The largest ratios are usually obtained by those problems with complex loops and subscript manipulation as a result of the planning of indexing and bookkeeping procedures by the translator rather than by the programmer.

Education

It is considerably easier to teach people untrained in the use of computers how to write programs in FORTRAN language than it is to teach them machine language. A FORTRAN manual specifically designed as a teaching tool will be available soon. Despite the unavailability of this manual, a number of successful courses for nonprogrammers, ranging from one to three days, have been completed using only the present reference manual.

Debugging

The structure of FORTRAN statements is such that the translator can detect and indicate many errors which may occur in a FORTRAN-language program. Furthermore, the nature of the language makes it possible to write programs with far fewer errors than are to be expected in machine-language programs. Of course, it is only necessary to obtain a correct FORTRAN-language program for a problem; therefore all debugging efforts are directed toward this end. Any errors in the translator program or any machine malfunction during the process of translation will be detected and corrected by procedures distinct from the process of debugging a particular FORTRAN program. In order to produce a program with built-in debugging facilities, it is a simple matter for the programmer to write various PRINT statements, which cause "snapshots" of pertinent information to be taken at appropriate points in his procedure, and insert these in the deck of cards comprising his original FORTRAN program.
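For instance (an invented fragment, not one taken from the paper), a single PRINT statement placed inside a loop serves as such a snapshot, recording the loop index and a partial sum for comparison with hand-calculated values:

C     ILLUSTRATIVE SNAPSHOT FRAGMENT, NOT FROM THE PAPER. THE PRINT
C     RECORDS I AND THE RUNNING SUM ON EVERY PASS THROUGH THE LOOP.
      SUM = 0.0
      DO 10 I = 1, N
      SUM = SUM + X(I)
   10 PRINT 900, I, SUM
  900 FORMAT (I5, F12.4)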
After compiling this program, running the resulting machine program, and comparing the resulting snapshots with hand-calculated or known values, the programmer can localize the specific area in his FORTRAN program which is causing the difficulty. After making the appropriate corrections in the FORTRAN program he may remove the snapshot cards and recompile the final program, or leave them in and recompile if the program is not yet fully checked. Experience in debugging FORTRAN programs to date has been somewhat clouded by the simultaneous process of debugging the translator program. However, it has become clear that most errors in FORTRAN programs are detected in the process of translation. So far, those programs having errors undetected by the translator have been corrected with ease by examining the FORTRAN program and the data output of the machine program.

Method of Translation

In general the translation of a FORTRAN program to a machine-language program is characterized by the fact that each piece of the output program has been constructed, instruction by instruction, so as not only to produce an efficient piece locally but also to fit efficiently into its context as a result of many considerations of the structure of its neighboring pieces and of the entire program. With the exception of subroutines (corresponding to various functions and input-output statements appearing in the FORTRAN program), the output program does not contain long precoded instruction sequences with parameters inserted during translation. Such instruction sequences must be designed to do a variety of related tasks and are often not efficient in particular cases to which they are applied. FORTRAN-written programs seldom contain sequences of even three instructions whose operation parts alone could be considered a precoded "skeleton."

There are a number of interesting observations concerning FORTRAN-written programs which may throw some light on the nature of the translation process. Many object programs, for example, contain a large number of instructions which are not attributable to any particular statement in the original FORTRAN program. Even transfers of control will appear which do not correspond to any control statement (e.g., DO, IF, GO TO) in the original program. The instructions arising from an arithmetic expression are optimally arranged, often in a surprisingly different sequence than the expression would lead one to expect. Depending on its context, the same DO statement may give rise to no instructions or to several complicated groups of instructions located at different points in the program. While it is felt that the ability of the system to translate algebraic expressions provides an important and necessary convenience, its ability to treat subscripted variables, DO statements, and the various input-output and FORMAT statements often provides even more significant conveniences. In any case, the major part of the translator program is devoted to handling these last mentioned facilities rather than to translating arithmetic expressions. (The near-optimal treatment of arithmetic expressions is simply not as complex a task as a similar treatment of "housekeeping" operations.) A list of the approximate number of instructions in each of the six sections of the translator will give a crude picture of the effort expended in each area. (Recall that Section 1 completely treats arithmetic statements in addition to performing a number of other tasks.)
The generality and complexity of some of the techniques employed to achieve efficient output programs may often be superfluous in many common applications. However, the use of such techniques should enable the FORTRAN system to produce efficient programs for important problems which involve complex and unusual procedures. In any case the intellectual satisfaction of having formulated and solved some difficult problems of translation and the knowledge and experience acquired in the process are themselves almost a sufficient reward for the long effort expended on the FORTRAN project.

URL https://www.softwarepreservation.org/projects/FORTRAN/paper/BackusEtAl-FortranAutomaticCodingSystem-1957.pdf

If the URL does not work, I have a public space with the original content as a PDF or set of images: https://1drv.ms/f/c/ea9004809c2729bb/EisCos3pDwdFtiDupCEt7hgBDDfkri_mSFruQi6cKvvZHA?e=NGLC8d
14. FORTRAN, for "formula translating system": I think this is a wonderful read. I must admit, while my agenda in Black Games Elite is for a set of Black people to develop games, with my involvement as one of them, I do think that, as a maker, a side project of making a computer is not worthless. It would be ideal to make a computer with its own machine code and build upwards, if for no other reason than the acute experience of such a thing, which this history partially proves.

THE HISTORY OF FORTRAN I, II, AND III
John Backus
IBM Research Laboratory
San Jose, California

1. Early background and environment.

1.1 Attitudes about automatic programming in the 1950's.

Before 1954 almost all programming was done in machine language or assembly language. Programmers rightly regarded their work as a complex, creative art that required human inventiveness to produce an efficient program. Much of their effort was devoted to overcoming the difficulties created by the computers of that era: the lack of index registers, the lack of built-in floating point operations, restricted instruction sets (which might have AND but not OR, for example), and primitive input-output arrangements. Given the nature of computers, the services which "automatic programming" performed for the programmer were concerned with overcoming the machine's shortcomings. Thus the primary concern of some "automatic programming" systems was to allow the use of symbolic addresses and decimal numbers (e.g., the MIDAC Input Translation Program [Brown and Carr 1954]). But most of the larger "automatic programming" systems (with the exception of Laning and Zierler's algebraic system [Laning and Zierler 1954] and the A-2 compiler [Remington Rand 1953; Moser 1954]) simply provided a synthetic "computer" with an order code different from that of the real machine. This synthetic computer usually had floating point instructions and index registers and had improved input-output commands; it was therefore much easier to program than its real counterpart.

The A-2 compiler also came to be a synthetic computer sometime after early 1954. But in early 1954 its input had a much cruder form; instead of "pseudo-instructions" its input was then a complex sequence of "compiling instructions" that could take a variety of forms ranging from machine code itself, to lengthy groups of words constituting rather clumsy calling sequences for the desired floating point subroutine, to "abbreviated form" instructions that were converted by a "Translator" into ordinary "compiling instructions" [Moser 1954]. After May 1954 the A-2 compiler acquired a "pseudocode" which was similar to the order codes for many floating point interpretive systems that were already in operation in 1953: e.g., the Los Alamos systems DUAL and SHACO [Bouricius 1953; Schlesinger 1953], the MIT "Summer Session Computer" [Adams and Laning 1954], a system for the ILLIAC designed by D. J. Wheeler [Muller 1954], and the SPEEDCODING system for the IBM 701 [Backus 1954]. The Laning and Zierler system was quite a different story: it was the world's first operating algebraic compiler, a rather elegant but simple one. Knuth and Pardo [1977] assign this honor to Alick Glennie's AUTOCODE, but I, for one, am unable to recognize the sample AUTOCODE program they give as "algebraic", especially when it is compared to the corresponding Laning and Zierler program.
All of the early "automatic programming" systems were costly to use, since they slowed the machine down by a factor of five or ten. The most common reason for the slowdown was that these systems were spending most of their time in floating point subroutines. Simulated indexing and other "housekeeping" operations could be done with simple inefficient techniques, since, slow as they were, they took far less time than the floating point work. Experience with slow "automatic programming" systems, plus their own experience with the problems of organizing loops and address modification, had convinced programmers that efficient programming was something that could not be automated. Another reason that "automatic programming" was not taken seriously by the computing community came from the energetic public relations efforts of some visionaries to spread the word that their "automatic programming" systems had almost human abilities to understand the language and needs of the user; whereas closer inspection of these same systems would often reveal a complex, exception-ridden performer of clerical tasks which was both difficult to use and inefficient. Whatever the reasons, it is difficult to convey to a reader in the late seventies the strength of the skepticism about "automatic programming" in general and about its ability to produce efficient programs in particular, as it existed in 1954. (In the above discussion of attitudes about "automatic programming" in 1954 I have mentioned only those actual systems of which my colleagues and I were aware at the time. For a comprehensive treatment of early programming systems and languages I recommend the articles by Knuth and Pardo [1977] and Sammet [1969].)

1.2 The economics of programming.

Another factor which influenced the development of FORTRAN was the economics of programming in 1954. The cost of programmers associated with a computer center was usually at least as great as the cost of the computer itself. (This fact follows from the average salary-plus-overhead and number of programmers at each center and from the computer rental figures.) In addition, from one quarter to one half of the computer's time was spent in debugging. Thus programming and debugging accounted for as much as three quarters of the cost of operating a computer; and obviously, as computers got cheaper, this situation would get worse. This economic factor was one of the prime motivations which led me to propose the FORTRAN project in a letter to my boss, Cuthbert Hurd, in late 1953 (the exact date is not known but other facts suggest December 1953 as a likely date). I believe that the economic need for a system like FORTRAN was one reason why IBM and my successive bosses, Hurd, Charles DeCarlo, and John McPherson, provided for our constantly expanding needs over the next five years without ever asking us to project or justify those needs in a formal budget.

1.3 Programming systems in 1954.

It is difficult for a programmer of today to comprehend what "automatic programming" meant to programmers in 1954. To many it then meant simply providing mnemonic operation codes and symbolic addresses, to others it meant the simple process of obtaining subroutines from a library and inserting the addresses of operands into each subroutine. Most "automatic programming" systems were either assembly programs, or subroutine-fixing programs, or, most popularly, interpretive systems to provide floating point and indexing operations.
My friends and I were aware of a number of assembly programs and interpretive systems, some of which have been mentioned above; besides these there were primarily two other systems of significance: the A-2 compiler [Remington Rand 1953; Moser 1954] and the Laning and Zierler [1954] algebraic compiler at MIT. As noted above, the A-2 compiler was at that time largely a subroutine-fixer (its other principal task was to provide for "overlays"); but from the standpoint of its input "programs" it provided fewer conveniences than most of the then current interpretive systems mentioned earlier; it later adopted a "pseudocode" as input which was similar to the input codes of these interpretive systems.

The Laning and Zierler system accepted as input an elegant but rather simple algebraic language. It permitted single-letter variables (identifiers) which could have a single constant or variable subscript. The repertoire of functions one could use was denoted by "F" with an integer superscript to indicate the "catalog number" of the desired function. Algebraic expressions were compiled into closed subroutines and placed on a magnetic drum for subsequent use. The system was originally designed for the Whirlwind computer when it had 1,024 storage cells, with the result that it caused a slowdown in execution speed by a factor of about ten [Adams and Laning 1954].

The effect of the Laning and Zierler system on the development of FORTRAN is a question which has been muddled by many misstatements on my part. For many years I believed that we had gotten the idea for using algebraic notation in FORTRAN from seeing a demonstration of the Laning and Zierler system at MIT. In preparing a paper [Backus 1976] for the International Research Conference on the History of Computing at Los Alamos (June 10-15, 1976), I reviewed the matter with Irving Ziller and obtained a copy of a 1954 letter [Backus 1954a] (which Dr. Laning kindly sent to me). As a result the facts of the matter have become clear. The letter in question is one I sent to Dr. Laning asking for a demonstration of his system. It makes clear that we had learned of his work at the Office of Naval Research Symposium on Automatic Programming for Digital Computers, May 13-14, 1954, and that the demonstration took place on June 2, 1954. The letter also makes clear that the FORTRAN project was well under way when the letter was sent (May 21, 1954) and included Harlan Herrick, Robert A. Nelson, and Irving Ziller as well as myself. Furthermore, an article in the proceedings of that same ONR Symposium by Herrick and myself [Backus and Herrick 1954] shows clearly that we were already considering input expressions like "Σ aij·bjk" and "X÷Y". We went on to raise the question "...can a machine translate a sufficiently rich mathematical language into a sufficiently economical program at a sufficiently low cost to make the whole affair feasible?" These and other remarks in our paper presented at the Symposium in May 1954 make it clear that we were already considering algebraic input considerably more sophisticated than that of Laning and Zierler's system when we first heard of their pioneering work. Thus, although Laning and Zierler had already produced the world's first algebraic compiler, our basic ideas for FORTRAN had been developed independently; thus it is difficult to know what, if any, new ideas we got from seeing the demonstration of their system.
Quasi-footnote: In response to suggestions of the Program Committee let me try to deal explicitly with the question of what work might have influenced our early ideas for FORTRAN, although it is mostly a matter of listing work of which we were then unaware. I have already discussed the work of Laning and Zierler and the A-2 compiler. The work of Heinz Rutishauser [1952] is discussed later on. Like most of the world (except perhaps Rutishauser and Corrado Böhm--who was the first to describe a compiler in its own language [Böhm 1954]) we were entirely unaware of the work of Konrad Zuse [1959; 1972]. Zuse's "Plankalkül", which he completed in 1945, was, in some ways, a more elegant and advanced programming language than those that appeared ten and fifteen years later. We were also unaware of the work of Mauchly et al. ("Short Code", 1950), Burks ("Intermediate PL", 1950), Böhm (1951), and Glennie ("AUTOCODE", 1952) as discussed in Knuth and Pardo [1977]. We were aware of but not influenced by the automatic programming efforts which simulated a synthetic computer (e.g., the MIT "Summer Session Computer", SHACO, DUAL, SPEEDCODING, and the ILLIAC system), since their languages and systems were so different from those of FORTRAN. Nor were we influenced by algebraic systems which were designed after our "Preliminary Report" [1954] but which began operation before FORTRAN (e.g., BACAIC [Grems and Porter 1956], IT [Perlis, Smith and Van Zoeren 1957], MATH-MATIC [Ash et al. 1957]). Although PACT I [Baker 1956] was not an algebraic compiler, it deserves mention as a significant development designed after the FORTRAN language but in operation before FORTRAN, which also did not influence our work. (End of quasi-footnote.)

Our ONR Symposium article [Backus and Herrick 1954] also makes clear that the FORTRAN group was already aware that it faced a new kind of problem in automatic programming. The viability of most compilers and interpreters prior to FORTRAN had rested on the fact that most source language operations were not machine operations. Thus even large inefficiencies in performing both looping/testing operations and computing addresses were masked by most operating time being spent in floating point subroutines. But the advent of the 704 with built-in floating point and indexing radically altered the situation. The 704 presented a double challenge to those who wanted to simplify programming; first, it removed the raison d'être of earlier systems by providing in hardware the operations they existed to provide; second, it increased the problem of generating efficient programs by an order of magnitude by speeding up floating point operations by a factor of ten and thereby leaving inefficiencies nowhere to hide.

In view of the widespread skepticism about the possibility of producing efficient programs with an automatic programming system and the fact that inefficiencies could no longer be hidden, we were convinced that the kind of system we had in mind would be widely used only if we could demonstrate that it would produce programs almost as efficient as hand coded ones and do so on virtually every job. It was our belief that if FORTRAN, during its first months, were to translate any reasonable "scientific" source program into an object program only half as fast as its hand coded counterpart, then acceptance of our system would be in serious danger.
This belief caused us to regard the design of the translator as the real challenge, not the simple task of designing the language. Our belief in the simplicity of language design was partly confirmed by the relative ease with which similar languages had been independently developed by Rutishauser [1952], Laning and Zierler [1954], and ourselves; whereas we were alone in seeking to produce really efficient object programs.

To this day I believe that our emphasis on object program efficiency rather than on language design was basically correct. I believe that had we failed to produce efficient programs, the widespread use of languages like FORTRAN would have been seriously delayed. In fact, I believe that we are in a similar, but unrecognized, situation today: in spite of all the fuss that has been made over myriad language details, current conventional languages are still very weak programming aids, and far more powerful languages would be in use today if anyone had found a way to make them run with adequate efficiency. In other words, the next revolution in programming will take place only when both of the following requirements have been met: (a) a new kind of programming language, far more powerful than those of today, has been developed, and (b) a technique has been found for executing its programs at not much greater cost than that of today's programs. Because of our 1954 view that success in producing efficient programs was more important than the design of the FORTRAN language, I consider the history of the compiler construction and the work of its inventors an integral part of the history of the FORTRAN language; therefore a later section deals with that subject.

2. The early stages of the FORTRAN project.

After Cuthbert Hurd approved my proposal to develop a practical automatic programming system for the 704 in December 1953 or January 1954, Irving Ziller was assigned to the project. We started work in one of the many small offices the project was to occupy in the vicinity of IBM headquarters at 590 Madison Avenue in New York; the first of these was in the Jay Thorpe Building on Fifth Avenue. By May 1954 we had been joined by Harlan Herrick and then by a new employee who had been hired to do technical typing, Robert A. Nelson (with Ziller, he soon began designing one of the most sophisticated sections of the compiler; he is now an IBM Fellow). By about May we had moved to the 19th floor of the annex of 590 Madison Avenue, next to the elevator machinery; the ground floor of this building housed the 701 installation on which customers tested their programs before the arrival of their own machines. It was here that most of the FORTRAN language was designed, mostly by Herrick, Ziller and myself, except that most of the input-output language and facilities were designed by Roy Nutt, an employee of United Aircraft Corp. who was soon to become a member of the FORTRAN project.

After we had finished designing most of the language we heard about Rutishauser's proposals for a similar language [Rutishauser 1952]. It was characteristic of the unscholarly attitude of most programmers then, and of ourselves in particular, that we did not bother to carefully review the sketchy translation of his proposals that we finally obtained, since from their symbolic content they did not appear to add anything new to our proposed language. Rutishauser's language had a for statement and one-dimensional arrays, but no IF, GOTO, nor I/O statements.
Subscript variables could not be used as ordinary variables and operator precedence was ignored. His 1952 article described two compilers for this language (for more details see [Knuth and Pardo 1977]).

As far as we were aware, we simply made up the language as we went along. We did not regard language design as a difficult problem, merely a simple prelude to the real problem: designing a compiler which could produce efficient programs. Of course one of our goals was to design a language which would make it possible for engineers and scientists to write programs themselves for the 704. We also wanted to eliminate a lot of the bookkeeping and detailed, repetitive planning which hand coding involved. Very early in our work we had in mind the notions of assignment statements, subscripted variables, and the DO statement (which I believe was proposed by Herrick). We felt that these provided a good basis for achieving our goals for the language, and whatever else was needed emerged as we tried to build a way of programming on these basic ideas.

We certainly had no idea that languages almost identical to the one we were working on would be used for more than one IBM computer, not to mention those of other manufacturers. (After all, there were very few computers around then.) But we did expect our system to have a big impact, in the sense that it would make programming for the 704 very much faster, cheaper, and more reliable. We also expected that, if we were successful in meeting our goals, other groups and manufacturers would follow our example in reducing the cost of programming by providing similar systems with different but similar languages [Preliminary Report 1954].

By the fall of 1954 we had become the "Programming Research Group" and I had become its "manager". By November of that year we had produced a paper: "Preliminary Report, Specifications for the IBM Mathematical FORmula TRANslating System, FORTRAN" [Preliminary Report 1954] dated November 10. In its introduction we noted that "systems which have sought to reduce the job of coding and debugging problems have offered the choice of easy coding and slow execution or laborious coding and fast execution." On the basis more of faith than of knowledge, we suggested that programs "will be executed in about the same time that would be required had the problem been laboriously hand coded." In what turned out to be a true statement, we said that "FORTRAN may apply complex, lengthy techniques in coding a problem which the human coder would have neither the time nor inclination to derive or apply."

The language described in the "Preliminary Report" had variables of one or two characters in length, function names of three or more characters, recursively defined "expressions", subscripted variables with up to three subscripts, "arithmetic formulas" (which turn out to be assignment statements), and "DO-formulas". These latter formulas could specify both the first and last statements to be controlled, thus permitting a DO to control a distant sequence of statements, as well as specifying a third statement to which control would pass following the end of the iteration. If only one statement was specified, the "range" of the DO was the sequence of statements following the DO down to the specified statement. Expressions in "arithmetic formulas" could be "mixed": involve both "fixed point" (integer) and "floating point" quantities.
The arithmetic used (all integer or all floating point) to evaluate a mixed expression was determined by the type of the variable on the left of the "=" sign. "IF-formulas" employed an equality or inequality sign ("=" or ">" or ">=") between two (restricted) expressions, followed by two statement numbers, one for the "true" case, the other for the "false" case.

A "Relabel formula" was designed to make it easy to rotate, say, the indices of the rows of a matrix so that the same computation would apply, after relabelling, even though a new row had been read in and the next computation was now to take place on a different, rotated set of rows. Thus, for example, if b is a 4 by 4 matrix, after RELABEL b(3,1), a reference to b(1,j) has the same meaning as b(3,j) before relabelling; b(2,j) after = b(4,j) before; b(3,j) after = b(1,j) before; and b(4,j) after = b(2,j) before relabelling.

The input-output statements provided included the basic notion of specifying the sequence in which data was to be read in or out, but did not include any "Format" statements. The Report also lists four kinds of "specification sentences": (1) "dimension sentences" for giving the dimensions of arrays, (2) "equivalence sentences" for assigning the same storage locations to variables, (3) "frequency sentences" for indicating estimated relative frequency of branch paths or loops to help the compiler optimize the object program, and (4) "relative constant sentences" to indicate subscript variables which are expected to change their values very infrequently.

Toward the end of the Report (pp. 26-27) there is a section "Future additions to the FORTRAN system". Its first item is: "a variety of new input-output formulas which would enable the programmer to specify various formats for cards, printing, input tapes and output tapes". It is believed that this item is a result of our early consultations with Roy Nutt. This section goes on to list other proposed facilities to be added: complex and double precision arithmetic, matrix arithmetic, sorting, solving simultaneous equations, differential equations, and linear programming problems. It also describes function definition capabilities similar to those which later appeared in FORTRAN II; facilities for numerical integration; a summation operator; and table lookup facilities.

The final section of the Report (pp. 28-29) discusses programming techniques to use to help the system produce efficient programs. It discusses how to use parentheses to help the system identify identical subexpressions within an expression and thereby eliminate their duplicate calculation. These parentheses had to be supplied only when a recurring subexpression occurred as part of a term (e.g., if a*b occurred in several places, it would be better to write the term a*b*c as (a*b)*c to avoid duplicate calculation); otherwise the system would identify duplicates without any assistance. It also observes that the system would not produce optimal code for loops constructed without DO statements.

This final section of the Report also notes that "no special provisions have been included in the FORTRAN system for locating errors in formulas". It suggests checking a program "by independently recreating the specifications for a problem from its FORTRAN formulation [!]". It says nothing about the system catching syntactic errors, but notes that an error-finding program can be written after some experience with errors has been accumulated.
Unfortunately we were hopelessly optimistic in 1954 about the problems of debugging FORTRAN programs (thus we find on page 2 of the Report: "Since FORTRAN should virtually eliminate coding and debugging... [!]") and hence syntactic error checking facilities in the first distribution of FORTRAN I were weak. Better facilities were added not long after distribution and fairly good syntactic checking was provided in FORTRAN II.

The FORTRAN language described in the Programmer's Reference Manual dated October 15, 1956 [IBM 1956] differed in a few respects from that of the Preliminary Report, but, considering our ignorance in 1954 of the problems we would later encounter in producing the compiler, there were remarkably few deletions (the Relabel and Relative Constant statements), a few retreats, some fortunate, some not (simplification of DO statements, dropping inequalities from IF statements--for lack of a ">" symbol, and prohibiting most "mixed" expressions and subscripted subscripts), and the rectification of a few omissions (addition of FORMAT, CONTINUE, computed and assigned GO TO statements, increasing the length of variables to up to six characters, and general improvement of input-output statements).

Since our entire attitude about language design had always been a very casual one, the changes which we felt to be desirable during the course of writing the compiler were made equally casually. We never felt that any of them involved a real sacrifice in convenience or power (with the possible exception of the Relabel statement, whose purpose was to coordinate input-output with computations on arrays, but this was one facility which we felt would have been really difficult to implement). I believe the simplification of the original DO statement resulted from the realization that (a) it would be hard to describe precisely, (b) it was awkward to compile, and (c) it provided little power beyond that of the final version.

In our naive unawareness of language design problems--of course we knew nothing of many issues which were later thought to be important, e.g., block structure, conditional expressions, type declarations--it seemed to us that once one had the notions of the assignment statement, the subscripted variable, and the DO statement in hand (and these were among our earliest ideas), then the remaining problems of language design were trivial: either their solution was thrust upon one by the need to provide some machine facility such as reading input, or by some programming task which could not be done with existing structures (e.g., skipping to the end of a DO loop without skipping the indexing instructions there: this gave rise to the CONTINUE statement).

One much-criticized design choice in FORTRAN concerns the use of spaces: blanks were ignored, even blanks in the middle of an identifier. Roy Nutt reminds me that that choice was partly in recognition of a problem widely known in SHARE, the 704 users' association. There was a common problem with keypunchers not recognizing or properly counting blanks in handwritten data, and this caused many errors. We also regarded ignoring blanks as a device to enable programmers to arrange their programs in a more readable form without altering their meaning or introducing complex rules for formatting statements.

Another debatable design choice was to rule out "mixed" mode expressions involving both integer and floating point quantities.
Although our Preliminary Report had included such expressions, and rules for evaluating them, we felt that if code for type conversion were to be generated, the user should be aware of that, and the best way to insure that he was aware was to ask him to specify them. I believe we were also doubtful of the usefulness of the rules in our Report for evaluating mixed expressions. In any case, the most common sort of "mixtures" was allowed: integer exponents and function arguments were allowed in a floating point expression.

In late 1954 and early 1955, after completing the Preliminary Report, Harlan Herrick, Irving Ziller and I gave perhaps five or six talks about our plans for FORTRAN to various groups of IBM customers who had ordered a 704 (the 704 had been announced about May 1954). At these talks we covered the material in the Report and discussed our plans for the compiler (which was to be completed within about six months [!]; this was to remain the interval-to-completion until it actually was completed over two years later, in April 1957). In addition to informing customers about our plans, another purpose of these talks was to assemble a list of their objections and further requirements. In this we were disappointed; our listeners were mostly skeptical; I believe they had heard too many glowing descriptions of what turned out to be clumsy systems to take us seriously. In those days one was accustomed to finding lots of peculiar but significant restrictions in a system when it finally arrived that had not been mentioned in its original description. Most of all, our claims that we would produce efficient object programs were disbelieved. Whatever the reasons, we received almost no suggestions or feedback; our listeners had done almost no thinking about the problems we faced and had almost no suggestions or criticisms. Thus we felt that our trips to Washington (D.C.), Albuquerque, Pittsburgh, Los Angeles, and one or two other places were not very helpful.

One trip to give our talk, probably in January 1955, had an excellent payoff. This talk, at United Aircraft Corp., resulted in an agreement between our group and Walter Ramshaw at United Aircraft that Roy Nutt would become a regular part of our effort (although remaining an employee of United Aircraft) to contribute his expertise on input-output and assembly routines. With a few breaks due to his involvement in writing various SHARE programs, he would thenceforth come to New York two or three times a week until early 1957.

It is difficult to assess the influence the early work of the FORTRAN group had on other projects. Certainly the discussion of Laning and Zierler's algebraic compiler at the ONR Symposium in May 1954 would have been more likely to persuade someone to undertake a similar line of effort than would the brief discussion of the merits of using "a fairly natural mathematical language" that appeared there in the paper by Herrick and myself [Backus and Herrick 1954]. But it was our impression that our discussions with various groups after that time, their access to our Preliminary Report, and their awareness of the extent and seriousness of our efforts, that these factors either gave the initial stimulus to some other projects or at least caused them to be more active than they might have been otherwise.
It was our impression, for example, that the "IT" project [Perlis, Smith and Van Zoeren 1957] at Purdue and later at Carnegie-Mellon began shortly after the distribution of our Preliminary Report, as did the "MATH-MATIC" project [Ash et al. 1957] at Sperry Rand. It is not clear what influence, if any, our Los Angeles talk and earlier contacts with members of their group had on the PACT I effort [Baker 1956], which I believe was already in its formative stages when we got to Los Angeles. It is clear, whatever influence the specifications for FORTRAN may have had on other projects in 1954-55-56, that our plans were well advanced and quite firm by the end of 1954 and before we had contact or knowledge of those other projects. Our specifications were not affected by them in any significant way as far as I am aware, even though some were operating before FORTRAN was (since they were primarily interested in providing an input language rather than in optimization, their task was considerably simpler than ours).

3. The construction of the compiler.

The FORTRAN compiler (or "translator" as we called it then) was begun in early 1955, although a lot of work on various schemes which would be used in it had been done in 1954; e.g., Herrick had done a lot of trial programming to test out our language and we had worked out the basic sort of machine programs which we wanted the compiler to generate for various source language phrases; Ziller and I had worked out a basic scheme for translating arithmetic expressions. But the real work on the compiler got under way in our third location on the fifth floor of 15 East 56th Street. By the middle of February three separate efforts were underway. The first two of these concerned sections 1 and 2 of the compiler, and the third concerned the input, output and assembly programs we were going to need (see below). We believed then that these efforts would produce most of the compiler. (The entire project was carried on by a loose cooperation between autonomous, separate groups of one, two, or three people; each group was responsible for a "section" of the compiler; each group gradually developed and agreed upon its own input and output specifications with the groups for neighboring sections; each group invented and programmed the necessary techniques for doing its assigned job.)

Section 1 was to read the entire source program, compile what instructions it could, and file all the rest of the information from the source program in appropriate tables. Thus the compiler was "one pass" in the sense that it "saw" the source program only once. Herrick was responsible for creating most of the tables, Peter Sheridan (who had recently joined us) compiled all the arithmetic expressions, and Roy Nutt compiled and/or filed the I/O statements. Herrick, Sheridan and Nutt got some help later on from R. J. Beeber and H. Stern, but they were the architects of section 1 and wrote most of its code. Sheridan devised and implemented a number of optimizing transformations on expressions [Sheridan 1959] which sometimes radically altered them (of course without changing their meaning). Nutt transformed the I/O "lists of quantities" into nests of DO statements which were then treated by the regular mechanisms of the compiler. The rest of the I/O information he filed for later treatment in section 6, the assembler section. (For further details about how the various sections of the compiler worked see [Backus et al. 1957].)
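The "lists of quantities" mentioned here can be pictured with a fragment in the spirit of the manual's list notation (an invented illustration, not a quotation from the manual): the parenthesized indexing below reads a whole array, the inner index varying fastest, and section 1 treated such a list as if it were a nest of DO statements around the conversion routine.

C     ILLUSTRATIVE INPUT LIST, NOT QUOTED FROM THE MANUAL.
      DIMENSION A(10,10)
      READ 1, ((A(I,J), I = 1, 10), J = 1, 10)
    1 FORMAT (10F7.2)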
Using the information that was filed in section 1, section 2 faced a completely new kind of problem; it was required to analyze the entire structure of the program in order to generate optimal code from DO statements and references to subscripted variables. The simplest way to effect a reference to A(I,J) is to evaluate an expression involving the address of A(1,1), I, and K×J, where K is the length of a column (when A is stored column-wise); apart from sign conventions and constant offsets, the location of A(I,J) differs from that of A(1,1) by (I-1) + K×(J-1). But this calculation, with its multiplication, is much less efficient than the way most hand coded programs effect a reference to A(I,J), namely, by adding an appropriate constant to the address of the preceding reference to the array A whenever I and J are changing linearly. To employ this far more efficient method section 2 had to determine when the surrounding program was changing I and J linearly.

Thus one problem was that of distinguishing between, on the one hand, references to an array element which the translator might treat by incrementing the address used for a previous reference, and those array references, on the other hand, which would require an address calculation starting from scratch with the current values of the subscripts. It was decided that it was not practical to track down and identify linear changes in subscripts resulting from assignment statements. Thus the sole criterion for linear changes, and hence for efficient handling of array references, was to be that the subscripts involved were being controlled by DO statements. Despite this simplifying assumption, the number of cases that section 2 had to analyze in order to produce optimal or near-optimal code was very large. (The number of such cases increased exponentially with the number of subscripts; this was a prime factor in our decision to limit them to three; the fact that the 704 had only three index registers was not a factor.)

It is beyond the scope of this paper to go into the details of the analysis which section 2 carried out. It will suffice to say that it produced code of such efficiency that its output would startle the programmers who studied it. It moved code out of loops where that was possible; it took advantage of the differences between row-wise and column-wise scans; it took note of special cases to optimize even the exits from loops. The degree of optimization performed by section 2 in its treatment of indexing, array references, and loops was not equalled again until optimizing compilers began to appear in the middle and late sixties.

The architecture and all the techniques employed in section 2 were invented by Robert A. Nelson and Irving Ziller. They planned and programmed the entire section. Originally it was their intention to produce the complete code for their area, including the choice of the index registers to be used (the 704 had three index registers). When they started looking at that problem it rapidly became clear that it was not going to be easy to treat it optimally. At that point I proposed that they should produce a program for a 704 with an unlimited number of index registers, and that later sections would analyze the frequency of execution of various parts of the program (by a Monte Carlo simulation of its execution) and then make index register assignments so as to minimize the transfers of items between the store and the index registers.
This proposal gave rise to two new sections of the compiler which we had not anticipated, sections 4 and 5 (section 3 was added still later to convert the output of sections 1 and 2 to the form required for sections 4, 5, and 6). In the fall of 1955 Lois Mitchell Haibt joined our group to plan and program section 4, which was to analyze the flow of a program produced by sections 1 and 2, divide it into "basic blocks" (which contained no branching), do a Monte Carlo (statistical) analysis of the expected frequency of execution of basic blocks--by simulating the behavior of the program and keeping counts of the use of each block--using information from DO statements and FREQUENCY statements, and collect information about index register usage (for more details see [Backus et al. 1957; Cocke and Schwartz 1970, p. 511]). Section 5 would then do the actual transformation of the program from one having an unlimited number of index registers to one having only three. Again, the section was entirely planned and programmed by Haibt.

Section 5 was planned and programmed by Sheldon Best, who was loaned to our group by agreement with his employer, Charles W. Adams, at the Digital Computer Laboratory at MIT; during his stay with us Best was a temporary IBM employee. Starting in the early fall of 1955, he designed what turned out to be, along with section 2, one of the most intricate and complex sections of the compiler, one which had perhaps more influence on the methods used in later compilers than any other part of the FORTRAN compiler. (For a discussion of his techniques see [Cocke and Schwartz 1970, pp. 510-515].) It is impossible to describe his register allocation method here; it suffices to say that it was to become the basis for much subsequent work and produced code which was very difficult to improve. Although I believe that no provably optimal register allocation algorithm is known for general programs with loops, etc., empirically Best's 1955-56 procedure appeared to be optimal. For straight-line code Best's replacement policy was the same as that used in Belady's MIN algorithm, which Belady proved to be optimal [Belady 1965]. Although Best did not publish a formal proof, he had convincing arguments for the optimality of his policy in 1955.

Late in 1955 it was recognized that yet another section, section 3, was needed. This section merged the outputs of the preceding sections into a single uniform 704 program which could refer to any number of index registers. It was planned and programmed by Richard Goldberg, a mathematician who joined us in November 1955. Also, late in 1956, after Best had returned to MIT and during the debugging of the system, section 5 was taken over by Goldberg and David Sayre (see below), who diagrammed it carefully and took charge of its final debugging.

The final section of the compiler, section 6, assembled the final program into a relocatable binary program (it could also produce a symbolic program in SAP, the SHARE Assembly Program for the 704). It produced a storage map of the program and data that was a compact summary of the FORTRAN output. Of course it also obtained the necessary library programs for inclusion in the object program, including those required to interpret FORMAT statements and perform input-output operations. Taking advantage of the special features of the programs it assembled, this assembler was about ten times faster than SAP.
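Best's replacement rule for straight-line code, the one Belady later proved optimal, can be sketched as follows (a hypothetical reconstruction in modern Fortran, for exposition only; the tag stream below is invented): when a needed tag is in no register and none is free, evict the tag whose next use lies farthest ahead.

      program min_sketch
        implicit none
        integer, parameter :: nregs = 3, nrefs = 10
        ! an invented stream of symbolic index-register tags, referenced in order
        integer :: refs(nrefs) = (/ 1, 2, 3, 4, 1, 2, 5, 1, 2, 3 /)
        integer :: regs(nregs), i, j, k, loads, victim, dist, fardist
        regs = 0                            ! 0 means the register is free
        loads = 0
        do i = 1, nrefs
           if (any(regs == refs(i))) cycle  ! tag already loaded: no LXD needed
           loads = loads + 1                ! an LXD is required
           victim = 0
           fardist = -1
           do j = 1, nregs
              if (regs(j) == 0) then        ! prefer a free register
                 victim = j
                 exit
              end if
              dist = nrefs + 1              ! "never used again" counts as farthest
              do k = i + 1, nrefs           ! find the next use of the resident tag
                 if (refs(k) == regs(j)) then
                    dist = k
                    exit
                 end if
              end do
              if (dist > fardist) then      ! remember the farthest-next-use register
                 fardist = dist
                 victim = j
              end if
           end do
           regs(victim) = refs(i)
        end do
        print *, 'index register loads (LXDs) required:', loads
      end program min_sketch

For the invented stream above the rule needs six loads, and no other replacement choice needs fewer; that minimality is the content of Belady's optimality proof.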
This assembler was designed and programmed by Roy Nutt, who also wrote all the I/O programs and the relocating binary loader for loading object programs.

By the summer of 1956 large parts of the system were working. Sections 1, 2, and 3 could produce workable code provided no more than three index registers were needed. A number of test programs were compiled and run at this time. Nutt took part of the system to United Aircraft (sections 1, 2, and 3 and the part of section 6 which produced SAP output). This part of the system was productive there from the summer of 1956 until the complete system was available in early 1957.

From late spring of 1956 to early 1957 the pace of debugging was intense; often we would rent rooms in the Langdon Hotel (which disappeared long ago) on 56th Street, sleep there a little during the day and then stay up all night to get as much use of the computer (in the headquarters annex on 57th Street) as possible. It was an exciting period; when later on we began to get fragments of compiled programs out of the system, we were often astonished at the surprising transformations in the indexing operations and in the arrangement of the computation which the compiler made, changes which made the object program efficient but which we would not have thought to make as programmers ourselves (even though, of course, Nelson or Ziller could figure out how the indexing worked, Sheridan could explain how an expression had been optimized beyond recognition, and Goldberg or Sayre could tell us how section 5 had generated additional indexing operations). Transfers of control appeared which corresponded to no source statement, expressions were radically rearranged, and the same DO statement might produce no instructions in the object program in one context, and in another it would produce many instructions in different places in the program.

By the summer of 1956 what appeared to be the imminent completion of the project started us worrying (for perhaps the first time) about documentation. David Sayre, a crystallographer who had joined us in the spring (he had earlier consulted with Best on the design of section 5 and had later begun serving as second-in-command of what was now the "Programming Research Department") took up the task of writing the Programmer's Reference Manual [IBM 1956]. It appeared in a glossy cover, handsomely printed, with the date October 15, 1956. It stood for some time as a unique example of a manual for a programming language (perhaps it still does): it had wide margins, yet was only 51 pages long. Its description of the FORTRAN language, exclusive of input-output statements, was 21 pages; the I/O description occupied another 11 pages; the rest of it was examples and details about arithmetic, tables, etc. It gave an elegant recursive definition of expressions (as given by Sheridan), and concise, clear descriptions, with examples, of each statement type, of which there were 32, mostly machine dependent items like SENSE LIGHT, IF DIVIDE CHECK, PUNCH, READ DRUM, and so on. (For examples of its style see figs. 1, 2, and 3.)

One feature of FORTRAN I is missing from the Programmer's Reference Manual, not from an oversight of Sayre's, but because it was added to the system after the manual was written and before the system was distributed. This feature was the ability to define a function by a "function statement". These statements had to precede the rest of the program.
They are described in the addenda to the Programmer's Reference Manual [Addenda 1957] which we sent on February 8, 1957 to John Greenstadt, who was in charge of IBM's facility for distributing information to SHARE. They are also described in all subsequent material on FORTRAN I. The next documentation task we set ourselves was to write a paper describing the FORTRAN language and the translator program. The result was a paper entitled "The FORTRAN automatic coding system" [Backus et al. 1957] which we presented at the Western Joint Computer Conference in Los Angeles in February 1957. I have mentioned all of the thirteen authors of that paper in the preceding narrative except one: Robert A. Hughes. He was employed by the Livermore Radiation Laboratory; by arrangement with Sidney Fernbach, he visited us for two or three months in the summer of 1956 to help us document the system. (The authors of that paper were: J. W. Backus, R. J. Beeber, S. Best, R. Goldberg, L. M. Haibt, H. L. Herrick, R. A. Hughes, R. A. Nelson, R. Nutt, D. Sayre, P. B. Sheridan, H. Stern, I. Ziller.) At about the time of the Western Joint Computer Conference we spent some time in Los Angeles still frantically debugging the system. North American Aviation gave us time at night on their 704 to help us in our mad rush to distribute the system. Up to this point there had been relatively little interest from 704 installations (with the exception of Ramshaw's United Aircraft shop, Harry Cantrell's GE installation in Schenectady, and Sidney Fernbach's Livermore operation), but now that the full system was beginning to generate object programs, interest picked up in a number of places. Sometime in early April 1957 we felt the system was sufficiently bug-free to distribute to all 704 installations. Sayre and Grace Mitchell (see below) started to punch out the binary decks of the system, each of about 2,000 cards, with the intention to make 30 or 40 decks for distribution. This process was so error-prone that they had to give up after spending an entire night in producing only one or two decks. (Apparently one of those decks was sent, without any identification or directions, to the Westinghouse Bettis installation, where a puzzled group headed by Herbert S. Bright, suspecting that it might be the long-awaited FORTRAN deck, proceeded, entirely by guesswork, to get it to compile a test program--after a diagnostic printout noting that a comma was missing in a specific statement! This program then printed 28 pages of correct results--with a few FORMAT errors. The date: April 20, 1957. An amusing account of this incident by Bright is in Computers and Automation [Bright 1971].) After failing to produce binary decks, Sayre devised and programmed the simple editor and loader that made it possible to distribute and update the system from magnetic tapes; this arrangement served as the mechanism for creating new system tapes from a master tape and the binary correction cards which our group would generate in large numbers during the long field debugging and maintenance period which followed distribution. With the distribution of the system tapes went a "Preliminary Operator's Manual" [Operator's Manual 1957] dated April 8, 1957. It describes how to use the tape editor and how to maintain the library of functions.
Five pages of such general instructions are followed by 32 pages of error stops; many of these say "source program error, get off machine, correct formula in question and restart problem" and then, for example (stop 3624), "non-zero level reduction due to insufficient or redundant parentheses in arithmetic or IF-type formula". Shortly after the distribution of the system we distributed--one copy per installation--what was fondly known as the "Tome", the complete symbolic listing of the entire compiler plus other system and diagnostic information, an 11" by 15" volume about four or five inches thick.

NOTE: the graphics below are explanatory, so I placed the pertinent text under each image; otherwise continue past the graphics as before. Note that the link at the end of this post has pertinent information.

Subscripts. GENERAL FORM: Let v represent any fixed point variable and c (or c') any unsigned fixed point constant. Then a subscript is an expression of one of the forms: v; c; v+c or v-c; c*v; c*v+c' or c*v-c'. EXAMPLES: I, 3, MU+2, MU-2, 5*J, 5*J+2, 5*J-2. The symbol * denotes multiplication. The variable v must not itself be subscripted.

Subscripted Variables. GENERAL FORM: A fixed or floating point variable followed by parentheses enclosing 1, 2, or 3 subscripts separated by commas. EXAMPLES: A(I), K(3), BETA(5*J-2, K+2, L). For each variable that appears in subscripted form the size of the array (i.e. the maximum values which its subscripts can attain) must be stated in a DIMENSION statement (see Chapter 6) preceding the first appearance of the variable. The minimum value which a subscript may assume in the object program is +1.

Arrangement of Arrays in Storage. A 2-dimensional array A will, in the object program, be stored sequentially in the order A(1,1), A(2,1), ..., A(m,1), A(1,2), A(2,2), ..., A(m,2), ..., A(m,n). Thus it is stored "columnwise", with the first of its subscripts varying most rapidly, and the last varying least rapidly. The same is true of 3-dimensional arrays. 1-dimensional arrays are of course simply stored sequentially. All arrays are stored backwards in storage; i.e. the above sequence is in the order of decreasing absolute location.
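NOTE: as an aside, the storage rule just quoted -- columnwise order, stored backwards at decreasing absolute locations -- works out as in this small Python sketch of mine; the base address and array shape are made up for illustration.

def location(base, i, j, m):
    # Absolute address of A(i,j) for an m-by-n array whose first element
    # A(1,1) sits at address base, with subscripts starting at 1, the
    # first subscript varying most rapidly, and successive elements at
    # DECREASING addresses, as the manual describes.
    ordinal = (j - 1) * m + (i - 1)    # position in the columnwise sequence
    return base - ordinal

# A 3-by-2 array based at address 1000:
# A(1,1)->1000, A(2,1)->999, A(3,1)->998, A(1,2)->997, A(2,2)->996, A(3,2)->995
for j in (1, 2):
    for i in (1, 2, 3):
        print((i, j), location(1000, i, j, 3))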
Any such routine will be compiled into the object program as a closed subroutine. In the section on Writing Subroutines for the Master Tape in Chapter 7 are given the specifications which any such routine must meet.

Expressions. An expression is any sequence of constants, variables (subscripted or not subscripted), and functions, separated by operation symbols, commas, and parentheses so as to form a meaningful mathematical expression. However, one special restriction does exist. A FORTRAN expression may be either a fixed or a floating point expression, but it must not be a mixed expression. This does not mean that a floating point quantity cannot appear in a fixed point expression, or vice versa, but rather that a quantity of one mode can appear in an expression of the other mode only in certain ways. Briefly, a floating point quantity can appear in a fixed point expression only as an argument of a function; a fixed point quantity can appear in a floating point expression only as an argument of a function, or as a subscript, or as an exponent.

Formal Rules for Forming Expressions. By repeated use of the following rules, all permissible expressions may be derived.

1. Any fixed point (floating point) constant, variable, or subscripted variable is an expression of the same mode. Thus 3 and I are fixed point expressions, and ALPHA and A(I,J,K) are floating point expressions.

2. If SOMEF is some function of n variables, and if E, F, ..., H are a set of n expressions of the correct modes for SOMEF, then SOMEF(E, F, ..., H) is an expression of the same mode as SOMEF.

3. If E is an expression, and if its first character is not + or -, then +E and -E are expressions of the same mode as E. Thus -A is an expression, but --A is not.

4. If E is an expression, then (E) is an expression of the same mode as E. Thus (A), ((A)), (((A))), etc. are expressions.

5. If E and F are expressions of the same mode, and if the first character of F is not + or -, then E+F, E-F, E*F, E/F are expressions of the same mode. Thus A-+B and A/+B are not expressions. The characters +, -, *, and / denote addition, subtraction, multiplication, and division.

STOP. GENERAL FORM: "STOP" or "STOP n" where n is an unsigned octal fixed point constant. EXAMPLES: STOP; STOP 77777. This statement causes the machine to HALT in such a way that pressing the START button has no effect. Therefore, in contrast to the PAUSE, it is used where a get-off-the-machine stop, rather than a temporary stop, is desired. The octal number n is displayed on the 704 console in the address field of the storage register. (If n is not stated it is taken to be 0.)

DO. GENERAL FORM: "DO n i = m1, m2" or "DO n i = m1, m2, m3" where n is a statement number, i is a non-subscripted fixed point variable, and m1, m2, m3 are each either an unsigned fixed point constant or a non-subscripted fixed point variable. If m3 is not stated it is taken to be 1. EXAMPLES: DO 30 I = 1, 10; DO 30 I = 1, M, 3. The DO statement is a command to "DO the statements which follow, to and including the statement with statement number n, repeatedly, the first time with i = m1 and with i increased by m3 for each succeeding time; after they have been done with i equal to the highest of this sequence of values which does not exceed m2, let control reach the statement following the statement with statement number n". The range of a DO is the set of statements which will be executed repeatedly; it is the sequence of consecutive statements immediately following the DO, to and including the statement numbered n. The index of a DO is the fixed point variable i, which is controlled by the DO in such a way that its value begins at m1 and is increased each time by m3 until it is about to exceed m2. Throughout the range it is available for computation, either as an ordinary fixed point variable or as the variable of a subscript. During the last execution of the range, the DO is said to be satisfied. Suppose, for example, that control has reached statement 10 of the program:

10 DO 11 I = 1, 10
11 A(I) = I*N(I)
12
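NOTE: as an aside, in modern terms the quoted DO semantics amount to a test-at-the-bottom loop, so the range is executed at least once. A small Python sketch of mine, with made-up data for N:

def do_range(m1, m2, m3, body):
    # "DO n i = m1, m2, m3": run the range with i = m1, m1+m3, ...,
    # stopping after the highest value that does not exceed m2.
    i = m1
    while True:
        body(i)                 # the statements of the range
        if i + m3 > m2:         # the DO is satisfied
            break
        i += m3

# Equivalent of:   10 DO 11 I = 1, 10
#                  11 A(I) = I*N(I)
N = [None, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20]   # N(1..10); index 0 unused
A = [0] * 11
def stmt11(i):
    A[i] = i * N[i]
do_range(1, 10, 1, stmt11)
print(A[1:])    # [2, 8, 18, 32, 50, 72, 98, 128, 162, 200]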
NOTE: continuing text from here on.

The proprietors of the six sections were kept busy tracking down bugs elicited by our customers' use of FORTRAN until the late summer of 1957. Hal Stern served as the coordinator of the field debugging and maintenance effort; he received a stream of telegrams, mail and phone calls from all over the country and distributed the incoming problems to the appropriate members of our group to track down the errors and generate correction cards, which he then distributed to every installation. In the spring of 1957 Grace E. Mitchell joined our group to write the Programmer's Primer [IBM 1957] for FORTRAN. The Primer was divided into three sections; each described successively larger subsets of the language accompanied by many example programs. The first edition of the Primer was issued in the late fall or winter of 1957; a slightly revised edition appeared in March 1958. Mitchell planned and wrote the 64-page Primer with some consultation with the rest of the group; she later programmed most of the extensive changes in the system which resulted in FORTRAN II (see below). The Primer had an important influence on the subsequent growth in the use of the system. I believe it was the only available simplified instruction manual (other than reference manuals) until the later appearance of books such as [McCracken 1961], [Organick 1963] and many others. A report on FORTRAN usage in November 1958 [Backus 1958] says that "a survey in April [1958] of twenty-six 704 installations indicates that over half of them use FORTRAN [I] for more than half of their problems. Many use it for 80% or more of their work... and almost all use it for some of their work." By the fall of 1958 there were some 60 installations with about 66 704s, and "... more than half the machine instructions for these machines are being produced by FORTRAN. SHARE recently designated FORTRAN as the second official medium for transmittal of programs [SAP was the first]..."

4. FORTRAN II

During the field debugging period some shortcomings of the system design, which we had been aware of earlier but had no time to deal with, were constantly coming to our attention. In the early fall of 1957 we started to plan ways of correcting these shortcomings; a document dated September 25, 1957 [Proposed Specifications 1957] characterizes them as (a) a need for better diagnostics, clearer comments about the nature of source program errors, and (b) the need for subroutine definition capabilities. (Although one FORTRAN I diagnostic would pinpoint, in a printout, a missing comma in a particular statement, others could be very cryptic.) This document is titled "Proposed Specifications for FORTRAN II for the 704"; it sketches a more general diagnostic system and describes the new "subroutine definition" and END statements, plus some others which were not implemented. It describes how symbolic information is retained in the relocatable binary form of a subroutine so that the "binary symbolic subroutine [BSS] loader" can implement references to separately compiled subroutines. It describes new prologues for these subroutines and points out that mixtures of FORTRAN-coded and assembly-coded relocatable binary programs could be loaded and run together. It does not discuss the FUNCTION statement that was also available in FORTRAN II. FORTRAN II was designed mostly by Nelson, Ziller, and myself. Mitchell programmed the majority of new code for FORTRAN II (with the most unusual feature that she delivered it ahead of schedule). She was aided in this by Bernyce Brady and LeRoy May. Sheridan planned and made the necessary changes in his part of section 1; Nutt did the same for section 6. FORTRAN II was distributed in the spring of 1958.

5. FORTRAN III

While FORTRAN II was being developed, Ziller was designing an even more advanced system that he called FORTRAN III. It allowed one to write intermixed symbolic instructions and FORTRAN statements. The symbolic (704) instructions could have FORTRAN variables (with or without subscripts) as "addresses".
In addition to this machine dependent feature (which assured the demise of FORTRAN III along with that of the 704), it contained early versions of a number of improvements that were later to appear in FORTRAN IV. It had "Boolean" expressions, function and subroutine names could be passed as arguments, and it had facilities for handling alphanumeric data, including a new FORMAT code "A" similar to codes "I" and "E". This system was planned and programmed by Ziller with some help from Nelson and Nutt. Ziller maintained it and made it available to about 20 (mostly IBM) installations. It was never distributed generally. It was accompanied by a brief descriptive document [Additions to FORTRAN II 1958]. It became available on this limited scale in the winter of 1958-59 and was in operation until the early sixties, in part on the 709 using the compatibility feature (which made the 709 order code the same as that of the 704).

6. FORTRAN after 1958; comments

By the end of 1958 or early 1959 the FORTRAN group (the Programming Research Department), while still helping with an occasional debugging problem with FORTRAN II, was primarily occupied with other research. Another IBM department had long since taken responsibility for the FORTRAN system and was revising it in the course of producing a translator for the 709 which used the same procedures as the 704 FORTRAN II translator. Since my friends and I no longer had responsibility for FORTRAN and were busy thinking about other things by the end of 1958, that seems like a good point to break off this account. There remain only a few comments to be made about the subsequent development of FORTRAN. The most obvious defect in FORTRAN II for many of its users was the time spent in compiling. Even though the facilities of FORTRAN II permitted separate compilation of subroutines and hence eliminated the need to recompile an entire program at each step in debugging it, nevertheless compile times were long and, during debugging, the considerable time spent in optimizing was wasted. I repeatedly suggested to those who were in charge of FORTRAN that they should now develop a fast compiler and/or interpreter without any optimizing at all for use during debugging and for short-run jobs. Unfortunately the developers of FORTRAN IV thought they could have the best of both worlds in a single compiler, one which was both fast and produced optimized code. I was unsuccessful in convincing them that two compilers would have been far better than the compromise which became the original FORTRAN IV compiler. The latter was not nearly as fast as later compilers like WATFOR [Cress, Dirksen and Graham 1970] nor did it produce as good code as FORTRAN II. (For more discussion of later developments with FORTRAN see [Backus and Heising 1964].) My own opinion as to the effect of FORTRAN on later languages and the collective impact of such languages on programming generally is not a popular opinion. That viewpoint is the subject of a long paper [Backus 1978] which should appear soon in the Communications of the ACM. I now regard all conventional languages (e.g., the FORTRANs, the ALGOLs, their successors and derivatives) as increasingly complex elaborations of the style of programming dictated by the von Neumann computer. These "von Neumann languages" create enormous, unnecessary intellectual roadblocks in thinking about programs and in creating the higher level combining forms required in a really powerful programming methodology. Von Neumann languages constantly keep our noses pressed in the dirt of address computation and the separate computation of single words, whereas we should be focusing on the form and content of the overall result we are trying to produce. We have come to regard the DO, FOR, WHILE statements and the like as powerful tools, whereas they are in fact weak palliatives that are necessary to make the primitive von Neumann style of programming viable at all. By splitting programming into a world of expressions on the one hand and a world of statements on the other, von Neumann languages prevent the effective use of higher level combining forms; the lack of the latter makes the definitional capabilities of von Neumann languages so weak that most of their important features cannot be defined--starting with a small, elegant framework--but must be built into the framework of the language at the outset. The Gargantuan size of recent von Neumann languages is eloquent proof of their inability to define new constructs: for no one would build in so many complex features if they could be defined and would fit into the existing framework later on. The world of expressions has some elegant and useful mathematical properties whereas the world of statements is a disorderly one, without useful mathematical properties. Structured programming can be viewed as a modest effort to introduce a small amount of order into the chaotic world of statements. The work of Dijkstra [1976], Hoare [1969], and others to axiomatize the properties of the statement world can be viewed as a valiant and effective effort to be precise about those properties, ungainly as they may be.
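NOTE: as an aside, and certainly not from Backus's text, his expressions-versus-statements point can be seen in miniature even in a present-day language like Python: the same result, first in word-at-a-time statement style, then as a single composed expression.

# Statement world: a loop mutating one storage cell, word by word.
total = 0
for x in [1, 2, 3, 4]:
    total = total + x * x

# Expression world: one expression describing the whole result.
total2 = sum(x * x for x in [1, 2, 3, 4])

assert total == total2 == 30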
This is not the place for me to elaborate any further my views about von Neumann languages. My point is this: while it was perhaps natural and inevitable that languages like FORTRAN and its successors should have developed out of the concept of the von Neumann computer as they did, the fact that such languages have dominated our thinking for twenty years is unfortunate. It is unfortunate because their long-standing familiarity will make it hard for us to understand and adopt new programming styles which one day will offer far greater intellectual and computational power.

Acknowledgments

My greatest debt in connection with this paper is to my old friends and colleagues whose creativity, hard work and invention made FORTRAN possible. It is a pleasure to acknowledge my gratitude to them for their contributions to the project, for making our work together so long ago such a congenial and memorable experience, and, more recently, for providing me with a great amount of information and helpful material in preparing this paper and for their careful reviews of an earlier draft. For all this I thank all those who were associated with the FORTRAN project but who are too numerous to list here. In particular I want to thank those who were the principal movers in making FORTRAN a reality: Sheldon Best, Richard Goldberg, Lois Haibt, Harlan Herrick, Grace Mitchell, Robert Nelson, Roy Nutt, David Sayre, Peter Sheridan, and Irving Ziller. I also wish to thank Bernard Galler, J. A. N. Lee, and Henry Tropp for their amiable, extensive and invaluable suggestions for improving the first draft of this paper. I am grateful too for all the work of the program committee in preparing helpful questions that suggested a number of topics in the paper.
REFERENCES

Most of the items listed below have dates in the fifties, thus many that appeared in the open literature will be obtainable only in the largest and oldest collections. The items with an asterisk were either not published or were of such a nature as to make their availability even less likely than that of the other items.

Adams, Charles W. and Laning, J. H., Jr. 1954 May. The MIT systems of automatic coding: Comprehensive, Summer Session, and Algebraic. In Proc. Symp. on Automatic Programming for Digital Computers. Washington DC: The Office of Naval Research.

*Addenda to the FORTRAN Programmer's Reference Manual. 1957 February 8. (Transmitted to Dr. John Greenstadt, Special Programs Group, Applied Science Division, IBM, for distribution to SHARE members, by letter from John W. Backus, Programming Research Dept. IBM. 5 pages.)

*Additions to FORTRAN II 1958. Description of Source Language Additions to the FORTRAN II System. New York: Programming Research, IBM Corp. (Distributed to users of FORTRAN III. 12 pages.)

*Ash, R.; Broadwin, E.; Della Valle, V.; Katz, C.; Greene, M.; Jenny, A.; and Yu, L. 1957. Preliminary Manual for MATH-MATIC and ARITH-MATIC Systems (for Algebraic Translation and Compilation for UNIVAC I and II). Philadelphia Pa: Remington Rand UNIVAC.

Backus, J. W. 1954 January. The IBM 701 Speedcoding system. JACM 1 (1): 4-6.

*Backus, John 1954 May 21. Letter to J. H. Laning, Jr.

Backus, J. W. 1958 November. Automatic programming: properties and performance of FORTRAN systems I and II. In Proc. Symp. on the Mechanisation of Thought Processes. Teddington, Middlesex, England: The National Physical Laboratory.

Backus, John 1976 June 10-15. Programming in America in the nineteen fifties--some personal impressions. In Proc. International Conf. on the History of Computing, Los Alamos. (Publisher yet to be selected.)

Backus, John 1978. The von Neumann style as an obstacle to high level programming; an alternative functional style and its algebra of programs. (To appear, CACM.)

Backus, J. W. and Heising, W. P. 1964 August. FORTRAN. In IEEE Transactions on Electronic Computers. Vol EC-13 (4): 382-385.

Backus, John W. and Herrick, Harlan 1954 May. IBM 701 Speedcoding and other automatic programming systems. In Proc. Symp. on Automatic Programming for Digital Computers. Washington DC: The Office of Naval Research.

Backus, J. W.; Beeber, R. J.; Best, S.; Goldberg, R.; Haibt, L. M.; Herrick, H. L.; Nelson, R. A.; Sayre, D.; Sheridan, P. B.; Stern, H.; Ziller, I.; Hughes, R. A.; and Nutt, R. 1957 February. The FORTRAN automatic coding system. In Proc. Western Joint Computer Conf. Los Angeles.

Baker, Charles L. 1956 October. The PACT I coding system for the IBM Type 701. JACM 3 (4): 272-278.

Belady, L. A. 1965 June 15. Measurements on programs: one level store simulation. Yorktown Heights NY: IBM Thomas J. Watson Research Center. Report RC 1420.

Böhm, Corrado 1954. Calculatrices digitales: Du déchiffrage de formules logico-mathématiques par la machine même dans la conception du programme. In Annali di Matematica Pura ed Applicata 37 (4): 175-217.

Bouricius, Willard G. 1953 December. Operating experience with the Los Alamos 701. In Proc. Eastern Joint Computer Conf. Washington DC.

Bright, Herbert S. 1971 November. FORTRAN comes to Westinghouse-Bettis, 1957. In Computers and Automation.

Brown, J. H. and Carr, John W., III 1954 May. Automatic programming and its development on MIDAC. In Proc. Symp. on Automatic Programming for Digital Computers. Washington DC: The Office of Naval Research.
Cocke, John and Schwartz, J. T. 1970 April. Programming Languages and their Compilers. (Preliminary Notes, Second Revised Version, April 1970.) New York: New York University, The Courant Institute of Mathematical Sciences.

Cress, Paul; Dirksen, Paul; and Graham, J. Wesley 1970. FORTRAN IV with WATFOR and WATFIV. Englewood Cliffs NJ: Prentice-Hall.

Dijkstra, Edsger W. 1976. A Discipline of Programming. Englewood Cliffs NJ: Prentice-Hall.

Grems, Mandalay and Porter, R. E. 1956. A truly automatic programming system. In Proc. Western Joint Computer Conf. 10-21.

Hoare, C. A. R. 1969 October. An axiomatic basis for computer programming. CACM 12 (10): 576-580, 583.

*IBM 1956 October 15. Programmer's Reference Manual, The FORTRAN Automatic Coding System for the IBM 704 EDPM. New York: IBM Corp. (Applied Science Division and Programming Research Dept., Working Committee: J. W. Backus, R. J. Beeber, S. Best, R. Goldberg, H. L. Herrick, R. A. Hughes [Univ. of Calif. Radiation Lab., Livermore, Calif.], L. B. Mitchell, R. A. Nelson, R. Nutt [United Aircraft Corp., East Hartford, Conn.], D. Sayre, P. B. Sheridan, H. Stern, I. Ziller.)

*IBM 1957. Programmer's Primer for FORTRAN Automatic Coding System for the IBM 704. New York: IBM Corp. Form No. 32-0306.

Knuth, Donald E. and Pardo, Luis Trabb 1977. Early development of programming languages. In Encyclopedia of Computer Science and Technology. Vol 7: 419-493. New York: Marcel Dekker.

*Laning, J. H. and Zierler, N. 1954 January. A program for translation of mathematical equations for Whirlwind I. Cambridge Mass.: MIT Instrumentation Lab. Engineering Memorandum E-364.

McCracken, Daniel D. 1961. A Guide to FORTRAN Programming. New York: Wiley.

Moser, Nora B. 1954 May. Compiler method of automatic programming. In Proc. Symp. on Automatic Programming for Digital Computers. Washington DC: The Office of Naval Research.

Muller, David E. 1954 May. Interpretive routines in the ILLIAC library. In Proc. Symp. on Automatic Programming for Digital Computers. Washington DC: The Office of Naval Research.

*Operator's Manual 1957 April 8. Preliminary Operator's Manual for the FORTRAN Automatic Coding System for the IBM 704 EDPM. New York: IBM Corp. Programming Research Dept.

Organick, Elliot I. 1963. A FORTRAN Primer. Reading Mass.: Addison-Wesley.

*Perlis, A. J.; Smith, J. W.; and Van Zoeren, H. R. 1957 March. Internal Translator (IT): a compiler for the 650. Pittsburgh: Carnegie Institute of Tech.

*Preliminary Report 1954 November 10. Specifications for the IBM mathematical FORmula TRANslating system, FORTRAN. New York: IBM Corp. (Report by Programming Research Group, Applied Science Division, IBM. Distributed to prospective 704 customers and other interested parties. 29 pages.)

*Proposed Specifications 1957 September 25. Proposed Specifications for FORTRAN II for the 704. (Unpublished memorandum, Programming Research Dept. IBM.)

*Remington Rand, Inc. 1953 November 15. The A-2 compiler system operations manual. Prepared by Richard K. Ridgway and Margaret H. Harper under the direction of Grace M. Hopper.

Rutishauser, Heinz 1952. Automatische Rechenplanfertigung bei programmgesteuerten Rechenmaschinen. In Mitteilungen aus dem Inst. für angew. Math. an der E.T.H. Zürich. Nr. 3. Basel: Birkhäuser.

Sammet, Jean E. 1969. Programming Languages: History and Fundamentals. Englewood Cliffs NJ: Prentice-Hall.

Sheridan, Peter B. 1959 February. The arithmetic translator-compiler of the IBM FORTRAN automatic coding system. CACM 2 (2): 9-21.
*Schlesinger, S. I. 1953 July. Dual coding system. Los Alamos NM: Los Alamos Scientific Lab. Los Alamos Report LA 1573.

Zuse, K. 1959. Über den Plankalkül. In Elektron. Rechenanl. 1: 68-71.

Zuse, K. 1972. Der Plankalkül. In Berichte der Gesellschaft für Mathematik und Datenverarbeitung. 63, part 3. Bonn. (Manuscript prepared in 1945.)

URL: https://www.softwarepreservation.org/projects/FORTRAN/paper/p165-backus.pdf

If the above URL doesn't work, I have the PDF, and the PDF as images, in my public storage: https://1drv.ms/f/c/ea9004809c2729bb/EooBcDF17hpDo608yokm4bMBRedtOlqRpkbMsm32ztSddw?e=rQZfJh (converted from the PDF at https://pdf2png.com/)
  15. Thank you @gio74 + @mellypops for joining, and you are free to share gaming news or interests of your own :) please do so
  16. Such games have been my favorite for a long time. I like challenges, I like to think, to plan, and so on. I can play such games for hours without being bored. But from time to time I also like to play shooters.
  17. I've always been more into sports games, but now I'm discovering new games. Tactical games are interesting, and they really make you think. I've already tried Sins Of A Solar Empire and it's a really interesting game.
  18. For those that may know, I have always said I would honor those who follow me, but I have been too busy creating my own work. A few months ago I realized I wanted/needed to program more, and so the HDKiriban series was born; this is the first in the series. DogoKwan is a simple tile game. You can change the settings, the dimensions, or the difficulty. For my first 25 followers on deviantart I have a dropdown list to display their work. I will continue my HDKiriban series with the second game in the list for members 1-50. I am open to discussion :) And please share a screenshot of you finishing a game in the comments. WARNING! :) Let me help you: this game has an easy-to-trigger bug; if you change the dimension or difficulty settings while playing, you will cause problems.
  19. referral https://www.deviantart.com/dualmask/art/Tonfa-Girl-Game-Prototype-1034337267
  20. referral https://www.deviantart.com/mystic-skillz/journal/Psycho-Bliss-1034063919
  21. A friend of mine inspired this project; this is version 1, and more are coming. Version 1: Yellow means a draw, blue means the computer won, brown means you, the user, won. R means Rock, P means Paper, S means Scissors. Click Start to start the game, Stop to end it. You can click the R, P, or S button to set your value; the computer chooses its own away from your eyes :) but the results will be shown. Version 2: suggestions from angelalita about showing what the computer is doing, and my own trim with the domino-like tiles. Tileshift can be accessed at any time, but the wisest thing is to start it after you have stopped the rock paper scissors game. Version 3
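NOTE: as an aside, the draw/win logic described in item 21 can be sketched in a few lines of Python; this is a minimal sketch of my own with assumed names, not the game's actual code.

import random

BEATS = {"R": "S", "P": "R", "S": "P"}    # each move beats the one it maps to

def play(user_move):
    computer_move = random.choice("RPS")  # chosen away from your eyes
    if user_move == computer_move:
        return computer_move, "yellow"    # draw
    if BEATS[user_move] == computer_move:
        return computer_move, "brown"     # you, the user, won
    return computer_move, "blue"          # the computer won

print(play("R"))   # e.g. ('S', 'brown')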
  22. I answered Yes. I even received training on how to build a PC as recently as 2017.
  23. @Troy "computational elements" means programs/code; "physical elements" means battery/power source/resistors/capacitors/cords/circuit boards, and more
  24. No, I don't have any tech from the '70s; there wasn't much personal tech to speak of back then, just a TV, a radio, a variety of devices to play music, a calculator, and maybe a digital watch. Yeah, that laptop did not have a hard drive. One of the floppy disks held the operating system and the other held the program that you were running. I'm not sure they called that device a laptop back then; it might have been called a portable computer, but I could be wrong. I did not answer the question because I did not know what you meant between computational and physical elements. I used to build and sell personal computers back in the early 90s.