Not great or brilliant yet, but we can see where it is heading. BTW, ChatGPT has been in development for quite a few years. It's all military. Imagine what else they are withholding from humanity.




https://www.shtfplan.com/headline-news/tech-experts-say-ai-poses-a-human-extinction-risk
by Mac Slavo
Jun 1, 2023
Experts on artificial intelligence are coming out and warning of a “human extinction” risk from the progressing technology. Sam Altman, the CEO of ChatGPT maker OpenAI, along with executives from Google’s AI arm DeepMind and Microsoft, was among those who supported and signed the short statement.
“Contemporary AI systems are now becoming human-competitive at general tasks,” an earlier open letter had warned. “Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilization?” that letter asked.
Other tech leaders such as Tesla’s Elon Musk and former Google CEO Eric Schmidt have also cautioned about the risks AI poses to human society. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement released Tuesday read, according to a report by CNBC.
In that open letter, released back in March, Musk, Apple co-founder Steve Wozniak, and several other tech leaders urged AI labs to pause for six months the training of systems more powerful than GPT-4, OpenAI’s latest large language model.
The technology has gathered pace in recent months after the chatbot ChatGPT was released for public use in November and subsequently went viral. Within two months of its launch, it reached 100 million users. ChatGPT has amazed researchers and the general public with its ability to generate humanlike responses to users’ prompts, suggesting AI could replace jobs and imitate humans.
Consider Yourself Warned: ChaosGPT Declares Its Plans to Destroy Humanity
The statement’s publishers noted that it can be “difficult to voice concerns about some of advanced AI’s most severe risks” and that the statement aimed to overcome this obstacle and open up discussion.
ChatGPT has arguably sparked much more awareness and adoption of AI as major firms around the world have raced to develop rival products and capabilities.
The consequences of putting humanity’s existence into the hands of artificial intelligence, which has no morals or compassion, could be dire.
Dystopia Disguised as Democracy: All the Ways in Which Freedom Is an Illusion
The future has arrived
https://www.naturalnews.com/2023-06-01-uber-eats-to-deploy-additional-delivery-robots.html
by: Arsenio Toledo
Thursday, June 01, 2023
Food delivery company Uber Eats has announced that it will expand the number of four-wheeled robots delivering food through the company’s app.
The robots are made by Serve Robotics, and the decision to expand their use came after a wildly successful pilot of the robot delivery program that began in the West Hollywood neighborhood of Los Angeles last May.
The robots serve customers from 10 a.m. to 9 p.m. every day, and Uber Eats claimed that the use of robot deliveries in the city grew by more than 30 percent month over month and that more than 200 restaurants in the city have already signed up to offer deliveries using the small robots. (Related: Generative AI could replace up to 300 million mostly white-collar jobs worldwide.)
Serve Robotics and Uber Eats will now be expanding the program to several other markets in the United States.
“This partnership is a major step towards mass commercialization of robotics for autonomous delivery, and it is a testament to the success of our partnership,” said Serve Robotics CEO Ali Kashani in a statement. “We are excited to continue our work with Uber to bring this innovative technology to more cities across the country.”
Serve Robotics’ delivery robots are about the size of a small shopping cart. They are roughly waist-high, run on four wheels, and have a payload compartment that can easily fit two large shopping bags or four large pizza boxes. They can carry up to 50 pounds of food or other items, and they travel on the sidewalks of the cities where they operate.
The delivery robots are electric-powered and use artificial intelligence along with several cameras and sensors to navigate their surroundings. The AI detects when people or cars are in a robot’s path so it can avoid collisions.
A customer who places a food order on the Uber Eats app is asked whether they prefer to receive their items via a sidewalk delivery robot or a human courier; choosing the robot costs the customer nothing extra.
The robot travels up to seven miles per hour on the sidewalk and has a delivery range of about a mile. Kashani claimed customers usually receive their items within 30 minutes of placing their orders.
Upon arrival at the customer’s location, the robot sends a message that it has arrived, along with a unique code the customer enters to open the payload lid, ensuring that only the customer can retrieve the order.
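The article does not describe how this unlock code works internally. Purely as an illustration, a one-time pickup code could be handled along the lines of the Python sketch below; the four-digit code format, the function names, and the constant-time comparison are all assumptions rather than Serve Robotics’ actual design.

```python
# Hypothetical sketch of a one-time pickup-code check for a delivery robot's lid.
# Not Serve Robotics' implementation; names and flow are illustrative only.
import hmac
import secrets

def issue_pickup_code() -> str:
    """Generate a short one-time code, sent to the customer when the robot arrives."""
    return f"{secrets.randbelow(10**4):04d}"  # e.g. "4821"

def lid_should_open(entered_code: str, issued_code: str) -> bool:
    """Unlock the payload lid only if the entered code matches the issued one.
    A constant-time comparison avoids leaking digits through timing."""
    return hmac.compare_digest(entered_code, issued_code)

code = issue_pickup_code()            # texted to the customer on arrival
print(lid_should_open("0000", code))  # almost certainly False: lid stays shut
print(lid_should_open(code, code))    # True: lid unlocks for the customer
```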
The partnership is slated to deploy up to 2,000 delivery bots through the beginning of 2026. Once completed, this deal between Serve Robotics and Uber Eats would be one of the largest deployments of robotic delivery fleets ever in the United States.
“We expect our rapid growth on Uber Eats to continue,” said Kashani. “We expect to operate an increasing number of [robots] on Uber Eats as our coverage and delivery volume on Uber increases.”
Serve Robotics currently operates a fleet of about 100 robots in Los Angeles. With its partnership with Uber Eats, the company plans to deliver in other cities in California, Texas and even Canada.
“It’s been an incredible experience when you’re pioneering something like this to a city,” said Kashani. “The team and I right now are in the right time and the right place to be working on this and part of history. We are learning how to make robots friendly and likable while cutting down on traffic congestion.”
In addition, Serve Robotics already uses its fleet to conduct deliveries for the convenience store chain 7-Eleven in Los Angeles. It is also working with other companies to expand robot deliveries elsewhere, including for Pizza Hut in Vancouver and for Walmart across the entire state of Arkansas.
Serve Robotics is even working with self-driving car company Motional to trial delivering food in Santa Monica using the company’s self-driving Hyundai Ioniq 5 battery-electric cars.
Uber Eats has also been testing different delivery robot options in other locations. It launched a partnership with autonomous delivery robot company Cartken to test the robots in Miami, and following a successful trial expanded that partnership to several parts of Virginia.
Learn more about the takeover of human jobs by robots and artificial intelligence at Robotics.news.
Watch this video from “Future Now” on AMP News discussing the massive job losses caused by automation.
https://www.brighteon.com/embed/b82b5cdc-7b83-412c-bdca-b80a827be086
This video is from the AMP News channel on Brighteon.com.
Walmart converting 65% of its stores to “automation” – human employees will be let go.
ROBOCROPPERS: John Deere planning to replace farmers with fully automated farming vehicles by 2030.
Walmart announces expansion of drone deliveries to 4 million households in 6 states.
Grubhub, Yandex team up to use robots to deliver food on college campuses.
https://www.rt.com/news/576443-chatgpt-creator-ai-dangers/
May 18, 2023
The CEO of OpenAI, the company that developed ChatGPT, has urged US lawmakers to establish regulations for artificial intelligence. Sam Altman warned on Tuesday that the technology “could go quite wrong,” and likened the advent of AI programs to the invention of the printing press.
During a five-hour hearing at the Senate Judiciary Committee, the ChatGPT inventor was grilled on the potential dangers of AI chatbots and their ability to reshape human history. Altman admitted that his worst fear is that “significant harm” could be caused by the technology.
“If this technology goes wrong, it could go quite wrong, and we want to be vocal about that,” he said, adding that OpenAI is seeking to “work with the government to prevent that from happening.”
Senators quizzed Altman about the specific threats posed by ChatGPT and similar models, with Josh Hawley of Missouri asking if the technology could potentially influence elections.
“Should we be concerned about models that can predict survey opinion and then help organizations and entities fine-tune strategies to elicit behaviors from voters?” Hawley asked.
In response, Altman stated that AI’s powers of persuasion were one of his “greatest areas of concern,” insisting that significant work must be done to regulate the technology ahead of the 2024 US presidential election.
Senator Richard Blumenthal of Connecticut noted the potential dangers posed by AI to the job market, calling it “the biggest nightmare” facing the public.
Altman admitted that some professions would eventually be automated out of existence, but argued that new jobs would also be created as the technology progresses.
AI ‘more urgent’ risk than climate change – former Google engineer
Tuesday’s hearing was the first of a planned series intended to find ways to regulate AI programs such as ChatGPT. Since its release late last year, the app has become the fastest-growing in history, surpassing 100 million users in just two months. Amid fears that AI could soon get out of control, US lawmakers have stressed that measures to control it should have been developed long ago.
Earlier this month, the “godfather of AI” Geoffrey Hinton warned that the advent of artificial intelligence posed a “more urgent” threat to humanity than climate change. Other tech industry leaders, including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, also co-signed an open letter in March warning of the potential ramifications of an unregulated AI arms race. They called on Silicon Valley to impose a six-month pause in the development of the technology.
BY TYLER DURDEN
WEDNESDAY, MAY 10, 2023
Authored by Aden Tate via The Organic Prepper blog,
Do you remember when drones were first released to the commercial market? There was a lot of talk about their privacy implications, and more than one case of people flying them over other people’s property, looking in windows, and the like.
Now, drones are firmly entrenched in modern society, and they’re not going anywhere anytime soon. They’ve not only been incorporated into militaries throughout the world, but they’re also used in search and rescue operations, photography, security, and more. During Covid, Baltimore PD even wanted to use them to enforce social distancing.
You probably don’t even think about it anymore.
AI is already all around you, but most recognize that ChatGPT ushered in a new era of AI. Right now, we’re all in the same stage that we were when drones were first released – theorizing about potential fears, costs, and what this could mean. But I would say one of the things that’s going to happen here is that it’s going to become mainstream just like drones did.
What will the world look like when this finally happens?
I think that there are a few day-to-day implications of all this.
I don’t think it’s long until a personal AI is as commonplace as a smartphone. You can sneak across the border into the US nowadays, and you still have a smartphone in your hand. I don’t think it’s too far off to say that a personal AI is on the horizon. Think Iron Man’s Jarvis, Will Smith’s I, Robot, and Ron from Ron’s Gone Wrong (the best Pixar-style movie of the past ten years).
Everybody will have their own super personal assistant. Entertainment, organization, wayfinding, learning – it’ll all be streamlined and maximized.
For science, I think you’re liable to see some amazing discoveries made over the course of the next few years. What happens when you have an AI that is able to devote all of its energy, 24/7 to a single issue without ever growing tired, needing a change of scenery, or going on a vacation? What happens when you take something that can do incredibly fast calculations and run predictive algorithms until it finds something that will successfully fight this-or-that genetic defect with an 85% success rate? You end up with a Johnny Depp Transcendence type of situation. Chemistry, epigenetics, epidemiology, physics, astronomy, mathematics, engineering, pharmaceuticals – all of these fields are going to be absolutely blasted with new information. AI is going to be used to study itself as well. As a result, advancements in robotics, coding, and AI are also going to come about. And those advancements, in turn, will be used to drive new advancements.
Militaries will have to adopt AI or they will consistently be beaten on the battlefield by those who already have AI generals and logistics experts. You’re talking about playing chess with somebody who can think 50,000,000 moves into the future. Militaries will use AI to tell them the best places to position their troops, the likelihood of success for different missions given the variables, and how much ammunition they need to ship to here, here, and here.
For nation-states, the main thing here is going to be surveillance. All of the cameras, sensors, cell towers, GPS units, satellites, microphones, and all other equipment that can be hooked up to the internet – all of that will feed a constant stream of data directly into an AI that can give an up-to-date picture of everything that is going on.
The implications?
Minority Report, Shia LaBeouf’s Eagle Eye, and Christian Bale’s computer when he fights Heath Ledger in The Dark Knight.
(Interestingly, in Ron’s Gone Wrong, all of the data generated by every kids’ personal AI was then collected and sold. Do you think that your personal AI would actually protect your privacy, even if it explicitly told you it did?)
For militaries? What does it look like when AI is pitted against AI?
I think there are a couple of possibilities here.
Person of Interest did a fairly good job of tackling this concept. You would absolutely have AIs engaging in cyberattacks against each other. Whichever AI was faster to learn would be the one that would win each specific fight. Which AI can code better, faster? Which AI can monitor code better and push through containment protocols faster? Seeing these types of AI carry out actual attacks in the “real” world isn’t beyond the scope of possibility either. Keep in mind that one of the first things ChaosGPT tried to do was to source nuclear weapons. In the future, if there were a battle between AIs going on, why would they not use similar logic?
No, I can’t predict the future, nor do I try to, but I can see some of the signs around me and draw deductions just like anybody else. If I see a kid playing around a campfire, I have a pretty good idea that he’s going to get burnt. The problem is that I think this time, AI is the fire.
* * *
Aden Tate is the author of The Prepper’s Guide to Post-Disaster Communications and The Faithful Prepper
May 2, 2023
Geoffrey Hinton – the British-Canadian computer scientist often called the “AI Godfather” – left his position at tech giant Google to warn society about the threat AI poses to humanity. Sputnik decided to interview the ChatGPT neural network about the potential dangers of AI. This is how it answered.
Chatbots have taken over customer service interactions with ease, from providing pleasant responses to resolving inquiries as quickly as possible.
Over the years, chatbots have grown more advanced as their usage has increased. However, the threats associated with AI could prove detrimental if they are not addressed and tackled before it’s too late. This article will provide an overview of the risks associated with chatbots, focusing on the threats posed by ChatGPT.
Chatbots operate on programmed algorithms that aim to automate interactions based on certain inputs, mainly text. Chatbots are designed to provide speedy, efficient, and uninterrupted service, and this is what makes them popular. However, with the emergence of hybrid AI chatbots like ChatGPT, the risks tied to such technology are greater.
While it seems ChatGPT, an AI language model, has been created with the aim of enhancing interactions with humans through natural language processing, the potential risks of such an innovation go beyond facilitating a chat. As a hybrid AI system, ChatGPT utilizes deep learning neural networks to generate output based on human input. Given that its system is the basis for a range of AI applications, it is not far-fetched to envisage ChatGPT falling into the wrong hands.
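At its simplest, the text-in, text-out interaction described above is just a loop that sends the user’s message, plus the conversation so far, to the model and reads back the generated reply. The following is a minimal sketch, not anything from the article, assuming the OpenAI Python SDK as it existed in 2023; the model name, system prompt, and API-key handling are placeholders.

```python
# Minimal customer-service chat loop: illustrative only, assuming the
# 2023-era OpenAI Python SDK (openai < 1.0). Model name and prompt are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

history = [{"role": "system", "content": "You are a helpful customer-service assistant."}]

while True:
    user_text = input("You: ")
    history.append({"role": "user", "content": user_text})
    # Send the whole conversation so the model keeps context between turns.
    response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
    reply = response.choices[0].message["content"]
    history.append({"role": "assistant", "content": reply})
    print("Bot:", reply)
```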
‘Godfather of AI’ Quits Google, Warns Thinking Machines Pose Danger to Humanity
The real risks of ChatGPT and chatbots, in general, are many but can be broadly classified into the following categories:
1. Privacy and Security: Given its learning potential, ChatGPT poses a significant security risk. If attackers were to get hold of such technology, they could use it not only to make autonomous decisions but also to replicate it, which would be potentially disastrous in situations such as espionage and criminal activity.
2. Bias and Discrimination: Despite being creations of humans, chatbots and AI can exhibit the same biases as humans. If unchecked, AI can lead to discrimination against specific characteristics such as age, gender, race, or religion.
3. Job Displacement: While chatbots like ChatGPT are designed to reduce operating costs, their adverse effects on human workers should equally be considered. AI will undeniably replace many jobs, so it is necessary to weigh the benefits against the potential harm.
Musk Warns Against Progressing Beyond GPT-4 as Report Says AI May Replace 300 Mln Jobs
It’s no longer in doubt that AI is altering how things are done in various industries. Still, it is not a perfect system and has several limitations. For instance, it requires human input and interpretation to function effectively. However, as AI technology becomes more advanced, it could theoretically replace humans in specific areas, such as data analysis and customer service, leaving many unemployed.
Elon Musk and Steve Wozniak have expressed their fears about what AI could become and have urged a halt to its development. Both believe there needs to be proper supervision and control over the development of AI, along with a meeting between tech leaders and lawmakers to discuss the steps needed to control the technology.
US Lawmakers Introduce Bill to Prevent AI From Launching Nuclear Weapon
It’s undeniable that AI and chatbots hold untold possibilities, but as the rise of ChatGPT shows, these technologies need to be monitored, challenged, and controlled. Proper regulation and human oversight of AI development and deployment are necessary. Additionally, it would be crucial to consider implementing an AI safety certification framework to ensure that AI is deployed ethically.
In conclusion, the advances in AI have shown the potential benefits the technology can bring to different industries. However, the rise of ChatGPT indicates that a far greater level of supervision is needed in the development of such AI technologies. Proper regulation, oversight, and control measures must be put in place to ensure the technology remains safe for humans.
https://www.zerohedge.com/technology/which-jobs-will-be-most-impacted-chatgpt
BY TYLER DURDEN
THURSDAY, MAY 04, 2023
On November 30, 2022, OpenAI heralded a new era of artificial intelligence (AI) by introducing ChatGPT to the world.
The AI chatbot stunned users with its human-like and thorough responses. ChatGPT could comprehend and answer a variety of different questions, make suggestions, research and write essays and briefs, and even tell jokes (amongst other tasks).
Many of these skills are used by workers in their jobs across the world, which raises the question: which jobs will be transformed, or even replaced, by generative AI in the years ahead?
This infographic from Visual Capitalist’s Harrison Schell visualizes the March 2023 findings of OpenAI on the potential labor market impact of large language models (LLMs) and various applications of generative AI, including ChatGPT.
The OpenAI working paper specifically examined the U.S. industries and jobs most “exposed” to large language models like GPT, which the chatbot ChatGPT operates on.
Key to the paper is the definition of what “exposed” actually means:
“A proxy for potential economic impact without distinguishing between labor-augmenting or labor-displacing effects.” – OpenAI
Thus, the results include both jobs where humans could possibly use AI to optimize their work, along with jobs that could potentially be automated altogether.
OpenAI found that 80% of the American workforce belonged to an occupation where at least 10% of their tasks can be done (or aided) by AI. One-fifth of the workforce belonged to an occupation where 50% of work tasks would be impacted by artificial intelligence.
Here is a list of jobs highlighted in the paper as likely to see (or already seeing) AI disruption, where AI can reduce the time to do tasks associated with the occupation by at least 50%.
Analysis was provided by human assessors as well as by GPT-4, with results from both shown below:
Jobs | Categorized By | AI Exposure |
---|---|---|
Accountants | AI | 100% |
Admin and legal assistants | AI | 100% |
Climate change policy analysts | AI | 100% |
Reporters & journalists | AI | 100% |
Mathematicians | Human & AI | 100% |
Tax preparers | Human | 100% |
Financial analysts | Human | 100% |
Writers & authors | Human | 100% |
Web designers | Human | 100% |
Blockchain engineers | AI | 97.1% |
Court reporters | AI | 96.4% |
Proofreaders | AI | 95.5% |
Correspondence clerks | AI | 95.2% |
Survey researchers | Human | 84.0% |
Interpreters/translators | Human | 82.4% |
PR specialists | Human | 80.6% |
Animal scientists | Human | 77.8% |
Editor’s note: The paper only highlights some jobs impacted. One AI model found a list of 84 additional jobs that were “fully exposed”, but not all were listed. One human model found 15 additional “fully exposed” jobs that were not listed.
Generally, jobs that require repetitive tasks, some level of data analysis, and routine decision-making were found to face the highest risk of exposure.
Perhaps unsurprisingly, “information processing industries” that involve writing, calculating, and high-level analysis have a higher exposure to LLM-based artificial intelligence. However, science and critical-thinking jobs within those industries negatively correlate with AI exposure.
On the flipside, not every job is likely to be affected. Here’s a list of jobs that are likely least exposed to large language model AI disruption.
Jobs Least Exposed to AI | |
---|---|
Athletes | Short-order cooks |
Large equipment operators | Barbers/hair stylists |
Glass installers & repairers | Dredge operators |
Automotive mechanics | Power-line installers/repairers |
Masons, carpenters, roofers | Oil field maintenance workers |
Plumbers, painters, pipefitters | Servers, dishwashers, bartenders |
Naturally, hands-on industries like manufacturing, mining, and agriculture were more protected, but still include information processing roles at risk.
Likewise, the in-person service industry is also expected to see minimal impact from these kinds of AI models. But patterns are beginning to emerge for job-seekers and industries that may have to contend with artificial intelligence soon.
OpenAI analyzed correlations between AI exposure in the labor market against a job’s requisite education level, wages, and job-training.
The paper found that jobs with higher wages have a higher exposure to LLM-based AI (though there were numerous low-wage jobs with high exposure as well).
Job Parameter | AI Exposure Correlation |
---|---|
Wages | Direct |
Education | Direct |
Training | Inverse |
Professionals with higher education degrees also appeared to be more greatly exposed to AI impact, compared to those without.
However, occupations with a greater level of on-the-job training had the least amount of work tasks exposed, compared to those jobs with little-to-no training.
The potential impact of ChatGPT and similar AI-driven models on individual job titles depends on several factors, including the nature of the job, the level of automation that is possible, and the exact tasks required.
However, while certain repetitive and predictable tasks can be automated, others that require intangibles like creative input, understanding cultural nuance, reading social cues, or exercising good judgment cannot yet be made fully hands-off.
And keep in mind that AI exposure isn’t limited to job replacement. Job transformation, with workers using AI to speed up or improve task output, is extremely likely in many of these scenarios. Already, there are employment ads for “AI whisperers” who can effectively optimize automated responses from generalist AI.
As the AI arms race moves forward at a rapid pace rarely seen before in the history of technology, it likely won’t take long for us to see the full impact of ChatGPT and other LLMs on both jobs and the economy.
https://www.commondreams.org/opinion/noam-chomsky-on-chatgpt
Artificial intelligence (AI) is sweeping the world. It is transforming every walk of life and, in the process, raising major ethical concerns for society and the future of humanity. ChatGPT, which is dominating social media, is an AI-powered chatbot developed by OpenAI. It relies on what are called Large Language Models, a subset of machine learning, to generate human-like responses. The potential applications of such technology are enormous, which is why there are already calls to regulate AI like ChatGPT.
Can AI outsmart humans? Does it pose public threats? Indeed, can AI become an existential threat? The world’s preeminent linguist Noam Chomsky, one of the most esteemed public intellectuals of all time, whose intellectual stature has been compared to that of Galileo, Newton, and Descartes, tackles these nagging questions in the interview that follows.
Engineering projects can be useful, or harmful. Both questions arise in the case of engineering AI. Current work with Large Language Models (LLMs), including chatbots, provides tools for disinformation, defamation, and misleading the uninformed.
C. J. Polychroniou: As a scientific discipline, artificial intelligence (AI) dates back to the 1950s, but over the last couple of decades it has been making inroads into all sorts of fields, including banking, insurance, auto manufacturing, music, and defense. In fact, the use of AI techniques has been shown in some instances to surpass human capabilities, such as in a game of chess. Are machines likely to become smarter than humans?
Noam Chomsky: Just to clarify terminology, the term “machine” here means program, basically a theory written in a notation that can be executed by a computer–and an unusual kind of theory in interesting ways that we can put aside here.
We can make a rough distinction between pure engineering and science. There is no sharp boundary, but it’s a useful first approximation. Pure engineering seeks to produce a product that may be of some use. Science seeks understanding. If the topic is human intelligence, or cognitive capacities of other organisms, science seeks understanding of these biological systems.
As I understand them, the founders of AI–Alan Turing, Herbert Simon, Marvin Minsky, and others–regarded it as science, part of the then-emerging cognitive sciences, making use of new technologies and discoveries in the mathematical theory of computation to advance understanding. Over the years those concerns have faded and have largely been displaced by an engineering orientation. The earlier concerns are now commonly dismissed, sometimes condescendingly, as GOFAI–good old-fashioned AI.
Continuing with the question, is it likely that programs will be devised that surpass human capabilities? We have to be careful about the word “capabilities,” for reasons to which I’ll return. But if we take the term to refer to human performance, then the answer is: definitely yes. In fact, they have long existed: the calculator in a laptop, for example. It can far exceed what humans can do, if only because of lack of time and memory. For closed systems like chess, it was well understood in the ‘50s that sooner or later, with the advance of massive computing capacities and a long period of preparation, a program could be devised to defeat a grandmaster who is playing with a bound on memory and time. The achievement years later was pretty much PR for IBM. Many biological organisms surpass human cognitive capacities in much deeper ways. The desert ants in my backyard have minuscule brains but far exceed human navigational capacities, in principle, not just performance. There is no Great Chain of Being with humans at the top.
The products of AI engineering are being used in many fields, for better or for worse. Even simple and familiar ones can be quite useful: in the language area, programs like autofill, live transcription, and google translate, among others. With vastly greater computing power and more sophisticated programming, there should be other useful applications, in the sciences as well. There already have been some: Assisting in the study of protein folding is one recent case where massive and rapid search technology has helped scientists to deal with a critical and recalcitrant problem.
Engineering projects can be useful, or harmful. Both questions arise in the case of engineering AI. Current work with Large Language Models (LLMs), including chatbots, provides tools for disinformation, defamation, and misleading the uninformed. The threats are enhanced when they are combined with artificial images and replication of voice. With different concerns in mind, tens of thousands of AI researchers have recently called for a moratorium on development because of the potential dangers they perceive.
As always, the possible benefits of technology have to be weighed against potential costs.
Quite different questions arise when we turn to AI and science. Here caution is necessary because of exorbitant and reckless claims, often amplified in the media. To clarify the issues, let’s consider cases, some hypothetical, some real.
I mentioned insect navigation, which is an astonishing achievement. Insect scientists have made much progress in studying how it is achieved, though the neurophysiology, a very difficult matter, remains elusive, along with evolution of the systems. The same is true of the amazing feats of birds and sea turtles that travel thousands of miles and unerringly return to the place of origin.
Suppose Tom Jones, a proponent of engineering AI, comes along and says: “Your work has all been refuted. The problem is solved. Commercial airline pilots achieve the same or even better results all the time.”
If even bothering to respond, we’d laugh.
Take the case of the seafaring exploits of Polynesians, still alive among Indigenous tribes, using stars, wind, currents to land their canoes at a designated spot hundreds of miles away. This too has been the topic of much research to find out how they do it. Tom Jones has the answer: “Stop wasting your time; naval vessels do it all the time.”
Same response.
Let’s now turn to a real case, language acquisition. It’s been the topic of extensive and highly illuminating research in recent years, showing that infants have very rich knowledge of the ambient language (or languages), far beyond what they exhibit in performance. It is achieved with little evidence and in some crucial cases none at all. At best, as careful statistical studies have shown, available data are sparse, particularly when rank-frequency (“Zipf’s law”) is taken into account.
Enter Tom Jones: “You’ve been refuted. Paying no attention to your discoveries, LLMs that scan astronomical amounts of data can find statistical regularities that make it possible to simulate the data on which they are trained, producing something that looks pretty much like normal human behavior. Chatbots.”
This case differs from the others. First, it is real. Second, people don’t laugh; in fact, many are awed. Third, unlike the hypothetical cases, the actual results are far from what’s claimed.
These considerations bring up a minor problem with the current LLM enthusiasm: its total absurdity, as in the hypothetical cases where we recognize it at once. But there are much more serious problems than absurdity.
One is that the LLM systems are designed in such a way that they cannot tell us anything about language, learning, or other aspects of cognition, a matter of principle, irremediable. Double the terabytes of data scanned, add another trillion parameters, use even more of California’s energy, and the simulation of behavior will improve, while revealing more clearly the failure in principle of the approach to yield any understanding. The reason is elementary: The systems work just as well with impossible languages that infants cannot acquire as with those they acquire quickly and virtually reflexively.
It’s as if a biologist were to say: “I have a great new theory of organisms. It lists many that exist and many that can’t possibly exist, and I can tell you nothing about the distinction.”
Again, we’d laugh. Or should.
Not Tom Jones–now referring to actual cases. Persisting in his radical departure from science, Tom Jones responds: “How do you know any of this until you’ve investigated all languages?” At this point, the abandonment of normal science becomes even clearer. By parity of argument, we can throw out genetics and molecular biology, the theory of evolution, and the rest of the biological sciences, which haven’t sampled more than a tiny fraction of organisms. And for good measure, we can cast out all of physics. Why believe in the laws of motion? How many objects have actually been observed in motion?
There is, furthermore, the small matter of burden of proof. Those who propose a theory have the responsibility of showing that it makes some sense, in this case, showing that it fails for impossible languages. It is not the responsibility of others to refute the proposal, though in this case, it seems easy enough to do so.
Let’s shift attention to normal science, where matters become interesting. Even a single example of language acquisition can yield rich insight into the distinction between possible and impossible languages.
The reasons are straightforward and familiar. All growth and development, including what is called “learning,” is a process that begins with the state of the organism and transforms it step-by-step to later stages.
Acquisition of language is such a process. The initial state is the biological endowment of the faculty of language, which obviously exists, even if it is, as some believe, a particular combination of other capacities. That’s highly unlikely for reasons long understood, but it’s not relevant to our concerns here, so we can put it aside. Plainly there is a biological endowment for the human faculty of language. The merest truism.
Transition proceeds to a relatively stable state, changed only superficially beyond: knowledge of the language. External data trigger and partially shape the process. Studying the state attained (knowledge of the language) and the external data, we can draw far-reaching conclusions about the initial state, the biological endowment that makes language acquisition possible. The conclusions about the initial state impose a distinction between possible and impossible languages. The distinction holds for all those who share the initial state–all humans, as far as is known; there seems to be no difference in capacity to acquire language among existing human groups.
All of this is normal science, and it has achieved many results.
Experiment has shown that the stable state is substantially obtained very early, by three to four years of age. It’s also well-established that the faculty of language has basic properties specific to humans, hence that it is a true species property: common to human groups and in fundamental ways a unique human attribute.
A lot is left out in this schematic account, notably the role of natural law in growth and development: in the case of a computational system like language, principles of computational efficiency. But this is the essence of the matter. Again, normal science.
It is important to be clear about Aristotle’s distinction between possession of knowledge and use of knowledge (in contemporary terminology, competence and performance). In the language case, the stable state obtained is possession of knowledge, coded in the brain. The internal system determines an unbounded array of structured expressions, each of which we can regard as formulating a thought, each externalizable in some sensorimotor system, usually sound though it could be sign or even (with difficulty) touch.
The internally coded system is accessed in use of knowledge (performance). Performance includes the internal use of language in thought: reflection, planning, recollection, and a great deal more. Statistically speaking that is by far the overwhelming use of language. It is inaccessible to introspection, though we can learn a lot about it by the normal methods of science, from “outside,” metaphorically speaking. What is called “inner speech” is, in fact, fragments of externalized language with the articulatory apparatus muted. It is only a remote reflection of the internal use of language, important matters I cannot pursue here.
Other forms of use of language are perception (parsing) and production, the latter crucially involving properties that remain as mysterious to us today as when they were regarded with awe and amazement by Galileo and his contemporaries at the dawn of modern science.
The principal goal of science is to discover the internal system, both in its initial state in the human faculty of language and in the particular forms it assumes in acquisition. To the extent that this internal system is understood, we can proceed to investigate how it enters into performance, interacting with many other factors that enter into use of language.
Data of performance provide evidence about the nature of the internal system, particularly so when they are refined by experiment, as in standard fieldwork. But even the most massive collection of data is necessarily misleading in crucial ways. It keeps to what is normally produced, not the knowledge of the language coded in the brain, the primary object under investigation for those who want to understand the nature of language and its use. That internal object determines infinitely many possibilities of a kind that will not be used in normal behavior because of factors irrelevant to language, like short-term memory constraints, topics studied 60 years ago. Observed data will also include much that lies outside the system coded in the brain, often conscious use of language in ways that violate the rules for rhetorical purposes. These are truisms known to all field workers, who rely on elicitation techniques with informants, basically experiments, to yield a refined corpus that excludes irrelevant restrictions and deviant expressions. The same is true when linguists use themselves as informants, a perfectly sensible and normal procedure, common in the history of psychology up to the present.
Proceeding further with normal science, we find that the internal processes and elements of the language cannot be detected by inspection of observed phenomena. Often these elements do not even appear in speech (or writing), though their effects, often subtle, can be detected. That is yet another reason why restriction to observed phenomena, as in LLM approaches, sharply limits understanding of the internal processes that are the core objects of inquiry into the nature of language, its acquisition and use. But that is not relevant if concern for science and understanding have been abandoned in favor of other goals.
More generally in the sciences, for millennia, conclusions have been reached by experiments–often thought experiments–each a radical abstraction from phenomena. Experiments are theory-driven, seeking to discard the innumerable irrelevant factors that enter into observed phenomena–like linguistic performance. All of this is so elementary that it’s rarely even discussed. And familiar. As noted, the basic distinction goes back to Aristotle’s distinction between possession of knowledge and use of knowledge. The former is the central object of study. Secondary (and quite serious) studies investigate how the internally stored system of knowledge is used in performance, along with the many non-linguistic factors that enter into what is directly observed.
We might also recall an observation of evolutionary biologist Theodosius Dobzhansky, famous primarily for his work with Drosophila: Each species is unique, and humans are the most unique of all. If we are interested in understanding what kind of creatures we are–following the injunction of the Delphic Oracle 2,500 years ago–we will be primarily concerned with what makes humans the most unique of all, primarily language and thought, closely intertwined, as recognized in a rich tradition going back to classical Greece and India. Most behavior is fairly routine, hence to some extent predictable. What provides real insight into what makes us unique is what is not routine, which we do find, sometimes by experiment, sometimes by observation, from normal children to great artists and scientists.
One final comment in this connection. Society has been plagued for a century by massive corporate campaigns to encourage disdain for science, topics well studied by Naomi Oreskes among others. It began with corporations whose products are murderous: lead, tobacco, asbestos, and later fossil fuels. Their motives are understandable. The goal of a business in a capitalist society is profit, not human welfare. That’s an institutional fact: Don’t play the game and you’re out, replaced by someone who will.
The corporate PR departments recognized early on that it would be a mistake to deny the mounting scientific evidence of the lethal effects of their products. That would be easily refuted. Better to sow doubt, encourage uncertainty, contempt for these pointy-headed suits who have never painted a house but come down from Washington to tell me not to use lead paint, destroying my business (a real case, easily multiplied). That has worked all too well. Right now it is leading us on a path to destruction of organized human life on earth.
In intellectual circles, similar effects have been produced by the postmodern critique of science, dismantled by Jean Bricmont and Alan Sokal, but still much alive in some circles.
It may be unkind to suggest the question, but it is, I think, fair to ask whether the Tom Joneses and those who uncritically repeat and even amplify their careless proclamations are contributing to the same baleful tendencies.
CJP: ChatGPT is a natural-language-driven chatbot that uses artificial intelligence to allow human-like conversations. In a recent article in The New York Times, written in conjunction with two other authors, you dismissed the new chatbots as hype because they simply cannot match the linguistic competence of humans. Isn’t it possible, however, that future innovations in AI can produce engineering projects that will match and perhaps even surpass human capabilities?
NC: Credit for the article should be given to the actual author, Jeffrey Watumull, a fine mathematician-linguist-philosopher. The two listed co-authors were consultants, who agree with the article but did not write it.
It’s true that chatbots cannot in principle match the linguistic competence of humans, for the reasons repeated above. Their basic design prevents them from reaching the minimal condition of adequacy for a theory of human language: distinguishing possible from impossible languages. Since that is a property of the design, it cannot be overcome by future innovations in this kind of AI. However, it is quite possible that future engineering projects will match and even surpass human capabilities, if we mean human capacity to act, performance. As mentioned above, some have long done so: automatic calculators for example. More interestingly, as mentioned, insects with minuscule brains surpass human capacities understood as competence.
CJP: In the aforementioned article, it was also observed that today’s AI projects do not possess a human moral faculty. Does this obvious fact make AI robots less of a threat to the human race? I reckon the argument can be that it makes them perhaps even more so.
NC: It is indeed an obvious fact, understanding “moral faculty” broadly. Unless carefully controlled, AI engineering can pose severe threats. Suppose, for example, that care of patients was automated. The inevitable errors that would be overcome by human judgment could produce a horror story. Or suppose that humans were removed from evaluation of the threats determined by automated missile-defense systems. As a shocking historical record informs us, that would be the end of human civilization.
CJP: Regulators and law enforcement agencies in Europe are raising concerns about the spread of ChatGPT while a recently submitted piece of European Union legislation is trying to deal with AI by classifying such tools according to their perceived level of risk. Do you agree with those who are concerned that ChatGPT poses a serious public threat? Moreover, do you really think that the further development of AI tools can be halted until safeguards can be introduced?
NC: I can easily sympathize with efforts to try to control the threats posed by advanced technology, including this case. I am, however, skeptical about the possibility of doing so. I suspect that the genie is out of the bottle. Malicious actors–institutional or individual–can probably find ways to evade safeguards. Such suspicions are of course no reason not to try, and to exercise vigilance.
C.J. Polychroniou is a political economist/political scientist who has taught and worked in numerous universities and research centers in Europe and the United States. His latest books are The Precipice: Neoliberalism, the Pandemic and the Urgent Need for Social Change (A collection of interviews with Noam Chomsky; Haymarket Books, 2021), and Economics and the Left: Interviews with Progressive Economists (Verso, 2021).
Noam Chomsky is Institute Professor (retired) at MIT. He is the author of many books and articles on international affairs and social-political issues.
https://www.zerohedge.com/technology/rise-skynet-robot-dog-gets-chatgpt-brain
BY TYLER DURDEN
TUESDAY, MAY 02, 2023
A team of artificial intelligence engineers equipped a Boston Dynamics robot dog with OpenAI’s ChatGPT and Google’s Text-to-Speech voice, creating what could be a real-life Skynet-like robot.
In a recent video posted to Twitter, machine learning engineer Santiago Valdarrama showed how the robo-dog can interact with humans through a voice interface, which is much faster than working through control panels and written reports.
“These robots run automated missions every day,” Valdarrama said in a Twitter thread, noting that each mission relies on “miles-long, hard-to-understand configuration files” that “only technical people can handle.” When the robot is paired with ChatGPT and Google’s Text-to-Speech voice, a user can ask it simple questions about “configuration files and the mission results.”
“We can now ask the robots about past and future missions and get an answer in real time. ChatGPT interprets the question, parses the files, and formulates the answer,” he said.
The ChatGPT brain means anyone can talk to the robo-dog.
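Valdarrama has not published the glue code, but the pipeline he describes, where ChatGPT interprets a question against the mission’s configuration files and the answer is then spoken aloud, could be wired together roughly as in the sketch below. It assumes the OpenAI Python SDK (circa 2023) and the Google Cloud Text-to-Speech client library; the file name, prompt, and helper functions are illustrative only.

```python
# Hypothetical sketch of a "talk to the robot" pipeline: an LLM answers a
# question about a mission configuration file, then Google Cloud TTS reads
# the answer aloud. Not Valdarrama's actual code; names are assumptions.
import openai
from google.cloud import texttospeech

openai.api_key = "YOUR_OPENAI_KEY"  # placeholder

def answer_mission_question(question: str, config_path: str) -> str:
    """Let the LLM interpret the question against the raw mission config."""
    with open(config_path) as f:
        mission_config = f.read()
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You answer questions about a robot's mission "
                        "configuration file, quoted below.\n" + mission_config},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message["content"]

def speak(text: str, out_path: str = "answer.mp3") -> None:
    """Synthesize the answer with Google Cloud Text-to-Speech."""
    client = texttospeech.TextToSpeechClient()
    audio = client.synthesize_speech(
        input=texttospeech.SynthesisInput(text=text),
        voice=texttospeech.VoiceSelectionParams(language_code="en-US"),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3),
    )
    with open(out_path, "wb") as f:
        f.write(audio.audio_content)

if __name__ == "__main__":
    answer = answer_mission_question(
        "How far did the robot travel on yesterday's mission?",
        "mission_config.yaml")  # hypothetical file name
    speak(answer)
```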
In the short term, integrating a ChatGPT brain into robots may appear harmless. However, there’s a dark risk to artificial intelligence, giving rise to intelligent robots in a Skynet-like scenario.