valleyrock wrote: ↑Mon Jun 05, 2023 8:30 am
Heinz is using AI? Are they going to apply "AI" to photosynthesis? I don't think so.

Valuethinker wrote: ↑Mon Jun 05, 2023 9:44 am
Consider how good natural language translation in Google has become, a problem once deemed unsolvable. This is big stuff.

valleyrock wrote: ↑Mon Jun 05, 2023 10:37 am
Here's an interesting perspective on AI and the hype: https://www.theverge.com/23604075/ai-ch ... irror-test

Thanks for posting this important article, written by human intelligence (H.I.), on how AI chatbots can affect the human brain, its emotions, and its psychology, i.e., why so many of us are inclined to think maybe these machines are sentient. The article quotes some of the would-be believers as saying they don't really think it's sentient, but there's still that feeling to contend with.
One problem with all this hype is the failure to understand biology. Tech in general is linear. One line of code leads to the next, to put it simply. But when you throw in the nonlinearity of biology, things just don't work according to those linear perspectives. Oh, and us folks, we're animals, biological species, which throws a monkey wrench into the whole shebang.
When AI can produce ketchup in a replicator from its constituent atoms, then we'll have something. Until then, AI is basically another form of vaporware: software that's promised but never shows up. Maybe we need another term instead of vaporware. How about "vaporAI"? Goodness me, I have the vapors!
LLMs are capable of being non-linear. That's the whole point: they show emergent behavior, and can do things their designers have not anticipated.
I don't have a good sense of how much is hype and how much is capability. I take notice of the fact that many of the initiators of this work (Geoffrey Hinton in particular) have publicly called for a halt until we figure out what the implications are and what the best controls would be. They think it's serious enough to want to pause it.
So far what is publicly available seems a curiosity. But I don't imagine for one minute that what we are seeing reflects the state of the art in the labs.
The author also writes (boldface mine):
Now, though, these computer programs are no longer relatively simple and have been designed in a way that encourages such delusions. In a blog post responding to reports of Bing’s “unhinged” conversations, Microsoft cautioned that the system “tries to respond or reflect in the tone in which it is being asked to provide responses.” It is a mimic trained on unfathomably vast stores of human text — an autocomplete that follows our lead. As noted in “Stochastic Parrots,” the famous paper critiquing AI language models that led to Google firing two of its ethical AI researchers, “coherence is in the eye of the beholder.” ...
But in a time of AI hype, it’s dangerous to encourage such illusions. It benefits no one: not the people building these systems nor their end users. What we know for certain is that Bing, ChatGPT, and other language models are not sentient, and neither are they reliable sources of information. They make things up and echo the beliefs we present them with. To give them the mantle of sentience — even semi-sentience — means bestowing them with undeserved authority — over both our emotions and the facts with which we understand the world.
These illusions of sentience may be among the reasons billions are being invested in A.I. and chatbots, but FOMO is surely a big one. It also helps explain why deeply flawed chatbot models such as ChatGPT were launched before many technologists began warning the general public, and potential investors, about the A.I. dangers ahead, calling for a development pause and urgent regulation.
Thanks to OP Nisiprius for an important warning about "AI investing hype."