An RF Cafe Visitor's Thoughts on AI
Smorgasbord / Kirt's Cogitations™ #359

"Factoids," "Kirt's Cogitations," and "Tech Topics Smorgasbord" are all manifestations of my ranting on various subjects relevant (usually) to the overall RF Cafe theme.


[Image: "GPT 2024" - a mock-up of the HAL 9000 computer]

Artificial Intelligence (AI) has become a very controversial subject in the last few years, especially since the debut of the ChatGPT engine. "GPT" stands for Generative Pre-trained Transformer: a "deep learning" model that is pre-trained on a massive body of data and generates new content in response to user requests. I have posted a few articles on AI topics. A couple of RF Cafe visitors have chimed in with opinions on AI and whether it is more good than evil, or vice versa.

One guy in particular, an ubersmart engineer living north of the border, contributed the following, which I post with his permission (minus identification). This was his reaction to my posting of the "ChatGPT Thinks I Discovered and Own Everything" piece.

To the left is my lame mock-up of HAL 9000 (Heuristically Programmed ALgorithmic Computer) from "2001: A Space Odyssey."

Hi Kirt!

You are right, creations by ChatGPT are scary. Some of my friends say their kids are using it for their homework ... "what can possibly go wrong?" When ChatGPT first came out, we immediately got a company-wide email saying that we are not allowed to use it for work. Some of my colleagues have tried it for private use and found that it makes mistakes, especially in math. I am not surprised; IEEE Spectrum has published several articles on AI, especially on how "dumb" it is, specifically at math. What I also read is that ChatGPT is basically good at using language to imitate intelligence. GPT stands for Generative Pre-trained Transformer, a type of neural network especially suited for constructing language. The problem with these is that there are not enough data in the world to train them, so designers resort to other neural networks to generate training data ("what could possibly go wrong?"). I believe that in the case of ChatGPT, they released it upon the unsuspecting populace thinking that "users" would effectively train it, but most users tend to believe that it is teaching them.

I am generally skeptical about AI (I believe there is no such thing as artificial intelligence, only imitated intelligence, Turing test notwithstanding), and would never rely on it without having a means to verify its output. It can never "invent" anything, only discover things, which is useful because a discovery usually can be verified. I don't believe it can really compensate for a lack of human intelligence. (Someone before me said "Artificial Intelligence is no match for Natural Stupidity.") As AI becomes more accessible, it will fall into the hands of more actors possessed of "NS," and eventually nobody will be able to discern whether its output is true without having an independent means to verify it (like a digital computer built from inherently logical units to check math, for example, or a chemical lab to verify a new compound's properties).
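That verification point can be made concrete. Here is a minimal Python sketch (an editorial illustration, not something the correspondent wrote) of independently checking a claimed result with a deterministic computation rather than taking a chatbot's word for it; the equation and its claimed roots are hypothetical stand-ins for an AI's output:

    # Claim (hypothetical AI output): x = 3 and x = -5 solve x^2 + 2x - 15 = 0.
    # Verify the claim independently with exact integer arithmetic.
    def is_root(a, b, c, x):
        return a * x * x + b * x + c == 0

    for x in (3, -5):
        print(x, is_root(1, 2, -15, x))   # both print True -> claim checks out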

I tend to side with Roger Penrose on this, he being also very skeptical about AI, and also possessed of a high degree of "natural" intelligence himself; if anyone understands what real human intelligence is or could be, he does. Aside from solving technical and scientific problems, should we be using AI for creative tasks, just because we can? I have never been inclined to try ChatGPT, because I believe I would get more pleasure from creating something myself than from coaxing ChatGPT into creating something I like. The same goes for solving a math or physics problem; I enjoy using my brain and acquiring more knowledge, keeping it independent of complex external crutches into which I have no insight (especially if other users can "teach" them their own ways, rightly or wrongly).

For the same reasons, I do not rely on the internet to teach me something that I don't already know, or at least cannot determine the veracity of based on what I do know. (A recent case in point: a newly-hired Ph.D. in our group was of the conviction that to receive a signal from a right-circularly-polarized antenna, you need a left-circularly-polarized antenna at the other end. I tried to explain to him that this is wrong (except in the case of radar, where the reflection reverses the handedness); you need the same right-circularly-polarized antenna, based on reciprocity. I eventually made a mechanical model using identical bolts and nuts to show him how it works. We also have it working that way in our experimental project, so we know it works. Later he told me that he had read many posts on the internet where they got it wrong, and nobody corrected them.)
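The reciprocity argument is easy to check numerically. Below is a minimal numpy sketch (an editorial illustration, not the correspondent's bolt-and-nut model) using Jones vectors: a right-hand circularly polarized wave is fully received by a right-hand circularly polarized antenna and completely rejected by a left-hand one. The mirror flip accounts for the two antennas facing each other:

    import numpy as np

    # IEEE convention: a wave traveling +z is RHCP if its Jones vector is (x - jy)/sqrt(2).
    RHCP = np.array([1, -1j]) / np.sqrt(2)
    LHCP = np.array([1, +1j]) / np.sqrt(2)

    def match_factor(wave, rx_antenna):
        """Polarization efficiency: 1 = perfect match, 0 = total rejection.

        Each vector is given in its own antenna's frame; the receive antenna
        faces the incoming wave, so its y-axis is mirrored when expressed in
        the wave's coordinates (no conjugation in this form).
        """
        mirror = np.array([1, -1])
        return abs(np.dot(wave, mirror * rx_antenna)) ** 2

    print(match_factor(RHCP, RHCP))   # 1.0 -> same sense works
    print(match_factor(RHCP, LHCP))   # 0.0 -> opposite sense is rejected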

What is scary about ChatGPT is that many people will rely on it as absolute truth without question, or will devalue human-created output in favor of AI-created output, and many teachers and creators of literature and audio-visual art will be dismissed from their jobs for economic expediency. That will lead to the mental impoverishment of many people who have talents and thrive on enlightening or inspiring others, and it may create mental and social problems when they lose their purpose.

In a way, I find it impossible to create an artificial brain that can model a human brain. It would have to be "bigger" (in terms of degrees of freedom) than any human brain, just as a computer must be "bigger" than the circuit it is simulating. If humanity ever manages to make such a system, bigger than a human brain, then no human would be able to comprehend it; it would become unpredictable, and people would come to fear it. It's already getting that way with what we can build today. We must always include an "on/off" switch in whatever "intelligent" gadgets we build!

- Anon

After my response to the above post, he replied thus --

Kirt:

I believe Elon Musk developed (or is in the process of developing) an AI platform to counter anticipated biases of ChatGPT. I think it's called [Grok]. There are also other AI-type engines which are trained to discern AI-generated content from human-generated content; one of those might pick up my comments some day, or probably an intelligent human will (I hope).

I suppose I am biased towards humans (being one of them), in that I consider any other form of "intelligence" to be "Imitated Intelligence." For example, I believe only human intelligence can generate something like the proof of Fermat's Last Theorem. See if ChatGPT can generate a general proof of Goldbach's Conjecture. (Hint: a human one does not exist yet.) I also believe that AI cannot really "explain" anything; it can find an explanation in its database or training set, but it cannot teach like a Richard Feynman, for example.
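Goldbach's Conjecture nicely illustrates the gap between checking instances and proving the general statement. The Python sketch below (an editorial aside, not the correspondent's) verifies the conjecture for every even number up to a small bound; no amount of such case-checking amounts to a proof:

    def is_prime(n):
        """Trial division - slow but transparently correct."""
        if n < 2:
            return False
        if n % 2 == 0:
            return n == 2
        d = 3
        while d * d <= n:
            if n % d == 0:
                return False
            d += 2
        return True

    def goldbach_pair(n):
        """Return primes (p, q) with p + q = n for even n > 2, else None."""
        for p in range(2, n // 2 + 1):
            if is_prime(p) and is_prime(n - p):
                return (p, n - p)
        return None

    # Every even number from 4 to 10,000 has a pair -- instances, not a proof.
    assert all(goldbach_pair(n) for n in range(4, 10001, 2))
    print(goldbach_pair(10000))   # (59, 9941)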

It seems to me that AI is basically another form of technological "wow" factor. When negative-feedback regulators were invented, any device incorporating them was considered "smart," like a heater with a thermostat, or the AGC in a radio receiver. Then when adaptive systems were developed, like adaptive equalizers, "adaptive" became "smart." Then blind, unsupervised adaptive systems were conceived, which could automatically separate random mixtures of inputs like voices, and that became "smart." To me, none of these are intelligent, because I can do the math that makes them work. Yes, the results appear amazingly magical, as if the system were thinking about what it's doing, so in that way it was perhaps clever, but not "intelligent," IMHO. The deep-learning AI neural networks are just more elaborate versions of adaptive systems; in fact, the blind signal-separation algorithm I built from a 27-year-old paper is technically a fully-connected neural network, but the math is not at all mysterious.
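To illustrate how un-mysterious that math can be, here is a minimal least-mean-squares (LMS) adaptive filter in Python with numpy (an editorial sketch of the classic adaptive-equalizer update, not the correspondent's signal-separation algorithm). Three lines of arithmetic per sample are enough for the filter to "learn" an unknown channel:

    import numpy as np

    rng = np.random.default_rng(0)

    h_true = np.array([0.8, -0.4, 0.2])       # unknown system to be imitated
    x = rng.standard_normal(5000)             # input signal
    d = np.convolve(x, h_true)[:len(x)]       # desired response (system output)

    w = np.zeros(3)                           # adaptive filter weights
    mu = 0.01                                 # step size (learning rate)
    for n in range(2, len(x)):
        x_win = x[n-2:n+1][::-1]              # latest samples, newest first
        e = d[n] - np.dot(w, x_win)           # error against desired response
        w += mu * e * x_win                   # LMS weight update

    print(w)   # converges toward h_true = [0.8, -0.4, 0.2]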

Recently at work we had one of [my company's] "advisory board" forums, which is like a [company] conference where local university professors give presentations to select [company] researchers. It included several sessions on AI, so I asked a question in one of them: whether anyone has considered layering neural networks hierarchically, where the inputs to the next-higher layer would be the states (or synapse weights) of the lower-level network, like a "conscience." After the prof understood my question, he deemed it interesting, hummed and ummed a while, and said he would have to think about it but had no answer. To me it seems like natural brains need some such hierarchy in order to discipline and civilize themselves. Anyway, I hope he cites me if he ever invents something along those lines :)
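One toy reading of that idea (strictly an editorial sketch; the correspondent proposed only the concept) is a small "base" network that processes data while a "supervisor" network takes the base network's flattened weights, rather than its data, as input:

    import numpy as np

    rng = np.random.default_rng(1)

    # Base network: 4 inputs -> 3 hidden units -> 1 output.
    W1 = rng.standard_normal((3, 4))
    W2 = rng.standard_normal((1, 3))

    def base_forward(x):
        return np.tanh(W2 @ np.tanh(W1 @ x))

    # Supervisor layer: its input is the base network's weights (its "state"),
    # not the base network's data -- a crude stand-in for a "conscience."
    base_state = np.concatenate([W1.ravel(), W2.ravel()])   # 15 numbers
    V = rng.standard_normal((1, base_state.size))

    def supervisor(state):
        return np.tanh(V @ state)   # scalar assessment of the lower network

    print(base_forward(rng.standard_normal(4)))   # ordinary data path
    print(supervisor(base_state))                 # meta path over the weights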

- Anon

Your thoughts on the matter are welcome.


Posted January 16, 2024