Can ChatGPT Make You A Genius?

We might all be surprised.

Abstract: This essay explores how AIs like ChatGPT can impact human cognition. Humans tend to learn best from their mistakes, and AIs can learn from "filters" in a similar way: both create new, parallel problem-solving routes that enhance the capability of the neural net. New ideas can have a similar effect on human cognition, but humans tend to avoid mistakes and new ideas for the sake of efficiency. ChatGPT, however, naturally makes mistakes that the user must correct, and it unavoidably offers new ideas. This process increases the pattern recognition capability of the human neural net, much as filters increase the pattern recognition capability of an AI's neural net, and pattern recognition capability is a measure of intelligence. The essay discusses this potential and asks whether the effect on human and AI cognition has the same underlying cause.

I have been experimenting with ChatGPT 3 lately... just like millions of other people, but it struck me that it might have an unexpected potential. It should be able to raise one's actual intellectual ability, known as intelligence, though probably not in the way you think. That should get skeptical raised eyebrows, but stick with me for a bit and this might intrigue you. Note that the point of this essay is to discuss something many people might not think about because it is unfamiliar. It is not about AI, but about human cognition affected by AI. This might be the most important effect of ChatGPT: not its answers, but how its mistakes and its novelty affect the thinking processes of the person asking the questions. This perspective comes from my background in both software and cognition.

As a software developer, I was curious about ChatGPT and its reported potential to replace my job. However, upon exploring it, I realized that while it has impressive capabilities, it also has its limitations. It can produce strange or incorrect code, so [development] expertise is still required to use it effectively. Nevertheless, I believe it can be a valuable tool, particularly at the start of projects or for creating and testing prototypes. While it may produce unexpected code, there are often multiple ways to approach a task. Beyond software development, I think ChatGPT could also be useful in fields such as medicine, law, and science, which are overloaded with knowledge and information, leading to utilization problems and potential duplication of work.

This essay covers two topics: enhancement of the function of the human neural net (intelligence) through problem solving, and enhancement from newness. Covering two topics makes it more difficult to write and perhaps to understand. Problem solving does enhance the neural net and is valuable. Newness has more of a philosophical value, but it also inherently leads to enhancement of the function of the neural net. ChatGPT leads to both of these situations occurring.

Test 1 - About Mistakes. I also happen to be an expert on the topic of California scuba diving, so I asked some questions about that subject. It seemed like a good baseline test for ChatGPT's accuracy and the nature of its responses. I was not impressed. It likes to produce boilerplate and it blathers. It was even flat wrong: it insisted that Gull Island was not at Santa Cruz Island, where it actually is, but instead at Catalina Island. It is well known that ChatGPT will make things up. I don't think I would want to go diving with it. Those errors and weaknesses, though, are what this essay is about. It definitely makes mistakes. (I have also watched videos of other people using ChatGPT to write code, and those showed even better examples of ChatGPT's mistakes than my own testing did.)

Test 2 - About Newness. I asked ChatGPT to write some C# code for querying SQL Server. I have some good code for doing that. While I have seen several ways to accomplish this task, I have been using the same method for the last 20 years because it has never failed in any way. I never looked for new methods, but ChatGPT returned code that uses an approach I had never seen or tested. Although there was no pressing need for me to change my current method, I found the new code fascinating and decided to experiment with it. It definitely offers a new perspective, and it is interesting to get such new solutions without even asking for them. This highlights the potential of ChatGPT to offer newness unprompted, which could well be the norm rather than the exception.
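To make the "multiple ways to approach a task" point concrete, here is a hypothetical sketch, not the actual C#/SQL Server code from my test, written in Python with its built-in sqlite3 module so it is self-contained. The two approaches return identical data but are structured quite differently, which is the kind of contrast that made the unfamiliar version worth experimenting with:

```python
import sqlite3

# Set up a small in-memory database just for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE divers (name TEXT, dives INTEGER)")
conn.executemany("INSERT INTO divers VALUES (?, ?)",
                 [("Ann", 120), ("Bob", 45)])

# Approach 1: the "long-used" style -- explicit cursor, manual fetch loop,
# building result dictionaries by hand from positional columns.
cur = conn.cursor()
cur.execute("SELECT name, dives FROM divers WHERE dives > ?", (50,))
rows_v1 = []
for row in cur.fetchall():
    rows_v1.append({"name": row[0], "dives": row[1]})

# Approach 2: an alternative style -- row_factory yields name-addressable
# rows, and the connection doubles as a transaction context manager.
conn.row_factory = sqlite3.Row
with conn:
    rows_v2 = [dict(r) for r in
               conn.execute("SELECT name, dives FROM divers WHERE dives > ?",
                            (50,))]

print(rows_v1 == rows_v2)  # both approaches produce the same data
```

Neither approach is "wrong"; they simply exercise different parts of the API, which is exactly why seeing an unfamiliar one can be instructive.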

In the past I have studied cognition and written about intelligence and even genius. I have observed that when people discuss intelligence, they are often referring to knowledge. However, knowledge and intelligence are not the same thing. Knowledge is a product of culture and developed over time, whereas intelligence is an innate cognitive ability that has always existed. Having a lot of knowledge can be an indication of intelligence, but the lack of knowledge does not prove a lack of intelligence. Similarly, logic is also largely a learned cultural artifact. Although humans have the capacity for logic and reason, they typically only use it if they have been trained in the skill. It does not just come naturally.  (Nevertheless, the neural net can still fire off logical processes as part of its natural problem-solving methods, which is interesting but not what this is about here.)

The mind is a neural net, a pattern recognition device. That pattern recognition capability is intelligence. Animals used the brain to solve problems long before there was language, or much logic, and they have very little "cultural knowledge". How we solve problems with our neural net does not really involve language much at all. In fact, the most difficult part of using intelligence can be converting an insight, from parts of our brain that evolved long before language, into "words" that can be retained in memory and culturally shared. (I wrote a book that was mostly about how to do that: "When Barbara Explained Genius".) AIs are also based on neural nets and are pattern recognition mechanisms like the organic brain. That leads to an interesting question about the similarity between the two and "filters", but first let's look at another point that should help clarify some of this. A simpler, more easily understood and observed neural net function is vision.

Vision is a function of the mind that operates as if it has its own specialized neural net. While that statement is both true and misleading, the distinction is not important here. Testing can reveal some details about how vision works. For example, if you show people images for set amounts of time, very short ones, you can analyze how vision progresses. The first thing people see is the outline of the gestalt, the whole image. This has to be recognized first to start the recognition and analysis of what is seen. If the image is shown for a slightly longer time, some details and objects in it can be detected, but not their locations. Shortly after that, the locations of the details are detected. It makes sense that the outline has to be recognized or defined first to begin recognition and analysis. A pattern may even be too vague to be recognized, like looking into a fog. Blinking resets the detection of the gestalt, and this also works for the broader neural net. In any case, neural nets operate by recognizing the domain of a pattern, whether visual, auditory, behavioral, or intellectual, and then filling in the details.

A few years ago, researchers working on early forms of AI, such as image recognition, discovered that they could develop an AI that solves a problem with around 75% accuracy without much difficulty. However, if they asked the AI to find another way to solve the same problem, the accuracy would increase to around 85%. If they asked it to find yet another way, the accuracy might increase to around 90%, and so on. This improvement is not about practicing the same method repeatedly, but about finding a method that gives different clues, or possibly a more accurate algorithm for parts of the problem. Although this can enhance the pattern recognition function, it does lead to diminishing returns, which is a significant consideration because training AIs can be costly. Each time the AI found a new way to solve the problem, the researchers referred to it as a "filter." Each filter improved the pattern recognition ability of the AI, and in that sense, the IQ of the AI increased. How the neural nets actually solve a problem, though, is still not observable or traceable.

Can humans add filters? They can, and it is not difficult. It is well known that the best learning comes from making mistakes and correcting them. Solving a mistake adds a filter to the person's ability. Humans tend to avoid making mistakes, though. Like the tried-and-tested SQL code I use, I avoid even revisiting how to use C# to query a database. I just use my tested method and avoid even the potential of making mistakes by changing how I do it. It is not only efficient but, considering how the human mind engraves memories when adrenaline is present, as when bad mistakes are made, it seems evolution programs us to avoid mistakes that way.

I have wondered if you could create a curriculum that would naturally lead students to come up with the wrong answer to a problem, so that they could then be led to a new answer by a different method, creating filters. It seems possible, but it would take some focus. ChatGPT, though, just does it naturally. The mistakes and evasions in its answers are functionally equivalent to the individual making mistakes, because what matters is the user solving them, not where they came from. And beyond making mistakes, ChatGPT will often show the individual completely new ways of solving a problem. Learning from these mistakes and learning unfamiliar, new solutions can add filters to a user's problem-solving ability. ChatGPT simply leads to mistakes and novel solutions, both of which humans naturally avoid and both of which naturally lead to the creation of filters. In this sense, I hope newer generative AIs and chained AIs do not become a whole lot more accurate. That would offer fewer mistakes for the user to learn from, and so fewer chances to improve their individual problem-solving abilities.

Yes, ChatGPT appears to be a useful tool for education and productivity, and it has the potential to address the challenges faced in fields such as medicine, law, engineering, and other areas with vast amounts of information. Its application in the sciences in particular, should help researchers find related research and could help prevent duplication of research efforts. However, what's most intriguing to me is how ChatGPT will affect human cognition. If it becomes too accurate, there is a risk that we may become too reliant on it. On the other hand, if ChatGPT continues to make mistakes, as I anticipate it will, using it will not only require expert knowledge, but it will also enhance the user's cognitive abilities.

As for replacing employees, if ChatGPT continues to reliably make mistakes, it will not replace people; rather it will make an expert more efficient and capable of completing tasks more quickly, which is the definition of a tool. (That raises questions about learning curves that I have seen in other situations as well, but that is another story.)
There is a potential caveat when it comes to using ChatGPT for education. While it can likely be adapted as a tool to aid in learning, the way it works now would be akin to having a different teacher every day for students. This inconsistency in teaching style could be problematic for effective learning.

Another interesting thing about ChatGPT is that when you use it to "update a paragraph for clarity", as I did with each paragraph in this essay, it will remove anything that is unfamiliar to it. That makes sense: unfamiliar ideas, like the two main points of this essay, simply are not in its training material, so there is no way they can be output. I have no idea whether that will be true for other generative AIs, but it is interesting, and I saw the same thing when someone else was using ChatGPT for the same purpose.

ChatGPT is a powerful tool for providing answers based on cultural knowledge, and as such, it can be a valuable productivity and teaching tool. Its most interesting aspect, though, may be that it offers us new ways to think about problems as well as errors and mistakes we need to solve. Both are effective ways to develop intellectual ability, and both are actually uncommon because we naturally avoid them. By many standards, it will literally lead to an increase in intelligence, in the sense of enhancing the problem-solving ability of the neural nets of our brains, as opposed to learning through the cultural tools of logic, reason, and knowledge. While it may not make us geniuses, it can certainly help improve our cognitive abilities.