
Common ChatGPT misconceptions, or why chatbots are overrated, or what chatbots can and (more importantly) can't do, or tips on how to correctly use AI

09.03.25 edit: whoa, why is this page getting over 190 views? it's like the 3rd most viewed page

PS: I'm writing this in english just because I can. Plus, it narrows the scope of readers, thus the secret is better kept...

to everyone reading: yes, this is p-b, and I didn't use a translator (except for 2 words) or any AI, everything is written by hand. I'm good at English, aren't I

the intro is already way too long, but since this is a public interest message, take the time to read it. chances are, by the time you find this article (or someone, or I, kindly shares it with you), it'll be outdated, which I hope happens: these are issues, and things can only improve if we solve them. -> share this page!!

What Can't ChatGPT Do? (or any AI chatbot for that matter)

I'll get straight to the point and just list the stuff it can't do. the explanations come later in this page. feel free to copy the following link to share the main content directly. what it can't do:

  1. It can't do math*

  2. It can't talk about a specific book*

  3. It can't do spatial reasoning*

Of course, you will probably want proof, examples, and fixes, since there are suspicious asterisks. So, let's address the points one by one.

  1. It can't do math. More precisely, precise math questions, such as factoring, expanding, squaring, or multiplying, aren't in the scope of AI chatbots. they're made for text, not numbers. numbers don't behave like letters or words (or tokens): for example, if you see a "q" in a word, you're almost guaranteed to find a "u" next to it, and language is full of such patterns. math *does* have correlations between numbers too, e.g. a^2+2ab+b^2, but the links are more subtle, and hence harder to pick up, especially for chatbots. so what is the solution? use AI chatbots that are fine-tuned for math. don't use general-purpose AI chatbots for math. if you are unfamiliar with this area, here are some models you can try online to experiment by yourself: mathstral, openmath ai, or deep ai's mathbot; you get the point, any model with "math" in its name should do. another alternative, wolframalpha, is an already well-established math engine with a decent free plan, but sadly it doesn't show steps. about that, chatgpt can sometimes be useful for explaining the first few steps, but it will generally make an error by the 4th or 5th step anyway (or use methods you have never heard of, see /epreuvedregroupees/math/prompts.html for some well made examples), so don't take its word for granted. examples or proof? the most popular one is the question "is 24 a multiple of 5", to which chatgpt replies: "yes, it is, because 24//5 has a remainder of 4." obviously, this is not true, but imagine if you were to ask a question about a topic you aren't familiar with. for a 5 year old, chatgpt's answer is convincing enough, because it used fancy words ("remainder") and fancy symbols ("//", which, in case you didn't know, means floor division: the number of whole times the divisor fits into the dividend), and the 5 year old won't search any further.
so, imagine yourself asking about a topic you aren't comfortable with, being fooled by the chatbot's sweet words. keep in mind that the chatbot's goal is to answer you, even at the cost of facts. you can generally tweak its behaviour by clicking the gear icon and changing the system prompt, but the sad, cold, hard truth is that most people just won't bother with the settings. i know this for a fact because i have made tons of apps that shipped with GA (google analytics; by the way, if you're on a desktop you can open the devtools (ctrl+shift+i) and see the _ga-XXXX cookies and network requests), and the data shows that less than 8% of users even open the settings, and most of those misclicked. if you want to know more about this, look up "the power of defaults". incidentally, curiosity is what made me able to write this article, which will probably not be read by anyone, since i do not know how to use seo. back to solving the math problem: how can we bypass it? i'll be very concrete. use this sentence (paste it in said settings, under system prompt or anything equivalent): "your goal is to assist me, but only within your scope of knowledge. if you don't know how to solve my problem, tell me that you do not know how to do it, and give a disclaimer about the fact that you are going to make up stuff just to answer."
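by the way, you don't need to trust the chatbot at all for a claim like "is 24 a multiple of 5": the `//` and `%` operators it name-dropped are real Python syntax, and two lines of Python settle the question. a minimal sketch:

```python
# Check the chatbot's claim that "24 is a multiple of 5".
dividend, divisor = 24, 5

quotient = dividend // divisor   # floor division: how many whole times 5 fits in 24
remainder = dividend % divisor   # what's left over

# A number is a multiple of another only when the remainder is 0.
is_multiple = remainder == 0
print(quotient, remainder, is_multiple)  # prints: 4 4 False
```

so 24 = 5 * 4 + 4, and the nonzero remainder means 24 is not a multiple of 5, no matter how confident the chatbot sounds.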

  2. NO, IT DOES NOT KNOW ANYTHING ABOUT YOUR BOOK. DON'T ASK FOR A SUMMARY OR AN ANALYSIS OF * BOOK. think about it twice: how can a chatbot, which often trains on AI-generated data itself (long story short, there are AIs used to train AIs; look up the "gpt2 tragedy"), and which is based on internet-found content, have access to books? sure, books that are more than 100 years old (tl;dr that are public domain) and well documented won't put the chatbot in trouble, and it will summarize your book perfectly, but think about school books. there is probably not a big hype (so there aren't a lot of articles) around that one book that you're reading, and even if there was, human-made resources would almost certainly be better anyway. so, the next time you ask chatgpt to summarize a book for you, and it gives you an answer, think about the previous paragraph and how it's always trying to trick you. chatgpt isn't your friend. remember that. now, concerning the fix: if you ever need to summarize a book, read an essay, or comment on a quote, just give it the text! here's how i personally proceed (you can do this too, it's dead simple): open up google translate (if you have a fancy $1000 phone, your camera app probably has some text recognition capabilities) and scan your text / book (this doesn't work well with handwriting). tap "original text" (in google translate), then copy the text (and paste it somewhere, e.g. in a note) until you have all the text you need. then, give that text to your chatbot, and ask the question you originally wanted answered. keep in mind that the chatbot still isn't super-intelligent and only has the text that you gave it. example: i will provide you a snippet of a book, delimited with 3 backticks (or whatever symbol, it doesn't matter). you will have to summarize it and extract the main idea (if that's what you need)```(snippet of the book)```.
another example: (blah blah blah, backticks) you will have to write a commentary on the provided snippet. it will follow this structure: (the structure you need to follow)
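if you end up doing this a lot, the delimiter trick above is easy to script. here's a small sketch; `build_prompt` and all the names in it are mine, not any official API, and the book snippet is just public-domain filler:

```python
def build_prompt(snippet: str, task: str) -> str:
    """Wrap a scanned book snippet in triple backticks so the chatbot
    can't confuse the book's text with your instructions."""
    return (
        "I will provide a snippet of a book, delimited with 3 backticks. "
        + task + "\n"
        + "```" + snippet + "```"
    )

prompt = build_prompt(
    "Call me Ishmael. Some years ago...",
    "You will have to summarize it and extract the main idea.",
)
print(prompt)
```

then you just paste the resulting prompt into the chatbot. the point of the backticks is only to draw a clear border between your instructions and the scanned text.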

  3. by that, i mean that it can't think with shapes. ask any (major) chatbot out there and it'll struggle even with a basic circle. what it can't do: trigonometry, some excel formulas that interlink with each other too much, any kind of shaders (tested with opengl and webgl, and, yeah, it's trash), and math problems with squares and such. problems like "Mathilde found these four small squares. She wants to glue them onto a bigger square and has already glued one of them. She will glue the other three onto the yellow cells, then cut out and remove the green parts, but she wants the remaining white part to stay in one piece." are almost guaranteed to make the AI fail. surely, some AI out there will succeed, but you and i both know that we won't make the effort to find it.
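for reference, the "basic circle" that chatbots fumble is just a few lines of trigonometry. a minimal sketch (the helper name `circle_points` is mine): every point on a circle of radius r is (r·cos θ, r·sin θ) for some angle θ, so evenly spaced points just means evenly spaced angles.

```python
import math

def circle_points(n: int, radius: float = 1.0):
    """Return n evenly spaced (x, y) points on a circle of the given radius."""
    return [
        (radius * math.cos(2 * math.pi * k / n),
         radius * math.sin(2 * math.pi * k / n))
        for k in range(n)
    ]

pts = circle_points(4)
# First point is (1, 0); the next one, a quarter turn later, is (0, 1).
```

that's the whole trick; there's no deep geometry here, which is exactly why it's embarrassing when a chatbot gets it wrong.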

You get it, AI is just a tool, blah blah blah. but i hope this helps you understand why you shouldn't blindly trust AI.

if you took the time to read the whole article, well, firstly, thanks; and if you found a factual error or a typo, feel free to let me know. any feedback is welcome.