ChatGPT, an amazing tool for debugging

This is a super interesting topic for developing at least simple BeOS-like applets that extend the OS. Something like what TuneTracker was at its start could be achievable relatively easily and safely, no?

I see it as more suitable for creating simple snippets or skeleton classes very quickly, to speed things up a bit.
The other interesting use case is for showing online documentation and examples of Haiku API.
I presume ChatGPT has been trained on a subset, if not all, of the content of the BeBook and the Haiku Book, plus code from GitHub (like Copilot).

1 Like

I would not trust it in an automated way. And I do not think it is evolved enough to reply by itself to code review comments on Gerrit.

If someone wants to try to use it to improve documentation, they will have to take responsibility for any issues resulting (that means getting it past code review and applying changes as requested, with or without assistance).

Writing code is not the most important part of an application; doing the proper architecture and design is. And then debugging. And debugging and evolving code you didn’t write is a bit more difficult than doing it on code you designed yourself.

Is that what we’re going to get? People trying to write applications this way with code they don’t fully understand? I can’t say I have a problem with that, I did it in some way with different tools (writing apps in Delphi when I was 12 years old or so, I had no idea what a variable was, it was quite a mess).

So let’s see where it can be useful and how far it can get before reaching its limitations (or at which point it becomes more annoying and verbose to tell ChatGPT what you want than to write the code yourself).

4 Likes

As it stands, it’s useful for debugging, it can comment code, etc. It’s also great if you need an idea on how to handle a logic issue.

It’s really a very useful tool. I think of it as a good tool for self-guided learning, debugging, and code lookup, and it reads and understands documentation.

I think having it in an IDE as a tool would be a huge help. It certainly could cut down on a lot of the grinding boilerplate work.

1 Like

Who is “they”? Besides 3dEyes?

The only thing they added to the generated code

In fact, if you point out errors in the generated code to the AI, it starts to take that into account and writes better code. For example, it now already creates an instance of the application and generates the correct message handling.
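
For illustration, here is a rough sketch of the kind of skeleton it produces now (my own minimal example, not its literal output; the class names, application signature and message constant are made up):

```cpp
// Minimal sketch: an application object plus a window with a button,
// and the button's message handled in MessageReceived().
#include <Application.h>
#include <Button.h>
#include <Message.h>
#include <Window.h>

static const uint32 kMsgButtonPressed = 'btnp';
	// hypothetical message constant, for illustration only

class MainWindow : public BWindow {
public:
	MainWindow()
		:
		BWindow(BRect(100, 100, 400, 250), "Example", B_TITLED_WINDOW,
			B_QUIT_ON_WINDOW_CLOSE)
	{
		// The button posts our custom message to this window.
		AddChild(new BButton(BRect(10, 10, 120, 40), "button",
			"Press me", new BMessage(kMsgButtonPressed)));
	}

	virtual void MessageReceived(BMessage* message)
	{
		switch (message->what) {
			case kMsgButtonPressed:
				// React to the button press here.
				break;
			default:
				BWindow::MessageReceived(message);
				break;
		}
	}
};

class ExampleApp : public BApplication {
public:
	ExampleApp()
		:
		BApplication("application/x-vnd.example-chatgpt-demo")
	{
		(new MainWindow())->Show();
	}
};

int main()
{
	ExampleApp app;
	app.Run();
	return 0;
}
```

Running it opens a window with a single button whose presses land in MessageReceived().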

5 Likes

Not adding much to the conversation, but I am just now getting around to playing with it. I pasted some of our code in just to see what it came back with. Pretty cool!

Which part of the code is this? It seems like it caught 2 memory leaks? Is this thing for real?

It was from one of the vmware_fs files from the VMware add-ons for Haiku. No idea if its findings are legit or not.

Typically, it is only correct in the context of the code you can share. I often tell it I am pasting in several pieces, and I start a new conversation. So I tell it I am pasting the code in order, 300 lines at a time, and ask it to wait until I finish before replying; it will sometimes oblige, though you have to remind it with each paste to wait. Yeah, it is an amazing debugging tool, but it does get it wrong, though only because of the limited context. I put my name on the waiting list for the professional version. I am excited, I am going to get so much shit done!!

No.

The “vmwfs_create” function is designed to allocate memory, so it calls malloc and not free. The “vmwfs_free” function is designed to free that memory, so it calls free and not malloc. This is not at all a memory leak.
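
To make the pairing obvious, here is a simplified sketch of the pattern (invented names, not the actual vmwfs code): each function alone would look incomplete to something that only sees half of the pair.

```cpp
// The allocation in one function is intentionally released by its
// counterpart, so neither function on its own is a memory leak.
#include <stdlib.h>

// Hypothetical per-file cookie, for illustration only.
struct example_cookie {
	int id;
};

// Allocates a cookie; ownership passes to the caller, which is
// expected to hand it back to example_free() later.
static int
example_create(example_cookie** _cookie)
{
	example_cookie* cookie = (example_cookie*)malloc(sizeof(example_cookie));
	if (cookie == NULL)
		return -1;

	cookie->id = 0;
	*_cookie = cookie;
	return 0;
}

// Releases a cookie previously handed out by example_create().
static void
example_free(example_cookie* cookie)
{
	free(cookie);
}

int main()
{
	example_cookie* cookie;
	if (example_create(&cookie) == 0)
		example_free(cookie);
	return 0;
}
```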

As usual, ChatGPT will reply with something that looks like it would make sense at first glance, but it is just copypasted from random answers to similar questions on the internet. So, use the real thing: get one of the static analyzers that actually scan the code and understand some of it… or just ask some human to review your code, they will do a better job of it :smiley:

14 Likes

Yeah, I kinda had my doubts about it; use the real thing indeed.

Actually, ChatGPT isn’t wrong, it’s just not aware of the entire context.

If I give you a small portion of a large problem, your knowledge is limited to that small portion, and if you’re unaware of the larger set of variables, you’re going to draw incorrect conclusions. You can observe this phenomenon everywhere in human society; it’s not limited to ChatGPT.

The trick to working with ChatGPT is getting enough of the problem into the conversation and then engaging with it. So if ChatGPT says something isn’t being freed, check, and if it actually is, share the code snippet and correct it. It does learn in the context of a conversation.

I’ve been working with it to design a more robust Kalman-filtered velocity- and position-predicting encoder scheme for CNC machine position detection. That conversation is probably several hundred replies long at this point.
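
For the curious, the core of such a filter is pretty small. Here is a rough 1-D constant-velocity sketch (my own simplified illustration with made-up tuning values, not the actual design from that conversation):

```cpp
// 1-D constant-velocity Kalman filter: estimates position and
// velocity from noisy encoder position readings.
#include <cstdio>

struct KalmanCV {
	// State estimate: position and velocity.
	double pos, vel;
	// State covariance P (2x2).
	double p00, p01, p10, p11;
	// Tuning: process noise q (simplified, added on the diagonal)
	// and measurement noise r.
	double q, r;

	KalmanCV(double processNoise, double measurementNoise)
		:
		pos(0), vel(0),
		p00(1), p01(0), p10(0), p11(1),
		q(processNoise), r(measurementNoise)
	{
	}

	// Propagate the state dt seconds ahead (constant-velocity model).
	void Predict(double dt)
	{
		pos += vel * dt;

		// P = F P F^T + Q, with F = [[1, dt], [0, 1]].
		double n00 = p00 + dt * (p01 + p10) + dt * dt * p11 + q;
		double n01 = p01 + dt * p11;
		double n10 = p10 + dt * p11;
		double n11 = p11 + q;
		p00 = n00; p01 = n01; p10 = n10; p11 = n11;
	}

	// Fold in a new encoder position reading.
	void Update(double measuredPos)
	{
		double innovation = measuredPos - pos;
		double s = p00 + r;                 // innovation covariance
		double k0 = p00 / s, k1 = p10 / s;  // Kalman gain

		pos += k0 * innovation;
		vel += k1 * innovation;

		// P = (I - K H) P, with H = [1, 0].
		double n00 = (1 - k0) * p00;
		double n01 = (1 - k0) * p01;
		double n10 = p10 - k1 * p00;
		double n11 = p11 - k1 * p01;
		p00 = n00; p01 = n01; p10 = n10; p11 = n11;
	}
};

int main()
{
	KalmanCV filter(0.01, 0.5);
	const double dt = 0.001;  // 1 kHz loop, made up for the demo

	// Feed a few fake encoder readings and watch the estimates.
	for (int i = 0; i < 5; i++) {
		filter.Predict(dt);
		filter.Update(0.1 * i);
		printf("pos %.4f  vel %.4f\n", filter.pos, filter.vel);
	}
	return 0;
}
```

Predict() runs every control tick and Update() whenever a fresh encoder count arrives; the q and r values here are pure guesses and would need tuning on a real machine.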

The point is GIGO: you have to feed it to get it to work.

2 Likes

Does it remember/“learn” between conversations, or just within one?

ChatGPT had more context than me: it had the source code for these functions, while I can tell what it got wrong just by looking at the function names. Yet it guessed wrong.

At what point does it become faster to just do the research and write the code yourself? I don’t think what I typically need in my workflow is some tool that gives me very fast but often incorrect or inaccurate answers, which I then need to “correct” by talking with a robot. But maybe I’m old and grumpy and not very receptive to new technology.

2 Likes

It does remember conversational context.

I suspect that you’re grumpy and old. My hands don’t work as well as they once did, so ChatGPT sparing my hands endless boilerplate and algorithm typing is a huge benefit. I still wind up working on the control logic, but I can give it some code and instructions to create specific equations, and it typically does this exceedingly well.

I have early symptoms of carpal tunnel, and typing is really bothersome to my hands.

It’s a tool like a screwdriver, and it requires learning to extract maximum benefit. I use it as an assistant, part Dictaphone, part logic analyzer.

If you tell it you’re working with the Be API, and keep reminding it you’re working with Haiku, eventually it commits this to memory and accuracy improves greatly.

Sounds like working with someone that has Alzheimer’s disease .-.

Honestly, I’d rather not use a tool I have to train like a dog to behave. I don’t think I am a particularly good dog trainer. With valgrind I at least know that it mostly knows what it is talking about.

https://www.theverge.com/2022/11/8/23446821/microsoft-openai-github-copilot-class-action-lawsuit-ai-copyright-violation-training-data

If I understood correctly, these tools are used for code generation, and that can indeed cause problems. The one discussed in this thread is used for debugging, so it shouldn’t be impacted.