7 Lessons Learned on Creating a Complete Product Using ChatGPT

ChatGPT's coding abilities make it super easy to code entire products in no time, if you know how to use it the right way

Shaked Zychlinski 🎗️
Towards Data Science

--

Generated using StableDiffusion

Not long ago I shared with you how I created my own French tutor out of ChatGPT (it's open-sourced, by the way). I described how I designed the app (and specifically its backend) and how I connected and configured the different AI-based services. But there was one thing I pretty much skipped, which is how I created the frontend of the app. You see, I'm not a frontend programmer, and my knowledge of JavaScript amounts to knowing I need to place it within <script></script> tags.

But the app I had in mind required a UI, and quite a dynamic one. That means HTML, JavaScript and CSS, but I had no idea how to even begin coding these.

What I did know is what I wanted it to look like. I had the design in my mind, and I knew how I would do it if I'd known how to code these. And so I decided to go for a new and quite radical approach: I'd ask ChatGPT to write the code for me. At that point I already had experience with asking ChatGPT for code-related requests, but I had never tried something so complex.

Well, as you're reading these lines, you know it worked: I've created an entire app by simply instructing an LLM (Large Language Model) what I'd like to see. I really want to write this one more time, just to make sure we all understand what just happened: an algorithm coded an entire app just by me explaining it in plain English. 😲

Still, as astonishing as it was, this process wasn't as trivial as it might sound, and so I'd like to take the opportunity to share some tips I learned on how to generate complex code using ChatGPT.

1. Design it yourself

LLMs are powerful tools for creating code and content, but they don't think: they can only fulfill requests (or at least try to). That means it's up to you to do the thinking, and specifically the design. Make sure you know what the final product should look like before you begin sending requests to the generative model.

More than that, it's up to you to do the research on the best tech stack for your product. As you'll need to break your complex app into steps (see #2 below), the LLM can't foresee what the final product will look like, and might use sub-optimal libraries or services.

For example, the first UI ChatGPT generated for me was based on Tkinter, which creates an actual desktop application and not a web UI. This makes a dynamic UI much more complicated to create (and less standard these days). Another attempt was based on Streamlit, which makes non-complex UIs super easy to create, but again wasn't designed for complex requests (for example: "add a play-recording button next to the user messages, but only if the user recorded audio"). In my case, it was up to me to understand that using Flask would be the optimal way to go.

2. Break it down into tasks & start simple

If you ask ChatGPT to code the entire product all at once, there's a good chance you'll get broken code. As "smart" as it is, don't expect it to be able to pay attention to all the given details at once. Break your design into tasks and phases, starting with something rather simple and adding on top of it.

For example, here's my final chat UI, the one I initially designed and planned:

The chatbot UI

You can see there are all sorts of buttons and functionalities on the UI, and yet my very first prompt to ChatGPT was:

Write a Python web UI for a chatbot application. The text box where 
the user enters his prompt is located at the bottom of the screen, and
all previous messages are kept on screen

No special buttons, no profile images next to the messages, nothing special. Just a simple chat UI, which will be the core I'll build upon. This prompt yielded four files:

  • A Python file functioning as the backend (using Flask)
  • An HTML file
  • A JavaScript file (using jQuery)
  • A CSS file

Once I had this, I could start making the product more complex.
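To give a sense of the scale of that first step, here is a minimal sketch of what the Flask backend of such a bare-bones chat UI can look like. This is an illustrative sketch, not the exact code ChatGPT produced; the route names and the assumption that the page lives in templates/index.html are mine:

```python
from flask import Flask, render_template, request, jsonify

app = Flask(__name__)

@app.route("/")
def index():
    # Serve the chat page; the JavaScript and CSS files live under /static
    return render_template("index.html")

@app.route("/chat", methods=["POST"])
def chat():
    # The jQuery frontend POSTs the user's message here
    user_message = request.json["message"]
    # Placeholder reply; the real app plugs the chatbot logic in here
    bot_reply = f"You said: {user_message}"
    return jsonify({"message": bot_reply})
```

The JavaScript side then only needs to append the user's message and the returned reply to the page, which is how all previous messages stay on screen.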

You might think I just contradicted myself, telling you to break your app into small steps while confessing my first prompt generated four files. For each request to ChatGPT, there's a trade-off between how much code is required to complete the task and how non-standard and specific it is. Asking for an entire chat UI will generate something quite general, yet requires a lot of code. Asking to "add a translation button next to the tutor messages", while also making sure it's located on the right side of the message bubble, always vertically centered and above the play-sound button, is something very specific, and so it'll be a request of its own.

3. Explain carefully what you really want

Each request and addition you make to your product can potentially involve changes to more than one file (and more than a single change per file). That means new variables, functions and endpoints will be created with each such request, and will be referenced from different locations. Their names will be chosen by ChatGPT, and it will do its best to make them meaningful, but it can only do so if you explain the context well.

For example, if you'd like to add a "Save" button to your product, prefer asking it like this:

Add a "Save Session" button to the left of the text box. It should have 
a floppy-disk icon. Once clicked, all messages on the UI will be saved to
a JSON file named "saved_session.json"

instead of a context-lacking prompt like this:

Add a button to the left of the text box with a floppy-disk icon. Once
clicked, all messages on the UI will be saved to a JSON file.

Preferring context-rich prompts will yield better naming conventions.
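On the backend side, a request like this usually boils down to a new endpoint. Here's a hedged sketch of what might get added to the Flask file for the context-rich prompt above; the route name is my own assumption, but the file name comes straight from the prompt, which is exactly why the detailed version helps:

```python
import json

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/save_session", methods=["POST"])
def save_session():
    # The frontend sends the full message list when the button is clicked
    messages = request.json["messages"]
    with open("saved_session.json", "w", encoding="utf-8") as f:
        json.dump(messages, f, ensure_ascii=False, indent=2)
    return jsonify({"status": "saved", "count": len(messages)})
```

With the vague prompt, the model has to invent both the route and the file name on its own, and it may well pick something you'd never guess from the frontend code.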

4. Be very aware of exactly what you ask

Here's a real issue I had to solve and didn't see coming: I wanted the UI to display the generated text from my French tutor as it was being streamed, similarly to the effect in ChatGPT. The Python API I was using to create the tutor's response (the OpenAI ChatCompletion API) returns a Python generator, which then needed to be consumed and printed on the screen. And so I asked ChatGPT:

Write a JavaScript function that consumes the generator and updates the 
message text one item at a time

What I didn't know, as I've never written any serious JavaScript in my life, was that I had asked for something impossible: JavaScript has no way of handling a Python generator. What happened was that ChatGPT gave me all sorts of weird and completely useless solutions, as it tried to do exactly what I asked: alter the JavaScript code.

You have to remember that ChatGPT tries to fulfill your requests exactly as you asked them, as long as they don't violate its guidelines. What I truly needed at that point was for it to tell me I was asking for something dumb, but that's just not how it works.

This issue was only fixed once I figured out I was asking for the impossible (the old way: Google and StackOverflow), and altered my prompt to something like this:

Given the response generator, add functionality to consume the generator
and update the message text one item at a time

which resulted in modifications to both the JavaScript and the Python files, and achieved the desired result.
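The reason this phrasing worked is that the streaming has to happen on the Python side: Flask can send the generator's items over HTTP one by one, and the JavaScript only has to read chunks off the response stream. A minimal sketch of the Python half of the idea, with a dummy generator standing in for the OpenAI streaming response (the route name is my own):

```python
from flask import Flask, Response

app = Flask(__name__)

def tutor_response():
    # Stand-in for the streaming generator returned by the ChatCompletion API
    for chunk in ["Bonjour", ", ", "comment ", "ça ", "va ?"]:
        yield chunk

@app.route("/chat_stream")
def chat_stream():
    # Flask forwards each yielded item as soon as it's produced;
    # the JavaScript side reads the chunks from the response stream
    # and appends them to the message text one at a time
    return Response(tutor_response(), mimetype="text/plain")
```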

Generated using StableDiffusion

5. LLMs cannot revert their code (and how to revert)

While ChatGPT is exceptional at writing code, it's still just a language model, and it doesn't do well at reverting its own changes, especially if you ask it to go back two or three prompts. When working with LLMs to generate code in phases, I highly recommend always keeping a copy of the last working version of the code you're happy with; so if some new code ChatGPT added is broken and cannot be repaired, you can easily revert to the last version that worked.

But there's a catch: if you do revert your code, you'll need to revert ChatGPT too, to make sure it knows exactly how your code looks now. The best way to do that is by starting a new session and kicking it off with a prompt like this:

I'm building a chatbot application. Here is my code so far:

HTML:
```
your HTML code
```

JavaScript:
```
your JavaScript code
```

CSS:
```
your CSS code
```

Python:
```
your Python code
```

Add a "Save Session" button to the left of the text box. It should have
a floppy-disk icon. Once clicked, all messages on the UI will be saved to
a JSON file named "saved_session.json"

(You can also upload the files to ChatGPT's Code Interpreter, which was not available at the time.) If the prompt is too long to be sent as a single message, split it into two. Click "Stop Generating" in between these messages, to prevent the bot from inserting unnecessary text in the middle.

6. Donā€™t fight it for too long

One of the cool things about coding with ChatGPT is that if it writes broken code, or the code doesn't perform as intended, you can just send it the error message, and it will fix the code accordingly.

But that doesn't always happen. Sometimes ChatGPT doesn't manage to fix the bug, or creates another bug instead. We then send it the new error, and ask it again to fix it. If that happens more than two or three times, there's a decent chance the code will be so broken or overly modified that it will simply not work. If you've reached that point, stop, revert (see above) and rephrase your request.

7. Learn how to prompt

While the whole point of ChatGPT is that you can interact with it using everyday language, knowing how to write your prompts correctly can have an immense effect on the result. I truly recommend taking the time to learn how to do that. For example, this free course by OpenAI and DeepLearning.AI on prompt engineering is a must, and specifically the lesson on how to combine instructions, code and examples in a single prompt.

One of the most important things you can learn about prompting is to make sure there's a distinguishable difference between the free text and the code in your prompt. So instead of this:

Here's a Python function: 
def func(x):
    return x*2
Change it so it'll return the root of the absolute value of the input if
it's negative.

write it like this:

Here's a Python function: 
```
def func(x):
    return x*2
```
Change it so it'll return the root of the absolute value of the input if
it's negative.

Also, if possible, provide it with input-output examples. That's the best method to explain to an LLM what it should do, as it removes any ambiguities in your request (what should the model return if the input is positive? Keep it at x*2, or maybe return nothing?):

Here's a Python function: 
```
def func(x):
    return x*2
```
Change it so it'll return the root of the absolute value of the input if
it's negative.

Examples:
Input: 2, Output: 4
Input: -9, Output: 3
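For completeness, the function those examples pin down would look something like this (keeping the x*2 behavior for non-negative inputs, as the first example implies):

```python
import math

def func(x):
    # Negative input: return the square root of its absolute value
    if x < 0:
        return math.sqrt(abs(x))
    # Non-negative input: keep the original behavior
    return x * 2
```

Note how the two examples resolve the ambiguity by themselves: Input 2 → Output 4 settles the positive case without a single extra word of instruction.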

Bonus: Choose the right LLM

Remember that "ChatGPT" is the name of the web product, not the model itself. The free version gives you access to GPT-3.5, while the paid version includes GPT-4, which performs dramatically better on coding tasks. The new Code Interpreter also makes it far better, as it can actually run and test its code.

Even if you decide to choose another LLM to work with, make sure the one you pick performs well on coding tasks. Otherwise, none of these tips will be of any assistance.

As I'm wrapping this all up, I guess the most important thing to realize when communicating with LLMs is that every word matters. LLMs don't think, and they can't truly understand what we want without us explicitly explaining it to them the way they need, because, thank God, they're not human (yet?); they're only a tool. And just like every tool, if you don't know how to work with it, you won't get any job done. I do hope you'll find these tips useful on your next project!

Generated using StableDiffusion
