7 Lessons Learned on Creating a Complete Product Using ChatGPT
ChatGPT's coding abilities make it super easy to code entire products in no time, if you know how to use it the right way
Not long ago I shared with you how I created my own French tutor out of ChatGPT (it's open-sourced, by the way). I described how I designed the app (and specifically its backend) and how I connected and configured the different AI-based services. But there was one thing I pretty much skipped: how I created the frontend of the app. You see, I'm not a frontend programmer, and my knowledge of JavaScript sums up to the fact that I know I need to place it within <script></script> tags.
But the app I had in mind required a UI, and quite a dynamic one. That means HTML, JavaScript, and CSS, but I had no idea how to even begin coding these.
What I did know is what I wanted it to look like. I had the design in my mind, and I knew how I would build it if I had known how to code these. And so I decided to go for a new and quite radical approach: I'd ask ChatGPT to write the code for me. At that point I already had experience with asking ChatGPT for code-related requests, but never had I tried something so complex.
Well, as you're reading these lines, you know it worked: I've created an entire app by simply instructing an LLM (Large Language Model) what I'd like to see. I really want to write this one more time, just to make sure we all understand what just happened: an algorithm coded an entire app just by me explaining it in plain English.
Still, as astonishing as it was, this process wasn't as trivial as it might sound, and so I'd like to take the opportunity to share some tips I learned on how to generate complex code using ChatGPT.
1. Design it yourself
LLMs are powerful tools for creating code and content, but they don't think; they can only fulfill requests (or at least they try). That means it's up to you to do the thinking, and specifically the design. Make sure you know what the final product should look like before you begin sending requests to the generative model.
More than that, it's up to you to research what's the best tech stack for you. As you'll need to break your complex app into steps (see #2 below), the LLM can't foresee what the final product will look like, and might use sub-optimal libraries or services.
For example, the first UI ChatGPT generated for me was based on Tkinter, which creates an actual desktop application and not a web UI. This makes a dynamic UI much more complicated to create (and less standard these days). Another attempt was based on Streamlit, which makes non-complex UIs super easy to create, but again wasn't designed for complex requests (for example: "add a play-recording button next to the user messages, but only if the user recorded an audio"). In my case, it was up to me to understand that using Flask would be the optimal way to go.
2. Break it down to tasks & start simple
If you ask ChatGPT to code the entire product all at once, there's a good chance you'll get broken code. As "smart" as it is, don't expect it to be able to pay attention to all the given details all at once. Break your design into tasks and phases, starting with something rather simple, and then add on top of it.
For example, here's my final chat UI, the one I initially designed and planned:
You can see there are all sorts of buttons and functionalities on the UI, and yet, my very first prompt to ChatGPT was:
Write a Python web UI for a chatbot application. The text box where
the user enters his prompt is located at the bottom of the screen, and
all previous messages are kept on screen
No special buttons, no profile images next to the messages, nothing special. Just a simple chat UI, which will be the core I'll build upon. This prompt yielded 4 files:
- A Python file functioning as the backend (using Flask)
- An HTML file
- A JavaScript file (using jQuery)
- A CSS file
Once I had this, I could start making the product more complex.
You might think I just contradicted myself, telling you to break your app into small steps, yet confessing my first prompt generated four files. For each request to ChatGPT, there's a trade-off between how much code is required to complete the task versus how non-standard and specific it is. Asking for an entire chat UI will generate something quite general, yet it requires a lot of code. Asking to "add a translation button next to the tutor messages", and to also make sure it's located on the right side of the message bubble, always at the vertical center and above the play-sound button, is something very specific, and so it'll be a request by itself.
3. Explain carefully what you really want
Each request and addition you make to your product can potentially involve changes to more than one file (and more than a single change per file). That means new variables, functions and endpoints will be created with each such request, and will be referenced from different locations. The names for those will be given by ChatGPT, and it will do its best to provide meaningful names, but it can only do so if you explain the context well.
For example, if you'd like to add a "Save" button to your product, prefer asking it like this:
Add a "Save Session" button to the left of the text box. It should have
a floppy-disk icon. Once clicked, all messages on the UI will be saved to
a JSON file named "saved_session.json"
instead of a context-lacking prompt like this:
Add a button to the left of the text box with a floppy-disk icon. Once
clicked, all messages on the UI will be saved to a JSON file.
Preferring context-rich prompts will yield better naming conventions.
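As a rough sketch of what the context-rich version of that request might produce on the backend (this is my own illustration, not ChatGPT's actual output; the function name and message format are my assumptions):

```python
import json


def save_session(messages, path="saved_session.json"):
    """Write the chat messages currently shown in the UI to a JSON file.

    `messages` is assumed to be a list of dicts, e.g.
    [{"role": "user", "text": "Bonjour!"}, ...].
    Returns the path the session was saved to.
    """
    with open(path, "w", encoding="utf-8") as f:
        json.dump(messages, f, ensure_ascii=False, indent=2)
    return path
```

In the real app, a function like this would sit behind a Flask endpoint that the button's click handler calls. Note how the explicit file name from the prompt ends up as a meaningful default argument.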
4. Be very aware of exactly what you ask
Here's a true issue I had to solve and didn't see coming: I wanted the UI to display the generated text from my French tutor as it was being streamed, similarly to the effect in ChatGPT. The Python API I was using to create the tutor's response (the OpenAI ChatCompletion API) returns a Python generator, which then needed to be consumed and printed on the screen. And so I asked ChatGPT:
Write a JavaScript function that consumes the generator and updates the
message text one item at a time
What I didn't know, as I've never written any serious JavaScript in my life, was that I had asked for something impossible; JavaScript has no way of handling a Python generator. What happened was that ChatGPT gave me all sorts of weird and completely useless solutions, as it tried to do exactly what I asked: alter the JavaScript code.
You have to remember that ChatGPT tries to fulfill your requests exactly as you asked, as long as they don't violate its guidelines. What I truly needed at that point was for it to tell me I was asking for something dumb, but that's just not how it works.
This issue was only fixed once I figured out I was asking for the impossible (the old way: Google and StackOverflow), and altered my prompt to something like this:
Given the response generator, add functionality to consume the generator
and updates the message text one item at a time
which resulted in modifications to both the JavaScript and the Python files, achieving the desired result.
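The pattern behind that fix, in rough form: the Python side wraps the generator in a streaming HTTP response, and the JavaScript side reads that stream chunk by chunk. A minimal sketch of the Python half, assuming a server-sent-events style format (the function name is mine, and `token_stream` stands in for the ChatCompletion generator):

```python
def to_sse(token_stream):
    """Format each generated token as a server-sent event line.

    A generator like this can be passed straight into a Flask
    Response object with mimetype "text/event-stream", so tokens
    reach the browser as soon as they are produced.
    """
    for token in token_stream:
        yield f"data: {token}\n\n"
```

On the browser side, `fetch()` exposes the response body as a readable stream, so the message text can be appended one chunk at a time, which is exactly the ChatGPT-like effect I was after.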
5. LLMs cannot revert their code (and how to revert)
While ChatGPT is exceptional at writing code, it's still just a language model, and it doesn't do well at reverting its own changes, especially if you ask it to go two or three prompts back. When working with LLMs to generate code in phases, I highly recommend always keeping a copy of the last working version of the code you're happy with; that way, if some new code ChatGPT added is broken and cannot be repaired, you can easily revert your code to when it last worked.
But there's a catch: if you do revert your code, you'll need to revert ChatGPT too, to make sure it knows exactly how your code looks now. The best way to do that is by starting a new session, and kicking it off with a prompt like this:
I'm building a chatbot application. Here is my code so far:
HTML:
```
your HTML code
```
JavaScript:
```
your JavaScript code
```
CSS:
```
your CSS code
```
Python:
```
your Python code
```
Add a "Save Session" button to the left of the text box. It should have
a floppy-disk icon. Once clicked, all messages on the UI will be saved to
a JSON file named "saved_session.json"
(You can also upload the files to ChatGPT's Code Interpreter, which was not available at that time.) If the prompt is too long to be sent as a single message, split it into two. Click "Stop Generating" in between these messages, to prevent the bot from inserting unnecessary text in between.
6. Don't fight it for too long
One of the cool things about coding with ChatGPT is that if it writes broken code, or the code doesn't perform as intended, you can just send it the error message, and it will fix the code accordingly.
But that doesn't always happen. Sometimes ChatGPT doesn't manage to fix the bug, or creates another bug instead. We then send it the new error, and ask it again to fix it. If that happens more than two or three times, there's a decent chance the code will be so broken or overly modified that it will simply not work. If you've reached that point, stop, revert (see above) and rephrase your request.
7. Learn how to prompt
While the whole point of ChatGPT is the fact you can interact with it using everyday language, knowing how to write your prompts correctly can have an immense effect on the result. I truly recommend taking the time to learn how to do that. For example, this free course by OpenAI and DeepLearning.AI on prompt engineering is a must, and specifically the lesson on how to combine instructions, code and examples in a single prompt.
One of the most important things you can learn about prompting is to first make sure there's a distinguishable difference between the free text and the code in your prompt. So instead of this:
Here's a Python function:
def func(x):
return x*2
Change it so it'll return the root of the absolute value of the input if
it's negative.
write it like this:
Here's a Python function:
```
def func(x):
return x*2
```
Change it so it'll return the root of the absolute value of the input if
it's negative.
Also, if possible, provide it with input-output examples. That's the best way to explain to an LLM what it should do, as it removes any ambiguities in your request (what should the model return if the input is positive? Keep it at x*2, or maybe return nothing?):
Here's a Python function:
```
def func(x):
return x*2
```
Change it so it'll return the root of the absolute value of the input if
it's negative.
Examples:
Input: 2, Output: 4
Input: -9, Output: 3
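With those examples in place, there is only one sensible reading of the request, and the model should produce something along these lines (my own sketch of the expected answer, not actual ChatGPT output):

```python
import math


def func(x):
    # Negative input: return the square root of its absolute value.
    if x < 0:
        return math.sqrt(abs(x))
    # Non-negative input: keep the original behavior.
    return x * 2
```

The examples resolve the ambiguity: positive inputs keep the `x*2` behavior (Input: 2, Output: 4), while negative ones switch to the root (Input: -9, Output: 3).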
Bonus: Choose the right LLM
Remember that "ChatGPT" is the name of the web product, not the model itself. The free version gives you access to GPT-3.5, while the paid version includes GPT-4, which performs dramatically better on coding tasks. The new Code Interpreter also makes it far better, as it can actually run and test its code.
Even if you decide to choose another LLM to work with, make sure the one you choose performs well on coding tasks. Otherwise, none of these tips will be of any assistance.
As I'm wrapping this all up, I guess the most important thing to realize when communicating with LLMs is that every word matters. LLMs don't think, and they can't truly understand what we want without us explicitly explaining it to them the way they need, because (thank God) they're not human (yet?); they're only a tool. And just like every tool, if you don't know how to work with it, you won't get the job done. I do hope you'll find these tips useful on your next project!