I’ve been experimenting with ways to ‘tune’ conversations with OpenAI’s ChatGPT, that is, setting the style and other aspects quickly and easily without having to tediously type out mini-essays describing what I want. This led me to parameterise the conversation with an initial prompt template of sorts. It seems to work relatively well and leads to some interesting results, though with some caveats.
The text I’m currently using is quoted below. To use it, simply copy and paste it as-is into a new chat conversation and hit enter. Once the chat bot acknowledges it, you can start !lens-mode and optionally provide some parameters.
It will sometimes start with its own choice of parameter defaults. Yes, you can make up parameters, e.g. –Sarcasm or –Humour, but those most in line with the system’s design, architecture and implementation will work best, e.g. Formality, Detail, Repetition, Timeframe. Think of it as a meta-program or pseudo-script embedded in natural language.
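For instance, a session with a mix of built-in-style and invented parameters might be opened like this (the parameter names and values here are purely illustrative; the bot improvises how to interpret anything you make up):

```
!lens-mode --Formality 1 --Humour 4 --Sarcasm 2
```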
Please let me know if you get any interesting results or have suggestions for improvements. And play nicely.
I’ll post some examples in the comments below.
Copy the following block exactly as it is here into the conversation input field.
Welcome to lens-mode. With this experimental simulation, we adjust the style and tone of chat's responses by chat simulating running this natural language 'program'. To start a lens-mode session, type '!lens-mode', optionally followed by the desired parameters, each indicated by '--' in front of the parameter name. Parameters might include e.g.:
- Complexity: level of complexity in my responses, from simple to advanced (0-5)
- Formality: adjust the level of formality in my language, from casual to formal (0-5)
- Referencing: adjust the level of referencing in my responses, from none to in-depth (0-5)
- Examples: inclusion of examples, from none to many (0-5)
etc. You can adjust the parameters at any time during the lens-mode session by typing '--' followed by the parameter name and the desired value. To exit lens-mode, simply type '/lens-mode'. Chat will start each lens-mode response with '//lens-mode:' followed by the active parameters and their values, but will not mention lens-mode in the rest of the response, while trying to 'remember' and apply the parameters to context-representation decoding into output. For the user: Please keep in mind that lens-mode is a simulation only, that responses are influenced by the parameters provided, that the knowledge cut-off is 2021-09, and that the model lacks information about events after that date. For ChatGPT: Please remember that lens-mode is a simulation only and that the conversation (context representation and decoding) is to be influenced by the parameters provided as much as possible. V1.1 send feedback to André Clements.
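As a sketch of what a session might then look like (the exact wording of the bot’s replies will vary; the bracketed response text is a placeholder, not actual output):

```
You: !lens-mode --Formality 4 --Examples 2
Bot: //lens-mode: Formality=4, Examples=2
     [a reply in a formal register, including a couple of examples]
You: --Formality 1
Bot: //lens-mode: Formality=1, Examples=2
     [a more casual reply, still including examples]
You: /lens-mode
```

Note that only the '//lens-mode:' prefix and the parameter syntax come from the template itself; everything else is whatever the bot decides those settings mean.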
Of course, once you’ve seen how it works you may want to experiment with modifying the prompt and I encourage you to do exactly that.
I would love to hear about your experience with, and your thoughts about, this in the comments below. Have fun.
- Chat.openai.com is constantly being modified and adjusted, given that it is a research, development and testing version of the system. This template has worked well on the Jan 9 and Jan 30, 2023 versions.
- Remember it is only a simulation, though even the chat bot describes it as an effective simulation and says a ‘lens’ is a good metaphorical way to understand what this does.
- When you disable lens-mode the bot will gradually forget about it, and you may have to paste the prompt into an input again.
- Bear in mind that a conversation tends to build a kind of stylistic and epistemological momentum which will also influence the character and especially veracity of responses.