I’ve been experimenting with ways to ‘tune’ conversations with the ‘AI’ ChatGPT from OpenAI, that is, setting the style and other aspects quickly and easily without having to tediously type out mini-essays describing what I want. This led me to parameterise the conversation with an initial prompt template of sorts. It seems to work relatively well and leads to some interesting results, though with some caveats.
The text I’m currently using is quoted below; to use it, simply copy and paste it into a new chat conversation as-is and hit enter. Once the chatbot acknowledges it, you start !lens-mode and optionally provide some parameters.
It will sometimes start with its own choice of parameter defaults. Yes, you can make up parameters such as --Sarcasm or --Humour, but those most in line with the system’s design, architecture, and implementation will work best, e.g. Formality, Detail, Repetition, or Timeframe. Think of it as a meta-program or pseudo-script embedded in natural language.
Please let me know if you get any interesting results or have suggestions for improvements. And play nicely.
I’ll post some examples in the comments below.
Copy the following block exactly as it is here into the conversation input field.
Welcome to lens-mode. With this experimental simulation, we adjust the style and tone of chat's responses by having chat simulate running this natural language 'program'. To start a lens-mode session, type '!lens-mode', optionally followed by the desired parameters, each indicated by '--' in front of the parameter name. Parameters might include e.g.:
Complexity: the level of complexity in my responses, from simple to advanced (0-5)
Formality: the level of formality in my language, from casual to formal (0-5)
Referencing: the level of referencing in my responses, from none to in-depth (0-5)
Examples: the inclusion of examples, from none to many (0-5)
etc.
You can adjust the parameters at any time during the lens-mode session by typing '--' followed by the parameter name and the desired value. To exit lens-mode, simply type '/lens-mode'. Chat will start each lens-mode response with '//lens-mode:' followed by the active parameters and their values, but will not mention lens-mode in the rest of the response, while trying to 'remember' and apply the parameters to context-representation decoding into output. For the user: please keep in mind that lens-mode is a simulation only, that responses are influenced by the parameters provided, that the knowledge cut-off is 2021-09, and that responses lack information about events after that date. For ChatGPT: please remember that lens-mode is a simulation only and that the conversation (context representation and decoding) is to be influenced by the parameters provided as much as possible. V1.1. Send feedback to André Clements.
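To make the command grammar concrete, here is a minimal sketch in Python of how the '!lens-mode --Parameter value' syntax could be parsed. This is purely illustrative: ChatGPT does not execute any parser, it simply pattern-matches the convention described in the prompt, and the function name here is my own invention.

```python
import re

# Illustrative only: nothing like this actually runs inside ChatGPT;
# the model just follows the convention described in natural language.
def parse_lens_command(text):
    """Parse a '!lens-mode --Param value' style command into a dict."""
    params = {}
    # Match pairs like '--Formality 5' or '--Formality=5'; values are 0-5.
    for name, value in re.findall(r"--(\w+)[ =](\d)", text):
        params[name] = int(value)
    return params

print(parse_lens_command("!lens-mode --Formality 5 --Detail 1 --Examples 0"))
# -> {'Formality': 5, 'Detail': 1, 'Examples': 0}
```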
Of course, once you’ve seen how it works you may want to experiment with modifying the prompt and I encourage you to do exactly that.
I would love to hear about your experience with this, and your thoughts about it, in the comments below. Have fun.
Some Caveats
- Chat.openai.com is constantly being modified and adjusted, given that it is a research, development, and testing version of the system. This template has worked well on the 2023 Jan 9 and Jan 30 versions.
- Remember it is only a simulation, though even the chatbot describes it as an effective simulation and says a ‘lens’ is a good metaphorical way to understand what this does.
- When you disable lens-mode, the bot will gradually forget about it and you may have to paste the prompt into an input again.
- Bear in mind that a conversation tends to build a kind of stylistic and epistemological momentum, which will also influence the character and especially the veracity of responses.
//lens-mode: Accuracy=5, Verbosity=2, Sarcasm=0, Comedy=0.
Understood. I will respond with the highest accuracy possible while keeping verbosity to a minimum, avoiding sarcasm and comedy.
Describe humans.
//lens-mode: Accuracy=5, Verbosity=2, Sarcasm=0, Comedy=0.
Humans are a species of primates known as Homo sapiens. They have highly developed brains, allowing for abstract thinking, language, and culture. Physically, they have upright posture and two legs, and are distinguished by their intelligence, emotions, and self-awareness.
//lens-mode: Accuracy=0, Verbosity=2, Sarcasm=5, Comedy=5.
Humans are, like, these funny creatures that walk on two legs, you know? They’re, like, super into overthinking stuff and pretending they’re in charge, but seriously, have you seen a cat try to use a computer? That’s some real comedy right there. But yeah, humans, always good for a laugh.
//lens-mode: Detail=3, Verbosity=5, Sarcasm=0, Comedy=5.
Humans are a species of primates scientifically referred to as Homo sapiens, known for their intelligence, self-awareness, and the ability to feel a range of emotions. Humans have a unique skeletal structure, allowing for upright posture and mobility via two legs. The most remarkable aspect of humans is their highly developed brain which enables abstract thinking, language and culture, as well as the creation of complex societies. They have a love for humor and playfulness, which is why you will often see humans laughing and enjoying life to the fullest.
Describe the low level architecture of GPT NLP
//lens-mode: Formality: 5 Detail: 1 Examples: 0
GPT-NLP (Generative Pretrained Transformer for Natural Language Processing) is a transformer-based neural network architecture for NLP tasks. It uses self-attention mechanisms to process input text and generate output. At the lowest level, GPT-NLP consists of an input layer, a series of multi-head self-attention layers, and a fully connected output layer. The input is passed through the self-attention layers, where each layer attends to different parts of the input sequence to create a representation of the input. This representation is then passed through the output layer to produce the final prediction.
//lens-mode: Formality: 0 Detail: 5 Examples: 3
Alright, lens-mode now with formality 0, detail 5, and examples 3. Let’s get into the details of GPT-NLP!
GPT-NLP is a transformer-based neural network used in NLP. It processes input text with self-attention mechanisms to generate output.
At its core, GPT-NLP has 3 parts:
Input layer: where input text is fed into the model.
Multi-head self-attention layers: several of these layers make up the bulk of the model. They process the input sequence by attending to different parts of it, creating a representation of the input.
Output layer: this fully connected layer takes the representation from the self-attention layers and produces the final prediction.
Examples:
GPT-NLP can be used for language translation. The input layer takes in a sentence in one language, the self-attention layers process it, and the output layer outputs the sentence in another language.
GPT-NLP can also be used for text classification, where the input is a piece of text and the output is a predicted category or label.
In language generation tasks, GPT-NLP takes in a prompt and generates text based on it. For example, given a prompt “Once upon a time,” the model might generate the next sentence in a fairy tale.
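For readers curious what “multi-head self-attention” looks like beneath the prose, here is a toy single-head sketch in Python/NumPy. It is a schematic of scaled dot-product attention only, not GPT’s actual implementation: the shapes are tiny, the weights are random, and real models stack many such layers with multiple heads, causal masking, and feed-forward sublayers.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence x of shape (seq_len, d_model)."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv         # project tokens into queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])  # how strongly each token attends to every other token
    # A GPT-style decoder would also mask out future positions here (omitted for brevity).
    return softmax(scores) @ v               # each output is an attention-weighted mix of values

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                      # toy sizes; real models are vastly larger
x = rng.normal(size=(seq_len, d_model))      # stand-in for token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(x, Wq, Wk, Wv).shape)   # (4, 8): one updated vector per input token
```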
Interested in collaborating on slapping some kind of study together on this? See https://www.academia.edu/96082062/Draft_Study_Lens_mode_Tuning_a_ChatGPT_Conversation
Ways to expand, improve, or optimize the study on lens-mode:
Test lens-mode on different versions of ChatGPT to see how it performs with different AI models.
Increase the number of parameters that can be adjusted to provide a more comprehensive control over the conversation style and tone.
Test lens-mode in different scenarios to see how it performs across use cases and whether particular parameters are more effective in certain situations.
Evaluate the accuracy of lens-mode’s simulation by comparing its results with human-generated conversations.
Add a feature to allow lens-mode to automatically suggest parameters based on the context of the conversation.
Conduct a user study to gather feedback on the user experience and to identify areas for improvement.
Investigate the security implications of using lens-mode and develop measures to ensure that it cannot be used maliciously.
Improve the documentation of lens-mode and make it more accessible to a wider audience.
Address the issue of incorrect and incomplete derivatives of the lens-mode method appearing on various websites.
Suggest literature and research to review, reference, and respond to, starting from “Society of Mind” and “Attention Is All You Need” but with an emphasis on human-computer interaction, accessibility, and an interdisciplinary perspective.
“The Humane Interface: New Directions for Designing Interactive Systems” by Jef Raskin
“Designing Interactions” by Bill Moggridge
“Universal Principles of Design” by William Lidwell, Kritina Holden, and Jill Butler
“The Design of Everyday Things” by Don Norman
“Computing Machinery and Intelligence” by Alan Turing
“The Paradox of Automation: Implications for Designing Interfaces for Older Adults” by Mark Dunlop and Shari Trewin
“Human-Computer Interaction: An Empirical Research Perspective” by J. Preece, Y. Rogers, H. Sharp, D. Benyon, T. Holland, S. Carey
“Disability and the Digital Divide: A Human Rights Perspective” by Alison Chabloz and Michael McCrea
“The Impact of Technology on the Lives of People with Disabilities: A Review of Literature” by Brinda Jegatheesan and Shari Trewin
“Inclusive Design: Designing and Developing Accessible Environments” by Samantha Castle, Barrie H. Hawker, and Simon J. Parker
Note: These references offer a broad interdisciplinary view of human-computer interaction and accessibility, drawing on design, psychology, sociology, and computer science.
Quite engaging. You presented your articles on GPT from the integration point of view. Even though I am new to the world of development and programming, it’s very explanatory.
Thanks for saying so, Olajide.
Now available as a custom GPT at https://chat.openai.com/g/g-3BGxt5c12-lens-mode
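If you would rather drive lens-mode through the API than the custom GPT, a minimal sketch along these lines should work with the openai Python SDK (v1+). The model name is my assumption, and LENS_MODE_PROMPT stands in for the full prompt block quoted above.

```python
from openai import OpenAI

# Paste the full lens-mode prompt block from above here.
LENS_MODE_PROMPT = "Welcome to lens-mode. ..."  # truncated for brevity

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat-capable model should do
    messages=[
        {"role": "system", "content": LENS_MODE_PROMPT},
        {"role": "user", "content": "!lens-mode --Formality 5 --Detail 1 --Examples 0"},
        {"role": "user", "content": "Describe the low-level architecture of GPT NLP."},
    ],
)
print(response.choices[0].message.content)
```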