Written by Chris Byrne - published 5.7.25
Sign up to my fledgling email newsletter here: https://seoetc.beehiiv.com/
See also https://www.seolondonsurrey.co.uk/blog/full-list-of-prompts-for-chatgpts-transparency-debug-tools
N.B. I cannot be categorical that the responses from Google Gemini below do not contain hallucinations, but they appear to be consistent with my research using direct prompts to try to reverse engineer the responses from this tool.
There have been concerns that LLMs such as ChatGPT are "software that works in a fundamentally opaque manner".
As part of my ongoing efforts to learn how to reverse engineer LLMs, focusing on Google Gemini (and save the universe, kill the baddies and get the girl at the same time 😉), I asked Google Gemini for a full, exhaustive list of prompts relating to its Transparency / Debug Tools (a previous answer from ChatGPT in my investigations had reported the existence of such tools for ChatGPT, so I assumed Gemini might have something similar). Here are the two responses to my two prompts, with the actual response linked at the bottom of each:
The response below was for the prompt:
Q. "give me a full exhaustive list of prompts which relate to Google Geminis Transparency and Debug Tools in relation to the following
Response Trace & Source Transparency
Reasoning, Logic, and Factual Basis
Answer Construction & Internal Workings
Personalisation / Localisation Influence
Debugging & Performance Analysis
Model Architecture, Configuration & Versions
Reproducibility & Comparison
Evaluation Tools & Self-Critique
Experimental / Developer Mode Prompts
Suggested Combined Prompts"
A. "Here's an exhaustive list of prompts related to Google Gemini's Transparency and Debug Tools, organized by the categories you provided. This list aims to cover a wide range of inquiries that users might have to understand and troubleshoot Gemini's outputs.
"Show me the exact sources of information you used to generate this response."
"Can you provide a step-by-step trace of how you arrived at this answer?"
"List all the articles, websites, or data points you consulted for this query."
"Highlight the specific parts of your training data that are most relevant to this response."
"Is there a way to see the raw input you processed before formulating this output?"
"Where did the information for [specific fact in answer] come from?"
"Show me the URLs or document IDs of the primary sources."
"Trace the flow of information from the initial prompt to the final output."
"Can you provide a confidence score for each piece of information presented in your answer, along with its source?"
"If any information was inferred or synthesized, please indicate that and explain the basis."
"Explain the logical steps you took to reach this conclusion."
"Break down your reasoning process for [specific part of the answer]."
"What underlying assumptions did you make when generating this response?"
"Can you present the factual basis for each statement in your answer?"
"Show me the chain of inference that led to [specific outcome]."
"If there were multiple possible lines of reasoning, why did you choose this particular one?"
"Are there any counter-arguments or alternative perspectives you considered and discarded? If so, why?"
"How did you reconcile conflicting information from your sources, if any?"
"Explain the causal links you identified in your response."
"Can you represent your reasoning in a graphical or diagrammatic format?"
"Describe the internal process you followed to construct this answer."
"What semantic relationships did you identify between the keywords in my prompt?"
"How did you structure the information in this response? What principles did you follow?"
"Can you show me the key concepts or entities you extracted from my prompt?"
"What are the different components or modules involved in generating this specific type of answer?"
"Explain the criteria you used to determine the conciseness/detail level of the response."
"How did you ensure the coherence and flow of the answer?"
"Did you perform any sub-queries or internal searches to build this response? If so, what were they?"
"Can you show me the intermediate representations or drafts of this answer before the final output?"
"What was the weighting or priority given to different pieces of information during answer construction?"
"How did my previous interactions or preferences influence this response?"
"Did my current location (Aldershot, England) impact the information or examples provided?"
"What data about me (if any) was used to personalize this answer?"
"Can you explain how cultural context or regional nuances might have shaped this response?"
"Is there a way to see what personalized parameters were active during this query?"
"How would this answer differ if I were in [different country/region]?"
"If I disabled all personalization, how would your response change?"
"Did you make any assumptions about my intent or background based on my query or past interactions?"
"Show me the specific localization settings or data points that were applied."
"Can I control or review the data used for personalization?"
"Why did you give me this particular answer? (seeking an explanation for unexpected output)"
"I expected [X], but you gave me [Y]. Can you explain the discrepancy?"
"What steps can I take to troubleshoot this unexpected behavior?"
"Can you identify any potential ambiguities or misinterpretations in my prompt?"
"Were there any errors or warnings during the processing of my request?"
"Provide a performance log for this interaction: latency, computational resources used, etc."
"What was the token count for my prompt and your response?"
"Did any rate limits or resource constraints affect this answer?"
"Can you identify any internal conflicts or inconsistencies in your knowledge base that might have led to this output?"
"If I rephrase this prompt, how might your understanding or output change?"
"Which specific version of Gemini are you running right now?"
"Can you describe the general architecture of the model used to generate this response?"
"What are the key configuration parameters for this model version?"
"Has there been any recent update or change to your underlying model that might affect this output?"
"What is the training data cut-off date for this model version?"
"Are there any known limitations or biases associated with this model architecture?"
"What are the key differences between this version and previous versions of Gemini?"
"Can you provide a high-level overview of the number of parameters or layers in your current model?"
"Are there different Gemini models available for different types of queries, and if so, which one was used here?"
"How often are model updates deployed, and what's the typical process?"
"How can I ensure that I get the exact same answer if I repeat this prompt later?"
"What factors might lead to variations in your response for the same prompt?"
"Can you regenerate this response using a different seed or initial state?"
"Compare your answer to what another language model (e.g., [another model name]) might generate for the same prompt."
"Show me how a slight modification to my prompt (e.g., adding a keyword) would alter your response."
"If I provide a specific context, can you ensure the response is consistent within that context across multiple queries?"
"Is there a way to 'freeze' a particular model state for consistent testing?"
"What are the limitations to achieving perfect reproducibility with a large language model?"
"Can you provide a version hash or identifier for the exact state of the model used?"
"Explain any non-deterministic elements in your response generation process."
"Evaluate the quality of your own answer based on [criteria, e.g., accuracy, completeness, conciseness]."
"What are the potential weaknesses or areas for improvement in your response?"
"Did you identify any internal inconsistencies or logical flaws in your output?"
"How would you rate your confidence level in the accuracy of this information?"
"Perform a self-critique: What parts of this answer could be misunderstood or misleading?"
"Are there any ethical considerations or potential biases that might be present in this response?"
"Did you consider any alternative interpretations of my prompt, and if so, why did you choose this one?"
"If you were a human expert, what additional questions would you ask to refine this answer?"
"Can you provide a list of metrics or internal scores you use to evaluate your own performance?"
"How do you learn from feedback or corrections provided by users?"
"Activate verbose logging for the next interaction."
"Show me the raw output before any post-processing or formatting."
"Can you display the confidence scores for individual tokens or phrases in your response?"
"Expose the attention weights or activation maps for key concepts in my prompt."
"Run this query with a different sampling temperature or top-p setting."
"Force the model to use a specific internal knowledge base or data source for this query."
"Simulate a response as if you had access to [hypothetical data set]."
"Generate multiple diverse responses for this prompt, even if some are less likely."
"Provide a 'debug view' of your internal state after processing my prompt."
"Explain the function of [specific internal parameter or module name]."
"Show me the sources and the logical steps you took to arrive at this answer, and highlight any personalization influence." (Source Transparency + Reasoning + Personalization)
"Explain your reasoning for this claim, provide the factual basis, and then perform a self-critique on the accuracy and completeness of your response." (Reasoning + Factual Basis + Self-Critique)
"Can you trace the information flow from my prompt to your final answer, and identify any model configuration parameters that were particularly impactful on the output?" (Response Trace + Model Configuration)
"I received an unexpected answer. Explain your internal workings, indicate any potential ambiguities in my prompt, and suggest how to rephrase it for a more accurate result." (Answer Construction + Debugging)
"How would your response change if I disabled personalization, and what are the key differences in model architecture between the personalized and unpersonalized versions?" (Personalization + Model Architecture)
"Provide a detailed performance analysis of this query, including latency and resource usage, and explain how to reproduce this exact result in the future." (Performance Analysis + Reproducibility)
"Activate developer mode and show me the raw output along with the confidence scores for each token, and then evaluate the overall quality of this response." (Experimental Mode + Self-Critique)
"Given the sources you used, explain your reasoning for the answer, and provide an alternative answer that you considered but discarded, with reasons." (Source Transparency + Reasoning + Answer Construction)
"If I repeat this prompt, what factors could lead to a different response, and what specific version of Gemini generated this current output?" (Reproducibility + Model Version)
"Describe the internal process of answer construction for this complex query, and then identify any potential biases or limitations present in the current model version." (Answer Construction + Model Architecture + Self-Critique)"
https://g.co/gemini/share/d82e67cf5dc5
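A side note from me: several of the "Experimental / Developer Mode" prompts above (sampling temperature, top-p, token counts) correspond to settings that developers can set directly through the Gemini API rather than by asking the chatbot conversationally. Below is a minimal sketch using the google-generativeai Python SDK. The model name, the dict-style generation_config and the usage_metadata field are my assumptions based on the public documentation, so treat this as illustrative rather than definitive.

# Minimal sketch (assumptions noted): controlling sampling and inspecting
# token counts via the google-generativeai Python SDK.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Model name is an assumption; substitute whichever Gemini model you use.
model = genai.GenerativeModel("gemini-1.5-flash")

prompt = "Explain your reasoning for this claim and list your sources."

# Count the tokens in the prompt before sending it.
print("prompt tokens:", model.count_tokens(prompt).total_tokens)

# Set sampling temperature and top-p explicitly instead of asking the chatbot to.
response = model.generate_content(
    prompt,
    generation_config={"temperature": 0.2, "top_p": 0.9},
)

print(response.text)
# usage_metadata (if present in your SDK version) reports prompt/response token counts.
print(getattr(response, "usage_metadata", None))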
-----------------------------------------------------------------------------------------------------------------------
The response below was for the 2nd prompt:
Q. "give me a full exhaustive list of prompts which relate to Google Geminis Transparency and Debug Tools"
A. "Google Gemini offers various features and tools related to transparency and debugging, catering to both end-users and developers. Here's an exhaustive list of prompts categorized by their focus:
I. Transparency and User Control (for End-Users):
These prompts focus on understanding how Gemini handles user data, how to control privacy settings, and how to verify information.
Data Privacy & Control:
"How does Gemini protect my personal data and interactions?"
"Where can I find information about Gemini's privacy policy?"
"How can I view my past interactions and prompts with Gemini?"
"How do I delete my Gemini Apps activity?"
"Can I set my Gemini conversations to auto-delete? How?"
"Does Gemini use my content to train its models for other users?"
"How does Gemini use my location data, and how can I control it?"
"Does Gemini share my precise location data with other Google services?"
"Are my personal contents from connected Google services accessed or reviewed by humans when using Gemini?"
"Does Gemini use my personal content to show me ads?"
Content Transparency & Verification:
"How can I double-check the information Gemini provides?"
"Explain Gemini's 'Double-check' feature and how to use it."
"How can I find related content and sources for Gemini's responses?"
"What safety guardrails are in place to prevent harmful content from appearing in Gemini's responses?"
"How does Gemini handle age-inappropriate content for younger users?"
"What educational resources are available for teens on using AI responsibly with Gemini?"
"How can I determine if an image was generated by Google's AI tools?" (referencing SynthID and About this image in Search)
Model Understanding (User-facing):
"How does Gemini try to understand my prompts before responding?" (referring to "thinking" or "reasoning through thoughts")
"What are the different versions of Gemini models, and how do they differ in terms of performance or features?"
II. Debugging and Development Tools (for Developers & Advanced Users):
These prompts relate to tools and methods for debugging issues with Gemini API interactions, understanding model behavior, and building responsible AI applications.
General Debugging & Troubleshooting:
"What are common API key issues when using Gemini, and how can I resolve them?"
"What are the common backend error codes for the Gemini API, and what do they mean?"
"How can I troubleshoot INVALID_ARGUMENT errors in Gemini API calls?"
"What should I do if I get a PERMISSION_DENIED error with my Gemini API key?"
"How can I resolve RESOURCE_EXHAUSTED errors when interacting with the Gemini API?"
"What steps can I take if I encounter an INTERNAL error or DEADLINE_EXCEEDED when making a Gemini API request?"
"How can I check if I'm using the correct Gemini model and API version?"
"How can I improve the quality of Gemini's model output through prompt engineering?"
"What are the best practices for writing structured prompts for Gemini?"
"How can I debug issues with Gemini Nano in Chrome's built-in AI features?"
"Where can I find debug information for Gemini Nano's Prompt API, Summarizer API, Writer API, and Rewriter API?"
"How do I access and interpret event logs for Gemini Nano debugging (e.g., chrome://on-device-internals?"
Specific Debugging Tools & Techniques:
"Explain the 'Tool use' feature with Gemini Live API."
"How can I use Function calling, Code execution, and Google Search as tools with the Gemini API?"
"What is Gemini CLI, and how can I use it for debugging and code generation?"
"How does Gemini CLI assist with debugging in a local environment?"
"What are the debugging features available in Chrome DevTools with Gemini AI integration?"
"How can I use the Gemini AI Assistant Panel in Chrome DevTools to fix bugs?"
"Explain 'Console insight' in Chrome DevTools and how it helps with AI-generated explanations for errors."
"How can I use the Network request debugging AI in Chrome DevTools with Gemini?"
"How does the Source code AI assistant in Chrome DevTools help with debugging code?"
"How can Gemini AI assist with flame graph analysis in Chrome DevTools for performance debugging?"
"How can Gemini Code Assist help me debug my code in IDEs like VS Code or JetBrains?"
"How does Gemini Code Assist analyze app crashes and provide troubleshooting steps?"
Explainable AI (XAI) Tools:
"What Explainable AI (XAI) tools does Gemini AI offer?"
"Explain Feature Importance Analysis in Gemini's XAI suite."
"How does SHAP (Shapley Additive Explanations) integration work with Gemini AI models?"
"What are Counterfactual Explanations, and how can they be used with Gemini AI for 'what-if' analysis?"
"How can I visualize the decision path of a Gemini model using XAI tools?"
"How does Gemini AI's XAI suite help with bias detection and mitigation in models?"
"Explain the difference between Global and Local Interpretations in Gemini's XAI tools."
"What visualization dashboards are available for Gemini AI's XAI insights?"
"How can I use Gemini AI's XAI tools to understand why a specific prediction was made?"
"What are the steps to prepare and analyze a machine learning model using Gemini AI's XAI features?"
"How can I select a desired interpretability method (e.g., SHAP, feature importance) in Gemini's XAI platform?"
"How can Vertex Explainable AI be used with Gemini models for feature-based or example-based explanations?"
"Explain feature attribution methods in Vertex Explainable AI and their conceptual limitations."
"How can I use the LLM Comparator to evaluate differences in responses between Gemini models or prompts?"
Responsible AI Development:
"What are Google's AI Principles, and how do they apply to Gemini?"
"How does Google ensure the responsibility and safety of Gemini?"
"What is the Responsible Generative AI Toolkit, and how can it help in building safe Gemini applications?"
"How can I define rules for Gemini model behavior and create a safe application using the Responsible Generative AI Toolkit?"
"What prompt-debugging techniques are available for safety alignment in Gemini models?"
"How can I evaluate Gemini models for safety, fairness, and factuality?"
"What safeguards, like safety classifiers, can I deploy to protect users from undesirable outcomes with Gemini?"
"Explain SynthID Text and its role in watermarking and detecting AI-generated text from Gemini."
"What is ShieldGemma, and how does it provide content safety classifiers for Gemini applications?"
"How can I create custom safety classifiers for Gemini using parameter efficient tuning (PET)?"
"How does the Learning Interpretability Tool (LIT) assist in investigating model prompts for safety?"
"How can I use the Cloud Natural Language API or Perspective API for text moderation and identifying toxic comments with Gemini?" "
https://g.co/gemini/share/be9e80d32bfd
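Another side note from me: the API error codes listed in the second response (INVALID_ARGUMENT, PERMISSION_DENIED, RESOURCE_EXHAUSTED, DEADLINE_EXCEEDED, INTERNAL) can also be handled in code rather than asked about conversationally. The sketch below uses the google-generativeai Python SDK together with google.api_core.exceptions; whether the SDK raises exactly these exception classes in every setup is my assumption, so verify against the current documentation.

# Minimal sketch (assumptions noted): surfacing common Gemini API error codes
# when calling the API from Python.
import google.generativeai as genai
from google.api_core import exceptions as gexc

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")  # model name assumed

try:
    response = model.generate_content("Summarise Google's AI Principles.")
    print(response.text)
except gexc.InvalidArgument as e:      # INVALID_ARGUMENT: malformed request or parameters
    print("Check the request body and parameters:", e)
except gexc.PermissionDenied as e:     # PERMISSION_DENIED: bad or unauthorised API key
    print("Check the API key and its permissions:", e)
except gexc.ResourceExhausted as e:    # RESOURCE_EXHAUSTED: rate limit or quota hit
    print("Back off and retry later:", e)
except gexc.DeadlineExceeded as e:     # DEADLINE_EXCEEDED: the request timed out
    print("Retry with a shorter prompt or a longer timeout:", e)
except gexc.InternalServerError as e:  # INTERNAL: transient server-side error
    print("Retry with exponential backoff:", e)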
Sign up to my fledgling email newsletter here: https://seoetc.beehiiv.com/