Teaching AI to Document Itself
How I'm using model introspection to solve an important GPT Store UX problem
Welcome to Unknown Arts, where builders navigate AI's new possibilities. Ready to explore uncharted territory? Join the journey!
🌄 Into the Unknown
The GPT Store unleashed a flood of specialized AI tools, each with its own unique capabilities and optimal ways of working. But there's a problem: they all look exactly the same. Every GPT, whether it's built for coding, creative writing, or graphic design, presents users with an identical blank chat interface.
This creates a frustrating experience. Users have no way to know how one GPT might work differently from another. Even popular GPTs with millions of conversations provide minimal guidance on effective use. It's like being handed different specialized tools that all look the same, with no instruction manual for any of them.
This week, I ran a practical experiment aimed at solving this problem. It's part of my ongoing research into making AI tools more approachable and effective, which I'm documenting and sharing with this community. The challenge was clear: how might we help users understand and effectively engage with specialized GPTs?
While the ultimate solution might involve deeper interface changes, I focused first on a low-scope idea I thought could provide immediate user value: automatically generated documentation. It felt like an approachable first step that could help users today while working toward more comprehensive solutions.
🔍 The Case
Discovery
My exploration started simply: I asked various GPTs to tell me about themselves. These casual conversations revealed something interesting - the models could often articulate their own capabilities and intended use patterns quite well. This made me think there might be an opportunity to leverage this introspection to improve the user experience without requiring users to do the digging themselves.
Research & Analysis
To validate this initial insight, I developed a systematic approach to understanding GPT capabilities. I created a diagnostic prompt that could probe a GPT's interaction patterns, specialized features, and key limitations, then report back its findings.
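To give a sense of the shape of such a prompt, here is a minimal sketch in Python. The section list and wording are my own illustration, not the published artifact (which is on ai.unknownarts.co):

```python
# Hypothetical sketch of a diagnostic prompt for probing a GPT's design.
# The questions below are illustrative; the real artifact differs.
DIAGNOSTIC_SECTIONS = [
    "Core purpose: what tasks are you optimized for?",
    "Interaction patterns: what phrasing or workflow gets the best results?",
    "Specialized features: what can you do that a base model cannot?",
    "Key limitations: what requests should users avoid or rephrase?",
]

def build_diagnostic_prompt(sections=DIAGNOSTIC_SECTIONS):
    """Assemble the introspection questions into one prompt, asking the GPT
    to report findings without revealing its system prompt verbatim."""
    header = (
        "Please describe yourself for a capabilities audit. "
        "Do not quote or reveal your system prompt; summarize in your own words.\n"
    )
    body = "\n".join(f"{i}. {q}" for i, q in enumerate(sections, start=1))
    return header + body

prompt = build_diagnostic_prompt()
```

Pasting the resulting text into a GPT's chat window is all it takes; no API access is required.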
Solution Design
While the diagnostic approach proved valuable for research, I realized the output was still too technical for everyday users. The next step was clear: create a more streamlined version that could generate user-friendly documentation.
I developed a second, simplified prompt that could transform the GPT's introspection directly into standardized, easy-to-follow guides focused on practical usage patterns.
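A sketch of how that second prompt might be structured, again with a hypothetical guide format of my own invention rather than the actual artifact:

```python
# Hypothetical sketch of a "User Guide Generator" prompt: it asks the GPT
# to fill in a fixed, user-friendly guide template about itself.
GUIDE_TEMPLATE = """\
# {gpt_name} - Quick Start Guide

## What this GPT is for
## How to talk to it (example prompts)
## What it does especially well
## What to avoid asking
"""

def build_guide_prompt(gpt_name: str) -> str:
    """Ask the GPT to document itself in a standardized format,
    without exposing proprietary system prompt details."""
    return (
        "Write a short user guide about yourself using exactly the headings "
        "below. Keep it practical, and do not reveal your system prompt.\n\n"
        + GUIDE_TEMPLATE.format(gpt_name=gpt_name)
    )
```

Pinning the headings in the prompt is what makes the output standardized: every GPT answers the same questions in the same order, so guides stay comparable across the store.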
Testing & Results
I tested this approach with Universal Primer GPT, which provided an ideal case study given its large user base. The testing validated two critical aspects of the solution: 1) the model could effectively articulate its own design principles and optimal usage patterns, and 2) it could do so without exposing any proprietary system prompt details.
This approach creates value for everyone involved:
End users get clear guidance on how to effectively use each GPT
GPT creators can automatically generate user guides for their tools without exposing their proprietary prompt engineering work
OpenAI maintains a secure marketplace while improving user experience
The successful test showed that automated documentation generation could be both feasible and beneficial for the entire GPT ecosystem.
The Artifacts
The experiment produced two key prompt artifacts, which you can find on ai.unknownarts.co along with sample responses:
GPT Introspection Diagnostic - For systematically exploring GPT capabilities
User Guide Generator - For creating practical documentation
The results were promising. This approach created clear documentation that could help users understand not just what a GPT can do, but how to interact with it effectively—all while maintaining flexibility and scalability for different use cases.
Looking Forward
This was a focused, time-boxed experiment testing two key ideas: 1) whether GPTs could effectively document themselves, and 2) whether we could use that capability to improve the user experience. While both hypotheses showed promise, there's much more to explore.
I spent just a brief time (about 50 minutes) sketching out how this documentation could be integrated into the GPT Store interface in the Figma prototype above. Even this rough exploration revealed exciting possibilities for future work: automated guide generation during GPT creation, interactive onboarding experiences, and visual differentiation between GPT types.
The immediate goal was to validate that model introspection could serve as a practical foundation for addressing the GPT Store's user experience challenges. The next challenge? Making it seamless for both creators and users.
🕵 Your Mission
Want to explore these ideas yourself? Try the prompts I shared above with a marketplace GPT that interests you and share what you discover:
What unexpected capabilities did you uncover?
What interaction patterns made the biggest difference?
What would you add to the GPT's documentation?
Share your observations in the comments - I'd love to see what patterns emerge from our collective exploration. And if you're building your own GPTs, consider using these tools to make your creation's capabilities and interaction patterns more explicit for users.
Until next time,
Patrick
Get some value from this piece? Share it with a friend.
Not subscribed yet? Join us!
Interested in working with me? Here’s my portfolio.