Welcome to Unknown Arts, where creativity flows through new technology. Ready to forge ahead into the unknown? Join the journey!
🌄 Into the Unknown
I spent this week thinking about what makes working with large language models so different from traditional programming. Instead of issuing commands and expecting exact outputs, there's something more subtle at play.
Working with a language model feels more like guiding another human. Creative people don't like to be commanded – they prefer to be guided. You need to give them context, talk through higher-level concepts, and create a space for them to work in. It's about zooming out to structure the process rather than dictating the output.
There's something deeply meta about this approach. Instead of focusing on individual prompts, I find myself thinking about the process of prompting itself – stepping back to consider how to guide these models effectively. As I've explored this higher-level thinking, two core techniques have stood out so far: few-shot learning and chain-of-thought prompting.
🧭 The Compass
Few-shot learning and chain-of-thought prompting have quickly become key pieces of my interactions with AI.
Few-shot learning is about showing the model examples of what you're looking for—inputs and outputs that demonstrate a pattern. It's not complex, but it makes a big difference in aligning the model's response with your intent. Without these examples, the model defaults to patterns from its training data. That might work out okay, but when I want something specific (which is most of the time), I've learned to show rather than tell.
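As a concrete sketch, a few-shot prompt can be as simple as pairing example inputs with example outputs before the real request. The taglines below are made-up placeholders, and the chat-message shape is the common system/user/assistant pattern used by most chat models, not any one vendor's API:

```python
# Few-shot learning: demonstrate the input -> output pattern with
# a couple of examples, then append the real request at the end.
examples = [
    ("Thermos that keeps drinks hot for 24 hours", "All-day heat. Zero effort."),
    ("Backpack with a built-in solar charger", "Power up while you wander."),
]

task = "Noise-canceling headphones for open offices"

messages = [{"role": "system", "content": "Write a short product tagline."}]
for product, tagline in examples:
    # Each example pair shows the model what a good response looks like.
    messages.append({"role": "user", "content": product})
    messages.append({"role": "assistant", "content": tagline})

# The actual request comes last, so the model continues the pattern.
messages.append({"role": "user", "content": task})
```

The model never gets told the "rules" for a good tagline – it infers them from the two worked examples, which is the whole trick.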
Chain-of-thought prompting goes beyond giving instructions—it's about sharing your reasoning. You explain why you're doing something a certain way and lay out the logic behind your decisions. When you give the model this kind of metacognition, it creates a richer, more collaborative problem-solving environment. The results are more thoughtful.
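In practice, this can mean writing your reasoning directly into the prompt and explicitly inviting the model to reason back. The newsletter scenario below is a made-up illustration of the pattern:

```python
# Chain-of-thought prompting: state the "why" behind your request,
# then ask the model to explain its own choices in return.
prompt = (
    "I'm drafting a weekly newsletter.\n"
    "My reasoning: I open with a story because readers skim, and a "
    "narrative hook earns me the next paragraph.\n\n"
    "Task: suggest three opening lines for a post about few-shot prompting.\n"
    "For each one, explain why it would hook a skimming reader."
)
```

The key is the two-way exchange: your reasoning goes in, and the model's reasoning comes back out, so you can see where its choices diverge from yours.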
I'm not the only one seeing the value in this approach. OpenAI's o1 model now takes time to "think" and process its reasoning before responding, showing its work alongside the final output. It's a powerful reminder that this layer of metacognition is key to getting better creative work from AI.
🗝️ Artifacts of the Week
What inspired me
This week reinforced for me that some of the best insights into AI still come from research papers. We're at a curious moment where product development is outpacing our understanding of the science behind what makes the tech work (spooky 👻), even as practical applications and scientific discovery push each other forward.
I came across three papers on few-shot learning and chain-of-thought prompting that are worth exploring.
Research papers aren't exactly fun to read, so here's how I make them more digestible:
I save PDFs into Readwise Reader to view outlines and highlight key passages
I start with the abstract and explore a few examples - that's often enough to get the gist (as a non-scientist)
I don't get hung up on reading everything - those long appendices are usually more detail than I need
What I made
I continued building the foundations of ai.unknownarts.co
I refined the library's structure, setting up an Obsidian-style network of interlinked resources and concepts. For example:
Video resource -> Behind the prompt with Claude AI
Related concept -> Few-shot learning
I also started dogfooding my GPTs to support my creative process. Wednesday's post came from an interesting experiment: I used ChatGPT to design a custom GPT that interviewed me during my usual walk around the neighborhood. When I got home, I used my thoughts from that walk-and-talk ideation session as the basis of my essay draft. The process wasn't perfect, but it helped a lot – and gave me ideas for next time.
📝 Field Note
The biggest takeaway this week: show your work.
When you're collaborating with a language model, it's not enough to just give instructions – you need to explain your thinking. Setting up the context and sharing your reasoning helps the model better understand the task and produces richer output. But don't stop there—ask the model to explain its reasoning in return.
This back-and-forth creates a deeper creative collaboration, more like working with a thoughtful partner than issuing commands to a machine.
🕵 Ready to Explore?
Here’s this week’s mission (should you choose to accept it):
Craft a thoughtful ideation session with ChatGPT or Claude by giving examples and sharing reasoning
Pick a Simple Task: Choose something you do regularly that has clear steps - like planning a workout, writing an email, or brainstorming content ideas
Show Examples: Give the AI 2-3 specific examples of what good output looks like ("Here are two examples of the workout plans I like...")
Share Your Thinking: Explain why you structure things this way ("I prefer to start with mobility work because...")
Ask Questions: Have the model explain its choices as it works ("Why did you sequence the exercises in this order?")
This quick experiment will show you how sharing examples and reasoning leads to more thoughtful AI collaboration.
Bonus: Try the same task again without examples and reasoning to see the difference!
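Putting the four steps together, the mission prompt might look something like this. The workout details are placeholders for whatever task you pick:

```python
# One prompt combining all four mission steps:
# a simple task, two examples, your reasoning, and a question back.
prompt = """You're helping me plan a workout.

Here are two examples of workout plans I like:
1) 10 min mobility, 3x5 squats, 3x8 rows, 10 min walk
2) 10 min mobility, 3x5 deadlifts, 3x8 push-ups, 10 min walk

I prefer to start with mobility work because it lowers my injury
risk and makes the heavy lifts feel smoother.

Plan tomorrow's session in the same style, and explain why you
sequenced the exercises in that order."""
```

For the bonus round, delete everything but the first and last lines and compare how generic the response becomes.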
Until Next Time,
Patrick
Help grow the Unknown Arts AI Knowledge Base
I’m curating a thoughtful collection of resources to promote the true creative power of AI. Browse the website or the GitHub repo, and if you have an AI resource that genuinely helped you, I’d love to hear about it. Let’s build a path forward together.
Enjoyed this? Share it with a friend.