#notes Related to [My Startup’s AI Agent Playbook for 5-10x Engineering Speed (with Claude Code, MCP’s, etc)](https://www.youtube.com/watch?v=_PxkYZ_4z50)

# Small team, AI scaling

Essentially, non-technical people will be able to think through the product under the same constraints as technical people. Instead of exchanging Figma mockups, team members will be able to communicate at the "TRL 3" level: actual code that can be run and inspected, instead of the old TRL 1-2 level of communication where you would say something like "I have this idea (because of some observed pattern, TRL 1)" or "there is this concept (TRL 2); we can use this and this library, but we have to keep in mind this and that."

Essentially, we move away from those considerations where we need a lot of conversation to decide on something we want to PoC, and leave that to initiative, because it is easier to move from TRL 1 to 3 when leveraging AI, and we can immediately see how something looks instead of spending a lot of time on "How would it look?".

- Rapid prototyping: give previews and show users what can be done to get feedback. "Here is an idea, is there any interest there?"
- Internal tooling: be able to make useful tools on the fly to aid day-to-day operations.

"I am trying to give as much context as needed wherever necessary." - Context Engineer??

Key consequence: faster communication loops, which change the usual meetings/talks and push us to adapt to this rate of ideation. We are moving from "this meeting could have been an email" to "this meeting could have been an LLM conversation".

# Getting Context, Outside Engineering

- [Granola — The AI notepad for people in back-to-back meetings](https://www.granola.ai/)
	- Capture meeting conversations into notes.
	- Perform **deep research** on the ideas and concepts involved: SWOT analysis, BRDs, PRDs.
Essentially, what we want here is to take a bunch of communication around the business, deep research its content, then filter that through a framework that applies constraints based on our interests. Capture > Elaborate > Filter.

Experiment with making 2 different LLMs talk to each other via MCP. Leverage orchestration tools for these operations, like n8n, CC, GitHub Actions.

- [superwhisper](https://superwhisper.com/)
	- Speech-to-text that accepts prompt customization to transform the text into a final format, for something like an email or anything else. Only available on Mac.

"In the shoes of the LLM" thought experiment: instead of relying on the LLM's "smartness", we need to prompt it the way we would prompt a human to conduct some work. Assuming the perspective of the LLM and identifying gaps in your prompting is an important exercise.

A minor optimization point: format things using XML, because that is the format LLMs are trained with.

How much of this is manual vs automated flow? It depends; things are advancing fast, so it is always going to be a hybrid. Essentially, we would like to minimize the manual aspect of our work.

One of the things this guy mentioned is the usage of Notion as a KB and a store of MD prompts. I recently worked on PTL, which is supposed to offer an entry point for automating all of the prompt management, sharing and generation. He uses a context folder where he throws in all of the PRDs, workflow instructions and how-tos. I got a cleaner solution to handle all of this via a `prompt-store`.

# Getting Context, In Engineering

MCPs and MCP registries. As a small team you should make a lot of MCPs to encapsulate workflows, prompt management and well-figured-out routines. An idea here would be an MCP that handles ADRs and enables the "Architect" to use it when developing.

How do you communicate these things to non-technical people? How do you get them to understand how to write and maintain prompts, etc.? Essentially, demos. Showing is the best way to convince and guide.
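The ADR idea could start as plain Python before wrapping it in an MCP server; this is a minimal sketch of the tool logic only (the `docs/adr/NNNN-slug.json` layout and field names are my assumptions, not a standard, and the MCP plumbing is omitted):

```python
import json
from datetime import date
from pathlib import Path

# Hypothetical storage location for Architecture Decision Records.
ADR_DIR = Path("docs/adr")

def record_adr(title: str, decision: str, context: str) -> Path:
    """Persist an ADR as a numbered JSON file the agent can write via a tool call."""
    ADR_DIR.mkdir(parents=True, exist_ok=True)
    number = len(list(ADR_DIR.glob("*.json"))) + 1
    slug = "-".join(title.lower().split())
    path = ADR_DIR / f"{number:04d}-{slug}.json"
    path.write_text(json.dumps({
        "title": title,
        "decision": decision,
        "context": context,
        "date": date.today().isoformat(),
        "status": "accepted",
    }, indent=2))
    return path

def list_adrs() -> list[str]:
    """Return ADR titles so the 'Architect' agent can check prior decisions first."""
    return [json.loads(p.read_text())["title"] for p in sorted(ADR_DIR.glob("*.json"))]

record_adr("Use Postgres", "Postgres over MySQL", "Need JSONB and mature tooling")
print(list_adrs())  # on a fresh checkout: ['Use Postgres']
```

Exposing `record_adr` and `list_adrs` as two MCP tools would let the coding agent consult and extend the decision log without leaving the editor.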
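The earlier XML-formatting tip, as a tiny sketch; the tag names (`instructions`, `context`, `question`) are arbitrary choices, not anything prescribed:

```python
def xml_prompt(instructions: str, context: str, question: str) -> str:
    """Wrap prompt sections in XML-style tags so the LLM can tell them apart."""
    return (
        f"<instructions>\n{instructions}\n</instructions>\n"
        f"<context>\n{context}\n</context>\n"
        f"<question>\n{question}\n</question>"
    )

print(xml_prompt("Answer briefly.", "Q3 sales notes...", "What changed vs Q2?"))
```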
Create "prompt" evals where we can measure the performance of different LLMs. For us we could measure protocol deviation, and that would be our eval.

MCPs he uses:
- firecrawl, turn websites into Markdown
- gh cli, for version control
- playwright

---

# AI Engineering

We have to think about it from the perspective that we are getting LLMs to make "purchase decisions"; right now they are helping us, in the future they will have to make them for themselves and stay aligned with us. B2A, Business-to-Agent first principles.

80/20 rule
- Adapt AI within your workflows
- Vibe Code
- Update your Edu Stack
	- Podcasts
	- YT channels
	- Blogs

Build an orchestration layer - DevOps for AI.

Key thing atm is speed; it is easy to build and ship out product, so do it!

Human in the loop - one form of eval.

---

[Takeaways from the AI Engineer World's Fair: The startup playbook is being rewritten in real-time – GeekWire](https://www.geekwire.com/2025/takeaways-from-the-ai-engineer-worlds-fair-the-startup-playbook-is-being-rewritten-in-real-time/)
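The protocol-deviation eval above could start as a minimal harness like this; the protocol rules here (a `STATUS:` line plus an `<answer>` tag) are invented for illustration, and the model replies are stubbed rather than fetched from an API:

```python
import re

# Hypothetical protocol: every reply must start with "STATUS: OK|FAIL"
# and contain an XML-tagged answer. Swap in your own rules.
PROTOCOL_CHECKS = [
    ("status_line", re.compile(r"^STATUS: (OK|FAIL)\n")),
    ("answer_tag", re.compile(r"<answer>.*</answer>", re.DOTALL)),
]

def protocol_deviation(replies: list[str]) -> float:
    """Fraction of replies that violate at least one protocol rule."""
    if not replies:
        return 0.0
    violations = sum(
        1 for reply in replies
        if any(not rule.search(reply) for _, rule in PROTOCOL_CHECKS)
    )
    return violations / len(replies)

# Stubbed replies from two models on the same prompts.
replies_model_a = ["STATUS: OK\n<answer>42</answer>", "STATUS: FAIL\n<answer>n/a</answer>"]
replies_model_b = ["STATUS: OK\n<answer>42</answer>", "sure, the answer is 42"]

print(protocol_deviation(replies_model_a))  # 0.0
print(protocol_deviation(replies_model_b))  # 0.5
```

Running the same prompt set through each candidate LLM and comparing deviation rates would give a cheap, objective ranking for this one dimension.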